For an enriched reading experience, you may visit the online version of this exegesis at https://do.meni.co/phd (DOI 10.26180/5d4c35648c01a)

This exegesis is submitted as partial fulfilment of the requirements for the degree of Doctor of Philosophy

Bridging the Virtual and Physical: from Screens to Costume

by

Domenico Mazza

Bachelor of Design (Visual Communication) (Hons), 2014

Faculty of Information Technology
Monash University, Caulfield, Melbourne

August 2019

DOI 10.26180/5d4c35648c01a

PDF DOI 10.26180/5d5a214fdf8aa

© Domenico Mazza (2019)

Under the Australian Copyright Act 1968, this exegesis must be used only under the normal conditions of scholarly fair dealing. In particular, no results or conclusions should be extracted from it, nor should it be copied or closely paraphrased in whole or in part without the written consent of the author. Proper written acknowledgement should be made for any assistance obtained from this exegesis.

Any third-party content herein that has been reproduced without permission complies with the fair dealing exemption in the Australian Copyright Act 1968, which permits reproduction of such material for the purposes of criticism and review.

I certify that I have made all reasonable efforts to secure copyright permissions for third-party content included in this exegesis and have not knowingly added copyright content to my work without the owner's permission.

Abstract

There are ways humans act in and experience the physical world that are not reflected in the design of digital media. The term 'digital media' in this case encompasses our televisions, desktop computers, laptops, tablets, smartphones, smartwatches and the like. People who engage in activities through digital media have the ability today to collect and display vast amounts of information from across time and space, almost anywhere. Yet the virtual information presented through digital media accommodates neither the full freedom of intangible human imagination nor the full familiarity or immediacy of engaging with the physical world. This creates a divide between experiences through digital media and those through the surrounding physical world. This research explores ways to conceptualise digital experiences in the physical world.

The outcome of this practice-based design research allows a wide range of design practitioners and researchers, from visual and interaction design to human–computer interaction and textile design, to engage in the conceptualisation, prototyping and presentation of new digital media that addresses the divide between the physical and virtual, through what this research terms Computational Costume. The work enables designers and audiences alike to imagine and experience future technological capabilities without being limited to today's technology or needing advanced visual effects or technical skills. Instead, this practice-based design research has developed and refined the use of lo-fi physical materials, from exhibition to film-making.

Computational Costume has emerged from four investigations into bridging the divide between physical and virtual practices in digital media. Investigations began with supporting people's spatial memory of interactions on screen-based devices through a visual overlay for interfaces known as the Memory Menu. A 99-participant study of the Memory Menu did not find a significant improvement in usability. This result, paired with knowledge obtained from a variety of experienced designers and communicators across art, design, marketing and human–computer interaction, encouraged a shift in focus beyond screen-based digital media, leading to a review of and research into ubiquitous and tangible computing, which seeks to engage more of people's surroundings and physical world. This review revealed that a specific focus on whole-body interaction designs was required to break dependence on screen-based devices, and led to speculation on how probable technologies centred around augmented reality could enable whole-body, wearable virtual identities to ground interactions through digital media. Computational Costume was conceived from this speculation.

The practice-based research presented in this exegesis and through exhibition contributes a conceptual rationale and accompanying practical approach for developing speculative virtual wearables and objects that ground interactions with digital media in the physical world using lo-fi physical materials. This contribution is embodied by the design of Computational Costume proposed in this research: a speculative design setting and scenarios based on imagined probable technologies centred around augmented reality. This work is explored through lo-fi physical materials activated via exhibition and film-making. This method of exploration enables designers and audiences alike to be liberated from the constraints of today's technology.

Declaration

This document contains no material which has been accepted for the award of any other degree or diploma in any university or other institution and, to the best of my knowledge and belief, the document contains no material previously published or written by another person, except where due reference is made in the text of the document.

Domenico Mazza, 10 April 2019

Contents

Abstract
Declaration
Contents
List of figures
List of tables
Acknowledgements
Introduction
Background
Simplifying physical and virtual practices on-screen
Reviewing physical and virtual practice support across media
Reviewing new physical and virtual practices
Creating new physical and virtual practices
Supporting practices with screen-based digital devices
On-screen interaction design approaches
Supporting on-screen spatial memory through use-wear
Design and communication practices across domains
Conclusion
Designing for a wider range of interactions beyond the screen
On-screen interface design reconceptualisation
The Material Turn
Ubiquitous and tangible computing
Physical objects with virtual overlay
Manipulable physical and virtual surfaces and objects
Enhanced manual processes
Ambient perception
Dependence on many devices
Relieving dependence on screens
Whole-body interaction
Bodily interfaces
Body-shadow interfaces
On-body interfaces
Augmented reality
Virtual objects
Wearable interfaces
Whole-body engagement
Whole-body sensing
Floor-based sensing
Force feedback
Combined technology approach
Virtual identity
Body-shadows
Contact outlines
Virtual clothing
Conclusion
Computational Costume design
Background
Hyperreality
Dark patterns
Speculative design
Design setting
Capabilities
Esemplastic objects
Ergonomics and technology review
Augmented reality see-through devices
Force feedback devices
Portable sound devices
Sensing, networking and computing devices
Accessibility
Hardware design
Augmented reality module
Force feedback module
Sound module
Central unit
Conclusion
Design scenarios
Background
Cardboard poster and interface
Cardboard poster
Cardboard interface
Material choice
Findings
Computational Costume v0
Work costume
Personal costume
Findings
Computational Costume v1
Personal costume
Worksite costume
Medical emergency costume
Findings
Computational Costume v2
Storyboard
Findings
Conclusion
Conclusion
Contribution
The Memory Menu
Interviews
Design review
Computational Costume
Future work
Concluding remarks
Reference list
Memory Menu
Memory Menu motivation
Memory Menu hypotheses
Memory Menu design
Ethics and recruitment
Menu design
Study procedure
Memory Menu evaluation
Memory Menu results and analysis
Interviews
Interview motivation
Interview design
Ethics and recruitment
Interview procedure
Interview coding
Interview results and analysis
Computational Costume prototyping and presentation
Background
Objectives for prototyping and presentation
Material applications
Mock-ups and patterns
Paper mock-ups
Paper fabric patterns
Objects and wearables
Cardboard objects and wearables
Cotton broadcloth objects and wearables
Stiffened cotton broadcloth
Ready-made objects and wearables
Paper wearables
Textile fasteners
Hook-and-loop fasteners
Snap fasteners
Pin fastening
Supporting structures
Cardboard mannequins
Steel wire supports
Timber supports
Presentation methods
Sculpture
Live performance
Video
Conclusion

List of figures

ATELIER allowing people to shift between modes of digital representation. Images from Embodied Interaction – designing beyond the physical–digital divide [25] by Ehn and Linde (2004). Used with permission.

A rendition of a geographic information system (GIS) visualisation encountered in practice. Image is author's own, based upon a working design.

Screenshots of the Patina use-wear effect. Images from Patina: Dynamic Heatmaps for Visualizing Application Usage [66] by Matejka et al. (2013). Used with permission.

Screenshot of Data Mountain. Image from Data Mountain: Using Spatial Memory for Document Management [82] by Robertson et al. (1998). Used with permission.

A commonly used word processor reconceptualised for a large surface accommodating people's wide range of practices and physical abilities. Image is author's own.

Different possible practices defined as areas on a reconfigured word processor design for a large surface. The practice of directly editing a document is highlighted. Arrows indicate connections between areas. Image is author's own.

The Physical Telepresence system in use. Images from Physical Telepresence [63] video by Leithinger et al., MIT Media Lab, Tangible Media Group (2014).

The Urp system in use. Video stills from Urp [98] video by Underkoffler and Ishii, MIT Media Lab, Tangible Media Group (1999).

The inSide system in use. Images from inSide [95] video by Tang et al., MIT Media Lab, Tangible Media Group (2014).

The Perfect Red speculative design. Video stills from Perfect Red [48, pp.47–48] video by Bonanni et al., MIT Media Lab, Tangible Media Group (2012).

The Pillow Talk system in use. Video stills from Pillow Talk [76] video by Joanna Montgomery (2010). Used with permission.

The HandSCAPE system in use. Video stills from HandSCAPE [62] video by Lee et al., MIT Media Lab, Tangible Media Group (2000).

The VIDEOPLACE system in use. Video stills from Videoplace '88 [51] video by Myron Krueger et al. (1988).

The Whole Body Large Wall Display Interface system in use. Video stills from Whole Body Large Wall Display Interaction [91] video by Shoemaker et al. (2010). Used with permission.

The Armura system in use. Images from On-Body Interaction: Armed and Dangerous [42] by Harrison et al. (2012). Used with permission.

The T(ether) system in use. Images from T(ether) – Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation [53] video by Lakatos et al., MIT Media Lab, Tangible Media Group (2014).

The Project North Star system in use. Video stills from Project North Star: Exploring Augmented Reality [59] and Project North Star: Desk UI [58] videos by Leap Motion, Inc. (2018). Used with permission.

The Wall++ system in use. Video stills from Wall++: Room-Scale Interactive and Context-Aware Sensing [104] video by Zhang et al. (2018).

The Augmented Studio system in use. Video stills from Augmented Studio: Projection Mapping on Moving Body for Physiotherapy Education [45] video by Hoang et al., Microsoft Research Centre for SocialNUI (2017).

The Multitoe system in use. Video stills from Multitoe interaction: bringing multi-touch to interactive floors [5] video by Augsten et al., Hasso Plattner Institute (2010). Used with permission.

The Kickables system in use. Video still from Kickables: Tangibles for Feet [90] video by Schmidt et al., Hasso Plattner Institute et al. (2014). Used with permission.

The electrical muscle stimulation force feedback system by Lopes et al. (2018) in use. Video stills from Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation [64] video by Lopes et al., Hasso Plattner Institute (2018). Used with permission.

Mirrorworlds Concept: The Architect. Video still from Mirrorworlds Concept: The Architect [57] by Leap Motion, Inc. (2018). Used with permission.

The Choreomorphy system in use. Video still from Choreomorphy [81] video by El Raheb et al. (2018). Used with permission.

A Mirrorworld in a classroom, for exploring the water cycle of an environmental landscape at room scale. Shown is the augmented reality setting (above) and the corresponding physical setting (below). Images from Leap Motion, Inc. [68] by Keiichi Matsuda, illustrations by Anna Mill (2018). Used with permission.

A Mirrorworld in an office collocated with a medical operating theatre. Shown is the augmented reality setting (above) and the corresponding physical setting (below). Images from Leap Motion, Inc. [68] by Keiichi Matsuda, illustrations by Anna Mill (2018). Used with permission.

A virtually concealed thief in a speculative augmented reality. Video still from 'Hyper-Reality' [67] by Keiichi Matsuda (2016). Used with permission.

Quotes from Apple Inc. patent Sports monitoring system for headphones, earbuds and/or headsets [80] by Prest et al. (2014). Quotes extracted from patent by Apple Inc.

A rendition of Computational Costume [6] in practice with its hardware design highlighted in blue. Illustration by Janelle Barone, made in collaboration with the author.

Cardboard poster made for the CHI 2017 (Conference on Human Factors in Computing Systems) Student Research Competition, Denver, Colorado, USA, May 2017. Photography by Jon McCormack.

Hand assembly of the cardboard poster. Images are author's own.

Hand and forearm interface mock-up for crafting the cardboard poster. Photography by Jon McCormack.

Computational Costume v0 mannequin front and back. Shown at No Vacancy Gallery QV in Melbourne, Victoria, Australia as part of the Melbourne FashionTech collective's showcase during White Night 17 February 2018. Photography by Jon McCormack.

Map tool in Computational Costume v0. Photography by Jon McCormack.

Computational Costume v0 featuring personal effects. Photography by Jon McCormack.

Computational Costume v0 featuring health treatment plan and swimming goals. Photography by Jon McCormack.

Timeline tool in Computational Costume v0. Photography by Jon McCormack.

Re-enactment video of the Computational Costume v1 performance [71]. Video is author's own.

Computational Costume v1 from left to right: personal costume, worksite costume and medical emergency costume. Photography by Jon McCormack.

A copy of a boarded train and map tool, used to communicate the wearer's location and estimated time of arrival in Computational Costume v1. Photography by Jon McCormack.

Some reading material made public in Computational Costume v1. Photography by Jon McCormack.

A worksite costume indicating a job for the wearer, which can be affixed to the costume in Computational Costume v1. Photography by Jon McCormack.

A medical emergency costume displaying areas of injury with indication of a drug administered (displayed as an 'F' for Fentanyl), heart biometrics, enclosed private records and support sent by loved ones via touch from the map tool in Computational Costume v1. Photography by Jon McCormack.

Computational Costume v2 video [72]. Video is author's own.

The Computational Costume v2 video, costumes and props on display at Monash University's SensiLab The Looking Glass window display, Caulfield, Victoria, Australia from 1 July 2018 until 26 November 2018. Image is author's own.

The Computational Costume v2 costumes and props, with video, on display at the Design Translations exhibition by Health Collab, MADA Gallery, Caulfield, Victoria, Australia, 3–6 December 2018. Image is author's own.

Applying a mark onto the back using an object for marking and costume token at 00:40–00:42 in the Computational Costume v2 video [72]. Images are author's own.

The costume token allows access to a costume. In this case a health professional can see a medical record costume at 00:46–00:52 in the Computational Costume v2 video [72]. Images are author's own.

A health professional applies a ready-made diagram from a wall and specialised treatment plan to the patient's costume for reference at 00:52–01:19 in the Computational Costume v2 video [72]. Images are author's own.

A medical record costume hosting a lifetime of records as chronologically ordered silhouettes at 01:32–01:40 in the Computational Costume v2 video [72]. Images are author's own.

A medical record appears automatically on a wearer in an emergency situation as a call-to-action for bystanders at 01:53–01:56 in the Computational Costume v2 video [72]. Images are author's own.

The map tool allowing access to a birth record's information on birth parents and birthplace at 02:04–02:06 in the Computational Costume v2 video [72]. Images are author's own.

The map tool facilitating communication between wearers and acting as a navigational aid at 02:16–02:23 in the Computational Costume v2 video [72]. Images are author's own.

The map tool allowing access to a remote location for object retrieval at 02:29 in the Computational Costume v2 video [72]. Image is author's own.

The costume and shared objects as tools to manage privacy at 02:40–03:05 in the Computational Costume v2 video [72]. Images are author's own.

The first turn in the practice round of the Memory Menu. Images are author's own.

A use-wear Memory Menu after several turns (above) and a baseline menu (below). Images are author's own.

A selection error is highlighted in Memory Menu, for picking 'Sunflower' instead of the required selection, 'Banana'. Image is author's own.

Accuracy of recollection of the most picked category for baseline versus use-wear menus. Correct selections are compared against selections that are one off, two off, three off and so on from being correct.

Accuracy of recollection of the least picked category for baseline versus use-wear menus. Correct selections are compared against selections that are one off, two off, three off and so on from being correct.

Difficulty reported by participants for 1st menu baseline (grey) with 2nd menu use-wear versus 1st menu use-wear (red) with 2nd menu baseline.

Difficulty reported by participants for 2nd menu baseline (grey) with 1st menu use-wear versus 2nd menu use-wear (red) with 1st menu baseline.

Likert scale responses for effectiveness of the use-wear effect.

68 optional written responses for effectiveness of the use-wear effect coded into three categories.

Likert scale responses for desirability of the use-wear effect.

53 optional written responses for desirability of the use-wear effect coded into four categories.

A mock-up of the Cardboard poster. Image is author's own.

Paper mock-ups of Computational Costume v1. Images are author's own.

Fabric patterns for a shirt and pants. Torso (left), shirt arm (middle) and pants (right). Image is author's own.

Pants and top made from fabric patterns for laser cutting. Images are author's own.

Floating cardboard signage as imagined esemplastic objects, for the exhibition of Computational Costume v2. Image is author's own.

Hand-cut (above) versus laser-cut (below) fabric objects for Computational Costume v1. Images are author's own.

Manual and automated embroidery used for Computational Costume v2. Image is author's own.

Computational Costume design iterations. Images are author's own.

Stiffening cotton broadcloth for folding, from left to right: soaking in 1:1 PVA glue and water solution; air drying; cleaning glue residue; cleaned sheet; ironed sheet; and Miura folded sheet for Computational Costume v2. First image photography by Tonella Scalise, all other images are author's own.

A re-purposed jar used as a prop in the Computational Costume v2 video. Image is author's own.

A ready-made T-shirt adapted for quick release with hook-and-loop fasteners for Computational Costume v1. Images are author's own.

Coveralls with loop fastener strips sewn on (left) for attaching objects with hook fasteners sewn on (middle and right) for Computational Costume v2. Images are author's own.

Attempting to wear a paper costume. First image photography by Toby Gifford, second image is author's own.

Metal snap fasteners used in Computational Costume v2. Image is author's own.

A discreetly placed pin allows a marker to be attached to a token in the Computational Costume v2 video. Images are author's own.

The original Computational Costume mannequin (left), the mannequin toppling over with added leg supports (middle) and the updated mannequin configuration with strengthened interlocks and a single vertical support to replace the legs (right). Images are author's own.

Detail of a modular panel affixed on a steel wire mannequin created for Computational Costume v0. Image is author's own.

Timber mast structure used for Computational Costume v0. Image is author's own.

A suit for Computational Costume v2 intended for live performance is shown to audiences directly. Image courtesy of SensiLab, Monash University.

List of tables

An overview of where the four research investigations are located within this exegesis, with section links.

Interviewee specialisations and responses, coded into context, empathy, memory and method/process. Responses with reference to supporting people's memory and cognitive load have been highlighted in bold.

Interviewees' consistent design practice approaches, coded into context, empathy, memory and method/process.

Interviewees' unique design practice summaries.

Interviewees' critiques of the Memory Menu.

Acknowledgements

I have numerous people to thank for supporting my PhD by sharing their skills and encouragement throughout my journey.

Tim Dwyer, Jon McCormack and Vince Dziekan: for supervising my research with great thoughtfulness, a bit of humour in most meetings, and so much of your time. Collectively, you all helped shape the direction of my PhD from day one, up until the end. You gave me the freedom to pursue my visual design practice in a technical field. You all encouraged me to push into the unknown. Because of this, I was able to come out the other end with valuable lessons, and exciting results, that I could never have anticipated.

Julie Holden: for your great work in editing my writing, and teaching me how to elucidate ideas and outcomes, with greater care and intent. It was a real pleasure to go through different writing options and learn how they work. It was like learning to design again, but through text. I treasure the writing skills I have learned with you, because they have been so valuable—in both reflecting on and articulating my own logic, and communicating my work.

Lizzie Crouch: for your encouragement and valuable advice to share my work with the world. This has put my research in the places and minds of people I would otherwise have not reached.

Elliott, Pat, Dilpreet, Lizzie, Sojung, Yalong, Yingchen, Matt, Leona, Lora, Su-Yiin, Mike, Toby, Shelly and Nina: you have been the most wonderful friends. Your companionship and conversations alone were enough, yet a few of you also offered bags of organic fruit or vegetables, shared homemade cakes, drinks, sourdough bread starter and lovely gifts, and gave me a hand to install and take down my work.

Simone: sometimes you matched the feedback my supervisors gave me. You even came to my mid-candidature talk of your own accord. You also had the clout to crush my doubts when I thought settling on my costume idea was silly. Your generosity was tremendous.

Nick and Jesse: it did not matter how many times I said I needed to write, you kept trying to distract me. I love you both. You have been the best friends I could ask for—you have given me perspective on all manner of things, many a time. You both made the hard times in life less so, through your kindness.

My mother and father: you have been my greatest supporters for almost 27 and a half years. You have provided for me what you did not have access to. In addition, your great qualities have been infectious: Ma, your abundance of care for those around you; and Pa, your concern for justice and culture—and the fortitude you both possess. For this, I am so fortunate.

Also, there are a number of people to thank for their direct support of my research. In particular:

Finally, I thank the organisations that have generously funded my research, in particular: the Australian Government for its Research Training Program (RTP) scholarship; Monash University for its resources, as well as travel and equipment funding; and Arup's Melbourne office for its additional investment. These investments facilitated the cultivation of the original ideas presented through my research.

Introduction

When people use mainstream digital devices such as a smartphone, television or computer, there is a divide between the reality presented through the device and their surrounding physical reality. This divide matters because physical reality gives purpose and meaning to the applications on a digital device. This notion of a divide draws upon the perspective of embodied interaction. Embodied interaction defines a connection between people's lived experiences and their experiences through digital devices.

Take, for example, two people talking across a long distance through a video call. Through the windows of an on-screen video call, each person has enough information to hear the other and gauge facial expressions, alongside a peek into their surrounding environment.

Valuable information faithfully presented in a physical conversation, such as objects, events and movements in a shared physical space, is lost because of the cropping of the video camera. While this lost information may seem minor, it contributes to the lack of faithfulness of the ambience presented and thereby affects the interpretation of the session. A compromise is made here on valuable information normally available when communicating, in order to allow today's mainstream digital device to overcome the physical barrier of distance.

Losing information that people are accustomed to is problematic because the surrounding physical reality gives purpose to an activity such as a video call. This problem extends to other actions performed through digital devices, actions that are ultimately grounded in the physical reality outside of the device. Such actions include organising and finding content, and visualising information that references the surrounding world.

In my research, I explore what details can be designed into digital experiences to bring them on par with physical and virtual experiences in the world. Imagine how we as humans live and draw understanding from our experiences in physical reality. People who engage in activities through digital devices experience only a subset of possible experiences in the physical world. I divide these experiences into virtual and physical practices that are engaged across digital devices and in physical reality:

People's virtual and physical practices are endlessly rich and detailed. Yet today's average smartphone or computer engages people's senses in limited ways. Such devices are configured to reduce information of vast scale, such as the totality of one's work or environmental surroundings, into the confined boundaries of a screen through windows, files and maps. These confined boundaries are not representative of the world's complexity as experienced outside of the computer screen.

Returning to the example of the video call, there are several issues with the cropping presented by the video camera and video-calling application on-screen:

The aforementioned issues in relation to current mainstream digital media affect the ability of today's digital media designers to present information in a way that is adequately grounded in physical reality. These issues have prompted my research to address how future digital media might be designed to adequately ground interactions with respect to people's lived experiences.

In my research, I investigate four areas to explore the limitations of physical and virtual practices with digital media. They take into consideration today's digital technology, approaches across mainstream media, emerging digital media and imagined digital media. The four areas involve:

Background

The purpose of the four investigations ahead is to remediate the limitations imposed by current mainstream digital media using the perspective of embodied interaction. The perspective suggests paying attention to the connection between people's interactions through digital media and their lived experiences of the world. This can be accomplished through a range of design approaches across media and technologies. In order to cover this wide base, as part of my investigations I:

  1. Reviewed and studied a novel interface design approach through mainstream digital media on-screen
  2. Interviewed a range of experienced practitioners and researchers about their design approaches and feedback in response to work completed up until this point
  3. Reviewed relevant emerging digital media designs
  4. Engaged a speculative design process to imagine improvements to mainstream digital media in a probable future free of today's technological limitations

Table 1.1 provides an overview of where the four research investigations are located within this exegesis.

An overview of where the four research investigations are located within this exegesis, with section links.
Investigation 1, Simplifying physical and virtual practices on-screen: Introduction and Background; Supporting practices with screen-based digital devices; Conclusion (The Memory Menu); Future work and Concluding remarks; Appendix Memory Menu.

Investigation 2, Reviewing physical and virtual practice support across media: Introduction and Background; Supporting practices with screen-based digital devices; Conclusion (Interviews); Future work and Concluding remarks; Appendix Interviews.

Investigation 3, Reviewing new physical and virtual practices: Introduction and Background; Designing for a wider range of interactions beyond the screen; Conclusion (Design review); Future work and Concluding remarks.

Investigation 4, Creating new physical and virtual practices: Introduction and Background; Computational Costume design; Conclusion (Computational Costume); Future work and Concluding remarks; Appendix Computational Costume prototyping and presentation.

The investigations have been influenced by the concept of the physical–digital divide [25] proposed by Pelle Ehn and Per Linde, which is based on an embodied interaction perspective. The authors' treatment of the divide takes a paradoxical look at the virtual and physical, framed around combating demassification. Demassification is the loss of material and social properties as physical artefacts evolve into digital artefacts, as proposed by Brown and Duguid [16]. For example, a physical book presented as an e-book on a digital device loses the meaning that might attach to the wear on a book's cover or its pages, or the positioning of a book on a desk alongside other objects.

Brown and Duguid suggest that demassification is paradoxical because supposed improvements in technology that shed physical mass carry repercussions. There are two problems that contribute to the demassification paradox: digital technology has stripped away the physical form on which social practices once depended (physical demassification); and consequently the social practices that relied on congregating around a physical format have disappeared (social demassification) [16, pp.22–25].

Brown and Duguid suggest an awareness of latent border resources [16, pp.6–20] to counter demassification in digital media [16, pp.21–31]. They suggest that digital media is not the sole cause of demassification, but merely a place where demassification can be observed, because latent border resources have been overlooked by designers. Designers can correct digital media applications with an awareness of latent border resources. Latent border resources are qualities of an artefact that lie dormant but carry socially shared significance. These dormant qualities lie at the border between direct attention and peripheral attention. As an example, when reading a physical book we are directly attuned to the pages, while peripherally a worn spine may hold some personal significance for the reader; sitting at the border, however, is the thickness of the pages of an open book held in both hands, which provides an indirect feeling of progression through the book. These latent border resources are important for designers to recognise, because they can be taken for granted when designing for new media.

In the e-book example, latent border resources have been lost, yet new physical and virtual practices have been enabled which can be iterated upon. Now a reader can instantly skip between texts and follow links from one text to another without physically moving. The designer has new latent border resources to discover and provide to the reader, such as simplifying the search of texts by presenting a history of what has been navigated. In this case the new latent border resource provides an advantage over the physical library, because using a physical library would require the reader to walk about and carry a stack of books or a reference list to accomplish the same task. This exemplifies how problems from losing mass as the physical becomes virtual can be recovered through careful consideration of the latent border resources available to people.
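To make the navigation-history resource described above concrete, it can be sketched as a simple data structure. The sketch below is a minimal illustration of the idea, assuming a hypothetical e-reader; it is not a design produced in this research, and all names are my own.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Visit:
    """One navigation step: which text was opened, and when."""
    text_id: str
    opened_at: datetime = field(default_factory=datetime.now)


class ReadingHistory:
    """A new latent border resource for an e-reader: a browsable trail
    of visited texts, standing in for the stack of books and reference
    list a physical library would otherwise require."""

    def __init__(self) -> None:
        self._trail: list[Visit] = []

    def follow_link(self, text_id: str) -> None:
        # Record each text the reader opens, so the route from one
        # text to another can later be retraced without physically moving.
        self._trail.append(Visit(text_id))

    def recent(self, n: int = 10) -> list[str]:
        # Surface the most recently visited texts, newest first,
        # simplifying the search for something read earlier.
        return [visit.text_id for visit in reversed(self._trail[-n:])]
```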

The example of the e-book gaining new latent border resources despite demassification highlights that physical and virtual practices determine one another or are co-dependent, rather than only working in opposition as suggested by the paradox of demassification. It is therefore important to understand that new latent border resources emerge from new digital media. This is the fundamental reason why the embodied interaction perspective adopted in my research extends from applications of today's technologies to applications of imagined probable technologies.

Simplifying physical and virtual practices on-screen

In the first of the four investigations, I explore how physical and virtual practices can be simplified for on-screen interactions with an awareness of latent border resources. First I acknowledge the variety of interaction design approaches for solving problems on-screen today and develop a novel interface design based on these design approaches.

This section of my research investigates:

  1. What do a variety of interaction design approaches reveal about solving common on-screen interface design problems? This is explored in On-screen interaction design approaches.
  2. How can latent border resources be supported on-screen? This is explored in Supporting on-screen spatial memory through use-wear.

Reviewing physical and virtual practice support across media

As discussed, virtual and physical practices are paradoxical and dependent on one another. This dynamic moves forward as new capabilities are added, refined and replaced in media. Adding new capabilities to digital media requires a look beyond people's practices supported by on-screen interfaces.

In the second of the four investigations: Design and communication practices across domains, I interview a variety of design and communication practitioners about my research conducted so far and about the consistent and unique approaches they adopt to support people engaging with media. The culmination of these responses provides an indication of best practice approaches to follow in my research.

This section of my research investigates:

  1. What do a variety of design and communication practitioners have to say about the design approaches adopted in the research?
  2. What consistent and unique approaches exist in designers' practices to supporting people's activities across design and communication domains?

Reviewing new physical and virtual practices

In seeking to resolve the physical–digital divide by tackling demassification, Ehn and Linde (2004) turn to the perspective of embodied interaction to employ new kinds of physical and virtual digital media interactions. Embodied interaction assists designers to reflect on what physical and virtual practices are useful to people. Embodied interaction is a perspective in the field of human–computer interaction (HCI) popularised by Paul Dourish in the seminal book Where The Action Is (2001) [22]. The perspective suggests the meaning we derive from the interfaces of digital devices is largely influenced by our having a physical manifestation in the world as experienced through our bodies [22, pp.100–103]. Rather than digital media designers seeing people as machines that respond in a predictable fashion to familiar metaphors and instructions through digital artefacts, meaning obtained through digital artefacts is intertwined with people's unique lived experiences of the world as a whole, through metaphor and concepts [54] or other media [11].

Ehn and Linde (2004) explored embodied interaction through the ATELIER design research project for physical–digital studio environments for design students [25], shown in Figure 1.1. The studio environment allowed people designing an interactive installation to explore the qualities of a physical space through a physical model, ambient sound and light projections, before designing a 3D model. The environment enriched people's conceptualisation of design ideas by providing a wider range of necessary virtual and physical practices to work with, rather than limiting practices only to virtual 3D object creation, sketches and mental visualisations.

ATELIER allowing people to shift between modes of digital representation. Images from Embodied Interaction – designing beyond the physical–digital divide [25] by Ehn and Linde (2004). Used with permission.

The work produced by Ehn and Linde (2004) fits into a pattern of designs that can be referred to as the Material Turn [102][83] in HCI. Outcomes of the Material Turn in HCI demonstrate how designers can support people by intertwining digital media interactions into the world, as done with ATELIER [25].

In the third of the four investigations, I focus on Ubiquitous and tangible computing, which has been instrumental in supporting the Material Turn in HCI, with particular attention to Whole-body interaction as a way to coalesce interactions.

Ubiquitous computing, a proposal championed by Mark Weiser in 1991, suggests personal computers are a transitional step towards information technology that will one day be as invisible and ubiquitous as the text on signs and candy wrappers [101]. This direction has encouraged the proliferation of devices we experience today. Tangible computing is a subset of ubiquitous computing and attempts to break the many functions of screen-based interactions into standalone artefacts. As an example, the process of sculpting can be achieved through physical platforms and communicated digitally, such as in Physical Telepresence [63] by Leithinger et al. (2014). We find the emergence of this kind of computing today in the mainstream network-connected devices of the internet of things (IoT) (or enchanted objects [85] as described by David Rose) and dynamic materials of the radical atoms research program [48] led by Hiroshi Ishii. In the mainstream, ubiquitous and tangible computing falls back to screen-based devices. Screen-based devices allow the necessary management and networking of IoT devices.

Whole-body interaction presents an alternative to screen-based devices, because it shows how interactions through digital media can be attached to the body and the world instead. With the promise of augmented reality in the future, whole-body interaction devices could offer the ability to substitute screen-based devices with interaction through bodily and physical surfaces, and in mid-air.

This section of my research investigates:

  1. What new physical and virtual practices are offered by the Material Turn in HCI through ubiquitous and tangible computing to address people's dependence on screen-based devices?
  2. How might designers use whole-body interaction as part of the Material Turn in HCI in the future to replace people's dependence on screen-based devices?

Creating new physical and virtual practices

Designers need to be able to work beyond the constraints of today's technology to fill the gaps between the promise of outcomes of the Material Turn in HCI and their application in the mainstream. Speculative design is a process which allows designers to work beyond technological constraints to propose alternatives. Anthony Dunne and Fiona Raby, in their book Speculative Everything [23], discuss the role of speculative design proposals in opening new perspectives on societal challenges. They argue that such designs add to the public's vision of reality, challenge it and provide alternatives to it [23, p.189].

In the last of the four investigations: Computational Costume design, speculative design is used as a tool to address identified gaps without subscribing to today's technological constraints, for instance, the requirement for screen-based devices to manage computer networking. Through this investigation I seek to stimulate discussion around what would be both probable and desirable through whole-body interaction supported by augmented reality. Speculative design offers the ability to comment on how the technology has been developed so far and where it can be taken purely for the benefit of people.

A language to communicate and reflect upon speculative designs of whole-body interaction supported by augmented reality is explored in Computational Costume prototyping and presentation. Accessible materials and techniques are engaged to create the necessary illusion to support the speculative designs, because the technology to realise them is not yet available. It is not the responsibility of interaction designers to create technologies; it is more valuable for the designer to put forward design ideas for evaluation and development. This position draws upon the success of interaction designers using paper and cardboard prototyping techniques [24].

This section of my research investigates:

  1. What technology and functionality can be expected in a speculative design for whole-body interaction supported by augmented reality for new physical and virtual practices?
  2. How can designers economically prototype and present speculative designs for whole-body interaction supported by augmented reality for new physical and virtual practices?

Supporting practices with screen-based digital devices

Engagement with digital devices today predominantly involves on-screen interaction. Screen-based devices provide the main means of creating, communicating and looking at digital content. For this reason, designing for screen-based devices is the starting point of my research to understand how people's practices can be better supported through digital devices.

In my research I build upon how people's activities could be supported on mainstream screen-based devices in On-screen interaction design approaches and Supporting on-screen spatial memory through use-wear. I also pay attention to the wide range of design and communication practices through interviews as a counterpoint to my research, in Design and communication practices across domains.

On-screen interaction design approaches

My practical design research began by unravelling interaction design approaches in the real world. I worked with engineers at a consulting engineering firm to determine where they could improve their digital media designs. These engineers made a variety of on-screen tools and visualisations to assist their own work and to communicate information to clients. My task was to use the issues I discovered in their design work to come up with generalisable design solutions.

What became visible through talking with a variety of engineers about their designs was both the domain specificity of the content and the use of off-the-shelf interface and visualisation frameworks. The difficulties of this situation were: the content could be quite complex; and visual or interaction design skills were not being engaged to make experiences more palatable.

I found several examples that illustrate this in geographic information system (GIS) visualisations and interfaces. Anecdotally, these systems were viewed more favourably than their predecessors: large printed volumes and maps. However, it was common to see simple, easily solved design problems, like the one shown in Figure 2.1: layout issues where information could be divided into sections to make it easier to navigate, or a lack of iconography to make common features stand out.

A rendition of a geographic information system (GIS) visualisation encountered in practice. Image is author's own, based upon a working design.

These issues were made clear to the designers of the software, but it was not possible to fix them immediately. The presence of the issues was a symptom of how resources were allocated. There was no expectation to have well-versed designers on board full-time. Instead, alongside engineering work, engineers also designed their own on-screen tools when needed. Creating tools like the one shown in Figure 2.1 followed the path of least resistance: it gave the engineers control over the presentation of content that was deeply integrated into their work. This design practice avoided having to adopt ill-fitting tools or to outsource work.

While experiencing this inertia, I reflected upon the different design practices applicable to the situation at hand. In order to contribute, I reflected on four relevant kinds of design practitioners:

Supporting on-screen spatial memory through use-wear

Supporting spatial memory on-screen through use-wear is analogous to using bookmarks in physical books. Use-wear (also known as computational wear, read wear, visit wear or patina; see Figure 2.2) provides a visual signal over parts of the interface which have been interacted with, along with an indication of how frequently those parts have been used [43][92][47][1][66]. People can use this information to pick up where they left off when coming back to an interface, or to quickly identify familiar and unfamiliar areas when exploring a new interface. It is also a signal that can be read by others, because the progress made through the interface, like a bookmark in a book, is openly visible. These kinds of signals present useful latent border resources, as discussed in the Background. Supporting spatial memory is one direct way of supporting latent border resources.
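As an illustration of the underlying mechanism, use-wear reduces to two parts: a per-element interaction count, and a mapping from that count to a visual intensity a renderer can draw. The sketch below is my own minimal illustration, not the implementation of Patina or of the Memory Menu; the names and the normalisation against the most-used element are assumptions.

```python
from collections import Counter


class UseWear:
    """Minimal use-wear model: count interactions per interface element
    and expose a normalised wear value in [0, 1] that a renderer can map
    to highlight opacity."""

    def __init__(self) -> None:
        self._counts: Counter[str] = Counter()

    def record(self, element_id: str) -> None:
        # Called whenever an element is clicked or selected.
        self._counts[element_id] += 1

    def wear(self, element_id: str) -> float:
        # Normalise against the most-used element, so the heaviest wear
        # is fully saturated and untouched elements show no wear at all.
        heaviest = max(self._counts.values(), default=0)
        if heaviest == 0:
            return 0.0
        return self._counts[element_id] / heaviest


# Example: after a few selections, each menu item gets an overlay
# whose opacity reflects how often it has been picked.
menu = UseWear()
for item in ["Open", "Open", "Open", "Save", "Export"]:
    menu.record(item)
print(menu.wear("Open"))    # 1.0 (most used)
print(menu.wear("Export"))  # ~0.33
print(menu.wear("Print"))   # 0.0 (never used)
```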

Screenshots of the Patina use-wear effect. Images from Patina: Dynamic Heatmaps for Visualizing Application Usage [66] by Matejka et al. (2013). Used with permission.

Supporting spatial memory on-screen is a well-explored area [88]. Use-wear fits within a variety of novel and established strategies to support the location of objects on-screen. These strategies include, but are not limited to: laying out information as maps; traces and scents; obscuring information; and mnemonics. I explain each in detail below to provide a context for adopting use-wear.

Screenshot of Data Mountain. Image from Data Mountain: Using Spatial Memory for Document Management [82] by Robertson et al. (1998). Used with permission.

Of the spatial memory supporting strategies, use-wear lacks concrete results to suggest that a subtle application on menu interfaces would be effective. Implementations of use-wear have been evaluated at a small scale and shown to be favourable [43][47][1][66]. However, conclusive benefits have only been demonstrated where visibility is obscured in fisheye views [92]. There is also a known benefit in highlighting popular menu items to work with a bubble cursor (or area cursor), which captures those items within a widened region around the cursor, as shown in bubbling menus [96]. So far, a subtle use-wear effect on a standard menu has not been validated. Use-wear is also the easiest of the spatial memory strategies to apply in practice: the effect does not require restructuring an interface, adding animations or making people learn additional information such as a mnemonic.
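The bubble (or area) cursor behaviour mentioned above can be sketched in the same spirit: each item's capture region around the cursor is widened in proportion to its popularity, making popular items easier to acquire. The function below follows the general area-cursor idea rather than the exact bubbling menus design [96], and its weighting scheme is an assumption for illustration.

```python
def capture_target(cursor, items, popularity, base_radius=8.0, boost=12.0):
    """Pick the item whose popularity-weighted distance to the cursor is
    smallest: a simplified area/bubble cursor.

    cursor: (x, y) position; items: {item_id: (x, y)} centre positions;
    popularity: {item_id: value in [0, 1]} from interaction history.
    """
    best, best_score = None, float("inf")
    for item_id, (x, y) in items.items():
        distance = ((cursor[0] - x) ** 2 + (cursor[1] - y) ** 2) ** 0.5
        # Popular items subtract a larger radius from their distance,
        # effectively widening the region that captures them.
        radius = base_radius + boost * popularity.get(item_id, 0.0)
        score = distance - radius
        if score < best_score:
            best, best_score = item_id, score
    return best
```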

A mixed-methods approach was used to quantitatively and qualitatively evaluate use-wear for use in practice through the Memory Menu. The Memory Menu study detailed in Memory Menu presents the design and evaluation of a subtle use-wear effect for large menus. The work attempts to support people's spatial memory while using an interface, and was motivated by its simplicity and applicability in practice. A rigorous online evaluation with 100 participants, yielding 99 valid results, showed no statistically significant results in favour of the use-wear effect. The null hypothesis H0 could not be rejected, and H1 and H2, as detailed in Memory Menu hypotheses, were not supported: the use-wear effect had no significant effect on selection times or on the memorability of items selected. Qualitative responses for menu difficulty suggest participants had a stronger preference for the use-wear effect after using the baseline menu first. Overall, participants' attraction to the use-wear effect was polarised.
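For readers unfamiliar with the shape of this analysis, a within-subjects comparison of selection times can be pictured as follows. The data and the choice of a paired t-test here are purely illustrative, not the study's actual procedure or results, which are detailed in Memory Menu results and analysis.

```python
from scipy import stats

# Hypothetical per-participant mean selection times (in seconds) under
# each condition; every participant used both menus, so the samples
# are paired.
baseline_times = [2.1, 1.9, 2.4, 2.0, 2.2]
use_wear_times = [2.0, 2.0, 2.3, 2.1, 2.2]

# H0: no difference in selection times between conditions.
t_stat, p_value = stats.ttest_rel(baseline_times, use_wear_times)

if p_value >= 0.05:
    # A non-significant result means H0 cannot be rejected; it does not
    # mean H0 is proven true, which is why such results are reported as
    # inconclusive rather than negative.
    print(f"No significant difference (p = {p_value:.2f})")
```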

The results are therefore inconclusive as to whether the Memory Menu provides an improvement over traditional non-use-wear menu interfaces. The results illustrate that spatial memory issues are not easily solvable by placing information on top of an interface. As shown at the beginning of this section, alternative ways of supporting spatial memory are generally more involved. In light of this, spatial memory support needs to be carefully designed into an interface from its conception, with consideration of its content, presentation and audience. From this point, I sought a broader line of enquiry.

Design and communication practices across domains

As a counterpoint to my research, I reflected upon the design and communication practices of practitioners inside and outside of interaction and HCI design. Practitioners outside interaction and HCI design also deal with engaging audiences and supporting their practices. The importance of this process was to learn from practitioners who deal with media other than digital devices.

In the previous sections I have looked at how latent border resources on-screen might be supported by directly targeting people's spatial memory. This presents a narrow perspective through a single medium, providing a narrow means to bridge the divide between physical and virtual practices through digital media. Interviews with professionals who collectively engage with a variety of media can provide a valuable source of broader guidance in supporting people's activities through digital media.

Interviews were conducted with a variety of researchers and practitioners from backgrounds based in modern and traditional art, design and communication practices, on and off digital media, with established and senior experience. The interview motivation, design and results are detailed in Interviews. The 8 interviewees provided their insights into supporting audiences' practices beyond digital devices by responding to the work conducted in Supporting on-screen spatial memory through use-wear, framed around the HCI language of supporting people's memory and reducing their cognitive load.

Open coding of the interviewees' responses reveals a standard pattern of dealing with the audience's context and showing empathy towards them when making design considerations through an iterative process. This stood in contrast to supporting memory and reducing cognitive load, which were addressed directly by 5 of 8 and 3 of 8 interviewees, respectively. Interviewees also provided constructive comments to improve the Memory Menu.

Beyond supporting spatial memory and reducing cognitive load as explored with the Memory Menu, interviewees revealed four unique approaches: supporting people's modalities; working within a relatable cultural context and history; avoiding didacticism; and moving away from cultural constraints and what is culturally acceptable. The interviewees' four unique approaches can be seen as contradictory on the topic of culture, as they both rely on culture and defy it. However, as a whole the approaches provide direction for accommodating people's activities with respect to their vast capabilities and environments, as explored in the next section.

Conclusion

In this chapter I have explored how digital media designers might support people's virtual and physical practices through mainstream screen-based devices. I began by investigating the support of spatial memory for on-screen interfaces through a subtle use-wear effect. This approach was initiated on the back of real-world practical experience where it was not ideal to re-design interfaces already used in practice. Experimental results did not find a benefit to the effect as applied in the Memory Menu study. This result encouraged a broader look at designers' practices to support people's activities.

Through a series of interviews, a broad range of design and communication practitioners were asked to provide feedback on the work conducted so far on use-wear. They were also asked about their own practices to support people's activities. The interviewees revealed ways to improve the work conducted on use-wear and suggested accommodating people's vast capabilities and environments, while both challenging and accommodating the culture which surrounds interaction. Based on the interviewees' advice, I put aside a narrow focus on spatial memory and re-framed my research to target new ways of supporting people's activities outside of screen-based devices.

Designing for a wider range of interactions beyond the screen

The narrow scope of Supporting on-screen spatial memory through use-wear and consideration of Design and communication practices across domains encouraged a movement towards the kind of design practice engaged in by Ehn and Linde, as discussed in Reviewing new physical and virtual practices. In this design practice, digital and physical devices take on new material expressions, and people take on board new kinds of practices. This involves incorporating a wider range of interactions that come from engagement with the world outside of the screen-based device into people's activities conducted through digital media.

On-screen interface design reconceptualisation

To begin accommodating a wider range of people's lived experiences in my design practice, I applied the concerns of demassification and embodied interaction, as Ehn and Linde did for their design research project ATELIER, to translate a screen-based interface design to a more engaging physical environment, as discussed in Reviewing new physical and virtual practices. As a trial, I began by reconceptualising the design of a commonly used word processing application. My redesign, shown in Figure 3.1 and imagined for a large screen surface, caters to the human ability to work beyond a traditional keyboard, pointer and screen size. My design illustrates how it might be possible to conduct the range of activities involved in authoring a document without a screen-based device. The design includes inserting data into templates for fine-grained control of the document design.

A commonly used word processor reconceptualised for a large surface accommodating people's wide range of practices and physical abilities. Image is author's own.

The design process involved cutting the application into its constituent menus and rearranging them to have a neater procedural flow without the constraint of a fixed-size screen. The interface in Figure 3.1 was then divided so different practices occupied their own areas, shown in Figure 3.2. These areas were positioned according to their place in the process of writing, from conception to export, e.g. the document elements on the left and the finished outcome on the right. As the writer proceeds through the process, they can occupy an area as necessary. Areas specific to formatting were placed as close as possible to where the associated formatting practices take place: on the document itself. Overall, the areas were ready at hand when needed, rather than being concealed. However, the activity is still concentrated on a single surface, when it could have a deeper connection with the surrounding environment.

Different possible practices defined as areas on a reconfigured word processor design for a large surface. The practice of directly editing a document is highlighted. Arrows indicate connections between areas. Image is author's own.

In extending the design proposal shown in Figure 3.1 and Figure 3.2 to integrate it with the surrounding environment, I then imagined how some of the areas might manifest themselves on the bodies of people in a poster design shown in Figure 4.3, acting as extensions of people's hands and arms. This allows the practices to be carried with people, enabling the interface to feel like a portable tool to use when needed rather than a fixed surface with options cluttered around a document. This idea has been explored and explained through the design and creation of a 3D cardboard poster, detailed ahead in Cardboard poster and interface.

To ground the design practice defined here and provide direction for it, I reflect on three relevant areas that have contributed to ways in which designers have accommodated interactions beyond screen-based devices:

The Material Turn

The Material Turn, as defined by Robles and Wiberg (2010), presents how physical and digital qualities can come together for the benefit of digital media interaction design [83]. The Material Turn can be seen as a concentrated effort to produce work in a similar way to what Ehn and Linde pioneered in 2004 through their design research project ATELIER, as introduced in Reviewing new physical and virtual practices. The Material Turn reveals the potential in blending traditionally non-digital material qualities and physical experiences into digital artefacts.

The Material Turn is distinct from other design directions in HCI that seek to remediate the connection between the physical and digital, in that it focuses on the materiality of interactions. This focus on materiality indicates a desire to reconcile the divide between the physical and the digital through new materials and new relationships between materials [83, p.137]. This can be contrasted with standard design approaches that build upon graphical user interfaces (GUI), which rely on metaphorical relations between physical and digital such as files and documents, or tangible computing, which presents physical analogues to digital information [83, p.137]. Materiality presents a broader view of what is possible that extends outside the constraints of the physical affordances of established digital media.

Within the discourse of the Material Turn, designers have been encouraged to move away from an allegiance to particular kinds of materials (e.g. digital devices or non-computational media) and to concentrate instead on the experiences and interactions afforded by any material:

The Material Turn in HCI, as described, shines a light on the virtual and physical practices enabled by digital media that move beyond screen-based devices. Discourse in the area provides a means to critique the contemporary directions of ubiquitous and tangible computing. I now explore how well this computing bridges the divide, in terms of the new kinds of experiences afforded and whether material status has been overcome.

Ubiquitous and tangible computing

Ubiquitous computing was proposed by Mark Weiser in 1991 [101] as a future to be achieved where computing is invisible. Ubiquitous computing provides us with a perspective from which to imagine computing that is as ubiquitous as everyday objects and thereby more readily available to the different kinds of situations people experience. The proposal for ubiquitous computing best serves as a device to provoke new design concepts, rather than describing an actual kind of digital media. It is argued that the future put forward by ubiquitous computing is proximate [9]; in other words, it is always just out of reach. Ubiquitous computing has arrived through access to all forms of screens today, from watches to televisions connected to the cloud. However, there is always room to make interaction through computers more ubiquitous—we are yet to create advanced artificial agents to converse with in order to perform tasks.

I specifically explore tangible computing, which provides a focus for designers to explore new physical and virtual practices beyond screen-based devices. Tangible computing, as explored in places such as the Radical Atoms research program [48], shows how communication can take place over network-connected dynamic surfaces that people can sculpt, as demonstrated in Physical Telepresence [63] by Leithinger et al. (2014), shown in Figure 3.3. Physical Telepresence allows people to share physical forms across long distances with greater sensorial qualities than alternatives such as videos or photographs.

The Physical Telepresence system in use. Images from Physical Telepresence [63] video by Leithinger et al., MIT Media Lab, Tangible Media Group (2014).

Screen-based devices are a locus for digital media interactions that run counter to people's full range of sensorimotor abilities—or what humans can achieve with their senses and physical abilities. People can carry screen-based devices with them practically anywhere, but these do not encourage direct engagement with the world that surrounds them. This matters because surrounding contexts give the representations on screen meaning—such as: a conversation between people that can involve locations, other people and objects; or a virtual model of an object set made for physical interaction.

With respect to the Material Turn discussed, I explore how tangible computing extends digital media into new physical and virtual practices, but also highlight a continued attachment to the material status of digital devices. I explore how tangible computing:

Following the review, I propose ways of Relieving dependence on screens.

Physical objects with virtual overlay

Tangible computing allows designers to present information from the physical world through physical objects with virtual overlays, rather than translating physical objects to be presented completely on-screen. Urp [98] by Underkoffler and Ishii (1999), shown in Figure 3.4 (using the I/O Bulb and Luminous Room system by Underkoffler and Ishii (1998) [97]), demonstrates how people can visualise the shadows and reflections cast by built structures and the airflows travelling around them. Physical tools can be used to measure distances, apply materials like glass and direct wind effects. Information that might be lost in a 2D representation is gained.

The Urp system in use. Video stills from Urp [98] video by Underkoffler and Ishii, MIT Media Lab, Tangible Media Group (1999).

However, in Urp the physical objects used are not mutable like virtual objects on-screen. There is no possibility of modifying the models or rearranging them into new views, for instance by slicing them. I explore ahead how tangible computing presents physical materials that can be manipulated physically and virtually.

Manipulable physical and virtual surfaces and objects

Tangible computing allows for physically and virtually manipulating objects and surfaces. This extends the ability of physical models, as presented in physical objects with virtual overlay, so they can be treated in a similar way to virtual objects on-screen while also accommodating different physical uses. This interaction extends across large surfaces, as well as objects at room scale and hand scale:

The inSide system in use. Images from inSide [95] video by Tang et al., MIT Media Lab, Tangible Media Group (2014).

These works illustrate how different kinds of objects can be manipulated physically and virtually, rather than purely physically as a model or purely virtually as an object on-screen.

Enhanced manual processes

Being able to modify objects physically and virtually, as explored in the previous section, can enhance manual processes. Virtual tools that allow operations such as instantly copying and moving objects or generating accurate geometry can be applied to physical operations. In addition, physical actions can be communicated across digital networks.

The Perfect Red speculative design. Video stills from Perfect Red [48, pp.47–48] video by Bonanni et al., MIT Media Lab, Tangible Media Group (2012).

Ambient perception

The tangible computing explored so far comes with the benefit of alleviating direct attention, by utilising people's ambient perception. Screen-based devices traditionally require direct focus and are otherwise not intended to be easily visible. Tangible computing, which occupies a 3D presence, is visible peripherally and from afar, provided it is large enough and contrasts with surrounding objects.

The sculpting and movement of 3D forms are given greater expression in: surfaces like those of Urp, explored in Physical objects with virtual overlay; the works explored in Manipulable physical and virtual surfaces and objects; and materials like Perfect Red, explored in the previous section. Greater expression comes from the movement of bodies to perform direct physical actions, rather than the movement of hands and arms across a trackpad, keyboard or screen, in relation to flat representations. These greater expressions serve as rich latent border resources (see Background): prominent movements to which attention can be mentally attached during form-making and collaborative activities.

Below I explore a few ways that designers have deliberately leveraged people's ambient perception in a discreet fashion to relieve the need for direct attention in order to use devices:

The Pillow Talk system in use. Video stills from Pillow Talk [76] video by Joanna Montgomery (2010). Used with permission.

Dependence on many devices

So far, I have explored how tangible computing brings many benefits by bringing together physical and virtual practices. However, an issue which has been glossed over in the development of tangible computing is that interactions rest upon a range of physical devices. Tangible computing rests upon the material status (see Materiality of interaction in The Material Turn) of common household or office objects and furniture. The issue is that activities possible through applications on screen-based devices become fragmented across a range of individual devices that may or may not cooperate.

In practice, dependence on the presence of many devices is not ideal in that it requires people to furnish their homes and offices with the right kind of devices and to maintain them, rather than relying on a few powerful screen-based devices. Moreover, these tangible devices have not presented true freedom from screen-based devices because they need to be centrally managed by a screen-based device. This is evidenced in the mainstream adoption of tangible computing through the IoT (or enchanted objects [85]). The IoT is not as complex as the tangible computing covered here, yet it presents the closest mainstream generation of computing beyond screen-based devices. Examples of the IoT are network-connected lights, toys, home appliances and blinds or doors—which can perform automated actions and communicate their status. An IoT device today may not be a fully actuated and sensing surface, as shown in Manipulable physical and virtual surfaces and objects, due to high cost and proof-of-concept status. However, at a fundamental level the IoT and tangible computing allow information to be captured and shared between physical devices to enable new kinds of interactions with digital media. The IoT is beginning to realise some of the vision of tangible computing by bringing computational ability to objects in our environment.

Despite tangible computing being an apparent antithesis to screen-based devices, IoT devices are dependent on screen-based devices to work. The earliest developments in tangible computing also hinted at this dependence. In 2000, the HandSCAPE [62] digital tape measure, shown in Figure 3.8, showed how the distance and orientation of measurements could be gathered by a network-connected tape measure to generate a virtual 3D solid of the object being measured. The measurements were displayed on a screen because, even by today's standards, a screen such as a smartphone's remains the most economical format for doing so.

The HandSCAPE system in use. Video stills from HandSCAPE [62] video by Lee et al., MIT Media Lab, Tangible Media Group (2000).

The continued dependence of tangible computing and the IoT on screen-based devices is attributable to unrivalled convenience through providing:

By recognising these conveniences, designers can conceive new methods to apply the same effects without depending on screen-based devices.

Relieving dependence on screens

It is possible to relieve dependence on screens by replacing the conveniences of screen-based devices that support dependence on many devices. As described, conveniences that need to be factored in are: dynamic controls; sensor information; and ubiquity. Technology for Augmented reality see-through devices, explored ahead, shows promise in achieving this by allowing the portable superimposition of visuals through wearable glasses.

Dynamic controls: augmented reality devices can extend the presentation of dynamic controls outside of screen-based devices and the fixed areas provided by projectors. Tangible computing works as shown in Physical objects with virtual overlay and Manipulable physical and virtual surfaces and objects rely on projectors to overlay dynamic information and controls on any surface within a fixed area. Augmented reality could extend this to any surface by providing the necessary ubiquitous display.

Sensor information: augmented reality wearables can also allow the same supportive sensing technologies found in screen-based devices.

Ubiquity: it should be noted that augmented reality, as proposed, only allows the management of tangible computing and the IoT to be ubiquitous. This does not target the heart of the matter, which lies in the material status of tangible computing and the IoT.

Despite their advantages, it can be argued that tangible computing and the IoT have never been designed to function as independently as screen-based devices do, because tangible computing and IoT devices have been designed in a world where screen-based devices are already able to simplify the networking and management of devices. For this reason, I now investigate how interactions outside of screen-based devices can be as ubiquitous as interactions are on-screen by exploring the specific area of whole-body interaction, which allows people's own bodies and surrounding environments to act as the primary surfaces for people's activities through digital devices.

Whole-body interaction

Whole-body interaction [26] is a specific subset of the ubiquitous and tangible computing explored already. Whole-body interaction involves both input from and feedback through: physical motion; the normal five senses plus the senses of balance and proprioception; cognitive state; emotional state; and social context [26, p.1]. The central tenet of whole-body interaction is to utilise a greater range of human abilities for the use of digital media.

Whole-body interaction is particularly compelling for my research because it directly addresses how people can avoid relying on screen-based devices. I explore ahead how whole-body interaction is intrinsically grounded by people themselves, or representations of themselves, in the environments where they find themselves.

I have already touched on works which involve greater use of the body in Ubiquitous and tangible computing through the use of physical objects and surfaces. I build upon this by reviewing how whole-body interaction:

Following the review of these areas, I suggest a Combined technology approach centred around augmented reality to support people's Virtual identity in order to ground interactions through digital media.

Bodily interfaces

I recognise bodily interfaces in whole-body interaction as interfaces that use people's bodies as the primary surfaces for digital media interactions. These interfaces present an alternative to the interfaces presented on screen-based devices or the surfaces of physical objects. I classify bodily interfaces into:

Body-shadow interfaces

Body-shadow interfaces rely on input from people's hands and arms, with their bodies projected on sharable surfaces.

The VIDEOPLACE system in use. Video stills from Videoplace '88 [51] video by Myron Krueger et al. (1988).
The Whole Body Large Wall Display Interface system in use. Video stills from Whole Body Large Wall Display Interaction [91] video by Shoemaker et al. (2010). Used with permission.
On-body interfaces

On-body interfaces rely on input from people's hands and arms, with interfaces projected on their bodies.

The Armura system in use. Images from On-Body Interaction: Armed and Dangerous [42] by Harrison et al. (2012). Used with permission.

Both body-shadow and on-body interfaces describe valuable ways of centring digital media interactions on the body. With and without visual feedback, it is possible to use the body as a container for interactions, allowing interactions to be attached to the body in shared spaces. However, most of the works are constrained to set areas by the use of projectors. This work could be enhanced by breaking the constraints of screens with augmented reality.

Augmented reality

Augmented reality allows the projection of virtual objects among physical objects. Augmented reality is generally presented through screens; however, Augmented reality see-through devices, explored ahead, show how wearables can allow augmented reality to be portable. This advancement would alleviate the need to depend on the screens and projectors used in the bodily interfaces explored.

Augmented reality for whole-body interaction involves input from the body and surrounding environment to create and position virtual objects and act as a trackable surface for wearable interfaces, allowing augmented reality to fit as a replacement for screens used in bodily interfaces.

Virtual objects

Augmented reality allows the possibility of creating and manipulating virtual objects in physical dimensions.

The T(ether) system in use. Images from T(ether) – Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation [53] video by Lakatos et al., MIT Media Lab, Tangible Media Group (2014).

The need to carry a screen-based device to view virtual objects presents a major issue, as the interactor has to actively hold and operate a device in order to manipulate objects. Wearable augmented reality, with virtual wearable interfaces, would allow the possibility of freeing the hands, as in bodily interfaces.

Wearable interfaces

I use the term 'wearable interfaces' in the context of augmented reality to refer to design concepts that can transform people's hands and surrounding area into interfaces, as touched upon in bodily interfaces.

The Project North Star system in use. Video stills from Project North Star: Exploring Augmented Reality [59] and Project North Star: Desk UI [58] videos by Leap Motion, Inc. (2018). Used with permission.

Whole-body engagement

The interfaces previously explored in augmented reality and bodily interfaces focus exclusively on the use of hands and arms. Whole-body interaction allows the possibility of engaging the whole body through large surfaces that have been designed to register input from other parts of the body. This expands people's range of expression when interacting through digital media. Whole-body interaction can allow what I term, based on my observation, 'whole-body sensing' for registering movements and projecting information on bodies, and 'floor-based sensing' for registering footsteps and the movement of objects on floors.

Whole-body sensing

Whole-body sensing devices allow the form and positioning of the body to be used for digital media interactions [26, p.140].

The Wall++ system in use. Video stills from Wall++: Room-Scale Interactive and Context-Aware Sensing [104] video by Zhang et al. (2018).
The Augmented Studio system in use. Video stills from Augmented Studio: Projection Mapping on Moving Body for Physiotherapy Education [45] video by Hoang et al., Microsoft Research Centre for SocialNUI (2017).
Floor-based sensing

Building on whole-body sensing [26, p.140], I use the term 'floor-based sensing' to refer to distinct works that allow people's steps and engagement with objects on floors to be captured for digital media interactions.

The Multitoe system in use. Video stills from Multitoe interaction: bringing multi-touch to interactive floors [5] video by Augsten et al., Hasso Plattner Institute (2010). Used with permission.
The Kickables system in use. Video still from Kickables: Tangibles for Feet [90] video by Schmidt et al., Hasso Plattner Institute et al. (2014). Used with permission.

Force feedback

Force feedback provides the physical sensation a person would expect from a physical action like holding or moving an object without the object existing physically [64]. While physical feedback in tangible computing is standard, force feedback is a missing element of whole-body interaction.

The electrical muscle stimulation force feedback system by Lopes et al. (2018) in use. Video stills from Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation [64] video by Lopes et al., Hasso Plattner Institute (2018). Used with permission.

Combined technology approach

The combination of augmented reality, whole-body engagement and force feedback could mean people only need wearable devices for digital media interaction, in contrast to the Dependence on many devices explored in tangible computing. Wearable devices can be used to hold virtual objects and display interfaces on and off the body in bodily interfaces, as explored in augmented reality. These applications can be extended to whole-body engagement, as explored. The addition of force feedback can provide physical feedback to people for engaging with virtual objects.

Mirrorworlds Concept: The Architect. Video still from Mirrorworlds Concept: The Architect [57] by Leap Motion, Inc. (2018). Used with permission.

Virtual identity

Virtual identities are an implicitly applied or neglected part of the design of bodily interfaces and the application of augmented reality, force feedback and whole-body engagement. Virtual identities ground people's presence in shared whole-body interaction environments with some kind of visual identification to distinguish one another. Some examples have applied a form of virtual identity to distinguish multiple people; however, many of the examples shown are technical proof-of-concept demonstrations that have not factored in virtual identities. Imagining shared environments enabled by a combined technology approach requires a deliberate focus on the design and application of virtual identities for people. I now explore how virtual identities have been applied in whole-body interaction.

I call attention to three predominant ways of applying virtual identities in whole-body interaction:

Body-shadows

Body-shadows mirror the outlines (or shadows) of people on shared wall surfaces in order to contain interfaces and distinguish people. They are explored in bodily interfaces.

Contact outlines

Contact outlines highlight regions where people are making contact with a shared surface. These outlines have been used in whole-body engagement where people cannot be highlighted on walls.

Virtual clothing

Virtual clothing consists of visuals projected on people, including wearable interfaces and information. These visuals have been explored in bodily interfaces and the application of augmented reality and whole-body engagement for whole-body interaction.

The Choreomorphy system in use. Video still from Choreomorphy [81] video by El Raheb et al. (2018). Used with permission.

Virtual clothing can be likened to the utility of physical clothing for identification, storage and expressiveness. Paired with augmented reality, the interfaces and information presented through virtual clothing could be as ubiquitous as physical clothing. Virtual clothing has the potential to extend what we already do with physical clothing. Speculative visions of whole-body interaction in augmented reality have touched on the use of ubiquitous virtual identities through virtual clothing, in particular, Mirrorworlds [68] by Keiichi Matsuda and Anna Mill (2018) and Hyper-Reality [67] by Keiichi Matsuda (2016).

A Mirrorworld in a classroom, for exploring the water cycle of an environmental landscape at room scale. Shown is the augmented reality setting (above) and the corresponding physical setting (below). Images from Leap Motion, Inc. [68] by Keiichi Matsuda, illustrations by Anna Mill (2018). Used with permission.
A Mirrorworld in an office collocated with a medical operating theatre. Shown is the augmented reality setting (above) and the corresponding physical setting (below). Images from Leap Motion, Inc. [68] by Keiichi Matsuda, illustrations by Anna Mill (2018). Used with permission.
A virtually concealed thief in a speculative augmented reality. Video still from 'Hyper-Reality' [67] by Keiichi Matsuda (2016). Used with permission.

Explorations of donning virtual identities through costume, as shown through these speculations, remain limited in practice in the whole-body interaction field. The explorations shown only scratch the surface of what might be possible through virtual identities using a Combined technology approach, as explored. There is a need for a detailed exploration of how virtual identities might ground digital media interactions that are not dependent on screen-based devices.

Conclusion

Looking beyond screen-based devices in this chapter has led to investigating outcomes of the Material Turn in HCI. The Material Turn presents a focus on the materiality of interactions—or how materials, whether they are physical or digital, come together to support people's activities. In exploring ubiquitous and tangible computing I found beneficial ways in which physical and virtual materials could come together to support people. However, this computing in practice relies on screen-based devices for management. Investigating the specific area of whole-body interaction reveals how interactions can take place on people's bodies and their surrounding environment through a speculative combined technology approach based in augmented reality. The role of people's virtual identity in augmented reality to ground interactions is a crucial and underexplored area that requires further work.

Computational Costume design

In Designing for a wider range of interactions beyond the screen, I reviewed how digital media could be designed with greater consideration of people's abilities and surrounding environments. This exploration revealed Ubiquitous and tangible computing, which allow people to deal with a range of manipulable and ambient physical and virtual objects. However, as this kind of digital media moves from research to the mainstream, it is dependent on screen-based devices for its operation. To remediate this dependence, I reviewed the specific area of Whole-body interaction. I showed how people might use a range of technologies, primarily based in augmented reality, to support virtual identities to ground interactions through digital media.

Through the design of Computational Costume proposed in this research, I develop the application of virtual identities in a speculative augmented reality to ground interactions through digital media.

Background

There are several perspectives which need to be acknowledged in the development of Computational Costume:

Hyperreality

Hyperreality, as discussed by Leonardo Bonanni in 2006 in the context of HCI, suggests transforming people's physiological perception of space [12], in contrast to ubiquitous computing, which presents a proximate future that never arrives [9]. This problem can be understood when considering that today's proliferation of internet-connected screen-based devices can be claimed as ubiquitous, just as future augmented reality headsets capable of presenting virtual objects ubiquitously could be. This is problematic because the trajectory of ubiquity says nothing certain about the abilities it enables. The proposition holds little design value, because it is not particularly useful to think of computing as ubiquitous or not; it is more useful to consider which design trajectory of ubiquity designers should take. Hyperreality is useful for imagining digital media interaction that is an extension of people's physical and social realities. It is a more descriptive perspective than ubiquitous computing because it shifts designers' imagination from computers that are ubiquitous to a reality that is in some capacity enhanced through computers.

As with ubiquitous computing, there is no concrete technological approach for pursuing a hyperreality, and to an extent this also presents a proximate future. Bonanni's hyperreality is achieved through ubiquitous and tangible computing. Bonanni suggests that augmented reality only overlays the world with useful information [12, pp.130–131] and is more cognitively intensive to use, requiring focused attention on the task at hand [12, p.131]. However, Bonanni cites an example where augmented reality was applied poorly, rather than the technology itself being deficient. I have explored how augmented reality would be more beneficial than ubiquitous and tangible computing. While technology choices under a hyperreality may vary, the perspective ultimately speaks of an enhanced reality.

Baudrillard's hyperreality, described in 1981, which inspired Bonanni's perspective, spans digital and analogue media. Hyperreality originally carried a negative connotation, as the generation by models of a real without origin or reality [8]. Bonanni describes Baudrillard's definition as places that feel more real than the real world by blending an existing environment with simulated sensations [12, p.130].

Hyperrealities are found everywhere, from works of fiction such as books and games, to paintings, advertisements and theme parks. Their applications can be quite innocuous and intended to entertain or teach. Yet they can mislead, promoting unrealistic images that encourage harmful attitudes or behaviours. For instance, drama television shows may inspire a longing for unrealistic ideas of romance or body image whose direct pursuit is often counterproductive to understanding and overcoming issues of self-esteem. As with any media, it is up to everyone to recognise and act on issues as they arise. This applies to how a hyperreality through Computational Costume would be managed.

The Capabilities of Computational Costume described ahead are based on supporting a virtual and physical hyperreality. People's perception of their surrounding environment is transformed through vision, sound and touch.

Dark patterns

Hyperreality and augmented reality, as proposed to enhance people's whole field of view, are powerful. They require strong ethical considerations around how and what images are presented. To tackle this issue, we can look at design patterns today that are coercive—such that they encourage behaviours outside of people's intentions. An example involves unwittingly providing private information to generate content that people will engage with to a higher degree. In certain situations, this can run counter to a person wanting or needing to concentrate on more important activities. Such design patterns are known as dark patterns [14][35]. An awareness of these patterns allows designers and audiences to recognise them and avoid them if they choose to.

Dark patterns are design patterns that coerce people into situations that are culturally undignified or behaviours that are not beneficial for them, originally identified by Harry Brignull in 2013 [14]. Gray et al. (2018) define how interfaces today can nag people into performing an action, obstruct people from performing an action, disguise relevant information, privilege some actions over others and force particular actions [35]. Greenberg et al. (2014) go further to raise issues about privacy in physical public spaces, with reference to fictitious and real examples in film and advertising campaigns where individuals receive targeted advertising from surfaces that track them [36].
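To make the taxonomy concrete, the sketch below models the five strategies described by Gray et al. as data that a hypothetical interface audit could check against. The structure, names and audit function are illustrative assumptions of mine and are not drawn from the cited works.

    from enum import Enum
    from typing import Callable, Iterable, Optional

    class DarkPattern(Enum):
        # The five strategies described by Gray et al. (2018) [35].
        NAGGING = "repeatedly interrupts to push an action"
        OBSTRUCTION = "makes an intended action harder to perform"
        SNEAKING = "disguises or hides relevant information"
        INTERFACE_INTERFERENCE = "privileges some actions over others"
        FORCED_ACTION = "forces a particular action to proceed"

    def audit(events: Iterable[str],
              classify: Callable[[str], Optional[DarkPattern]]) -> list:
        # Flag interface events that a (hypothetical) classifier maps
        # to one of the strategies above.
        return [(event, classify(event)) for event in events
                if classify(event) is not None]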

Avoiding dark patterns is the foremost consideration of the Computational Costume designs presented through Design scenarios ahead. The work respects people's privacy and tactfully presents virtual wearables and objects to support people.

Speculative design

Today's augmented reality technology does not support the design of Computational Costume, which requires interactions to be grounded by people's virtual identity through whole-body interaction. For this reason I have engaged a speculative design process which divides Computational Costume into both the probable technology, in Ergonomics and technology review, and the speculative Design scenarios which it supports.

Speculative design serves as a broad framework for creating design ideas based on forecasts of the future to stimulate discussion. Anthony Dunne and Fiona Raby in their book Speculative Everything [23] discuss the role of speculative design proposals in opening new perspectives on the challenges we all face. They argue that such designs add onto the public's vision of reality, challenge it and provide alternatives to it [23, p.189].

Computational Costume should be accepted as a provocation for what could be achieved, rather than a prediction of what is likely to eventuate. Supporting virtual identity is tuned to replacing dependence on screen-based devices. Another design option could include diminishing reliance on technology in order to help people cultivate manual skills such as freehand drawing or navigation without digital location tracking. For Computational Costume, speculative design is an experiential vehicle for developing, reflecting upon and evaluating ideas without investing in technology development.

The complete Design setting for Computational Costume presented ahead is a speculative design. The design is intended to provide the groundwork for developing new digital media that is not dependent on screen-based devices. Design scenarios are explored and presented through a range of lo-fi physical media as shown in Computational Costume prototyping and presentation.

Design setting

As covered in the preceding Background, Computational Costume is a speculative design of a hyperreality which avoids dark patterns. This informs the speculative design setting which Computational Costume's Design scenarios ahead are built upon. The speculative design setting for Computational Costume consists of two areas:

The two areas of the speculative design setting inform one another. The augmented reality capabilities influence the selection of technology, and the forecasts of technology serve as a guide to inform possible designs through augmented reality. Without this combined investigation, it would not be possible to lend credence to the likelihood of the speculative design.

Capabilities

Computational Costume, as a hardware and software design, offers the ability to engage with virtual wearables and virtual objects that may also have physical qualities:

Together, virtual wearables and virtual objects can be regarded as an esemplastic hybrid of physical and virtual objects. They fit into the vision of a cohesive and transformative Hyperreality. I will collectively term hybrid physical and virtual objects esemplastic objects.

Esemplastic objects

The provenance of the term 'esemplastic' lies in Ehn and Linde's (2004) ATELIER design research project, discussed in previous sections. The authors seek the esemplastic unification of place through appropriation of space, configurability of artefacts and place-making games [25]. I use the term 'esemplastic objects' to refer to objects that combine the qualities of virtual and physical materials such that, with adequate technology, the materials are neither virtual projections nor physical objects. Rather, physical and virtual qualities come together into a consistent material. As an example, in a speculative augmented reality a person could engage with a portable physical object whose form transforms virtually.

Based on human experiences of physical and virtual objects today, I highlight where esemplastic objects in Computational Costume differ. The points below can be regarded as the constraints on Computational Costume's technical capabilities that serve as a springboard for possible designs:

The suggested capabilities of esemplastic objects listed above carry conceptual and technical drawbacks:

By factoring in the trade-offs between technology and concepts, technologies can be designed to suit a functional conceptual ideal. The proposed technology cannot be so impractical that it will not fit an appropriate form factor in the future. For this reason the Design scenarios presented ahead remain realistically achievable on both conceptual and technical fronts. Thus, ahead I explore considerations of both ergonomics and technology.

Ergonomics and technology review

I now review existing and proposed form factors for wearable technologies that would in a probable future support the capabilities of Computational Costume, as covered. The technologies seek to simulate in as much detail as possible:

From an ergonomic perspective, the wearable technology choices in Computational Costume should allow people to move as freely as possible. The technology should be as self-contained and modular as possible, so parts can work independently of any other system—or allow for some sensory channels only, covered briefly in Accessibility. Also, the technology should be easy to wear and easy to maintain. The range of technologies presented below has been curated according to these requirements for the proposed Hardware design.

Augmented reality see-through devices

There are three possible form factors for augmented reality vision through see-through devices:

These devices range in performance and wearability. They stand in opposition to augmented reality pass-through devices, which can encumber the wearer's movement. Pass-through devices capture vision and surface geometries and pass them to a display within virtual reality headsets. An example of this augmented reality can be found in the ZED Mini combined with the HTC Vive [93]. These devices are excluded from the hardware design detailed ahead because they offer greater performance at the cost of a comfortable form factor.

Smartglasses present the most attractive form factor and are light and self-contained. They fit augmented reality capabilities into a device which resembles regular glasses. The compact form factors come at the cost of performance. Smartglasses serve principally as assistive devices, by overlaying the wearer's field of view with graphics, allowing the wearer to send and receive messages, take phone calls, capture images and follow turn-by-turn navigation. Simple voice recognition and taps on the glasses allow input. Smartglasses are exemplified by devices such as Google Glass [34] by Google (2013), which began the trend; the trend survives today in devices such as the Vuzix Blade [100] by Vuzix Corporation (2018).

A future vision hints at how smartglasses may overtake today's mixed-reality headsets in both form factor and performance. The M3000 [99] is a speculative vision for smartglasses by Vuzix Corporation (2017) presenting a fully featured and compact monocular design. The device features an enhanced field of view with object and surface recognition for graphics that is comparable in performance to today's mixed-reality headsets.

Pendant-worn augmented reality presents an alternative to see-through devices that is capable of projecting tracked graphics onto surfaces. This technology does not require the wearer to have glasses or a headset on, allowing some extra freedom of movement and a complete field of view, a preferable option when headgear is neither practical nor comfortable for all-day use. However, the technology does come at the cost of privacy, because graphics are projected onto shared surfaces for anyone nearby to see.

Pendant-worn augmented reality, such as the Portable Lumipen, presents two issues: the wearer's body cannot be projected on, and holographic images are not possible. Projection on a wearer might be possible if the device were used as a kind of handheld torch to illuminate graphics on the body. Also, to achieve holograms, the wearer's point of view needs to be tracked.

Mixed-reality headsets provide augmented reality vision with hologram level graphics, surface and object tracking, sound and input through hand gestures and voice control. These headsets offer a greater level of visual fidelity than smartglasses and pendant-worn devices. While these headsets are heavier than smartglasses, comfort is increased by offsetting computing to a tethered standalone unit which can be worn. Mixed-reality headsets include the HoloLens 2 [74] headset by Microsoft (2019) and Magic Leap One [65] system by Magic Leap (2018). These devices only differ in form factor and how they take input from the wearer's hands:

Mixed-reality headsets can vary in comfort. Preference goes towards the HoloLens 2, which offers a greater level of physical freedom. The headset's functionality is self-contained and the form factor is adjustable so augmented reality vision can be easily stopped by lifting up the lenses.

Force feedback devices

Force feedback devices provide the artificial sensation of interacting with a physical 3D volume when interacting with a virtual volume. I identify two methods for supplying force feedback: wearable electrical muscle stimulation (EMS) electrode pads and wearable counterweighted electric motors.

EMS provides a range of possible force feedback options through multiple areas of the hands and arms.

The force feedback technology proposed by Lopes et al. (2018) is moderately comfortable to wear, requiring the application of patches and an EMS power unit. The EMS electrode pads are connected to a portable EMS unit which is connected to a computer and HoloLens mixed-reality headset carried in a bag. Over time it would be expected that this hardware will be reduced in size and weight. Also, the adhesive EMS electrode pads might one day be replaced with a specially made fitted garment embedded with electrodes to allow easy application and removal, and connection to an EMS unit and computer.

Counterweighted electric motors can provide force feedback through wearable hand controllers. These kinds of devices can be regarded as less invasive, although they are a larger and heavier option that is concentrated in one area. In contrast, the weight of EMS electrodes on the hands and arms along with supporting EMS equipment in a carrybag is lighter and better distributed.

A reduction in the size of counterweighted electric motor force feedback devices could make them a viable and easier to manage alternative to wearing a garment with EMS electrode pads. However, EMS electrode pads already provide a discreet way to supply force feedback.

Portable sound devices

To create authentic esemplastic objects that blend the physical and virtual, the augmented reality see-through devices and force feedback devices presented require auditory feedback and recording to complement the visual and haptic channels of perception. Portable sound devices found on the market today provide insight into how audio can be heard and recorded with attention to wearers' comfort.

Wireless earphones/microphones provide convenient and compact audio listening and recording which adapt to the wearer's actions.

For Computational Costume, an earphone-charging enclosure could be combined with a computing device.

Binaural hearing/recording provides a way for earphones to be permeable to ambient sounds in a surrounding physical environment for increased comfort.

Designers can imagine how the functionality of binaural earphones could be applied to wireless earphones such as AirPods for increased comfort. The ability of these wireless earphones to detect the wearer's voice could be used to mute ambient sound recordings while the wearer is speaking, to prevent them from hearing feedback of their own voice.
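A minimal sketch of this muting behaviour is given below. It assumes a crude energy-based stand-in for a real voice detector and frame-based ambient passthrough; neither reflects any actual earphone API.

    import numpy as np

    def is_wearer_speaking(frame: np.ndarray, threshold: float = 0.02) -> bool:
        # Crude energy-based voice-activity check on one audio frame,
        # standing in for a real wearer-voice detector.
        return float(np.sqrt(np.mean(frame ** 2))) > threshold

    def ambient_passthrough(frame: np.ndarray) -> np.ndarray:
        # Pass ambient sound through to the wearer, muted while they
        # speak so they do not hear feedback of their own voice.
        return np.zeros_like(frame) if is_wearer_speaking(frame) else frame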

Bone-conduction transducers provide a means to listen to audio without blocking the wearer's ear canal for comfort.

Merging augmented reality and audio reproduction could be convenient, but would limit the ability to use audio on its own through a standalone device. Some situations may warrant audio only, for accessibility reasons or for physical exercise where glasses could fall off. A wireless earphone option could be useful for wearers who need audio only.

Sensing, networking and computing devices

For Computational Costume, augmented reality, force feedback and portable sound devices need to be brought together by computing which allows: interoperability between hardware components; necessary environmental sensing to recognise and track surroundings, objects and other wearers; authentication of wearers for wearer privacy and integrity of wearer tracking; and networking between wearers. There are a range of existing and proposed devices which can help achieve this, including: smartwatches, earphones and wearable motion- and surface-capture devices.

Smartwatches combine a range of motion and biometric sensing, networking and computing in very compact wearable devices.

The form factor of the smartwatch could serve a similar purpose to the auxiliary computing of Magic Leap One's Lightpack, previously explored in augmented reality see-through devices. However, designers might choose to only keep biometric sensing on the wrist and use a smartwatch-like form factor as a clip-on, handheld or pendant-worn auxiliary computer.

Earphones have the potential to contain some of the computing, motion-sensing and biometric capabilities of smartwatches. Earphone head-motion capture and biometrics provide the possibility to offload sensing and ambient sound from augmented reality headsets (previously explored in Augmented reality see-through devices), improving those headsets' form factor.

Quotes extracted from the Apple Inc. patent Sports monitoring system for headphones, earbuds and/or headsets [80] by Prest et al. (2014).

one embodiment of the invention can, for example, include at least: receiving head motion data pertaining to a head motion of a user of the electronic device; determining whether the head motion data matches any of a plurality of predetermined head gestures; and identifying an action associated with the matching predetermined head gesture. [80, § 2 lines 21–27]

An entire head gesture language may be developed. [80, § 6 lines 38–39]

The monitoring system can also facilitate sensing of other user characteristics (e.g., biometric data) such as temperature, perspiration and heart rate. [80, § 3 lines 67–70]

In relation to Computational Costume, the earphones proposed by Prest et al. (2014) have the potential to act as an auxiliary motion and biometric sensing device, as well as a standalone device. The head tracking present in the earphones can replace or back up the head tracking available in augmented reality headsets. In addition, the earphones can collect biometrics and so replace a wrist-worn device, allowing the earphones to also sense whether they are being worn. If the system detects that only audio is being received by the wearer, it could allow visual and haptic information to be replaced with audio and an artificial agent to take commands from the wearer.
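Read as an algorithm, the patent's description amounts to matching recent head-motion data against a library of predetermined gestures and dispatching an associated action. The sketch below is my own minimal interpretation; the motion encoding, matching rule and example gesture are assumptions for illustration, not details of the patent.

    from typing import Callable, Dict, List, Optional

    # A gesture is assumed here to be a short sequence of coarse motion
    # symbols, e.g. pitch/yaw deltas quantised to 'up', 'down', 'left', 'right'.
    GestureLibrary = Dict[str, List[str]]

    def match_gesture(motion: List[str], library: GestureLibrary) -> Optional[str]:
        # Return the first predetermined gesture that the recent
        # head-motion sequence ends with, if any.
        for name, pattern in library.items():
            if motion[-len(pattern):] == pattern:
                return name
        return None

    def dispatch(motion: List[str], library: GestureLibrary,
                 actions: Dict[str, Callable[[], None]]) -> None:
        # Identify the action associated with the matching gesture and run it.
        name = match_gesture(motion, library)
        if name is not None:
            actions[name]()

    # Example: a 'nod' gesture answers a call.
    dispatch(["left", "down", "up"],
             {"nod": ["down", "up"]},
             {"nod": lambda: print("answer call")})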

Computational Costume requires reliable wearable motion and surface capture information to map all surfaces available to a wearer and to take hand gestures as input. Mixed-reality headsets already incorporate this functionality. However, compact standalone motion and surface capture devices today allow greater fidelity. These compact devices signal what might be possible in the future.

Designers could imagine how motion and surface capture could be supported through a device worn like a pendant, such as the Portable Lumipen, or clipped onto clothing, such as the Magic Leap One Lightpack, previously explored. Any small motion and surface capture device that must be on the head can remain in place, while a clip-on or pendant which can be stably affixed can house more advanced hardware.

Accessibility

The technology to support Computational Costume consists of various parts to transmit vision, audio and touch to wearers. As hinted in discussions of modularity thus far in sensing, networking and computing devices, the various parts can be arranged in a modular fashion to increase accessibility to the different sensory channels available. When one part such as vision, haptics, sound, or motion and surface capture is known to be unavailable, other parts can activate and translate in their place. As an example, in situations where a wearer is vision-impaired or an augmented reality headset has no charge left, the system can activate earphones to listen to other channels in audio, with commands taken through voice by an artificial agent or hand-gesture commands captured by a motion-capture device.
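The fallback behaviour described above can be sketched as a simple substitution rule over sensory channels. The module names, substitution order and availability checks below are hypothetical choices of mine, intended only to show the shape of the logic.

    from typing import Optional

    # Preferred substitutes when a channel's module is unavailable, e.g.
    # vision falls back to audio read out by an artificial agent.
    FALLBACKS = {
        "vision": ["audio"],
        "haptics": ["audio", "vision"],
        "audio": ["vision"],
    }

    def route_channel(channel: str, available: set) -> Optional[str]:
        # Use the channel itself if its module is available, otherwise the
        # first available substitute; None means nothing can stand in.
        if channel in available:
            return channel
        for substitute in FALLBACKS.get(channel, []):
            if substitute in available:
                return substitute
        return None

    # Example: the headset has no charge left but the earphones are worn.
    print(route_channel("vision", {"audio", "motion_capture"}))  # -> audio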

In Hardware design I discuss how the technologies presented here for Computational Costume might come together into a modular and accessible combination of devices.

Hardware design

The hardware design of Computational Costume builds upon the preceding Ergonomics and technology review. The hardware design proposed ahead supports the speculative capabilities of Computational Costume in accordance with wearers' comfort and accessibility to a range of sensory channels: vision, haptics, sound, and motion and surface capture. This imagining of a probable hardware design serves to lend credence to the viability of Computational Costume's conceptual design in practice, as rendered in Figure 4.2.

A rendition of Computational Costume [6] in practice with its hardware design highlighted in blue. Illustration by Janelle Barone, made in collaboration with the author.

Building directly upon the four classes of devices covered in Ergonomics and technology review, the Computational Costume hardware design comprises three modules which connect to a central unit:

Augmented reality module

The augmented reality module can be imagined as a thickly framed pair of glasses that can be separated in half at the bridge into two monocular frames. The module can be worn as binocular eyeglasses or as a monocular lens. The design features:

The design features the following hardware for:

Force feedback module

The force feedback module can be imagined as a long-sleeve thermal shirt paired with a pocket-sized electrical muscle stimulation (EMS) power pack. This shirt fits closely to the skin and is worn under clothing. The shirt is tethered to the removable EMS power pack. The design features:

The design features the following hardware for:

Sound module

The sound module can be imagined as consisting of two wireless earbud earphones. The earphones are accompanied by a small charging case. The design features:

The design features the following hardware for:

Central unit

The central unit can be imagined as consisting of two square puck-like units connected via a belt, which can be worn as a waist belt, pendant or sash. The unit facilitates front and rear motion and surface capture, identification of wearers to allow the projection of individuals' Computational Costumes, and auxiliary computing for the augmented reality module, force feedback module and sound module. The design features:

The design features the following hardware for:

Conclusion

By imagining the speculative hardware design of Computational Costume, designers can take a liberating, technology-agnostic approach to the whole design process. The hardware design covered serves as a probable guide for what software could achieve in the near future, based on hardware developments today and foreseeable advancements in wearable technologies for: augmented reality; force feedback; sound reproduction; and compact computing, sensing and networking. This informed design frees designers from the template of software interactions possible through today's technology. Also, speculative software designs that come out of this technology-agnostic approach can help set the benchmark for hardware development.

Design scenarios

Following on from the speculative design setting previously described, I present speculative design scenarios describing the engagement allowed by Computational Costume. The scenarios present the ways in which Computational Costume software could work for its wearers in imagined scenarios where people are not dependent on screen-based devices. Instead, wearers adopt whole-body virtual identities to ground practices through digital media.

Background

These speculative design scenarios exist to liberate designers from the constraints associated with today's wearable technology capabilities. The design scenarios are constructed and presented using lo-fi physical materials and processes, covered in detail in appendix Computational Costume prototyping and presentation, to liberate designers from the need to create or build their own technology. Together, these features enable a range of designers, whether they are more technically inclined or inclined towards artistic craft, to focus their design skills on freely developing the design of Computational Costume to address imagined scenarios. In addition, the scenarios created here are presented in a way that aims to be highly accessible to audiences, so they can experience and comment on the work.

Scenario-based design is a well-established design process in developing emerging technologies. Scenario-based design is a family of techniques in which the use of a future system is concretely described at an early point in the development process [86]. The envisaged scenarios assist the development of a functional system by allowing designers to examine and challenge the fit between their design ideas and prototypes, and applicable scenarios. The process is comparable to design probes [69] for proposing alternative design scenarios to stakeholders (e.g. users or designers) to challenge established perceptions that influence design outcomes and assess the viability of designs. Computational Costume presents alternative interaction possibilities that can be shown to audiences and designers through scenarios for further development into functional designs.

I present four different Computational Costume designs which offer a range of scenarios. The designs seek to advance the development of a hyperreality filled with esemplastic objects that blur physical and virtual materials together. This is achieved through items such as wearable whole-body virtual identities. Also, in the face of such powerful technology, I propose designs that avoid coercive dark patterns to support people's privacy while interacting.

To summarise the iterative exploration and development of Computational Costume:

It should be noted that the Computational Costume designs have been in part influenced by the continued refinement of the prototyping and presentation methods. Scenarios begin with designs that are sympathetic to a static presentation of mock virtual scenes. As ideas have grown, the final scenarios evolve to incorporate dynamic representations as film-making and video editing are adopted to communicate how mock esemplastic objects made from physical materials would act in actuality.

Cardboard poster and interface

To begin illustrating the potential of esemplastic objects in a speculative augmented reality, I created a mock virtual/physical (esemplastic) poster using cardboard, as shown in Figure 4.3, as well as an imagined wearable interface to facilitate the poster's creation, as shown in Figure 4.5. The cardboard poster itself was an encapsulation of the review and design concepts that inspired its creation—covered in Supporting practices with screen-based digital devices and Designing for a wider range of interactions beyond the screen.

Cardboard poster

The cardboard poster served a real purpose in a real scenario to present my research alongside an extended abstract at a conference research competition (see [70]). The design challenged the standard poster presentation scenario by presenting research content that inspired the viewer's imagination to see the poster as a direct product of the research. The poster allowed viewers to engage with it as if it were an esemplastic object in practice. The work allowed viewing and direct interaction through manipulable sections on the bottom area of the poster, in addition to inspiring viewers' imagination about how such an esemplastic sculpture would have been assembled by hand.

Cardboard poster made for the CHI 2017 (Conference on Human Factors in Computing Systems) Student Research Competition, Denver, Colorado, USA, May 2017. Photography by Jon McCormack.
Hand assembly of the cardboard poster. Images are author's own.

The cardboard poster presents a proof-of-concept for the direct hand creation and assembly of esemplastic objects, as shown in Figure 4.4. For the constructed poster (Figure 4.3), viewers were allowed to move pieces on the bottom section of the poster to explore the evolution of my early word processing application interface redesign, shown in Figure 3.1, from placement options on-screen to outside of the screen and on-body. However, in a full speculative augmented reality viewers might also be able to take copies of the whole poster, share them and even add their own touches. Through an imagined cardboard interface I indicate the kind of functionality that viewers would have access to in engaging with esemplastic objects like the cardboard poster.

Cardboard interface

To complement the proof-of-concept poster, I imagined and designed a wearable virtual interface mock-up, shown in Figure 4.5, that would provide the functionality required to work with esemplastic objects and to create the cardboard poster. The interface was inspired by my early word processing application interface redesign, shown in Figure 3.1, which I explored at the beginning of Designing for a wider range of interactions beyond the screen. The application interface redesign highlighted how menus could be arranged procedurally for different activities. A natural extension of this idea to whole-body interaction and augmented reality was to allow people to wear the interface options most useful to carry at all times.

Hand and forearm interface mock-up for crafting the cardboard poster. Photography by Jon McCormack.

The cardboard interface (Figure 4.5) consists of a menu accessible through bracelets and rings for the forearm and fingers. The bracelets closest to the body control the highest-level features: the main interaction modes, such as explore, sculpt and capture. The rings on the fingers control the lowest-level features of the selected mode; in the example shown, these are the colour, quality, thickness and type of line for drawing in a 'brush' mode. The principle of high-to-low level option selection acts as an extension of controlled human bodily movement, where the most focused and detailed physical movements occur at the physical extremities.
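This high-to-low hierarchy can be captured in a small data-model sketch. The mode and option names below come from the description above; the structure itself is a hypothetical illustration rather than an implemented system:

    from dataclasses import dataclass, field

    @dataclass
    class RingOption:
        """A low-level option worn on a finger ring (e.g. brush colour)."""
        name: str
        value: str

    @dataclass
    class BraceletMode:
        """A high-level interaction mode worn on a forearm bracelet."""
        name: str
        ring_options: list[RingOption] = field(default_factory=list)

    # Bracelets closest to the body carry the main modes; rings carry the
    # finest-grained options for the active mode ('brush' drawing, here).
    modes = [
        BraceletMode("explore"),
        BraceletMode("sculpt", ring_options=[
            RingOption("colour", "red"),
            RingOption("quality", "fine"),
            RingOption("thickness", "thin"),
            RingOption("line type", "solid"),
        ]),
        BraceletMode("capture"),
    ]

Selecting a bracelet mode would determine which ring options are active, mirroring how the most detailed movement is delegated to the extremities.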

Material choice

The austere material choice for the cardboard poster and interface aims to discreetly echo the concerns of the Material Turn in HCI by drawing designers' attention to the materiality of interactions, not the materials and status attached to particular media and devices. Cardboard presented as a mock virtual/physical esemplastic medium is unbiased, or amaterial if you will, because it is not digital and it is not serving the usual purpose of cardboard for packaging. This choice presents a shift from established digital media constraints to rationalised speculative interaction design concepts.

Cardboard is also accessible to work and engage with, opening up design and prototyping processes for digital media to a wider audience of both designers and participants. This is explored further in the prototyping discussion ahead in Computational Costume prototyping and presentation.

Findings

The cardboard poster and interface presented the application of imagined esemplastic objects in practice through an amaterial approach. The poster challenged viewers' conception of the traditional poster: a flat physical format intended for viewing only. The work invited viewers to imagine how an esemplastic object could add depth to the physical and virtual media of today.

To grow upon the strengths of the work, the most significant features of the cardboard poster and interface which needed to be elaborated upon through further design were: the functionality of esemplastic objects, and the amaterial imagination of esemplastic objects beyond a single hand-crafted artefact.

Computational Costume v0

Computational Costume v0 builds upon the functionality of esemplastic objects and the amaterial imagination of esemplastic objects explored in cardboard poster and interface. Extending the cardboard interface, which was wearable on the hands and forearms, Computational Costume v0, as shown in Figure 4.6, describes how the rest of the body and additional tools could be used. Computational Costume v0 presents how miniaturised versions of objects similar to the cardboard poster might be stored on costumes. In addition, costumes and objects can be accessed through two tools: a map tool to navigate beyond a wearer's field of view; and a timeline tool to navigate beyond the present time.

At a public fashion exhibition, Computational Costume v0 presented how wearable virtual identities and tools could intertwine with the activities of wearers. A mannequin donning multiple imagined costumes presented a range of scenarios in one place, as shown in Figure 4.6: one half of the costume served the wearer's work life and the other half their personal life. The work and personal costumes showed how a virtual costume could replace the tools used today for work and communication. Overall, the work attempted to illustrate as much of the concepts' potential as possible, in a way that let viewers project their own experiences by standing alongside the mannequin.

Computational Costume v0 mannequin front and back. Shown at No Vacancy Gallery QV in Melbourne, Victoria, Australia as part of the Melbourne FashionTech collective's showcase during White Night 17 February 2018. Photography by Jon McCormack.
Work costume

The work costume was a simple representation of a stop/go signal for a roadside construction worker, as shown in Figure 4.6. The half-costume was accompanied by a small virtual map to track the status of a colleague's signal and to communicate between one another to coordinate signals, as shown in Figure 4.7.

Map tool in Computational Costume v0. Photography by Jon McCormack.
Personal costume

The personal life costume presented a shirt with personal effects as storage and decoration, as shown in Figure 4.8, alongside pants with a more utilitarian purpose for activity tracking and marking out a health treatment plan. The objects placed on the costume had a symbolic connection to areas of the body. For instance, the health treatment plan was targeted to the leg, as were the 'swimming goals', as shown in Figure 4.9. A timeline tool to track changes on the costume over time and access them complemented the work, as shown in Figure 4.10.

Computational Costume v0 featuring personal effects. Photography by Jon McCormack.
Computational Costume v0 featuring health treatment plan and swimming goals. Photography by Jon McCormack.
Timeline tool in Computational Costume v0. Photography by Jon McCormack.
Findings

The overall assemblage of scenarios in Computational Costume v0 was didactic. The work depended on explanatory captions to describe the function of the costume. Members of the public took time to look at the work, but could not fully conceptualise the ideas until their questions about the work were answered. To correct this, future costumes would be presented in a more realistic manner, similar to the cardboard poster and interface.

Computational Costume v0 filled in gaps not covered by the cardboard poster and interface which preceded it. This involved: extending wearability from the hands and forearms to the whole body, with miniaturised objects stored on costumes; and introducing the map and timeline tools for reaching beyond a wearer's field of view and the present time.

Successive designs refined the above two areas, in addition to refinements to the presentation style to help audiences conceptualise the ideas presented without assistance.

Computational Costume v1

Computational Costume v1 was designed to incorporate relatable experiences through a day in the life of a wearer. It was presented as a three-minute on-stage performance for a science communication competition; a re-enactment is shown in Figure 4.11. In this performance, I acted as the wearer in three situations marked by the three different costumes shown in Figure 4.12.

Computational Costume v1 builds upon the costumes and supporting tools established in Computational Costume v0. The performance begins with a personal costume used on public transport on the way to work, followed by a worksite costume which activates at the wearer's workplace. The performance concludes with a medical emergency costume which activates an on-body health record for use in an emergency. These costumes echo some of the features and purposes of objects shown previously on the personal and work costumes from Computational Costume v0. However, the presentation in Computational Costume v1 goes into more depth, using what are known as endowed props [46] to imbue physical props with virtual functionality through acting, as I illustrate ahead.

Re-enactment video of the Computational Costume v1 performance [71]. Video is author's own.
Computational Costume v1 from left to right: personal costume, worksite costume and medical emergency costume. Photography by Jon McCormack.
Personal costume

The personal costume in Computational Costume v1 builds upon the purpose of esemplastic objects and the map tool presented in Computational Costume v0. The personal costume here is focused on presenting how the wearer might perform common activities such as reading and communicating through esemplastic objects and a map tool. Shown in Figure 4.13 is an object representing the train the wearer has boarded, with time and destination information which can be sent to another wearer through the map tool to communicate location and time of arrival. In the performance, the action is a seamless hand and arm movement between the wall of the imagined train and the map which contains links to other wearers.

An additional touch in the performance demonstrates how private reading could be made public to others, as shown in Figure 4.14. This action shows how an act that is traditionally public through physical media such as books and newspapers can, by choice and through an esemplastic object, become public once again through digital media.

A copy of a boarded train and map tool, used to communicate the wearer's location and estimated time of arrival in Computational Costume v1. Photography by Jon McCormack.
Some reading material made public in Computational Costume v1. Photography by Jon McCormack.
Worksite costume

Building on the work costume presented in Computational Costume v0 which showed how a costume would serve as a work tool, the worksite costume in Computational Costume v1 shows how a costume can be further intertwined with the practice of working. In the performance, the worksite costume activates at the wearer's workplace and indicates jobs which need to be done through a sign and directional arrow, as shown in Figure 4.15. The job can be affixed to the costume so it is visible to others.

The worksite costume demonstrates the utility of a virtual identity as a tool for an individual and a set group. Its features allow individuals or other wearers to set tasks as a direct extension of a context-aware uniform, providing a seamless fit between the activities at hand and the information needed to coordinate and manage those activities. Visual objects can be re-purposed to signal intent, visible to other workers both directly and through the map tool.

A worksite costume indicating a job for the wearer, which can be affixed to the costume in Computational Costume v1. Photography by Jon McCormack.
Medical emergency costume

The medical emergency costume builds upon a small part of the personal costume presented in Computational Costume v0 which reveals a medical record; see Figure 4.9. In the performance, the medical emergency costume is activated when the wearer is injured at the worksite. The costume shows how a medical record could work across a series of scenarios, as shown in Figure 4.16, from marking out the area of injury to providing a surface on which medical practitioners can access and leave critical information. The costume also allows the map tool to provide access to information on events that occurred before the emergency and to act as a direct line of communication between the injured wearer and their loved ones.

The medical emergency costume, along with the preceding worksite costume, provides another instance of contextually activated virtual identities. Again, the costume is a useful tool for allowing the coordination and management of information in a direct way. This begins with a clear alert to surrounding people that there is a problem at hand and that help has been called. There is also a meaningful connection between information and actions on the bodily surface: the wearer's state is presented on the body that is the source of the information, and a loved one can reach over a geographical distance to provide direct support as if they were physically nearby.

A medical emergency costume displaying areas of injury with indication of a drug administered (displayed as an 'F' for Fentanyl), heart biometrics, enclosed private records and support sent by loved ones via touch from the map tool in Computational Costume v1. Photography by Jon McCormack.
Findings

Computational Costume v1 builds substantially on Computational Costume v0 by providing greater clarity on the usefulness of contextually aware costumes and the ability to draw direct connections between surrounding objects and activities related to wearers.

Several aspects of the Computational Costume v1 solo presentation took away from the work's message. One of the competition judges mistook the work for part of the quantified self movement for lifelogging, which involves collecting a range of data from individuals and presenting it. This happened despite my showing the ways in which the work was advantageous for communication. I suspect the misinterpretation arose from presenting a range of scenarios on my own, where other people and the surrounding environment were left to the imagination. Such a misinterpretation was not unreasonable for first-time onlookers, who had only three minutes to listen and watch while imagining a range of surrounding environments and actors with no physical representation. Additionally, the novelty of removing quick-release clothing gained audible attention from the audience. The mock effect was distracting, whereas the real effect would involve virtual clothes that change instantly.

Based on the findings, future Computational Costume ideas needed to build on: giving physical representation to the surrounding environments, objects and other actors that were previously left to the imagination; and replacing distracting mock effects, such as quick-release clothing, with presentation techniques that can show the instant change virtual clothes would allow.

Computational Costume v2

Computational Costume v2 presented a deeper focus on the workings of the health record and the map tool explored through Computational Costume v0 and Computational Costume v1. The work combines the physical and performative displays previously explored in Computational Costume v0 and Computational Costume v1 through the use of film-making, while adding greater detail to objects and how they work in public versus private scenarios.

A video, shown in Figure 4.17, was produced to show how Computational Costume v2 worked, with clever editing of shots to make physical props appear as esemplastic objects. For example, objects could instantly appear and change, unlike in physical performances. The video demonstrates first-person perspectives of how a wearer can access information privately, as an individual or group, juxtaposed with third-person perspectives of how information is visible publicly to an observer. It also makes clear how interactions with other people and the surrounding environment are possible. Ahead, I present the key scenes through a storyboard with descriptions. Where possible, the video was presented alongside an exhibition of physical props, as shown in Figure 4.18 and Figure 4.19, allowing viewers to examine finer details they may have missed while watching.

Computational Costume v2 video [72]. Video is author's own.
The Computational Costume v2 video, costumes and props on display at Monash University's SensiLab The Looking Glass window display, Caulfield, Victoria, Australia from 1 July 2018 until 26 November 2018. Image is author's own.
The Computational Costume v2 costumes and props, with video, on display at the Design Translations exhibition by Health Collab, MADA Gallery, Caulfield, Victoria, Australia, 3–6 December 2018. Image is author's own.
Storyboard

In Figure 4.20, the Computational Costume v2 design begins with a costume token that allows the management of a costume and access to hard-to-reach areas: here, by placing a mark representing pain over the spine.

Applying a mark onto the back using an object for marking and costume token at 00:40–00:42 in the Computational Costume v2 video [72]. Images are author's own.

Sharing a token (Figure 4.21) or any object (Figure 4.28) grants information access to others. In Figure 4.21 a patient hands a health professional a token that represents back pain and allows the private viewing and exchange of objects on the patient's medical record.

The costume token allows access to a costume. In this case a health professional can see a medical record costume at 00:46–00:52 in the Computational Costume v2 video [72]. Images are author's own.

In Figure 4.22, a health professional can demonstrate and apply a ready-made object to explain a medical condition and provide instructions the wearer can follow at a later time. These objects are added to the record and become available to both parties for future reference.

A health professional applies a ready-made diagram from a wall and specialised treatment plan to the patient's costume for reference at 00:52–01:19 in the Computational Costume v2 video [72]. Images are author's own.

The exchange of information in Figure 4.22 becomes part of a family of medical records on the patient's overall medical record costume as shown in Figure 4.23. The overall medical record is a chronology of silhouettes representing various milestones such as vaccination and medical conditions which collate medical imagery and prescriptions.

A medical record costume hosting a lifetime of records as chronologically ordered silhouettes at 01:32–01:40 in the Computational Costume v2 video [72]. Images are author's own.

In Figure 4.24 the relevant parts of the medical record costume can act as an emergency-activated call-to-action, echoing the medical emergency costume in Computational Costume v1.

A medical record appears automatically on a wearer in an emergency situation as a call-to-action for bystanders at 01:53–01:56 in the Computational Costume v2 video [72]. Images are author's own.

A map tool, building on the design of map tools featured in Computational Costume v0 and Computational Costume v1, allows access to other wearers and objects outside an individual's field of view. The tool is explored in greater detail, as shown in Figure 4.25, Figure 4.26 and Figure 4.27. The map tool allows milestones marked on a medical record to be associated with locations and other objects; in Figure 4.25, a birth record is connected to birth parents and a birthplace from the past. In this instance, the map tool combines the functionality of the timeline tool explored in Computational Costume v0; see Figure 4.10.

The map tool allowing access to a birth record's information on birth parents and birthplace at 02:04–02:06 in the Computational Costume v2 video [72]. Images are author's own.

In Figure 4.26 the map tool acts as a communication tool and navigational aid, allowing communication between wearers and navigation to another wearer's location by following an environmental marking.

The map tool facilitating communication between wearers and acting as a navigational aid at 02:16–02:23 in the Computational Costume v2 video [72]. Images are author's own.

In Figure 4.27 the map tool also allows access to other environments, useful for object retrieval when something has been forgotten at another place.

The map tool allowing access to a remote location for object retrieval at 02:29 in the Computational Costume v2 video [72]. Image is author's own.

The Computational Costume v2 video concludes with a private gathering. Access is granted to an interested passerby through sharing an object visible to the group, as shown in Figure 4.28.

The costume and shared objects as tools to manage privacy at 02:40–03:05 in the Computational Costume v2 video [72]. Images are author's own.
Findings

Computational Costume v2 presents the most fully formed design concepts and presentation style for Computational Costume. It provides a compelling vision to audiences by using film-making to combine the strengths of the physical props and performance explored in previous designs. The video shows how esemplastic objects work as part of an imagined ecosystem that facilitates personal and group activities for health, work, play and emergencies, as well as the use of environmental surfaces for esemplastic objects.

Audiences have drawn the clearest understanding of Computational Costume from the video-only and exhibition-with-video formats adopted in Computational Costume v2. Costumes and props used in the filming accompany presentations of the video, as shown in Figure 4.18, giving audiences a chance to see design details they may have missed in the video. The video embellishes the costumes and props with meaning drawn from its narrative, while the physical props offer an immediacy that pausing video frames to appreciate physical details cannot.

The activities with esemplastic objects and between people in Computational Costume v2 highlight several generalisable interactions which designers can apply to new scenarios: sharing tokens and objects to grant others access to private information; applying marks and ready-made objects to the body and surrounding surfaces; contextually activating costumes as calls-to-action; and using the map tool to communicate, navigate and reach other locations and times.

Conclusion

Computational Costume presents a speculative design setting and design scenarios which enable designers to create, reflect upon, present and evaluate imagined speculative designs for whole-body interaction through augmented reality. The combination of the imagined setting and scenarios allows designers to work outside of the constraints imposed by today's technology.

Through the speculative designs presented, designers can address how people's dependence on screen-based devices can be replaced—which is not possible when designing within the constraints imposed by today's technology. The speculative design process adopted for Computational Costume allows designers to work with imagined capabilities that advance what is possible physically and virtually through digital media today. This process has been paired with an ergonomics and technology review that has informed a probable hardware design to support the abilities of Computational Costume. The designs have a chance to influence the trajectory of developments for the future.

The development of speculative designs through design scenarios has ranged from partial wearables and esemplastic objects to the design of a whole ecosystem of interactions through esemplastic objects and costumes. The cardboard poster and interface shows how people might directly make and interact with esemplastic objects. Computational Costume v0 and Computational Costume v1 illustrate the use of contextually activated costumes and tools for a variety of scenarios. These works culminate in Computational Costume v2, which illustrates Computational Costume in action, with more detailed objects and tools, and interaction between wearers. The development of designs has provided a list of generalisable interactions, as detailed in Findings, which can be expanded upon and applied in new design scenarios.

The presentation of imagined design scenarios through a variety of formats has shown how physical wearables and objects are best presented through a combination of physical performance, exhibition and film-making. Lo-fi physical material props can be presented through video to illustrate how speculative esemplastic objects would work as part of an imagined ecosystem with many people. Speculative designs presented through this medium enable designers to reflect on their own work, as well as giving unfamiliar audiences the ability to engage with the imagined ideas. This is all possible without investing in the creation of new technology hardware or software. In Computational Costume prototyping and presentation I provide a review of physical materials and presentation techniques used which covers how to best support the imagination of Computational Costume designs.

Conclusion

My research has explored how a wide range of designers might bridge the divide that persists between people's virtual and physical practices through the design of digital media: from the design of the mainstream screen-based devices people use today, to tangible computing, which seeks to use a combination of physical and virtual materials, to whole-body interaction supported by augmented reality. My body of work has questioned the design of digital media and suggested a way forward for designers based on speculative prototyping and presentation through: the Memory Menu, a use-wear overlay for screen-based interfaces; crossdisciplinary interviews; a design review of ubiquitous and tangible computing; and Computational Costume, with its lo-fi prototyping and presentation methods. Each is detailed in the sections ahead.

Contribution

In its entirety my research reveals a conceptual rationale for developing speculative virtual wearables and objects that ground interactions with digital media through the physical world. This has been the impetus for the design of Computational Costume: a speculative design setting and scenarios based on imagined probable technologies centred around augmented reality. Computational Costume presents the use of esemplastic objects and wearables that combine both physical and virtual qualities through a combination of augmented reality, force feedback and use of the physical world. Designs for this imagined digital media have been developed and refined through the presentation of lo-fi physical materials, exhibition and film-making, enabling designers and audiences alike to be liberated from the constraints of today's technology. Digital media designers across the spectrum from visual and interaction design to HCI research, as well as textile design, can engage simple materials to present and evaluate speculative design ideas for new digital media. In addition, audiences can engage with new digital media designs in an inviting format.

Detailed ahead is the variety of investigative approaches I have applied to bridge the divide between physical and virtual practices through the design of digital media. Each set of results, obtained through an iterative design process and reflective analysis, motivated subsequent investigations. Each investigation has addressed the main problem of bridging the physical and virtual, beginning with a novel design intervention for common screen-based devices and ending with a progressive call to action for designers to take forward.

The Memory Menu

I commenced with Simplifying physical and virtual practices on-screen. I sought to address demassification, or how shared social properties are lost as physical artefacts become digital. These shared social properties, known as latent border resources, can be applied by designers to simplify people's activities. I explored how additional latent border resources might be applied on-screen by supporting spatial memory through use-wear. This approach was developed and studied through the design of a menu overlay which I called the Memory Menu. The work was developed in response to on-screen interaction design approaches encountered in real-world practice where interfaces could benefit from a standardised and easy-to-apply design intervention.

The Memory Menu study did not find a significant improvement in usability. This motivated looking more broadly at the ways people are supported through a range of media.

Interviews

In Reviewing physical and virtual practice support across media I sought the advice of researchers and practitioners with backgrounds in modern and traditional art, design and communication practices, on and off digital media. This was done to broadly address how people are supported through a range of media. Through interviews I explored what these researchers and practitioners did in their design practices to support people's activities, as well as asking for their thoughts on the Memory Menu.

The interviews revealed a series of consistent approaches and useful ways to extend the Memory Menu design. The interviewees, through their individual considerations, collectively suggested accommodating people's activities by addressing their vast capabilities and environments. Reflecting on this result led me to the application of latent border resources beyond screen-based devices.

Design review

In Reviewing new physical and virtual practices I explored how people's activities through digital media might be supported by designing for a wider range of interactions beyond the screen. I looked towards outcomes of the Material Turn in HCI and found concerns surrounding the materiality of interactions—a design consideration for what digital media allows people to do, rather than privileging any kind of device or material. This provided a basis for reviewing the design of ubiquitous and tangible computing, which proposes how digital media might be better enmeshed in people's activities.

The ubiquitous and tangible computing design review identified ways to support people's engagement through a greater range of senses and their surrounding environment. However, this new computing remains dependent on screen-based devices, which re-emerge in the form of mainstream IoT devices. The specific area of whole-body interaction provides a way forward by demonstrating how the body and surrounding environment can ground digital media interactions through virtual identity.

Computational Costume

In Creating new physical and virtual practices I conceived a speculative design of imagined virtual wearables and objects supported by probable technologies, which I called the Computational Costume. The Computational Costume consists of imagined probable technologies and design scenarios. The concepts revealed through the work explore how people's need for screen-based devices could be replaced. The designs show how wearers can: create, carry and share esemplastic objects; activate contextual costumes for personal, work, health and emergency activities; reach beyond their field of view and the present time through map and timeline tools; and manage what is public and private while interacting with others.

Computational Costume prototyping and presentation explores the application of accessible lo-fi physical materials and processes. This allows designers to prototype and present imagined interactions with digital media through film-making and exhibition. My method enables designers to conceptualise and present the future of digital media without engaging advanced technical skills or working within the constraints of today's technology. In addition, audiences can experience how new digital media might feel without adopting unfamiliar new technology.

Future work

My research has shown how to move discussions of supporting people's activities through digital media away from a basis in today's technology. Computational Costume illustrates that it is possible to contribute to the design of speculative technologies with the clever use of lo-fi physical materials for prototyping and presentation through exhibition and film-making. Designers can make a greater impact by conceiving desirable conceptual ideas as probable targets, rather than conforming to the limitations of today's technology.

With my research, designers can take forward the design of Computational Costume to encompass more practices for people. Also, designers can reimagine the use and design of physical devices—as physical practices with devices are absorbed into new virtual practices in Computational Costume.

Computational Costume can be used beyond presenting information on the body. Questions remain around how people might use Computational Costume to express themselves and engage one another through play. Humans already do this through what they wear across a range of events, from parties to theatre, yet this depends on access to physical clothes to change appearance. There are ways to overcome these constraints of physical clothing. For example, a person's Computational Costume could change virtually in response to what they are doing or where they are, while they wear a comfortable physical outfit concealed by the costume. Computational Costume could visually accentuate movement when exercising or performing, or gradually unfurl esemplastic objects when meeting someone for the first time.

Esemplastic objects and wearables have the capacity to replace the function of purely physical objects and wearables such as fashion and digital devices. Physical objects and wearables that are not completely consumed by new virtual practices would transform into more essential and streamlined forms. For instance, physical fashion might become more pared back and focus on utility such as warmth and cooling if aesthetics can be applied esemplastically. A similar situation applies to digital devices. For instance, small digital cameras could be absorbed by camera-enabled wearable augmented reality glasses which can also track hand gestures to frame photographs. This could extend to the physical design of advanced digital cameras with heavy optics. Features such as the screen, grip, button, dial and viewfinder could be streamlined into a cylindrical hardware design with augmented reality controls and framing, alongside direct virtual access to the photos taken.

Concluding remarks

Much of my research has been a transitional exploration. It began with a desire to learn about HCI problems and evaluation processes, but soon evolved into a survey of, and design concepts for, how interactions with digital media could be better intertwined with the surrounding physical environment.

With my research I hope to enable a wide range of designers to engage in the creation of future digital media by taking a device-agnostic approach. The lo-fi prototyping and presentation approaches presented open up participation in shaping how people will interact through future wearable technology, without designers needing to be hardware or software engineers or to design within today's technological constraints. Furthermore, the work allows audiences to engage in a manner that is neither technically demanding nor indicative of a dystopian future. Digital media designers and audiences are encouraged to work together to shape a desirable future for digital media.

My work opens up the possibility of imagining a variety of new design scenarios based on current and foreseeable design problems, to create and present new speculative virtual wearables and objects. The area is ripe with possibilities once the development of technology is liberated from today's image of it. It is time to use the design process to release promising speculative design ideas and shape emerging technology into the desirable.

I urge digital media designers to take the work forward and challenge what people are able to do with digital media today and into the future.

Reference list

  1. Alexander, J., Cockburn, A., Fitchett, S., Gutwin, C., & Greenberg, S. (2009). Revisiting read wear: analysis, design, and evaluation of a footprints scrollbar. CHI '09 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1665–1674. https://doi.org/10.1145/1518701.1518957
  2. Amann, M., & Minge, O. (2012). Biodegradability of poly(vinyl acetate) and related polymers. In B. Rieger, A. Künkel, G. W. Coates, R. Reichardt, E. Dinjus, & T. A. Zevaco (Eds.), Synthetic Biodegradable Polymers (pp.137–172). Berlin, Heidelberg: Springer. https://doi.org/10.1007/12_2011_153
  3. Apple Inc. (2018, March 19). AirPods – Technical specifications. Retrieved 21 January 2019 from https://support.apple.com/kb/SP750
  4. Apple Inc. (2018, September 20). Apple Watch Series 4 – Technical Specifications. Retrieved 21 January 2019 from https://support.apple.com/kb/SP778
  5. Barone, J., & Mazza, D. (2019). Computational Costume rendition. https://doi.org/10.26180/5d504d8a489ae
  6. Baudisch, P., Tan, D., Collomb, M., Robbins, D., Hinckley, K., Agrawala, M., et al. (2006). Phosphor: explaining transitions in the user interface using afterglow effects. UIST '06 Proceedings of the 19th annual ACM Symposium on User Interface Software and Technology, 169–178. https://doi.org/10.1145/1166253.1166280
  7. Baudrillard, J. (1981). Simulacra and Simulation. (S. Glaser, Trans.). University of Michigan Press.
  8. Bell, G., & Dourish, P. (2007). Yesterday's tomorrows: notes on ubiquitous computing's dominant vision. Personal and Ubiquitous Computing, 11(2), 133–143. https://doi.org/10.1007/s00779-006-0071-x
  9. Bezerianos, A., Dragicevic, P., & Balakrishnan, R. (2006). Mnemonic rendering: an image-based approach for exposing hidden changes in dynamic displays. UIST '06 Proceedings of the 19th annual ACM Symposium on User Interface Software and Technology, 159–168. https://doi.org/10.1145/1166253.1166279
  10. Bolter, J. D., & Grusin, R. (1998). Remediation. MIT Press.
  11. Bonanni, L. (2006). Living with hyper-reality. In Ambient Intelligence in Everyday Life (Vol. 3864, pp.130–141). Berlin, Heidelberg: Springer. https://doi.org/10.1007/11825890_6
  12. Bower, G. H. (1970). Analysis of a mnemonic device: modern psychology uncovers the powerful components of an ancient system for improving memory. American Scientist, 58(5), 496–510. Retrieved from https://www.jstor.org/stable/27829239
  13. Brignull, H. (2013, August 29). Dark patterns: inside the interfaces designed to trick you. Retrieved 7 January 2019 from https://www.theverge.com/2013/8/29/4640308/dark-patterns-inside-the-interfaces-designed-to-trick-you
  14. Brown, J. S., & Duguid, P. (1994). Borderline issues: social and material aspects of design. Human–Computer Interaction, 9(1), 3–36. https://doi.org/10.1207/s15327051hci0901_2
  15. Browne, D., Totterdell, P., & Norman, M. (Eds.). (1990). Adaptive User Interfaces. Academic Press. Retrieved from https://www.sciencedirect.com/book/9780121377557/adaptive-user-interfaces
  16. Cockburn, A., Kristensson, P. O., Alexander, J., & Zhai, S. (2007). Hard lessons: effort-inducing interfaces benefit spatial learning. CHI '07 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1571–1580. https://doi.org/10.1145/1240624.1240863
  17. Cooper, A., Reimann, R., Cronin, D., & Noessel, C. (2014). About Face: The Essentials of Interaction Design. Wiley.
  18. Corbin, J., & Strauss, A. (2008). Strategies for qualitative data analysis. In Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (3rd ed., pp.65–86). SAGE. https://doi.org/10.4135/9781452230153
  19. Dourish, P. (2001). Where the Action is. MIT Press.
  20. Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. Cambridge, Massachusetts: MIT Press.
  21. Ehn, P., & Kyng, M. (1991). Cardboard computers: mocking-it-up or hands-on the future. In J. Greenbaum & M. Kyng (Eds.), Design at Work (pp.169–196). Hillsdale, NJ, USA: L. Erlbaum.
  22. Ehn, P., & Linde, P. (2004). Embodied interaction: designing beyond the physical-digital divide. Proceedings of Futureground: Design Research Society International Conference 2004. Retrieved from https://www.researchgate.net/publication/237327428_Embodied_Interaction_-_Designing_Beyond_the_Physical-Digital_Divide
  23. England, D. (Ed.). (2011). Whole Body Interaction. London: Springer. https://doi.org/10.1007/978-0-85729-433-3
  24. exiii Inc. (2018, October 2). exiii releases EXOS Wrist DK2. Retrieved 16 January 2019 from https://exiii.jp/2018/10/02/exos_wrist_dk2_en/
  25. Findlater, L., Moffatt, K., McGrenere, J., & Dawson, J. (2009). Ephemeral adaptation: the use of gradual onset to improve menu selection performance. CHI '09 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1655–1664. https://doi.org/10.1145/1518701.1518956
  26. Follmer, S., Leithinger, D., Olwal, A., Hogge, A., & Ishii, H. (2013). inFORM: dynamic physical affordances and constraints through shape and object actuation. UIST '13 Proceedings of the 26th annual ACM Symposium on User Interface Software and Technology, 417–426. https://doi.org/10.1145/2501988.2502032
  27. Genç, C., Buruk, O. T., Yılmaz, S. I., Can, K., & Özcan, O. (2018). Exploring computational materials for fashion: recommendations for designing fashionable wearables. International Journal of Design, 12(3), 1–19. Retrieved from http://www.ijdesign.org/index.php/IJDesign/article/view/2831/826
  28. Giaccardi, E., & Karana, E. (2015). Foundations of materials experience. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2447–2456. https://doi.org/10.1145/2702123.2702337
  29. Google. (2013, February). Tech specs – Google Glass. Retrieved 21 January 2019 from https://support.google.com/glass/answer/3064128
  30. Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3173574.3174108
  31. Greenberg, S., Boring, S., Vermeulen, J., & Dostal, J. (2014). Dark patterns in proxemic interactions: a critical perspective. DIS '14 Proceedings of the 2014 Conference on Designing Interactive Systems, 523–532. https://doi.org/10.1145/2598510.2598541
  32. Gustafson, S. (2013, November 25). Imaginary Interfaces. Retrieved from https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/deliver/index/docId/6660/file/gustafson_diss.pdf
  33. Gutwin, C., & Cockburn, A. (2006). Improving list revisitation with ListMaps. AVI '06 Proceedings of the Working Conference on Advanced Visual Interfaces, 396–403. https://doi.org/10.1145/1133265.1133347
  34. Haroz, S., Kosara, R., & Franconeri, S. (2015). ISOTYPE visualization – working memory, performance, and engagement with pictographs. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1191–1200. https://doi.org/10.1145/2702123.2702275
  35. Hill, W. C., Hollan, J. D., Wroblewski, D., & McCandless, T. (1992). Edit wear and read wear. CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3–9. https://doi.org/10.1145/142750.142751
  36. Howard, S., Carroll, J., Murphy, J., & Peck, J. (2002). Using "endowed props" in scenario-based design. NordiCHI '02 Proceedings of the Second Nordic Conference on Human–Computer Interaction, 1–9. https://doi.org/10.1145/572020.572022
  37. Hurst, A., Mankoff, J., Dey, A. K., & Hudson, S. E. (2007). Dirty desktops: using a patina of magnetic mouse dust to make common interactor targets easier to select. UIST '07 Proceedings of the 20th annual ACM Symposium on User Interface Software and Technology, 183–186. https://doi.org/10.1145/1294211.1294242
  38. Krueger, M. W. (1991). Artificial Reality II. Addison-Wesley.
  39. Krueger, M. W., Gionfriddo, T., & Hinrichsen, K. (1985). VIDEOPLACE—an artificial reality. CHI '85 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 35–40. https://doi.org/10.1145/317456.317463
  40. Kutz, N. (2018). Modular placement and prototyping. Retrieved 8 April 2019 from https://www.kobakant.at/DIY/?p=7197
  41. Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
  42. Leap Motion, Inc. (2013). Leap Motion Controller. Retrieved from https://www.leapmotion.com/technology/
  43. Leap Motion, Inc. (2018, April 9). Unveiling Project North Star. Retrieved 22 March 2019 from http://blog.leapmotion.com/northstar/
  44. Leap Motion, Inc. (2018, June 6). Leap Motion Project North Star – Mechanical Guide. Retrieved 21 January 2019 from https://leapmotion.github.io/ProjectNorthStar/mechanical.html
  45. Magic Leap. (2018). Magic Leap One: Creator Edition. Retrieved 16 January 2019 from https://www.magicleap.com/magic-leap-one
  46. Matejka, J., Grossman, T., & Fitzmaurice, G. (2013). Patina: dynamic heatmaps for visualizing application usage. CHI '13 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3227–3236. https://doi.org/10.1145/2470654.2466442
  47. Matsuda, K., & Mill, A. (2018). Mirrorworlds. Retrieved 19 December 2018 from http://blog.leapmotion.com/mirrorworlds/
  48. Mattelmäki, T. (2006). Design Probes. University of Art and Design Helsinki. Retrieved from http://urn.fi/URN:ISBN:951-558-212-1
  49. Mazza, D. (2017). Reducing cognitive load and supporting memory in visual design for HCI. The 2017 CHI Conference Extended Abstracts, 142–147. https://doi.org/10.1145/3027063.3048430
  50. Mazé, R., & Redström, J. (2005). Form and the computational object. Digital Creativity, 16(1), 7–18. https://doi.org/10.1080/14626260500147736
  51. Microsoft. (2019). Microsoft HoloLens. Retrieved 1 April 2019 from https://www.microsoft.com/en-us/hololens
  52. Mullet, K., & Sano, D. (1995). Designing Visual Interfaces. Prentice-Hall.
  53. Perrault, S. T., Lecolinet, E., Bourse, Y. P., Zhao, S., & Guiard, Y. (2015). Physical loci: leveraging spatial, object and semantic memory for command selection. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 299–308. https://doi.org/10.1145/2702123.2702126
  54. Prest, C., & Hoellwarth, Q. C. (2014, February 18). Sports monitoring system for headphones, earbuds and/or headsets. (Apple Inc., Ed.). Retrieved from https://patents.google.com/patent/US8655004B2/en
  55. Robertson, G., Czerwinski, M., Larson, K., Robbins, D. C., Thiel, D., & van Dantzich, M. (1998). Data mountain: using spatial memory for document management. UIST '98 Proceedings of the 11th annual ACM Symposium on User Interface Software and Technology, 153–162. https://doi.org/10.1145/288392.288596
  56. Robles, E., & Wiberg, M. (2010). Texturing the “material turn” in interaction design. TEI '10 Proceedings of the 4th International Conference on Tangible, Embedded, and Embodied Interaction, 137–144. https://doi.org/10.1145/1709886.1709911
  57. Roland Corporation. (2010). Roland CS-10EM Binaural Microphones/Earphones. Retrieved from https://www.roland.com/us/products/cs-10em/
  58. Rose, D. (2014). Enchanted Objects: Innovation, design, and the future of technology. Simon and Schuster.
  59. Rosson, M. B., & Carroll, J. M. (2012). Scenario-based design. In J. Jacko (Ed.), Human–Computer Interaction Handbook (3rd ed.). CRC Press. https://doi.org/10.1201/b11963
  60. rwinj, Zeller, M., & Bray, B. (2018, March 21). Gestures – Mixed Reality. Retrieved 18 January 2019 from https://docs.microsoft.com/en-us/windows/mixed-reality/gestures
  61. Scarr, J., Cockburn, A., & Gutwin, C. (2013). Supporting and exploiting spatial memory in user interfaces. Foundations and Trends in Human–Computer Interaction, 6(1), 1–84. https://doi.org/10.1561/1100000046
  62. Scarr, J., Cockburn, A., Gutwin, C., & Bunt, A. (2012). Improving command selection with CommandMaps. CHI '12 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 257–266. https://doi.org/10.1145/2207676.2207713
  63. Skopik, A., & Gutwin, C. (2005). Improving revisitation in fisheye views with visit wear. CHI '05 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 771–780. https://doi.org/10.1145/1054972.1055079
  64. Tsandilas, T., & schraefel, M. C. (2007). Bubbling menus: a selective mechanism for accessing hierarchical drop-down menus. CHI '07 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1195–1204. https://doi.org/10.1145/1240624.1240806
  65. Underkoffler, J., & Ishii, H. (1998). Illuminating light: an optical design tool with a luminous-tangible interface. CHI '98 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 542–549. https://doi.org/10.1145/274644.274717
  66. Vuzix Corporation. (2018). Vuzix Blade. Retrieved from https://www.vuzix.com/products/blade-smart-glasses
  67. Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94–104.
  68. Wiberg, M. (2016). Interaction, new materials & computing – Beyond the disappearing computer, towards material interactions. Materials and Design, 90, 1200–1206. https://doi.org/10.1016/j.matdes.2015.05.032
  69. Willett, W., Heer, J., & Agrawala, M. (2007). Scented widgets: improving navigation cues with embedded visualizations. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1129–1136. https://doi.org/10.1109/TVCG.2007.70589

Memory Menu

Memory Menu motivation

I designed the Memory Menu to evaluate a subtle use-wear effect, as defined in Supporting on-screen spatial memory through use-wear. The effect is intended to add a valuable latent border resource in response to demassification, as covered in the Background. The study is motivated by a need to solve real-world interface design problems with minimal impact on the design process, as uncovered in On-screen interaction design approaches. In addition, I address a lack of conclusive evidence to support a subtle use-wear effect, as explored in Supporting on-screen spatial memory through use-wear.

Memory Menu hypotheses

The Memory Menu was designed to evaluate two hypotheses: H1, item selection time will be quicker for a menu with a use-wear effect applied; and H2, memory of items selected will be richer for the use-wear menu.

The null hypothesis was H0: there is no benefit to highlighting the usage of menu items through a use-wear effect.

Memory Menu design

The Memory Menu was designed to be a short 10–15 minute study which could be completed by participants from the comfort of their own computer. Data would be collected by the web application and stored in a secure database at the conclusion of the study.

Ethics and recruitment

Ethics approval from the Monash University Human Research Ethics Committee (MUHREC) was granted for a 100-participant online study. The majority of the participants found a hyperlink to the study through an online advertisement on the Monash University newsletter Monash Memo. Other participants found the link via social media online and word-of-mouth.

Menu design

To evaluate the hypotheses, the Memory Menu was divided into three parts: first, a short training round menu for seven turns with the use-wear effect applied, shown in Figure A.1; second, a menu with the use-wear effect or no effect applied, picked at random, shown in Figure A.2; and third, a menu with the opposite effect applied. Participants were prompted 30 times for each menu to pick an item, confirm they understood the prompt and make their selection. Errors made throughout the test were clearly highlighted, as shown in Figure A.3. This was to ensure that the highlighting effect was not compromised and to encourage accurate selections.

The first turn in the practice round of the Memory Menu. Images are author's own.
A use-wear Memory Menu after several turns (above) and a baseline menu (below). Images are author's own.
A selection error is highlighted in Memory Menu, for picking 'Sunflower' instead of the required selection, 'Banana'. Image is author's own.

Each of the menus presented consisted of pictographs and words from six distinct and contrasting categories: food, objects, emotions, animals, transport and plants. The contrasting categories existed so a selection bias could be applied, simulating real situations where a person has a particular interest; without a bias, selections would be distributed uniformly. Items from each category had a probability of being selected 2%, 3%, 10%, 15%, 30% and 40% of the time, respectively. The bias and the layout of items were applied randomly. Additionally, menus were completely refreshed when switching from the first menu to the second: pictographs were switched to words and vice versa, and the selection bias and layout were randomly selected again.
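The category bias and its random assignment can be illustrated with a short sketch. The category names and probabilities are those stated above; the function names and data layout are illustrative only:

    import random

    CATEGORIES = ["food", "objects", "emotions", "animals", "transport", "plants"]
    BIAS = [0.02, 0.03, 0.10, 0.15, 0.30, 0.40]  # probabilities stated above

    def assign_bias(rng: random.Random) -> dict:
        """Pair the fixed bias values with categories at random."""
        weights = BIAS[:]
        rng.shuffle(weights)
        return dict(zip(CATEGORIES, weights))

    def prompt_categories(weights: dict, turns: int, rng: random.Random) -> list:
        """Draw the category of the prompted item for each turn of a menu."""
        names, w = zip(*weights.items())
        return rng.choices(names, weights=w, k=turns)

    rng = random.Random(42)
    weights = assign_bias(rng)
    prompts = prompt_categories(weights, turns=30, rng=rng)  # 30 turns per menu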

Study procedure

Participants would complete a practice round for 7 turns and then move on to the real menus for 30 turns each, one with the use-wear effect applied and the other without.
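A sketch of one participant's session, assuming the structure described above (a use-wear practice round, then two 30-turn test menus whose condition order is assigned at random):

    import random

    def session_plan(rng: random.Random) -> list:
        """One participant's session: a 7-turn practice round with use-wear,
        then two 30-turn test menus whose condition order is random."""
        first = rng.choice(["use-wear", "baseline"])
        second = "baseline" if first == "use-wear" else "use-wear"
        return [
            ("practice", "use-wear", 7),
            ("test 1", first, 30),
            ("test 2", second, 30),
        ]

    print(session_plan(random.Random(1)))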

Participants were asked a series of questions to determine how diverse the sample group was and to help determine the cause of any bias found in the final results. Participants were asked to provide their gender (optionally), age range, level of tiredness, primary language and professional discipline.

At the conclusion of the first and second tests, questions gauged the difficulty of the tests and how well participants could recollect the most and least frequently picked categories. Participants classified difficulty on a Likert scale, then picked, from a selection of the pictographs and words they had been exposed to, the most selected and the least selected. Finally, participants were asked on a Likert scale whether the use-wear made the task easier and, optionally, why they chose their response; in the same way, participants were asked whether the use-wear was desirable. Participants could leave any additional notes at the end.

It was not possible to complete the experiment on displays that were too small, such as smartphones: participants could only commence the study if their window dimensions were at least 1024 × 600 pixels.

Memory Menu evaluation

The Memory Menu was tested quantitatively and qualitatively using a mixed methods approach in order to validate its effectiveness as a latent border resource. I was interested in both the participants' qualitative experiences of the use-wear effect and observing any empirical evidence of a difference in selection times.

To validate H1, I evaluated Memory Menu item selection times to see if they were significantly faster than with the regular baseline menu. I performed a two-tailed, paired t-test (α = 0.05) on each participant's first menu versus second menu selection times.
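A minimal sketch of this test, assuming per-participant lists of turn-by-turn selection times (the test and threshold are as stated; the names and data layout are illustrative):

    from scipy.stats import ttest_rel

    ALPHA = 0.05

    def is_significant(first_menu_times, second_menu_times) -> bool:
        """Two-tailed paired t-test over one participant's turn-by-turn
        selection times for the first versus the second menu."""
        statistic, p_value = ttest_rel(first_menu_times, second_menu_times)
        return p_value < ALPHA

    # Applied once per participant; the analysis then counts how many of the
    # 99 participants show a significant difference between their two menus.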

To validate H2, I gathered information to see whether participants could recollect the most and least frequently picked item categories, and direct responses about the effectiveness of the use-wear, as covered in the study procedure.

To capture any unusual events or biases, information about the participants, the menus generated for them and their web browser was collected, along with additional notes. Specifically, menu arrangements, tasks presented and mistakes made were recorded, allowing the menus presented to be reconstructed if needed in case of rendering issues. Information on gender, age, primary language and professional background was collected in case there were issues with participants' understanding of the content or instructions, web browser or window size. In one case, a note left by a participant alongside the collected information confirmed a malfunction in the menu rendering; these results were excluded from the final analysis.

Memory Menu results and analysis

The null hypothesis H0 was supported: the difference in selection times between the use-wear and baseline menus was statistically insignificant. A two-tailed, paired t-test (α = 0.05) on each participant's first menu versus second menu selection times showed that only 6 of 99 participants produced statistically significant results. (One participant, who reported a rendering malfunction, was removed from the original 100 results.) A careful analysis of the individually significant results reveals instances where certain turns in the baseline menu were prolonged. This could have been caused by a difference in difficulty between menus: the selection bias and arrangement of items may have required less scanning time in the use-wear menu, or some selections may have been difficult in the baseline menu. Overall, this disqualifies H1.

The retention of the null hypothesis is corroborated by the figures for correct responses in recollecting the most and least picked categories: no significant difference in category recollection was found between the baseline and use-wear menus. A correct response involves a participant selecting what they remember being the most or least picked category after completing the menu tasks. Answers were compared against the selection bias applied to the menus, as described in Menu design; responses could be 1 off, 2 off, 3 off and so on from the correct answer. Correct responses for the recollection of the most picked category were similar between the baseline and use-wear menus, as shown in Figure A.4. For the recollection of the least picked category, use-wear shows some advantage, but not a significant one, as shown in Figure A.5. Overall, this disqualifies H2.

 
            Baseline   Use-wear
Correct     63%        67%
1 off       28%        26%
2 off       8%         5%
3 off       1%         2%
4 off       0%         0%
5 off       0%         0%

Accuracy of recollection of the most picked category for baseline versus use-wear menus. Correct selections are compared against selections that are one off, two off, three off and so on from being correct.
 
            Baseline   Use-wear
Correct     36%        49%
1 off       40%        32%
2 off       14%        9%
3 off       5%         8%
4 off       2%         2%
5 off       3%         0%

Accuracy of recollection of the least picked category for baseline versus use-wear menus. Correct selections are compared against selections that are one off, two off, three off and so on from being correct.
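The '1 off', '2 off' figures above can be read as rank distances between a participant's guess and the category with the highest (or lowest) applied bias. A minimal sketch, assuming that reading:

    def off_distance(bias: dict, guess: str, most: bool = True) -> int:
        """Distance of a guess from the truly most (or least) picked category,
        with categories ranked by their applied selection bias."""
        ranked = sorted(bias, key=bias.get, reverse=most)
        return ranked.index(guess)  # 0 = correct, 1 = '1 off', and so on

    bias = {"food": 0.02, "objects": 0.03, "emotions": 0.10,
            "animals": 0.15, "transport": 0.30, "plants": 0.40}
    assert off_distance(bias, "plants") == 0               # most picked, correct
    assert off_distance(bias, "animals", most=False) == 3  # least picked, 3 off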

Qualitative responses on the difficulty of the baseline versus use-wear menu show a bias. When comparing responses for the baseline and use-wear menus completed as the first test, the results are similar; see Figure A.6. For the second test, the baseline was reported as more difficult by 45% of its group, while the use-wear menu was reported as easier by 41% of its group; see Figure A.7. However, the quantitative results have already shown no statistically significant benefit, so this perceived difference is not borne out by performance.

It should be noted that, by my oversight, test order was assigned randomly rather than being distributed into two even groups: 57 of 99 participants (58%) completed the baseline menu first and 42 of 99 (42%) completed the use-wear menu first. The test order should have been distributed evenly, so that half of participants completed the baseline menu first. However, the difference is marginal; only seven participants would have needed to complete the use-wear menu first instead of the baseline, so the results are not skewed.
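For comparison, an evenly counterbalanced assignment could have been produced by shuffling a pre-balanced list of orders rather than assigning each participant's order independently at random; a minimal sketch, with illustrative names:

    # Minimal sketch: counterbalanced test-order assignment, as an
    # alternative to fully random assignment. Names are illustrative.
    import random

    def counterbalanced_orders(n_participants):
        """Return a shuffled list of first-menu conditions, split evenly."""
        half = n_participants // 2
        orders = (["baseline first"] * half
                  + ["use-wear first"] * (n_participants - half))
        random.shuffle(orders)  # randomise who gets which order
        return orders

    orders = counterbalanced_orders(99)
    print(orders.count("baseline first"), orders.count("use-wear first"))  # 49 50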

 
Figure A.6. Difficulty reported by participants for 1st menu baseline (grey) with 2nd menu use-wear versus 1st menu use-wear (red) with 2nd menu baseline.
1st menu baseline: Very difficult 0%, Difficult 7%, Neutral 31%, Easy 39%, Very easy 23%.
1st menu use-wear: Very difficult 0%, Difficult 7%, Neutral 21%, Easy 48%, Very easy 24%.

Figure A.7. Difficulty reported by participants for 2nd menu baseline (grey) with 1st menu use-wear versus 2nd menu use-wear (red) with 1st menu baseline.
2nd menu baseline: More difficult 45%, The same 31%, Easier 24%.
2nd menu use-wear: More difficult 26%, The same 33%, Easier 41%.

Likert-scale responses for effectiveness, counted in Figure A.8, and desirability, counted in Figure A.10, ranked highly. When coding the optional written responses, however, the audience was polarised on effectiveness and showed a general desirability. Effectiveness responses from 68 participants were aggregated into three categories: not effective, partially effective and effective, shown in Figure A.9. Of this group, roughly half felt the use-wear was completely effective. Desirability responses from 53 participants were aggregated into four categories: improved parsing, favourable, distracting and indifference, shown in Figure A.11.
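Tallying coded written responses into the percentages reported in Figure A.9 is straightforward; a minimal sketch using Python's collections.Counter follows. The counts are those reported below; the rest is illustrative.

    # Minimal sketch: aggregating coded written responses into category
    # percentages, using the effectiveness counts reported in Figure A.9.
    from collections import Counter

    coded = (["not effective"] * 21 + ["partially effective"] * 11
             + ["effective"] * 36)  # 68 optional written responses
    counts = Counter(coded)
    for category, count in counts.items():
        print(f"{category}: {count} ({count / len(coded):.0%})")
    # not effective: 21 (31%), partially effective: 11 (16%), effective: 36 (53%)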

 
Figure A.8. Likert scale responses for effectiveness of the use-wear effect: Strongly disagree 1%, Disagree 4%, Somewhat disagree 15%, Neither agree or disagree 10%, Somewhat agree 33%, Agree 25%, Strongly agree 12%.

Figure A.9. 68 optional written responses for effectiveness of the use-wear effect, coded into three categories: Not effective 31% (21 respondents), Partially effective 16% (11 respondents), Effective 53% (36 respondents).

Figure A.10. Likert scale responses for desirability of the use-wear effect: Very undesirable 4%, Undesirable 8%, Neutral 16%, Desirable 61%, Very desirable 11%.

Figure A.11. 53 optional written responses for desirability of the use-wear effect, coded into four categories: Improved parsing 51% (27 respondents), Favourable 25% (13 respondents), Distracting 15% (8 respondents), Indifference 9% (5 respondents).

Interviews

Interview motivation

Cross-disciplinary interviews offer the opportunity to evaluate the practical work completed so far in Supporting on-screen spatial memory through use-wear from a variety of perspectives. The range of interviewees' perspectives can provide direction for future work: consistent advice across disciplines would be indicative of best-practice approaches.

Interview design

In exploring how different disciplines support people's activities through various media, the interviews had two aims: to gain critical feedback on the practical work conducted so far, and to define a way forward based on the interviewees' design practices for supporting people's activities.

To gain critical feedback I framed the interviews around my research conducted to this point on spatial memory on-screen. I specifically asked how the interviewees sought to reduce cognitive load and support spatial memory, if they used those approaches. This language was used at the time to frame the work in HCI terms; however, the interviews deliberately allowed interviewees to offer their own language and approaches.

To collect responses beyond my research work, and language specific to HCI, a semi-structured interview approach was adopted. The interviews involved a series of specific questions about my research and general questions about the interviewees' practices. Any points needing clarification were elaborated upon with off-script questions. The interview process sought to capture as much information about people's unique practices as possible.

Ethics and recruitment

Ethics approval from the Monash University Human Research Ethics Committee (MUHREC) was granted for interviews of up to 1 hour with 10 interviewees. Interviewees were recruited from known contacts and from these contacts' suggestions.

To assemble a broad sample, I selected researchers and practitioners with established and senior experience, from backgrounds in modern and traditional art, design and communication practices, on and off digital media. The specific specialisations of the interviewees are given ahead in the Interview results and analysis, in Table B.1.

Interview procedure

Interviews were performed one on one, in person, with a semi-structured approach. Off-script questions were asked to gain clarification on participants' answers, and knowledge about my own and the interviewees' work was brought to interviews to ensure a dialogue could be sustained.

The following is a full description of the process with a rationale for each step:

  1. To set the tone for the purpose of the interviews, they began with a brief introduction on my part: Let me introduce myself—I am Domenico Mazza, a PhD student working on making complex information presented on screens easier to absorb by supporting people's memory while interacting. The intent of the interviews is to get an understanding of how my research fits into practice and how I should shape it. The interview should take no longer than 1 hour.
  2. To gain an idea of the interviewee's involvement in design, I asked them to reciprocate with an introduction of the design work they were involved in: What kind of design work are you involved in?
  3. To gauge whether the interviewee considered memorability, I asked whether they think about memorability or the memory capacity of an audience or person interacting with their work.
  4. To gauge what makes the interviewee's practice distinct, I asked what distinguishes their professional practice from others'. To support a dialogue, this was preceded by showing work I had produced within my own professional practice and the concerns behind it.
  5. To gain a specific idea of methodologies followed by the interviewee, I asked: if you want to clearly communicate information through design—say a particular message for something, or a kind of functionality—what principles or methods would you follow? To support a dialogue, I brought up what I knew about the interviewee's work beforehand.
  6. To gauge whether memorability and cognitive load were considered by the interviewee, I asked whether they ever considered the cognitive load placed on an audience, and how they would handle cognitive load issues in their work. Immediately before asking these questions, an overview of relevant concepts and the Memory Menu design was shown, so interviewees were not left wondering what supporting spatial memory in HCI involves.
  7. To establish whether considerations of memorability and cognitive load had any resonance with the interviewee, I asked what they thought of a design perspective, or logic, based on reducing cognitive load and supporting memory.
  8. To conclude, I gained feedback on the Memory Menu design by asking the interviewee what they thought of it.

The questions and protocol above were piloted on a professional design colleague before being applied in practice. This helped iron out issues, especially in describing concepts around supporting spatial memory on-screen.

To allow responses to be classified (or coded), the interviews were recorded and transcribed, and question sheets with notes were scanned. I was then able to code responses to identify patterns and unique perspectives.

Interview coding

An open coding approach was used to suit the open nature of the interviews: classifications (or codes) could only be determined once the data was collected. Theoretical questions, that is, questions that help the researcher to see process, variation, and so on, and to make connections between concepts [20, p.8], were asked while sifting through the qualitative data. This was necessary as different interviewees used different expressions and examples to describe what often turned out to be similar concepts. For instance, one interviewee from a marketing background described designing for people's needs and wants; another, from a cognitive science background, described this as designing for users' preferred modalities.

The open coding took several passes to ensure adequate coverage of the responses provided. Certain responses required re-evaluating codes to see whether they held relevance or needed adjustment. Adjustments were made to fix codes that were too generalised or too specific, e.g. at one stage the code for context was divided into three kinds of context: spatial, audience and domain. The language used by interviewees has been preserved in reporting the Interview results and analysis, to retain the intended meaning of responses.

Interview results and analysis

The adopted interview design generated four results tables, Tables B.1 to B.4, from which to draw an analysis.

The specialisations of interviewees have been highlighted in Table B.1 to reveal their backgrounds. IDs allow interviewees' responses to be traced throughout the four tables.

The emergent codes from the interviewees' responses revolved around considering context, maintaining empathy, considering memory and following a set method/process. Responses with reference to supporting people's memory and cognitive load have been highlighted in bold.

The interviewee specialisations and responses in Table B.1 below represent a raw data summary, which is broken down further on by looking at consistent design practice approaches in Table B.2 and unique design practice summaries in Table B.3.

Interviewee specialisations and responses, coded into context, empathy, memory and method/process. Responses with reference to supporting people's memory and cognitive load have been highlighted in bold.
ID | Specialisation | Responses
P1
Specialisation:
  • Communications design and functionality
  • Interface design
  • Cognitive science
Responses:
  • Does not compete against the user's workflow.
  • Supports memory: Pen-based input—permanent ink trace and location act as a memory aid to trigger a reminder.
  • Very interested in cognitive load: considers different contexts/situations and supporting self-management of cognitive load by providing a range of modalities to choose from.
P2
Specialisation:
  • Marketing: strategy and presentation of Expression of Interest (EOI) and Capability Statement
  • Ex-teacher
Responses:
  • Guides communication based on client-defined criteria and the client's successes and failures in the past.
  • Supports memory: making a vital impression by emphasising the main point.
  • Human approach in place of cognitive load: how information is taken on board and pedagogical theory.
  • Accessibility through common language, common sense and empathy.
P3
Specialisation:
  • Printmaking: etching, screen-printing, lithography, relief
  • Publisher
  • Teacher
Responses:
  • Avoids didacticism by ensuring work is not overtly obvious in its message or sentiment, allowing the opportunity for deeper engagement. An 'aesthetic hook' is implemented to captivate the audience for this kind of engagement.
  • Not consciously supporting memory: working in a tradition/context which is relatable to the audience. This includes: history, subjective experience and documentation of process and outcome as memory.
  • Emphasis placed on physical interaction, layers and obscuring in printmaking through qualities of ink.
  • Invariably, but not consciously, considering cognitive load: narrative to avoid overwhelming the audience and qualities of printmaking ink to set a visual hierarchy.
P4
Specialisation:
  • Visualisation of geographic information system (GIS) and application programming interface (API) data
Responses:
  • Work is tailored to the needs of a technical audience. Outcomes situate data spatially on maps, with heatmaps where applicable.
  • Does not consciously support memory: encourages habituation through design consistency and simple explainable functions. Believes legibility informs memory.
  • Supports cognitive load, without attention to memory: by avoiding too many data dimensions.
P5
Specialisation:
  • Web design for complex information
  • Organisational strategy
  • Photography
  • Participatory photography
Responses:
  • Engages with the target audience to determine offerings, interaction preferences and how the design is situated and responds to various audience contexts.
  • Does not consciously support memory: uses content positioning and consistency; supports remembering where you are while interacting; uses relatable images and graphics; assists orientation by memory of surroundings through maps and the value of spatial and individual context.
  • Does not support cognitive load: determines layout based on user behaviour and contextual relevance to the audience.
P6
Specialisation:
  • Graphic design: branding, identity, strategy, positioning
  • Print and digital design
Responses:
  • Emphasis on design research (scrapbooking, gap finding, iterative idea refinement and using applicable data).
  • The design brief plays a critical part—a guiding document based on research, market data and client collaboration to ensure the final outcome meets client and audience expectations.
  • Does not consciously support memory: works towards an intelligent and considered solution.
  • Does not consciously support cognitive load: engages typesetting skills and design hierarchy; follows audience and public relations requirements, or the required outcome based on the design brief, to create a good design.
P7
Specialisation:
  • Geographic information systems (GIS), cartography, statistics, presentation, data analysis
  • Creativity techniques
Responses:
  • Investigates the relevant domain of a design outcome. Workshops help to decipher source material and get audience feedback.
  • Does not support memory: assists perception/recognition based on audience context, e.g. recognition of areas on maps; a memorable map legend, with clear scale; greater contextual information to help make mental links, e.g. exploratory data analysis in crime analysis requires identifying links from notes and patterns; context as memory, by adding the user's experience/knowledge into tools and displaying histories.
  • Supports cognitive load: by feeding information gradually to the audience, showing only necessary data or minimising the amount of information to comprehend, e.g. focus+context visualisation.
P8
Specialisation:
  • Cultural identity
  • User experience (UX)
  • Perception of: space, identity, engagement with the world
Responses:
  • Works against human cultural constraints in digital media. Seeks to rewrite existing visual grammar by seeking what is 'digitally native' or has the least amount of human influence on it. Forms a narrative by providing senses of agency—to be able to act on something and observe a meaningful response or consequence based on an action.
  • Does not support memory: works with a novel idea. No clues or tutorials; allows exploration. Quantity of tasks impacts memory.
  • Considers cognitive load, but not as a driving factor: utilises user feedback; rationalises that humans can adapt despite the capacity to overload; frustrated with new modalities and metaphors, as impertinent cultural artefacts that impact cognitive load, such as endless tasks to perform through an interface.

The responses shown in Table B.1 speak volumes about individual approaches to design problems, but also reveal consistent threads, in particular dealing with the audience's context and showing empathy towards the audience when making design considerations. In effect, all interviewees had a strategy for dealing with memory and cognitive load, even though only 5 of 8 interviewees dealt with memorability head on and only 3 of 8 dealt with cognitive load head on. The language around cognitive load was most popular with the interaction and HCI designers. However, it was common to see synonyms for consistent approaches, highlighted in Table B.2, that bridge the differences in language towards common goals.

Interviewees' consistent design practice approaches, coded into context, empathy, memory and method/process.
ID | Consistent responses
P2, P4, P6, P7
  • Understand the domain.
  • Understand the target market.
P1, P3, P5, P8
  • Understand user/audience behaviour.
P1, P2, P4, P5, P6, P8
  • Audience's wants.
  • User's modalities.
  • Audience's needs.
  • Audience's expectations.
P3, P4, P6, P7, P8
  • Avoid overwhelming.
  • Use storytelling.
  • Use narrative.
  • Support audience's agency.
P1, P5
  • Support purposeful behaviour (e.g. helping audiences avoid errors and confusion).
P3, P4, P5
  • Use familiar visuals.
P1, P2, P5
  • Employ a 'human-centred' approach.
P1, P4, P5, P6, P8
  • Employ an iterative design process.
P1, P4, P7, P8
  • Create and evaluate prototypes.
P3, P4, P5, P6, P7
  • Conduct workshops (with client or audience).

The consistent responses shown in Table B.2 reveal common goals that move beyond supporting memorability and cognitive load. The only consistent approach for supporting memory was to use familiar visuals, while considerations for cognitive load were interspersed, with attention instead placed on: the target audience and their knowledge; the audience's behaviour and supporting purposeful behaviour; and working closely with audiences to iteratively generate designs. It should be noted that these considerations are quite standard to design practice.

In summarising the interviewees' unique practices in Table B.3 it is possible to paint a way forward beyond standard design practice.

Interviewees' unique design practice summaries.
ID | Response
P1: Values supporting or enhancing a user's current method of interacting. Feedback collected from users is used to ensure the design intervention does not compete against the user's cognitive load or workflow, and to instead offer suitable options.
P2: Guides their communication based on applicable criteria defined by the client, as well as criteria based on what the client has experienced in the past in terms of successes and failures.
P3: Values working within a historical and cultural context so the work is relatable to an audience. The practitioner also values 'avoiding didacticism', a rule to ensure that work produced is not overtly obvious in its message or sentiment, but allows the opportunity for a deeper engagement. This works in a similar vein to 'narrative'. The practitioner acknowledges the approach only works if the audience engages with a work; however, an 'aesthetic hook' is implemented to ensure an aesthetic quality engages an audience.
P4: Values consistency and simplicity in displaying data for a specific technical domain. Outcomes situate data spatially on maps, with the use of heatmapping where possible.
P5: Values engagement with the target audience to ensure design outcomes are accessible. Time is taken to interact with the audience on determining offerings, interaction preferences, as well as how the design is situated and responds to various audience contexts.
P6: Places high emphasis on design research. The design brief plays a critical part as a guiding document based on research, market data and client collaboration to ensure the final outcome meets client and audience expectations.
P7: Values understanding the relevant domain of a design outcome. Workshops conducted go towards helping the practitioner decipher source material to create a design, and getting audience feedback to improve a design.
P8: Focuses on going against cultural constraints in digital media by rewriting existing visual grammar and seeking what is 'digitally native' or has the least amount of human influence on it. While the outcomes are not meant to be readily understood, applications are user tested and exploration is encouraged and allowed. The practitioner also touched on how narratives are formed by giving an audience a sense of agency to be able to act on something and observe a meaningful response or consequence from it.

The interviewees' unique practices, as shown in Table B.3, reveal four unique approaches to further what can be done to support people's activities beyond supporting spatial memory and reducing cognitive load in the Memory Menu:

With respect to advancing the design of the Memory Menu, interviewees offered the following critiques, shown in Table B.4. These critiques were given after hearing an explanation of the concepts behind the Memory Menu and seeing an overview of the design.

Interviewees' critiques of the Memory Menu.
ID | Response
P1
  • Consider a multi-modal approach as alternative.
  • Evaluate multiple prototypes empirically in parallel, rather than just one.
  • Design for the extreme user.
P2
  • Cater to each individual's different approaches—consider what works and does not work for them.
P3
  • Allow for different options, consider: events over time, way of working with colour, location, seasons, years or financial quarters.
  • Consider how the use-wear effect is learned and at what stage to implement it. Consider habits formed.
P4
  • The use-wear effect may occlude information.
  • Test the use-wear's effect on habits.
  • Potential hunting effect—where the user loses track of information as it adapts because of the use-wear.
  • Potential to apply the use-wear effect to maps and activities in locations.
P5
  • The use-wear enables a journey for learning.
P6
  • The use-wear effect should be implemented before an interface is learned.
P7
  • Has seen a similar use-wear effect before.
  • Consider individualised cues and predictive cues based on community activity.
P8
  • There are a range of criteria, or phenomena, available for contextualising user activity.

Critiques of the Memory Menu, as shown in Table B.4, were varied, with a range of constructive comments. The most common response was to consider different adaptiveness options based on an individual's context and needs; this was put simply by P8 as determining what phenomena should be used for contextualisation. Suggestions included applying the use-wear effect based on events, a time range or directly to activities in physical spaces on a map (P3, P4). Other suggestions included using the Memory Menu as a learning tool (P3, P5) and as a way to show the usage patterns of others (P7), which has been done in Patina [66] by Matejka et al. (2013). Criticism centred on how the Memory Menu might occlude information or go against the user's will, a recognised problem in adaptive interfaces known as hunting [17, p.208]. As an alternative, P1 advocated the development of a multi-modal approach.

Computational Costume prototyping and presentation

The prototyping and presentation of Computational Costume facilitate the imagination of speculative digital media without investing in the development of technological hardware or software. Computational Costume design scenarios are created with lo-fi physical materials that are brought to life through exhibition, performance and film-making. Effects such as digital wearable interfaces are represented through wearable materials and defined through their presentation. This approach enables designers to readily experiment with and evaluate design ideas without being constrained by the limitations of today's technology.

In this section, I provide: background on this prototyping approach; the objectives for prototyping and presentation; a review of material applications; and a review of presentation methods.

Background

Prototyping Computational Costume using lo-fi physical materials, exhibition, performance and film-making stands in contrast to usual methods for exploring the design of augmented reality and wearable digital media. The methods adopted in my research allow greater flexibility to explore imagined ideas.

The prototyping of Computational Costume follows from the tradition of inexpensive and versatile cardboard and paper prototyping used in evaluating conceptual designs for digital media on-screen. This kind of prototyping is considered by Ehn and Kyng (1991) in Cardboard Computers as a kind of design game for envisionment that allows hands-on experience instead of conceptualising designs through schematics [24]. The physical materials used are readily available, easy to assemble with basic craft skills and durable enough for their intended use as prototypes and props.

Alternatives to physical materials that are generally used to explore new digital media require a larger investment in skills, technical hardware and time. These alternatives include visual effects, programmed virtual graphics and electronics.

Lo-fi physical materials are a speedier medium for designers who are more adept at conceptual development. These designers can focus on the design concept at hand by encouraging audiences to use their imagination, instead of engaging technical skills to compose realistic visual effects, programmed virtual graphics or electronics. Designers who are adept at applying these technical skills may still choose to use lo-fi physical materials as a first step in their design process, to develop, present and evaluate ideas without committing to final materials.

Objectives for prototyping and presentation

The objectives for prototyping and presentation provide a simple rubric for guiding the successful application of physical materials in Computational Costume. The materials used in Computational Costume design scenarios were applied with the criteria listed below as a guide.

The prototyping and presentation of materials in Computational Costume are judged principally on the:

To achieve the above objectives I have opted to apply craft processes using lo-fi physical media such as textiles, paper and cardboard.

For Computational Costume to meet the above criteria for the application of physical materials, materials must be:

For Computational Costume to meet the set criteria for presentation of physical materials, presentations should be:

A few optional material objectives have been adhered to throughout the design process where possible. These objectives have involved the use of:

Material applications

A range of materials have been applied in different ways to produce designs and imagined effects for Computational Costume. The materials used possess their own strengths and weaknesses in achieving the objectives for prototyping and presentation, as covered. I review the application of materials for: Mock-ups and patterns; Objects and wearables; Textile fasteners; and Supporting structures.

Mock-ups and patterns

Paper mock-ups and Paper fabric patterns for cutting textiles for garments have allowed designs to be planned before committing to the use of final materials. These items have been predominantly generated using paper, which is inexpensive and available in a variety of sizes, allowing a wide range of uses.

Paper mock-ups

Paper has allowed the quick mock-up of different prototypes for Objects and wearables, enabling flexible concept iteration before committing to the design of materials for fabrication.

An example of such a mock-up is the layout of the cardboard poster shown in Figure C.1, which allowed the easy application, removal and visualisation of ideas in 3D by using a combination of paper and cardboard.

A mock-up of the Cardboard poster. Image is author's own.

A similar paper mock-up process was used for the development of Computational Costume v1 performance, shown in Figure C.2. Paper notes resembling planned objects were arranged on garments to determine what should be made for the performance.

Paper mock-ups of Computational Costume v1. Images are author's own.
Paper fabric patterns

Fabric patterns allow the refinement of garment designs before committing to final fabrics. Once patterns are finalised, they go on to become templates for cutting fabrics. I have used paper as an economical and flexible material to design and produce fabric patterns, as shown in Figure C.3.

The paper fabric patterns shown in Figure C.3 do not look like conventional fabric patterns, which represent each full section of fabric for a garment. Each pattern shown is cut into a folded sheet of fabric. My pattern design allows cutting within the constraints of a laser cutter bed and reduces the number of seams to be sewn for the pants.

For the shirt torso and arm patterns: the straight sides of the patterns are aligned with the fold of the sheet, which is not cut. The fold acts as the middle of the final unfolded fabric section, which is then joined with the other sections.

For the pants pattern: two separate pieces emerge, one for each leg. The curved area in the middle of each piece is sewn together to form the internal side of the pants crotch, while the left and right edges of each piece are sewn together to form the circumference of each leg.

Fabric patterns for a shirt and pants. Torso (left), shirt arm (middle) and pants (right). Image is author's own.

The coloured sections of the garments shown in Figure C.4 correspond to the pattern sections shown in Figure C.3.

Pants and top made from fabric patterns for laser cutting. Images are author's own.

To ensure the patterns were appropriately sized and fitted, I deconstructed secondhand clothing of the desired size and fit. It is important to note the clothes were used only as a guide to determine correct sizing and form, not to copy the garment design, which would infringe copyright.

Objects and wearables

Imagined esemplastic objects and wearables feature prominently in Computational Costume. The materials used for these items serve to illustrate the potential for both virtual and physical qualities. In Presentation methods I explore how physical materials can spark audiences' imagination to suggest virtual qualities. Below, however, I explore how different lo-fi materials have been used to meet the varied demands of exhibition, performance and film-making.

Cardboard objects and wearables

Cardboard has served as a simple all-purpose material for presenting both objects and wearables, as first explored in the Cardboard poster and interface. The material is easy to find, resilient and austere. Also, as explored in Material choice, cardboard is an ideal candidate for the speculative design explored through Computational Costume: it has no conceptual attachment to digital media, and its traditional attachment to packaging is not played upon.

Cardboard objects

Cardboard objects have generally been made from flat laser-cut pieces with interlocking slots, forming 3D shapes without the use of glue. Double-corrugated (two-layer) cardboard has been used in preference to common single-corrugated (one-layer) cardboard. 7 mm double-corrugated board, found for free in discarded large-appliance packaging, has been used for creating the cardboard poster, as well as the signage shown in Figure C.5 and the Cardboard mannequins for Computational Costume v2. These objects have worked particularly well except where too much force has been placed on interlocks, as with the initial version of the cardboard mannequins.

Floating cardboard signage as imagined esemplastic objects, for the exhibition of Computational Costume v2. Image is author's own.
Cardboard wearables

Thin single-corrugated board, found for free in discarded small-electronics packaging, has been used for the wearable Cardboard interface. This kind of thin board offers a better combination of strength and flexibility for small wearables. However, it is not as flexible as the textiles covered ahead.

Cotton broadcloth objects and wearables

Cotton broadcloth has served as an all-purpose material for both objects and wearables, ideal for its flexibility and availability in a wide range of colours. In addition, the material is a natural fibre and is biodegradable—so any ecological harm from its disposal is diminished.

In addition to its normal abilities, cotton broadcloth can be made rigid for situations that demand both flexibility and rigidity. The process is covered in Stiffened cotton broadcloth.

Cotton broadcloth objects

Cotton broadcloth objects made for Computational Costume have been laser-cut to allow for graphical details, alongside hand-cutting and machine stitching as alternatives. Figure C.6 presents a comparison of a hand-cut object alongside laser-cut objects. Laser-cut objects were adopted for their refined finish.

Hand-cut (above) versus laser-cut (below) fabric objects for Computational Costume v1. Images are author's own.

For applying details, I have used laser cutting, machine stitching and machine-stitched embroidery. I was able to use a sewing machine with the capability to apply embroidered lettering, as shown in Figure C.7.

Manual and automated embroidery used for Computational Costume v2. Image is author's own.

For applying visual details, alternatives such as handwriting, screen-printing and conventional ink or laser printing on cardstock could have been used. I opted for the methods applied because they were the most visually striking. In addition, the fabric made a more durable prop, and I was able to make the objects using only hand tools, a sewing machine and a laser cutter. If objects needed to be reproduced in larger numbers, screen-printing on fabric or conventional printing on cardstock would have been more suitable.

Cotton broadcloth wearables

The use of cotton broadcloth for wearables has evolved with the needs of the project. Exploration began with an adaptable design, moved through economical designs and ended with traditional clothing that could be easily worn, reproduced and configured; see Figure C.8.

Computational Costume design iterations. Images are author's own.

As shown in Figure C.8, Computational Costume has gone through various design stages, most of which never made it to final applications. Most of these designs were centred on the idea that Computational Costume would be engaged for user studies. Designs commenced with a modular approach where a whole body could be temporarily wrapped and the wrapping easily removed for analysis. However, this method proved both time-consuming to assemble and prone to slipping, requiring constant re-adjustment as wearers moved. Designs then moved on to a poncho, which was easily reproducible with minimal material and easy to wear over clothing. In later designs, the ponchos aimed to cover as large a surface as possible and allow wearers to hold materials in discreet pockets.

As designs transitioned from a user-study context to a performance context, easy application, removal and material economy were put aside in favour of visual impact. Designs would be worn like regular clothing and match the intended form of a working Computational Costume. This was achieved through a garment with a traditional fit and shape, as shown at the end of Figure C.8 and in Figure C.4.

Stiffened cotton broadcloth

Cotton broadcloth was stiffened by hand to create the map tool in Computational Costume v2. This relatively straightforward and economical option allowed rigid structures to be created with the durability and flexibility of a fabric. This finishing process adds to the repertoire of material options for Computational Costume; however, it can impact the biodegradability of the treated material.

Stiffening cotton broadcloth involves placing the textile in a liquid solution consisting of equal parts water and polyvinyl acetate (PVA) glue and air drying it. PVA glue was chosen over a starch solution to avoid the risk of attracting pests that might feed on the starch. However, PVA glue is only biodegradable in certain circumstances [2] and cannot be treated as an easily biodegradable material.

In Figure C.9, the process of stiffening cotton begins with soaking the entire textile in the 1:1 PVA and water solution. The soaked textile is left to dry, minimising areas where glue can accumulate by removing any visible excess globs. Dried excess glue is cleaned off by dabbing small amounts of water over it and brushing the residue away. The dried sheet is then hand-ironed on a light steam setting to flatten out all wrinkles. At this point the stiffened cotton has properties similar to a light cardstock with the flexibility of a fabric, allowing the material to be folded while retaining any creases made.

Stiffening cotton broadcloth for folding, from left to right: soaking in 1:1 PVA glue and water solution; air drying; cleaning glue residue; cleaned sheet; ironed sheet; and Miura folded sheet for Computational Costume v2. First image photography by Tonella Scalise, all other images are author's own.
Ready-made objects and wearables

Ready-made objects and wearables have proven valuable for saving time and resources: items can be modified slightly to suit an intended purpose rather than being created from scratch. The use of existing objects and wearables has been useful in several circumstances, as described below.

In Computational Costume v2 a small jar was re-purposed, with minor modification, to act as a container for a marker representing pain, as shown in Figure C.10. The lid of the jar was painted and a purpose-made fabric label was wrapped around the jar. The jar could thus be made less of a jar and more of a prop fitting into the flow of the film, as shown in Figure 4.20. This saved the need to design and make a container from scratch.

A re-purposed jar used as a prop in the Computational Costume v2 video. Image is author's own.

Ready-made clothing has been modified to avoid time spent producing new garments. For Computational Costume v1, T-shirts had their machine-sewn seams unpicked and replaced with small pieces of hook-and-loop fastener strip to allow their quick removal on-stage, as shown in Figure C.11.

A ready-made T-shirt adapted for quick release with hook-and-loop fasteners for Computational Costume v1. Images are author's own.

For a preliminary version of Computational Costume v2, presented as a live performance, a pair of coveralls had loop fastener strips sewn onto them to allow the attachment of objects with hook fasteners sewn on, as shown in Figure C.12.

Coveralls with loop fastener strips sewn on (left) for attaching objects with hook fasteners sewn on (middle and right) for Computational Costume v2. Images are author's own.

Ultimately, the most successful modifications of ready-made objects were paired with performances where the illusion of esemplastic objects was not broken. As covered in Computational Costume v1, movements associated with the removal of clothing distracted audiences and took away from the intended effect. It follows that the ready-made items used in Computational Costume need to be stripped of additional meanings attached to them by association; they need to be as neutral as the cardboard, as discussed.

Paper wearables

The documentation here on paper costume serves as a warning. Light tissue paper and cardstock used in Computational Costume v0 presented an economical way to add large areas of colour and freely applied graphics. However, the same benefits do not apply when using paper as a wearable for a moving wearer. As shown in Figure C.13, paper is not flexible enough. Its properties limit it to applications that avoid shearing forces, such as small wearables like the Cardboard interface or pieces affixed to a larger wearable.

Attempting to wear a paper costume. First image photography by Toby Gifford, second image is author's own.

Textile fasteners

For Computational Costume objects and wearables, textile fasteners have been used extensively in non-standard ways. Hook-and-loop fasteners, metal Snap fasteners and Pin fastening have been used to keep imagined objects and wearables attached when needed. However, metal snap fasteners and simple steel pins have best suited the objectives for prototyping and presentation, being discreet, affordable and more readily biodegradable than polymer hook-and-loop fasteners.

Hook-and-loop fasteners

Hook-and-loop fasteners have been used extensively, although their use has become redundant as Computational Costume has developed.

Hook-and-loop fasteners have been used in Computational Costume v1 for quick release T-shirts, as shown in Figure C.11, and attaching removable objects to modified coveralls, as shown in Figure C.12 for a preliminary version of Computational Costume v2. In an unused design, hook-and-loop fasteners were used to allow patches of fabric to be worn over clothing, as shown in Figure C.8.

The act of physically attaching and detaching objects in the way allowed by hook-and-loop fasteners is only useful in situations where attachment needs to be performed without giving direct attention to the fastener, such as in performances. However, exhibition and film-making have presented more compelling presentation modes, as explored in Design scenarios. In addition, alternatives like Snap fasteners are adequate and can be arranged to make detachment and attachment easy and secure.

Snap fasteners

Metal snap fasteners, as shown in Figure C.14, present a favourable alternative to Hook-and-loop fasteners.

Metal snap fasteners used in Computational Costume v2. Image is author's own.

Snap fasteners attach and detach at fixed points, allowing a reliable and consistent connection between materials; snap-fastened materials can only be joined at matching points. In addition, snap fasteners are much more discreet than hook-and-loop fasteners.

There are several situations where snap fasteners would have been better suited, such as the quick-release T-shirts in Computational Costume v1, as shown in Figure C.11, and the layering of medical records for Computational Costume v2.

Pin fastening

Metal pins can be used as substitutes for Snap fasteners in situations where materials only need to be joined together temporarily or in a very discreet way.

In the video for Computational Costume v2, a discreetly placed pin is used to attach a marker on a small token, as shown in Figure C.15. When done with caution, to avoid poking oneself, this method offers the most flexible and discreet fastening option for exhibition and props in film-making.

A discreetly placed pin allows a marker to be attached to a token in the Computational Costume v2 video. Image is author's own.

Supporting structures

The use of physical materials in Computational Costume also extends to the creation of supporting structures. These structures act to hold imagined objects and wearables for exhibition and even small props in film-making. The materials need to be both strong enough and discreet enough to allow the works shown to be the primary focus. Below I cover experiences of creating and using: Cardboard mannequins, Steel wire supports and Timber supports.

Cardboard mannequins

Cardboard mannequins have been fabricated to hold costumes made for Computational Costume v2, as shown in Figure 4.18 and Figure 4.19. They were made as a substitute for purchasing costly mannequins. More importantly, cardboard carries an amaterial quality which has intrinsic meaning for Computational Costume, as discussed. This matters here because the designs presented are not fashion items, and traditional mannequins can carry fashion connotations.

Strong, double-corrugated board is a suitable material for mannequins. In the right form, this board can carry the weight of clothing; however, care needs to be taken with how and where weight is distributed. The first configuration of the cardboard mannequins was designed with arms and legs that could be articulated. As shown in Figure C.16, the interlocks on the arm and leg pieces had too much pressure placed on them, causing them to pinch and buckle despite added supports. The issues had to be quickly fixed with adhesive tape and steel wire, and a single vertical support to replace the legs.

The original Computational Costume mannequin (left), the mannequin toppling over with added leg supports (middle) and the updated mannequin configuration with strengthened interlocks and a single vertical support to replace the legs (right). Images are author's own.

Keeping in line with the objectives for prototyping and presentation, a future version of the mannequin would correct any interlock that slips under gravity with a removable support, such as steel wire clips or twine. The supports need to be easy to apply while a garment is on. As with any mannequin, the garment needs to be placed over the arms and torso before the arms and torso are attached together, because the arms of a mannequin are not as flexible as human arms.

Steel wire supports

Steel wire has been used for both large- and small-scale supporting structures, though it has proven most useful at a small scale: propping up loose textiles or tightening structural joints that might otherwise be limp in exhibition.

Before conceiving of cardboard mannequins, a steel wire mannequin was made for Computational Costume v0. The intention of the steel wire mannequin was to allow small items and panels to be hooked onto any area, as shown in Figure C.17. However, this modular platform proved finicky to create and work with. In addition, the findings of Computational Costume v0 led to the role of mannequins being reconsidered as complementary, as explored in Computational Costume v2.

Detail of a modular panel affixed on a steel wire mannequin created for Computational Costume v0. Image is author's own.
Timber supports

Timber was used as a stronger alternative to cardboard for Computational Costume. Timber supports, when constructed well, are useful for exhibition, as shown in Figure C.18, and for holding video props, as shown in Figure 4.25. Also, when timber is presented in an austere way, that is, without ornament, it carries the same amaterial quality as cardboard.

A grounded wooden support is much less time-consuming to apply than ceiling hanging for situations where objects need to be suspended. Ceiling hanging was used for the exhibition of Computational Costume v2, as shown in Figure 4.18 and Figure 4.19.

In Figure C.18, dowels are fitted tightly into a stable ground support using a drill bit of matching diameter. This support holds smaller horizontal dowels for hanging objects. A supporting structure made in this way is easy both to erect and to move when needed.

Timber mast structure used for Computational Costume v0. Image is author's own.

Presentation methods

The presentation of Computational Costume has evolved to ensure audiences can adequately conceptualise the ideas presented. In line with the objectives for prototyping and presentation, uninitiated audiences and designers alike should be able to conceptualise the imagined effects.

Three presentation methods have been explored, with varying success. I present my experiences with Sculpture, Live performance and Video.

Sculpture

Sculpture has been used often to communicate what Computational Costume can do. Sculpture has worked best where works are featured in a video and exhibited alongside the video.

The use of sculpture began with the cardboard poster and interface. The cardboard poster in particular was shown to audiences as part of the CHI 2017 (Conference on Human Factors in Computing Systems) Student Research Competition alongside traditional posters. Through form and content, the poster explained the rationale for esemplastic objects while also being a representation of such an object.

The findings from presenting the cardboard poster indicated that audiences praised the design; however, information on how well the concept was conveyed was not available. The design was therefore elaborated on again through sculpture in Computational Costume v0.

Computational Costume v0, imagined through sculpture, sought to reveal how esemplastic objects and wearables would come together. The sculpture was arranged to maximise the number of scenarios presented, with didactic captions alongside to compensate. However, this was too much for a sculpture of a single Computational Costume wearer.

The findings from presenting Computational Costume v0 revealed the general public did not immediately understand the concept. When viewers received answers to questions they had about the work, they were able to conceptualise the ideas. I can speculate that more sculptures as part of a scene may have helped paint a clearer picture.

Sculpture has worked best as a complement to the video made for Computational Costume v2. Physical props and wearables used in the video are presented alongside the video in an exhibition, as shown in Figure 4.18. Through this method, audiences can see physical materials presented through video in a way that sustains the imagination of Computational Costume. Audiences can then refer to the sculptures presented in a scene to observe details they may have missed.

Live performance

Live performance was explored in an attempt to imagine Computational Costume through a live experience. This technique was used in Computational Costume v1 and the preliminary stages of Computational Costume v2.

Computational Costume v1 was presented through a series of quickly removable costumes in a three-minute performance for a science communication competition. The findings revealed that the work's key message was not adequately conveyed: the performance was densely packed with scenarios and isolated to a single performer, so a concrete imagination of a complete ecosystem with multiple actors was lacking.

The early stages of Computational Costume v2 were planned as a live performance where the wearer could engage directly with the audience. A modified pair of coveralls, as shown in Figure C.12, could be used to show esemplastic objects directly in action. In practice this was attempted with one wearer, which led to problems in showing audiences the work. As shown in Figure C.19, the coveralls and objects had to be placed on a surface so everyone could engage. A model wearing the garment, with an actor to engage with the objects, would have been an ideal way to encourage audience participation.

A suit for Computational Costume v2 intended for live performance is shown to audiences directly. Image courtesy of SensiLab, Monash University.

In practice, attempts at live performance needed further development. With enough performers, a live performance could be a compelling platform to show audiences how Computational Costume could work. However, instead of developing live performances further, I adopted film-making as an accurate and accessible means to present a complete ecosystem for Computational Costume through video.

Video

The use of film-making to present imagined Computational Costume ecosystems through video has been the most compelling presentation method to date. Videos allow audiences to see imagined objects and wearables engaged as accurately as possible.

In Computational Costume v2, video allows engagement with multiple people and the surrounding environment to be presented. Additionally, video editing can be used to create several useful effects, such as:

Video combines the best of sculpture and live performance: both prototypes and performances are presented in the clearest fashion. Alongside these benefits, physical props used in film-making can be exhibited next to the video so that audiences can observe details they may have missed while watching, as shown in Figure 4.18.

Conclusion

The material applications and presentation methods explored for Computational Costume provide guidance for designers wanting to build upon the work. I have illustrated the lessons learned from creating and presenting imagined design scenarios and provided insight into material applications and presentation methods which have worked best.

Collectively the work enables designers to imagine objects and wearables for new digital media by engaging lo-fi physical materials, including cardboard, textiles and paper, with supporting structures made from cardboard, steel wire and timber. These materials have been used across a wide range of presentation methods, including sculpture, live performance and video. The methods presented here stand as an accessible alternative to using advanced visual effects, programmed virtual graphics or electronics.

In following the objectives for prototyping and presentation, I have found that video exhibited with sculpture is the best-practice approach for audiences and designers alike to conceptualise Computational Costume. Designers can film performances with physical materials in set environments. The perspectives of different actors, along with editing, can maintain the illusion that physical materials are esemplastic objects with combined virtual and physical qualities. These videos can be experienced alongside physical props through exhibition, allowing audiences to engage with the finer details of featured designs.