For an enriched reading experience, you may visit the online version of this exegesis at https://do.meni.co/phd (DOI 10.26180/5d4c35648c01a)
This exegesis is submitted as partial fulfilment of the requirements for the degree of Doctor of Philosophy
Bridging the Virtual and Physical: from Screens to Costume
by
Domenico Mazza
Bachelor of Design (Visual Communication) (Hons), 2014
Faculty of Information Technology
Monash University, Caulfield, Melbourne
August 2019
DOI 10.26180/5d4c35648c01a
PDF DOI 10.26180/5d5a214fdf8aa
© Domenico Mazza (2019)
Under the Australian Copyright Act 1968, this exegesis must be used only under the normal conditions of scholarly fair dealing. In particular no results or conclusions should be extracted from it, nor should it be copied or closely paraphrased in whole or in part without the written consent of the author. Proper written acknowledgement should be made for any assistance obtained from this exegesis.
Any third-party content herein that has been reproduced without permission complies with the fair dealing exemption in the Australian Copyright Act 1968, which permits reproduction of such material for the purposes of criticism and review.
I certify that I have made all reasonable efforts to secure copyright permissions for third-party content included in this exegesis and have not knowingly added copyright content to my work without the owner's permission.
Abstract
There are ways humans act in and experience the physical world that are not reflected in the design of digital media. The term 'digital media' here encompasses our televisions, desktop computers, laptops, tablets, smartphones, smartwatches and the like. People who engage in activities through digital media can today collect and display vast amounts of information from across time and space, almost anywhere. Yet the virtual information presented through digital media accommodates neither the full freedom of intangible human imagination nor the full familiarity and immediacy of engaging with the physical world. This creates a divide between experiences through digital media and those in the surrounding physical world. This research explores ways to conceptualise digital experiences in the physical world.
The outcome of this practice-based design research allows a wide range of design practitioners and researchers, from visual and interaction design to human–computer interaction and textile design, to engage in the conceptualisation, prototyping and presentation of new digital media that addresses the divide between the physical and virtual, through what this research terms Computational Costume. The work enables designers and audiences alike to imagine and experience future technological capabilities without being limited to today's technology or needing advanced visual effects or technical skills. Instead, this practice-based design research has developed and refined the use of lo-fi physical materials, from exhibition to film-making.
Computational Costume has emerged from four investigations into bridging the divide between physical and virtual practices in digital media. Investigations began with supporting people's spatial memory of interactions on screen-based devices through a visual overlay for interfaces known as the Memory Menu. A 99-participant study of the Memory Menu did not find a significant improvement in usability. This result, paired with knowledge obtained from a variety of experienced designers and communicators across art, design, marketing and human–computer interaction, encouraged a shift in focus beyond screen-based digital media. This shift led to a review of and research into ubiquitous and tangible computing, which seeks to engage more of people's surroundings and physical world. The review revealed that a specific focus on whole-body interaction designs was required to break dependence on screen-based devices. This focus led to speculation on how probable technologies centred around augmented reality could enable whole-body, wearable virtual identities to ground interactions through digital media. Computational Costume was conceived in this research from this speculation.
The practice-based research presented in this exegesis and through exhibition contributes a conceptual rationale and accompanying practical approach for developing speculative virtual wearables and objects that ground interactions with digital media in the physical world using lo-fi physical materials. This contribution is embodied by the design of Computational Costume proposed in this research: a speculative design setting and scenarios based on imagined probable technologies centred around augmented reality. This work is explored through lo-fi physical materials activated via exhibition and film-making. This method of exploration enables designers and audiences alike to be liberated from the constraints of today's technology.
Declaration
This document contains no material which has been accepted for the award of any other degree or diploma in any university or other institution and, to the best of my knowledge and belief, the document contains no material previously published or written by another person, except where due reference is made in the text of the documentation.
Domenico Mazza, 10 April 2019
Contents
List of figures
ATELIER allowing people to shift between modes of digital representation. Images from Embodied Interaction – designing beyond the physical–digital divide [25] by Ehn and Linde (2004). Used with permission.
A rendition of a geographic information system (GIS) visualisation encountered in practice. Image is author's own, based upon a working design.
Screenshots of the Patina use-wear effect. Images from Patina: Dynamic Heatmaps for Visualizing Application Usage [66] by Matejka et al. (2013). Used with permission.
Screenshot of Data Mountain. Image from Data Mountain: Using Spatial Memory for Document Management [82] by Robertson et al. (1998). Used with permission.
A commonly used word processor reconceptualised for a large surface accommodating people's wide range of practices and physical abilities. Image is author's own.
Different possible practices defined as areas on a reconfigured word processor design for a large surface. The practice of directly editing a document is highlighted. Arrows indicate connections between areas. Image is author's own.
The Physical Telepresence system in use. Images from Physical Telepresence [63] video by Leithinger et al., MIT Media Lab, Tangible Media Group (2014).
The Urp system in use. Video stills from Urp [98] video by Underkoffler and Ishii, MIT Media Lab, Tangible Media Group (1999).
The inSide system in use. Images from inSide [95] video by Tang et al., MIT Media Lab, Tangible Media Group (2014).
The Perfect Red speculative design. Video stills from Perfect Red [48, pp.47–48] video by Bonanni et al., MIT Media Lab, Tangible Media Group (2012).
The Pillow Talk system in use. Video stills from Pillow Talk [76] video by Joanna Montgomery (2010). Used with permission.
The HandSCAPE system in use. Video stills from HandSCAPE [62] video by Lee et al., MIT Media Lab, Tangible Media Group (2000).
The VIDEOPLACE system in use. Video stills from Videoplace '88 [51] video by Myron Krueger et al. (1988).
The Whole Body Large Wall Display Interface system in use. Video stills from Whole Body Large Wall Display Interaction [91] video by Shoemaker et al. (2010). Used with permission.
The Armura system in use. Images from On-Body Interaction: Armed and Dangerous [42] by Harrison et al. (2012). Used with permission.
The T(ether) system in use. Images from T(ether) – Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation [53] video by Lakatos et al., MIT Media Lab, Tangible Media Group (2014).
The Project North Star system in use. Video stills from Project North Star: Exploring Augmented Reality [59] and Project North Star: Desk UI [58] videos by Leap Motion, Inc. (2018). Used with permission.
The Wall++ system in use. Video stills from Wall++: Room-Scale Interactive and Context-Aware Sensing [104] video by Zhang et al. (2018).
The Augmented Studio system in use. Video stills from Augmented Studio: Projection Mapping on Moving Body for Physiotherapy Education [45] video by Hoang et al., Microsoft Research Centre for SocialNUI (2017).
The Multitoe system in use. Video stills from Multitoe interaction: bringing multi-touch to interactive floors [5] video by Augsten et al., Hasso Plattner Institute (2010). Used with permission.
The Kickables system in use. Video still from Kickables: Tangibles for Feet [90] video by Schmidt et al., Hasso Plattner Institute et al. (2014). Used with permission.
The electrical muscle stimulation force feedback system by Lopes et al. (2018) in use. Video stills from Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation [64] video by Lopes et al., Hasso Plattner Institute (2018). Used with permission.
Mirrorworlds Concept: The Architect. Video still from Mirrorworlds Concept: The Architect [57] by Leap Motion, Inc. (2018). Used with permission.
The Choreomorphy system in use. Video still from Choreomorphy [81] video by El Raheb et al. (2018). Used with permission.
A Mirrorworld in a classroom, for exploring the water cycle of an environmental landscape at room scale. Shown is the augmented reality setting (above) and the corresponding physical setting (below). Images from Leap Motion, Inc. [68] by Keiichi Matsuda, illustrations by Anna Mill (2018). Used with permission.
A Mirrorworld in an office collocated with a medical operating theatre. Shown is the augmented reality setting (above) and the corresponding physical setting (below). Images from Leap Motion, Inc. [68] by Keiichi Matsuda, illustrations by Anna Mill (2018). Used with permission.
A virtually concealed thief in a speculative augmented reality. Video still from 'Hyper-Reality' [67] by Keiichi Matsuda (2016). Used with permission.
Quotes from Apple Inc. patent Sports monitoring system for headphones, earbuds and/or headsets [80] by Prest et al. (2014). Quotes extracted from patent by Apple Inc.
A rendition of Computational Costume [6] in practice with its hardware design highlighted in blue. Illustration by Janelle Barone, made in collaboration with the author.
Cardboard poster made for the CHI 2017 (Conference on Human Factors in Computing Systems) Student Research Competition, Denver, Colorado, USA, May 2017. Photography by Jon McCormack.
Hand assembly of the cardboard poster. Images are author's own.
Hand and forearm interface mock-up for crafting the cardboard poster. Photography by Jon McCormack.
Computational Costume v0 mannequin front and back. Shown at No Vacancy Gallery QV in Melbourne, Victoria, Australia as part of the Melbourne FashionTech collective's showcase during White Night, 17 February 2018. Photography by Jon McCormack.
Map tool in Computational Costume v0. Photography by Jon McCormack.
Computational Costume v0 featuring personal effects. Photography by Jon McCormack.
Computational Costume v0 featuring health treatment plan and swimming goals. Photography by Jon McCormack.
Timeline tool in Computational Costume v0. Photography by Jon McCormack.
Re-enactment video of the Computational Costume v1 performance [71]. Video is author's own.
Computational Costume v1 from left to right: personal costume, worksite costume and medical emergency costume. Photography by Jon McCormack.
A copy of a boarded train and map tool, used to communicate the wearer's location and estimated time of arrival in Computational Costume v1. Photography by Jon McCormack.
Some reading material made public in Computational Costume v1. Photography by Jon McCormack.
A worksite costume indicating a job for the wearer, which can be affixed to the costume in Computational Costume v1. Photography by Jon McCormack.
A medical emergency costume displaying areas of injury with indication of a drug administered (displayed as an 'F' for Fentanyl), heart biometrics, enclosed private records and support sent by loved ones via touch from the map tool in Computational Costume v1. Photography by Jon McCormack.
Computational Costume v2 video [72]. Video is author's own.
The Computational Costume v2 video, costumes and props on display in The Looking Glass window display at Monash University's SensiLab, Caulfield, Victoria, Australia from 1 July 2018 until 26 November 2018. Image is author's own.
The Computational Costume v2 costumes and props, with video, on display at the Design Translations exhibition by Health Collab, MADA Gallery, Caulfield, Victoria, Australia, 3–6 December 2018. Image is author's own.
Applying a mark onto the back using an object for marking and costume token at 00:40–00:42 in the Computational Costume v2 video [72]. Images are author's own.
The costume token allows access to a costume. In this case a health professional can see a medical record costume at 00:46–00:52 in the Computational Costume v2 video [72]. Images are author's own.
A health professional applies a ready-made diagram from a wall and specialised treatment plan to the patient's costume for reference at 00:52–01:19 in the Computational Costume v2 video [72]. Images are author's own.
A medical record costume hosting a lifetime of records as chronologically ordered silhouettes at 01:32–01:40 in the Computational Costume v2 video [72]. Images are author's own.
A medical record appears automatically on a wearer in an emergency situation as a call-to-action for bystanders at 01:53–01:56 in the Computational Costume v2 video [72]. Images are author's own.
The map tool allowing access to a birth record's information on birth parents and birthplace at 02:04–02:06 in the Computational Costume v2 video [72]. Images are author's own.
The map tool facilitating communication between wearers and acting as a navigational aid at 02:16–02:23 in the Computational Costume v2 video [72]. Images are author's own.
The map tool allowing access to a remote location for object retrieval at 02:29 in the Computational Costume v2 video [72]. Image is author's own.
The costume and shared objects as tools to manage privacy at 02:40–03:05 in the Computational Costume v2 video [72]. Images are author's own.
The first turn in the practice round of the Memory Menu. Images are author's own.
A selection error is highlighted in Memory Menu, for picking 'Sunflower' instead of the required selection, 'Banana'. Image is author's own.
Accuracy of recollection of the most picked category for baseline versus use-wear menus. Correct selections are compared against selections that are one off, two off, three off and so on from being correct.
Accuracy of recollection of the least picked category for baseline versus use-wear menus. Correct selections are compared against selections that are one off, two off, three off and so on from being correct.
Difficulty reported by participants for 1st menu baseline (grey) with 2nd menu use-wear versus 1st menu use-wear (red) with 2nd menu baseline.
Difficulty reported by participants for 2nd menu baseline (grey) with 1st menu use-wear versus 2nd menu use-wear (red) with 1st menu baseline.
Likert scale responses for effectiveness of the use-wear effect.
68 optional written responses for effectiveness of the use-wear effect coded into three categories.
Likert scale responses for desirability of the use-wear effect.
53 optional written responses for desirability of the use-wear effect coded into four categories.
A mock-up of the Cardboard poster. Image is author's own.
Paper mock-ups of Computational Costume v1. Images are author's own.
Fabric patterns for a shirt and pants. Torso (left), shirt arm (middle) and pants (right). Image is author's own.
Pants and top made from fabric patterns for laser cutting. Images are author's own.
Floating cardboard signage as imagined esemplastic objects, for the exhibition of Computational Costume v2. Image is author's own.
Hand-cut (above) versus laser-cut (below) fabric objects for Computational Costume v1. Images are author's own.
Manual and automated embroidery used for Computational Costume v2. Image is author's own.
Computational Costume design iterations. Images are author's own.
Stiffening cotton broadcloth for folding, from left to right: soaking in 1:1 PVA glue and water solution; air drying; cleaning glue residue; cleaned sheet; ironed sheet; and Miura folded sheet for Computational Costume v2. First image photography by Tonella Scalise, all other images are author's own.
A re-purposed jar used as a prop in the Computational Costume v2 video. Image is author's own.
A ready-made T-shirt adapted for quick release with hook-and-loop fasteners for Computational Costume v1. Images are author's own.
Coveralls with loop fastener strips sewn on (left) for attaching objects with hook fasteners sewn on (middle and right) for Computational Costume v2. Images are author's own.
Attempting to wear a paper costume. First image photography by Toby Gifford, second image is author's own.
Metal snap fasteners used in Computational Costume v2. Image is author's own.
A discreetly placed pin allows a marker to be attached to a token in the Computational Costume v2 video. Images are author's own.
The original Computational Costume mannequin (left), the mannequin toppling over with added leg supports (middle) and the updated mannequin configuration with strengthened interlocks and a single vertical support to replace the legs (right). Images are author's own.
Detail of a modular panel affixed on a steel wire mannequin created for Computational Costume v0. Image is author's own.
Timber mast structure used for Computational Costume v0. Image is author's own.
A suit for Computational Costume v2 intended for live performance is shown to audiences directly. Image courtesy of SensiLab, Monash University.
List of tables
An overview of where the four research investigations are located within this exegesis, with section links.
Interviewee specialisations and responses, coded into context, empathy, memory and method/process. Responses with reference to supporting people's memory and cognitive load have been highlighted.
Interviewees' consistent design practice approaches, coded into context, empathy, memory and method/process.
Acknowledgements
I have numerous people to thank for supporting my PhD by sharing their skills and encouragement throughout my journey.
Tim Dwyer, Jon McCormack and Vince Dziekan: for supervising my research with great thoughtfulness, a bit of humour in most meetings, and so much of your time. Collectively, you all helped shape the direction of my PhD from day one, up until the end. You gave me the freedom to pursue my visual design practice in a technical field. You all encouraged me to push into the unknown. Because of this, I was able to come out the other end with valuable lessons, and exciting results, that I could never have anticipated.
Julie Holden: for your great work in editing my writing, and teaching me how to elucidate ideas and outcomes, with greater care and intent. It was a real pleasure to go through different writing options and learn how they work. It was like learning to design again, but through text. I treasure the writing skills I have learned with you, because they have been so valuable—in both reflecting on and articulating my own logic, and communicating my work.
Lizzie Crouch: for your encouragement and valuable advice to share my work with the world. This has put my research in the places and minds of people I would otherwise have not reached.
Elliott, Pat, Dilpreet, Lizzie, Sojung, Yalong, Yingchen, Matt, Leona, Lora, Su-Yiin, Mike, Toby, Shelly and Nina: you have been the most wonderful friends. Your companionship and conversations alone were enough, yet a few of you also offered bags of organic fruit or vegetables, shared homemade cakes, drinks, sourdough bread starter and lovely gifts, and gave me a hand to install and take down my work.
Simone: sometimes you matched the feedback my supervisors gave me. You even came to my mid-candidature talk of your own accord. You also had the clout to crush my doubts when I thought settling on my costume idea was silly. Your generosity was tremendous.
Nick and Jesse: it did not matter how many times I said I needed to write, you kept trying to distract me. I love you both. You have been the best friends I could ask for—you have given me perspective on all manner of things, many a time. You both made the hard times in life less so, through your kindness.
My mother and father: you have been my greatest supporters for almost 27 and a half years. You have provided for me what you did not have access to. In addition, your great qualities have been infectious: Ma, your abundance of care for those around you; and Pa, your concern for justice and culture—and the fortitude you both possess. For this, I am so fortunate.
Also, there are a number of people to thank for their direct support of my research. In particular:
- Jon McCormack, Elliott Wilson and Lizzie Crouch: for providing and running SensiLab—the wonderful and near-limitless multidisciplinary place which has nurtured both me and my work
- Allison Mitchell: for her diligent support behind the scenes to ensure the supervisory team and I did what we needed to do, in addition to facilitating all of my reviews and my final exhibition
- The PhD milestone review panels: chaired by Michael Morgan and, over the years, attended by Michael Wybrow, Mark Guglielmetti, Gene Bawden and Indae Hwang; their fresh input has been invaluable for reflecting on my research and determining the best direction forward
- Allison Mitchell, Ammie Julai and Elena Galimberti: for facilitating my final exhibition with such care—as the first practice-based research exhibition for the faculty
- Illustrator Janelle Barone: for bringing my vision of Computational Costume to life with a wonderful illustration.
- Professional accredited editor Mary-Jo O’Rourke AE: for providing copyediting and proofreading services according to the national university-endorsed ‘Guidelines for thesis editing’ (Institute of Professional Editors, 2019)
- Andrew Maher and Stewart Bird (while incumbent) and Alexandra Sinickas at Arup, Melbourne: for their encouragement, conversations and time they allowed me to engage with their colleagues to inspire my research
- All individuals who participated in my Memory Menu study or interviews, in total 108 people who generously gave their time for my research
Finally, I thank the organisations that have generously funded my research, in particular: the Australian Government for its Research Training Program (RTP) scholarship; Monash University for its resources, as well as travel and equipment funding; and Arup's Melbourne office for its additional investment. These investments facilitated the cultivation of the original ideas presented through my research.
Introduction
When people use mainstream digital devices such as a smartphone, television or computer, there is a divide between the reality presented through the device and their surrounding physical reality. This divide matters because physical reality gives purpose and meaning to the applications on a digital device. This notion of a divide draws upon the perspective of embodied interaction. Embodied interaction defines a connection between people's lived experiences and their experiences through digital devices.
Take, for example, two people talking across a long distance through a video call. Through the windows of an on-screen video call, each person has enough information to hear the other and gauge facial expressions, alongside a peek into their surrounding environment.
Valuable information faithfully presented in a physical conversation, such as objects, events and movements in a shared physical space, is lost because of the cropping of the video camera. While this lost information may seem minor, it reduces the faithfulness of the ambience presented and thereby shapes the interpretation of the session. A compromise is made here on valuable information normally available when communicating, in order to allow today's mainstream digital device to overcome the physical barrier of distance.
Losing information that people are accustomed to is problematic because the surrounding physical reality gives purpose to an activity such as a video call. This problem extends to other actions performed through digital devices, actions that are ultimately grounded in the physical reality outside of the device. Such actions include organising and finding content, and visualising information that references the surrounding world.
In my research, I explore what details can be designed into digital experiences to bring them on par with physical and virtual experiences in the world. Imagine how we as humans live and draw understanding from our experiences in physical reality. People who engage in activities through digital devices experience only a subset of possible experiences in the physical world. I divide these experiences into virtual and physical practices that are engaged across digital devices and in physical reality:
- Virtual practices that encompass people's imagination consisting of perspectives and ideas
- Physical practices that encompass people's spatial understandings and actions
People's virtual and physical practices are endlessly rich and detailed. Yet today's average smartphone or computer engages people's senses in limited ways. Such devices are configured to reduce information of vast scale, such as the totality of one's work or environmental surroundings, into the confined boundaries of a screen through windows, files and maps. These confined boundaries are not representative of the world's complexity as experienced outside of the computer screen.
Returning to the example of the video call, there are several issues with the cropping presented by the video camera and video-calling application on-screen:
- Communication between the callers is restricted to windows that cut out information from their ambient environments. This design removes distractions, yet prevents engagement with anything else that could be significant to a conversation.
- As a substitute for a surrounding physical environment, it is possible to collaborate by sharing files and screens via a video-call application, although these collaborations have limitations. Actions such as drawing are inferred from the movement of a cursor visually disconnected from the interactor performing the action with their body.
- Shared on-screen surfaces for video calling today lack information, as they are either blank or a digital wallpaper with some recognisable application windows and icons generally used by only one person. This acts as a poor substitute for a surface with objects or a space with even more objects and perhaps more than one inhabitant conducting their activities—all contributing meaning to what is visible.
The aforementioned issues in relation to current mainstream digital media affect the ability of today's digital media designers to present information in a way that is adequately grounded in physical reality. These issues have prompted my research to address how future digital media might be designed to adequately ground interactions with respect to people's lived experiences.
In my research, I investigate four areas to explore the limitations of physical and virtual practices with digital media. They take into consideration today's digital technology, approaches across mainstream media, emerging digital media and imagined digital media. The four areas involve:
- Simplifying physical and virtual practices on-screen: what information can be added to simplify interaction on-screen by creating connections to activities in physical reality?
- Reviewing physical and virtual practice support across media: how are practitioners across communication and design disciplines working to support people through different media?
- Reviewing new physical and virtual practices: how is emerging digital media bridging today's physical–digital divide?
- Creating new physical and virtual practices: how might we imagine a way forward based on identifiable gaps in current emerging media to bridge the physical–digital divide?
Background
The purpose of the four investigations ahead is to remediate the limitations imposed by current mainstream digital media using the perspective of embodied interaction. The perspective suggests paying attention to the connection between people's interactions through digital media and their lived experiences of the world. This can be accomplished through a range of design approaches across media and technologies. In order to cover this wide base, as part of my investigations I:
- Reviewed and studied a novel interface design approach through mainstream digital media on-screen
- Interviewed a range of experienced practitioners and researchers about their design approaches and feedback in response to work completed up until this point
- Reviewed relevant emerging digital media designs
- Engaged a speculative design process to imagine improvements to mainstream digital media in a probable future free of today's technological limitations
Table 1.1 provides an overview of where the four research investigations are located within this exegesis.
The investigations have been influenced by the concept of the physical–digital divide [25] proposed by Pelle Ehn and Per Linde, which is based on an embodied interaction perspective. Their notion of the divide takes a paradoxical view of the virtual and physical, grounded in combating demassification. Demassification is the loss of material and social properties as physical artefacts evolve into digital artefacts, as proposed by Brown and Duguid [16]. For example, a physical book presented as an e-book on a digital device loses meaning which might be attached to the wear on a book's cover or its pages, or the positioning of a book on a desk alongside other objects.
Brown and Duguid suggest that demassification is paradoxical because supposed improvements in technology that shed physical mass carry repercussions. Two problems contribute to the demassification paradox: digital technology has stripped away the physical form on which social practices once depended (physical demassification); and consequently the social practices that relied on congregating around a physical format have disappeared (social demassification) [16, pp.22–25].
Brown and Duguid suggest an awareness of latent border resources [16, pp.6–20] to counter demassification in digital media [16, pp.21–31]. They suggest that digital media is not the sole cause of demassification, but merely a place where demassification can be observed because latent border resources have been overlooked by designers. Designers can correct digital media applications with an awareness of latent border resources. Latent border resources are qualities of an artefact that lie dormant but contain socially shared significance. These dormant qualities lie at the border between direct and peripheral attention. For example, when reading a physical book we are directly attuned to the pages, while peripherally a worn spine may hold personal significance for the reader; sitting at the border, however, is the thickness of the pages clenched in both hands, which provides an indirect feeling of progression through the book. These latent border resources are important for designers to recognise, because they can be taken for granted when designing for new media.
In the e-book example, latent border resources have been lost, yet new physical and virtual practices have been enabled which can be iterated upon. A reader can now instantly skip between texts and follow links from one text to another without physically moving. The designer has new latent border resources to discover and provide to the reader, such as simplifying the search of texts by presenting a history of what has been navigated. In this case the new latent border resource provides an advantage over the physical library, where the reader would have to walk about and carry a stack of books or a reference list to accomplish the same task. This exemplifies how problems from losing mass as the physical becomes virtual can be recovered through careful consideration of the latent border resources available to people.
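To make the navigation-history resource concrete, the following minimal sketch (entirely hypothetical and not drawn from any cited system; all names are illustrative) shows how an e-reader might record a reader's trail across texts so it can be surfaced back to them:

```python
# Hypothetical sketch: a navigation history as a new latent border
# resource for an e-reader. Illustrative only; not from any cited system.
from dataclasses import dataclass, field

@dataclass
class ReadingHistory:
    """Records the trail of texts a reader follows across links."""
    trail: list = field(default_factory=list)

    def visit(self, text_id: str) -> None:
        """Record a navigation step, e.g. following a link to another text."""
        self.trail.append(text_id)

    def recent(self, n: int = 5) -> list:
        """Most recently visited texts, newest first and de-duplicated,
        ready to present as a 'where have I been?' list."""
        seen, out = set(), []
        for text_id in reversed(self.trail):
            if text_id not in seen:
                seen.add(text_id)
                out.append(text_id)
        return out[:n]

history = ReadingHistory()
for step in ["novel-ch1", "glossary", "novel-ch1", "critique"]:
    history.visit(step)
print(history.recent())  # ['critique', 'novel-ch1', 'glossary']
```

Such a trail replaces the walking, carrying and shelf positions of the physical library with a resource that sits, like the felt thickness of pages in the hand, at the border of the reader's attention.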
The example of the e-book gaining new latent border resources despite demassification highlights that physical and virtual practices determine one another or are co-dependent, rather than only working in opposition as suggested by the paradox of demassification. It is therefore important to understand that new latent border resources emerge from new digital media. This is the fundamental reason why the embodied interaction perspective adopted in my research extends from applications of today's technologies to applications of imagined probable technologies.
Simplifying physical and virtual practices on-screen
In the first of the four investigations, I explore how physical and virtual practices can be simplified for on-screen interactions with an awareness of latent border resources. First I acknowledge the variety of interaction design approaches for solving problems on-screen today and develop a novel interface design based on these design approaches.
This section of my research investigates:
- What do a variety of interaction design approaches reveal about solving common on-screen interface design problems? This is explored in On-screen interaction design approaches.
- How can latent border resources be supported on-screen? This is explored in Supporting on-screen spatial memory through use-wear.
Reviewing physical and virtual practice support across media
As discussed, virtual and physical practices are paradoxical and dependent on one another. This dynamic moves forward as new capabilities are added, refined and replaced in media. Adding new capabilities to digital media requires a look beyond people's practices supported by on-screen interfaces.
In the second of the four investigations: Design and communication practices across domains, I interview a variety of design and communication practitioners about my research conducted so far and what consistent and unique approaches they adopt to support people engaging with media. Together, these responses provide an indication of best-practice approaches to follow in my research.
This section of my research investigates:
- What do a variety of design and communication practitioners have to say about the design approaches adopted in the research?
- What consistent and unique approaches exist in designers' practices to supporting people's activities across design and communication domains?
Reviewing new physical and virtual practices
In seeking to resolve the physical–digital divide by tackling demassification, Ehn and Linde (2004) turn to the perspective of embodied interaction to employ new kinds of physical and virtual digital media interactions. Embodied interaction assists designers to reflect on what physical and virtual practices are useful to people. Embodied interaction is a perspective in the field of human–computer interaction (HCI) popularised by Paul Dourish in the seminal book Where The Action Is (2001) [22]. The perspective suggests the meaning we derive from the interfaces of digital devices is largely influenced by our having a physical manifestation in the world as experienced through our bodies [22, pp.100–103]. Rather than digital media designers seeing people as machines that respond in a predictable fashion to familiar metaphors and instructions through digital artefacts, the meaning obtained through digital artefacts is intertwined with people's unique lived experiences of the world as a whole, through metaphor and concepts [54] or other media [11].
Ehn and Linde (2004) explored embodied interaction through the ATELIER design research project for physical–digital studio environments for design students [25], shown in Figure 1.1. The studio environment allowed people designing an interactive installation to explore the qualities of a physical space through a physical model, ambient sound and light projections, before designing a 3D model. The environment enriched people's conceptualisation of design ideas by providing a wider range of necessary virtual and physical practices to work with, rather than limiting practices only to virtual 3D object creation, sketches and mental visualisations.
The work produced by Ehn and Linde (2004) fits into a pattern of designs that can be referred to as the Material Turn [102][83] in HCI. Outcomes of the Material Turn in HCI demonstrate how designers can support people by intertwining digital media interactions into the world, as done with ATELIER [25].
In the third of the four investigations, I focus on Ubiquitous and tangible computing, which has been instrumental in supporting the Material Turn in HCI, with particular attention to Whole-body interaction as a means to coalesce interactions.
Ubiquitous computing, a proposal championed by Mark Weiser in 1991, suggests personal computers are a transitional step towards information technology that will one day be as invisible and ubiquitous as the text on signs and candy wrappers [101]. This direction has encouraged the proliferation of devices we experience today. Tangible computing is a subset of ubiquitous computing and attempts to break the many functions of screen-based interactions into standalone artefacts. As an example, the process of sculpting can be achieved through physical platforms and communicated digitally, such as in Physical Telepresence [63] by Leithinger et al. (2014). We find the emergence of this kind of computing today in the mainstream network-connected devices of the internet of things (IoT) (or enchanted objects [85] as described by David Rose) and dynamic materials of the radical atoms research program [48] led by Hiroshi Ishii. In the mainstream, ubiquitous and tangible computing falls back to screen-based devices. Screen-based devices allow the necessary management and networking of IoT devices.
Whole-body interaction presents an alternative to screen-based devices, because it shows how interactions through digital media can be attached to the body and the world instead. With the promise of augmented reality in the future, whole-body interaction devices could offer the ability to substitute screen-based devices with interaction through bodily and physical surfaces, and in mid-air.
This section of my research investigates:
- What new physical and virtual practices are offered by the Material Turn in HCI through ubiquitous and tangible computing to address people's dependence on screen-based devices?
- How might designers use whole-body interaction as part of the Material Turn in HCI in the future to replace people's dependence on screen-based devices?
Creating new physical and virtual practices
Designers need to be able to work beyond the constraints of today's technology to fill the gaps between the promising outcomes of the Material Turn in HCI and their application in the mainstream. Speculative design is a process which allows designers to work beyond technological constraints to propose alternatives. Anthony Dunne and Fiona Raby in their book Speculative Everything [23] discuss the role of speculative design proposals in opening new perspectives on societal challenges. They argue that such designs add to the public's vision of reality, challenge it and provide alternatives to it [23, p.189].
In the last of the four investigations: Computational Costume design, speculative design is used as a tool to address identified gaps without subscribing to today's technological constraints, for instance, the requirement for screen-based devices to manage computer networking. Through this investigation I seek to stimulate discussion around what would be both probable and desirable through whole-body interaction supported by augmented reality. Speculative design offers the ability to comment on how the technology has been developed so far and where it can be taken purely for the benefit of people.
A language to communicate and reflect upon speculative designs of whole-body interaction supported by augmented reality is explored in Computational Costume prototyping and presentation. Accessible materials and techniques are engaged to create the necessary illusion to support speculative designs. This is needed because the technology to realise the speculative designs is not yet available. It is not the responsibility of interaction designers to create technologies. It is more valuable for this designer to put forward the design ideas for evaluation and development. This position draws upon the success of interaction designers using paper and cardboard prototyping techniques [24].
This section of my research investigates:
- What technology and functionality can be expected in a speculative design for whole-body interaction supported by augmented reality for new physical and virtual practices?
- How can designers economically prototype and present speculative designs for whole-body interaction supported by augmented reality for new physical and virtual practices?
Supporting practices with screen-based digital devices
Engagement with digital devices today predominantly involves on-screen interaction. Screen-based devices provide the main means of creating, communicating and looking at digital content. For this reason, designing for screen-based devices is the starting point of my research to understand how people's practices can be better supported through digital devices.
In my research I build upon how people's activities could be supported on mainstream screen-based devices in On-screen interaction design approaches and Supporting on-screen spatial memory through use-wear. I also pay attention to the wide range of design and communication practices through interviews as a counterpoint to my research, in Design and communication practices across domains.
On-screen interaction design approaches
My practical design research began by unravelling interaction design approaches in the real world. I worked with engineers at a consulting engineering firm to determine where they could improve their digital media designs. These engineers made a variety of on-screen tools and visualisations to assist their own work and to communicate information to clients. My task was to use the issues I discovered in their design work to come up with generalisable design solutions.
What became visible through talking with a variety of engineers about their designs was both the domain specificity of the content and the use of off-the-shelf interface and visualisation frameworks. The difficulties of this situation were twofold: the content could be quite complex, and visual or interaction design skills were not being engaged to make experiences more palatable.
I found several examples that illustrate this in geographic information system (GIS) visualisations and interfaces. Anecdotally, these systems were viewed more favourably than their predecessors: large printed volumes and maps. However, it was common to see simple design problems—like the one shown in Figure 2.1—which could be easily solved, such as layout issues where information could be divided into sections to make it easier to navigate, and missing iconography that would make common features stand out.
These issues were made clear to the designers of the software, but it was not possible to fix them immediately. The presence of the issues was a symptom of how resources were allocated. There was no expectation to have well-versed designers on board full-time. Instead, alongside engineering work, engineers also designed their own on-screen tools when needed. Creating tools like the one shown in Figure 2.1 followed the path of least resistance—it gave the engineers control over the presentation of content that was deeply integrated into their work. This design practice avoided having to adopt ill-fitting tools or outsource work.
While experiencing this inertia, I reflected upon the different design practices applicable to the situation at hand. In order to contribute, I reflected on four relevant kinds of design practitioners:
- Visual designers, who are adept at applying tacit knowledge of the elements and principles of visual design to interaction design problems. This professional practice is well documented in Designing Visual Interfaces [77] by Mullet and Sano (1995).
- Interaction designers, who work similarly to visual designers, with the ability to recognise interaction design principles and perform user evaluations to validate the effectiveness of designs. This professional practice is well documented in About Face: the Essentials of Interaction Design [19] by Cooper et al. (2014).
- Unspecialised visual/interaction designers, who apply established visual layouts and interfaces to projects without formal training, in order to avoid expending resources on professional designers. This approach comes with mixed results, as shown in Figure 2.1 and experienced throughout the design work I observed.
- HCI researchers, who carry the work of visual and interaction designers forward into novel design spaces with rigorous evaluations to determine the validity of designs. An example of this kind of approach can be found in ISOTYPE Visualization – Working Memory, Performance, and Engagement with Pictographs [41] by Haroz et al. (2015), where the researchers formally evaluate the effectiveness of ISOTYPE (International System of Typographic Picture Education) pictorial symbols applied to information visualisations.
Supporting on-screen spatial memory through use-wear
Supporting spatial memory on-screen through use-wear is analogous to using bookmarks in physical books. Use-wear (also known as computational wear, read wear, visit wear or patina), see Figure 2.2, provides a visual signal over parts of the interface which have been interacted with, along with an indication of how frequently those parts have been used [43][92][47][1][66]. People can use this information to pick up where they left off when coming back to an interface, or when exploring a new interface to quickly identify familiar and unfamiliar areas. It is also a signal that can be read by others, because the progress made through the interface, like a bookmark in a book, is openly visible. These kinds of signals present useful latent border resources, as discussed in the Background. Supporting spatial memory is one direct way of supporting latent border resources.
Supporting spatial memory on-screen is a well-explored area [88]. Use-wear fits within a variety of novel and established strategies to support the location of objects on-screen. These strategies include, but are not limited to: laying out information as maps; traces and scents; obscuring information; and mnemonics. I explain each in detail below to provide a context for adopting use-wear.
- Maps provide more effective representations of information than lists or ribbon command interfaces [89], especially when revisited [40]. Data Mountain [82] by Robertson et al. (1998), shown in Figure 2.3, advantageously replaces web browser bookmarks in a list with user-arrangeable stacks of thumbnails on an inclined plane.
- Traces and scents, like use-wear, involve leaving behind useful information, such as showing a trail of pages which have been navigated in the form of breadcrumb navigation, or providing a hint about information behind a hyperlink in the form of compact summaries or scents, such as scented widgets [103]. Animations also leave behind useful information by signifying different actions associated with on-screen windows through mnemonic rendering [10] and with graphics and input areas through afterglow effects [7] or by revealing the most popular choices in a menu ahead of other items through gradual onset [29].
- Obscuring information induces people to learn where information is. Such an example is a frost-brushing interface [18], where people are forced to recall spatial information by brushing away a frost effect from the interface to reveal the information. The effect supports spatial learning [18]. The frost effect performs the opposite of a use-wear effect.
- Mnemonics require people to practise an easily recallable pattern. An example is the method of loci (or memory palace) technique. This technique traces back to antiquity as a way to recall vast tracts of information by assigning chunks of information to mentally visualisable objects placed within a sequence of physical spaces known as loci [13]. The technique does not directly support spatial memory on-screen; however, it does allow people deliberate access to their spatial memory abilities, which can be used to recall commands. The technique has been used in the Physical Loci system [79], where it has been shown that commands can be recalled and invoked more effectively than traditional menus, with the added ability to share the commands with others.
Of the spatial memory strategies, use-wear lacks concrete results to suggest that a subtle application on menu interfaces would be effective. Implementations of use-wear have been evaluated at a small scale and shown to be favourable [43][47][1][66]. However, conclusive benefits have only been demonstrated where visibility is obscured in fisheye views [92]. There is also a known benefit when highlighting popular menu items to work with a bubble cursor (or area cursor) designed to capture popular menu items within a widened region around the cursor, as shown in bubbling menus [96]. So far, a subtle use-wear effect on a standard menu has not been validated. At the same time, use-wear is the easiest of the spatial memory strategies to apply in practice: the effect does not require restructuring an interface, adding animations or making people learn additional information such as a mnemonic.
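To illustrate just how lightweight the effect is, consider a minimal sketch of the core logic (hypothetical names throughout; this is not the Memory Menu's actual implementation): selection counts are normalised against the most-used item and mapped to a subtle background opacity.

```python
# Minimal sketch of a use-wear effect for a menu. Hypothetical names;
# not the Memory Menu's actual implementation.
from collections import Counter

class UseWearMenu:
    """Tracks item selections and maps frequency to a subtle highlight."""

    def __init__(self, items, max_opacity=0.25):
        self.items = list(items)
        self.counts = Counter()          # selections recorded per item
        self.max_opacity = max_opacity   # cap keeps the effect subtle

    def select(self, item):
        """Record a selection, as a menu's click handler would."""
        self.counts[item] += 1

    def wear_opacity(self, item):
        """Background opacity for an item: 0 when unused, rising with
        relative use up to max_opacity for the most-used item."""
        most_used = max(self.counts.values(), default=0)
        if most_used == 0:
            return 0.0
        return self.max_opacity * self.counts[item] / most_used

menu = UseWearMenu(["Banana", "Sunflower", "Carrot"])
for choice in ["Banana", "Banana", "Sunflower"]:
    menu.select(choice)
print(menu.wear_opacity("Banana"))     # 0.25 (most used)
print(menu.wear_opacity("Sunflower"))  # 0.125
print(menu.wear_opacity("Carrot"))     # 0.0 (never used)
```

Rendering then only needs to tint each item's background by its wear opacity, which is why the effect can be retrofitted onto an existing menu without restructuring it.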
A mixed-methods approach has been used to quantitatively and qualitatively evaluate use-wear for use in practice through the Memory Menu. The Memory Menu study detailed in Memory Menu presents the design and evaluation of a subtle use-wear effect for large menus. The work attempts to support people's spatial memory while using an interface, and was motivated by the effect's simplicity and applicability in practice. A rigorous online evaluation with 100 participants, yielding 99 valid results, showed no statistically significant effect of use-wear on selection times or on the memorability of items selected. The null hypothesis H0 therefore could not be rejected, and H1 and H2, as detailed in Memory Menu hypotheses, were not supported. Qualitative responses for menu difficulty suggest participants had a stronger preference for the use-wear effect after using the baseline menu first. Overall, participants' attraction to the use-wear effect was polarised.
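For readers unfamiliar with this statistical framing, failing to reject H0 means the observed differences between conditions are consistent with chance. The sketch below, with invented numbers and assuming a paired within-subjects comparison of mean selection times (the study's actual analysis is detailed in Memory Menu), illustrates the form such a test takes:

```python
# Illustrative only: invented selection times, not the study's data;
# the study's actual analysis is detailed in the Memory Menu section.
from scipy import stats

# Mean selection time (seconds) per participant under each condition.
baseline = [2.9, 3.4, 3.1, 2.7, 3.8, 3.0]
use_wear = [3.0, 3.2, 3.3, 2.8, 3.7, 3.1]

t, p = stats.ttest_rel(baseline, use_wear)  # paired comparison
if p < 0.05:
    print(f"Reject H0 (p = {p:.3f}): the conditions differ.")
else:
    print(f"Cannot reject H0 (p = {p:.3f}): no evidence of a difference.")
```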
The results are therefore inconclusive as to whether the Memory Menu provides an improvement over traditional non-use-wear menu interfaces. The results illustrate that spatial memory issues are not easily solvable by placing information on top of an interface. As shown at the beginning of this section, alternative ways of supporting spatial memory are generally more involved. In light of this, spatial memory support needs to be carefully designed into an interface from its conception, with consideration of its content, presentation and audience. From this point, I sought a broader line of enquiry.
Design and communication practices across domains
As a counterpoint to my research, I reflected upon the design and communication practices of practitioners inside and outside of interaction and HCI design. Practitioners outside interaction and HCI design also deal with engaging audiences and supporting their practices. The purpose of this process was to learn from practitioners who work with media other than digital devices.
In the previous sections I have looked at how latent border resources on-screen might be supported by directly targeting people's spatial memory. This presents a narrow perspective through a single medium, providing limited means to bridge the divide between physical and virtual practices through digital media. Interviews with professionals who collectively engage with a variety of media can provide a valuable source of broader guidance in supporting people's activities through digital media.
Interviews were conducted with a variety of researchers and practitioners from backgrounds based in modern and traditional art, design and communication practices, on and off digital media, with established and senior experience. The interview motivation, design and results are detailed in Interviews. The 8 interviewees provided their insights into supporting audiences' practices beyond digital devices by responding to the work conducted in Supporting on-screen spatial memory through use-wear, framed around the HCI language of supporting people's memory and reducing their cognitive load.
Open coding of the interviewees' responses reveals a standard pattern of dealing with the audience's context and showing empathy towards them when making design considerations through an iterative process. This stood in contrast to supporting memory and cognitive load, which were addressed directly by 5 of 8 and 3 of 8 interviewees, respectively. Interviewees also provided constructive comments to improve the Memory Menu.
Beyond supporting spatial memory and reducing cognitive load as explored with the Memory Menu, interviewees revealed four unique approaches: supporting people's modalities; working within a relatable cultural context and history; avoiding didacticism; and moving away from cultural constraints and what is culturally acceptable. The interviewees' four unique approaches can be seen as contradictory on the topic of culture, as they both rely on culture and defy it. However, as a whole the approaches provide direction for accommodating people's activities with respect to their vast capabilities and environments, as explored in the next section.
Conclusion
In this chapter I have explored how digital media designers might support people's virtual and physical practices through mainstream screen-based devices. I began by investigating the support of spatial memory for on-screen interfaces through a subtle use-wear effect. This approach was initiated on the back of real-world practical experience where it was not ideal to re-design interfaces already used in practice. Experimental results did not find a benefit to the effect as applied in the Memory Menu study. This result encouraged a broader look at designers' practices to support people's activities.
Through a series of interviews, a broad range of design and communication practitioners were asked to provide feedback on the work conducted so far on use-wear. They were asked about their own practices to support people's activities. The interviewees revealed ways to improve the work conducted on use-wear and also suggested accommodating people's vast capabilities and environments, while both challenging and accommodating the culture which surrounds interaction. Based on the interviewees' advice, I put aside a narrow focus on spatial memory and re-framed my research to target new ways of supporting people's activities outside of screen-based devices.
Designing for a wider range of interactions beyond the screen
The narrow scope of Supporting on-screen spatial memory through use-wear and consideration of Design and communication practices across domains encouraged a movement towards the kind of design practice engaged in by Ehn and Linde, as discussed in Reviewing new physical and virtual practices. In this design practice, digital and physical devices take on new material expressions, and people take on board new kinds of practices. This involves incorporating a wider range of interactions that come from engagement with the world outside of the screen-based device into people's activities conducted through digital media.
On-screen interface design reconceptualisation
To begin accommodating a wider range of people's lived experiences in my design practice, I applied the concerns of demassification and embodied interaction, as Ehn and Linde did for their design research project ATELIER, to translate a screen-based interface design to a more engaging physical environment, as discussed in Reviewing new physical and virtual practices. As a trial, I began by reconceptualising the design of a commonly used word processing application. My redesign, shown in Figure 3.1 and imagined for a large screen surface, caters to the human ability to work beyond a traditional keyboard, pointer and screen size. My design illustrates how it might be possible to conduct the range of activities involved in authoring a document without a screen-based device. The design includes inserting data into templates for fine-grained control of the document design.
The design process involved cutting the application into its constituent menus and rearranging them into a neater procedural flow without the constraint of a fixed-size screen. The interface in Figure 3.1 was then divided so different practices occupied their own areas, shown in Figure 3.2. These areas were positioned according to their place in the process of writing, from conception to export: for example, document elements sit on the left and the finished outcome on the right. As writers proceed through the process, they can occupy an area as necessary. Areas specific to formatting were placed as close as possible to where the associated formatting practices take place—on the document itself. Overall, the areas were ready at hand when needed, rather than being concealed. However, the activity is still concentrated on a single surface, when it could have a deeper connection with the surrounding environment.
To extend the design proposal shown in Figure 3.1 and Figure 3.2 and integrate it with the surrounding environment, I then imagined, in the poster design shown in Figure 4.3, how some of the areas might manifest on people's bodies as extensions of their hands and arms. This allows the practices to be carried with people, so the interface feels more like a portable tool than a fixed surface with options cluttered around a document. This idea has been explored and explained through the design and creation of a 3D cardboard poster, detailed ahead in Cardboard poster and interface.
To ground the design practice defined here and provide direction for it, I reflect on three relevant areas that have contributed to ways in which designers have accommodated interactions beyond screen-based devices:
- The Material Turn, defined by Robles and Wiberg (2010) as a movement in HCI towards bringing together physical and digital qualities [83]
- Ubiquitous and tangible computing, which presents research towards prolific devices that privilege engagement with physical materials and objects over screens; these devices have been explored in the radical atoms research program [48] and are emerging in the mainstream network-connected devices of the IoT (or enchanted objects [85])
- Whole-body interaction research [26], which leverages greater use of human movement and senses for interaction through computers, and presents an alternative to dependence on prolific devices
The Material Turn
The Material Turn, as defined by Robles and Wiberg (2010), presents how physical and digital qualities can come together for the benefit of digital media interaction design [83]. The Material Turn can be seen as a concentrated effort to produce work in a similar vein to what Ehn and Linde pioneered in 2004 through their design research project ATELIER, as introduced in Reviewing new physical and virtual practices. The Material Turn reveals the potential in blending traditionally non-digital material qualities and physical experiences into digital artefacts.
The Material Turn is distinct from other HCI design directions that seek to remediate the connection between the physical and digital, in that it focuses on the materiality of interactions. This focus on materiality indicates a desire to reconcile the divide between the physical and the digital through new materials and new relationships between materials [83, p.137]. This can be contrasted with standard design approaches that build upon graphical user interfaces (GUIs), which rely on metaphorical relations between the physical and digital such as files and documents, or with tangible computing, which presents physical analogues to digital information [83, p.137]. Materiality presents a broader view of what is possible, one that extends beyond the constraints of the physical affordances of established digital media.
Within the discourse of the Material Turn, researchers have been encouraged to move away from an allegiance to particular kinds of materials (e.g. digital devices or non-computational media) and to concentrate instead on the experiences and interactions afforded by any material:
- Material experiences: Giaccardi and Karana (2015) [33] define a framework for material experiences in HCI which has grown out of the Material Turn. Material experiences consider the experiences people have with and through materials, rather than focusing purely on the physical qualities of materials [33]. The authors look at: people's interpretations; affective and sensorial experiences; and the performative abilities afforded by various media.
- Materiality of interaction: Wiberg (2016) argues for a focus on the materiality of interaction and not the material status of the computer [102]. Material status, in the sense described, is a bias that privileges one kind of digital device over another or treats non-computational materials as more authentic or real, whereas a focus on the materiality of interaction privileges interaction itself—an understanding of emerging experiences as part of a larger history of interaction.
The Material Turn in HCI, as described, shines a light on the virtual and physical practices enabled by digital media that move beyond screen-based devices. Discourse in the area provides a means to critique the contemporary directions of ubiquitous and tangible computing. I now explore how well this computing bridges the divide, in terms of the new kinds of experiences it affords and whether material status has been overcome.
Ubiquitous and tangible computing
Ubiquitous computing was proposed by Mark Weiser in 1991 [101] as a future to be achieved in which computing is invisible. Ubiquitous computing provides a perspective from which to imagine computing that is as ubiquitous as everyday objects and thereby more readily available to the different kinds of situations people experience. The proposal best serves as a device to provoke new design concepts, rather than describing an actual kind of digital media. It has been argued that the future put forward by ubiquitous computing is proximate [9]; in other words, it is always just out of reach. Ubiquitous computing has, in one sense, arrived through access to all forms of screens today, from watches to televisions connected to the cloud. However, there is always room to make interaction through computers more ubiquitous—we are yet to create advanced artificial agents to converse with in order to perform tasks.
I specifically explore tangible computing, which provides a focus for designers to explore new physical and virtual practices beyond screen-based devices. Tangible computing has been explored in places such as the radical atoms research program [48]. For example, Physical Telepresence [63] by Leithinger et al. (2014), shown in Figure 3.3, demonstrates how communication can take place over network-connected dynamic surfaces that people can sculpt. Physical Telepresence allows people to share physical forms across long distances with greater sensorial qualities than alternatives such as videos or photographs.
Screen-based devices are a locus for digital media interactions that run counter to people's full range of sensorimotor abilities—what humans can achieve with their senses and physical abilities. People can carry screen-based devices practically anywhere, but these devices do not encourage direct engagement with the world that surrounds them. This matters because surrounding contexts give the representations on screen meaning—such as a conversation between people that involves locations, other people and objects, or a virtual model of an object made for physical interaction.
With respect to the Material Turn discussed, I explore how tangible computing extends digital media into new physical and virtual practices, but also highlight a continued attachment to the material status of digital devices. I explore how tangible computing:
- Enables Physical objects with virtual overlay, rather than translating physical objects to screens
- Enables Manipulable physical and virtual surfaces and objects, to relieve the limitations of using physical objects and screen-based devices
- Enables Enhanced manual processes by combining virtual tools with physical objects
- Enables the use of Ambient perception, instead of direct attention to devices
- Relies on the material status of common physical objects, resulting in people's Dependence on many devices, which can only be managed by screen-based devices
Following the review, I propose ways of Relieving dependence on screens.
Physical objects with virtual overlay
Tangible computing allows designers to present information from the physical world through physical objects with virtual overlays, rather than translating physical objects to be presented completely on-screen. Urp [98] by Underkoffler and Ishii (1999), shown in Figure 3.4 (using the I/O Bulb and Luminous Room system by Underkoffler and Ishii (1998) [97]), demonstrates how people can visualise the shadows and reflections cast by built structures and the airflows travelling around them. Physical tools can be used to measure distances, apply materials like glass and direct wind effects. Information that would be lost in a 2D representation is retained.
However, in Urp the physical objects used are not mutable like virtual objects on-screen. It is not possible to modify the models or rearrange them into new views, for instance by slicing them. I explore ahead how tangible computing presents physical materials that can be manipulated both physically and virtually.
Manipulable physical and virtual surfaces and objects
Tangible computing allows for physically and virtually manipulating objects and surfaces. This extends the ability of physical models, as presented in physical objects with virtual overlay, so they can be treated in a similar way to virtual objects on-screen while also accommodating different physical uses. This interaction extends across large surfaces, as well as objects at room scale and hand scale:
- Tangible CityScape [94] by Tang et al. (2013) is a room-scale surface that uses dynamic actuators to present physical cityscapes and associated data such as traffic moving through a city.
- Physical Telepresence [63] by Leithinger et al. (2014) (using the inForm system [30] by Follmer et al. (2013)), shown in Figure 3.3, allows people to see and move objects from a distance using a projected surface which is actuated and able to sense depressions.
- Proxemic Transitions [37] by Grønbæk et al. (2017) shows how furniture with projected information can be adapted to suit a person who is standing or sitting.
- ChainFORM [78] by Nakagaki et al. (2016) shows how a handheld modular actuated hardware device can transform into different tools for drawing and displaying information.
- inSide [95] by Tang et al. (2014), shown in Figure 3.5, shows how layers and structural qualities of an object can be revealed through a virtual overlay. Hand gestures can be used to slice open objects or make them transparent. Touches can depress surfaces visually.
These works illustrate how different kinds of objects can be manipulated physically and virtually, rather than purely physically as a model or purely virtually as an object on-screen.
Enhanced manual processes
Being able to modify objects physically and virtually, as explored in the previous section, can enhance manual processes. Virtual tools that allow operations such as instantly copying and moving objects or generating accurate geometry can be applied to physical operations. In addition, physical actions can be communicated across digital networks.
- For sculpting, Perfect Red [48, pp.47–48], a speculative design by Bonanni et al. (2012), shown in Figure 3.6, presents a clay-like material that allows sculpting by hand and with hand tools. The sculpting is enhanced with features found in computer-aided design (CAD). As when creating a form with CAD, the clay can be snapped to primary geometries (e.g. a circle or rectangle) or cut perfectly by drawing a line. In addition, forms can be cloned and fused perfectly.
- For communication and remote activities, Physical Telepresence [63] by Leithinger et al. (2014), shown in Figure 3.3, allows people to see and move objects from a distance using a projected surface which is actuated and able to sense depressions.
Ambient perception
The tangible computing explored so far comes with the benefit of alleviating direct attention, by utilising people's ambient perception. Screen-based devices traditionally require direct focus and are otherwise not intended to be easily visible. Tangible computing, which occupies a 3D presence, is visible peripherally and from afar, provided it is large enough and contrasts with surrounding objects.
The sculpting and movement of 3D forms are given greater expression in: surfaces like those of Urp, explored in Physical objects with virtual overlay; the works explored in Manipulable physical and virtual surfaces and objects; and materials like Perfect Red, explored in the previous section. Greater expression comes from the movement of bodies to perform direct physical actions, rather than the movement of hands and arms across a trackpad, keyboard or screen in relation to flat representations. These greater expressions serve as rich latent border resources (see Background) in the form of prominent movements that can be mentally attended to in order to inform making and collaborative activities.
Below I explore a few ways that designers have deliberately leveraged people's ambient perception in a discreet fashion to relieve the need for direct attention in order to use devices:
- The Good Night Lamp by Alexandra Deschamps-Sonsino (2005) [21] is a small, house-shaped lamp that serves to indicate presence across a distance by acting like a shared light switch. Your own lamp can be switched on in order to switch on a lamp far away, sending a signal that acts as a less obtrusive substitute for sending a text message. The signal can be observed in a way that is akin to being in the same physical space as someone else.
- In a more intimate fashion than the Good Night Lamp, Pillow Talk [76] by Joanna Montgomery (2010) (first canvassed as a concept in Interactive Pillows [73] by Christina von Dorrien et al. (2005)), shown in Figure 3.7, shows how pillows can be used in long-distance relationships to signal presence by glowing when a partner rests on their pillow. This conventional activity is fashioned by tangible computing into something more powerful and simpler than communication through screen-based alternatives.
Dependence on many devices
So far, I have explored how tangible computing brings many benefits by bringing together physical and virtual practices. However, an issue glossed over in the development of tangible computing is that interactions rest upon a range of physical devices. Tangible computing rests upon the material status (see Materiality of interaction in The Material Turn) of common household or office objects and furniture. The issue is that activities possible through applications on a single screen-based device are scattered across a range of individual devices that may or may not cooperate.
In practice, dependence on the presence of many devices is not ideal: it requires people to furnish their homes and offices with the right kinds of devices and to maintain them, rather than relying on a few powerful screen-based devices. Nor have these tangible devices presented true freedom from screen-based devices, because they need to be centrally managed by a screen-based device. This is evidenced in the mainstream adoption of tangible computing through the IoT (or enchanted objects [85]). The IoT is not as complex as the tangible computing covered here, yet it presents the closest mainstream generation of computing beyond screen-based devices. Examples of the IoT are network-connected lights, toys, home appliances and blinds or doors, which can perform automated actions and communicate their status. An IoT device today may not be a fully actuated and sensing surface, as shown in Manipulable physical and virtual surfaces and objects, due to high cost and proof-of-concept status. However, at a fundamental level the IoT and tangible computing allow information to be captured and shared between physical devices to enable new kinds of interactions with digital media. The IoT is beginning to realise some of the vision of tangible computing by bringing computational ability to objects in our environment.
Despite tangible computing being an apparent antithesis to screen-based devices, IoT devices depend on screen-based devices to work. The earliest developments in tangible computing hinted at this dependence. In 2000, the HandSCAPE [62] digital tape measure, shown in Figure 3.8, demonstrated how the distance and orientation of measurements could be gathered by a network-connected tape measure to generate a virtual 3D solid of the object being measured. The measurements were displayed on a screen because, even by today's standards using a smartphone, that remains the most economical format in which to do so.
The continued dependence of tangible computing and the IoT on screen-based devices is attributable to the unrivalled convenience of screen-based devices in providing:
- Dynamic controls, which are readily available through screens, as opposed to embedding a standard interface in every tangible computing device, for connectivity and maintenance
- Sensor information, which can be captured and shared from screen-based devices like smartphones and smartwatches; this allows inference of a person's presence or absence without additional sensing devices in tangible computing
- Ubiquity—screen-based devices are common and usually ready at hand with many functions, whereas a tangible computing device takes a specific role and may remain in a particular place.
By recognising these conveniences, designers can conceive new methods to apply the same effects without depending on screen-based devices.
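To make the first of these conveniences concrete, the sketch below plays the 'dynamic controls' role a smartphone takes on for a tangible device. It is a minimal illustration only: the lamp's address and JSON schema are hypothetical stand-ins for the vendor-specific interfaces real IoT devices expose.

```python
import json
import urllib.request

# Hypothetical address and state schema for a network-connected lamp;
# real IoT devices expose vendor-specific interfaces, so treat this as
# an illustrative stand-in only.
LAMP_URL = "http://192.168.1.50/state"

def set_lamp(on: bool) -> None:
    """Push a new state to the lamp, as a phone app would.

    The dynamic control (the toggle itself) lives on the screen-based
    device, not on the lamp.
    """
    payload = json.dumps({"on": on}).encode("utf-8")
    request = urllib.request.Request(
        LAMP_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        print("lamp acknowledged:", response.status)

if __name__ == "__main__":
    set_lamp(True)  # the lamp has no interface of its own for this action
```

The point of the sketch is that the lamp carries no interface of its own for this action; removing the screen-based device means finding somewhere else, such as an augmented reality wearable, for the control to live.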
Relieving dependence on screens
It is possible to relieve dependence on screens by replacing the conveniences of screen-based devices that support dependence on many devices. As described, conveniences that need to be factored in are: dynamic controls; sensor information; and ubiquity. Technology for Augmented reality see-through devices, explored ahead, shows promise in achieving this by allowing the portable superimposition of visuals through wearable glasses.
Dynamic controls: augmented reality devices can extend the presentation of dynamic controls beyond screen-based devices and the fixed areas provided by projectors. The tangible computing works shown in Physical objects with virtual overlay and Manipulable physical and virtual surfaces and objects rely on projectors to overlay dynamic information and controls on surfaces within a fixed area. Augmented reality could extend this to any surface by providing the necessary ubiquitous display.
Sensor information: augmented reality wearables can also allow the same supportive sensing technologies found in screen-based devices.
Ubiquity: it should be noted that augmented reality, as proposed, only allows the management of tangible computing and the IoT to be ubiquitous. This does not target the heart of the matter, which lies in the material status of tangible computing and the IoT.
Despite their advantages, it can be argued that tangible computing and the IoT have never been designed to function as independently as screen-based devices do, because they have been designed in a world where screen-based devices already simplify the networking and management of devices. For this reason, I now investigate how interactions outside of screen-based devices can be as ubiquitous as interactions on-screen, by exploring the specific area of whole-body interaction, which allows people's own bodies and surrounding environments to act as the primary surfaces for people's activities through digital devices.
Whole-body interaction
Whole-body interaction [26] is a specific subset of the ubiquitous and tangible computing explored already. Whole-body interaction involves both input from and feedback through: physical motion; the normal five senses plus the senses of balance and proprioception; cognitive state; emotional state; and social context [26, p.1]. The central tenet of whole-body interaction is to utilise a greater range of human abilities for the use of digital media.
Whole-body interaction is particularly compelling for my research because it examines how people can avoid relying on screen-based devices. I explore ahead how whole-body interaction is intrinsically grounded by people themselves, or representations of them, in their surrounding environments.
I have already touched on works which involve greater use of the body in Ubiquitous and tangible computing through the use of physical objects and surfaces. I build upon this by reviewing how whole-body interaction:
- Enables Bodily interfaces that allow people to use their own bodies as interfaces for digital media
- Utilises, and could utilise, Augmented reality for engaging the body with virtual objects and environments
- Utilises Force feedback to allow people to feel virtual objects as if they were physical
- Enables Whole-body engagement through areas of the body not normally engaged through tangible computing or screen-based devices
Following the review of these areas, I suggest a Combined technology approach centred around augmented reality to support people's Virtual identity in order to ground interactions through digital media.
Bodily interfaces
I recognise bodily interfaces in whole-body interaction as interfaces that use people's bodies as the primary surfaces for digital media interactions. These interfaces present an alternative to the interfaces presented on screen-based devices or the surfaces of physical objects. I classify bodily interfaces into:
- Body-shadow interfaces, which use the outline (or shadow) of people's bodies as an information space on another surface
- On-body interfaces, which directly use the body as an information space or tool
Body-shadow interfaces
Body-shadow interfaces rely on input from people's hands and arms, with their bodies projected on sharable surfaces.
- VIDEOPLACE [50] [51] [49] by Myron Krueger et al. (1985–1991), shown in Figure 3.9, demonstrates how people's bodies can control large projected interfaces and also be interfaces in themselves for communication. This was achieved by mirroring outlines of people's bodies (or shadows), at any scale, and placing them in virtual environments where people's hands could be used to virtually draw and type, and to move, swing and grab objects. VIDEOPLACE allows people to engage their bodies for communication with others, engagement with agents, and use of projected interfaces.
- Body-Centric Interaction Techniques for Very Large Wall Displays by Shoemaker et al. (2010) [91], shown in Figure 3.10, uses the shadow of people's bodies as an interface on a shared display. The body can be used to store tools and information, and exchange and access information. Virtual tools can be taken out of a pocket [91, 01:30], files can be shared [91, 02:24] and people can extend their shadows to grab objects that are out of reach [91, 00:55].
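The outline-mirroring these works rely on can be approximated today with commodity computer vision. The sketch below is a minimal illustration, assuming a fixed camera and the OpenCV library; it stands in for the purpose-built vision systems of the works above rather than reproducing any of their actual implementations.

```python
import cv2

# Rough body-'shadow' extraction via background subtraction, in the
# spirit of body-shadow interfaces. Assumes a fixed camera at index 0.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    mask = subtractor.apply(frame)          # foreground: the moving body
    mask = cv2.medianBlur(mask, 5)          # suppress speckle noise
    contours, _ = cv2.findContours(
        mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    shadow = frame.copy()
    # Fill the detected outlines to mirror the body as a flat shadow,
    # which a projector could then place on a shared surface.
    cv2.drawContours(shadow, contours, -1, (0, 255, 0), thickness=-1)
    cv2.imshow("body shadow", shadow)
    if cv2.waitKey(1) & 0xFF == 27:         # Esc quits
        break

camera.release()
cv2.destroyAllWindows()
```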
On-body interfaces
On-body interfaces rely on input from people's hands and arms, with interfaces projected on their bodies.
- Armura [42] by Harrison et al. (2012), shown in Figure 3.11, takes input from the movement of people's hands and arms and projects information and interfaces onto them, allowing people to browse graphical interfaces and virtually draw on their hands.
- Imaginary Interfaces [39][38] by Sean Gustafson (2010–2013) demonstrates how people can gesture in mid-air or operate memorised interfaces mapped on their hands. Unlike Armura, no visual feedback is provided. Gustafson shows how people can create and edit drawings in mid-air and learn unknown interfaces with audio feedback.
Both body-shadow and on-body interfaces describe valuable ways of centring digital media interactions on the body. With or without visual feedback, it is possible to use the body as a container for interactions, allowing interactions to be attached to the body in shared spaces. However, most of the works are constrained to set areas by their use of projectors. This work could be enhanced by breaking the constraints of projectors and screens with augmented reality.
Augmented reality
Augmented reality allows the projection of virtual objects among physical objects. Augmented reality is generally presented through screens; however, Augmented reality see-through devices, explored ahead, show how wearables can make augmented reality portable. This advancement would alleviate the dependence on screens and projectors seen in the bodily interfaces explored.
Augmented reality for whole-body interaction takes input from the body and surrounding environment to create and position virtual objects, and uses the body as a trackable surface for wearable interfaces, allowing augmented reality to replace the screens used in bodily interfaces.
Virtual objects
Augmented reality allows the possibility of creating and manipulating virtual objects in physical dimensions.
- T(ether) [53] by Lakatos et al. (2014), shown in Figure 3.12, demonstrates a proof-of-concept for how virtual objects of physical dimensions can be manipulated in an augmented reality. The system is supported by tablet computers that act as windows to a shared augmented reality space. People wear gloves that enable the collection of information for sculpting augmented reality objects on-screen.
The need to carry a screen-based device to view virtual objects presents a major issue: the interactor must hold and operate a device in order to manipulate objects. Wearable augmented reality, with virtual wearable interfaces, would allow the possibility of freeing the hands as in bodily interfaces.
Wearable interfaces
I use the term 'wearable interfaces' in the context of augmented reality to refer to design concepts that can transform people's hands and surrounding area into interfaces, as touched upon in bodily interfaces.
- Project North Star [60] by Leap Motion, Inc. (2018), shown in Figure 3.13, shows how interfaces might extend from the sides of people's hands [59] and be controlled in mid-air [58], with the possibility of grasping and moving 3D objects [59, 00:55–01:00]. The same fundamental hand-tracking technology has been used to show engagement with 3D objects in virtual reality [31]. This is a compelling series of interactions that visually mimic direct interaction with physical objects.
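To suggest how such hand-anchored interfaces might be positioned, the sketch below keeps a virtual panel at a fixed offset from a tracked palm each frame. The vector type, the offset and the tracked values are my own simplifications for illustration; they are not Project North Star's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec3:
    """Minimal 3D vector; stands in for a tracking system's own types."""
    x: float
    y: float
    z: float

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def scaled(self, s: float) -> "Vec3":
        return Vec3(self.x * s, self.y * s, self.z * s)

def panel_position(palm_centre: Vec3, palm_side: Vec3,
                   offset_m: float = 0.08) -> Vec3:
    """Anchor a panel a fixed distance off the side of the hand, so the
    interface follows the hand rather than occupying a fixed screen."""
    return palm_centre + palm_side.scaled(offset_m)

# One frame of hypothetical tracking data: palm at roughly chest height,
# with a unit vector pointing out from the little-finger side of the hand.
print(panel_position(Vec3(0.1, 1.2, 0.4), Vec3(1.0, 0.0, 0.0)))
```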
Whole-body engagement
The interfaces previously explored in augmented reality and bodily interfaces focus exclusively on the use of hands and arms. Whole-body interaction allows the possibility of engaging the whole body through large surfaces designed to register input from other parts of the body. This expands people's range of expression when interacting through digital media. Based on my observations, whole-body interaction can allow what I term whole-body sensing, for registering movements and projecting information on bodies, and floor-based sensing, for registering footsteps and the movement of objects on floors.
Whole-body sensing
Whole-body sensing devices allow the form and positioning of the body to be used for digital media interactions [26, p.140].
- Wall++ by Zhang et al. (2018) [104], shown in Figure 3.14, senses multiple touch points on walls and whole-body movements near walls. This system could complement the interfaces explored in augmented reality and bodily interfaces to interact with objects and mirror people's bodies on surfaces.
- By sensing the forms of bodies, the body can act as a canvas for digital media interaction, as suggested by Hoang et al. (2018) [44] and shown through Augmented Studio [45] by Hoang et al. (2017), shown in Figure 3.15. This work shows how anatomical information can be projected on bodies for educational purposes, allowing individuals to learn more about their own anatomy, look at the anatomy of others or simultaneously understand their own anatomy while participating with someone else [44, pp.259–260]. The authors suggest people had a stronger connectedness with the information because of the tight coupling between the anatomical visuals, bodies and shared experiences [44, pp.260–261]. The projection method used does not provide as much detail as a screen-based representation [44, p.260], yet designers can imagine augmented reality correcting this technical limitation. This work demonstrates the strong potential of coupling information to people's bodies where it is relevant.
Floor-based sensing
Building on whole-body sensing [26, p.140], I use the term 'floor-based sensing' to refer to distinct works that allow people's steps and engagement with objects on floors to be captured for digital media interactions.
- Multitoe [5] by Augsten et al. (2010), shown in Figure 3.16, applies multiple touch point sensing to floors and recognition of individuals through shoe sole patterns. The authors explored how the work can allow individuals to access a keyboard interface on floors [5]. This kind of interaction could be used in conjunction with activities that involve Manipulable physical and virtual surfaces and objects to allow a person to continue using their hands and use their feet to activate or deactivate a particular function, tool or view.
- Kickables [90] by Schmidt et al. (2014), shown in Figure 3.17, uses floor-based sensing for tracking objects. The work shows how kickballs can be used as part of information representations and interfaces [90]. Combined with works like Multitoe [5] by Augsten et al. (2010), interfaces on floors could incorporate a physical object like a kickball to make interfaces stand out through sight and feel. This can be likened to a floor pedal for a sewing machine, which can be found by feel alone.
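To make the floor-pedal idea above concrete, the sketch below toggles a tool whenever a footstep lands inside a rectangular floor region. The coordinates, region and callback are hypothetical, assuming a Multitoe-like sensing floor that reports footstep positions in metres.

```python
# A virtual 'floor pedal': a footstep inside the region toggles a tool,
# leaving the hands free for manipulating surfaces and objects.
# Coordinates are assumed to be metres in a sensing floor's frame.

PEDAL = {"x": 1.2, "y": 0.4, "width": 0.3, "depth": 0.3}
tool_active = False

def on_footstep(x: float, y: float) -> None:
    """Called by the (hypothetical) sensing floor for each footstep."""
    global tool_active
    inside = (PEDAL["x"] <= x <= PEDAL["x"] + PEDAL["width"]
              and PEDAL["y"] <= y <= PEDAL["y"] + PEDAL["depth"])
    if inside:
        tool_active = not tool_active
        print("tool is now", "on" if tool_active else "off")

on_footstep(1.3, 0.5)   # lands on the pedal: tool switches on
on_footstep(0.2, 0.2)   # lands elsewhere: no change
```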
Force feedback
Force feedback provides the physical sensation a person would expect from a physical action like holding or moving an object without the object existing physically [64]. While physical feedback in tangible computing is standard, force feedback is a missing element of whole-body interaction.
- A discreet method of applying force feedback involves wearable electrodes being used to transmit force feedback, as shown in Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation [64] by Lopes et al. (2018) in Figure 3.18. The work shows how force feedback can be communicated to simulate the movement of virtual furniture [64, 00:13–00:40] or repurpose a common physical object alongside force feedback to adjust a setting dial [64, 01:00–01:20]. People might not be able to scale virtual mountains with force feedback underneath their feet. However, it can simulate some common physical feedback scenarios that can be applied to a range of virtual objects or interfaces.
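At the heart of such systems is a mapping from a virtual force to a safe stimulation level. The sketch below is my own minimal illustration of that idea, not Lopes et al.'s implementation; the normalised force input and the safety ceiling are assumed values.

```python
def ems_intensity(virtual_force: float, safety_ceiling: float = 0.6) -> float:
    """Map a virtual object's resistance (normalised 0..1) to an EMS pulse
    intensity, clamped well below the device maximum for safety."""
    clamped = max(0.0, min(virtual_force, 1.0))
    return clamped * safety_ceiling

# As a virtual drawer stiffens near the end of its travel, the wearer's
# arm muscles would be stimulated proportionally harder.
for force in (0.1, 0.5, 0.9, 1.4):   # 1.4 is out of range and gets clamped
    print(f"force {force:.1f} -> intensity {ems_intensity(force):.2f}")
```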
Combined technology approach
The combination of augmented reality, whole-body engagement and force feedback could mean people only need wearable devices for digital media interaction, in contrast to the Dependence on many devices explored in tangible computing. Wearable devices can be used to hold virtual objects and display interfaces on and off the body in bodily interfaces, as explored in augmented reality. These applications can be extended to whole-body engagement, as explored. The addition of force feedback can provide physical feedback to people for engaging with virtual objects.
- In imagining a place for this technology beyond proof-of-concept demonstrations, Mirrorworlds [68] by Keiichi Matsuda and Anna Mill (2018) provides a speculative vision for whole-body interaction through augmented reality. The work Mirrorworlds Concept: The Architect [57] by Leap Motion, Inc. (2018), shown in Figure 3.19, imagines how spaces can be transformed to allow collocated communication. The work Mirrorworlds Concept: Channels of Perception [56] by Leap Motion, Inc. (2018) also shows how people can completely change their surrounding environment. Mirrorworlds builds upon Physical Telepresence [63] by Leithinger et al. (2014), explored in Manipulable physical and virtual surfaces and objects through tangible computing. The clear benefit of Mirrorworlds is that people could one day use wearable devices instead of bearing the burden of dealing with physical platforms and devices in fixed locations.
Virtual identity
Virtual identities are an implicitly applied or neglected part of the design of bodily interfaces and of the application of augmented reality, force feedback and whole-body engagement. Virtual identities ground people's presence in shared whole-body interaction environments with some kind of visual identification to distinguish one another. Some examples have applied a form of virtual identity to distinguish multiple people; however, many of the examples shown are technical proof-of-concept demonstrations that have not factored in virtual identities. Imagining shared environments enabled by a combined technology approach requires a deliberate focus on the design and application of virtual identities for people. I now explore how virtual identities have been applied in whole-body interaction.
I call attention to three predominant ways of applying virtual identities in whole-body interaction:
- Body-shadows, which mirror the outlines of people's bodies on a shared surface
- Contact outlines, which highlight regions where a person is making contact with a shared surface
- Virtual clothing, which acts as virtual overlays people can wear
Body-shadows
Body-shadows mirror the outlines (or shadows) of people on shared wall surfaces in order to contain interfaces and distinguish people. They are explored in bodily interfaces.
- VIDEOPLACE by Myron Krueger et al. (1985–1991), as previously explored, uses coloured shadows of varying scale to identify people [51], although this serves not the utilitarian purpose of identifying who people are but rather provides visual contrast.
- Body-Centric Interaction Techniques for Very Large Wall Displays [91] by Shoemaker et al. (2010), as previously explored, uses the shadows of people to contain objects, allow the transfer of files and extend to out-of-reach areas. The shadows do not serve to identify individuals, because people must stand in front of a projection for the shadows to appear.
Contact outlines
Contact outlines highlight regions where people are making contact with a shared surface. These outlines have been used in whole-body engagement where people cannot be highlighted on walls.
- Multitoe [5] by Augsten et al. (2010), as previously explored, applies a unique name and coloured footprint to each surface people walk on. This idea could be extended to assist people to find others, if a path is shared, or people could privately reclaim information about their own paths for navigation.
Virtual clothing
Virtual clothing consists of visuals projected onto people, including wearable interfaces and information. These visuals have been explored in bodily interfaces and in the application of augmented reality and whole-body engagement for whole-body interaction.
- Augmented Studio by Hoang et al. (2017), as previously explored, allows human anatomy to be viewed on bodily surfaces at 1:1 scale [45].
- Armura by Harrison et al. (2012), as previously explored, projects information on the body that is easily identifiable to other people [42].
- Choreomorphy [81] by El Raheb et al. (2018), shown in Figure 3.20, explores the use of virtual clothing and effects for choreography captured in virtual reality. The virtual clothing presented in Choreomorphy takes on an expressive role, rather than having just a utilitarian purpose.
Virtual clothing can be likened to the utility of physical clothing for identification, storage and expressiveness. Paired with augmented reality, the interfaces and information presented through virtual clothing could be as ubiquitous as physical clothing. Virtual clothing has the potential to extend what we already do with physical clothing. Speculative visions of whole-body interaction in augmented reality have touched on the use of ubiquitous virtual identities through virtual clothing, in particular, Mirrorworlds [68] by Keiichi Matsuda and Anna Mill (2018) and Hyper-Reality [67] by Keiichi Matsuda (2016).
- Mirrorworlds [68] by Matsuda and Mill (2018) shows how people and their environments can be transformed in order to accomplish focused tasks. The work applies virtual identities that suit the tasks at hand. One setting presents students exploring how the water cycle works on a virtual environmental landscape at room scale in their classroom, as shown in Figure 3.21. Another setting shows the collocation of an office and a live medical operating theatre, as in Figure 3.22. In the classroom setting, students have virtual clothes and distinct virtual helmets. In the medical setting, medical imaging is applied on and extends from the patient, while colleagues and instruments are highlighted.
- Mirrorworlds [68] by Matsuda and Mill (2018) highlights how virtual identities can benefit whole-body interaction: the identities allow people involved in shared settings to focus on critical information in situ without distraction. This provides a direction designers can build upon to blend people into imagined environments.
- Hyper-Reality by Keiichi Matsuda (2016) [67] offers a glimpse into how virtual identities can be used to accomplish a task. Hyper-Reality is a concept film of a dystopian augmented reality future—the environment is oversaturated with superimposed visuals that are deeply integrated into people's day-to-day practices. One scene reveals a thief stealing a valuable identification implant from the film's protagonist, shown in Figure 3.23. The thief dons a ghostlike virtual identity that conceals their own identity to accomplish the task. This is the only moment in the film when a full virtual identity is donned by a character. Despite the chilling nature of the scene, it demonstrates how virtual identities can be expressively powerful while also serving as a tool.
Explorations of donning virtual identities through costume, as shown in these speculative works, remain limited in practice in the whole-body interaction field. The explorations shown only scratch the surface of what might be possible through virtual identities using a Combined technology approach, as explored. There is a need for a detailed exploration of how virtual identities might ground digital media interactions that are not dependent on screen-based devices.
Conclusion
Looking beyond screen-based devices in this chapter has led to investigating outcomes of the Material Turn in HCI. The Material Turn presents a focus on the materiality of interactions—or how materials, whether physical or digital, come together to support people's activities. In exploring ubiquitous and tangible computing I found beneficial ways in which physical and virtual materials could come together to support people. However, in practice this computing relies on screen-based devices for management. Investigating the specific area of whole-body interaction reveals how interactions can take place on people's bodies and in their surrounding environment through a speculative combined technology approach based in augmented reality. The role of people's virtual identity in augmented reality in grounding interactions is a crucial and underexplored area that requires further work.
Computational Costume design
In Designing for a wider range of interactions beyond the screen, I reviewed how digital media could be designed with greater consideration of people's abilities and surrounding environments. This exploration revealed Ubiquitous and tangible computing, which allow people to deal with a range of manipulable and ambient physical and virtual objects. However, as this kind of digital media moves from research to the mainstream, it is dependent on screen-based devices for its operation. To remediate this dependence, I reviewed the specific area of Whole-body interaction. I showed how people might use a range of technologies, primarily based in augmented reality, to support virtual identities to ground interactions through digital media.
Through the design of Computational Costume proposed in this research, I develop the application of virtual identities in a speculative augmented reality to ground interactions through digital media.
Background
There are several perspectives which need to be acknowledged in the development of Computational Costume:
- Resolving the limiting perspective of ubiquitous computing through Hyperreality
- Understanding ethical considerations concerning augmented reality through Dark patterns
- Imagining probable technologies and scenarios through Speculative design
Hyperreality
Hyperreality, as discussed by Leonardo Bonanni in 2006 in the context of HCI, suggests transforming people's physiological perception of space [12], in contrast to ubiquitous computing, which presents a proximate future that never arrives [9]. The problem with ubiquity can be understood by considering that today's proliferation of internet-connected screen-based devices can be claimed as ubiquitous, just as future augmented reality headsets capable of presenting virtual objects ubiquitously could be. The trajectory of ubiquity says nothing certain about the abilities it enables, so the proposition holds little design value: it is not particularly useful to think of computing as ubiquitous or not, but rather to consider which design trajectory of ubiquity designers should take. Hyperreality is useful for imagining digital media interaction that is an extension of people's physical and social realities. It is a more descriptive perspective than ubiquitous computing because it shifts designers' imagination from computers that are ubiquitous to a reality that is in some capacity enhanced through computers.
As with ubiquitous computing, there is no concrete technological approach for pursuing a hyperreality, and to an extent this also presents a proximate future. Bonanni's hyperreality is achieved through ubiquitous and tangible computing. Bonanni suggests that augmented reality only overlays the world with useful information [12, pp.130–131] and is more cognitively intensive to use, requiring focused attention on the task at hand [12, p.131]. However, Bonanni cites an example of augmented reality where the application was poor, rather than the technology being deficient. I have explored how augmented reality would be more beneficial than ubiquitous and tangible computing. While the technology choices under a hyperreality may vary, the perspective ultimately speaks of an enhanced reality.
Baudrillard's hyperreality, described in 1981, which inspired Bonanni's perspective, spans digital and analogue media. Hyperreality originally carried a negative connotation, as the generation by models of a real without origin or reality [8]. Bonanni describes Baudrillard's definition as places that feel more real than the real world by blending an existing environment with simulated sensations [12, p.130].
Hyperrealities are found everywhere, from works of fiction such as books and games, to paintings, advertisements and theme parks. Their applications can be quite innocuous, intended to entertain or teach. Yet they can mislead, promoting unrealistic images that encourage harmful attitudes or behaviours. For instance, drama television shows may inspire a longing for unrealistic ideals of romance or body image whose direct pursuit is often counterproductive to understanding and overcoming issues of self-esteem. As with any media, it is up to everyone to recognise and act on issues as they arise. This applies to how a hyperreality through Computational Costume would be managed.
The Capabilities of Computational Costume described ahead are based on supporting a virtual and physical hyperreality. People's perception of their surrounding environment is transformed through vision, sound and touch.
Dark patterns
Hyperreality and augmented reality, as proposed to enhance people's whole field of view, are powerful. They require strong ethical considerations around how and what images are presented. To tackle this issue, we can look at design patterns today that are coercive—such that they encourage behaviours outside of people's intentions. An example involves people unwittingly providing private information that is used to generate content they are more likely to engage with. In certain situations this can run counter to a person wanting or needing to concentrate on more important activities. Such design patterns are known as dark patterns [14][35]. An awareness of these patterns allows designers and audiences to recognise them and avoid them if they choose to.
Dark patterns, originally identified by Harry Brignull in 2013 [14], are design patterns that coerce people into culturally undignified situations or behaviours that are not beneficial for them. Gray et al. (2018) describe how interfaces today can nag people into performing an action, obstruct people from performing an action, disguise relevant information, privilege some actions over others and force particular actions [35]. Greenberg et al. (2014) go further to raise issues about privacy in physical public spaces, with reference to fictitious and real examples in film and advertising campaigns where individuals receive targeted advertising from surfaces that track them [36].
- Hyper-Reality by Keiichi Matsuda (2016) [67] provides an arresting insight into a speculative augmented reality overrun by persistent advertising and incentives for consumption. The work presents worst case scenarios for a dystopian speculative augmented reality that is overrun with dark patterns.
Dark patterns are the foremost consideration of the Computational Costume designs presented through Design scenarios ahead. The work respects people's privacy and tactfully presents virtual wearables and objects to support people.
Speculative design
Today's augmented reality technology does not support the design of Computational Costume, which requires interactions to be grounded by people's virtual identity through whole-body interaction. For this reason I have engaged a speculative design process that divides Computational Costume into the probable technology, in Ergonomics and technology review, and the speculative Design scenarios which that technology supports.
Speculative design serves as a broad framework for creating design ideas based on forecasts of the future to stimulate discussion. Anthony Dunne and Fiona Raby in their book Speculative Everything [23] discuss the role of speculative design proposals in opening new perspectives on the challenges we all face. They argue that such designs add onto the public's vision of reality, challenge it and provide alternatives to it [23, p.189].
Computational Costume should be accepted as a provocation for what could be achieved, rather than a prediction of what is likely to eventuate. Supporting virtual identity is tuned to replacing dependence on screen-based devices. Another design option could include diminishing reliance on technology in order to help people cultivate manual skills such as freehand drawing or navigation without digital location tracking. For Computational Costume, speculative design is an experiential vehicle for developing, reflecting upon and evaluating ideas without investing in technology development.
The complete Design setting for Computational Costume presented ahead is a speculative design. The design is intended to provide the groundwork for developing new digital media that is not dependent on screen-based devices. Design scenarios are explored and presented through a range of lo-fi physical media as shown in Computational Costume prototyping and presentation.
Design setting
As covered in the preceding Background, Computational Costume is a speculative design of a hyperreality which avoids dark patterns. This informs the speculative design setting which Computational Costume's Design scenarios ahead are built upon. The speculative design setting for Computational Costume consists of two areas:
- The design of virtual wearables and objects for whole-body interaction set in a speculative augmented reality; this is detailed ahead in Capabilities
- The forecast of technology to support the speculative augmented reality; this is detailed ahead in Ergonomics and technology review and Hardware design
The two areas of the speculative design setting inform one another. The augmented reality capabilities influence the selection of technology, and the forecasts of technology serve as a guide to inform possible designs through augmented reality. Without this combined investigation, it would not be possible to lend credence to the likelihood of the speculative design.
Capabilities
Computational Costume, as a hardware and software design, offers the ability to engage with virtual wearables and virtual objects that may also have physical qualities:
- Virtual wearables are analogous to bodily interfaces and virtual identity, as previously explored in Whole-body interaction.
- Virtual objects are analogous to manipulable physical and virtual surfaces and objects, which engage enhanced manual processes and ambient perception in tangible computing, as previously explored in Ubiquitous and tangible computing.
Together, virtual wearables and virtual objects can be regarded as an esemplastic hybrid of physical and virtual objects. They fit into the vision of a cohesive and transformative Hyperreality. I will collectively term hybrid physical and virtual objects esemplastic objects.
Esemplastic objects
The term 'esemplastic' derives from Ehn and Linde's (2004) ATELIER design research project, discussed in previous sections. The authors seek the esemplastic unification of place through the appropriation of space, configurability of artefacts and place-making games [25]. I use the term 'esemplastic objects' to refer to objects that combine the qualities of virtual and physical materials such that, with adequate technology, the materials are neither virtual projections nor physical objects. Rather, physical and virtual qualities come together into a consistent material. As an example, in a speculative augmented reality a person could engage with a portable physical object whose form transforms virtually.
Based on human experiences of physical and virtual objects today, I highlight where esemplastic objects in Computational Costume differ. The points below can be regarded as the constraints on Computational Costume's technical capabilities that serve as a springboard for possible designs:
- Sight: esemplastic objects are visible within people's complete field of view, within and beyond people's spatial proximity and time. There is no clipping of virtual objects as on a screen, and no limited field of view from a headset or glasses. Additionally, objects with an attributable location (e.g. geolocation) and timestamp can be accessed from anywhere in space and time (present and past), regardless of proximity.
- Haptics: esemplastic objects can utilise physical objects or the human body as a physical surface for virtual augmentation. For purely virtual representations without a physical surface, artificial force feedback (detailed ahead in Force feedback devices) can simulate physical sensations of tactility and force where a physical object cannot.
- Hearing: esemplastic objects can be heard just like physical objects and environments, although without the limitations of proximity associated with physical objects and environments.
- Movement and access: esemplastic objects can be accessed from any time or location, with immediate control over access and visibility. In addition, movement through information can be guided through textual/visual search and object/feature recognition. Movement can also be guided by the application of geometries such as shapes, grids and paths for applications from drawing to navigation.
- Communication: esemplastic objects can allow communication between different physical locations. In addition, artificial agents could substitute for interaction with people.
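To summarise the movement, access and communication constraints above in concrete terms, the sketch below models an esemplastic object as a record carrying a place, a time and an access list, with a query that ignores the viewer's own proximity. All names and fields are my own illustration, not a specification of Computational Costume.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Iterable, Iterator, Optional, Set

@dataclass
class EsemplasticObject:
    """A hybrid physical/virtual object with an attributable place and time."""
    name: str
    latitude: float
    longitude: float
    created: datetime
    visible_to: Set[str] = field(default_factory=set)  # immediate access control

    def accessible(self, person: str) -> bool:
        return person in self.visible_to

def objects_at(objects: Iterable[EsemplasticObject], person: str,
               lat: float, lon: float, radius_deg: float = 0.001,
               since: Optional[datetime] = None) -> Iterator[EsemplasticObject]:
    """Retrieve objects by place and time regardless of where the person
    currently stands; only the access list limits what is returned."""
    for obj in objects:
        near = (abs(obj.latitude - lat) <= radius_deg
                and abs(obj.longitude - lon) <= radius_deg)
        fresh = since is None or obj.created >= since
        if near and fresh and obj.accessible(person):
            yield obj
```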
The suggested capabilities of esemplastic objects listed above carry conceptual and technical drawbacks:
- Conceptually: the experiences listed above are not intended to fill in for all human senses and faculties. The list is only a collection of experiences that I have deemed relevant for interactions in the near future based on the probable direction of technology today. For example, experiences of pain and temperature have been excluded. However, this does not discount their technical viability or usefulness in the future.
- Technically: the supporting technologies require the use of wearable technologies that can be worn comfortably on the body. These technologies must be lightweight and able to be removed and cleaned easily for physical safety and hygiene. Because of size constraints, wearable augmented reality glasses today rely on environmental lighting (unlike illuminated displays). Also, wearable force feedback cannot convey the feeling of an object's mass.
By factoring in the trade-offs between technology and concepts, technologies can be designed to suit a functional conceptual ideal. The proposed technology cannot be so impractical that it will not fit an appropriate form factor in the future. For this reason the Design scenarios presented ahead remain realistically achievable on both conceptual and technical fronts. Thus, ahead I explore considerations of both ergonomics and technology.
Ergonomics and technology review
I now review existing and proposed form factors for wearable technologies that would in a probable future support the capabilities of Computational Costume, as covered. The technologies seek to simulate in as much detail as possible:
- Augmented vision through Augmented reality see-through devices
- Haptic feedback through Force feedback devices
- Sound reproduction through Portable sound devices
- Movement tracking, geolocation, networking, biometric authentication and computing through combined Sensing, networking and computing devices
From an ergonomic perspective, the wearable technology choices in Computational Costume should allow people to move as freely as possible. The technology should be as self-contained and modular as possible, so parts can work independently of any other system—or allow for some sensory channels only, covered briefly in Accessibility. Also, the technology should be easy to wear and easy to maintain. The range of technologies presented below have been curated according to these requirements in the proposed Hardware design.
Augmented reality see-through devices
There are three possible form factors for augmented reality vision through see-through devices:
- Smartglasses which present simple augmented reality projections such as notifications through a form factor which resembles conventional glasses
- Pendant-worn devices which present tracked projections on objects; while not technically see-through, these present visuals similarly to see-through devices
- Mixed-reality headsets which present tracked projections on objects and hand tracking through a headset form factor
These devices range in performance and wearability. They stand in opposition to augmented reality pass-through devices, which can encumber the wearer's movement. Pass-through devices capture vision and surface geometries and pass them to a display within virtual reality headsets. An example of this augmented reality can be found in the ZED Mini combined with the HTC Vive [93]. These devices are excluded from the hardware design detailed ahead because they offer greater performance at the cost of a comfortable form factor.
Smartglasses present the most attractive form factor, being light and self-contained. They fit augmented reality capabilities into a device that resembles regular glasses, although the compact form factor comes at the cost of performance. Smartglasses serve principally as assistive devices: by overlaying the wearer's field of view with graphics, they allow the wearer to send and receive messages, take phone calls, capture images and follow turn-by-turn navigation. Simple voice recognition and taps on the glasses allow input. Smartglasses are exemplified by devices such as Google Glass [34] by Google (2013), which began the trend; the trend survives today in devices such as the Vuzix Blade [100] by Vuzix Corporation (2018).
A future vision teases at how smartglasses might overtake today's mixed-reality headsets in both form factor and performance. The M3000 [99] is a speculative vision for smartglasses by Vuzix Corporation (2017) presenting a fully featured and compact monocular design. The device features an enhanced field of view with object and surface recognition for graphics, comparable in performance to today's mixed-reality headsets.
Pendant-worn augmented reality presents an alternative to see-through devices that is capable of projecting tracked graphics onto surfaces. This technology does not require the wearer to have glasses or a headset on, allowing some extra freedom of movement and a complete field of view, a preferable option when headgear is neither practical nor comfortable for all-day use. However, the technology does come at the cost of privacy, because graphics are projected onto shared surfaces for anyone nearby to see.
- The Portable Lumipen [75] by Miyashita et al. (2018) provides a functional vision of a pendant-worn augmented reality device with projection mapping and hand gesture recognition and pointing.
Pendant-worn augmented reality such as the Portable Lumipen presents two issues: the wearer's body cannot be projected on, and holographic images are not possible. Projection on a wearer might be possible if the device were used as a kind of handheld torch to illuminate graphics on the body. Also, to achieve holograms, the wearer's point of view needs to be tracked.
Mixed-reality headsets provide augmented reality vision with hologram-level graphics, surface and object tracking, sound, and input through hand gestures and voice control. These headsets offer a greater level of visual fidelity than smartglasses and pendant-worn devices. While these headsets are heavier than smartglasses, comfort can be increased by offloading computing to a tethered standalone unit which can be worn. Mixed-reality headsets include the HoloLens 2 [74] headset by Microsoft (2019) and the Magic Leap One [65] system by Magic Leap (2018). These devices differ only in form factor and in how they take input from the wearer's hands:
- The HoloLens 2 relies on a series of hand gestures such as a pinching gesture to make an air tap [87]. The HoloLens 2 can also be lifted up like a mask while still being worn.
- The Magic Leap One offers a similar set of hand gestures to the HoloLens 2, in addition to a dedicated controller with motion sensing and a trackpad. The form factor of the Magic Leap One is distributed across the Lightpack which performs computing, the Lightwear headset and handheld Control [65].
Mixed-reality headsets can vary in comfort. Preference goes towards the HoloLens 2, which offers a greater level of physical freedom. The headset's functionality is self-contained and the form factor is adjustable so augmented reality vision can be easily stopped by lifting up the lenses.
Force feedback devices
Force feedback devices provide the artificial sensation of interacting with a physical 3D volume when interacting with a virtual volume. I identify two methods for supplying force feedback: wearable electrical muscle stimulation (EMS) electrode pads and wearable counterweighted electric motors.
EMS provides a range of force feedback options across multiple areas of the hands and arms.
- The EMS force feedback system Adding Force Feedback to Mixed Reality Experiences and Games using Electrical Muscle Stimulation [64] by Lopes et al. (2018) demonstrates the possibility of feeling the force of moving virtual objects such as moving furniture, turning a dial, pushing a button and rolling a marble on a tray. Force feedback sensations are provided through EMS electrode pads that are placed in key areas on the wearer's wrists and arms to stimulate muscles, and thereby simulate the proprioceptive forces felt when interacting with physical objects. However, the EMS electrode pads are not applied onto the muscles in the fingers for more delicate physical operations. To overcome this, physical objects can be used as supports for virtual objects—such as turning a cup for a virtual dial and using a board for a balancing game [64].
The force feedback technology proposed by Lopes et al. (2018) is moderately comfortable to wear, requiring the application of electrode patches and an EMS power unit. The EMS electrode pads are connected to a portable EMS unit, which is in turn connected to a computer and HoloLens mixed-reality headset carried in a bag. Over time this hardware can be expected to shrink in size and weight. Also, the adhesive EMS electrode pads might one day be replaced with a specially made fitted garment embedded with electrodes, allowing easy application and removal, and connection to an EMS unit and computer.
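To make the proprioceptive mapping concrete, the conversion from a virtual object's resistance into electrode actuation can be pictured as a small control function. The following is a minimal sketch only, not the implementation of Lopes et al.: the channel layout, force ceiling and per-wearer safety calibration are hypothetical values for illustration.

```python
# Minimal sketch: convert the force a virtual object should exert into an
# EMS intensity on an opposing muscle, in the spirit of Lopes et al. (2018).
# The channel layout and calibration constants are hypothetical.

CHANNELS = {"biceps": 0, "triceps": 1}   # electrode pad channels on one arm
MAX_FORCE_N = 20.0                       # force mapped to full calibrated output
MAX_INTENSITY = 0.8                      # per-wearer calibrated safety ceiling

def ems_command(force_newtons: float, direction: str) -> tuple[int, float]:
    """Return (channel, intensity in 0..MAX_INTENSITY) that resists the motion.

    Pulling the hand inward is resisted by stimulating the triceps to extend
    the arm; pushing outward is resisted by stimulating the biceps.
    """
    channel = CHANNELS["triceps"] if direction == "inward" else CHANNELS["biceps"]
    intensity = min(abs(force_newtons) / MAX_FORCE_N, 1.0) * MAX_INTENSITY
    return channel, intensity
```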
Counterweighted electric motors can provide force feedback through wearable hand controllers. These kinds of devices can be regarded as less invasive, although they are a larger and heavier option that is concentrated in one area. In contrast, the weight of EMS electrodes on the hands and arms along with supporting EMS equipment in a carrybag is lighter and better distributed.
- The EXOS Wrist DK2 [28] by exiii (2018) can simulate force feedback by applying a physical counterforce to the wrist through actuating counterweights with electric motors. The device can simulate the feeling of resting a hand on a virtual object, in addition to the feeling of shooting particles or hitting particles that are in mid-air [27].
A reduction in the size of counterweighted electric motor force feedback devices could make them a viable and easier to manage alternative to wearing a garment with EMS electrode pads. However, EMS electrode pads already provide a discreet way to supply force feedback.
Portable sound devices
To create authentic esemplastic objects that blend the physical and virtual, the augmented reality see-through devices and force feedback devices presented require audio feedback and recording to complement the visual and haptic channels of perception. Portable sound devices on the market today provide insight into how audio can be heard and recorded with attention to wearers' comfort.
Wireless earphones/microphones provide convenient and compact audio listening and recording which adapt to the wearer's actions.
- AirPods [3] by Apple Inc. (2016) use an array of sensors to detect when the earphones are worn to control audio playback and to direct microphones to more clearly capture the wearer's voice. The earphones can also be stored in an enclosure which doubles as a battery pack to recharge the earphones.
For Computational Costume, an earphone-charging enclosure could be combined with a computing device.
Binaural hearing/recording provides a way for earphones to be permeable to ambient sounds in a surrounding physical environment for increased comfort.
- CS-10EM Binaural Microphones/Earphones [84] by Roland Corporation (2010) show how earphones, when worn, can use their position to simultaneously record and play back ambient sounds.
Designers can imagine how the functionality of binaural earphones could be applied to wireless earphones such as AirPods for increased comfort. The ability of these wireless earphones to detect the wearer's voice could be used to mute ambient sound recordings while the wearer is speaking, preventing them from hearing feedback of their own voice.
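The muting behaviour amounts to a simple gain rule on the ambient passthrough. The sketch below assumes hypothetical firmware hooks for voice detection; the hold time is an illustrative value to stop the passthrough fluttering between words.

```python
# Minimal sketch: mute ambient passthrough while the wearer is speaking so
# they do not hear their own voice fed back. Values are illustrative.

AMBIENT_GAIN = 1.0   # normal passthrough level
MUTED_GAIN = 0.0     # level while the wearer is speaking
RELEASE_MS = 300     # hold time before unmuting, to avoid flutter between words

def passthrough_gain(voice_detected: bool, ms_since_voice: int) -> float:
    """Return the ambient passthrough gain for the current audio frame."""
    if voice_detected or ms_since_voice < RELEASE_MS:
        return MUTED_GAIN
    return AMBIENT_GAIN

# Example: the wearer stopped speaking 100 ms ago, so passthrough stays muted.
print(passthrough_gain(False, 100))  # -> 0.0
```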
Bone-conduction transducers provide a means to listen to audio without blocking the wearer's ear canal for comfort.
- Google Glass augmented reality glasses by Google (2013) use a bone-conduction transducer [34] to supply audio to the wearer.
Merging augmented reality and audio reproduction could be convenient, but would remove the ability to use audio on its own through a standalone device. Some situations may warrant audio alone, for accessibility reasons or during physical exercise where glasses could fall off. A wireless earphone option could serve wearers who need only audio.
Sensing, networking and computing devices
For Computational Costume, augmented reality, force feedback and portable sound devices need to be brought together by computing which allows: interoperability between hardware components; necessary environmental sensing to recognise and track surroundings, objects and other wearers; authentication of wearers for wearer privacy and integrity of wearer tracking; and networking between wearers. A range of existing and proposed devices can help achieve this, including smartwatches, earphones and wearable motion- and surface-capture devices.
Smartwatches combine a range of motion and biometric sensing, networking and computing in very compact wearable devices.
- The Apple Watch Series 4 by Apple Inc. (2018) includes a suite of short- and long-range radios for networking and geolocation, vibrotactile haptics, microphone, speaker, small display, heart biometrics and motion sensors [4]. The position where smartwatches are worn allows easy access for heart biometrics and sensing physical activity from motions of the arm and the body as a whole.
The form factor of the smartwatch could serve a similar purpose to the auxiliary computing of Magic Leap One's Lightpack, previously explored in augmented reality see-through devices. However, designers might choose to only keep biometric sensing on the wrist and use a smartwatch-like form factor as a clip-on, handheld or pendant-worn auxiliary computer.
Earphones have the potential to contain some of the computing, motion-sensing and biometric capabilities of smartwatches. Head-motion capture and biometrics in earphones offer the possibility of offloading sensing and ambient sound from augmented reality headsets (previously explored in augmented reality see-through devices) to improve those headsets' form factor.
- The patent application Sports monitoring system for headphones, earbuds and/or headsets [80] by Prest et al. (2014) shows how earphones might receive head-motion data and biometric data including heart rate, as well as temperature and perspiration; see Figure 4.1.
In relation to Computational Costume, the earphones proposed by Prest et al. (2014) have the potential to act as an auxiliary motion and biometric sensing device, as well as a standalone device. The head tracking present in the earphones can replace or back up the head tracking available in augmented reality headsets. In addition, the earphones can collect biometrics and so replace a wrist-worn device, allowing the earphones to also sense whether they are being worn. If the system detects that only audio is being received by the wearer, it could allow visual and haptic information to be replaced with audio and an artificial agent to take commands from the wearer.
Computational Costume requires reliable wearable motion and surface capture information to map all surfaces available to a wearer and to take hand gestures as input. Mixed-reality headsets already incorporate this functionality. However, compact standalone motion and surface capture devices today allow greater fidelity. These compact devices signal what might be possible in the future.
- The Leap Motion Controller [55] hand-motion sensor by Leap Motion Inc. (2013) paired with augmented reality in Project North Star [60][59][58] by Leap Motion Inc. (2018) provides unparalleled interaction with virtual wearables and objects using the hands. The sensor is mounted onto an augmented reality headset to create a singular wearable unit for vision and motion and surface capture [61].
Designers could imagine how motion and surface capture could be supported through a device worn like a pendant, such as the Portable Lumipen, or clipped onto clothing, such as the Magic Leap One Lightpack, previously explored. Any small motion and surface capture device that must be on the head can remain in place, while a clip-on or pendant which can be stably affixed can house more advanced hardware.
Accessibility
The technology to support Computational Costume consists of various parts to transmit vision, audio and touch to wearers. As hinted in discussions of modularity thus far, in sensing, networking and computing devices, the various parts can be arranged in a modular fashion to increase accessibility to the different sensory channels available. When one part such as vision, haptics, sound, or motion and surface capture is known to be unavailable, other parts can activate and translate in its place. As an example, in situations where a wearer is vision-impaired or an augmented reality headset has no charge left, the system can activate earphones to render the other channels as audio, with commands taken through voice by an artificial agent or through hand gestures captured by a motion-capture device.
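As a way of picturing this modular substitution, the routing can be reduced to a table of preferred fallbacks per sensory channel. This is a minimal sketch under assumptions of my own; the channel names and fallback order are hypothetical rather than part of the hardware design.

```python
# Minimal sketch: when a sensory channel is unavailable (module absent, flat
# battery or wearer preference), route its information to a remaining channel.
# Channel names and fallback order are hypothetical.

FALLBACKS = {
    "vision": ["audio"],                   # narrate visuals via an artificial agent
    "haptics": ["vibrotactile", "audio"],  # central unit vibration, else audio cue
    "audio": ["vision"],                   # e.g. captions on the AR module
}

def route(channel: str, available: set[str]) -> str | None:
    """Pick the channel to carry `channel`'s information, or None if impossible."""
    if channel in available:
        return channel
    for alt in FALLBACKS.get(channel, []):
        if alt in available:
            return alt
    return None

# Example: the AR headset has no charge left, so vision is rendered as audio.
print(route("vision", {"audio", "vibrotactile"}))  # -> "audio"
```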
In Hardware design I discuss how the technologies presented here for Computational Costume might come together into a modular and accessible combination of devices.
Hardware design
The hardware design of Computational Costume builds upon the preceding Ergonomics and technology review. The hardware design proposed ahead supports the speculative capabilities of Computational Costume in accordance with wearers' comfort and accessibility to a range of sensory channels such as: vision, haptics, sound, and motion and surface capture. This imagining of a probable hardware design serves to lend credence to the viability of the Computational Costume's conceptual design in practice, as rendered in Figure 4.2.
Building directly upon the four classes of devices covered in Ergonomics and technology review, the Computational Costume hardware design comprises three modules which connect to a central unit:
- An Augmented reality module, which draws inspiration from augmented reality see-through devices
- A Force feedback module, which draws inspiration from force feedback devices
- A Sound module, which draws inspiration from portable sound devices
- A Central unit, which draws inspiration from sensing, networking and computing devices and permits the whole system's modular design
Augmented reality module
The augmented reality module can be imagined as a thickly framed pair of glasses that can be separated in half at the bridge into two monocular frames. The module can be worn as binocular eyeglasses or as a monocular lens. The design features:
- A bulge on the temples behind the end piece and hinge to house a small augmented reality projector which shines onto the lens; the projector aperture is accompanied by a microphone pinhole
- Apertures for a small camera and a circular infrared (IR) sensor in a vertical orientation on the end piece; the apertures face the environment in front of the wearer
- A bulge to house a bone transducer for sound at the temple tips; this area features a combined power button and contacts for a magnetic power and data connector
The design features the following hardware for:
- Vision: augmented reality projectors for each lens
- Sensing: head-motion tracking and visual object detection (to complement central unit motion and surface capture)
- Networking: short-range radio connection with central unit
- Modularity: monocular and binocular modes which are substitutable for an audio/earphones only mode
- Sound: x2 bone-conduction transducers and x2 microphones
- Power and data: combined on/off button and magnetically latched data/charging port on the temple tips of the glasses, with discreet battery on each side of the glasses
Force feedback module
The force feedback module can be imagined as a long-sleeve thermal shirt paired with a pocket-sized electrical muscle stimulation (EMS) power pack. This shirt fits closely to the skin and is worn under clothing. The shirt is tethered to the removable EMS power pack. The design features:
- EMS electrode pads that are woven into the arms of a long-sleeved shirt on the wrist, upper forearm, bicep and their opposing sides; the shirt features contacts for magnetic data and power connection on either side of the shirt's waistline to accommodate placement of the power pack in pants pockets
- An EMS power pack about the size of a pack of cards containing a battery and a combined power button and contacts for a magnetic power and data connector; the connector allows a data and power connection to the EMS shirt using a cable and charging of the power pack when not in use
The design features the following hardware for:
- Haptics: electrical muscle stimulation (EMS) pack that supplies power to an electrode garment with x12 electrode pads
- Networking: short-range radio connection between the EMS pack and central unit
- Modularity: can be substituted by basic vibrotactile haptics from central unit
- Power and data: combined on/off button and magnetically latched data/charging port on EMS power pack, with internal battery; a magnetic data/power connector on the electrode garment; a removable wire connector featuring a pinhole-sized LED connection-status indicator
Sound module
The sound module can be imagined as consisting of two wireless earbud earphones. The earphones are accompanied by a small charging case. The design features:
- Earphones shaped into an earbud form that nestles into the ear canal without bulging too far outside of the outer ear; the earphones feature a pinhole microphone aperture facing the wearer's environment
- An earphone charging case with two grooves to store and charge the earphones; the case features a combined on/off button and magnetically latched data/charging port, and is sized to accommodate the width and height of the central unit so they can be connected together seamlessly
The design features the following hardware for:
- Sound: speaker and microphone in each earphone for playback and recording of audio and live playback of ambient sound to compensate for the wearer's ears being blocked
- Networking: short-range radio for connection with central unit
- Sensing: head-motion tracking (as a backup to augmented reality module tracking) and voice sensing for noise reduction and muting ambient sound recording, as well as heart biometrics for health care and detecting whether earphones are being worn
- Modularity: can be used without the augmented reality module by connecting directly to the central unit; the earphones can be used together or on their own
- Power and data: earphone case with battery for earphone storage and earphone battery charging, with combined on/off button and data/charging port with LED signal indicator, allowing connection to central unit or charger; earphones and earphone case feature contacts for charging; earphone activation is triggered by removing earphones from their case and when wearing is sensed
Central unit
The central unit can be imagined as consisting of two square puck-like units connected via a belt which can be worn as a waist belt, pendant or sash. The unit facilitates front and rear motion and surface capture, identification of wearers to allow the projection of individuals' Computational Costumes and auxiliary computing for the augmented reality module, force feedback module and sound module. The design features:
- A primary puck worn facing the front of the wearer, roughly two-thirds the size of a regular square coaster with the thickness of a standard deck of cards. The unit features a large aperture for a wide-angle camera lens for image capture, and surface and motion capture. Above the lens aperture is an IR emitter in each top corner, to assist with surface and motion capture. Below the lens, there is a small IR emitter to act as a wearer identification beacon. The primary central unit features a small monochrome touch display on the rear to identify connected modules and see their status, along with basic setting options. Alongside the display there is a small fingerprint reader for wearer authentication. On either side of the unit there is a combined on/off button and magnetically latched data/charging port to allow connection to the rear unit through a belt connector. There is also a connector on the bottom side for connecting the sound module.
- A secondary puck worn facing the rear of the wearer, also square but smaller than the primary puck, while still accommodating the same wide-angle lens and IR beacon as the primary puck. The unit allows the capture of 360° images when paired with the front camera and the ability to recognise other wearers behind the wearer's field of view. On either side of the unit there is a combined on/off button and magnetically latched data/charging port to allow connection to the front unit through a belt connector.
- A two-piece belt which acts as a connector between the primary and secondary pucks. The belt features a repeating barcode pattern to co-facilitate the identification of wearers, alongside an IR identification beacon on the front and rear pucks and short-range radio identification. Each belt piece features magnetic data/power contacts on each end to allow connection between the front and rear pucks.
The design features the following hardware for:
- Identification: the belt allows visual determination of other wearers through tracking of its barcode pattern, paired with a digital-camera-visible IR beacon on the front and rear pucks of the central unit. This is complemented with short-range radio for further authentication of wearers; see the sketch after this list. Biometric authentication through the fingerprint reader on the primary puck allows wearers to authenticate themselves when putting the system on.
- Fitting: the primary and secondary pucks magnetically latch onto the adjustable belt connectors, to be worn as a waist belt, pendant or sash.
- Networking: short-range radio for connection with modules and other units. Long-range radio for wide area network (WAN) access to services and other units. Geolocation radios for geotagging data with location information.
- Sensing: IR sensing cameras and IR emitters for hand tracking, motion sensing, surface recognition and detecting the correct orientation of the pucks.
- Display: a monochrome touch display provides a headless mode to establish connections with modules, check central unit and module status, adjust system settings and access a backup interface when no modules are connected.
- Sound: speaker and microphone as backup when modules are unavailable.
- Haptics: vibrotactile motors in front and rear pucks as a backup when the force feedback module is unavailable.
- Power and data: combined on/off button and magnetically latched data/charging port on either side of the front and rear pucks, and the bottom of the front puck for the sound module. The front and rear pucks contain their own batteries and can share power between each other and with the sound module or force feedback module.
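The layered identification above can be read as a small agreement rule: a nearby wearer is treated as identified only when independent factors corroborate one another. The sketch below is an assumption of my own about how the factors might be combined, not a specified protocol; the identifiers and the two-factor threshold are hypothetical.

```python
# Minimal sketch: accept a wearer's identity only when at least two of the
# three identification factors (barcode, IR beacon, radio) agree.
# Identifiers and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Sighting:
    barcode_id: str | None   # decoded from the belt's repeating barcode pattern
    beacon_id: str | None    # decoded from the IR identification beacon
    radio_id: str | None     # identity claimed over short-range radio

def identify(s: Sighting) -> str | None:
    """Return the agreed identity, or None if ambiguous or unverified."""
    votes = [i for i in (s.barcode_id, s.beacon_id, s.radio_id) if i is not None]
    for candidate in set(votes):
        if votes.count(candidate) >= 2:
            return candidate
    return None  # without agreement, do not project this wearer's costume

# Example: the barcode is occluded, but the IR beacon and radio agree.
print(identify(Sighting(None, "wearer-42", "wearer-42")))  # -> "wearer-42"
```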
Conclusion
By imagining the speculative hardware design of Computational Costume, designers can take a liberating technology-agnostic approach to the whole design process. The hardware design covered serves as a probable guide for what software could achieve in the near future, based on hardware developments today and foreseeable advancements in wearable technologies for: augmented reality; force feedback; sound reproduction; and compact computing, sensing and networking. This informed design frees designers from the template of software interactions possible through today's technology. Also, speculative software designs that come out of this technology-agnostic approach can help set the benchmark for hardware development.
Design scenarios
Following on from the speculative design setting previously described, I present speculative design scenarios describing the engagement allowed by Computational Costume. The scenarios present the ways in which Computational Costume software could work for its wearers in imagined scenarios where people are not dependent on screen-based devices. Instead, wearers adopt whole-body virtual identities to ground practices through digital media.
Background
These speculative design scenarios exist to liberate designers from the constraints associated with today's wearable technology capabilities. The scenarios are constructed and presented using lo-fi physical materials and processes, covered in detail in the appendix Computational Costume prototyping and presentation, freeing designers from the need to create or build their own technology. Together, these features enable a range of designers, whether more technically inclined or inclined towards artistic craft, to focus their design skills on freely developing Computational Costume to address imagined scenarios. In addition, the scenarios created here are presented in a way that aims to be highly accessible to audiences, so they can experience and comment on the work.
Scenario-based design is a well-established design process in developing emerging technologies. Scenario-based design is 'a family of techniques in which the use of a future system is concretely described at an early point in the development process' [86]. The envisaged scenarios assist the development of a functional system by allowing designers to examine and challenge the fit between their design ideas and prototypes, and applicable scenarios. The process is comparable to design probes [69] for proposing alternative design scenarios to stakeholders (e.g. users or designers) to challenge established perceptions that influence design outcomes and assess the viability of designs. Computational Costume presents alternative interaction possibilities that can be shown to audiences and designers through scenarios for further development into functional designs.
I present four different Computational Costume designs which offer a range of scenarios. The designs seek to advance the development of a hyperreality filled with esemplastic objects that blur physical and virtual materials together. This is achieved through items such as wearable whole-body virtual identities. Also, in the face of such powerful technology, I propose designs that avoid coercive dark patterns to support people's privacy while interacting.
To summarise the iterative exploration and development of Computational Costume:
- A Cardboard poster and interface imagines the creation of, presentation of and engagement with esemplastic objects through a wearable hand/forearm interface, realised as a real-world conference poster design.
- Computational Costume v0 builds on the cardboard poster and interface by imagining the use of the whole body as a surface for object storage and communication in personal and workplace scenarios, alongside tools which allow people to engage with objects outside of their field of view and time. The work is presented on a mannequin.
- Computational Costume v1 iterates on Computational Costume v0 with more clearly defined and visually balanced costumes that fit a sequence of scenarios. The work is performed in front of an audience.
- Computational Costume v2 iterates on Computational Costume v0 and Computational Costume v1 by refining the presentation of a body-projected health record and associated data, along with details of engagement between multiple costume wearers and surrounding environments, for private and public communication through the body and world-projected navigation. The work is presented through a combination of exhibition and video.
It should be noted that the Computational Costume designs have been in part influenced by the continued refinement of the prototyping and presentation methods. Scenarios begin with designs that are sympathetic to a static presentation of mock virtual scenes. As ideas have grown, the final scenarios evolve to incorporate dynamic representations as film-making and video editing are adopted to communicate how mock esemplastic objects made from physical materials would act in actuality.
Cardboard poster and interface
To begin illustrating the potential of esemplastic objects in a speculative augmented reality, I created a mock virtual/physical (esemplastic) poster using cardboard, as shown in Figure 4.3, as well as an imagined wearable interface to facilitate the poster's creation, as shown in Figure 4.5. The cardboard poster itself was an encapsulation of the review and design concepts that inspired its creation—covered in Supporting practices with screen-based digital devices and Designing for a wider range of interactions beyond the screen.
Cardboard poster
The cardboard poster served a real purpose in a real scenario to present my research alongside an extended abstract at a conference research competition (see [70]). The design challenged the standard poster presentation scenario by presenting research content that inspired the viewer's imagination to see the poster as a direct product of the research. The poster allowed viewers to engage with it as if it were an esemplastic object in practice. The work allowed viewing and direct interaction through manipulable sections on the bottom area of the poster, in addition to inspiring viewers' imagination about how such an esemplastic sculpture would have been assembled by hand.
The cardboard poster presents a proof-of-concept for the direct hand creation and assembly of esemplastic objects, as shown in Figure 4.4. For the constructed poster (Figure 4.3), viewers were allowed to move pieces on the bottom section of the poster to explore the evolution of my early word processing application interface redesign, shown in Figure 3.1, from placement options on-screen to outside of the screen and on-body. However, in a full speculative augmented reality viewers might also be able to take copies of the whole poster, share them and even add their own touches. Through an imagined cardboard interface I indicate the kind of functionality that viewers would have access to in engaging with esemplastic objects like the cardboard poster.
Cardboard interface
To complement the proof-of-concept poster, I imagined and designed a wearable virtual interface mock-up, shown in Figure 4.5, that would allow the required functionality for dealing with esemplastic objects and the creation of the cardboard poster. The interface was inspired by my early word processing application interface redesign, shown in Figure 3.1, which I explored at the beginning of Designing for a wider range of interactions beyond the screen. The application interface redesign highlighted how menus could be arranged procedurally for different activities. With this in mind, a natural extension of the idea with respect to whole-body interaction and augmented reality was to allow people to wear interface options that would be useful to carry at all times.
The cardboard interface (Figure 4.5) consists of a menu accessible through bracelets and rings for the forearm and fingers. The bracelets closest to the body control the highest level features—the main interaction modes such as: explore, sculpt and capture. The rings on the fingers control the lowest level features of the preceding modes—in the example shown, these are the colour, quality, thickness and type of line for drawing in a 'brush' mode. The principle of high-to-low level option selection acts as an extension of controlled human bodily movement—where the most focused and detailed physical movements occur at the physical extremities.
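The high-to-low principle can be expressed as a nested menu keyed by body position: bracelets near the body select modes, and each finger's ring selects one of that mode's detailed options. The sketch below is illustrative only; beyond the modes and brush options named above, the menu contents are hypothetical.

```python
# Minimal sketch: bracelets closest to the body hold the highest-level modes;
# rings on the fingers hold the lowest-level options of the active mode.
# Contents beyond the modes and brush options named in the text are hypothetical.

MENU = {
    "explore": {},
    "capture": {},
    "sculpt": {
        "brush": {
            "index": "colour",
            "middle": "quality",
            "ring": "thickness",
            "little": "line type",
        },
    },
}

def ring_options(mode: str, submode: str) -> dict[str, str]:
    """Map each finger's ring to a low-level option for the active mode."""
    return MENU.get(mode, {}).get(submode, {})

# Example: in 'sculpt' > 'brush', the index-finger ring selects colour.
print(ring_options("sculpt", "brush")["index"])  # -> "colour"
```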
Material choice
The austere material choice for the cardboard poster and interface aims to discreetly echo the concerns of the Material Turn in HCI by drawing designers' attention to the materiality of interactions, not the materials and status attached to particular media and devices. Cardboard presented as a mock virtual/physical esemplastic medium is unbiased, or amaterial if you will, because it is not digital and it is not serving the usual purpose of cardboard for packaging. This choice presents a shift from established digital media constraints to rationalised speculative interaction design concepts.
Cardboard is also accessible to work and engage with, opening up design and prototyping processes for digital media to a wider audience of both designers and participants. This is explored further in the prototyping discussion ahead in Computational Costume prototyping and presentation.
Findings
The cardboard poster and interface presented the application of imagined esemplastic objects in practice through an amaterial approach. The poster challenged viewers' concept of the traditionally flat physical poster presentation format, intended for viewing only. The work offered viewers the ability to imagine how an esemplastic object could add more depth to physical and virtual media today.
To build upon the strengths of the work, the most significant features of the cardboard poster and interface which needed to be elaborated upon through further design were:
- Speculative design through amaterial imagination: specifically, the place for wearable materials, the use of colour and presentations that immerse viewers further into the imagined augmented reality
- The functionality of the imagined esemplastic medium: in particular, through virtual identity—only briefly touched on through the wearable cardboard interface
Computational Costume v0
Computational Costume v0 builds upon the functionality and amaterial imagination of esemplastic objects explored in the Cardboard poster and interface. Extending the cardboard interface, which was wearable on the hands and forearms, Computational Costume v0, as shown in Figure 4.6, describes how the rest of the body and additional tools could be used. Computational Costume v0 presents how miniaturised versions of objects similar to the cardboard poster might be stored on costumes. In addition, costumes and objects can be accessed through two tools: a map tool to navigate beyond a wearer's field of view; and a timeline tool to navigate beyond the present time.
At a public fashion exhibition, Computational Costume v0 presented how wearable virtual identities and tools could intertwine themselves with the activities of wearers. A mannequin donning multiple imagined costumes sought to present a range of scenarios all in one place, as shown in Figure 4.6. The mannequin held one half of the costume for the wearer's work life and another half for their personal life. The work and personal costumes showed how a virtual costume could replace the use of tools today for work and communication. Overall, the work attempted to illustrate as much potential in the concepts as possible, so that viewers might project their own experiences by standing alongside the mannequin.
Work costume
The work costume was a simple representation of a stop/go signal for a roadside construction worker, as shown in Figure 4.6. The half-costume was accompanied by a small virtual map to track the status of a colleague's signal and to communicate between one another to coordinate signals, as shown in Figure 4.7.
Personal costume
The personal life costume presented a shirt with personal effects as storage and decoration, as shown in Figure 4.8, alongside pants with a more utilitarian purpose for activity tracking and marking out a health treatment plan. The objects placed on the costume had a symbolic connection to areas of the body. For instance, the health treatment plan was targeted to the leg, as were the 'swimming goals', as shown in Figure 4.9. A timeline tool to track changes on the costume over time and access them complemented the work, as shown in Figure 4.10.
Findings
The overall assemblage of scenarios in Computational Costume v0 was didactic. The work was dependent on explanatory captions in describing the function of the costume. Members of the public took time to look at the work, but were not able to fully conceptualise the ideas in their mind until they received answers to questions they had about the work. To correct this, future costumes would be presented in a more realistic manner, similar to the cardboard poster and interface.
Computational Costume v0 filled in gaps not covered by the cardboard poster and interface which preceded it. This involved:
- The careful use of coloured materials to present identifiable whole-body virtual identities that are useful for particular situations
- Complementary tools to the costume to extend people's reach beyond their proximity
Successive designs refined the above two areas, in addition to refinements to the presentation style to help audiences conceptualise the ideas presented without assistance.
Computational Costume v1
Computational Costume v1 was designed to incorporate relatable experiences through a day in the life of a wearer. It was presented as a three-minute on-stage performance for a science communication competition; a re-enactment is shown in Figure 4.11. In this performance, I acted as the wearer in three situations marked by the three different costumes shown in Figure 4.12.
Computational Costume v1 builds upon the costumes and supporting tools established in Computational Costume v0. The performance begins with a personal costume used on public transport on the way to work. The costume is followed by a worksite costume which activates at the wearer's workplace. The performance concludes with a medical emergency costume which activates an on-body health record for use in an emergency. These costumes echo some of the features and purposes of the objects shown previously on the personal costume and work costume from Computational Costume v0. However, the presentation in Computational Costume v1 goes into more depth than Computational Costume v0, using what are known as endowed props [46] to imbue physical props with virtual functionality through acting, as I illustrate ahead.
Personal costume
The personal costume in Computational Costume v1 builds upon the purpose of esemplastic objects and the map tool presented in Computational Costume v0. The personal costume here is focused on presenting how the wearer might perform common activities such as reading and communicating through esemplastic objects and a map tool. Shown in Figure 4.13 is an object representing the train the wearer has boarded, with time and destination information which can be sent to another wearer through the map tool to communicate location and time of arrival. In the performance, the action is a seamless hand and arm movement between the wall of the imagined train and the map which contains links to other wearers.
An additional touch in the performance involves demonstrating how private reading could be made public to others, as shown in Figure 4.14. This action seeks to show how an act that is traditionally public through physical media such as books and newspapers can, by choice, become public once again through digital media, via an esemplastic object.
Worksite costume
Building on the work costume presented in Computational Costume v0 which showed how a costume would serve as a work tool, the worksite costume in Computational Costume v1 shows how a costume can be further intertwined with the practice of working. In the performance, the worksite costume activates at the wearer's workplace and indicates jobs which need to be done through a sign and directional arrow, as shown in Figure 4.15. The job can be affixed to the costume so it is visible to others.
The worksite costume demonstrates the utility of a virtual identity as a tool for an individual and a set group. The features of the worksite costume allow individuals or other wearers to set tasks as a direct extension of a context-aware uniform, allowing a seamless fit between the activities at hand and the information needed to coordinate and manage these activities. Visual objects can be re-purposed to signal intent, visible to other workers directly and through the map tool.
Medical emergency costume
The medical emergency costume builds upon a small part of the personal costume presented in Computational Costume v0 which reveals a medical record; see Figure 4.9. In the performance, the medical emergency costume is activated when the wearer is injured at the worksite. The medical emergency costume shows how a medical record could work across a series of scenarios, as shown in Figure 4.16, from marking out the area of injury to providing a surface which medical practitioners can access and leave critical information on. The costume allows the map tool to enable access to information on events that occurred before an emergency and to act as a direct line of communication between the injured wearer and their loved ones.
The medical emergency costume, along with the preceding worksite costume, provides another instance of contextually activated virtual identities. Again, the costume is a useful tool for allowing the coordination and management of information in a direct way. This begins with a clear alert to surrounding people that there is a problem at hand and that help has been called. There is also a meaningful connection between information and actions on the bodily surface: the wearer's state is presented on the body that is the source of the information, and a loved one can reach over a geographical distance to provide direct support as if they were physically nearby.
Findings
Computational Costume v1 builds substantially on Computational Costume v0 by providing greater clarity on the usefulness of contextually aware costumes and the ability to draw direct connections between surrounding objects and activities related to wearers.
Several aspects of the Computational Costume v1 solo presentation took away from the work's message. One of the competition judges mistook the work as being part of the quantified self movement for lifelogging, which involves collecting a range of data from individuals and presenting it. This happened despite the performance showing the ways in which the work was advantageous for communication. I suspect the misinterpretation was a product of presenting a range of scenarios on my own, where other people and the surrounding environment were imagined. For first-time onlookers, such a misinterpretation was not unreasonable when there were only three minutes to listen and watch while imagining a range of different surrounding environments and actors that had no physical representation. Additionally, the novelty of removing quick-release clothing gained audible attention from the audience. The mock effect was distracting, when the real effect would involve virtual clothes that change instantly.
Based on the findings, future Computational Costume ideas needed to build on:
- Clearly illustrating interaction with the surrounding environments and additional actors
- Producing mock effects as close as possible to their real counterparts
Computational Costume v2
Computational Costume v2 presented a deeper focus on the workings of the health record and the map tool explored through Computational Costume v0 and Computational Costume v1. The work combines the physical and performative displays previously explored in Computational Costume v0 and Computational Costume v1 through the use of film-making, while adding greater detail to objects and how they work in public versus private scenarios.
A video, shown in Figure 4.17, was produced to show how Computational Costume v2 worked, with clever editing of shots to make physical props appear as if they were esemplastic objects. For example, objects could instantly appear and change, unlike in physical performances. The video demonstrates first-person perspectives of how a wearer can access information privately as an individual or group. This is juxtaposed with third-person perspectives of how information is visible publicly to an observer. The video also makes clear how interactions with other people and the surrounding environment are possible. Ahead, I present the key scenes through a storyboard with descriptions of the scenes. Where possible, the video was presented alongside an exhibition of physical props, as shown in Figure 4.18 and Figure 4.19, to allow viewers to examine the finer details they may have missed while watching the video.
Storyboard
In Figure 4.20, the Computational Costume v2 design begins with a costume token allowing the management of a costume and access to hard-to-reach areas, such as placing a mark representing pain over the spine.
Sharing a token, in Figure 4.21, or any object, in Figure 4.28, acts as a means to grant information access to others. In Figure 4.21 a patient hands a health professional a token that represents back pain and allows the private viewing and exchange of objects on the patient's medical record.
In Figure 4.22, a health professional can demonstrate and apply a ready-made object to explain a medical condition and provide instructions the wearer can follow at a later time. These objects are added to the record and become available to both parties for future reference.
The exchange of information in Figure 4.22 becomes part of a family of medical records on the patient's overall medical record costume as shown in Figure 4.23. The overall medical record is a chronology of silhouettes representing various milestones such as vaccination and medical conditions which collate medical imagery and prescriptions.
In Figure 4.24 the relevant parts of the medical record costume can act as an emergency-activated call-to-action in reference to the medical emergency costume in Computational Costume v1.
A map tool which builds on the design of map tools featured in Computational Costume v0 and Computational Costume v1 allows access to other wearers and objects outside of an individual's field of view. The tool is explored with greater detail, as shown in Figure 4.25, Figure 4.26 and Figure 4.27. The map tool allows milestones marked on a medical record to be associated with locations and other objects, in Figure 4.25, such as a birth record, which is connected to birth parents and a birthplace from the past. In this instance, the map tool combines the functionality of the timeline tool explored in Computational Costume v0; see Figure 4.10.
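One way to see how the map tool folds the timeline tool into itself is as a record of milestones carrying both a time and a place, queryable by either dimension. The data model below is a hypothetical sketch of my own, not part of the presented design; all field names and entries are illustrative.

```python
# Minimal sketch: milestones on a medical record carry a time, a place and
# linked wearers/objects, so the map tool can query space and time together.
# All field names and entries are illustrative.

from dataclasses import dataclass, field

@dataclass
class Milestone:
    label: str                                       # e.g. "birth record"
    year: int                                        # when the milestone occurred
    place: str                                       # geotag resolved to a location
    linked: list[str] = field(default_factory=list)  # connected wearers or objects

RECORD = [
    Milestone("birth record", 1990, "birthplace", ["birth parent A", "birth parent B"]),
    Milestone("vaccination", 1995, "local clinic"),
]

def milestones_at(record: list[Milestone], place: str) -> list[Milestone]:
    """All milestones the map tool would pin at a given location."""
    return [m for m in record if m.place == place]

# Example: looking up the birthplace surfaces the birth record and its links.
print(milestones_at(RECORD, "birthplace")[0].linked)
```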
In Figure 4.26 the map tool acts as a communication tool and navigational aid, allowing communication between wearers and navigation to another wearer's location by following an environmental marking.
In Figure 4.27 the map tool also allows access to other environments, useful for object retrieval when something has been forgotten at another place.
The Computational Costume v2 video concludes with a private gathering. Access is granted to an interested passerby through sharing an object visible to the group, as shown in Figure 4.28.
Findings
Computational Costume v2 presents the most fully formed design concepts and presentation style for Computational Costume. Computational Costume v2 provides a compelling vision to audiences by using film-making to combine the strengths of physical props and performance as explored in previous designs. Video shows how esemplastic objects work as part of an imagined ecosystem of objects that facilitate: personal and group activities for health, work, play and emergencies; as well as the use of environmental surfaces for esemplastic objects.
Audiences have drawn the clearest understanding of Computational Costume from the video-only and exhibition-with-video formats adopted in Computational Costume v2. Costumes and props used in the filming of Computational Costume v2 accompany presentations of the video, as shown in Figure 4.18, providing audiences with a chance to see design details they may have missed in the video. The video embellishes the costumes and props with meaning drawn from its narrative. Physical props also offer greater immediacy than pausing video frames to appreciate physical details.
The activities with esemplastic objects and between people in Computational Costume v2 highlight several generalisable interactions which can be applied to new scenarios by designers:
- The use of tokens to make modifications to a larger object in hard-to-reach areas; see Figure 4.20
- The use of tokens and objects to control access to costumes and objects; see Figure 4.21 and Figure 4.28
- The free creation and application of esemplastic objects on the surrounding environment and costumes; see Figure 4.22
- The collation of milestones on a costume using bodily silhouettes as chronological markers; see Figure 4.23
- Contextual activation of costumes, such as in emergencies or proximity; see Figure 4.24 and Figure 4.28
- The use of a map tool to navigate space and time beyond the wearer's field of view, useful for finding historical information related to objects (see Figure 4.25), communication (see Figure 4.26) and object retrieval (see Figure 4.27)
- The ability for spatial information on the map tool to be projected onto the wearer's surrounding environment; see Figure 4.26
Conclusion
Computational Costume presents a speculative design setting and design scenarios which enable designers to create, reflect upon, present and evaluate imagined speculative designs for whole-body interaction through augmented reality. The combination of the imagined setting and scenarios allows designers to work outside of the constraints imposed by today's technology.
Through the speculative designs presented, designers can address how people's dependence on screen-based devices can be replaced—which is not possible when designing within the constraints imposed by today's technology. The speculative design process adopted for Computational Costume allows designers to work with imagined capabilities that advance what is possible physically and virtually through digital media today. This process has been paired with an ergonomics and technology review that has informed a probable hardware design to support the abilities of Computational Costume. The designs have a chance to influence the trajectory of developments for the future.
The development of speculative designs through design scenarios has covered everything from partial wearables and esemplastic objects to the design of a whole ecosystem of interactions through esemplastic objects and costumes. The cardboard poster and interface shows how people might directly make and interact with esemplastic objects. Computational Costume v0 and Computational Costume v1 illustrate the use of contextually activated costumes and tools for a variety of scenarios. These works culminate in the presentation of Computational Costume v2, which illustrates Computational Costume in action, with more detailed objects and tools, and interaction between wearers. The development of designs has provided a list of generalisable interactions, as detailed in Findings, which can be expanded upon and applied in new design scenarios.
The presentation of imagined design scenarios through a variety of formats has shown how physical wearables and objects are best presented through a combination of physical performance, exhibition and film-making. Lo-fi physical material props can be presented through video to illustrate how speculative esemplastic objects would work as part of an imagined ecosystem with many people. Speculative designs presented through this medium enable designers to reflect on their own work, as well as giving unfamiliar audiences the ability to engage with the imagined ideas. This is all possible without investing in the creation of new technology hardware or software. In Computational Costume prototyping and presentation I provide a review of physical materials and presentation techniques used which covers how to best support the imagination of Computational Costume designs.
Conclusion
My research has explored how a wide range of designers might bridge the divide that persists between people's virtual and physical practices through the design of digital media, from the design of the mainstream screen-based devices people use today to tangible computing, which seeks to use a combination of physical and virtual materials, to whole-body interaction supported by augmented reality. My body of work has questioned the design of digital media and suggested a way forward for designers based on speculative prototyping and presentation through:
- Simplification of people's activities through on-screen digital media, in Simplifying physical and virtual practices on-screen
- Review of designers' support of practices across a range of media, in Reviewing physical and virtual practice support across media
- Review of emerging practices in new digital media that seek to avoid dependence on screen-based devices, in Reviewing new physical and virtual practices
- Creation of new practices with digital media that do not depend on screen-based devices in a speculative future, in Creating new physical and virtual practices
Contribution
In its entirety my research reveals a conceptual rationale for developing speculative virtual wearables and objects that ground interactions with digital media through the physical world. This has been the impetus for the design of Computational Costume: a speculative design setting and scenarios based on imagined probable technologies centred around augmented reality. Computational Costume presents the use of esemplastic objects and wearables that combine both physical and virtual qualities through a combination of augmented reality, force feedback and use of the physical world. Designs for this imagined digital media have been developed and refined through the presentation of lo-fi physical materials, exhibition and film-making, enabling designers and audiences alike to be liberated from the constraints of today's technology. Digital media designers across the spectrum from visual and interaction design to HCI research, as well as textile design, can engage simple materials to present and evaluate speculative design ideas for new digital media. In addition, audiences can engage with new digital media designs in an inviting format.
Detailed ahead are the various investigative approaches I have applied to bridge the divide between physical and virtual practices through the design of digital media. Each set of results, obtained through an iterative design process and reflective analysis, motivated subsequent investigations. Each investigation has addressed the main problem of bridging the physical and virtual, beginning with a novel design intervention for common screen-based devices and ending with a progressive call to action for designers to take forward.
The Memory Menu
I commenced with Simplifying physical and virtual practices on-screen. I sought to address demassification, or how shared social properties are lost as physical artefacts become digital. These shared social properties, known as latent border resources, can be applied by designers to simplify people's activities. I explored how additional latent border resources might be applied on-screen by supporting spatial memory through use-wear. This approach was developed and studied through the design of a menu overlay which I called the Memory Menu. The work was developed in response to on-screen interaction design approaches encountered in real-world practice where interfaces could benefit from a standardised and easy-to-apply design intervention.
The Memory Menu study did not find a significant improvement in usability. This result inspired looking more broadly at the ways people are supported through a range of media.
Interviews
In Reviewing physical and virtual practice support across media I sought the advice of researchers and practitioners from backgrounds based in modern and traditional art, design and communication practices, on and off digital media. This was done to broadly address how people are supported through a range of media. Through interviews with these researchers and practitioners I explored what they did in their design practices to support people's activities, as well as asking for their thoughts on the Memory Menu.
The interviews revealed a series of consistent approaches and useful ways to extend the Memory Menu design. The interviewees, through their individual considerations, collectively suggested accommodating people's activities by addressing their vast capabilities and environments. Reflecting on this result led me to the application of latent border resources beyond screen-based devices.
Design review
In Reviewing new physical and virtual practices I explored how people's activities through digital media might be supported by designing for a wider range of interactions beyond the screen. I looked towards outcomes of the Material Turn in HCI and found concerns surrounding the materiality of interactions—a design consideration for what digital media allows people to do, rather than privileging any kind of device or material. This provided a basis for reviewing the design of ubiquitous and tangible computing, which proposes how digital media might be better enmeshed in people's activities.
The ubiquitous and tangible computing design review identified ways to support people's engagement through a greater range of senses and their surrounding environment. However, it was possible to see how this new computing remains dependent on screen-based devices as it emerges in the form of mainstream IoT devices. The specific area of whole-body interaction provides a way forward by demonstrating ways in which the body and surrounding environment can be used to ground digital media interactions through virtual identity.
Computational Costume
In Creating new physical and virtual practices I conceived of a speculative design of imagined virtual wearables and objects supported by probable technologies which I called the Computational Costume. The Computational Costume consists of imagined probable technologies and design scenarios. The concepts revealed through the work explore how people's need for screen-based devices could be replaced. The designs show how wearers can:
- Store and display information on themselves
- Store and display information on their surrounding environment
- Display information in response to surrounding context, such as location or emergency
- Privately share and access information
- Access and communicate information outside of their field of view through a map
The development of Computational Costume prototyping and presentation, covered in Computational Costume prototyping and presentation, explores the application of accessible lo-fi physical materials and processes. This allows designers to prototype and present imagined interactions with digital media through film-making and exhibition. My method enables designers to conceptualise and present the future of digital media without engaging technical skills or working within the constraints of today's technology. In addition, audiences can experience new digital media and how it might feel without adopting mysterious new technology.
Future work
My research has shown how to move discussions of supporting people's activities through digital media away from a basis in today's technology. Computational Costume illustrates that it is possible to make a contribution to the design of speculative technologies with the clever use of lo-fi physical materials for prototyping and presentation through exhibition and film-making. Designers can make a greater impact by conceiving desirable conceptual ideas as probable targets, rather than conforming to the limitations of technology today.
With my research, designers can take forward the design of Computational Costume to encompass more practices for people. Also, designers can reimagine the use and design of physical devices—as physical practices with devices are absorbed into new virtual practices in Computational Costume.
Computational Costume can be used beyond presenting information on the body. Questions remain around how people might use Computational Costume to express themselves and engage one another through play. Humans already do this through what they wear across a range of events from parties to theatre. However, this comes with the need for access to physical clothes to change appearance. There are ways to overcome these constraints of physical clothing. For example, people's Computational Costume could virtually change in response to what they are doing or where they are while wearing a comfortable physical outfit concealed by Computational Costume. Computational Costume could visually accentuate movement when exercising or performing, or gradually unfurl esemplastic objects when meeting someone for the first time.
Esemplastic objects and wearables have the capacity to replace the function of purely physical objects and wearables such as fashion and digital devices. Physical objects and wearables that are not completely consumed by new virtual practices would transform into more essential and streamlined forms. For instance, physical fashion might become more pared back and focus on utility such as warmth and cooling if aesthetics can be applied esemplastically. A similar situation applies to digital devices. For instance, small digital cameras could be absorbed by camera-enabled wearable augmented reality glasses which can also track hand gestures to frame photographs. This could extend to the physical design of advanced digital cameras with heavy optics. Features such as the screen, grip, button, dial and viewfinder could be streamlined into a cylindrical hardware design with augmented reality controls and framing, alongside direct virtual access to the photos taken.
Concluding remarks
Much of my research has been a transitional exploration. It began with a desire to learn about HCI problems and evaluation processes, but this soon evolved into a survey of, and design concepts for, how interactions with digital media could be better intertwined with the surrounding physical environment.
With my research I hope to enable a wide range of designers to engage in the creation of future digital media by taking a device-agnostic approach. The lo-fi prototyping and presentation approaches presented open up the possibility of developing how people interact through future wearable technology. Designers would not need to be hardware or software engineers, or to design within today's technological constraints. Furthermore, the work allows audiences to engage in a manner that is neither technically demanding nor indicative of a dystopian future. Digital media designers and audiences are encouraged to work together to shape a desirable future for digital media.
My work opens up the possibility for imagining a variety of new design scenarios based on current and foreseeable design problems, to create and present new speculative virtual wearables and objects. The area is ripe with possibilities when liberating the development of technology from today's image of technology. It is time to use the design process to release promising speculative design ideas and shape emerging technology into the desirable.
I urge digital media designers to take the work forward and challenge what people are able to do with digital media today and into the future.
Reference list
- Alexander, J., Cockburn, A., Fitchett, S., Gutwin, C., & Greenberg, S. (2009). Revisiting read wear: analysis, design, and evaluation of a footprints scrollbar. CHI '09 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1665–1674. https://doi.org/10.1145/1518701.1518957
- Amann, M., & Minge, O. (2012). Biodegradability of poly(vinyl acetate) and related polymers. In B. Rieger, A. Künkel, G. W. Coates, R. Reichardt, E. Dinjus, & T. A. Zevaco (Eds.), Synthetic Biodegradable Polymers (pp.137–172). Berlin, Heidelberg: Springer. https://doi.org/10.1007/12_2011_153
- Apple Inc. (2018, March 19). AirPods – Technical specifications. Retrieved 21 January 2019 from https://support.apple.com/kb/SP750
- Apple Inc. (2018, September 20). Apple Watch Series 4 – Technical Specifications. Retrieved 21 January 2019 from https://support.apple.com/kb/SP778
- Augsten, T., Kaefer, K., Meusel, R., Fetzer, C., Kanitz, D., Stoff, T., et al. (2010). Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input. UIST '10 Proceedings of the 23rd annual ACM Symposium on User Interface Software and Technology, 209–218. https://doi.org/10.1145/1866029.1866064. Video available at https://www.youtube.com/embed/spiKgkW1UmI
- Barone, J., & Mazza, D. (2019). Computational Costume rendition. https://doi.org/10.26180/5d504d8a489ae
- Baudisch, P., Tan, D., Collomb, M., Robbins, D., Hinckley, K., Agrawala, M., et al. (2006). Phosphor: explaining transitions in the user interface using afterglow effects. UIST '06 Proceedings of the 19th annual ACM Symposium on User Interface Software and Technology, 169–178. https://doi.org/10.1145/1166253.1166280
- Baudrillard, J. (1981). Simulacra and Simulation. (S. Glaser, Trans.). University of Michigan Press.
- Bell, G., & Dourish, P. (2007). Yesterday's tomorrows: notes on ubiquitous computing's dominant vision. Personal and Ubiquitous Computing, 11(2), 133–143. https://doi.org/10.1007/s00779-006-0071-x
- Bezerianos, A., Dragicevic, P., & Balakrishnan, R. (2006). Mnemonic rendering: an image-based approach for exposing hidden changes in dynamic displays. UIST '06 Proceedings of the 19th annual ACM Symposium on User Interface Software and Technology, 159–168. https://doi.org/10.1145/1166253.1166279
- Bolter, J. D., & Grusin, R. (1998). Remediation. MIT Press.
- Bonanni, L. (2006). Living with hyper-reality. In Ambient Intelligence in Everyday Life (Vol. 3864, pp.130–141). Berlin, Heidelberg: Springer. https://doi.org/10.1007/11825890_6
- Bower, G. H. (1970). Analysis of a mnemonic device: modern psychology uncovers the powerful components of an ancient system for improving memory. American Scientist, 58(5), 496–510. Retrieved from https://www.jstor.org/stable/27829239
- Brignull, H. (2013, August 29). Dark patterns: inside the interfaces designed to trick you. Retrieved 7 January 2019 from https://www.theverge.com/2013/8/29/4640308/dark-patterns-inside-the-interfaces-designed-to-trick-you
- Brooker, C. (2016). Black Mirror – Playtest. (D. Trachtenberg, Ed.). Retrieved from https://www.netflix.com/title/70264888
- Brown, J. S., & Duguid, P. (1994). Borderline issues: social and material aspects of design. Human–Computer Interaction, 9(1), 3–36. https://doi.org/10.1207/s15327051hci0901_2
- Browne, D., Totterdell, P., & Norman, M. (Eds.). (1990). Adaptive User Interfaces. Academic Press. Retrieved from https://www.sciencedirect.com/book/9780121377557/adaptive-user-interfaces
- Cockburn, A., Kristensson, P. O., Alexander, J., & Zhai, S. (2007). Hard lessons: effort-inducing interfaces benefit spatial learning. CHI '07 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1571–1580. https://doi.org/10.1145/1240624.1240863
- Cooper, A., Reimann, R., Cronin, D., & Noessel, C. (2014). About Face: The Essentials of Interaction Design. Wiley.
- Corbin, J., & Strauss, A. (2008). Strategies for qualitative data analysis. In Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (3rd ed., pp.65–86). SAGE. https://doi.org/10.4135/9781452230153
- Deschamps-Sonsino, A. (2005). Good Night Lamp. Retrieved from http://goodnightlamp.com. Video available at https://www.youtube.com/embed/FxLsZUTXYEU
- Dourish, P. (2001). Where the Action is. MIT Press.
- Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. Cambridge, Massachusetts: MIT Press.
- Ehn, P., & Kyng, M. (1991). Cardboard computers: mocking-it-up or hands-on the future. In J. Greenbaum & M. Kyng (Eds.), Design at Work (pp.169–196). Hillsdale, NJ, USA: L. Erlbaum.
- Ehn, P., & Linde, P. (2004). Embodied interaction: designing beyond the physical-digital divide. Proceedings of Futureground: Design Research Society International Conference 2004. Retrieved from https://www.researchgate.net/publication/237327428_Embodied_Interaction_-_Designing_Beyond_the_Physical-Digital_Divide
- England, D. (Ed.). (2011). Whole Body Interaction. London: Springer. https://doi.org/10.1007/978-0-85729-433-3
- exiii Inc. (2018). EXOS DK2 Demonstration. Retrieved from https://www.youtube.com/embed/7AjKKttW_nw
- exiii Inc. (2018, October 2). exiii releases EXOS Wrist DK2. Retrieved 16 January 2019 from https://exiii.jp/2018/10/02/exos_wrist_dk2_en/
- Findlater, L., Moffatt, K., McGrenere, J., & Dawson, J. (2009). Ephemeral adaptation: the use of gradual onset to improve menu selection performance. CHI '09 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1655–1664. https://doi.org/10.1145/1518701.1518956
- Follmer, S., Leithinger, D., Olwal, A., Hogge, A., & Ishii, H. (2013). inFORM: dynamic physical affordances and constraints through shape and object actuation. UIST '13 Proceedings of the 26th annual ACM Symposium on User Interface Software and Technology, 417–426. https://doi.org/10.1145/2501988.2502032
- Fox, B., & Schubert, M. (2018, June 8). Leap Motion Interaction Experiment: Shortcuts. Leap Motion Inc. Retrieved from https://gallery.leapmotion.com/shortcuts/. Video available at https://www.youtube.com/embed/LFRKEmzrzP8
- Genç, C., Buruk, O. T., Yılmaz, S. I., Can, K., & Özcan, O. (2018). Exploring computational materials for fashion: recommendations for designing fashionable wearables. International Journal of Design, 12(3), 1–19. Retrieved from http://www.ijdesign.org/index.php/IJDesign/article/view/2831/826
- Giaccardi, E., & Karana, E. (2015). Foundations of materials experience. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2447–2456. https://doi.org/10.1145/2702123.2702337
- Google. (2013, February). Tech specs – Google Glass. Retrieved 21 January 2019 from https://support.google.com/glass/answer/3064128
- Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3173574.3174108
- Greenberg, S., Boring, S., Vermeulen, J., & Dostal, J. (2014). Dark patterns in proxemic interactions: a critical perspective. DIS '14 Proceedings of the 2014 Conference on Designing Interactive Systems, 523–532. https://doi.org/10.1145/2598510.2598541
- Grønbæk, J. E., Korsgaard, H., Petersen, M. G., Birk, M. H., & Krogh, P. G. (2017). Proxemic transitions: designing shape-changing furniture for informal meetings. CHI '17 Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 7029–7041. https://doi.org/10.1145/3025453.3025487. Video available at https://www.youtube.com/embed/6qXf1tR-S-8
- Gustafson, S. (2013, November 25). Imaginary Interfaces. Retrieved from https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/deliver/index/docId/6660/file/gustafson_diss.pdf
- Gustafson, S., Bierwirth, D., & Baudisch, P. (2010). Imaginary interfaces: spatial interaction with empty hands and without visual feedback. UIST '10 Proceedings of the 23rd annual ACM Symposium on User Interface Software and Technology, 3–12. https://doi.org/10.1145/1866029.1866033. Video available at https://www.youtube.com/embed/718RDJeISNA
- Gutwin, C., & Cockburn, A. (2006). Improving list revisitation with ListMaps. AVI '06 Proceedings of the Working Conference on Advanced Visual Interfaces, 396–403. https://doi.org/10.1145/1133265.1133347
- Haroz, S., Kosara, R., & Franconeri, S. (2015). ISOTYPE visualization – working memory, performance, and engagement with pictographs. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1191–1200. https://doi.org/10.1145/2702123.2702275
- Harrison, C., Ramamurthy, S., & Hudson, S. E. (2012). On-body interaction: armed and dangerous. TEI '12 Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, 69–76. https://doi.org/10.1145/2148131.2148148. Video available at https://www.youtube.com/embed/p54PTI5puKY
- Hill, W. C., Hollan, J. D., Wroblewski, D., & McCandless, T. (1992). Edit wear and read wear. CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3–9. https://doi.org/10.1145/142750.142751
- Hoang, T. N., Ferdous, H. S., Vetere, F., & Reinoso, M. (2018). Body as a canvas: an exploration on the role of the body as display of digital information. DIS '18 Proceedings of the 2018 Designing Interactive Systems Conference, 253–263. https://doi.org/10.1145/3196709.3196724. Video available at https://www.youtube.com/embed/Fa67sNzLLKs
- Hoang, T., Reinoso, M., Joukhadar, Z., Vetere, F., & Kelly, D. (2017). Augmented studio: projection mapping on moving body for physiotherapy education. CHI '17 Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1419–1430. https://doi.org/10.1145/3025453.3025860. Video available at https://www.youtube.com/embed/R_mkNlKoPTM
- Howard, S., Carroll, J., Murphy, J., & Peck, J. (2002). Using "endowed props" in scenario-based design (pp.1–9). Presented at the second Nordic Conference on Human–Computer Interaction. https://doi.org/10.1145/572020.572022
- Hurst, A., Mankoff, J., Dey, A. K., & Hudson, S. E. (2007). Dirty desktops: using a patina of magnetic mouse dust to make common interactor targets easier to select. UIST '07 Proceedings of the 20th annual ACM Symposium on User Interface Software and Technology, 183–186. https://doi.org/10.1145/1294211.1294242
- Ishii, H., Lakatos, D., Bonanni, L., & Labrune, J.-B. (2012). Radical atoms: beyond tangible bits, toward transformable materials. Interactions, 19(1), 38–51. https://doi.org/10.1145/2065327.2065337. Perfect Red video available at https://player.vimeo.com/video/61141209
- Krueger, M. W. (1991). Artificial Reality II. Addison-Wesley.
- Krueger, M. W., Gionfriddo, T., & Hinrichsen, K. (1985). VIDEOPLACE—an artificial reality. CHI '85 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 35–40. https://doi.org/10.1145/317456.317463
- Krueger, M. W., Hinrichsen, K., Gionfriddo, T., & Sonnanburg, J. (1988). Videoplace '88. SIGGRAPH 1988: Art Show. Retrieved from https://digitalartarchive.siggraph.org/artwork/myron-w-krueger-videoplace-88/. Video available at https://player.vimeo.com/video/274261717
- Kutz, N. (2018). Modular placement and prototyping. Retrieved 8 April 2019 from https://www.kobakant.at/DIY/?p=7197
- Lakatos, D., Blackshaw, M., Olwal, A., Barryte, Z., Perlin, K., & Ishii, H. (2014). T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation. SUI '14 Proceedings of the 2nd ACM Symposium on Spatial User Interaction, 90–93. New York, New York, USA: ACM. https://doi.org/10.1145/2659766.2659785. Video available at https://player.vimeo.com/video/42173010
- Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
- Leap Motion, Inc. (2013). Leap Motion Controller. Retrieved from https://www.leapmotion.com/technology/
- Leap Motion, Inc. (2018). Mirrorworlds Concept: Channels of Perception. Retrieved from https://www.youtube.com/embed/kTv7aQx09XI
- Leap Motion, Inc. (2018). Mirrorworlds Concept: The Architect. Retrieved from https://www.youtube.com/embed/QGUhFcRjgus
- Leap Motion, Inc. (2018). Project North Star: Desk UI. Retrieved from https://www.youtube.com/embed/6dB1IRg3Qls
- Leap Motion, Inc. (2018). Project North Star: Exploring Augmented Reality. Retrieved from https://www.youtube.com/embed/7m6J8W6Ib4w
- Leap Motion, Inc. (2018, April 9). Unveiling Project North Star. Retrieved 22 March 2019 from http://blog.leapmotion.com/northstar/
- Leap Motion, Inc. (2018, June 6). Leap Motion Project North Star – Mechanical Guide. Retrieved 21 January 2019 from https://leapmotion.github.io/ProjectNorthStar/mechanical.html
- Lee, J., Su, V., Ren, S., & Ishii, H. (2000). HandSCAPE: a vectorizing tape measure for on-site measuring applications. CHI '00 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 137–144. https://doi.org/10.1145/332040.332417. Video available at https://player.vimeo.com/video/48558764
- Leithinger, D., Follmer, S., Olwal, A., & Ishii, H. (2014). Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration. UIST '14 Proceedings of the 27th annual ACM Symposium on User Interface Software and Technology, 461–470. https://doi.org/10.1145/2642918.2647377. Video available at https://player.vimeo.com/video/108402837
- Lopes, P., You, S., Ion, A., & Baudisch, P. (2018). Adding force feedback to mixed reality experiences and games using electrical muscle stimulation. CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3173574.3174020. Video available at https://www.youtube.com/embed/mgGX6p0rA54
- Magic Leap. (2018). Magic Leap One: Creator Edition. Retrieved 16 January 2019 from https://www.magicleap.com/magic-leap-one
- Matejka, J., Grossman, T., & Fitzmaurice, G. (2013). Patina: dynamic heatmaps for visualizing application usage. CHI '13 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3227–3236. https://doi.org/10.1145/2470654.2466442. Retrieved from https://www.autodeskresearch.com/publications/patina
- Matsuda, K. (2016). Hyper-Reality. Retrieved from http://hyper-reality.co/. Video available at https://player.vimeo.com/video/166807261
- Matsuda, K., & Mill, A. (2018). Mirrorworlds. Retrieved 19 December 2018 from http://blog.leapmotion.com/mirrorworlds/
- Mattelmäki, T. (2006). Design Probes. University of Art and Design Helsinki. Retrieved from http://urn.fi/URN:ISBN:951-558-212-1
- Mazza, D. (2017). Reducing cognitive load and supporting memory in visual design for HCI. The 2017 CHI Conference Extended Abstracts, 142–147. https://doi.org/10.1145/3027063.3048430
- Mazza, D. (2018). Computational Costume v1. https://doi.org/10.26180/5d4bc68e3357e. Video available at https://player.vimeo.com/video/338686312
- Mazza, D. (2018). Computational Costume v2. https://doi.org/10.26180/5d4bc13d2caa3. Video available at https://player.vimeo.com/video/274045926
- Mazé, R., & Redström, J. (2005). Form and the computational object. Digital Creativity, 16(1), 7–18. https://doi.org/10.1080/14626260500147736
- Microsoft. (2019). Microsoft HoloLens. Retrieved 1 April 2019 from https://www.microsoft.com/en-us/hololens
- Miyashita, L., Yamazaki, T., Uehara, K., Watanabe, Y., & Ishikawa, M. (2018). Portable Lumipen: Dynamic SAR in your hand. 2018 IEEE International Conference on Multimedia and Expo (ICME), 1–6. https://doi.org/10.1109/ICME.2018.8486514. Video available at https://www.youtube.com/embed/LwXPpxpupoM
- Montgomery, J. (2010). Pillow Talk. Video available at https://www.youtube.com/embed/wrfJ9EOSFEU
- Mullet, K., & Sano, D. (1995). Designing Visual Interfaces. Prentice-Hall.
- Nakagaki, K., Dementyev, A., Follmer, S., Paradiso, J. A., & Ishii, H. (2016). ChainFORM: a linear integrated modular hardware system for shape changing interfaces. UIST '16 Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 87–96. https://doi.org/10.1145/2984511.2984587. Video available at https://player.vimeo.com/video/193779890
- Perrault, S. T., Lecolinet, E., Bourse, Y. P., Zhao, S., & Guiard, Y. (2015). Physical loci: leveraging spatial, object and semantic memory for command selection. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 299–308. https://doi.org/10.1145/2702123.2702126
- Prest, C., & Hoellwarth, Q. C. (2014, February 18). Sports monitoring system for headphones, earbuds and/or headsets. (Apple Inc., Ed.). Retrieved from https://patents.google.com/patent/US8655004B2/en
- El Raheb, K., Tsampounaris, G., Katifori, A., & Ioannidis, Y. (2018). Choreomorphy: a whole-body interaction experience for dance improvisation and visual experimentation. AVI '18 Proceedings of the 2018 International Conference on Advanced Visual Interfaces, Article No. 27. https://doi.org/10.1145/3206505.3206507. Video available at https://www.youtube.com/embed/fwtqp6cwkXU
- Robertson, G., Czerwinski, M., Larson, K., Robbins, D. C., Thiel, D., & van Dantzich, M. (1998). Data mountain: using spatial memory for document management. UIST '98 Proceedings of the 11th annual ACM Symposium on User Interface Software and Technology, 153–162. https://doi.org/10.1145/288392.288596
- Robles, E., & Wiberg, M. (2010). Texturing the “material turn” in interaction design. TEI '10 Proceedings of the 4th International Conference on Tangible, Embedded, and Embodied Interaction, 137–144. https://doi.org/10.1145/1709886.1709911
- Roland Corporation. (2010). Roland CS-10EM Binaural Microphones/Earphones. Retrieved from https://www.roland.com/us/products/cs-10em/
- Rose, D. (2014). Enchanted Objects: Innovation, design, and the future of technology. Simon and Schuster.
- Rosson, M. B., & Carroll, J. M. (2012). Scenario-based design. In J. Jacko (Ed.), Human–Computer Interaction Handbook (3rd ed.). CRC Press. https://doi.org/10.1201/b11963
- rwinj, Zeller, M., & Bray, B. (2018, March 21). Gestures – Mixed Reality. Retrieved 18 January 2019 from https://docs.microsoft.com/en-us/windows/mixed-reality/gestures
- Scarr, J., Cockburn, A., & Gutwin, C. (2013). Supporting and exploiting spatial memory in user interfaces. Foundations and Trends in Human–Computer Interaction, 6(1), 1–84. https://doi.org/10.1561/1100000046
- Scarr, J., Cockburn, A., Gutwin, C., & Bunt, A. (2012). Improving command selection with CommandMaps. CHI '12 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 257–266. https://doi.org/10.1145/2207676.2207713
- Schmidt, D., Ramakers, R., Pedersen, E. W., Jasper, J., Köhler, S., Pohl, A., et al. (2014). Kickables: tangibles for feet. CHI '14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3143–3152. https://doi.org/10.1145/2556288.2557016. Video available at https://www.youtube.com/embed/DvAwN1MVLSo
- Shoemaker, G., Tsukitani, T., Kitamura, Y., & Booth, K. S. (2010). Body-centric interaction techniques for very large wall displays. NordiCHI '10 Proceedings of the 6th Nordic Conference on Human–Computer Interaction: Extending Boundaries, 463–472. https://doi.org/10.1145/1868914.1868967. Video available at https://www.youtube.com/embed/hOAmreHmSVg
- Skopik, A., & Gutwin, C. (2005). Improving revisitation in fisheye views with visit wear. CHI '05 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 771–780. https://doi.org/10.1145/1054972.1055079
- Stereolabs. (2018). Multiplayer AR Ping Pong – ZED Mini. Retrieved from https://www.youtube.com/embed/rfskhlS-XT0
- Tang, S. K., Sekikawa, Y., Leithinger, D., Follmer, S., & Ishii, H. (2013). Tangible CityScape. Retrieved from https://tangible.media.mit.edu/project/tangible-cityscape. Video available at https://player.vimeo.com/video/100085426
- Tang, S. K., Sekikawa, Y., Perlin, K., Larson, K., & Ishii, H. (2014). inSide. Retrieved from https://tangible.media.mit.edu/project/inside/. Video available at https://player.vimeo.com/video/100085425
- Tsandilas, T., & schraefel, M. C. (2007). Bubbling menus: a selective mechanism for accessing hierarchical drop-down menus. CHI '07 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1195–1204. https://doi.org/10.1145/1240624.1240806
- Underkoffler, J., & Ishii, H. (1998). Illuminating light: an optical design tool with a luminous-tangible interface. CHI '98 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 542–549. https://doi.org/10.1145/274644.274717
- Underkoffler, J., & Ishii, H. (1999). Urp: a luminous-tangible workbench for urban planning and design. CHI '99 Proceedings of the SIGCHI conference on Human Factors in Computing Systems, 386–393. https://doi.org/10.1145/302979.303114. Video available at https://player.vimeo.com/video/48600713
- Vuzix Corporation. (2017). M3000 – The Next Generation of Smart Glasses for Enterprise. Retrieved from https://www.youtube.com/embed/y6SGlOLVpg8
- Vuzix Corporation. (2018). Vuzix Blade. Retrieved from https://www.vuzix.com/products/blade-smart-glasses
- Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94–104.
- Wiberg, M. (2016). Interaction, new materials & computing – Beyond the disappearing computer, towards material interactions. Materials and Design, 90, 1200–1206. https://doi.org/10.1016/j.matdes.2015.05.032
- Willett, W., Heer, J., & Agrawala, M. (2007). Scented widgets: improving navigation cues with embedded visualizations. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1129–1136. https://doi.org/10.1109/TVCG.2007.70589
- Zhang, Y., Yang, C. J., Hudson, S. E., Harrison, C., & Sample, A. (2018). Wall++: room-scale interactive and context-aware sensing. CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3173574.3173847. Video available at https://www.youtube.com/embed/HSPdcuDT5fU
Memory Menu
Memory Menu motivation
I designed the Memory Menu to evaluate a subtle use-wear effect, as defined in Supporting on-screen spatial memory through use-wear. The effect is intended to add a valuable latent border resource in response to demassification, as covered in the Background. The study is motivated by a need to solve real-world interface design problems with minimal impact on the design process, as uncovered in On-screen interaction design approaches. In addition, I address a lack of conclusive evidence to support a subtle use-wear effect, as explored in Supporting on-screen spatial memory through use-wear.
Memory Menu hypotheses
The Memory Menu was designed to evaluate two hypotheses: H1, that item selection time would be quicker for a menu with a use-wear effect applied; and H2, that memory of items selected would be richer for the use-wear menu.
The null hypothesis was H0: there is no benefit to highlighting the usage of menu items through a use-wear effect.
Memory Menu design
The Memory Menu was designed to be a short 10–15 minute study which could be completed by participants from the comfort of their own computer. Data would be collected by the web application and stored in a secure database at the conclusion of the study.
Ethics and recruitment
Ethics approval from the Monash University Human Research Ethics Committee (MUHREC) was granted for a 100-participant online study. The majority of participants found a hyperlink to the study through an online advertisement in the Monash University newsletter Monash Memo. Other participants found the link via social media and word of mouth.
Menu design
To evaluate the hypotheses, the Memory Menu was divided into three parts: first, a short training-round menu of seven turns with the use-wear effect applied, shown in Figure A.1; second, a menu with either the use-wear effect or no effect applied, picked at random, shown in Figure A.2; and third, a menu with the opposite effect applied. Participants were prompted 30 times for each test menu to pick an item, confirm they understood the prompt and make their selection. Errors made throughout the test were clearly highlighted, as shown in Figure A.3. This was to ensure that the highlighting effect was not compromised and to encourage accurate selections.
Each of the menus presented consisted of pictographs and words from six distinct and contrasting categories: food, objects, emotions, animals, transport and plants. The contrasting categories existed so a selection bias could be applied, to simulate real situations where a person has a particular interest. Without this, the selections would be distributed randomly. Items from each category had a probability of being selected 2%, 3%, 10%, 15%, 30% and 40% of the time, respectively. The bias and layout of the items were applied randomly. Additionally, menus were completely refreshed when switching from the first menu to the second menu. Pictographs were switched to words and vice versa, and the selection bias and layout were randomly selected again.
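As an illustration of these mechanics, the sketch below shows one way the weighted prompting could be reproduced. The study itself ran as a web application, so this Python reconstruction is an assumption of the logic rather than the study's actual code; the category list and probabilities match the text above, but all function and variable names are hypothetical.

```python
import random

# The six categories and the fixed selection bias described above.
CATEGORIES = ["food", "objects", "emotions", "animals", "transport", "plants"]
BIAS = [0.02, 0.03, 0.10, 0.15, 0.30, 0.40]

def assign_bias(categories):
    """Randomly map the fixed bias weights onto the six categories."""
    shuffled = categories[:]
    random.shuffle(shuffled)
    return dict(zip(shuffled, BIAS))

def next_prompt(bias_by_category):
    """Pick the category to prompt on one turn, following the bias."""
    categories = list(bias_by_category)
    weights = [bias_by_category[c] for c in categories]
    return random.choices(categories, weights=weights, k=1)[0]

# A fresh bias (and layout) would be drawn for each test menu.
bias = assign_bias(CATEGORIES)
prompts = [next_prompt(bias) for _ in range(30)]  # 30 turns per test menu
```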
Study procedure
Participants would complete a practice round for 7 turns and then move on to the real menus for 30 turns each, one with the use-wear effect applied and the other without.
Participants were asked a series of questions to determine how diverse the sample group was and to help determine the cause of any bias found in the final results. Participants were asked to provide their gender (optionally), their age range, level of tiredness, primary language and professional discipline.
At the conclusion of the first and second tests, questions were asked to gauge the difficulty of the test and how well the participant could recollect the most and least frequently picked categories: participants classified difficulty on a Likert scale, and picked the most and least selected items from a selection of the pictographs and words they were exposed to. Finally, participants were asked on a Likert scale whether the use-wear made the task easier, and optionally why they chose that response; in the same way, participants were asked whether the use-wear was desirable. Participants could leave any additional notes at the end.
It was not possible to complete the experiment on displays that were too small, such as smartphones: participants could only commence the study if their window dimensions were at least 1024 × 600 pixels.
Memory Menu evaluation
The Memory Menu was tested quantitatively and qualitatively using a mixed methods approach in order to validate its effectiveness as a latent border resource. I was interested in both the participants' qualitative experiences of the use-wear effect and observing any empirical evidence of a difference in selection times.
To validate H1, I evaluated Memory Menu item selection times to see if they were significantly faster than the regular baseline menu. I performed a two-tailed, paired t-test (α 0.05) for each participant’s first menu vs second menu selection times.
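A minimal sketch of this per-participant test follows, assuming selection times were logged for each of the 30 turns per menu; scipy's `ttest_rel` performs the two-tailed paired t-test, while the function and variable names here are illustrative rather than taken from the study's analysis code.

```python
from scipy import stats

def is_significant(first_menu_times, second_menu_times, alpha=0.05):
    """Two-tailed paired t-test over one participant's per-turn
    selection times for the first and second menus."""
    t_stat, p_value = stats.ttest_rel(first_menu_times, second_menu_times)
    return p_value < alpha

# e.g. count how many participants showed a significant difference:
# sum(is_significant(p.first_times, p.second_times) for p in participants)
```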
To validate H2, I gathered information on whether participants could recollect the most and least frequently picked item categories, along with direct responses about the effectiveness of the use-wear, as covered in the Study procedure.
To capture any unusual events or biases, information about the participants, the menus generated for them and their web browser was collected, along with any additional notes. Specifically, menu arrangements, tasks presented and mistakes made were collected, allowing the menus presented to be reconstructed in case of rendering issues. Information on gender, age, primary language and professional background was collected in case there were any issues with participants' understanding of the content or instructions, and web browser and window size were collected in case of technical issues. In one case, a note left by a participant alongside the collected information confirmed a malfunction in the menu rendering; these results were excluded from the final analysis.
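The record shape below is a hypothetical reconstruction of what one participant's entry in the database might have looked like; the field names are assumptions chosen to match the data described above, not the actual schema.

```python
from dataclasses import dataclass

@dataclass
class ParticipantRecord:
    gender: str | None              # optional response
    age_range: str
    primary_language: str
    discipline: str
    browser: str
    window_size: tuple[int, int]    # at least 1024 x 600 to commence
    menu_layouts: list[list[str]]   # item arrangement of each menu presented
    tasks: list[str]                # prompts given, in order
    mistakes: list[int]             # turns on which errors were made
    notes: str = ""                 # free-text notes left by the participant
```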
Memory Menu results and analysis
The null hypothesis, H0, was validated by a statistically insignificant difference in selection times between the use-wear and baseline menus. Two-tailed, paired t-tests (α 0.05) on each participant’s first menu vs second menu selection times show that only 6 of 99 participants produced statistically significant results. One participant's note confirmed they experienced a rendering malfunction, so their data was removed from the original 100 results. A careful analysis of the individual significant results reveals instances where certain turns in the baseline menu were prolonged. This could have been caused by a difference in difficulty between menus: it is possible the selection bias and arrangement of items required less scanning time in the use-wear menu, or that some selections were difficult in the baseline menu. Overall, this disqualifies H1.
The validation of the null hypothesis is corroborated by figures for correct responses in recollecting the most and least picked categories. No significant difference in the recollection of categories could be found between the baseline and use-wear menus. A correct response involves a participant selecting what they remember being the most or least picked category after completing the menu tasks. The answers were compared against the selection bias applied to the menus, as described in Menu design. Responses could be 1 off, 2 off, 3 off and so on from the correct answer. Correct responses for the recollection of the most picked category were similar between the baseline and use-wear menus, as shown in Figure A.4. For the recollection of the least picked category, use-wear shows some advantage; however, it is not significant, as shown in Figure A.5. Overall, this disqualifies H2.
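One plausible reading of this scoring, assuming 'N off' counts rank positions in the bias ordering of the six categories, is sketched below; this is an interpretation for illustration, not the study's actual scoring code.

```python
def off_by(answer, bias_by_category, most=True):
    """Distance of a recollection answer from the true most (or least)
    picked category: 0 is correct, 1 is '1 off', and so on."""
    ranked = sorted(bias_by_category, key=bias_by_category.get, reverse=most)
    return ranked.index(answer)
```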
Qualitative responses on the difficulty of the baseline vs use-wear menu show a bias. When observing responses of the baseline and use-wear menu completed as the first test, the results are similar; see Figure A.6. For the second test, the results show the baseline is reported as harder, at 45%, while the use-wear is reported as easier at 41%; see Figure A.7. However, the quantitative results have already shown no statistically significant benefits and this nullifies the audience's response.
It should be noted that, by my oversight, first and second tests were assigned randomly rather than counterbalanced into two even groups. 57 of 99 participants (58%) completed the baseline menu first and 42 of 99 (42%) completed the use-wear menu first. The test order should have been distributed evenly, so that 50% of participants completed the baseline menu first. However, the difference is marginal: only around 6 fewer participants needed to complete the baseline first, so the results are not skewed.
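Counterbalancing such a study is straightforward; below is a minimal sketch of alternating assignment as participants arrive, assuming order is assigned server-side (names are hypothetical).

```python
from itertools import cycle

# Alternate the test order across arriving participants so each order
# covers half the sample, instead of assigning orders at random.
ORDERS = cycle([("baseline", "use-wear"), ("use-wear", "baseline")])

def assign_order():
    """Return the menu order for the next participant."""
    return next(ORDERS)
```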
Qualitative responses for effectiveness, counted in Figure A.8, and desirability, counted in Figure A.10, ranked highly. When coding the optional written responses for effectiveness and desirability, the audience was polarised on effectiveness and showed general desirability. Effectiveness responses from 68 participants were aggregated into three categories: not effective, partially effective and effective, shown in Figure A.9. Of this group, roughly half felt the use-wear was completely effective. Desirability responses from 53 participants were aggregated into four categories: improved parsing, favourable, distracting and indifference, shown in Figure A.11.
Interviews
Interview motivation
Cross-disciplinary interviews offer the opportunity to evaluate the practical work completed so far in Supporting on-screen spatial memory through use-wear from a variety of perspectives. The range of interviewees' perspectives can provide direction for future work, based on consistent advice indicative of best-practice approaches.
Interview design
In exploring how different disciplines support people's activities through various media, the interviews had two aims: to gain critical feedback on the practical work conducted so far, and to define a way forward based on the interviewees' design practices for supporting people's activities.
To gain critical feedback I framed the interviews around my research conducted to this point on spatial memory on-screen. I specifically asked how the interviewees sought to reduce cognitive load and support spatial memory, if they used those approaches. This language was used at the time to frame the work concerning HCI; however, the interviews deliberately allowed interviewees to provide their own language and approaches.
To collect responses beyond my research work, and language specific to HCI, a semi-structured interview approach was adopted. The interviews involved a series of specific questions about my research and general questions about the interviewees' practices. Any points needing clarification were elaborated upon with off-script questions. The interview process sought to capture as much information about people's unique practices as possible.
Ethics and recruitment
Ethics approval from the Monash University Human Research Ethics Committee (MUHREC) was granted for speaking to 10 interviewees for a maximum of 1 hour. Interviewees were recruited from known sources and suggestions from these sources.
To speak with a broad sample group, I picked a variety of researchers and practitioners from backgrounds based in modern and traditional art, design and communication practices, on and off digital media, with established and senior experience. The specific specialisations of the interviewees are defined ahead in the Interview results and analysis in Table B.1.
Interview procedure
Interviews were performed one on one, in person, with a semi-structured approach. Off-script questions were asked to gain clarification on answers from participants. Also, knowledge about my own as well as interviewees' work was brought to interviews to ensure a dialogue could be sustained.
The following is a full description of the process with a rationale for each step:
- To set the tone for the purpose of the interviews, they began with a brief introduction on my part: “Let me introduce myself—I am Domenico Mazza, a PhD student working on making complex information presented on screens easier to absorb by supporting people's memory while interacting. The intent of the interviews is to get an understanding of how my research fits into practice and how I should shape it. The interview should take no longer than 1 hour.”
- To gain an idea of the interviewee's involvement in design, I asked them to reciprocate with an introduction of the design work they were involved in: “What kind of design work are you involved in?”
- To gauge whether the interviewee considered memorability, I asked whether they “think about memorability or the memory capacity of an audience or person interacting” with their work.
- To gauge what makes the interviewee's practice distinct, I asked what distinguishes their professional practice from others. To support a dialogue, this was preceded by showing work I have produced within my own professional practice and the concerns behind it.
- To gain a specific idea of methodologies followed by the interviewee, I asked: “if you want to clearly communicate information through design—say a particular message for something, or a kind of functionality—what principles or methods would you follow?” To support a dialogue, I brought up what I knew about the interviewee's work beforehand.
- To gauge whether memorability and cognitive load were considered by the interviewee, I asked if they “ever considered the cognitive load placed on an audience” and how they would handle cognitive load issues in their work. Immediately before asking these questions, an overview of relevant concepts and the Memory Menu design was shown, to ensure interviewees were not left wondering what supporting spatial memory in HCI involves.
- To establish whether considerations of memorability and cognitive load had any resonance with the interviewee, I asked what they thought “of a design perspective or logic based on reducing cognitive load and supporting memory”.
- To conclude, I gained feedback on the Memory Menu design by asking the interviewee what they thought of it.
The questions and protocol above were tested on a professional design colleague before being applied in practice. This helped iron out issues, especially in describing concepts around supporting spatial memory on-screen.
To allow responses to be classified (or coded), the interviews were recorded and transcribed, and question sheets with notes scanned. I was able to code responses to determine patterns and unique responses.
Interview coding
An open coding approach was used to suit the open nature of the interviews. Classifications (or codes) could only be determined once the data was collected. Theoretical questions were asked while sifting through the qualitative data: theoretical questions are “questions that help the researcher to see process, variation, and so on, and to make connections between concepts” [20, p.8]. This was necessary as different interviewees used different expressions and examples to describe what often turned out to be similar concepts. For instance, one interviewee from a marketing background described designing for people's needs and wants, while another from a cognitive science background described this as designing for the users' preferred modalities.
The open coding took several passes to ensure adequate coverage of the responses provided. Certain responses required re-evaluating codes to see whether they held relevance or needed adjustment. Adjustments were made to fix codes that were too generalised or too specific, e.g. at one stage the code for context was divided into three kinds of context: spatial, audience and domain. The language used by interviewees has been preserved in reporting the Interview results and analysis, to retain the intended meaning of responses.
Interview results and analysis
The adopted interview design generated four results tables from which to draw an analysis:
- Interviewee specialisations and responses, Table B.1
- Consistent design practice approaches, Table B.2
- Unique design practice summaries, Table B.3
- Memory Menu critique, Table B.4
The specialisations of interviewees have been highlighted in Table B.1 to reveal their backgrounds. IDs allow interviewees' responses to be traced throughout the four tables.
The emergent codes from the interviewees' responses revolved around considering context, maintaining empathy, considering memory and following a set method/process. Responses with reference to supporting people's memory and cognitive load have been highlighted in bold.
The interviewee specialisations and responses in Table B.1 below represent a raw data summary, which is broken down further on by looking at consistent design practice approaches in Table B.2 and unique design practice summaries in Table B.3.
Table B.1: Interviewee specialisations and responses (interviewees P1–P8).
The responses shown in Table B.1 speak volumes about individual approaches to design problems, but also reveal consistent threads, in particular dealing with the audience's context and showing empathy towards them when making design considerations. Because of this, all interviewees had a strategy for dealing with memory and cognitive load, even though only 5 of 8 interviewees dealt with memorability head on and 3 of 8 dealt with cognitive load head on. The language around cognitive load was popular with the interaction and HCI designers. However, it was common to see synonyms for consistent approaches, highlighted in Table B.2, that bridge the differences in language towards common goals.
Table B.2: Consistent design practice approaches, shared by the interviewee groups (P2, P4, P6, P7); (P1, P3, P5, P8); (P1, P2, P4, P5, P6, P8); (P3, P4, P6, P7, P8); (P1, P5); (P3, P4, P5); (P1, P2, P5); (P1, P4, P5, P6, P8); (P1, P4, P7, P8); and (P3, P4, P5, P6, P7).
The consistent responses shown in Table B.2 reveal common goals that move beyond supporting memorability and cognitive load. The only consistent approach for supporting memory was to use familiar visuals, while any considerations for cognitive load were interspersed, with considerations instead placed towards: the target audience and their knowledge; the audience's behaviour and supporting purposeful behaviour; and working closely with their audiences to iteratively generate designs. It should be noted that these considerations are quite standard to design practice.
In summarising the interviewees' unique practices in Table B.3 it is possible to paint a way forward beyond standard design practice.
ID | Response |
---|---|
P1 | Values supporting or enhancing a user’s current method of interacting. Feedback collected from users is used to ensure the design intervention does not compete against the user’s cognitive load or workflow, and to instead offer suitable options. |
P2 | Guides their communication based on applicable criteria defined by the client, as well as criteria based on what the client has experienced in the past in terms of successes and failures. |
P3 | Values working within a historical and cultural context so the work is relatable to an audience. The practitioner also values ‘avoiding didacticism’, a rule to ensure that work produced is not overtly obvious in its message or sentiment, but allows the opportunity for a deeper engagement. This works in a similar vein to ‘narrative’. The practitioner acknowledges the approach only works if the audience engages with a work; however, an ‘aesthetic hook’ is implemented to ensure an aesthetic quality engages an audience. |
P4 | Values consistency and simplicity in displaying data for a specific technical domain. Outcomes situate data spatially on maps, with the use of heatmapping where possible. |
P5 | Values engagement with the target audience to ensure design outcomes are accessible. Time is taken to interact with the audience on determining offerings, interaction preferences, as well as how the design is situated and responds to various audience contexts. |
P6 | Places high emphasis on design research. The design brief plays a critical part as a guiding document based on research, market data and client collaboration to ensure the final outcome meets client and audience expectations. |
P7 | Values understanding the relevant domain of a design outcome. Workshops conducted go towards helping the practitioner decipher source material to create a design, and getting audience feedback to improve a design. |
P8 | Focuses on going against cultural constraints in digital media by rewriting existing visual grammar and seeking what is ‘digitally native’ or has the least amount of human influence on it. While the outcomes are not meant to be readily understood, applications are user tested and exploration is encouraged and allowed. The practitioner also touched on how narratives are formed by giving an audience a sense of agency to be able to act on something and observe a meaningful response or consequence from it. |
The interviewees' unique practices, as shown in Table B.3, reveal four unique approaches to further what can be done to support people's activities beyond supporting spatial memory and reducing cognitive load in the Memory Menu:
- Supporting multiple modalities or catering to people's wide range of needs and abilities in different situations
- Working within a cultural context and history which are relatable to people
- Allowing people to explore on their own by avoiding didacticism
- Exploring what is possible with a design by moving away from cultural constraints and what is culturally accepted
With respect to advancing the design of the Memory Menu, interviewees offered the following critiques, shown in Table B.4. These critiques were given after hearing an explanation of the concepts behind the Memory Menu and seeing an overview of the design.
Table B.4: Memory Menu critiques from interviewees P1–P8.
Critiques of the Memory Menu, as shown in Table B.4, were varied, with a range of constructive comments. The most common response was to consider different adaptiveness options based on an individual's context and needs. This was put simply by P8 as determining what phenomena should be used for contextualisation. Suggestions included applying the use-wear effect based on events, a time range or directly to activities in physical spaces on a map (P3, P4). Other suggestions included using the Memory Menu as a learning tool (P3, P5) and as a way to show the usage patterns of others (P7), which has been done in Patina [66] by Matejka et al. (2013). Criticism centred around how the Memory Menu might occlude information or go against the user’s will, a recognised issue in adaptive interfaces known as hunting [17, p.208]. As an alternative, P1 advocated the development of a multi-modal approach.
Computational Costume prototyping and presentation
The prototyping and presentation of Computational Costume facilitates the imagination of speculative digital media, without investing in the development of technological hardware or software. Computational Costume design scenarios are created with lo-fi physical materials, which are brought to life through exhibition, performance and film-making. Effects such as digital wearable interfaces are presented through wearable materials and defined through their presentation. This approach enables designers to readily experiment with and evaluate design ideas without being constrained by the limitations imposed by today's technology.
In this section, I provide:
- The Background for prototyping and presentation using lo-fi physical materials, exhibition, performance and film-making
- The Objectives for prototyping and presentation that guide how materials are applied in Computational Costume
- A review of how materials have been applied for imagining Computational Costume in Material applications
- A review of the presentation methods used for imagining Computational Costume in Presentation methods
Background
Prototyping Computational Costume using lo-fi physical materials, exhibition, performance and film-making stands in contrast to usual methods for exploring the design of augmented reality and wearable digital media. The methods adopted in my research allow greater flexibility to explore imagined ideas.
The prototyping of Computational Costume follows from the tradition of inexpensive and versatile cardboard and paper prototyping used in evaluating conceptual designs for digital media on-screen. This kind of prototyping is considered by Ehn and Kyng (1991) in Cardboard Computers as a kind of design game for envisionment that allows hands-on experience instead of conceptualising designs through schematics [24]. The physical materials used are readily available, easy to assemble with basic craft skills and durable enough for their intended use as prototypes and props.
The alternatives to physical materials generally used to explore new digital media require a larger investment in skills, technical hardware and time. These alternatives include visual effects, programmed virtual graphics and electronics:
- Visual effects: speculative visions for augmented reality are shown through film-making with advanced visual effects, such as Hyper-Reality [67] by Keiichi Matsuda (2016) and Playtest [15] on the TV series Black Mirror by Charlie Brooker (2016).
- Programmed virtual graphics: virtual wearables require the use and development of computing hardware and graphics, such as Project North Star [60] by Leap Motion, Inc. (2018) [31] [58] [59].
- Electronics: physical wearables that feature digital abilities are created with electronics, such as Exploring Computational Materials for Fashion: Recommendations for Designing Fashionable Wearables [32] by Genç et al. (2018), and the work of the art collective KOBAKANT; see Modular placement and prototyping [52] by Nadja Kutz (2018).
Lo-fi physical materials are a speedier medium to work with for designers who are more adept at conceptual development. These designers can focus on the design concept at hand by encouraging audiences to use their imagination, in contrast to engaging technical skills for composing realistic visual effects, programmed virtual graphics or electronics. Designers who are adept at applying the aforementioned technical skills may still choose to use lo-fi physical materials as a first step in their design process to develop, present and evaluate ideas without using final materials.
Objectives for prototyping and presentation
The objectives for prototyping and presentation provide a simple rubric for guiding the successful application of physical materials in Computational Costume. The materials used in Computational Costume design scenarios were applied with the criteria listed below as a guide.
The prototyping and presentation of materials in Computational Costume are judged principally on the:
- Ability to achieve effects with as minimal cost in time and resources as possible
- Ability for designers and uninitiated audiences alike to conceptualise the imagined effects produced
To achieve the above objectives, I have opted to apply craft processes using lo-fi physical media such as textiles, paper and cardboard.
For Computational Costume to meet the above criteria for the application of physical materials, materials must be:
- Readily available and inexpensive
- Strong when required to be strong e.g. when one material is holding the weight of another
- Flexible when required to be flexible e.g. when a material is being worn
For Computational Costume to meet the set criteria for presentation of physical materials, presentations should be:
- Authentic: display imagined effects as closely as possible to the intended digital effect e.g. if an imagined object's movement is instantaneous, it should be presented as such
- Non-didactic: avoid didactic explanations of imagined effects unless it is absolutely necessary
A few optional material objectives have been adhered to throughout the design process where possible. These objectives have involved the use of:
- Biodegradable materials: where possible, materials that are not readily biodegradable have been avoided or substituted for in cases where a practical alternative exists; the rationale for this objective is that it is not a requirement for the prototypes produced through this work to last for hundreds of years, and materials that are not easily biodegradable may go on to become ecological hazards if disposed of through landfill
- Upcycled materials: viable materials destined for landfill, such as packaging or material offcuts, have been reused for works produced; this process adds value to seemingly valueless materials and reclaims materials that would otherwise go to waste
Material applications
A range of materials have been applied in different ways to produce designs and imagined effects for Computational Costume. The materials used possess their own strengths and weaknesses in achieving the objectives for prototyping and presentation as covered. I review the application of materials for:
- Mock-ups and patterns to roughly compose and plan designs before committing to the use of final materials
- Creating imagined esemplastic Objects and wearables
- Textile fasteners used on objects and wearables for performance and presentation
- Supporting structures for exhibition materials
Mock-ups and patterns
Paper mock-ups and Paper fabric patterns for cutting garment textiles have allowed designs to be planned before committing to final materials. These items have predominantly been made from paper, which is inexpensive and available in a variety of sizes, allowing a wide range of uses.
Paper mock-ups
Paper has allowed the quick mock-up of different prototypes for Objects and wearables, allowing flexible concept iterations before committing to the design of materials for fabrication.
An example of such a mock-up is the layout of the cardboard poster shown in Figure C.1, which allowed ideas to be easily applied, removed and visualised in 3D using a combination of paper and cardboard.
A similar paper mock-up process was used for the development of Computational Costume v1 performance, shown in Figure C.2. Paper notes resembling planned objects were arranged on garments to determine what should be made for the performance.
Paper fabric patterns
Fabric patterns allow the refinement of garment designs before committing to final fabrics. Once patterns are finalised, they go on to become templates for cutting fabrics. I have used paper as an economical and flexible material to design and produce fabric patterns, as shown in Figure C.3.
The paper fabric patterns shown in Figure C.3 do not look like conventional fabric patterns, which represent each full section of fabric for a garment. Each pattern is instead cut into a folded sheet of fabric. This design allows cutting within the constraints of a laser cutter bed and reduces the number of seams to be sewn for the pants.
For the shirt torso and arm patterns: the straight sides of the patterns are aligned with the fold of the sheet, which is not cut. The fold acts as the middle of the final unfolded fabric section, which is then joined with the other sections.
For the pants pattern: two separate pieces form the two legs. The curved area in the middle of each piece is sewn to its counterpart to form the internal side of the pants crotch, while the left and right edges of each piece are sewn together to form the circumference of each leg.
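For designers who draft patterns digitally, the fold can be treated as a mirror line: only the half-pattern is cut, and unfolding doubles it. The minimal Python sketch below is illustrative only and not part of my original workflow; the function names, bed size and sleeve dimensions are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical helpers for fold-based patterns.
# A half-pattern is drawn up to the fold line; mirroring it previews the
# full unfolded fabric section. All dimensions are placeholders (mm).

def mirror_about_fold(half_outline, fold_x=0.0):
    """Reflect a half-pattern outline (list of (x, y) points) about a
    vertical fold line at x = fold_x to give the full unfolded outline."""
    reflected = [(2 * fold_x - x, y) for (x, y) in reversed(half_outline)]
    return half_outline + reflected

def fits_laser_bed(outline, bed_width=600.0, bed_height=300.0):
    """Check that the piece to be cut (the folded half) fits the bed."""
    xs = [x for (x, _) in outline]
    ys = [y for (_, y) in outline]
    return (max(xs) - min(xs)) <= bed_width and (max(ys) - min(ys)) <= bed_height

# Hypothetical half-pattern for a sleeve, straight edge on the fold (x = 0).
half_sleeve = [(0.0, 0.0), (250.0, 0.0), (220.0, 180.0), (0.0, 200.0)]

assert fits_laser_bed(half_sleeve)         # only the folded half is cut
full_sleeve = mirror_about_fold(half_sleeve)
print(full_sleeve)                         # the section after unfolding
```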
The coloured sections of the garments shown in Figure C.4 correspond to the pattern sections shown in Figure C.3.
To ensure the patterns produced garments of the correct size and fit, I deconstructed secondhand clothing of an appropriate size and fit. It is important to note the clothes were used only as a guide to determine sizing and form, not to copy the garment design, which would infringe copyright.
Objects and wearables
Imagined esemplastic objects and wearables feature prominently in Computational Costume. The materials used for these items serve to illustrate the potential for both virtual and physical qualities. In Presentation methods I explore how physical materials can spark audiences' imagination of virtual qualities. Below, I explore how different lo-fi materials have been engaged to meet the varied demands of exhibition, performance and film-making.
Cardboard objects and wearables
Cardboard has served as a simple all-purpose material for presenting both objects and wearables, as first explored in the Cardboard poster and interface. The material is easy to find, resilient and austere. Also, as explored in Material choice, cardboard is an ideal candidate for the speculative design explored through Computational Costume: it has no conceptual attachment to digital media, and its traditional attachment to packaging is not played upon.
Cardboard objects
Cardboard objects have generally been made from flat, laser-cut pieces with interlocking slots, allowing 3D forms to be assembled without glue. Double-corrugated (two-layer) cardboard has been used in preference to common single-corrugated (one-layer) cardboard. 7 mm double-corrugated board, found for free in discarded large-appliance packaging, has been used to create the cardboard poster, the signage shown in Figure C.5 and the Cardboard mannequins for Computational Costume v2. These objects have worked particularly well, except where too much force has been placed on the interlocks, as with the initial version of the cardboard mannequins.
Cardboard wearables
Thin single-corrugated board, found for free in discarded small-electronics packaging, has been used for the wearable Cardboard interface. This kind of thin board offers a better combination of strength and flexibility for small wearables. However, it is not as flexible as the textiles covered below.
Cotton broadcloth objects and wearables
Cotton broadcloth has served as an all-purpose material for both objects and wearables, ideal for its flexibility and availability in a wide range of colours. In addition, the material is a natural fibre and is biodegradable—so any ecological harm from its disposal is diminished.
In addition to its normal abilities, cotton broadcloth can be made rigid for situations that demand both flexibility and rigidity. The process is covered in Stiffened cotton broadcloth.
Cotton broadcloth objects
Cotton broadcloth objects made for Computational Costume have mostly been laser-cut, which allows for fine graphical details; hand-cutting and machine stitching were explored as alternatives. Figure C.6 compares a hand-cut object with laser-cut objects. Laser cutting was adopted for its refined finish.
For applying details, I have used laser cutting, machine stitching and machine-stitched embroidery. I was able to use a sewing machine with the capability to apply embroidered lettering, as shown in Figure C.7.
For applying visual details, alternatives such as handwriting, screen-printing and conventional ink or laser printing on cardstock could have been used. I opted for the methods applied because they were the most visually striking. In addition, the fabric objects were more durable props, and I was able to make them using only hand tools, a sewing machine and a laser cutter. If the objects needed to be reproduced in larger numbers, screen-printing on fabric or conventional printing on cardstock would have been more suitable.
Cotton broadcloth wearables
The use of cotton broadcloth for wearables has evolved with the needs of the project. Exploration began with an adaptable modular design, moved through economical poncho designs, and ended with traditional clothing that could be easily worn, reproduced and configured; see Figure C.8.
As shown in Figure C.8, Computational Costume has gone through various design stages, most of which never made it to final applications. Most of these designs were centred on the idea that Computational Costume would be used in user studies. Designs commenced with a modular approach where the whole body could be temporarily wrapped, with the wrapping easily removed for analysis. However, this method proved both time-consuming to assemble and prone to slipping and constant re-adjustment as wearers moved. Designs then moved on to a poncho, which was easily reproducible with minimal material and easy to wear over clothing. In later designs, the ponchos aimed to cover as large a surface as possible and allowed wearers to hold materials in discreet pockets.
As designs transitioned from a user-study context to a performance context, easy application, removal and material economy were put aside in favour of visual impact. Designs would be worn like regular clothing and match the intended form of a working Computational Costume. This was achieved through a traditionally fitted and shaped garment, as shown at the end of Figure C.8 and in Figure C.4.
Stiffened cotton broadcloth
Cotton broadcloth was stiffened by hand to create the map tool in Computational Costume v2. This relatively straightforward and economical option made it possible to create rigid structures with the durability and flexibility of a fabric, adding to the repertoire of material options for Computational Costume. However, the process can reduce the biodegradability of the treated material.
Stiffening cotton broadcloth involves soaking the textile in a solution of equal parts water and polyvinyl acetate (PVA) glue, then air drying it. PVA glue was chosen over a starch solution to avoid attracting pests that might feed on the starch. However, PVA glue is only biodegradable in certain circumstances [2] and cannot be treated as an easily biodegradable material.
In Figure C.9, the process begins with soaking the entire textile in the 1:1 PVA and water solution. The soaked textile is left to dry; any visible excess globs of glue are removed so the glue does not accumulate. Dried excess glue is cleaned off by dabbing small amounts of water over it and brushing the residue away. The dried sheet is then hand steam-ironed on a light steam setting to flatten out wrinkles. At this point the cotton shares similar properties with a light cardstock while keeping the flexibility of a fabric, allowing the material to be folded and to retain any creases made.
Ready-made objects and wearables
Ready-made objects and wearables have proven valuable for saving time and resources. These items can be modified slightly to suit an intended purpose, rather than creating new items from scratch. The use of existing objects and wearables has been useful in several circumstances, as described below.
In Computational Costume v2 a small jar was re-purposed with minor modification to act as a container for a marker representing pain, as shown in Figure C.10. The lid of the jar was painted and a purpose-made fabric label was wrapped around the jar. The jar could thus be made less of a jar and more of a prop fitting into the flow of the film, as shown in Figure 4.20. This saved the need to design and make a container from scratch.
Ready-made clothing has been modified to avoid time spent producing new garments. For Computational Costume v1, T-shirts had their machine-sewn seams unpicked and replaced with small pieces of hook-and-loop fastener strip to allow their quick removal on-stage, as shown in Figure C.11.
For a preliminary version of Computational Costume v2, a pair of coveralls had loop fastener strips sewn on them to allow the attachment of objects with hook fasteners sewn on, for a live performance, as shown in Figure C.12.
Ultimately, the most successful modifications of ready-made objects were paired with performances where the illusion of esemplastic objects was not broken. As covered in Computational Costume v1, movements associated with the removal of clothing distracted audiences and took away from the intended effect. It follows that the ready-made items used in Computational Costume need to be stripped of the additional meanings attached to them by association. They need to be as neutral as the cardboard, as discussed.
Paper wearables
The documentation here on paper costumes serves as a warning. Light tissue paper and cardstock used in Computational Costume v0 presented an economical way to add large areas of colour and freely applied graphics. However, the same benefits do not apply when using paper as a wearable for a moving wearer. As shown in Figure C.13, paper is not flexible enough. Its properties limit it to applications that avoid shearing forces, such as small wearables like the Cardboard interface, or pieces affixed to a larger wearable.
Textile fasteners
For Computational Costume objects and wearables, textile fasteners have been used extensively in non-standard ways. Hook-and-loop fasteners, metal Snap fasteners and Pin fastening have been used for keeping imagined objects and wearables attached when needed. However, metal snap fasteners and simple steel pins have best suited the objectives for prototyping and presentation, being discreet, affordable and more readily biodegradable than polymer hook-and-loop fasteners.
Hook-and-loop fasteners
Hook-and-loop fasteners have been used extensively, although their use has been made redundant as Computational Costume has developed.
Hook-and-loop fasteners have been used in Computational Costume v1 for quick release T-shirts, as shown in Figure C.11, and attaching removable objects to modified coveralls, as shown in Figure C.12 for a preliminary version of Computational Costume v2. In an unused design, hook-and-loop fasteners were used to allow patches of fabric to be worn over clothing, as shown in Figure C.8.
The act of physically attaching and detaching objects, as hook-and-loop fasteners allow, is only useful in situations where attachment needs to happen without direct attention being given to the fastener, such as in performances. However, exhibition and film-making have presented more compelling presentation modes, as explored in Design scenarios. In addition, alternatives like Snap fasteners are adequate and can be arranged to make detachment and attachment easy and secure.
Snap fasteners
Metal snap fasteners, as shown in Figure C.14, present a favourable alternative to Hook-and-loop fasteners.
Snap fasteners attach and detach at fixed points, giving a reliable and consistent connection between materials: snap-fastened materials can only join at their matching points. In addition, snap fasteners are much more discreet than hook-and-loop fasteners.
There are several situations where snap fasteners would have been better suited, such as the quick-release T-shirts in Computational Costume v1, as shown in Figure C.11, and the layering of medical records for Computational Costume v2.
Pin fastening
Metal pins can be used as substitutes for Snap fasteners in situations where materials only need to be joined together temporarily or in a very discreet way.
In the video for Computational Costume v2, a discreetly placed pin is used to attach a marker to a small token, as shown in Figure C.15. When handled with caution, to avoid poking oneself, this method offers the most flexible and discreet fastening option for exhibition and for props in film-making.
Supporting structures
The use of physical materials in Computational Costume also extends to the creation of supporting structures. These structures act to hold imagined objects and wearables for exhibition and even small props in film-making. The materials need to be both strong enough and discreet enough to allow the works shown to be the primary focus. Below I cover experiences of creating and using: Cardboard mannequins, Steel wire supports and Timber supports.
Cardboard mannequins
Cardboard mannequins have been fabricated to hold the costumes made for Computational Costume v2, as shown in Figure 4.18 and Figure 4.19. They were made as a substitute for purchasing costly mannequins. More importantly, cardboard carries an amaterial quality with intrinsic meaning for Computational Costume, as discussed. This matters here because the designs presented are not fashion items, and traditional mannequins would carry that connotation.
Strong, double-corrugated board is a suitable material for mannequins. In the right form, this board can carry the weight of clothing. However, care needs to be taken with how and where weight is distributed. The first configuration of the cardboard mannequins was designed with arms and legs that could be articulated. As shown in Figure C.16, the interlocks on the arm and leg pieces had too much pressure placed on them, causing them to pinch and buckle despite added supports. The issues had to be quickly fixed with adhesive tape, steel wire and a single vertical support replacing the legs.
In keeping with the objectives for prototyping and presentation, a future version of the mannequin would correct any interlock that slips under gravity with a removable support such as steel wire clips or twine. The support needs to be easy to apply while a garment is on. As with any mannequin, the garment needs to be placed on the arms and torso before the arms and torso are attached together, because the arms of a mannequin are not as flexible as human arms.
Steel wire supports
Steel wire has been used for both large- and small-scale supporting structures, but has proven most useful at a small scale: propping up loose textiles or tightening structural joints that might otherwise hang limp in exhibition.
Before the cardboard mannequins were conceived, a steel wire mannequin was made for Computational Costume v0. The intention was to allow small items and panels to be hooked onto any area, as shown in Figure C.17. However, this modular platform was fiddly to create and work with. In addition, the findings of Computational Costume v0 led to mannequins being reconsidered in a complementary role, explored in Computational Costume v2.
Timber supports
Timber was used as a stronger alternative to cardboard for Computational Costume. Timber supports, when constructed well, are useful for exhibition, as shown in Figure C.18, and holding video props, as shown in Figure 4.25. Also, when timber is presented in an austere way, that is, without ornament, it carries the same amaterial quality as cardboard.
A grounded wooden support is much less time-consuming to apply than ceiling hanging in situations where objects need to be suspended. Ceiling hanging was used for the exhibition of Computational Costume v2, as shown in Figure 4.18 and Figure 4.19.
In Figure C.18, vertical dowels are fitted tightly into a stable ground support by drilling holes with a bit that exactly matches the dowels' diameter. This support holds smaller horizontal dowels for hanging objects. A supporting structure made in this way is easy both to erect and to move when needed.
Presentation methods
The presentation of Computational Costume has evolved to ensure audiences can adequately conceptualise the ideas presented. In line with the objectives for prototyping and presentation, uninitiated audiences and designers alike should be able to conceptualise the imagined effects.
Three presentation methods have been explored, with varying success. I present my experiences with Sculpture, Live performance and Video.
Sculpture
Sculpture has often been used to communicate what Computational Costume can do. It has worked best where works are featured in a video and exhibited alongside it.
The use of sculpture began with the cardboard poster and interface. The cardboard poster in particular was shown to audiences as part of the CHI 2017 (Conference on Human Factors in Computing Systems) Student Research Competition alongside traditional posters. Through form and content, the poster explained the rationale for esemplastic objects while also being a representation of such an object.
The findings from presenting the cardboard poster indicated that audiences praised the design. However, information on how well the concept was conveyed was not available, so the design was elaborated on again through sculpture in Computational Costume v0.
Computational Costume v0, imagined through sculpture, sought to reveal how esemplastic objects and wearables would come together. The sculpture was arranged to maximise the number of scenarios presented, with didactic captions to compensate. However, this was too much for a sculpture of a single Computational Costume wearer.
The findings from presenting Computational Costume v0 revealed the general public did not immediately understand the concept. When viewers received answers to questions they had about the work, they were able to conceptualise the ideas. I can speculate that more sculptures as part of a scene may have helped paint a clearer picture.
Sculpture has worked best as a complement to the video made for Computational Costume v2. Physical props and wearables used in the video are presented alongside the video in an exhibition, as shown in Figure 4.18. Through this method, audiences can see physical materials presented through video in a way that sustains the imagination of Computational Costume. Audiences can then refer to the sculptures presented in a scene to observe details they may have missed.
Live performance
Live performance was engaged in an attempt to imagine Computational Costume through a live experience. This technique was used in Computational Costume v1 and the preliminary stages of Computational Costume v2.
Computational Costume v1 was presented through a series of quickly removable costumes in a three-minute performance for a science communication competition. The findings revealed that the work's key message was not adequately conveyed: the performance was densely packed with scenarios and isolated to a single performer, so a concrete imagination of a complete ecosystem with multiple actors was lacking.
The early stages of Computational Costume v2 were planned to be a live performance where the wearer could engage directly with the audience. A modified pair of coveralls, as shown in Figure C.12, could be used to show esemplastic objects directly in action. In practice this was attempted with one wearer, which led to problems in showing audiences the work. As shown in Figure C.19, the coveralls and objects had to be placed on a surface so everyone was able to engage. A model wearing the garment with an actor to engage with the objects would have been an ideal way to encourage audience participation.
In practice, attempts at live performance needed further development. With enough performers a live performance could be a compelling platform to show audiences how Computational Costume could work. However, instead of developing live performances I adopted film-making as an accurate and accessible means to present a complete ecosystem for Computational Costume through video.
Video
The use of film-making to present imagined Computational Costume ecosystems through video has been the most compelling presentation method to date. Videos allow audiences to see imagined objects and wearables engaged as accurately as possible.
In Computational Costume v2, video allows engagement with multiple people and the surrounding environment to be presented. Additionally, video editing can be used to create several useful effects, such as:
- Instant costume switching, shown in Figure 4.21 and Figure 4.24; this effect shows how virtual costumes would switch. In practice this avoids the need to wear multiple layers or show the physical removal of layers as in live performance; see the sketch following this list.
- Perspective effects exploring what different wearers see, shown in Figure 4.20, Figure 4.21, Figure 4.22, Figure 4.24, Figure 4.26 and Figure 4.28; this effect allows viewers to experience what imagined wearers would experience.
- Object placement in particular video shots, shown in Figure 4.26 and Figure 4.27; this effect shows how virtual objects would work. In practice this avoids the need to cumbersomely carry multiple objects at once, an issue in live performance that was dealt with by wearing multiple layers and hiding objects under them until needed.
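As a rough indication of how the instant-switch effect can be assembled in editing, the minimal sketch below joins two locked-off takes at a hard cut, using the Python library moviepy (v1 API). This is an illustrative sketch under stated assumptions, not the actual edit used for Computational Costume v2; the file names and timings are hypothetical placeholders.

```python
# Illustrative sketch only: joining two locked-off takes at a hard cut
# approximates an instant costume switch. Assumes the moviepy library
# (v1 API); file names and timings are hypothetical placeholders.
from moviepy.editor import VideoFileClip, concatenate_videoclips

take_a = VideoFileClip("take_costume_a.mp4").subclip(0, 4)   # before the switch
take_b = VideoFileClip("take_costume_b.mp4").subclip(4, 8)   # after the switch

# With the camera and actor position unchanged between takes, the hard
# cut reads as the costume changing instantaneously.
switch = concatenate_videoclips([take_a, take_b])
switch.write_videofile("instant_switch.mp4")
```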
Video combines the best of sculpture and live performance: both prototypes and performances are presented in the clearest fashion. Alongside these benefits, physical props used in film-making can be exhibited alongside the video to allow audiences to observe details they may have missed when watching it, as shown in Figure 4.18.
Conclusion
The material applications and presentation methods explored for Computational Costume provide guidance for designers wanting to build upon the work. I have illustrated the lessons learned from creating and presenting imagined design scenarios and provided insight into material applications and presentation methods which have worked best.
Collectively the work enables designers to imagine objects and wearables for new digital media using lo-fi physical materials, including: cardboard, textiles and paper, with supporting structures made from cardboard, steel wire and timber. These materials have been used across a range of presentation methods including: sculpture, live performance and video. The methods presented here stand as an accessible alternative to advanced visual effects, programmed virtual graphics and electronics.
In following the objectives for prototyping and presentation, I have found that video exhibited alongside sculpture is the best-practice approach for audiences and designers alike to conceptualise Computational Costume. Designers can film performances with physical materials in set environments. The perspectives of different actors, together with editing, can maintain the illusion that physical materials are esemplastic objects with combined virtual and physical qualities. These videos can then be experienced alongside the physical props in exhibition, allowing audiences to engage with the finer details of the featured designs.