Can co-creation in extended reality technologies facilitate the design process?

Hatana El-Jarn (Department of Media, Leeds Trinity University, Leeds, UK)
Glen Southern (Department of Media, Leeds Trinity University, Leeds, UK)

Journal of Work-Applied Management

ISSN: 2205-2062

Article publication date: 9 July 2020

Issue publication date: 13 October 2020


Abstract

Purpose

The purpose of this paper is to explore the benefits of co-creation/co-design using extended reality (XR) technologies during the initial stages of the design process. The emerging co-creation tools within XR are reviewed, along with whether they offer the potential to improve the design process; this review also highlights the gaps where further research is required.

Design/methodology/approach

The paper draws on professional and academic experiences of the authors in creative practices within the realm of XR technology, co-creation and co-design. In addition, a review of the current literature on emerging technologies and work-based learning will offer further insight on the themes covered.

Findings

Designing, collaborating, iterating and amending with colleagues and peers in a virtual space offers a wide range of benefits. Creative practitioners in both education and employment are working more collaboratively with the advancement of technology. However, there is a need to find a space where collaboration can also offer the opportunity for co-creation that improves the initial stages of the design process. This technology also offers solutions to the constraints of distance and enhances creative expression.

Research limitations/implications

There is an opportunity to test the ideas expressed in this paper empirically; this can be done through testing co-creation tools with professionals, work-based learners and students.

Originality/value

The paper adds to the existing literature on emerging technologies as a unique environment in which to co-create/co-design the visuals produced during the fuzzy front end of the design process and offers a potential framework for future empirical work.

Citation

El-Jarn, H. and Southern, G. (2020), "Can co-creation in extended reality technologies facilitate the design process?", Journal of Work-Applied Management, Vol. 12 No. 2, pp. 191-205. https://doi.org/10.1108/JWAM-04-2020-0022

Publisher

Emerald Publishing Limited

Copyright © 2020, Hatana El-Jarn and Glen Southern

License

Published in Journal of Work-Applied Management. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

The global adoption of digital technology allows individuals, businesses and communities to engage, collaborate and communicate synchronously using a variety of digital tools (Lee et al., 2012; Gu et al., 2011). Digital technology is now extensively used by businesses worldwide to enhance economic growth, improve productivity and create new services (Billon et al., 2010; Park et al., 2015; Leong and Sung, 2018; Sung et al., 2019). The range of businesses adopting digital technologies is continuously expanding because of their ease of use (Fagan et al., 2012) and the consequent reduction of time constraints (Wang et al., 2019), especially when geographical distance is a factor (Uva et al., 2010). It is particularly attractive for collaboration, co-creation and co-design for creative practitioners working on projects simultaneously, who may require real-time shared platforms to progress projects (Gu et al., 2011). This can take the form of designers working alongside other designers, designer and stakeholder/client, learner and teacher or art director and designer. While digital technological developments have led to significant improvements in supporting changes in design practice, very little is known about the role and the impact of these technologies on design collaboration and how different types of digital technologies affect the design process. Knowledge of this impact is essential, as the collaborative and co-creation/co-design process has always been integral to the success of design projects, from ideas and concepts through prototyping to finished design.

A recent solution to the issue of geographical distance and time constraints is to generate ideas and concepts, and then collaborate and co-create, using extended reality technologies. While the demand for this way of working is high, tools within the technology have so far been limited and thus have failed to encourage universities and companies to engage in these new ways of working. However, recent advances in co-creation tools within extended realities offer an enhanced, vibrant space for learning, collaboration and co-creation/design where users can deepen connections through creative expression. Although some research has been carried out on the use of co-creation tools within these systems, it does not provide adequate guidance on how ideation and concepting could be effectively combined with co-creation tools and thus exploited by learners and professionals within the creative industry. To address this shortcoming, our paper will review the latest solutions in extended realities and illustrate how ideation, concepting and co-creation can be an efficient, enriching experience for learners and professionals. For the purposes of clarity, it will be useful to first offer our interpretation of co-creation, co-design and extended realities.

Co-creation and co-design

Following an extensive review of the literature, we found “co-creation” was employed in many scholarly fields, including health, engineering, architecture and technology, to name a few. The terms “co-creation” and “co-design” are used interchangeably, with definitions being blurred depending on who is referring to them. There is currently no dictionary definition of “co-design” or “co-creation”, and opinions on what it should involve vary widely (Sanders and Stappers, 2008). However, some scholars use “co-creation” to address how social, cultural, economic and technological changes enable organisations, groups and individuals to interact, collaborate and solve problems by jointly generating solutions and creating value (Russo-Spena and Mele, 2012; Chesbrough, 2011). Gu et al. (2011) note that co-design is a human-centred technical activity, which allows multiple people to work collaboratively on design projects, across different time zones, spaces, disciplines and cultures. Sanders and Stappers (2008) narrow down the term to highlight the developmental stage in the design process and to refer to designers and non-designers working together during this stage. Co-creation “refers to any act of collective ‘creativity’”, including those prior to the developmental stage such as idea and concept generation, thus placing co-design as a specific instance of co-creation (Sanders and Stappers, 2008). Co-design is also used as a technique in collaborative design, involving participants working simultaneously in the co-creation process. Co-creation is based on the belief that the end user's presence is essential in the creative process, bringing together users and designers to work towards a shared goal (Dorta et al., 2019). Working with stakeholders, designers, researchers and end users provides a platform to consider one another's views, beliefs, needs and preferences and to generate solutions together (Schuler and Namioka, 1993).

Co-creation uses similar principles of research and concept design to identify a solution that provides users with better experiences and businesses with improved and innovative services (Dorta et al., 2019). Working collaboratively for a resolution is not a new idea. Dewey (1997) advocated organising processes of shared problem-solving, in which people collaboratively explore, discuss and define a problem and collaboratively develop and evaluate possible solutions. For example, inquiry as a process starts from a problematic situation that moves (by productively combining doing and thinking) to a resolution: “Inquiry is the controlled or directed transformation of an indeterminate situation into a unified whole” (Steen, 2013). In such a process of inquiry, the aim is not to develop universal knowledge that represents some external reality, but to bring people together so that they can jointly explore, try out, learn and bring about change in the desired direction (Steen et al., 2011). Collaboratively working through the design process has also been called participatory design, which has been linked to similar activities such as co-creation and co-design. Participatory design and co-design share the advantages of integrating individuals’ skills and experience(s), as they are revealed through the unfolding exchanges (Schuler and Namioka, 1993).

Participatory design started in Scandinavia in the early 1970s, aiming to involve workers in shaping the future of their work through a more democratic process (Sharp et al., 2019). It is used most frequently in the planning and discovery phase of a research cycle. Participatory design as a research method encourages collaboration, rotating the roles of researcher, designer and user to elicit collective creativity in the design development process (Sanders and Stappers, 2008). In addition, designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own (Trischler et al., 2017). In participatory design/co-design, the user, for whom the service/product is under development, is the “expert of their experience and contributes with increased agency to knowledge development, idea generation and concept development” (Sanders and Stappers, 2008). The researcher and designers provide the tools for expression and ideation of needs and desires. Sanders and Stappers (2008) note that “participation at the moment of idea generation” is an important place to be practising participatory design, but that participation is also important throughout the design process at all key moments of decision. However, Opiyo (2016) argues that designers need effective techniques to visually project their ideas about complex products or systems at the “fuzzy front end” of the design process. The “fuzzy front end” refers to actions, such as ideation and concepting, at the beginning of the development process, when the product or service is still at the deciding stage and making changes is less expensive (Khurana and Rosenthal, 1998). Figure 1 presents a simple interpretation of the fuzzy front end, an essential part of the design process. However, it is worth noting that Koen et al. (2001) outline five distinct front-end elements before the idea generation stage: opportunity identification, opportunity analysis, idea genesis, idea selection and idea and technology development. While collaborative problem-solving through co-creation (Basadur et al., 2000) inspires new ways of solution synthesis (Goldschmidt, 2007), visual representations are central to attaining effective collaboration and vital in achieving creative solutions because they are a means of communication, successfully capturing and conveying information to collaborating parties. They also offer a means to explore various quality aspects such as functionality and usability whilst keeping an eye on the user environment and contexts (Opiyo, 2016). There are several immersive technology environments, known also as extended realities, offering the potential to co-create/design during the fuzzy front-end period of ideation and concepting. The following section will offer some exploration of these extended realities in the first instance and will then link this to the use of co-creation and co-design throughout the stages of the design process.

Extended reality technologies

Extended reality (XR) is a developing field, and definitions are likely to change, but currently, it is widely used as an umbrella term for virtual reality (VR), augmented reality (AR) and mixed reality (MR) while not being limited to these areas (Ludlow, 2015). The potential for this platform to alter how we work, collaborate and generate projects is extensive (Lee et al., 2012; Guo, 2015). XR has recently become more accessible and economically viable and is now used in everyday life, most notably in the entertainment industry – for example, in video games and mobile gaming – but also in engineering, architecture, healthcare and education (Quint et al., 2015; Bellini et al., 2016). The technology is used for a variety of applications such as collaboration, visualisation, co-designing/co-creation, training and education (Scholz and Smith, 2016; Bacca et al., 2015). Some companies have focused on one specific area of XR. For example, Apple has revealed it has no plans to incorporate any VR-capable hardware solutions and instead has been promoting a wide range of AR applications to enhance its iPhone and iPad ranges. Apple's 2020 iPad Pro was released with a built-in LIDAR (“light detection and ranging”, or, alternatively, “light imaging, detection, and ranging”) scanner. Oculus and Vive have created VR head-mounted displays (HMDs), and so has Microsoft. However, Microsoft has adopted the term “Mixed Reality Headset”, which is a misnomer, as the headset is a VR headset; MR is another field within XR. A LIDAR feature on an iPad would seem to be an obvious statement that Apple is focusing on AR and not the wider scope of XR.

XR offers the potential for co-creation with experiences that have never been achieved at this level before. HMDs, goggles and mobile technology now have the potential to deliver high-quality graphics, and experiences are being offered on mobile phones and untethered platforms globally, making them attractive to businesses and consumers alike. VR is one of the most common forms of XR and is generally defined as a computer-generated experience whereby the user is transported into a simulated virtual world (Uhomoibhi et al., 2019). There are various levels of immersive technology that exist within VR, classified into non-, semi- and fully immersive systems (Fox et al., 2009). In non-immersive VR, users do not have a virtual surround view of the simulated environment, whereas a semi-immersive VR experience offers a larger view of the 3D-generated virtual world, mainly using a large screen or an HMD/goggles. Fully immersive virtual environments are often implemented using a cave automatic virtual environment (CAVE), which is intended for several users to be immersed at the same time (Pallot et al., 2017) using multiple large-screen devices, with projectors directed to between three and six of the surfaces of a room-sized cube to offer a complete (3D) view of the virtual environment, as demonstrated in Figure 2, and commonly combined with other devices such as gloves, touch screens, wands or controllers. However, a CAVE can be quite expensive for many small businesses; instead, cheaper options are available, such as the HMD, although some HMD users report motion sickness, which poses challenges (Baxter and Hainey, 2019). AR, not to be confused with VR, is an interactive experience of a real-world environment in which real-world objects are enhanced or overlaid with computer-generated imagery simultaneously, such that they are perceived as an immersive aspect of the real environment (Schnall et al., 2012). Wu et al. (2013) state that AR as a system fulfils three basic features: a combination of real and virtual worlds, real-time interaction and accurate 3D registration of virtual and real objects. In this way, AR alters users' continuing perception of the real-world environment.

MR utilises both VR and AR to produce new environments and visualisations, where physical and digital objects co-exist and interact in real time (Figure 4). MR does not exclusively take place in either the physical or virtual world but is a hybrid encompassing both AR and augmented virtuality via immersive technology (Milgram and Kishino, 1994; Mountain and Liarokapis, 2007). MR is also referred to as merged reality, hybrid reality or augmented virtuality and has been used across a variety of sectors, including the reconstruction of heritage restoration projects (Huang et al., 2009), augmenting visits to museums and historic buildings (White et al., 2004) and medical training (Albrechta et al., 2013). Other studies have explored social VR (Dorta et al., 2019), co-creating in VR (Ranjbarfard and Heidari Sureshjani, 2018) and using VR in establishing product aesthetics (Valencia-Romero and Lugo, 2017). However, few studies have been found concerning the role of VR and AR in the early conceptual stages of the design process (Ekströmer and Wever, 2019). A pilot study carried out by Ekströmer and Wever (2019) explored VR sketching and found it useful overall in the early stages of the design process compared to using real pen and paper. However, the participants found the ability to “undo” in VR (Gravity Sketch) increased the length of time spent sketching, which resulted in fewer completed sketches than pen and paper. We would argue this may be due to the participants' lack of experience of sketching within VR, rather than their use of the undo feature. The additional lines or “mistakes” made through sketching on paper are usually accepted as part of the development process (Dennett, 2020), and the same could be encouraged when sketching in VR. This would reduce the need to use the undo feature and the time spent trying to fix “mistakes”. Karakaya and Demirkan (2015) found that digital environments help the externalisation of design decisions in sketching and drawing. The externalisation supports creativity by translating mental imagery into conceptual representations, moving towards concrete representations (Fischer et al., 2007). Shneiderman et al. (2020) state that the goal of a digital environment is to develop better software that allows users to be more productive and innovative; an example of creating high-specification visuals in an immersive environment is provided in Figure 3. In addition, collaboration-orientated tools within a digital environment have a positive impact on social creativity (Shneiderman, 2007). The collaborative tools focused on within this paper are XR technologies, and the following sections will expand on ideation and concepting within VR.

Ideation and concepts in a virtual reality environment

Ideation and the formation of concepts have been evolving alongside technology, with many industries needing to produce designs (Knoll and Horton, 2011). Design teams in a range of industries have explored ways to improve the process of generating initial ideas based on the requirements of the product or external stakeholders. One way to improve the process of ideation is to offer more opportunities for increasing the number of iterations. Being able to increase the number of ideas inevitably leads to more avenues of exploration and thus a better chance of producing designs that are innovative (Karakaya and Demirkan, 2015). Currently, design teams are often split into smaller groups to develop ideas and concepts and then re-grouped to review those concepts together. There may be less confident members in the team who may have a valid or pertinent idea but might not have an opportunity to put it forward due to lack of experience, confidence or time. Collaborating in a virtual space could benefit these contributors by building a culture that encourages innovation through generating an increased number of ideas and/or iterations (Russo-Spena and Mele, 2012). This will often lead to ideas being proposed that might have once seemed absurd. However, mistakes during the design process are key to making progress and an essential opportunity for learning (Dennett, 2020). Therefore, teams can put forward ideas without fear of criticism in this early stage, as the fuzzy front end has the potential to produce more focused results. Similarly, creating a climate where the ideation sessions are short and energetic rather than a slog will often lead to more and/or better results. Creation tools within VR offer collaborators a vehicle with which to co-create, develop and exchange ideas (Dorta et al., 2019), make mistakes, iterate and concept. The following section will offer a critical overview of some of the current VR creation software available today.

Virtual reality creation software

The authors, based on professional and academic experience, carried out an initial review of a wide range of VR creation tools and found an extensive list of companies and products that have been created in the past five years, some of which have the potential to include co-creation features in future updates. There is scope to carry out research into some of these newer features entering the market.

Adobe Medium: Adobe acquired a voxel modelling app called Medium from Facebook at the end of 2019. Medium is voxel sculpting software; voxel sculpting is a type of sculpting in which 3D models are created entirely out of 3D cubes or, more accurately, within a 3D grid. Artists are able to create these models by using VR controllers to add geometry inside the specified grid, offering the feeling of “sculpting” in the air in VR space. In 2016, Medium (Oculus Medium, 2018) built into its software the ability to have an artist invite another person into their virtual workspace (Thoravi Kumaravel et al., 2019), but whilst this may have been revolutionary at the time, it was limited, not allowing any interaction beyond audio. The artist was able to record him/herself in VR from multiple angles, which was useful, but without the visual interaction of other users, co-creation was restricted. In addition, with the primary artist being in control of the scene and creating a closed environment (like a room), the second artist could feel as though they were being catapulted through the air in VR space.
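To make the voxel idea concrete, the short sketch below represents a sculpt as a set of occupied cells in a fixed 3D grid, with simple add and erase brush operations. It is a minimal illustration of the underlying data structure only; the grid size, brush shape and function names are our own assumptions and do not reflect Medium's actual implementation.

```python
# Minimal sketch of voxel sculpting: the model is a set of occupied cells in a 3D grid.
# Hypothetical names and parameters for illustration only, not Adobe Medium's implementation.
from itertools import product

GRID_SIZE = 64  # assumed resolution of the sculpting grid


def brush_cells(centre, radius):
    """Return the grid cells covered by a spherical brush placed at `centre`."""
    cx, cy, cz = centre
    r = int(radius)
    cells = set()
    for dx, dy, dz in product(range(-r, r + 1), repeat=3):
        if dx * dx + dy * dy + dz * dz <= radius * radius:
            x, y, z = cx + dx, cy + dy, cz + dz
            if 0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE and 0 <= z < GRID_SIZE:
                cells.add((x, y, z))
    return cells


def add(model, centre, radius):
    """'Sculpt' by filling the cells under the brush."""
    return model | brush_cells(centre, radius)


def erase(model, centre, radius):
    """Carve material away by emptying the cells under the brush."""
    return model - brush_cells(centre, radius)


# Usage: start from an empty sculpt, add two overlapping blobs, then carve into one of them.
model = set()
model = add(model, (32, 32, 32), 6)
model = add(model, (37, 32, 32), 4)
model = erase(model, (37, 32, 32), 2)
print(f"occupied voxels: {len(model)}")
```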

Gravity Sketch: Gravity Sketch (Gravity Sketch, 2017) started out as an iPad app for sketching and evolved into a fully capable VR creation toolset featuring not only spline-based modelling (NURBS: non-uniform rational B-splines – a mathematical way of describing a curve with control points that a designer uses to manipulate a model), but more recently subdivision polygonal modelling. Subdivision polygonal modelling is a technique for making high-resolution models by manipulating a lower-resolution model and using software to subdivide it, to give a smoother, more rounded appearance. The core tools are very similar to those used in the automotive industry, aviation and shipbuilding, where numerical accuracy is crucial. Gravity Sketch is now working on a presentation tool that will allow users not only to be together in a virtual creation environment but to interact and work collaboratively on a concept. The co-creation features within Gravity Sketch are already well established and have been tested extensively. Two concept artists carried out a two-hour creation session in which they worked on the same creature model (visual example provided in Figure 5). We found the most powerful and exciting aspect of it was being able to discuss and iterate together in real time in a localised collaborative virtual environment. Outside of VR, an iteration would be done, and then there would be a time delay whilst that iteration was passed onto the creative lead for them to review it and authorise the next stage. Then, it would be sent back for more iterations, and the cycle would continue until there was some agreement on the design. That would then be sent to an executive (creative director or similar) who might need more iterations. To have the capacity for synchronous discussion of the design, with iterations and edits made in real time within a supported VR environment, has the potential to reduce fuzzy front-end problems (Opiyo, 2016). In addition, the next set of features to be added to Gravity Sketch will include presentation tools, annotation and mark-up capabilities. This would allow a design lead to review the model and add notes and feedback inside the virtual environment or even face-to-face with the designer's virtual avatar (an icon or figure representing a particular person in a virtual digital environment).
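As a loose 2D illustration of both ideas (a shape defined by a handful of control points, refined by repeated subdivision into a smooth result), the sketch below applies Chaikin's corner-cutting scheme to a closed control polygon; each pass doubles the point count, and the limit curve is the uniform quadratic B-spline defined by those control points. This is our own simplified analogue of the concepts, not Gravity Sketch's NURBS or subdivision-surface code.

```python
# Chaikin corner cutting: a simple subdivision scheme applied to a closed 2D control polygon.
# Each pass replaces every edge (p, q) with two points at 1/4 and 3/4 along it, so the
# polygon becomes denser and smoother; the limit curve is the uniform quadratic B-spline
# defined by the original control points.

def chaikin_step(points):
    refined = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]  # wrap around: closed polygon
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined


def subdivide(points, passes=4):
    """Apply several corner-cutting passes to smooth the control polygon."""
    for _ in range(passes):
        points = chaikin_step(points)
    return points


# Usage: a coarse square "control cage" becomes a smooth, rounded closed curve.
control = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
smooth = subdivide(control, passes=4)
print(f"{len(control)} control points -> {len(smooth)} curve points")
```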

MasterpieceVR: MasterpieceVR is another voxel-based VR modelling program and has an online collaborative workspace that is not region restricted, allowing up to four sculptors to work simultaneously on the same project, with up to 20 viewers who can watch without active participation. These sessions can be set to private or public, with an optional room password. This alone makes it attractive to industry when compared to Adobe Medium in its current version. All mesh/ribbon (geometry) and clay features can be used, except, at present, “undo”. While layers are not featured, the ability to convert clay to mesh is. The ability to create stamps (pre-made repeatable shapes) is also a feature that can be used by four participating artists. Environments and resolutions can be changed as well, giving the ability to save and export to industry-standard 3D model formats on a participant's local machine at any given point. However, a study conducted by Yu and Khalid (2019) found that none of their students adopted MasterpieceVR for co-creating surroundings because of issues with the co-creation features, including the fact that the user importing the subject could only view it and was unable to save the co-created object. The version Yu and Khalid (2019) were using has now been updated several times, with many of the reported issues having been addressed.
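To summarise the session model described above in a compact form, the sketch below captures the stated roles and limits (up to four active sculptors, up to 20 passive viewers, and an optional password for private rooms) in a small data structure. The class, field and method names are hypothetical and for illustration only; they are not MasterpieceVR's API.

```python
# Hypothetical sketch of the collaborative session limits described above.
# Illustration only; not MasterpieceVR's actual API or data model.
from dataclasses import dataclass, field
from typing import List, Optional

MAX_SCULPTORS = 4   # active participants who can edit the shared model
MAX_VIEWERS = 20    # passive participants who can only watch


@dataclass
class SculptSession:
    public: bool = True
    password: Optional[str] = None          # optional password for private rooms
    sculptors: List[str] = field(default_factory=list)
    viewers: List[str] = field(default_factory=list)

    def join(self, user: str, as_sculptor: bool, password: Optional[str] = None) -> bool:
        """Admit a user if the room is open to them and the chosen role is not full."""
        if not self.public and password != self.password:
            return False  # wrong password for a private room
        group, limit = (
            (self.sculptors, MAX_SCULPTORS) if as_sculptor else (self.viewers, MAX_VIEWERS)
        )
        if len(group) >= limit:
            return False  # that role is already full
        group.append(user)
        return True


# Usage: a private session admits a sculptor and a viewer with the right password only.
session = SculptSession(public=False, password="studio")
print(session.join("artist_1", as_sculptor=True, password="studio"))   # True
print(session.join("viewer_1", as_sculptor=False, password="studio"))  # True
print(session.join("artist_2", as_sculptor=True, password="wrong"))    # False
```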

Facebook Spaces: Facebook created Facebook Spaces (Facebook, 2020) in VR to allow users to interact in an immersive virtual environment where participants could be transported into the same space, simulating a room. It uses Facebook's own avatar system, and these avatars appear around a table in the virtual space. Facebook Spaces can create a VR room for users to spend time together (virtually and with audio) while also being able to share their social media content with one another. The software uses the Oculus Rift Touch controller to navigate, interact and, to some degree, create; the software is geared towards Facebook's own users (see Table 1).

How can extended reality creation tools improve the design process?

The authors have extensive practical experience within the visual effects industry, working with large design teams internationally, from Disney to Netflix Films, SKY TV and advertising campaigns. Our knowledge also extends into curriculum development and delivery of digital design practices, games and media. Using our collective experience as well as the current literature on co-creation, co-design and collaboration using traditional digital techniques and immersive digital environments, we found that XR tools have the potential to improve the chaotic early stage of the design process and therefore offer a wide range of benefits, including cost savings, more efficient designers, more time allocated to the manufacturing or post-production end of the process, resources being used effectively and designers being more collaborative. Working together at an early stage, inside an immersive VR environment, allows teams to spend time on other parts of the process. Where creative leads can work with artists, side by side, from the early stages (Schuler and Namioka, 1993), time can be saved by not having to wait for notes or review feedback. A real session of ideation and concepting is described below to provide a clear example of a collaborative VR project used for a Netflix TV show; it outlines two processes for designing concepts for an episodic series (M&E, media and entertainment).

Creative brief: For Netflix, we were commissioned to work on a 13-episode series that features a vampire-like creature. The written brief was vague but had enough detail to suggest the shape, the wings, the number of limbs, facial features and skin type.

Example with virtual reality

GS (based in the UK) received the brief, which would normally range from vague to very detailed, outlining specific design guidelines. A creative lead (based in the USA) joined him in VR during the initial creation session and led the design from the start. This particular brief was nonspecific and therefore offered more scope for creative expression; GS chose to use Oculus Medium, which has the ability to allow more than one person at a time into the virtual space. He started the project by iterating ideas from the brief.

GS reviewed the design with the creative lead in VR (this is often the creative director or production designer, depending on the industry). The feedback was instant, giving him space and time to continue refining ideas at this early stage, focusing on the initial silhouettes. Feedback was given with notes (an industry term for written feedback) and draw-overs (an industry term for drawing over an image to highlight required changes) within the team. GS prepared images selected from the initial designs that had been highlighted. Those notes focused on two designs that were working well, and they were presented to the client. The feedback at this stage was the third round of amendments and allowed the team to quickly agree on a final design. This process cut out 2D sketching, going directly into modelling in 3D, which meant that decisions about the character could be made from all angles and limited the number of “turnarounds” that would need to be created for approval.

There were over 80 different VR iterations made in session 1, which were refined down to ten by the end of day 1. See Figure 6.

Example without virtual reality

On similar projects, not using VR, there are several stages of initial sketches before getting to any sculpted concepts. The initial ideas are presented as quick 2D pencil sketches and crude digital paintings or thumbnails. This round of feedback is normally shared internally, and notes are given on form, shape and overall design.

The initial ideas are sent to the client for their thoughts, and the team waits for feedback and notes (comments on what the director is looking for in terms of likes and dislikes).

Feedback is often limited as to which sketches or amendments the director requires, but further ideas are taken forward into the next round or two of idea development.

In round 2, we focus on the ideas the client points out, and another set of sketches and paintings is created. This is still in 2D (pencil or digital). These are prepared and sent to the client for feedback. At this point, it may be possible to begin creating 3D models (digitally, using non-VR programs like ZBrush, Maya and Cinema 4D). The refinements of the designs (3D renders at this stage) are presented to the client with turnarounds (images of the design from all sides of the 3D model).

This process, for a similar set of concepts to that described in the example using VR, would take approximately four days (32+ h of work).

The above is a brief overview of one example, illustrating how co-creation at the early stages of ideation in VR can be beneficial to all involved. The virtual space puts individuals (no matter where in the world) into a space simulating a room/studio, allowing immediate feedback and making iterations less cumbersome. This has the potential for cost saving and for a reduction in some of the constraints that geographical distance may pose.

Outlining the process of concepting for a TV show, we suggest some of the possible pros and cons below. For this example, we used Oculus Medium, as that allows only two people into the VR space at the same time, with the second person unable to edit the models in this version. This highlights areas of strength and areas where improvements could be made within the technology, but it will also reveal some of the limitations for users wanting to embark on developing their practice within immersive environments.

Pros

  1. Having two or more creatives and/or creative leads in VR is a positive (Goldschmidt, 2007). The experience is speeded up by not having to stop and wait for feedback or notes.

  2. Instant group idea generation (Sanders and Stappers, 2008).

  3. The fuzzy front end can be streamlined when the team comes together in VR space. More input at the early stages speeds the design process up (Sanders and Stappers, 2008). Possible cost savings in time and labour.

  4. Art directing for changes can be speeded up through synchronous communication, and revisions tend to be fewer, as changes can be made instantly. Thus, more time is spent designing and less on iterating from feedback.

  5. As a VR artist, it is liberating to be in a virtual space with other creatives, sharing ideas and developing these in real time, rather than working independently on separate machines and then meeting to discuss, sometimes losing momentum with the ideas and concepts (Opiyo, 2016).

Cons

  1. Currently, there are limited 2D co-creation tools (Schnall et al., 2012).

  2. There is a higher hardware cost than for standalone systems when you factor in the graphics card, HMD and controllers, limiting some users and possible collaborations.

  3. Most concept art teams are made up of artists from a 2D background with limited knowledge of working in 3D (Opiyo, 2016).

  4. Training costs need to be factored in if a design team needs to adopt VR.

  5. Collaborative working can be challenging for artists who prefer working in isolation or independently and may be less productive working in real time with clients or other designers.

The images in Figure 6 highlight the significant number of detailed concept designs made during the first day of the design process. This demonstrates the speed at which the block-out stage can be achieved using creation tools in VR. The speed, quality and number of ideas generated are supported by all parties communicating in real time, in the same space, offering live direction and feedback. This is particularly important in the early stages of the design process. This collaborative immersive experience has the potential to far outweigh other CAD methods of co-creating, designing and iterating in the early stage of the design process.

Conclusion

This paper was inspired by the extensive experiences of the authors and their need to develop skills to integrate the real with the digital, in line with evolving practices and the increasing globalisation of sectors. It presents the early research stages of future work on the role of co-creation and the fuzzy front end of the design process using XR technologies. It points out the possible advantages and practicability of co-creating using XR through a real-life example of the ideation and concepting stage using traditional and VR methods. We found no other digital platform capable of offering an almost lifelike immersive environment/experience of working in the same space, sharing ideas while simultaneously making changes to a virtual 3D object. Co-creating within VR simulates an open space, which liberates one from the confinement of the two-dimensional digital screen, offering the freedom of space to express, explore and innovate. Traditional methods of concepting and validation have been time-consuming and costly, while immersive virtual environments offer users the opportunity of real-time contribution without the limitation of geographic location or technical expertise. In addition, multiple iterations and the increased opportunity to generate many more ideas, in less time and with reduced costs, improve the chances of a successful product. A wide range of sectors are showing increasing interest in these new tools, including automotive design, industrial design, media and entertainment, set design, illustration, concepting for media and entertainment, previz for movies and layout. The use of XR creation tools, and specifically the immersive VR tools explored in this paper, suggests that designing in VR could have a huge impact on the industry.

Future work

Whilst carrying out a review of the current work in this area, we found a notable lack of empirical studies from the past five years on the use of modern VR equipment and software and on the exploration required with these design tools.

Our future work will investigate how XR co-creation tools impact the creative process in the initial design stages. Research into multiple ideation techniques with a team in an immersive virtual environment compared with existing traditional design environments will highlight the challenges when working remotely or in an immersive environment.

Figures

Figure 1: Visual representation based on the fuzzy front-end concept

Figure 2: Visual representation of a CAVE (SouthernGFX, 2020)

Figure 3: VR view in an immersive VR creation tool such as Gravity Sketch (SouthernGFX, 2020)

Figure 4: XR is an umbrella term that covers a variety of technologies that enhance our senses

Figure 5: Two VR artists using co-creation tools inside Gravity Sketch. View from VR artist 2

Figure 6: Contact sheet of designs from the example using VR sculpting

Table 1: VR creation tools available currently (2020)

Oculus Quill, Unbound, VR Canvas, Kingspray
Masterpiece VR, Substance, CoolPaintr VR, Kodon
Gravity Sketch, Kanova, AnimVR, Dreams on Sony PlayStation VR
Tvori, A-Painter, Vox VR, Blender
TiltBrush, Paint VR, Blocks, Kingspray

References

Albrechta, U., Nolla, C. and von Jan, U. (2013), “Explore and experience: mobile augmented reality for medical training”, in Lehmann, C.U., Ammenwert, C. and Nahr, C. (Eds), MEDINFO 2013: Studies in Health Technologies and Informatics, Vol. 192, IMIE and IOS Press, Copenhagen, pp. 382-386.

Bacca, J., Baldiris, S., Fabregat, R., Kinshuk. and Graf, S. (2015), “Mobile augmented reality in vocational education and training”, Procedia Computer Science, Vol. 75, pp. 49-58.

Basadur, M., Pringle, P., Speranzini, G. and Bacot, M. (2000), “Collaborative problem solving through creativity in problem definition: expanding the pie”, Creativity and Innovation Management, Vol. 9 No. 1, pp. 54-76.

Baxter, G. and Hainey, T. (2019), “Student perceptions of virtual reality use in higher education”, Journal of Applied Research in Higher Education. doi: 10.1108/jarhe-06-2018-0106.

Bellini, H., Chen, W., Sugiyama, M., Shin, M., Alam, S. and Takayama, D. (2016), Virtual & Augmented Reality: Understanding the Race for the Next Computing Platform, Profiles Innovation, The Goldman Sachs Group, pp. 1-30.

Billon, M., Lera-Lopez, F. and Marco, R. (2010), “Differences in digitalization levels: a multivariate analysis studying the global digital divide”, Review of World Economics, Vol. 146 No. 1, pp. 39-73.

Chesbrough, H. (2011), Open Services Innovation, Jossey-Bass, San Francisco, CA.

Dennett, D. (2020), “How to make mistakes”, available at: http://cogprints.org/288/1/howmista.htm (accessed 1 April 2020).

Dewey, J. (1997), Experience and Education, Touchstone, New York, NY.

Dorta, T., Safin, S., Boudhraâ, S. and Marchand, E.B. (2019), “Co-designing in social VR: process awareness and suitable representations to empower user participation”, ArXiv, abs/1906.11004, Vol. 2, pp. 141-150.

Ekströmer, P. and Wever, R. (2019), “‘Ah, I see what you didn't mean’: exploring computer aided design tools for design ideation”, The Design Journal, Vol. 22 sup 1, pp. 1883-1897.

Fagan, M., Kilmon, C. and Pandey, V. (2012), “Exploring the adoption of a virtual reality simulation”, Campus-Wide Information Systems, Vol. 29 No. 2, pp. 117-127.

Fischer, G., Rohde, M. and Wulf, V. (2007), “Community-based learning: the core competency of residential, research-based universities”, International Journal of Computer-Supported Collaborative Learning, Vol. 2 No. 1, pp. 9-40.

Fox, J., Arena, D. and Bailenson, J. (2009), “Virtual reality”, Journal of Media Psychology, Vol. 21 No. 3, pp. 95-113.

Goldschmidt, G. (2007), “To see eye to eye: the role of visual representations in building shared mental models in design teams”, CoDesign, Vol. 3 No. 1, pp. 43-50.

Gu, N., Kim, M. and Maher, M. (2011), “Technological advancements in synchronous collaboration: the effect of 3D virtual worlds and tangible user interfaces on architectural design”, Automation in Construction, Vol. 20 No. 3, pp. 270-278.

Guo, Q. (2015), “Learning in a mixed reality system in the context of ‘Industrie 4.0,’”, Journal of Technology Education, Vol. 3 No. 2, pp. 92-115.

Huang, Y., Liu, Y. and Wang, Y. (2009), “AR-View: an augmented reality device for digital reconstruction of Yuangmingyuan”, ISMARAMH. IEEE international symposium on mixed and augmented reality-arts, media and humanities, pp. 3-7.

Karakaya, A. and Demirkan, H. (2015), “Collaborative digital environments to enhance the creativity of designers”, Computers in Human Behavior, Vol. 42, pp. 176-186.

Khurana, A. and Rosenthal, S. (1998), “Towards holistic ‘front ends’ in new product development”, Journal of Product Innovation Management, Vol. 15 No. 1, pp. 57-74.

Knoll, S. and Horton, G. (2011), “Changing the perspective: using a cognitive model to improve thinkLets for ideation”, Journal of Management Information Systems, Vol. 28 No. 1, pp. 85-114.

Koen, P., Ajamian, G., Burkart, R., Clamen, A., Davidson, J., D'Amore, R., Elkins, C., Herald, K., Incorvia, M., Johnson, A., Karol, R., Seibert, R., Slavejkov, A. and Wagner, K. (2001), “Providing clarity and a common language to the ‘fuzzy front end’”, Research-Technology Management, Vol. 44 No. 2, pp. 46-55.

Lee, S., Olson, L. and Trimi, S. (2012), “Co-innovation: convergenomics, collaboration, and co-creation for organizational values”, Management Decision, Vol. 50 No. 5, pp. 817-883.

Leong, K. and Sung, A. (2018), “Fintech (Financial Technology): what is it and how to use technologies to create business value in fintech way?”, International Journal of Innovation, Management and Technology, Vol. 9 No. 2, pp. 74-78.

Ludlow, B. (2015), “Virtual reality: emerging applications and future directions”, Rural Special Education Quarterly, Vol. 34 No. 3, pp. 3-10.

Milgram, P. and Kishino, F. (1994) “A taxonomy of mixed reality visual displays”, IEICE Transactions on Information and Systems, Vol. E77-D No. 12, pp. 1321-1329.

Mountain, D. and Liarokapis, F. (2007), “Mixed reality (MR) interfaces for mobile information systems”, ASLIB Proceedings, Vol. 59 Nos 4/5, pp. 422-436.

Opiyo, E. (2016), “An approach to represent and communicate product or system design ideas at the fuzzy-front end of the design process”, Systems, Vol. 4 No. 1, p. 8.

Pallot, M., Christmann, O., Richir, S., Dupont, L., Boly, V. and Morel, L. (2017), “ICE breaking”, Proceedings of the Virtual Reality International Conference – Laval Virtual 2017 on – VRIC, Vol. 17, doi: 10.1145/3110292.3110316.

Park, S., Choi, D. and Hong, P. (2015), “Club convergence and factors of digital divide across countries”, Technological Forecasting and Social Change, Vol. 96, pp. 92-100.

Quint, F., Sebastian, K. and Gorecky, D. (2015), “A mixed-reality learning environment”, Procedia Computer Science, Vol. 75, pp. 43-48.

Ranjbarfard, M. and Heidari Sureshjani, M. (2018), “Offering a framework for value co-creation in virtual academic learning environments”, Interactive Technology and Smart Education, Vol. 15 No. 1, pp. 2-27.

Russo‐Spena, T. and Mele, C. (2012), “‘Five Co‐s’ in innovating: a practice‐based view”, Journal of Service Management, Vol. 23 No. 4, pp. 527-553.

Sanders, E. and Stappers, P. (2008), “Co-creation and the new landscapes of design”, CoDesign, Vol. 4 No. 1, pp. 5-18.

Schnall, S., Hedge, C. and Weaver, R. (2012), “The immersive virtual environment of the digital fulldome: considerations of relevant psychological processes”, International Journal of Human-Computer Studies, Vol. 70 No. 8, pp. 561-575.

Scholz, J. and Smith, A. (2016), “Augmented reality: designing immersive experiences that maximize consumer engagement”, Business Horizons, Vol. 59 No. 2, pp. 149-161.

Schuler, D. and Namioka, A. (1993), Participatory Design, L. Erlbaum Associates, Hillsdale, NJ.

Sharp, H., Rogers, Y. and Preece, J. (2019), Interaction Design, Wiley, Indianapolis, IN.

Shneiderman, B. (2007), “Creativity support tools: accelerating discovery and innovation”, Communications of the ACM, Vol. 50 No. 12, pp. 20-32.

Shneiderman, B., Fischer, G., Czerwinski, M., Resnick, M. and Myers, B. (2020), “Workshop on creativity support tools – june 13-14, 2005”, Cs.umd.edu, available at: http://www.cs.umd.edu/hcil/CST (accessed 5 April 2020).

Gravity Sketch (2017), “Gravity Sketch: an intuitive 3D design tool for everyone”, available at: https://www.gravitysketch.com/ (accessed 5 March 2020).

Steen, M. (2013), “Co-design as a process of joint inquiry and imagination”, Design Issues, Vol. 29 No. 2, pp. 16-28.

Steen, M., Manschot, M. and De Koning, N. (2011), “Benefits of co-design in service design projects”, International Journal of Design, Vol. 5 No. 2, pp. 53-60.

Sung, A., Leong, K., Sironi, P., O’Reilly, T. and McMillan, A. (2019), “An exploratory study of the FinTech (Financial Technology) education and retraining in UK”, Journal of Work-Applied Management, Vol. 11 No. 2, pp. 187-198.

Thoravi Kumaravel, B., Nguyen, C., DiVerdi, S. and Hartmann, B. (2019), “TutoriVR”, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems – CHI, Vol. 19, doi: 10.1145/3290605.3300514.

Trischler, J., Pervan, S., Kelly, S. and Scott, D. (2017), “The value of codesign”, Journal of Service Research, Vol. 21 No. 1, pp. 75-100.

Uhomoibhi, J., Onime, C. and Wang, H. (2019), “A study of developments and applications of mixed reality cubicles and their impact on learning”, The International Journal of Information and Learning Technology, Vol. 37 Nos 1/2, pp. 15-31.

Uva, A., Cristiano, S., Fiorentino, M. and Monno, G. (2010), “Distributed design review using tangible augmented technical drawings”, Computer-Aided Design, Vol. 42 No. 5, pp. 364-372.

Valencia-Romero, A. and Lugo, J. (2017), “An immersive virtual discrete choice experiment for elicitation of product aesthetics using Gestalt principles”, Design Science, Vol. 3, doi: 10.1017/dsj.2017.12.

Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., Sun, M. and Zhang, X. (2019), “A comprehensive survey of AR/MR-based co-design in manufacturing”, Engineering with Computers. doi: 10.1007/s00366-019-00792-3.

White, M., Mourkoussis, N., Darcy, J., Petridis, P., Liarokapis, F., Lister, P.F., Walczak, K., Wolciechowski, R., Cellary, W., Chmielewski, J., Stawniak, M., Wiza, W., Patel, M., Stevenson, J., Manley, J., Giorgini, F., Sayd, P. and Gaspard, F. (2004), “ARCO- an architecture for digitization, management and presentation of virtual exhibitions”, Proceedings of the CGI’2004 Conference, Hersonissos, Crete, June 2004, Los Alamitos, California, IEEE Computer Society (2004), pp. 622-625.

Wu, H., Lee, S., Chang, H. and Liang, J. (2013), “Current status, opportunities and challenges of augmented reality in education”, Computers and Education, Vol. 62, pp. 41-49.

Yu, F. and Khalid, M.S. (2019), “Cross-location and cross-disciplinary collaborative prototyping using virtual reality in higher education”, Proceedings of the 18th European Conference on e-Learning: ECEL 2019 Copenhagen. Reading, UK, Academic Conferences and Publishing International, p. 722.

Further reading

Franklin, R. (2020), “Facebook spaces: a new way to connect with friends in VR”, About Facebook, available at: https://newsroom.fb.com/2017/04/facebook-spaces (accessed 5 April 2020).

Mitchell, V., Ross, T., May, A., Sims, R. and Parker, C. (2015), “Empirical investigation of the impact of using co-design methods when generating proposals for sustainable travel solutions”, CoDesign, Vol. 12 No. 4, pp. 205-220.

Nygaard, K. (1990), “The origins of the Scandinavian school, why and how?”, in Participatory Design Conference, (accessed 5 April 2020).

Preece, J. and Shneiderman, B. (2009), “The reader-to-leader framework: motivating technology-mediated social participation”, AIS Transactions on Human-Computer Interaction, Vol. 1 No. 1, pp. 13-32.

Rahimian, F. and Ibrahim, R. (2011), “Impacts of VR 3D sketching on novice designers' spatial cognition in collaborative conceptual architectural design”, Design Studies, Vol. 32 No. 3, pp. 255-291.

Trabelsi-Zoghlami, A. and Touzani, M. (2019), “How real are virtual experiences?”, European Journal of Marketing. doi: 10.1108/ejm-10-2017-0776.

Acknowledgements

This work was inspired by the late Professor Ruth Helyer, and the authors would like to dedicate it to her memory.

Corresponding author

Hatana El-Jarn is the corresponding author and can be contacted at: H.El-Jarn@leedstrinity.ac.uk

About the authors

Dr Hatana El-Jarn is a Lecturer and Senior Fellow of the Higher Education Academy in Digital and Social Media at Leeds Trinity University, developing curriculum on emerging technologies such as extended realities. She has over 15 years of teaching experience in higher education; preceding this, she was an Art Director for one of the largest online game fanzines in the UK and still maintains her own design practice. Her research interests are wide and based on an interdisciplinary approach, including aesthetics in user interface design (UX), computer-mediated communication and how we, as a society and as individuals, interact with social media technologies. Her current projects include virtual reality and the fuzzy front end of ideation, the Veiled Selfie and Bare Feet Pashtana: the realisation of non-fictional narratives as works of art from a cultural and global perspective.

Glen Southern runs SouthernGFX, a small Cheshire-based studio specialising in character and creature design, with clients including SKY, Roald Dahl, Mackinnon and Saunders, Nike, Sainsbury's, Adidas, Netflix, Screen Scene, Disney, Lego, Pixologic and Wacom. He has been using ZBrush and delivering ZBrush training in the UK for over 15 years and is a Wacom Ambassador for the UK and Ireland. More recently, he has been creating in the VR space, working with companies like Oculus Medium and now Gravity Sketch in the UK. He is currently doing a Work-Based Learning Master's Degree in Digital Design in Immersive Creative Technologies.
