Interface Design Considerations for 3D and Augmented Virtual Learning Environments

This research paper was written as part of my participation in the graduate course Computer Interface Design for Learning at the George Washington University.

Introduction

Emerging technologies broaden the range of tools available for users to communicate, create, and access information. Display devices are no longer limited to computer monitors, projectors, speakers, and printers; the keyboard, mouse and scanner are joined by a growing variety of input devices.

3D virtual worlds are immersive and highly customisable digital spaces in which the user, through the agency of an avatar, engages with rendered objects and spaces. Augmented Reality blends digital data with the physical spaces inhabited by the users themselves. The devices used to access these spaces determine the degree to which a user is immersed in the experience. These range from visual representations on a computer screen, to fully immersive environments that stimulate almost all the senses.

Planning for instruction using these virtual and augmented spaces demands consideration of three factors: the virtual space design, the means of navigation and manipulation, and the interaction devices through which the user perceives the virtual space.

Designing the Virtual Space

3D VLEs support open exploration and collaboration amongst learners. Preparing for learning in 3D virtual spaces involves planning in minute detail not only the instructional design, but also the arrangement of the virtual space. Whereas a physical classroom already has a defined space with furnishings and materials to support teaching and learning, a virtual space is a blank canvas where anything is possible. Because a digital avatar needs no protection from the elements and no furnishings for comfort, these spaces can be designed specifically to serve the learner's achievement of the learning task.

Designing a 3D VLE involves three interdependent elements: Knowledge Assets, the information and rendered elements available to learners; Instructional Places, the spaces within which those assets exist for retrieval and manipulation; and Actors, who exercise intention upon the knowledge assets and interact with each other (Bouras, Igglesis, & Kapoulas, 2004). The virtual space is the metaphor through which the learner interfaces with the learning material and should reflect the instructional strategy employed (Reeves & Minocha, 2011). Instructivist pedagogies call for familiar learning spaces where content is delivered to the learner. Constructivist pedagogies demand an environment rich with embedded content for learners to explore and make meaning of. Simulations and role-play reflect experiential pedagogy and offer learners opportunities to experience content that may otherwise be inaccessible to them. These last two settings take advantage of the unique affordances of virtual learning spaces, but also demand a high degree of authenticity and fidelity in the settings and experiences modeled.

Hanson and Shelton (2008) describe how the instructional design informs the design of the virtual world, the level of desired immersion, the modes of sensory feedback, and the degree of user interactivity in a 3D VLE. Unlike physical experiences, every sensory input and output has to be deliberately considered and activated to support learning.

Figure 1: Traditional classroom setup reflects instructional pedagogy.  Image retrieved from http://lindenlab.wordpress.com/2008/11/26/stories-from-second-life-how-languagelab-gave-language-learning-a-new-lease-on-life/


 

Figure 2: Spaces that encourage exploration and discovery reflect constructivist learning pedagogy.  Image retrieved from http://secondlife.com/destination/euclidia-space-planetarium


 


Figure 3: Simulations and role-play reflects experiential learning pedagogy.
Image retrieved from http://knowledgecast.wordpress.com/

 

 

High representational fidelity can increase a user’s sense of presence within a virtual space. The degree to which a virtual object or space reflects an analogous physical space and the authenticity of interaction will influence the degree to which the learner suspends disbelief and fully engages with those virtual elements (Dalgarno & Lee, 2010).

 

Constructing virtual environments that are beyond the learners' common experience, such as the interior of a cell or the surface of another planet, reveals additional opportunities not only in the modeling of the environment but also in the means of navigation through that space. How one navigates is as important a design consideration as the physical placement of information in the virtual space (Dillenbourg, 2000). Visual modeling of factors such as friction, opposing forces, and gravity can increase the authenticity of the learning experience.

The elements placed within the virtual space are critical. Dillenbourg (2000) notes that "… environments where students see the same objects enrich more interactions than that of those where they see each other…", suggesting that purposeful spaces and objects that stimulate communication amongst learners matter even more to learning and engagement than the avatars themselves.

In addition to the social opportunities provided by virtual learning spaces, there is need to consider the relationship the user has to the computer.

Tung and Deng (2006) explored how a young learner's awareness of the computer as a responsive player in a learning experience influences how they participate and take feedback from the program. The authors caution against blatant anthropomorphization, proposing instead that subtle social cues built into computer responses help young learners see the computer as a trusted friend rather than a machine.

Feedback systems in 3D VLEs will also reflect pedagogical approaches. Nelson (2007) defines several guidance strategies for use in virtual learning environments.

Tacit guidance reflects the belief that students should construct their own meaning, operating in a discovery learning mode without direct instruction. Nelson (2007) questions whether this is possible in practice, as some form of response system will guide discovery toward mastery of a particular learning outcome. However, if the learning outcome is itself how to learn, then the content is secondary and serves as the hook for engagement. In this mode of guidance, even the navigation and interaction interface should be discoverable through experimentation and observation.

Collaborative Guidance leverages the social affordances of 3D VLEs. Learners co-create, cooperate, collaborate, and construct knowledge in social groups. Such a guidance system will require a space for exploration, a means of engaging in communication and, perhaps, additional spaces within which learners construct models of their understanding (Dickey, 2005).
Reflective Guidance offers prompts and hints that encourage metacognition in learners about both content and learning process. Reflective guidance systems externalise the students' thinking, illuminate a learning path and, possibly, reveal next steps. Nelson (2007) also points out that lack of reflective guidance can hinder learning.

Thornburg (2004) offers a primordial metaphor for structuring spaces, both physical and virtual, that reflect different kinds of cognitive engagement. The campfire is a space for storytelling and learning from experts. The watering hole is a space for communicating with and learning from one's peers, while the cave is a place of solitary meditation where personal schemas are considered. Virtual space design should offer contextual cues as to the function of the space; familiar metaphors such as Thornburg's can serve an orienting function before learners proceed into less familiar and more fantastic virtual spaces.

Command of the Virtual Space

To function within a virtual space, the user must be able to move the avatar through the environment, select and manipulate objects within the environment, and effect change through system commands (Bowman, Kruijff, LaViola Jr., & Poupyrev, 2001).

Navigation

Even familiar-looking virtual spaces can be challenging to navigate because of unfamiliar positioning cues. When immersing users in unfamiliar virtual spaces, it is most important to give them clear spatial positioning feedback (Bowman, Davis, Hodges, & Badre, 1999). Designers should consider interaction methods that map to familiar models of manipulation but reflect the purpose or intent of the 3D VLE.

Bowman distinguished between travel, the movement from one place to another, and wayfinding, the cognitive process of blending intention with action (Bowman, Koller, & Hodges, 1997). User navigation and wayfinding are more effective when accompanied by directional cues, recognizable environmental structure, and landmarks (Vila, Beccue, & Anandikar, 2003). A compass and a large-scale map showing present position are two navigational tools that provide such spatial feedback to the user (Figure 4). Research suggests that gender influences how spatial information from a virtual experience is processed: males are more likely to navigate using geographical landmarks, while females are more likely to use navigational cues available in the environment such as paths and signs (Ali & Nordin, 2011). These tools may also contribute to more successful learning experiences by providing information in more than one form.

Figure 4: Selection and manipulations of objects in Second Life by ray casting, and spatial navigational aids.
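The spatial feedback such aids provide is computationally simple. The following Python sketch is purely illustrative (the flat two-dimensional world model, function names, and coordinates are assumptions for the example, not drawn from any system cited here): it computes a compass bearing from the avatar to a landmark and scales the avatar's world position onto a square overview map.

```python
import math

def compass_bearing(avatar_pos, landmark_pos):
    """Bearing from avatar to landmark in degrees:
    0 = north (+y), 90 = east (+x)."""
    dx = landmark_pos[0] - avatar_pos[0]
    dy = landmark_pos[1] - avatar_pos[1]
    # atan2(dx, dy) measures the angle clockwise from north
    return math.degrees(math.atan2(dx, dy)) % 360

def minimap_coords(avatar_pos, world_size, map_size):
    """Scale a world position to pixel coordinates on a
    square overview map of map_size x map_size pixels."""
    scale = map_size / world_size
    return (round(avatar_pos[0] * scale), round(avatar_pos[1] * scale))
```

An avatar at the centre of a 256 m square region, shown on a 64-pixel map, would appear at pixel (32, 32); a landmark due east would read as a bearing of 90 degrees on the compass.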

Selection & Manipulation

Successful 3D interface design builds on familiar interaction techniques and takes creatively simple approaches to existing design principles. In the physical world, hands are the means by which most object selection and manipulation is performed. In 3D spaces, a virtual hand or on-screen pointer serves the purpose of selecting, grasping, and manipulating the virtual environment (Poupyrev, Weghorst, Billinghurst, & Ichikawa, 1998). When the avatar is at a distance, intent is signaled through a technique called ray casting, in which the user points at an object to signal intent to interact with it (Figure 4). Ray casting is categorized as a "magic" form of interaction, one of the unique affordances of virtual worlds that have no analogue in physical reality, such as flying or walking through solid objects (Bowman, Kruijff, LaViola Jr., & Poupyrev, 2001).
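The geometry behind ray casting can be sketched in a few lines. The Python fragment below is an illustrative sketch, not code from Second Life or any system cited above; the object names, bounding-sphere representation, and coordinates are invented for the example. A ray is cast from the avatar along its pointing direction and the nearest object whose bounding sphere it hits is selected.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-direction ray to the first intersection
    with a sphere, or None if the ray misses it."""
    # Vector from ray origin to sphere centre
    oc = [c - o for o, c in zip(origin, center)]
    # Length of oc projected onto the ray direction
    t = sum(a * b for a, b in zip(oc, direction))
    # Squared distance from the sphere centre to the ray
    d2 = sum(a * a for a in oc) - t * t
    if d2 > radius * radius:
        return None  # ray passes outside the sphere
    dt = math.sqrt(radius * radius - d2)
    hit = t - dt  # nearest of the two intersection points
    return hit if hit >= 0 else None  # ignore objects behind the avatar

def pick(origin, direction, objects):
    """Select the nearest object (name, center, radius) the ray hits."""
    best = None
    for name, center, radius in objects:
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    return best[0] if best else None
```

Pointing straight ahead (+z) from the origin at a "door" centred ten metres away selects the door, while an off-axis "lamp" is correctly ignored.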

System Commands

 

Figure 5: Second Life employs movable and customizable floating menu pallets in addition to a top-of-screen drop-down menu structure.


 

In a three-dimensional world, conventional two-dimensional menu and command structures pose a design challenge. Graphical menus can be invoked that either overlay the viewed workspace, as on Google Glass (Figure 7), or occlude the workspace, like Second Life's floating menu pallets (Figure 5). Bowman et al. (2001) suggest that 2D menus should be layered for simplicity of display and should not intrude into the virtual space except when necessary. Minecraft's interface hides menu commands until called by the user (Figure 6). Command designs may also be embedded within the 3D environment and rendered objects themselves, enabling more intuitive interactions. Whatever system is used, sufficient user feedback is necessary when engaging with it in order to minimise mode errors.
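The role of feedback in avoiding mode errors can be illustrated with a toy modal interface. This is a hypothetical sketch, not modeled on Second Life's or Minecraft's actual command systems: every response restates the active mode, so a command issued in the wrong mode is immediately visible rather than silently misapplied.

```python
class ModalInterface:
    """Toy command console that restates the active mode in every
    response, the kind of persistent feedback that helps users
    avoid mode errors."""

    MODES = ("navigate", "build", "chat")

    def __init__(self):
        self.mode = "navigate"

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        return f"mode changed to: {mode}"  # immediate confirmation

    def act(self, command):
        # Prefixing every response with the mode makes a command
        # issued in the wrong mode visibly wrong, not silently lost.
        return f"[{self.mode}] {command}"
```

A user who types "place block" while still in navigate mode sees "[navigate] place block" and can correct course at once.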

Figure 7: Projected translucent text and images on Google Glass eyepiece receive voice commands or touch/swipe commands on the side of the device.  Image retrieved from  http://www.dvice.com/2013-5-8/video-what-you-really-see-when-you-look-through-google-glass


 

 

Interfacing with the Virtual Space

Interacting with a virtual space occurs through activation of program sequences embedded in the virtual world itself. The devices with which we execute those commands form the tangible physical interface to the virtual world. Emerging technologies, designed to enable communication between computers and humans, offer an increasing range of possibilities involving all the senses.

The design of virtual environments is well informed by standards (Blade & Padgett, 2002) and a growing body of empirical research into best practice with 3D VLEs. Conventional input and output devices are also well researched and understood. Less conventional input and output devices are still being explored, problems identified, and research conducted to mitigate those challenges.

Input

Input is moving from point-and-click interfaces to more natural forms of interaction, including voice, gesture, and handwriting recognition (Saffer, 2009). Touch-responsive systems both receive input and generate pressure responses that simulate an object's pliability (Reitinger, Werlberger, Bornik, Beichel, & Schmalstieg, 2005). Haptic input devices provide an alternative to point-and-click interfaces and can provide a learning experience that more closely reflects the performance context (Jun Lee, 2012). Even electroencephalography (EEG) can be employed in a sort of mind control of computing devices (McFarland, 2012).

Research will continue to reveal design considerations for these kinds of interfaces. For example, Demi (2007) tested various input devices for controlling movement and manipulation in 3D virtual space and determined that bimanual, or two-handed, systems are less prone to error and easier to learn than unimanual systems. Handedness also plays a role in ease of input: one's dominant hand is better suited to finer micro-manipulation, the other hand to macro-manipulation. It is important to provide control mechanism customizations that accommodate preferences for bimanual input and handedness.

Output

Virtual reality is a fully immersive experience that occludes the user's sense of physical reality and replaces it with digital sensations. Careful fitting of these devices is important to minimise spatial disorientation (Milgram, 2006).

Augmented reality, on the other hand, blends physical and digital realities (Bowman, Kruijff, LaViola Jr., & Poupyrev, 2001). Transparent displays, such as Google Glass or heads-up displays in vehicles, must account for the myriad conditions in which they are employed. Gabbard, Swan II, and Hix (2006) are pursuing a display engine that responds to environmental conditions such as brightness, colour, and texture, then selects a text display style that maximises contrast without being disruptive or distracting.

 

In responsive physical spaces, the physical environment itself is equipped with sensors that perceive and respond to ambient conditions like movement and sound (Eng et al., 2006; Kiyokawa, 2012). These systems allow for kinetic interactions without the user having to wear any special equipment. Mapping sounds to virtual objects can also serve to mimic tactile experiences through vibration patterns (El Saddik, Orozco, Eid, & Cha, 2011).

Research continues into communication systems between humans and computers that rely on ambient information gathered by the computer, processed and used to respond to users' implicit intentions or states of being (Iizuka, Marocco, Ando, & Maeda, 2012). In this way, the user's reality is enhanced not with digital data but with physical changes to the space that meet the user's unspoken needs as perceived by the computer through body language, physical activity, time of day, and location. In such enhanced environments, a responsive device recognizing these user characteristics could change the interface to reflect the user's needs or ability to interact (Mashita et al., 2012).

Bring Your Own Interface

Many technical interfaces are inaccessible, expensive, and difficult to use. While there certainly are effective uses for those devices, it is worth considering a far cheaper, more ubiquitous, and highly customisable interface for learning in virtual spaces. With the wide variety of tools available to learners now, it is not unreasonable to expect that many will already own the tools required for the most basic access to 3D VLEs. Dede (2004a, 2004b) has predicted that augmented reality will serve distributed learning communities and that learners will self-select the tools and applications with which to engage with that content.

Institutional benefits of controlling interface mechanisms include opportunities for branding, standardizing the user experience, offering technical support for approved tools, and access to user engagement metrics (Severance, Hardin, & Whyte, 2008). Nevertheless, the means by which learners access content, communicate with peers, and contribute their own created knowledge assets to the learning community are ever growing. Severance et al. (2008) call this a "functionality mash-up", in which the user's defined need to consume and produce content is met using self-selected tools, techniques, and communities. In this way, learners create their own personal learning ecosystem.

Architectural standards for data sharing are key to making this work. Severance et al. (2008) go on to explore various standards, some complementary, some competing, that contribute to greater interoperability amongst data sources for content management and modes of communication. Such standards allow for the creation of a broad range of interfaces for users to retrieve and contribute to the same bank of knowledge assets.

Personal Learning Environments, in this context, may be understood as an interface to knowledge. The tools used to engage with the learning ecosystem are not themselves the learning environment; rather, they give access to the learning environment (Wilson, 2008). That is to say, the people, the digital space, and the knowledge assets form the learning environment, while the interface is made up of the tools employed by the learner. Knowledge assets may originate from many different service points; the learner's self-selected tools aggregate and filter that content as the learner defines.

In open learning systems, content is co-created in virtual spaces by like-minded learners and forms a body of knowledge accessible through a learner's self-selected tools. In this respect, the knowledge assets need only be indexed and available online, existing independently of the means of delivery. Agnostic of any particular platform, users are free to choose their own interface for content and communication.

A further design consideration for AR is physical occlusion caused by user interference: the user's hand is seen by the camera but is not "layered" correctly relative to the digital environment, so it may appear behind virtual objects it is physically in front of.
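One common remedy for this occlusion problem is a per-pixel depth test against a sensed depth map of the physical scene. The sketch below is a simplified, hypothetical illustration using one-dimensional "images" of labelled pixels rather than a real renderer: a virtual pixel is drawn only where the virtual surface is nearer to the viewer than the physical one, so a hand held in front of a virtual object correctly hides it.

```python
def composite(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
    """Per-pixel depth test: draw the virtual pixel only where the
    virtual surface is nearer than the physical one. None in
    virtual_rgb marks 'no virtual content at this pixel'."""
    out = []
    for cam_px, cam_d, vir_px, vir_d in zip(
            camera_rgb, camera_depth, virtual_rgb, virtual_depth):
        if vir_px is not None and vir_d < cam_d:
            out.append(vir_px)   # virtual object in front: draw it
        else:
            out.append(cam_px)   # physical surface (e.g. hand) occludes it
    return out
```

With a hand sensed at 0.5 m and a wall at 3 m, a virtual cube placed at 1 m is hidden behind the hand but drawn over the wall, matching the layering a user would expect.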

Conclusion

There is no doubt that 3D virtual learning spaces offer tremendous opportunities for rich, exciting, and engaging activities. Attending to the careful design of the virtual space, understanding how to leverage the means of navigation and manipulation, and appreciating the affordances and constraints of input and display devices will help the user take full advantage of the virtual space. Dillenbourg (2000) suggests that while there may not be conclusive evidence that virtual learning spaces have a direct effect on the efficacy or economics of education, they do provide teachers and learners a unique set of affordances.

Innovative technologies open up new ways of learning and working. A new tool may well have an obvious primary function, but when the tool is put to use, a wider range of affordances is likely to be uncovered. As those who like to experiment and ride the breaking wave of innovation share their trials, observations, and experiences, researchers can compile case studies to inform further research that starts to shape those experiences into definable affordances. Researchers can then conduct more empirical studies to determine the effectiveness of the tools for different applications, which helps provide guidance and best practice for those who use them.

Works Cited

Ali, D. F., & Nordin, M. S. (2011, September). Gender issues in virtual reality learning environments. Journal of Edupres, 1, 65-76.

Blade, R. A., & Padgett, M. L. (2002). Virtual environments standards and terminology. In K. M. Stanney (Ed.), Handbook of virtual environments (pp. 15-27). London: Lawrence Erlbaum Associates.

Bouras, C., Igglesis, V., & Kapoulas, V. (2004). A web based virtual community: Functionality and architecture issues. Proceedings of IADIS International Conference Web Based Communities, (pp. 59-66). San Sebastian, Spain.

Bowman, D. A., Davis, E. T., Hodges, L. F., & Badre, A. N. (1999). Maintaining spatial orientation during travel in an immersive virtual environment. Presence: Teleoperators and Virtual Environments, 8(6), 618-631.

Bowman, D. A., Koller, D., & Hodges, L. F. (1997, March). Travel in immersive virtual environments: An evaluation of viewpoint motion control techniques. Virtual Reality Annual International Symposium, 1997, IEEE, 45-52.

Bowman, D. A., Kruijff, E., LaViola Jr., J. J., & Poupyrev, I. (2001). An introduction to 3D user interface design. Presence: Teleoperators and Virtual Environments, 10(1), 96-108.

Dalgarno, B., & Lee, M. J. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10-32.

Dede, C. (2004a). Enabling Distributed Learning Communities via Emerging Technologies--Part One. T.H.E. Journal, 32(2), 12.

Dede, C. (2004b). Enabling Distributed Learning Communities via Emerging Technologies--Part Two. T.H.E. Journal, 32(3), 16.

Demi, B. (2007). Human factors issues on the design of telepresence systems. Presence, 16(5), 471-487.

Dickey, M. D. (2005, April-August). Brave New (Interactive) Worlds: A review of the design affordances and constraints of two 3D virtual worlds as interactive learning environments. Interactive Learning Environments, 13(1-2), 121-137.

Dillenbourg, P. (2000). Virtual learning environments. Learning in the new millennium: Building new education strategies for schools.

El Saddik, A., Orozco, M., Eid, M., & Cha, J. (2011). Haptics: General principles. In Haptics Technologies (pp. 1-20). Springer Berlin Heidelberg.

Eng, K., Mintz, M., Delbruck, T., Douglas, R. J., Whatley, A. M., Manzolli, J., & Verschure, P. M. (2006). An investigation of collective human behaviour in large-scale mixed reality spaces. Presence, 15(4), 403-418.

Gabbard, J. L., Swan II, J. E., & Hix, D. (2006). The effects of text drawing styles, backgrounds, textures, and natural lighting on text legibility in outdoor augmented reality. Presence: Teleoperators & Virtual Environments, 15(1), 16-32.

Hanson, K., & Shelton, B. E. (2008). Design and development of virtual reality: Analysis of challenges faced by educators. Educational Technology & Society, 11(1), 118-131.

Iizuka, H., Marocco, D., Ando, H., & Maeda, T. (2012, March 4-8). Turn-taking supports humanlikeness and communication in perceptual crossing experiments — Toward developing human-like communicable interface devices. Virtual Reality Short Papers and Posters (VRW), 2012 IEEE (pp. 1-4). Orange County: IEEE. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6180953&isnumber=6180843

Jun Lee, W. K.-I. (2012). An intravenous injection simulator using augmented reality for veterinary education and its evaluation. 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (pp. 31-34). Nanyang: SIGGRAPH. doi:http://doi.acm.org/10.1145/2407516.2407524

Kiyokawa, K. H. (2012). Owens Luis — A context-aware multi-modal smart office chair in an ambient environment. Virtual Reality Short Papers and Posters (VRW) (pp. 1-4). Orange County: IEEE.

Mashita, T., Komaki, D., Iwata, M., Shimatani, K., Miyamoto, H., Hara, T., . . . Nishio, S. (2012). A content search system for mobile devices based on user context recognition. Virtual Reality Short Papers and Posters (VRW). Orange County: IEEE.

McFarland, D. S. (2012, June). Electroencephalographic (EEG) control of three-dimensional movement. Journal of Neural Engineering, 7(3). doi:10.1088/1741-2560/7/3/036007

Milgram, P. (2006). Some human factors considerations for designing mixed reality interfaces. In V. M. Applications (Ed.), Meeting Proceedings RTO-MP-HFM-136, Keynote 1., (pp. KN1-1 - KN1-14). Neuilly-sur-Seine, France. Retrieved from http://www.rto.nato.int/abstracts.asp

Nelson, B. C. (2007). Exploring the use of individualized, reflective guidance in an educational multi-user virtual environment. Journal of Science Education and Technology, 16(1), 83-97.

Own, Z.-Y., Chen, D.-U., & Wang, Z.-I. (2011). Female-friendly user interface design on a cosmetic chemistry web learning site. International Journal of Instructional Media, 38(1), 87-109.

Poupyrev, I., Weghorst, S., Billinghurst, M., & Ichikawa, T. (1998, August). Egocentric object manipulation in virtual environments: Empirical evaluation of interaction techniques. Computer Graphics Forum, 17(3), 41-52.

Reeves, A. J., & Minocha, S. (2011). Relating pedagogical and learning space designs in Second Life. In A. Cheney, & R. L. Sanders (Eds.), Teaching and Learning in 3D Immersive Worlds: Pedagogical Models and Constructivist Approaches (pp. 31-60). USA: IGI Global.

Reitinger, B., Werlberger, P., Bornik, A., Beichel, R., & Schmalstieg, D. (2005). Spatial measurements for medical augmented reality. International Symposium on Mixed and Augmented Reality (pp. 208-209). Vienna: IEEE.

Saffer, D. (2009). Designing Gestural Interfaces. (M. Treseler, Ed.) Cambridge: O’Reilly Media, Inc.

Severance, C., Hardin, J., & Whyte, A. (2008). The coming functionality mash-up in personal learning environments. Interactive Learning Environments, 16(1), 47-62.

Thornburg, D. D. (2004, October). Campfires in cyberspace: Primordial metaphors for learning in the 21st century. International Journal of Instructional Technology and Distance Learning, 1(10), 3-10.

Tung, F.-W., & Deng, Y.-S. (2006). Designing social presence on e-learning environments: Testing the effect of interactivity on children. Interactive Learning Environments, 14(3), 251-264.

Vila, J., Beccue, B., & Anandikar, S. (2003). The gender factor in virtual reality navigation and wayfinding. HICSS '03 Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03) - Volume 7. IEEE Computer Society.

Wilson, S. (2008). Patterns of personal learning environments. Interactive Learning Environments, 16(1), 17-34.

 


For the next couple of years much of my time will be spent on coursework as I have enrolled in George Washington University's Graduate Certificate in eLearning, the first step toward completing the Masters Degree in Education Technology Leadership. In the spirit of learning in public, I plan to use my blog as a thinking and processing space. I'll use the #GWETL tag here on the blog and the same hashtag when tweets are course related. At the moment, I'm registered in Critical Issues in Distance Education and Computer Interface Design for Learning.

What do you think? Share your thoughts below...
