Augmented Reality (AR) occupies a point on the continuum between the real physical world around us and the fully virtual digital world online, superimposing information on our sensory experiences as we move through time and space (Milgram, 1994). When a user views physical objects through a mobile device’s camera, AR draws on image recognition, geo-location, the device’s accelerometer, and online databases to provide information relevant to the user’s time and place. Research continues into new interaction methods and display possibilities that make engagement with online data more natural and intuitive. This report explores current research in AR and associated technologies in order to understand the possibilities for learners today and in the future.
The device camera in Figure 1 shows the scene in front of the user. Recognizing the user’s geographic location, the device feeds current information about the buildings and businesses in the scene. The accelerometer ensures the data points stay attached to associated buildings as the user changes viewpoint. In other applications, the user can view a 3D model through the device camera as though it existed in physical space.
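The geometric core of this registration step can be sketched in a few lines. The sketch below is illustrative rather than taken from any particular AR toolkit: it assumes a device that reports its GPS position and compass heading, computes the bearing to a point of interest, and converts the angular offset into a horizontal screen coordinate for the overlay label.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees) from the user at (lat1, lon1)
    to a point of interest at (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def screen_x(poi_bearing, device_heading, fov_deg=60, screen_w=1080):
    """Horizontal pixel position for a label, or None when the point of
    interest lies outside the camera's (assumed) 60-degree field of view."""
    # Signed angle between where the camera points and where the POI is, -180..180.
    offset = (poi_bearing - device_heading + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None
    return int(screen_w / 2 + (offset / fov_deg) * screen_w)
```

In a real application the accelerometer and gyroscope would update `device_heading` continuously, which is what keeps each data point "attached" to its building as the user pans the camera.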
As interesting as current AR implementations are, research into new interaction mechanisms and data delivery will have a deep and far-reaching impact on how we engage with digital content, educational institutions, and educational philosophies. An exploration of AR and associated research follows, concluding with a summary vision of future learning experiences.
Interest in AR
Figure 2 When viewed through the device camera, an AR app recognizes trigger images and overlays digital data
While the New Media Consortium’s Horizon Report anticipates widespread educational adoption of AR in four to five years (Johnson, 2012), AR is already being used in many ways outside of teaching and learning. The nature of the technology is such that it often goes unnoticed, so seamlessly is it integrated with our sensory experiences. The Report points to the use of AR to superimpose the line of scrimmage on a football field or highlight the puck in a fast-moving hockey game. Entertainment applications like Zappar allow users to “create, explore, search and share augmented reality and vision based experiences” (Zappar, 2013) attached to t-shirts, hats, jigsaw puzzles, flashlights, and device cases.
Early-adopting educators are exploring AR with students, most simply as a way to virtually tack multimedia content to the bulletin board. Examples in higher education employ AR to bring safe, hands-on learning with virtual models of otherwise inaccessible, dangerous, or expensive materials. By manipulating trigger image cards, students can experiment with expensive laser holography equipment to develop the psychomotor skills required in the performance context (Yamaguchi, 2012). Veterinary students in the early stages of training use a physical animal model and a needle with haptic response actuators, together with an AR overlay, to learn intravenous injection procedures (Jun Lee, 2012). On a larger scale, learning spaces like museums are augmenting the visitor experience with AR (Veldman, 2011). The museum experience can be customised to respond to a visitor’s expressed interests.
Figure 3 Educators on Twitter sharing student AR experiences
The way we request and receive information is increasingly integrated with our natural behaviours in physical spaces: both the information pushed to us and the information that arrives in response to a request depend more and more on our position in space and time. Thinking forward, it is exciting to imagine a completely customised educational experience delivered to every child based on their interests, behaviours, state of mind, position in space, and time of day.
Components of Augmented Reality
Unlike a Quick Response (QR) code which merely sends the user to online content viewed through a browser, AR superimposes a variety of content as though it existed in real space. Hsin-Kai Wu (2013) suggests AR should be understood as a concept rather than a specific technology. It is helpful to understand AR as a negotiation between the user and content delivery systems leveraging the power of several technologies to create intuitive and seamless interactions between user and technology as illustrated in Figure 4. The negotiated system has several elements:
1) Content creation: Online databases are stocked with data provided by individuals, ambient information gathered by connected sensors, and media pieces from professional, commercial, or entertainment interests. Increasingly, content is created by sensors responding to individual users rather than by the direct input of users themselves (Avilés-López, 2012).
2) Need identification: A user’s information needs are expressed by the user, “perceived” by technology, or determined through a negotiation between the two. A wearable camera can feed the computer images of the world as seen by the user (Bostanci, 2012), and IoT devices can gather ambient data from the user and the user’s environment during periods of motor, mental, or physiological activity, responding to observable changes in state by adjusting room temperature, lighting, or the colour intensity of a viewed screen to create an optimal environment (Kiyokawa, 2012). User interfaces and operating systems themselves are being designed to respond to a user’s state (Mashita, et al., 2012). For example, if the device (and therefore the user) is in motion, the interface is simpler, with a concise menu, larger titles, and larger buttons; when the device is still, the interface is detailed and offers more complex interactions. AR in this instance is not a data overlay but the digital processing of gathered data to make physical and environmental changes that meet the user’s needs.
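The motion-adaptive interface described above can be sketched minimally, under the assumption that the device streams recent accelerometer magnitudes; the 0.5 m/s² variance threshold and the specific interface profiles are illustrative values, not figures from the cited studies.

```python
import statistics

def motion_state(accel_magnitudes, threshold=0.5):
    """Classify the device as 'moving' or 'still' from the spread of recent
    accelerometer magnitudes (m/s^2). The threshold is an assumed value."""
    return "moving" if statistics.pstdev(accel_magnitudes) > threshold else "still"

def ui_profile(state):
    """Pick an interface density for the detected motion state: fewer,
    larger targets while moving; richer detail while still."""
    if state == "moving":
        return {"menu_items": 4, "button_px": 96, "detail": "low"}
    return {"menu_items": 12, "button_px": 48, "detail": "high"}
```

A real system would smooth the sensor stream and add hysteresis so the interface does not flicker between modes at the threshold boundary.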
Iizuka (2012) describes “experimental semiotics” as computers responding to non-verbal cues from the user (intellectual, emotional, psychological communication) in order to meet information and environmental needs. He goes on to say, "... the integrated system for the ambient information society needs to actively interact with users somehow to read their intention or states rather than passively collecting information."
3) Content selection: Content requested, or “pulled”, by the user through a search engine is based on the user’s expressed need. Often context-aware, the search engine provides content that responds to contextual information such as the device’s position and the user’s current state in addition to user-expressed needs. In pushed content, device position and the user profile are examined by content providers and services, which send information to the user based on the user’s current state and perceived need. Pariser (2011) raises many issues about this kind of filtering, suggesting that an algorithm intended to provide more relevant results may in fact send users into a feedback loop in which the scope of their search results becomes increasingly narrow and reflective of the user’s existing point of view.
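One way a context-aware engine might blend expressed and perceived need is to score each candidate item on both keyword overlap and proximity to the user. The weights, the item fields, and the distance-decay scale below are illustrative assumptions, not the method of any system cited here.

```python
import math

def context_score(item, query_terms, user_lat, user_lon, w_text=0.7, w_near=0.3):
    """Blend expressed need (keyword overlap with the item's tags) with
    context (distance decay). Weights and the ~1 km decay are assumptions."""
    text = len(set(item["tags"]) & set(query_terms)) / max(len(query_terms), 1)
    # Rough planar degrees-to-kilometres conversion, fine at this scale.
    km = math.hypot(item["lat"] - user_lat, item["lon"] - user_lon) * 111
    near = math.exp(-km)  # 1.0 at the user's position, decaying with distance
    return w_text * text + w_near * near

def rank(items, query_terms, user_lat, user_lon):
    """Return items ordered from most to least contextually relevant."""
    return sorted(items,
                  key=lambda it: context_score(it, query_terms, user_lat, user_lon),
                  reverse=True)
```

Pariser’s concern maps directly onto such a function: if the user profile keeps reinforcing the same `query_terms` and locations, the top-ranked results narrow over time.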
4) Content delivery: The means by which the user receives information is still primarily visual and auditory though work continues on development and implementation of haptic (touch) response systems. Mobile devices display content in any connected location. Innovations like Google Glass (Hayes, 2012) and display-enabled contact lenses bring hands-free access to information that is integrated with our physiology (Parviz, 2009). Interactive projected images can be displayed for shared experiences (MacFarlane, 2013) with images completely covering all surfaces of a room using multiple projectors (Hanhoon Park, 2005).
5) Interaction: Once received, the user has a variety of means with which to interact with the data. Keyboard and mouse is no longer the only means of input. Handwriting recognition with a stylus, voice recognition, touch screens, and gesture recognition (Saffer, 2009) create opportunities for new ways to interact with data and devices. Haptic response systems recognize tactile input on customized surfaces, (Reitinger, Werlberger, Bornik, Beichel, & Schmalstieg, 2005) (Jun Lee, 2012). Development of a negotiated non-verbal tactile communication system will allow the user and computer to evolve their own strategies for engagement (Iizuka, Marocco, Ando, & Maeda, 2012), and electroencephalography (EEG) input devices, while still in very early development, allow users to control devices with their minds (McFarland, 2012).
Figure 4 Negotiated Content delivery based on technology-perceived and user-expressed needs
Effects on Learning
When an institution invests in hardware, software, and professional development for a new technology, it is reasonable to expect some benefit in terms of learning gains or productivity. In a study using AR to support experiential learning, the unique technology experience generated significant interest and enthusiasm for the learning task compared to the control group (Juan, 2010). This novelty effect enhanced engagement, but only temporarily, and the study found little difference in learning outcomes between the AR group and the control group.
In another study, where AR was used as part of a gamified learning experience, Freitas (2008) determined that while there was a demonstrable improvement in learning gains over the control group, the improvement was attributable to the gaming structure of the lesson rather than to the AR component in particular. Nevertheless, it has been suggested that learner motivation itself may be sufficient reason to adopt a new technology, because the absence of technology may be a barrier to learning, not just a disincentive to participate (Salvador-Herranz, et al., 2013).
Given the new modes of interaction and content presentation, the most authentic and effective ways of using the technology are likely still being determined. There are no demonstrable benefits, beyond the novelty effect, to AR activities that merely translate traditional exercises into an augmented experience. Research identifying the learning domains most affected by AR experiences leads to a better understanding of where best to employ the technology (Schmitz, 2012).
Combining AR with computer-assisted learning applications, Liarokapis et al. (2002) explored academic and social elements of augmented activities. While AR interactions are based on an individual’s point of view, the opportunity for connectivist learning models still exists because the content is digital and can be shared easily.
In my current position as a classroom teacher, I take the opportunity to explore AR with my students and colleagues. I see the motivational appeal of the technology and question at what point attention directed to novelty is directed to intended content. While simple experimentation and thought about the technology is fun and engaging, it is important to determine whether the technology contributes to the students’ learning.
I have worked with others to create AR displays of student work for school board presentations and parent nights at the school and am working on creating a school AR channel with which families can access multimedia content using pictures in the school newsletter as trigger images. I would also like to augment my existing texts creating a flipped, or blended learning experience using AR to provide on-demand video, 3D models, and interactive manipulatives.
While AR succeeds as a motivational hook, gains in learning are less consistently demonstrated. Successful use of new technologies depends on matching each technology with the learning outcomes it best supports. Because of the visual-spatial nature of 3D AR, motor skill learning in particular can be enhanced through direct manipulation of objects that mimic real conditions.
For intellectual skills, learning gains are attributable to quality engagement, rather than the AR itself. Additionally, a clear articulation of institutional support for digital learning increases the likelihood of successful implementation (Bhati, 2010).
The convergence of so many technologies is creating new ways of interacting and engaging with the world leading to new ways of thinking. It could be we have not yet discovered the best application of these new tools for enhancing learning. Perhaps there are as-yet unmeasured indicators that would support continued use and investment in education.
Envisioning the future, we could see education delivered in non-traditional spaces outside the regular school day as learners’ devices engage them in problem-solving activities customised to their demonstrated levels of proficiency and tied to what they are doing and where they are. Teachers will not deliver lessons, but will coordinate learning. Skill development will focus less on specific content and more on process and problem solving. In a time when everything you might ever want to know is instantly accessible, there is a need to rethink the focus of education.
Imagine a 12-year-old in the back seat on a family trip looking out a Google Glass window. Content delivery systems identify a gap in her content learning from a geography activity, and the window begins to label landforms as the family travels. She accesses an AR model of the terrain outside and, using her book as a target image, views the surrounding terrain from all sides in three dimensions. She completes the quiz on her learning group’s learning management system and receives a badge of achievement in her digital backpack.
Her younger brother is struggling with perspective in art so his window creates a vanishing point grid aligned with the scene outside. Coming to an understanding, he uses his mobile device to sketch out a picture using perspective and sends it to his learning cohort. Within a few minutes he receives some responses congratulating him on his progress along with some pictures his peers drew.
The convergence of so many technologies into a unified system of information sharing makes possible a greater, deeper understanding of our world. That these systems are increasingly integrated into our sensory experiences brings us closer to Kurzweil’s Singularity, the complete integration of the human organism with digital communication (Ptolemy, 2009).
“One of the things our grandchildren will find quaintest about us is that we distinguish the digital from the real.” —William Gibson in a Rolling Stone interview, November 7, 2007
- Avilés-López, E. G.-M. (2012, March 2). Mashing up the Internet of Things: a framework for smart environments. EURASIP Journal on Wireless Communications and Networking. doi:10.1186/1687-1499-2012-79
- Bhati, N. M. (2010). Barriers and facilitators to the adoption of tools for online pedagogy. International Journal of Pedagogies & Learning, 5(3), 5-19.
- Billinghurst, M. (2002, December). Augmented Reality in the Classroom. Retrieved from New Horizons for Learning: http://www.newhorizons.org
- Bimber, O. (2012, July). What's Real About Augmented Reality? [Guest editor's introduction]. Computer, 45(7), 24-25.
- Bostanci, E. C. (2012). Vision-based user tracking for outdoor augmented reality. The Seventeenth IEEE Symposium on Computers and Communication (ISCC’12) (pp. 566-568). Cappadocia: IEEE.
- Dede, C. (2004). Enabling Distributed Learning Communities via Emerging Technologies--Part One. T.H.E. Journal, 32(2), 12.
- Dede, C. (2004). Enabling Distributed Learning Communities via Emerging Technologies--Part Two. T.H.E. Journal, 32(3), 16.
- Educause Learning Initiative. (2005, September). 7 things you should know about augmented reality. Retrieved from Educause Learning Initiative: http://www.educause.edu/library/resources/7-things-you-should-know-about-augmented-reality
- Eisele-Dyrli, K. (2010). San Diego pilot: Latest test of augmented reality. District Administration, 46(6), p. 18.
- Feng Zhou, D. H.-L. (2008). Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR. International Symposium on Mixed and Augmented Reality. Cambridge. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4637362&isnumber=4637297
- Freitas, R. C. (2008). SMART: a SysteM of Augmented Reality for Teaching 2nd grade students. People and Computers XXII Culture, Creativity, Interaction. 2, pp. 27-30. Swinton: British Computer Society.
- Hanhoon Park, M.-H. L.-J.-I. (2005). Specular reflection elimination for projection-based augmented reality. International Symposium on Mixed and Augmented Reality, (pp. 194-195). Vienna.
- Hayes, A. (2012). Reflections: Glass & Mobile Learning. (L. E. Dyson, Ed.) anzMLearn Transactions on Mobile Learning, 1, 5-9. Retrieved from http://research.it.uts.edu.au/tedd/anzmlearn
- Hsin-Kai Wu, S. W.-Y.-Y.-C. (2013, March). Current status, opportunities and challenges of augmented reality in education. Computers & Education, 62, 41-49. Retrieved from http://www.sciencedirect.com/science/article/pii/S0360131512002527
- Iizuka, H., Marocco, D., Ando, H., & Maeda, T. (2012, March 4-8). Turn-taking supports humanlikeness and communication in perceptual crossing experiments — Toward developing human-like communicable interface devices. Virtual Reality Short Papers and Posters (VRW), 2012 IEEE (pp. 1-4). Orange County: IEEE. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6180953&isnumber=6180843
- Johnson, L. A. (2012). NMC Horizon Report: 2012 K-12 Edition. Austin, Texas: The New Media Consortium. Retrieved from http://www.iste.org/docs/documents/2012-horizon-report_k12.pdf?sfvrsn=2
- Juan, C. L. (2010). Learning Words Using Augmented Reality. International Conference on Advanced Learning Technologies (ICALT) (pp. 422-426). Sousse: IEEE. Retrieved from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5572407&isnumber=5571093
- Jun Lee, W. K.-I. (2012). An intravenous injection simulator using augmented reality for veterinary education and its evaluation. 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (pp. 31-34). Nanyang: SIGGRAPH. doi: http://doi.acm.org/10.1145/2407516.2407524
- Kiyokawa, K. H. (2012). Owens Luis — A context-aware multi-modal smart office chair in an ambient environment. Virtual Reality Short Papers and Posters (VRW) (pp. 1-4). Orange County: IEEE.
- Klopfer, E. &. (2005). Developing games and simulations for today and tomorrow's tech savvy youth. TechTrends: Linking Research & Practice to Improve Learning, 49(3), 33-41.
- Liarokapis, F. P. (2002). Multimedia augmented reality interface for e-learning (MARIE). World Transactions on Engineering and Technology Education, 1(2), 173-176. Retrieved from http://nestor.coventry.ac.uk/~fotisl/publications/WTETE2002.pdf
- Mashita, T., Komaki, D., Iwata, M., Shimatani, K., Miyamoto, H., Hara, T., . . . Nishio, S. (2012). A content search system for mobile devices based on user context recognition. Virtual Reality Short Papers and Posters (VRW). Orange County: IEEE.
- Milgram, P. K. (1994, December). A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information Systems, E77-D(12), 1321-1329.
- Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.
- Parviz, B. A. (2009, September). Augmented reality in a contact lens. Retrieved from IEEE Spectrum: http://spectrum.ieee.org/biomedical/bionics/augmented-reality-in-a-contact-lens/0
- Reitinger, B., Werlberger, P., Bornik, A., Beichel, R., & Schmalstieg, D. (2005). Spatial measurements for medical augmented reality. International Symposium on Mixed and Augmented Reality (pp. 208-209). Vienna: IEEE.
- Saffer, D. (2009). Designing Gestural Interfaces. (M. Treseler, Ed.) Cambridge: O’Reilly Media, Inc.
- Salvador-Herranz, G., Perez-Lopez, D., Ortega, M., Soto, E., Alcaniz, M., & Contero, M. (2013). Manipulating Virtual Objects with Your Hands: A Case Study on Applying Desktop Augmented Reality at the Primary School. Proceedings of the Forty-Sixth Annual Hawaii International Conference on System Sciences (pp. 31-39). Grand Wailea: Hawaii International Conference on System Sciences.
- Schmitz, B. S. (2012, October 16-18). An Analysis of the Educational Potential of Augmented Reality Games for Learning. In J. M. M. Specht (Ed.), Proceedings of the 11th World Conference on Mobile and Contextual Learning 2012 (pp. 140-147). Helsinki, Finland.
- West, D. M. (2012). Digital schools: How technology can transform education. Washington, D.C.: Brookings Institution.
- Wrzesien, M. A. (2010). Learning in serious virtual worlds: Evaluation of learning effectiveness and appeal to students in the E-Junior project. Computers & Education, 55(1), 178-187.
- Yamaguchi, T. H. (2012). New education system for construction of optical holography setup - Tangible learning with Augmented Reality. The 9th International Symposium on Display Holography. Cambridge, Massachusetts. doi:10.1088/1742-6596/415/1/012064
- Zappar. (2013, March 1). Terms & Conditions. Retrieved April 15, 2013, from Zappar: http://www.zappar.com/terms/