The present invention relates to techniques for generating virtual objects corresponding to real, physical objects. In certain embodiments, the invention relates to the provision of a deconstruction manager for synchronising virtual objects and real, physical objects.
Mixed-Reality (MR) is the spectrum that connects physical environments, free of virtual representations of any kind, to completely virtual ones, allowing the co-existence of physical and computer-generated elements in real-time. Its potential lies in the possibility of enhancing reality, making invisible things visible (Pastoor and Conomis, 2005) and sometimes, due to its synthetic nature, modifying the physical laws governing reality by implementing diverse metaphors (visual, auditory and haptic) not available in the physical world (Ellis, 1994). A mixed-reality space can be built upon the dual-reality principle of reflecting actions between elements within a physical and virtual environment (Lifton 2007; 2009; 2010).
Systems are known in which a virtual object is generated that corresponds to a physical object in the real world. For example, cyber-physical systems (CPS) refer to systems with integrated computational and physical capabilities that bridge the cyber-world of computing and communications with the physical world (Rajkumar et al., 2010; Baheti and Gill, 2011). Embedded computers and networks monitor and control physical processes, usually with feedback loops where these processes affect computations and vice versa (Lee, 2008). CPS are usually implemented to monitor and control applications in physical and engineered systems using embedded computing. Information taken from the tangible world is based on physical variables (e.g. temperature, humidity) and represented as two-dimensional (2D) abstract objects (e.g. graphs, tables).
In the field of electronic gaming, gaming platforms are known that use tangible user interfaces (TUI) (Ishii and Ullmer, 1997) to augment the real physical world by coupling digital information to everyday physical objects and environments. Examples include game controllers, dance pads, and sophisticated on-body gesture-recognition controls such as the Nintendo Wii, or Ubisoft's Rocksmith.
In one such example, a real electric guitar is connected to a virtual interface in order to teach the end user to play the guitar in an individual learning session (or a collaborative learning session if the other user has an extra electric guitar or electric bass). In such examples, a user can perform an action on a real object, updating a state in the virtual world (e.g. pressing a real string is reflected on the virtual fretboard and the corresponding sound is played).
Another example is known in which a virtual representation of a building is connected with physical sensors in the real world counterpart. Lifton (2007; 2009; 2010) used a bespoke sensor/actuator node embedded in a power strip (called PLUG) to link virtual and physical worlds. This sent the collected data to the virtual world, creating different metaphors that showed the data in real-time. Finally, multiple PLUGs were distributed within a physical building, creating a ubiquitous networked sensor/actuator infrastructure of interconnected nodes that reflected their current status on a virtual map of the building.
More advanced systems are described in Peña-Ríos et al, “Developing xReality objects for mixed-reality environments”, (Department of Computer Science, University of Essex, UK; Faculty of Computing and Information Technology, King Abdulaziz University, KSA.), which discusses the concept of “xReality” objects, and Peña-Ríos et al, “Using mixed-reality to develop smart environments”, (Department of Computer Science, University of Essex, UK; Faculty of Computing and Information Technology, King Abdulaziz University, KSA.), which discusses “mixed-reality” techniques for use in everyday environments.
The examples of multidimensional spaces discussed so far represent different interactions between users/objects and the environment they belong to, either virtual or physical. Unidirectional communication happens when actions from one environment are reflected in the other (affecting one or more users) but the feedback is not reciprocal. One example is the ISO/IEC 23005 standard specification, which reflects haptic feedback based on actions happening in the virtual world but does not allow the modification of virtual environments (e.g. a 4D movie), so no dual-reality exists in such environments. Another example can be found in traditional TUIs (e.g. a joystick), where the action executed in the physical world (e.g. pressing a button) has an effect in a virtual environment (e.g. a video game) and can be followed by all the players in the session, but an event in the virtual world would not modify the physical space. Moreover, such implementations try to create immersion in one (usually virtual) space only.
Bidirectional communication between virtual and physical worlds involves the creation of blended-reality spaces where interaction happens in both worlds, reflecting the changes in both. Those changes can be represented as 2D elements such as graphs or charts (e.g. smart home applications such as Samsung's Smart Home or the Philips Hue system allow the physical status of objects to be changed via a software application), as metaphors (e.g. the data pond at MIT's Dual Reality Lab), or mirrored to 3D virtual objects (e.g. the VirCA project's virtual robot or the appliances at Essex's Intelligent Virtual Household). In these examples, the one-to-one relationship between a virtual object and its mirrored physical object allows the creation of dual-reality states. A benefit of implementing these mirrored objects in collaborative environments with multiple users is that the physical object can be remotely controlled via the virtual mirrored element. This represents an advantage for collaborative work between dispersed teams, where the use of specialised equipment might be restricted to specific geographical locations. However, none of these techniques provides a means for advanced mixed-reality systems to be readily implemented, particularly systems involving ad-hoc combinations of multiple real and virtual objects and multiple users, and systems capable of interacting independently of geographical location. Current shared physical-virtual object implementations also suffer from a number of limitations, which are discussed further below.
The present invention provides apparatus, systems and methods for interconnecting multiple distant learning environments, allowing bidirectional communication between environments, smart objects and users using a synchronising mechanism to mix distributed physical and virtual devices. The goal of this interconnected learning environment is to enable hands-on activities for distance learners within a collaborative group-based learning session.
In accordance with certain aspects of the invention, a system is provided for generating at least one virtual object corresponding to at least one physical object, wherein the system is arranged to update the virtual object responsive to changes in state of the physical object, and the system is arranged to update the physical object responsive to changes in state of the virtual object.
In certain examples, the virtual object can be displayed to a user at a terminal remote from the real object. The user can manipulate the virtual object via the terminal, responsive to which, the system is arranged to control the state of the real object.
In this way, mixes of real and virtual objects can operate as a holistic system independently of geographical separation.
In certain examples, a deconstruction manager mechanism is provided which is arranged to synchronise the physical object with the virtual object.
In accordance with certain aspects of the invention, a technique is provided which enables an advanced mixed-reality system to be implemented, for example, by the provision of a deconstruction manager. The mixed-reality system can support interactions with numerous users at numerous different locations.
In certain examples, the deconstruction manager comprises first functionality which identifies the physical object and the virtual object and further functionality which maintains characteristic information relevant for the physical object and the virtual object.
In certain examples, the deconstruction manager comprises a continuum manager mechanism arranged to identify the physical object and the virtual object from the first functionality and to identify characteristic information of the physical object and the virtual object from the further functionality, and the deconstruction manager is arranged to link the physical object with the virtual object, thereby enabling the physical object to be synchronised with the virtual object.
In certain examples, the deconstruction manager is implemented as software running on a server.
In certain examples, the deconstruction manager orchestrates the interaction of mixes of corresponding, or unique, real objects and virtual objects to form a system that functions as a whole, independently of the distance separating the components.
In certain examples, the virtual object is displayed at a terminal remote from the physical object and a user can manipulate the virtual object via the terminal, responsive to which, the system is arranged to control the state of the physical object.
In certain examples, the system is arranged to generate further virtual objects, corresponding to further physical objects, each further virtual object being associated with one of the further physical objects.
In accordance with certain aspects of the invention, there is provided a server having software running thereon providing an instance of a deconstruction manager which is arranged to generate and maintain a virtual object associated with a physical object connected via a data connection to the server, said virtual object displayable on a mixed-reality graphical user interface of a user terminal connected to the server.
In certain examples, the deconstruction manager is arranged to synchronise the physical object with the virtual object.
In certain examples, the instance of the deconstruction manager communicates data to and from the physical object and the user terminal allowing a state of a controllable element of the physical object to be controlled via the mixed-reality graphical user interface of the user terminal.
In accordance with certain aspects of the invention, there is provided a system for enabling mixed-reality interaction. The system comprises at least one physical object corresponding to at least one virtual object, said physical object responsive to changes in state of the virtual object, and said virtual object responsive to changes in state of the physical object; a server that manages communication between the physical object and virtual object, and a graphical user interface that enables input to be received from users in remote locations to change the status of the physical object and virtual object and to combine the physical object and virtual object.
In accordance with examples of the invention, a technique is provided that enables given instances of a shared reality to be dynamically set and managed, thereby creating user- and machine-adjustable virtuality. For example, this includes supporting the mixing of physical and virtual components (that may or may not correspond to each other). In other examples, certain embodiments would enable eLearning to be extended from “on-screen” collaborative activities (e.g. problem solving, concept formation, etc.) to include off-screen activities constructing physical devices such as those associated with engineering laboratories (e.g. collaboratively building physical internet-of-things devices via users and systems that are geographically dispersed, etc.).
In contrast to prior art systems and techniques, dynamic-reality partition management is provided (i.e. maintenance of adjustable multi-state, multi-user and multi-information flows), which opens possibilities for creation by enabling the combination and use of disaggregated services/functions into new functionalities created by end-users.
Certain embodiments of the present invention will now be described hereinafter, by way of example only, with reference to the accompanying drawings in which:
In the drawings like reference numerals refer to like parts.
A real object 101 is connected, via a data connection, to a server 102. The server 102 is further connected to a plurality of user terminals 103, 104. Each user terminal 103, 104 includes a display means (e.g. a monitor) and user input means (e.g. a mouse and keyboard). The real object 101 includes a controllable element 105 (e.g. a light) which can be controlled by a control mechanism on the real (i.e. physical) object 101. Further, information about the state of the controllable element 105 of the real object 101 is communicated from the real object 101, for example via the control mechanism, to the server 102.
The server 102 includes thereon software arranged to generate and maintain a virtual representation of the real object 101, referred to as a “virtual object”. The “virtual object” may graphically correspond to the real object 101, including, for example reflecting the current state of the controllable element 105 (e.g. whether the light is on or off). The server 102 is arranged to communicate data corresponding to the virtual object to the user terminals 103, 104. The user terminals 103, 104 are arranged to display the virtual object on a mixed-reality graphical user interface (GUI). The mixed-reality GUI is arranged to allow a user of a user terminal to manipulate the virtual object via the user input means, for example manipulating the controllable element 105 (e.g. turning the light on or off). The user terminal is arranged to communicate manipulation data corresponding to this manipulation of the virtual object back to the server 102. The software running on the server 102 is arranged to convert the manipulation data into control data and communicate this to the real object 101. The real object 101 includes means to use the control data (for example the control mechanism described above) to control the controllable element in accordance with the manipulation of the virtual object performed by the user of the user terminal (e.g. turn the light on or off).
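By way of illustration only, the following sketch (in Python, chosen purely for concreteness) outlines how the server software described above might relay state between the real object 101 and the user terminals 103, 104. All identifiers are hypothetical assumptions made for illustration, not part of the specification.

```python
# Hypothetical sketch of the server-side relay; names are illustrative only.

class VirtualObject:
    """Virtual counterpart of a real object, mirroring one controllable element."""
    def __init__(self, object_id):
        self.object_id = object_id
        self.state = {"light": "off"}  # e.g. the controllable element 105

class Server:
    def __init__(self):
        self.virtual_objects = {}  # object_id -> VirtualObject
        self.terminals = []        # connected user terminals (e.g. 103, 104)

    def on_physical_state_change(self, object_id, new_state):
        # Real object -> server: update the virtual object, notify all terminals.
        self.virtual_objects[object_id].state.update(new_state)
        for terminal in self.terminals:
            terminal.display(object_id, self.virtual_objects[object_id].state)

    def on_manipulation(self, object_id, manipulation):
        # Terminal -> server: convert manipulation data into control data and
        # forward it to the real object's control mechanism.
        control_data = {"element": manipulation["element"],
                        "command": manipulation["action"]}
        self.send_to_real_object(object_id, control_data)

    def send_to_real_object(self, object_id, control_data):
        pass  # transport to the real object's control mechanism (e.g. over IP)
```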
In this way a number of users who may be physically separated can use a mix of a physical and virtual environment to collaborate together to control a real object. Real objects that can be connected to a network and controlled in this way are referred to as “smart objects”.
The arrangement described with reference to
In order to facilitate such advanced systems, a mechanism can be provided that dynamically allocates and manages deconstruction and reconstruction of mixed-reality partitions.
Such a mechanism can serve as a component of advanced collaborative online systems such as those in education (e.g. mixed-reality engineering labs), training (e.g. surgery, flight, repair), field servicing (e.g. fixing of field faults through mixed reality mirroring and presence), games (e.g. dual-reality gaming) or business (e.g. multi-national R&D).
Fundamentally all realities are composed of combinations of mixes of (low-level) physical and virtual components. The challenge for creating any particular variation is how to manage the deconstruction, reconstruction and maintenance of the system across the different realities.
Typically, electronic computer systems are constructed from a combination of networked components (extending from high-level holistic systems down to low-level sub-systems). Likewise, learning comprises combinations of lower level skills/knowledge.
It has been recognised that both computers and education could be viewed as a deconstruction and reconstruction of these basic elements. Accordingly a “deconstruction management engine” that enables various realities to be created and managed to meet the needs of users has been provided. This concept is illustrated in
The Deconstruction Management Engine 205 includes a Pedagogical Environment Manager 206, a Learning Relationship Manager 207, a Human Computer Interfaces (HCI) module 208, a Global Reality Manager 209 and a Reality Continuum Deconstruction Manager 210.
The Pedagogical Environment Manager 206 is a mechanism that manages the pedagogical objects 204, combining them into learning activities that will be undertaken by the users, and linking those pedagogical objects 204 to the mixed-reality objects involved in the lessons.
The Learning Relationship Manager 207 controls users' preferences and aptitudes, matching them to pedagogical objects via the Pedagogical Environment Manager 206.
The Human Computer Interfaces (HCI) module 208 is a high-level representation of different interfaces that can interact with the wider model (e.g. mobile devices, mixed-reality platforms, desktop interfaces, etc.).
The Global Reality Manager 209 is a module that interconnects multiple remote implementations of the Deconstruction Management Engine.
The Reality Continuum Deconstruction Manager 210 is the mechanism that identifies, interconnects and synchronises physical and virtual objects.
The Reality Continuum Deconstruction Manager 210 includes an ID object element 211, a Reconstruction Continuum Manager 212 and a Knowledge Base of Deconstructed Objects 213.
The ID object element 211 is an element that allows identification of the objects (virtual and physical).
The Knowledge Base of Deconstructed Objects 213 contains the information relevant for each object, either virtual or real.
The Reconstruction Continuum Manager 212 is the mechanism that identifies the objects and their characteristics using information from the knowledge base of deconstructed objects, creating a link between objects, and keeping their statuses updated and synchronised.
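A minimal sketch of how these three elements of the Reality Continuum Deconstruction Manager 210 might fit together is given below; the class and method names are assumptions made for illustration only, not the specification's API.

```python
# Hypothetical sketch of the Deconstruction Manager's three elements.
from dataclasses import dataclass, field

@dataclass
class ObjectRecord:
    """Entry in the Knowledge Base of Deconstructed Objects 213."""
    object_id: str
    kind: str                             # "physical" or "virtual"
    properties: dict = field(default_factory=dict)

class DeconstructionManager:
    def __init__(self):
        self.knowledge_base = {}          # object_id -> ObjectRecord
        self.links = {}                   # physical id <-> virtual id

    def identify(self, object_id):
        """ID object element 211: look up an object, virtual or physical."""
        return self.knowledge_base.get(object_id)

    def link(self, physical_id, virtual_id):
        """Reconstruction Continuum Manager 212: bind the two counterparts."""
        self.links[physical_id] = virtual_id
        self.links[virtual_id] = physical_id

    def synchronise(self, object_id, status):
        """Propagate a status change to the linked counterpart."""
        self.knowledge_base[object_id].properties.update(status)
        counterpart = self.links.get(object_id)
        if counterpart is not None:
            self.knowledge_base[counterpart].properties.update(status)
```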
As can be seen from
The Atomic Object Exploration Agent 213a is representative of a function that performs a process that is responsible for: (1) discovery of atomic elements; (2) consistency checks between the real world and the knowledge base representation, and (3) adding unique atomic elements to the database. Typically, the process is technology agnostic, allowing operation with a diverse range of devices.
The Object Knowledge Base 213b is representative of a function that contains a record of objects known by the system and which describes their properties.
Meta-Object Exploration Agent 213c is representative of a function that performs a process that pro-actively searches for (1) atomic elements with similar properties, (2) atomic elements with complementary properties, (3) atomic elements that have been previously combined.
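One possible reading of these two exploration agents is sketched below; the matching rules (shared property keys treated as “similar”, disjoint keys as “complementary”) are simplifying assumptions, not part of the specification.

```python
# Hypothetical sketch of the exploration agents; objects are plain dicts
# of the form {"id": ..., "properties": {...}}.

def atomic_object_exploration(discovered, knowledge_base):
    """Agent 213a: (1) discover atomic elements, (2) reconcile inconsistencies
    with the knowledge base, (3) add unique elements to it."""
    for obj in discovered:
        known = knowledge_base.get(obj["id"])
        if known is None:
            knowledge_base[obj["id"]] = obj              # (3) new unique element
        elif known["properties"] != obj["properties"]:
            known["properties"] = obj["properties"]      # (2) real world wins

def meta_object_exploration(knowledge_base):
    """Agent 213c: propose pairs of atomic elements that could be combined."""
    items = list(knowledge_base.values())
    suggestions = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            shared = set(a["properties"]) & set(b["properties"])
            label = "similar" if shared else "complementary"
            suggestions.append((a["id"], b["id"], label))
    return suggestions
```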
Further, as can be seen from
The User's Needs Agent 212a is representative of a function that performs a process that accepts the learning needs via the pedagogical environment manager in order to identify suitable objects to meet the learning goals.
The Reconstruction Continuum Manager 212b is a function that performs a process that identifies the objects and their characteristics using information from the Knowledge Base of Deconstructed Objects 213, creating a link between objects and keeping their statuses updated and synchronised.
The Opportunity Agent 212c is a function that performs a process that provides suggestions based on the outcome of the Meta-Object Exploration Agent 213c. This process can also work in reverse, where learning needs can drive the meta-object exploration activity.
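By way of example only, the User's Needs Agent 212a and Opportunity Agent 212c might then be read as follows, reusing the suggestion tuples from the sketch above; the capability-set matching shown is an assumption.

```python
# Hypothetical sketch: match learning needs to objects and filter suggestions.

def users_needs_agent(learning_needs, knowledge_base):
    """Agent 212a: select objects whose properties cover the learning needs."""
    needed = set(learning_needs)
    return [obj for obj in knowledge_base.values()
            if needed <= set(obj["properties"])]

def opportunity_agent(suggestions, suitable_objects):
    """Agent 212c: surface suggestions involving objects that meet the needs."""
    ids = {obj["id"] for obj in suitable_objects}
    return [s for s in suggestions if s[0] in ids or s[1] in ids]
```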
In certain examples of the invention, an instance of the Reality Continuum Deconstruction Manager 210 can be implemented as software running on a server.
A server 307 has running thereon software which is arranged to generate and maintain virtual objects associated with the real objects 301, 302, 303, 304. As described with reference to
In order to facilitate the operation of the system shown in
The Reality Continuum Deconstruction Manager 308 ensures that the real objects (e.g. the states of the controllable elements of the real objects) remain synchronised with the representations of the real objects (i.e. the virtual objects). This is achieved, at least in part, by an ID object element of the Reality Continuum Deconstruction Manager 308, which allows the real objects 301, 302, 303, 304 and the corresponding virtual objects to be identified; by a Knowledge Base of Deconstructed Objects of the Reality Continuum Deconstruction Manager 308, which contains relevant information about each virtual object and each real object; and by a Reconstruction Continuum Manager of the Reality Continuum Deconstruction Manager 308. The Reconstruction Continuum Manager combines information from the ID object element and the Knowledge Base of Deconstructed Objects to create a link between the real objects and virtual objects, and to keep their statuses updated and synchronised.
As will be understood, the Reality Continuum Deconstruction Manager 308 typically operates during initialisation and operation of the mixed-reality system shown in
For example, if a real object (e.g. any smart object with network capabilities) is connected to a local network (e.g. connected to a system such as that illustrated in
Option A creates a new synchronised virtual-physical object (referred to as an “xReality” object). This concept is illustrated in
Option B creates a virtual-physical object that is synchronised across two or more physical environments. This concept is illustrated in
Finally, by having a number of xReality objects, they can be combined in the virtual world, forming a new composite object. This composite object can be used for the creation of prototypes between dispersed teams, as each real environment would have one part of the mashup (e.g. sensors on one side of the environment and displays on the other). This concept is illustrated in
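A hypothetical sketch of such a composite object follows; the CompositeObject name and its interface are illustrative assumptions only.

```python
# Hypothetical sketch of a composite xReality object assembled from parts
# that physically live in different dispersed environments.

class CompositeObject:
    def __init__(self, name):
        self.name = name
        self.parts = []  # (owner environment, xReality object) pairs

    def add_part(self, xreality_object, owner_environment):
        # Each part stays physically in its owner's environment (e.g. sensors
        # on one side, displays on the other) but is combined in the virtual
        # world through its virtual representation.
        self.parts.append((owner_environment, xreality_object))

prototype = CompositeObject("shared prototype")
# prototype.add_part(light_sensor, "environment_A")   # A's physical sensor
# prototype.add_part(display_unit, "environment_B")   # B's physical display
```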
An extensive literature review of current shared physical-virtual object implementations identified a number of limitations:
In contrast to techniques facilitated by certain embodiments of the invention, prior art techniques typically provide no possibility of allowing xReality objects to be modified, regrouped into new shapes/services by end-users, or extended with additional virtual/physical parts to change their configuration (e.g. additional sensors/actuators).
In contrast to techniques facilitated by certain embodiments of the invention, prior art techniques are typically configured to execute only particular actions, such as activating/deactivating a single functionality (e.g. switching a light on/off, or moving from A to B in the case of robots), limiting possibilities for collaboration and creation. For example, prior art techniques may be restricted to representing either an object (e.g. a robot) or ambient variables (e.g. wind and lightning in a 4D movie), but not both.
Typically, in contrast to techniques facilitated by certain embodiments of the invention, in prior art techniques collaborative work is represented only by remote users following the actions of the mixed-reality object via the virtual representation, or triggering a pre-programmed behaviour, as an object's programming is typically done separately using traditional 2D GUI tools (e.g. LEGO's NXT programming IDE). Typically, in prior art techniques, when the virtual world is used to connect two distant environments, there is only one physical object available in one of the environments.
As will be understood, various components of the Deconstruction Management Engine depicted in
It will be understood that in certain examples, there is not necessarily a one to one correspondence between real objects and virtual objects.
The following explains further concepts from the background art, and further explains concepts associated with implementation of certain embodiments of the invention.
The use of technology in education poses many challenges, especially to distance learners, who often feel isolated and experience a lack of motivation in completing on-line activities. The challenge is bigger for students working on laboratory-based activities, especially in areas that involve collaborative group-work involving physical entities. Embodiments seek to create a learning environment based on collaborative multidimensional spaces, able to connect learners in different geographical locations and foster collaboration and engagement.
In the following a conceptual and architectural model is introduced for the creation of a learning environment able to support the integration of physical and virtual objects, creating an immersive mixed-reality laboratory that seamlessly unites the working environment of geographically dispersed participants (teachers and students), grounded on the theories of constructionism and collaborative learning.
The shift from classroom instruction to ubiquitous student-centred learning has provided a number of technology-based platforms designed to enhance the learning ecosystem, understanding an ecosystem as “the complex of living organisms, their physical environment, and all their interrelationships in a particular unit of space” (Encyclopaedia Britannica, 2015). This vision ranges from complete campus implementations considering educational, administrative and social aspects, such as the one introduced by Ng et al. (2010), to specific setups designed for specific stakeholders.
Gütl and Chang (2008) analysed diverse approaches focused on the learning process itself, identifying important aspects which need to be considered in technology-based learning environments.
For a mixed-reality learning environment, context-aware technology should also be considered that is able to identify users, objects, and the physical environment. The Mixed-Reality Smart Learning (MiReSL) model is discussed below, which is a conceptual architectural model. The MiReSL model incorporates a Smart Learning approach (u-learning with a cloud computing infrastructure) with the (De)constructed model of components for teaching (Units of Learning) and learning (physical and virtualised objects) to deliver personalised content enhanced with co-creative mixed reality activities that support the learning-by-doing vision of the constructionist approach.
Finally, the model is supported by a highly-available technological infrastructure based on cloud computing, which provides benefits such as: a) the possibility to store, share and adapt resources within the cloud, b) increased mobility and accessibility, c) the capacity to keep a unified track of learning progress, and d) the use of resources such as synchronous sharing and asynchronous storage, allowing the model to be available at any moment the student requires (Kim et al., 2011; Sultan, 2010).
The MiReSL Model is proposed as a complete ecosystem for the use of mixed-reality in learning. The MiReSL model described has been used as a reference point at the Immersive Education Lab Research Group at the University of Essex (Alrashidi et al., 2013; Alzahrani et al., 2015; Felemban, 2015).
Based on this strategy, a model is proposed for interconnecting multiple distant learning environments, allowing bidirectional communication between environments, smart objects and users using a synchronising mechanism to mix distributed physical and virtual devices. The goal of this interconnected learning environment is to enable hands-on activities for distance learners within a collaborative group-based learning session.
A blended-reality space can be built upon the dual-reality principle of reflecting actions between elements within a physical and virtual environment.
Finally, environmental variables within the physical space (e.g. temperature, humidity, light level, etc.) could be captured via networked sensors and reflected in the virtual environment in multiple ways; for example, the value of a light sensor can be mapped to the sun within the virtual world, creating virtual sunsets and sunrises synchronised with the ones in the physical world. A change in the virtual world cannot be directly reflected in the physical environment (e.g. we cannot change the sun's position at will), but the change could be translated using diverse actuators within a closed physical smart environment (e.g. a smart room).
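Purely as a toy example, such a mapping might translate a raw light-sensor reading into the elevation of the virtual sun; the linear mapping and the value ranges below are assumptions, not taken from the text.

```python
# Toy mapping from a physical light reading to the virtual sun's elevation.

def sensor_to_sun_elevation(light_level, min_level=0.0, max_level=1023.0):
    """Map a raw reading onto a sun elevation between -90 and +90 degrees."""
    fraction = (light_level - min_level) / (max_level - min_level)
    return -90.0 + 180.0 * fraction

print(sensor_to_sun_elevation(512))  # ~0 degrees: the sun sits on the horizon
```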
Embodiments relate to synchronisation between objects and environmental variables across multiple dual-reality spaces. Embodiments relate to the creation of a distributed blended-reality space able to allow users in different locations to interact and share objects, extending the spaces to allow them to work on collaborative hands-on learning activities.
The following introduces the proposed mixed-reality learning environment, the InterReality Portal, and the distributed architecture of interconnected portals that allows learners in geographically distributed locations to work in collaboration, creating a large-scale education environment.
The InterReality Portal can be defined as a collection of interconnected physical and virtual devices that allow users to complete activities between the two extreme points of Milgram's Virtuality Continuum. From the educational point of view, and inspired by Callaghan's (2010a) Science Fiction Prototype (SFP), it can be defined as a mixed-reality learning environment that allows remote students to do activities together using a mixture of physical and virtual objects, grounded on the learning-by-doing vision of constructionism. It is conceptually formed by four layers (
To link and synchronise physical and virtual worlds, any interaction/change in the physical world is identified by the Context-Awareness agent (CAag) (in the data acquisition layer) and sent to the Mixed-Reality agent (MRag) (in the event processing layer). The Mixed-Reality agent (MRag) then executes a corresponding action on the 3D virtual environment, based on the behaviours available for that particular action, and reflects any changes accordingly. Similarly, changes from virtual to physical are detected by the CAag and passed on to the MRag, which sends the corresponding behaviour to the physical object (
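The event flow between the two agents might be sketched as follows; the agent interfaces shown are assumptions made purely for illustration.

```python
# Hypothetical sketch of the CAag/MRag bidirectional event flow.

class MixedRealityAgent:
    """MRag, event processing layer: maps actions onto behaviours."""
    def __init__(self, behaviours):
        self.behaviours = behaviours  # action name -> behaviour callable

    def apply_to_virtual(self, event):
        self.behaviours[event["action"]]("virtual", event)

    def apply_to_physical(self, event):
        self.behaviours[event["action"]]("physical", event)

class ContextAwarenessAgent:
    """CAag, data acquisition layer: detects changes in either world."""
    def __init__(self, mixed_reality_agent):
        self.mrag = mixed_reality_agent

    def on_physical_event(self, event):
        self.mrag.apply_to_virtual(event)   # physical -> virtual

    def on_virtual_event(self, event):
        self.mrag.apply_to_physical(event)  # virtual -> physical
```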
xReality Objects
Cross-reality (xReality) objects are smart networked objects coupled to their virtual representations, updated and maintained in real time within a mixed-reality space. The difference between smart objects and xReality objects is that the digital representation of the latter emulates the shape, look and status of the physical object in a 3D virtual environment, and allows bidirectional updates; whereas the digital representation of a smart object, if implemented, is commonly represented as a 2D graphic or table in a graphical user interface (GUI).
The synchronisation in real-time between physical objects and their virtual representations is exemplified in
Managing Multiple xReality Objects
As described in the previous section, the synchronisation between one physical object and one virtual object creates a mirrored xReality object that exists in both worlds simultaneously; this real-time synchronisation that allows an object's existence in both worlds is defined as a dual-reality state.
The possibility of having multiple xReality objects in a distributed mixed-reality architecture introduces different scenarios for collaboration between distant users.
Moreover, it allows the creation of mashups between local xReality objects and distant xReality objects, or the interaction of complete xReality objects (understood as a physical object linked to its virtual representation) with virtual objects that have no link to a physical object.
Scenario S1 in
When an additional user joins, he/she can interact with the remote physical object via the shared virtual representation (scenario S2); or via a local object linked to the same virtual representation (scenario S3), creating a many-to-one relationship (many physical objects connected to one virtual representation). The relationship between physical and virtual objects described in scenario S3 represents an extended blended-reality environment (
The scenarios described so far use only one physical object, in the local environment alone or in both the local and the remote space; however, when adding more physical objects to each environment it is possible to create mashups between multiple xReality objects. Scenario S4 describes a collaborative session where users combine xReality objects that physically exist in each owner's local environment but can be shared and combined through their virtual representations, creating a completely new object in the virtual world. As an analogy, this can be seen as a puzzle where each of the participants has one or more pieces that allow the completion of the final object inside the virtual world (
Finally, scenario S5 shows the possibility of having two or more xReality objects that do not complement each other but instead both exist as separate entities inside the common virtual space (
By way of an illustration of the different combinations that can be used to create an xReality object, we can imagine that two users (A and B) are collaborating in the creation of an alarm clock (
The scenarios described in the previous section introduce the possibility of having different degrees of mixed-reality between two or more interconnected environments based on communication between xReality objects. The more xReality objects are used in a shared environment, the less simulated the environment is, and vice versa (
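One possible way to quantify this relationship (an assumption, not taken from the text) is the fraction of purely virtual objects in the shared space:

```python
# Hypothetical metric for the degree of simulation of a shared environment.

def simulation_degree(num_xreality, num_virtual_only):
    """0.0 = fully mirrored (all xReality objects), 1.0 = fully simulated."""
    total = num_xreality + num_virtual_only
    return num_virtual_only / total if total else 1.0

print(simulation_degree(num_xreality=3, num_virtual_only=1))  # 0.25
```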
The creation of the proposed blended-reality distributed architecture poses two different types of challenges: firstly, the creation of a technical infrastructure able to work as a link between remote environments by reflecting information about physical/virtual objects in real-time; secondly, the ability of such an environment to allow remote users performing collaborative activities to generate a specific outcome, depending on the context in which the technology is used (e.g. a learning outcome, a functional prototype, etc.).
The first challenge has been discussed above. Regarding the second challenge, it is first necessary to identify the uses and dimensions of distributed mixed-reality. Lee et al. (2009) identified three key dimensions for ubiquitous virtual reality (U-VR) that can be applied to distributed mixed-reality.
Similarly, Alrashidi et al. (2013) proposed a 4-Dimensional Learning Activity Framework (4DLAT) that classifies learning activities by number of learners and complexity of the task. Thus, as part of the MiReSL model, a classification of learning activities is proposed to identify the affordances of the proposed model; above all, the MiReSL learning activities classification (MiReSL-LA) helps to delimit and design the activities that can be done within the InterReality system (i.e. the InterReality Portal and xReality objects).
MiReSL-LA (
This is not a strict classification, as many activities can be classified in two or more categories simultaneously and could fuse with one another in order to create new learning experiences. Based on this classification,
In addition to the challenges previously described, the proposed blended-reality distributed architecture presents the challenge of bridging the model of distributed xReality objects with the pedagogical model of constructionist laboratories to produce a solution for distributed mixed-reality laboratories. The use of deconstructionism in a collaborative mixed-reality laboratory architecture can unify a constructionist pedagogy (in which learning is a consequence of the correlation between performing active tasks that construct meaningful tangible objects in the real world and relating them to personal experiences and ideas) with a set of mirrored physical/virtual objects and their supporting soft components (e.g. programs, software processes, threads, apps), which can be constructed/deconstructed into different mashups to support science and engineering hands-on activities. Table 1 summarises the affordances of the proposed InterReality System towards the creation of a mixed-reality learning environment formed by multiple interconnected multidimensional spaces and able to support collaborative hands-on activities among distance learners.
The Mixed-Reality Smart Learning model (MiReSL) has been described above as a learning ecosystem based on a conceptual computational architecture that included aspects such as personalisation, content creation, assessment and mixed-reality learning environment. Along with this model, a classification of learning activities (MiReSL-LA) that can be performed in mixed-reality learning environments was proposed to identify the affordances of such model. The MiReSL model was introduced for context. Also provided above is a high-level overview of a mixed-reality distributed computing architecture based on two main supporting concepts that form an InterReality system: The InterReality Portal and xReality objects. The InterReality system was proposed as a solution to bridging virtual and physical worlds, and to merging remote spaces towards the creation of a distributed blended-reality space via the synchronisation of their elements.
Also described above are combinations of synchronised physical and virtual objects based on the principle of dual-reality and the concept of cross-reality first defined by Lifton (2007); Paradiso and Landay (2009), but which, by way of a contribution to the field, were extended from single one-to-one virtual/physical relationships to multiple combinations in different scenarios, exploring different possibilities for managing, sharing and using objects within a blended-reality space; and allowing users to adjust the degree of mixed-reality based on the number of xReality objects used. Both elements of the InterReality system, the InterReality portal and the xReality objects, presented a simple principle which could be applied to different scenarios of collaboration between geographically distributed teams, such as product design in a Research & Development department, or an educational scenario.
Features, integers, characteristics or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of the features and/or steps are mutually exclusive. The invention is not restricted to any details of any foregoing embodiments. The invention extends to any novel one, or novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
Foreign application priority data: GB 1517101.0, filed Sep 2015 (national).
International application: PCT/GB2016/053002, filed 27 Sep 2016 (WO).