The present disclosure relates generally to methods and systems for conveying information about objects and, more particularly, to methods and systems for representing and interacting with information about objects using geo-located markers.
Technology advances have enabled mobile personal computing devices to become more capable and ubiquitous. In many cases these devices will have both a display as well as a combination of sensors, for example, GPS, accelerometers, gyroscopes, cameras, light meters, and compasses, or some combination thereof. These devices may include mobile computing devices as well as head mounted displays.
These mobile personal computing devices are increasingly capable of both displaying information for the user as well as supplying contextual information to other systems and applications on the device. Such contextual information can be used to determine the location, orientation and movement of the user interface display of the device.
In one aspect a head mounted display (HMD) is provided. The HMD may include (1) a see-through or semi-transparent display (e.g., a display that allows transmission of at least some visible light that impinges upon the HMD) that allows the user to see the real-world environment and to display generated images superimposed over or provided in conjunction with a real-world view as perceived by the wearer through the lens elements and (2) electronic or analog sensors that can establish the physical context of the display. By way of example and without limitation, the sensors could include any one or more of a motion detector (e.g., a gyroscope and/or an accelerometer), a camera, a location determination device (e.g., a GPS device, an NFC reader), a magnetometer, and/or an orientation sensor (e.g., a theodolite, infra-red sensor).
In this aspect, the display on the HMD may include a visual representation of a reticle with a fixed point of reference to the user. Additionally, the display may also provide a visual representation of some number of geo-located markers representing objects or points of interest in three dimensional space that are visible in the user's current field of view.
A user wishing to select a geo-located marker in order to, for example, obtain reference information or digitally interact with it, may physically move the display device such that the reticle rendered on the display will appear in close proximity to a chosen marker also rendered on the display. Holding the display device in this position for a specified period of time may result in selection of the chosen marker. Upon selection, subsequent information may be rendered on the display or some action related to that marker may be executed.
FIG. 3a illustrates examples of point references according to a Cartesian coordinate system;
FIG. 3b illustrates examples of point references according to a Spherical coordinate system;
FIG. 5a is a diagrammatic representation of a reticle consistent with the exemplary disclosed embodiments;
FIG. 5b is another diagrammatic representation of a reticle consistent with the exemplary disclosed embodiments;
Mobile personal computing devices can be used as a portable display used to interact in interesting ways with the real world. To overlay information or interact with objects in the real-world, points of interest may be defined and associated with locations in three dimensional space, and rendered in such a way that allows the user to visualize them on a display.
The location definition, reference information and the metadata associated with these objects and points of interest can be digitally created, stored and managed by computer applications or through user interaction with computer applications. Visual representations of certain objects and points of interest may be rendered on the device display and associated with objects, people or locations in the real world. Such visual representations may be referred to as “geo-located markers.”
A method and system for enabling users to select and interact with geo-located markers simply by moving the display will in many cases be more efficient, more intuitive, and safer than using peripheral devices and methods (e.g., a touch-screen, mouse, or track pad).
Exemplary methods and systems are described herein. It should be understood that the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features. The exemplary embodiments described herein are not meant to be limiting. It will be readily understood that certain aspects of the disclosed systems and methods can be arranged and combined in a wide variety of different configurations, all of which are contemplated herein.
In one exemplary embodiment, a head-mounted display (HMD) is provided that includes a see-through display and sensor systems that provide output from which the device's location, orientation, and bearing (for example, latitude, longitude, altitude, pitch, roll or degree tilt from horizontal and vertical axes, and compass heading) may be determined. The HMD could be configured as glasses that can be worn by a person. Further, one or more elements of the sensor system may be located on peripheral devices physically separate from the display.
Additionally, in one embodiment, the HMD may rely on a computer application to instruct the device to render overlay information on the display field of view. This computer application creates and maintains a coordinate system that corresponds to locations in the real physical world. The maintained coordinate system may include either a two dimensional Cartesian coordinate system, a three dimensional Cartesian coordinate system, a two dimensional Spherical coordinate system, a three dimensional Spherical coordinate system, or any other suitable coordinate system.
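As a rough sketch of how an application might move between the coordinate systems named above, the following hypothetical Python helpers convert a spherical point reference (radius, polar angle, azimuth) to Cartesian coordinates and back. The angle conventions and function names here are one common illustrative choice, not conventions mandated by the disclosure.

```python
import math

def spherical_to_cartesian(r, theta_deg, phi_deg):
    """Convert a spherical point (radius r, polar angle theta measured from
    the +z axis, azimuth phi in the x-y plane) to Cartesian (x, y, z)."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)

def cartesian_to_spherical(x, y, z):
    """Inverse conversion back to (r, theta_deg, phi_deg)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.degrees(math.acos(z / r)) if r else 0.0
    phi = math.degrees(math.atan2(y, x)) % 360.0
    return (r, theta, phi)

# Round trip: a marker 10 units away, 90 degrees from vertical, 45 degrees azimuth.
x, y, z = spherical_to_cartesian(10, 90, 45)
r, theta, phi = cartesian_to_spherical(x, y, z)
```

An application maintaining markers in one system could use conversions of this kind to render them in the other.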
The application may use information from the HMD sensor systems to determine where the user of the HMD is located in the coordinate system, and to calculate the points in the coordinate system that are visible in the user's current field of view. The user's field of view may include a two dimensional plane, rendered to the user using one display (monocular) or two displays (binocular). For example, based on output of the sensors associated with the HMD, the location of the user relative to a predetermined coordinate system may be determined as well as the user's orientation relative to other objects defined (or not defined) within the coordinate system. Further, based on the output of the sensors, the direction in which the user is looking may also be determined, and the geo-located objects defined in the coordinate system to be displayed within the user's field of view may be determined. Such sensors may include GPS units to determine latitude and longitude, altimeters to determine altitude, magnetometers (compasses) to determine orientation or a direction that a user is looking, accelerometers (e.g., three axis accelerometers) to determine the direction and speed of movements associated with HMD 200, etc. In some embodiments, computer-vision-based algorithms to detect markers, glyphs, objects, and QR codes, as well as QR code readers, may be employed to establish the position of HMD 200.
If the user of the HMD moves (and the HMD moves correspondingly with the user), the sensors in the HMD provide data to the application which may prompt or enable the application to monitor information associated with the display including, for example, the current location, orientation and/or bearing of the display unit. This information, in turn, may be used to update or change aspects of images or information presented to the user within the user's field of view on the display unit.
Server system 110 may be a system configured to provide and/or manage services associated with geo-located markers to users. Consistent with the disclosure, server system 110 may provide information about available geo-located markers to user system 120. Server system 110 may also update the information provided to user system 120 when the physical position of user system 120 changes.
Server system 110 may include one or more components that perform processes consistent with the disclosed embodiments. For example, server system 110 may include one or more computers, e.g., processor device 111, database 113, etc., configured to execute software instructions programmed to perform aspects of the disclosed embodiments, such as creating and maintaining a global coordinate system, providing geo-located markers to users for display, transmitting information associated with the geo-located markers to user system 120, etc. In one aspect, server system 110 may include database 113. Alternatively, database 113 may be located remotely from server system 110. Database 113 may include computing components (e.g., database management system, database server, etc.) configured to receive and process requests for data stored in memory devices of database 113 and to provide data from database 113.
User system 120 may include a system associated with a user (e.g., customer) that is configured to perform one or more operations consistent with the disclosed embodiments. In one embodiment, an associated user may operate user system 120 to perform one or more such operations. User system 120 may include a communication interface 121, a processor device 123, a memory 124, a sensor array 125, and a display 122. The processor device 123 may be configured to execute software instructions programmed to perform aspects of the disclosed embodiments. User system 120 may take the form of a head mounted display (HMD). Although in the present disclosure user system 120 is described in connection with an HMD, user system 120 may include tablets, mobile phones, laptop computers, and any other computing devices known to those skilled in the art.
Communication interface 121 may include one or more communication components, such as cellular, WIFI, or Bluetooth transceivers. The display 122 may be a translucent display or semi-transparent display. The display 122 may even include opaque lenses or components, e.g., where the images seen by the user are projected onto opaque components based on input signals from a forward looking camera as well as other computer-generated information. Furthermore, the display 122 may employ a waveguide, or it may project information using holographic images. The sensor array 125 may include one or more GPS sensors, cameras, barometric sensors, proximity sensors, physiological monitoring sensors, chemical sensors, magnetometers, gyroscopes, accelerometers, and the like.
Processor devices 111 and 123 may include one or more suitable processing devices, such as a microprocessor, controller, central processing unit, etc. In some embodiments, processor devices 111 and/or 123 may include a microprocessor from the Pentium™ or Xeon™ family manufactured by Intel™, the Turion™ family manufactured by AMD™, or any of various processors manufactured by Sun Microsystems or other microprocessor manufacturers.
Consistent with disclosed embodiments, one or more components of system 100, including server system 110 and user system 120, may also include one or more memory devices (such as memories 112 and 124) as shown in exemplary form in FIG. 1.
In some embodiments, server system 110 and user system 120 may also include one or more additional components (not shown) that provide communications with other components of system environment 100, such as through network 130, or any other suitable communications infrastructure.
Network 130 may be any type of network that facilitates communications and data transfer between components of system environment 100, such as, for example, server system 110 and user system 120. Network 130 may be a Local Area Network (LAN) or a Wide Area Network (WAN), such as the Internet, and may be a single network or a combination of networks. Further, network 130 may reflect a single type of network or a combination of different types of networks, such as the Internet and public exchange networks for wireline and/or wireless communications. Network 130 may utilize cloud computing technologies that are familiar in the marketplace. Moreover, any part of network 130 may be implemented through traditional infrastructures or channels of trade, to permit operations that are performed manually or in-person by the various entities illustrated in FIG. 1.
The HMD 200 may also include a Global Positioning System (GPS) unit 202. GPS units receive signals transmitted by a plurality of earth orbiting satellites in order to compute the location of the GPS unit. In more sophisticated systems, the GPS unit may repeatedly forward a location signal to an inertial measurement unit (IMU) to supplement the IMU's ability to compute position and velocity, thereby improving the accuracy of the IMU. In the present case, the HMD 200 may employ GPS to identify a location of the HMD device.
As mentioned above, the HMD 200 may include a number of features relating to sensory input and sensory output. Here, HMD 200 may include at least a front facing camera 203 to provide visual (e.g., video) input, a display (e.g., a translucent or a stereoscopic translucent display) 204 to provide a medium for displaying computer-generated information to the user, a microphone 205 to provide sound input, and audio buds/speakers 206 to provide sound output. In some embodiments, the visually conveyed digital data may be received by the HMD 200 through the front facing camera 203.
The HMD 200 may also have communication capabilities, similar to conventional mobile devices, through the use of a cellular, WIFI, Bluetooth or tethered Ethernet connection. The HMD 200 may also include an on-board microprocessor 208. The on-board microprocessor 208, may control the aforementioned and other features associated with the HMD 200.
FIG. 3a illustrates examples of point references according to a Cartesian coordinate system 300a. As shown in
FIG. 3b illustrates examples of point references according to a Spherical coordinate system 300b. As shown in
The geo-located marker coordinate locations and associated reference and metadata may be stored and managed by a computer application. The computer application instructs or otherwise causes the HMD to display one or more visual elements on the display which correspond to the locations in the coordinate system defined by the geo-located marker. For example, the geo-located markers rendered on the HMD display may correspond to those with coordinates visible in the user's field of view. For example, as shown in
Geo-located markers may include representations of physical objects, such as locations, people, devices, and non-physical objects such as information sources and application interaction options. Geo-located markers may be visually represented on the display as icons, still or video images, or text. Geo-located markers may appear in close proximity, or overlap each other on the display. Such markers may be grouped into a single marker representing a collection or group of markers.
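One minimal way to implement the grouping of overlapping or nearby markers described above is a greedy screen-space clustering pass. The 24-pixel radius and the dictionary layout below are arbitrary illustrative choices, not parameters from the disclosure.

```python
def cluster_markers(markers, radius_px=24):
    """Greedily group screen-space markers whose centers fall within
    radius_px of an existing group's anchor, so that overlapping icons
    can collapse into a single group marker on the display."""
    groups = []  # each group: {"x": ..., "y": ..., "members": [...]}
    for m in markers:
        for g in groups:
            # Compare squared distances to avoid a square root per marker.
            if (m["x"] - g["x"]) ** 2 + (m["y"] - g["y"]) ** 2 <= radius_px ** 2:
                g["members"].append(m)
                break
        else:
            # No nearby group: this marker anchors a new one.
            groups.append({"x": m["x"], "y": m["y"], "members": [m]})
    return groups

# Two markers 10 px apart collapse into one group; a distant third stands alone.
groups = cluster_markers([{"x": 0, "y": 0}, {"x": 10, "y": 0},
                          {"x": 200, "y": 200}])
```

A group containing more than one member could then be rendered as the single collective marker described above.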
Geo-located markers may persist for any suitable time period. In some embodiments the geo-located markers may persist indefinitely or may cease to exist after use. Geo-located markers may also persist temporarily for any selected length of time (e.g., less than 1 sec, 1 sec, 2 sec, 5 sec, more than 5 sec, etc. after being displayed).
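The persistence behaviors above could be sketched, for example, as a marker object carrying an optional time-to-live, where `None` means the marker persists indefinitely. The class name and the use of an injectable monotonic clock are illustrative assumptions, not elements of the disclosure.

```python
import time

class TimedMarker:
    """A geo-located marker that may expire after a configurable lifetime.
    ttl=None means the marker persists indefinitely."""

    def __init__(self, name, ttl=None, now=time.monotonic):
        self.name = name
        self._now = now          # injectable clock, eases testing
        self._created = now()
        self._ttl = ttl

    def is_active(self):
        """True while the marker should still be displayed."""
        if self._ttl is None:
            return True
        return self._now() - self._created < self._ttl
```

A display loop could simply skip markers for which `is_active()` returns False.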
In some embodiments, one geo-located marker may represent a cluster of objects or points of interest. When the geo-located marker is selected, the representation of the marker on the user's display may change into additional or different icons, etc., representative of or associated with the cluster of objects or points of interest. One or more of the subsequently displayed items on the screen may be further selected by the user.
Geo-located markers may be shared across systems, applications and users, or may be locally confined to a single system, application or user.
In some embodiments, the HMD may provide a reticle which serves as a representation of a vector originating at a fixed location relative to the user and projecting in a straight line out into the coordinate system. Such a reticle may assist the user in orienting the HMD device relative to their real-world environment as well as to geo-located markers which may be rendered on the user's display in locations around the user.
FIG. 5a is a diagrammatic representation of a reticle 500a consistent with the exemplary disclosed embodiments. As shown in
In some embodiments, the reticle may be represented on the display as one or more icons, still or video images, or text. Various aspects of the reticle may change to provide user feedback. For example, any of the size, color, shape, orientation, or any other attribute associated with the reticle may be changed in order to provide feedback to a user.
The reticle position on the display may be modified or changed. For example, it may be rendered in the center of the field of view of the user, or at any other location in the field of view of the user.
Alternatively or additionally, the vector may be implemented as a plane rather than as a line.
In the field of view, the reticle may be fixed relative to the display, but the geo-located objects may be free to move in and out of the field of view. Thus, in some embodiments, the user can move the display such that the reticle overlaps a geo-located marker on the display. This action causes the vector to intercept a geo-located object in the coordinate system.
In some embodiments, when the vector overlaps a geo-located object and the user holds this overlap in a stable position for an amount of time, an application event may be triggered to select that marker and initiate a system response. The required hold time (i.e., the “dwell time”) may be configurable and machine-learnable.
Proximity of overlap may employ logic to assist the user in their action. For example, the application may utilize snap logic or inferred intent such that exact pixel overlay between the reticle and the geo-located object marker may not be required for selection.
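Combining the dwell-time trigger and the snap tolerance described above, a hypothetical selection helper might look like the following. The 0.5-second dwell and 20-pixel snap radius are placeholder defaults, not values prescribed by the disclosure.

```python
import math

class DwellSelector:
    """Tracks how long the reticle has remained near a marker and fires a
    selection once the configured dwell time elapses. A pixel tolerance
    provides simple "snap" behavior: exact overlap is not required."""

    def __init__(self, dwell_time=0.5, snap_radius_px=20):
        self.dwell_time = dwell_time
        self.snap_radius_px = snap_radius_px
        self._candidate = None   # marker currently under the reticle
        self._since = None       # timestamp when the candidate was acquired

    def update(self, reticle_xy, markers, now):
        """markers: iterable of (marker_id, (x, y)) in screen coordinates.
        Returns the selected marker_id once the dwell completes, else None."""
        rx, ry = reticle_xy
        hit = None
        for marker_id, (mx, my) in markers:
            if math.hypot(mx - rx, my - ry) <= self.snap_radius_px:
                hit = marker_id
                break
        if hit != self._candidate:
            # Reticle moved to a different marker (or away): restart the timer.
            self._candidate, self._since = hit, now
            return None
        if hit is not None and now - self._since >= self.dwell_time:
            self._candidate, self._since = None, None  # reset after firing
            return hit
        return None

sel = DwellSelector(dwell_time=0.5, snap_radius_px=20)
first = sel.update((100, 100), [("m1", (105, 100))], now=0.0)   # timer starts
second = sel.update((100, 100), [("m1", (105, 100))], now=0.6)  # dwell elapsed
```

Each display frame would call `update` with the current reticle position and visible markers; a non-None return value indicates a completed selection.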
To indicate or confirm a selection, feedback to the user may be provided by the system, including but not limited to the marker or reticle changing color, shape or form, additional information presented on the display, haptic feedback on a separate device, an audio sound, etc. In response to selection, various interactions may occur. For example, in some embodiments, selection of a marker may cause an interaction to take place, including but not limited to, presenting menu options for the user, displaying information and metadata about the marker, triggering some transaction or behavior in another system or device. In some embodiments, a marker may be associated with a person, and selection of the marker may initiate a communication (e.g., a phone call or video call) to the person.
Geo-located markers need not always be associated with objects, locations, etc. having fixed locations. For example, such markers may be associated with people or other movable objects, such as cars, vehicles, personal items, mobile devices, tools, or any other movable object. The position of such movable objects may be tracked, for example, with the aid of various position locating sensors or devices, including GPS units.
Further, geo-located objects can be defined at any time through a multitude of processes. For example, a user may identify an object and designate the object for inclusion into the tracking database. Using one or more input devices (e.g., input keys, a keyboard, a touchscreen, voice-controlled input devices, hand gestures, a mouse, pointers, a joystick, or any other suitable input device), the user may also specify the coordinate location, metadata, object information, or an action or actions to be associated with the designated object. Designation of geo-located objects for association with geo-located markers may also be accomplished dynamically and automatically. For example, if processor device 123 or processor device 111 recognizes a QR code within a field of view of the HMD 200, then such a code may initiate generation of a geo-located marker associated with one or more objects within the field of view. Similarly, if processor device 123 or processor device 111 recognizes a certain object or object type (e.g., based on image data acquired from the user's environment), then a geo-located marker can be created and associated with the recognized object. Further still, geo-located markers may be generated according to predefined rules. For example, a rule may specify that a geo-located marker is to be established and made available for display at a certain time and at a certain location, or relative to a certain object, person, place, etc. Additionally, when a user logs into a system, the user may be associated with a geo-located marker.
Processing associated with defining geo-located markers, identifying geo-located markers to display, or any other functions associated with system 100 may be divided among processor devices 111 and 123 in any suitable arrangement. For example, in some embodiments, HMD 200 can operate in an autonomous or semi-autonomous manner, and processor device 123 may be responsible for most or all of the functions associated with defining, tracking, identifying, displaying, and interacting with the geo-located markers. In other embodiments, most or all of these tasks may be accomplished by processor device 111 on server system 110. In still other embodiments these tasks may be shared more evenly between processor device 111 and processor device 123. In some embodiments, processor device 111 may send tracking information to HMD 200, and processor device 123 may handle the tasks of determining location, orientation, field of view, and vector intersections in order to update the display of HMD 200 with geo-located markers and enable and track selection of, and interactions with, those markers.
In some embodiments, the set of geo-located markers displayed on HMD 200 may be determined, as previously noted, based on an intersection of the user's field of view with locations of tracked items associated with geo-located markers. Other filtering schemes, however, are also possible. For example, in some embodiments, only those geo-located markers within a certain distance of the user (e.g., 10 m, 20 m, 50 m, 100 m, 1 mile, 10 miles, etc.) will be displayed on the user's field of view. In another embodiment, only those geo-located markers of a certain type or associated with certain metadata (e.g., another user in a user's “contact list”) will be displayed on the user's field of view.
In another example, in the user's field of view an icon is rendered to represent the location of a colleague 100 miles away, and when the user aligns the cross-hairs reticle on the icon and holds it for 0.5 seconds, a menu option to initiate a phone call to that colleague may be presented to the user. In yet another example, in the user's field of view an icon is rendered to represent a piece of equipment which is connected to a communications network, and when the user aligns the cross-hairs reticle on the icon and holds it for 1.5 seconds, a command is sent from either the server system or the user system to turn the equipment on or off.
It should be further understood that arrangements described herein are for purposes of example only. As such, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and some elements may be omitted altogether according to the desired results. Further, many of the elements that are described are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims, along with the full scope of equivalents to which such claims are entitled. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
This application is based on and claims priority to U.S. Provisional Application No. 61/764,688, filed on Feb. 14, 2013, which is incorporated herein by reference in its entirety.