The present invention relates to guide image management apparatuses.
Patent Document 1 discloses a guide providing system for guiding a user in a virtual space. This guide providing system generates scene graphs. In a scene graph, multiple objects in the virtual space appear as nodes. In the scene graph, hierarchical interrelationships between the objects are described. The guide providing system determines whether a difference between a current scene graph and a previous scene graph is greater than or equal to a fixed difference. When the difference is greater than or equal to the fixed difference, the guide providing system stores the current scene graph. The guide providing system provides the user with a recommended area to visit and route information based on the scene graph and a visit history of the user.
The conventional guide providing system provides a recommended area to visit and route information; however, it cannot provide a guide image representing a location of a virtual object placed in a virtual space. In particular, when an environment of an area in which a virtual object is placed is changed, the conventional guide providing system has a disadvantage of being unable to provide a user with a new guide image.
A guide image management apparatus according to this disclosure includes a manager configured to manage one or more guide images in association with a virtual object virtually placed in a real space, the one or more guide images being each obtained by capturing an area of the real space including a location of the virtual object, and a communication controller configured to: cause a communication apparatus to transmit a first guide image of the one or more guide images to a user apparatus; and cause the communication apparatus to receive, from the user apparatus, a captured image generated by the user apparatus executing image capture, in which the manager is configured to manage the captured image as a new guide image when the captured image satisfies a registration condition.
According to this disclosure, when an environment of an area in which a virtual object is placed is changed, it is possible to provide a new guide image representing a location of the virtual object.
With reference to
In the information processing system 1, the location management server 40 and the user apparatus 10-k are connected to, and are communicable with, each other via a communication network NET. The object management server 50 and the user apparatus 10-k are connected to, and are communicable with, each other via the communication network NET. The object management server 50 and the structure management server 60 are connected to, and are communicable with, each other via the communication network NET. The terminal apparatus 20-k and the pair of XR glasses 30-k are connected to, and are communicable with, each other. In
The terminal apparatus 20-k functions as a relay apparatus configured to relay communication between the pair of XR glasses 30-k and the location management server 40 and communication between the pair of XR glasses 30-k and the object management server 50. The terminal apparatus 20-k is constituted of, for example, a smartphone or a tablet device.
The pair of XR glasses 30-k is to be worn on the head of the user U[k]. The pair of XR glasses 30-k is a see-through type of glasses that can display a virtual object. The user U[k] visually recognizes a real space through the pair of XR glasses 30-k and visually recognizes the virtual object through the pair of XR glasses 30-k. The virtual object is placed at a location in a virtual space in association with a location in the real space. The user U[k] uses the pair of XR glasses 30-k to recognize a mixed reality space in which the real space and the virtual space are mixed together.
A virtual object may be placed permanently or may be placed for only a limited period during which an event is held. In addition, an area in which the virtual object is to be placed is limited. In a service in which a virtual object is used, notifying the user U[k] of an area in which the virtual object is placed contributes to improvement in convenience of the service. Thus, the object management server 50 transmits a guide image obtained by capturing an area of the real space including a location of a virtual object virtually placed in the real space, as a first guide image, to the user apparatus 10-k.
As described above, the guide image is obtained by capturing an area of the real space including a location of a virtual object. However, in the real space, a new structure may be disposed in an area around the virtual object, or a structure may be removed from the area. When an environment of the area around the virtual object is changed as described above, it is difficult for the user U[k] to search for the virtual object VO by using the guide image as a clue. Thus, the information processing system 1 updates the guide image in accordance with a change in the environment of the area around the virtual object. For example, when a new structure is disposed in an area around the entrance of Ikebukuro Station, the guide image G1 shown in
The pair of XR glasses 30-k shown in
The location management server 40 stores a feature-point map M. The feature-point map M is data indicative of a plurality of feature points in a three-dimensional global coordinate system. The feature-point map M is generated, for example, by extracting a plurality of feature points from images obtained by a stereo camera capturing an area around an area in which a virtual object is to be placed. In the feature-point map M, locations in the real space are represented in the global coordinate system.
The location management server 40 extracts a plurality of feature points from the captured image Gk. The location management server 40 compares the extracted plurality of feature points with the plurality of feature points stored in the feature-point map M to determine a capture location at which the image capture is executed so as to generate the captured image Gk, and a capture orientation in which the image capture is executed so as to generate the captured image Gk. The location management server 40 transmits the location information Pk indicative of the capture location and the orientation information Dk indicative of the capture orientation back to the terminal apparatus 20-k.
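The determination of the capture location and capture orientation from the captured image Gk and the feature-point map M can be illustrated by the following minimal Python sketch. It assumes that the feature-point map M also stores one ORB descriptor per three-dimensional feature point and that the intrinsic matrix of the capturing device is known; the embodiment does not prescribe a particular feature-matching or pose-estimation algorithm.

```python
import cv2
import numpy as np

def estimate_capture_pose(captured_bgr, map_points_3d, map_descriptors, camera_matrix):
    """Estimate the capture location and capture orientation of the captured image Gk.

    map_points_3d   : (N, 3) float array of feature points of the feature-point map M
                      in the global coordinate system.
    map_descriptors : (N, 32) uint8 array of ORB descriptors, one per map point
                      (an assumption; the embodiment only states that M holds feature points).
    camera_matrix   : (3, 3) intrinsic matrix of the capturing device 36.
    """
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return None, None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, map_descriptors)
    if len(matches) < 6:
        return None, None

    image_points = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_points = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # Perspective-n-Point with RANSAC yields a rotation vector and translation vector
    # (world-to-camera transform) from which the capture location and capture
    # orientation in the global coordinate system can be derived.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(object_points, image_points, camera_matrix, None)
    return (rvec, tvec) if ok else (None, None)
```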
The pair of XR glasses 30-k periodically transmits a captured image Gk to the location management server 40 and thereby periodically acquires a pair of location information Pk and orientation information Dk. The pair of XR glasses 30-k tracks local coordinates of the pair of XR glasses 30-k in real time. The pair of XR glasses 30-k uses the location information Pk and the orientation information Dk acquired from the location management server 40 to correct a location and orientation of the pair of XR glasses 30-k in real time. This correction allows the pair of XR glasses 30-k to recognize in real time a location and orientation of the pair of XR glasses 30-k represented in the global coordinate system. In the following explanation, information indicative of the location generated through this correction may be referred to as location information Pck, and information indicative of the orientation generated through this correction may be referred to as orientation information Dck.
When the object management server 50 receives the location information Pck and the orientation information Dck from the user apparatus 10-k, the object management server 50 executes a rendering of a virtual object based on the location information Pck and the orientation information Dck. The object management server 50 transmits a virtual object image representing the rendered virtual object to the user apparatus 10-k. In this example, the virtual object image is a three-dimensional image. When the user apparatus 10-k receives the virtual object image, the user apparatus 10-k causes the pair of XR glasses 30-k to display the virtual object image.
The object management server 50 manages one or more guide images in association with a virtual object. When the user apparatus 10-k approaches a location of the virtual object, the object management server 50 transmits a first guide image selected from the one or more guide images to the user apparatus 10-k.
The structure management server 60 manages the space structure data. The space structure data is data indicative of real objects in the real space, the real objects being represented in a mesh structure having a plurality of surfaces. The space structure data is indicated in the global coordinate system.
The space structure data has two main uses. A first use is to represent physical phenomena of a virtual object, such as shielding of the virtual object and reflection of the virtual object. For example, when the virtual object is a ball and the ball is thrown toward a wall, the space structure data is used to represent that the ball bounces off the wall. When an obstacle is disposed between a user and the virtual object, the space structure data is used to hide the virtual object. A second use is to improve the view available to a service developer when the developer determines where to place a virtual object. The developer sets a reference point in the real space and places the virtual object based on the reference point. The reference point may be referred to as an anchor. The reference point is set on a plane in the real space. Since the space structure data uses the mesh structure to represent the plurality of surfaces, the developer can set the reference point on a surface of a real object disposed in the real space by using the space structure data.
The bridge 93 is provided with a capturing device 36. The capturing device 36 is, for example, a camera. The capturing device 36 generates the captured image Gk by capturing the outside world. The capturing device 36 provides the captured image Gk. Each of the lenses 90L and 90R includes a one-way mirror. The frame 94 is provided with either a liquid crystal panel for the left eye of the user or an organic EL panel for the left eye, and with an optical member for guiding light beams, which are emitted by a display panel for the left eye, to the lens 90L. The liquid crystal panel and the organic EL panel are collectively referred to as a display panel. Light beams from the outside world pass through the one-way mirror provided in the lens 90L to be directed to the left eye, and the light beams guided by the optical member are reflected by the one-way mirror to be directed to the left eye. The frame 95 is provided with a display panel for the right eye of the user and with an optical member for guiding light beams, which are emitted by the display panel for the right eye, to the lens 90R. Light beams from the outside world pass through the one-way mirror provided in the lens 90R to be directed to the right eye, and the light beams guided by the optical member are reflected by the one-way mirror to be directed to the right eye.
A display 38, which is described below, includes the lens 90L, the display panel for the left eye, the optical member for the left eye, the lens 90R, the display panel for the right eye, and the optical member for the right eye.
According to the above-described configuration, the user U[k] can watch images displayed by the display panel in a transparent state in which the images are superimposed on images of the outside world. The pair of XR glasses 30-k causes the display panel for the left eye to display a left-eye image of stereo-pair images and causes the display panel for the right eye to display a right-eye image of the stereo-pair images. Thus, the pair of XR glasses 30-k causes the user U[k] to feel as if the displayed images have depth and have a stereoscopic effect.
The processor 31 is a processor configured to control the entire pair of XR glasses 30-k. The processor 31 is constituted of a single chip or of multiple chips, for example. The processor 31 is constituted of a central processing unit (CPU) that includes, for example, interfaces for peripheral devices, arithmetic units, registers, etc. One, some, or all of the functions of the processor 31 may be implemented by hardware such as a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), or a field programmable gate array (FPGA). The processor 31 executes various processing in parallel or sequentially.
The storage device 32 is a recording medium readable and writable by the processor 31. The storage device 32 stores a plurality of programs including a control program PR1 to be executed by the processor 31. The storage device 32 functions as a work area for the processor 31.
The detector 35 detects a state of the pair of XR glasses 30-k. The detector 35 includes, for example, an inertial sensor such as an acceleration sensor for sensing acceleration and a gyroscopic sensor for sensing angular acceleration, and a geomagnetic sensor for sensing directions. The acceleration sensor senses acceleration in a direction along an axis that is any one of an X-axis, a Y-axis, and a Z-axis that are perpendicular to one another. The gyroscopic sensor senses angular acceleration of rotation having a rotation axis that is any one of the X-axis, the Y-axis, and the Z-axis. The detector 35 can generate orientation information indicative of an orientation of the pair of XR glasses 30-k based on output information from the gyroscopic sensor. Movement information, which is described below, includes acceleration information indicative of acceleration for each of the three axes and angular acceleration information indicative of angular acceleration for each of the three axes. The detector 35 provides the processor 31 with the orientation information indicative of the orientation of the pair of XR glasses 30-k, the movement information on movement of the pair of XR glasses 30-k, and direction information indicative of a direction of the pair of XR glasses 30-k.
The capturing device 36 provides the captured image Gk obtained by capturing the outside world. The capturing device 36 includes lenses, a capturing element, an amplifier, and an AD converter, for example. Light beams focused through the lenses are converted by the capturing element into a captured image signal, which is an analog signal. The amplifier amplifies the captured image signal and provides the amplified captured image signal to the AD converter. The AD converter converts the amplified captured image signal, which is an analog signal, into the captured image information, which is a digital signal. The captured image information is provided to the processor 31. The captured image Gk provided to the processor 31 is provided to the terminal apparatus 20-k via the communication apparatus 37.
The communication apparatus 37 is hardware that is a transmitting and receiving device configured to communicate with other devices. For example, the communication apparatus 37 may be referred to as a network device, a network controller, a network card, a communication module, etc. The communication apparatus 37 may include a connector for wired connection and an interface circuit corresponding to the connector. The communication apparatus 37 may include a wireless communication interface. The connector for wired connection and the interface circuit for wired connection may conform to wired LAN, IEEE1394, or USB. The wireless communication interface may conform to wireless LAN or Bluetooth (registered trademark), etc.
The display 38 is a device for displaying images. The display 38 displays a variety of types of images under the control of the processor 31.
In the above-described configuration, the processor 31 reads the control program PR1 from the storage device 32. The processor 31 executes the control program PR1 to function as a communication controller 311, an estimator 312, and a display controller 313.
The communication controller 311 causes the communication apparatus 37 to transmit the captured image Gk to the location management server 40 and causes the communication apparatus 37 to receive the location information Pk and the orientation information Dk transmitted from the location management server 40. The communication controller 311 causes the communication apparatus 37 to transmit the captured image Gk and a capture parameter, which includes the location information Pck and the orientation information Dck, to the object management server 50. The communication controller 311 causes the communication apparatus 37 to receive the guide image and the virtual object image transmitted from the object management server 50. Communication between the location management server 40 and the pair of XR glasses 30-k and communication between the object management server 50 and the pair of XR glasses 30-k are executed via the terminal apparatus 20-k.
The estimator 312 corrects the location information Pk and the orientation information Dk periodically received from the location management server 40 based on the orientation information, the movement information, and the direction information provided by the detector 35. This correction allows the estimator 312 to estimate in real time the location information Pck indicative of a location of the pair of XR glasses 30-k and the orientation information Dck indicative of an orientation of the pair of XR glasses 30-k.
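As one way to realize the correction performed by the estimator 312, the periodically received global pose (Pk, Dk) can be combined with the locally tracked pose as in the following sketch. The 4x4-matrix formulation and the drift-correction scheme are assumptions made for illustration; the embodiment does not specify a concrete correction algorithm.

```python
import numpy as np

def corrected_pose(T_local_now, T_local_at_fix, T_global_fix):
    """All arguments are 4x4 homogeneous transforms (rotation and translation).

    T_local_now    : current pose from the on-glasses real-time tracking (local frame).
    T_local_at_fix : local pose recorded when the captured image Gk sent to the
                     location management server 40 was taken.
    T_global_fix   : pose (location information Pk, orientation information Dk)
                     returned by the location management server 40.

    The corrected pose (Pck, Dck) is obtained by re-expressing the local motion
    accumulated since the fix in the global coordinate system.
    """
    motion_since_fix = np.linalg.inv(T_local_at_fix) @ T_local_now
    return T_global_fix @ motion_since_fix
```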
The display controller 313 causes the display 38 to display the guide image and the virtual object image.
The processor 51 is a processor configured to control the entire object management server 50. The processor 51 is constituted of a single chip or of multiple chips, for example. The processor 51 is constituted of a central processing unit (CPU) that includes, for example, interfaces for peripheral devices, arithmetic units, registers, etc. One, some, or all of the functions of the processor 51 may be implemented by hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processor 51 executes various processing in parallel or sequentially.
The storage device 52 is a recording medium readable and writable by the processor 51. The storage device 52 stores a plurality of programs including a control program PR2 to be executed by the processor 51, a first database DB1, and a second database DB2. The storage device 52 functions as a work area for the processor 51.
The first database DB1 is used to manage virtual objects to be virtually placed by the developer in the real space.
The second database DB2 is used to manage guide images.
In the example shown in
The communication apparatus 53 shown in
The display 54 is a device for displaying images and text information. The input device 55 includes, for example, a keyboard and a pointing device such as a touch pad, a touch panel, or a mouse.
The processor 51 reads the control program PR2 from the storage device 52 and executes the control program PR2. As a result, the processor 51 functions as a communication controller 511, a manager 512, a selector 513, and a determiner 514.
The communication controller 511 causes the communication apparatus 53 to transmit the first guide image of the one or more guide images to the user apparatus 10-k and causes the communication apparatus 53 to receive, from the user apparatus 10-k, a captured image and a capture parameter generated by the user apparatus 10-k executing image capture.
The manager 512 manages the one or more guide images in association with a virtual object. Specifically, the manager 512 manages the first database DB1 and the second database DB2. A record included in the first database DB1 and a record included in the second database DB2 are in association with a virtual object ID. The one or more guide images are each obtained by capturing an area of the real space including a location of the virtual object virtually placed in the real space.
When the captured image acquired from the user apparatus 10-k satisfies a registration condition, the manager 512 manages the captured image as a new guide image. Specifically, when the captured image satisfies the registration condition, the manager 512 adds a new record corresponding to the new guide image to the second database DB2. The new record includes a guide image ID, a virtual object ID, the guide image, the capture parameter, and an acquisition date and time.
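A record of the second database DB2 could, for example, be modeled as follows. The field names and types are illustrative only; the transmission condition, which appears later in the update processing, is included here as well.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GuideImageRecord:
    """One record of the second database DB2. Field names and types are
    illustrative; the embodiment specifies the items but not their representation."""
    guide_image_id: str           # e.g. "G001"
    virtual_object_id: str        # e.g. "V001"
    guide_image: bytes            # encoded image data of the guide image
    capture_parameter: dict       # location information Pck and orientation information Dck
    transmission_condition: dict  # e.g. the predetermined area used by the determiner 514
    acquired_at: datetime         # acquisition date and time
```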
The registration condition may include a condition in which a captured image Gk does not include a portion that infringes a portrait right. An administrator may view the captured image Gk to determine whether the captured image Gk does not include any portion that infringes a portrait right. Alternatively, the manager 512 may execute the following processing to determine whether a portrait right is infringed. First, the manager 512 executes recognition processing to recognize a human face by analyzing the captured image Gk. Second, the manager 512 executes calculation processing to calculate the ratio of an area of an image of the recognized human face to an area of the entire captured image Gk. Third, the manager 512 executes determination processing to compare the calculated ratio with a predetermined value. In the determination processing, when the ratio is less than the predetermined value, the manager 512 determines that the captured image Gk does not include any portion that infringes a portrait right. In the determination processing, when the ratio is greater than or equal to the predetermined value, the manager 512 determines that the captured image Gk includes a portion that infringes a portrait right.
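The recognition processing, calculation processing, and determination processing described above might be sketched as follows, using OpenCV's bundled Haar-cascade face detector as the face recognizer. Both the detector and the predetermined value (here 0.05) are assumptions, not specified by the embodiment.

```python
import cv2

FACE_AREA_RATIO_LIMIT = 0.05  # the predetermined value (illustrative)

def may_infringe_portrait_right(captured_bgr) -> bool:
    # Recognition processing: recognize human faces in the captured image Gk.
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Calculation processing: ratio of the recognized face area to the entire image area.
    image_area = gray.shape[0] * gray.shape[1]
    face_area = sum(w * h for (_, _, w, h) in faces)

    # Determination processing: the image is judged to include an infringing portion
    # when the ratio is greater than or equal to the predetermined value.
    return (face_area / image_area) >= FACE_AREA_RATIO_LIMIT
```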
The manager 512 may use a trained model, which is trained to learn relationships between captured images Gk and infringements of portrait rights, to determine whether the captured image Gk does not include a portion that infringes a portrait right. This trained model is generated in a training phase in which a set of label data and a captured image Gk is used as training data. The label data indicates a determination by the administrator viewing the captured image Gk to determine whether a portrait right is infringed. In an operation phase, when a captured image Gk is input into the trained model, the trained model provides either output data indicating that a portrait right is infringed or output data indicating that a portrait right is not infringed. The manager 512 uses the output data for determination whether the captured image Gk does not include a portion that infringes a portrait right.
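In the operation phase, inference with such a trained model could look like the following sketch, assuming a PyTorch binary classifier whose two outputs correspond to "not infringing" and "infringing". The framework, input size, and output convention are all assumptions; any supervised classifier trained on administrator-labeled captured images would serve the same role.

```python
import torch
from torchvision import transforms
from PIL import Image

# Preprocessing assumed to match the model's training configuration.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def infringes_portrait_right(model: torch.nn.Module, image_path: str) -> bool:
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)                   # shape (1, 2): [not infringing, infringing]
    return bool(logits.argmax(dim=1).item() == 1)
```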
The registration condition may include a condition in which a degree of similarity between a captured image Gk and a guide image to be compared among the one or more guide images is less than or equal to a threshold. The manager 512 determines, based on the guide image to be compared, its capture parameter, the captured image acquired from the user apparatus 10-k, the location information acquired from the user apparatus 10-k, and the orientation information acquired from the user apparatus 10-k, whether the captured image Gk satisfies this registration condition. Specifically, the manager 512 uses a projective transformation matrix to convert the captured image Gk captured at a viewpoint of the pair of XR glasses 30-k into a conversion image that is an image of the captured image Gk viewed from a viewpoint at which the guide image to be compared is captured. The manager 512 calculates the projective transformation matrix based on the acquired location information, the acquired orientation information, and the capture parameter of the guide image to be compared. The manager 512 calculates a degree of similarity between the conversion image and the guide image to be compared. In the calculation of the degree of similarity, for example, the manager 512 first detects feature points from the conversion image and detects feature points from the guide image to be compared in the same manner. Next, the manager 512 may compare the feature points from the conversion image with the feature points from the guide image to be compared so as to determine the degree of similarity. Thereafter, the manager 512 compares the degree of similarity with the threshold to determine whether the captured image Gk satisfies the registration condition. When the manager 512 registers the new guide image in the second database DB2, the manager 512 deletes a record including the guide image to be compared from the second database DB2. This deletion updates a previous guide image to the new guide image. For example, the guide image G1 shown in
In the above description, the manager 512 uses the projective transformation matrix to convert the captured image Gk; however, the manager 512 may use the projective transformation matrix to convert the guide image to be compared. In this case, the manager 512 may calculate a degree of similarity between the captured image Gk and an image generated by converting the guide image to be compared.
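The conversion of the captured image Gk into the viewpoint of the guide image to be compared, followed by feature-point-based similarity calculation, might be sketched as follows. The projective transformation matrix H is assumed to have been computed in advance from the two capture parameters, and the ORB-match-ratio similarity measure is an assumption; the embodiment only requires some degree of similarity.

```python
import cv2

def similarity_after_conversion(captured_bgr, guide_bgr, H):
    """Warp the captured image Gk into the viewpoint of the guide image to be
    compared using the projective transformation matrix H, then score similarity
    as the ratio of matched ORB feature points, a value in [0, 1]."""
    h, w = guide_bgr.shape[:2]
    conversion_image = cv2.warpPerspective(captured_bgr, H, (w, h))

    orb = cv2.ORB_create()
    gray1 = cv2.cvtColor(conversion_image, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(guide_bgr, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    if des1 is None or des2 is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    return len(matches) / max(1, min(len(kp1), len(kp2)))
```

In this sketch, the captured image Gk would satisfy the registration condition when the returned value is less than or equal to the threshold.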
The selector 513 shown in
The determiner 514 determines whether a location of the user apparatus 10-k is within the predetermined area. The predetermined area includes the area of the real space in which a virtual object is virtually placed. The predetermined area may be set for each virtual object or may be set for each guide image. Alternatively, the predetermined area may be set for each area in which a plurality of virtual objects is placed. Specifically, the determiner 514 determines whether the location indicated by the location information Pck received by the communication apparatus 53 from the user apparatus 10-k is within the predetermined area indicated by the transmission condition.
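A minimal form of the determination by the determiner 514 is shown below, modeling the predetermined area as a circle of a given radius around a placement location. This is an assumption; the actual transmission condition may define the area differently, for example per guide image or per virtual object.

```python
import math

def within_predetermined_area(location_pck, area_center, radius_m):
    """location_pck and area_center are (x, y, z) coordinates in the global
    coordinate system; the predetermined area is modeled as a horizontal circle
    of radius radius_m around area_center (an illustrative assumption)."""
    dx = location_pck[0] - area_center[0]
    dy = location_pck[1] - area_center[1]
    return math.hypot(dx, dy) <= radius_m
```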
Transmission processing, by which the object management server 50 transmits the first guide image, and update processing, by which the object management server 50 updates the one or more guide images, will be described.
At step S10, the processor 51 determines whether a captured image and a capture parameter are received from the user apparatus 10-k. The capture parameter includes the location information Pck and the orientation information Dck. The processor 51 repeats the processing at step S10 until the determination at step S10 is affirmative.
When the determination at step S10 is affirmative, the processor 51 determines whether the location of the user apparatus 10-k is within the predetermined area (step S11). Specifically, the processor 51 determines whether the location indicated by the location information Pck acquired via the communication apparatus 53 satisfies a transmission condition for each guide image ID, which is stored in the second database DB2.
When the determination at step S11 is negative, the processor 51 ends the processing. On the other hand, when the determination at step S11 is affirmative, the processor 51 selects a guide image (step S12). When a single transmission condition is satisfied at step S11, the processor 51 selects a guide image for the satisfied transmission condition as the first guide image. On the other hand, when two or more transmission conditions are satisfied at step S11, the processor 51 selects the first guide image from among two or more guide images for the satisfied two or more transmission conditions.
Specifically, the processor 51 executes the following processing. In first processing, the processor 51 reads two or more capture parameters corresponding to two or more guide image IDs for the satisfied transmission conditions, from the second database DB2. In second processing, the processor 51 determines a capture condition, which is closest to a capture condition corresponding to the capture parameter received from the user apparatus 10-k, from among capture conditions indicated by the read two or more capture parameters. In third processing, the processor 51 determines a guide image ID corresponding to the closest capture condition. In fourth processing, the processor 51 reads, as the first guide image, a guide image corresponding to the determined guide image ID from the second database DB2. The processor 51 executes the first processing to the fourth processing to select the first guide image from among the two or more guide images.
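The first to fourth processing might be realized as in the following sketch. The closeness metric, which combines location distance and view-direction angle, is an assumption; the embodiment only requires selecting the capture condition closest to the one indicated by the received capture parameter.

```python
import numpy as np

def select_first_guide_image(candidate_records, user_location, user_direction):
    """candidate_records: records read from the second database DB2 whose
    transmission conditions are satisfied; each record is assumed to expose the
    capture location (x, y, z) and a unit view-direction vector taken from its
    capture parameter.  Returns the guide image ID of the closest capture condition."""
    user_location = np.asarray(user_location, dtype=float)
    user_direction = np.asarray(user_direction, dtype=float)

    def closeness(record):
        d_loc = np.linalg.norm(np.asarray(record["location"], dtype=float) - user_location)
        cos_a = float(np.clip(np.dot(record["direction"], user_direction), -1.0, 1.0))
        d_ang = float(np.arccos(cos_a))       # view-direction difference in radians
        return d_loc + 2.0 * d_ang            # weight of 2.0 metres per radian is illustrative

    best = min(candidate_records, key=closeness)
    return best["guide_image_id"]
```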
At step S13, the processor 51 causes the communication apparatus 53 to transmit the first guide image to the user apparatus 10-k.
In the transmission processing described above, at step S10 and at step S13, the processor 51 functions as the communication controller 511. At step S11, the processor 51 functions as the determiner 514. At step S12, the processor 51 functions as the selector 513.
At step S20, the processor 51 determines whether a captured image Gk and a capture parameter are received from the user apparatus 10-k. The capture parameter includes the location information Pck and the orientation information Dck. The processor 51 repeats the processing at step S20 until the determination at step S20 is affirmative.
When the determination at step S20 is affirmative, the processor 51 determines whether there is a virtual object to be superimposed on the captured image Gk (step S21). Specifically, the processor 51 determines, based on the orientation information Dck and a location indicated by the location information Pck acquired via the communication apparatus 53, whether a virtual object exists in the field of view of the user U[k]. For example, it is assumed that a captured image Gk shown in
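The determination of whether a virtual object exists in the field of view of the user U[k] might be sketched as a view-cone test based on the location information Pck and the orientation information Dck. The half field-of-view angle and the maximum distance used below are illustrative values, not specified by the embodiment.

```python
import numpy as np

def virtual_object_in_view(user_location, user_direction, object_location,
                           half_fov_deg=40.0, max_distance_m=50.0):
    """user_direction is a unit view-direction vector derived from the orientation
    information Dck; user_location is taken from the location information Pck."""
    to_object = np.asarray(object_location, dtype=float) - np.asarray(user_location, dtype=float)
    distance = np.linalg.norm(to_object)
    if distance < 1e-6:
        return True                      # the object is at the user's location
    if distance > max_distance_m:
        return False
    cos_angle = float(np.dot(to_object / distance, user_direction))
    return cos_angle >= np.cos(np.radians(half_fov_deg))
```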
When the determination at step S21 is negative, the processor 51 ends the processing. On the other hand, when the determination at step S21 is affirmative, the processor 51 determines whether the captured image Gk includes a portion that infringes a portrait right (step S22). When the determination at step S22 is affirmative, the processor 51 ends the processing. Thus, a captured image Gk, in which a human face is captured to cause infringement of a portrait right, is not adopted as a guide image.
On the other hand, when the determination at step S22 is negative, the processor 51 calculates a degree of similarity between the captured image and a guide image (step S23). In this case, the processor 51 uses the second database DB2 to extract a combination of the guide image (the guide image to be compared) corresponding to an ID of the virtual object to be superimposed on the captured image and a capture parameter corresponding to the ID of the virtual object to be superimposed on the captured image. The processor 51 executes, based on the capture parameter of the captured image and the capture parameter of the guide image, projective transformation on the captured image to generate a conversion image that is an image of the captured image viewed from a viewpoint at which the guide image is captured. The processor 51 calculates the degree of similarity based on the conversion image and the guide image.
In the above-described extraction of the combination of the guide image and the capture parameter, a plurality of combinations may be extracted. For example, it is assumed that contents stored in the second database DB2 are the contents shown in FIG. 8, and it is assumed that the virtual object ID is “V001.” In this case, a combination of a guide image and a capture parameter that correspond to the guide image ID “G001” and a combination of a guide image and a capture parameter that correspond to the guide image ID “G002” are extracted. When such a plurality of combinations of guide images and capture parameters is extracted, the processor 51 determines, as a guide image to be compared, a guide image having a capture parameter closest to the capture parameter of the captured image among the extracted plurality of capture parameters. The processor 51 calculates a degree of similarity between the determined guide image to be compared and the captured image.
After the degree of similarity is calculated at step S23, the processor 51 determines whether the degree of similarity is less than or equal to the threshold (step S24). When the determination at step S24 is negative, the processor 51 ends the update processing.
On the other hand, when the determination at step S24 is affirmative, the processor 51 manages the captured image Gk as a new guide image (step S25). Specifically, the processor 51 adds a new record to the second database DB2 and deletes a record corresponding to the previous guide image. The new record includes a guide image ID, a virtual object ID, a guide image, a capture parameter, a transmission condition, and an acquisition date and time.
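The addition of the new record and the deletion of the record corresponding to the previous guide image might be realized as in the following sketch, which models the second database DB2 as a dictionary keyed by guide image ID. The ID scheme and the storage model are assumptions.

```python
import uuid
from datetime import datetime, timezone

def register_new_guide_image(db2, previous_guide_image_id, virtual_object_id,
                             captured_image, capture_parameter, transmission_condition):
    """Add a new record for the captured image Gk and delete the previous record."""
    new_id = "G" + uuid.uuid4().hex[:6]   # hypothetical guide image ID scheme
    db2[new_id] = {
        "virtual_object_id": virtual_object_id,
        "guide_image": captured_image,
        "capture_parameter": capture_parameter,
        "transmission_condition": transmission_condition,
        "acquired_at": datetime.now(timezone.utc),   # acquisition date and time
    }
    # Deleting the record of the previous guide image updates it to the new guide image.
    db2.pop(previous_guide_image_id, None)
    return new_id
```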
For example, it is assumed that contents stored in the second database DB2 are the contents shown in
In the above-described update processing, at step S20, the processor 51 functions as the communication controller 511. At steps S21 to S25, the processor 51 functions as the manager 512.
According to the above description, the object management server 50 includes the manager 512 and the communication controller 511. The manager 512 manages one or more guide images in association with a virtual object virtually placed in the real space, the one or more guide images being each obtained by capturing an area of the real space including a location of the virtual object. The communication controller 511 causes the communication apparatus 53 to transmit a first guide image of the one or more guide images to the user apparatus 10-k, and causes the communication apparatus 53 to receive, from the user apparatus 10-k, a captured image Gk generated by the user apparatus 10-k executing image capture. The manager 512 manages the captured image Gk as a new guide image when the captured image Gk satisfies a registration condition.
Since the object management server 50 includes the above-described configuration, it is possible to update the one or more guide images using the captured image Gk when an environment of an area around the virtual object is changed. As a result, the object management server 50 can provide a new guide image to a user, allowing the user to use the new guide image as a clue to search for the virtual object.
The communication controller 511 causes the communication apparatus 53 to receive, from the user apparatus 10-k, the captured image Gk and a capture parameter related to a condition of capture of the captured image Gk. In a state in which the manager 512 manages the captured image Gk as the new guide image, the manager 512 manages the new guide image and the capture parameter in association with each other. According to the above-described configuration, the object management server 50 can manage a capture condition of a guide image.
The object management server 50 further includes the selector 513. In a state in which the manager 512 manages two or more guide images in association with the virtual object, the selector 513 selects, based on the capture parameter received from the user apparatus 10-k and on capture parameters in association with the two or more guide images, a guide image captured on a capture condition closest to a capture condition indicated by the capture parameter of the captured image Gk, as the first guide image, from among the two or more guide images.
According to the above-described configuration, the object management server 50 transmits, to the user apparatus 10-k, the first guide image captured on the capture condition closest to the capture condition indicated by the capture parameter received from the user apparatus 10-k. Thus, since the first guide image is selected in accordance with a state of the user apparatus 10-k, the user U[k] can move to an area in which the virtual object is virtually placed more readily than in a configuration in which a guide image freely selected from the two or more guide images is transmitted as the first guide image.
The object management server 50 further includes the determiner 514 configured to determine whether the user apparatus 10-k is within the predetermined area including the area of the real space in which the virtual object is virtually placed. The communication controller 511 causes the communication apparatus 53 to transmit the first guide image to the user apparatus 10-k in response to a determination by the determiner 514 being affirmative, and prohibits the communication apparatus 53 from transmitting the first guide image to the user apparatus 10-k in response to the determination by the determiner 514 being negative.
According to the above-described configuration, the object management server 50 transmits the first guide image to the user apparatus 10-k only when the user apparatus 10-k is within the predetermined area; thus, when the user apparatus 10-k approaches the virtual object, the first guide image is transmitted. Consequently, the user U[k] can receive the first guide image when the user U[k] approaches the virtual object. Thus, it is possible to receive the first guide image at a timing at which a guide is required.
The object management server 50 manages the captured image Gk as the new guide image when the captured image Gk satisfies the registration condition. The registration condition includes a condition in which the captured image Gk does not include a portion that infringes a portrait right. Although a human face may be captured in the captured image Gk, the object management server 50 does not manage the captured image Gk that infringes a portrait right as the new guide image; thus it is possible to prevent an infringement of a portrait right in advance.
The object management server 50 manages the captured image Gk as the new guide image when the captured image Gk satisfies the registration condition. The registration condition includes a condition in which a degree of similarity between the captured image Gk and a guide image to be compared among the one or more guide images is less than or equal to the threshold. According to the above-described configuration, when the degree of similarity between the captured image Gk and the guide image to be compared is less than or equal to the threshold, the captured image Gk is managed as the new guide image. Thus, the captured image Gk can be used to update the one or more guide images when it is detected that an environment of an area around the virtual object has been changed.
This disclosure is not limited to the embodiment described above. Specific modifications will be explained below. Two or more modifications freely selected from the following modifications may be combined.
The user apparatus 10-k according to this embodiment includes the terminal apparatus 20-k and the pair of XR glasses 30-k. The terminal apparatus 20-k functions as a relay apparatus configured to relay communication between the pair of XR glasses 30-k and the location management server 40, and communication between the pair of XR glasses 30-k and the object management server 50. This disclosure is not limited to a configuration in which the user apparatus 10-k includes the terminal apparatus 20-k and the pair of XR glasses 30-k. For example, the pair of XR glasses 30-k may have a function of communicating with the location management server 40 and a function of communicating with the object management server 50. In this case, the user apparatus 10-k may be constituted of the pair of XR glasses 30-k.
The terminal apparatus 20-k may include the functions of the pair of XR glasses 30-k. In this case, the user apparatus 10-k is constituted of the terminal apparatus 20-k. However, the terminal apparatus 20-k differs from the pair of XR glasses 30-k in that a virtual object is displayed in two dimensions.
In the above-described embodiment, the condition in which the user apparatus 10-k is within the predetermined area is used as the transmission condition for the first guide image. When this transmission condition is satisfied, the object management server 50 transmits the first guide image to the user apparatus 10-k. However, the transmission condition for the first guide image is not limited to the transmission condition described above. For example, in a state in which the user apparatus 10-k displays a map in which icons are placed at different locations of different virtual objects and an icon is selected by the user, the object management server 50 may transmit a first guide image corresponding to the selected icon to the user apparatus 10-k.
In the above-described embodiment, the condition in which the captured image Gk does not include a portion that infringes a portrait right is adopted as the registration condition for the captured image Gk. However, as the registration condition, a condition in which the captured image Gk does not include a portion that infringes a copyright may be adopted. An administrator may view the captured image Gk to determine whether the captured image Gk includes a portion that infringes a copyright.
Software, instructions, etc., may be transmitted and received via communication media. For example, when software is transmitted from a website, a server, or another remote source by using wired technologies such as coaxial cables, optical fiber cables, twisted-pair cables, and digital subscriber lines (DSL), and/or wireless technologies such as infrared radiation, radio waves, and microwaves, these wired technologies and/or wireless technologies are also included in the definition of communication media.
Although this disclosure is described in detail, it is obvious to those skilled in the art that the present invention is not limited to the embodiment described in the specification. This disclosure can be implemented with a variety of changes and in a variety of modifications, without departing from the spirit and scope of the present invention as defined in the recitations of the claims. Consequently, the description in this specification is provided only for the purpose of explaining examples and should by no means be construed to limit the present invention in any way.
1 . . . information processing system, 10-1-10-j . . . user apparatus, 11, 21, 51 . . . processor, 13, 23, 53 . . . communication apparatus, 20-1-20-j . . . terminal apparatus, 30-1-30-j . . . pair of XR glasses, 511 . . . communication controller, 512 . . . manager, 513 . . . selector, 514 . . . determiner, Dck . . . orientation information, Pck . . . location information, G1, G2 . . . guide image, Gk . . . captured image, VO1, VOx . . . virtual object.
Number | Date | Country | Kind
2022-065581 | Apr 2022 | JP | national

Filing Document | Filing Date | Country | Kind
PCT/JP2023/007342 | 2/28/2023 | WO