The present invention relates to an information processing apparatus, and to a technology for displaying information on an object recognized from a video.
Hitherto, there has been known a technology for projecting and displaying related information (such related information is also referred to as augmented information) on a target that a user is focusing on. For example, in Japanese Patent Application Laid-open No. 08-86615, there is disclosed a technology for capturing a target object, retrieving related information on the captured target object, and projecting the retrieved related information on the target object. In that technology, a position and a shape of the target object are detected, and the retrieved related information is projected on the target object based on the detected position and shape.
In US 2012/0313974, there is proposed a technology for changing, when projecting information, a size and a position of an image to be projected in accordance with a distance from an apparatus projecting the information to a projection target.
When related information on an object existing in the real world is projected on that object, the amount of related information that the user can easily view depends on the projection environment. For example, for a mobile projection device that the user can carry around, the distance between the user and the object serving as a projection surface varies. In general, the range of the projection surface over which the projection light is spread becomes wider as the projection apparatus moves further away from the projection surface while continuing to project the same image. At this time, on the projection surface, the characters and images included in the projection image appear larger. However, when an object existing in the real world is used as the projection surface, the size of the object does not change regardless of the distance between the user and the projection surface. Therefore, the object appears relatively smaller with respect to the projection image, which continues to grow, as the user moves further away from the projection surface. More specifically, as the distance between the user and the projection surface increases, the partial area on which the related information on the object is to be projected becomes smaller with respect to the overall projection image. In a situation in which the size of the projection range is limited in this way, when there is a large amount of information to be projected, the content of that information becomes more difficult to view.
In Japanese Patent Application Laid-open No. 08-86615, there is disclosed a technology for generating, when projecting information, projection content based on a relative position between an apparatus and a projection target. However, in Japanese Patent Application Laid-open No. 08-86615, there is no disclosure about changing the amount of related information to be projected in accordance with the projection environment.
In US 2012/0313974, there is a disclosure about changing the size and the position of the projection image in accordance with the distance to the projection surface, but there is no disclosure about changing the amount of related information to be projected based on the projection environment. The present invention has been created in view of the problems described above. It is an object of the present invention to, when related information on an object is to be projected on the object in an environment in which the distance between a projection apparatus and the object to serve as a projection surface is variable, improve the visibility of the related information.
According to the present disclosure, an information processing apparatus comprises: an image capture unit configured to capture a target object; a generation unit configured to generate, by using a captured image captured by the image capture unit, presentation information for displaying related information corresponding to the target object at a position specified based on a position of the target object; an image display unit configured to display an image including the generated presentation information; and a distance acquisition unit configured to acquire a distance between the target object and the information processing apparatus, wherein the generation unit is configured to generate, when the distance acquired by the distance acquisition unit is larger than a predetermined value, the presentation information by changing an amount of information to be included in the presentation information.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
The CPU 101 is configured to control each of the hardware components by reading and executing a program stored in the ROM 102 and the HDD 104.
The program may be, for example, a control program for implementing each of the functions, sequences, and processing steps that are described later. That control program and various types of data to be referenced during execution of the control program are recorded in the ROM 102. The RAM 103 includes, for example, a work area to be used by the CPU 101 and a load area for the control program.
The HDD 104 is a storage device configured to store the control program, which is read as appropriate by the CPU 101, for example. The camera 105 is configured to capture an image of a target object serving as a capture target, and to store the captured image in the RAM 103 and the HDD 104. The projector 106 is configured to project information stored in the RAM 103 and the HDD 104. The camera 105 and the projector 106 are mounted to the information processing apparatus 10 so that a photographable range of the camera 105 and a projectable range of the projector 106 overlap. The bus 107 is configured to transfer, for example, address signals designating the components to be controlled by the CPU 101, control signals for controlling each of the components, and data to be passed to and from each of the devices.
The control program may be recorded in advance in the ROM 102 or the HDD 104, or may be stored as necessary in the ROM 102 or the HDD 104 from an external apparatus or an external recording medium, for example. The CPU 101 is configured to implement each function by executing the control program recorded in the ROM 102 or the HDD 104.
In this embodiment, a projector is used as an example of an image display unit. However, the present invention is not limited to this example, and may use any device capable of displaying a live view image, e.g., a head-mounted display, a smartphone, or a tablet computer.
The recognition unit 111 is configured to recognize an object identification (ID) and a position of an object having a specific shape included in the captured image captured by the camera 105. In this embodiment, a marker is used as the object having a specific shape, and a marker ID is used as the object ID. The marker is a specific pattern in which information, such as numerical values and characters, is embedded. As a representative example, a one-dimensional barcode or a two-dimensional barcode may be used. The marker ID is information embedded in the marker. The marker ID is used as an ID for distinguishing markers having different patterns.
The related information acquisition unit 112 is configured to acquire related information associated with a recognized marker ID. A correspondence relation between the marker ID and the related information is stored in advance in the ROM 102 or the HDD 104. Related information stored in a server external to the information processing apparatus 10 may also be acquired by using a communication device. In this embodiment, one or more pairs, each consisting of an item and a value classified under that item, are associated as the related information with each individual marker ID. In this embodiment, in particular, information on the content of a container to which a marker has been attached is associated as the related information. The term “container” as used herein refers to a physical object capable of storing articles in its interior. A representative example of such a container is a box. For example, one marker ID may be associated with related information on a plurality of items, such as a “product name”, a “size”, and a “color”, of the article that is the content of the container to which that marker has been attached.
The projection environment estimation unit 113 is configured to estimate, based on the captured image, information on environmental factors influencing projection (hereinafter referred to as “projection environment”), such as a distance between the information processing apparatus 10 and a projection surface, a color of the projection surface, and ambient light. The presentation information generation unit 114 is configured to group the related information, and to generate presentation information based on the grouped related information. The projector 106 is configured to receive the presentation information from the presentation information generation unit 114, and to project the received presentation information. The presentation information is information representing the content of the related information to be presented to the user; for example, it is information output as an image to the projector 106 in order to be presented to the user. In this embodiment, in particular, information (related information) on the content of the container (e.g., box) to which a marker has been attached is presented on and around that container. As a result, the user can visually obtain information on the contents of the container by turning the information processing apparatus 10 toward the container.
In this embodiment, a plurality of markers may be recognized at the same time, and a plurality of pieces of presentation information may be simultaneously projected. Therefore, the term “presentation information” may refer to each of a plurality of partial images included in an image representing one screen output by the projector 106. The presentation information generation unit 114 is configured to change the number of pieces of presentation information and the amount of information contained in the presentation information based on the projection environment. For example, even when there are a plurality of recognized marker IDs, depending on the projection environment, control is performed for consolidating the related information into one piece of presentation information to be projected.
Next, an operation example of the information processing apparatus 10 configured as described above is described.
As illustrated in
Next, the recognition unit 111 determines whether or not one or more markers are included in the captured image (Step S203). When there are no markers included in the captured image (Step S203: N), the CPU 101 ends this sequence.
When a marker is included in the captured image (Step S203: Y), the related information acquisition unit 112 acquires the related information corresponding to the marker ID of each marker recognized by the recognition unit 111 in Step S202 (Step S204). Next, the presentation information generation unit 114 determines whether or not there are a plurality of marker IDs (Step S205). An example of a case in which there are a plurality of marker IDs is when a box having a marker printed on each side is lifted up, and the lifted-up surface serves as the projection surface. When information on the objects (products) contained in the box is associated with the marker IDs in advance, detailed information on the contents of the box may be referenced as necessary by using the information processing apparatus 10 without opening the box.
When there is one marker ID (Step S205: N), the presentation information generation unit 114 generates presentation information for a single marker (Step S206). When there are a plurality of marker IDs (Step S205: Y), the presentation information generation unit 114 performs processing for generating presentation information for each of the plurality of markers (Step S207). The processing for generating the presentation information may also be performed on information corresponding to the plurality of markers as a whole. As described later, the determination regarding which processing method is to be used may be performed based on the content of the related information associated with the markers or based on the distances between the markers and the information processing apparatus 10. Lastly, the presentation information generation unit 114 instructs the projector 106 to project the presentation information (Step S208), and then ends the processing.
In Step S202, the recognition unit 111 acquires the marker ID and the position of each marker included in the captured image. This example is based on the assumption that the captured area matches the projection area of the projector 106, or that the captured area is smaller than the projection area. However, the captured area may be wider than the projection area. In such a case, the marker ID and the position of each marker included both in the captured image and in the projection area may be acquired. As a method of acquiring the projection area in the captured image, a known method may be used. As a simple method, all pixels are projected in red by the projector 106, and the projected image is captured by an image capture apparatus. The projection area in the captured image may then be determined by taking the range in which the pixels are red to be the projection area.
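As one concrete illustration of this simple red-projection method, a minimal sketch is shown below. The project_solid_color and capture_frame callables and the color thresholds are assumptions introduced only for the example; they are not part of the embodiment.

```python
import cv2
import numpy as np

def estimate_projection_area(project_solid_color, capture_frame):
    """Estimate the projector's footprint inside the captured image.

    project_solid_color and capture_frame are hypothetical callables wrapping
    the projector 106 and the camera 105; the thresholds below are illustrative.
    Returns a bounding box (x, y, width, height) in captured-image pixels, or None.
    """
    project_solid_color((0, 0, 255))               # project all pixels in red (BGR order)
    frame = capture_frame()                        # captured image, H x W x 3, uint8

    b, g, r = cv2.split(frame)
    red_mask = (r > 150) & (g < 100) & (b < 100)   # pixels assumed to be lit by the projector

    ys, xs = np.nonzero(red_mask)
    if xs.size == 0:
        return None                                # projection area not visible in the capture
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```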
For example, when the horizontal field of view is 90 degrees and the number of pixels in the captured image is 640 pixels (width) by 480 pixels (height), a one meter-long target object at a distance of 1 m appears as 320 pixels in the captured image. When the marker width is known in advance to be 10 cm, if the marker width in the captured image is 32 pixels, the distance can be estimated to be 1 m, and if the marker width in the captured image is 20 pixels, the distance can be estimated to be 1.6 m. In general, the angle of view and the number of pixels used for the estimation are determined based on the characteristics of the lens and the image sensor, which are components of the camera 105.
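The relation between the apparent marker width and the distance described above can be written as a small helper. The following is a sketch of the pinhole-camera relation implied by the numbers in the text (90-degree horizontal field of view, 640-pixel-wide image, 10 cm marker); the function name and default values are illustrative only.

```python
import math

def estimate_distance_m(marker_width_px, marker_width_m=0.10,
                        image_width_px=640, horizontal_fov_deg=90.0):
    """Estimate the camera-to-marker distance from the marker's apparent width.

    At distance d the camera covers a horizontal span of 2 * d * tan(fov / 2)
    metres across image_width_px pixels, so
    marker_width_px / image_width_px = marker_width_m / (2 * d * tan(fov / 2)).
    """
    span_per_metre_of_distance = 2.0 * math.tan(math.radians(horizontal_fov_deg) / 2.0)
    return (marker_width_m * image_width_px) / (marker_width_px * span_per_metre_of_distance)

# Consistency check against the numbers in the text: 32 px -> 1 m, 20 px -> 1.6 m.
assert abs(estimate_distance_m(32) - 1.0) < 1e-6
assert abs(estimate_distance_m(20) - 1.6) < 1e-6
```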
Next, the presentation information generation unit 114 generates the presentation information in accordance with the related information corresponding to the marker, the distance to the marker, and the position of the marker (Step S213), and then ends the processing.
In this marker, the value for the item “product name” is “LC204”, the value for the item “size” is “20 cm”, the value for the item “color” is “black”, the value for the item “product ID” is “LC204_240BK”, and the value for the item “image” is “202240BK.jpg”. Those values may be set to arbitrary information, and character information or image information may be used for those values. In the example shown in
The marker area 306 is an area defined in order to perform positioning processing when the projection image is output by the information processing apparatus 10 so as to be superimposed on a marker in the real world. The marker area 306 is not represented as an image on the projection image. In
In this embodiment, because the presentation information generation unit 114 is configured to not generate an image of the marker in the projection image 301, the image of the marker is not itself illustrated in
As illustrated in
The position and the size of the marker in the projection image are calculated by using a correction parameter to convert the position and the size of the marker area 306 included in the captured image. In general, the camera 105 and the projector 106 in the information processing apparatus 10 have mounting positions, angles, and lens focal lengths that are different from each other.
As a result, even when the target object is captured by the camera 105, and the captured image is projected as it is by the projector 106, the target object appearing in the captured image is not projected in the same position and the same size as the actual target object. In order to project information matching the position and the size of the target object included in the captured image, it is necessary for the shape of the marker included in the captured image to be converted into the shape in the projection image. The correction parameter used for the conversion may be determined based on, for example, an image obtained by the camera 105 capturing a test pattern image projected by the projector 106.
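One possible form of the correction parameter is a homography estimated from the test pattern. The sketch below assumes that the corners of the test pattern have already been located in the captured image (the coordinates shown are placeholders) and uses OpenCV to estimate and apply the mapping.

```python
import cv2
import numpy as np

# Corresponding points: where the test-pattern corners were drawn in the projector
# image, and where the same corners were detected in the captured image.
# The coordinates below are placeholders; real values come from the calibration capture.
projector_pts = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=np.float32)
captured_pts = np.array([[112, 95], [520, 88], [534, 391], [105, 402]], dtype=np.float32)

# Correction parameter: maps captured-image coordinates to projector coordinates.
H, _ = cv2.findHomography(captured_pts, projector_pts)

def marker_area_to_projector(marker_corners_captured):
    """Convert the four corners of the marker area 306 (captured-image pixels)
    into projector-image pixels so that projected content lands on the marker."""
    pts = np.asarray(marker_corners_captured, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```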
Through converting in this manner, when information is projected on the marker area 306 in the captured image, that information is superimposed on a marker 303, which is illustrated in
The projection environment estimation unit 113 estimates the size of the projectable surface based on the captured image (Step S401). For example, the projectable surface may be set as an area surrounded by a plurality of markers, or as an area surrounding a marker. In particular, an area that is near the marker and in which the presentation information can be viewed may be set as the projectable surface. In this example, the projectable surface is determined by utilizing a color gradient of the projection surface. The projection environment estimation unit 113 recognizes the color gradient from the captured image, and sets, as the projectable surface, an area that is adjacent to the marker and that has a color gradient of a predetermined value or less. The reason for determining the projectable surface in this manner is that the projection image is more difficult to view where the color gradient is large, and easier to view where the color gradient is small. The presentation information generation unit 114 generates the presentation information in accordance with the related information corresponding to the marker, the projectable surface surrounding the marker, and the position of the marker (Step S402), and then ends the processing.
An example of a method of generating the presentation information in accordance with the projectable surface is now described. First, the projection environment estimation unit 113 calculates the surface area of the projectable surface to be included in the projection image. In this embodiment, the surface area of the projectable surface is calculated in units of square pixels. Next, the presentation information generation unit 114 includes, in the presentation information, related information in an amount that fits within the surface area of the projectable surface. In this embodiment, with the lower left position of the marker area 306 serving as the origin, the presentation information generation unit 114 displays pieces of presentation information 411 and 412.
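A minimal sketch of the projectable-surface estimation in Step S401 is shown below. It assumes that the gradient threshold and a seed point adjacent to the marker area 306 are given, takes the connected low-gradient region containing that seed point as the projectable surface, and reports its surface area in square pixels.

```python
import cv2
import numpy as np

def estimate_projectable_surface(captured_bgr, seed_point, gradient_threshold=30):
    """Return a mask of the low-gradient region containing seed_point and its
    area in square pixels; the threshold value is illustrative.

    seed_point is an (x, y) pixel adjacent to the marker area 306."""
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    gradient = cv2.magnitude(gx, gy)

    flat = (gradient <= gradient_threshold).astype(np.uint8)   # low-gradient pixels
    _num_labels, labels = cv2.connectedComponents(flat)

    x, y = seed_point
    label_at_seed = labels[int(y), int(x)]
    if label_at_seed == 0:                                     # seed sits on a strong edge
        return None, 0
    surface_mask = labels == label_at_seed
    return surface_mask, int(surface_mask.sum())               # area in square pixels
```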
In Step S212 of
In Step S213 of
In the example described above, the amount of information is reduced when the f-number is at a minimum. However, the amount of information may be reduced when the f-number is smaller than a predetermined value.
Further, the presentation information generation unit 114 may be configured to change a display mode of the presentation information when the f-number is at a minimum or is smaller than the predetermined value. In this case, it is preferred that the display mode be changed so as to improve the visibility of the presentation information. For example, the display mode may be changed by increasing the size of the characters, increasing the character spacing, or displaying an icon in place of displaying characters.
In Step S401 of
The presentation information may be generated by various methods. For example, pieces of related information may be stored in the presentation information, in order, to the extent that they fit within the size of the projectable surface. In this case, information requiring a smaller display surface area, e.g., information having a smaller number of characters, is preferentially stored in the presentation information. Further, a priority may be determined for each value contained in the related information, and the values may be included in the presentation information in descending order of priority.
When the values are stored in the presentation information in order of priority, a situation may occur in which a value having a large required display surface area (hereinafter referred to as “value A”) cannot be stored, but a value having a lower priority and a small required display surface area can be stored. In this case, the value A is not stored in the presentation information, and a determination is made as to whether or not the value having the next highest priority can be stored in the presentation information. The priority may be determined in advance, or may be specified by the user. In the embodiment described above, the presentation information is projected on an area separate from the marker. However, the position and the size may be determined so that the presentation information is projected superimposed on the marker. Alternatively, the information may be projected in any direction from the marker so that the information is not superimposed on the marker. When the color gradient of the marker is strong, visibility can be ensured by projecting the information so that the information is not superimposed on the marker.
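Putting the two rules above together, the priority-ordered filling might be sketched as follows; the function that estimates the required display area of a value is assumed to be supplied elsewhere.

```python
def select_values_for_presentation(values, available_area_px2, required_area_px2):
    """Fill the presentation information in descending order of priority,
    skipping any value whose required display area does not fit.

    values             -- list of (priority, item, value) tuples
    available_area_px2 -- surface area of the projectable surface, in square pixels
    required_area_px2  -- function mapping a value to its required display area
    """
    selected, remaining = [], available_area_px2
    for priority, item, value in sorted(values, key=lambda v: v[0], reverse=True):
        needed = required_area_px2(value)
        if needed <= remaining:          # the value fits, so store it
            selected.append((item, value))
            remaining -= needed
        # otherwise skip it (the "value A" case) and try the next-priority value
    return selected
```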
Next, the presentation information generation unit 114 determines the amount of presentation information in accordance with the distance between the markers and the information processing apparatus 10 (Step S502). In particular, in this embodiment, as an example of processing for increasing or decreasing the amount of information, the number of pieces of presentation information to be generated is determined in accordance with the distance between the markers and the information processing apparatus 10. More specifically, the distance between the information processing apparatus 10 and the markers is calculated, and the number of pieces of presentation information is determined in accordance with the calculated value. In this case, the “number of pieces of presentation information” corresponds to the number of images to be arranged as independent partial areas in the projection image output from the projector 106.
For example, in the case of an application in which text information is arranged in a graphic object having a rectangular shape, a speech balloon shape, for example, and projection is performed so that the graphic object is not superimposed on the actual object, the number of graphic objects may be considered to be the number of pieces of presentation information. In this embodiment, when the distance between the markers and the information processing apparatus 10 is a predetermined value or more, a total of one piece of presentation information for the plurality of markers is generated, and when that distance is less than the predetermined value, the same number of pieces of presentation information as the number of recognized markers is generated. In this example, the amount of information to be included in one piece of presentation information is roughly constant. Therefore, when the same number of pieces of presentation information as the number of markers is generated, the amount of projectable information is more than when only one piece of presentation information is generated. When the distance between the information processing apparatus 10 and each marker is different, the average of those distances is taken as the distance between the information processing apparatus 10 and the markers. In the following description, the predetermined value is 1.5 m.
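A sketch of the distance-based determination in Step S502, using the averaging rule and the 1.5 m threshold described above, is shown below.

```python
def decide_number_of_pieces_by_distance(marker_distances_m, threshold_m=1.5):
    """Return the number of pieces of presentation information to generate:
    one consolidated piece when the average marker distance is at or above
    the threshold, otherwise one piece per recognized marker."""
    average_distance = sum(marker_distances_m) / len(marker_distances_m)
    if average_distance >= threshold_m:
        return 1
    return len(marker_distances_m)
```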
Next, the presentation information generation unit 114 generates the presentation information based on the amount of information determined in Step S502 (Step S503). In this embodiment, the determined number of pieces of presentation information is generated.
An area 513 represents the position and the size in the projection image 501 of the marker having a marker ID S1_LC204_230BK. Presentation information 515 represents the presentation information generated by the presentation information generation unit 114 when the distance between the information processing apparatus 10 and the markers is 2 m. In this example, when the distance is 2 m, the presentation information generation unit 114 determines in Step S502 that one piece of presentation information is to be generated.
In
The presentation information generation unit 114 generates, when generating a plurality of pieces of presentation information, the presentation information so that the content of each piece of presentation information is different. Specifically, among the pieces of related information corresponding to the markers, an item whose value is different for every marker ID is preferentially included in the presentation information.
As shown in
In
In
In
The presentation information generation unit 114 estimates, for all of the markers included in the captured image, the projectable surface including the markers (Step S601). The projectable surface may be estimated by, similar to the processing in Step S401 of
Next, the presentation information generation unit 114 determines the number of pieces of presentation information to be generated based on the size of the projectable surface of each marker (Step S602). An example of the method of determining the number of pieces of presentation information is a method in which the number of pieces of presentation information is determined in accordance with a total value of the surface area of the projectable surface. In this embodiment, when the surface area of the projectable surface is less than 50,000 square pixels, one piece of presentation information is generated, and when the surface area of the projectable surface is 50,000 square pixels or more, the same number of pieces of presentation information as the number of recognized markers is generated. Next, the presentation information generation unit 114 generates as many pieces of presentation information as the number determined in Step S602 (Step S603).
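A corresponding sketch of the area-based determination in Step S602, using the 50,000 square-pixel threshold described above, might look as follows.

```python
def decide_number_of_pieces_by_area(surface_areas_px2, threshold_px2=50_000):
    """Return the number of pieces of presentation information to generate:
    one consolidated piece when the total projectable surface is smaller than
    the threshold, otherwise one piece per recognized marker."""
    if sum(surface_areas_px2) < threshold_px2:
        return 1
    return len(surface_areas_px2)
```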
In
In
In the examples of
However, depending on the color of the projection surface, there may be a case in which an area including a given marker and an area including another marker are not contiguous in the projectable surface. An example of such a case is when a plurality of areas having a small color gradient are split by an area having a large color gradient.
In such a case, when presentation information that straddles a plurality of areas in one projectable surface is projected, visibility may decrease due to a difference in the color gradient, for example. Therefore, in the example illustrated in
In this example, because an area having a large color gradient is present in the projectable surface, the projectable surface is divided by the presentation information generation unit 114 into a first divided surface 711 and a second divided surface 712. The markers 623 and 624 are included in the divided surface 711. The markers 621 and 622 are included in the second divided surface 712, which is in a separate area that is not contiguous to the divided surface 711.
As described above, in Step S602, the number of pieces of presentation information to be generated in the projectable surface is determined to be one. However, the presentation information generation unit 114 generates one piece of presentation information for each of the divided surface 711 and the second divided surface 712. As a result, two pieces of presentation information are projected on the pre-divided projectable surface.
As illustrated in
Thus, a suitable amount of presentation information in accordance with the projection environment may be projected by changing the content and the number of pieces of presentation information in accordance with the distance between the information processing apparatus 10 and the markers, and the surface area and the shape of the projectable surface.
In Step S502 and Step S602, the presentation information generation unit 114 may determine the number of pieces of presentation information in accordance with the f-number of the projector 106. As an example, when the diaphragm is opened to its maximum (that is, the f-number is at its minimum), the presentation information generation unit 114 may reduce the number of pieces of presentation information, and increase the size of the characters and the images to be displayed by that reduction amount. The fact that the user has opened the diaphragm to its maximum means that the projection video of the projector 106 is so difficult to see that it is necessary to increase the brightness of the projector 106 to its maximum level. Therefore, reducing the number of pieces of presentation information when the diaphragm is opened to its maximum and increasing the size of the characters and images to be displayed by that reduction amount has the effect of improving visibility for the user. In place of reducing the number of pieces of presentation information, the display mode of the presentation information may also be changed so as to increase its visibility.
In this embodiment, a projector is used as an example of the image display unit. However, the present invention is not limited to this, and may use any display device capable of displaying a live view image, such as a head-mounted display, a smartphone, or a tablet computer. The term “distance” when using a projector as the image display unit refers to the distance from the information processing apparatus to the projection surface. On the other hand, because the concept of a projection surface does not apply when using a display device as the image display unit, the term “distance” in that case refers to the distance from the information processing apparatus to the target object.
When a display device is used as the image display unit, information can also be displayed on a position different from the target object. For example, information can be displayed even in a space surrounding the target object.
Information may also be displayed by a method different from that in this embodiment. For example, information may be displayed in a place that is not on the target object. However, when there are a plurality of target objects arranged across the entire screen, and there is no room to display information other than on the target objects, it is difficult to display information on a place that is not on a target object. In this case, an effect of improving visibility for the user can be obtained by the method according to this embodiment. Further, in this embodiment, a case is described in which the recognition unit 111 is configured to recognize the markers. However, the recognition unit 111 may use objects other than markers for recognition. For example, the recognition unit 111 may be configured to recognize objects themselves rather than markers. In this case, the recognition unit 111 is configured to identify an object ID by recognizing the shape of the object; in other words, the target object itself is treated as a marker.
A known technology is utilized to recognize the shape of the object. An example of a simple method is to prepare an image obtained in advance by capturing the shape of the object from a plurality of angles, and perform partial matching with the captured image. It is also effective to prepare the shape of the object as a 3D model, and perform matching with a range image (in this case, a device capable of range acquisition, e.g., a range image sensor or a stereo camera, is necessary).
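As one hedged illustration of the partial-matching approach, the following sketch uses OpenCV template matching against views of each object captured in advance; the score threshold and the data layout are assumptions introduced for the example.

```python
import cv2

def recognize_object(captured_gray, templates, score_threshold=0.8):
    """Identify an object ID by partial template matching.

    templates maps object_id -> list of grayscale images of the object
    captured in advance from a plurality of angles."""
    best_id, best_score, best_loc = None, score_threshold, None
    for object_id, views in templates.items():
        for view in views:
            result = cv2.matchTemplate(captured_gray, view, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            if max_val > best_score:
                best_id, best_score, best_loc = object_id, max_val, max_loc
    return best_id, best_loc   # (None, None) when no template matches well enough
```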
In the first embodiment, when there are a plurality of markers in a captured image, an example in which one piece of presentation information is generated for each of the plurality of markers and an example in which only one piece of presentation information is generated are described. When one piece of presentation information is generated for each marker, the pieces of presentation information and the markers correspond to each other on a “one-to-one basis”, and the relation between the pieces of presentation information and the markers is clear. When only one piece of presentation information is generated, it is clear that that piece of presentation information corresponds to all of the markers on a “1:all” basis. Therefore, in those examples, the correspondence relation between the markers and the pieces of presentation information is clear.
The number of pieces of presentation information is not limited to this. For example, the number of pieces of presentation information may be an arbitrary number that is two or more and less than the number of markers. Increasing or decreasing the number of pieces of presentation information within that range allows the amount of information to be finely adjusted in accordance with the projection environment, and visibility to be improved. However, in this case, there are a plurality of pieces of presentation information and a plurality of markers, and hence the correspondence relation between the pieces of presentation information and the markers is not clear.
In a second embodiment of the present invention, a method is described in which, when the relation between the markers and the pieces of presentation information is not clear, the positions of the pieces of presentation information are determined so as to clarify that relation. In this embodiment, a plurality of markers are grouped into N marker groups each including one or more markers, and one piece of presentation information is associated with each marker group. N is a number that is two or more and less than the number of markers.
Because there may be cases in which there are a plurality of combinations of the markers to be included in the marker groups, among the plurality of combinations, it is desired that a combination that increases visibility be selected. Therefore, in this embodiment, combinations are selected in which, among the markers to be included in the marker groups, markers having a high similarity degree are arranged in the same marker group, and markers having a low similarity degree are arranged in different marker groups. Specifically, the similarity degree among the markers forming each marker group is calculated, and the combination of the markers to be included in each marker group is determined by using each similarity degree.
When the distances among the markers to be included in one marker group are far apart, regardless of where the presentation information is displayed, the presentation information is displayed at a position far away from at least one of the markers. As a result, it is difficult for the user to identify the markers corresponding to the presentation information. Therefore, in this embodiment, the information processing apparatus 10 is configured to perform processing for calculating the distances among the markers forming the marker groups, and determining the combination of the markers to be included in each marker group by using the distance calculated for each marker group. As described above, the target objects themselves may also be used as markers. In this case, the distances among the target objects are calculated, and the combinations of the target objects to be included in target object groups are determined.
In
As illustrated in
As shown in
As shown in
SELs 1 to 4 indicate the results of selecting one marker from among the four markers, and allocating the selected marker to a marker group 1 and the remaining three markers to a marker group 2. SELs 5 to 7 show the results of selecting two markers from among the four markers, and allocating the selected markers to the marker group 1 and the remaining two markers to the marker group 2. As shown in
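The selection combinations SEL 1 to SEL 7 correspond to the ways of dividing four markers into two non-empty marker groups. A sketch of that enumeration, with illustrative marker labels, is shown below.

```python
from itertools import combinations

def two_group_partitions(marker_ids):
    """Enumerate every way of dividing the markers into two non-empty marker groups."""
    markers = list(marker_ids)
    partitions = []
    # Choosing group 1 of size 1 .. len//2 covers every split; equal-sized splits
    # would otherwise appear twice, so the mirrored duplicate is skipped.
    for size in range(1, len(markers) // 2 + 1):
        for group1 in combinations(markers, size):
            group2 = tuple(m for m in markers if m not in group1)
            if size == len(markers) - size and group1 > group2:
                continue
            partitions.append((group1, group2))
    return partitions

# For four markers: four 1+3 splits and three 2+2 splits, i.e. seven combinations in total.
assert len(two_group_partitions(["m1", "m2", "m3", "m4"])) == 7
```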
The presentation information generation unit 114 identifies the markers allocated to each marker group for each combination of the marker groups (Step S901). For example, for the SEL 1 shown in
Next, the presentation information generation unit 114 calculates the similarity degree among the markers included in each of the marker groups (Step S902). In this embodiment, as a method of calculating the similarity degree, the similarity degree of the related information corresponding to each marker is utilized. The similarity degree may be defined by an arbitrary method. However, in this embodiment, the similarity degree is defined based on the number of pieces of related information that have the same value for all of the markers included in the group. When the number of pieces of related information having the same value is larger, the similarity degree is determined to be higher. When the number of markers included in a group is one, the similarity degree is taken to be zero.
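A sketch of the similarity-degree calculation in Step S902 under this definition might look as follows; related_info is assumed to map each marker ID to its item-value pairs.

```python
def group_similarity(marker_ids, related_info):
    """Similarity degree of one marker group: the number of related-information
    items whose value is identical for every marker in the group, or zero when
    the group contains a single marker.

    related_info maps marker_id -> {item: value}."""
    if len(marker_ids) <= 1:
        return 0
    first, rest = marker_ids[0], marker_ids[1:]
    shared = 0
    for item, value in related_info[first].items():
        if all(related_info[m].get(item) == value for m in rest):
            shared += 1
    return shared
```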
Therefore, for example, for the SEL 1 shown in
The presentation information generation unit 114 determines whether or not the processing of Step S902 is complete for all the marker groups (Step S903). When the processing is not complete (Step S903: N), the processing of Step S902 is executed again. When the processing is complete (Step S903: Y), the presentation information generation unit 114 determines whether or not the processing is complete for all the selection combinations of the marker groups (Step S904). In the example shown in
When there is only one selection combination that gives the highest total similarity degree (Step S905: N), the presentation information is generated by using that selection combination (Step S908). When there are a plurality of selection combinations that give the highest total similarity degree (Step S905: Y), the presentation information generation unit 114 calculates the inter-marker distance in the marker group for those selection combinations (Step S906).
In this embodiment, the inter-marker distance in the marker group is calculated based on the expression illustrated in
The presentation information generation unit 114 generates the presentation information by using the selection combination giving the smallest total inter-marker distance in the group among the selection combinations of the groups giving the highest similarity degree (Step S907), and then ends the processing.
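Combining Steps S902 to S907, the selection of the marker-group combination can be sketched as below. The sketch reuses the group_similarity helper from above; the inter-marker distance expression appears only in the drawing, so it is passed in here as a group_distance function rather than reproduced.

```python
def choose_partition(partitions, related_info, group_distance):
    """Pick the selection combination with the highest total similarity degree,
    breaking ties by the smallest total inter-marker distance in the group.

    partitions     -- list of (group1, group2) marker-ID tuples
    group_distance -- stand-in for the inter-marker distance expression"""
    def total_similarity(partition):
        return sum(group_similarity(list(group), related_info) for group in partition)

    best_similarity = max(total_similarity(p) for p in partitions)
    candidates = [p for p in partitions if total_similarity(p) == best_similarity]
    if len(candidates) == 1:
        return candidates[0]                                  # Step S908
    # Step S906 / Step S907: tie-break by total inter-marker distance in the group
    return min(candidates, key=lambda p: sum(group_distance(g) for g in p))
```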
The processing for determining which selection combination is to be used to generate the presentation information among the selection combinations of the marker groups shown in
First, the presentation information generation unit 114 executes the processing of Step S902 on the two marker groups included in the selection combination SEL 1. Because only one marker is included in the marker group 1 of SEL 1, the similarity degree is zero, and because the shared value for the markers included in the marker group 2 is the product name LC204, the similarity degree is one. As a result, the similarity degree of the SEL 1 is obtained by adding the similarity degree of the marker group 1 and the similarity degree of the marker group 2, to give a total of one. Similarly, the similarity degrees of the SEL 2, the SEL 3, and the SEL 4 are also all one.
On the other hand, for the SEL 5, in the marker group 1, because the product name LC204 and the size of 23 cm match in the related information, the similarity degree is two. In the marker group 2 as well, the product name and the size match in the related information, and hence the similarity degree is two. Therefore, the total similarity degree is four.
For the SEL 6, in the marker group 1, the size in the related information is different, but the product name and the color match, and hence the similarity degree is two. Similarly, the similarity degree for the marker group 2 is two. Therefore, the total similarity degree is four.
For the SEL 7, in both the marker group 1 and the marker group 2, only the product name matches in the related information, and hence the similarity degree for each marker group is one. Therefore, the total similarity degree is two.
Thus, in Step S905, the presentation information generation unit 114 determines that there are two selection combinations giving the highest similarity degree, namely, the SEL 5 and the SEL 6. Next, in Step S906, the presentation information generation unit 114 calculates the inter-marker distance in the marker group for each marker group included in the SEL 5 and the SEL 6.
In the calculation expression of
Next, based on the positions of the markers shown in
The inter-marker distance of the marker group 2 of the SEL 5 is also calculated to be 600 (XM=400, YM=100). As a result, the total inter-marker distance in the marker group of the marker groups included in the SEL 5 is 1,200.
Next, the inter-marker distance of the marker group 1 of the SEL 6 is calculated to be about 721 (XM=400, YM=300). Further, the inter-marker distance of the marker group 2 of the SEL 6 is calculated to be about 721 (XM=400, YM=300). As a result, the total inter-marker distance in the marker group of the marker groups included in the SEL 6 is 1,442. Therefore, in Step S907, the presentation information generation unit 114 determines that the presentation information is to be generated based on the SEL 5, which has the smallest inter-marker distance in the marker group. Further, the presentation information generation unit 114 projects the presentation information for the marker group 1, which includes the marker 623 and the marker 624, and projects the presentation information for the marker group 2, which includes the marker 621 and the marker 622.
In
As illustrated in
Presentation information 724 is projected for the two markers 621 and 622. As the values of the presentation information 723, “LC204” and “23 cm” are projected. As a result, marker groups including markers having a high similarity degree can be generated, and presentation information can be presented for each marker group. Further, the correspondence relation between the markers and the presentation information is clearer.
Thus, determination of the selection combinations of the marker groups based on the related information on the markers and the positions of the markers included in the captured image enables presentation information to be presented for marker groups that have a high similarity degree in content and that are close to each other.
In
For example, the information processing apparatus 10 may be configured to present information only on the M (M being a number determined in advance) groups having the highest priority, based on a priority determined in advance. When the number of groups is larger than M, information on the groups having a lower priority is not presented.
Similarly, presentation information 1114 is presentation information on the product B, and presentation information 1115 is presentation information on the product C. For example, in this store, a focus is placed on the sale of the product A and the product B, and hence the priorities are set as “product A > product B > product C”, and the number of pieces of information to be presented is set to M=2. In
As another embodiment, the priority of a product that is a short distance from the center of projection may be set higher. In this case, group information is presented in order of closeness to the center of projection. As yet another embodiment, the number of pieces of information to be presented may be determined in advance, or the number may be determined in accordance with the size of the projectable area.
In
However, the present invention is not limited to this. Markers having related information that does not match may also be present in the groups. Therefore, a group may be formed that includes a marker having related information that does not match, and information on that group may be presented.
For example, when the user wishes to view the overall projection image from overhead from a distance, there may be a case in which it is necessary to present information in a manner that tolerates a slight level of information error and gives priority to visibility. In this case, by forming a group that includes a marker having related information that does not match, there is an advantage in that the user can more easily grasp an overview of the overall projection image.
In
In this example, the product C is included in the group of the product A. However, whether the product C is included in the group of the product A or in the group of the product B may be freely determined. For example, which group the product C is to be included in may be determined based on the relative position of each of the product A and the product B with respect to the product C. Further, for example, the distance between a center position of each group of the product A and the product B and the center position of the product C may be calculated, and a group may be formed that includes the product C in the product having the smaller distance.
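A sketch of the center-distance rule described above, with illustrative names, is shown below.

```python
def assign_to_nearest_group(position_c, group_positions):
    """Assign the product C marker to the group whose center position is closest.

    position_c      -- (x, y) position of the product C marker
    group_positions -- {group_name: [(x, y), ...]} marker positions of each group"""
    def center(points):
        xs, ys = zip(*points)
        return sum(xs) / len(xs), sum(ys) / len(ys)

    def squared_distance(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return min(group_positions,
               key=lambda name: squared_distance(position_c, center(group_positions[name])))
```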
Further, the group to which the product C belongs may also be determined so that the frame surrounding the group has a smaller number of corners. In
In
In
In
Thus, in the examples illustrated in
As described above, according to the present invention, visibility for the user when presentation information is projected can be improved by using a captured image to change the amount of related information. The above-mentioned embodiments are exemplary, and various modifications may be made thereto. For example, in the above-mentioned embodiments, a two-dimensional barcode is used as the marker. However, the marker is not limited to a barcode, and any identifiable graphic or the like may be used as the marker. A color barcode, for example, may also be used.
Thus, according to the present invention, when related information on an object is to be projected on the object in an environment in which the distance between the projection apparatus and the object to serve as the projection surface is variable, the visibility of the related information can be improved.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2015-180978, filed Sep. 14, 2015, and Japanese Patent Application No. 2016-051570, filed Mar. 15, 2016, which are hereby incorporated by reference herein in their entirety.
Number | Date | Country | Kind
2015-180978 | Sep. 14, 2015 | JP | national
2016-051570 | Mar. 15, 2016 | JP | national