The present disclosure relates to an information processing device, an information processing method, and a program.
Conventionally, a device that displays a point indicated by a user has been proposed (refer to PTL 1, for example).
[PTL 1]
JP 5614014 B
In the device disclosed in PTL 1, it is assumed that the surface of an object positioned in a direction indicated by a user is the point indicated by the user. Accordingly, there is concern that the points the user can indicate are limited and that the position of an indication point may be inconsistent with the actual display position.
An object of the present disclosure is to provide an information processing device, an information processing method, and a program capable of presenting candidates for points that a user wants to indicate (hereinafter appropriately referred to as candidate points), which are positioned in an operation direction of the user.
The present disclosure is, for example, an information processing device including a controller that detects at least two candidate points positioned in a detected operation direction, switches the candidate points such that a predetermined candidate point is selected from the at least two candidate points, and displays at least the selected candidate point.
In addition, the present disclosure is, for example, an information processing method, using a controller, including detecting at least two candidate points positioned in a detected operation direction, switching the candidate points such that a predetermined candidate point is selected from the at least two candidate points, and displaying at least the selected candidate point.
In addition, the present disclosure is, for example, a program causing a computer to execute an information processing method, using a controller, including detecting at least two candidate points positioned in a detected operation direction, switching the candidate points such that a predetermined candidate point is selected from the at least two candidate points, and displaying at least the selected candidate point.
According to at least one embodiment of the present disclosure, it is possible to present candidate points positioned in an operation direction of a user. Note that the advantageous effect described here is not necessarily limiting, and any advantageous effects described in the present disclosure may be achieved. Further, interpretation of the content of the present disclosure should not be limited by the exemplified advantageous effect.
Hereinafter, embodiments of the present technique will be described with reference to the drawings. Note that the description will be given in the following order.
Embodiments described below are preferred specific examples of the present disclosure, and the content of the present disclosure is not limited to the embodiments.
First, an overview of an embodiment will be described. The present embodiment is realized by a so-called projection mapping system that projects and displays various types of information on a predetermined object.
The projection mapping system 1 includes an information processing device 2 which controls projection display according to projection mapping and a table 3. Predetermined information is projected and displayed on a predetermined projection area 3a on the surface of the table 3 according to control of the information processing device 2. For example, an intersection point of an operation direction pointed at by an index finger F of a user (operator) and the surface of the table 3 is displayed on the projection area 3a as an indication point P1.
Meanwhile, although
First, in order to facilitate understanding of the present disclosure, problems to be considered in an embodiment will be described.
In the case of such an example, an intersection point Pa, an intersection point Pb and an intersection point Pc are present as intersection points of the operation direction FD and the surfaces of the respective objects, as illustrated in
In addition, as illustrated in
[Configuration Example of Information Processing Device]
The input unit 200 is a device that detects states of the table surface of the table 3 and an object on the table 3, an input device that receives a user operation or the like, or the like. For example, a device having an image sensor is an example of the input unit 200, and more specifically, a red/green/blue (RGB) camera is an example. In addition, the input unit 200 may be a range sensor, and a stereo camera, a time of flight (ToF) camera, a structured light camera, and the like are examples of the range sensor. A sensor other than these exemplified sensors may be applied as the input unit 200.
The environment recognition unit 201 receives input information input from the input unit 200 and estimates three-dimensional structure information of the table 3 and an object on the table 3. For example, when the input unit 200 includes a range sensor, three-dimensional structure information is estimated by applying a known 3D model to a three-dimensional point group acquired from the range sensor or applying a plane to a local point group to reconstruct the point group. Of course, other methods may be applied as a method of estimating the three-dimensional structure information. The three-dimensional structure information acquired by the environment recognition unit 201 is supplied to the candidate point detection unit 203.
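As a hedged illustration of the plane-fitting approach mentioned above, the following Python sketch fits a plane to a local point group by least squares via SVD. The function name, array shapes, and the synthetic data are assumptions introduced for illustration and are not the actual implementation of the environment recognition unit 201.

```python
# Hypothetical sketch: fit a plane to a local 3D point group via least squares (SVD).
import numpy as np

def fit_local_plane(points: np.ndarray):
    """points: (N, 3) array of 3D points from a range-sensor neighborhood.

    Returns (centroid, unit_normal) of the best-fit plane.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Example: points sampled (noisily) from a horizontal surface z = 0, e.g. the table top.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 200),
                       rng.uniform(0, 1, 200),
                       rng.normal(0, 1e-3, 200)])
c, n = fit_local_plane(pts)
print(c, n)  # the normal is close to (0, 0, +/-1)
```

Applying such a fit to many local neighborhoods, or fitting known 3D models, yields the kind of three-dimensional structure information described above.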
The human recognition unit 202 receives the input information input from the input unit 200 and estimates posture information of the user. The posture information of the user includes information about at least the hands (including fingers). An example of a method of estimating the posture information of the user is a method of representing the posture of the user as sets of feature points of joints of the hands or the body and estimating the positions thereof through a neural network. Of course, other methods may be applied as a method of estimating the posture information of the user. Meanwhile, as feature points of the hands, fingertips, knuckles, the bases of the fingers, the centers of the palms of the hands, wrists, and the like, like points indicated by white circles in
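As a rough illustration only, the posture information described above might be organized as a set of named feature points per hand, as in the sketch below. The type names, joint labels, and coordinate values are assumptions; the actual numbering of the feature points referenced in the figure is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class HandPosture:
    # Hypothetical joint labels, e.g. "index_tip", "index_base", "thumb_tip", "wrist".
    keypoints: Dict[str, Point3D] = field(default_factory=dict)

@dataclass
class PostureInfo:
    right_hand: HandPosture
    left_hand: HandPosture
    gaze_direction: Optional[Point3D] = None  # optionally used for gaze-based selection

posture = PostureInfo(
    right_hand=HandPosture({"index_tip": (0.42, 0.10, 0.25),
                            "index_base": (0.40, 0.08, 0.20)}),
    left_hand=HandPosture(),
)
print(posture.right_hand.keypoints["index_tip"])
```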
The candidate point detection unit 203 receives the posture information supplied from the human recognition unit 202 and the three-dimensional structure information supplied from the environment recognition unit 201 and detects candidate points positioned in an operation direction in which finger pointing is performed on the basis of the information. For example, an extension of a straight line (finger pointing line) that passes through a fingertip position of an index finger (e.g., a fingertip point corresponding to number 7 in
As an example, an intersection point of an operation direction and an object surface (e.g., the surface of the table 3 and the surface of an object placed on the table 3) estimated by the three-dimensional structure information is detected as a candidate point. Further, there are cases in which a single candidate point is detected by the candidate point detection unit 203 and there are also cases in which a candidate point group including a plurality of candidate points is detected by the candidate point detection unit 203. Candidate point information about a candidate point detected by the candidate point detection unit 203 is supplied to the selection unit 204 and the display control unit 205.
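The following Python sketch illustrates, under simplifying assumptions, how candidate points might be detected as intersections of the finger-pointing ray with object surfaces. Object surfaces are approximated here as axis-aligned boxes purely for brevity; the actual candidate point detection unit 203 works on the three-dimensional structure information supplied by the environment recognition unit 201, and all names below are hypothetical.

```python
import numpy as np

def ray_box_intersections(origin, direction, box_min, box_max):
    """Slab method: return the positive hit distances (entry, exit) of a ray and a box."""
    direction = np.where(direction == 0, 1e-12, direction)
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    if t_near > t_far or t_far < 0:
        return []                       # the ray misses this object
    return [t for t in (t_near, t_far) if t > 0]

def detect_candidate_points(index_base, index_tip, boxes):
    origin = np.asarray(index_base, dtype=float)
    direction = np.asarray(index_tip, dtype=float) - origin
    direction /= np.linalg.norm(direction)      # finger pointing line through base and tip
    hits = []
    for box_min, box_max in boxes:
        for t in ray_box_intersections(origin, direction,
                                       np.asarray(box_min, float), np.asarray(box_max, float)):
            hits.append(origin + t * direction)
    # Sort candidates from the user toward the far side of the operation direction.
    hits.sort(key=lambda p: np.linalg.norm(p - origin))
    return hits

# Example: one object on the table and the table surface itself (as a thin box).
boxes = [((0.3, 0.3, 0.0), (0.5, 0.5, 0.2)),   # object on the table
         ((0.0, 0.0, -0.01), (1.0, 1.0, 0.0))] # table surface
print(detect_candidate_points((0.1, 0.1, 0.3), (0.15, 0.15, 0.28), boxes))
```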
Meanwhile, a finger pointing line may not be a straight line, for example. In addition, not only an intersection point with respect to an object surface but also a point between surfaces may be a candidate point. Further, detected candidate points may be narrowed down according to a condition and an area set in advance. This will be described in detail later.
The selection unit 204 receives the candidate point information supplied from the candidate point detection unit 203 and the posture information supplied from the human recognition unit 202 and specifies a determined point that has been determined to be selected from among a plurality of candidate points. Meanwhile, a candidate point and a determined point are regions corresponding to predetermined positions and are not necessarily limited to a dot-shaped region. In the present embodiment, in a case where a plurality of candidate points detected by the candidate point detection unit 203 are present, candidate points to be selected are switched according to a gesture of the user. For example, the selection unit 204 detects presence or absence of a predetermined gesture on the basis of the posture information. Then, when the predetermined gesture is detected, the selection unit 204 switches (changes) candidate points to be selected. A candidate point to be selected is determined to be selected according to a gesture for determining selection, elapse of time for selection, or the like and specified as a determined point. Meanwhile, when there is a single candidate point, candidate point switching is not performed. Selection information acquired according to operation of the selection unit 204 is supplied to the display control unit 205.
The display control unit 205 mainly performs control with respect to projection display. For example, the display control unit 205 performs control for projection display of candidate points for the surface of the table 3 and an object on the table 3 on the basis of the candidate point information supplied from the candidate point detection unit 203. In addition, the display control unit 205 performs control with respect to display for switching candidate points to be selected on the basis of the selection information input from the selection unit 204.
The output unit 206 is a device that outputs information according to control of the display control unit 205. The output unit 206 is, for example, a projector or a head-up display (HUD). Further, a device in addition to the projector (e.g., a display or a speaker) may be included in the output unit 206.
In the present embodiment, as an example, a controller is composed of the candidate point detection unit 203, the selection unit 204 and the display control unit 205.
[Operation Example of Information Processing Device]
An operation example of the information processing device 2 will be schematically described. The environment recognition unit 201 acquires three-dimensional structure information on the basis of input information input from the input unit 200. In addition, the human recognition unit 202 acquires posture information of a user on the basis of the input information input from the input unit 200. The candidate point detection unit 203 detects an operation direction on the basis of the posture information. Then, the candidate point detection unit 203 detects candidate points positioned in the operation direction. Candidate point information about the detected candidate points is supplied to the display control unit 205. The display control unit 205 performs control with respect to projection display of the candidate points on the basis of the candidate point information. The output unit 206 operates according to control of the display control unit 205 and thus the candidate points are presented to the user.
The selection unit 204 detects whether a predetermined gesture has been performed on the basis of the posture information. When the gesture has been performed, selection information is output to the display control unit 205. The display control unit 205 controls the output unit 206 such that projection display for switching candidate points to be selected is performed on the basis of the selection information. For example, the display control unit 205 generates display data such that candidate points to be selected are switched and controls the output unit 206 such that projection display based on the display data is performed. The output unit 206 operates according to control of the display control unit 205, and thus a state in which the candidate points to be selected are switched is presented to the user. At least the candidate points to be selected are presented to the user through projection display according to projection.
Meanwhile, the selection unit 204 detects whether a gesture for determining a candidate point to be selected as a determined point has been performed on the basis of the posture information. When the gesture has been performed, determination information indicating that the selection has been determined is output. Processing according to an application is performed on the basis of the determination information. For example, processing of projecting and displaying predetermined information such as fireworks at the position of the determined point and processing of reproducing sound or the like from the position of the determined point are performed. The determination information output from the selection unit 204 is supplied to a component that executes the above-described processing. The component may be included in the information processing device 2.
Next, specific examples related to presentation of candidate points will be described.
When the user U performs finger pointing using an index finger F, an operation direction FD that is a direction indicated by the finger pointing is detected by the information processing device 2. Meanwhile, although the operation direction FD is represented by a line that is actually invisible in the present embodiment, it may be made visible by a hologram or the like.
Then, the information processing device 2 detects candidate points positioned in the operation direction FD. In the example illustrated in
As described above, in the general technology, when a shielding object such as the object 31A is present in the operation direction FD, the candidate point PA is recognized as the indication point and various types of processing are performed. Accordingly, there is a problem that a point that cannot be indicated from the operation position of the user U, such as the candidate point PB or PC, more specifically, a candidate point positioned behind the object 31A serving as a shielding object, cannot be indicated. In the present embodiment, however, the candidate points PB and PC are also displayed and selectable. Accordingly, the user U may select the candidate point PB or the candidate point PC when the point that the user wants to indicate is the candidate point PB or the candidate point PC. Of course, the user U may select the candidate point PA when the point that the user wants to indicate is the candidate point PA. In this way, the user U can also select a point shielded by the object 31A as an indication point. Meanwhile, the inside of a shielding object positioned in the operation direction FD is another example of a point that cannot be indicated from the operation position of the user U.
In this case, a determined point that is at least a selected candidate point is projected and displayed. Accordingly, another user present near the projection area 3a (not illustrated) can recognize which point is an indication point.
A second example with respect to presentation of candidate points will be described with reference to
In the example illustrated in
Accordingly, the display control unit 205 generates image data such that the candidate points PB to PE present behind the object 41A can be recognized by the user U. Then, the display control unit 205 controls the output unit 206 such that the image data is projected and displayed, for example, on the surface of the object 41A that is visibly recognized by the user U. According to such processing, the candidate points PA to PE are projected and displayed on the surface of the object 41A, as illustrated in
Meanwhile, as illustrated in
In addition, as illustrated in
Meanwhile, the example presented above is merely an example and the present disclosure is not limited thereto. Candidate points detected according to processing of the information processing device 2 on the basis of a candidate point detection example which will be described later are presented through projection display in a state in which the user can recognize them.
[Candidate Point Detection Example]
Next, examples of which points the candidate point detection unit 203 detects as candidate points in the candidate point detection processing will be described.
The first example is an example in which the candidate point detection unit 203 detects intersection points of the operation direction FD and the surface of the object as candidate points. As illustrated in
Meanwhile, the object 51 is, for example, a real object (a real object that is not hollow). Accordingly, the candidate points PD and PE cannot be presented to the user U as they are. In such a case, the display control unit 205 generates image data in which the candidate points PD and PE are present inside the object 51 and projects and displays an image based on the image data on the surface of the object 51, as described above with reference to
Meanwhile, although it will be described in detail later, when a candidate point inside the object 51 is determined as a determined point, the display control unit 205 may generate image data in which the inside of the object 51 is transparent and visible and project and display an image based on the image data on the surface of the object 51, for example. When the user U wants to know the structure and state of the inside of the object 51, he/she may determine the candidate points PD and PE inside the object 51 as determined points.
The operation direction FD is detected as in the first example. The third example is an example of detecting a predetermined point between the surfaces of objects in the operation direction FD as a candidate point. For example, a vicinity of the center between the fingertip of an index finger F and a surface of the object 52 (surface positioned on the side of the user U) is detected as a candidate point PF. In addition, a vicinity of the center between the object 52 and the object 53 is detected as a candidate point PG. Further, a vicinity of the center between internal circumferential surfaces of the object 53 is detected as a candidate point PH. Further, a vicinity of the center between the object 53 and the table 3 is detected as a candidate point PI. In this manner, predetermined points between objects may be detected as candidate points. Further, a point other than a vicinity of the center between objects may be detected as a candidate point.
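A minimal sketch of this third example follows, assuming that the sorted intersection distances along the operation direction are already available (for example, from a ray-casting routine like the one sketched earlier); the function and parameter names are assumptions.

```python
import numpy as np

def midpoint_candidates(origin, direction, surface_hits, include_fingertip=True):
    """surface_hits: sorted hit distances (floats) from the fingertip along a unit ray."""
    anchors = ([0.0] if include_fingertip else []) + list(surface_hits)
    midpoints = []
    for near, far in zip(anchors, anchors[1:]):
        t_mid = 0.5 * (near + far)          # vicinity of the center between surfaces
        midpoints.append(np.asarray(origin, float) + t_mid * np.asarray(direction, float))
    return midpoints

# Example: surfaces hit at 0.4 m and 0.7 m give candidates near 0.2 m and 0.55 m along the ray.
print(midpoint_candidates((0.0, 0.0, 0.3), (1.0, 0.0, 0.0), [0.4, 0.7]))
```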
Meanwhile, an object on which an image will be projected and displayed is not present at a point between objects corresponding to the candidate point PF or the like. Accordingly, when a candidate point is detected as in this example, display on a point corresponding to the candidate point is performed through augmented reality (AR), virtual reality (VR), or the like. For example, when the candidate point PG is selected as a determined point, predetermined display using AR or the like (e.g., display of a mascot character) is performed between the object 52 and the object 53. In addition, when the candidate point PH is selected as a determined point, display such as pouring a drink into the cup-shaped object 53 is performed through AR, or the like.
In the above-described third example, intersection points of the operation direction FD and the surfaces of objects may be detected as candidate points as in the first example. In addition, in the above-described third example, predetermined points inside objects may be detected as candidate points as in the second example.
Candidate point detection examples have been described above. A dot-shaped mark indicating a candidate point, or the like is projected and displayed on the surface of an object at a position corresponding to each detected candidate point.
[Settings for Candidate Points]
Candidate point detection examples have been described above. In the present embodiment, predetermined detection rules can be set for candidate points detected by the candidate point detection unit 203. The predetermined detection rules are settings with respect to an area and a position at which a candidate point is detected. Although settings with respect to a detection area in which a candidate point is detected will be mainly described below, settings with respect to projection display of a candidate point (e.g., a color and a size when a candidate point is projected and displayed) may be possible.
The detection rules are as follows, for example (a simple filtering sketch is given after the list).
A rule of excluding a candidate point positioned on the back side of an object as viewed by the user U.
A rule of indicating which pattern in the above-described first to third examples is used to detect a candidate point.
(Meanwhile, switching of patterns may be performed through a gesture or the like.)
A rule of excluding a candidate point from candidate points if a selection action is undefined in an application.
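A simple sketch of narrowing down candidate points with rules such as those listed above is shown below. The candidate representation and the rule predicates are assumptions introduced for illustration.

```python
from typing import Callable, List, NamedTuple

class Candidate(NamedTuple):
    position: tuple           # 3D position of the candidate point
    occluded_from_user: bool  # True if on the back side of an object as viewed by the user
    selectable_in_app: bool   # True if the application defines a selection action here

Rule = Callable[[Candidate], bool]   # returns True if the candidate is kept

def apply_detection_rules(candidates: List[Candidate], rules: List[Rule]) -> List[Candidate]:
    return [c for c in candidates if all(rule(c) for rule in rules)]

rules: List[Rule] = [
    lambda c: not c.occluded_from_user,  # exclude points on the back side of an object
    lambda c: c.selectable_in_app,       # exclude points with no selection action defined
]
kept = apply_detection_rules(
    [Candidate((0.4, 0.4, 0.2), False, True),
     Candidate((0.6, 0.6, 0.0), True, True)],
    rules)
print(kept)  # only the first candidate remains
```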
Settings with respect to candidate points are performed through an appropriate method such as an input through a gesture, an input to an input device such as a remote controller or a button, or an audio input.
An area in which a candidate point is detected may be specifically set by the user U.
For example, as illustrated in
Here, consider a case in which the user U performs an operation of indicating the projection area 3a with an index finger F from the left side (the side of the object 61) when facing the drawing. An intersection point of the operation direction FD of the index finger F and the surface of the projection area 3a is projected and displayed by the information processing device 2 as a point P5. Then, the user U performs an operation of rotating the index finger F, as illustrated in
When the first example is applied as a candidate point detection example, candidate points PJ, PK, PL and PM positioned in the area AR1 are detected, as illustrated in
Further, information indicating the set area AR1 is input to the candidate point detection unit 203 as area information, as illustrated in
The candidate point detection rules may not only be set by the user but also be included in the candidate point detection unit 203 in advance. For example, the candidate point detection unit 203 may have a rule of excluding a candidate point at a position at which projection display cannot be performed, or the like.
By allowing settings with respect to candidate points to be performed, more specifically, settings with respect to areas in which candidate points are detected and settings for excluding candidate points from the detected candidate points, it is possible to avoid the inconvenience caused by presenting many candidate points to the user U. For example, it is possible to avoid difficulty in viewing due to projection display of many candidate points and the inconvenience that the number of times candidate points are switched increases when a determined point is determined.
[With Respect to Selection of Candidate Point]
As described above, detected candidate points are appropriately projected and displayed. A predetermined candidate point is selected when a plurality of candidate points are present, and if a position corresponding to the selected candidate point is a point that the user U wants to indicate, the candidate point is determined as a determined point. Meanwhile, at least a candidate point to be selected is projected and displayed. In the present embodiment, the selected candidate point is projected and displayed along with other candidate points.
In the present embodiment, when a plurality of candidate points are present, candidate points to be selected are switched according to a predetermined gesture of the user U. Specifically, the selection unit 204 detects a predetermined gesture or the like on the basis of posture information and switches candidate points to be selected when the gesture is performed. Selection information indicating the candidate point to be selected is supplied to the display control unit 205. The display control unit 205 performs control with respect to presentation for switching candidate points to be selected on the basis of the selection information.
(With Respect to Presentation of Candidate Points to be Selected)
(With Respect to Switching of Candidate Points to be Selected)
Next, an example of a gesture for switching candidate points to be selected will be described.
As illustrated in
For example, when the gesture of clicking the thumb F1 (first click) is performed, the candidate point to be selected is switched from the candidate point PN to the candidate point PO. Further, when the gesture of clicking the thumb F1 (second click) is performed, the candidate point to be selected is switched from the candidate point PO to the candidate point PP. Further, when the gesture of clicking the thumb F1 (third click) is performed, the candidate point to be selected is switched from the candidate point PP to the candidate point PN.
Here, it is assumed that a gesture of twisting a hand is performed, for example. Upon detection of this gesture, the selection unit 204 determines that a mode has switched to a switching mode in which the candidate point to be selected is switched and performs processing of switching the candidate point to be selected. In this state, a gesture of bending the index finger F, for example, is performed. The candidate point to be selected is switched according to the gesture. For example, when the gesture of bending the index finger F is performed once in the initial state, the candidate point to be selected is switched from the candidate point PN to the candidate point PO. When the gesture of bending the index finger F is additionally performed once, the candidate point to be selected is switched from the candidate point PO to the candidate point PP. When the gesture of bending the index finger F is additionally performed once, the candidate point to be selected is switched from the candidate point PP to the candidate point PN. In this manner, candidate points to be selected are continuously switched in the first example and the second example.
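The switching behavior of the first and second examples can be summarized, under the assumption that gesture detection itself is handled elsewhere by the selection unit 204, as a counter advanced cyclically once per detected gesture, as in the following sketch.

```python
class CandidateSelector:
    def __init__(self, num_candidates: int):
        self.num_candidates = num_candidates
        self.selected = 0          # initially, e.g., the candidate nearest the user

    def on_switch_gesture(self) -> int:
        """Called once per detected thumb-click or index-bend gesture."""
        self.selected = (self.selected + 1) % self.num_candidates
        return self.selected

selector = CandidateSelector(3)   # candidates PN, PO, PP
print([selector.on_switch_gesture() for _ in range(4)])  # [1, 2, 0, 1]
```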
The third example of a gesture for switching candidate points to be selected is an example of switching candidate points to be selected in response to the size of an angle θ1 formed by the index finger F and the thumb F1. For example, two threshold values Th1 and Th2 are set for the angle θ1. When the angle formed by the index finger F and the thumb F1 is small, specifically, if the angle θ1 is less than the threshold value Th1, the candidate point PN becomes a candidate point to be selected. In addition, when a gesture of increasing the angle formed by the index finger F and the thumb F1 is performed and thus the angle θ1 becomes equal to or greater than the threshold value Th1, the candidate point to be selected is switched from the candidate point PN to the candidate point PO. When a gesture of increasing the angle formed by the index finger F and the thumb F1 is additionally performed and thus the angle θ1 becomes equal to or greater than the threshold value Th2, the candidate point to be selected is switched from the candidate point PO to the candidate point PP. Meanwhile, when a gesture of decreasing the angle formed by the index finger F and the thumb F1, for example, is performed, specifically, when the angle θ1 becomes less than the threshold value Th1, the candidate point to be selected is switched from the candidate point PP to the candidate point PN. Meanwhile, (discontinuous) switching may be performed such that the candidate point to be selected jumps from the candidate point PN to the candidate point PP according to, for example, an operation of increasing the angle θ1 at a time in this example. Meanwhile, two threshold values are present because there are three candidate points in this example. Of course, the set number of threshold values, and the like may be appropriately changed.
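As a hedged sketch of this third example, the angle θ1 can be mapped onto the candidate point to be selected using thresholds; the concrete threshold values below are assumptions, and in general one threshold fewer than the number of candidates is needed.

```python
import bisect

def select_by_angle(theta_deg: float, thresholds=(20.0, 40.0)) -> int:
    """Returns the index of the candidate to be selected (0 = PN, 1 = PO, 2 = PP)."""
    return bisect.bisect_right(sorted(thresholds), theta_deg)

print(select_by_angle(10.0))  # 0 -> PN  (theta < Th1)
print(select_by_angle(25.0))  # 1 -> PO  (Th1 <= theta < Th2)
print(select_by_angle(50.0))  # 2 -> PP  (theta >= Th2)
```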
Although candidate points to be selected are switched in response to an angle formed by the index finger F and the thumb F1 of one hand in the above-described third example, candidate points to be selected may be switched in response to an angle formed by the index finger F of one hand and the index finger F2 of the other hand, as illustrated in
For example, switching may be performed such that a candidate point positioned on the near side becomes a candidate point to be selected when an angle θ2 formed by the index finger F and the index finger F2 is large, as illustrated in
In the case of a gesture using fingers of both hands, candidate points to be selected can also be switched according to gestures below. For example, when a gesture of making the index finger F and the index finger F2 approximately parallel to each other is performed, candidate points to be selected may be sequentially switched from a candidate point on the near side to a candidate point on the far side. In addition, when a gesture of crossing the index finger F and the index finger F2 in the form of the figure of 8 is performed, for example, candidate points to be selected may be sequentially switched from a candidate point on the far side to a candidate point on the near side.
In the fourth example, information for assisting in selection of a candidate point is projected and displayed near the candidate point. For example, a numeral such as “1” is projected and displayed near the candidate point PN, on the same surface of the object 71 on which the candidate point PN is projected and displayed, as illustrated in
The user U performs a gesture of indicating a number using a finger of the hand opposite to the hand having the index finger F. Candidate points to be selected are switched according to this gesture. Meanwhile,
The fifth example is an example in which candidate points positioned near intersection points of operation directions become candidate points to be selected. The user U performs finger pointing using the index finger F of one hand and finger pointing using the index finger F2 of the other hand. Operation directions FD and FD2 in response to respective finger pointing operations are detected. A candidate point positioned at an intersection point of the operation directions FD and FD2 or a candidate point closest to the intersection point becomes a candidate point to be selected. The user U can switch the candidate point to be selected by changing the direction in which the index finger F2 is directed.
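A possible way to realize this fifth example is sketched below: the near-intersection of the two operation directions is estimated as the midpoint of the points of closest approach of the two rays, and the candidate point nearest to that estimate is selected. The ray representation and the numeric values are assumptions.

```python
import numpy as np

def closest_point_of_two_rays(p1, d1, p2, d2):
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-9:                  # the rays are (nearly) parallel
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))   # midpoint of the closest points

def select_nearest_candidate(candidates, point):
    return min(range(len(candidates)),
               key=lambda i: np.linalg.norm(np.asarray(candidates[i]) - point))

candidates = [(0.3, 0.3, 0.0), (0.5, 0.5, 0.0), (0.7, 0.7, 0.0)]
x = closest_point_of_two_rays((0.0, 0.0, 0.3), (1.0, 1.0, -0.4),
                              (1.0, 0.0, 0.3), (-0.6, 1.0, -0.35))
if x is not None:
    print(select_nearest_candidate(candidates, x))  # index of the nearest candidate
```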
The sixth example is an example in which candidate points to be selected are switched in response to a visual line direction of the user U. The visual line direction of the user U is detected, for example, by the human recognition unit 202 and information representing the detected visual line direction is included in posture information. The selection unit 204 switches candidate points to be selected on the basis of the visual line direction included in the posture information.
As illustrated in
[With Respect to Determined Point]
(With Respect to Determination of Determined Point)
When a plurality of detected candidate points are present, for example, one of the plurality of detected candidate points is determined as a determined point. For example, in a case where a certain candidate point is being selected, when a gesture different from a gesture of switching the candidate point to be selected is detected, the candidate point to be selected is determined as a determined point. When a certain candidate point is being selected, the candidate point to be selected may be determined as a determined point according to audio input such as “determine that.”
A determined point may be determined without a specific input. As in the example illustrated in
(Processing Example when Determined Point has been Determined)
Next, an example of processing performed when a determined point has been determined will be described. Of course, the processing example below is an example and processing that will be performed when a determined point has been determined can be appropriately set according to the use of the information processing device 2, and the like. A configuration for performing processing carried out when a determined point has been determined is appropriately added to the above-described configuration of the information processing device 2.
For example, a predetermined image (which may be a still image or a moving image) is projected and displayed at the position of a determined point. The display method is not limited to projection and the predetermined image may be displayed at the position of the determined point through AR or the like. In addition, it is assumed that a three-dimensional map model is placed on the projection area 3a. Description corresponding to a point determined as a determined point (e.g., description or explanation about the origin) may be reproduced.
Only designation of a specific point or person in a certain space may be performed. For example, when a speaker finger-points at a listener positioned at the back of a press conference or a classroom, surrounding people may not be able to tell which listener the speaker is pointing at because other listeners are also present in front. If the technology according to the present embodiment is applied, when the speaker performs finger pointing, a candidate point is set for each listener positioned in the operation direction, for example. The speaker determines a determined point from among the candidate points and displays the determined point in an emphasized manner, and thus surrounding people can recognize the listener designated by the speaker. Meanwhile, in this use example, candidate points are projected and displayed on the bodies of the listeners, for example.
Further, in the present embodiment, candidate points can also be set inside an object on the projection area 3a of the table 3, as described above (refer to
As illustrated in
[Processing Flow]
In step ST12, the human recognition unit 202 estimates feature points of a human body. The human recognition unit 202 generates posture information based on the feature points of the human body and outputs the posture information to the candidate point detection unit 203 and the selection unit 204. Then, the processing proceeds to step ST14.
Meanwhile, in step ST13, the environment recognition unit 201 recognizes a three-dimensional structure on the table 3, more specifically, on the projection area 3a and generates three-dimensional structure information. The environment recognition unit 201 outputs the generated three-dimensional structure information to the candidate point detection unit 203. Then, the processing proceeds to step ST14.
In step ST14, the candidate point detection unit 203 performs candidate point detection processing for detecting an operation direction and detecting candidate points present in the operation direction. Then, the processing proceeds to step ST15.
In step ST15, for example, the candidate point detection unit 203 determines whether candidate points have been detected. Here, in a case where no candidate point is detected, such as a case where finger pointing is not performed, the processing returns to step ST11.
When candidate points have been detected in determination processing of step ST15, the display control unit 205 performs control for projection display of the detected candidate points. Then, the detected candidate points are presented to the user U through the output unit 206. Then, the processing proceeds to step ST16.
In step ST16, it is determined whether a specific gesture of the user U has been detected. This determination processing is performed, for example, by the selection unit 204 on the basis of the posture information. Here, the specific gesture is a gesture for switching candidate points to be selected. When the specific gesture has been performed, the processing proceeds to step ST17.
In step ST17, the selection unit 204 switches candidate points to be selected according to the detected specific gesture. Then, the selection unit 204 outputs selection information representing a candidate point to be selected to the display control unit 205. Then, the processing proceeds to step ST18.
In step ST18, display control processing is performed by the display control unit 205. For example, the display control unit 205 performs control such that the candidate point to be selected is emphasized as compared to other candidate points on the basis of the selection information. According to such control, the candidate point to be selected is emphasized and presented to the user U through the output unit 206. Then, the processing proceeds to step ST19.
In step ST19, the selection unit 204 detects whether a determination input (e.g., a gesture) for determining the candidate point to be selected as a determined point is performed. Here, when the determination input is performed, processing in response to an application is performed at a point near the determined point and the processing ends. When the determination input is not performed, the processing returns to step ST16.
When the specific gesture of the user U is not detected in the determination processing of step ST16, the processing proceeds to step ST20. In step ST20, the selection unit 204 determines whether a previous frame of an image includes the candidate point to be selected. Here, when the previous frame includes the candidate point to be selected, the processing proceeds to step ST21.
In step ST21, since the previous frame includes the candidate point to be selected even though the specific gesture is not currently performed, it can be assumed that the specific gesture was performed and a certain candidate point was selected earlier (e.g., about tens of frames ago). Accordingly, in step ST21 the selection unit 204 does not switch the candidate point selected in the previous frame; in other words, it generates selection information for maintaining the candidate point to be selected and outputs the generated selection information to the display control unit 205. Then, the processing proceeds to step ST18. In step ST18, the display control unit 205 performs display control processing for projection display such that the candidate point to be selected is not switched, that is, such that the state in which the predetermined candidate point is currently presented as the candidate point to be selected is maintained. Then, the processing proceeds to step ST19, and the processing of step ST19 and subsequent processing are performed. Since the processing of step ST19 and subsequent processing have already been described, redundant description is omitted.
When the previous frame does not include the candidate point to be selected in the determination processing of step ST20, the processing proceeds to step ST22. Since this case is a first stage (initial stage) in which the detected candidate point is presented to the user U, for example, a candidate point closest to the user U from among a plurality of candidate points becomes the candidate point to be selected. Then, the processing proceeds to step ST18 in which the display control unit 205 performs display control processing for projection display such that the candidate point closest to the user U from among the plurality of candidate points becomes the candidate point to be selected. Then, the processing proceeds to step ST19 and the processing of step ST19 and subsequent processing are performed. Since the processing of step ST19 and subsequent processing have already been described, redundant description is omitted.
Meanwhile, a candidate point to be selected presented to the user U from among the plurality of candidate points in the initial stage can be appropriately changed. In addition, a candidate point to be selected may not be presented to the user U in the initial stage. Further, even when the previous frame includes the candidate point to be selected, selection of the candidate point may be canceled when a large action of the user U is detected.
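The per-frame branching of steps ST11 to ST22 can be summarized in the following simplified, self-contained sketch. Every recognizer and display call is passed in as a stub function, which is an assumption made only to keep the sketch runnable; the sketch mirrors the ordering and branching of the flow described above rather than any actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class FrameState:
    selected: Optional[int] = None   # index of the candidate point to be selected

def process_frame(
    state: FrameState,
    detect_candidates: Callable[[], Sequence],          # ST12-ST14
    specific_gesture: Callable[[], bool],                # ST16
    determination_input: Callable[[], bool],             # ST19
    display: Callable[[Sequence, Optional[int]], None],  # ST18
) -> FrameState:
    candidates = detect_candidates()
    if not candidates:                       # ST15: e.g., finger pointing is not performed
        return FrameState()
    if specific_gesture():                   # ST16 -> ST17: switch cyclically
        state.selected = ((state.selected or 0) + 1) % len(candidates)
    elif state.selected is None:             # ST20 -> ST22: initial stage
        state.selected = 0                   # e.g., the candidate closest to the user
    # ST21: otherwise the previously selected candidate is simply maintained
    display(candidates, state.selected)      # ST18
    if determination_input():                # ST19: hand over to application processing
        state = FrameState()
    return state

# Example with trivial stubs: three candidates, one switching gesture on the second frame.
state = FrameState()
for frame, gesture in enumerate([False, True, False]):
    state = process_frame(state,
                          detect_candidates=lambda: ["PA", "PB", "PC"],
                          specific_gesture=lambda g=gesture: g,
                          determination_input=lambda: False,
                          display=lambda c, s: print(f"frame {frame}: selected {c[s]}"))
```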
Although the embodiments of the present disclosure are specifically described above, the content of the present disclosure is not limited to the above-described embodiment, and various modifications are possible based on the technical idea of the present disclosure. Hereinafter, modified examples will be described.
Although the direction extending from the index finger F for finger pointing in a straight line is defined as an operation direction FD in the above-described embodiment, the present disclosure is not limited thereto. The operation direction FD may be a curved line, a line corresponding to a combination of a straight line and a curved line, or the like.
The candidate point detection unit 203 may detect an operation direction FD according to a gesture of the user U.
As illustrated in
The operation direction FD may be reflected by an object such as a wall.
The position of the wall 90 is recognized by the environment recognition unit 201. For example, the user U performs finger pointing toward the wall 90 using the index finger F, and an operation direction FD according to the finger pointing is detected. The operation direction FD intersects the surface of the wall 90, which is not an object on the table 3. In such a case, the candidate point detection unit 203 sets the operation direction FD such that it is reflected at the wall 90 and detects candidate points positioned in the reflected operation direction FD (intersection points of the reflected operation direction FD and the surfaces of the objects 92 and 93). In this manner, the user U can reflect the operation direction FD and prevent a candidate point with respect to an object positioned on the near side (e.g., the object 91) from being detected.
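A minimal sketch of reflecting the operation direction FD at a wall is given below, assuming the wall is modeled as a plane given by a point and a normal; the reflected direction is obtained by mirroring the incident direction about the wall normal, and candidate detection can then continue along the reflected ray from the reflection point. All names are hypothetical.

```python
import numpy as np

def reflect_ray_at_plane(origin, direction, plane_point, plane_normal):
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    p, n = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None                      # the ray is parallel to the wall
    t = ((p - o) @ n) / denom
    if t <= 0:
        return None                      # the wall is behind the user
    hit = o + t * d                      # reflection point on the wall
    reflected = d - 2.0 * (d @ n) * n    # mirror the direction about the wall normal
    return hit, reflected

# Example: a ray heading toward a wall at x = 1 is reflected back in the -x direction.
print(reflect_ray_at_plane((0, 0, 0.3), (1, 0.2, -0.1), (1, 0, 0), (1, 0, 0)))
```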
In the above-described embodiment, it has been described that the presence of a candidate point at a place that is hidden and thus invisible to the user U can be presented to the user U (refer to
In addition, information representing the positions of detected candidate points (e.g., map information having a relatively small size in which the positions of the candidate points are mapped to objects on the projection area 3a) may be projected and displayed near the user U. The user U can ascertain the positions of the candidate points by viewing such information and refer to the positions when selecting a candidate point. Such information may be displayed on an electronic device such as a smartphone carried by the user U or may be projected and displayed on a ceiling.
Although the user U described in the above-described embodiment is assumed to be a person in general, the user may be a robot. An object on which projection display is performed in the above-described embodiment may be any of resin products, pottery, optically transparent members such as glass, the air, and the like. In addition, the size of the projection area may be appropriately changed, and an appropriate number of information processing devices may be used in response to the size of the projection area, or the like. The projection area is not limited to the surface of the table and may be a floor or the like. Furthermore, the operation direction may be detected on the basis of an action of pointing with a pen or the like instead of finger pointing.
The present disclosure can also be realized by a device, a method, a program, a system, and the like. For example, a program that implements the functions described in the above-described embodiment can be made downloadable, and a device that does not have the functions described in the embodiment can perform the control described in the embodiment by downloading and installing the program. The present disclosure can also be realized by a server that distributes such a program. In addition, technical features of the embodiment and features described in the modified examples can be appropriately combined.
The present disclosure may also be configured as follows.
(1)
An information processing device including a controller configured to detect at least two candidate points positioned in a detected operation direction, to switch the candidate points such that a predetermined candidate point is selected from the at least two candidate points, and to display at least the selected candidate point.
(2)
The information processing device according to (1), wherein the candidate points include at least one point that cannot be indicated at an operation position of a user.
(3)
The information processing device according to (2), wherein the point is on a back side of a shielding object positioned in the operation direction.
(4)
The information processing device according to (2), wherein the point is inside of a shielding object positioned in the operation direction.
(5)
The information processing device according to any one of (1) to (4), wherein the controller is configured to display the candidate points according to projection display on a predetermined object positioned in the operation direction.
(6)
The information processing device according to (5), wherein the controller is configured to project and to display information indicating positions of the detected candidate points near a user.
(7)
The information processing device according to any one of (1) to (6), wherein the controller is configured to project and to display, on a surface of an object positioned in the operation direction on the side of the user, a candidate point present in at least one of the inside of the object and the back side of the object.
(8)
The information processing device according to any one of (1) to (7), wherein the controller is configured to emphasize and to display the selected candidate point.
(9)
The information processing device according to any one of (1) to (8), wherein the controller is configured to detect intersection points of the operation direction and surfaces of objects positioned in the operation direction as the candidate points.
(10)
The information processing device according to any one of (1) to (9), wherein settings with respect to the candidate points are possible.
(11)
The information processing device according to (10), wherein settings with respect to the candidate points are settings with respect to an area in which the candidate points are detected.
(12)
The information processing device according to any one of (1) to (11), wherein the controller is configured to switch the candidate points according to a detected gesture.
(13)
The information processing device according to (12), wherein the operation direction is detected on the basis of finger pointing of a user and the gesture is a gesture using a finger of the user.
(14)
The information processing device according to any one of (1) to (13), wherein the controller is configured to continuously switch the candidate points.
(15)
The information processing device according to any one of (1) to (13), wherein the controller is configured to discontinuously switch the candidate points.
(16)
The information processing device according to any one of (1) to (15), wherein the controller is configured to set the operation direction according to a gesture of a user.
(17)
The information processing device according to (4), wherein, when a candidate point corresponding to the inside of the shielding object is determined as a determined point, an internal structure of the shielding object is projected and displayed on the shielding object.
(18)
An information processing method, by a controller, including detecting at least two candidate points positioned in a detected operation direction, switching the candidate points such that a predetermined candidate point is selected from the at least two candidate points, and displaying at least the selected candidate point.
(19)
A program causing a computer to execute an information processing method, by a controller, including detecting at least two candidate points positioned in a detected operation direction, switching the candidate points such that a predetermined candidate point is selected from the at least two candidate points, and displaying at least the selected candidate point.
Priority application — Number: 2018-115140; Date: Jun 2018; Country: JP; Kind: national
Filing document — PCT/JP2019/014643; Filing date: 4/2/2019; Country: WO; Kind: 00