The present invention relates to a notification device.
Priority is claimed on Japanese Patent Application No. 2018-113865, filed Jun. 14, 2018, the content of which is incorporated herein by reference.
Conventionally, technology for informing a person with impaired vision of an obstacle by outputting a sound notifying the person that an obstacle is nearby is known (Patent Document 1). Also, technology for informing a person with impaired vision of an obstacle by generating a guidance sound on the basis of the distance to the obstacle and a head-related transfer function and outputting the generated guidance sound is known (Patent Document 2). Also, technology for informing a person with impaired vision of an obstacle by outputting digital sound data subjected to stereophonic processing on the basis of a position of the obstacle is known (Patent Document 3).
[Patent Document 1]
[Patent Document 2]
[Patent Document 3]
However, in the conventional technology, it may be difficult to accurately convey the position, the direction, the distance, and the like of an obstacle to a person to be guided.
The present invention has been made in consideration of the above-described circumstances and an objective of the present invention is to provide a notification device, a notification method, and a program capable of accurately conveying the position, the direction, the distance, and the like of an object to a person to be guided.
A notification device according to the present invention adopts the following configuration.
(1): According to an aspect of the present invention, there is provided a notification device including: a detector configured to detect a physical object around a moving person; a relative position acquirer configured to acquire a relative position with respect to the physical object for which the detector is designated as a base point; a storage storing sound information in which sounds, which are emitted from a plurality of positions away from a predetermined recording point within a predetermined recording space, are pre-recorded for each of the plurality of positions and the recorded sounds are associated with relative positional relationships between the recording point and the plurality of positions; and a selector configured to select the sound information associated with the relative position from the sound information stored in the storage on the basis of the relative positional relationship associated with the relative position acquired by the relative position acquirer, wherein the notification device causes a generator to generate the sounds of the sound information selected by the selector to notify the moving person of information about the physical object detected by the detector.
According to the aspect (1) of the present invention, the position, the direction, the distance, and the like of an object can be accurately conveyed to a person to be guided.
Hereinafter, embodiments of a notification device of the present invention will be described with reference to the drawings.
The base 10 supports each part provided in the sound guidance system 1. The base 10 has, for example, a shape similar to a frame of eyeglasses, and is worn on the face of the person to be guided by the sound guidance system 1. Also, the base 10 may support a pair of left and right lenses in addition to parts provided in the sound guidance system 1.
The camera 20 is, for example, a digital camera that uses a solid-state image sensor such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS). As shown in
The speaker 30 operates under control of the notification device 100 and outputs a sound. As shown in
Specifically, the right camera 20-1 and the right speaker 30-1 are provided on the right side of the person to be guided when the base 10 is worn by the person to be guided and the left camera 20-2 and the left speaker 30-2 are provided on the left side of the person to be guided when the base 10 is worn by the person to be guided. Also, the right camera 20-1 and the left camera 20-2 are provided on the front of the base 10 so that a view in front of the person to be guided is imaged. Also, the right speaker 30-1 is provided at a position where the output sound can be easily heard by the right ear of the person to be guided among positions supported by a temple on the right side of the base 10 and the left speaker 30-2 is provided at a position where the output sound can be easily heard by the left ear of the person to be guided among positions supported by a temple on the left side of the base 10.
Also, the right speaker 30-1 and the left speaker 30-2 may have shapes similar to those of earphones which are inserted into and used in the ears of the person to be guided.
The notification device 100 is attached to any location on the base 10. In the example shown in
The storage 200 is implemented by, for example, a hard disk drive (HDD), a flash memory, an electrically erasable programmable read only memory (EEPROM), a read only memory (ROM), a random access memory (RAM), or the like. For example, the storage 200 stores a program read and executed by a hardware processor. Also, for example, the sound source information 202 is pre-stored in the storage 200.
Also, dimensions of the recording space are examples and the present invention is not limited thereto. Also, the width, the depth, and the height of the recording space may be divided according to a length other than 0.5 [m] described above.
Returning to
Here, the sounds emitted at the positions P all have the same volume. Therefore, the volume of the recorded sound represented by the sound information SD increases for positions P closer to the position P of column F, row 1, and layer a and decreases for positions P closer to the position P of column A or column K, row 10, and layer j.
Also, the first type sound information SD1 and the second type sound information SD2 are examples of the sound information SD and the present invention is not limited thereto. The sound information SD may include, for example, information of sounds acquired by tapping materials (for example, plastic and the like) other than metal or wood at each position P of the recording space.
Returning to
The coordinate calculator 302 extracts a feature point of an object near the person to be guided on the basis of an image captured by the camera 20 and calculates coordinates CT of the extracted feature point. For example, with the position of the person to be guided designated as the origin O, the coordinate calculator 302 calculates the coordinates CT of the feature point of an object that is designated as a calculation target for use in guiding the person to be guided, from among objects in a real space corresponding to the recording space. Here, it is preferable that the object for use in guiding the person to be guided be an object having continuity in the direction in which the person to be guided is guided. Objects having such continuity are, for example, guardrails, fences, curbs that separate sidewalks and roadways, and the like. For example, the coordinates CT are calculated as coordinates CT(W, D, H) according to a first distance W in the left-right direction from the person to be guided to the feature point of the object, a second distance D in the forward direction, and a third distance H in the height direction.
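The embodiments do not specify how the coordinates CT are computed from the stereo images. For illustration only, the following is a minimal Python sketch under the assumption of a calibrated pinhole stereo model; all parameter names (fx, fy, cx, cy, baseline) are hypothetical.

```python
# Illustrative sketch only: one way the coordinate calculator 302 could derive
# coordinates CT(W, D, H) from a stereo pair, assuming a calibrated pinhole
# model. The embodiments do not specify the camera model; all parameters here
# are assumptions.

def feature_to_coordinates(u, v, disparity, fx, fy, cx, cy, baseline):
    """Convert a feature point at pixel (u, v) with stereo disparity [px]
    into coordinates CT(W, D, H) in meters relative to the person to be
    guided."""
    if disparity <= 0:
        raise ValueError("feature must be visible in both cameras")
    D = fx * baseline / disparity  # second distance: depth in the forward direction
    W = (u - cx) * D / fx          # first distance: offset in the left-right direction
    H = -(v - cy) * D / fy         # third distance: offset in the height direction (image v grows downward)
    return (W, D, H)               # coordinates CT(W, D, H)
```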
For example, the position identifier 304 identifies the position of the object. Specifically, when the position of the person to be guided is designated as the origin O and the direction in which the person to be guided is guided is designated as a direction of rows 1 to 10, the position identifier 304 identifies a position P of the recording space corresponding to coordinates CT calculated by the coordinate calculator 302. The coordinate calculator 302 and the position identifier 304 are examples of a “relative position acquirer.”
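For illustration only, the following minimal sketch shows one way such a mapping from coordinates CT to a position P could be realized. The 0.5 [m] cell size, columns A to K, rows 1 to 10, and layers a to j follow the recording space described above, and the origin at P(F, 1, a) follows the example in the text; the rounding and bounds handling are assumptions.

```python
# Illustrative sketch only: quantizing coordinates CT(W, D, H) onto a
# recording-space position P(column, row, layer).

CELL = 0.5               # edge length of one recording-space cell [m]
COLUMNS = "ABCDEFGHIJK"  # columns A..K in the left-right direction
LAYERS = "abcdefghij"    # layers a..j in the height direction
NUM_ROWS = 10            # rows 1..10 in the guidance direction

def coordinates_to_position(W, D, H):
    """Identify the position P for coordinates CT(W, D, H), with the person
    to be guided at the origin O = P(F, 1, a)."""
    col = COLUMNS.index("F") + round(W / CELL)
    row = round(D / CELL)          # row 1 is the row of the person to be guided
    layer = round(H / CELL)
    if not (0 <= col < len(COLUMNS) and 0 <= row < NUM_ROWS
            and 0 <= layer < len(LAYERS)):
        return None                # outside the recorded space
    return (COLUMNS[col], row + 1, LAYERS[layer])
```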
The material determiner 306 determines a material of the object near the person to be guided on the basis of the image captured by the camera 20. For example, the material determiner 306 determines, from among objects shown in the image captured by the camera 20, the material of an object in a real space that matches the recording space when the position of the person to be guided is designated as the origin O.
Also, the material determiner 306 may be configured to determine the material of the object on the basis of a ratio of the light received by a light receiver to the light (for example, near infrared rays) projected onto the object by a light projector. In this case, the sound guidance system 1 includes the light projector and the light receiver.
The selector 308 selects, from the sound information SD included in the sound source information 202, the sound information SD corresponding to both the position P of the object identified by the position identifier 304 and the material of the object determined by the material determiner 306. Also, the selector 308 outputs a sound of the selected sound information SD through the speaker 30.
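For illustration only, the selection described above amounts to a lookup keyed by the determined material and the identified position P; the table contents and material labels below are hypothetical.

```python
# Illustrative sketch only: the selector 308 as a lookup over the sound
# source information 202. Keys and values are hypothetical.

sound_source_info = {
    ("metal", ("E", 1, "a")): "sd1_E1a",  # first type sound information SD1
    ("wood",  ("E", 1, "a")): "sd2_E1a",  # second type sound information SD2
    # ... one entry per material and recorded position P
}

def select_sound(material, position):
    """Select the sound information SD recorded at the position P with the
    timbre corresponding to the material of the object."""
    return sound_source_info.get((material, position))
```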
Here, between the first type sound information SD1 of the position P(E, 1, a) and the first type sound information SD1 of the position P(E, 8, a), the volume of the sound of the latter is lower. Therefore, the person to be guided can recognize the position of the object OB1, the material of the object OB1, the distance to the object OB1, and the guidance direction according to the sounds of the first type sound information SD1 sequentially output from the speaker 30.
Also, even if an object has continuity in the guidance direction with respect to the person to be guided, it may not be preferable to use the object for guiding the person to be guided when the object (for example, the eaves of a shop, a street tree, or the like) is at an excessively high position (for example, layer g (3.0 [m]) or more). Thus, the selector 308 may be configured not to select the sound information SD associated with a position P at a predetermined height or more (for example, layer g or more) among the positions P identified by the position identifier 304. The predetermined height is an example of a "second threshold value."
The notification device 100 of the present embodiment includes: an imager (the camera 20 in the present example) configured to perform an imaging process; the coordinate calculator 302 configured to calculate coordinates CT of a feature point of an object from an image captured by the camera 20; the position identifier 304 configured to identify a position P of the object on the basis of the coordinates CT of the feature point; and the selector 308 configured to select, on the basis of the position P of the object and a plurality of pieces of sound information SD in which sounds emitted from a plurality of positions P of a recording space are recorded for each position P, the sound information SD to be output to a generator so that the volume changes in accordance with the direction in which a person to be guided is guided and the distance to the object. The generator (the speaker 30 in the present example) generates the sound represented by the sound information SD selected by the selector 308, so that it is possible to accurately convey the position, the direction, the distance, and the like of the object to the person to be guided.
Also, in the notification device 100 of the present embodiment, the sound information SD includes sound information SD (the first type sound information SD1 and the second type sound information SD2 in the present example) representing a timbre reminiscent of a material of the object (metal and wood in the present example), so that it is possible to convey the position, the direction, the distance, and the like of the object to the person to be guided and to remind the person of the type of the object.
Also, the selector 308 may select sound information SD other than the sound information SD based on the material of the object determined by the material determiner 306.
Also, the sound information SD may be information representing a sound recorded in a recording space where reverberation occurs. In this case, the person to be guided can more accurately ascertain the position, the direction, the distance, and the like of an object according to the reverberation of the sound information SD output by the speaker 30.
Hereinafter, a second embodiment of the present invention will be described. In the first embodiment, a case in which sound information SD corresponding to an object is fixedly sent to a person to be guided has been described. In the second embodiment, a case in which an object to which the person to be guided should pay particular attention among the objects is determined and the person to be guided is notified of the object will be described. Also, components similar to those in the above-described embodiment are denoted by the same reference signs and a description thereof will be omitted.
[About Controller 300a]
The storage 200a pre-stores sound source information 202a and danger level information 204.
Returning to
The danger level determiner 310 determines a type of object on the basis of an image captured by the camera 20. Also, the danger level determiner 310 determines the danger level of the object on the basis of the determined type of object and the danger level information 204.
The selector 308 of the present embodiment further selects the sound information SD on the basis of the danger level of the object determined by the danger level determiner 310. For example, when the danger level of the object whose position P is identified by the position identifier 304 is greater than or equal to a predetermined threshold value (for example, danger level "4"), the selector 308 selects the danger sound information SD4 associated with the position P of the object. The predetermined threshold value is an example of a "first threshold value."
Here, when a plurality of pieces of sound information SD of objects other than the object for use in guiding the person to be guided (i.e., objects whose danger level is greater than or equal to the predetermined threshold value) are output according to the positions P of the objects, outputting all of them may interfere with guidance for the person to be guided. Thus, the selector 308 selects only the danger sound information SD4 associated with the position P closest to the person to be guided among the positions P of the objects whose danger level is determined to be greater than or equal to the predetermined threshold value and outputs the selected danger sound information SD4 through the speaker 30.
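For illustration only, this selection rule can be sketched as follows; the field names are hypothetical and the Euclidean distance measure is an assumption, since the embodiments do not specify how "closest" is computed.

```python
import math

# Illustrative sketch only: among objects whose danger level meets the first
# threshold value, select only the danger sound information SD4 of the object
# closest to the person to be guided.

FIRST_THRESHOLD = 4  # example: danger level "4"

def select_danger_sound(objects):
    """objects: list of dicts with 'danger_level', 'position' (a (W, D, H)
    tuple with the person to be guided at the origin), and 'sd4' (the danger
    sound information SD4 associated with that position P)."""
    dangerous = [o for o in objects if o["danger_level"] >= FIRST_THRESHOLD]
    if not dangerous:
        return None
    nearest = min(dangerous, key=lambda o: math.dist((0, 0, 0), o["position"]))
    return nearest["sd4"]
```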
In the second embodiment, the notification device 100a executes step S107 after step S106. The danger level determiner 310 determines a danger level of an object (step S107). Also, the selector 308 of the second embodiment selects sound information SD from the sound source information 202 in accordance with a position P of the object identified by the position identifier 304, the material of the object determined by the material determiner 306, and the danger level of the object determined by the danger level determiner 310 (step S109).
As described above, the notification device 100a of the present embodiment includes the danger level determiner 310 configured to determine a danger level of an object identified by the position identifier 304 on the basis of the danger level information 204 in which objects and their danger levels are associated with each other. When the danger level of the object is greater than or equal to the predetermined threshold value, the selector 308 can select the sound information SD to be output (the danger sound information SD4 in the present example), thereby prompting the person to be guided to pay attention to the position, the direction, the distance, and the like of an object having a high danger level.
Hereinafter, a third embodiment of the present invention will be described. In the third embodiment, a case in which a position of an object is further identified on the basis of position information representing the position of the object in addition to an image captured by the camera 20 will be described. Components similar to those in the above-described embodiment are denoted by the same reference signs and a description thereof will be omitted.
[About Sound Guidance System 1b]
The navigation device 40 includes, for example, a global navigation satellite system (GNSS) receiver 41, a navigation human machine interface (HMI) 42, and a route determiner 43. The navigation device 40 retains first map information 44 in a storage device such as an HDD or a flash memory.
The GNSS receiver 41 identifies a current position of the person to be guided on the basis of a signal received from a GNSS satellite.
The navigation HMI 42 includes a display device, a speaker, a touch panel, keys, and the like. A destination is input to the navigation HMI 42 by the person to be guided. When the person to be guided has another HMI, the navigation HMI 42 may be configured to share some or all of the functions with the other HMI device. Information representing the destination is output to the notification device 100a.
For example, the route determiner 43 determines a route from the position of the person to be guided (or any input position) identified by the GNSS receiver 41 to the destination input to the navigation HMI 42 with reference to the first map information 44. For example, the first map information 44 is information in which a road shape is expressed by a link representing a road and nodes connected by the link. The first map information 44 may include road curvature, point of interest (POI) information, and the like.
For example, the navigation device 40 may be implemented by a function of a terminal device such as a smartphone or a tablet terminal owned by the person to be guided. The navigation device 40 may transmit the current position and the destination to a navigation server via a communication device (not shown) and acquire a route equivalent to the route on the map from the navigation server.
The position identifier 304 of the present embodiment acquires the current position of the person to be guided from the GNSS receiver 41 and identifies the current position (the origin O) of the person to be guided in the first map information 44. Also, the position identifier 304 identifies the position of an object in a real space that matches the recording space in the first map information 44 on the basis of the current position (the origin O) of the person to be guided in the first map information 44. Here, the first map information 44 may be associated with information about a shape of a building, a shape of a road, or the like. In this case, the position identifier 304 can identify the position P of the object still more accurately on the basis of the coordinates CT calculated by the coordinate calculator 302 and the first map information 44.
Also, the first map information 44 may be associated with a name of a building, a name of a facility, and the like as well as information about a shape of a building and a shape of a road. Here, if the name of the building or the facility is "∘∘ parking lot" or "∘∘ commercial facility," many vehicles or people may enter or leave the building or facility with that name, and it is therefore preferable for the person to be guided to pay attention to the entrance/exit.
Hereinafter, a fourth embodiment of the present invention will be described. In the above-described embodiment, a case in which the sound guidance system is a wearable device has been described. In the fourth embodiment, a case in which the person to be guided is guided by a sound guidance system 1c mounted in an autonomously traveling vehicle will be described. Also, components similar to those in the above-described embodiment are denoted by the same reference signs and a description thereof will be omitted.
In the present embodiment, the speaker 30 may be a speaker in which the right speaker 30-1 and the left speaker 30-2 are integrally formed. The speaker 30 outputs a sound of sound information SD in a direction in which the person to be guided is present (a rearward direction in the present example).
For example, the base 10a has a rod shape and supports each part provided in the sound guidance system 1c. Also, protrusions nob1 and nob2 are provided on the base 10a so that luggage or the like of the person to be guided can be hung on them.
For example, the vehicle 400 is equipped with each part provided in the sound guidance system 1c and travels in front of the person to be guided. The vehicle 400 is driven by an electric motor and operates using electric power discharged from a secondary battery or a fuel cell. The vehicle 400 is, for example, an autonomous stable unicycle. The vehicle 400 travels on the basis of control of a control device (not shown) mounted in the vehicle 400. Also, the vehicle 400 may travel on the basis of information representing a direction (for example, information representing the position P) supplied from the notification device 100 or 100a.
Here, in the sound guidance system 1c, it is difficult to use the position of the sound guidance system 1c as the origin O because the person to be guided is at a position a predetermined distance (for example, several meters) away from the sound guidance system 1c. Therefore, the sound guidance system 1c is required to correct the position P on the basis of a difference between the position of the person to be guided and the position of the sound guidance system 1c (the speaker 30).
In this case, for example, the sound guidance system 1c includes a rear camera (not shown) that images the person to be guided behind the sound guidance system 1c in addition to the camera 20 that images a view in a guidance direction. Also, the position identifier 304 of the present embodiment identifies the position of the person to be guided on the basis of the image captured by the rear camera and corrects the origin O on the basis of a difference between the position of the sound guidance system 1c and the position of the person to be guided. The position identifier 304 identifies the position P of the object on the basis of the corrected origin O.
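For illustration only, the correction can be sketched as a change of base point, assuming positions are handled as (W, D, H) offsets; this vector layout is an assumption, not a detail given in the embodiments.

```python
# Illustrative sketch only: an object position measured with the sound
# guidance system 1c as the base point is re-expressed relative to the person
# to be guided, whose offset from the system is estimated from the rear
# camera image.

def correct_relative_position(object_from_system, person_from_system):
    """Both arguments are (W, D, H) offsets measured from the sound guidance
    system 1c; the result is the object position with the corrected origin O
    (the person to be guided) as the base point."""
    return tuple(o - p for o, p in zip(object_from_system, person_from_system))
```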
Thereby, even if the person to be guided does not wear a wearable device, the sound guidance system 1c can accurately convey the position, the direction, the distance, and the like of an object to the person to be guided by traveling ahead of the person to be guided and notifying the person of the sound information SD through the speaker 30.
Also, although the sound guidance system 1c normally outputs a sound of the sound information SD through the speaker 30 in the present embodiment, the selected sound information SD may instead be transmitted to a terminal device TM in a place where there is a lot of noise in the surroundings or a place where it is not preferable to emit a sound. In this case, the sound guidance system 1c and the wireless earphone device 500 are connected via, for example, a network so that communication is possible. The network includes some or all of a wide area network (WAN), a local area network (LAN), the Internet, a dedicated circuit, a radio base station, a provider, and the like.
The person to be guided has the terminal device TM capable of communicating with the sound guidance system 1c. The terminal device TM includes a speaker 70 and a wireless earphone device 500.
The speaker 70 operates under the control of the wireless earphone device 500 and outputs a sound. As shown in
The wireless earphone device 500 includes a communication device (not shown), receives the sound information SD from the sound guidance system 1c and outputs the sound represented by the received sound information SD through the speaker 70. Thereby, the sound guidance system 1c can also accurately convey the position, the direction, the distance, and the like of the object to the person to be guided in a place where there is a lot of noise in the surroundings or where it is not preferable to make a sound.
Also, the sound guidance systems 1 to 1c may be configured to include a microphone, recognize a voice command such as "guidance start" uttered by the person to be guided, the sound of the person clapping his or her hands, or the like as an intention to start using the sound guidance systems 1 to 1c, and start guidance with respect to the person to be guided. Also, the sound guidance systems 1 to 1c may be configured to recognize a gesture, such as waving a hand, imaged by the camera 20 as an intention to start using the sound guidance systems 1 to 1c and to start guidance with respect to the person to be guided.
Also, in the sound guidance systems 1 to 1c of the above-described embodiments, a case has been described in which the origin O is set at the end of the recording space (the position P(F, 1, a) in the above-described example) and a sound according to the position P of an object in front of the person to be guided (i.e., in the direction of rows 1 to 10) is output. However, the present invention is not limited thereto. A configuration may be adopted in which the origin O is set in the center of the recording space and a sound according to the position P of an object near the person to be guided (i.e., in a range including an area in a rearward direction with respect to the person to be guided) is output.
Hereinafter, a fifth embodiment of the present invention will be described. In the above-described first to fourth embodiments, the configurations and operations have been described on the basis of the basic concept for implementing the sound guidance systems 1 to 1c. In the fifth embodiment, a more specific example for implementing the sound guidance systems 1 to 1c will be described. Also, components similar to those in the above-described embodiment are denoted by the same reference signs and a description thereof will be omitted.
In the fifth embodiment, a specific example of implementing the sound guidance system 1 will be described. Here, in the sound guidance system 1 implemented in the fifth embodiment (hereinafter referred to as a "sound guidance system 1d"), it is assumed, for ease of description, that only metal is detected as an object to be ascertained by the person to be guided. That is, in the sound guidance system 1d of the fifth embodiment, it is assumed that only sound information SD (for example, the first type sound information SD1 shown in
The sound guidance system 1d is worn on the face of the person to be guided and cameras 20 (a right camera 20-1 and a left camera 20-2) image a predetermined range in a forward direction. Thus, the range of the horizontal direction in the recording space for recording the sound representing the object to be ascertained by the person to be guided is at least a range including an imaging range of the camera 20. In
Also, the sound guidance system 1d notifies the person to be guided of the presence of an object that hinders the person to be guided when he or she walks by causing the speaker 30 to output (generate) a sound of the first type sound information SD1 through the speaker 30. That is, in the sound guidance system 1d, it is assumed that a physical object (an object), which hinders walking, is emitting a sound and the person to be guided is allowed to hear the sound emitted by the object. Thus, the height of a reference recording point R in the recording space for recording the sound representing the object to be ascertained by the person to be guided is a height corresponding to the height of the ear of the person to be guided. The range of a vertical direction in a recording space for recording a sound representing the object to be ascertained by the person to be guided is a range from the feet of the person to be guided to at least a height obtained by adding a predetermined height to the height of the person to be guided. At this time, the height at which each sound is recorded in the recording space is set to a resolution of, for example, 0.05 [m] in consideration of walking by the person to be guided. That is, each region in a height direction divided into 10 parts in
Also, as shown in
As described above, in the sound guidance system 1d, in the recording space shown in
Also, an equalizer process, such as a process of emphasizing reverberation, may be performed on a recorded sound so that the first type sound information SD1 can clearly represent the relative positional relationship between the person to be guided and the physical object which is the object.
Also, the range shown in
Also, an example in which a range from the feet of the person to be guided (a height of 0 [m]) to a height of 2.10 [m] is set as the range of the vertical direction in which the first type sound information SD1 is recorded is shown in
The camera 20 captures an image in which an object in front of the person to be guided is shown (step S200). A coordinate calculator 302 within a controller 300 provided in the notification device 100 of the sound guidance system 1d extracts a feature point of the object near the person to be guided on the basis of the image captured by the camera 20. The coordinate calculator 302 calculates coordinates CT representing a position of each feature point on an edge of the object that has been extracted (step S210). Here, for example, the coordinate calculator 302 may extract the object by determining the edge of the object on the basis of the brightness and darkness of each subject (physical object) shown in the image, colors (i.e., red (R), green (G), and blue (B)) constituting the image, and the like and calculate the coordinates CT representing a position of the edge of the extracted object.
Next, the position identifier 304 within the controller 300 provided in the notification device 100 of the sound guidance system 1d determines whether or not the object is an object (a physical object) whose edge is continuous in the horizontal direction (for example, a left-right direction or a depth direction) on the basis of the coordinates CT calculated by the coordinate calculator 302 (step S220).
When it is determined that the object is an object whose edge is continuous in the horizontal direction in step S220, the position identifier 304 determines whether or not an upper end of the edge of the object has a continuation of 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302 (step S221).
When it is determined that the upper end of the edge of the object has a continuation of 50 [cm] or more in step S221, the position identifier 304 identifies a relative position within the recording space corresponding to the coordinates CT calculated by the coordinate calculator 302 (step S230). More specifically, the position identifier 304 identifies a position P within the recording space corresponding to the coordinates CT representing the upper end of the object having the continuation of 50 [cm] or more in the horizontal direction within the recording space where the feet of the person to be guided (the height of 0 [m]) are designated as the origin O. Thereby, the coordinates CT of the upper end of the object continuous in the horizontal direction calculated by the coordinate calculator 302 are relatively associated with the position P within the recording space. The position identifier 304 moves the process to step S270.
On the other hand, when it is determined that the object is not an object whose edge is continuous in the horizontal direction in step S220 or when it is determined that the upper end of the edge of the object does not have a continuation of 50 [cm] or more in step S221, the position identifier 304 determines whether or not the distance between front edges of objects is 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302 (step S240).
When it is determined that the distance between the front edges of the objects is not 50 [cm] or more in step S240, the position identifier 304 moves the process to step S230. This is because, when the objects shown in the image do not have edges continuous in the horizontal direction or upper ends with a continuation of 50 [cm] or more, but it is nevertheless difficult for the person to be guided to pass between them, the sound guidance system 1d treats them as an object whose upper end has a continuation of 50 [cm] or more in the horizontal direction.
On the other hand, when it is determined that the distance between the front edges of the objects is 50 [cm] or more in step S240, the position identifier 304 determines whether or not the object of the front side has an edge of the back side on the basis of the coordinates CT calculated by the coordinate calculator 302 (step S250).
When it is determined that the object of the front side has an edge of the back side in step S250, the position identifier 304 determines whether or not an interval between the edge of the back side in the object of the front side and the edge of the front side in the object of the back side is 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302 (step S251).
When it is determined that the object of the front side does not have an edge of the back side in step S250 or when it is determined that the interval between the edge of the back side in the object of the front side and the edge of the front side in the object of the back side is not 50 [cm] or more in step S251, the position identifier 304 moves the process to step S230. This is because, when the distance between the edges of the front sides of the objects shown in the image is 50 [cm] or more but the edge of the back side in the object of the front side cannot be recognized, or the interval between the edge of the back side in the object of the front side and the edge of the front side in the object of the back side is not 50 [cm] or more, the sound guidance system 1d handles the objects as in step S240. That is, the sound guidance system 1d treats the objects as horizontally arranged objects between which it is difficult for the person to be guided to pass.
On the other hand, when it is determined that the interval between the edge of the back side in the object of the front side and the edge of the front side in the object of the back side is 50 [cm] or more in step S251, the position identifier 304 identifies, for each object, a relative position within the recording space corresponding to the coordinates calculated by the coordinate calculator 302 at the height of the object (step S260). More specifically, the position identifier 304 identifies a position P within the recording space corresponding to coordinates CT representing a position where there are objects between which the interval is 50 [cm] or more in the recording space where the feet of the person to be guided (the height of 0 [m]) are designated as the origin O. The position identifier 304 also identifies how high each object is at the identified position P. Thereby, the coordinates CT of each object calculated by the coordinate calculator 302 are relatively associated with a range of positions P within the recording space. The position identifier 304 moves the process to step S270.
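For illustration only, the branching of steps S220 to S251 described above can be condensed into a single function. The per-object edge representation below (depth coordinates of the front-side and back-side edges and the horizontal run of the upper end) is an assumption introduced for the sketch.

```python
# Illustrative sketch only: the decision logic of steps S220 to S251.
# 'continuous' corresponds to identifying the upper end as one horizontally
# continuous object (step S230); 'separate' corresponds to identifying each
# object at its own height (step S260).

PASSABLE = 0.5  # the 50 [cm] criterion used in steps S221, S240, and S251

def classify_objects(objects):
    """objects: list of dicts with 'front' (depth of the front-side edge),
    'back' (depth of the back-side edge, None if not recognized), and
    'upper_run' (horizontal continuation of the upper end [m])."""
    objs = sorted(objects, key=lambda o: o["front"])
    if objs[0]["upper_run"] >= PASSABLE:              # steps S220/S221
        return "continuous"
    for near, far in zip(objs, objs[1:]):
        if far["front"] - near["front"] < PASSABLE:   # step S240: too narrow to pass
            return "continuous"
        if near["back"] is None:                      # step S250: back edge unrecognized
            return "continuous"
        if far["front"] - near["back"] < PASSABLE:    # step S251: gap too narrow
            return "continuous"
    return "separate"
```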
Next, the selector 308 within the controller 300 provided in the notification device 100 of the sound guidance system 1d selects first type sound information SD1 corresponding to a relative position of the coordinates CT identified by the position identifier 304 from the sound source information 202 stored in the storage 200 (step S270).
The selector 308 outputs the selected first type sound information SD1 to the speaker 30 sequentially, starting from the first type sound information SD1 whose relative position is on the front side (step S280). More specifically, when the selected first type sound information SD1 differs only in the horizontal direction between positions P, the selector 308 outputs the first type sound information SD1 to the speaker 30 sequentially from the side closer to the person to be guided to the side farther from the person to be guided. In other words, when the selected first type sound information SD1 consists only of positions P at the same height, the selector 308 outputs the first type sound information SD1 to the speaker 30 so that the pieces are continuous. Also, when the selected first type sound information SD1 includes a plurality of pieces of first type sound information SD1 having different heights at the same position P, the selector 308 outputs the first type sound information SD1 sequentially from the lower position to the higher position at the position P closest to the person to be guided and repeats this from the side closer to the person to be guided to the side farther from the person to be guided. In other words, when the selected first type sound information SD1 includes positions P differing in the vertical direction as well as positions P differing in the horizontal direction, the selector 308 outputs to the speaker 30, as one set, the first type sound information SD1 of the positions P differing in the vertical direction at the same horizontal position P continuously and moves from set to set in the horizontal direction. Thereby, the speaker 30 sequentially outputs (generates) the sounds of the first type sound information SD1 output by the selector 308.
Also, for example, the distance between the person to be guided and a position P, which determines the order in which the selector 308 outputs the first type sound information SD1 to the speaker 30, can be obtained, with the position of the person to be guided in the recording space designated as the origin O, from the distance from the origin O to the position P in the left-right direction, the distance in the depth direction, and the distance in the height direction.
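For illustration only, one reading of this ordering is sketched below. The tuple layout and the use of the horizontal distance as the primary sort key are assumptions; they realize the near-to-far, lower-to-higher order described above.

```python
import math

# Illustrative sketch only: the step S280 output order. Positions P are taken
# as (W, D, H) offsets from the origin O; sounds at the same horizontal cell
# are grouped from the lower height to the higher height, and the groups
# proceed from the side closer to the person to be guided to the farther side.

def output_order(positions):
    """Sort selected positions P for output of the first type sound
    information SD1 to the speaker 30."""
    # key 1: horizontal distance from the origin O (closer cells first)
    # key 2: height within the same horizontal cell (lower first)
    return sorted(positions, key=lambda p: (math.hypot(p[0], p[1]), p[2]))
```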
Here, an example in which the person to be guided is notified of the presence of the object in the sound guidance system 1d will be described. In the first example, for example, a case in which the camera 20 captures the image IM1 shown in
The object OB1 shown in the image IM1-F is an object whose edge is continuous in the horizontal direction and whose upper end has a continuation of 50 [cm] or more. In this case, the coordinate calculator 302 calculates coordinates CT representing the position of each feature point of the edge (for example, an upper end UE1 and a lower end LE1) of the object OB1. The position identifier 304 determines that the object OB1 is an object whose edge is continuous in the horizontal direction and whose upper end UE1 has a continuation of 50 [cm] or more on the basis of coordinates CT calculated by the coordinate calculator 302. Thereby, the position identifier 304 associates coordinates CT of the upper end UE1 of the object OB1 with each position P within the recording space.
The selector 308 selects first type sound information SD1 of positions P identified by the position identifier 304 from the sound source information 202 and sequentially outputs the selected first type sound information SD1 to the speaker 30. Thereby, the speaker 30 sequentially outputs (generates) sounds of the first type sound information SD1 output by the selector 308.
In
Next, a second example in which the person to be guided is notified of the presence of the object in the sound guidance system 1d will be described. In the second example, a case in which the camera 20 images objects installed at predetermined intervals such as piles or poles and provides a notification of the objects which are notification targets will be described.
In this case, the coordinate calculator 302 calculates coordinates CT representing positions of feature points of edges of the objects OB3-1 to OB3-3 (for example, an upper end UE3, a lower end LE3, a front end FE3, and a back end BE3). The position identifier 304 determines that intervals between edges on the front sides of the objects OB3-1 to OB3-3, i.e., a front end FE3-1, a front end FE3-2, and a front end FE3-3, are not 50 [cm] or more, on the basis of the coordinates CT calculated by the coordinate calculator 302. Alternatively, the position identifier 304 determines that an interval between a back end BE3-1, which is the edge of the back side in the object OB3 (for example, the object OB3-1) of the front side, and a front end FE3-2 in the object OB3 (for example, the object OB3-2) of the back side is not 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302.
In this case, although the objects OB3-1 to OB3-3 are different objects, the position identifier 304 treats the objects OB3-1 to OB3-3 as objects of a horizontal direction in which upper ends have a continuation of 50 [cm] or more. Thus, the position identifier 304 associates coordinates CT of upper ends of the objects OB3-1 to OB3-3, i.e., the upper end UE3-1, the upper end UE3-2, and the upper end UE3-3, with positions P within the recording space. Thereby, in the second example, the position identifier 304 also identifies the position P corresponding to the coordinates CT of the upper end UE3 of each object OB3 within the range of the horizontal direction in which the first type sound information SD1 shown in
Next, a third example in which the person to be guided is notified of the presence of the object in the sound guidance system 1d will be described. In the third example, a notification of each object OB3 serving as a physical object which is a notification target when the objects OB3-1 to OB3-3 in the second example are installed at intervals of 50 [cm] or more will be described.
In this case, as in the second example, the coordinate calculator 302 calculates coordinates CT representing positions of feature points of edges of the objects OB3-1 to OB3-3 (for example, the upper end UE3, the lower end LE3, the front end FE3, and the back end BE3). The position identifier 304 determines that distances between the front ends FE3 of the objects OB3-1 to OB3-3 are 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302. Further, the position identifier 304 determines that an interval between the back end BE3-1 of the object OB3 (for example, the object OB3-1) of the front side and the front end FE3-2 in the object OB3 (for example, the object OB3-2) of the back side is 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302.
In this case, the position identifier 304 identifies a position P corresponding to the coordinates CT of the front end FE3 for each object OB3 within a range of the horizontal direction in which the first type sound information SD1 shown in
The selector 308 selects first type sound information SD1 of positions P identified by the position identifier 304 from the sound source information 202 and sequentially outputs the selected first type sound information SD1 to the speaker 30. At this time, the selector 308 first sequentially outputs to the speaker 30 the first type sound information SD1 from the position P of the height of 0 [m] to the position P of the height of 50 [cm] in the object OB3-1 on the side closest to the person to be guided. Subsequently, the selector 308 sequentially outputs the first type sound information SD1 of the positions P of the heights from 0 [m] to 50 [cm] in the object OB3-2 to the speaker 30 and finally sequentially outputs the first type sound information SD1 of the positions P of the heights from 0 [m] to 50 [cm] in the object OB3-3 farthest from the person to be guided to the speaker 30. Thereby, the speaker 30 sequentially outputs (generates) sounds of the first type sound information SD1 output by the selector 308.
In the example shown in
Next, a fourth example in which the person to be guided is notified of the presence of an object in the sound guidance system 1d will be described. In the fourth example, for example, a case will be described in which objects OB3 are reported as objects which are notification targets when the objects OB3-1 to OB3-3, such as street light poles or utility poles, are tall and their upper ends UE3 are outside the imaging range of the camera 20 and thus not shown in an image IM.
In this case, the coordinate calculator 302 calculates coordinates CT representing positions of feature points of edges of the objects OB3-1 to OB3-3 as in the second and third examples. However, in the fourth example, an upper end UE3 of each of the objects OB3-1 to OB3-3 is not shown in the image IM1-F4. Thus, the coordinate calculator 302 calculates coordinates CT of the upper end UE3 under the assumption that the upper end UE3 of each object OB3 is at the maximum height, i.e., 2.10 [m], in the sound source information 202 stored in the storage 200. Thereby, the position identifier 304 determines that intervals between front ends FE3 of the objects OB3-1 to OB3-3 are 50 [cm] or more on the basis of coordinates CT calculated by the coordinate calculator 302 as in the third example. Further, as in the third example, the position identifier 304 determines that an interval between a back end BE3-1 of the object OB3 (for example, the object OB3-1) of the front side and a front end FE3-2 in the object OB3 (for example, the object OB3-2) of the back side is 50 [cm] or more on the basis of the coordinates CT calculated by the coordinate calculator 302.
In this case, the position identifier 304 identifies a position P corresponding to the coordinates CT of the front end FE3 for each object OB3 under the assumption that the objects OB3-1 to OB3-3 are different objects in the vertical direction as in the third example. At this time, the position identifier 304 identifies positions P from the feet of the person to be guided (the height of 0 [m]) to the height of 2.10 [m], i.e., the maximum height at which the person to be guided is notified of the presence of the object, at a position of the front end FE3 serving as the position P corresponding to coordinates CT of each object OB3.
As in the third example, the selector 308 selects first type sound information SD1 of positions P identified by the position identifier 304 from the sound source information 202 and sequentially outputs the selected first type sound information SD1 to the speaker 30. Thereby, the speaker 30 sequentially outputs (generates) sounds of the first type sound information SD1 output by the selector 308.
Also, a case in which the height of the object OB3 is higher than the height of the person to be guided has been described in the fourth example. However, an object whose edge (more specifically, upper end) has a continuation of 50 [cm] or more in the horizontal direction as in the first example may also be an object, such as a building, an apartment, or a wall, which is tall and whose upper end is outside the imaging range of the camera 20. In this case, as in the fourth example, the coordinate calculator 302 calculates coordinates CT of the upper end under the assumption that the upper end of the object in the horizontal direction is at the maximum height (2.10 [m]) in the sound source information 202 stored in the storage 200. Thereby, as in the first example, the position identifier 304 identifies each position P where the height is 2.10 [m] as a position corresponding to the coordinates CT of the upper end of the object in the horizontal direction. As in the first example, the selector 308 selects first type sound information SD1 of the positions P identified by the position identifier 304 from the sound source information 202 and sequentially outputs the selected first type sound information SD1 to the speaker 30. Thereby, the speaker 30 sequentially outputs (generates) sounds of the first type sound information SD1 output by the selector 308. That is, the speaker 30 generates sounds that the person to be guided hears as if an object having the height of 2.10 [m] and continuing in the horizontal direction from the side closer to the person to be guided to the side farther away were generating the sounds. Also, the selector 308 repeats the output of the first type sound information SD1 to the speaker 30 in the same order. Thereby, the person to be guided can recognize that an object having the height of 2.10 [m] is continuously present in a direction from the closer side to the farther side within the space in the forward direction.
The sound guidance system 1d including the notification device 100 of the present embodiment includes: a detector (the camera 20 in the present example) configured to detect a physical object near a moving person (the person to be guided in the present example); a relative position acquirer (the coordinate calculator 302 and the position identifier 304 in the present example) configured to acquire a relative position with respect to the physical object (an object such as the object OB1 in the present example) for which the camera 20 is designated as a base point; the storage 200 configured to store sound information SD (for example, the first type sound information SD1) in which sounds emitted from a plurality of positions P away from a predetermined recording point R are pre-recorded for each position P with respect to the predetermined recording point R within a predetermined recording space and the recorded sounds are associated with relative positional relationships between the recording point R and the positions P; and the selector 308 configured to select the sound information SD corresponding to the relative position from the sound information SD stored in the storage 200 on the basis of the relative positional relationship corresponding to the relative position acquired by the relative position acquirer. The sound of the sound information SD selected by the selector 308 is generated by a generator (the speaker 30 in the present example) and the moving person is notified of information about the physical object (for example, a position where the object is present, the length of the object continuous in the horizontal direction, the height of the object, and the like) detected by the detector.
Also, in each of the above-described embodiments, a case in which the detector is the stereo camera including the right camera 20-1 and the left camera 20-2 has been described. However, the detector may have any configuration as long as it can detect a physical object in front of the person to be guided and measure the distance from the person to be guided to the detected physical object. For example, a radar device or a light detection and ranging or laser imaging detection and ranging (LIDAR) sensor may be used as the detector. The radar device is a device that detects at least the distance to the physical object or a direction by radiating radio waves toward the physical object near the person to be guided and measuring the radio waves (reflected waves) reflected by the physical object. Here, the radio waves emitted by the radar device refer to electromagnetic waves having a lower frequency (in other words, a longer wavelength) than light. Electromagnetic waves having the lowest frequency among those having the properties of light are referred to as infrared rays (or far infrared rays), and the radio waves radiated by the radar device have a still lower frequency (for example, millimeter waves or the like). Also, the radar device may detect the position and the speed of the physical object in a frequency modulated continuous wave (FM-CW) scheme. LIDAR is one of the remote sensing technologies using light. In the LIDAR technology, pulsed laser light is radiated and the scattered light is measured, so that the distance to a distant object and properties of the object can be analyzed.
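As general background not stated in the embodiments, the FM-CW scheme infers the distance R to the physical object from the beat frequency between the transmitted chirp and the reflected wave; a standard form of this relationship, with sweep bandwidth $B$, sweep duration $T$, beat frequency $f_b$, and the speed of light $c$, is:

$$ f_b = \frac{2RB}{cT} \qquad\Longleftrightarrow\qquad R = \frac{c\,T\,f_b}{2B}. $$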
Although modes for carrying out the present invention have been described above using the embodiments, the present invention is not limited to the embodiments and various modifications and replacements can be applied without departing from the spirit and scope of the present invention.
Number | Date | Country | Kind
---|---|---|---
2018-113865 | Jun 2018 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2019/023706 | 6/14/2019 | WO | 00