The present application claims priority to and incorporates by reference the entire contents of Japanese Patent Application No. 2020-001079 filed in Japan on Jan. 7, 2020.
The present disclosure relates to a moving body control device, a moving body control method, and a computer readable recording medium.
A technique for operating a vehicle by a gesture is known (for example, see JP 2018-172028 A). In the technique disclosed in JP 2018-172028 A, when autonomous driving of a vehicle is executed after operations of driving operators such as an accelerator pedal and a steering wheel are disabled, a gesture or the like of a user's hand imaged by an imaging device or the like is input to a control unit, and acceleration/deceleration and steering of the vehicle are controlled in accordance with the gesture to change a travel route.
In the technique disclosed in JP 2018-172028 A, in place of driving by a user who operates the driving operators, the acceleration/deceleration and steering of the vehicle are controlled based on the gesture of the user. Therefore, though a driver of a moving body such as a vehicle has been able to enjoy driving, it has been difficult for other passengers who ride on this moving body to enjoy the pleasure of riding on the moving body. Moreover, it has been difficult for the driver to enjoy the pleasure of riding on the moving body when it is necessary to intermittently stop the operation of the moving body, for example, during a traffic congestion. From these points, there has been a demand for a technology that allows a user who rides on a moving body to enjoy the pleasure of riding on the same.
There is a need for a moving body control device, a moving body control method, and a computer readable recording medium, which allow a user who rides on a moving body to enjoy the pleasure of riding on the same.
According to one aspect of the present disclosure, there is provided a moving body control device including a processor including hardware, the processor being configured to: acquire spatial information of at least one of an outside and inside of a moving body; generate a virtual image including the information of the at least one of the outside and inside of the moving body based on the spatial information; output the generated virtual image to a display unit visually recognizable by a user who rides on the moving body; acquire a detection result of a predetermined action of the user when the user performs an action; update and output the virtual image based on the action of the user in the detection result; and output a control signal for the moving body, the control signal being based on the detection result.
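For reference, the sequence of operations recited above may be pictured as a simple loop of acquisition, image generation, action detection, and control output. The following is a minimal, non-limiting sketch in Python; the class and method names (sensors, renderer, display, action_detector, travel_unit, and so on) are illustrative assumptions and do not appear in the present disclosure.

```python
# Minimal, non-limiting sketch of the loop recited above. All class and method
# names (sensors, renderer, display, action_detector, travel_unit) are
# illustrative assumptions and are not defined in the disclosure.

class MovingBodyController:
    def __init__(self, sensors, renderer, display, action_detector, travel_unit):
        self.sensors = sensors                    # spatial information of outside/inside
        self.renderer = renderer                  # builds a virtual image from spatial information
        self.display = display                    # display unit visually recognizable by the user
        self.action_detector = action_detector    # detects a predetermined action of the user
        self.travel_unit = travel_unit            # accepts control signals for the moving body

    def step(self):
        # 1. Acquire spatial information of at least one of the outside and inside.
        spatial_info = self.sensors.acquire()

        # 2. Generate a virtual image based on the spatial information and output it
        #    to the display unit.
        self.display.show(self.renderer.render(spatial_info))

        # 3. Acquire a detection result of a predetermined action of the user.
        action = self.action_detector.detect()
        if action is not None:
            # 4. Update and output the virtual image based on the detected action.
            self.display.show(self.renderer.render(spatial_info, action=action))
            # 5. Output a control signal for the moving body based on the detection result.
            self.travel_unit.send(action.to_control_signal())
```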
Hereinafter, an embodiment will be described with reference to the drawings. Note that the same reference numerals are assigned to the same or corresponding portions in all the drawings of the following embodiments. Moreover, the present disclosure is not limited by the embodiments to be described below.
First, a moving body control device according to the embodiment will be described.
As illustrated in
The traffic information server 20 collects traffic information on a road and acquires information about traffic or the like on the road. The traffic information server 20 includes a control unit 21, a communication unit 22, a storage unit 23, and a traffic information collection unit 24.
The control unit 21 specifically includes a processor such as a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA), and a main storage unit such as a random access memory (RAM) and a read only memory (ROM).
The communication unit 22 is composed by using a communication module capable of wired or wireless communication, for example, a local area network (LAN) interface board, a wireless communication circuit, or the like. The LAN interface board or the wireless communication circuit may connect to the network 2, for example, a public communication network such as the Internet. Moreover, the communication unit 22 may be made capable of communicating with the outside in accordance with a predetermined communication standard, for example, 4G, 5G, Wireless Fidelity (Wi-Fi) (registered trademark), Bluetooth (registered trademark), or the like. The communication unit 22 may connect to the network 2 and communicate with the moving body terminal device 10 or the like. The communication unit 22 may also connect to the network 2 and communicate with beacons or the like which acquire traffic information. The communication unit 22 transmits the traffic information to the moving body terminal device 10 as needed. Note that information transmitted by the communication unit 22 is not limited to such information.
The storage unit 23 is composed of a storage medium selected from an erasable programmable ROM (EPROM), a hard disk drive (HDD), a solid state drive (SSD), a removable medium, and the like. Note that the removable medium is, for example, a universal serial bus (USB) memory or a disc recording medium such as a compact disc (CD), a digital versatile disc (DVD), and a Blu-ray disc (BD) (registered trademark). In the storage unit 23, it is possible to store an operating system (OS), various programs, various tables, various databases, and the like.
The control unit 21 loads a program stored in the storage unit 23 into a work area of the main storage unit, executes the program, and controls respective component units and the like through the execution of the program. Thus, the control unit 21 may achieve a function that meets a predetermined purpose. The storage unit 23 stores a traffic information database 23a.
Via the communication unit 22, the traffic information collection unit 24 collects traffic information from, for example, radio signs such as beacons placed on a road or the like. The traffic information collected by the traffic information collection unit 24 is stored in the traffic information database 23a of the storage unit 23 so as to be searchable. Note that the traffic information collection unit 24 may further include a storage unit. Moreover, the traffic information collection unit 24, the control unit 21, the communication unit 22, and the storage unit 23 may be composed separately from one another.
In the moving body control system, the first wearable device 30 and the second wearable device 40 may be made communicable with each other via the network 2. Moreover, another server communicable with the moving body terminal device 10, the first wearable device 30, and the second wearable device 40 via the network 2 may be provided. In the following, a description will be given of a vehicle, and particularly, an autonomous driving vehicle capable of autonomous travel, which is taken as an example of the moving body 1; however, the present disclosure is not limited to this, and the moving body 1 may be a vehicle, a motorcycle, a drone, an airplane, a ship, a train, or the like, which travels by being driven by a driver.
As illustrated in
The moving body terminal device 10 includes a control unit 11, an imaging unit 12, a sensor group 13, an input unit 14, a car navigation system 15, a communication unit 16, and a storage unit 17. The sensor group 13 includes a line-of-sight sensor 13a, a vehicle speed sensor 13b, an opening/closing sensor 13c, and a seat sensor 13d.
The control unit 11 and the storage unit 17 have physically similar configurations to those of the control unit 21 and the storage unit 23, which are mentioned above. The control unit 11 controls respective components of the moving body terminal device 10 in a centralized manner, and also controls the travel unit 18, thereby controlling operations of various components mounted on the moving body 1 in the centralized manner. The storage unit 17 stores a map database 17a composed of various map data.
The communication unit 16 as a communication terminal of the moving body 1 may be composed of, for example, a data communication module (DCM) or the like, which communicates with an external server, for example, the traffic information server 20 or the like by wireless communication made via the network 2. The communication unit 16 may perform road-vehicle communication of communicating with antennas or the like, which are placed on the road. That is, the communication unit 16 may perform the road-vehicle communication or the like with the beacons or the like, which acquire traffic information. The communication unit 16 may perform inter-vehicle communication of communicating with a communication unit 16 of another moving body 1. The road-vehicle communication and the inter-vehicle communication may be performed via the network 2. Moreover, the communication unit 16 is composed to be communicable with an external device in accordance with a predetermined communication standard, for example, 4G, 5G, Wireless Fidelity (Wi-Fi) (registered trademark), Bluetooth (registered trademark), or the like. The communication unit 16 receives traffic information from the traffic information server 20 via the network 2 as needed. Note that the information transmitted and received by the communication unit 16 is not limited to such information.
By the control of the control unit 11, the communication unit 16 communicates with various devices in accordance with the above-mentioned predetermined communication standard. Specifically, under the control of the control unit 11, the communication unit 16 may transmit and receive various information to and from the first wearable device 30 worn by the user U1 who rides on the moving body 1. Moreover, the communication unit 16 is capable of transmitting and receiving various information to and from the other moving body 1 and the second wearable device 40 worn by the user U2. Note that the predetermined communication standard is not limited to the above-mentioned standards.
A plurality of the imaging units 12 are provided outside the moving body 1. For example, the imaging units 12 may be provided at four positions of the moving body 1, which are forward, backward and both-side positions thereof, so that a shooting angle of view becomes 360°. Furthermore, a plurality of the imaging units 12 may be provided inside the moving body 1. Under the control of the control unit 11, the imaging units 12 individually capture an external space and internal space of the moving body 1, thereby generating image data in which the external space and the internal space are reflected, and outputting the generated image data to the control unit 11. The imaging unit 12 is composed by using an optical system and an image sensor. The optical system is composed by using one or more lenses. The image sensor is composed of a charge coupled device (CCD), a complementary metal oxide semiconductor (CMOS), or the like, which receives a subject image formed by the optical system, thereby generating image data.
The sensor group 13 is composed by including various sensors. For example, the line-of-sight sensor 13a detects line-of-sight information including a line of sight and retina of the user U1 who rides on the moving body 1, and outputs the detected line-of-sight information to the control unit 11. The line-of-sight sensor 13a is composed by using an optical system, a CCD or CMOS, a memory, and a processor including hardware such as a CPU and a graphics processing unit (GPU). For example, by using known template matching, the line-of-sight sensor 13a detects, as a reference point, an unmovable portion of an eye of the user U1, for example, the inner corner of the eye, and detects, as a moving point, a movable portion of the eye, for example, the iris. The line-of-sight sensor 13a detects the line of sight of the user U1 based on a positional relationship between the reference point and the moving point, and outputs a detection result to the control unit 11. The line-of-sight sensor 13a may detect retinal veins of the user U1, and may output a detection result to the control unit 11.
Note that, although the line of sight of the user U1 is detected by a visible camera as the line-of-sight sensor 13a in the embodiment, the present disclosure is not limited to this, and the line of sight of the user U1 may be detected by an infrared camera. In a case in which the line-of-sight sensor 13a is composed of an infrared camera, infrared light is applied to the user U1 by means of an infrared light emitting diode (LED) or the like, a reference point (for example, corneal reflection) and a moving point (for example, a pupil) are detected from image data generated by capturing an image of the user U1 by using the infrared camera, and the line of sight of the user U1 is detected based on a positional relationship between the reference point and the moving point.
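As a rough illustration of the positional relationship mentioned above, the gaze direction can be estimated from the offset of the moving point (for example, the iris or pupil center) relative to the reference point (for example, the inner corner of the eye or the corneal reflection). The following minimal sketch assumes both points have already been extracted as pixel coordinates; the conversion gain is an illustrative calibration constant, not a value given in the disclosure.

```python
def estimate_gaze(reference_pt, moving_pt, gain_deg_per_px=0.35):
    """Estimate gaze angles from the reference point (e.g. the inner corner of the
    eye, or the corneal reflection) and the moving point (e.g. the iris or pupil).

    Both points are (x, y) pixel coordinates in the eye image. The gain that
    converts a pixel offset into degrees is an illustrative calibration constant.
    """
    dx = moving_pt[0] - reference_pt[0]
    dy = moving_pt[1] - reference_pt[1]
    yaw_deg = dx * gain_deg_per_px     # horizontal gaze angle
    pitch_deg = -dy * gain_deg_per_px  # vertical gaze angle (image y grows downward)
    return yaw_deg, pitch_deg

# Example: the iris sits 20 px to the right of and 5 px above the inner corner.
print(estimate_gaze((120, 80), (140, 75)))  # approximately (7.0, 1.75)
```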
The vehicle speed sensor 13b detects a vehicle speed of the moving body 1 during traveling, and outputs a detection result to the control unit 11. The opening/closing sensor 13c detects opening/closing of a door through which the user goes in and out, and outputs a detection result to the control unit 11. The opening/closing sensor 13c is composed by using, for example, a push switch or the like. The seat sensor 13d detects a seated state of each seat, and outputs a detection result to the control unit 11. The seat sensor 13d is composed by using a load detection device, a pressure sensor or the like, which is placed below a seat surface of each seat provided in the moving body 1.
The input unit 14 is composed of, for example, a keyboard, a touch panel keyboard that is incorporated into the display unit 152a and detects a touch operation on a display panel, a voice input device that enables a call with the outside, or the like. Here, the call with the outside includes not only a call with another moving body terminal device 10 but also, for example, a call with an operator who operates an external server or an artificial intelligence system, or the like. When the input unit 14 is composed of a voice input device, the input unit 14 receives an input of a voice of the user U1, and outputs audio data, which corresponds to the received voice, to the control unit 11. The voice input device is composed by using a microphone, an A/D conversion circuit that converts, into audio data, a voice received by the microphone, an amplifier circuit that amplifies the audio data, and the like. Note that a speaker microphone capable of outputting a sound may be used instead of the microphone.
The car navigation system 15 includes a positioning unit 151 and a notification unit 152. The positioning unit 151 receives, for example, signals from a plurality of global positioning system (GPS) satellites and transmission antennas, and calculates a position of the moving body 1 based on the received signals. The positioning unit 151 is composed by using a GPS receiving sensor and the like. Orientation accuracy of the moving body 1 may be improved by mounting a plurality of the GPS receiving sensors or the like, each of which forms the positioning unit 151. Note that a method in which light detection and ranging/laser imaging detection and ranging (LiDAR) is combined with a three-dimensional digital map may be adopted as a method of detecting the position of the moving body 1. The notification unit 152 includes a display unit 152a that displays an image, a video, and character information, and a voice output unit 152b that generates a sound such as a voice and an alarm sound. The display unit 152a is composed by using a display such as a liquid crystal display and an organic electroluminescence (EL) display. The voice output unit 152b is composed by using a speaker and the like.
The car navigation system 15 superimposes a current position of the moving body 1, which is acquired by the positioning unit 151, on the map data stored in the map database 17a of the storage unit 17. Thus, the car navigation system 15 may notify the user U1 of information including a road on which the moving body 1 is currently traveling, a route to a destination, and the like by at least one of the display unit 152a and the voice output unit 152b. The display unit 152a displays characters, figures and the like on a screen of the touch panel display under the control of the control unit 11. Note that the car navigation system 15 may include the input unit 14. In this case, the display unit 152a, the voice output unit 152b, and the input unit 14 may be composed of a touch panel display, a speaker microphone, and the like, and the display unit 152a may be caused to function while including a function of the input unit 14. Under the control of the control unit 11, the voice output unit 152b outputs a voice from the speaker microphone, thereby notifying the outside of predetermined information, and so on.
Note that the moving body 1 may include a key unit that performs, as a short-range radio communication technology, authentication based on, for example, Bluetooth Low Energy (BLE) authentication information with a user terminal device owned by the user, and executes locking and unlocking of the moving body 1.
The travel unit 18 includes a drive unit 181 and a steering unit 182. The drive unit 181 includes a drive device necessary for traveling of the moving body 1, and a drive transmission device that transmits drive to wheels and the like. Specifically, the moving body 1 includes a motor or an engine as a drive source. The motor is driven by electric power from a battery. The engine is driven by combustion of fuel and is capable of generating electricity by using an electric motor or the like. The generated electric power is stored in a rechargeable battery. The moving body 1 includes a drive transmission mechanism that transmits driving force of the drive source, drive wheels for traveling, and the like. The steering unit 182 changes a steering angle of the wheels serving as steered wheels, and determines a traveling direction and orientation of the moving body 1.
The room facilities 19 include a seat portion 191 having a reclining function, for example. Note that the room facilities 19 may further include an air conditioner, a vehicle interior light, a table, and the like.
Next, a description will be given of a configuration of the first wearable device 30.
The first wearable device 30 including the moving body control device, which is illustrated in
As illustrated in
As illustrated in
The line-of-sight sensor 33 detects an orientation of the line of sight of the user U1 wearing the first wearable device 30, and outputs a detection result to the control unit 38. The line-of-sight sensor 33 is composed by using an optical system, an image sensor such as a CCD and a CMOS, a memory, and a processor including hardware such as a CPU. For example, by using known template matching, the line-of-sight sensor 33 detects, as a reference point, an unmovable portion of the eye of the user U1, for example, the inner corner of the eye, and detects, as a moving point, a movable portion of the eye, for example, the iris. The line-of-sight sensor 33 detects the orientation of the line of sight of the user U1 based on a positional relationship between the reference point and the moving point.
Under the control of the control unit 38, the projection unit 34 as a display unit projects a virtual image of an image, a video, character information and the like toward the retina of the user U1 wearing the first wearable device 30. The projection unit 34 is composed by using a red, green and blue (RGB) laser, a micro-electro-mechanical systems (MEMS) mirror, a reflecting mirror, and the like. The RGB laser emits laser beams of respective RGB colors. The MEMS mirror reflects the laser beams. The reflecting mirror projects the laser beams, which are reflected from the MEMS mirror, onto the retina of the user U1. Note that the projection unit 34 may be a unit that causes a lens 39 of the first wearable device 30 to display a virtual image by projecting the virtual image thereon under the control of the control unit 38.
The GPS sensor 35 calculates position information about a position of the first wearable device 30 based on signals received from the plurality of GPS satellites, and outputs the calculated position information to the control unit 38. The GPS sensor 35 is composed by using a GPS receiving sensor and the like.
The wearing sensor 36 detects a wearing state of the user U1, and outputs a detection result to the control unit 38. The wearing sensor 36 is composed by using a pressure sensor that detects a pressure when the user U1 wears the first wearable device 30, a vital sensor that detects vital information of the user U1, such as a body temperature, a pulse, brain waves, a blood pressure, and a sweating state, and the like.
The communication unit 37 is composed by using a communication module capable of wireless communication. Under the control of the control unit 38, the communication unit 37 transmits and receives various information to and from the moving body terminal device 10 in accordance with the above-mentioned predetermined communication standard.
The control unit 38 physically has a configuration similar to those of the above-mentioned control units 11 and 21, and is composed by using a memory and a processor including hardware that is any of a CPU, a GPU, an FPGA, a DSP, an application specific integrated circuit (ASIC), and the like. The control unit 38 controls operations of the respective units which compose the first wearable device 30. The control unit 38 includes an acquisition unit 381, a determination unit 382, a generation unit 383, an output control unit 384, and a travel control unit 385. In the embodiment, the control unit 38 functions as a processor of the moving body control device.
The acquisition unit 381 acquires various information from the moving body terminal device 10 via the communication unit 37. The acquisition unit 381 may acquire, for example, traffic information from the moving body terminal device 10, or may acquire the traffic information from the traffic information server 20 via the network 2 and the moving body terminal device 10. The acquisition unit 381 may acquire the behavior information of the user U1, the vital information thereof, and user identification information thereof. Note that the acquisition unit 381 is also able to acquire various information from an external server via the communication unit 37 and the network 2.
The determination unit 382 makes a determination based on the various information acquired by the acquisition unit 381. Specifically, the determination unit 382 may determine, for example, whether or not the travel unit 18 may be controlled, whether or not the user U1 is riding on the moving body 1, whether or not an operation control may be started, whether or not action data based on physical information of the user U1 is input, and so on. Note that the physical information of the user U1 includes the behavior information indicating a behavior thereof, the vital information, the user identification information, the line-of-sight information, and the like. Moreover, the determination unit 382 may also determine whether or not predetermined information is input from the input unit 14 of the moving body terminal device 10. Furthermore, the determination unit 382 may have a trained model generated by machine learning using a predetermined input/output data set, which includes an input parameter for making a determination and an output parameter indicating a determination result. In this case, the determination unit 382 may make the determination based on the output parameter obtained by inputting the acquired input parameter to the trained model.
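Where the determination unit 382 uses a trained model, the determination amounts to inputting the acquired input parameters to the model and reading its output parameter. The following is a non-limiting sketch assuming a scikit-learn style classifier; the feature names and the tiny training data set are purely illustrative.

```python
# Non-limiting sketch of a determination using a trained model, assuming a
# scikit-learn style classifier. Feature names and training data are illustrative.
from sklearn.linear_model import LogisticRegression

# Input parameters: [vehicle speed (km/h), gap to the vehicle ahead (m),
# number of nearby pedestrians]; output parameter: 1 = operation control allowed.
X_train = [[5, 30, 0], [8, 25, 1], [60, 10, 0], [40, 5, 3]]
y_train = [1, 1, 0, 0]
model = LogisticRegression().fit(X_train, y_train)

def operation_control_allowed(speed_kmh, gap_m, pedestrians):
    # The determination result is the output parameter obtained by inputting the
    # acquired input parameters to the trained model.
    return bool(model.predict([[speed_kmh, gap_m, pedestrians]])[0])

print(operation_control_allowed(6, 28, 0))  # expected to be True for slow, congested travel
```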
The control unit 38 causes the projection unit 34 to output a predetermined virtual image in the field of view of the user U1 based on the line-of-sight information of the user U1, which is detected by the line-of-sight sensor 33, and based on the behavior information thereof. That is, the generation unit 383 generates a virtual image viewed from a viewpoint of the user U1 by using spatial information of the moving body 1, which is acquired by the acquisition unit 381. The output control unit 384 controls an output of the virtual image to the projection unit 34, the virtual image being generated by the generation unit 383. Note that details of the virtual image generated by the generation unit 383 will be described later. Based on the action data regarding the action of the user U1, the action data being acquired by the acquisition unit 381, the travel control unit 385 outputs a control signal corresponding to the action data and capable of controlling the travel unit 18 of the moving body 1 via the moving body terminal device 10.
Next, a description will be given of moving body control processing executed by the first wearable device 30.
As illustrated in
When the determination unit 382 determines that the user U1 does not ride on the moving body 1 (step ST2: No), the moving body control processing ends. On the other hand, when the determination unit 382 determines that the user U1 rides on the moving body 1 (step ST2: Yes), the processing proceeds to step ST3.
In step ST3, based on the acquired position information, the acquisition unit 381 starts to acquire traffic information through the road-vehicle communication, the inter-vehicle communication or the like or to acquire traffic information from the traffic information server 20 or the like. Note that such acquisition of the position information and the traffic information by the acquisition unit 381 is continuously executed during the execution of the moving body control processing.
Next, in step ST4, the determination unit 382 determines whether or not the action of the user U1, which is detected by the behavior sensor 32, is an input action of a request signal for requesting the start of the operation control. When the action of the user U1 is an input action of the request signal, the request signal is input to the determination unit 382 in accordance with this action. Note that the request signal for requesting the start of the operation control may be input from the communication unit 16 to the acquisition unit 381 via the communication unit 37 based on the operation of the user U1 on the input unit 14 of the moving body terminal device 10. In the present specification, the operation control refers to control of the travel unit 18 for the travel of the moving body 1 and control of an action of an avatar image in the virtual image of the first wearable device 30, both performed in response to the action, utterance and the like of the user U1.
When the determination unit 382 determines that the request signal is not input (step ST4: No), step ST4 is repeatedly executed until the request signal is input. On the other hand, when the determination unit 382 determines that the request signal is input (step ST4: Yes), the processing proceeds to step ST5.
The acquisition unit 381 acquires spatial information regarding at least one of the internal space and external space of the moving body 1 (step ST5). Specifically, via the communication unit 37, the acquisition unit 381 acquires image data, which is generated in such a manner that the imaging unit 12 of the moving body 1 captures the inside of the moving body 1, as spatial information regarding the internal space. The acquisition unit 381 acquires image data, which is generated in such a manner that the imaging unit 12 of the moving body 1 captures the external space of the moving body 1, as spatial information regarding the external space. Furthermore, the acquisition unit 381 acquires image data generated by the capturing of the imaging devices 31 as spatial information. Note that the acquisition unit 381 acquires the image data, which is generated by the imaging unit 12 of the moving body 1, as the spatial information regarding the external space, but the acquisition unit 381 is not limited to this. For example, based on the position information of the moving body 1, the acquisition unit 381 may acquire, as the spatial information regarding the external space, image data around a current position of the moving body 1 from the map data recorded in the map database 17a.
Next, in step ST6, the generation unit 383 generates a virtual image, and the output control unit 384 outputs the generated virtual image to the projection unit 34. Specifically, the generation unit 383 first generates the virtual image viewed from the viewpoint of the user U1 by using the spatial information acquired by the acquisition unit 381. The output control unit 384 outputs the virtual image, which is generated by the generation unit 383, to the projection unit 34. The projection unit 34 projects the input virtual image toward the retina of the user U1. This allows the user U1 to recognize the virtual image.
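One conceivable way to realize step ST6 is to select, from the acquired spatial information, the camera image that faces the same direction as the user's line of sight and to pass it to the projection unit 34. The following sketch assumes that the spatial information is held as a dictionary of camera frames keyed by camera direction and that the projection unit exposes a project() method; both are assumptions made for illustration only.

```python
# Sketch of step ST6: generate a virtual image viewed from the user's viewpoint
# and output it to the projection unit. The data layout is an assumption.

def select_view(spatial_info, gaze_yaw_deg):
    """Pick the camera frame whose direction best matches the user's gaze.

    spatial_info maps a camera direction in degrees (0 = front, 90 = right, ...)
    to that camera's latest frame.
    """
    best = min(spatial_info, key=lambda d: abs(((d - gaze_yaw_deg + 180) % 360) - 180))
    return spatial_info[best]

def output_virtual_image(spatial_info, gaze_yaw_deg, projection_unit):
    frame = select_view(spatial_info, gaze_yaw_deg)
    projection_unit.project(frame)  # projection unit 34 projects toward the retina

class _FakeProjector:
    def project(self, frame):
        print("projecting:", frame)

frames = {0: "front_frame", 90: "right_frame", 180: "rear_frame", 270: "left_frame"}
output_virtual_image(frames, gaze_yaw_deg=75.0, projection_unit=_FakeProjector())  # right camera
```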
Here, the virtual image generated by the generation unit 383 will be described.
Subsequently, as illustrated in
Thereafter, based on the spatial information and the behavior information, which are acquired by the acquisition unit 381, the determination unit 382 determines whether or not the posture of the user U1 has changed, that is, whether or not the action data indicating the action of the user U1 is input to the acquisition unit 381 (step ST8). When the determination unit 382 determines that the action data as data of a predetermined action is not input (step ST8: No), the processing returns to step ST5. On the other hand, when the determination unit 382 determines that the action data is input (step ST8: Yes), the processing proceeds to step ST9.
Thereafter, the generation unit 383 generates a virtual image corresponding to the action data, and the output control unit 384 outputs the virtual image (step ST9). Specifically, in response to the action of the user U1, the generation unit 383 generates the virtual image viewed from the viewpoint of the user U1 or the bird's eye viewpoint by using the spatial information acquired by the acquisition unit 381. The output control unit 384 outputs the virtual image P1 and a virtual image P2, which are generated by the generation unit 383, to the projection unit 34, which projects them toward the retina of the user U1.
For example, when the user U1 performs an action to turn the steering wheel illustrated in
Further, for example, when the user U1 performs an action similar to that of a bird as a preset action, the virtual image P2 generated by the generation unit 383 is such a virtual image, for example, as illustrated in
As illustrated in
Thus, the user U1 may recognize the current situation around the moving body 1. For example, when the moving body 1 is involved in a traffic congestion, the user U1 may recognize the state of the beginning of the traffic congestion and the like from the virtual image P2. Therefore, the user U1 may visually recognize the extent of the traffic congestion in which the user U1 is involved, so that an effect of alleviating stress and anxiety caused by the traffic congestion may be expected.
Next, in step ST10 illustrated in
On the other hand, when the determination unit 382 determines that the safety level of the moving body 1 does not meet the standard (step ST10: No), the processing proceeds to step ST11. The travel control unit 385 of the control unit 38 disconnects the control of the travel unit 18 (step ST11). Specifically, the control unit 38 blocks or stops the transmission of the control signal for controlling the travel unit 18 from the travel control unit 385 to the moving body terminal device 10. In this case, the moving body terminal device 10 continues to control the travel unit 18 by the control signal based on the control program for the autonomous driving. Thus, even if the travel unit 18 is configured to be controllable in response to the action and utterance of the user U1, the travel unit 18 may be made uncontrollable when safety may not be ensured, and accordingly, the safety of the moving body 1 may be ensured.
Here, the safety level may be calculated based on various parameters. Specifically, the safety level may be calculated based on values obtained by quantifying, for the moving body 1 on which the user U1 rides, distances to the moving bodies 1 ahead and behind, a travel route, a speed and an acceleration, whether or not an emergency vehicle or the like is present in the vicinity, and the like. Moreover, the safety level may be calculated based on the number of signals and pedestrian crossings on the road on which the moving body 1 travels, the number of pedestrians in the vicinity, values obtained by quantifying weather conditions, road conditions, and the like. Note that the safety level of the moving body 1 may be calculated by using a trained model generated by machine learning.
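As one illustrative possibility, the quantified parameters mentioned above could be combined into a single safety level by a weighted sum compared against a standard value, as in the following sketch. The weights, normalization ranges, and the standard value are illustrative assumptions and are not specified in the disclosure.

```python
# Sketch of a safety-level calculation as a weighted sum of quantified parameters.
# The weights, normalization ranges, and the standard value of 0.7 are illustrative.

def safety_level(gap_front_m, gap_rear_m, speed_kmh, emergency_vehicle_nearby,
                 pedestrians_nearby, weather_factor):
    """Return a value in [0, 1]; a larger value means the situation is judged safer."""
    score = 0.0
    score += 0.25 * min(gap_front_m / 50.0, 1.0)       # headway to the moving body ahead
    score += 0.15 * min(gap_rear_m / 50.0, 1.0)        # gap to the moving body behind
    score += 0.25 * max(0.0, 1.0 - speed_kmh / 100.0)  # lower speed is treated as safer
    score += 0.15 * (0.0 if emergency_vehicle_nearby else 1.0)
    score += 0.10 * max(0.0, 1.0 - pedestrians_nearby / 10.0)
    score += 0.10 * weather_factor                     # 1.0 = clear, 0.0 = severe weather
    return score

def meets_standard(level, standard=0.7):
    # Step ST10: the operation control is permitted only when the safety level
    # meets the standard.
    return level >= standard

level = safety_level(40, 35, 8, False, 0, 1.0)
print(round(level, 2), meets_standard(level))
```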
Moreover, as a method for the determination unit 382 to determine whether or not the travel unit 18 is controllable by the operation control, a method other than the determination using the safety level may be adopted. For example, the determination unit 382 may determine whether or not the travel unit 18 is controllable by the operation control by determining whether or not the moving body 1 is involved in a traffic congestion. In this case, the determination unit 382 may determine that the moving body 1 is involved in a traffic congestion when the vehicle speed of the moving body 1 is a predetermined speed or lower for a predetermined time or longer. The predetermined speed may be arbitrarily set, for example, to 10 km/h or the like, and the predetermined time may also be arbitrarily set, for example, to 10 minutes or the like. Further, the determination unit 382 may determine whether or not the moving body 1 is involved in a traffic congestion at the current position based on the traffic information acquired by the acquisition unit 381. For example, based on the traffic information, the determination unit 382 may determine that the moving body 1 is involved in a traffic congestion when stop and start states are repeated for 15 minutes or more and the line of cars extends for 1 km or more. Note that various methods may be used to determine whether or not the moving body 1 is involved in a traffic congestion. In the case of having determined that the moving body 1 is involved in a traffic congestion, the determination unit 382 may determine that the travel unit 18 is controllable by the operation control, whereas in the case of having determined that the moving body 1 is not involved in a traffic congestion, the determination unit 382 may determine that the control of the travel unit 18 by the operation control is impossible.
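The two congestion criteria exemplified above, namely the vehicle speed remaining at or below a predetermined speed for a predetermined time, and traffic information indicating stop-and-go travel over a long line of cars, can be checked as in the following sketch. The thresholds follow the examples given in the text (10 km/h, 10 minutes, 15 minutes, 1 km); the shape of the speed history and of the traffic information is an assumption made for illustration.

```python
# Sketch of the congestion determination described above. The thresholds follow
# the examples in the text; the data structures are assumptions.

def congested_by_speed(speed_history_kmh, sample_period_s=1.0,
                       speed_threshold_kmh=10.0, duration_threshold_s=600.0):
    """True if the vehicle speed has stayed at or below the threshold for the
    required duration. speed_history_kmh holds one sample per sample_period_s
    seconds, newest last."""
    slow_time_s = 0.0
    for v in reversed(speed_history_kmh):
        if v > speed_threshold_kmh:
            break
        slow_time_s += sample_period_s
    return slow_time_s >= duration_threshold_s

def congested_by_traffic_info(info):
    """True if the traffic information reports stop-and-go travel for 15 minutes
    or more over a line of cars of 1 km or more (illustrative keys)."""
    return info.get("stop_and_go_min", 0) >= 15 and info.get("queue_km", 0.0) >= 1.0

# Example: 12 minutes of 5 km/h samples taken once per second.
print(congested_by_speed([5.0] * 720))                                       # True
print(congested_by_traffic_info({"stop_and_go_min": 20, "queue_km": 1.5}))   # True
```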
As a method for the determination unit 382 to determine whether or not the travel unit 18 is controllable by the operation control, the above-mentioned determination using the safety level and the determination as to whether or not the traffic congestion is present may be combined with each other. For example, in the case of having determined that the moving body 1 is involved in the traffic congestion or in the case of having determined that the safety level in the moving body 1 is a predetermined value or more, the determination unit 382 may determine that the travel unit 18 is controllable by the operation control.
Next, in step ST12, based on the action data of the user U1, which is acquired by the acquisition unit 381, the travel control unit 385 outputs the control signal corresponding to the action data, and controls the travel unit 18 of the moving body 1 via the moving body terminal device 10. Hereinafter, a specific example of control for the moving body 1, which uses the first wearable device 30, will be described.
Specifically, for example, the virtual image P1 or the like illustrated in
Moreover, character information corresponding to the travel path on which the moving body 1 is autonomously driven may be displayed. Specifically, for example, when the moving body 1 travels on a travel path that turns to the left, character information such as “turn the steering wheel to the left” is displayed on the virtual image P1, thus notifying the user U1 of the action. When the user U1 turns the steering wheel to the left in response to this, the steering unit 182 of the travel unit 18 may be controlled at timing associated with the action, and the moving body 1 may be operated so as to turn to the left.
Furthermore, for example, when a virtual image of a hand-turning handle is displayed and the user U1 performs an action of rotating the hand-turning handle in the virtual image, an amount of exercise of the action of the user U1 may be calculated to charge a battery in accordance with the amount of exercise. Specifically, the battery may be charged by changing an engine speed in the drive unit 181 of the travel unit 18, and so on, based on the calculated amount of exercise of the user U1. Note that such a configuration may be adopted in which the moving body 1 is provided with an operator such as an actually rotated hand-turning handle and a generator, electricity is generated by actually operating the operator such as the hand-turning handle in matching with the virtual image, and the electricity is stored in the battery. Furthermore, a predetermined coefficient may be set based on the vital information of the user U1, which is detected by the wearing sensor 36, and the calculated amount of exercise may be multiplied by the predetermined coefficient to change the control of the drive unit 181 in response to the operation of the user U1. For example, the setting of the amount of exercise required for increasing the engine speed by 100 rpm may be increased in an athlete mode or the like and may be decreased in a normal mode or the like.
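The charging example above can be summarized as follows: an amount of exercise is computed from the detected rotation of the virtual hand-turning handle, scaled by a mode-dependent coefficient derived from the vital information, and converted into an engine-speed command for the drive unit 181. The following sketch uses illustrative constants and mode names; none of them are specified in the disclosure.

```python
# Sketch of the hand-turning-handle charging example. The conversion constant and
# the mode coefficients are illustrative assumptions.

MODE_COEFFICIENT = {
    "normal": 1.0,   # baseline conversion from exercise to engine-speed increase
    "athlete": 0.5,  # athlete mode: more exercise is required per 100 rpm increase
}

def engine_speed_increase_rpm(handle_revolutions, mode="normal",
                              rpm_per_revolution=10.0):
    """Convert the detected rotation of the virtual hand-turning handle into an
    engine-speed increase used to charge the battery."""
    amount_of_exercise = handle_revolutions          # simplest possible measure
    coefficient = MODE_COEFFICIENT.get(mode, 1.0)
    return amount_of_exercise * coefficient * rpm_per_revolution

print(engine_speed_increase_rpm(20, mode="normal"))   # 200.0 rpm
print(engine_speed_increase_rpm(20, mode="athlete"))  # 100.0 rpm
```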
Moreover, for example, when a virtual image of a brake, an accelerator and the like is displayed and the user U1 performs an action of stepping on the accelerator, it is also possible to increase the engine speed and to accelerate the moving body 1 by controlling the drive unit 181 of the travel unit 18. Likewise, when the user U1 performs an action of stepping on the brake, it is possible to reduce the speed of the moving body 1 by controlling the drive unit 181 of the travel unit 18.
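Taken together, the steering wheel, accelerator, and brake examples amount to a small dispatch from detected action data to a control signal for the drive unit 181 or the steering unit 182. The following sketch assumes an illustrative signal format and value ranges; it is not a signal format defined in the disclosure.

```python
# Sketch of mapping detected action data to travel-unit control signals. The
# action names, signal format, and value ranges are illustrative assumptions.

def action_to_control_signal(action, magnitude):
    """Translate a detected user action into a control signal dictionary that the
    moving body terminal device could forward to the travel unit 18."""
    if action == "turn_steering_left":
        return {"target": "steering_unit", "steering_angle_deg": -magnitude}
    if action == "turn_steering_right":
        return {"target": "steering_unit", "steering_angle_deg": magnitude}
    if action == "press_accelerator":
        # Increase the engine speed of the drive unit to accelerate the moving body.
        return {"target": "drive_unit", "engine_speed_delta_rpm": 100.0 * magnitude}
    if action == "press_brake":
        # Reduce the speed of the moving body via the drive unit.
        return {"target": "drive_unit", "engine_speed_delta_rpm": -100.0 * magnitude}
    return None  # unrecognized actions produce no control signal

print(action_to_control_signal("turn_steering_left", 15))
print(action_to_control_signal("press_accelerator", 0.4))
```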
Further, while the user U1 is performing the action, the control to the travel unit 18 may be disconnected in steps ST10 and ST11 illustrated in
Subsequently, as illustrated in
When the moving body is a moving body capable of autonomous travel, the driving operation by the user becomes unnecessary while the user is riding on the moving body and moving. Moreover, also when the user rides on and drives the moving body, traffic congestion may occur. Even in such a case, according to the embodiment, which is described above, the user U1 who rides on the moving body 1 performs the action, which corresponds to the virtual image projected on the projection unit 34 of the first wearable device 30, while viewing the virtual image, and may thereby operate the moving body 1. Thus, the user U1 becomes capable of enjoying the pleasure of riding on the moving body 1 and the pleasure of operating the moving body 1.
In the first modification, a virtual image generated by the generation unit 113 is displayed on the display unit 152a. The action of the user U1 is captured by the imaging unit 12 or detected by the action sensor 13e. Thus, the acquisition unit 111 of the control unit 11 may acquire the behavior information of the user U1. Moreover, the user U1 may wear a wristwatch-type wearable device capable of acquiring the vital information of the user U1 and the like and capable of communicating with the communication unit 16. Then, the vital information of the user U1 may be transmitted from the wearable device worn by the user U1 via the communication unit 16 to the acquisition unit 111 of the moving body terminal device 10A. Thus, the determination unit 112 may make a determination based on the vital information of the user U1. With the above configuration, also in the first modification, the same effect as that of the above-mentioned embodiment may be obtained.
Next, a description will be given of the second wearable device 40 according to a second modification of the embodiment.
The second wearable device 40 as a moving body control device illustrated in
As illustrated in
As illustrated in
Under the control of the control unit 49, the display unit 44 displays stereoscopically visible images, videos, character information, and the like. The display unit 44 is composed by using a pair of left and right display panels having a predetermined parallax therebetween. The display panel is composed by using liquid crystal, organic electroluminescence (EL), or the like. The operation unit 47 receives an input of the operation of the user U2, and outputs a signal corresponding to the received operation to the control unit 49. The operation unit 47 is composed by using buttons, switches, a jog dial, a touch panel, or the like.
The control unit 49 controls operations of the respective units which compose the second wearable device 40. The control unit 49 includes an acquisition unit 491, a determination unit 492, a generation unit 493, an output control unit 494, and a travel control unit 495. The acquisition unit 491, the determination unit 492, the generation unit 493, the output control unit 494, and the travel control unit 495 are the same as the acquisition unit 381, the determination unit 382, the generation unit 383, the output control unit 384, and the travel control unit 385, which are mentioned above, respectively.
In the second modification, the virtual image generated by the generation unit 493 is displayed on the display unit 44. The action of the user U2 is captured by the imaging devices 41 or detected by the behavior sensor 42. Thus, the acquisition unit 491 of the control unit 49 may acquire the behavior information of the user U2. With the above configuration, also in the second modification, the same effect as that of the above-mentioned embodiment may be obtained.
Next, a description will be given of an example of a virtual image displayed by the wearable device 30 or 40 according to a third modification of the embodiment and a user's action corresponding to the virtual image.
In the third modification, as illustrated in
In the fourth modification, as illustrated in
In the fifth modification, as illustrated in
In the sixth modification, as illustrated in
In the seventh modification, as illustrated in
The above-mentioned user U1 is mainly a driver, but the users U2 to U7 may be drivers or fellow passengers on the moving body 1. Thus, the fellow passengers other than the driver may recognize a peripheral region of the moving body 1 or control the moving body 1 by their own actions corresponding to the virtual image, and accordingly, may enjoy the pleasure of riding on the moving body 1. Moreover, in each of the above-mentioned third to seventh modifications, the description is given of an example in which the battery is charged or the moving body 1 is moved in response to the user's action, but the objects to be controlled are not necessarily limited to the charge of the battery and the movement of the moving body 1. That is, various controls for the moving body 1, which correspond to the user's action, may be arbitrarily set.
In the above-mentioned embodiment, a program to be executed by the moving body terminal device, the first wearable device, or the second wearable device may be recorded in a recording medium readable by a computer, other machines, or a device such as a wearable device (hereinafter referred to as a computer or the like). The computer or the like is caused to read and execute the program in the recording medium, whereby the computer or the like functions as the moving body control device. Here, the recording medium readable by a computer or the like refers to a non-transitory recording medium configured to electrically, magnetically, optically, mechanically, or chemically store information, such as data and programs, so as to be read by a computer or the like. Such recording media include recording media removable from the computer or the like, for example, flexible disks, magneto-optical disks, CD-ROMs, CD-R/Ws, DVDs, BDs, DATs, magnetic tapes, and memory cards such as flash memories. Furthermore, recording media fixed to the computer or the like include hard disks, ROMs, and the like. Moreover, a solid state drive (SSD) may be used as a recording medium removable from the computer or the like, and also as a recording medium fixed to the computer or the like.
Furthermore, the program to be executed by the moving body terminal device, the first wearable device, the second wearable device, and the server according to the embodiment may be stored on a computer connected to a network such as the Internet, and may be provided by being downloaded via the network.
Although the embodiment has been specifically described above, the present disclosure is not limited to the embodiment mentioned above, and various modifications based on the technical idea of the present disclosure may be adopted. For example, the virtual images and the actions, which are described in the above embodiment, are merely examples, and different virtual images and operations may be used.
In the above-mentioned embodiment, the description is given of the examples of using the eyeglass-type wearable device and the wristwatch-type wearable device, which may be worn by the user, but the present disclosure is not limited to these, and the above-mentioned embodiment may be applied to various wearable devices. For example, as illustrated in
Further, according to the above-mentioned embodiment, the first wearable device projects an image onto the retina of the user U1 to cause the user U1 to view the image. Alternatively, the first wearable device may be a device that projects and displays an image onto the lens 39 of, for example, glasses.
Moreover, in the embodiment, the “units” mentioned above may be replaced with “circuits” and the like. For example, the control unit may be replaced with a control circuit.
Meanwhile, in the description of the flowchart in the present specification, although the expressions "first", "then", "subsequently", and the like are used to clarify the processing order of the steps, the processing order for carrying out each of the present embodiments shall not be uniquely defined by these expressions. That is, the processing order in the flowchart described in the present specification may be changed as long as no inconsistency arises.
According to some embodiments, the riding user may perform an action corresponding to the virtual image while viewing the virtual image, and may control the moving body in response to the action of the user. Accordingly, it becomes possible for the user who rides on the moving body to enjoy the pleasure of riding on the same.
According to some embodiments, the moving body may be controlled while traveling safety of the moving body is ensured, so that the user who rides on the same may feel a sense of security.
According to some embodiments, since the control of the moving body may be executed while the moving body is involved in the traffic congestion, the user who rides on the moving body may enjoy the time even during the traffic congestion, and boredom and stress which the user is likely to feel in a situation of being involved in the traffic congestion may be alleviated.
According to some embodiments, the user may visually recognize the external situation, and therefore, for example, when the moving body is involved in the traffic congestion, the traffic congestion may be recognized in a bird's-eye view, and therefore, the stress and anxiety which the user is likely to feel due to the traffic congestion may be alleviated.
According to some embodiments, even if the user performs various actions, the user may visually recognize the virtual image displayed on the display unit, so that the sense of presence, which is received by the user, may be maintained.
According to some embodiments, the riding user may perform an action corresponding to the virtual image while viewing the virtual image, and may control the moving body in response to the action of the user. Accordingly, it becomes possible for the user who rides on the moving body to enjoy the pleasure of riding on the same.
According to some embodiments, the riding user may perform an action corresponding to the virtual image while viewing the virtual image, and may control the moving body in response to the action of the user. Accordingly, it becomes possible for the processor to execute processing for enabling the user who rides on the moving body to enjoy the pleasure of riding on the same.
In accordance with the moving body control device, the moving body control method and the program according to the present disclosure, the riding user may perform the action corresponding to the virtual image while viewing the virtual image, and may control the moving body in response to the action of the user. Accordingly, it becomes possible for the user who rides on the moving body to enjoy the pleasure of riding on the same.
Although the disclosure has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.