The present invention relates generally to mechanical robots.
In recent years, there has been increased interest in computerized robots such as mechanical pets, which can provide many of the same advantages as their living, breathing counterparts. These mechanical pets are designed to fulfill certain functions, all of which provide entertainment, and in many cases general utility, to the owner.
As an example, Sony's AIBO robot is designed to mimic many of the functions of a common household pet. AIBO's personality develops by interacting with people, and each AIBO grows and develops in a different way based on these interactions. AIBO's mood changes with its environment, and its mood affects its behavior. The AIBO can provide certain features and entertainment to the owner by executing tasks and actions based on its programming and the commands of the user. An AIBO can perform any number of functions, e.g., producing sounds that resemble a dog's bark.
In general, a mechanical “robot” as used herein and to which the present invention is directed includes movable mechanical structures such as the AIBO or Sony's QRIO robot that contain a computer processor, which in turn controls electro-mechanical mechanisms such as wheel drive units and “servos” that are connected to the processor. These mechanisms cause the structure to perform certain ambulatory actions (such as arm or leg movement).
A mechanical robot includes a body, a processor mounted on the body, and one or more electro-mechanical mechanisms controlled by the processor to cause the body to ambulate. A sensor such as a sound sensor (e.g., a microphone) and/or a motion sensor (e.g., a camera) is electrically connected to the processor, and the processor compares a sensed sound and/or image from the sensor with predetermined criteria to selectively generate an intruder alert in response. In this regard, the robot can use adaptive learning algorithms to learn from past decisions. For example, a user can speak approvingly of a “correct” intruder alert response and disapprovingly of an incorrect response, and the robot, using, e.g., voice recognition software or tone sensors, can then correlate the action to whether it was “correct” based on the user's input, which may also be made using a keyboard or keypad entry device on the robot. Sony's U.S. Pat. No. 6,711,469 discusses further adaptive learning principles.
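By way of non-limiting illustration only, such feedback-driven adaptation could be realized as a simple threshold update, sketched below in Python. The suspicion score and the +1/-1 reward are assumed to be supplied by the recognition and user-feedback modules just described; all names in the sketch are hypothetical and do not limit the invention.

```python
# Illustrative sketch of feedback-driven tuning of the intruder-alert decision.
# The feedback source (spoken approval/disapproval or keypad entry) is assumed
# to have already been reduced to a +1 / -1 reward by upstream recognition code.

class AdaptiveAlertPolicy:
    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold          # alert when the suspicion score exceeds this
        self.learning_rate = learning_rate  # how strongly user feedback shifts the threshold

    def should_alert(self, suspicion_score: float) -> bool:
        """Decide whether a sensed sound/image warrants an intruder alert."""
        return suspicion_score > self.threshold

    def apply_feedback(self, alerted: bool, reward: int) -> None:
        """Correlate the last decision with user approval (+1) or disapproval (-1).

        Disapproval of an alert raises the threshold (fewer false alarms);
        disapproval of a non-alert lowers it (fewer missed intruders).
        """
        if reward < 0:
            if alerted:
                self.threshold = min(1.0, self.threshold + self.learning_rate)
            else:
                self.threshold = max(0.0, self.threshold - self.learning_rate)

# Example: the robot alerted on a suspicion score of 0.7 and the user approved.
policy = AdaptiveAlertPolicy()
alerted = policy.should_alert(0.7)
policy.apply_feedback(alerted, reward=+1)
```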
In some non-limiting implementations the processor compares an image from the camera with data stored in the processor to determine whether a match is established. The intruder alert may be generated if a match is not established, i.e., if a sensed person is a stranger, or the intruder alert may be generated when a match is established if, for instance, the sensed person is correlated to a known “bad person”. If desired, in the latter case the robot can include a wireless communication module and automatically contact “911” or another emergency response service using conventional telephony or VoIP. The robot can also execute a non-lethal response such as emitting a shrill sound to alert nearby people.
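A minimal, non-limiting sketch of that decision flow follows. The face database, telephony hook, and sound output (`known_faces`, `dial_emergency`, `emit_shrill_tone`) are hypothetical stand-ins for whatever modules a particular robot provides.

```python
# Illustrative decision flow: match a sensed person against stored data, then
# choose between no action, a local intruder alert, and an emergency call.

def handle_sensed_person(face_id, known_faces, bad_persons,
                         dial_emergency, emit_shrill_tone):
    """face_id: identifier produced by the recognition step, or None for no match.
    known_faces: set of identifiers the robot has been taught.
    bad_persons: subset of known_faces flagged as "bad" by the user."""
    if face_id is None or face_id not in known_faces:
        # Stranger: raise a local intruder alert (e.g., a shrill sound).
        emit_shrill_tone()
        return "intruder_alert"
    if face_id in bad_persons:
        # Known "bad person": alert and automatically contact emergency services
        # over the robot's wireless module (conventional telephony or VoIP).
        emit_shrill_tone()
        dial_emergency("911")
        return "emergency_contacted"
    return "no_action"
```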
In another aspect, a mechanical robot includes a body, a processor mounted on the body, and one or more electro-mechanical mechanisms controlled by the processor to cause the body to ambulate. Means on the robot sense a visible and/or aural disturbance and generate a signal in response. Also, means are on the robot for comparing a sensed sound and/or image represented by the signal with predetermined criteria, with means being provided on the robot for selectively generating an intruder alert in response to the means for comparing.
In still another aspect, a mechanical robot includes a body, a processor mounted on the body, and one or more electro-mechanical mechanisms controlled by the processor to cause the body to ambulate. A sensor such as a sound sensor (e.g., a microphone) and/or a motion sensor is electrically connected to the processor; the motion sensor can be a multi-directional camera that can be preprogrammed based on user preferences and that can be accessed using a wireless module on the robot. The processor compares a sensed sound and/or image from the sensor with predetermined criteria to selectively play music in response.
The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
Referring initially to
In some non-limiting implementations an external beacon receiver 8 such as a global positioning satellite (GPS) receiver is mounted on the robot 2 as shown and is electrically connected to the processor 6. Other beacon receivers such as radio frequency identification (RFID) beacon receivers can also be used. Using information from the receiver 8, the processor 6 can determine the robot's location.
As set forth further below, the camera 10 can be used as the robot's primary mode of sight. As also set forth below, as the robot 2 “roams” the camera 10 can take pictures of people in its environment and the processor 6 can perform face recognition based on the images acquired through the camera 10. A microphone 11 may also be provided on the robot 2 and can communicate with the processor 6 for sensing, e.g., voice commands and other sounds.
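As one non-limiting illustration, the face-recognition comparison might reduce each camera frame to a numeric signature (an embedding) and find the closest stored signature. The sketch below assumes such a signature is computed elsewhere and is not intended to describe any particular recognition algorithm.

```python
import math

# Illustrative face-matching step: compare a signature ("embedding") computed
# from a camera frame against signatures the robot has stored for known people.

def match_face(embedding, stored, max_distance=0.6):
    """embedding: list of floats describing the sensed face.
    stored: dict mapping person name -> reference list of floats.
    Returns the best-matching name, or None if nobody is close enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in stored.items():
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(embedding, ref)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None
```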
Additionally, the robot 2 may be provided with the ability to deliver messages from one person/user to another through an electronic delivery device, generally designated 12, that is mounted on the robot 2 and that is electrically connected to the processor 6. This device can be, but is not limited to, a small television screen and/or a speaker that delivers the visual and/or verbal message.
Now referring to
Commencing at block 13, the robot detects a new sound (by means of the microphone 11) or motion (by means of the camera 10 or other motion sensor) in its environment. Disturbance detection can be performed by the robot by means known in the art, e.g., by simply detecting motion when a passive infrared (PIR) sensor or video camera is used. Further examples of disturbances are the sound of an alarm clock or a new person entering the robot's sensor range. Moving to block 14, the robot records data from the object creating the new disturbance. At block 16, the robot's processor 6 has the option of performing certain pre-set actions based on the new disturbance(s) it has detected, as set forth further below.
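Purely by way of illustration, the flow of blocks 13, 14, and 16 could be organized as a sense-record-dispatch loop such as the following; the `microphone`, `camera`, and `handle_disturbance` interfaces are hypothetical placeholders rather than required components.

```python
import time

# Illustrative main loop for blocks 13, 14, and 16: detect a new disturbance
# (sound or motion), record data about it, then hand it to a pre-set handler.

def monitor_environment(microphone, camera, handle_disturbance, poll_seconds=0.5):
    while True:
        disturbance = None
        if microphone.new_sound_detected():                      # block 13: new sound
            disturbance = ("sound", microphone.record_clip())     # block 14: record data
        elif camera.motion_detected():                            # block 13: new motion
            disturbance = ("motion", camera.capture_frames())     # block 14: record data
        if disturbance is not None:
            handle_disturbance(*disturbance)                      # block 16: pre-set actions
        time.sleep(poll_seconds)
```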
In
In the latter regard, the robot can access face and/or voice recognition information and algorithms stored internally in the robot to compare an image of a person's face (or a voice recording) to data in the internal database of the robot, and the robot's actions can depend on whether the face (and/or voice) is recognized. For instance, if a person is not recognized, the robot can emit an audible and/or visual alarm signal. Alternatively, if the person is recognized and the internal database indicates the person is a “bad” person, the alarm can be activated.
If the new data is expected or at least does not correlate to a preprogrammed “bad” disturbance, the logic proceeds to block 24, where the robot does not alert the user about the new disturbance. If the new data is not expected or otherwise indicates an alarm condition, however, the logic moves to block 26, at which the robot alerts the user about the new disturbance. The robot can perform the alert function in many ways that may include, but are not limited to, making “barking” sounds by means of the above-mentioned speaker that mimic those made by a dog, flashing alert lights on the above-mentioned display or other structure, or locating and making physical contact with the user in order to draw the user's attention.
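One non-limiting way to express the branch of blocks 24 and 26, together with the alert actions just listed, is sketched below; the speaker, light, and locomotion interfaces are hypothetical placeholders.

```python
# Illustrative branch for blocks 24/26: either ignore an expected disturbance
# or alert the user through one of several actuators.

def respond_to_disturbance(is_expected, is_bad, speaker, lights, locomotion):
    if is_expected and not is_bad:
        return "ignored"                     # block 24: no alert
    # Block 26: alert the user; any or all of these channels may be used.
    speaker.play("bark.wav")                 # dog-like "barking" sounds
    lights.flash(times=5)                    # flashing alert lights on the display
    locomotion.go_to_user_and_nudge()        # physical contact to draw attention
    return "user_alerted"
```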
Additionally, when an “expected” or “good” person is recognized by virtue of voice and/or face recognition, the robot may correlate the person to preprogrammed music or other information that the person or another user may have entered into the internal data structures of the robot as being favored by that person. Then, the information can be presented by the robot, e.g., by playing the music on the above-mentioned speaker.
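By way of illustration only, a small lookup table keyed by the recognized person suffices to sketch this behavior; the names, file identifiers, and speaker/display interfaces below are hypothetical placeholders.

```python
# Illustrative "greeting" behavior: map a recognized person to preprogrammed
# music or other favored information and present it on the robot.

favorites = {
    "alice": {"music": "alice_theme.mp3", "message": "Welcome home, Alice!"},
    "bob":   {"music": "bob_playlist.mp3", "message": "Hi, Bob."},
}

def greet_recognized_person(name, speaker, display):
    prefs = favorites.get(name)
    if prefs is None:
        return
    display.show(prefs["message"])   # show favored information on the screen
    speaker.play(prefs["music"])     # play the person's preprogrammed music
```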
While the particular ENHANCEMENTS TO MECHANICAL ROBOT as herein shown and described in detail is fully capable of attaining the above-described objects of the invention, it is to be understood that it is the presently preferred embodiment of the present invention and is thus representative of the subject matter which is broadly contemplated by the present invention. The scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art, and the scope of the present invention is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more”. It is not necessary for a device or method to address each and every problem sought to be solved by the present invention for it to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. Absent express definitions herein, claim terms are to be given all ordinary and accustomed meanings that are not irreconcilable with the present specification and file history.