This invention relates generally to systems, devices, and methods for communicating with a pet.
In the past few years, there has been a rapid advance and convergence of communication technologies that exploit the low cost and ubiquitous nature of Wi-Fi and the Internet. Low-cost Internet cameras (e.g., Web cameras or “webcams”) are configured to transmit live audio and video feeds over the Internet. Some Internet cameras allow remote-controlled tilting and panning. Video “chat” services, such as Skype and Apple's FaceTime, provide video communication with another person who has Internet access. One of the driving forces in the rapid progress and evolution of these communication technologies is our desire to keep in touch with family and friends.
This desire to communicate with family and friends also applies to one of the most important members of a typical family, the pet dog. A simple Internet camera to keep an eye on the family dog when he is home alone suffers from a number of drawbacks. For example, Internet cameras and their associated computers typically cannot be controlled by dogs, which cannot manipulate the requisite input devices, such as keyboards and/or mice. The advance of Internet communication technologies now makes possible devices, systems, and methods to better communicate with family dogs.
Example embodiments of the present invention are described in detail below with reference to the following drawings:
FIG. 12 is a flow diagram that illustrates a first process for interacting with a pet, according to an example embodiment.
FIG. 13 is a flow diagram that illustrates a second process for interacting with a pet, according to an example embodiment.
Embodiments described herein provide enhanced methods and systems for human-pet communication and, more particularly, for remote communication and interaction between a pet and its owner, caretaker, trainer, family member, or the like. Example embodiments provide an Internet Canine Communication System (“ICCS”). Some embodiments of the ICCS include a device (e.g., base station, home device) that is configured to deliver treats to a dog and to transmit audio/visual communication between the dog and a remote client device operated by a human user.
In this embodiment, the components of the ICCS 100 are arranged in a housing that is substantially in the shape of a rectangular prism. In other embodiments, other shapes may be used, including cylindrical, pyramidical, or the like. In some embodiments, the ICCS 100 may be built into the wall of a house or other structure (e.g., a cabinet, refrigerator).
To use the ICCS 100, a family member, pet owner, caretaker, or other user (e.g., trainer) first trains the dog to use the ICCS 100. The ICCS 100 is initially mounted on a wall or other suitable stable surface (e.g., cabinet, chest of drawers, computer table) using Velcro fasteners 122a and 122b, at a height that is appropriate for the size of the dog. The user next places treats 134 in each of the treat compartments 132 of the treat carousel 130, closes the top 118, and then secures the top 118 with the latch 120.
Next, with the dog in the room, the user pushes the training button 116 on the side of the ICCS 100. Pushing the training button 116 causes a treat delivery command or signal to be transmitted to a treat delivery subsystem or module of the ICCS 100, including those components that are involved in delivering a treat to the food tray 114, notifying the dog, and the like. More specifically, pushing the training button 116 activates the bell 108 and flashing light 146, and also activates the carousel motor 148 to rotate and advance the carousel 130 until the next treat compartment 132 is positioned above the opening 136 in the carousel floor layer 138. Upon rotation of the carousel 130, the treat 134 drops through the opening 136, down the treat slide 142, and onto the food tray 114. Transmission of the treat delivery command or signal may be direct (e.g., via hard wiring between the training button 116 and the carousel 130) or indirect (e.g., intermediated via a microprocessor or other data processing module).
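For illustration only, the ordering of events triggered by the training button 116 can be summarized in a short Python sketch. The function and parameter names below are hypothetical and are not part of the described embodiment, which may implement the same sequence with hard wiring or with a microprocessor.

```python
def deliver_treat(ring_bell, flash_light, advance_carousel):
    """Minimal sketch of the sequence triggered by training button 116.

    The three callables are hypothetical stand-ins for the bell 108, the
    flashing light 146, and the carousel motor 148.
    """
    ring_bell()         # bell 108 sounds to notify the dog
    flash_light()       # light 146 flashes to draw the dog's attention
    advance_carousel()  # motor 148 rotates carousel 130 until the next treat
                        # compartment 132 sits above opening 136; treat 134
                        # then drops down slide 142 onto food tray 114


# Usage example with trivial stand-ins:
deliver_treat(lambda: print("ring bell 108"),
              lambda: print("flash light 146"),
              lambda: print("advance carousel 130"))
```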
While the bell 108 is ringing and the light 146 is flashing, the dog's attention is easily directed to the food tray 114, since the dog can see the treat through the activation cover 110, and can smell the treat through the holes 112. As shown in
The above training procedure, initiated by pressing the training button 116, can be repeated as often as necessary until the dog 200 is trained to push open the activation cover 110 every time he hears the bell 108. The final step in the training process is to push the training button 116 when the dog 200 is not in the room. It may take several repetitions (possibly initially requiring some other command to get the dog to come into the room), but the dog will soon learn to enter the room, approach the ICCS 100, and push the activation cover 110 to obtain a treat every time he hears the bell 108.
Before the ICCS 100 can be used to remotely communicate with the dog 200, it is first configured to interact with a user's mobile device 300. Some embodiments may include security and/or authentication (e.g., based on passwords, device identifiers, or the like), such that only authorized users can access the ICCS 100. In addition, the camera positioning lever 104 is used to adjust the camera angle to accommodate the size of the dog. Before leaving the house, the user also fills the treat compartments 132 and secures the top 118.
To use the ICCS 100, as shown in
Note that other embodiments may have different mechanisms or employ different techniques for delivering treats and determining when to turn off the bell 108. Some embodiments may, for example, eject a treat onto the floor or into a tray of the ICCS 100, and then determine to turn off the bell 108 (or other audio device of the ICCS) in response to an input received from the dog. An input may include pushing the activation cover 110 (as above), but may also or instead include pressing a response lever or panel configured to be easy for the dog to push with its nose or paw, detected motion (e.g., via a motion detector of the ICCS), a detected sound (e.g., a bark detected via a microphone of the ICCS), detected heat (e.g., via an infrared sensor), or the like. Some embodiments thus may not include one or more of the components related to determining when the treat has been taken, such as the activation cover 110, and the like.
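A minimal sketch of this "any pet input silences the bell" behavior is shown below, assuming a simple polling design; the function names, the sensor abstraction, and the timeout are assumptions for illustration and are not defined by the embodiment.

```python
import time

def wait_for_pet_input(sensors, stop_bell, timeout_s=120.0, poll_s=0.1):
    """Silence the bell when any pet-input sensor fires (hypothetical sketch).

    `sensors` is a list of zero-argument callables that each return True when
    their input is detected: the activation cover 110 being pushed, a response
    lever pressed, motion detected, a bark heard, or body heat sensed.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(sensor() for sensor in sensors):
            stop_bell()  # turn off bell 108 (or other audio device)
            return True
        time.sleep(poll_s)
    stop_bell()          # stop after the timeout even if no input arrives
    return False
```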
Once the dog has accepted the call, the user can further interact with the dog. The user can speak to the dog through the speaker 107, and give additional rewards by pressing the “give treat” button 310 on the mobile app. Pushing the “give treat” button 310 activates the rotation of the carousel and rings the bell, in a similar fashion to the “call dog” button 306, allowing the dog to receive additional treats during the call. Finally, at the end of the interaction with the dog 200, the user presses the “end call” button 308 on the mobile app. This turns off the camera 102, microphone 106, and light 146, thereby resetting the ICCS 100 for the next call. The “record video” button 312 allows the family member to record entertaining or memorable video received during a call.
Other embodiments of the ICCS may include the option to use the ICCS as a simple remote camera, along with remote panning and zooming. Other embodiments of the ICCS may include multiple treat chutes and/or trays, so that the dog is not sure which platter, tray, or position the treat will go to. Having multiple treat trays produces entertaining movements of the dog's head and eyes that are visible on the video during the call. In addition, a toy such as a spinning dial or moving object, activated by the mobile device client app and positioned near the camera on the ICCS, makes the dog's excitement visible on the video and helps direct the dog's attention to the camera. Interesting toys or puzzle accessories may be attached to the ICCS, or linked remotely to the ICCS, in order to present additional challenges or games for the dog to get to a treat while being viewed on the app. In addition, recorded videos may be shared online with other family members and others in an online ICCS community.
Note that while operation of the ICCS is herein primarily described with respect to a pet dog, other embodiments may be configured to operate with other types of pets or animals (e.g., in a zoo or scientific study). For example, one embodiment may provide an Internet Feline Communication System that dispenses food treats, cat toys, and/or catnip to cause the cat to interact with the system. The Internet Feline Communication System may also or instead present audio/video of birds or other animals to attract a pet cat's attention.
In the embodiment of
Another embodiment of the ICCS may bypass some or all of the electronics and programming associated with providing audio/video communication by using conventional video conferencing software (e.g., Skype) and/or devices, such as the webcam 400 described above. For example, a desktop computer installed in the same room as the ICCS may include an attached webcam and speakers and have installed video conferencing software that is configured to transmit video and/or audio between the room and a remote client device. In other embodiments, the ICCS may itself include a computer processor that is configured to run a stock or customized video conferencing client. Such approaches potentially lower the cost of producing an ICCS by reducing or eliminating some of the data processing functions and/or software requirements included in the treat delivery housing.
Also shown is a personal computer 500 (e.g., a laptop or desktop computer) that is communicatively coupled via a USB or similar connection to a webcam 400. In other embodiments, the personal computer 500 and the webcam 400 are in wireless communication. The personal computer 500 is also communicatively coupled to a mobile device 300 via the Internet 510 or other communication network.
The personal computer 500 includes logic 501 that is configured to communicate with the mobile device 300, the webcam 400, and the ICCS 100″. In particular, the logic 501 is configured to receive inputs from the mobile device 300 and to forward them (or signals based thereon) to the ICCS 100″. The logic 501 is also configured to receive video/audio data from the webcam 400 and forward it to the mobile device 300.
In one embodiment, the logic 501 may be video conferencing software (e.g., Skype) that is configured to ring or play some other tone or sound in response to an incoming call from a video conferencing app executing on the mobile device 300, or able to receive voice or sound commands from the mobile device 300. The logic 501 plays the ring or other sound via the audio connection to the ICCS 100″. For example, if the audio connection is a cable to the ICCS 100″, the sound may be played via the audio device 502 and then detected by the microphone 106. In another configuration, the logic 501 plays the sound via a speaker of the personal computer 500, such that the sound can be detected by the microphone 106 of the ICCS 100″. In general, after the logic 501 automatically answers an incoming call, the ICCS 100″ may be activated in one or more of the following ways: a sound initiated by the logic 501 and played by the audio device 502; a sound initiated by the mobile device 300 (e.g., in response to a button pressed by a user), received by the logic 501, and then played by the audio device 502; a spoken command uttered by the user of the mobile device 300, transmitted to the logic 501, and then played by the audio device 502. Any of these sounds (or other ones generated in other ways) can then be detected by the microphone 106, causing activation of the ICCS 100″ as described further below. In some embodiments, the ICCS 100″ includes logic (e.g., software, hardware, and/or firmware) to recognize spoken commands received via the microphone 106. The recognition logic may be a speech recognizer that is configured to recognize specific words, phrases, speakers (e.g., to differentiate between different users based on features of their voices), and the like.
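As one purely hypothetical sketch of these activation paths, the snippet below models the logic 501 auto-answering a call and a simple dispatch of recognized command words heard by the microphone 106; the command vocabulary, the function names, and the assumption of a text-producing speech recognizer are illustrative only and are not recited by the embodiment.

```python
def handle_incoming_call(answer_call, play_activation_sound):
    """Hypothetical sketch: logic 501 auto-answers, then plays a sound that
    the microphone 106 of the ICCS 100'' can detect (via the audio device 502
    or a speaker of the personal computer 500)."""
    answer_call()            # logic 501 automatically answers the call
    play_activation_sound()  # ring/tone/spoken command played near mic 106


# Illustrative command vocabulary only; not defined by the embodiment.
KNOWN_COMMANDS = {"treat", "call"}

def on_recognized_speech(text, dispense_treat, call_dog):
    """Dispatch a recognized spoken command heard via microphone 106."""
    word = text.strip().lower()
    if word == "treat":
        dispense_treat()  # delivery module 503 dispenses a treat
    elif word == "call":
        call_dog()        # audio device 502 plays the whistle/bell
```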
In response to the sound detected by the microphone 106, the ICCS 100″ causes the delivery module 503 to dispense a treat and/or causes the audio device 502 to play a sound (e.g., a whistle or bell) that calls the dog. Concurrently, the logic 501 automatically answers the incoming call, and begins to transmit audio/video captured by the Web camera 400 to the mobile device 300. The logic 501 may also receive audio/video from the mobile device 300, and play it via a display or speaker of the personal computer 500 or the audio device 502 of the ICCS 100″.
During the communication session, the user of the mobile device 300 may transmit other commands to the ICCS 100″. For example, the user may press keys or other input controls that generate sounds, or may issue a voice command word, which, when played by the personal computer 500, cause the ICCS 100″ to perform certain functions, such as delivering a treat, making a sound, shaking a toy, or the like.
As noted, the logic 501 and the corresponding client software on the mobile device 300 may be standard video conferencing software. In other embodiments, the logic 501 and the client on the mobile device 300 may be custom software. For example, the logic 501 may be configured to respond to dial tones (e.g., DTMF tones) generated by the mobile device and to forward those tones (or signals/commands based thereon) to the ICCS 100″. Other hardware arrangements are also contemplated. For example, the webcam 400 may be incorporated into the personal computer 500, as is often the case with laptops or tablet computers.
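A hypothetical sketch of such DTMF handling is shown below; the specific digit-to-action mapping and the forwarding callable are assumptions chosen only for illustration.

```python
# Illustrative digit-to-action mapping; not specified by the embodiment.
DTMF_ACTIONS = {
    "1": "deliver_treat",
    "2": "ring_bell",
    "3": "shake_toy",
}

def forward_dtmf(tone, send_to_iccs):
    """Sketch of logic 501 translating a detected DTMF tone into a command
    (or forwarding the tone itself) for the ICCS 100''."""
    action = DTMF_ACTIONS.get(tone)
    if action is not None:
        send_to_iccs(action)


# Usage example with a trivial stand-in for the ICCS link:
forward_dtmf("1", lambda cmd: print("send to ICCS:", cmd))
```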
Another embodiment of the ICCS provides an outgoing call feature. More specifically, the ICCS may provide and implement a protocol for allowing the dog or other pet to call or videoconference with family members or other persons. A sound and/or visual cue is programmed in the ICCS to activate at a certain time or time interval. This sound can be distinct from the sound indicating an incoming call. For example, a softer sound or visual cue can catch the dog's attention when the dog wanders into the room, rather than a louder sound indicating an incoming call that needs to be heard throughout the house and answered immediately. During the “outgoing call period” the dog is trained to push the treat door (or another door or button) and receive a treat. Alternatively, the treat can be dispensed when the outgoing call period begins. Operation of the treat door by the dog activates the ICCS to initiate an outgoing call to a designated user (or users). Once the outgoing call is accepted by the user, the user can interact and give treats as described with respect to incoming calls, above. It should be noted that the dog gets a treat for initiating the call during the outgoing call period, even if the call is not answered by the user. If the dog activates the treat door at other times, no treat is given. In some embodiments, once the dog initiates an outgoing call during the outgoing call period, the period is ended; this way, the dog cannot repeatedly make a call in order to obtain more treats.
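The outgoing-call rules above (a reward during the period even if the call is unanswered, nothing outside the period, and one call per period) can be captured in a small state machine; the class and method names below are hypothetical and chosen only to mirror the description.

```python
class OutgoingCallWindow:
    """Hypothetical state machine for the ICCS outgoing call feature."""

    def __init__(self, play_soft_cue, dispense_treat, start_call):
        self._play_soft_cue = play_soft_cue    # softer sound / visual cue
        self._dispense_treat = dispense_treat  # treat delivery mechanism
        self._start_call = start_call          # call the designated user(s)
        self._open = False

    def begin_period(self):
        """Invoked at the programmed time or interval."""
        self._open = True
        self._play_soft_cue()

    def on_treat_door_pressed(self):
        """Invoked when the dog operates the treat door (or other control)."""
        if not self._open:
            return          # outside the period: no treat, no call
        self._open = False  # end the period so repeat presses earn nothing
        self._dispense_treat()  # reward even if the user never answers
        self._start_call()      # initiate the outgoing videoconference call
```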
Embodiments of the ICCS may also provide various training-related techniques. For example, a disembodied voice coming from the ICCS may be confusing to the dog, since it sounds quite different to the dog than a “live voice”, and since it is disassociated from the smell and sight of the person speaking. For this reason, the dog may need to re-learn his established repertoire of tricks in response to a new set of speaker commands. One process for training tricks with the ICCS involves having the dog hear both the sound of a person's live voice and the sound of the person's voice through the speaker of the ICCS. This process may be effectuated by having the user give commands via a mobile client device while in the room with the dog and the ICCS. Thus, the dog will hear the user's voice both live and via the speaker of the ICCS at the same time. Over time, the user can move further and further away from the ICCS while uttering a command, causing the commands played via the ICCS speaker to become more dominant. Alternatively, the user can cause the commands played via the ICCS speaker to be played louder (e.g., at increased volume), such as by pressing an appropriate input on the mobile client device, thereby making those commands more dominant.
In another training process, the ICCS includes an audio recording function, which can be used to record commands for training purposes. The user can use a mobile client device (or controls on the ICCS) to play recorded commands via the ICCS speaker. The user can further reinforce such commands with a live voice to the extent necessary to train the dog to do tricks in response to commands played by the ICCS speaker.
In some embodiments, the treat carousel of the ICCS is configured to facilitate selection of a specific treat for the pet. For example, the user of a mobile client device may specify a particular treat compartment or treat type by way of a remote command. In response to receiving the command, the ICCS rotates the treat carousel to the selected compartment and delivers a treat therefrom. In this manner, the ICCS may facilitate the delivery of different types of treats to the pet.
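A minimal sketch of rotating the carousel to a user-selected compartment is given below; the indexing scheme and the single-step motor interface are assumptions for illustration, not features of the embodiment.

```python
def rotate_to_compartment(current_index, target_index, num_compartments,
                          step_carousel):
    """Hypothetical sketch: advance treat carousel 130 one compartment at a
    time until the selected compartment sits above opening 136."""
    steps = (target_index - current_index) % num_compartments
    for _ in range(steps):
        step_carousel()      # advance carousel motor 148 by one position
    return target_index      # new position; the treat drops by gravity


# Usage example with a trivial stand-in for the motor:
position = rotate_to_compartment(0, 3, 8, lambda: print("step carousel"))
```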
In blocks 1002-1006, the process operates in a training mode, during which the pet is trained to interact with the ICCS. More specifically, the process begins at block 1002, where it receives an indication of a user selection of a training control. As discussed above, the ICCS may include a training control such as a button or switch, or circuitry/logic to receive voice command words from the trainer in the room, or from the mobile device. The training control may also or instead be situated on a remote client device operated by a user, in which case an indication of its selection is transmitted from the client device to the ICCS base station.
At block 1004, the process delivers a treat and notifies the pet. In response to receiving the indication of the selection of the training control, the process may transmit a delivery command to a delivery module in the ICCS base station or housing. The treat delivery module may include logic and/or mechanical elements (e.g., treat carousel, carousel motor, treat slide) configured to deliver a treat to a food tray, initiate a sound (e.g., play an audio signal, ring a bell, cause a local or remote audio device to output a sound) to notify the pet of the presence of a treat in the food tray, and stop playing the audio signal when the pet accesses the food tray to obtain the treat.
At block 1006, the process determines whether to continue training. If so, the process returns to block 1002 to await another selection of the training control; otherwise the process continues with block 1008.
In blocks 1008-1014, the process operates in a communication mode, in which it facilitates communication between the pet and a user operating a remote client device. More specifically, at block 1008, the process receives a communication command from a remote client device. The communication command may be a network packet, request, or other signal received via a networking interface of the ICCS. In other embodiments, the communication command may be a sound (e.g., audio signal) received from a speaker of a nearby webcam or other videoconferencing apparatus (e.g., a PC with attached speakers and camera; a laptop with integral speakers and camera).
At block 1010, the process delivers a treat and notifies the pet. Delivering the treat and notifying the pet may include transmitting the delivery command, as discussed above with respect to block 1004.
At block 1012, the process communicates audio and/or video between the remote client device and the pet. Communicating the audio and/or video may include transmitting audio and/or video of the pet to the remote client device in addition to receiving and presenting audio and/or video of the user received from the remote client device. The communication may be performed by communication modules that are integral to the ICCS housing (e.g., as shown in
At block 1014, the process determines whether to continue. If so, the process returns to block 1008 to await receipt of another communication command; if not, the process ends. For example, if an end call signal or command is received from the client device, the process may return to block 1008 to await the next call. Alternatively, the process may receive a shutdown signal or command, in which case the process ends.
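Purely as an illustration of the control flow of blocks 1002-1014 (and not of any particular implementation), the two loops can be sketched as follows; all parameter names are hypothetical stand-ins for the modules described above.

```python
def run_iccs(training_events, communication_events,
             deliver_and_notify, communicate_av,
             keep_training, keep_running):
    """Hypothetical sketch mirroring the flow of blocks 1002-1014."""
    # Training mode (blocks 1002-1006)
    for _ in training_events:        # block 1002: training control selected
        deliver_and_notify()         # block 1004: deliver treat, notify pet
        if not keep_training():      # block 1006: continue training?
            break

    # Communication mode (blocks 1008-1014)
    for command in communication_events:  # block 1008: communication command
        deliver_and_notify()              # block 1010: deliver treat, notify
        communicate_av(command)           # block 1012: exchange audio/video
        if not keep_running():            # block 1014: continue?
            break
```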
The process of
One embodiment provides a first method (illustrated as process 1200 shown in FIG. 12) for interacting with a pet, comprising: providing a system for communicating with a pet from a user of a communication device (block 1201), the system including a treat dispenser for delivery of treats to a pet, a personal computing device in communication with the treat dispenser and further comprising a processor for sending and receiving communication commands, and a speaker in proximity to the treat dispenser; training the pet to make a call to the user (block 1202) by: playing an audio signal via the speaker (block 1203); causing one of the treats to be delivered from the treat dispenser to the pet (block 1204); and stopping the play of the audio signal (block 1205); and facilitating outgoing calls from the pet to the communication device (block 1206), by: initiating a sound to indicate a preselected outgoing call time period during which the pet may initiate a call to the communication device (block 1207); during the preselected outgoing call time period, receiving an indication that the pet has selected a control of the treat dispenser (block 1208); and in response to the received indication, initiating an outgoing call to the communication device, the outgoing call including transmitting live audio or video of the pet (block 1209).
In the first method, the treat dispenser may include multiple food trays that are each configured to deliver treats to the pet, and further comprising: receiving from the communication device a selection of one of the multiple food trays; and in response to the received selection, delivering a treat to the selected food tray.
In the first method, the treat dispenser may have an associated toy that is moveable in response to commands received from the communication device, and further comprising: receiving from the communication device a command to move the toy; and in response to the received command, causing the toy to move.
The first method may further include receiving at the system a communication command; recognizing the communication command; and in response to the communication command, causing one of the treats to be delivered from the treat dispenser to the pet. The communication command may be a spoken command uttered by a user of the communication device, the spoken command played by the personal computing device via the speaker. The communication command may be a verbal command from the user, and the system may further include an audio input device configured to receive the communication command.
The first method may further include during the preselected outgoing call time period, delivering a treat, wherein receiving the indication that the pet has selected the control of the treat dispenser includes receiving an indication that the pet has accessed the treat.
In the first method, facilitating outgoing calls from the pet to the communication device may include in response to the received indication, providing the pet with a treat, thereby rewarding the pet for initiating the outgoing call.
The first method may further include training the pet to respond to spoken commands of a user when played via the speaker by, when the user is in the room with the treat dispenser: playing audio of a spoken command via the speaker in time proximity to when the user utters the spoken command so that the pet hears and associates the spoken command played by the speaker with the spoken command uttered by the user; and causing one of the treats to be delivered from the treat dispenser to the pet.
In the first method, training the pet to respond to spoken commands of the user may include: recording the spoken commands via an audio input device; and playing one of the recorded spoken commands via the speaker. In the first method, the audio signal may be played from one of an audio signaling device or the speaker. In the first method, the audio signal may be played continuously until receipt of input from the pet.
Another embodiment provides a second method (illustrated as process 1300 shown in FIG. 13) for interacting with a pet, comprising: providing a system for communicating with a pet from a user communication device (block 1301), the system including a treat dispenser having a pet detection mechanism for detecting input from the pet, video conferencing software, and an audio/video device; training the pet to answer a call from a user by (block 1302): playing an audio signal via the audio/video device (block 1303); causing one of the treats to be delivered from the treat dispenser to the pet (block 1304); and stopping the play of the audio signal (block 1305); facilitating outgoing calls from the pet to the communication device (block 1306), by: initiating a sound to indicate a preselected outgoing call time period during which the pet may initiate a call to the communication device (block 1307); during the preselected outgoing call time period, receiving an indication that the pet has selected a control of the treat dispenser via the pet detection mechanism (block 1308); and in response to the received indication, initiating an outgoing call to the communication device, the outgoing call including transmitting live audio or video of the pet (block 1309).
The second method may further include connecting a computing device to the treat dispenser, the computing device executing the video conferencing software, wherein the computing device is connected to the treat dispenser via at least one of a cable or wireless connection.
The second method may further include during the preselected outgoing call time period, delivering a treat, wherein receiving the indication that the pet has selected the control of the treat dispenser includes receiving an indication that the pet has accessed the treat.
In the second method, facilitating outgoing calls from the pet to the communication device may include in response to the received indication, providing the pet with a treat, thereby rewarding the pet for initiating the outgoing call.
The second method may further include training the pet to obtain a treat from the treat dispenser by: playing an audio signal that notifies the pet of the availability of a treat; playing audio of a spoken command via the audio/video device in time proximity to when the user utters the spoken command so that the pet hears and associates the spoken command played by the audio/video device with the spoken command uttered by the user; and causing one of the treats to be delivered from the treat dispenser to the pet.
In the second method, training the pet to obtain a treat from the treat dispenser by playing the spoken command may include: recording the spoken command via an audio input device; and playing the spoken command via the audio/video device.
In the second method, the treat dispenser may include multiple food trays that are each configured to deliver treats to the pet, and may further include: receiving from the communication device a selection of one of the multiple food trays; and in response to the received selection, delivering a treat to the selected food tray.
It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “includes,” “including,” “comprises,” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the written description and/or claims refer to at least one of something selected from the group consisting of A, B, C, . . . and N, the text should be interpreted as requiring at least one element from the group (A, B, C, . . . and N), rather than A plus N, or B plus N, etc.
U.S. patent application Ser. No. 14/988,066, filed Jan. 5, 2016 and entitled “INTERNET CANINE COMMUNICATION DEVICE AND METHOD;” U.S. patent application Ser. No. 13/765,546, filed Feb. 12, 2013 and entitled “INTERNET CANINE COMMUNICATION DEVICE AND METHOD;” U.S. patent application Ser. No. 13/752,217, filed Jan. 28, 2013 and entitled “INTERNET CANINE COMMUNICATION DEVICE AND METHOD;” and U.S. Provisional Patent Application No. 61/689,270, filed Jun. 2, 2012 and entitled “INTERNET CANINE COMMUNICATION DEVICE,” are incorporated herein by reference in their entireties. Where a definition or use of a term in an incorporated reference is inconsistent with or contrary to the definition or use of that term provided herein, the definition or use of that term provided herein governs.
While one or more embodiments of the invention have been illustrated and described above, many changes can be made without departing from the spirit and scope of the invention. Accordingly, the scope of the invention is not limited by the disclosure of the specific embodiments. Instead, the invention should be determined entirely by reference to the claims that follow.
This application is a reissue of U.S. Pat. No. 10,314,288, issued Jun. 11, 2019, which is a continuation of U.S. patent application Ser. No. 14/988,066 filed Jan. 5, 2016 (now U.S. Pat. No. 9,723,814); which is a continuation of U.S. patent application Ser. No. 13/765,546 filed Feb. 12, 2013 (now U.S. Pat. No. 9,723,813); which is a continuation of U.S. patent application Ser. No. 13/752,217 filed Jan. 28, 2013 (now U.S. Pat. No. 9,226,477); which claims the benefit of priority from U.S. Provisional Patent Application No. 61/689,270 filed Jun. 2, 2012, each of which is incorporated herein by reference in its entirety. Notice: More than one reissue application has been filed for the reissue of U.S. Pat. No. 10,314,288. The reissue applications are application Ser. No. 16/988,318 (the present application) and Ser. No. 17/585,532.