The present invention, a Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot, relates to a robot equipped with a computer, video vision camera sensors, web cameras, a plurality of sensor types, and a logical vision software program. Acting as trainable computer vision that tracks object movements, the robot translates the user's hand and body gestures into computer data and command input according to the hands' X, Y, Z dimension positions, within a calibrated working space divided into a Space Mouse Zone, a Space Keyboard Zone, and a Hand-Sign Languages Zone between the user and the computer or machine. Each calibrated puzzle-cell position has a defined meaning mapped in the software program, so the robot can translate virtual hand gesture actions into data and commands for operating computers and machines; the robot thereby acts as a universal virtual Space Mouse, virtual Space Keyboard, and virtual Remote Controller.
Today's cell phones are designed to be as tiny as possible. The keyboard is too small for typing, and there is no space to enlarge it; the keys are almost too tiny to press individually and accurately without using a sharp pen point.
In addition, as computer technologies improve every day, current computers are designed for general-purpose use by most people, but they lack real solutions for people with disabilities affecting normal physical actions and movements, or with limitations of eyesight, hearing, or speech. Such users cannot operate computers as easily as others do. These areas should be addressed with effective solutions for their needs.
Another concern with modern technology: today the average household owns at least five remote controllers for its electronic devices, such as the TV, stereo, text translator, air conditioner, and cable box. So many remote controllers can be a burden for some people; even a simple action such as turning on the TV may require operating several controllers, not counting the time spent learning and relearning each one.
No single solution addresses all of these issues together. The proposed solution of this invention is the Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot, equipped with a computer system, video vision camera sensors, web cameras, a logical vision software program, and a plurality of sensor types. Using video computer vision, the robot automatically projects a virtual working space (a Space Mouse Zone, a Space Keyboard Zone, and a Hand-Sign Languages Zone) between the user and the machine, in which the user enters text and commands by hand gestures. The robot's computer vision continuously watches and recognizes the user's hand gesture movements against the defined puzzle-cell positions of the projected working space zones. The robot automatically translates the received hand gesture actions on combinations of puzzle-cell positions, maps them to its software mapping lists for each puzzle-cell position definition, and calibrates these hand and/or body gesture space actions into meaningful computer operations: Virtual Space Mouse input that moves the cursor up, down, left, and right and performs left and right clicks, and Virtual Keyboard input of characters and function keys such as a, A, b, B, c, C, Backspace, Ctrl, Shift, Del, Enter, etc.
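The mapping-list idea above can be sketched in code. The cell coordinates, zone layouts, and names below are illustrative assumptions, not the patent's actual tables; this only shows how a recognized puzzle-cell position could be looked up and turned into a command.

```python
# Hypothetical sketch of the robot's software mapping lists: each
# calibrated (column, row) puzzle cell in a working-space zone maps to
# a mouse action or keyboard key. Layout and names are assumptions.

# Space Mouse Zone: cells mapped to cursor/click actions.
MOUSE_ZONE = {
    (0, 1): "CURSOR_UP",
    (0, -1): "CURSOR_DOWN",
    (-1, 0): "CURSOR_LEFT",
    (1, 0): "CURSOR_RIGHT",
    (0, 0): "LEFT_CLICK",
}

# Space Keyboard Zone: cells mapped to characters and function keys.
KEYBOARD_ZONE = {
    (0, 0): "a",
    (1, 0): "b",
    (2, 0): "c",
    (0, 1): "Shift",
    (1, 1): "Backspace",
    (2, 1): "Enter",
}

def translate_gesture(zone, cell):
    """Map a recognized hand position (a puzzle cell) to a command."""
    return zone.get(cell, "NO_ACTION")
```

A gesture landing outside any defined cell simply produces no action, which matches the idea that only calibrated puzzle cells carry meaning.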
The robot can also read Hand-Sign Languages from the user's hand and/or body gestures; according to its preprogrammed list of hand-sign language gesture patterns and grammar, the robot recognizes which words and/or commands the user wants to enter. The robot also enables writing symbolic characters, such as Chinese characters, and drawing pictures into the computer through the user's hand gesture movements.
The robot can be trained and taught to track a specific object by recognizing its shape, symbols, and/or colors; optional embedded wireless sensors attached to the tracked object can enhance the reliability of the vision reading and fit the user's usage preferences, especially for users with physical limitations who have special needs when operating a computer or machine.
The puzzle-cell positions of the robot's Space Mouse and Space Keyboard can be customized. The user may reprogram the standard working zone positions of the Space Mouse and Space Keyboard, rearranging certain keys and assigning certain puzzle-cell positions of the working space zone to represent particular text and commands. This customizable Virtual Space Mouse and Keyboard function helps users save time and enter frequently used text and commands quickly and effectively when operating computers and machines.
Imagine a universal remote controller that can control all the appliances in a home together. A user can simply make hand gestures to operate a TV with this robot built in: channel up, channel down, volume up, volume down, power on, power off. Furthermore, the Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot can be integrated into home appliance automation by installing the robot in the home, where it continuously watches for the owner's commands, given by hand gestures and/or voice commands (via a speech recognition software program), to operate each electric device and switch individual lights on and off. A robot custom-trained to recognize a particular wooden stick instantly turns that stick into a universal remote controller for all appliances in the home, a "Magic Stick" remote controller. The robot consolidates all the home's remote controllers into hand gesture commands, gives people more powerful, dynamic access to their home devices, and helps people with physical limitations operate their home devices as others do.
The Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot is equipped with a microphone, sound sensors, and a speech recognition software program to listen for voice commands, and with speakers to read text and articles aloud and communicate with users. Optionally, each input character and command can be read out as voice feedback to let users know what key they entered.
The Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot is equipped with a Motor Vibrate Silent-Reading sub-robot module comprising: a microcontroller as a programmed brain; a vibration surface divided into two sections so the user can distinguish long and short signal coding and read Morse code text; two seashell-shaped spring coils, one larger than the other, attached to the motors so that their spinning generates the long and short signals; and two motors (step, servo, or DC motors), one rotating for short-spin vibrations and the other for long-spin vibrations, generating silent-reading Morse code and standard text coding for users, especially those who can neither see nor hear. The microcontroller connects to a smart phone or wireless receiver device on the sub-robot, and the sub-robot Morse code module is controlled by the main robot's computer program through wireless technology protocols such as Wi-Fi 802.11, Bluetooth, WiMAX, IP, and cell phone channels. As a result, the Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot commands the sub-robot motor vibrate module to spin its motors and generate long and short vibration signals representing Morse coding and/or standard text coding.
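The text-to-vibration path described above can be sketched as follows. This is a minimal sketch under stated assumptions: motor control is mocked as a list of (motor, duration) events rather than real hardware calls, the Morse table is abbreviated, and the timing unit is an arbitrary placeholder.

```python
# Sketch of driving the two-motor vibration module: text is encoded
# as Morse code, each dot is routed to the short-spin motor and each
# dash to the long-spin motor. Hardware calls are mocked as events.

MORSE = {  # abbreviated illustrative table
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "S": "...", "O": "---",
}

SHORT_MOTOR, LONG_MOTOR = "short", "long"

def text_to_vibrations(text, unit=0.1):
    """Return the (motor, seconds) spin events for a text string."""
    events = []
    for ch in text.upper():
        for symbol in MORSE.get(ch, ""):
            if symbol == ".":
                events.append((SHORT_MOTOR, unit))      # short spin
            else:
                events.append((LONG_MOTOR, 3 * unit))   # long spin
        events.append(("pause", 3 * unit))              # letter gap
    return events
```

For example, "SOS" yields three short spins, a pause, three long spins, a pause, and three short spins, which the wearer feels on the two vibration surfaces.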
The proposed solution of this invention, the Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot, benefits everyone by allowing computers and machines to be used regardless of physical ability. The proposed robot addresses the problem of typing on the small keyboards of cell phones and portable devices; in addition, it can be integrated into home automation to reduce the number of remote controllers. It helps people save time, space, materials, and money, increases the dynamic ways computers and machines can be operated, and provides helpful assistance so that users with physical limitations can operate computers and machines as easily as others do.
The Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot has a computer system that uses video vision camera sensors and logical vision sensor programming as trainable computer vision, allowing users to give commands by hand gestures to virtually input data and commands and so operate computers and machines. The robot automatically translates the received hand gesture actions at the puzzle-cell positions of the working space, maps them to its software mapping lists for each puzzle-cell position definition, and calibrates these hand and/or body gesture virtual space actions into data and commands for meaningful computer operations: moving the cursor up, down, left, and right, left and right clicks, typing text, Hand-Sign Languages, etc.
The robot can be trained and taught to track a specific object by recognizing its shape, symbols, and/or colors; optional embedded wireless sensors attached to the tracked objects enhance the reliability of the vision reading. The robot is equipped with a microphone, sound sensors, and a speech recognition software program to listen for voice commands, and with speakers to read text and articles aloud and communicate with users. The robot is also equipped with a Motor Vibrate Silent-Reading sub-robot module that produces vibration signals in Morse code and/or standard text coding.
The robot acts as a universal virtual Space Mouse, virtual Space Keyboard, and virtual Remote Controller.
The proposed robot helps people save time, space, materials, and money, increases the dynamic ways computers and machines can be operated, and provides helpful assistance so that users with physical limitations can operate computers and machines as easily as others do.
All of the objects of the invention are listed with assigned reference numbers in the drawings, wherein:
Referring
When the robot's 1 sensor 5 detects a user, the robot uses web camera 2, web camera 3, and video vision camera sensor 6 to measure the user's height and width and automatically calibrates the virtual working space 72, adjusting the distance between the user and itself to project the Virtual Space Mouse Zone 69, Virtual Space Keyboard Zone 70, and Hand-Sign Languages Zone 71. The working space 72 can be set to operate as any one of these three function zones, or the whole working space 72 can be divided into the three zones (space mouse, keyboard, and Hand-Sign Languages) together. The connection 4 between the robot and sensor 5 and video vision camera sensor 6 can be wired or wireless.
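The calibration step above can be sketched as a simple geometric routine. The scaling factors and the equal three-way split are illustrative assumptions; the patent does not specify the actual proportions.

```python
# Illustrative sketch (assumed geometry) of calibrating the working
# space 72 from the measured user width/height and splitting it into
# the three zones: Space Mouse 69, Space Keyboard 70, Hand-Sign 71.

def calibrate_working_space(user_width, user_height, split=True):
    """Return named zone rectangles (x, y, w, h) in front of the user."""
    # Working space scaled to the user's measured size (assumption).
    w, h = user_width * 1.5, user_height * 0.5
    if not split:
        # Whole space dedicated to a single selected function zone.
        return {"working_space": (0, 0, w, h)}
    third = w / 3
    return {
        "space_mouse": (0, 0, third, h),
        "space_keyboard": (third, 0, third, h),
        "hand_sign": (2 * third, 0, third, h),
    }
```

The `split` flag mirrors the choice described above: either the whole working space serves one function zone, or it is divided into all three zones at once.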
Referring
To move pages around on the monitor, the user 11 makes a punch-out gesture toward the robot 1 with the left hand, as a mouse click, while moving the right hand around. The robot's logical vision tracking program 7 registers the changing X surface direction 15 and Y surface direction 17 and confirms the Z surface direction 18 value. The robot's Position Translate Program 20 converts the new tracking position's X, Y value and Z value into a mapped action value: a confirmed mouse click and drag that moves the page on the monitor screen up, down, left, and right according to the user's right-hand gesture movements.
To make a double click, the user 11 makes the left-hand punch-out gesture toward the robot, back and forward (action 13), two times; the robot's 1 logical vision tracking program 7 registers the Z surface direction 19 value changing twice, and the Position Translate Program 20 converts the Z, Z value into a mapped action value as a double click.
To make a right click, the user 11 makes the left-hand punch-out gesture toward the robot, back and forward (action 13), three times; the robot's 1 logical vision tracking program 7 registers the Z surface direction 19 value changing three times, and the Position Translate Program 20 converts the Z, Z, Z value into a mapped action value as a right click.
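The click mapping in the three paragraphs above reduces to counting Z-direction punch events within a short window. The sketch below assumes the punch events have already been detected and windowed; only the count-to-action mapping from the text is shown.

```python
# Sketch of the Position Translate Program's click mapping (assumed
# logic): one, two, or three punch-out events on the Z axis map to a
# left click, double click, or right click respectively.

def classify_punches(z_events):
    """Map a burst of Z-direction punch events to a mouse action."""
    n = len(z_events)
    if n == 1:
        return "LEFT_CLICK"
    if n == 2:
        return "DOUBLE_CLICK"
    if n == 3:
        return "RIGHT_CLICK"
    return "NO_ACTION"
```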
For precise Space Mouse operation, the user's 11 fingers can carry, wear, or be drawn with a plurality of specific objects of various shapes and colors, and/or with embedded wireless sensors, LED lights, or laser beam lights on the objects.
The robot 1 uses video vision cameras able to watch the user's gesture X, Y, Z dimension values at once, and the logical vision tracking program 7 can be trained to track very small finger movement gestures by locking onto each individual object's shape, size, and colors and/or onto the embedded wireless sensors, LED lights, or laser beam lights on the objects the user's fingers carry, wear, or have drawn on them. For example, the user's 11 right-hand fingers bear star-shaped vision tracking symbols 10 and the left-hand fingers bear heart-shaped vision tracking symbols. The user can mimic regular physical mouse actions with one hand in the Virtual Space Mouse Zone, and the robot can precisely track the fingers' X, Y, Z gesture movements and perform the Virtual Space Mouse functions.
The demonstration above, which uses plural video cameras to watch the X, Y, Z dimensions, is not a limitation. The robot 1 can use just one video camera, or one web camera, to perform the virtual space mouse functions as well. The logical vision tracking program can intercept the video frames and compare a series of frames to obtain the object's X, Y, Z dimension tracking values.
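Single-camera frame comparison can be sketched as follows. This assumes a simplified pixel model (frames as 0/1 grids marking the tracked object): the blob centroid gives X and Y, and the change in blob size approximates Z, since a closer object appears larger. The patent does not specify this particular method; it is one plausible implementation of "compare a series of frames".

```python
# Sketch of single-camera tracking by frame comparison (assumed
# method): centroid motion gives dx, dy; blob growth/shrinkage gives
# a coarse dz sign (closer or farther).

def blob_stats(frame):
    """Centroid (x, y) and pixel count of the object in a 0/1 grid."""
    points = [(x, y) for y, row in enumerate(frame)
              for x, v in enumerate(row) if v]
    n = len(points)
    if n == 0:
        return None, 0
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return (cx, cy), n

def track_motion(prev_frame, next_frame):
    """Estimate (dx, dy, dz) between two consecutive frames."""
    (c1, n1), (c2, n2) = blob_stats(prev_frame), blob_stats(next_frame)
    if c1 is None or c2 is None:
        return None
    dz = 1 if n2 > n1 else (-1 if n2 < n1 else 0)  # grew: moved closer
    return (c2[0] - c1[0], c2[1] - c1[1], dz)
```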
Referring
The two-step Z value selection method works, for example with the "Shift" key or any special function key, in two steps. First, the user 11 places the left hand at the puzzle-cell position corresponding to the "Shift" key and punches toward the robot; the robot's 1 logical vision tracking program 7 accepts the Z surface direction 18, the Z dimension value 36 is decremented by 1, and the Position Translate Program 39, through its keyboard mapping listing, recognizes this as the meaningful puzzle cell of the "Shift" key. The user 11 holds the left hand in the same position. Second, the user moves the right hand to the "A" key position and punches the left hand farther toward the robot to confirm the key selection; the logical vision tracking program accepts the Z surface direction 18, the Z dimension value 36 is decremented again from -1 to -2, and the Position Translate Program 39 recognizes the doubled "Shift" as confirming the selected key. The new X surface direction 15 gives an X dimension value 40 of -5 relative to the robot's Vision-G-Point 38 center, and the new Y surface direction 17 gives a Y dimension value of 0 relative to the Vision-G-Point 38 center, which the Position Translate Program 39 maps to the meaningful puzzle cell of the capital "A" key. The same two-step principle applies to "Ctrl", "Alt", and other special function keys, and to "!", "@", "#", "$", "%", "^", "&", "*", "(", ")", "{", "}", "|", "_", "+", etc., all of which require the two-step selection method.
The "Backspace", "Enter", "Arrow Up", "Arrow Down", "Arrow Left", "Arrow Right", "ESC", "Del", "Home", "End", "PgUp", "PgDn", "Pause", and "PrtSc" keys require the user 11 to punch toward the robot only once; the Position Translate Program 39 can distinguish those special function keys and perform the key selection in a single step.
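The two-step selection and the one-punch keys above amount to a small state machine. The sketch below is an assumed control flow, not the patent's actual program: arming a modifier corresponds to the first punch (Z value -1), and the confirming punch (Z value -2) emits the combined key.

```python
# Sketch (assumed logic) of the two-step key selection: a punch on a
# modifier cell arms it; a second punch while the other hand rests on
# a letter cell confirms and emits the combined key. One-punch keys
# such as "Enter" are emitted immediately.

ONE_PUNCH_KEYS = {"Backspace", "Enter", "ESC", "Del", "Home", "End",
                  "PgUp", "PgDn", "Pause", "PrtSc"}

class TwoStepKeyboard:
    def __init__(self):
        self.modifier = None          # armed modifier, e.g. "Shift"

    def punch(self, cell_key):
        """Process one punch on a puzzle-cell key; return output or None."""
        if cell_key in ONE_PUNCH_KEYS:
            return cell_key                   # single-step special keys
        if cell_key in ("Shift", "Ctrl", "Alt"):
            self.modifier = cell_key          # first step: arm (Z = -1)
            return None
        if self.modifier == "Shift":          # second step: confirm (Z = -2)
            self.modifier = None
            return cell_key.upper()
        if self.modifier:
            combo = f"{self.modifier}+{cell_key}"
            self.modifier = None
            return combo
        return cell_key
```

With this flow, punching "Shift" then "a" yields "A", while "Enter" needs only one punch, mirroring the two paragraphs above.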
For precise standard Space Keyboard operation, the user's 11 fingers can carry, wear, or be drawn with a plurality of specific objects of various shapes and colors, and/or with embedded wireless sensors, LED lights, or laser beam lights on the objects.
The robot 1 uses video vision cameras able to watch the user's gesture X, Y, Z dimension values at once, and the logical vision tracking program 7 can be trained to track very small finger movement gestures by locking onto each individual object's shape, size, and colors and/or onto the embedded wireless sensors, LED lights, or laser beam lights on the objects the user's fingers carry, wear, or have drawn on them. For example, the user's 11 right-hand fingers bear star-shaped vision tracking symbols 10 and the left-hand fingers bear heart-shaped vision tracking symbols. The user can mimic regular physical keyboard actions in the Virtual Space Keyboard Zone; the robot precisely tracks the fingers' X, Y, Z gesture movements, and the user 11 can type with both hands on the Virtual Space Keyboard to perform the Virtual Space Keyboard functions.
The demonstration above, which uses plural video cameras to watch the X, Y, Z dimensions, is not a limitation. The robot 1 can use just one video camera, or one web camera, to perform the virtual space keyboard functions as well. The logical vision tracking program can intercept the video frames and compare a series of frames to obtain the object's X, Y, Z dimension tracking values.
Referring
For precise Hand-Sign recognition, the user's 11 fingers can carry, wear, or be drawn with a plurality of specific objects of various shapes and colors, and/or with embedded wireless sensors, LED lights, or laser beam lights on the objects.
The robot 1 uses video vision cameras able to watch the user's gesture X, Y, Z dimension values at once, and the logical vision tracking program 7 can be trained to track very small finger movement gestures by locking onto each individual object's shape, size, and colors and/or onto the embedded wireless sensors, LED lights, or laser beam lights on the objects the user's fingers carry, wear, or have drawn on them.
When the robot's logical vision tracking program 7 has been trained to recognize a special object such as the sharp point of a pen, the user 11 holds the pen point facing the robot and moves the pen around as if writing a word or drawing a picture in the air. The robot 1 watches each video frame and marks the pen point's X, Y, Z values, then updates the values to the monitor or a painting software program; the series of frame X, Y, Z values composes a meaningful symbolic character or a unique drawing from the user 11. The robot can thus reproduce, by vision, the word the user writes or the picture the user draws.
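Composing the per-frame pen-tip marks into strokes can be sketched as below. The pen-down threshold on Z is an illustrative assumption: pushing the tip toward the robot counts as drawing, pulling it back breaks the stroke, which is one plausible way to separate the strokes of an air-written character.

```python
# Sketch of composing an air-written stroke: per-frame (x, y, z)
# pen-tip marks are grouped into strokes; the z value (distance
# toward the robot) decides whether the "pen" is down. The threshold
# is an assumption for illustration.

def frames_to_strokes(frame_points, pen_down_z=1.0):
    """Group per-frame pen-tip points into strokes for a painting program."""
    strokes, current = [], []
    for x, y, z in frame_points:
        if z >= pen_down_z:               # tip pushed toward robot: drawing
            current.append((x, y))
        elif current:                     # tip pulled back: stroke ends
            strokes.append(current)
            current = []
    if current:
        strokes.append(current)
    return strokes
```

Each resulting stroke is a polyline that a painting program could render, so a multi-stroke symbol such as a Chinese character comes out as several separate strokes.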
Referring
Referring
When the user 11 wants to train the robot to track certain objects based on each object's shape, size, and colors and/or on embedded wireless sensors, LED lights, or laser beam lights on the objects, the user 11 starts the robot's 1 logical vision tracking program recording the objects to be tracked. The user 11 takes the special tracking objects and moves them in set directions so that the robot's video cameras 6, web camera 2, and web camera 3 can record them. For example, using two-step object tracking training: in the first step, the user 11, wearing the special objects, starts from point E1 55 and moves toward the robot (direction 53) to arrive at point F1; still facing the robot, moves down (elevator direction 62) to arrive at point G1; moves the objects back toward the user (direction 61) to arrive at point H1 57; pushes from H1 toward the robot (direction 60) to arrive at point I1; with the objects still facing the robot, moves up (elevator direction 63) to arrive at point J1; and moves the objects back toward the user (direction 54) to point E1 55.
In the second step, the user moves the objects from point K2, which represents the high-traffic area of the user's working space zones, toward the robot (direction 58) to arrive at point L2, and, with the object still facing the robot, moves back toward the user 11 (direction 59) to K2. The user repeats the K2-to-L2 movement several times, not only in straight lines but also in circular motions within the high-traffic working space zone. The user 11 then moves back to the E1 start point and holds the object still for a few seconds.
During the two steps of special-object vision tracking training, the robot's logical vision tracking program compares the video frames in series: by matching frames it learns to filter out the background image signal that does not move, and by comparing frames it learns which object signals, with their particular shape, size, color, and/or embedded wireless sensor, LED light, or laser beam light indications, change X, Y, Z dimension values each time. The logical vision tracking program 7 thereby learns the special object's signals; as a result, the robot's vision becomes trained computer vision that knows which object to focus its tracking on. The logical vision tracking program 7 can also be hard-coded manually by the user to track a special object: the user can program directly at the robot 1, specifying what object shape, size, and colors to track and whether there are embedded wireless sensors, LED lights, or laser beam lights on the objects to serve as tracking indications. The logical vision tracking program then looks only for objects matching the input object definition. For example, if the program is set to look for a yellow sharp pen, the robot's vision will track a yellow sharp pen, know where it moves, and follow the pen's movements.
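The "filter out the background that does not move" step can be sketched with a simplified pixel model (frames as grids of values). This is an assumed implementation, not the patent's program: pixels identical across all recorded training frames are treated as background, and the remaining changing pixels are the moving object's signature.

```python
# Sketch of the training pass (assumed pixel model): pixels whose
# values never change across the recorded frames are background;
# pixels that change belong to the moving object being learned.

def moving_pixel_mask(frames):
    """Return the set of (x, y) positions that change across frames."""
    mask = set()
    first = frames[0]
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if v != first[y][x]:
                    mask.add((x, y))
    return mask
```

A real tracker would go on to summarize the masked pixels' shape, size, and color into the object definition the program then searches for.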
Referring
Referring
The long and short vibration signals are not limited to seashell coils, sticks, or other touching objects; the sub-robot module can simply be divided into two sections, with the vibration surface on the motor 75 half representing the short signal and the vibration surface on the motor 89 half representing the long signal.
With this Motor Vibrate Silent-Reading sub-robot module, a user who carries it can silently read articles from the robot's 1 computer.
Referring
The plural video cameras 6 and plural types of sensors 98 installed in each room of the property form a video camera network and a sensor network. A video camera 6 can be installed with a motor 114 having a holder 113 for the video camera 6, controlled by robot 92; the robot's 92 vision sensors track and follow the user's 11 special object wherever the user moves, activating motor 114 to rotate the video camera 6 so that it aims at the user 11 and the specific tracked object wherever they are, waiting for any command from the user 11.
The home-base type robot 92 is designed to give everyone more dynamic ways to operate computers and home appliances in their home, and especially to provide helpful assistance so that users with physical limitations can operate computers and home appliances as easily as others do.
A practical software and hardware example of building the Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot uses Microsoft Windows technology.
The features of the Universal Video Computer Vision Input Virtual Space Mouse-Keyboard Control Panel Robot can be built into a single microchip.
The virtual space mouse-keyboard control panel robot's method of translating space gesture actions, via the software mapping key listing, into data and commands for operating a computer can be built or embedded into a microchip, a processor, or a video processor containing the four sections of intelligent virtual space-commands software programs/scripts demonstrated above, together with the three working-space calibration alignment arrangement standards for creating the virtual Space Mouse, Virtual Space Keyboard, and Hand-Sign Languages patterns.
The four sections of the intelligent virtual space-commands conversion software are
The processor contains the three working-space calibration alignment arrangement standards for Mouse, Keyboard, and Hand-Sign Languages patterns, to automatically initialize the virtual working space of the Virtual Space Mouse, Virtual Space Keyboard, and Hand-Sign Languages Zones, so that computers and machines can use the computer vision method to watch the user's gesture actions, perform Mouse, Keyboard, and Hand-Sign Languages functions, and map the received gesture action position values into practical computer commands. The Virtual Space Mouse-Keyboard Control Panel Robot microchip can be installed in any computer, machine, or home appliance, connect to the video vision camera sensor, and run under Windows XP, Windows CE Embedded, Linux, or other operating software to provide the virtual space mouse and space keyboard on those computers and machines.
While the preferred embodiments of the invention have been described above, it will be recognized and understood that various modifications may be made therein without departing from the spirit or essential attributes thereof, and it is desired therefore that only such limitations be placed thereon as are imposed by the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
2,591,808 | Jul 2007 | CA | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/CA08/01256 | 7/8/2008 | WO | 00 | 3/23/2009 |