Field
The present disclosure relates to a wearable device which provides haptic and audio feedback based on stereo camera input.
Description of the Related Art
Wearable cameras record and document a user's experience, often from a point of view or field of view (FOV) that is the same as or similar to that of the user. However, these devices are passive recorders and do not provide real-time processing of, or information about, the scene in the FOV. Certain users, such as blind persons, may desire additional feedback relating to the environment. Other wearable cameras may be designed to assist blind persons; however, such devices lack stereo cameras and therefore cannot provide reliable depth information.
Thus, there is a need for an unobtrusive device which augments a user's environmental awareness with depth perception and object recognition.
The present disclosure relates to a smart necklace which provides audio and haptic feedback based on stereo camera input. One aspect of the present disclosure is to provide a wearable device which can recognize objects for increased environmental awareness and obstacle avoidance. Another aspect of the present disclosure is to provide a wearable device which assists in navigation. Yet another aspect of the present disclosure is to provide a smart necklace for social interaction.
In one implementation, a wearable neck device for providing environmental awareness to a user comprises a flexible tube defining a cavity and having a center portion, a left portion and a right portion. A first stereo pair of cameras is positioned on the left portion of the flexible tube and a second stereo pair of cameras is positioned on the right portion of the flexible tube. A vibration unit is positioned within the cavity and configured to provide haptic and audio feedback to the user. A processor, also positioned within the cavity, is configured to receive video frames from the first stereo pair of cameras and the second stereo pair of cameras, provide object recognition of items in the video frames, identify points of interest to the user based on the object recognition, and control the vibration unit to provide haptic and audio feedback to the user based on the points of interest.
In another implementation, a wearable neck device for providing environmental awareness to a user comprises a band defining a cavity and having a center portion, a left portion and a right portion. A first stereo pair of cameras is positioned on the left portion of the band and a first camera is positioned to a side of the first stereo pair of cameras. A second stereo pair of cameras is positioned on the right portion of the band and a second camera is positioned to a side of the second stereo pair of cameras. A vibration unit, positioned within the cavity, is configured to provide haptic and audio feedback to the user. A processor, also positioned within the cavity, is configured to receive video frames from the first stereo pair of cameras, the first camera, the second stereo pair of cameras and the second camera, provide object recognition of items in the video frames, identify points of interest to the user based on the object recognition, and control the vibration unit to provide haptic and audio feedback to the user based on the points of interest.
In yet another implementation, a method of navigation using a wearable neck device comprises recognizing objects with a stereo pair of cameras of the wearable neck device, determining a location of the wearable neck device with respect to the objects, determining a route to a destination that avoids the objects, providing a first audio or haptic cue indicating the route, and providing a second audio or haptic cue when the wearable neck device reaches the destination.
Other systems, methods, features, and advantages of the present invention will be or will become apparent to one of ordinary skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims. Component parts shown in the drawings are not necessarily to scale, and may be exaggerated to better illustrate the important features of the present invention. In the drawings, like reference numerals designate like parts throughout the different views.
Apparatus, systems, and methods that implement the various features of the present application will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate some implementations of the present application and not to limit the scope of the present application. Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements.
In one implementation, a smart necklace 100 includes an onboard processing array 110, which communicates with a sensor array 120, an interface array 130, and a component array 140. The onboard processing array 110, the sensor array 120, the interface array 130, and the component array 140 are exemplary groupings used to visually organize the components of the smart necklace 100 in a block diagram.
The onboard processing array 110 includes a processor 111, a memory 112, and a storage 113. The processor 111 may be a computer processor such as an ARM processor, DSP processor, distributed processor, or other form of central processing. The memory 112 may be a RAM or other volatile or nonvolatile memory used by the processor 111. The storage 113 may be a non-transitory memory or a data storage device, such as a hard disk drive, a solid state disk drive, a hybrid disk drive, or other appropriate data storage, and may further store machine-readable instructions, which may be loaded into the memory 112 and executed by the processor 111.
The sensor array 120 includes a stereo camera 121, a camera 122, an inertial measurement unit (IMU) 123, a global positioning system (GPS) 124, and a sensor 125. The stereo camera 121 may be a stereo camera comprising two cameras offset by a stereo distance. The stereo distance may be optimized for the two cameras. The smart necklace 100 may have more than one stereo camera 121, as will be further described below. The camera 122 may be a camera or other optical sensor not part of a stereo camera pair. The IMU 123 may be an IMU which may further comprise one or more of an accelerometer, a gyroscope, and/or a magnetometer. The GPS 124 may be one or more GPS units. The sensor 125 may be one or more sensors which provide further information about the environment in conjunction with the rest of the sensor array 120. The sensor 125 may be, for example, one or more of a temperature sensor, an air pressure sensor, a moisture or humidity sensor, a gas detector or other chemical sensor, a sound sensor, a pH sensor, a smoke detector, a metal detector, an actinometer, an altimeter, a depth gauge, a compass, a radiation sensor, a motion detector, or other sensor.
The interface array 130 includes a microphone 131, a speaker 132, a vibration unit 133, an input device 134, and a display 135. The microphone 131 may be a microphone or other device capable of receiving sounds, such as voice activation/commands or other voice actions from the user, and may be integrated with or external to the smart necklace 100. The speaker 132 may be one or more speakers or other devices capable of producing sounds and/or vibrations. The vibration unit 133 may be a vibration motor or actuator capable of providing haptic and tactile output. In certain implementations, the vibration unit 133 may also be capable of producing sounds, such that the speaker 132 and the vibration unit 133 may be the same or integrated. The input device 134 may be an input device such as a touch sensor and/or one or more buttons. For example, the input device 134 may be a touch sensor used as a slider to adjust settings as well as act as a button for making selections, similar to a touchpad. The display 135 may be a display, integrated into the smart necklace 100 or wirelessly connected to the smart necklace 100, and may be capable of displaying visual data from the stereo camera 121 and/or the camera 122. In other implementations, the display 135 may be another visual alert device, such as one or more LEDs or similar light source.
The component array 140 includes a battery 141, an antenna 142, and an input/output (I/O) port 143. The battery 141 may be a battery or other power supply capable of powering the smart necklace 100. The battery 141 may have a connection port for recharging, or may be wirelessly recharged, such as through induction charging. The antenna 142 may be one or more antennas capable of transmitting and receiving wireless communications. For example, the antenna 142 may be a Bluetooth or WiFi antenna, may be a radio frequency identification (RFID) antenna or reader, and/or a near field communication (NFC) unit. The I/O port 143 may be one or more ports for connecting additional peripherals. For example, the I/O port 143 may be a headphone jack, or may be a data port. The antenna 142 and/or the I/O port 143 allows the smart necklace 100 to connect to another device or network for data downloads, such as updates or map information or other relevant information for a particular application, and data uploads, such as status updates. Further, the antenna 142 and/or the I/O port 143 allows the smart necklace 100 to communicate with other smart necklaces 100 for distributed computing or sharing resources. The smart necklace 100 described herein is generally a stand-alone device. However, in other implementations, the smart necklace 100 may be configured or optimized to work in conjunction with other devices. For example, smartphones, tablets, or other mobile devices may wirelessly connect to the smart necklace 100 for shared resources and processing. The mobile device may act as a display unit for the smart necklace 100. The smart necklace 100 may further have specific protocols for interacting with mobile devices or other smart necklaces 100.
The smart necklace 100 is a lightweight, wearable smart device that is worn around the user's neck for environmental awareness, navigation, social interactions, and obstacle avoidance through real-time feedback. The smart necklace 100 is capable of recognizing objects around the user, in order to alert the user. For example, the smart necklace 100 may be used by a blind person to aid in environmental awareness and navigate safely around obstacles. The smart necklace 100 provides the user audio and haptic feedback through the speaker 132 and the vibration unit 133 based upon camera input from the stereo camera 121 and the camera 122.
In one implementation, the smart necklace 100 includes two pairs of stereo cameras 121, which may be positioned on either side of the user's neck. Stereo cameras provide depth information in both indoor and outdoor environments. The stereo cameras 121 may face forward, in front of a user, to establish a field of view (FOV). The stereo cameras 121 may have, for example, an FOV of around 90 degrees. The stereo cameras 121 provide 3D information such as depth in front of the user. Additional cameras 122, which may be placed to the sides of the stereo cameras 121, may increase the FOV to, for example, around 120 degrees. Alternatively, the cameras 122 may be placed where needed, such as behind the user's neck. Although the cameras 122 may be monocular, they can provide simple recognition, even without depth or distance information. For example, the cameras 122 can detect moving objects in the user's periphery. The stereo cameras 121 and the cameras 122 continuously and passively recognize objects in the environment. Working in conjunction with the other sensors in the sensor array 120, the smart necklace 100 provides the user with guidance and navigation commands by way of audio and haptic feedback.
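The depth information mentioned above comes from the disparity between corresponding pixels in the two offset cameras of a stereo pair. The following is a minimal sketch of that computation using OpenCV block matching; the baseline, focal length, and matcher settings are assumed values for illustration and are not taken from the disclosure.

```python
import cv2
import numpy as np

# Assumed calibration values for illustration only; not specified by the disclosure.
BASELINE_M = 0.06   # stereo distance between the two cameras, in meters
FOCAL_PX = 700.0    # focal length of the rectified cameras, in pixels

def depth_from_stereo(left_gray, right_gray):
    """Estimate a depth map (in meters) from a rectified 8-bit grayscale stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    # Pinhole stereo relation: depth = focal_length * baseline / disparity.
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth
```

A depth map of this kind is one way the distance to a recognized object or obstacle could be judged before issuing feedback.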
The GPS 124 provides location information, which works with the inertial guidance information, including velocity and orientation information, provided by the IMU 123 to help direct the user. The memory 112 and/or the storage 113 may store, for example, map information or data to help locate and provide navigation commands to the user. The map data may be preloaded, downloaded wirelessly through the antenna 142, or may be visually determined, such as by capturing a building map posted near a building's entrance, or built from previous encounters and recordings. The map data may be abstract, such as a network diagram with edges, or a series of coordinates with features. The map data may contain points of interest to the user, and as the user walks, the stereo cameras 121 and/or cameras 122 may passively recognize additional points of interest and update the map data. For example, the user may give a voice command, “Take me to building X in Y campus.” The smart necklace 100 may then download a relevant map if not already stored, or may navigate based on perceived images from the stereo cameras 121 and the cameras 122. As the user follows the navigation commands from the smart necklace 100, the user may walk by a coffee shop in the morning, and the smart necklace 100 would recognize the coffee shop and the time of day, along with the user's habits, and appropriately alert the user. The smart necklace 100 may verbally alert the user through the speakers 132. The user may use the input device 134 to adjust settings, which for example may control the types of alerts, what details to announce, and other parameters which may relate to object recognition or alert settings. The user may turn on or off certain features as needed.
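The abstract map data described above (a network diagram with edges, or a series of coordinates with features) can be illustrated as a small graph of points of interest over which a route is searched. The sketch below is an assumed, simplified representation with a breadth-first route search; the node names echo the examples in this description (building X, a coffee shop, stairs) and are otherwise hypothetical.

```python
from collections import deque

# Hypothetical abstract map: nodes are locations or points of interest,
# edges are traversable connections (e.g., hallways or walkways).
campus_map = {
    "entrance": ["lobby"],
    "lobby": ["entrance", "coffee shop", "stairs"],
    "coffee shop": ["lobby"],
    "stairs": ["lobby", "building X"],
    "building X": ["stairs"],
}

def find_route(graph, start, goal):
    """Breadth-first search returning a list of waypoints from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(find_route(campus_map, "entrance", "building X"))
# ['entrance', 'lobby', 'stairs', 'building X']
```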
When navigating indoors, a standalone GPS unit may not provide enough information for a blind user to navigate around obstacles and reach desired locations or features. The smart necklace 100 may recognize, for instance, stairs, exits, and restrooms and appropriately store them in the memory 112 and/or the storage 113. In another example, the smart necklace 100 may determine empty seats for the user to navigate to, or may remember the user's specific seat in order to navigate away and subsequently return to the same seat. Other points of interest may be potential hazards, descriptions of surrounding structures, alternate routes, and other locations. Additional data and points of interest can be downloaded and/or uploaded to mobile devices and other devices, social networks, or the cloud, through Bluetooth or other wireless networks. With wireless connectivity, local processing can be reduced, as high-level information may be available from the cloud or other remote data centers.
The smart necklace 100 may determine paths for navigation, which may be further modified for the user's needs. For example, a blind person may prefer routes that follow walls. Using the IMU 123 and/or the GPS 124 and other sensors, the smart necklace 100 can determine the user's location and orientation to guide them along the path, avoiding obstacles. The vibration unit 133 and the speaker 132 provide audio and haptic cues to help guide the user along the path. For example, the speaker 132 may play a command to move forward a specified distance. Special audio tones or audio patterns can then play when the user is at a waypoint, and guide the user to make a turn through additional tones or audio patterns. A first tone, audio pattern, or vibration can alert the user to the start of a turn; for example, a single tone or a vibration from the left side of the smart necklace may indicate a left turn. A second tone, audio pattern, or vibration can alert the user that the turn is complete; for example, two tones may play, or the vibration may stop, such as the left side ceasing to vibrate when the turn is complete. Different tones or patterns may also signify different degrees of turns, such as a specific tone for a 45 degree turn and a specific tone for a 90 degree turn. Alternatively or in addition to tones and vibrations, the smart necklace 100 may provide verbal cues, similar to a car GPS navigation command. High-level alerts may also be provided through audio feedback. For example, when the smart necklace 100 reaches a predetermined distance from an obstacle or hazard (such as a foot, or another value which may be stored in the memory 112 and/or the storage 113 and may be adjusted), the speaker 132 and/or the vibration unit 133 may provide audible alerts. As the smart necklace 100 gets closer to the obstacle, the audible alerts may increase in intensity or frequency.
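The escalating alerts and turn cues described above can be summarized as a mapping from obstacle distance and turn state to feedback intensity and pattern. The sketch below is one assumed mapping for illustration; the specific threshold, intensity curve, and pulse rates are not specified by the disclosure.

```python
# Assumed, illustrative mapping from obstacle distance and turn state to feedback.
ALERT_DISTANCE_M = 0.3   # roughly one foot; adjustable, per the description above

def proximity_alert(distance_m):
    """Return (intensity 0..1, pulses per second) for the speaker/vibration unit."""
    if distance_m > ALERT_DISTANCE_M:
        return 0.0, 0.0                    # no alert beyond the threshold
    closeness = 1.0 - max(distance_m, 0.0) / ALERT_DISTANCE_M
    intensity = 0.2 + 0.8 * closeness      # stronger/louder as the obstacle nears
    rate_hz = 1.0 + 4.0 * closeness        # faster pulses as the obstacle nears
    return intensity, rate_hz

def turn_cue(direction, turn_complete):
    """Map a turn event to a cue: e.g., the left side vibrates until the turn is done."""
    return {"side": direction,                   # "left" or "right"
            "tones": 2 if turn_complete else 1,  # one tone to start, two when done
            "vibrate": not turn_complete}
```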
The microphone 131 may provide additional environmental data, such as sounds of moving cars or other possible hazards. The microphone 131 may work in conjunction with the speaker 132, and may be placed away from the speaker 132 to prevent interference. The microphone 131 may alternatively work in conjunction with an attached audio device, such as bone conduction devices, to provide the user with audio feedback without broadcasting the audio feedback.
The smart necklace 100 may improve social interactions. For example, the smart necklace 100 may recognize faces in a room to identify potential friends, and provide the user with audio feedback identifying friends. The stereo cameras 121 and/or the camera 122 may be further able to determine additional details about persons, such as moods or expressions, or if they are engaging in physical activities, in order to alert the user. For example, the potential friend may extend a hand for a handshake or a “high five,” and the smart necklace 100 may use audio or haptic feedback to notify the user. The microphone 131 may recognize voices of other persons to identify and appropriately notify the user, or may recognize a new voice to save for future identification.
Although the smart necklace 100 is described with respect to a blind user, the smart necklace 100 may be used in other applications. For example, the smart necklace 100 may be used by peace officers and law enforcement officers as a recorder which provides additional environmental awareness. The smart necklace 100 may be further used by athletes to record sports in a real-time, first person view. For example, certain actions, such as a swing, can be recorded along with the corresponding inertial motions so that the motions can be analyzed. The smart necklace 100 may also be used in hazardous environments to provide additional safety warnings. For example, the smart necklace 100 may be used in a factory to provide a factory worker additional warning about possible hazardous conditions or obstacles. In such applications, the sensor 125 may be specifically chosen to provide particularly relevant measurements. For instance, in an environment with harmful gas, the sensor 125 may detect dangerous levels of gas and accordingly alert the user. The sensor 125 may provide low-light viewing, or the stereo cameras 121 and/or the camera 122 may be capable of night vision, to provide the user with additional environmental awareness in low-light conditions, such as outdoors at night or in photo-sensitive environments. The smart necklace 100 can also serve as a memory aid for persons such as Alzheimer's patients. The smart necklace 100 can aid in shopping or otherwise navigating inventories by helping to keep track of goods. The antenna 142 may be an RFID or NFC reader capable of identifying RFID or NFC tags on goods.
The image data received at block 210 may be data of a variety of forms, such as, but not limited to, red-green-blue (“RGB”) data, depth image data, three-dimensional (“3D”) point data, and the like. In some implementations, the smart necklace 100 may receive depth image data from an infrared sensor or other depth sensor integrated with the stereo camera 121 and/or the camera 122. In other implementations that include a depth sensor (e.g., an infrared sensor), the depth sensor may be separate from the stereo camera 121 and/or the camera 122.
The onboard processing array 110 includes at least one object detection parameter to facilitate the detection of the candidate object. In some implementations, the at least one object detection parameter is a window size, a noise filtering parameter, an estimated amount of light, an estimated noise level, a feature descriptor parameter, an image descriptor parameter, or the like.
In some implementations, the onboard processing array 110 may recognize the candidate object by utilizing a feature descriptor algorithm or an image descriptor algorithm, such as scale invariant feature transform (“SIFT”), speeded up robust feature (“SURF”), histogram of oriented gradients (“HOG”), generalized search tree (“GIST”), fast retina keypoint (“FREAK”), and binary robust invariant scalable keypoints (“BRISK”), and the like. In some implementations in which the onboard processing array 110 utilizes a feature descriptor or image descriptor algorithm, the onboard processing array 110 may extract a set of features from a candidate region identified by the onboard processing array 110. The onboard processing array 110 may then access a reference set of features of an object recognition reference model from an object recognition database stored in the memory 112 or the storage 113 and then compare the extracted set of features with the reference set of features of the object recognition reference model. For example, the onboard processing array 110 may extract a set of features from the high entropy region of the acquired target image data that includes a bottle and compare the extracted set of features to reference sets of features for one or more reference bottle models. When the extracted set of features match the reference set of features, the onboard processing array 110 may recognize an object (e.g., recognizing a bottle when the extracted set of features from the high entropy region of the acquired target image data that includes the bottle match the reference set of features for a reference bottle model). When the extracted set of features does not match the reference set of features, an object recognition error has occurred (e.g., an object recognition error indicating that no object recognition reference model matches the candidate object). When an object recognition error has occurred (e.g., referring to the example, no reference bottle model exists in the memory 112 or the storage 113), the at least one object detection parameter may be adjusted to improve the accuracy of the object detection module, as described below with reference to block 225.
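One way to realize the feature comparison described above is to extract local features from the candidate region, match them against a reference model's features, and declare recognition when enough good matches are found. The sketch below uses ORB features and a brute-force matcher as a stand-in for the listed descriptor algorithms; the ratio test and match-count threshold are illustrative assumptions.

```python
import cv2

def recognize_candidate(candidate_gray, reference_gray, ratio=0.75, min_matches=20):
    """Return True when the candidate region matches the reference object model."""
    orb = cv2.ORB_create(nfeatures=500)
    _, des_candidate = orb.detectAndCompute(candidate_gray, None)
    _, des_reference = orb.detectAndCompute(reference_gray, None)
    if des_candidate is None or des_reference is None:
        return False                       # treated as an object recognition error
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_candidate, des_reference, k=2)
    # Ratio test: keep matches that clearly beat the second-best alternative.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches
```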
In some implementations, the object recognition module may assign an identifier to the recognized object. For example, the identifier may be an object category identifier (e.g., “bottle” when the extracted set of features match the reference set of features for the “bottle category” or “cup” when the extracted set of features match the reference set of features for the “cup” object category) or a specific object instance identifier (e.g., “my bottle” when the extracted set of features match the reference set of features for the specific “my bottle” object instance or “my cup” when the extracted set of features match the reference set of features for the specific “my cup” object instance).
The onboard processing array 110 includes at least one object recognition parameter to facilitate the recognition of the object. In some implementations, the at least one object recognition parameter is a window size, a noise filtering parameter, an estimated amount of light, an estimated noise level, a feature descriptor parameter, an image descriptor parameter, or the like.
When the processor 111 searches for an object model of the plurality of object models, more than one object model may be similar in shape or structure to a portion of the first visual data 306. For example, a body of a bottle (e.g., the target object 310) may be similar in shape or structure to either a cylinder or a box. The processor 111 is configured to determine which of the plurality of object models has the closest fit for the analyzed portion of the first visual data 306. For example, the processor 111 may assign a score (e.g., a recognition accuracy percentage) indicating the degree of similarity between a particular object model of the plurality of object models and the analyzed portion of the first visual data 306, and may choose the object model with the highest score as the object model that corresponds to the analyzed portion of the first visual data 306. As such, in one implementation, the processor 111 determines the parameters of the chosen object model.
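The closest-fit selection described above reduces to scoring each candidate object model against the analyzed region and keeping the highest score. A minimal sketch of that selection step follows, with a hypothetical scoring callback standing in for whatever similarity measure is used.

```python
def best_fitting_model(region, object_models, score_fn):
    """Pick the object model with the highest similarity score for an image region.

    `score_fn(region, model)` is a hypothetical callback returning a recognition
    accuracy percentage (0-100) for one candidate model.
    """
    scored = [(score_fn(region, model), model) for model in object_models]
    best_score, best_model = max(scored, key=lambda pair: pair[0])
    return best_model, best_score
```

For the bottle example above, the cylinder model would be chosen over the box model whenever it receives the higher recognition accuracy percentage.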
As described below, the plurality of object models are not fixed. The stored object models and their corresponding parameters may be supplemented or modified. In addition or in the alternative, new category object models may be learned and stored in the database based on the recognized target objects. The discussion at this juncture assumes that the method is detecting the target object 310 for the first time, and that objects having shapes, structures, or pose information similar to the target object 310 as a whole have not yet been encountered and stored.
Although the method described above uses a bottle as an exemplary object, the method may be used to recognize points of interest and other features, such as stairs, empty seats or buildings.
The onboard processing array 110 segments the omni-directional image data into a plurality of image slices. In one exemplary implementation, the received omni-directional image is segmented into eight slices (S1, S2, S3, S4, S5, S6, S7, and S8). In some implementations, the omni-directional image may be segmented into any number of slices. In some implementations, the number of slices may be between 8 and 36. However, it should be understood that the number of slices may be less than 8 or greater than 36.
Each of the plurality of slices is representative of at least a portion of the panoramic field of view of the omni-directional image data or the partially panoramic field of view of the omni-directional image data. In some implementations, the plurality of image slices includes a middle image slice (e.g., slice S2), a preceding image slice (e.g., slice S1), and a subsequent image slice (e.g., slice S3), such that the middle field of view of the middle image slice (e.g., slice S2) is adjacent to or overlaps a preceding field of view of the preceding image slice (e.g., slice S1), and the middle field of view of the middle image slice (e.g., slice S2) is adjacent to or overlaps a subsequent field of view of the subsequent image slice (e.g., slice S3).
In some implementations, each image slice of the plurality of image slices is representative of an equal portion of the panoramic field of view of the omni-directional image data and the collective fields of view of the plurality of image slices are the same as the panoramic field of view of the omni-directional image data. For example, each of the eight slices captures an eighth of the full panoramic view of the omni-directional image data and the collective field of view of the eight image slices is the same as the panoramic field of view of the omni-directional image data. In some implementations, the field of view of a first slice of the plurality of slices may be greater than a field of view of a second slice of the plurality of slices. In some implementations, the collective fields of view of the plurality of slices may be smaller than the full panoramic field of view. In some implementations, the fields of view of neighboring slices may overlap.
The onboard processing array 110 calculates a slice descriptor for each image slice of the plurality of image slices. As used herein, “slice descriptor” refers to a description of the visual features (e.g., color, texture, shape, motion, etc.) of the image data of a particular slice of the omni-directional image data. For example, a slice descriptor d1 is calculated for slice S1, a slice descriptor d2 is calculated for slice S2, a slice descriptor d3 is calculated for slice S3, a slice descriptor d4 is calculated for slice S4, a slice descriptor d5 is calculated for slice S5, a slice descriptor d6 is calculated for slice S6, a slice descriptor d7 is calculated for slice S7, and a slice descriptor d8 is calculated for slice S8.
In some implementations, the slice descriptor may be calculated using an algorithm, such as scale-invariant feature transform (“SIFT”), speeded up robust feature (“SURF”), histogram of oriented gradients (“HOG”), generalized search tree (“GIST”), fast retina keypoint (“FREAK”), binary robust invariant scalable keypoints (“BRISK”), or the like. However, it should be understood that other algorithms may be used to calculate the slice descriptor. In some implementations, the slice descriptor may include a decimal vector. In some implementations, the slice descriptor may include a binary vector. In other implementations, the slice descriptor may be represented in a format other than a binary vector or a decimal vector. Depth information resulting from the application of stereo algorithms may also be used to calculate the slice descriptor.
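The segmentation and per-slice description steps can be sketched as splitting the panorama into equal vertical slices and computing a compact descriptor for each. The example below uses a simple color histogram as a deliberately lightweight stand-in for the descriptor algorithms listed above; the default slice count of eight follows the exemplary implementation.

```python
import cv2
import numpy as np

def slice_panorama(omni_image, num_slices=8):
    """Split an omni-directional (panoramic) image into equal-width vertical slices."""
    width = omni_image.shape[1]
    edges = np.linspace(0, width, num_slices + 1, dtype=int)
    return [omni_image[:, edges[i]:edges[i + 1]] for i in range(num_slices)]

def slice_descriptor(image_slice, bins=16):
    """Compute a simple normalized color-histogram descriptor for one slice."""
    hist = cv2.calcHist([image_slice], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def sequence_of_descriptors(omni_image, num_slices=8):
    """Return the slice descriptors d1..dN for slices S1..SN, in order."""
    return [slice_descriptor(s) for s in slice_panorama(omni_image, num_slices)]
```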
The onboard processing array 110 generates a current sequence of slice descriptors for the omni-directional image data received. The current sequence of slice descriptors includes the calculated slice descriptor for each image slice of the plurality of image slices. For example, node n1 includes the slice descriptor d1 corresponding to slice S1, node n2 includes the slice descriptor d2 corresponding to slice S2, node n3 includes the slice descriptor d3 corresponding to slice S3, node n8 includes the slice descriptor d8 corresponding to slice S8, etc.
In some implementations, the current sequence of slice descriptors may be structured such that a middle node (e.g., node n2) corresponds to a middle image slice (e.g., slice S2), a preceding node (e.g., node n1) corresponds to a preceding image slice (e.g., slice S1), and a subsequent node (e.g., node n3) corresponds to a subsequent image slice (e.g., slice S3). The preceding node (e.g., node n1) is linked to the middle node (e.g., node n2), and the middle node (e.g., node n2) is linked to the subsequent node (e.g., node n3).
In some implementations, the current sequence of slice descriptors is stored in the storage 113. In some implementations, the storage 113 may include a database of reference sequences of slice descriptors, each of which corresponds to a previously processed omni-directional image encountered by the onboard processing array 110.
In some implementations, the current sequence of slice descriptors may be stored in the storage 113 as a current linked list of slice descriptors. In implementations in which the current sequence of slice descriptors is stored in the storage 113 as a current linked list of slice descriptors, each node of the linked list may be linked to the subsequent node of the linked list (e.g., node n1 is linked to node n2, node n2 is linked to node n3, etc.). In some implementations, the current sequence of slice descriptors may be stored in the storage 113 as a circular linked list of slice descriptors, such that the first node is linked to the second node (e.g., node n1 is linked to node n2), the second node is linked to the third node (e.g., node n2 is linked to node n3), . . . , and the last node is linked back to the first node (e.g., node n8 is linked to node n1). In some implementations, the current sequence of slice descriptors may be stored in the storage 113 as a current doubly linked list of slice descriptors. It should be understood that in other implementations, the current sequence of slice descriptors may be stored in the storage 113 using a data structure other than a linked list, such as an array, and the like.
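Because the last slice of a panorama wraps around to the first, a circular (doubly) linked list is a natural container for the slice descriptor nodes. The sketch below is one minimal, assumed realization of such a list; it is an illustrative data-structure choice, not a required implementation.

```python
class SliceNode:
    """A node holding one slice descriptor and links to its neighbors."""
    def __init__(self, descriptor):
        self.descriptor = descriptor
        self.prev = None
        self.next = None

def build_circular_list(descriptors):
    """Link descriptor nodes into a circular doubly linked list; return the first node."""
    assert descriptors, "expects at least one slice descriptor"
    nodes = [SliceNode(d) for d in descriptors]
    count = len(nodes)
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % count]   # the last node links back to the first
        node.prev = nodes[(i - 1) % count]
    return nodes[0]

def traverse(start_node):
    """Yield descriptors once around the ring, starting from `start_node`."""
    node = start_node
    while True:
        yield node.descriptor
        node = node.next
        if node is start_node:
            break
```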
While the received omni-directional image was not unwarped prior to segmentation in this example, in other implementations the omni-directional image may be unwarped prior to segmentation.
In some implementations, the current sequence of slice descriptors is a current circular linked list of slice descriptors and the reference sequence of slice descriptors is a reference circular linked list of slice descriptors. In such implementations, the current order of slice descriptors may be determined by traversing the current circular linked list of slice descriptors starting at a current starting node (e.g., the current order of slice descriptors may be determined to be {d1, d2, d3, d4, d5, d6, d7, d8} by traversing the current circular linked list starting from node n1 of the current circular linked list of slice descriptors). The reference order of slice descriptors may be determined by traversing the reference circular linked list of slice descriptors starting at a reference starting node (e.g., the reference order of slice descriptors may also be determined to be {d1, d2, d3, d4, d5, d6, d7, d8} by traversing the reference circular linked list starting from node r7 of the reference circular linked list of slice descriptors). The current sequence of slice descriptors matches the reference sequence of slice descriptors when the current order of slice descriptors is the same as the reference order of slice descriptors.
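Since both sequences are circular, the comparison amounts to testing whether the current sequence equals the reference sequence under some rotation of the reference starting node. A small sketch of that rotational match follows, assuming a caller-supplied per-descriptor equality test.

```python
def sequences_match(current, reference, descriptors_equal):
    """Return (True, offset) when `current` equals some rotation of `reference`.

    `descriptors_equal(a, b)` is a hypothetical per-descriptor comparison,
    e.g. a distance threshold on two descriptor vectors.
    """
    if len(current) != len(reference) or not reference:
        return False, None
    n = len(reference)
    for offset in range(n):                          # try each reference starting node
        rotated = reference[offset:] + reference[:offset]
        if all(descriptors_equal(c, r) for c, r in zip(current, rotated)):
            return True, offset                      # offset of the matching start node
    return False, None
```

A successful match at a nonzero offset corresponds to traversing the reference circular linked list from a starting node other than its first node, as in the r7 example above.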
As used herein, the term “network” includes any cloud, cloud computing system or electronic communications system or method which incorporates hardware and/or software components. Communication among the parties may be accomplished through any suitable communication channels, such as, for example, a telephone network, an extranet, an intranet, Internet, point of interaction device (point of sale device, personal digital assistant (e.g., an Android device, iPhone®, Blackberry®), cellular phone, kiosk, etc.), online communications, satellite communications, off-line communications, wireless communications, transponder communications, local area network (LAN), wide area network (WAN), virtual private network (VPN), networked or linked devices, keyboard, mouse and/or any suitable communication or data input modality. Specific information related to the protocols, standards, and application software utilized in connection with the Internet is generally known to those skilled in the art and, as such, need not be detailed herein.
“Cloud” or “Cloud computing” includes a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing may include location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand.
Systems, methods and computer program products are provided. References to “various embodiments”, “some embodiments”, “one embodiment”, “an embodiment”, “an example embodiment”, etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. After reading the description, it will be apparent to one skilled in the relevant art(s) how to implement the disclosure in alternative embodiments.
The steps of a method or algorithm described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by the processor 111, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium, such as the storage 113, is coupled to the processor 111 such that the processor 111 can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor 111. The processor 111 and the storage medium may reside in an Application Specific Integrated Circuit (ASIC).
The methods/systems may be described herein in terms of functional block components, screen shots, optional selections and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. For example, the methods/systems may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, the software elements of the methods/systems may be implemented with any programming or scripting language such as VPL, C, C++, C#, Java, JavaScript, VBScript, Macromedia Cold Fusion, COBOL, Microsoft Active Server Pages, assembly, PERL, PHP, awk, Python, Visual Basic, SQL Stored Procedures, PL/SQL, any UNIX shell script, and XML with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements. Further, it should be noted that the methods/systems may employ any number of conventional techniques for data transmission, signaling, data processing, network control, and the like.
As will be appreciated by one of ordinary skill in the art, the methods/systems may be embodied as a customization of an existing system, an add-on product, upgraded software, a stand-alone system, a distributed system, a method, a data processing system, a device for data processing, and/or a computer program product. Furthermore, the methods/systems may take the form of a computer program product on a non-transitory computer-readable storage medium having computer-readable program code means embodied in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROM, optical storage devices, magnetic storage devices, and/or the like.
Exemplary embodiments of the methods/systems have been disclosed in an illustrative style. Accordingly, the terminology employed throughout should be read in a non-limiting manner. Although minor modifications to the teachings herein will occur to those well versed in the art, it shall be understood that what is intended to be circumscribed within the scope of the patent warranted hereon are all such embodiments that reasonably fall within the scope of the advancement to the art hereby contributed, and that that scope shall not be restricted, except in light of the appended claims and their equivalents.
Number | Name | Date | Kind |
---|---|---|---|
4520501 | DuBrucq | May 1985 | A |
4586827 | Hirsch et al. | May 1986 | A |
5047952 | Kramer | Sep 1991 | A |
5097856 | Chi-Sheng | Mar 1992 | A |
5129716 | Holakovszky et al. | Jul 1992 | A |
5265272 | Kurcbart | Nov 1993 | A |
5463428 | Lipton et al. | Oct 1995 | A |
5508699 | Silverman | Apr 1996 | A |
5539665 | Lamming et al. | Jul 1996 | A |
5543802 | Villevieille | Aug 1996 | A |
5544050 | Abe | Aug 1996 | A |
5568127 | Bang | Oct 1996 | A |
5636038 | Lynt | Jun 1997 | A |
5659764 | Sakiyama | Aug 1997 | A |
5701356 | Stanford et al. | Dec 1997 | A |
5733127 | Mecum | Mar 1998 | A |
5807111 | Schrader | Sep 1998 | A |
5872744 | Taylor | Feb 1999 | A |
5953693 | Sakiyama | Sep 1999 | A |
5956630 | Mackey | Sep 1999 | A |
5982286 | Vanmoor | Nov 1999 | A |
6009577 | Day | Jan 2000 | A |
6055048 | Langevin et al. | Apr 2000 | A |
6067112 | Wellner et al. | May 2000 | A |
6199010 | Richton | Mar 2001 | B1 |
6229901 | Mickelson et al. | May 2001 | B1 |
6230135 | Ramsay | May 2001 | B1 |
6230349 | Silver et al. | May 2001 | B1 |
6285757 | Carroll et al. | Sep 2001 | B1 |
6307526 | Mann | Oct 2001 | B1 |
6323807 | Golding et al. | Nov 2001 | B1 |
6349001 | Spitzer | Feb 2002 | B1 |
6466232 | Newell | Oct 2002 | B1 |
6542623 | Kahn | Apr 2003 | B1 |
6580999 | Maruyama et al. | Jun 2003 | B2 |
6594370 | Anderson | Jul 2003 | B1 |
6603863 | Nagayoshi | Aug 2003 | B1 |
6619836 | Silvant et al. | Sep 2003 | B1 |
6701296 | Kramer | Mar 2004 | B1 |
6774788 | Balfe | Aug 2004 | B1 |
6825875 | Strub et al. | Nov 2004 | B1 |
6826477 | Ladetto et al. | Nov 2004 | B2 |
6834373 | Dieberger | Dec 2004 | B2 |
6839667 | Reich | Jan 2005 | B2 |
6857775 | Wilson | Feb 2005 | B1 |
6920229 | Boesen | Jul 2005 | B2 |
D513997 | Wilson | Jan 2006 | S |
7027874 | Sawan et al. | Apr 2006 | B1 |
D522300 | Roberts | Jun 2006 | S |
7069215 | Bangalore | Jun 2006 | B1 |
7106220 | Gourgey et al. | Sep 2006 | B2 |
7228275 | Endo | Jun 2007 | B1 |
7299034 | Kates | Nov 2007 | B2 |
7308314 | Havey et al. | Dec 2007 | B2 |
7336226 | Jung et al. | Feb 2008 | B2 |
7356473 | Kates | Apr 2008 | B2 |
7413554 | Kobayashi et al. | Aug 2008 | B2 |
7417592 | Hsiao et al. | Aug 2008 | B1 |
7428429 | Gantz et al. | Sep 2008 | B2 |
7463188 | McBurney | Dec 2008 | B1 |
7496445 | Mohsini | Feb 2009 | B2 |
7501958 | Saltzstein et al. | Mar 2009 | B2 |
7564469 | Cohen | Jul 2009 | B2 |
7565295 | Hernandez-Rebollar | Jul 2009 | B1 |
7598976 | Sofer et al. | Oct 2009 | B2 |
7618260 | Daniel | Nov 2009 | B2 |
D609818 | Tsang et al. | Feb 2010 | S |
7656290 | Fein et al. | Feb 2010 | B2 |
7659915 | Kurzweil et al. | Feb 2010 | B2 |
7743996 | Maciver | Jun 2010 | B2 |
D625427 | Lee | Oct 2010 | S |
7843488 | Stapleton | Nov 2010 | B2 |
7848512 | Eldracher | Dec 2010 | B2 |
7864991 | Espenlaub et al. | Jan 2011 | B2 |
7938756 | Rodetsky et al. | May 2011 | B2 |
7991576 | Roumeliotis | Aug 2011 | B2 |
8005263 | Fujimura | Aug 2011 | B2 |
8035519 | Davis | Oct 2011 | B2 |
D649655 | Petersen | Nov 2011 | S |
8123660 | Kruse et al. | Feb 2012 | B2 |
D656480 | McManigal et al. | Mar 2012 | S |
8138907 | Barbeau et al. | Mar 2012 | B2 |
8150107 | Kurzweil et al. | Apr 2012 | B2 |
8177705 | Abolfathi | May 2012 | B2 |
8239032 | Dewhurst | Aug 2012 | B2 |
8253760 | Sako et al. | Aug 2012 | B2 |
8300862 | Newton et al. | Oct 2012 | B2 |
8325263 | Kato et al. | Dec 2012 | B2 |
D674501 | Petersen | Jan 2013 | S |
8359122 | Koselka et al. | Jan 2013 | B2 |
8395968 | Vartanian et al. | Mar 2013 | B2 |
8401785 | Cho et al. | Mar 2013 | B2 |
8414246 | Tobey | Apr 2013 | B2 |
8418705 | Ota et al. | Apr 2013 | B2 |
8428643 | Lin | Apr 2013 | B2 |
8483956 | Zhang | Jul 2013 | B2 |
8494507 | Tedesco et al. | Jul 2013 | B1 |
8494859 | Said | Jul 2013 | B2 |
8538687 | Plocher et al. | Sep 2013 | B2 |
8538688 | Prehofer | Sep 2013 | B2 |
8571860 | Strope | Oct 2013 | B2 |
8583282 | Angle et al. | Nov 2013 | B2 |
8588464 | Albertson et al. | Nov 2013 | B2 |
8588972 | Fung | Nov 2013 | B2 |
8594935 | Cioffi et al. | Nov 2013 | B2 |
8606316 | Evanitsky | Dec 2013 | B2 |
8610879 | Ben-Moshe et al. | Dec 2013 | B2 |
8630633 | Tedesco et al. | Jan 2014 | B1 |
8676274 | Li | Mar 2014 | B2 |
8676623 | Gale et al. | Mar 2014 | B2 |
8694251 | Janardhanan et al. | Apr 2014 | B2 |
8704902 | Naick et al. | Apr 2014 | B2 |
8743145 | Price | Jun 2014 | B1 |
8750898 | Haney | Jun 2014 | B2 |
8768071 | Tsuchinaga et al. | Jul 2014 | B2 |
8786680 | Shiratori | Jul 2014 | B2 |
8797141 | Best et al. | Aug 2014 | B2 |
8797386 | Chou et al. | Aug 2014 | B2 |
8803699 | Foshee et al. | Aug 2014 | B2 |
8814019 | Dyster et al. | Aug 2014 | B2 |
8825398 | Alexandre | Sep 2014 | B2 |
8836532 | Fish, Jr. et al. | Sep 2014 | B2 |
8836580 | Mendelson | Sep 2014 | B2 |
8836910 | Cashin et al. | Sep 2014 | B2 |
8902303 | Na'Aman et al. | Dec 2014 | B2 |
8909534 | Heath | Dec 2014 | B1 |
D721673 | Park et al. | Jan 2015 | S |
8926330 | Taghavi | Jan 2015 | B2 |
8930458 | Lewis et al. | Jan 2015 | B2 |
8981682 | Delson et al. | Mar 2015 | B2 |
D727194 | Wilson | Apr 2015 | S |
9004330 | White | Apr 2015 | B2 |
9025016 | Wexler et al. | May 2015 | B2 |
9053094 | Yassa | Jun 2015 | B2 |
9076450 | Sadek | Jul 2015 | B1 |
9081079 | Chao et al. | Jul 2015 | B2 |
9081385 | Ferguson | Jul 2015 | B1 |
D736741 | Katz | Aug 2015 | S |
9111545 | Jadhav et al. | Aug 2015 | B2 |
D738238 | Pede et al. | Sep 2015 | S |
9137484 | DiFrancesco et al. | Sep 2015 | B2 |
9137639 | Garin et al. | Sep 2015 | B2 |
9140554 | Jerauld | Sep 2015 | B2 |
9148191 | Teng et al. | Sep 2015 | B2 |
9158378 | Hirukawa | Oct 2015 | B2 |
D742535 | Wu | Nov 2015 | S |
D743933 | Park et al. | Nov 2015 | S |
9190058 | Klein | Nov 2015 | B2 |
9230430 | Civelli et al. | Jan 2016 | B2 |
9232366 | Charlier et al. | Jan 2016 | B1 |
9267801 | Gupta et al. | Feb 2016 | B2 |
9269015 | Boncyk | Feb 2016 | B2 |
9304588 | Aldossary | Apr 2016 | B2 |
D756958 | Lee et al. | May 2016 | S |
D756959 | Lee et al. | May 2016 | S |
9335175 | Zhang et al. | May 2016 | B2 |
9341014 | Oshima et al. | May 2016 | B2 |
9355547 | Stevens et al. | May 2016 | B2 |
20010023387 | Rollo | Sep 2001 | A1 |
20020067282 | Moskowitz et al. | Jun 2002 | A1 |
20020071277 | Starner et al. | Jun 2002 | A1 |
20020075323 | O'Dell | Jun 2002 | A1 |
20020173346 | Wang | Nov 2002 | A1 |
20020178344 | Bourguet | Nov 2002 | A1 |
20030026461 | Arthur Hunter | Feb 2003 | A1 |
20030133008 | Stephenson | Jul 2003 | A1 |
20030133085 | Tretiakoff | Jul 2003 | A1 |
20030179133 | Pepin et al. | Sep 2003 | A1 |
20040232179 | Chauhan | Nov 2004 | A1 |
20040267442 | Fehr et al. | Dec 2004 | A1 |
20050208457 | Fink et al. | Sep 2005 | A1 |
20050221260 | Kikuchi | Oct 2005 | A1 |
20050259035 | Iwaki | Nov 2005 | A1 |
20060004512 | Herbst | Jan 2006 | A1 |
20060028550 | Palmer | Feb 2006 | A1 |
20060029256 | Miyoshi | Feb 2006 | A1 |
20060129308 | Kates | Jun 2006 | A1 |
20060171704 | Bingle et al. | Aug 2006 | A1 |
20060177086 | Rye et al. | Aug 2006 | A1 |
20060184318 | Yoshimine | Aug 2006 | A1 |
20060292533 | Selod | Dec 2006 | A1 |
20070001904 | Mendelson | Jan 2007 | A1 |
20070052672 | Ritter et al. | Mar 2007 | A1 |
20070173688 | Kim | Jul 2007 | A1 |
20070182812 | Ritchey | Aug 2007 | A1 |
20070296572 | Fein | Dec 2007 | A1 |
20080024594 | Ritchey | Jan 2008 | A1 |
20080068559 | Howell | Mar 2008 | A1 |
20080120029 | Zelek et al. | May 2008 | A1 |
20080144854 | Abreu | Jun 2008 | A1 |
20080145822 | Bucchieri | Jun 2008 | A1 |
20080174676 | Squilla et al. | Jul 2008 | A1 |
20080198222 | Gowda | Aug 2008 | A1 |
20080198324 | Fuziak | Aug 2008 | A1 |
20080251110 | Pede | Oct 2008 | A1 |
20080260210 | Kobeli | Oct 2008 | A1 |
20090012788 | Gilbert | Jan 2009 | A1 |
20090040215 | Afzulpurkar | Feb 2009 | A1 |
20090058611 | Kawamura | Mar 2009 | A1 |
20090118652 | Carlucci | May 2009 | A1 |
20090122161 | Bolkhovitinov | May 2009 | A1 |
20090122648 | Mountain et al. | May 2009 | A1 |
20090157302 | Tashev et al. | Jun 2009 | A1 |
20090177437 | Roumeliotis | Jul 2009 | A1 |
20090189974 | Deering | Jul 2009 | A1 |
20100041378 | Aceves | Feb 2010 | A1 |
20100080418 | Ito | Apr 2010 | A1 |
20100109918 | Liebermann | May 2010 | A1 |
20100110368 | Chaum | May 2010 | A1 |
20100179452 | Srinivasan | Jul 2010 | A1 |
20100182242 | Fields et al. | Jul 2010 | A1 |
20100182450 | Kumar | Jul 2010 | A1 |
20100198494 | Chao | Aug 2010 | A1 |
20100199232 | Mistry et al. | Aug 2010 | A1 |
20100241350 | Cioffi et al. | Sep 2010 | A1 |
20100245585 | Fisher et al. | Sep 2010 | A1 |
20100267276 | Wu | Oct 2010 | A1 |
20100292917 | Emam et al. | Nov 2010 | A1 |
20100298976 | Sugihara et al. | Nov 2010 | A1 |
20100305845 | Alexandre et al. | Dec 2010 | A1 |
20100308999 | Chornenky | Dec 2010 | A1 |
20110066383 | Jangle | Mar 2011 | A1 |
20110071830 | Kim | Mar 2011 | A1 |
20110092249 | Evanitsky | Apr 2011 | A1 |
20110124383 | Garra et al. | May 2011 | A1 |
20110181422 | Tran | Jul 2011 | A1 |
20110187640 | Jacobsen | Aug 2011 | A1 |
20110211760 | Boncyk | Sep 2011 | A1 |
20110216006 | Litschel | Sep 2011 | A1 |
20110221670 | King, III et al. | Sep 2011 | A1 |
20110234584 | Endo | Sep 2011 | A1 |
20110260681 | Guccione | Oct 2011 | A1 |
20110307172 | Jadhav et al. | Dec 2011 | A1 |
20120016578 | Coppens | Jan 2012 | A1 |
20120053826 | Slamka | Mar 2012 | A1 |
20120062357 | Slamka | Mar 2012 | A1 |
20120069511 | Azera | Mar 2012 | A1 |
20120075168 | Osterhout et al. | Mar 2012 | A1 |
20120085377 | Trout | Apr 2012 | A1 |
20120092161 | West | Apr 2012 | A1 |
20120092460 | Mahoney | Apr 2012 | A1 |
20120123784 | Baker et al. | May 2012 | A1 |
20120136666 | Corpier et al. | May 2012 | A1 |
20120143495 | Dantu | Jun 2012 | A1 |
20120162423 | Xiao et al. | Jun 2012 | A1 |
20120194552 | Osterhout et al. | Aug 2012 | A1 |
20120206335 | Osterhout et al. | Aug 2012 | A1 |
20120206607 | Morioka | Aug 2012 | A1 |
20120207356 | Murphy | Aug 2012 | A1 |
20120214418 | Lee | Aug 2012 | A1 |
20120220234 | Abreu | Aug 2012 | A1 |
20120232430 | Boissy et al. | Sep 2012 | A1 |
20120249797 | Haddick et al. | Oct 2012 | A1 |
20120252483 | Farmer | Oct 2012 | A1 |
20120316884 | Rozaieski et al. | Dec 2012 | A1 |
20120323485 | Mutoh | Dec 2012 | A1 |
20120327194 | Shiratori | Dec 2012 | A1 |
20130002452 | Lauren | Jan 2013 | A1 |
20130044005 | Foshee et al. | Feb 2013 | A1 |
20130046541 | Klein et al. | Feb 2013 | A1 |
20130066636 | Singhal | Mar 2013 | A1 |
20130079061 | Jadhav | Mar 2013 | A1 |
20130115579 | Taghavi | May 2013 | A1 |
20130116559 | Levin | May 2013 | A1 |
20130127980 | Haddick | May 2013 | A1 |
20130128051 | Velipasalar et al. | May 2013 | A1 |
20130131985 | Weiland | May 2013 | A1 |
20130141576 | Lord et al. | Jun 2013 | A1 |
20130155474 | Roach et al. | Jun 2013 | A1 |
20130157230 | Morgan | Jun 2013 | A1 |
20130184982 | DeLuca | Jul 2013 | A1 |
20130202274 | Chan | Aug 2013 | A1 |
20130211718 | Yoo et al. | Aug 2013 | A1 |
20130218456 | Zelek et al. | Aug 2013 | A1 |
20130228615 | Gates et al. | Sep 2013 | A1 |
20130229669 | Smits | Sep 2013 | A1 |
20130245396 | Berman | Sep 2013 | A1 |
20130250078 | Levy | Sep 2013 | A1 |
20130250233 | Blum et al. | Sep 2013 | A1 |
20130253818 | Sanders et al. | Sep 2013 | A1 |
20130271584 | Wexler et al. | Oct 2013 | A1 |
20130290909 | Gray | Oct 2013 | A1 |
20130307842 | Grinberg et al. | Nov 2013 | A1 |
20130311179 | Wagner | Nov 2013 | A1 |
20130328683 | Sitbon et al. | Dec 2013 | A1 |
20130332452 | Jarvis | Dec 2013 | A1 |
20140009561 | Sutherland | Jan 2014 | A1 |
20140031081 | Vossoughi | Jan 2014 | A1 |
20140031977 | Goldenberg et al. | Jan 2014 | A1 |
20140032596 | Fish et al. | Jan 2014 | A1 |
20140037149 | Zetune | Feb 2014 | A1 |
20140055353 | Takahama | Feb 2014 | A1 |
20140071234 | Millett | Mar 2014 | A1 |
20140081631 | Zhu et al. | Mar 2014 | A1 |
20140085446 | Hicks | Mar 2014 | A1 |
20140098018 | Kim et al. | Apr 2014 | A1 |
20140100773 | Cunningham et al. | Apr 2014 | A1 |
20140125700 | Ramachandran | May 2014 | A1 |
20140132388 | Alalawi | May 2014 | A1 |
20140133290 | Yokoo | May 2014 | A1 |
20140160250 | Pomerantz | Jun 2014 | A1 |
20140184384 | Zhu et al. | Jul 2014 | A1 |
20140184775 | Drake | Jul 2014 | A1 |
20140222023 | Kim et al. | Aug 2014 | A1 |
20140251396 | Subhashrao et al. | Sep 2014 | A1 |
20140253702 | Wexler | Sep 2014 | A1 |
20140278070 | McGavran | Sep 2014 | A1 |
20140281943 | Prilepov | Sep 2014 | A1 |
20140287382 | Villar Cloquell | Sep 2014 | A1 |
20140309806 | Ricci | Oct 2014 | A1 |
20140313040 | Wright, Sr. | Oct 2014 | A1 |
20140335893 | Ronen | Nov 2014 | A1 |
20140343846 | Goldman et al. | Nov 2014 | A1 |
20140345956 | Kojina | Nov 2014 | A1 |
20140347265 | Aimone | Nov 2014 | A1 |
20140368412 | Jacobsen | Dec 2014 | A1 |
20140369541 | Miskin | Dec 2014 | A1 |
20140379251 | Tolstedt | Dec 2014 | A1 |
20140379336 | Bhatnagar | Dec 2014 | A1 |
20150002808 | Rizzo, III et al. | Jan 2015 | A1 |
20150016035 | Tussy | Jan 2015 | A1 |
20150063661 | Lee | Mar 2015 | A1 |
20150081884 | Maguire | Mar 2015 | A1 |
20150099946 | Sahin | Apr 2015 | A1 |
20150109107 | Gomez et al. | Apr 2015 | A1 |
20150120186 | Heikes | Apr 2015 | A1 |
20150125831 | Chandrashekhar Nair et al. | May 2015 | A1 |
20150141085 | Nuovo et al. | May 2015 | A1 |
20150142891 | Haque | May 2015 | A1 |
20150154643 | Artman et al. | Jun 2015 | A1 |
20150196101 | Dayal et al. | Jul 2015 | A1 |
20150198454 | Moore et al. | Jul 2015 | A1 |
20150198455 | Chen | Jul 2015 | A1 |
20150199566 | Moore et al. | Jul 2015 | A1 |
20150201181 | Moore et al. | Jul 2015 | A1 |
20150211858 | Jerauld | Jul 2015 | A1 |
20150219757 | Boelter et al. | Aug 2015 | A1 |
20150223355 | Fleck | Aug 2015 | A1 |
20150256977 | Huang | Sep 2015 | A1 |
20150257555 | Wong | Sep 2015 | A1 |
20150260474 | Rublowsky | Sep 2015 | A1 |
20150262509 | Labbe | Sep 2015 | A1 |
20150279172 | Hyde | Oct 2015 | A1 |
20150324646 | Kimia | Nov 2015 | A1 |
20150330787 | Cioffi et al. | Nov 2015 | A1 |
20150336276 | Song | Nov 2015 | A1 |
20150341591 | Kelder et al. | Nov 2015 | A1 |
20150346496 | Haddick et al. | Dec 2015 | A1 |
20150356837 | Pajestka | Dec 2015 | A1 |
20150364943 | Vick | Dec 2015 | A1 |
20150367176 | Bejestan | Dec 2015 | A1 |
20150375395 | Kwon | Dec 2015 | A1 |
20160007158 | Venkatraman | Jan 2016 | A1 |
20160028917 | Wexler | Jan 2016 | A1 |
20160042228 | Opalka | Feb 2016 | A1 |
20160098138 | Park | Apr 2016 | A1 |
20160156850 | Werblin et al. | Jun 2016 | A1 |
20160198319 | Huang | Jul 2016 | A1 |
Number | Date | Country |
---|---|---|
201260746 | Jun 2009 | CN |
101527093 | Sep 2009 | CN |
201440733 | Apr 2010 | CN |
101803988 | Aug 2010 | CN |
101647745 | Jan 2011 | CN |
102316193 | Jan 2012 | CN |
102631280 | Aug 2012 | CN |
202547659 | Nov 2012 | CN |
202722736 | Feb 2013 | CN |
102323819 | Jun 2013 | CN |
103445920 | Dec 2013 | CN |
102011080056 | Jan 2013 | DE |
102012000587 | Jul 2013 | DE |
102012202614 | Aug 2013 | DE |
1174049 | Sep 2004 | EP |
1721237 | Nov 2006 | EP |
2368455 | Sep 2011 | EP |
2371339 | Oct 2011 | EP |
2127033 | Aug 2012 | EP |
2581856 | Apr 2013 | EP |
2751775 | Jul 2016 | EP |
2885251 | Nov 2006 | FR |
2401752 | Nov 2004 | GB |
1069539 | Mar 1998 | JP |
2001304908 | Oct 2001 | JP |
2010012529 | Jan 2010 | JP |
2010182193 | Aug 2010 | JP |
2013169611 | Sep 2013 | JP |
100405636 | Nov 2003 | KR |
20080080688 | Sep 2008 | KR |
20120020212 | Mar 2012 | KR |
1250929 | Apr 2013 | KR |
WO 1995004440 | Feb 1995 | WO |
WO 9949656 | Sep 1999 | WO |
WO 0010073 | Feb 2000 | WO |
WO 0038393 | Jun 2000 | WO |
WO 0179956 | Oct 2001 | WO |
WO 2004076974 | Sep 2004 | WO |
WO 2006028354 | Mar 2006 | WO |
WO 2006045819 | May 2006 | WO |
WO 2007031782 | Mar 2007 | WO |
WO 2008008791 | Jan 2008 | WO |
WO 2008015375 | Feb 2008 | WO |
WO 2008035993 | Mar 2008 | WO |
WO 2008008791 | Apr 2008 | WO |
WO 2008096134 | Aug 2008 | WO |
WO 2008127316 | Oct 2008 | WO |
WO 2010062481 | Jun 2010 | WO |
WO 2010109313 | Sep 2010 | WO |
WO 2012040703 | Mar 2012 | WO |
WO 2012163675 | Dec 2012 | WO |
WO 2013045557 | Apr 2013 | WO |
WO 2013054257 | Apr 2013 | WO |
WO 2013067539 | May 2013 | WO |
WO 2013147704 | Oct 2013 | WO |
WO 2014104531 | Jul 2014 | WO |
WO 2014138123 | Sep 2014 | WO |
WO 2014172378 | Oct 2014 | WO |
WO 2015065418 | May 2015 | WO |
WO 2015092533 | Jun 2015 | WO |
WO 2015108882 | Jul 2015 | WO |
WO 2015127062 | Aug 2015 | WO |
Entry |
---|
The Nex Band; http://www.mightycast.com/#faq; May 19, 2015; 4 pages. |
Cardonha et al.; “A Crowdsourcing Platform for the Construction of Accessibility Maps”; W4A'13 Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility; Article No. 26; 2013; 5 pages. |
Bujacz et al.; “Remote Guidance for the Blind—A Proposed Teleassistance System and Navigation Trials”; Conference on Human System Interactions; May 25-27, 2008; 6 pages. |
Rodriguez et al; “CrowdSight: Rapidly Prototyping Intelligent Visual Processing Apps”; AAAI Human Computation Workshop (HCOMP); 2011; 6 pages. |
Chaudary et al.; “Alternative Navigation Assistance Aids for Visually Impaired Blind Persons”; Proceedings of ICEAPVI; Feb. 12-14, 2015; 5 pages. |
Garaj et al.; “A System for Remote Sighted Guidance of Visually Impaired Pedestrians”; The British Journal of Visual Impairment; vol. 21, No. 2, 2003; 9 pages. |
Coughlan et al.; “Crosswatch: A System for Providing Guidance to Visually Impaired Travelers at Traffic Intersections”; Journal of Assistive Technologies 7.2; 2013; 17 pages. |
Sudol et al.; “LookTel—A Comprehensive Platform for Computer-Aided Visual Assistance”; Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference; Jun. 13-18, 2010; 8 pages. |
Paladugu et al.; “GoingEasy® with Crowdsourcing in the Web 2.0 World for Visually Impaired Users: Design and User Study”; Arizona State University; 8 pages. |
Kammoun et al.; “Towards a Geographic Information System Facilitating Navigation of Visually Impaired Users”; Springer Berlin Heidelberg; 2012; 8 pages. |
Bigham et al.; “VizWiz: Nearly Real-Time Answers to Visual Questions”; Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology; 2010; 2 pages. |
Guy et al.; “CrossingGuard: Exploring Information Content in Navigation Aids for Visually Impaired Pedestrians”; Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; May 5-10, 2012; 10 pages. |
Zhang et al.; “A Multiple Sensor-Based Shoe-Mounted User Interface Designed for Navigation Systems for the Visually Impaired”; 5th Annual ICST Wireless Internet Conference (WICON); Mar. 1-3, 2010; 9 pages. |
Shoval et al.; “Navbelt and the Guidecane—Robotics-Based Obstacle-Avoidance Systems for the Blind and Visually Impaired”; IEEE Robotics & Automation Magazine, vol. 10, Issue 1; Mar. 2003; 12 pages. |
Dowling et al.; “Intelligent Image Processing Constraints for Blind Mobility Facilitated Through Artificial Vision”; 8th Australian and New Zealand Intelligent Information Systems Conference (ANZIIS); Dec. 10-12, 2003; 7 pages. |
Heyes, Tony; “The Sonic Pathfinder: An Electronic Travel Aid for the Vision Impaired”; http://members.optuszoo.com.au/aheyew40/pa/pf—blerf.html; Dec. 11, 2014; 7 pages. |
Lee et al.; “Adaptive Power Control of Obstacle Avoidance System Using Via Motion Context for Visually Impaired Person”; International Conference on Cloud Computing and Social Networking (ICCCSN); Apr. 26-27, 2012; 4 pages. |
Wilson, Jeff, et al.; “SWAN: System for Wearable Audio Navigation”; 11th IEEE International Symposium on Wearable Computers; Oct. 11-13, 2007; 8 pages. |
Borenstein et al.; “The GuideCane—A Computerized Travel Aid for the Active Guidance of Blind Pedestrians”; IEEE International Conference on Robotics and Automation; Apr. 21-27, 1997; 6 pages. |
Bhatlawande et al.; “Way-finding Electronic Bracelet for Visually Impaired People”; IEEE Point-of-Care Healthcare Technologies (PHT); Jan. 16-18, 2013; 4 pages. |
Blenkhorn et al.; “An Ultrasonic Mobility Device with Minimal Audio Feedback”; Center on Disabilities Technology and Persons with Disabilities Conference; Nov. 22, 1997; 5 pages. |
Mann et al.; “Blind Navigation with a Wearable Range Camera and Vibrotactile Helmet”; 19th ACM International Conference on Multimedia; Nov. 28, 2011; 4 pages. |
Shoval et al.; “The Navbelt—A Computerized Travel Aid for the Blind”; RESNA Conference, Jun. 12-17, 1993; 6 pages. |
Kumar et al.; “An Electronic Travel Aid for Navigation of Visually Impaired Persons”; Communications Systems and Networks (COMSNETS), 2011 Third International Conference; Jan. 2011; 5 pages. |
Pawar et al.; “Multitasking Stick for Indicating Safe Path to Visually Disable People”; IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), vol. 10, Issue 3, Ver. II; May-Jun. 2015; 5 pages. |
Pagliarini et al.; “Robotic Art for Wearable”; Proceedings of EUROSIAM: European Conference for the Applied Mathematics and Informatics 2010; 10 pages. |
Greenberg et al.; “Finding Your Way: A Curriculum for Teaching and Using the Braillenote with Sendero GPS 2011”; California School for the Blind; 2011; 190 pages. |
Helal et al.; “Drishti: An Integrated Navigation System for Visually Impaired and Disabled”; Fifth International Symposium on Wearable Computers; Oct. 8-9, 2001; 8 pages. |
Parkes, Don; “Audio Tactile Systems for Designing and Learning Complex Environments as a Vision Impaired Person: Static and Dynamic Spatial Information Access”; EdTech-94 Proceedings; 1994; 8 pages. |
Zeng et al.; “Audio-Haptic Browser for a Geographical Information System”; ICCHP 2010, Part II, LNCS 6180; Jul. 14-16, 2010; 8 pages. |
AlZuhair et al.; “NFC Based Applications for Visually Impaired People—A Review”; IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Jul. 14, 2014; 7 pages. |
Graf, Christian; “Verbally Annotated Tactile Maps—Challenges and Approaches”; Spatial Cognition VII, vol. 6222; Aug. 15-19, 2010; 16 pages. |
Hamid, Nazatul Naquiah Abd; “Facilitating Route Learning Using Interactive Audio-Tactile Maps for Blind and Visually Impaired People”; CHI 2013 Extended Abstracts; Apr. 27, 2013; 6 pages. |
Ramya, et al.; “Voice Assisted Embedded Navigation System for the Visually Impaired”; International Journal of Computer Applications; vol. 64, No. 13, Feb. 2013; 7 pages. |
Caperna et al.; “A Navigation and Object Location Device for the Blind”; Tech. rep. University of Maryland College Park; May 2009; 129 pages. |
Burbey et al.; “Human Information Processing with the Personal Memex”; ISE 5604 Fall 2005; Dec. 6, 2005; 88 pages. |
Ghiani, et al.; “Vibrotactile Feedback to Aid Blind Users of Mobile Guides”; Journal of Visual Languages and Computing 20; 2009; 13 pages. |
Guerrero et al.; “An Indoor Navigation System for the Visually Impaired”; Sensors vol. 12, Issue 6; Jun. 13, 2012; 23 pages. |
Nordin et al.; “Indoor Navigation and Localization for Visually Impaired People Using Weighted Topological Map”; Journal of Computer Science vol. 5, Issue 11; 2009; 7 pages. |
Hesch et al.; “Design and Analysis of a Portable Indoor Localization Aid for the Visually Impaired”; International Journal of Robotics Research; vol. 29, Issue 11; Sep. 2010; 15 pages. |
Joseph et al.; “Visual Semantic Parameterization—To Enhance Blind User Perception for Indoor Navigation”; Multimedia and Expo Workshops (ICMEW), 2013 IEEE International Conference; Jul. 15, 2013; 7 pages. |
Katz et al.; “NAVIG: Augmented Reality Guidance System for the Visually Impaired”; Virtual Reality (2012) vol. 16; 2012; 17 pages. |
Rodríguez et al.; “Assisting the Visually Impaired: Obstacle Detection and Warning System by Acoustic Feedback”; Sensors 2012; vol. 12; 21 pages. |
Treuillet; “Outdoor/Indoor Vision-Based Localization for Blind Pedestrian Navigation Assistance”; WSPC/Instruction File; May 23, 2010; 16 pages. |
Ran et al.; “Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service”; Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04); 2004; 9 pages. |
Wang, et al.; “Camera-Based Signage Detection and Recognition for Blind Persons”; 13th International Conference (ICCHP) Part 2 Proceedings; Jul. 11-13, 2012; 9 pages. |
Krishna et al.; “A Systematic Requirements Analysis and Development of an Assistive Device to Enhance the Social Interaction of People Who are Blind or Visually Impaired”; Workshop on Computer Vision Applications for the Visually Impaired; Marseille, France; 2008; 12 pages. |
Merino-Garcia, et al.; “A Head-Mounted Device for Recognizing Text in Natural Sciences”; CBDAR'11 Proceedings of the 4th International Conference on Camera-Based Document Analysis and Recognition; Sep. 22, 2011; 7 pages. |
Yi, Chucai; “Assistive Text Reading from Complex Background for Blind Persons”; CBDAR'11 Proceedings of the 4th International Conference on Camera-Based Document Analysis and Recognition; Sep. 22, 2011; 7 pages. |
Yang, et al.; “Towards Automatic Sign Translation”; The Interactive Systems Lab, Carnegie Mellon University; 2001; 5 pages. |
Meijer, Dr. Peter B.L.; “Mobile OCR, Face and Object Recognition for the Blind”; The vOICe, www.seeingwithsound.com/ocr.htm; Apr. 18, 2014; 7 pages. |
OMRON; Optical Character Recognition Sensor User's Manual; 2012; 450 pages. |
Park, Sungwoo; “Voice Stick”; www.yankodesign.com/2008/08/21/voice-stick; Aug. 21, 2008; 4 pages. |
Rentschler et al.; “Intelligent Walkers for the Elderly: Performance and Safety Testing of VA-PAMAID Robotic Walker”; Department of Veterans Affairs Journal of Rehabilitation Research and Development; vol. 40, No. 5; Sep./Oct. 2003; 9 pages. |
Science Daily; “Intelligent Walker Designed to Assist the Elderly and People Undergoing Medical Rehabilitation”; http://www.sciencedaily.com/releases/2008/11/081107072015.htm; Jul. 22, 2014; 4 pages. |
Glover et al.; “A Robotically-Augmented Walker for Older Adults”; Carnegie Mellon University, School of Computer Science; Aug. 1, 2003; 13 pages. |
OrCam; www.orcam.com; Jul. 22, 2014; 3 pages. |
Eccles, Lisa; “Smart Walker Detects Obstacles”; Electronic Design; http://electronicdesign.com/electromechanical/smart-walker-detects-obstacles; Aug. 20, 2001; 2 pages. |
Graf, Birgit; “An Adaptive Guidance System for Robotic Walking Aids”; Journal of Computing and Information Technology—CIT 17; 2009; 12 pages. |
Frizera et al.; “The Smart Walkers as Geriatric Assistive Device. The SIMBIOSIS Purpose”; Gerontechnology, vol. 7, No. 2; Jan. 30, 2008; 6 pages. |
Rodriguez-Losada et al.; “Guido, The Robotic Smart Walker for the Frail Visually Impaired”; IEEE International Conference on Robotics and Automation (ICRA); Apr. 18-22, 2005; 15 pages. |
Kayama et al.; “Outdoor Environment Recognition and Semi-Autonomous Mobile Vehicle for Supporting Mobility of the Elderly and Disabled People”; National Institute of Information and Communications Technology, vol. 54, No. 3; Aug. 2007; 11 pages. |
Kalra et al.; “A Braille Writing Tutor to Combat Illiteracy in Developing Communities”; Carnegie Mellon University Research Showcase, Robotics Institute; 2007; 10 pages. |
Blaze Engineering; “Visually Impaired Resource Guide: Assistive Technology for Students who use Braille”; Braille 'n Speak Manual; http://www.blaize.com; Nov. 17, 2014; 5 pages. |
AppleVis; An Introduction to Braille Screen Input on iOS 8; http://www.applevis.com/guides/braille-ios/introduction-braille-screen-input-ios-8; Nov. 16, 2014; 7 pages. |
Dias et al.; “Enhancing an Automated Braille Writing Tutor”; IEEE/RSJ International Conference on Intelligent Robots and Systems; Oct. 11-15, 2009; 7 pages. |
D'Andrea, Frances Mary; “More than a Perkins Brailler: A Review of the Mountbatten Brailler, Part 1”; AFB AccessWorld Magazine; vol. 6, No. 1, Jan. 2005; 9 pages. |
Trinh et al.; “Phoneme-based Predictive Text Entry Interface”; Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility; Oct. 2014; 2 pages. |
Merri et al.; “The Instruments for a Blind Teacher of English: the challenge of the board”; European Journal of Psychology of Education, vol. 20, No. 4 (Dec. 2005), 15 pages. |
Kirinic et al.; “Computers in Education of Children with Intellectual and Related Developmental Disorders”; International Journal of Emerging Technologies in Learning, vol. 5, 2010, 5 pages. |
Campos et al.; “Design and Evaluation of a Spoken-Feedback Keyboard”; Department of Information Systems and Computer Science, INESC-ID/IST/Universidade Tecnica de Lisboa, Jul. 2004; 6 pages. |
Ebay; Matin (Made in Korea) Neoprene Canon DSLR Camera Curved Neck Strap #6782; http://www.ebay.com/itm/MATIN-Made-in-Korea-Neoprene-Canon-DSLR-Camera-Curved-Neck-Strap-6782-/281608526018?hash=item41912d18c2:g:˜pMAAOSwe-Fu6zDa ; 4 pages. |
Newegg; Motorola S10-HD Bluetooth Stereo Headphone w/ Comfortable Sweat Proof Design; http://www.newegg.com/Product/Product.aspx?Item=9SIA0NW2G39901&Tpk=9sia0nw2g39901; 4 pages. |
Newegg; Motorola Behind the Neck Stereo Bluetooth Headphone Black/Red Bulk (S9)—OEM; http://www.newegg.com/Product/Product.aspx?Item=N82E16875982212&Tpk=n82e16875982212; 3 pages. |
Lee et al.; “A Walking Guidance System for the Visually Impaired”; International Journal of Pattern Recognition and Artificial Intelligence; vol. 22; No. 6; pp. 1171-1186; 2008. |
Ward et al.; “Visual Experiences in the Blind Induced by an Auditory Sensory Substitution Device”; Journal of Consciousness and Cognition; 30 pages; Oct. 2009. |
Bharathi et al.; “Effective Navigation for Visually Impaired by Wearable Obstacle Avoidance System;” 2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET); pp. 956-958; 2012. |
Pawar et al.; “Review Paper on Multitasking Stick for Guiding Safe Path for Visually Disable People;” IJPRET; vol. 3, No. 9; pp. 929-936; 2015. |
Ram et al.; “The People Sensor: A Mobility Aid for the Visually Impaired;” 2012 16th International Symposium on Wearable Computers; pp. 166-167; 2012. |
Singhal; “The Development of an Intelligent Aid for Blind and Old People;” Emerging Trends and Applications in Computer Science (ICETACS), 2013 1st International Conference; pp. 182-185; Sep. 13, 2013. |
Aggarwal et al.; “All-in-One Companion for Visually Impaired;” International Journal of Computer Applications; vol. 79, No. 14; pp. 37-40; Oct. 2013. |
“Light Detector”; EveryWare Technologies; 2 pages; Jun. 18, 2016. |
Arati et al.; “Object Recognition in Mobile Phone Application for Visually Impaired Users;” IOSR Journal of Computer Engineering (IOSR-JCE); vol. 17, No. 1; pp. 30-33; Jan. 2015. |
Yabu et al.; “Development of a Wearable Haptic Tactile Interface as an Aid for the Hearing and/or Visually Impaired;” NTUT Education of Disabilities; vol. 13; pp. 5-12; 2015. |
Mau et al.; “BlindAid: An Electronic Travel Aid for the Blind;” The Robotics Institute Carnegie Mellon University; 27 pages; May 2008. |
Shidujaman et al.; “Design and Navigation Prospective for Wireless Power Transmission Robot;” IEEE; Jun. 2015. |
Wu et al.; “Fusing Multi-Modal Features for Gesture Recognition”; Proceedings of the 15th ACM on International Conference on Multimodal Interaction; Dec. 9, 2013; ACM; pp. 453-459. |
Pitsikalis et al.; “Multimodal Gesture Recognition via Multiple Hypotheses Rescoring”; Journal of Machine Learning Research; Feb. 2015; pp. 255-284. |
Shen et al.; “Walkie-Markie: Indoor Pathway Mapping Made Easy”; 10th USENIX Symposium on Networked Systems Design and Implementation (NSDI'13); pp. 85-98; 2013. |
Tu et al.; “Crowdsourced Routing II D2.6”; 34 pages; 2012. |
De Choudhury et al.; “Automatic Construction of Travel Itineraries Using Social Breadcrumbs”; pp. 35-44; Jun. 2010. |
Number | Date | Country
---|---|---|
20150201181 A1 | Jul 2015 | US |