The present disclosure pertains to wireless cameras, and more specifically to wireless cameras using RF multilateration and machine vision.
Wireless security cameras are closed-circuit television (CCTV) cameras that transmit a video and audio signal to a wireless receiver through a radio band. Many wireless security cameras require at least one cable or wire for power—the term “wireless” is sometimes used to refer only to the transmission process of video and/or audio. However, some wireless security cameras are battery-powered, making the cameras truly wireless from top to bottom.
Wireless cameras are proving very popular among modern security consumers due to their low installation costs and flexible mounting options. For example, there is no need to run expensive video extension cables, and wireless cameras can be mounted and/or installed in locations previously unavailable to standard wired cameras. In addition to the ease of use and convenience of access, wireless security cameras allow users to leverage broadband wireless internet to provide seamless video streaming over the internet.
Indoor tracking of people and objects is an area of critical importance for a wide variety of industries. Purely radio frequency (RF) or purely camera based (e.g., machine vision) tracking solutions have performance or corner case limitations that prevent them from becoming robust business intelligence tools.
For example, all existing methods of RF based indoor positioning have several limitations, ranging from large position inaccuracy (e.g., methods such as RF proximity and Received Signal Strength Indicator (RSSI) trilateration) to complex hardware architectures (e.g., RF triangulation, Time of Arrival (ToA), Time Difference of Arrival (TDoA)) to hefty processing requirements (e.g., RSSI fingerprinting). RSSI, or the Received Signal Strength Indicator, is a measure of the power level that an RF device, such as a WiFi or 3G client, is receiving from the radio infrastructure at a given location and time. Other methods, such as RF multi-angulation, use complex phased antenna arrays to determine both the RSSI and the angle of arrival of an incoming RF signal. However, multiple radios on a single device dedicated to just this task are needed for the approach to work. Moreover, RF Time of Arrival methods are cost prohibitive for anything shorter range than GPS because the hardware required to detect the shorter times of flight is too expensive for commercial deployment.
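As a point of reference, the following is a minimal sketch of how an RSSI reading is commonly converted into a distance estimate under a log-distance path-loss model; the reference power and path-loss exponent used here are illustrative assumptions, not values specified by this disclosure, and the sensitivity of the result to those parameters is exactly the inaccuracy noted above.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.5):
    """Estimate distance (meters) from a measured RSSI using the
    log-distance path-loss model: RSSI = P0 - 10*n*log10(d/d0).

    tx_power_dbm is the expected RSSI at a 1 m reference distance and
    path_loss_exponent models the environment; both are illustrative
    assumptions rather than values from this disclosure.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a -65 dBm reading maps to roughly a 10 m estimate indoors.
print(round(rssi_to_distance(-65.0), 1))
```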
Another method of increasing the accuracy of RF based indoor positioning is the use of RSSI fingerprinting to better model the RF surroundings. Traditionally, this is done by placing a fixed beacon at a known distance from the access points and continuously monitoring the RSSI of its emissions. These measurements are compared to the fixed Line of Sight approximated values to better model the access point's surroundings. Modeling accuracy tends to increase with the total number of beacons deployed. However, deploying additional always-on beacons increases cost, and the total number of beacons rises at ⅓ the rate of the deployed access points for the least accurate method. Accordingly, in certain high-density deployment scenarios, there might not be space to accommodate the additional beacons.
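For illustration, a minimal RSSI fingerprinting sketch is shown below, assuming a small, hypothetical fingerprint map: a live RSSI reading is matched against stored per-access-point vectors and the closest fingerprint's position is returned. Real deployments store far more fingerprints, which is the source of the processing and deployment burden noted above.

```python
import math

# Hypothetical fingerprint map: position (x, y) in meters -> RSSI (dBm) per access point.
FINGERPRINTS = {
    (0.0, 0.0): {"ap1": -45, "ap2": -70, "ap3": -68},
    (5.0, 0.0): {"ap1": -60, "ap2": -52, "ap3": -71},
    (5.0, 5.0): {"ap1": -72, "ap2": -55, "ap3": -50},
}

def locate_by_fingerprint(reading):
    """Return the fingerprinted position whose stored RSSI vector is
    closest (Euclidean distance in dB) to the live reading."""
    def dist(stored):
        return math.sqrt(sum((stored[ap] - reading.get(ap, -100)) ** 2 for ap in stored))
    return min(FINGERPRINTS, key=lambda pos: dist(FINGERPRINTS[pos]))

print(locate_by_fingerprint({"ap1": -58, "ap2": -54, "ap3": -69}))  # -> (5.0, 0.0)
```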
Meanwhile, camera based indoor tracking solutions using computer vision/machine vision struggle with accuracy, even when using the most advanced deep learning algorithms (trained on large datasets) and very powerful hardware in the cloud. And for the instances in which all processing needs to be done on the device, there are even more constraints. There is a need to reduce computer vision processing requirements so that they fit within the camera's processing budget while still offering people and object tracking benefits to users.
The above-recited and other advantages and features of the present technology will become apparent by reference to specific implementations illustrated in the appended drawings. A person of ordinary skill in the art will understand that these drawings only show some examples of the present technology and would not limit the scope of the present technology to these examples. Furthermore, the skilled artisan will appreciate the principles of the present technology as described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various examples of the present technology are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the present technology.
Systems, methods, and devices are disclosed for enhancing target positioning. A broadcast signal from a device is received by one or more access points, where a first position area of the device is determined from an analysis of the broadcast signal. A second position area of the target, which is within the first position area, is determined by scanning pixels within the first position area in an image captured by a camera. Based on the scanned pixels, at least one target comprising a portion of the pixels is detected. The target within the image is classified, and based on the classification and the portion of the pixels comprising the target, the second position area of the target within the first position area of the image is triangulated.
The disclosed technology addresses the need in the art for developing an easy to use and easy to deploy camera and wireless access point(s) system without the limitations due to large position inaccuracy, complex hardware architectures, and/or hefty processing requirements (e.g., methods like RF proximity, RSSI trilateration, RF triangulation, ToA, TDoA, RSSI fingerprinting, etc.). Thus, while these processes can be used individually for the detection and location of people and/or objects, these processes are both computationally and financially expensive. Moreover, these detection techniques produce low detection confidence, especially for wireless cameras. Accordingly, computer vision processing requirements need to be reduced so that the processing requirements can fit within the camera's processing budget, and yet still offer accurate people and object tracking capabilities.
The disclosed technology provides a solution to the technological problems outlined above by using a process that combines aspects of RF multilateration with machine vision (e.g., camera sensing). By using the output of RF multilateration techniques in conjunction with machine vision (such as in optical sensing), more accurate methods, systems, and devices can be made for enhancing target positioning within images captured by a wireless camera. Moreover, since this process reduces the search area around the target, processing time and resources are reduced as well. This provides the benefit of saving on expensive hardware and increasing processing speeds without sacrificing tracking and location accuracy.
Applying the techniques of RF multilateration with camera machine vision provides a number of benefits. For example, the technique enables the accuracy of RF multi-angulation based indoor positioning without the hardware complexity and can provide accurate indoor (e.g., 2D) positioning using bilateration (a method that was hitherto impossible to do with just RF). The technique also enables higher confidence in computer vision object detection by pairing radio broadcast signature information to what is specifically or currently being looked at, and can confidently locate visually obscured or “invisible” objects. Moreover, the technique allows for tracking and/or detecting additional context to the objects being tracked that can be highly valuable for business intelligence purposes.
Thus, in the embodiments disclosed, target positioning is enhanced when a broadcast signal from a device is received by at least two wireless access points. From the at least two wireless access points, a first position area of the device is determined from an analysis of the broadcast signal according to a multilateration model (or similar model, such as methods from RSSI bilateration, trilateration, etc.). A position of the target can be refined within the first position area by scanning, by a camera using machine vision (e.g., optical sensing), pixels within the first position area only. Based on those scanned pixels, at least one target is detected as being present in an image. Each target within the image is classified based on type, and, based on the classification and portion of the pixels making up the target, a second, more accurate position area of the target is triangulated within the first position area of the image.
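The flow described above can be summarized in the following sketch, which uses a crude bounding-box overlap for the first position area and a simplified floor-to-pixel mapping purely for illustration; the access point positions, pixel scale, and margin are assumptions, and a real implementation would use the multilateration model and FOV-to-pixel calibration described elsewhere in this disclosure.

```python
from dataclasses import dataclass

@dataclass
class AccessPoint:
    x: float
    y: float
    measured_distance: float  # from RSSI, e.g., via a path-loss model

def estimate_zone(aps, margin=1.0):
    """Coarse 'first position area': the overlap of the per-access-point
    distance rings, approximated here as an axis-aligned bounding box."""
    xs_lo = [ap.x - ap.measured_distance for ap in aps]
    xs_hi = [ap.x + ap.measured_distance for ap in aps]
    ys_lo = [ap.y - ap.measured_distance for ap in aps]
    ys_hi = [ap.y + ap.measured_distance for ap in aps]
    return (max(xs_lo) - margin, max(ys_lo) - margin,
            min(xs_hi) + margin, min(ys_hi) + margin)

def zone_to_pixel_region(zone, meters_per_pixel=0.05, image_width=1920, image_height=1080):
    """Map a floor-plane zone to a pixel region, assuming a simple overhead
    projection purely for illustration (a real mapping would use the camera
    tilt angle and FOV-to-pixel calibration described in this disclosure)."""
    x0, y0, x1, y1 = zone
    def to_px(v):
        return int(v / meters_per_pixel)
    return (max(0, to_px(x0)), max(0, to_px(y0)),
            min(image_width, to_px(x1)), min(image_height, to_px(y1)))

# Two access points hear the device: a bilateration-style coarse zone.
aps = [AccessPoint(0.0, 0.0, 6.0), AccessPoint(10.0, 0.0, 5.0)]
zone = estimate_zone(aps)
region = zone_to_pixel_region(zone)
print("first position area (m):", zone)
print("pixel region to scan:", region)
# A detector and classifier (not shown) would then scan only 'region' and
# use the classification to triangulate the refined second position area.
```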
System 100 further includes camera 110 in communication with wireless access points 120. Camera 110 includes at least one receiver for receiving, from wireless access points 120, a broadcast signal from device 130. System 100 can determine a first position area corresponding to the location of device 130 by analyzing the broadcast signal according to multilateration model 112. Multilateration model 112 can include any number of models corresponding to RF tracking, including bilateration, trilateration, and/or any similar models. In addition, while the embodiments depicted in
Camera 110 is also trained with algorithms designed to locate both people and objects within an environment. Camera sensing service 114, for example, analyzes pixels within video frames captured by camera 110 in order to detect people and/or objects (e.g., targets) within its captured field of view (FOV). This is most commonly done through machine vision that distinguishes objects through optical sensing, although infrared cameras may use analogous infrared sensing to distinguish objects (and indeed, any frequency range can be used as long as the target object emits in that range).
In some embodiments, system 100 includes broadcast signature database 140, which can be a cloud connected database of radio broadcast signatures. Broadcast signatures can include, for example, the OUI/MAC address specified by the IEEE and/or any other identifying information in the radio transmission format. In some embodiments, broadcast signature database 140 can be local to camera 110.
An advantage of system 100 is that most of it requires a one-time initial setup, requiring updates only when a new node is added to the mix (i.e., a wireless access point or camera).
Referring back to the embodiment shown, the position of each node of the system (e.g., camera 230 and wireless access points 210, 220) and the scanning radio coverage for each of the wireless access points are known to some accuracy within their surrounding environment. The scanning radio coverage ideally extends to some radius from the wireless access point (assuming there are no intervening obstructions), and can be known or provided by the OEM.
The angle of camera 230 with respect to the floor of the environment is also known to some accuracy. This allows accurate mapping of camera 230's pixels to its field of view, FOV (FOV is denoted in
Each wireless access point scans for the broadcast signature of a device within the zone of its radio coverage capabilities. If the broadcast signature of the device is detected by two or more wireless access points, RSSI multilateration, trilateration, or a similar technique can be used to determine a rough estimate of the device's location. For example, in
The rough estimate of the device's location can then be improved through computer vision algorithms on camera 230. Machine vision via the use of computer vision algorithms can be performed in only the zone identified through the RSSI multilateration techniques (or similar). Thus, if RSSI multilateration indicates that the device is within trilateration zone 240, computer vision algorithms would be applied only to the pixels that map to the area within trilateration zone 240. This gives the system the ability to significantly lower the computational requirements for object tracking and/or increase the performance and accuracy for a given computational resource pool. During the search, camera 230 will detect within its FOV 232 any objects or persons of interest (e.g., target object) for location and/or tracking.
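A minimal sketch of this pixel-restricted scan is shown below, assuming a synthetic grayscale frame and a deliberately trivial brightness-based detector standing in for the camera's machine vision; only the pixels that map to the trilateration zone are examined.

```python
import numpy as np

def detect_in_region(image, region, threshold=200):
    """Run a (deliberately trivial) brightness-based detector on only the
    pixels inside region = (x0, y0, x1, y1); a real deployment would run its
    object-detection model on this crop instead of the full frame."""
    x0, y0, x1, y1 = region
    crop = image[y0:y1, x0:x1]
    ys, xs = np.nonzero(crop > threshold)
    # Return detections in full-image coordinates.
    return [(int(x + x0), int(y + y0)) for x, y in zip(xs, ys)]

# Synthetic 1080p grayscale frame with one bright 'target' inside the zone.
frame = np.zeros((1080, 1920), dtype=np.uint8)
frame[400, 900] = 255
print(detect_in_region(frame, region=(800, 300, 1100, 500)))  # -> [(900, 400)]
```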
The location of the detected target can be determined from a scan of the given pixel grid captured by camera 230, since the camera tilt angle and the FOV-to-pixel mapping are known. Thus, the system can triangulate or narrow down the location of a target to a more accurate position within trilateration zone 240. Accordingly, the location capability of the system is limited only by the resolution limit of camera 230 and/or the distance of the detected target, while saving computational power and resources and increasing processing speed.
Assuming that the approximated area is located in the FOV of the camera, a scan of the corresponding pixel grid can be limited to the approximated area (step 414). Thus, the specific pixels within the approximate area can be scanned for any objects or persons in the images captured by the camera. The pixels corresponding to portions of the approximate area are known or determined based on the camera tilt angle and the FOV to pixel mapping. If a target of interest (e.g., a device detected through RSSI or its broadcast signature, or a person associated with the device) is detected within one or more pixels, a more accurate location of the target can be determined based on pixel mapping the target to a smaller area or position. The smaller area or position is limited only by the resolution of the camera (or combination of cameras), which typically have better resolution than RF multilateration.
Additionally, some embodiments classify the target within the image, using one or more image classification algorithms (step 416) that match the machine vision detected target to a known object. The image classification algorithms can be one or more models of devices and people in different positions or orientations. While the image classification algorithms can be generated, stored, and/or processed locally on the camera, in some embodiments, the classification of the target can be based on a model stored remotely from the device (e.g., such as in the cloud). The classification may further narrow down a device's dimensions and size based on a match to a known device model (which, based on the classification of the target and portion of the pixels that include the target, can help refine or triangulate the device's position).
In
Thus, the method may determine, based on the classification (or lack thereof), that a device is a ‘known’ device or an ‘unknown’ device (step 418). The device can be a ‘known’ device, and subsequently identified, for example, through one or more matches with the image classification model. Or, in some embodiments, the device can be known through its broadcast signature, which may include identifying information about the device (e.g., model, manufacturer, dimensions, etc.).
Depending on the application of this system, a list of ‘known’ devices can be accessed to determine a match. For example, the list of ‘known’ devices can include, but is not limited to, an employee badge, a mobile device/laptop (with or without a Meraki Systems Manager installed), and/or any device with a detectable and distinguishable broadcast signature. In embodiments where the list of known devices is part of a cloud backed database, more data than just the broadcast signature can be accessed. For example, device information can be found in the cloud using the device's broadcast signature or ID (step 420).
Device information can be any information associated with the device. For example, an employee badge and/or mobile laptop can be associated with a known employee. The system can include a user profile associated with the employee, including distinguishing features detectable by the camera (e.g., facial features based on a photo, hair/eye color, build, height, etc.). Other information can include device design parameters, such as device dimensions, that can provide some scale to the images (e.g., a laptop of known length and width carried by a person can narrow down the person's location depending on how many pixels it extends across).
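The laptop-of-known-width example can be made concrete with the standard pinhole-camera relation, sketched below; the focal length, object width, and pixel extent are illustrative assumptions rather than values from this disclosure.

```python
def distance_from_pixel_extent(real_width_m, pixel_width, focal_length_px):
    """Pinhole-camera relation: an object of known physical width that spans
    pixel_width pixels lies at roughly f_px * W / w_px meters from the camera."""
    return focal_length_px * real_width_m / pixel_width

# A ~0.35 m wide laptop spanning 120 pixels, with an assumed 1400 px focal length.
print(round(distance_from_pixel_extent(0.35, 120, 1400), 2))  # ~4.08 m
```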
Once the device is identified by the camera's machine vision, the accuracy of its approximated location from RSSI/broadcast ID scanning can be improved based on determining the angle of arrival (AoA) from the camera (step 422) and then running a pixel mapping model to get a triangulated position based on the AoA and the multilateration model (step 424).
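A minimal sketch of this refinement step is shown below, assuming a linear pixel-to-angle mapping, a known camera heading, and a fixed horizontal FOV; the coarse multilateration estimate is simply projected onto the camera's pixel-derived bearing, which is one possible pixel mapping model rather than the specific one used by this disclosure.

```python
import math

def pixel_to_aoa(pixel_x, image_width=1920, horizontal_fov_deg=90.0):
    """Horizontal angle of arrival relative to the camera's optical axis,
    assuming pixels map linearly to angle across the field of view."""
    return (pixel_x / (image_width - 1) - 0.5) * horizontal_fov_deg

def fuse_aoa_with_multilateration(camera_xy, camera_heading_deg, pixel_x, coarse_xy):
    """Project the coarse multilateration estimate onto the camera's bearing
    ray defined by the pixel-derived angle of arrival."""
    bearing = math.radians(camera_heading_deg + pixel_to_aoa(pixel_x))
    dx, dy = math.cos(bearing), math.sin(bearing)
    cx, cy = camera_xy
    t = max(0.0, (coarse_xy[0] - cx) * dx + (coarse_xy[1] - cy) * dy)
    return (cx + t * dx, cy + t * dy)

# Camera at the origin facing +x; target seen slightly off-center, while
# multilateration says it is roughly 6 m out and 1 m to the side.
print(fuse_aoa_with_multilateration((0.0, 0.0), 0.0, pixel_x=1100, coarse_xy=(6.0, 1.0)))
```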
In some embodiments, the system can provide security features by quickly narrowing down on any target and the surroundings, and then checking for discrepancies. For example, a device within the environment can be identified based on a broadcast ID or application installed on the device that connects with the wireless access points. The broadcast ID or application installed on the device can be associated with a particular user, such as an employee, that is assigned to the device. The application can connect with a wireless access point and communicate an ID that identifies the particular user or device assigned to the user. After the system and/or camera classifies the target in the camera images as a person, machine vision applied to that person can determine whether there is a discrepancy in possession of the device. If the system identifies that the person in the camera images is not the user assigned to the device, the discrepancy in possession can be detected and security personnel (or other appropriate authority) notified.
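The possession check can be sketched as a comparison between the user assigned to a broadcast ID and the attributes of the person observed by machine vision; the badge registry and the crude height comparison below are hypothetical stand-ins for richer visual features such as face matching.

```python
# Hypothetical registry mapping broadcast IDs (e.g., badge MACs) to assigned users.
BADGE_REGISTRY = {
    "aa:bb:cc:01:02:03": {"user": "alice", "height_m": 1.65},
}

def possession_discrepancy(broadcast_id, observed):
    """Flag a discrepancy when the person seen holding the device does not
    match the user assigned to the broadcast ID (a crude height check stands
    in here for richer visual features such as face matching)."""
    assigned = BADGE_REGISTRY.get(broadcast_id)
    if assigned is None:
        return "unknown badge"
    if abs(observed["height_m"] - assigned["height_m"]) > 0.15:
        return f"possible discrepancy: badge assigned to {assigned['user']}"
    return "ok"

print(possession_discrepancy("aa:bb:cc:01:02:03", {"height_m": 1.90}))
```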
Security features that check for discrepancies in device possession can therefore be performed with respect to any device assigned to a user, including ID badges that broadcast to wireless access points. For example, an unauthorized individual with an employee's ID badge can be flagged down based on a failure to match with the assigned employee's features (e.g., the facial features don't match, the individual is much taller than the employee, etc.). The system can monitor for unauthorized individuals continuously and around the clock as a way to increase workplace safety. For example, in
In some instances, however, the device may be unknown (step 418). For an ‘unknown’ device, there are at least two separate ways to identify an object: (1) through the broadcast signature and (2) machine vision. These methods of device identification can be used separately or in conjunction in the disclosed system.
For an unknown device, the system can look up a vendor associated with the broadcast signature (step 426). Additionally and/or alternatively, the camera's machine vision can determine or make an educated guess as to the device's type (e.g., mobile phone, laptop, etc.) (step 428).
For example, if the broadcast signal cannot be used to uniquely identify the device, the classification of the device can proceed by identifying a broadcast signature within the broadcast signal and identifying a type of the device based on comparing the broadcast signature with a number of stored broadcast signatures with associated device information. The broadcast signatures can, for example, be stored in a database either local to the camera or remote from the camera (such as a cloud database). The broadcast signature may contain an identifier that identifies the vendor of the device. Based on a determination of the identity of the vendor, the type of the device can be inferred or a request for broadcast signatures associated with devices sold or produced by the vendor can be obtained for comparison to the detected broadcast signal.
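A minimal sketch of the vendor lookup is shown below, assuming the OUI occupies the first three octets of the MAC address; the vendor table is illustrative and would, in practice, be backed by an OUI registry or the broadcast signature database described above.

```python
# Illustrative OUI -> vendor table (real deployments would query an OUI registry
# or the cloud-backed broadcast signature database described above).
OUI_VENDORS = {
    "00:1a:2b": {"vendor": "ExampleBadgeCo", "likely_types": ["badge"]},
    "3c:4d:5e": {"vendor": "ExampleComputeInc", "likely_types": ["laptop", "phone"]},
}

def classify_unknown_device(mac_address):
    """Infer candidate device types for an unknown device from its OUI."""
    oui = mac_address.lower()[:8]
    entry = OUI_VENDORS.get(oui)
    if entry is None:
        return {"vendor": None, "likely_types": ["unknown"]}
    return entry

print(classify_unknown_device("3C:4D:5E:77:88:99"))
```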
Moreover, the type of the device can be further refined by machine vision applied to multilateration zone 520. For example, a vendor's OUI can narrow the object type down to, say, one or two types. The vendor may produce badges only, or phones and laptops only. In
In some embodiments, the camera can track targets over a period of time to identify an unknown device. In
In some embodiments, after the device location is refined based on the angle of arrival of the device with respect to the camera, RSSI triangulation, RSSI fingerprinting, or both can also be refined or updated (step 434). Both these models of localization can be used to further increase the baseline accuracy of the initial RSSI multilateration technique (through RSSI triangulation) and help map the immediate room environment (through RSSI fingerprinting).
For example, a broadcast signature within the broadcast signal from a device in multilateration zone 520 can be received by the surrounding wireless access points. The type of device can be identified based on machine vision applied to the initial multilateration area. Whether or not the device is ‘known’ or ‘unknown’, once the device is identified, its broadcast signature can be noted by the system and/or camera. The broadcast signature store can then be updated, if needed, based on the received broadcast signature and identified type of device. In this way, the updated broadcast signature can dynamically update the broadcast signature databases over time, which increases the accuracy of the multilateration model applied to subsequent devices. In some embodiments, the broadcast signature database and multilateration model can be updated continuously.
In some embodiments, the system and/or camera can detect ‘invisible’ targets or objects. This type of functionality can be performed locally on the camera or can be performed remotely (e.g., using cloud vision) to contextually find an ‘invisible’ object's location (step 440).
At time t=0 hours,
In this instance, the system and/or camera can track the target associated with the ‘invisible’ device over time until the camera's machine vision picks it up. Thus, for example, the system and/or camera tracks target 516 from, say,
In some instances, the device detected and identified by machine vision may not match the detected broadcast signature. When that happens, some embodiments can update the RSSI multilateration models to increase the accuracy of the system. For example, in
In many instances, however, multiple devices may be within the multilateration zone.
Once the RSSI and broadcast signature of the device are received at two or more wireless access points, a generic RSSI multilateration algorithm can be used to obtain an initial approximate area where the object is located (step 812). Assuming that the approximated area is located in the FOV of the camera, the camera can perform a scan of the corresponding pixel grid using machine vision techniques in that initial area (step 814), and any targets within the image can be classified using one or more image classification algorithms (step 816) that match the detected targets to known objects. If no objects are within the camera's FOV or captured image, the RSSI scan (step 810) can be repeated until an object is detected.
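One common "generic RSSI multilateration algorithm" is a linearized least-squares solve over per-access-point distance estimates; the sketch below assumes at least three access points with RSSI-derived distances and is only one of several ways to compute the initial approximate area.

```python
import numpy as np

def multilaterate(anchors, distances):
    """Least-squares position estimate from >= 3 (x, y) anchors and their
    RSSI-derived distances, using the standard linearization that subtracts
    the last anchor's circle equation from the others."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    xn, yn = anchors[-1]
    A = 2.0 * (anchors[-1] - anchors[:-1])            # rows: [2(xn-xi), 2(yn-yi)]
    b = (d[:-1] ** 2 - d[-1] ** 2
         - anchors[:-1, 0] ** 2 + xn ** 2
         - anchors[:-1, 1] ** 2 + yn ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three access points at known positions; distances consistent with a device at (4, 3).
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
dists = [5.0, 6.708, 6.403]
print(multilaterate(aps, dists))  # ~[4. 3.]
```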
The system then determines whether any of the devices detected within the machine vision scan are of the same or different type using the known/unknown method discussed in
However, the methodology differs when the system cannot completely differentiate the devices. The reasons for this could include multiple instances of similar devices closely packed together, non-line-of-sight (NLOS) devices closely packed together with line-of-sight (LOS) devices, etc. In these cases, all unique broadcast IDs can be monitored and/or all of these devices can be assigned a group ID and continuously tracked over a period of time (step 824). Various methods of separating the various members of a group into individual devices are discussed below, depending on whether there is any device movement (step 826).
Assuming there is device movement, the system and/or camera determines whether any devices are diverging or moving in a different direction than the group of devices (step 828). If there is no divergence or the direction of movement is the same, the multiple devices are tracked within the image captured by the camera as a cohesive block and a differencing model is applied to the broadcast signal to extract broadcast signals for each device (step 830). For example, a particular broadcast signature with a unique portion (say, a strong signal at 30 kHz that is not shared by other devices) can be inferred as a device within the group, and that signature can be subtracted through the differencing model from the group's signature. The removal of the signature may help in identifying other broadcast signals embedded within the group signature.
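The differencing model can be sketched as subtracting one device's inferred spectral signature from the group's combined signature, leaving a residual in which the remaining broadcast IDs are easier to isolate; the frequency bins and signature values below are synthetic.

```python
import numpy as np

# Synthetic combined spectrum for the group (arbitrary frequency bins, linear power units).
group_signature = np.array([0.1, 0.9, 0.2, 1.3, 0.1])

# Signature attributed to one device, inferred from a unique component
# (e.g., the strong bin-3 emission no other device in the group shares).
device_a = np.array([0.0, 0.0, 0.0, 1.2, 0.0])

# Differencing model: remove device A's contribution and clip at zero,
# leaving a residual in which remaining broadcast IDs are easier to spot.
residual = np.clip(group_signature - device_a, 0.0, None)
print(residual)  # -> [0.1 0.9 0.2 0.1 0.1]
```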
The system/camera can continue to track the group of devices in an effort to further distinguish broadcast IDs from the rest of the group (step 832). If there is no change in device location with respect to the other devices, the system/camera can assign the group's broadcast signature an averaged RSSI ID (thus including a mixture of all the broadcast IDs) (step 834).
If a device does move relative to the group, such as when machine vision determines that at least one of the devices is moving in a direction that diverges from the group, the moving or diverging device can be visually identified by the camera once it has separated beyond the camera's resolution limit. The identification can then be matched to a broadcast signature, and the broadcast signature associated with the visually identified device can be removed from the group's broadcast signal (step 836), such as through one or more differencing models. Accordingly, through these various methodologies, individual devices within a group of indistinguishable devices can be detected, identified, and separated from the group.
In some embodiments computing system 900 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple datacenters, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices.
Example system 900 includes at least one processing unit (CPU or processor) 910 and connection 905 that couples various system components, including system memory 915 such as read only memory (ROM) and random access memory (RAM), to processor 910. Computing system 900 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 910.
Processor 910 can include any general purpose processor and a hardware service or software service, such as services 932, 934, and 936 stored in storage device 930, configured to control processor 910 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 910 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 900 includes an input device 945, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 900 can also include output device 935, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 900. Computing system 900 can include communications interface 940, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 930 can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices.
The storage device 930 can include software services, servers, services, etc.; when the code that defines such software is executed by the processor 910, it causes the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 910, connection 905, output device 935, etc., to carry out the function.
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services or services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program or a collection of programs that carry out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium.
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.