Many situations exist in which it is useful or desirable to detect and track moving objects. Examples include security and surveillance applications such as border security and port security.
Moving object detection and tracking has historically been a relatively labor-intensive process. Human operators commonly have to watch an area diligently, and the attention this requires can limit the size of the area an operator can effectively observe.
Some semi-automated systems have been developed to aid operators in security and surveillance applications. One such system highlights a moving object by overlaying a box around it in the original image. Some systems have also included showing a “tail” in the original image that provides an indication of a past location of the object.
An aspect of the disclosure relates to systems for detecting, tracking, and displaying moving objects. Systems illustratively include a graphical user interface and a processing unit. The processing unit is a functional part of the system that executes computer readable instructions to generate the graphical user interface. The graphical user interface may include an alert and tracking window that has a first dimension that corresponds to a temporal domain and a second dimension that corresponds to a spatial domain. In some embodiments, alert and tracking windows include target tracking markers. Target tracking markers optionally provide information about moving objects such as, but not limited to, information about past locations of moving objects and information about sizes of moving objects. Certain embodiments may also include other features such as zoom windows, playback controls, and graphical imagery added to a display to highlight moving objects.
These and various other features and advantages that characterize the claimed embodiments will become apparent upon reading the following detailed description and upon reviewing the associated drawings.
Embodiments of the present disclosure include systems and methods for detecting, tracking, and displaying moving objects. One notable feature of certain embodiments is an alert and tracking window. Alert and tracking windows are illustratively placed below or near a video display of the area being monitored. In at least certain situations, alert and tracking windows aid operators in quickly and effectively detecting and tracking moving objects. Alert and tracking windows may include tracking markers that provide an indication of past locations, direction of travel, speed, and relative size of moving objects. These features may permit operators to observe a larger area than they otherwise could. Additionally, as is discussed below in greater detail, some embodiments may provide further benefits and advantages such as, but not limited to, reducing costs, enhancing operator sensitivity in identification of moving objects, and helping operators distinguish between moving objects of interest (e.g. a boat) and clutter or noise (e.g. a tree with branches or leaves moving in the wind).
Video window 102 illustratively includes target detection highlights, which are graphical features overlaid on top of the video to indicate that an object shown in the video is moving.
Alert and tracking window 152, as is shown in the figure, illustratively has a height 154 and a width 156.
Alert and tracking window 152 includes a number of rasters. Each raster illustratively runs along the entire width 156 of the window and has a height that is a fraction of the height 154 of the window. Each raster also illustratively includes a number of columns, each having a width that is a fraction of the width of the raster. In one embodiment, the heights of the rasters and the widths of the columns are illustratively one pixel, but embodiments are not limited to any particular number of pixels or dimensions.
In an embodiment, each raster in alert and tracking window 152 corresponds to one video frame of the video being shown in video window 102. For instance, the raster at the top 161 of window 152 illustratively corresponds to the current video frame being displayed in video window 102, and the raster at the bottom 162 of window 152 corresponds to a video frame that was previously shown in window 102. Embodiments are not, however, limited to a one-to-one correspondence between video frames and rasters in alert and tracking window 152. Rasters in alert and tracking window 152 may be generated at a fraction of the video frame rate. For instance, rasters may be generated once for every ten video frames, once every one hundred video frames, or at any other frequency. Additionally, the height 154 of window 152 is not limited to any particular dimension. Accordingly, window 152 illustratively has rasters that correspond to any number of previous video frames that may be desirable. For instance, for illustration purposes only and not by limitation, window 152 could be set up to show information that corresponds to 10 minutes, 30 minutes, 1 hour, 2 hours, or any other length of time.
In one embodiment, the intensity of the highlighted pixels or areas in a raster is dependent upon the number of pixels or amount of area in the corresponding video window for which motion is detected. The intensity illustratively increases as the amount of detected motion increases. For example, a column in a raster corresponding to five pixels with detected motion is displayed brighter than a column corresponding to only three pixels with detected motion. Accordingly, in some embodiments, alert and tracking windows provide users with indications of sizes, or at least relative sizes, of moving objects. An indication of horizontal size is given by the number of highlighted columns in a raster, and an indication of vertical size is given by the intensity of the highlights within the columns.
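To make the raster mechanics concrete, the following is a minimal Python/NumPy sketch that collapses a per-frame binary motion mask into one raster row and scrolls the window, generating one raster per ten frames. The function names, display gain, and decimation rate are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def raster_row(motion_mask: np.ndarray, window_width: int) -> np.ndarray:
    """Collapse a binary motion mask (height x width) for one video frame
    into a single raster row of the alert and tracking window."""
    frame_w = motion_mask.shape[1]
    # Each output column covers a vertical strip of the frame, so any
    # ratio of spatial correspondence between frame and window works.
    edges = np.linspace(0, frame_w, window_width + 1).astype(int)
    counts = np.array([motion_mask[:, edges[i]:edges[i + 1]].sum()
                       for i in range(window_width)], dtype=float)
    gain = 16.0  # display gain: 16 moving pixels saturate a column (assumption)
    return np.clip(counts * gain, 0, 255).astype(np.uint8)

def update_window(window: np.ndarray, motion_mask: np.ndarray,
                  frame_index: int, frames_per_raster: int = 10) -> np.ndarray:
    """Insert the newest raster at the top (current frame) and push older
    rasters toward the bottom, at a fraction of the video frame rate."""
    if frame_index % frames_per_raster == 0:
        window = np.roll(window, 1, axis=0)  # older rasters move down
        window[0, :] = raster_row(motion_mask, window.shape[1])
    return window
```

Because the per-strip pixel count drives brightness, a wide object lights more columns while a tall object brightens them, matching the horizontal and vertical size cues described above.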
The width 156 of alert and tracking window 152 spatially corresponds to width 106 of video window 102. In the particular example shown in the figure, there is a one-to-one correspondence, but embodiments include any ratio of correspondence.
In at least certain embodiments, alert and tracking windows include target tracking markers. Target tracking markers illustratively include highlighted pixels or areas of multiple rasters within an alert and tracking window. The target tracking markers correspond to detected moving objects shown in video window 102. In the figure, for instance, target tracking marker 182 corresponds to boat 112.
Before continuing to discuss additional features of embodiments of the present disclosure, it is worthwhile to highlight a few other benefits and possible advantages of features that have been discussed. One benefit of such a system is that it allows operators to easily identify moving objects in video window 102. For instance, an operator walking up to video window 102 may not be able to easily discover that boat 112 is moving or even that boat 112 exists. However, alert and tracking window 152 provides target tracking marker 182, which is easily identifiable by an operator and clearly shows that there has been a history of motion. An operator can identify marker 182 and then look to the corresponding areas of video window 102 to find boat 112. The target tracking marker in this sense helps to narrow or reduce the portion of video window 102 that an operator needs to study to find boat 112.
Another benefit is that alert and tracking windows provide direction and historical location information. For instance, in the figure, target tracking marker 182 traces the past locations of boat 112, and the slope of the marker indicates the boat's direction of travel.
Related to the location and direction information discussed above, another potential benefit is that alert and tracking windows may also provide relative speed information. For instance, if a target tracking marker is sloped at a large angle from vertical (i.e. the marker is more horizontally oriented), the object has traveled a larger distance in a shorter amount of time and is thus moving relatively quickly. Additionally, based on the speed information and the predicted directional information discussed previously, in one embodiment, systems are able to predict future locations of moving objects. These future locations can be shown, for instance, by extending a target tracking marker past a moving object's current location. Embodiments are not, however, limited to any particular method of communicating possible future locations.
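As a rough illustration of how slope translates into speed and a predicted location, the sketch below fits a line to the cells of one marker, assuming the highlighted cells have already been grouped into a marker; the unit conversions and the five-raster lookahead are assumptions for demonstration only.

```python
import numpy as np

def analyze_marker(marker_cells, seconds_per_raster, meters_per_column):
    """Estimate speed, direction, and a predicted future column from the
    cells of one target tracking marker.

    marker_cells: (raster_index, column) pairs, raster_index 0 being the
    newest (top) raster.  The unit conversions are illustrative.
    """
    rows = np.array([c[0] for c in marker_cells], dtype=float)
    cols = np.array([c[1] for c in marker_cells], dtype=float)
    slope, intercept = np.polyfit(rows, cols, 1)  # columns per raster
    # A marker far from vertical (large |slope|) covers more columns per
    # raster, i.e. more distance per unit time: the object is faster.
    speed = abs(slope) * meters_per_column / seconds_per_raster
    direction = "leftward" if slope > 0 else "rightward"
    # Negative raster indices lie in the future; extending the fitted
    # line there predicts where the marker (and object) will be next.
    future_col = intercept + slope * -5.0  # five rasters ahead (assumption)
    return speed, direction, future_col
```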
One other benefit worth noting at this point is that certain embodiments may help operators distinguish between real moving objects of interest and objects that have been detected as having motion but that are not of interest (i.e. clutter or noise). For instance, an area being monitored may contain an object such as a tree or a water wave that moves intermittently. Detection systems may detect this movement and highlight it in an alert and tracking window. Because such motion is not constant, however, it shows up only as an isolated highlight or mark in the alert and tracking window. These isolated highlights are distinguishable from the more or less solid, continuous trails of target tracking markers. Accordingly, even in systems that have motion clutter or noise, operators are able to identify objects of interest.
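One way to make this distinction automatic is a persistence test over the newest rasters, sketched below under the assumption that the window is stored as an intensity array with the newest raster in row 0; the thresholds and drift tolerance are illustrative, and this is one plausible filter rather than a disclosed algorithm.

```python
import numpy as np

def persistent_trail_columns(window: np.ndarray, recent_rasters: int = 8,
                             threshold: int = 128, drift: int = 2) -> np.ndarray:
    """Flag columns that are part of a continuous trail rather than clutter.

    A target tracking marker is highlighted in raster after raster, while
    a swaying branch or a breaking wave lights up only isolated cells.  A
    column qualifies when a highlight appears (within `drift` columns, to
    tolerate object motion) in every one of the newest rasters.
    """
    recent = window[:recent_rasters, :] >= threshold  # newest rows on top
    dilated = np.zeros_like(recent)
    for shift in range(-drift, drift + 1):
        # Horizontal tolerance; np.roll wraps at the edges, which is
        # acceptable for a sketch.
        dilated |= np.roll(recent, shift, axis=1)
    return dilated.all(axis=0)  # per-column persistence
```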
Another type of noise that may be present in a system is noise caused by camera or sensor instability. A camera could, for instance, be bumped or otherwise moved. As is explained later in greater detail, in one embodiment, movement is detected by comparing pixels. In such a system, any camera or sensor instability may show up as highlighted pixels indicating movement. One example of such highlighted pixels caused by camera instability is the white horizontal line 130 in the figure.
Returning to describing features of the graphical user interface, GUI 300 illustratively includes a video window 302 and zoom windows 310, 320, and 330. In one embodiment, a zoom window is generated when a user selects a moving object shown in video window 302, and the zoom window displays a magnified view of the area around the selected object.
In another embodiment, instead of zoom windows being generated based upon a user selecting a moving object shown in video window 302, zoom windows are automatically generated once an object is detected. For instance, in an embodiment, zoom windows 310, 320, and 330 are automatically generated by a software algorithm without a user manually clicking on any objects within video window 302. Furthermore, an automatic tracking function is optionally utilized to keep moving objects centered within their associated zoom windows.
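A minimal sketch of such automatic generation and centering follows, assuming a per-frame change mask is available from the detector and the frame is at least as large as the crop; the crop size and the centroid-based tracking are simplifying assumptions, not the disclosed tracking function.

```python
import numpy as np

def auto_zoom_window(frame: np.ndarray, change_mask: np.ndarray,
                     size: int = 128) -> np.ndarray:
    """Crop a zoom window centered on a detected object.

    The object position is taken to be the centroid of the changed
    pixels; recomputing the crop every frame keeps the moving object
    centered, a simple stand-in for an automatic tracking function.
    """
    ys, xs = np.nonzero(change_mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # assumes a non-empty mask
    half = size // 2
    # Clamp the crop so it stays inside the frame boundaries.
    top = min(max(cy - half, 0), frame.shape[0] - size)
    left = min(max(cx - half, 0), frame.shape[1] - size)
    return frame[top:top + size, left:left + size]
```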
As is shown in the figure, each zoom window illustratively includes a border: zoom windows 310, 320, and 330 have borders 311, 321, and 331, respectively.
GUI 300 also optionally includes features that facilitate identification of which part of video window 302 each zoom window corresponds to.
Zoom window borders may also include other features to aid operators in performing security and/or surveillance operations. In one embodiment, borders such as borders 311, 321, and 331 include horizontal and/or vertical scrollbars. The scrollbars allow operators to change the part of the larger image (i.e. the image shown in window 302) that is being displayed in the zoom window. An operator could, for example, change the zoom window to show an area to the left, right, above, and/or below the current area being shown. In another embodiment, borders also have interface elements that allow operators to quickly change the level of magnification of the zoom window (i.e. an operator can zoom in or zoom out on an area). It should be noted, however, that embodiments are not limited to the specific examples described above and illustratively include any mechanisms for scrolling and zooming. For instance, for illustration purposes only and not by limitation, scrolling and zooming capabilities could be implemented utilizing keyboard strokes or mouse buttons.
Users illustratively interact with the GUI on display device 402 using input device 406. Input device 406 may be one or more devices. Some examples of potential input devices include keyboards, mice, scroll balls, and touch screens. Embodiments are not however limited to any particular type of input devices and illustratively include any type of input device.
Operating environment 400 collects information to detect moving objects utilizing moving object sensor 404. In the examples described above, the information that has been collected has been shown as being video from a camera. Embodiments of the present disclosure are not, however, limited to only sensors that are cameras. Embodiments include any type of sensor that is able to collect information that may be utilized in detecting moving objects. For example, embodiments of sensor 404 include any imaging sensor that is responsive to different types of radiation. Some possible types of sensors, for illustration purposes only and not by limitation, include electro-optic, infrared, radar, magnetic, seismic, and acoustic.
Computing device 408 is illustratively communicatively coupled to moving object sensor 404, display device 402, and input device 406 through its interface 412. Computing device 408 includes memory unit 410 and processing unit 414. Memory unit 410 illustratively stores information or data collected by sensor 404. The stored information can be used for any of a great variety of applications such as, but not limited to, reviewing previously recorded or collected data. In one embodiment, playback controls such as playback controls 190 in the figure are provided so that operators can review the stored information.
In another embodiment, operating environment 400 optionally includes external data storage 411 in place of or in addition to memory unit 410. External data storage 411 is not limited to any particular configuration or implementation. Certain embodiments include storage 411 being implemented as external hard disk drives, external solid state drives, and/or data storage capabilities implemented across a computer network (e.g. a RAID system accessed across a network). Accordingly, data may be stored and later reviewed independently from the operation of computing device 408. This may also be advantageous in providing data back-ups and possibly storing the sensor information in a more secure location.
Processing unit 414 illustratively executes instructions stored in memory unit 410 and controls the operations of operating environment 400. Processing unit 414 may include one or more types of devices such as, but not limited to, application-specific integrated circuits, field programmable gate arrays, general purpose microprocessors, video/graphic cards, etc. For instance, computing device 408 may include a microprocessor that processes software or firmware and multiple graphics cards to support multiple display devices 402.
Although embodiments are not limited to any particular configuration of operating environment 400, it is worth noting that, in one embodiment, the environment is illustratively made of commercial off-the-shelf components. In other words, the environment can be implemented without the need for any specialized hardware. This may be advantageous in that it may allow at least some embodiments to have lower costs. Some examples, for illustration purposes only and not by limitation, of commercial off-the-shelf components that may be used include commercial workstation class computers capable of running software applications and hosting multiple graphics cards, commercial off-the-shelf graphics cards, standard computer control devices (e.g. keyboards and mice), and commercial off-the-shelf display monitors such as, but not limited to, flat panel color liquid crystal displays.
Returning to the method, a reference image of the area being monitored is illustratively captured first, and additional images of the area are subsequently captured for comparison.
At block 508, the additional images are compared to the reference image. Moving objects illustratively correspond to pixels that have changed from the reference image. Embodiments are not limited to any particular method of change detection. Some examples of testing procedures that can be utilized to perform change detection operations include linear independence tests, vectorized tests, and edge motion tests. Further, change detection can utilize application-specific information such as a region of interest or known object sizes or shapes. In one embodiment, a vector change detection algorithm is utilized that determines change at each image pixel (with respect to the reference image) based on a calculation using the test pixel and the surrounding pixels in a square region (e.g. 3×3, 5×5, 7×7). In another embodiment, the spatial resolution of the change detection algorithm (the size of the square region) is adjustable. The change detection operations may be performed simultaneously on multiple frames or on one frame at a time on a continuous basis.
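For illustration, the sketch below implements one plausible square-region comparison, using a mean absolute difference over the region in place of whatever vector calculation a given embodiment might employ; the threshold and default region size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_changes(reference: np.ndarray, test: np.ndarray,
                   region: int = 3, threshold: float = 25.0) -> np.ndarray:
    """Flag pixels that have changed relative to the reference image.

    Each pixel's decision uses the test pixel and its surrounding pixels
    in a square region (e.g. 3x3, 5x5, 7x7): here, the mean absolute
    difference over that region is compared against a threshold.
    """
    diff = np.abs(test.astype(float) - reference.astype(float))
    # Averaging the difference over the square region realizes the
    # algorithm's spatial resolution: a larger region suppresses
    # single-pixel noise but blurs small objects.
    regional = uniform_filter(diff, size=region)
    return regional > threshold
```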
At block 510, if one or more changes are detected, the method continues to block 512. At block 512, several optional events may be triggered upon a change detection. In one embodiment, the detected moving object shown on the display is highlighted. For instance, the image shown on the display is supplemented with a visual indicator (e.g. a box around the moving object). Additionally or alternatively, the portions of an alert and tracking window that correspond to the pixels in which change was detected are highlighted. In yet another embodiment, an audible or visual indicator is provided to alert operators of detected changes. Such indicators include, for illustration purposes only and not by limitation, an audio alarm or an indicator light.
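A small sketch of how the optional events at block 512 might be triggered follows, assuming a box-style highlight and a pixel-count noise gate; both the threshold and the returned actions are illustrative.

```python
import numpy as np

def on_change_detected(change_mask: np.ndarray, min_pixels: int = 20):
    """Derive the optional alert actions from a frame's change mask.

    Returns a bounding box for a highlight overlay around the moving
    object and a flag for sounding an audible or visual alarm; the pixel
    threshold and the box-style highlight are illustrative assumptions.
    """
    ys, xs = np.nonzero(change_mask)
    if ys.size < min_pixels:
        return None, False  # too few changed pixels: treat as noise
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return box, True  # (left, top, right, bottom), raise the alert
```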
Embodiments of the present disclosure may also include dedicated algorithms or series of algorithms to classify detected objects. For instance, embodiments illustratively identify an unknown object as a person, a vehicle, an animal, etc., and provide an indication of the classification to operators.
As has been described above, embodiments of the present disclosure include systems and methods for detecting, tracking, and displaying moving objects. Embodiments may provide advantages such as ease of use, increased operator efficiency in detecting objects, and relatively lower costs to implement. Additionally, certain embodiments may include alert and tracking windows that are intuitively understood by operators and that provide additional information about detected objects that may not otherwise be available. For instance, alert and tracking windows may provide information about relative speed, size, past locations, and direction of travel. Accordingly, embodiments may be advantageous in a wide variety of situations such as, but not limited to, surveillance and/or security applications.
Finally, it is to be understood that even though numerous characteristics and advantages of various embodiments have been set forth in the foregoing description, together with details of the structure and function of various embodiments, this detailed description is illustrative only, and changes may be made in detail, especially in matters of structure and arrangements of parts within the principles of the present disclosure to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
The present application is based on and claims priority of U.S. provisional patent application Ser. No. 61/265,156, filed Nov. 30, 2009, the content of which is hereby incorporated by reference in its entirety.