The present invention relates to monitoring networked remote devices such as portable data terminals, indicia readers or barcode scanners, configured to communicate with a server, and, more particularly, to a highly effective system and method for monitoring, analyzing and managing remote device failure and/or performance.
Remote devices such as portable data terminals, optical and laser indicia readers, barcode scanners, and other mobile computers, for example, typically read data represented by printed indicia such as symbols, symbology, and bar codes, for example. One type of symbol is an array of rectangular bars and spaces that are arranged in a specific way to represent elements of data in machine readable form. Optical indicia reading devices typically transmit light onto a symbol and receive light scattered and/or reflected back from a bar code symbol or indicia. The received light is interpreted by an image processor to extract the data represented by the symbol. Laser indicia reading devices typically utilize transmitted laser light. One-dimensional (1D) optical bar code readers are characterized by reading data that is encoded along a single axis, in the widths of bars and spaces, so that such symbols can be read from a single scan along that axis, provided that the symbol is imaged with sufficiently high resolution.
In order to allow the encoding of larger amounts of data in a single bar code symbol, a number of 1D stacked bar code symbologies have been developed which partition encoded data into multiple rows, each including a respective 1D bar code pattern, all or most of which must be scanned and decoded, then linked together to form a complete message. Scanning still requires relatively higher resolution in one dimension only, but multiple linear scans are needed to read the whole symbol.
A class of bar code symbologies known as two dimensional (2D) matrix symbologies have been developed which offer orientation-free scanning and greater data densities and capacities than 1D symbologies. 2D matrix codes encode data as dark or light data elements within a regular polygonal matrix, accompanied by graphical finder, orientation and reference structures.
Many other classes of bar code symbologies and/or indicia are known and in widespread use including, for example, PDF417, MicroPDF417, MaxiCode, Data Matrix, QR Code, Aztec, Aztec Mesas, Code 49, EAN-UCC Composite, Snowflake, Dataglyphs, Code 39, Code 128, Codabar, UPC, EAN, Interleaved 2 of 5, Reduced Space Symbology, Code 93, Codablock F, BC412, Postnet, Planet Code, British Post, Canadian Post, Japanese Post, OCR-A, OCR-B, Code 11, MSI, and Code 16K. Further, indicia may be represented by printed indicia, symbol indicia, biogenic/biometric indicia, or any information extracted from a captured image.
Conventionally, a reader, whether portable or otherwise, includes a central processor which directly controls the operations of the various electrical components housed within the bar code reader. For example, the central processor controls detection of keypad entries, display features, wireless network communication functions, trigger detection, and bar code read and decode functionality. More specifically, the central processor typically communicates with an illumination assembly configured to illuminate a target, such as a bar code, and an imaging assembly configured to receive an image of the target and generate an electric output signal indicative of the data optically encoded therein. The output signal is then converted by an analog to digital converter and analyzed by algorithms stored in memory to decode any barcode contained in the captured image. Further, the central processor often controls a network interface configured to communicate over a wireless or wired network with a host server.
All remote devices have complex electronic system components that can fail for several reasons, such as battery degradation, physical degradation of wear components such as docking station interfaces, memory failures, illumination, aimer, and imaging assembly failures, as well as failure due to environmental factors. Remote devices are subject to repetitious use, and each use reduces the mean time to failure of each device. Currently, the failure of a system component that renders a remote device inoperable requires a user in the field to attempt a fix, such as by consulting a user manual or other documentation or by communicating with the original equipment manufacturer (OEM). In those cases in which the device cannot be fixed in the field, the user often has to return the system component or device, such as by a return material authorization form, to the OEM and wait for the device to be repaired or a replacement to be sent. The user then suffers from device downtime and reduced productivity and/or throughput.
Accordingly, there is a need for a predictive remote device management system configured to monitor networked devices and more effectively manage remote device and/or system component failure and performance.
The present invention is disclosed with reference to the accompanying drawings, wherein:
It will be appreciated that for purposes of clarity and where deemed appropriate, reference numerals have been repeated in the figures to indicate corresponding features.
Referring to
Referring to
Still referring to
In the embodiment shown in
The monitoring module 224 includes program instructions that, when implemented by the processor 216, acquire one or more performance parameter values and communicate the performance parameter values to the host server 236 through the network interface 234 continuously, on a periodic basis, and/or upon request from the server 236. Exemplary performance parameter values include accumulated processor run time, processor failure, network interface throughput, image engine to memory transmission time, memory utilization, full memory utilization failure, memory read/write failure, battery level, battery failure, primary power source failure, network connection failure, application identification, screen identification, timestamp, battery charge/discharge cycles, dock/undock cycles, dock interface failure, keypad/trigger presses, keypad/trigger failure, display failure, touch screen presses, and touch screen presses per unit area, among others. Preferably, the monitoring module 224 acquires a plurality of performance parameter values so as to effectively monitor a plurality of system components and/or events.
In order to communicate a performance parameter value to the host server 236, the monitoring module 224 first communicates with at least one system component in order to acquire the performance parameter value. For example, to determine touch screen presses, the display interface 232 can communicate an event to the monitoring module 224 which can update or set a variable, of integer or floating point data type, for example, representing the total number of accumulated touch screen presses or the number of touch screen presses occurring since the last communication with the host server 236. Accordingly, the performance parameter value can include those monitored events occurring subsequent to the last communication with the host server 236 or the number of monitored events occurring subsequent to a specified point in time, for example.
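The event-accumulation behavior described above can be illustrated by the following minimal sketch. The class and method names are illustrative only and are not part of the specification; the sketch assumes a simple counter that is reset on each report to the host server.

```python
class MonitoringModule:
    """Accumulates performance parameter values between host-server polls.

    Illustrative sketch: the display interface reports each touch screen
    press as an event, and the module tracks the count occurring since
    the last communication with the host server.
    """

    def __init__(self):
        self.touch_presses = 0  # presses since the last host-server report

    def on_touch_event(self):
        # The display interface communicates each press event to the module,
        # which updates the accumulated count.
        self.touch_presses += 1

    def report(self):
        # Snapshot the accumulated value for transmission, then reset the
        # window so the next report covers only subsequent events.
        value = self.touch_presses
        self.touch_presses = 0
        return {"touch_screen_presses": value}
```

A counter of accumulated presses over the device's lifetime could be kept in the same way, simply by omitting the reset in `report`.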
The performance parameter value can also consist of or include an error code such as in the event of a system component failure. For example, it is well known in the art that data storage means 222 or memory system components can be configured to issue an error code to the system bus 238 upon read/write failure due to a corrupt memory block or a full memory utilization state, for example. In the case of a memory read/write error, it is preferable that a performance parameter value, including an error code, is automatically communicated to the monitoring module 224 where it can be transmitted to the host server 236 for further processing. The memory utilization performance parameter value is preferably communicated to the host server 236 on a periodic basis, upon request from the host server, or contemporaneously in the event of a system component failure (e.g. initiated by the receipt by the monitoring module of a performance parameter value including an error code). Also preferably communicated to the host server 236 by the monitoring module 224 is a remote device identifier to provide the host server with information regarding the device from which the parameter values were communicated.
In the event of a significant failure as defined by the network administrator, or as defined by, or inherent in, the system configuration, the remote device 200 can be configured to retrieve its last known successful network communication channel to communicate a performance parameter value including an error code. Further, the device 200 can also be configured to communicate a performance parameter value including an error code by operating on other than its primary power source, if necessary, such as a secondary battery.
In one exemplary embodiment, the monitoring module 224 maintains and logs, for at least the period subsequent to the prior communication with the host server 236, the accumulated number of keypad/trigger presses, accumulated processor clock cycles, accumulated network interface 234 throughput, battery charge/discharge cycles and accumulated touch screen presses, for example, and the remote device 200 communicates these performance parameter values to the host server 236 periodically as requested. Further, the monitoring module 224 automatically receives a performance parameter value in the form of an error code upon memory read/write failure, battery or primary power source 228 failure, keypad/trigger failure or touch screen failure. Upon receipt of a performance parameter value including an error code, the monitoring module 224 automatically communicates the performance parameter value to the system bus 238, network interface 234, and host server 236.
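The two reporting paths described above, periodic snapshots and contemporaneous error-code forwarding, can be sketched as follows. The payload format and field names are hypothetical, chosen only to illustrate that an error code is flagged for immediate transmission rather than waiting for the next poll.

```python
def build_snapshot(counters, device_id, error_code=None):
    """Assemble a performance-parameter payload for the host server.

    Illustrative sketch: `counters` maps parameter names to accumulated
    values, `device_id` is the remote device identifier communicated with
    every report, and `error_code`, when present, marks the payload as
    urgent so it is forwarded contemporaneously with the failure.
    """
    payload = {"device_id": device_id, "parameters": dict(counters)}
    if error_code is not None:
        # Error codes bypass the periodic schedule and are sent at once.
        payload["error_code"] = error_code
        payload["urgent"] = True
    return payload
```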
Referring to
The performance look-up table 248 is configured to store at least one predetermined failure value and/or at least one calculated failure value associated with at least one performance parameter. Preferably, the performance look-up table 248 is initially populated with predetermined values for each performance parameter, the predetermined values being generally known to the remote device 200 original equipment manufacturer (OEM) as representing known failure values. Alternatively, the performance look-up table 248 is initially populated upon system component failure whereby the performance look-up table 248 is populated with the value of the performance parameter value received contemporaneously with the failure. In another alternative configuration of the performance look-up table 248, each failure value is replaced by a value calculated based on the value received contemporaneously with the current failure and the failure value in the performance look-up table 248 at the time of the failure, such as by averaging or other calculation, for example. In operation, the analyzer module 244 is configured to compare the most recently received performance parameter value with the corresponding performance parameter value in the performance look-up table 248.
In one exemplary embodiment, the battery charge/discharge cycles parameter value is initially set to 100 in the performance look-up table 248 because 100 charge/discharge cycles is known by the OEM to result in battery failure in 80% of OEM batteries. Accordingly, in operation, the host server 236 periodically polls the remote device 200 to receive, from the monitoring module 224, the accumulated battery charge/discharge cycles performance parameter value. The analyzer module 244 then compares the value received by the host server 236 to the value of 100 in the performance look-up table 248. If the value is greater than 100, the analyzer module 244 can predict that failure is more than 80 percent likely to occur on the next charge/discharge cycle.
In another exemplary embodiment, upon battery failure, for example, the battery charge/discharge cycles performance parameter value previously stored in the database 246, for example, 100, is replaced by the value received contemporaneously with the current failure, for example, 80, as 80 charge/discharge cycles is likely a more accurate representation and prediction of the likely failure of the OEM battery in the user's environment/network. In this embodiment, the analyzer module 244 more accurately predicts the failure of remote device system components in the respective remote device network based on the environment as well as usage level and activities of those remote device users.
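The comparison and update logic of the performance look-up table 248 can be sketched as follows. The sketch assumes OEM-predetermined initial values and shows the averaging variant of the failure-value update described above; all names are illustrative.

```python
class PerformanceLookupTable:
    """Stores a failure value per performance parameter.

    Illustrative sketch of the analyzer-module comparison and of the
    configuration in which each stored failure value is blended with the
    value observed at an actual failure (the averaging variant).
    """

    def __init__(self, initial):
        # e.g. OEM-predetermined failure values, such as 100 battery
        # charge/discharge cycles.
        self.failure_values = dict(initial)

    def failure_likely(self, parameter, observed):
        # The analyzer module compares the most recently received value
        # with the stored failure value for the same parameter.
        return observed >= self.failure_values[parameter]

    def record_failure(self, parameter, observed):
        # On an actual component failure, replace the stored value with
        # the average of the old value and the contemporaneous observation,
        # adapting the prediction to the user's environment and usage.
        current = self.failure_values[parameter]
        self.failure_values[parameter] = (current + observed) / 2
```

For example, starting from the OEM value of 100 cycles, a battery that fails at 80 cycles would move the stored failure value to 90, tightening future predictions for that network.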
In yet another embodiment, the administrator of the host server 236 can define the values of the performance look-up table 248. For example, if the host server 236 administrator determines that battery supply is not important in the remote device 200 network environment, the administrator can set the battery charge/discharge cycles performance parameter value of the performance look-up table 248 to 200, for example, such that the analyzer module 244 does not determine that failure is imminent until it is 99 percent likely to occur on the next charge/discharge cycle, for example.
Still referring to
In one embodiment in which the analyzer module 244 of the host server 236 receives an error code as at least part of a performance parameter value, the host server 236, and/or the analyzer module 244, can communicate the error code to the remote device manufacturer interface 290 causing the knowledgebase module 292 to query the error code look-up table 294, based on the communicated error code, and retrieve error code information which it then communicates to the host server 236 where the information can be displayed to the host server 236 administrator. Accordingly, upon remote device 200 system component failure, the administrator of the host server 236 can automatically receive information regarding the failure thereby drastically reducing the effort required to diagnose and potentially resolve the cause of a remote device 200 failure.
In another embodiment, one of the database 246 or the performance look-up table 248 includes the one or more OEM error codes associated with a performance parameter. Accordingly, upon a determination by the analyzer module 244 that a system component is nearing likely failure, the analyzer module 244 retrieves the error code(s) likely to result from failure of the system component and automatically communicates the error code to the remote device manufacturer interface 290 which automatically responds by communicating error code information, retrieved from the error code look-up table 294, to the host server 236. The remote device manufacturer interface 290 and/or the knowledgebase module 292 can also be configured to communicate at least one of system update information, technical documentation and system upgrade information to the host server as requested by the administrator or as necessary as determined by the OEM.
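The knowledgebase query described above can be sketched as a simple keyed lookup. The error codes and descriptions below are hypothetical placeholders, not actual OEM codes; the point is only that a communicated code retrieves human-readable information for the administrator.

```python
# Hypothetical entries of the OEM error code look-up table, keyed by
# error code; real tables would be maintained by the manufacturer.
ERROR_CODE_TABLE = {
    "MEM_RW_FAIL": "Read/write failure: corrupt memory block or full memory utilization.",
    "BATT_FAIL": "Battery failure: replace the battery pack.",
}

def lookup_error_info(code):
    """Query the error-code look-up table for display to the administrator.

    Unknown codes fall through to a generic referral, since the
    knowledgebase cannot describe a failure it has no entry for.
    """
    return ERROR_CODE_TABLE.get(code, "Unknown error code; contact the OEM.")
```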
In those embodiments in which the analyzer module 244 is configured to determine likely failure of a system component, the host server 236 can also be configured to automatically communicate one or more notifications based on a trigger condition. The notification can be in the form of a simple graphical display on the host server 236, an automated notification sent to the OEM such as a return material authorization (RMA) request notification, or an automated e-mail message optionally containing error information retrieved from the knowledgebase module 292 and/or an RMA prepared for submission to the OEM at the option of the administrator. Preferably, the notification includes the remote device identifier of the failed device.
In one embodiment, the trigger condition(s) are predetermined by the OEM. In another embodiment, a notification is sent based on a trigger condition communicated to the analyzer module 244 of the host server 236 by the host server administrator. For example, should the administrator determine that, for example, batteries are not important in the environment in which the remote devices are being used, the administrator can communicate with the analyzer module 244 to set a trigger condition such that the notification sent is an e-mail message to the administrator and not an RMA. Accordingly, when a battery charge/discharge performance parameter value is received that indicates likely failure, as determined by the analyzer module 244 comparison with the corresponding value in the performance look-up table 248, the administrator will be notified by e-mail and will then decide the next course of action with respect to the failure. Alternatively, the administrator can determine that no notifications will be sent with respect to the battery charge/discharge cycles performance parameter or any other parameter. Even further, the administrator can set a trigger condition in the analyzer module 244 such that a graphical display on the host server 236, an e-mail message to the administrator containing error code information, and an RMA to the OEM can be provided upon a device 200 system component failure, or any other notification or combination of notifications.
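The administrator-defined trigger conditions above can be sketched as a mapping from a performance parameter to a list of notification actions. The action names ("display", "email", "rma") and message formats are illustrative only.

```python
def dispatch_notifications(parameter, device_id, triggers):
    """Build notification messages per administrator-defined trigger conditions.

    Illustrative sketch: `triggers` maps a parameter name to the list of
    actions configured for it; a parameter with no entry produces no
    notifications, matching the case where the administrator has opted out.
    """
    messages = []
    for action in triggers.get(parameter, []):
        if action == "display":
            messages.append(f"DISPLAY alert: {parameter} on {device_id}")
        elif action == "email":
            messages.append(f"EMAIL admin: {parameter} nearing failure on {device_id}")
        elif action == "rma":
            messages.append(f"RMA to OEM for device {device_id}")
    return messages
```

Configuring `{"battery_cycles": ["email"]}` reproduces the example above, in which the administrator is notified by e-mail rather than an RMA being sent; an empty mapping suppresses notifications entirely.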
In the embodiments in which an RMA is automatically communicated to the OEM based on predefined trigger condition(s), a replacement remote device 200 or system component can be automatically shipped to the user/administrator upon receipt by the OEM of the RMA. Along with the replacement component or device 200, instructions with respect to shipping the failed device 200 to the OEM can optionally be included. Accordingly, in this embodiment, the analyzer module is configured to predict a remote device 200 system component failure, as described above, and automatically generate an RMA request, based on comparison with a value in the performance look-up table 248 and predefined trigger condition, which can be promptly responded to by the shipping of a replacement component or device 200 by the OEM thereby significantly reducing or eliminating device downtime.
Even further, the location of the remote device software/data storage means/memory/disk image, or the image itself, can be stored in the database 246 entry corresponding to the remote device identifier for each device 200 and sent along with the RMA request so that the OEM can retrieve the location of the software image and/or the software image itself, optionally by communication through the remote device manufacturer interface 290, and ship a replacement remote device 200, if necessary, that includes a substantially identical software configuration. Similarly, the identification of any unique device 200 hardware component(s) can be stored in the database 246 entry corresponding to the remote device identifier for each device 200 and sent along with the RMA notification so that the OEM can replace the failed device 200, if necessary, with the appropriate hardware configuration.
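Assembling the RMA request from the database 246 entry, as described above, can be sketched as follows. The entry field names are hypothetical; the sketch shows only that the software image location and any unique hardware identifiers travel with the request so the OEM can ship an equivalently configured replacement.

```python
def build_rma_request(device_id, database):
    """Assemble an RMA request from the database entry for a device.

    Illustrative sketch: the entry keyed by the remote device identifier
    carries the software image location (or the image itself) and the
    identifiers of any unique hardware components, so the replacement
    matches the failed device's configuration.
    """
    entry = database[device_id]
    return {
        "device_id": device_id,
        "software_image_location": entry.get("image_location"),
        "hardware_components": entry.get("hardware", []),
    }
```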
In another embodiment, the analyzer module, or another module on the host server 236, includes program instructions that, when implemented by the processor 260, communicate with the display interface 252 to graphically display at least one performance value retrieved from the database 246 and/or the performance look-up table 248. In this embodiment, the administrator communicates with the host server 236 to selectively display performance parameter data for one or more devices 200 to allow for more useful interpretation of the acquired performance data.
In one exemplary operation, timestamp, application identification and screen identification are optional portions of the system snapshot of performance parameter values communicated to the host server 236 upon request. Although none of these performance parameters will likely reflect a system failure, these values provide information regarding application and screen usage overall, at particular points in time and as compared to contemporaneous hardware performance. These performance parameters further function to provide information about application and specific screen usage and, accordingly, allow a user, administrator and/or OEM access to device 200 performance with respect to third party software applications as well as OEM installed applications. Third party software applications can be designed according to an OEM software development kit which provides a framework for logging application and screen ID parameter values such as by communication with the display interface 232 and/or monitoring module 224. Accordingly, the performance parameter information can be graphically displayed by the analyzer module 244, or any other module of the host server 236, to allow the administrator to more effectively view which applications are used at particular times and average time spent by a user on a particular screen, for example.
Other permutations and graphical displays of the parameter value data are also possible such as displaying the performance value data of several devices 200 simultaneously to determine the health of comparable system components and/or overall device 200 health and displaying the performance parameter values of one device 200 simultaneously with the current performance look-up table 248 values to determine those system components nearing failure. Other graphical displays of performance parameter data are also contemplated.
While the principles of the invention have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the invention. In particular, certain performance parameters used herein are exemplary and not intended to limit the invention. Other embodiments are contemplated within the scope of the present invention in addition to the exemplary embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present invention, which is not to be limited except by the following claims.