Conventional data capturing systems (e.g., a combination of a data capture device and a host device) are not typically bi-directional in terms of the flow and processing of data. For example, in most conventional systems, a data capture device may capture an image, process the image (e.g., decode barcode data visible in the image), and then transmit the processed data to the host device. Once the data is transmitted, the data capture device cannot process it further. As a result, conventional data capturing systems often struggle to identify objects that do not have easily decodable information (such as a barcode).
For example, if an object does not include an indicia (e.g., a barcode)—such as produce, meat, etc.—the object is typically processed using object identifying applications before the resulting data is transmitted to the host device. However, a common limitation of such conventional configurations is that host devices are often not configured to read the output from these object identifying applications. As such, the output of these object identifying applications cannot inform the host device of the identity of the object placed before the data capture device. Thus, users of the host device must navigate through a full listing of products to identify the object manually, rendering the object identifying application moot.
In an embodiment, the present invention may be a system for identifying an object, and the system may include: (1) a data capture device; (2) an object prediction application deployed on either (i) the one or more memories of the data capture device or (ii) a host device; and/or (3) a selection application executing on the host device communicatively connected to the data capture device. The data capture device may include: an imaging assembly configured to capture images over one or more fields of view, one or more processors connected to the imaging assembly, and/or one or more memories communicatively coupled to the one or more processors. The object prediction application may be configured to: in response to the data capture device being unable to identify a decodable indicia on the object, receive image data associated with the images captured by the imaging assembly, identify one or more aspects of the object, and/or generate object candidate data corresponding to the object from the identification. The selection application may be configured to: receive the object candidate data, present a selection user interface via an interactive display of the host device, wherein a register log application was initially displaying a register log user interface on the interactive display and the register log application is configured to receive object identifier data and process the object identifier data, display, via the selection user interface, the object candidate data, receive one or more object selections of the object candidate data from a user interacting with the interactive display, generate object identifier data for each of the one or more object selections, and/or transmit the object identifier data to the register log application.
Additionally or alternatively, in some embodiments, presenting the selection user interface may cause the selection user interface to be displayed to a foreground of the interactive display and/or receiving the one or more object selections may cause the selection user interface to be displayed to a background of the interactive display.
Additionally or alternatively, in some embodiments, the system may further include an electronic weight scale connected to the data capture device. The electronic weight scale may be configured to: (i) detect, via a sensor of the electronic weight scale, a change in weight of a product presentation region and/or (ii) in response to detecting the change in weight, transmit a capture signal to the imaging assembly, wherein the product presentation region is within the one or more fields of view.
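For illustration, the weight-triggered capture flow above might be modeled as in the following minimal sketch; the class and method names (WeightScale, ImagingAssembly, read_weight) and the threshold value are hypothetical and are not part of any actual device API.

```python
class ImagingAssembly:
    def on_capture_signal(self):
        # In a real device, this would fire the image sensors over the one or
        # more fields of view; here it only records that a capture was requested.
        print("capture signal received -- capturing images")

class WeightScale:
    def __init__(self, imaging_assembly, threshold_grams=5.0):
        self.imaging_assembly = imaging_assembly
        self.threshold = threshold_grams   # minimum change treated as a new object
        self.last_weight = 0.0

    def read_weight(self):
        # Placeholder for the scale sensor's readout of the product
        # presentation region, in grams.
        return 0.0

    def poll(self):
        # Only a sufficiently large change in weight triggers a capture signal.
        weight = self.read_weight()
        if abs(weight - self.last_weight) >= self.threshold:
            self.imaging_assembly.on_capture_signal()
        self.last_weight = weight
```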
Additionally or alternatively, in embodiments where the object prediction application is deployed on the one or more memories of the data capture device, the data capture device may be configured to: (i) in response to the imaging assembly capturing the images, generate a timer that measures an amount of time that has elapsed since the images were captured, (ii) detect that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data, and/or (iii) in response to detecting that the timer reached the threshold amount of time, transmit a time-out signal to the selection application. Additionally or alternatively, the selection application may be further configured to: in response to receiving the time-out signal, display, via the selection user interface, one or more of: (i) a complete object listing and/or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, and/or (d) in need of reorientation.
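One way to realize this timer, as a sketch under assumed names (poll_result and send_timeout_signal are hypothetical hooks, and the three-second threshold is illustrative):

```python
import time

TIMEOUT_SECONDS = 3.0   # illustrative threshold amount of time

def await_candidates(poll_result, send_timeout_signal):
    """poll_result() returns object candidate data, or None while the object
    prediction application is still running; send_timeout_signal() notifies
    the selection application that the threshold elapsed first."""
    start = time.monotonic()                 # timer generated when images are captured
    while True:
        candidates = poll_result()
        if candidates is not None:
            return candidates
        if time.monotonic() - start >= TIMEOUT_SECONDS:
            send_timeout_signal()            # selection app may show a complete
            return None                      # object listing or a notification
        time.sleep(0.05)
```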
Additionally or alternatively, in some embodiments, the object prediction application may be further configured to: (i) generate a confidence score for each object candidate in the object candidate data, (ii) determine that a greatest confidence score exceeds a second greatest confidence score by a threshold amount, and/or (iii) generate determined object candidate data corresponding to the object candidate with the greatest confidence score, wherein the selection application receives the determined object candidate data.
Additionally or alternatively, in some embodiments, the object candidate data may include a determined object classification. Additionally or alternatively, displaying, via the selection user interface, the object candidate data may include displaying objects from the determined object classification.
In another embodiment, the present invention may be a data capture device comprising: (1) an imaging assembly configured to capture images over one or more fields of view; (2) one or more processors connected to the imaging assembly; (3) one or more memories communicatively coupled to the one or more processors; and/or (4) computing instructions stored on the one or more memories that, when executed, may cause the data capture device to: capture, via the imaging assembly, images of an object in one or more fields of view, in response to being unable to identify a decodable indicia on the object, provide image data associated with the images to an object prediction application deployed on the one or more memories, identify, via the object prediction application, one or more aspects of the object, generate, via the object prediction application, object candidate data corresponding to the object from the identification, wherein the object prediction application is deployed on the one or more memories and the object prediction application is configured to generate object candidate data corresponding to one or more objects detected within captured images, and/or transmit the object candidate data to a selection application executing on a host device communicatively connected to the data capture device, wherein the selection application may be configured to: (i) receive the object candidate data, (ii) present the object candidate data, via a selection user interface, (iii) receive one or more object selections of the object candidate data from a user interacting with the interactive display, (iv) generate object identifier data for each of the one or more object selections, and/or (v) transmit the object identifier data to a register log application executing on the host device, and/or wherein the register log application may be configured to: (i) initially display a register log user interface on an interactive display of the host device, (ii) receive object identifier data and process the object identifier data, and/or (iii) present the object identifier data, via the register log user interface.
Additionally or alternatively, in some embodiments, presenting the object candidate data, via the selection user interface, may cause the selection user interface to be displayed to a foreground of the interactive display and/or receiving the one or more object selections may cause the selection user interface to be displayed to a background of the interactive display.
Additionally or alternatively, in some embodiments, the data capture device may further include an electronic weight scale connected to the imaging assembly. The electronic weight scale may be configured to detect, via a sensor of the electronic weight scale, a change in weight of a product presentation region. Additionally or alternatively, in some embodiments, the images may be captured in response to detecting the change in weight, and the imaging assembly may have a field of view of the product presentation region.
Additionally or alternatively, the computing instructions may further cause the data capture device to: (i) in response to capturing the images, generate a timer that measures an amount of time that has elapsed since the images were captured, (ii) detect that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data, and/or (iii) in response to detecting that the timer reached the threshold amount of time, transmit a time-out signal to the selection application. Additionally or alternatively, the selection application may be further configured to: in response to receiving the time-out signal, display, via the selection user interface, one or more of: (i) a complete object listing and/or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, and/or (d) in need of reorientation.
Additionally or alternatively, in some embodiments, the object prediction application may be further configured to: (i) generate a confidence score for each object candidate in the object candidate data, (ii) determine that a greatest confidence score exceeds a second greatest confidence score by a threshold amount, and/or (iii) generate determined object candidate data corresponding to the object candidate with the greatest confidence score, wherein the selection application receives the determined object candidate data.
Additionally or alternatively, in some embodiments, the object candidate data may include a determined object classification. Additionally or alternatively, presenting the object candidate data may include displaying objects from the determined object classification.
In yet another embodiment, the present invention may be a tangible, non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a host device, may cause the host device to: (1) receive image data associated with images of an object, wherein the images were captured by an imaging assembly of a data capture device; (2) provide the image data to an object prediction application deployed on the tangible, non-transitory computer-readable medium; (3) identify, via the object prediction application, one or more aspects of the object; (4) generate, via the object prediction application, object candidate data corresponding to the object from the identification; (5) present, via a selection user interface, the object candidate data, wherein the selection user interface is associated with a selection application executing on the host device; (6) receive, via the selection user interface, one or more object selections of the object candidate data from a user interacting with an interactive display of the host device; (7) generate, via the selection application, object identifier data for each of the one or more object selections; and/or (8) transmit the object identifier data to a register log application executing on the host device, wherein the register log application is configured to: (i) initially display a register log user interface on the interactive display, (ii) receive object identifier data and process the object identifier data, and/or (iii) present the object identifier data, via the register log user interface.
Additionally or alternatively, in some embodiments, presenting, via the selection user interface, the object candidate data may cause the selection user interface to be displayed on a foreground of an interactive display of the host device and/or receiving, via the selection user interface, the one or more object selections may cause the selection user interface to be displayed to a background of the interactive display.
Additionally or alternatively, in some embodiments, an electronic weight scale may be connected to the data capture device, the images may be captured in response to the electronic weight scale detecting a change in weight of a product presentation region, and/or the imaging assembly may have a field of view of the product presentation region.
Additionally or alternatively, in some embodiments, the stored instructions may further cause the host device to: (1) in response to receiving the image data, generate a timer that measures an amount of time that has elapsed since the image data was received, (2) detect that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data, and/or (3) in response to detecting that the timer reached the threshold amount of time, present, via the selection user interface, one or more of: (i) a complete object listing and/or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, and/or (d) in need of reorientation.
Additionally or alternatively, in some embodiments, the object prediction application may be further configured to: (i) generate a confidence score for each object candidate in the object candidate data, (ii) determine that a greatest confidence score exceeds a second greatest confidence score by a threshold amount, and/or (iii) generate determined object candidate data corresponding to the object candidate with the greatest confidence score, wherein the determined object candidate data is presented to the user via one or more of the selection user interface or the register log user interface.
Additionally or alternatively, in some embodiments, the object candidate data may include a determined object classification. Additionally or alternatively, presenting the object candidate data may include displaying objects from the determined object classification.
In a further embodiment, the present invention may be a computer-implemented method including: (1) capturing, via an imaging assembly, images of an object in one or more fields of view; (2) in response to a data capture device being unable to identify a decodable indicia on the object, providing image data associated with the images to an object prediction application deployed on either (i) the data capture device or (ii) a host device; (3) identifying, via the object prediction application, one or more aspects of the object; (4) generating, via the object prediction application, object candidate data corresponding to the object from the identification, wherein the object prediction application is configured to generate object candidate data corresponding to one or more objects detected within captured images; (5) receiving, via a selection application executing on the host device, the object candidate data; (6) presenting, via a selection user interface, the object candidate data, wherein the selection user interface is associated with the selection application; (7) receiving, via the selection user interface, one or more object selections of the object candidate data from a user interacting with an interactive display of the host device; (8) generating, via the selection application, object identifier data for each of the one or more object selections; (9) passing, via the selection application, the object identifier data to a register log application, wherein a register log user interface associated with the register log application was initially displayed on the interactive display; and/or (10) presenting, via the register log user interface, the object identifier data.
Additionally or alternatively, in some embodiments, presenting, via the selection user interface, the object candidate data may cause the selection user interface to be displayed on a foreground of an interactive display of the host device and/or receiving, via the selection user interface, the one or more object selections may cause one or more of (i) the selection user interface to be displayed to a background of the interactive display and/or (ii) a register log user interface to be displayed to a background of the interactive display.
Additionally or alternatively, in some embodiments, the method may further include: (1) detecting, via an electronic weight scale connected to the imaging assembly, a change in weight of a product presentation region and/or (2) in response to detecting the change in weight, transmitting, from the electronic weight scale, a capture signal to the imaging assembly, wherein the imaging assembly captures the images in response to receiving the capture signal.
Additionally or alternatively, in some embodiments the method may further include: (1) in response to capturing the images, generating, via the data capture device, a timer that measures an amount of time that has elapsed since the images were captured; (2) detecting, via the data capture device, that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data; and/or (3) in response to detecting that the timer reached the threshold amount of time, causing, via the data capture device, the host device to display, via one or more of the selection user interface or the register log user interface, one or more of: (i) a complete object listing and/or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, and/or (d) in need of reorientation.
Additionally or alternatively, in some embodiments, the object prediction application may be further configured to: (i) generate a confidence score for each object candidate in the object candidate data, (ii) determine that a greatest confidence score exceeds a second greatest confidence score by a threshold amount, and/or (iii) generate determined object candidate data corresponding to the object candidate with the greatest confidence score, wherein the selection application receives the determined object candidate data.
Additionally or alternatively, in some embodiments, the object candidate data may include a determined object classification. Additionally or alternatively, presenting the object candidate data may include displaying objects from the determined object classification.
Advantages will become more apparent to those of ordinary skill in the art from the following description of the preferred embodiments, which have been shown and described by way of illustration. As will be realized, the present embodiments may be capable of other and different embodiments, and their details are capable of modification in various respects. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Conventional image-based data capture devices may include imaging devices (such as a scanner) connected to a host device (such as a point-of-sale (POS) terminal). The imaging device may scan an indicia (such as a barcode) located on an object, and the decoded information from the indicia identifying the object may then be transmitted to the host device. Upon receiving the decoded information, the host device may accurately identify the object and use this information in a POS transaction. While these conventional configurations work well for objects featuring indicia, as noted above, they tend to fail if the object does not feature an indicia because the host device is often not configured to read the output of object identifying applications.
The present disclosure relates generally to an imaging-based data capture device that may be connected to a host device that processes POS transactions. In particular, the methods and systems described herein address the limitations of conventional operations by allowing host devices to read the outputs of object identifying applications (e.g., in scenarios where an object does not have a barcode) without having to be specifically configured to read those outputs. In this way, software associated with the data capture device may be able to subsequently process data transmitted to the host device to allow for fast and accurate object identification without an easily decodable indicia.
The data capture device may include an imaging assembly and/or a prediction controller. In some embodiments, the imaging assembly and the prediction controller may be enclosed in a singular housing and/or share components (e.g., processing elements, memories, etc.). A user may place an object in the field of view of the imaging assembly, and the imaging assembly may capture one or more images of the object. In some embodiments, the images may then be processed, analyzed, and/or embedded to generate image data associated with the captured images. For example, the image data may include a highly contrasted version of an image and/or the image data may include metadata included with the image (e.g., decoded indicia data). The images and/or the image data may then be input into an object prediction application wherein one or more object candidates are generated. The data capture device may then pass the one or more object candidates to the host device.
The host device may run at least two applications in parallel: a register log application, initially in the foreground and/or displayed by an interactive display of the host device, and a selection application, initially in the background and/or not displayed by the interactive display of the host device. The register log application may be configured to receive decoded indicia data (e.g., decoded barcode data) and/or object identifier data (e.g., product look up (PLU) codes readable by the host device).
The selection application may receive the object candidates generated by the prediction controller. In response, the selection application may display over the register log application by presenting the object candidates to the user (e.g., via a pop-up window displayed on the foreground of the interactive display of the host device). A user may make one or more selections of the presented object candidates, and, upon receiving the user selections, the selection application may return to the background of the interactive display (e.g., by minimizing the pop-up window, sending it behind the window of the register log application, etc.) and convert the selected object candidates into object identifier data (e.g., PLU codes) readable by the register log application. The selection application may then pass the object identifier data to the register log application (e.g., via a driver interfacing with the register log application that transmits decoded indicia data from a scanning port of the host device).
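As a minimal sketch of that handoff, assuming string candidates, an illustrative candidate-to-PLU table, and a hypothetical register_log_send callable standing in for the driver interface:

```python
# Illustrative mapping from selected object candidates to PLU codes readable
# by the register log application; the entries are assumed for this example.
CANDIDATE_TO_HOST_PLU = {
    "banana": "4011",
    "orange": "4012",
}

def handle_selections(selected_candidates, register_log_send):
    """Convert the user's object selections into object identifier data and
    pass each identifier to the register log application."""
    object_identifiers = []
    for candidate in selected_candidates:
        plu = CANDIDATE_TO_HOST_PLU.get(candidate.lower())
        if plu is not None:
            object_identifiers.append(plu)
    for plu in object_identifiers:
        register_log_send(plu)   # e.g., a driver emulating the scanning port
    return object_identifiers
```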
In this way, the systems described herein overcome the limitations of conventional product identification systems because host devices may now be able to identify objects from the outputs of object prediction applications without the need for the specific configuration of reading those outputs.
As used herein, the term indicia should be understood to refer to any kind of visual marker that can be associated with an object. For example, indicia can be a 1D, 2D, or 3D barcode, a graphic, a logo, etc. Additionally, indicia may comprise encoded payload data as, for example, is the case with a 1D or 2D barcode where the barcode encodes a payload comprised of, for example, alphanumeric or special characters that may be formed into a string.
The data capture device 101 may include an imaging device 111 and/or a prediction controller 121.
The imaging device 111 may include one or more processors 112, one or more memories 114, one or more I/O ports 115, one or more image sensors 116, and/or one or more optics 118. Any of these components of the imaging device 111 may be communicatively coupled to one another via a dedicated communication bus. In one example, the imaging device 111 may be a camera device. In another example, the imaging device 111 may be a scanning device (such as a monoptic scanner, a bioptic scanner, etc.).
The one or more processors 112 may be one or more central processing units (CPUs), one or more coprocessors, one or more microprocessors, one or more graphical processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more programmable logic devices (PLDs), one or more field-programmable gate arrays (FPGAs), one or more field-programmable logic devices (FPLDs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices, etc.
The one or more memories 114 may be any local short term memory (e.g., random access memory (RAM), read only memory (ROM), cache, etc.) and/or any long term memory (e.g., hard disk drives (HDD), solid state drives (SSD), etc.). The one or more memories 114 may also store machine readable instructions, including any of one or more application(s) and/or one or more software component(s) which may be implemented to facilitate or perform the features, functions, or other disclosure described herein, such as any methods, processes, elements or limitations, as illustrated, depicted, and/or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
As an example, the machine readable instructions of the imaging device 111 may instruct, direct, and/or cause the imaging device 111 to capture images over one or more fields of view (FOVs). As another example, the machine readable instructions of the imaging device 111 may instruct, direct, and/or cause the imaging device 111 and/or any processors of the data capture device 101 to decode encoded information within images and/or image data, such as indicia (e.g., barcodes, quick-response (QR) codes, etc.).
The one or more processors 112 may include one or more registers capable of temporarily storing data, and the one or more processors 112 may include further storage capacity in the form of integrated memory slots. The one or more processors 112 may interact with any of the foregoing (e.g., registers, integrated memory slots, one or more memories 114, etc.) to obtain, for example, machine-readable instructions corresponding to, for example, the operations represented by the flowcharts of this disclosure.
The one or more I/O ports 115 may be, or may include, any number of different types of I/O units, I/O interfaces, and/or I/O circuits that enable the one or more processors 112 of the imaging device 111 to communicate with external devices (e.g., one or more I/O ports 125 of the prediction controller 121 and/or one or more I/O ports 155 of the host device 151). In some embodiments, the one or more I/O ports 115 of the imaging device 111 may have a direct connection to the one or more I/O ports 125 of the prediction controller 121 (e.g., via dedicated coupling via a communication bus, a wired connection, a wireless connection, etc.) to allow for the imaging device 111 to receive digital signals, object candidate data, and/or object identifier data from the prediction controller 121 and/or transmit images and/or image data to the prediction controller 121. Additionally or alternatively, in some embodiments, the one or more I/O ports 115 of the imaging device 111 may also have a direct connection to the one or more I/O ports 155 of the host device 151 (e.g., via a dedicated scanner terminal, a wireless connection, etc.) to allow for the imaging device 111 to transmit the object candidate data generated by the prediction controller 121 to the host device 151. In some embodiments, the imaging device 111 may also transmit object identifier data to the host device 151.
The one or more image sensors 116 may be any image capturing unit(s), component(s), and/or sensor(s) capable of capturing images. For example, the image sensors 116 may be CMOS image sensors, CCD image sensors, and/or other types of image sensor architectures. The image sensors 116 may be configured to convert the values of the component sensors into a file format associated with images.
The one or more optics 118 may be any optical elements, such as collimators, lenses, apertures, compartment walls, etc. that may be attached to and/or detached from a housing of the imaging device 111.
In operation, the imaging device 111 may be configured to capture images and/or decode indicia data in the images and/or image data. In embodiments where two or more imaging devices 111 are employed, the two or more imaging devices may be arranged such that the FOV of each imaging device 111 has a different perspective than the FOV of each other imaging device 111. In some embodiments, the imaging device 111 may capture the images upon receiving a digital communication signal (e.g., a signal flag triggered by an electronic sensor communicatively coupled to the imaging device 111). In some embodiments, the imaging device 111 may be configured to continuously capture images over a period of time (e.g., a video recording, a video stream, etc.). In these embodiments, a single frame may be selected. In some embodiments, the single frame is selected based upon the quality of the image frame (e.g., using a focus measure operator and/or algorithm as described herein). In these embodiments, the frame with the highest relative quality among the captured frames may be selected.
The imaging device 111 and/or one or more dedicated processors of the data capture device 101 may decode one or more indicia in the images and/or image data. For example, in the embodiments where the object has an indicia visible in the images and/or image data, the imaging device 111 and/or one or more dedicated processors of the data capture device 101 may decode the indicia to generate the decoded indicia data and then transmit the decoded indicia data to the host device 151. Conversely, in the embodiments where the object has no indicia, the indicia is not visible in the images and/or image data, and/or the imaging device 111 and/or one or more dedicated processors of the data capture device 101 cannot otherwise decode the indicia in the images and/or image data, the imaging device 111 and/or one or more dedicated processors of the data capture device 101 may transmit the images and/or image data to the prediction controller 121.
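This decode-or-forward branch might look like the following sketch, which assumes the open-source pyzbar library as a stand-in for the device's decoder; send_to_host and forward_to_prediction are hypothetical hooks for the host device 151 and the prediction controller 121, respectively.

```python
from PIL import Image
from pyzbar.pyzbar import decode   # pip install pyzbar (requires the zbar library)

def process_capture(image_path, send_to_host, forward_to_prediction):
    image = Image.open(image_path)
    results = decode(image)                        # empty list if no indicia is found
    if results:
        # Indicia visible and decodable: transmit the decoded payload(s).
        for result in results:
            send_to_host(result.data.decode("utf-8"))
    else:
        # No decodable indicia: hand the image to the prediction controller.
        forward_to_prediction(image)
```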
In the embodiments where the imaging device 111 is continuously capturing images over a period of time, the imaging device 111 may decode the images and/or image data and/or transmit the images, image data, and/or decoded indicia data in parallel to capturing the images (e.g., transmitting 24 image frames every second, transmitting a stream of image frames as each frame is captured, etc.).
The prediction controller 121 may include one or more processors 122, one or more memories 124, one or more input and/or output (I/O) ports 125, and/or one or more network adapters 128. Any of these components of the prediction controller 121 may be communicatively coupled to one another via a dedicated communication bus.
The one or more processors 122 may be one or more central processing units (CPUs), one or more coprocessors, one or more microprocessors, one or more graphical processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more programmable logic devices (PLDs), one or more field-programmable gate arrays (FPGAs), one or more field-programmable logic devices (FPLDs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices, etc.
The one or more memories 124 may be any local short term memory (e.g., random access memory (RAM), read only memory (ROM), cache, etc.) and/or any long term memory (e.g., hard disk drives (HDD), solid state drives (SSD), etc.). The one or more memories 124 may also store machine readable instructions, including any of one or more application(s) and/or one or more software component(s) which may be implemented to facilitate or perform the features, functions, or other disclosure described herein, such as any methods, processes, elements or limitations, as illustrated, depicted, and/or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
As another example, the machine readable instructions of the prediction controller 121 may include an object prediction application 126 configured to: (i) receive images and/or image data captured by the imaging device 111, (ii) identify one or more aspects of an object in the images and/or image data, and/or (iii) generate object candidate data corresponding to the object from the identification. In some embodiments, the object candidate data includes a classification, categorization, designation, or other determination regarding the identity of the object (e.g., a designation that an object is a banana). Additionally, in some embodiments, the object candidate data may include one or more product look up (PLU) codes.
As yet another example, the machine readable instructions of the prediction controller 121 may instruct, direct, and/or cause the prediction controller 121 to transmit images and/or image data and/or object candidate data to the imaging device 111 and/or the host device 151. The machine readable instructions of the prediction controller 121 may also instruct, direct, and/or cause the imaging device 111 and/or the host device 151 to facilitate and/or perform the features, functions, or other disclosure described herein. In some embodiments, the prediction controller 121 may also convert the object candidate data into object identifier data (e.g., by converting the PLUs of the object candidate data into PLUs understood by the host device 151). In these embodiments, the prediction controller 121 may also transmit object identifier data to the host device 151.
The one or more processors 122 may include one or more registers capable of temporarily storing data, and the one or more processors 122 may include further storage capacity in the form of integrated memory slots. The one or more processors 122 may interact with any of the foregoing (e.g., registers, integrated memory slots, one or more memories 124, etc.) to obtain, for example, machine-readable instructions corresponding to, for example, the operations represented by the flowcharts of this disclosure.
The one or more I/O ports 125 may be, or may include, any number of different types of I/O units, I/O interfaces, and/or I/O circuits that enable the one or more processors 122 of the prediction controller 121 to communicate with external devices (e.g., one or more I/O ports 115 of the imaging device 111 and/or one or more I/O ports 155 of the host device 151). In some embodiments, the one or more I/O ports 125 of the prediction controller 121 may have a direct connection to the one or more I/O ports 115 of the imaging device 111 (e.g., via dedicated coupling via a communication bus, a wired connection, a wireless connection, etc.) to allow for the prediction controller 121 to receive images and/or image data from and/or transmit object candidate data and/or object identifier data to the imaging device 111. Additionally or alternatively, in some embodiments, the one or more I/O ports 125 of the prediction controller 121 may have a direct connection to the one or more I/O ports 155 of the host device 151 (e.g., via a wired connection, such as a universal serial bus (USB) or ethernet connection, a wireless connection, etc.) to allow for the prediction controller 121 to transmit object candidate data and/or object identifier data to the host device 151.
The one or more network adapters 128 may be, or may include, one or more communication components configured to communicate (e.g., send and receive) data via one or more external/network port(s) over one or more communication networks. For example, the one or more network adapters 128 may be, or may include, a wired network adapter, connector, interface, etc. (e.g., an Ethernet network connector, an asynchronous transfer mode (ATM) network connector, a digital subscriber line (DSL) modem, a cable modem) and/or a wireless network adapter, connector, interface, etc. (e.g., a Wi-Fi connector, a Bluetooth® connector, an infrared connector, a cellular connector, etc.) configured to communicate over the one or more communication networks. Additionally or alternatively, in various aspects, the one or more network adapters 128 may include, or interact with, one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and that may be used in receipt and transmission of data via external/network ports connected to the one or more communication networks.
In operation, in some embodiments, the prediction controller 121 may be configured to: (i) receive images and/or image data from the imaging device 111, (ii) provide the images and/or image data to the object prediction application 126 to generate object candidate data, (iii) convert the object candidate data into object identifier data, and/or (iv) transmit the object candidate data and/or the object identifier data to the imaging device 111 and/or the host device 151.
For example, in the embodiments where the object has an indicia visible in the images and/or image data, the prediction controller 121 may not receive the images and/or image data from the imaging device 111, as the indicia may be decoded to generate the decoded indicia data. The generated decoded indicia data may be transmitted by the data capture device 101 (e.g., via the one or more ports 115 of the imaging device 111) to the host device 151. As another example, in the embodiments where the object has no indicia, the indicia is not visible in the images and/or image data, and/or the imaging device 111 and/or one or more dedicated processors of the data capture device 101 cannot otherwise decode the indicia in the images and/or image data, the prediction controller 121 may receive the images and/or image data from the imaging device 111, provide the images and/or image data to the object prediction application 126 to generate object candidate data (e.g., based upon one or more identifying features of the object), and then transmit the object candidate data to either the imaging device 111 or the host device 151. In some embodiments, the prediction controller 121 may be configured to convert the object candidate data into object identifier data (e.g., when the system of the data capture device 101 and the host device 151 are configured to only present a number of object candidates to a user of the host device 151).
The host device 151 may include one or more processors 152, one or more memories 154, one or more I/O ports 155, an interactive display 156, and/or one or more network adapters 158. Any of these components of the host device 151 may be communicatively coupled to one another via a dedicated communication bus.
The one or more processors 152 may be one or more central processing units (CPUs), one or more coprocessors, one or more microprocessors, one or more graphical processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more programmable logic devices (PLDs), one or more field-programmable gate arrays (FPGAs), one or more field-programmable logic devices (FPLDs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices, etc.
The one or more memories 154 may be any local short term memory (e.g., random access memory (RAM), read only memory (ROM), cache, etc.) and/or any long term memory (e.g., hard disk drives (HDD), solid state drives (SSD), etc.). The one or more memories 154 may also store machine readable instructions, including any of one or more application(s) and/or one or more software component(s) which may be implemented to facilitate or perform the features, functions, or other disclosure described herein, such as any methods, processes, elements or limitations, as illustrated, depicted, and/or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
As an example, the machine readable instructions of the host device 151 may instruct, direct and/or cause the host device 151 to receive images, image data, decoded indicia data, object candidate data, and/or object identifier data from the data capture device 101.
As another example, in some embodiments, the machine readable instructions of the host device 151 may include an object prediction application 126 configured to: (i) receive images and/or image data captured by the data capture device 101, (ii) identify one or more aspects of an object in the images and/or image data, and/or (iii) generate object candidate data corresponding to the object from the identification. In some embodiments, the object candidate data includes a classification, categorization, designation, or other determination regarding the identity of the object (e.g., a designation that an object is a banana). Additionally, in some embodiments, the object candidate data may include one or more product look up (PLU) codes.
As another example, the machine readable instructions of the host device 151 may instruct, direct, and/or cause the host device 151 to execute one or more applications, such as a register log application 161 and/or a selection application 171. In these examples, the register log application 161 may initially display on the interactive display 156 of the host device 151. The register log application 161 may be configured to receive and/or process object identifier data. The selection application 171 may be configured to receive the object candidate data, present the object candidate data (e.g., by displaying over the register log application 161 via a pop-up window on the foreground of the interactive display 156), receive one or more user selections of the object candidate data, convert the selected object candidate data to object identifier data, and/or transfer the object identifier data to the register log application 161.
The one or more processors 152 may include one or more registers capable of temporarily storing data, and the one or more processors 152 may include further storage capacity in the form of integrated memory slots. The one or more processors 152 may interact with any of the foregoing (e.g., registers, integrated memory slots, one or more memories 154, etc.) to obtain, for example, machine-readable instructions corresponding to, for example, the operations represented by the flowcharts of this disclosure.
The one or more I/O ports 155 may be, or may include, any number of different types of I/O units, I/O interfaces, and/or I/O circuits that enable the one or more processors 152 of the host device 151 to communicate with external devices (e.g., the one or more I/O ports 115 of the imaging device 111 and/or the one or more I/O ports 125 of the prediction controller 121). In particular, the one or more I/O ports 155 of the host device 151 may have a direct connection to the one or more I/O ports 115 of the imaging device 111 (e.g., via a USB connection, a wireless connection, etc.) to allow for the host device 151 to receive data from the imaging device 111. Similarly, the one or more I/O ports 155 of the host device 151 may have a direct connection to the one or more I/O ports 125 of the prediction controller 121 (e.g., via a USB connection, ethernet, a wireless connection, etc.) to allow for the host device 151 to receive data from the prediction controller 121.
The interactive display 156 of the host device may be any suitable display unit (e.g., a monitor, etc.) capable of outputting visual data to a user and/or capable of receiving input from a user alone (e.g., via a touch screen) and/or in conjunction with one or more input devices (e.g., a mouse and/or a keyboard).
The one or more network adapters 158 may be, or may include, one or more communication components configured to communicate (e.g., send and receive) data via one or more external/network port(s) over one or more communication networks. For example, the one or more network adapters 158 may be, or may include, a wired network adapter, connector, interface, etc. (e.g., an Ethernet network connector, an asynchronous transfer mode (ATM) network connector, a digital subscriber line (DSL) modem, a cable modem) and/or a wireless network adapter, connector, interface, etc. (e.g., a Wi-Fi connector, a Bluetooth® connector, an infrared connector, a cellular connector, etc.) configured to communicate over the one or more communication networks. Additionally or alternatively, in various aspects, the one or more network adapters 158 may include, or interact with, one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and that may be used in receipt and transmission of data via external/network ports connected to the one or more communication networks.
In some examples, as noted above, the data capture device 101 (e.g., via the prediction controller 121) may convert the object candidate data into the object identifier data (e.g., when the object prediction application outputs a single candidate). In these embodiments, the data capture device 101 may be configured to transmit the object identifier data directly to the register log application 161 of the host device 151, thereby avoiding the triggered display of the selection application 171.
The present embodiments may involve the use of machine vision, image recognition, object identification, and/or other image processing techniques and/or algorithms, collectively referred to as machine vision (MV) herein. In particular, images and/or image data may be input into one or more machine vision programs described herein that are able to recognize, track, and/or identify objects and/or specific features of objects (e.g., the bend of a banana, the color of an orange, the outline of an apple, etc.) in and across the images and/or image data. Additionally, such machine vision programs may also be able to analyze the images and/or image data itself to determine the quality of the images and/or image data, select one or more images from a plurality of images and/or image data, and/or the like.
In some embodiments, the MV techniques and/or algorithms may utilize image classification, image recognition, and/or image identification techniques and/or algorithms (e.g., query by image content (QBIC), optical character recognition (OCR), pattern and/or shape recognition, histogram of oriented gradients (HOG) and/or other object detection methods), two dimensional image scanning, three dimensional image scanning, and/or the like. Similarly, the MV techniques and/or algorithms may utilize focus measure operators and/or accompanying algorithms (e.g., gradient-based operators, Laplacian-based operators, wavelet-based operators, statistics-based operators, discrete cosine transform based operators, and/or the like) to determine the focus of the images and/or image data. Such operators and/or algorithms may be applied to the images and/or image data as a whole or to a portion of the images and/or image data. The resulting focus may be a representation of the quality of the images and/or image data. If the focus (and, thus, the quality) of the images and/or image data falls below a threshold value, subsequent images and/or image data may be captured.
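As one concrete instance of a Laplacian-based focus measure, the following sketch scores frames by the variance of the Laplacian (low variance indicating blur); OpenCV is assumed, and the threshold value is illustrative only.

```python
import cv2

FOCUS_THRESHOLD = 100.0   # illustrative cutoff below which new images are captured

def focus_measure(gray_image):
    # Variance of the Laplacian: few sharp edges yield low variance, i.e. blur.
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

def select_sharpest(gray_frames):
    """Return the frame with the highest relative quality, or None if every
    frame falls below the threshold and a recapture should be requested."""
    scored = [(focus_measure(frame), frame) for frame in gray_frames]
    best_score, best_frame = max(scored, key=lambda pair: pair[0])
    return best_frame if best_score >= FOCUS_THRESHOLD else None
```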
In some embodiments, the MV techniques and/or algorithms may utilize machine learning (ML) (also known as artificial intelligence (AI)) techniques and/or algorithms. For instance, a processor and/or a processing element (e.g., the one or more processors 112 of the imaging device 111, the one or more processors 122 of the prediction controller 121 and/or the one or more processors 152 of the host device 151) may be trained, validated, and/or otherwise developed using supervised machine learning to determine one or more object candidate identifications of an object within images and/or image data.
Further, in some embodiments, the ML program may employ one or more artificial neural networks, which may be convolutional neural network(s) (CNN), fully convolutional neural network(s) (FCN), deep learning neural network(s), and/or combined learning modules or programs that learn in two or more fields or areas of interest. Machine learning may involve identifying and/or recognizing patterns in existing data in order to facilitate making predictions, estimates, and/or recommendations for subsequent data.
In supervised ML, a processing element identifies patterns in existing data to make predictions and/or classifications about subsequently received data. Specifically, the processing element is trained using training data comprising example inputs, which may include features and associated labels. The training data is formatted so that the features explain or otherwise statistically correlate to the labels, such that an ML model outputs a prediction or classification corresponding to the label. Based upon the training data, the processing element may generate a predictive function which maps inputs to outputs and may utilize the predictive function to generate outputs based upon data inputs. In this way, the applied ML program, algorithm, and/or technique may determine and/or discover rules, relationships, and/or patterns between the exemplary inputs and the exemplary outputs. The exemplary inputs and exemplary outputs of the training data may include any of the data inputs or outputs described herein. In some embodiments, the processing element may be trained by providing it with a large sample of data with known characteristics or features. For example, as used herein, the features may be characteristics of objects (e.g., color, shape, classification, etc.). The labels, meanwhile, may include designations of the object (e.g., a banana, an orange, a pork loin, etc.). In such embodiments, the characteristics of objects may be used to train an ML model to classify, categorize, recognize, and/or identify an object.
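A toy sketch of this supervised setup, using scikit-learn; the features (dominant hue, elongation ratio), labels, and values are fabricated purely for illustration.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row is one example input: [dominant_hue, elongation_ratio].
features = [
    [0.15, 3.2],   # long, yellow  -> banana
    [0.08, 1.0],   # round, orange -> orange
    [0.16, 3.0],
    [0.09, 1.1],
]
labels = ["banana", "orange", "banana", "orange"]

# Train the model so the features statistically correlate to the labels.
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(features, labels)

# The fitted model acts as the predictive function mapping inputs to outputs.
print(model.predict([[0.14, 2.9]]))         # -> ['banana']
print(model.predict_proba([[0.14, 2.9]]))   # per-class confidence estimates
```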
Supervised ML may also include retraining, relearning, and/or otherwise updating models with new, or different, information, which may include information received, ingested, generated, or otherwise used over time.
In some embodiments, the ML model may generate multiple outputs. In these embodiments, the ML model may generate a confidence score for each generated output, where the ML model has been trained to assign a value to the one or more characteristics that bring an output to a specific classification. For example, when shown a novel image of apples, the ML model may be able to give a confidence score of 90 that the objects in the image are apples, a confidence score of 50 that the objects in the image are tomatoes, a confidence score of 10 that the objects in the image are cherries, etc. In some embodiments, the ML model may then select the output with the greatest confidence score to make a singular determination. In some embodiments, if two or more confidence scores are too close to each other in value (e.g., a difference between confidence scores does not exceed a threshold value), then the ML model may not make a selection and may instead present all generated outputs. For example, if the ML model generates a confidence score of 90 that the objects in the image are limes and a confidence score of 85 that the objects in the image are lemons, and confidence scores must differ by 10 or greater for the ML model to make a determination, the ML model may present both outputs and their respective confidence scores to the user.
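That gating rule can be written down directly; the following sketch reuses the lime/lemon numbers and the threshold of 10 from the example above.

```python
THRESHOLD = 10   # minimum margin between the top two confidence scores

def gate_candidates(scored_candidates):
    """scored_candidates: list of (label, confidence) pairs. Returns a single
    determined candidate when the greatest confidence score exceeds the second
    greatest by THRESHOLD; otherwise returns every candidate for the user."""
    ranked = sorted(scored_candidates, key=lambda pair: pair[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= THRESHOLD:
        return [ranked[0]]   # confident singular determination
    return ranked            # too close to call: present all outputs

print(gate_candidates([("limes", 90), ("lemons", 85)]))    # both presented
print(gate_candidates([("apples", 90), ("tomatoes", 50)])) # [('apples', 90)]
```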
The example implementation of the example processing platform 200a proceeds as follows.
In some embodiments, the example processing platform 200a may begin with a user interface of the register log application 161 (e.g., the register log user interface 162) initially being displayed on the interactive display 156 of the host device 151 (201). The host device 151 may execute the selection application 171 in parallel to the register log application 161 with the selection application 171 in the background and/or not initially displayed on the interactive display 156 of the host device 151.
The imaging device 111 may capture images of one or more objects (202). In some embodiments, the capturing of the images is continuous. In alternative embodiments, the images are captured in response to the imaging device 111 receiving a capturing signal in response to a triggered sensor (e.g., a motion sensor, a weight sensor, lidar) and/or a manual triggering of the imaging device 111 (e.g., via a mechanical trigger, button, switch, etc.). In some embodiments, the images may then be processed, analyzed, and/or embedded to generate image data associated with the captured images. For example, the image data may include a highly contrasted version of an image and/or the image data may include meta data included with the image (e.g., decoded indicia data).
In some embodiments, the imaging device 111 and/or one or more dedicated processors of the data capture device 101 may attempt to locate and decode indicia in the image. Upon failing to identify and/or decode any indicia in the image data, the imaging device 111 may then transmit the image data to the prediction controller 121. The prediction controller 121 may then input the image data into the object prediction application 126 (203).
The object prediction application 126 may then generate object candidate data as an output (204). The object candidate data may include one or more object identification candidates for each object in the image data.
The data capture device 101 may then transmit the object candidate data to the selection application 171 of the host device 151 (205). In some embodiments, the selection application 171 may implement a signal listener to detect incoming data from the data capture device 101. Additionally or alternatively, in some embodiments, the data capture device 101 may include a signal flag with the object candidate data to cause the selection application 171 to transition from a passive mode to an active mode.
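One way such a signal listener might be structured, as a sketch: a background thread blocks on incoming payloads and wakes the selection user interface. The queue stands in for whatever transport (e.g., USB, socket) connects the data capture device 101 and the host device 151, and all names are hypothetical.

```python
import queue
import threading
import time

incoming = queue.Queue()   # payloads arriving from the data capture device

def listener(on_candidates):
    while True:
        payload = incoming.get()   # blocks until data arrives (passive mode)
        on_candidates(payload)     # transition to active mode

def on_candidates(payload):
    # A real selection application would bring its pop-up window to the
    # foreground of the interactive display here.
    print(f"selection UI foregrounded with: {payload}")

threading.Thread(target=listener, args=(on_candidates,), daemon=True).start()
incoming.put({"object_1": ["banana", "plantain"]})   # simulated candidate data
time.sleep(0.1)   # give the daemon listener a moment in this toy example
```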
Upon receiving the object candidate data, the selection application 171 may then present a user interface of the selection application 171 displaying the object candidate data at the foreground of the interactive display 156 of the host device 151 (206) (e.g., via a pop-up window over the register log application 161). The display of the object candidate data may be per-object detected in the image data (e.g., if two objects were detected, the selection application may display five object candidates for each of the two detected objects).
The selection application 171 may then receive one or more user selections of the displayed object candidate data (207). The one or more user selections may be the user's confirmation of each detected object's identity (e.g., object 1 is a banana, object 2 is an apple, etc.).
Upon receiving the one or more user selections, the selection application 171 may convert the selected object candidate data into object identifier data (208). In some embodiments, as described herein, the conversion may involve converting the product lookup (PLU) code of the object candidate data (as assigned by the data capture device 101) into the PLU code of the register log application 161 and/or the host device 151.
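For illustration, the PLU conversion at (208) may be sketched as a simple mapping from device-assigned codes to register-readable codes; the table entries below are placeholders, not actual code assignments of the described system:

```python
# Hypothetical mapping from device-internal PLU codes to PLU codes readable
# by the register log application. Values are illustrative placeholders.

DEVICE_TO_REGISTER_PLU = {
    "DEV-0001": "4011",   # e.g., bananas
    "DEV-0002": "4131",   # e.g., apples
}

def to_object_identifier(selected_candidate):
    """Convert one selected object candidate into object identifier data."""
    device_plu = selected_candidate["plu"]
    return {
        "plu": DEVICE_TO_REGISTER_PLU[device_plu],  # register-readable code
        "label": selected_candidate["label"],
    }

print(to_object_identifier({"plu": "DEV-0001", "label": "bananas"}))
# -> {'plu': '4011', 'label': 'bananas'}
```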
The selection application 171 may then transmit the object identifier data to the register log application 161 (209). In some embodiments, the object identifier data may be sent to the register log application 161 via a driver of the scanning terminal (e.g., the point of connection between the data capture device 101 and the host device 151).
Upon receiving the object identifier data, the register log application 161 may then process the object identifier data and/or display it to the user via the interactive display 156 of the host device 151. The example processing platform 200a may then exit.
The example implementation of
In some embodiments, the example processing platform 200b may begin with a user interface of the register log application 161 (e.g., the register log user interface 162) initially being displayed on the interactive display 156 of the host device 151 (201). The host device 151 may execute the selection application 171 in parallel to the register log application 161 with the selection application 171 in the background and/or not initially displayed on the interactive display 156 of the host device 151.
The imaging device 111 may capture images of one or more objects (202). In some embodiments, the capturing of the images is continuous. In alternative embodiments, the images are captured in response to the imaging device 111 receiving a capture signal from a triggered sensor (e.g., a motion sensor, a weight sensor, lidar) and/or a manual triggering of the imaging device 111 (e.g., via a mechanical trigger, button, switch, etc.). In some embodiments, the images may then be processed, analyzed, and/or embedded to generate image data associated with the captured images. For example, the image data may include a highly contrasted version of an image and/or metadata included with the image (e.g., decoded indicia data).
In some embodiments, one or more processors of the data capture device 101 may attempt to locate and decode indicia in the image. Upon failing to identify and/or decode any indicia in the image data, the imaging device 111 may then transmit the image data to the host device 151. In some embodiments, the selection application 171 may implement a signal listener to detect incoming data from the data capture device 101. Additionally or alternatively, in some embodiments, the data capture device 101 may include a signal flag with the image data to cause the selection application 171 to transition from a passive mode to an active mode.
Upon receiving the image data, the host device 151, under the direction of the selection application 171, may then input the image data into the object prediction application 126 (203). The object prediction application 126 may then generate object candidate data as an output (204). The object candidate data may include one or more object identification candidates for each object in the image data. The object prediction application 126 may then transmit the object candidate data to the selection application 171 (205).
The selection application 171 may then present a user interface of the selection application 171 displaying the object candidate data at the foreground of the interactive display 156 of the host device 151 (206) (e.g., via a pop-up window over the register log application 161). The display of the object candidate data may be per object detected in the image data (e.g., if two objects were detected, the selection application may display five object candidates for each of the two detected objects).
The selection application 171 may then receive one or more user selections of the displayed object candidate data (207). The one or more user selections may be the user's confirmation of each detected object's identity (e.g., object 1 is a banana, object 2 is an apple, etc.).
Upon receiving the one or more user selections, the selection application 171 may convert the selected object candidate data into object identifier data (208). In some embodiments, as described herein, the conversion may involve converting the PLU code of the object candidate data (as assigned by the data capture device 101) into the PLU code of the register log application 161 and/or the host device 151.
The selection application 171 may then transmit the object identifier data to the register log application 161 (209). In some embodiments, the object identifier data may be sent to the register log application 161 via a driver of the scanning terminal (e.g., the point of connection between the data capture device 101 and the host device 151).
Upon receiving the object identifier data, the register log application 161 may then process the object identifier data and/or display it to the user via the interactive display 156 of the host device 151. The example processing platform 200b may then exit.
Alternative implementations of the example processing platform 200a and/or the example processing platform 200b represented by the block diagrams may include one or more additional or alternative elements, processes and/or devices. Additionally or alternatively, one or more of the example blocks of the block diagrams may be combined, divided, re-arranged, added, or omitted. Components represented by the blocks of the diagrams may be implemented by hardware, software, firmware, and/or any combination of hardware, software and/or firmware.
In the illustrated example, the data capture system 300 is shown as part of a POS system arrangement having the data capture device 301 positioned within a workstation counter. Generally, the data capture device 301 includes an enclosed housing region (also referred to as an upper housing, an upper housing region, an upper housing portion, an upper portion, and/or a tower portion) and a product presentation region 381 (also referred to as a lower housing, a lower housing region, a lower housing portion, a lower portion, and/or a platter portion). The enclosed housing region can be characterized by an optically transmissive window positioned therein along a generally vertical plane and a horizontally extending field of view which passes through the window. The product presentation region 381 can be characterized by an electronic weight scale platter that includes an optically transmissive window positioned therein along a generally horizontal (also referred to as a transverse) plane and a vertically extending field of view which passes through the window. The electronic weight scale platter is a part of a weight platter assembly that generally includes the electronic weight scale platter and a scale (or load cell) configured to measure the weight of an object placed on the top surface of the electronic weight scale platter. By virtue of that arrangement, the top surface of the electronic weight scale platter may be considered the top surface of the product presentation region 381 that faces a product scanning region thereabove.
In operation, a user generally passes an object across a product scanning region of the data capture device 301 in a swiping motion in some general direction relative to the window of the data capture device 301 (e.g., right-to-left). The product scanning region can generally be viewed as a region that extends above the product presentation region 381 and/or in front of the window where the data capture device 301 is operable to capture images of sufficient quality to perform imaging-based operations like decoding an indicia that appears in the obtained image. It should be appreciated that while an object may be swiped past the data capture device 301 in any direction, items may also be presented into the product scanning region by means other than swiping past the window(s). When the object comes into any of the fields of view of the data capture device 301, the indicia on the object may be captured and decoded by the data capture device 301, and corresponding data is transmitted to a communicatively coupled host device 151.
The data capture device 301 can utilize a variety of imaging and optical components (collectively referred to as an imaging subsystem or imaging assembly) to achieve the desired field(s) of view (FOV(s)) over which images can be captured and derived data may be transmitted to the host device 151, such as via a decoder (also referred to as a decoder subsystem), processor, or ASIC that may be internal to the data capture device 301, for decoding of indicia and further utilization of the decoded payload data. For example, an imaging assembly may include one or more image sensors (also referred to as imagers or imaging sensors) that can be, for example, a two-dimensional CCD or CMOS sensor, either a monochrome sensor or a color sensor, having, for instance, 1.2 megapixels arranged in a 1200×960 pixel configuration. It should be appreciated that sensors having other pixel counts (both below and above) are within the scope of this disclosure. These two-dimensional sensors generally include mutually orthogonal rows and columns of photosensitive pixel elements arranged to form a substantially flat square or rectangular surface. Such imagers are operative to detect light captured by an imaging lens assembly along a respective optical path or axis that normally traverses through either of the generally horizontal or generally upright window(s). In instances where multiple imaging components are used, each respective imager and imaging lens assembly pair is designed to operate together for capturing light scattered, reflected, or emitted from indicia as pixel data over a respective FOV. In other instances, a single imaging sensor may be used to generate a single primary FOV which may be split, divided, and/or folded to generate multiple FOVs by way of splitter and/or fold mirrors. In such cases, data collected from various portions of the imaging sensor may be evaluated as if it were obtained by an individual imaging assembly/imaging sensor.
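As a non-limiting illustration of evaluating portions of a single imaging sensor as independent FOVs, consider the following sketch; the even split shown is an assumption, as the actual split geometry depends on the splitter and/or fold mirror arrangement:

```python
# Illustrative sketch: sub-regions of one sensor readout treated as separate
# FOVs. The 1200x960 shape mirrors the example sensor above.
import numpy as np

frame = np.zeros((960, 1200), dtype=np.uint8)   # one monochrome capture

# Treat each half of the sensor readout as its own FOV; each sub-image is
# then evaluated as if obtained by an individual imaging assembly/sensor.
sub_fovs = {"fov_a": frame[:, :600], "fov_b": frame[:, 600:]}
for name, sub_image in sub_fovs.items():
    print(name, sub_image.shape)  # each half would be decoded separately
```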
The host device 351 may include a computing assembly that features an interactive display (such as a desktop computer, a laptop computer, a kiosk computer, a tablet, a smart device, etc.). The computing assembly may work in conjunction with a transceiver, a network adapter, Ethernet, and/or one or more connection ports (e.g., a scanning terminal, a USB port, etc.) to communicatively connect with the data capture device 301.
Details of operation of the systems, devices, and methods described herein are provided with respect to
The user interacting with the interactable elements 464a may cause one or more other GUIs and/or portions of GUIs to be displayed (e.g., interacting with the interactable element button featuring the text “Quick Lookup” may cause the register log GUI 400b as illustrated in
A user interacting with the interactable elements 464b may cause one or more other GUIs and/or portions of GUIs to be displayed (e.g., interacting with an interactable element button featuring the text “Back” (not shown) may cause the register log GUI 400a as illustrated in
In operation, the POS device may receive decoded indicia and/or PLU codes from the scanning device. When the scanning device scans an indicia on an object, the scanning device decodes the indicia and transmits the decoded indicia to the POS device. If the scanning device cannot locate, identify, and/or otherwise decode an indicia relating to the object, the scanning device may utilize an object prediction application (e.g., the object prediction application 126 described herein) to generate one or more PLU codes that may identify the object and/or provide candidates as to what the object might be. Additionally or alternatively, the user may manually enter data to identify an object (e.g., via the user interacting with the interactable input elements 464b of the register log GUI 400b). Upon receiving the decoded indicia data, the PLU codes, and/or the user-entered data, the register log application may add the value corresponding to the identified object to the point of sale transaction.
The user interacting with the interactable input element 474c may select one or more of the object candidates to identify the object scanned by the scanning device. In some embodiments, upon selecting one of the object candidates, the selection window 472c may move to the background behind the register log window 462c (in some embodiments, the selection window 472c is minimized and/or closed without causing the selection application 172 to halt). Alternatively, in some embodiments, selecting one or more of the object candidates may cause a confirmation alert to appear whereby the user may confirm that the selection accurately identifies the object. The user may also interact with the interactable element 474c (e.g., the interactable button that features the text “Close”) and/or the register log window 462c and/or the visible interactable elements 464c of the register log GUI to immediately cause the selection window 472c to move to the background, minimize, and/or close (e.g., either without selecting one or more of the object candidates and/or after selecting the one or more object candidates).
In operation, as described herein, an object prediction application 126, deployed on either the scanning device or the POS device, may generate one or more object candidates of an object in embodiments where the scanning device cannot locate, identify, and/or otherwise decode an indicia relating to an object placed on the product presentation region. The POS device may receive either (i) one or more object candidates or (ii) image data from the scanning device (e.g., via a scanning terminal that connects the scanning device and the POS device and allows the scanning device to transmit decoded indicia and/or PLU codes). In some embodiments, a scanning driver may be installed on the POS device that manages the receiving functionality of the POS device via the scanning terminal. Upon detecting that one or more object candidates have been received via the scanning terminal, the scanning driver may then pass the one or more object candidates to the selection application 172 running on the POS device. Upon detecting that image data has been received via the scanning terminal, the scanning driver may then pass the image data to the object prediction application 126 running on the POS device. In some embodiments, the selection application 172 and/or the object prediction application 126 are modules of the scanning driver. Once the selection application 172 has received the object candidates, the selection application 172 may then display the selection GUI 400c as illustrated in
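For illustration, the scanning driver's dispatch behavior described above may be sketched as follows; the payload fields and application interfaces are hypothetical stand-ins, not the driver's actual API:

```python
# Hedged sketch of the scanning driver's dispatch logic: object candidates
# go to the selection application, raw image data goes to the object
# prediction application deployed on the POS device.

def dispatch_from_scanning_terminal(payload, selection_app, prediction_app):
    if "object_candidates" in payload:
        # Candidates were generated on the scanning device itself.
        selection_app.receive_candidates(payload["object_candidates"])
    elif "image_data" in payload:
        # Prediction is deployed on the POS device instead; generate the
        # candidates locally, then hand them to the selection application.
        candidates = prediction_app.predict(payload["image_data"])
        selection_app.receive_candidates(candidates)
```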
The GUIs depicted in
The method and/or operation 500 may begin at block 502 by capturing images of an object in one or more fields of view. In some embodiments, the capturing of the image data may be performed via an imaging assembly (e.g., the imaging device 111). In these embodiments, the imaging assembly may be communicatively connected to a prediction controller (e.g., the prediction controller 121 described herein) and/or a host device (e.g., the host device 151 described herein). In some embodiments, the imaging assembly and the prediction controller may be combined into a singular housing. In these embodiments, the imaging assembly and the prediction controller may be communicatively coupled via a direct communication bus. Alternatively, in some embodiments, some components of the imaging assembly and the prediction controller may be the same (e.g., the one or more processors 112 of the imaging device 111 and the one or more processors 122 of the prediction controller 121) such that the combination of the imaging assembly and the prediction controller may be considered a singular device. Any of the aforementioned combinations of the imaging assembly and the prediction controller may be considered a data capture device (e.g., the data capture device 101 described herein). In some embodiments, the data capture device may be connected to the host device via a dedicated scanning terminal.
The method and/or operation 500 may proceed to block 504 by providing image data associated with the images to an object prediction application (e.g., the object prediction application 126 described herein) deployed on either (i) the prediction controller or (ii) the host device. In some embodiments, providing the image data may occur in response to the data capture device being unable to locate, identify, and/or otherwise decode a decodable indicia on the object. In embodiments where the data capture device identifies and/or decodes an indicia on the object, the data capture device does not require an output of the object prediction application to identify the object (e.g., a barcode scan from the imaging assembly can identify the object).
The method and/or operation 500 may proceed to block 506 by identifying, via the object prediction application, one or more aspects of the object. In some embodiments, the object prediction application may include one or more image processing algorithms, techniques, and/or models as described herein. Additionally or alternatively, the object prediction application may include one or more machine learning algorithms, techniques, and/or models as described herein.
The method and/or operation 500 may proceed to block 508 by generating, via the object prediction application, object candidate data corresponding to the object from the identification. In some embodiments, the object candidate data may be one or more product lookup (PLU) codes (e.g., corresponding to objects typically without an indicia placed on them, such as produce, meat, etc.).
The method and/or operation 500 may proceed to block 510 by receiving, via a selection application executing on the host device, the object candidate data. The selection application may be configured to: (i) receive the object candidate data, (ii) present the object candidate data, via a selection user interface displayed onto a foreground of an interactive display of the host device, (iii) receive one or more object selections of the object candidate data from a user interacting with the interactive display, (iv) generate object identifier data for each of the one or more object selections, (v) upon receiving the one or more object selections, send the selection user interface to a background of the interactive display, and/or (vi) transmit the object identifier data to a register log application executing on the host device. In some embodiments, the selection application may receive the object candidate data from the scanning terminal of the host device and/or an alternative wired or wireless connection to the data capture device (e.g., Ethernet, Wi-Fi, etc.).
The method and/or operation 500 may proceed to block 512 by presenting, via a selection user interface displayed on a foreground of an interactive display of the host device, the object candidate data. In some embodiments, the selection user interface may be associated with the selection application.
The method and/or operation 500 may proceed to block 514 by receiving, via the selection user interface, one or more object selections of the object candidate data from a user interacting with the interactive display. In some embodiments, the user may make the one or more object selections via interacting with the interactive display directly (e.g., via a touchscreen). Additionally or alternatively, in some embodiments, the user may make the one or more selections via one or more external input devices connected to the host device (e.g., via a mouse and a keyboard).
The method and/or operation 500 may proceed to block 516 by presenting, via a register log application, a register log user interface to the foreground of the interactive display of the host device. The register log application may be configured to: (i) initially display a register log user interface on an interactive display of the host device, (ii) receive object identifier data and process the object identifier data, and/or (iii) present the object identifier data via the register log user interface. In some embodiments, the register log user interface may be associated with the register log application. In some embodiments, the register log user interface may initially be displayed on the interactive display.
The method and/or operation 500 may proceed to block 518 by generating, via the selection application, object identifier data for each of the one or more object selections. In some embodiments, the object identifier data may be decoded indicia data and/or a product lookup (PLU) code associated with and/or readable by the register log application. In some embodiments, the PLU code of the object candidate data selected by the user is converted into a PLU code readable by the register log application to generate the object identifier data.
The method and/or operation 500 may proceed to block 520 by passing, via the selection application, the object identifier data to the register log application.
The method and/or operation 500 may proceed to block 522 by presenting, via the register log user interface, the object identifier data.
The method and/or operation 500 may have more, fewer, or different steps and/or may be performed in a different sequence.
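As a purely illustrative summary, the flow of blocks 502 through 522 may be sketched end-to-end as follows; every interface shown is a hypothetical stand-in for the corresponding component described herein:

```python
# Hedged end-to-end sketch of method 500 under assumed interfaces; each
# function name below is an invented stand-in, not the described API.

def method_500(imaging_assembly, prediction_app, selection_app, register_log):
    images = imaging_assembly.capture()                    # block 502
    if imaging_assembly.decode_indicia(images):
        return                                             # indicia decoded: prediction not needed
    image_data = imaging_assembly.to_image_data(images)
    candidates = prediction_app.generate_candidates(image_data)  # blocks 504-508
    selection_app.present_foreground(candidates)           # blocks 510-512
    selections = selection_app.await_user_selections()     # block 514
    register_log.present_foreground()                      # block 516
    identifiers = [selection_app.to_identifier(s)          # block 518
                   for s in selections]
    register_log.process(identifiers)                      # blocks 520-522
```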
In some examples, at least one of the components represented by the blocks is implemented by a logic circuit. As used herein, the term “logic circuit” is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines. Examples of a logic circuit may include one or more processors 102. Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions. The above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged, or omitted. In some examples, the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)). In some examples, the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)). In some examples, the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s).
As used herein, each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)). Further, as used herein, each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, none of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium,” and “machine-readable storage device” can be read to be implemented by a propagating signal.
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. Additionally, the described embodiments/examples/implementations should not be interpreted as mutually exclusive, and should instead be understood as potentially combinable if such combinations are permissible in any way. In other words, any feature disclosed in any of the aforementioned embodiments/examples/implementations may be included in any of the other aforementioned embodiments/examples/implementations.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The claimed invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has,” “having,” “includes,” “including,” “contains,” “containing,” or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a,” “has . . . a,” “includes . . . a,” or “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially,” “essentially,” “approximately,” “about,” or any other version thereof are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.