This disclosure relates to communication of information and in particular to communicating information over a distance using optical codes, without using radio waves.
It is often desired to communicate information from one device to another without physically coupling the devices. In some scenarios, use of a wireless signal comprising radio waves is undesirable. Optical codes are transmissible, over a distance, from one device to another where a first (sending) device displays an optical code and a second (receiving) device receives the code via an image sensor or camera and decodes the information encoded by the optical code. Optical codes are transmissible through light-transmissive mediums such as air and water, as well as through empty space (e.g. a vacuum).
Optical codes can take different forms. One such form is a two-dimensional (2D) barcode, such as a QR Code® (a trademark of Denso Wave Incorporated), which encodes information in horizontal and vertical directions. While a single 2D barcode can encode a significant amount of information, it may be desired to communicate more information than can be encoded in one code. The information capacity of a single instance of an optical code such as a 2D barcode may be constrained to meet requirements of scale and/or resolution, standards defined for the type of code, etc. Such standards can be defined so that the optical code can be read at a specified distance, and/or by devices having limited reading capabilities.
Displaying a sequence of optical codes such as in a series of frames of a video or other data structure has been proposed. Such a burst of codes can increase the amount of information sent. It also obviates the need for a user to send (or receive) multiple codes separately, one by one. However, there is a need to improve the encoding and/or communication of a sequence of optical codes.
In accordance with embodiments, systems, methods and techniques are described herein to communicate information via sequences of optical codes.
According to a broad aspect, the present disclosure describes a computer method comprising: encoding data into a plurality of optical codes in an optical code sequence; generating a plurality of frames for presenting in a frame sequence, wherein: the plurality of frames comprises at least two channels to present the optical code sequence; and each channel presents the same optical codes in the optical code sequence, but in a different order of the optical code sequence; and providing the frame sequence for display.
The different order of the codes may be selected to optimize a combination of: transmission duration and transmission predictability.
Generating the plurality of frames may include generating identification features to visually identify and distinguish the at least two channels. The frames may be generated as components of a video file or a GIF.
Providing the frame sequence for display may comprise communicating the frames to a sending device configured to display the frames for receiving by a receiving device a distance from the sending device, without using radio waves to communicate the frames between the sending device and the receiving device. Communicating the frames to a sending device may comprise communicating the frames via email to the sending device.
According to another broad aspect, the present disclosure describes a computer method comprising: (a) receiving, via an image sensor, a sequence of images from a sending device displaying frames in a frame sequence comprising at least two channels of optical codes; (b) identifying, from the images, optical codes from each of the at least two channels; (c) decoding and storing the optical codes, wherein duplicate optical codes are discarded; and (d) performing steps a, b and c until a completeness metric is achieved.
Identifying the optical codes from each channel may comprise localizing a respective optical code in a respective image using image processing. Localizing may comprise determining a location of identification features in the respective image. Localizing may be based on a location of identified optical codes in a previous image to the respective image.
The optical codes may be defined having a code sequence and, in the frames, each channel may comprise the same optical codes but in a different order of the code sequence. The different order of the code sequence is selected to optimize a combination of: transmission duration and transmission predictability.
Receiving via the image sensor may comprise receiving from a sending device the frames at a receiving device over a distance, without using radio waves to communicate the frames between the sending device and the receiving device.
According to another broad aspect, the present disclosure describes a computer method comprising: receiving, via an image sensor, a sequence of images from a sending device displaying frames comprising a sequence of optical codes, each optical code comprising an optical code index; decoding the optical codes and recording optical code indices of the codes that are decoded; and providing for display a progress indicator responsive to the optical code indices of the codes that are decoded.
The method may further comprise providing for display a second indicator indicating a location of a last decoded optical code within the sequence of optical codes. The second indicator may be one of: a numerical display of a last read optical code index; and a graphical indicator along a graphical element representing the sequence of optical codes.
The progress indicator may be one or both of: a numerical display of indices of unread optical codes; and a graphical element in which read and unread codes are visually distinguishable.
Each frame may comprise a frame index, each frame may comprise a plurality of optical codes of the optical code sequence, and providing for display a progress indicator may include providing for display a progress indicator which indicates at least one frame index of at least one frame containing an optical code that was not decoded.
According to another broad aspect, the present disclosure describes a computer method comprising: receiving, via an image sensor, a sequence of images from a sending device displaying frames comprising a sequence of optical codes, each optical code comprising an optical code index; decoding the optical codes and recording the indices of decoded optical codes; and providing for display a progress indicator of transmission time of the optical codes.
The progress indicator may be a countdown timer indicating an amount of time remaining to decode the optical codes. The countdown timer may be responsive to each optical code remaining to be received and decoded without error. The countdown timer may be responsive to a projected number of errors.
The method may further comprise calculating the frequency of image transitions, and the countdown timer may be responsive to the frequency as calculated.
According to another broad aspect, the present disclosure describes a computer method comprising: generating a sequence of frames, wherein: each frame comprises an optical code; and each frame has an index; and providing the sequence of frames for display both as a video or animation and as individual static frames.
The frames may comprise a GIF, and providing the sequence of frames may comprise sending a communication including: the GIF as the video; and static images of the frames with indices.
According to another broad aspect, the present disclosure describes a computer method comprising: receiving a sequence of frames for visual display; displaying the sequence of frames to communicate the frames to a receiving device over a distance, without using radio waves to communicate the sequence to the receiving device; and responsive to user input while communicating the frames, performing one of: selecting a particular frame to be communicated; pausing the video; and changing the speed of video playback.
According to another broad aspect, the present disclosure describes a computer method comprising: receiving data to be encoded; determining segmentation parameters to segment the data into a plurality of data segments, each data segment to be encoded in a single respective optical code of a plurality of optical codes, and wherein the segmentation parameters are determined to minimize a processing time by a receiving device to receive and/or decode the optical codes; generating a sequence of frames comprising the optical codes; and providing the sequence of frames for display by a sending device to communicate the sequence of frames to the receiving device.
The segmentation parameters may include: a data capacity of each optical code; and error correction/redundancy information.
The segmentation parameters may be further selected based on: display device performance specifications; and camera/receiver specifications.
Determining segmentation parameters may comprise performing a simulation to determine a minimum read time based on candidate segmentation parameters.
In any of the method aspects, data encoded in the optical codes may comprise a digital signature certifying the data.
It will be understood by a person of ordinary skill in the art that any of the method aspects may be performed by a computing device comprising a processor and a storage device storing instructions for execution by the processor to configure operations of the computing device to perform any of the methods. It will be further understood that non-transitory storage media (e.g. tapes, discs, integrated circuit devices such as memory devices, drives, etc.) may store instructions for execution by a processor to configure operations of a computing device to perform any of the methods.
The description herein details several exemplary embodiments. One skilled in the art will appreciate that it is within the scope of the present disclosure to combine individual embodiments with other embodiments as appropriate. The following terms are used herein:
Optical code: static instance of something that an image sensor or camera can read (e.g. a 2D barcode such as a QR Code). Information can be encoded in the optical code spatially, such that light from different regions of the code will have differing brightness and/or wavelength. In the example of a black and white barcode, light from black regions of the code will have significantly lower brightness than light from white regions of the code, such that the black and white regions can be used to encode binary data.
Frame: one image in a sequence of images (in some examples, one frame contains a representation of one optical code, and in other examples, one frame can contain representations of multiple optical codes).
Sequence: a set of multiple frames. A sequence can be displayed or received as a video, animation, or temporal arrangement of individual images.
Sending device 110 includes a display 112 which can output (display, present visually) optical codes or sequences of optical code frames. In
Receiving device 120 includes an image sensor 122 (shown in dashed lines because it is positioned on a rear of receiving device 120). Image sensor 122 could be a camera found in mobile devices like cellular telephones and smartphones, for example. Image sensor 122 can capture image data over a field of view 124. Optical code 114 can be positioned within field of view 124, such that image sensor 122 can capture image data including a representation of the optical code 114, and receiving device 120 can decode the optical code 114 in the image data.
In
In the example of
In some implementations, sending device 110 and receiving device 120 could be capable of both sending and receiving optical codes as described herein, such that the roles of the devices could be reversed, with device 110 acting as a receiving device, and device 120 acting as a sending device.
References throughout this disclosure to a “sending device” and a “receiving device” can refer to sending device 110 and receiving device 120, respectively, but can also refer to devices which serve similar roles and achieve similar functionality. Further, acts described herein as being performed by a sending device or a receiving device can be performed by components of these devices. For example, acts of determining, calculating, encoding, decoding, reading etc. can be performed by at least one processor of the sending device or the receiving device. Further, acts of communicating, presenting, displaying, sending, or outputting optical codes or frames can be performed by a display of the sending device. Further still, acts of capturing, receiving, etc. optical codes or frames can be performed by an image sensor or camera of the receiving device. In addition, acts performed by at least one processor of any of the devices herein can be caused by instructions stored in a non-transitory processor-readable storage medium of said device. For example, a non-transitory processor-readable storage medium of a sending device can have instructions stored thereon which, when executed by at least one processor of the sending device, cause the sending device to perform actions dictated by the instructions. Similarly, a non-transitory processor-readable storage medium of a receiving device can have instructions stored thereon which, when executed by at least one processor of the receiving device, cause the receiving device to perform actions dictated by the instructions.
Issues may arise when communicating successive frames of optical codes. For example, a receiving device may not properly receive and decode each of the optical codes. Communication can often be one way, such that the receiving device does not provide communication signals back to the sending device. One manner to address a missed optical code is to repeat a communication. As one example, frames can be automatically communicated in a loop, starting over at the beginning each time the end of the loop is reached. As another example, frames can be communicated using discrete attempts, where a user manually restarts communication each time. Repeated attempts in loops or otherwise may be undesirable. In particular, repeated attempts can be time consuming and frustrating for users, and can consume additional power.
The present disclosure provides means to communicate information encoded into a number (n) of optical codes (C), with code index (i) = 1, 2, . . . , n referencing one code (Ci). Note that the code index (i) can be different from a frame index of a sequence.
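As a non-limiting illustration of such indexed segmentation, the sketch below splits a payload into n indexed segments, one per optical code; the segment layout (a tuple of index, total, and chunk) and all names are illustrative assumptions, not part of this disclosure:

```python
def segment_payload(payload: bytes, capacity: int):
    """Split a payload into indexed segments, one per optical code (Ci).

    Each segment carries its code index i (1-based) and the total
    number of codes n, so a receiver can reassemble the data
    regardless of the order in which codes are decoded.
    """
    chunks = [payload[j:j + capacity] for j in range(0, len(payload), capacity)]
    n = len(chunks)
    return [(i + 1, n, chunk) for i, chunk in enumerate(chunks)]

# each entry is (code index i, total n, data chunk)
segments = segment_payload(b"hello optical codes", capacity=5)
```

In practice the index and total would be encoded into each optical code alongside its data chunk, distinct from any frame index of the sequence.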
Such an implementation advantageously can improve accuracy, by presenting optical codes in duplicate so that if one optical code is not readable, a duplicate of the code may be, and thus the code can still be read. This is particularly useful when one of the optical code copies is occluded, such as by an object coming between the sending device and the reading device.
However, there are cases where an entire frame from the sending device may not be properly read or decoded by the receiving device. For example, if the sending device or receiving device is moved or shaken during a frame, an entire image captured during the frame by the receiving device may be blurry. Consequently, the image may not provide sufficiently distinct representations of the optical codes for the receiving device to properly decode the optical codes, even if the optical codes are presented in duplicate. As another example, if an optical code in a given frame is particularly complex or difficult to decode, and the receiving device is decoding in real time (i.e. without a buffer), the receiving device may drop or not decode the optical code. Presenting multiple copies of the optical code simultaneously would not resolve this issue, but would rather compound it by adding additional complexity to the decoding process.
In act 302, data is encoded by a sequence generator into a plurality of optical codes in an optical code sequence.
In act 304, the sequence generator generates a plurality of frames for presenting in a frame sequence, wherein the plurality of frames comprises at least two channels to present the optical code sequence, and each channel presents the same optical codes in the optical code sequence but in a different order of the optical code sequence. Several exemplary techniques for generating the plurality of frames, and for arranging different optical code orders in each channel, are discussed below with reference to
In act 306, the sequence generator provides the sequence of frames for display. This could include for example a processor of the sending device providing the frame sequence to a display of the sending device for output. As another example, this could include a sequence generator separate from the sending device providing the sequence of frames to the sending device, for display by the sending device (e.g., the sequence of frames could be emailed to the sending device, for display by the sending device to communicate the code to a receiving device optically without using radio waves).
In act 352, a sequence of images is received by a receiving device from a sending device displaying frames in a frame sequence comprising at least two channels of optical codes. The receiving device and the sending device can be similar to receiving device 120 and sending device 110, respectively, discussed with reference to
In act 354, optical codes from each of the at least two channels are identified. For example, a processor of the receiving device could perform image processing on images received by the image sensor, to identify and/or localize optical codes in each image (for example delineating multiple optical codes from each other, or determining a region of interest for detailed image processing). Exemplary techniques for identifying, detecting, and localizing optical codes and regions of interest in an image are discussed later with reference to
In act 356, optical codes identified in act 354 are decoded and stored. For example, a processor of the receiving device can perform image processing on at least a region of images from the image sensor, to read/decode the data encoded in each optical code, and store data from the decoded optical codes, for example on a non-transitory processor-readable storage medium on the receiving device. When an optical code is at least partially decoded, the processor can determine whether that optical code has already been previously decoded and stored, for example by checking an index of the optical code against indices of already decoded and stored optical codes. Duplicate codes which have already been decoded can be discarded (keeping the originally decoded code), or can be used to overwrite the previously decoded code (for example if the subsequently decoded code is decoded with greater accuracy).
In act 358, a determination is made as to whether a completeness metric is achieved. For example, a processor of the receiving device could determine whether every optical code in the sequence was properly received. This could entail, for example, comparing the number of correctly received optical codes against an expected number of optical codes. As another example, data may be compressed and encoded in the sequence of optical codes, along with redundancy data. In such an example, the completeness metric could be the receipt of all unique data, which may occur even if not all optical codes are received and decoded (for example, if certain optical codes contain only redundancy data).
If it is determined in act 358 that the completeness metric is not achieved, the method 350 can return to act 352, to receive the frame sequence again (i.e., try another playback loop). If it is determined in act 358 that the completeness metric is achieved, then the optical code sequence is properly received and transfer of the optical code sequence is complete, as in act 360.
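The loop of acts 352 through 358 can be sketched as follows; `frame_stream` is a hypothetical stand-in for image capture and decoding, and the completeness metric shown is the simplest case in which all n codes must be received:

```python
def receive_sequence(frame_stream, n_codes):
    """Collect decoded optical codes until the completeness metric is met.

    frame_stream yields, per captured image, a list of (index, data)
    pairs for whatever codes were successfully decoded in that image.
    Duplicates of already-stored codes are discarded (acts 352-358).
    """
    received = {}  # code index -> decoded data
    for decoded in frame_stream:
        for index, data in decoded:
            if index not in received:      # discard duplicate codes
                received[index] = data
        if len(received) == n_codes:       # completeness metric achieved
            break
    return received
```

A more elaborate completeness metric (e.g., receipt of all unique data where some codes carry only redundancy) would replace the simple count comparison shown here.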
By presenting multiple channels with the same sequence of optical codes, but in a different order, copies of each optical code will be separated temporally, such that even if one frame is dropped, missed, not read, not decoded, not received, or otherwise erroneous, another copy of each optical code in the frame should be included in another frame somewhere in the frame sequence. This improves the chances that each optical code will be received and decoded correctly. Example changes of order could include for example reversing an order of communication of optical codes between channels as discussed with reference to
A sequence of optical codes communicated in channel 220 can be reversed relative to the order optical codes are communicated in channel 210. An exemplary sequence of 5 optical codes C1, C2, C3, C4, and C5 communicated in 5 frames is illustrated in
This concept can be extended to an indefinite number of frames, as illustrated in
As can be seen in the example of
Firstly, the scheme of
Further, each frame does not need to be repeated an equal number of times. Rather, frames which are more likely to be communicated erroneously can be repeated more times. For example, the middle frame in an odd numbered sequence (e.g. Frame 3 in
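A minimal sketch of the reversed-order arrangement for the two-channel, equal-length case follows; the function and variable names are illustrative only:

```python
def frames_two_channel_reversed(codes):
    """Pair each frame's channel-1 code with the reversed-order
    channel-2 code, so the two copies of a given code are
    separated in time within the frame sequence."""
    reversed_codes = list(reversed(codes))
    return list(zip(codes, reversed_codes))

# frame 1 shows (C1, C5), frame 2 shows (C2, C4), frame 3 shows (C3, C3), ...
frames = frames_two_channel_reversed(["C1", "C2", "C3", "C4", "C5"])
```

Note that in an odd-numbered sequence the middle frame shows the same code in both channels, consistent with the repetition considerations discussed above.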
A sequence of optical codes communicated in channel 220 can be communicated with an offset or delay relative to the sequence of optical codes communicated in channel 210. An exemplary sequence of 5 optical codes C1, C2, C3, C4, and C5 communicated in 5 frames is illustrated in
This concept can be extended to an indefinite number of frames, as illustrated in
Similar to as discussed with reference to
Reversing the order of communication of the optical codes in one channel can be combined with applying an offset or delay in the index of the presented optical codes, as shown in
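The offset scheme, optionally combined with reversal, can be sketched similarly; the cyclic wrap-around at the start of the sequence, the offset value k, and all names are illustrative assumptions:

```python
def frames_two_channel_offset(codes, k, reverse_second=False):
    """Channel 1 shows codes in order; channel 2 shows the same codes
    delayed (cyclically offset) by k frames, optionally after
    reversing the channel-2 order first."""
    n = len(codes)
    second = list(reversed(codes)) if reverse_second else list(codes)
    return [(codes[i], second[(i - k) % n]) for i in range(n)]

# offset-only: channel 2 lags channel 1 by one frame, wrapping cyclically
offset_frames = frames_two_channel_offset(["C1", "C2", "C3"], k=1)
```

For a given sequence length, the offset k (and whether reversal is also applied) can be chosen so that the two copies of each code are maximally separated in time.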
The scheme of
In the examples of
The number of simultaneously communicated codes (i.e., the number of channels) in a given application can be determined based on constraints such as display size, aspect ratio, resolution, and refresh rate, in order to meet targets such as the size of the optical codes, the size of a data unit in each code (e.g., the size of each black or white “module” or “pixel” in a QR Code), the space between optical codes, etc. In some embodiments, two simultaneously presented optical codes are appropriate to meet the constraints of typical smartphones, a type of mobile device.
Additionally, when discussing communication of optical codes in a sequence of frames, the frame rate of the sequence of the optical codes does not necessarily have to match the frame rate of a display of the sending device communicating the codes, or a frame rate of the image sensor which receives the optical codes (although they can match). Rather, the frame rate of the optical code sequence can be defined such that each frame is communicated for a specified length of time, regardless of the frame rate of said display or said image sensor. Because said display and said image sensor will commonly operate at different frame rates, it can be advantageous to define the frame rate of the optical code sequence to be lower than both the frame rate of the display and the frame rate of the image sensor (i.e., each optical code frame is longer than each display frame and each captured frame). This can prevent optical code frames from being improperly displayed or captured.
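One heuristic for selecting such a code frame rate is sketched below; the safety factor of 2 is an illustrative assumption, not a prescribed value:

```python
def choose_code_fps(display_fps: float, camera_fps: float,
                    safety: float = 2.0) -> float:
    """Pick an optical-code frame rate lower than both the display's
    and the image sensor's frame rates, so each code frame spans
    multiple display refreshes and multiple captured images."""
    return min(display_fps, camera_fps) / safety

# e.g., a 60 Hz display paired with a 30 FPS camera
fps = choose_code_fps(display_fps=60.0, camera_fps=30.0)
```

Choosing the code frame rate this way helps ensure that no optical code frame is improperly displayed or captured due to frame-rate mismatch.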
In some embodiments, the optical codes of the respective channels are visually distinguished from one another, for example, to disambiguate the codes and facilitate decoding. In some embodiments, the optical code generator generates additional features to aid with distinguishing and localizing optical codes in a frame. In some embodiments, the additional features can be static with respect to the optical code sequence, and act to demarcate or delineate locations of separate optical codes. In addition to disambiguating codes to speed up decoding and make decoding more accurate, providing optical cues to distinguish optical codes can advantageously allow the optical codes to be positioned closer together. This can reduce empty display space, which in turn enables the communication of more optical codes simultaneously, and/or the communication of larger optical codes which occupy more display space and thus are less prone to communication errors or contain more data.
Markers 830 are shown at the corners of each of the optical codes, but such markers could be positioned anywhere appropriate, such as centered along an edge of an optical code. Further, markers 830 are shown as being positioned at each corner of the optical codes, but it is within the scope of the present disclosure to include fewer or more markers. For example
Any appropriate markers can be used, such as the Secchi disk patterns of markers 830 in
When communicating a sequence of codes, a receiving device receives the codes using an image sensor that captures images or frames of a video, such as image sensor 122 discussed with reference to
In an embodiment, in a first image captured at a receiving device, one or more optical codes may be detected and localized (e.g., using attributes such as the shape or size of the optical code, using localization features in the optical code, using unique/proprietary localization features, etc.). If optical code detection is successful, a region of interest is stored (e.g., x and y coordinates and the height and width of a bounding box) defined by the size and location of the first optical code in the image data. Optionally, the size can be an expandable region responsive to the size of the detected optical code. This region of interest can be used to locate subsequent codes in subsequent frames. The technique may be useful in sequences of single or multiple codes per frame.
The region of interest approach may improve image processing time (less time to detect and decode optical codes) and make detection more robust. However, in some cases, the optical codes communicated in a subsequent frame may not be completely within the identified region of interest. For example, the sending device or the receiving device (or the display and image sensor respectively thereof) could move. In such a situation, the receiving device may fail to identify and decode an optical code. This could be handled in a number of ways. As one example, after failing to identify an optical code in a frame, the receiving device could attempt to identify the optical code again, analyzing a larger area of the image (e.g., the region of interest can be expanded for an additional attempt). As another example, the receiving device could attempt to identify the optical code again, analyzing the entire captured image. These exemplary additional attempts could be performed immediately after the failed attempt, or could be performed during a subsequent loop of the sequence being communicated.
In some implementations, the region of interest could be regularly updated. For example, at regular intervals (or even every frame), the receiving device could detect and localize the optical codes in the region of interest. Further, based on the position of the optical codes within the region of interest, the region of interest could be updated for the next frame. An example of this is illustrated in
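A sketch of this region-of-interest handling, with expansion on a failed detection, is shown below; bounding boxes are (x, y, width, height) tuples and the margin value is an illustrative assumption:

```python
def update_roi(detected_box, margin=20):
    """Given the bounding box of a code detected in the current image,
    return an expanded region of interest to search in the next image,
    allowing for small movement of either device between frames."""
    x, y, w, h = detected_box
    return (max(0, x - margin), max(0, y - margin),
            w + 2 * margin, h + 2 * margin)

def next_search_region(detected_box, full_image_box, margin=20):
    """Fall back to scanning the full image when detection failed."""
    if detected_box is None:
        return full_image_box
    return update_roi(detected_box, margin)
```

Updating the region of interest every frame (or at regular intervals) in this way tracks relative movement between the devices while keeping the searched area small.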
In some embodiments, user interfaces (UIs) such as graphical user interfaces (GUIs) for sending and receiving devices are configured to provide feedback on frames (or optical codes) that were received (decoded) or not received (not decoded) by the receiving device and to facilitate communication of the frames by the sending device.
In act 1002, a receiving device (such as receiving device 120 of
In act 1004, the receiving device decodes the optical codes, and records indices of frames for which the optical codes therein were successfully decoded. For example, a processor of the receiving device attempts to decode optical codes in each frame received, and stores a list of indices of properly decoded frames in a non-transitory processor-readable medium of the receiving device. Alternatively, the processor could store a list of frame indices for frames including optical codes which were not properly decoded (which would implicitly indicate frames for which optical codes were properly decoded).
In act 1006, the receiving device provides for display a progress indicator responsive to indices of frames for which the optical codes therein were successfully decoded. Examples of this are discussed below with reference to
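Such a progress indicator can be driven by a simple record of decoded code indices, as in the sketch below (the rendering itself is device-specific; names are illustrative):

```python
def progress_report(decoded_indices, total_codes):
    """Summarize transfer progress from the set of decoded code indices."""
    decoded = set(decoded_indices)
    missing = [i for i in range(1, total_codes + 1) if i not in decoded]
    fraction = len(decoded) / total_codes
    last_read = max(decoded) if decoded else None
    return {"fraction": fraction, "missing": missing, "last_read": last_read}

# e.g., codes 1, 2, and 4 decoded out of 5
report = progress_report({1, 2, 4}, total_codes=5)
```

The `fraction` value can drive a progress bar, `missing` a list of unread indices, and `last_read` an indicator of the most recently decoded code.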
In the example of
The UI of the receiving device 1120 can also optionally display other indicators 1125 and/or 1126, which indicate which code was most recently decoded. Indicator 1125 is a visual marker on the progress bar 1122, whereas indicator 1126 is a text-based indicator stating the last read code. In an embodiment, receiving device 1120 presents neither of, one of, or both of indicator 1125 and indicator 1126. In an embodiment, a different indicator is presented which has similar functionality to indicators 1125 and 1126. Advantageously, by indicating the last read/decoded code, a status of the transfer operation as a whole can be determined. In this example of
A difference between
In the example of
Progress bar 1122, indicator 1125, and indicator 1126 can be adapted to display either optical code indices or frame indices, or multiple indicators could be displayed to indicate both frame indices and optical code indices, if desired. For example,
In certain scenarios, it is desirable for a user to be able to manually control communication of optical codes by a sending device. For example, when certain codes are not properly received or decoded by a receiving device, it may be desirable for a user to manually cause the sending device to communicate only the missing codes. Communication status information provided by a receiving device, such as described above with reference to
In act 1202, a sending device receives a sequence of frames for visual display, the sequence of frames comprising a sequence of optical codes. In some implementations, a sequence generator on the sending device could generate the sequence of optical codes and the sequence of frames. In other implementations, the sending device could receive the sequence of frames or codes from another device.
In act 1204, the sending device displays the sequence of frames to communicate the sequence of optical codes to a receiving device. Act 1204 can be performed by a display element of the sending device.
In act 1206, the sending device receives a user input while communicating the frames. In this context, “while communicating the frames” refers to a time before the sequence of optical codes is fully communicated. If display of the frames is paused, but the sequence of optical codes is not yet fully communicated, this can still be considered as “while communicating the frames”. User input can be received by any appropriate interface of the sending device, such as a touchscreen or microphone. Further, user input can also be received by the sending device via peripheral interfaces which are not physically part of the sending device, such as a mouse, keyboard, remote control, or other appropriate interfaces. Such peripheral devices can communicate with the sending device via physical coupling (e.g. a wire) or wireless coupling. User input can indicate any of a plurality of commands from the user, and can be processed by the sending device, or can be processed externally to the sending device, with the resulting commands being provided to the sending device.
In act 1208, responsive to the user input, the sending device alters the display of the sequence of frames. In one example, the user can provide a pause command to the sending device, and in response the sending device will stop progressing through the sequence of frames (e.g. a video or animation file presenting the sequence can be paused). In another example, the user can provide a speed-up or slow-down command to the sending device, and in response the sending device will progress through the sequence of frames faster or slower (e.g. a video or animation file presenting the sequence can be sped-up or slowed-down). In yet another example, a user can provide a command to manually select a frame to display, as discussed in detail with reference to
In the UI of
Once the desired optical code frame is selected and displayed, the user presents the display of sending device 1310 to the image sensor of the receiving device. Scrolling controls and frame number etc. are presented so as not to interfere with the optical code(s) 1311. The receiving device can then receive and decode the missing frame. These operations can be performed for each missing frame.
In some implementations, display of select optical codes or frames could be performed autonomously. For example, the sending device could include an accelerometer; when movement exceeding a threshold is detected by the accelerometer, a processor of the sending device could identify the codes/frames that were being communicated during the movement, and which were therefore likely not received correctly. The sending device could then display these select frames for reception by the receiving device.
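As an illustrative sketch of this determination (the function name and the back-to-back timing model are assumptions, not part of any standard):

```python
def frames_during_movement(movement_start, movement_end, frame_duration, num_frames):
    """Return indices of the frames that were on screen while accelerometer
    movement exceeded the threshold, assuming frames are displayed
    back-to-back for `frame_duration` seconds each, starting at time 0."""
    first = max(int(movement_start // frame_duration), 0)
    last = min(int(movement_end // frame_duration), num_frames - 1)
    return list(range(first, last + 1))

# Movement from 1.2 s to 2.3 s with 0.5 s frames: frames 2 through 4 affected.
frames_during_movement(1.2, 2.3, 0.5, 30)  # -> [2, 3, 4]
```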
One reason that an optical code frame or frames can be missed (that is, that less than all of the optical codes are properly received and decoded) is that the processing resources of the receiving device are insufficient to complete receiving and decoding operations in the time of a single presentation of the optical code sequence. Another reason could be limitations in physical detection capabilities of a receiving device, such as resolution, frame rate, and exposure time of an image sensor.
In some embodiments, the sending device can be configured to present the sequence of optical codes at different speeds. For example, in a first loop, the sending device can communicate the sequence of optical frames at a first speed (relatively high frames per second, or “FPS”), and for each subsequent loop, the sequence of optical frames can be presented at a successively lower speed (that is, the number of optical code frames presented per second is reduced). In an exemplary implementation, the sequence of optical frames can be communicated in a first loop at 30 FPS, in a second loop at 15 FPS, in a third loop at 7.5 FPS, in a fourth loop at 3.75 FPS, and so forth. In this example, the playback speed or FPS is reduced by half in each subsequent loop. However, this set of playback speeds and progression is exemplary, and any appropriate starting speeds and progressions can be chosen as appropriate for a given application. An optimal progression may be determined to minimize an expected or average read time for optical code sequences in general. For example, a progression could be determined in which most optical code sequences can be properly read within the first two or three loops, but in subsequent loops the playback speed may be reduced drastically in order to provide flexibility to read particularly difficult-to-decode sequences. The determined progression can be based on or account for hardware components of the system, including sending device display capabilities and receiving device processing capabilities, as non-limiting examples.
Further, playback speed can be reduced in each subsequent loop indefinitely, or a threshold minimum can be set, where the playback speed will not decrease beyond the minimum, to prevent the optical code communication from taking excessively long. Additionally or alternatively, a maximum number of loops can be set, such that the system does not continue looping indefinitely. Systems with such speed or loop number thresholds can be implemented together with manual intervention mechanisms, such as those discussed with reference to
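The halving progression with a minimum-speed floor and a maximum loop count can be sketched as follows (a minimal illustration; the function name and default values are hypothetical):

```python
def playback_speeds(start_fps=30.0, min_fps=1.0, max_loops=8):
    """Yield the playback speed (in FPS) for each loop: the speed is
    halved after every loop but never drops below min_fps, and the
    sequence stops after max_loops so playback does not repeat forever."""
    fps = start_fps
    for _ in range(max_loops):
        yield fps
        fps = max(fps / 2.0, min_fps)
```

For the defaults above, the resulting speeds are 30, 15, 7.5, 3.75, and 1.875 FPS, and then 1.0 FPS for the remaining loops.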
In some implementations, optical codes may contain information regarding their usage. In this regard,
In act 1402, at least one optical code (which could include a sequence of optical codes) is generated for display by a sequence generator. The sequence generator could be implemented on a processor of a sending device, or could be implemented on a processor in communication with, but not necessarily physically coupled to, the sending device. The at least one optical code can be encoded with primary data and secondary data, the primary data for use during a procedure, and the secondary data to verify correct usage of the primary data. The secondary data may be context dependent, defined in relation to how primary data in the at least one optical code is to be used. In the example of information for surgical procedures, primary data refers to data which directly affects how the surgical procedure is carried out (for example, pre-operative alignment goals for the surgery, attributes or images of an anatomy to be handled during the surgery, or other such appropriate data). Secondary data, on the other hand, while related to the surgery, is not actually used during the surgery (for example, a hospital identification, a surgeon name, a patient name, a valid date or date range of the surgery, or other such information).
In act 1404, the at least one optical code can be used to transmit surgical procedure information from the sending device (e.g. a smartphone) to a receiving device (e.g. an intraoperative computing device configured to assist with the surgical procedure), by the sending device displaying the at least one optical code to the receiving device. The sending device and receiving device can be similar to sending device 110 and receiving device 120 discussed above with reference to
In act 1406, the receiving device can receive and decode the primary data and the secondary data from the at least one optical code.
In act 1408, the receiving device determines whether the primary data is appropriate for a procedure to be performed, by comparing the secondary data to contextual data on the receiving device. This can determine whether the secondary data is valid, and thus is indicative of whether the primary data should be used. That is, the receiving device can use the secondary data to validate use of the primary data, to prevent incorrect usage.
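A minimal sketch of the comparison in acts 1406-1408, assuming the decoded secondary data and the stored contextual data are both available as field-to-value mappings (the function and field names are hypothetical):

```python
def find_mismatches(secondary_data, contextual_data):
    """Compare each decoded secondary-data field against the contextual
    data stored on the receiving device, returning the mismatched field
    names so the device can alert the user or refuse to proceed."""
    return [field for field, expected in contextual_data.items()
            if field in secondary_data and secondary_data[field] != expected]

# Hypothetical example: the surgeon and date do not match the loaded case.
contextual = {"hospital": "General Hospital", "surgeon": "Dr. Lee",
              "surgery_date": "2024-06-01"}
decoded = {"hospital": "General Hospital", "surgeon": "Dr. Kim",
           "surgery_date": "2024-07-15"}
find_mismatches(decoded, contextual)  # -> ["surgeon", "surgery_date"]
```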
An exemplary scenario is discussed with reference to
In the exemplary scenario, to configure the intraoperative computing device for a surgical procedure, the representative searches for an email containing the optical code planning data by typing in a case number that they have written down for the case (e.g., a 10 digit code). The representative makes an error, for example by mistyping or miswriting the case number, and accesses another optical code sequence which pertains to another surgery (e.g. a different surgical procedure, a different surgeon, a different hospital, a different patient, a different or expired date, or any other different context information), and does not immediately identify the error. The representative proceeds to cause the optical code sequence to be communicated from the sending device to the intraoperative computing device as the receiving device. The receiving device, via a verification component, decodes the secondary data, for example containing expected dates of surgery, hospital, etc., and compares this decoded secondary data (secondary information) to contextual data (context information) on the receiving device. The receiving device identifies a mismatch between the secondary data and the contextual data and alerts the representative that an incorrect case has been loaded.
In the example of
Further, although a single piece of secondary data can be compared to contextual data of the receiving device to verify that a correct sequence of optical codes was received, greater accuracy can be achieved by comparing several pieces of secondary data to contextual data of the receiving device. In the example of
Based on the alerts presented, the representative can identify that the incorrect optical code sequence was communicated, and may proceed to redo the communication steps from the sending device to the receiving device, to communicate the correct sequence. In some cases, however, the representative may apply his or her own judgement and choose to continue with the surgical procedure based on the already communicated optical code sequence. In particular, the representative may be able to determine that any mismatches are not material concerns, and that the communicated optical code sequence is in fact correct. For example, the receiving device may be on loan from a different hospital, such that hospital information stored in the receiving device is not accurate. As another example, the surgery may have been rescheduled, such that the surgery date in the secondary data is not accurate.
In other embodiments, the receiving device (intraoperative computing device) can be configured to require a match or verification of the secondary data with the contextual data to enable display or other presentation of the decoded primary data. That is, instead of just alerting the representative when the received secondary data does not match contextual data on the receiving device, the receiving device may refuse to proceed with the procedure until an optical code sequence is communicated in which the secondary data in the optical code sequence matches contextual data stored on the receiving device.
Advantageously, the concepts discussed with reference to
Further, when generating a reference code for the sequence of optical codes, the sequence generator can structure the reference code so that cases with similar reference codes are particularly distant in time, to increase the probability that the verification component will detect a mismatch between the surgery date or date range in the secondary data and the date in the contextual data at the receiving device. For example, the sequence generator could generate a reference code of 1AB6, and verify that all codes of the format xAB6, 1xB6, 1Ax6, and 1ABx (where “x” can be any valid character in the code) occur more than 2 months from the expected date of surgery. That way, if one character is mistyped, the verification component is likely to identify the error.
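This check can be sketched as follows (a minimal illustration; the character set, the two-month window of 60 days, and the mapping of existing reference codes to scheduled surgery dates are all assumptions):

```python
from datetime import date

CODE_CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # assumed valid code characters

def single_char_variants(code):
    """Yield every code differing from `code` in exactly one position."""
    for i in range(len(code)):
        for c in CODE_CHARS:
            if c != code[i]:
                yield code[:i] + c + code[i + 1:]

def is_safe_reference_code(code, surgery_date, scheduled, window_days=60):
    """Accept `code` only if every one-character mistyping of it maps to a
    case whose surgery date is more than `window_days` away (`scheduled`
    maps existing reference codes to their surgery dates)."""
    for variant in single_char_variants(code):
        other_date = scheduled.get(variant)
        if other_date is not None and abs((other_date - surgery_date).days) <= window_days:
            return False
    return True

# "2AB6" is one mistyped character away from "1AB6" and only 19 days apart,
# so "1AB6" would be rejected and the generator would try another code.
is_safe_reference_code("1AB6", date(2024, 6, 1), {"2AB6": date(2024, 6, 20)})  # -> False
```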
As discussed above,
remaining time = (time left in current loop) + n * (time for full loop)
where n represents the expected number of full loops remaining before completion. In some implementations, the length of a single loop can be known to the receiving device, such as by the sending device and the receiving device communicating in a standardized format where sequences of optical codes have a standard length. In cases where playback speed changes for successive loops, as discussed earlier, the progression of playback speeds and/or loop lengths can be known to the receiving device. In other implementations, the receiving device can determine the length of a single loop by timing the first loop, and determining the remaining time after the first loop has played. In cases where playback speed changes for successive loops, as discussed earlier, the progression of playback speeds and/or loop lengths can be estimated by the receiving device, and the remaining time updated as more loops are played and the estimate becomes more accurate. In other implementations, the receiving device can determine the length of a single loop by determining or measuring the frequency of image transitions (i.e., how often the sending device changes frames in a frame sequence), and calculating the time of one loop (and the remaining transition time) from the frequency of image transitions. In cases where playback speed changes for successive loops, as discussed earlier, the progression of playback speeds and/or loop lengths can be estimated by the receiving device determining a new frequency of image transitions for each loop.
The receiving device can derive the time left in the current loop by subtracting a time since the current loop began from the length of a full loop.
In some implementations, the receiving device can determine n based on historical data, for example by cataloging previous communications and determining an average number of loops required for completion of these communications. In other implementations, n can be provided to the receiving device during configuration.
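The estimate above can be sketched directly from the formula (the function name and units are assumptions; all times are in seconds):

```python
def estimate_remaining_time(loop_length, elapsed_in_loop, expected_loops_remaining):
    """remaining time = (time left in current loop) + n * (time for full loop),
    where n is the expected number of full loops remaining (from historical
    data or provided during configuration)."""
    return (loop_length - elapsed_in_loop) + expected_loops_remaining * loop_length

# A 3-second loop, 1 second into the current loop, with n = 2 full loops expected:
estimate_remaining_time(3.0, 1.0, 2)  # -> 8.0 seconds
```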
Exemplary remaining time estimates are illustrated in
In addition or alternative to an estimate of remaining time, the receiving device can visually indicate a status of communication, including which frames are properly decoded and which frames are not properly decoded, so a user can know exactly what the receiving device is expecting.
In
In
In
In
In
In
In each of the successive
As mentioned above, the techniques of
In act 1702, an encoding device receives data to be encoded. Such an encoding device could be similar to sending device 110 in
In act 1704, segmentation parameters are determined to segment the data into a plurality of data segments. Each data segment is to be encoded in a single respective optical code of a plurality of optical codes, and the segmentation parameters are determined to minimize a processing time by a receiving device to receive and/or decode the optical codes. Exemplary techniques for determining segmentation parameters are discussed below with reference to
In act 1706, a sequence of frames comprising the optical codes is generated. This can be performed by a sequence generator, which can be included in the sending device, or can be communicatively coupled to but not physically included in the sending device.
In act 1708, the sequence of frames is provided for display by a sending device to communicate the sequence of frames to the receiving device. This can include the sequence generator physically separate from the sending device transmitting the sequence of frames to the sending device, or the sequence generator included in the sending device providing the sequence of frames to a display component of the sending device.
Act 1704 focuses on determining segmentation parameters to segment the data. In particular, when transmitting data over a sequence of optical codes, the data is split up into sections which fit into individual codes. A single code can hold only a limited amount of information. For example, discussion of data capacity and error correction codes for QR Codes is available at URL: www.qrcode.com/en/about/version.html incorporated herein by reference. QR codes are available in a number of “symbol versions” (also called “symbols” or “versions”), where each QR Code symbol version has a maximum data capacity that varies with the amount of data, character type and error correction level. A QR Code comprises “modules” to encode the data, a “module” being each individual unit in the QR Code that can be black or white to encode a value. As the amount of data encoded in a single QR code increases, more modules are required in the QR Code, resulting in larger codes (symbols) in height and width. Such a “larger” QR code can require a greater physical area, and/or be denser (higher resolution) than a “smaller” QR code.
For example, a Version 1 QR Code is a grid of 21 by 21 modules, while a Version 40 QR Code is a grid of 177 by 177 modules. A Version 1 QR Code with low error correction code (ECC) stores 152 mixed data bits. A Version 5 QR code with medium ECC stores 688 mixed data bits and a Version 12 QR code with high ECC stores 1264 bits.
Table 1 below shows ECC (error correction code) levels for QR Codes:

TABLE 1
ECC Level L: approximately 7% of codewords can be restored
ECC Level M: approximately 15% of codewords can be restored
ECC Level Q: approximately 25% of codewords can be restored
ECC Level H: approximately 30% of codewords can be restored
ECC features of QR codes allow for restoration of data which is communicated erroneously, for example due to dirt, damage, or other problems in presentation of the code. Successive levels of ECC L, M, Q, and H can be chosen for a given QR code, with higher levels enabling greater data recovery, at the expense of being able to fit less data within the QR code. ECC features can include encoding of redundancy data. See the QR Code standard for more information.
The larger the QR Code, the more difficult it may be to read and decode by the receiving device. In some scenarios, a long sequence of small, low version QR codes may be faster to read and decode than a shorter sequence of fewer, higher version, QR Codes. This can be because the higher version QR codes may be more prone to reading and decoding errors, such that multiple loops may be needed to read every QR Code. Alternatively, each frame of the sequence may need to be displayed for a longer time to prevent reading and decoding errors.
That being said, in some scenarios a long sequence of small, low version QR codes may be slower to read and decode than a shorter sequence of fewer, higher version, QR Codes. This can be because, if a sequence of fewer, higher version, QR Codes can be read without many errors, a long sequence of QR codes just extends the amount of reading time unnecessarily. Further, when encoding data, the data may be split into segments responsive to the data capacity of the QR Code. Padding can be used to fill out a segment as necessary so that each QR Code is of a same size. Further still, in addition to any format prescribed by the QR Code standard, a segment of data to be encoded may have a format comprising a sequence (index) number for the frame (or code), for example in a header or similar, and a body or payload portion encoding a portion of the whole data to be transmitted. As a result, a long sequence of small, low version QR codes may contain more overall padding data and more formatting/header data than a shorter sequence of fewer, higher version, QR Codes.
In view of the above, it is desirable to select an appropriate optical code version for a sequence of optical codes, such that overall time for communication is reduced, balancing speed of reading and decoding with probability for error. Although the above examples are discussed in the context of QR codes, similar concepts can apply to other forms of optical codes as well, such as linear barcodes.
In accordance with some embodiments, the sequence generator may provide optical code version selection options—e.g. a minimum and a maximum version that is useful to encode all of the data, and providing a choice (or making a determination) of the version(s) which best spread all the data out. By way of example, if there were 81,285 bits of data to encode, such data fits on 11 version 22 QR Codes (ECC level L), but also on 11 version 21 QR Codes (ECC level L). Version 21 would have a better chance of being read by the receiving device by virtue of being a smaller, less dense code. In this case, Version 21 would be preferable, because the data, once encoded in a sequence, would include the same number of QR Codes as if version 22 were used, but with a better per code chance of being successfully read and decoded.
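This selection can be sketched as follows. The capacity figures are taken from published QR Code capacity tables for ECC level L (treat them as assumptions to be checked against the standard); with these figures, 81,285 bits need 11 codes at either version 21 or version 22, so version 21 is preferred:

```python
import math

# Data capacity in bits at ECC level L for a few candidate versions
# (data codewords x 8 bits, per the published QR Code capacity tables).
CAPACITY_BITS_L = {20: 861 * 8, 21: 932 * 8, 22: 1006 * 8, 23: 1094 * 8}

def smallest_equivalent_version(data_bits, max_version):
    """Return the smallest listed version that needs no more codes than
    `max_version` would, preferring smaller (less dense) codes since they
    have a better per-code chance of being read, plus the code count."""
    needed = math.ceil(data_bits / CAPACITY_BITS_L[max_version])
    for version in sorted(CAPACITY_BITS_L):
        if version <= max_version and math.ceil(data_bits / CAPACITY_BITS_L[version]) == needed:
            return version, needed
    return max_version, needed

smallest_equivalent_version(81285, 22)  # -> (21, 11)
```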
In some embodiments, the sequence generator component can use predictive calculations or simulations, such as Monte Carlo simulations, to determine an optimal code version to minimize read time. In other embodiments, an optimal code version to minimize read time can be similarly determined by another computing device, and information indicating the optimal code version can be provided to the sequence generator component.
In act 1802, for a plurality of optical code versions, a probability of an optical code of each version being properly detected and decoded by a receiving device can be determined. This could include, for example, using empirical testing to determine the probability of an optical code being properly detected/decoded for different versions and ECC levels. Such a determination can be specific to different combinations of sending devices and receiving devices. For example, a hardware setup could be assembled, with a sending device and receiving device prepared as they would be intended to be used for communication of a sequence of optical codes (for example, the sending device 110 and the receiving device 120 discussed with reference to
In such a setup, the sending device could communicate an optical code of a first version to the receiving device for a specified duration. This could be repeated a number of times (preferably a large number of times), and results could be tabulated which indicate how many communications failed and how many succeeded, which indicates the probability that a given frame of the first optical code version will be properly detected and decoded. This could be repeated for a plurality of different optical code versions and ECC levels, to determine a probability that a given frame will be properly detected and decoded for a plurality of different optical code versions. It is possible to run such testing for all available combinations of optical code version and ECC level, but this is not necessarily required, and testing can be limited to only candidate combinations of optical code version and ECC level which are of particular interest. The results of this evaluation can be provided to the sequence generator component or another component which determines the optimal optical code version to use for a communication.
In act 1810, based on the probability of proper detection and decoding, a completion time for transfer of a sequence of optical codes can be calculated. Each calculation of a completion time in act 1810 can include acts 1811, 1812, 1813, 1814, and 1815 for the given optical code version. Stated differently, average, minimum, or expected read time of an optical code sequence can be determined by running a simulation for a candidate optical code version and ECC level combination (candidate segmentation parameters).
In act 1811, for the given optical code version, the amount of time required to communicate the sequence of optical codes once is determined (i.e., the time to communicate a single playback loop of the optical sequence is determined). This could entail multiplying the length of each frame by the number of frames. For example, if a sequence of optical codes communicates each frame for 1/10 of a second, and there are 30 frames to communicate, then the time to communicate the sequence of optical codes once is three seconds. However, variable communication times for loop iterations can be accounted for, as will be discussed later.
In act 1812, a random determination is made as to whether each frame in the optical code sequence will be successfully decoded based on the probability of successful detection and decoding (i.e., the system does a simulation of a playback loop, to identify frames which are not properly detected and decoded).
In act 1813, a determination is made as to whether there are any frames in the sequence which have not yet been successfully decoded. In this case, the simulation of the first playback loop determines whether any of the frames of the sequence were not successfully decoded during the first playback loop. If there is at least one frame which has not been successfully decoded yet, the method proceeds to act 1814.
In act 1814, a random determination is made as to whether, in another playback loop (the second playback loop in this case), each frame in the sequence which has not yet been successfully decoded will be successfully decoded, based on the probability of successful detection and decoding. That is, in act 1814 a determination is made as to whether, among the previously non-decoded frames, there are any frames still not properly decoded. The method then proceeds back to act 1813, where a determination is made as to whether, even after the second playback loop, there are any frames which have not yet been successfully decoded. Importantly, it is not required that all frames be properly decoded within a single loop for act 1813 to identify that there are no frames which have not yet been decoded. Rather, act 1813 determines whether there are any frames which have not been successfully decoded in any previous playback loop. As long as there is at least one non-decoded frame in the sequence, the method can cycle between acts 1813 and 1814 to run successive playback loops.
Once a determination is made in act 1813 that there are no non-decoded frames in the sequence, the method proceeds to act 1815. In act 1815, the completion time for communication of the sequence for the given optical code version is determined, based on the number of playback loops that were needed to completely decode the optical sequence. Since the full sequence is sent by the sending device even if only a single frame remains to be decoded, the completion time could be determined by multiplying the number of loops needed by the time required to communicate the sequence once, as determined in act 1811. In implementations where the playback speed (and thus playback time) of each loop changes with each successive loop, as discussed earlier, act 1811 can entail determining the playback time for each loop, and the completion time can be obtained in act 1815 by summing the time of each playback loop which was required to completely decode the sequence.
The acts within act 1810 can be performed for each optical code version and ECC level combination of interest, to obtain a completion time for each such optical code version and ECC level combination. Further, to account for the probabilistic nature of the simulation, the acts within act 1810 can be performed multiple times for each optical code version and ECC level combination, and averaged or otherwise combined to increase accuracy of the simulation (e.g. by obtaining an average completion time for each optical code version and ECC level combination). In act 1820, based on the calculated completion times, the optical code version and ECC level combination that produces the lowest completion time can be selected as the optimal optical code version and ECC level combination.
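A minimal Monte Carlo sketch of acts 1810-1815, assuming a constant loop time and a per-frame decode probability obtained from the empirical testing described above (the function names and parameters are hypothetical):

```python
import random

def simulate_completion_time(num_frames, frame_duration, p_decode, max_loops=1000):
    """One simulated communication: in each loop (acts 1812-1814), every
    not-yet-decoded frame is decoded with probability p_decode; the run
    ends when all frames are decoded, and the completion time (act 1815)
    is the number of loops times the loop time (act 1811)."""
    loop_time = num_frames * frame_duration  # act 1811
    remaining = set(range(num_frames))
    loops = 0
    while remaining and loops < max_loops:
        loops += 1
        remaining = {f for f in remaining if random.random() >= p_decode}
    return loops * loop_time

def average_completion_time(num_frames, frame_duration, p_decode, runs=100):
    """Average many runs (e.g. 100 simulations per version/ECC combination)."""
    return sum(simulate_completion_time(num_frames, frame_duration, p_decode)
               for _ in range(runs)) / runs
```

Running average_completion_time for each candidate version and ECC level combination, with the corresponding per-frame probability and frame count, yields the completion times that are compared in act 1820.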
The discussed simulations are useful to determine which optical code version and ECC level combination results in an optimal total completion time, and the probability of that time being achieved, based on the results of many simulations (e.g. 100 simulations per combination of version and ECC level).
Although the acts of method 1800 in
In act 1902, a sequence of frames is generated by a sequence generator, wherein each frame comprises an optical code and each frame has an index. For example, a processor communicatively coupled to but not physically included in a sending device could generate the sequence of frames. As another example, a processor included in the sending device could generate the sequence of frames.
In act 1904, the sequence of frames is provided for display both as a video or animation, and as individual static frames. For example, the sequence generator could provide the sequence of frames to the sending device as a GIF, AVI, MPG, or other appropriate video or animation file type. The sequence generator can also provide the sequence of frames to the sending device as a plurality of image files, such as BMP, PNG, JPG, or any other appropriate format. By providing the sequence of frames in both video/animation format and static image format, the sending device has the flexibility to display either format of the sequence of frames, based on what formats the sending device is configured to display. Additionally or alternatively, the sending device could display the video/animation formatted sequence of frames when the whole sequence is being communicated, and can display the static image for a given frame when said frame is displayed independently (such as discussed above with reference to
Communicating data from a sending device to a receiving device via a sequence of optical codes has several benefits. For example, the amount of data that can be transmitted greatly exceeds what is possible with a single optical code (for example, a sequence of 100 codes can transmit 100 times more data than a single code). Further, no radio frequency based network connectivity is required for data transmission (i.e. sending and receiving devices are “air gapped” and communicate using optical frequencies in the visible light range, display screen to camera).
Some exemplary applications benefiting from the advantages of optical code sequences include: a) Digital identification; b) Diagnostics; c) Medical devices, and d) Medical records (e.g. vaccination status).
In one example, a sequence of optical codes is used to represent an individual's identity. The sending device may be the individual's smart phone. Receiving devices could be any device that requires confirming the individual's identity (for example, a passport kiosk or terminal at an airport). The data encoded may contain alphanumeric information about the individual (e.g. name, date of birth, etc). The data encoded may also include biometric information about the individual. For example, information describing the individual's fingerprints may be encoded (a variety of encoding methods being available for such data, including bitmaps, 2D coordinates, shape model coefficients, etc.). Other biometric markers may similarly be encoded (e.g. retina scans, face identification, etc.).
In one example, an optical code sequence from a sending device communicates information about the individual (e.g. their credit card number), as well as their biometric information to a receiving device. The receiving device may be configured to receive the information, and also receive a biometric measurement from the individual. The receiving device may compare the biometric measurement and the biometric information from the optical code sequence to perform authentication (for example, a credit card payment may require authentication before processing).
In a similar example, the sending device may be coupled to biometric sensors (e.g. the fingerprint sensor on a smartphone); the sending device may generate a biometric measurement and encode it into the optical code sequence for transmittal to a receiving device. The receiving device may validate the biometric information without the need of having biometric sensing capability.
In the above examples, written and/or digital signatures may be contemplated instead of or in addition to the biometric information.
In an example, the optical code sequence encodes digitally signed data using key pairs that provide certification of the data by an authority that issued the data. If a sender is to send information as proof of a fact (whether it is identity alone or identity and other information like a vaccination status), it may be desired that all of this data is signed by a third party issuer of the data (such as a government authority, an institution, etc.). In this way, the proof (e.g. of vaccination status) cannot be spoofed.
In an embodiment, the issuer (e.g. a public authority) compiles data for an individual and signs it with a private key. The signed data and the signature are encoded in optical codes and provided to the respective individual. The authority's public key is made available (e.g. as a digital certificate) to all for storing on a code receiving device that wants to confirm that data received is proper and has not been subject to tampering. An individual provides the optical codes encoding the signed data to a receiving device as proof of the fact sought to be proved. The receiving device decodes the optical codes to get the signed data and signature, and uses the public key to confirm that the data received was certified (signed) by the issuer authority, using common practices.
While described with reference to a public authority, data to be encoded in the optical codes (all or part) may be signed by any entity in an applicable scenario (e.g. a manufacturer providing diagnostic data, a physician or health care provider providing medical data, etc. as described).
In one example, an individual's medical records, for example their vaccination history, are encoded and provided by a sending device as an optical code sequence. Such medical information is well suited to the application of optical code sequences because no network connectivity is needed (beneficial for cybersecurity and the safeguarding of privacy), and because the quantity of information required to describe medical information may be greater than what is available in a single static optical code.
As discussed, medical devices may benefit from receiving data via optical code sequences, for the purposes of increased privacy and cybersecurity. One example is transferring pre-operative surgical planning information to intra-operative surgical devices.
Diagnostic information may benefit from being transmitted via optical code sequences. Large amounts of numerical data may be transmitted in an optical code sequence. For example, in automotive applications, vehicle diagnostics are typically available over a physical connector to a specialized receiving device. In this application, the vehicle itself may provide the sending device (e.g. the main dashboard display), and the receiving device may be a smartphone; vehicle diagnostics may be available on the smartphone without the need for specialized diagnostic equipment or connectors.
Applications in which the receiving device has limited electrical power may also benefit from receiving data via optical code sequences, for example, where the receiving device is battery operated and changing batteries often is impractical. The receiving device may implement a simple user interface to minimize power consumption. For example, the user interface may consist of LEDs that indicate the optical code reading status to a user: a flashing LED may indicate "currently reading code sequence"; a red LED may indicate a failure to read the code; a green LED may indicate success; and so on. In another example, a progress bar may be indicated via the flashing frequency of the LED (i.e. flashing frequency may be proportional to percentage progress).
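The flashing-frequency progress bar can be sketched as a simple mapping from decoding progress to a blink period. The frequency range below is an assumption for illustration; on a real device the computed period would drive a GPIO pin toggling the LED.

```python
def flash_period_s(progress: float,
                   min_hz: float = 1.0, max_hz: float = 10.0) -> float:
    """Map decoding progress (0.0-1.0) to an LED flash period in seconds.

    Flash frequency is proportional to percentage progress, so a user can
    judge how far along the transfer is from the blink rate alone.
    The 1-10 Hz range is an illustrative assumption.
    """
    progress = max(0.0, min(1.0, progress))
    hz = min_hz + (max_hz - min_hz) * progress
    return 1.0 / hz

# A faster blink means the transfer is closer to completion.
assert flash_period_s(0.0) == 1.0      # 1 Hz at the start
assert flash_period_s(1.0) == 0.1      # 10 Hz when nearly complete
```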
Several exemplary implementations and embodiments are described above. It is within the scope of the present disclosure to combine features and aspects of the implementations and embodiments; however, not all features of each embodiment or implementation are necessarily required.
In act 2002, encoding parameters are determined for data to be encoded. This could be achieved for example using methods 1700 and 1800 in
In act 2004, the data is encoded into a plurality of optical codes. This can be based on encoding parameters determined in act 2002, to encode the data into an appropriate plurality of optical codes (or an optical code sequence). This could also entail encoding primary data and secondary data, as discussed with reference to
In act 2006, a plurality of frames are generated for presenting in a frame sequence which comprises the optical codes. For example, the plurality of optical codes could be arranged sequentially in the frame sequence, with one optical code in each frame. In some examples, a plurality of optical code channels can be arranged in each frame, similar to as discussed with reference to
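Acts 2002 through 2006 can be sketched as follows: the data is segmented into per-code chunks, and the chunks are arranged one per frame in sequence order. The 2-byte (index, total) header is an illustrative convention, not a format from the disclosure, and an actual implementation would render each chunk as a 2D barcode image via a barcode library rather than treating the raw bytes as a frame.

```python
from typing import List

def segment(data: bytes, chunk_size: int) -> List[bytes]:
    """Split data into chunks, each prefixed with a 2-byte header
    (sequence index, total count) so the receiver can reorder the
    decoded chunks and detect completeness."""
    payloads = [data[i:i + chunk_size]
                for i in range(0, len(data), chunk_size)]
    total = len(payloads)
    if total > 255:
        raise ValueError("payload too large for a 1-byte index header")
    return [bytes([idx, total]) + p for idx, p in enumerate(payloads)]

def build_frames(chunks: List[bytes]) -> List[bytes]:
    # One optical code per frame, arranged sequentially (act 2006).
    # Here a "frame" is just the chunk itself; a real implementation
    # would encode each chunk as a 2D barcode image for display.
    return list(chunks)

frames = build_frames(segment(b"A" * 100, chunk_size=40))
assert len(frames) == 3                  # 100 bytes -> 3 chunks of <=40
assert frames[0][:2] == bytes([0, 3])    # header: index 0 of 3
```

The chunk size would itself come from the encoding parameters of act 2002 (code version, error-correction level, and so on).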
In act 2008, the frame sequence is provided for display, for example as shown in
In act 2010, responsive to user input, the sending device can alter display of the sequence of frames. Exemplary inputs and alterations of display based thereon are described with reference to
In act 2012, a receiving device receives, via an image sensor, the sequence of frames comprising the optical codes, for example as shown in
In act 2014, the receiving device (or a decoder communicatively coupled thereto) detects, localizes, and decodes the optical codes. Exemplary techniques for detecting and localizing optical codes are discussed with reference to
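The receiving side of acts 2012 and 2014 can be sketched as an assembler that records each decoded chunk by index until the sequence is complete. It assumes a simple 2-byte (index, total) header on each decoded payload (an illustrative convention, not a format from the disclosure) and tolerates frames arriving out of order or repeatedly, since the sending device may loop the sequence.

```python
from typing import Dict, Optional

class SequenceAssembler:
    """Collect decoded optical-code payloads until the sequence is whole."""

    def __init__(self) -> None:
        self.chunks: Dict[int, bytes] = {}
        self.total: Optional[int] = None

    def accept(self, decoded: bytes) -> None:
        # First two bytes: (sequence index, total count); rest is data.
        idx, total = decoded[0], decoded[1]
        self.total = total
        self.chunks[idx] = decoded[2:]   # duplicate frames are harmless

    def progress(self) -> float:
        # Could drive the feedback of act 2016 (e.g. a progress bar).
        return len(self.chunks) / self.total if self.total else 0.0

    def result(self) -> Optional[bytes]:
        # Reassemble in index order once every chunk has been received.
        if self.total and len(self.chunks) == self.total:
            return b"".join(self.chunks[i] for i in range(self.total))
        return None

asm = SequenceAssembler()
asm.accept(bytes([1, 2]) + b"world")     # frames may arrive out of order
assert asm.result() is None              # sequence not yet complete
asm.accept(bytes([0, 2]) + b"hello ")
assert asm.result() == b"hello world"
```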
In act 2016, feedback is provided indicating a decoding status of the optical codes. Exemplary techniques and feedback interfaces are described with reference to
In act 2018, usability of data decoded from the optical codes can be evaluated. Exemplary evaluation techniques are described with reference to
Many of the methods and systems described herein can be implemented across a network, as is discussed with reference to
Also shown are additional computing devices 2108 and 2110 coupled to the network. Computing device 2108 is a data originating device configured to define data for encoding into frames comprising optical codes. Computing device 2110 is a sequence generating device configured to receive the data to be encoded and to generate the frames comprising the optical codes. Computing device 2110 can communicate the frames to sending device 2102, for example, as a video or animation file such as a GIF, AVI, MPG, or similar, and/or as static images of the optical codes. Sending device 2102 comprises a display device with which to display the optical codes.
Sending device 2102 can have any of the sending functionality and features described herein.
Computing device 2110 can have any of the sequence generator functionality and features described herein, including without limitation data segmentation, optical code encoding and frame sequence generating functionality.
In some implementations, the sending device 2102 communicates over the network 2106 with computing device 2110 to provide information on physical specifications of sending device 2102 (e.g. screen size, resolution, framerate). Computing device 2110 may use this information to determine suitable sequence characteristics (e.g. optical code version, ECC level, sequence frames per second), as is described in detail with reference to
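One way computing device 2110 might derive sequence characteristics from reported display specifications is sketched below. The thresholds (pixels per module, the fraction of the refresh rate used) are assumptions for the sketch, not values from the disclosure.

```python
def choose_sequence_params(screen_px: int, display_fps: int) -> dict:
    """Pick a code density and presentation rate for a sending display.

    screen_px: shortest screen dimension in pixels, which limits how
    many code modules can be shown with enough pixels per module.
    display_fps: maximum refresh rate of the display.
    """
    min_pixels_per_module = 4        # keep modules resolvable by a camera
    modules = screen_px // min_pixels_per_module
    # Present codes well below the refresh rate so a camera sampling at
    # an unsynchronized rate still captures each frame at least once.
    sequence_fps = max(1, display_fps // 4)
    return {"max_modules_per_side": modules, "sequence_fps": sequence_fps}

params = choose_sequence_params(screen_px=1080, display_fps=60)
assert params == {"max_modules_per_side": 270, "sequence_fps": 15}
```

The module budget would then bound the optical code version and, together with the ECC level, the per-code payload; the frame rate bounds the overall transfer rate.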
Sending device 2102 and receiving device 2152 are configured to communicate optical codes from the sending device 2102 to the receiving device 2152 over a distance, without radio waves. Sending device 2102 can have any of the sending functionality and features described herein, including without limitation frame displaying functionality. Receiving device 2152 can have any of the receiving, detecting, localizing, and decoding functionality and features described herein, including without limitation, progress indicator features set out with reference to
In another embodiment, not shown, features and functions of computing devices 2108 and 2110 are provided by a single computing device.
The various computing devices shown herein comprise a processing unit (for example a microprocessor, FPGA, ASIC, logic controller, or any other appropriate processing hardware) and a storage device (e.g. a non-transitory processor-readable storage medium, such as memory, RAM, ROM, magnetic disk, solid state storage, or any other appropriate storage hardware) storing instructions which, when executed by the processing unit, configure the computing device to perform operations, for example to provide the functionality and features described herein. Computer program code for carrying out operations may be written in any combination of one or more programming languages, e.g., an object-oriented programming language such as Java, Smalltalk, C++ or the like, or a conventional procedural programming language, such as the "C" programming language or similar programming languages.
Any of the computing devices may have communication subsystems to communicate via a network such as network 2106. Any may have a display device and other input and/or output devices. At least device 2152 has an image sensor, such as a camera. Device 2102 is preferably a mobile device such as a smartphone, tablet, laptop, etc., but the devices may have any form factor.
Practical implementation may include any or all of the features described herein. These and other aspects, features and various combinations may be expressed as methods, apparatus, systems, means for performing functions, program products, and in other ways, combining the features described herein. A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the processes and techniques described herein. In addition, other steps can be provided, or steps can be eliminated, from the described process, and other components can be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.
Throughout the description and claims of this specification, the word “comprise”, “contain” and variations of them mean “including but not limited to” and they are not intended to (and do not) exclude other components, integers or steps. Throughout this specification, the singular encompasses the plural unless the context requires otherwise. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers, characteristics, or groups described in conjunction with a particular aspect, embodiment or example are to be understood to be applicable to any other aspect, embodiment or example unless incompatible therewith. All of the features disclosed herein (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. The invention is not restricted to the details of any foregoing examples or embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings) or to any novel one, or any novel combination, of the steps of any method or process disclosed.
This application is a continuation of PCT Application No. PCT/CA2021/051399, filed Oct. 6, 2021 and entitled, “Communication of Optical Codes between Devices”, the entire contents of which are incorporated herein by reference. The PCT Application No. PCT/CA2021/051399 claims a benefit of U.S. Provisional Application No. 63/089,362 filed Oct. 8, 2020 and entitled, “Communication of Optical Codes between Devices”, the entire contents of which are incorporated herein by reference.
Provisional application:

| Number | Date | Country |
|---|---|---|
| 63089362 | Oct 2020 | US |

Continuation:

| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CA2021/051399 | Oct 2021 | US |
| Child | 18131565 | | US |