The present application is related to and claims the benefit of the earliest available effective filing dates from the following listed applications (the “Related Applications”) (e.g., claims earliest available priority dates for other than provisional patent applications (e.g., under 35 USC § 120 as a continuation in part) or claims benefits under 35 USC § 119(e) for provisional patent applications, for any and all parent, grandparent, great-grandparent, etc. applications of the Related Applications).
U.S. Provisional Patent Application Ser. No. 63/278,576 entitled SYSTEMS AND METHODS FOR GENERATION, SELECTION, AND DISPLAY OF MAP-BASED CHART DATABASES FOR USE WITH CERTIFIED AVIONICS SYSTEMS and filed Nov. 12, 2021;
Concurrently filed U.S. patent application Ser. No. 17/525,659, entitled TOOL TO FACILITATE CUSTOMER GENERATED CHART DATABASES FOR USE WITH A CERTIFIED AVIONICS SYSTEM;
Concurrently filed U.S. patent application Ser. No. 17/525,690, entitled ELECTRONIC CHART APPLICATION WITH ENHANCED ELEMENT SEARCHING AND HIGHLIGHTING USING GENERIC THIRD-PARTY DATA; and
Concurrently filed U.S. patent application Ser. No. 17/525,184, entitled METHOD FOR SEPARATING LARGE AVIONICS CHARTS INTO MULTIPLE DISPLAY PANELS.
Conversion of graphics from visualized forms to an encoded form is a process-intensive task. Methods to compress images and/or simplify conversion protocols have been developed to decrease the amount of processing necessary for conversion.
One method of image conversion is the use of Run Length Encoded (RLE) vectors. For example, a shape may be divided into virtual slices, and data derived from each slice is then converted into one or more RLE vectors. Current methods using RLE vector strategies are still process heavy, and the time required to convert filled shapes to RLE vectors using standard conversion methods is still excessive. This is particularly true for complex shapes (e.g., having hundreds of thousands of points), which under current RLE conversion methods have a complexity of O(n²). Accordingly, it would be advantageous to provide a system and method that overcomes the shortcomings described above.
A method for converting a filled shape to an RLE vector is disclosed. In one or more embodiments, the method includes creating a virtual pixel array of pixel cells corresponding to a graphical array of graphic pixels comprising the filled shape, wherein a pixel cell corresponding to a graphic pixel of the filled shape is assigned an "ON" state, wherein a pixel cell not corresponding to the graphical pixel is assigned an "OFF" state. In one or more embodiments, the method further includes determining a border on the virtual pixel array corresponding to the filled shape, wherein the border includes one or more border lines, wherein each border line includes one or more border line elements, wherein each border line element corresponds to a single pixel. In one or more embodiments, the method further includes storing a pixel-type value within each pixel cell that corresponds to a border line element within the pixel, wherein the pixel-type value includes at least one of a start value, a line value, or a vertex value. In one or more embodiments, the method further includes creating a shape RLE group corresponding to a line of pixels aligned along a first axis of the virtual pixel array. In one or more embodiments, creating the shape RLE group includes scanning the virtual pixel array along a first row of the first axis. In one or more embodiments, creating the shape RLE group further includes initiating the shape RLE group upon detecting a pixel cell that has been assigned a start value. In one or more embodiments, creating the shape RLE group further includes extending the shape RLE group upon detection of a subsequently scanned adjacent pixel cell that is assigned an "ON" state. In one or more embodiments, creating the shape RLE group further includes terminating the shape RLE group upon the detection of the adjacent cell that is assigned an "OFF" state. In one or more embodiments, the method further includes storing the position and length of the shape RLE group as a shape RLE vector.
In one or more embodiments, the method further includes continuing to scan the virtual pixel array along the first axis to the end of the array line, wherein upon reaching the end of the array line, scanning initiates along a second row of the first axis.
In one or more embodiments of the method, the first axis is configured as an X-axis, and the second axis is configured as a Y-axis.
In one or more embodiments of the method, scanning is configured to proceed from left to right along the X-axis.
In one or more embodiments of the method, the filled shape may be configured with an internal unfilled region.
In one or more embodiments of the method, the filled shape is configured to be displayed on a chart.
In one or more embodiments of the method, the chart is configured as a digital flight management system chart.
In one or more embodiments of the method, the method further includes clipping the filled shape. In one or more embodiments, clipping the filled shape includes creating a virtual clip array. In one or more embodiments, clipping the filled shape further includes determining a clip border on the virtual clip array corresponding to the clipped region. In one or more embodiments, clipping the filled shape further includes storing a pixel-type value within each pixel cell that corresponds to a clip line element. In one or more embodiments, clipping the filled shape further includes generating a clip RLE group corresponding to a line of pixels aligned along a first axis of the virtual clip array. In one or more embodiments, clipping the filled shape further includes storing the position and length of the clip RLE group as a clip RLE vector. In one or more embodiments, clipping the filled shape further includes combining the clip RLE vector and the shape RLE vector to form a clipped shape RLE vector.
In one or more embodiments of the method, the clipped region bounds a region of the filled shape that is visualized.
In one or more embodiments of the method, the clipped region bounds an exclusion zone of the filled shape.
In one or more embodiments of the method, the method is configured to compute with O(n) complexity.
A system is disclosed. In some embodiments, the system includes a controller configured to convert a filled shape to a run length encoded (RLE) vector. In some embodiments, the controller includes one or more processors. In some embodiments, the controller further includes a memory configured to store data and instructions executable by the one or more processors. In some embodiments, the instructions include creating a virtual pixel array of pixel cells corresponding to a graphical array of graphic pixels comprising the filled shape, wherein a pixel cell corresponding to a graphic pixel of the filled shape is assigned an "ON" state, wherein a pixel cell not corresponding to the graphical pixel is assigned an "OFF" state. In some embodiments, the instructions further include determining a border on the virtual pixel array corresponding to the filled shape, wherein the border comprises one or more border lines, wherein each border line comprises one or more border line elements, wherein each border line element corresponds to a single pixel. In some embodiments, the instructions further include storing a pixel-type value within each pixel cell that corresponds to a border line element within the pixel, wherein the pixel-type value includes at least one of a start value, a line value, or a vertex value. In some embodiments, the instructions further include creating a shape RLE group corresponding to a line of pixels aligned along a first axis. In some embodiments, creating the shape RLE group includes scanning the virtual pixel array along a first row of the first axis. In some embodiments, creating a shape RLE group further includes initiating a shape RLE group upon detecting a pixel cell that has been assigned a start value. In some embodiments, creating a shape RLE group further includes extending the shape RLE group upon detection of a subsequently scanned adjacent pixel cell that is assigned an "ON" state. In some embodiments, creating a shape RLE group further includes terminating the shape RLE group upon the detection of the adjacent cell that is assigned an "OFF" state. In some embodiments, the instructions further include storing the position and length of the shape RLE group as a shape RLE vector.
In one or more embodiments of the system, the filled shape is displayed on a chart.
In one or more embodiments of the system, the chart is configured as a digital flight management system chart.
In one or more embodiments of the system, the instructions further include clipping the filled shape. In one or more embodiments of the system, clipping the filled shape includes creating a virtual clip array. In one or more embodiments of the system, clipping the filled shape further includes determining a clip border on the virtual clip array corresponding to the clipped region. In one or more embodiments of the system, clipping the filled shape further includes storing a pixel-type value within each pixel cell that corresponds to a clip line element. In one or more embodiments of the system, clipping the filled shape further includes generating a clip RLE group corresponding to a line of pixels aligned along a first axis of the virtual clip array. In one or more embodiments of the system, clipping the filled shape further includes storing the position and length of the clip RLE group as a clip RLE vector. In one or more embodiments of the system, clipping the filled shape further includes combining the clip RLE vector and the shape RLE vector to form a clipped shape RLE vector.
This Summary is provided solely as an introduction to subject matter that is fully described in the Detailed Description and Drawings. The Summary should not be considered to describe essential features nor be used to determine the scope of the Claims. Moreover, it is to be understood that both the foregoing Summary and the following Detailed Description are example and explanatory only and are not necessarily restrictive of the subject matter claimed.
The detailed description is described with reference to the accompanying figures. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items. Various embodiments or examples (“examples”) of the present disclosure are disclosed in the following detailed description and the accompanying drawings. The drawings are not necessarily to scale. In general, operations of disclosed processes may be performed in an arbitrary order, unless otherwise provided in the claims.
Before explaining one or more embodiments of the disclosure in detail, it is to be understood that the embodiments are not limited in their application to the details of construction and the arrangement of the components or steps or methodologies set forth in the following description or illustrated in the drawings. In the following detailed description of embodiments, numerous specific details may be set forth in order to provide a more thorough understanding of the disclosure. However, it will be apparent to one of ordinary skill in the art having the benefit of the instant disclosure that the embodiments disclosed herein may be practiced without some of these specific details. In other instances, well-known features may not be described in detail to avoid unnecessarily complicating the instant disclosure.
As used herein a letter following a reference numeral is intended to reference an embodiment of the feature or element that may be similar, but not necessarily identical, to a previously described element or feature bearing the same reference numeral (e.g., 1, 1a, 1b). Such shorthand notations are used for purposes of convenience only and should not be construed to limit the disclosure in any way unless expressly stated to the contrary.
Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of “a” or “an” may be employed to describe elements and components of embodiments disclosed herein. This is done merely for convenience and “a” and “an” are intended to include “one” or “at least one,” and the singular also includes the plural unless it is obvious that it is meant otherwise.
Finally, as used herein any reference to “one embodiment” or “some embodiments” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment disclosed herein. The appearances of the phrase “in some embodiments” in various places in the specification are not necessarily all referring to the same embodiment, and embodiments may include one or more of the features expressly described or inherently present herein, or any combination of or sub-combination of two or more such features, along with any other features which may not necessarily be expressly described or inherently present in the instant disclosure.
A system and method for converting a filled shape to a run length encoded (RLE) vector is disclosed. The method first determines a border around the shape and designates the pixels that define the border for use in creating a shape RLE group. For example, the method identifies initiating pixels within a line of pixels of an image slice that can be used to initiate a shape RLE group. The method also identifies extending pixels that are added to the shape RLE group and/or a termination pixel that ends the shape RLE group. Data from the shape RLE group is then converted into an RLE vector. The method effectively reduces the complexity of converting multiple-point graphics from O(n²) to O(n). A method for clipping RLE filled areas is also disclosed.
In embodiments, the input device 104 inputs RLE-related data and/or graphical data into the system 102, and may be configured as any type of input device including but not limited to a keyboard, a scanner, a camera, or any type of data port. For example, the input device 104 may be configured as a scanner configured to scan a graphic (e.g., physical avionics chart) into the system. In another example, the input device 104 may be configured as a USB port configured to receive a USB memory device having an avionics chart (e.g., a digital navigation chart for a flight management system (FMS)) loaded onto it (e.g., as a .pdf file, .jpg file, or other type of file). In another example, the input device 104 may be configured as a data port connected to a network 114.
In embodiments, the output device 108 may be configured to output RLE-related data and/or graphical data from the system 102 and may be configured as any type of output device 108 including but not limited to a display, a printer, or any type of data port. For example, the output device 108 may be configured as a computer screen. In another example, the output device 108 may be configured as a data port connected to the network 114.
In embodiments, the controller 112 includes one or more processors 116, a memory 120, and a communication interface 124. The one or more processors 116 may include any processor or processing element known in the art. For the purposes of the present disclosure, the term “processor” or “processing element” may be broadly defined to encompass any device having one or more processing or logic elements (e.g., one or more micro-processor devices, one or more application specific integrated circuit (ASIC) devices, one or more field programmable gate arrays (FPGAs), or one or more digital signal processors (DSPs)). In this sense, the one or more processors may include any device configured to execute algorithms and/or instructions (e.g., program instructions stored in memory). In one embodiment, the one or more processors may be embodied as a desktop computer, mainframe computer system, workstation, image computer, parallel processor, networked computer, or any other computer system configured to execute a program configured to operate or operate in conjunction with the system 102, as described throughout the present disclosure. Moreover, different subsystems of the system 102 may include a processor or logic elements suitable for carrying out at least a portion of the steps described in the present disclosure. Therefore, the above description should not be interpreted as a limitation on the embodiments of the present disclosure but merely as an illustration.
The memory 120 can be an example of a tangible, computer-readable storage medium that provides storage functionality to store various data and/or program code associated with operation of the controller 112 and/or other components of the system 102, such as software programs and/or code segments, or other data to instruct the controller and/or other components to perform the functionality described herein. Thus, the memory can store data, such as a program of instructions for operating the system 102 or other components. It should be noted that while a single memory 120 is described, a wide variety of types and combinations of memory 120 (e.g., tangible, non-transitory memory) can be employed. The memory can be integral with the controller, can comprise stand-alone memory, or can be a combination of both. Some examples of the memory 120 can include removable and non-removable memory components, such as random-access memory (RAM), read-only memory (ROM), flash memory (e.g., a secure digital (SD) memory card, a mini-SD memory card, and/or a micro-SD memory card), solid-state drive (SSD) memory, magnetic memory, optical memory, universal serial bus (USB) memory devices, hard disk memory, external memory, and so forth.
The communication interface 124 can be operatively configured to communicate with components of the controller 112 and other components of the system 102. For example, the communication interface 124 can be configured to retrieve data from the controller 112 or other components, transmit data for storage in the memory 120, retrieve data from storage in the memory 120, and so forth. The communication interface 124 can also be communicatively coupled with controller 112 and/or system elements to facilitate data transfer between system components.
Rules disclosed herein for processing filled shapes 204 into RLE vectors require that each leftmost pixel 216 contained within a horizontal row of pixels 216 in a border line 226 is designated as a start pixel 216 (e.g., "S"). However, different implementations of the rules may also consider any first pixel 216 within a row or column of an X- or Y-axis as the first pixel 216. Therefore, the invention should not be construed as being limited to the description in the following embodiments and examples. For example, the rightmost pixel 216 contained within the horizontal row of pixels 216 may be given the start pixel 216 designation. In another example, the uppermost pixel 216 contained within a vertical row of pixels 216 may be given the start pixel designation. In another example, the lowermost pixel 216 within a vertical row of pixels 216 may be given the start pixel 216 designation. The start pixel may also be further defined by the number of border lines 226 drawn through the start pixel 216 (e.g., the start pixel 216 may be designated with an incremental value depending on the number of line elements drawn through the start pixel 216). For example, a start pixel 216 that comprises one border line 226 may be designated "S1", whereas a start pixel 216 including two border lines 226 may be designated "S2".
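By way of a non-limiting illustration, the following Python sketch shows one way such a designation rule might be applied to a single horizontal row. The row representation (a mapping from pixel x-index to the identifiers of border lines 226 crossing that pixel) is a hypothetical structure chosen for clarity, not a structure required by the disclosure.

```python
def designate_start_pixels(row):
    """Tag the leftmost pixel of each border line within one horizontal row.

    row: dict mapping a pixel x-index to the set of border-line ids crossing
    that pixel (hypothetical structure). Returns x-index -> designation,
    where the suffix counts the border lines drawn through the start pixel
    (e.g., "S1" for one line, "S2" for two).
    """
    designations = {}
    all_lines = set().union(*row.values()) if row else set()
    for line_id in all_lines:
        leftmost = min(x for x, lines in row.items() if line_id in lines)
        designations[leftmost] = f"S{len(row[leftmost])}"
    return designations

# A row in which border line "a" spans pixels 2-4 and line "b" also crosses
# pixel 2: pixel 2 is the leftmost pixel of both lines and is designated "S2".
print(designate_start_pixels({2: {"a", "b"}, 3: {"a"}, 4: {"a"}}))  # {2: 'S2'}
```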
In embodiments, the method 300 includes a step 310 of grouping together connected border elements 222 of a border line 226 of a filled shape 204 that do not cross an X-axis pixel border 404a-d into a pixel line 406a-c containing non-crossing border lines 408a-d. For example, grouping connected border elements 222 may be performed by "stepping through" a group of border lines 226 representing the border 220 of a filled shape 204. Examples of non-crossing border lines 408a-d are shown in
In embodiments, the method 300 further includes a step 320 of designating the leftmost pixel 216 containing the leftmost border element 222 of the non-crossing border line 408 as a start pixel 216, and the rightmost pixel 216 containing the rightmost border line element 222 of the non-crossing border line 408 as an end pixel 216. Some non-crossing border lines 408 may include only one pixel 216, as in pixel line 406b.
In embodiments, the method 300 further includes a step 330 of designating as line pixels 216 ("L") all border line element-containing pixels 216 whose border elements 222 extend continuously and horizontally from the start pixel 216 without crossing an X-axis pixel border 404. For example, in
In embodiments, the method 300 further includes a step 340 of designating a pixel 216 containing two coupled border elements 222, formed from border lines 226 that cross the same X-axis pixel border 404, as a vertex pixel 216. For example, in
In embodiments, the method 300 further includes a step 350 of designating as vertex pixels 216 all border line element-containing pixels 216 formed from border elements 222 extending horizontally from the vertex pixel 216. For example, in filled area 224c of
In embodiments, the method 300 further includes a step 360 of designating start pixels 216 with an incremental value defining the number of border lines 226 drawn through the pixel 216. For example, in
In embodiments, the method 300 further includes a step 370 of designating a pixel 216 configured with one or more vertex pixel 216 designations from one or more border lines 226 and a line pixel 216 designation from a border line as a line pixel 216. For example, in
In embodiments, the method 300 includes a step 380 of designating a pixel 216 configured with one or more vertex pixel 216 designations from one or more border lines 226 and a line pixel 216 designation from a border line as a start pixel 216. For example, in
Border elements 222, or points of border elements 222, can land exactly on pixel boundaries (or pixel cell boundaries). In this case, the border element or point should be placed in either pixel 216 sharing the pixel boundary, and these placements should be consistent. In a similar manner, border lines 226 may cross at exact pixel boundaries (e.g., an X-axis pixel border 404 or a Y-axis pixel boundary). In these cases, the border elements 222 of these border lines 226 may be placed on either side of the pixel boundary, but should be placed consistently.
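For instance, one consistent convention might assign boundary-landing coordinates using a half-open pixel model, as in the following sketch; the convention itself is an illustrative assumption, and any single convention suffices so long as it is applied uniformly.

```python
import math

def pixel_index(coord):
    """Half-open pixel convention: a coordinate landing exactly on a pixel
    boundary is always assigned to the pixel to its right (or below it).
    Applying one such rule to every border element 222 and every boundary
    crossing keeps placements consistent across the whole border 220.
    """
    return math.floor(coord)

print(pixel_index(3.0), pixel_index(3.7))  # 3 3 -- boundary point goes right
```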
Points (e.g., border elements smaller than a pixel 216) may be discarded, but their position may not be discarded. For example, if a border line 226 is shorter than a pixel 216 and extends upward half a pixel 216, further line and position calculations should be made from the higher point, even though the pixel 216 may be combined with another pixel 216 for drawing purposes.
The last border line 226 in a filled shape 204 will either be configured as a "close" command, or a line that terminates at the original start location. The original starting point (e.g., a pixel 216), which may have been designated as an extension, is reprocessed according to new data acquired in assigning designations to other border pixels 216, following the steps 310-380. The pixel 216 containing the original starting point may be reassigned as a start pixel 216, a line pixel 216, or a vertex pixel 216. It may be necessary to reprocess additional parts of the border 220 after considering data derived from "closing" the border.
Once the border has been closed, the shape is fully traversed within the pixel array 208 (e.g., top to bottom, processing each row left to right), with RLE processing beginning as each row is processed. As noted herein, the traversal and the direction of processing may be left-to-right, right-to-left, up-to-down, or down-to-up, but must be performed in the same directions within the filled shape 204.
When all of the border lines 226 of the filled shape 204 have been processed, and every pixel 216 has been assigned a designation, shape RLE grouping will begin.
In some embodiments, the method 500 includes a step 510 of toggling a LineOn attribute when a start pixel 216 is found. For example, if the LineOn attribute is originally set to an "OFF" setting, upon detecting a start pixel 216, the LineOn attribute will be set to an "ON" setting. Conversely, if the LineOn attribute is originally set to an "ON" setting, upon detecting a start pixel 216, the LineOn attribute will be set to an "OFF" setting.
In some embodiments, the method 500 further includes a step 520 of initiating a shape RLE group if no previous shape RLE group has been started and a pixel cell is found that is configured with an "ON" state. For example, a pixel cell correlated to a start pixel 216 and having an "ON" state will be placed as the first pixel cell in a shape RLE group.
In some embodiments, the method 500 further includes a step 530 of extending the shape RLE group if a shape RLE group is started and a pixel cell is found that is configured with an "ON" setting. For example, a pixel cell (e.g., designated as a line pixel 216 or a non-border pixel 216) that has been detected immediately to the right of a pixel cell that has been added to a shape RLE group, and having an "ON" state, will also be added to the shape RLE group. As described herein, all "ON" pixel cells represent pixels 216 that are filled; all pixel cells designated as start pixels 216, line pixels 216, vertex pixels 216, or with other markings may also be configured with an "ON" setting.
In some embodiments, the method 500 further includes a step 540 of terminating the shape RLE group if a shape RLE group has been started and/or extended, and the next cell is configured with an "OFF" setting. For example, if, upon scanning left to right through a row of pixel cells that corresponds to a filled shape 204, the next pixel cell after an initiated and/or extended shape RLE group is detected as configured with an "OFF" setting, the shape RLE group is terminated.
In some embodiments, the method 500 further includes a step 545 of storing the position and length of the shape RLE group. For example, the position and length of shape RLE groups may be stored as an RLE vector in memory 120.
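A minimal Python sketch of steps 510-545 for a single row follows. The cell representation (a dict with a 'state' flag and an optional 'ptype' designation) and the closing of a run that reaches the end of the row are illustrative assumptions, not requirements of the method.

```python
ON, OFF = 1, 0

def scan_row(row):
    """Scan one row of pixel cells left to right (steps 510-545).

    row: list of cells, each a dict with 'state' (ON/OFF) and an optional
    'ptype' ('S', 'L', or 'V') for border cells. Returns the (start_x,
    length) runs found in the row.
    """
    runs, start, line_on = [], None, False
    for x, cell in enumerate(row):
        if cell.get('ptype') == 'S':
            line_on = not line_on   # step 510: toggle the LineOn attribute
            # (line_on drives fill parity when interior cells are not
            # pre-marked, e.g., after clipping; unused in this simple case)
        if cell['state'] == ON:
            if start is None:
                start = x           # step 520: initiate a shape RLE group
            # step 530: each contiguous "ON" cell extends the group
        elif start is not None:
            runs.append((start, x - start))  # step 540: terminate the group
            start = None
    if start is not None:           # close a run that reaches the row's end
        runs.append((start, len(row) - start))
    return runs                     # step 545: positions/lengths to store

row = ([{'state': OFF}] + [{'state': ON, 'ptype': 'S'}]
       + [{'state': ON}] * 3 + [{'state': OFF}])
print(scan_row(row))  # [(1, 4)]
```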
The conversion of filled shapes 204 to RLE vectors may include filled shapes 204 containing arcs, with multiple start pixels 216 designated at each crossing of each X-axis border 404. This process may also be extended to complex internal shape processing. For example, the process may be used to convert filled shapes 204 having unfilled internal areas, such as a doughnut shape.
The implementation of the RLE vector conversion may include storing data with a single number (e.g., such as “0”) representing an “OFF” setting, and another single number (e.g., such as “1”) representing an “ON” setting. Indeterminate pixels, such as extensions, may be represented with another single number (e.g., such as “−1”).
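A sketch of such a cell encoding follows; the specific values (0, 1, and −1) are the ones suggested above, while the use of a NumPy array is an illustrative assumption.

```python
import numpy as np

OFF, ON, INDETERMINATE = 0, 1, -1   # single-number encodings per cell

# A hypothetical 1x8 slice: two resolved runs and one extension pixel
# still awaiting reprocessing.
row = np.array([OFF, ON, ON, INDETERMINATE, OFF, ON, ON, OFF], dtype=np.int8)
print(int((row == ON).sum()))  # 4 cells currently resolved as "ON"
```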
For large shapes that are memory intensive, the process may be chunked into a set of subarrays, with the results stitched together as in the sketch below. For example, the process may be modified to use less memory by setting an arbitrary rectangle set (e.g., a rectangle outline) of X and Y boundaries overlaid upon a portion of the large shape. Sub-shapes within the large shape corresponding to the rectangle outline are then processed. The entire sub-shape is then scanned, with locations calculated, and the location of the sub-shape noted and saved. Multiple rectangle outlines may be overlaid upon the large shape in this manner. Lines from single sub-shapes that end at a border of the rectangle outline may be connected with a line (e.g., through a set of start pixels 216 or start points) and further processed into an RLE. Once all overlaying RLEs within the multiple rectangle outlines have been processed, the shape RLE groups may be sorted by position and stitched together.
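A simplified sketch of the final sort-and-stitch step, assuming each sub-array reports its runs as (row, start_x, length) triples already translated into global coordinates:

```python
def stitch(runs):
    """Merge RLE runs reported independently by adjacent sub-arrays.

    Runs on the same row that abut across a sub-array boundary are
    joined into a single run; all runs are first sorted by position.
    """
    merged = []
    for row, x, length in sorted(runs):
        if merged and merged[-1][0] == row and merged[-1][1] + merged[-1][2] == x:
            prev_row, prev_x, prev_len = merged[-1]
            merged[-1] = (prev_row, prev_x, prev_len + length)
        else:
            merged.append((row, x, length))
    return merged

# Two sub-arrays split at x=4 each reported half of the same row-2 run:
print(stitch([(2, 4, 3), (2, 0, 4)]))  # [(2, 0, 7)]
```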
The total final space taken by the encoded shape may be minimized by performing a first pass analysis on the shape and storing the maximum X and Y values of the shape. These values may then be used to determine if RLE processing should be performed horizontally, vertically, or in an angled position between horizontal and vertical. For example, a shape that is taller than it is wide will process with fewer lines if processing is performed vertically, rather than horizontally. The shape may then be rotated, or the system 102 adjusted, so that RLE processing can be performed with fewer lines.
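A sketch of such a first pass over the shape's points; the simple taller-than-wide decision rule is the one described above, and handling of angled scan directions is omitted.

```python
def choose_scan_axis(points):
    """First pass: bound the shape, then pick the scan direction that
    yields fewer, longer runs. points: iterable of (x, y) pairs.
    """
    xs, ys = zip(*points)
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    # A shape taller than it is wide produces fewer lines when processed
    # vertically (column by column) than horizontally.
    return 'vertical' if height > width else 'horizontal'

print(choose_scan_axis([(0, 0), (3, 0), (3, 9), (0, 9)]))  # vertical
```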
In embodiments, the method 550 includes a step 555 of creating a virtual pixel array 212 of pixel cells corresponding to a graphical array 208 of pixels 216 comprising the filled shape 204, wherein a pixel cell corresponding to a pixel 216 of the filled shape 204 is assigned an "ON" state, wherein a pixel cell not corresponding to the pixel 216 is assigned an "OFF" state. The graphical array 208 may be configured as either a physical or virtual grid placed over a visualized filled shape 204 (e.g., a filled shape 204 on a display screen or a printed sheet). As detailed above, the virtual pixel array 212 is a mathematical representation of the graphical array.
In embodiments, the method 550 includes a step 560 of determining a border 220 on the virtual pixel array 212 corresponding to the filled shape 204, wherein the border 220 comprises one or more border lines 226, wherein each border line comprises one or more border line elements 222, wherein each border line element 222 corresponds to a single pixel 216. For example, if a border line 226 (e.g., a straight line) enters slightly into a pixel 216, the pixel 216 will contain one border line element (e.g., the terminal tip of the border line 226), and the pixel will be assigned an "ON" state. As mentioned herein, a pixel not containing a border line element 222 will be assigned an "OFF" state.
In embodiments, the method 550 further includes a step 565 of storing a pixel-type value within each pixel cell that corresponds to a border line element 222 within the pixel 216, wherein the pixel-type value includes at least one of a start value (S), a line value (L), or a vertex value (V). The pixel-type values for each pixel 216 on the border 220 are determined via the rules described herein, with some pixels 216 having initially assigned pixel-type values that change due to reprocessing and hierarchy rules (e.g., extension (X) pixel-type values changed to start (S), line (L), or vertex (V) pixel-type values).
A complete assignment of the pixels 216 aligned on the border 220 of a filled shape 204 is shown in
In embodiments, the method 550 further includes a step 570 of creating a shape RLE group corresponding to a line of pixels 216 aligned along a first axis of the virtual pixel array. For example, a line of pixels 216 along an X-axis corresponding to a horizontal slice of the filled shape 204 may be assigned to a shape RLE group. For instance, rows 1-6 of
In embodiments, the method 550 further includes a step 575 of scanning the virtual pixel array 212 along a first row of the first axis. The first row may correlate to any row within the display area 200. For example, the scanning may begin at a top, leftmost pixel cell of the virtual pixel array and proceed in a left-to-right, top-to-bottom fashion.
In embodiments, the method 550 further includes a step 580 of initiating a shape RLE group upon detecting a pixel cell that has been assigned a start value. The step 580 may also initiate a shape RLE group if the pixel cell has been assigned an “ON” state, and the previously scanned pixel 216 has been assigned an “OFF” state. For example, ON/OFF states may need to be toggled back and forth for shape RLE group initiation if the pixel 216 assigned the start value has been clipped, as discussed below. Initiating a shape RLE group may also include one or more steps of method 500.
In embodiments, the method 550 further includes a step 585 of extending the shape RLE group upon detection of a subsequently scanned adjacent pixel cell that is assigned an “ON” state. For example, the shape RLE group may be extended if the shape RLE group has been initiated and the pixel cell scanned is assigned an “ON” state. In this manner, all “ON” pixel cells contiguously extending from the start pixel 216 along the X-axis will be added to the shape RLE group.
In embodiments, the method 550 further includes a step 590 of terminating the shape RLE group upon the detection of the adjacent cell that is assigned an “OFF” state. For example, the shape RLE group may be terminated if the shape RLE group has been initiated, and the pixel cell scanned is assigned an “OFF” state.
In embodiments, the method further includes a step 595 of storing the position and length of the shape RLE group as a shape RLE vector. For example, the data corresponding to the position, length, and other aspects of the shape RLE group may be stored in memory 120. The one or more processors may also convert the shape RLE group as instructed into the RLE vector. The process may then reinitiate by further scanning along the first row of the first axis. Upon reaching the end of the array line, scanning may initiate along the second row of the first axis, and so on.
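By way of a worked illustration, the following end-to-end sketch applies the scanning of steps 575-595 to a small pre-marked array; a plain 0/1 array stands in for the virtual pixel array 212, the border and pixel-type bookkeeping of steps 555-570 is omitted, and the one-cell hole shows how internal unfilled regions yield multiple vectors per row. All names here are illustrative assumptions.

```python
import numpy as np

def filled_array_to_rle(arr):
    """Scan a 2D array of 0/1 cell states ("OFF"/"ON") row by row, left to
    right, top to bottom, returning RLE vectors as (row, start_x, length)
    triples (steps 575-595, simplified)."""
    vectors = []
    for y, row in enumerate(arr):
        start = None
        for x, state in enumerate(row):
            if state and start is None:
                start = x                              # initiate (step 580)
            elif not state and start is not None:
                vectors.append((y, start, x - start))  # terminate and store
                start = None
        if start is not None:
            vectors.append((y, start, len(row) - start))
    return vectors

# A 5x6 array holding a filled rectangle with a one-cell hole: the hole row
# produces two vectors, illustrating an internal unfilled region.
shape = np.zeros((5, 6), dtype=np.int8)
shape[1:4, 1:5] = 1
shape[2, 3] = 0
print(filled_array_to_rle(shape))
# [(1, 1, 4), (2, 1, 2), (2, 4, 1), (3, 1, 4)]
```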
The conversion of filled shapes to RLE vectors may be extended to include clipping of filled shapes 204 to form a defined shape and limiting how the defined shape is displayed. Clipping assumes that the filled shape may be processed into an RLE form and may be converted to RLE vectors, and that the clipping region may also be processed into an RLE form and possibly converted to RLE vectors. By converting the filled shape 204 and clipping region 600 to similar data forms, the conversion of the filled shape 204 to a final clipped image is rapid and computationally efficient.
In embodiments, the scheme used to convert filled shapes 204 to RLE vectors may also be used as a base or template for creating a virtual clip array similar to the virtual pixel array 212. By matching the virtual clip array to the virtual pixel array 212, software within the system 102 may quickly process shapes with complex clip regions 600. For example, a clipping algorithm may include converting both the filled section 224 and a clipping region 600 to a combined array, an RLE vector, or a set of RLE vectors, and comparing, left-to-right, across the data set to determine the clipped shape 604. In this manner, the clipping may be performed with O(n) complexity, considerably less complex and less processor intensive than O(n²) clipping operations, particularly those based on Sutherland-Hodgman and Weiler-Atherton clipping methods.
In embodiments, the method 700 includes a step 710 of creating a virtual clip array. The virtual clip array is formed of pixel cells similar to the virtual pixel array 212. The virtual clip array must have dimensions as large as, or larger than, the clipping region 600. For example, the virtual clip array may be equal to the size of the display area and/or the virtual pixel array 212.
In embodiments, the method 700 further includes a step 720 of determining a clip border 608 on the virtual clip array corresponding to the clipped region 600. For example, the clipping region 600 may define the specific dimensions and coordinates as required to clip the filled shape 204, which is defined by the clip border 608. As in the method 550, the clip border 608 comprises one or more clip lines, which further comprise one or more clip line elements, similar to the border lines 226 and border elements 222, respectively.
In embodiments, the method 700 includes a step 730 of storing a pixel-type value within each pixel cell that corresponds to a clip line element. The pixel-type values may be identical or analogous to the pixel-type values used in method 550 and described herein. For example, a clip line element may be assigned a start value, a line value, or a vertex value.
In embodiments, the method 700 further includes a step 740 of generating a clip RLE group corresponding to a line of pixels 216 aligned along a first axis of the virtual clip array. The clip RLE group may be formed similar to the shape RLE group. For example, the forming of the clip RLE group may include scanning of the virtual clip array along a first axis, initiating the clip RLE group upon detecting a pixel cell that has been assigned a start value, extending the clip RLE group upon detection of a subsequently scanned adjacent pixel cell determined within the clipped region and/or assigned an “OFF” state (e.g., as opposed to assigned an “ON” state in method 550), and/or terminating the clip RLE group upon the detection of the adjacent pixel cell that is outside the clipped region 600 or assigned an “ON” state (e.g., as opposed to assigned an “OFF” state in method 550).
In embodiments, the method 700 further includes a step 750 of storing the position and length of the clip RLE group as a clip RLE vector. For example, data from the clip RLE group may be stored in memory 120 and processed as described herein.
In embodiments, the method 700 further includes a step 760 of combining the clip RLE vector and the shape RLE vector to form a clipped shape RLE vector (e.g., ultimately forming the clipped shape 604). The combining may include a series of logic steps to determine whether a pixel 216 should be "ON" or "OFF". For example, through a comparison of an exclusion clip RLE vector and a shape RLE vector, a processor may determine that if the pixel 216 on the filled shape 204 is "ON" and an associated pixel 216 of the clipping region 600 is "OFF", then the pixel 216 is "OFF" (e.g., the clip RLE vector overriding the shape RLE vector).
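A minimal sketch of one such combination for a single row, assuming both the shape and the clipping region have already been reduced to sorted (start, length) runs; the sketch shows a visualization clip (a logical AND of the two run sets), with an exclusion clip obtained by subtracting the clip runs instead of intersecting them.

```python
def combine_rows(shape_runs, clip_runs):
    """Intersect one row of shape runs with one row of clip runs: a pixel
    stays "ON" only where both the filled shape 204 and the clipping
    region 600 are "ON". Runs are (start, length) pairs sorted by start.
    """
    out, i, j = [], 0, 0
    while i < len(shape_runs) and j < len(clip_runs):
        s0, sl = shape_runs[i]
        c0, cl = clip_runs[j]
        lo, hi = max(s0, c0), min(s0 + sl, c0 + cl)
        if lo < hi:
            out.append((lo, hi - lo))   # overlapping span survives the clip
        # advance whichever run ends first
        if s0 + sl <= c0 + cl:
            i += 1
        else:
            j += 1
    return out

# A shape run spanning [2, 10) clipped to the region [0, 6) and [8, 12):
print(combine_rows([(2, 8)], [(0, 6), (8, 4)]))  # [(2, 4), (8, 2)]
```

Because the clipping region 600 is reduced to runs only once, the same clip runs may be intersected against the runs of every filled area 204 on a chart, consistent with the single-pass reuse described below.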
The method 700 is efficient, and may have distinct advantages over other clipping methods. For example, the method 700 works with concave regions, which cannot be performed using the Sutherland-Hodgman method. In another example, the method 700 works more efficiently with complex clipping regions 600 (e.g., having hundreds of thousands of points) than using the Weiler-Atherton protocol. The method 700 is relatively simple to understand, code, and verify, as compared to industry standard methods.
Charts, such as avionic navigation charts, are often defined using a single clipping region 600 that affects multiple filled areas. Standard methods require each filled area 204 to interact with the clipping region 600 independently. Using the method 700, it is possible to process the clipping region 600 a single time and have data from the clipping region 600 interact with all filled areas 204 without traversing the clipping region 600 multiple times, decreasing the time required to process charts with these characteristics.
In another example, a filled shape 204c, 204d may include tendrils 824, 828, defined as pixel-wide lengths of filled area. The tendrils may be aligned with an axis (e.g., tendril 824) or rotated (e.g., tendril 828). In another example, a filled shape 204e may include a double line 832 (e.g., a line that is filled on both sides, where lines are close together and share a pixel 216).
It is to be understood that embodiments of the methods disclosed herein may include one or more of the steps described herein. Further, such steps may be carried out in any desired order and two or more of the steps may be carried out simultaneously with one another. Two or more of the steps disclosed herein may be combined in a single step, and in some embodiments, one or more of the steps may be carried out as two or more sub-steps. Further, other steps or sub-steps may be carried in addition to, or as substitutes to one or more of the steps disclosed herein.
Although inventive concepts have been described with reference to the embodiments illustrated in the attached drawing figures, equivalents may be employed and substitutions made herein without departing from the scope of the claims. Components illustrated and described herein are merely examples of a system/device and components that may be used to implement embodiments of the inventive concepts and may be replaced with other devices and components without departing from the scope of the claims. Furthermore, any dimensions, degrees, and/or numerical ranges provided herein are to be understood as non-limiting examples unless otherwise specified in the claims.