Printer driver systems and methods for automatic generation of embroidery designs

Information

  • Patent Grant
  • Patent Number
    9,683,322
  • Date Filed
    Monday, October 19, 2015
  • Date Issued
    Tuesday, June 20, 2017
Abstract
Printer driver systems and methods for automatic generation of embroidery designs are disclosed. An example method to convert image data to embroidery data includes converting image data representing an image to first vector data, converting the first vector data into component data structures that specify regions within the image, converting a first one of the component data structures into a fill shape including second vector data, converting a second one of the component data structures into a stroke shape including third vector data, and generating embroidery data structures using the fill shape and the stroke shape.
Description
TECHNICAL FIELD

The present disclosure pertains to automatic generation of embroidery designs and, more particularly, to printer driver systems and methods for automatic generation of embroidery designs.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1: Example printer driver system for generating embroidery designs when printing documents via a general purpose computer operating system



FIG. 2: Example operations of the example printer driver system of FIG. 1.



FIG. 3: Example operations of an example compositing method used by the printer driver system of FIG. 1.



FIG. 4: Example of Compositing Input Records for a Printing File Containing Three Overlapping Polygons. FIG. 4 shows an original printing file containing three overlapping polygons [two red, one blue (with a hole)]. The output contours (here 5 polygons) are shown on the right.



FIG. 5: An example illustration of handling collinear cases: Lines [AB], [CD] and [EF] are collinear segments. Points C, E, F, D are reported as intersection points. As a result, four intersection points are inserted into line [AB], and two points are inserted into line [CD]. Note: collinear segments are handled in lines 4 and 15 without increasing the degree of the algorithm.



FIG. 6: Segment Pairs using winding rule fill mode illustrated. A is the starting drawing point. Segment pairs are {ABleft, CDright} and {EFleft, PQright} at event point A in (a). Segment pair is {ABleft, PQright} at event point D in (b).



FIG. 7: Segment Selection and Duplication



FIG. 8: Re-order of Coincident Segments Hit by Scan Ray (i.e. segments have identical end points).



FIG. 9: Part (a) shows coincident segments in a Segment Pool and the incorrect hole that may potentially be generated. Part (b) shows the correct result with no coincident/redundant segments.



FIG. 10: V1 is the first event point in this example. After traversal at V1, edges in dashed lines are visited edges. At event point P1, Edge P1P2 is the start traversal edge. P1P6 is to the left of edge P1P2 and is unvisited. Therefore, traversal edge P1P2 generates a hole. Similarly, at event point M1, edge M1M2 is an odd edge and on the left edge V1V7 has been visited, therefore, traversal edge M1M2 generates the outer edge of a new polygonal object.



FIG. 11: The left side shows an outline traversal in segment pool A. At vertex D, there are three edges that can be chosen: edge DE, DF and edge DG. Since the traversal starting at event point A indicates an outer edge and DE is the leftmost of the three edges (DE, DF and DG), it is chosen. A hole traversal in a segment pool B is shown on the right. At vertex D′, there are three edges that can be chosen: D′E′, D′F′ and D′C′. Because the traversal path starting at A′ indicates a hole, the rightmost edge D′C′ is chosen.



FIG. 12: Example graphics metafile. Left: original metafile image, Middle: wire-frame outlines of original metafile records, Right: wire-frame outlines of composite result.



FIG. 13: Illustration of example end-cap types and join types.



FIG. 14: Illustration of an example method to generate round end-cap stroke path outlines.



FIG. 15: Illustration of an example method to generate square end-cap stroke path outlines.



FIGS. 16 and 17: Example method to process round type joints.



FIGS. 18 and 19: Example method to process miter type joints.



FIGS. 20 and 21: Example method to process bevel type joints.



FIG. 22: Represents the process or machine readable and executable instructions to find segment pairs when a winding-rule fill mode is specified.



FIG. 23: Represents the process or machine readable and executable instructions delineating the general elimination, selection and duplication process.



FIG. 24: Modified segment arrangement criteria for the situation of multiple coincident segments.



FIG. 25: Polygonal Intersection Processes.



FIG. 26: Sorted Segments inside Status Tree. There are three segments in this figure; they are: [AB], [EF] and [CD]. At event point E, the order of the segments in the status tree is: [EF], [CD], [AB], in sequence.



FIG. 27: Example of Twin Segment



FIG. 28: Example of border information. In this situation, edge border information for object 50 is: edge V1V2 border ID is 10, V2V3 border ID is 30, and V3V1 border ID is 20.





DESCRIPTION

Printer drivers are traditionally software programs that facilitate communication between an operating system's printing sub-system and an actual hardware device that physically imprints a particular type of substrate. While considerable complexity may exist in the implementation of a printer driver, from the end user's perspective, utilization of such a driver appears simply as part of a seamless process whereby the user selects a “print” command under a given application running within the operating system and then the active document within that application is visually reproduced on the desired printing device. Under some circumstances, printer drivers are used to produce output that is not directly communicated to an actual hardware device. In such cases, the printing device may be referred to as a “virtual” printer in that it may exist primarily to produce electronic files (e.g., image or typesetting files such as JPEGs, BMPs or PDFs). Once created, these files may then be subsequently viewed, transferred or edited by the user for a variety of purposes.


The method described here specifies a printer driver that can be thought of in either sense (i.e. traditional or virtual) and is unique in that it produces output that effectively reproduces printed documents as embroidered designs. This output, when connected to actual hardware such as an embroidery machine, allows the machine to appear to the computer operator as simply another printer to which documents may be easily sent. When not connected to hardware, the driver provides the functionality of a virtual printer whereby an embroidery data file may be generated that effectively encompasses the complete specification of an embroidery design. This data file may then be used to view a pictorial representation of embroidery data on a computer screen for editing or further manipulation. Alternatively, this data file may also be manually transferred as input to embroidery equipment where the file presents all data necessary for the equipment to sew out or produce the related embroidery design on material or a provided garment. In another embodiment, this data file can be transferred to a web-service to be embroidered on apparel like T-shirts or hats. The actual transfer may be done using many different protocols such as HTTP, low-level sockets, web-service protocols like SOAP, XML-RPC, etc. The printer driver may transfer the low level vector graphics information to the web-service, which then generates embroidery data based on that information. The user is then directed to the web-page through a browser, where he can manipulate the design and select garments on which he wants the design embroidered. After the user confirms the selection, the embroidered garments are delivered to him.


The embroidery process is substantially different from other more traditional imprinting technologies such as CMYK inkjet processes or screen printing processes. Images are created on fabric using embroidery by placing sequences of stitches at various locations, with various orientations, using a multitude of thread colors. One common type of information stored within embroidery data relates to the relative locations of needle penetration points. This information is often stored using a Cartesian coordinate system (e.g. sequences of x, y values representing the horizontal and vertical location of each needle penetration and subsequently the end point locations for stitches which may be visualized as small line segments). There is already at least one automated system known and disclosed within U.S. Pat. Nos. 6,397,120, 6,804,573, 6,836,695 and 6,947,808 that allows automatic conversion from graphical data (e.g. a scanned image bitmap) into embroidery design data. These patents disclose various aspects of image preparation, shape interpretation, and translation to specific embroidery data primitives based on a variety of factors. The methods described here can be used to preprocess and integrate the raw data supplied by an operating system to its printing subsystem such that it may be re-formed in a way that makes it appropriate or compatible as input to an automatic embroidery data generation system. More specifically, an overview of the systems and methods disclosed here is presented in FIG. 1 and employs a low-level printer driver that forwards various types of printing commands to a variety of supporting software. Overall, allowing the user to convert artwork into embroidery designs by the simple act of printing that artwork (e.g., clicking a print button) may offer considerable advantage over other potential methods such as saving the artwork in specific formats or at specific resolutions for later importing by an automatic embroidery generation system. This contrast in use is one of several features that distinguish it from other methods.


The printer driver that facilitates the disclosed method may be configured as a raster printer that supports bezier curves and other forms of vector and bitmap data (e.g., vector outline representations of fonts, rectangles, ellipses, etc.). Configuration in this way, for example, tells the printer subsystem to send font glyphs instead of bitmaps and bezier curve points instead of normal straight line paths for outline data. This is useful in that it may provide greater accuracy in the image specification when compared to simple, fixed resolution bitmap information. Vector data is the term used to refer to graphical information where a region is specified by mathematically precise shape specifiers such as the edge contours that bound it. Often these boundaries are described as smooth curve or poly-line information. Alternatively, bitmap or raster data refers to more discrete data often in the form of pixels, where a region is specified as a function of what groups of pixels it contains. When the print driver is forced to process bitmap data (e.g., as a result of such data being forwarded from an application program), processing such as that described in previously mentioned prior art should be performed to convert that data to vector outline information. Once vector data is obtained, it is then the responsibility of the printer driver to further process it in order to make it suitable for embroidery design generation.


When a user prints a particular document (using the print facility supported by the computer's operating system), the printer subsystem calls various routines in a printer driver DLL (dynamic link library) with data to be printed. Example names of such routines may include DrvTextOut, DrvBitBlt, DrvFillPath, and DrvStrokeAndFillPath. These are some of the routines that are standardized as part of the Microsoft Windows operating system printing subsystem. The implementations of these driver routines, as developed in the preferred embodiment described here, convert this vector information into more basic data structures that specify regions such as polygons, rectangles and paths, and then store them as records in a dynamically sized memory block. The path structure may be composed of several sub paths, which are typically either straight line paths or bezier curve points. A path structure may be composed of multiple closed figures formed from several sub paths. The printer dll may also generate additional parts of a path required to close a figure by connecting the first and the last points in a path or sub-path structure.
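
For illustration, the following C++ sketch shows one way such driver-side records might be buffered; the type and field names here are hypothetical and are not the actual structures used by the Windows printing subsystem or by the driver described above.

```cpp
// Hypothetical record layout (illustrative only, not the driver's actual types)
// for buffering path data received through calls such as DrvFillPath.
#include <cstdint>
#include <vector>

struct PointF { double x, y; };

enum class SubPathKind { PolyLine, Bezier };    // straight-line points or Bezier control points

struct SubPath {
    SubPathKind kind;
    std::vector<PointF> points;                 // for Bezier: consecutive groups of control points
    bool closed = false;
};

struct ShapeRecord {
    std::vector<SubPath> subPaths;              // a path may hold several closed figures
    uint32_t drawOrder = 0;                     // sequence number of the record (its "age")
    // brush/pen attributes supplied by the printing subsystem would be attached here
};

// Records accumulate in a dynamically sized block until the document ends.
static std::vector<ShapeRecord> g_recordBlock;

// Close an open figure by connecting its last point back to its first point,
// as the driver is described as doing above.
inline void closeIfNeeded(SubPath& sp) {
    if (!sp.closed && sp.points.size() > 1) {
        sp.points.push_back(sp.points.front());
        sp.closed = true;
    }
}
```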


The closed or open figures (i.e., shapes) resultant from path structures may be of two types—fill and stroke. A fill shape uses a path structure to delineate its outer most boundaries, whereas a stroke shape uses a path structure to delineate a continuous curve with a predetermined thickness and is typically not actually bounded by the path or sub-path. The printer subsystem specifies a number of attributes to be used to draw such shapes. For example, for fill shapes, the printer subsystem could specify the brush type and color while for stroke shapes it could specify pen color, pen width, end cap and join types. More examples on the type and variety of properties that may be specified for shapes at the printer driver level may be found within printer driver development documentation provided by Microsoft and other operating system vendors. This information is associated with the record of each individual shape. Some of the properties specified by the printer subsystem might not be able to be expressed directly as stitches because of the inherent limitations of embroidery. In such situations, the closest representation may be automatically chosen by default while the user may choose to modify it later-in or completely-after the embroidery generation process. For example, a pattern brush specified for a fill shape would be presented as a solid brush to the system with a default color where this shape will translate to a particular area of embroidery using the specified color as a thread color using a specified fill pattern to approximate the texture or nature of the pattern.


After the printer subsystem signals an end to the printing of a document (e.g., by calling the function DrvEndDoc) the printer dll transfers raw vector data to the Embroidery Generation Support Program (referred to hereafter as the EG method). Various methods can be used to transfer the data to the EG method such as saving it to a (temporary) file, passing individual messages for each record or utilizing a shared block of memory. In one embodiment, the printer dll passes a predetermined unique message to the EG method indicating that the raw vector data is available in a shared memory block. Prior to passing the message, the printer dll copies the shape records and associated information in a predetermined order from the internal dynamic memory block to the shared memory block.
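
As an illustration of the "predetermined order" hand-off, the sketch below packs records into a flat buffer whose contents could then be copied into the shared memory block; the layout (a record count followed by per-record point counts and points) is an assumption made for this example, not the format actually used by the driver.

```cpp
// Hypothetical hand-off layout: [uint32 record count] then, per record,
// [uint32 point count][points...]. The real shared-memory format is not specified here.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Pt { int16_t x, y; };                    // metafile coordinates fit in 16-bit integers

std::vector<uint8_t> packRecords(const std::vector<std::vector<Pt>>& records) {
    std::vector<uint8_t> buf;
    auto put = [&buf](const void* p, std::size_t n) {
        const uint8_t* b = static_cast<const uint8_t*>(p);
        buf.insert(buf.end(), b, b + n);
    };
    uint32_t count = static_cast<uint32_t>(records.size());
    put(&count, sizeof count);
    for (const auto& rec : records) {           // records are written in drawing order
        uint32_t nPts = static_cast<uint32_t>(rec.size());
        put(&nPts, sizeof nPts);
        put(rec.data(), nPts * sizeof(Pt));
    }
    return buf;                                  // contents would then be copied to the shared block
}
```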


The EG method uses a Path Generator (PG) method to generate polygonal boundaries from generic curves/poly-lines and also for stroked paths (e.g., sequences of curves and line segments to be drawn using a GDI pen with particular attributes). Line attributes that are associated with pen types (e.g. pen width, pen color, etc.) may then be used to create a set of polygons that delineate an exterior edge boundary of a stroked path. In some cases, Microsoft Windows® GDI path functions may be called to generate polygons along a stroke path which are visually identical to the original line drawing path after filling occurs during rasterization. However, these functions are typically not sufficient for use here since their precision is often tied to a particular raster resolution.


The EG method then uses a Metafile Compositing (MC) method that sequentially takes shapes (e.g., polygons) where filling modes and color attributes are specified as input and then outputs a set of consistently formed non-overlapping maximally contiguous regions. Input polygons need not necessarily be regular polygons, i.e. polygon vertices may be specified in any order (clockwise or counter-clockwise) and the polygon itself may be self-overlapped. The output is order-specified, i.e. the outer most edge for each region is specified in a counter-clockwise order and any contours indicating holes are specified in a clockwise order. This constraint may not be required, but is often useful in simplifying many subsequent processing tasks including computation of intermediate data such as skeletons (e.g., Voronoi diagram computation), deformation of regions, etc. The EG method then analyzes the composite objects (i.e. the outputted regions) and generates stitch data which can then be fed to an embroidery machine for stitching. The actual methods used to generate stitch data are similar to those already disclosed in the previously mentioned prior art system. A more detailed description of the EG method and some related methods is now provided.
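
The orientation convention for the composite output (outer contours counter-clockwise, holes clockwise) can be checked or enforced with a signed-area test, as in the minimal sketch below; it assumes a y-up coordinate system, so the sign convention flips for y-down device coordinates, and the names are illustrative only.

```cpp
// Illustrative enforcement of the output orientation convention described above.
// Assumes a y-up coordinate system (positive signed area = counter-clockwise).
#include <algorithm>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Shoelace formula.
double signedArea(const std::vector<Pt>& contour) {
    double a = 0.0;
    for (std::size_t i = 0, n = contour.size(); i < n; ++i) {
        const Pt& p = contour[i];
        const Pt& q = contour[(i + 1) % n];
        a += p.x * q.y - q.x * p.y;
    }
    return 0.5 * a;
}

// Outer boundaries counter-clockwise, holes clockwise.
void enforceOrientation(std::vector<Pt>& contour, bool isHole) {
    bool ccw = signedArea(contour) > 0.0;
    if (ccw == isHole)                           // wrong direction for its role: reverse it
        std::reverse(contour.begin(), contour.end());
}
```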


A stroked path typically has symmetrical properties. Specifically, all end-cap types are symmetrical along the path's center line; all types of joints are symmetrical along the joint angle bisectors. The PG method maintains visual features after adding the stroke outline points and maintains shared points between different connected segment paths consistently. Thus, paths generated by the PG method may be substantially more accurate and resolution independent than ones generated by built-in GDI functions.


The PG method invokes several methods to compute the end cap and joins based on the attributes specified at the print driver level.


The Process Round End Cap (PREC) method is used to compute edge boundary vertices at the end point of a stroked path when the selected pen type indicates round end caps as one of its attributes. To maintain the symmetrical property of the round end-caps, the middle point of the arc (refer to FIG. 14) is added first, then boundary edge vertices on the left and right sides of the arc are added recursively until a minimum threshold value for smoothness of the arc is met. Detailed operations of the process are illustrated in FIG. 14.
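
A minimal sketch of this kind of recursive arc refinement is shown below; it follows the spirit of the description (insert the arc's middle point first, then recurse on the left and right halves until a smoothness tolerance is met), but the function names, parameters and tolerance are assumptions for illustration, not the PREC method itself.

```cpp
// Illustrative recursive refinement of a round end cap (not the actual PREC code).
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Point on the circle of radius r about c, halfway along the arc between a and b
// (assumes a and b are less than 180 degrees apart so the chord midpoint is not c).
static Pt arcMidpoint(const Pt& c, const Pt& a, const Pt& b, double r) {
    double mx = 0.5 * (a.x + b.x) - c.x, my = 0.5 * (a.y + b.y) - c.y;
    double len = std::hypot(mx, my);
    return { c.x + mx / len * r, c.y + my / len * r };
}

static void refineArc(const Pt& c, const Pt& a, const Pt& b, double r,
                      double tol, std::vector<Pt>& out) {
    Pt m = arcMidpoint(c, a, b, r);
    double dx = m.x - 0.5 * (a.x + b.x), dy = m.y - 0.5 * (a.y + b.y);
    if (std::hypot(dx, dy) < tol) return;        // smooth enough: stop subdividing
    refineArc(c, a, m, r, tol, out);             // left half
    out.push_back(m);                            // keep vertices ordered from a to b
    refineArc(c, m, b, r, tol, out);             // right half
}

// end      : end point of the stroked path (center of the half circle)
// dirOut   : unit vector along the path, pointing outward at the end point
// rightOff : boundary vertex offset half a pen width to the right of the path
// leftOff  : boundary vertex offset half a pen width to the left of the path
std::vector<Pt> roundEndCap(const Pt& end, const Pt& dirOut,
                            const Pt& rightOff, const Pt& leftOff,
                            double penWidth, double tol = 0.25) {
    double r = 0.5 * penWidth;
    Pt tip{ end.x + dirOut.x * r, end.y + dirOut.y * r };  // middle point of the arc, added first
    std::vector<Pt> verts{ rightOff };
    refineArc(end, rightOff, tip, r, tol, verts);
    verts.push_back(tip);
    refineArc(end, tip, leftOff, r, tol, verts);
    verts.push_back(leftOff);
    return verts;
}
```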


The PG method uses a Process Square End Cap (PSEC) method to compute edge boundary vertices at the end point of a stroked path when the associated pen type indicates squared end caps. Right corner points and left corner points are added first. Example operations are shown in FIG. 15.


The Process Round Join (PRJ) method is used to compute edge boundary vertices when the selected pen type indicates a round join type. First, the bisector of the two connected path segments is computed (see FIG. 16). For the convex side of the path, two vectors are projected from the common join point of the specified related medial path where each vector is projected a distance of one half the pen width and orthogonal to each of the related medial path line segments. The ends of these vectors indicate the end points of the curved boundary to be computed on the outer convex edge side of the path. Then the endpoint of a bisector of these two vectors (again projected a distance of one half the specified pen width) is inserted into the boundary's vertex list. The rest of the vertices are then computed by recursively introducing new bisectors as specified in FIG. 17 and illustrated in FIG. 16.


The Process Miter Join (PMJ) method is used to compute edge boundary vertices when the selected pen type indicates a miter join type. Here the bisector of the two connected path segments is computed (see FIG. 18). Point Py on the concave side (see FIG. 18) is computed on the bisector based on the path radius R (i.e., based on one half the specified pen width). Point Px on the convex side is computed based on the miter limit length. If the limit is not set with the associated pen property, then Px is computed using the extensions of the two side boundaries (see FIG. 18).
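
As a rough illustration of the convex-side computation, the sketch below intersects the two outer boundary lines (each parallel to its path segment and offset by half the pen width) to obtain the miter point; the function names and the `convexLeft` flag are assumptions for this example, and the miter-limit clamp described above is only noted in a comment.

```cpp
// Illustrative miter-point computation (not the PMJ method itself).
#include <cmath>
#include <optional>

struct Pt   { double x, y; };
struct Line { Pt p; Pt d; };                     // a point on the line and its direction

static std::optional<Pt> intersect(const Line& a, const Line& b) {
    double denom = a.d.x * b.d.y - a.d.y * b.d.x;
    if (std::fabs(denom) < 1e-12) return std::nullopt;   // parallel: no finite miter point
    double t = ((b.p.x - a.p.x) * b.d.y - (b.p.y - a.p.y) * b.d.x) / denom;
    return Pt{ a.p.x + t * a.d.x, a.p.y + t * a.d.y };
}

// join: common join point; d1 = unit direction of travel into the join along the first
// segment; d2 = unit direction of travel out of the join along the second segment.
std::optional<Pt> miterPoint(Pt join, Pt d1, Pt d2, double halfWidth, bool convexLeft) {
    double s = convexLeft ? 1.0 : -1.0;
    Pt n1{ -d1.y * s, d1.x * s };                // normals pointing toward the convex side
    Pt n2{ -d2.y * s, d2.x * s };
    Line b1{ { join.x + n1.x * halfWidth, join.y + n1.y * halfWidth }, d1 };
    Line b2{ { join.x + n2.x * halfWidth, join.y + n2.y * halfWidth }, d2 };
    // If the pen specifies a miter limit, the result would be clamped to that distance
    // from `join` (falling back toward a bevel), as described in the text above.
    return intersect(b1, b2);
}
```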


The Process Bevel Join (PBJ) method is used to compute edge boundary vertices when the selected pen type indicates a bevel join type. The bisector of the two connected path segments is computed (see FIG. 20). Point Py is computed similar to the methods used within the PMJ method. Point Px is calculated on the bisector based on the pen width. Line PmPn is calculated perpendicular to the bisector line and Point Pm and Pn are the intersections with two side boundaries which are parallel to the related path segment. A final boundary shape is illustrated in FIG. 20.


The MC method (also referred to as the compositing method) receives the printing records and translates them into a set of closed contours that delineate the contiguous regions equivalent to those that would result from rendering (e.g., printing) the original file on an arbitrarily sized display. These printing records may be thought of as analogous to a computer graphics metafile (CGM) specification in that they are an ordered list of commands that may be used to reproduce a visual picture or image. The ISO specification is a four-part standard defining a file format for the application-independent capture, storage and transfer of graphical pictures. Compositing computer graphics metafiles (CGM) is the process of applying various Boolean operators among potentially overlapped primitive shapes specified within a file designed to create a visual image. On a raster-type device such as a computer's CRT display or inkjet printer, when a subset of vector commands overlaps or otherwise intersects with previously drawn or executed commands, the pixels within the overlapped areas are simply reset to the color specified by the more recent vector commands. Thus, potential redundancies within a metafile (i.e. situations where multiple commands repeatedly “paint” within the same area) are resolved through a process of rasterization in which more recent commands always take precedence over those that were previously executed. However, for many applications, the loss of flexibility that results from rasterization (e.g., loss of detailed outline information) makes it less suitable for developing a usable composite representation of a metafile's vector commands. Specifically, it may be desirable to eliminate redundancies within vector outlines by actually modifying the underlying outlines directly so that painting within any given area never occurs more than once (i.e., no overlapping occurs). This may provide such benefits as greater compression of picture information. Also, the result may be used for other applications such as computerized embroidery imprinting in which it is often undesirable to repeatedly sew or place stitches within a single area of fabric. Note that compositing is not a strict requirement of the print driver method disclosed here. Without compositing, embroidery data may still be generated separately for each of the individual underlying print records. However, there are many situations where such an approach yields embroidery data that may not be practical for actual production on embroidery equipment (e.g., sewing repeatedly over the same area or triggering excessive thread trims or redundant needle movements even when sewing a single same-colored contiguous area). Hence, compositing is included here as a desirable step to achieve a more consistent usable result for embroidery data generation.


The compositing method is comprised of four general operations: 1) Finding intersections among the edges of regions (e.g., polygonal boundary intersection). 2) Finding segment fill pairs. 3) Arranging segments and 4) Re-establishing segment lists and the resultant associated output regions.


The MC method first executes a Find Polygonal Object Boundary Intersection (FPOBI) method which permits the reliable and predictable detection of intersecting polygonal edges. This method makes use of the line sweep technique and algebraic predicates, but has also been further extended to handle additional requirements and degeneracies precipitated by the compositing operations. Some of the degeneracies have been tackled individually in previous work, but still do not facilitate a comprehensive and robust solution to the specific issues discussed here. Previous work includes a method for testing two simple polygonal objects using enveloping triangulations. Another method includes heuristics for detecting whether two polygons intersect using a grid-based method, a method that works optimally when the polygon edges are distributed in a uniform manner (which would not be typical of input cases dealt with here). This method offers some distinct benefits when compared to basic line-segment intersection algorithms. Numerous methods have been presented that solve the problem of finding intersections among line-segments. Unfortunately, it has also been shown that several prior art methods largely rely upon models of exact computation that may become computationally impractical for engineering solutions implemented using hardware which supports only IEEE floating point representations. One previous method proposed the plane-sweep algorithm for finding intersections among line-segments which solves the problem in time O((n+k)log n). This method also has been reported to be quite sensitive to numerical errors and, hence, must also rely upon a model of exact computation to produce correct results. Thus, one proposed solution relies upon algebraic predicates to alleviate many of the numerical issues prevalent in the line sweep algorithm and argues that this algorithm may be superior to others since it requires a comparatively lower degree predicate than that which would be required by other algorithms.


The MC method is different from Polygon Clipping or other operators that compute Boolean operations among specified regions. Algorithms that facilitate a Boolean set of operations that may be used to unite, subtract, or intersect solid objects with each other are a common component of many solid modeling systems. Polygon Boolean operations are derived from polygon clipping algorithms. Many polygon clipping algorithms have significant limitations (e.g., some algorithms are limited to convex polygons, some algorithms require that the clip polygon be rectangular; some algorithms do not allow polygon self-intersections). Commonly encountered CGMs (computer graphics metafiles) cannot be easily modified to adhere to such restrictions (including those produced by the print driver method described here). Even the simple case of detecting if one polygon lies within the boundaries of another polygon becomes less obvious when one of the input polygons intersects with itself (a degeneracy that is common within metafile records). Vatti's algorithm and Greiner and Hormann's algorithm can be used for testing polygon self-overlaps by counting the winding number. However, overlaps that result in zero-area portions of the polygon would still not be eliminated as is inherently required by the problem presented here. Many efficient polygon clipping algorithms have been published in the literature; however, a direct substitution of such algorithms to handle the task of metafile compositing is generally infeasible. Hence, the metafile compositing method described here is largely focused on developing Boolean operators suitable for input sets with large numbers of polygonal objects containing varied degeneracies, to provide a fast, robust, comprehensive and practical solution.


The MC method is related to the problem of map overlay studied within computational geometry. Solutions to this problem involve detecting and subsequently processing the intersections and unions of polygonal objects that are placed within a two-dimensional space (e.g., outlines of highways, rivers, lakes, etc.). Thus, if each vector command within a graphics metafile is considered as a layer in a geometric map, the techniques used in map overlay may be applied to the problem of metafile compositing. The input of a map overlay operation consists of two or more topologically structured layers and the output is a new layer in which the new areas in that layer are given attributes that are based on the input layers. The procedures are similar in that an overlay operation takes two or more data layers as input and results in an output layer, just as a metafile contains many records and the output may be considered as a single layer. However, there are several differences. First, the ordering of input records or layers within metafile compositing is important; if the input order is changed, the output may be different. Thus, when applying map overlay algorithms to metafile compositing, the time sequential features of the metafile records are taken into account. Second, in map overlay algorithms, different layers have different attributes. However, in metafile compositing, different records may have identical attributes, for example, the same color. Therefore, in certain situations, merging operations may be performed for same attribute layers when constructing the output. Finally, in map overlay one region may receive attributes from many layers; in compositing CGM, any given region typically only receives attributes from a single record.


CGM command records (e.g., the printing records) may contain degenerate polygonal objects, such as zero-length segments, zero-area polygonal objects, grazing and self-overlapping. Many records may also be drawn in the same region redundantly. The vertex list order is not specified. The closed area is the brush painting area, thus, some records may be drawn in clockwise order while others are drawn in counter-clockwise order. CGM records may be attribute filled using different modes (e.g., alternate edge/scanline versus winding rule fills). Filling modes must be considered to generate correct results.


CGM input records paint arbitrary, potentially overlapping regions sequentially where the ordering of records combined with their fill attributes is important. For example, for records with different fill colors, the newly drawn record hides the previously drawn record if they are overlapping or partially overlapping. Based on this property, the Boolean operation of “NOT” is performed if two input records have different colors and the newly drawn record has a higher drawing priority (e.g., is present later within the list of input records).


Overlapping records that have identical fill attributes (e.g., same color) in certain instances may be processed to eliminate the extra overlapping portion since this does not affect the visual appearance of the metafile. Thus, in these instances, a merging or logical “OR” operation may be performed.


Other prior art methods such as graph exploration for overlaying planar subdivisions do not address issues of numerical accuracy or degeneracy within input data sets. Unfortunately, without consideration of such issues, a practical and robust solution is difficult to obtain. Examples of such degeneracies include zero-length segments, zero-area polygonal objects, grazing, self-overlapping, and multiple congruent polygonal region boundaries. The MC method disclosed here has been shown to work for very large numbers of polygons where such input data may contain large numbers of degeneracies of the types mentioned previously. The method considers not only the original geometric coordinates, but also the original drawing sequence and filling modes. Output display is visually identical to the input, the difference being that all overlap of dissimilar attributes and all adjacency of like attributes are removed. The method's performance within the presence of degeneracies and large input sets is one feature which distinguishes it from previously published related work.


In order to disclose the details of the MC method some basic definitions are first provided. The terms defined may relate to terminology used here as well as in prior art that may discuss other methods that employ sweep-line approaches to solve problems within computational geometry. First, an “event point” is defined as a point in the plane at which the sweep algorithm evaluates and processes current input and data structures. Event points are ordered according to their y and then x coordinate values. In the MC method event points are the endpoints of line segments or computed intersection points between two or more line segments where these line segments represent the outer boundaries of polygonal regions. An “edge” refers to the connection between two event points (i.e., its end points). Its domain is a finite, non-self-intersecting open curve. An edge has two end-points and its length is greater than zero. E[AiAj] denotes an edge that has Ai and Aj as its end-points. A “segment” is similar to an edge in that it is also a closed line. It stores an upper-end-point and a lower-end-point. Let S[AiAj] denote a segment that has Ai and Aj as its end-points. Let Ai<y Aj denote that point Ai is smaller than Aj along the y-axis. Similarly, Ai<x Aj denotes that point Ai is smaller than Aj along the x-axis. If Ai<y Aj, or Ai=y Aj and Ai<x Aj, in the printer device coordinate scheme, Ai is the upper-end-point and Aj is the lower-end-point. A “segment pair” consists of two segments which intersect the sweep line and lie on opposite edges of a given region. It indicates an area between two segments that is part of a GDI fill area for a particular metafile record or polygonal object. A “segment pool” contains segments having a particular attribute (e.g., color) as inherited from the original input data (i.e., the attribute of its related polygonal object). Multiple segment pools are maintained within the MC method where there is one and only one pool for every attribute present within the input data. A segment pool invariant is that while segments may share end points, no segment within a given pool may be coincident with any other segment within that pool. Note: segments may be added to a particular attributed pool, even though originally they may not have exhibited that attribute. However, once added to the pool they then lose their previous attribute and inherit that of the pool. A half opened edge, which only includes the origin point, is called a “half-edge.” E[ViVj] denotes a Half-edge that has vertex Vi as its origin and vertex Vj as its destination. If one walks along a main-half-edge, the face of an associated region lies to the left. For a twin-half-edge, the face of an associated region lies to the right. A closed polygon P is described by the ordered set of its vertices V0, V1, V2, . . . , Vn, V0=Vn+1, where n>=3. It contains all main and twin half-edges consecutively connecting the vertices Vi, i.e. the main half-edges are E[V0V1), E[V1V2), . . . E[Vn−1Vn), E[VnVn+1)=E[VnV0) and the twin half-edges are E[VnVn−1), E[Vn−1Vn−2), . . . E[V1V0), E[V0V−1)=E[V0Vn). A “polygonal object” O is described by a set of polygons P0, P1, P2, . . . , Pn where P0 is the outer polygon, which is specified in a counter-clockwise order and P1, P2, . . . , Pn are inside P0 and are specified in clockwise order. In terms of metafile compositing, a polygonal object is a distinct, named set of attributes that represents a contiguous graphic region. 
The attributes hold data describing the graphic, such as color, drawing sequence, etc.
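
For concreteness, the small sketch below encodes the event-point ordering and the upper/lower endpoint convention just defined (smaller y first, then smaller x); the names are illustrative only.

```cpp
// Illustrative encoding of the event-point order and segment endpoint convention above.
struct Pt { int x, y; };

// True if event point a is processed before event point b (ordered by y, then x).
inline bool eventBefore(const Pt& a, const Pt& b) {
    return (a.y != b.y) ? (a.y < b.y) : (a.x < b.x);
}

struct Segment {
    Pt upper, lower;                             // upper-end-point precedes lower-end-point
    Segment(Pt a, Pt b) {
        if (eventBefore(a, b)) { upper = a; lower = b; }
        else                   { upper = b; lower = a; }
    }
};
```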


Let S be the set of segments of all polygonal objects in the plane. Let Q be the sorted vertices of segments (sorted by y and then x values) in the plane; these points will be evaluated as “event points” within the algorithm. Let τ be the sorted list that stores those segments that intersect with a sweep line. P is the pointer that indicates the current event point being evaluated within Q. Let U(P) be the set of segments which have P as their upper endpoint. Let L(P) be the subset of τ which has P as its lower endpoint. Let C(P) be the subset of τ which has P as its interior point, meaning P is on that segment but is not the endpoint. Sl(P) and Sr(P) denote, respectively, the left and right neighbor segments of P in τ. Let A be the collection of segments in τ (the status tree). Let Ml(A) be the left-most segment of A and Mr(A) be the right-most segment of A. Note: lines of pseudo-code shown in FIG. 25 represent an overview of the method used to find boundary intersections. Lines printed in bold represent modifications over that which was presented in previous methods.


There are many differences between the sweep-line methods disclosed here and other commonly known sweep-line algorithms. Other published algorithms do not address details on the treatment of special cases and degeneracies or, when present, such details are only partially explained. For example, some methods assume any two segments or curves will intersect at most at a single point, which may not be true. Here, an attempt is made to avoid such assumptions and fully consider the details of degeneracies to allow a comprehensive engineering solution.


A predicate arithmetic model is used to determine if two segments intersect in line 1 of FindNewEvent (see FIG. 25), an approximation of this intersection point is also computed and stored. Using algebraic predicates, the determination of whether two segments intersect is guaranteed to be correct as long as input data coordinates do not exceed what may be represented by 24-bit integers. In this specific application, input coordinates of metafile records are stored as 16-bit integers. However, the construction and storage of actual resultant intersection points does not have the same guarantee of accuracy and inevitably some rounding of results may occur potentially shifting the locations of intersection points from their true positions. Such rounding may potentially impact the final output in that certain polygonal vertices may be inaccurate to the extent that IEEE floating point arithmetic results yield slightly different values for their positions. However, particular care is taken such that this rounding will not prevent the method from constructing its output. This is primarily achieved by assuring some degree of consistency in the rounding that will occur and allowing the algorithm to effectively ignore such rounding. For example, when two segments intersect, where one or both of those segments emanate from previously computed intersections at one or more of their end points, the original end points of the related segment (rather than the “intersection end points”) are used for both detection and construction of an intersection point.
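
A minimal example of the kind of exact predicate referred to above is sketched below: because input coordinates are small integers (16-bit metafile coordinates, well within the 24-bit bound), 64-bit cross products never overflow, so the yes/no intersection decision is exact even though constructed intersection points may still be rounded. The helper names are assumptions for illustration.

```cpp
// Exact segment-intersection decision for small integer coordinates (illustrative).
#include <algorithm>
#include <cstdint>

struct P { int32_t x, y; };

// Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right turn, 0 collinear.
static int orientSign(P a, P b, P c) {
    int64_t v = (int64_t)(b.x - a.x) * (c.y - a.y) - (int64_t)(b.y - a.y) * (c.x - a.x);
    return (v > 0) - (v < 0);
}

static bool onSegment(P a, P b, P p) {           // assumes a, b, p are collinear
    return std::min(a.x, b.x) <= p.x && p.x <= std::max(a.x, b.x) &&
           std::min(a.y, b.y) <= p.y && p.y <= std::max(a.y, b.y);
}

// True if segments [ab] and [cd] intersect (shared endpoints and grazing count).
bool segmentsIntersect(P a, P b, P c, P d) {
    int o1 = orientSign(a, b, c), o2 = orientSign(a, b, d);
    int o3 = orientSign(c, d, a), o4 = orientSign(c, d, b);
    if (o1 * o2 < 0 && o3 * o4 < 0) return true;          // proper crossing
    if (o1 == 0 && onSegment(a, b, c)) return true;       // collinear / endpoint cases
    if (o2 == 0 && onSegment(a, b, d)) return true;
    if (o3 == 0 && onSegment(c, d, a)) return true;
    if (o4 == 0 && onSegment(c, d, b)) return true;
    return false;
}
```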


It has been suggested that the order of the segments in the status-tree corresponds to the order in which they are intersected by the sweep line just below the related event point. However, this appears to be insufficient in some cases (see example in FIG. 26). According to this method, the key value for [AB] cannot be found, because an intersection point below the sweep line is not present. Here, in such cases, a super-key may be used to sort the segments in the status-tree: the first attribute of the super-key is the x-coordinate of the point intersected by the sweep line and the segment at the event point; the second attribute of the super-key is the segment's slope.
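
A hedged sketch of such a super-key comparison is shown below; it assumes non-horizontal segments for brevity, and the names are illustrative rather than the status-tree implementation used by the method.

```cpp
// Illustrative "super-key" comparison for status-tree ordering: x at the current
// sweep y first, then slope to break ties among segments meeting at the event point.
struct Pt  { double x, y; };
struct Seg { Pt upper, lower; };                 // upper.y <= lower.y, non-horizontal assumed

static double xAtSweep(const Seg& s, double sweepY) {
    double t = (sweepY - s.upper.y) / (s.lower.y - s.upper.y);
    return s.upper.x + t * (s.lower.x - s.upper.x);
}

static double slopeKey(const Seg& s) {           // dx/dy orders segments fanning out of one point
    return (s.lower.x - s.upper.x) / (s.lower.y - s.upper.y);
}

// Strict weak ordering for the status structure at sweep position sweepY.
bool statusLess(const Seg& a, const Seg& b, double sweepY) {
    double xa = xAtSweep(a, sweepY), xb = xAtSweep(b, sweepY);
    if (xa != xb) return xa < xb;
    return slopeKey(a) < slopeKey(b);            // tie at the event point: compare slopes
}
```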


An intersection is a point where lines intersect by definition. This definition is used by most previously published work. However, for polygonal object intersection, this is not always applicable. If two segments from the same polygonal object intersect at both end points, this intersection may not be considered as an intersection of the object. Only intersections of segments that are from different polygonal objects should be reported. In lines 6, 17, 19 and 22 of HandleEventPoint and line 5 of FindNewEvent, segment classification is performed before reporting intersections. Typical CGM records cannot be assumed to be simple polygons. Rather, they tend to exhibit all types of deficiencies, such as self-intersections and grazing contact between multiple polygons (e.g. holes) even within a single polygonal object. The above algorithm can be modified slightly for detecting and finding self-overlapping intersections.


These compositing methods presented here are intended to eliminate redundant segments and re-establish link-listed polygonal objects. This is accomplished primarily through the creation and use of segment pools where segments having a particular shared attribute are organized together in a single pool. As the sweep-line process progresses, each segment (through its association with a segment pair) may either be discarded or moved to one or two segment pools. Another invariant of the sweep-line process regarding segment pools is that while segments may share end points, no segment within a given pool may be coincident with any other segment within that pool and no two segments will cross each other. Preservation of this invariant is largely addressed within the Overlapped Segments Selection Criteria algorithm summarized in FIG. 24. For example, lines 2 and 3 of the algorithm imply that Sm or Sn may be selected into different segment pools with different attributes or neither may be selected. Similarly, the duplication rule cannot generate coincident or duplicated segments to an individual segment pool. After this sweep completes, a segment pool has the property that traversing segments within the pool (via another sweep pattern) generates one or more cycles (i.e., closed contours containing no self-crossings).


Segment pairs (see definitions disclosed earlier in this specification) are found at each event-point (event-points include original segment end points and segment intersections) based on CGM filling rules. These pairs are intended to indicate areas between each pair that comprise filled portions of related polygonal objects. Finding segment pairs is a pre-processing step for segment arrangement (e.g. selection and duplication to segment pools) that effectively eliminates unneeded or redundant segments of a polygon (i.e. segments that have been occluded due to filling rules or self overlap). Similar to the algorithm used for finding intersections, it is assumed that a scan-line goes from top to bottom, halting at each event point. Segment pairs are easily located if the original related print or metafile record uses an alternate edge fill mode. More specifically, it can be done by just selecting the odd and even segments on the scan-line and pairing them up respectively. If a record and its related polygonal specification use a winding-rule fill mode, the original drawing direction must be stored and the fill depth must also be tracked. FIG. 22 depicts the algorithm used here for finding segment pairs when a winding-rule fill mode is specified.
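
FIG. 22 gives the actual procedure; as a simplified illustration of the underlying idea only, the sketch below walks the segments crossing the scan line from left to right while maintaining a winding counter, pairing the segment where the counter leaves zero with the segment where it returns to zero. The data types here are assumptions for the example.

```cpp
// Simplified illustration of winding-rule pairing (not the algorithm of FIG. 22).
#include <vector>

struct ScanSeg {
    int id;
    int dir;          // +1 if the original edge crosses the scan line in one direction, -1 in the other
};

struct SegPair { int leftId, rightId; };

// `sorted` holds the segments hit by the scan line, ordered left to right.
std::vector<SegPair> windingPairs(const std::vector<ScanSeg>& sorted) {
    std::vector<SegPair> pairs;
    int winding = 0, openLeft = -1;
    for (const ScanSeg& s : sorted) {
        int before = winding;
        winding += s.dir;
        if (before == 0 && winding != 0)        // fill starts: left segment of a pair
            openLeft = s.id;
        else if (before != 0 && winding == 0)   // fill ends: matching right segment
            pairs.push_back({ openLeft, s.id });
    }
    return pairs;
}
```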


Segment pairs may change at each event point. For example, at event point A in FIG. 6(a), segment pairs are {ABleft, CDright} and {EFleft, PQright}. While at event point D in FIG. 6(b), segment pairs are {ABleft, PQright} (i.e. the pair segment AB changes at different event points due to the winding rule fill mode).


The Segment Arrangement (SA) method described here determines at each “event point” whether an input segment should be eliminated, selected or duplicated based on metafile drawing and filling rules. Elimination means a segment that is drawn underneath other primitives will not be put into any segment pool. Selection means an original segment will be moved into a segment pool with similar attributes. Duplication means an original segment is copied into a segment pool with different attributes (where the copied segment then assumes the attributes of the pool into which it was copied). These three rules, shown in detail below constitute guidelines for the final arrangement algorithms. In general, segment selection and duplication are based on two factors: attribute values and age of the related polygonal object. A polygonal object is said to be younger if it appeared sequentially later within the list of metafile records. If a polygonal object is created earlier, it is considered older. For example, for differently colored objects, segments that are from younger objects may be selected and duplicated for those objects that are underneath or overlapped by them. These can be observed, in FIG. 7, where object C is specified last and its segments will be selected and copied for object B.


Rules for Segment Elimination, Selection and Duplication are described as follows: Let Sface(i) denote the face that is associated with segment S belonging to polygonal object i, where polygonal objects are ordered by their age. Note if j<i this indicates that the ith object is younger than the jth object. {SLi, SRi} denotes a segment pair where SLi denotes the left segment (of the pair) of the ith polygonal object at a specific event point and SRi denotes the right segment. According to the CGM filling method, the following selection and duplication rules are defined in order to separate the segments according to their attributes:


The “Elimination Rule” is defined as follows: if Sj is between any segment pair {SLi,SRi}, Sj will be hidden in either of the following two cases: Case 1: j<i or Case 2: Attributes(Sface(i))=Attributes(Sface(j)). If Sj is hidden, it will not be placed or duplicated into a segment pool.


The “Selection Rule” is defined as follows: Sj will be moved to a segment pool in either of the following two cases: Case 1: Sj is not inside or between any segment pair {SLi, SRi}, or Case 2: Of all segment pairs that Sj lies between, let {SLi,SRi} denote the youngest pair. If j>i and Attributes(Sface(i))≠Attributes(Sface(j)) Sj will be moved.


The “Duplication Rule” is defined as follows: Of all segment pairs that Sj lies between, let {SLi,SRi} denote the youngest pair. If j>i and Attributes(Sface(i))≠Attributes(Sface(j)), let Sj′ be the duplication of Sj where Attributes(S′face(i)) are assigned Attributes(Sface(i)) and Sj′ is placed into the associated segment pool.
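
The following sketch applies the three rules to a single segment at an event point, under the youngest-enclosing-pair reading of the rules above; the structures and names are illustrative, not the arrangement algorithm of FIG. 23.

```cpp
// Illustrative application of the Elimination, Selection and Duplication rules
// to one segment Sj at an event point (not the FIG. 23 algorithm itself).
#include <cstdint>
#include <optional>
#include <vector>

struct EnclosingPair { int age; uint32_t attribute; };   // age: larger = younger; attribute: e.g. color

struct Decision {
    bool select = false;                       // move Sj into the pool matching its own attribute
    std::optional<uint32_t> duplicateInto;     // also copy Sj into this pool (Duplication Rule)
};

Decision arrangeSegment(int segAge, uint32_t segAttribute,
                        const std::vector<EnclosingPair>& enclosing) {
    Decision d;
    if (enclosing.empty()) {                   // Selection Rule, Case 1: not inside any pair
        d.select = true;
        return d;
    }
    const EnclosingPair* youngest = &enclosing.front();
    for (const auto& p : enclosing)            // the youngest enclosing pair dominates
        if (p.age > youngest->age) youngest = &p;

    bool segIsYounger = segAge > youngest->age;
    bool sameAttr = (segAttribute == youngest->attribute);
    if (!segIsYounger || sameAttr)             // Elimination Rule: hidden, goes to no pool
        return d;
    d.select = true;                           // Selection Rule, Case 2
    d.duplicateInto = youngest->attribute;     // Duplication Rule: copy for the covered object
    return d;
}
```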


To further the operations of segment arrangement, an object stack is used to store active polygonal objects, where an object is considered to be active while scan lines continue to intersect with it. When the scan line hits the left segment of a segment pair, the object that is associated with that left segment is pushed on to the stack. Similarly, when the scan ray hits the right segment of a segment pair, the object associated with the right segment is popped off the stack.


Assume a ray comes from infinity on the left and moves toward infinity on the right. Let Sk denote a segment that intersects with the ray, where k=0, 1, . . . , n. At each event point, all segments are sorted from left to right (using the same method used previously for finding intersections) and stored in a queue. Therefore, S0 is the left most segment, and Sn is the right most segment.


It is not safe to assume that S0 through Sn do not overlap. It may be commonly found that many segments are coincident (i.e., share the same two end points). Such cases require additional bookkeeping and are discussed next. FIG. 23 delineates the general elimination, selection and duplication algorithm.


Lines 1 and 4 in FIG. 23 must be modified when several segments are coincident, because otherwise any one of these coincident segments could be arbitrarily or unpredictably hit first by the scan ray. In such cases, coincident segments are reordered and grouped into a “right group” and a “left group” where each group is then sorted. Specifically, Let S be the coincident segments which intersect with the scan ray. Let Sleft be the segments in S that belong to the left group (i.e. segments that are marked as the left segment within their corresponding segment pairs) and similarly, let Sright be the remaining segments in S that are marked as right segments. Sleft and Sright are then sorted by their related polygonal object's age (ascending order, youngest first). Let Sm and Sn denote the youngest segments within Sleft and Sright respectively. Ø denotes an empty segment set. Thus, the modified segment arrangement criteria for the situation of multiple coincident segments are refined in FIG. 24.


Note that in this special case, “Not Selected” implies “elimination”, therefore, the elimination criterion is omitted altogether. Additionally, according to these new coincident segment selection and duplication rules, Sright will be processed first then Sleft. In the case of duplication, if there is at least one left segment and one right segment overlapping, even if they are not a segment pair, they will not be used for duplication. For selection, only the youngest left segment and youngest right segment will be selected. An example is illustrated in FIG. 8. Let SR2 SL3 SR4 SR6 SL7 SR8 SL8 SR9 in FIG. 8(a) be overlapping segments where their order represents their intersection sequence with the scan ray. In this case, only SR9 and SL8 will be selected if the related face attributes of SR9 and SL8 are different. However, if the attributes of SR9 and SL8 are identical, neither SR9 nor SL8 will be selected or copied.


After segment pools are populated, a Generate Composite Objects (GCO) method must execute to generate new resultant objects that represent the final composite shapes within the image. This method effectively builds new objects using the segments contained within each pool. As a segment pool may contain segments inherited from initially unrelated or differently attributed polygonal objects, there is no inherent linking or sequencing among them (other than obviously being placed within the same pool). Thus, a final step is to reconstruct a consistent and uniform traversal of such segments to indicate the boundaries of the one or more polygonal objects contained in a pool (i.e. so objects are comprised of an outer edge contour specified in counter clockwise vertex order and zero or more inner edge contours, indicating holes, specified in clockwise order). This is accomplished most efficiently by performing one final sweep-line process (using the rules below) on each pool to construct the appropriate contours as just described.


Rule 1: Segment traversal in each segment pool starts from an unvisited odd-segment at each event point, where the even/odd attribute of a segment is determined as if alternate edge filling rules were applied. Each segment can only be visited once and all segments in the pool must be visited. For example, the arrowed lines in FIG. 10 indicate the starting segments at event points V1, P1 and M1.


Rule 2: If there is an unvisited even numbered segment on the left of an odd numbered segment emanating from the same event point at the start of a traversal, the traversal path forms a hole. Conversely, if the segment on the left of an odd numbered segment has been visited, the traversal path forms the outer edge of a polygonal object (see example in FIG. 10).


Rule 3: At each vertex during traversal, if there are two or more edges unvisited, the leftmost edge is chosen if the traversal is along an outside boundary whereas the rightmost edge is chosen if it is a hole (as previously determined using rules 1 & 2). FIG. 11 shows how this rule is applied.
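
As an illustration of Rule 3, the sketch below chooses the next edge by the turn angle from the incoming direction, taking the leftmost turn on an outer boundary and the rightmost turn inside a hole; it assumes a y-up coordinate system and a non-empty candidate list, and the names are illustrative.

```cpp
// Illustrative choice of the next traversal edge per Rule 3 (y-up coordinates assumed).
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Signed turn from direction `in` to direction `out`, in (-pi, pi]; positive = left (CCW) turn.
static double turnAngle(Pt in, Pt out) {
    double cross = in.x * out.y - in.y * out.x;
    double dot   = in.x * out.x + in.y * out.y;
    return std::atan2(cross, dot);
}

// candidates: directions of the unvisited edges leaving the current vertex (non-empty).
// Returns the index of the edge to follow next.
std::size_t chooseNextEdge(Pt incomingDir, const std::vector<Pt>& candidates, bool traversingHole) {
    std::size_t best = 0;
    double bestTurn = turnAngle(incomingDir, candidates[0]);
    for (std::size_t i = 1; i < candidates.size(); ++i) {
        double t = turnAngle(incomingDir, candidates[i]);
        // Outer boundary: leftmost (largest CCW) turn; hole: rightmost (smallest) turn.
        bool better = traversingHole ? (t < bestTurn) : (t > bestTurn);
        if (better) { best = i; bestTurn = t; }
    }
    return best;
}
```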


In addition to pool attributes (i.e. pool ID, color etc.), each segment is also associated with its twin segment which is stored in a different pool (analogous to the two half edges that comprise any edge). This association allows border information to be constructed for each object when a traversal is performed in each segment pool. More specifically, the twin segment's attributes are checked during the traversal. If the twin segment's attribute information is changed (e.g. the adjacent object with which this object borders has changed), the starting point of the edge is flagged as an “Adjacent Object Transfer Point” and the border ID is set to its twin segment's ID (where IDs are uniquely assigned to every resultant object generated). This border information basically specifies exactly where objects are touching or adjacent to other objects and can be quite useful when generating embroidery data. For example, to ensure solid registration (with no visible gap between adjacent objects) it may be useful to modify the embroidery generated for one object (appearing earlier in a sewing sequence) such that it extends or partially overlaps underneath another object to be sewn later in a sewing sequence only where the two objects are adjacent to one another. This will ensure that even if some visible shrinkage is present in the embroidered representation (i.e. due to stitch tension, etc.), the two objects will still be visibly adjacent to each other with no apparent gap. This auto-overlap type feature is difficult to facilitate if border information is not generated for each object.
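
A small sketch of tagging transfer points during pool traversal follows; the structures are assumptions for illustration and are not the actual border bookkeeping used by the method.

```cpp
// Illustrative marking of "Adjacent Object Transfer Points" while walking an outline.
#include <vector>

struct TraversedEdge {
    int startVertex;     // vertex at which this edge begins
    int twinOwnerId;     // ID of the object on the other side of the edge (from its twin segment)
};

struct BorderMark { int vertex; int borderId; };

std::vector<BorderMark> markBorders(const std::vector<TraversedEdge>& outline) {
    std::vector<BorderMark> marks;
    int prevOwner = -1;                                  // sentinel: no neighbor seen yet
    for (const TraversedEdge& e : outline) {
        if (e.twinOwnerId != prevOwner) {                // adjacent object changes here
            marks.push_back({ e.startVertex, e.twinOwnerId });
            prevOwner = e.twinOwnerId;
        }
    }
    return marks;
}
```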


After the MC method is executed, embroidery primitive data generation can proceed by translating objects into specific embroidery stitching patterns. One embodiment of this method executes as disclosed in U.S. Pat. Nos. 6,397,120, 6,804,573, 6,836,695 and 6,947,808 where embroidery primitive control points are generated based on the geometric properties of the related shapes. Common border information (as mentioned above and referred to within the patents) further guides this process. After control points are generated, the actual x,y coordinates of stitch end points are produced by a stitch generation method. These end-points may then be easily reformed into any one of dozens of different proprietary machine file formats for viewing in editing programs or direct download for production on actual embroidery sewing equipment.

Claims
  • 1. A method to convert image data to embroidery data, comprising: converting, with a processor, image data representing an image to first vector data; converting, with the processor, the first vector data into component data structures that specify regions within the image; converting, with the processor, a first one of the component data structures into a fill shape including second vector data; converting, with the processor, a second one of the component data structures into a stroke shape including third vector data; converting the fill shape and the stroke shape to a set of non-overlapping contiguous regions by specifying an order of a set of contours defining the set of non-overlapping contiguous regions; and generating, with the processor, embroidery data structures using the fill shape and the stroke shape.
  • 2. A method as defined in claim 1, wherein the image data includes at least one of line data, Bezier curve data, a font glyph, or a raster operation.
  • 3. A method as defined in claim 1, wherein the converting of the first one of the component data structures includes specifying a brush type and a color.
  • 4. A method as defined in claim 1, wherein the converting of the second one of the component data structures includes specifying at least one of a pen color, a pen width, an end cap, or a join type.
  • 5. A method as defined in claim 1, wherein the converting of the fill shape and the stroke shape includes removing a redundancy in the component data structures corresponding to a location within the image data.
  • 6. An apparatus to convert image data to embroidery data, comprising: a processor; and a memory coupled to the processor, the memory comprising instructions which, when executed by the processor, cause the processor to at least: convert image data representing an image to first vector data; convert the first vector data into component data structures that specify regions within the image; convert a first one of the component data structures into a fill shape including second vector data; convert a second one of the component data structures into a stroke shape including third vector data; convert the fill shape and the stroke shape to a set of non-overlapping contiguous regions by specifying an order of a set of contours defining the set of non-overlapping contiguous regions; and generate embroidery data structures using the fill shape and the stroke shape.
  • 7. An apparatus as defined in claim 6, wherein the image data includes at least one of line data, Bezier curve data, a font glyph, or a raster operation.
  • 8. An apparatus as defined in claim 6, wherein the instructions are to cause the processor to specify a brush type and a color to convert the first one of the component data structures.
  • 9. An apparatus as defined in claim 6, wherein the instructions are to cause the processor to specify at least one of a pen color, a pen width, an end cap, or a join type to convert the second one of the component data structures.
  • 10. An apparatus as defined in claim 6, wherein the instructions are to cause the processor to convert the fill shape and the stroke shape by removing a redundancy in the component data structures corresponding to a location within the image data.
  • 11. An article of manufacture comprising machine readable instructions stored on a non-transitory computer readable medium which, when executed, cause a processor to at least: convert image data representing an image to first vector data; convert the first vector data into component data structures that specify regions within the image; convert a first one of the component data structures into a fill shape including second vector data; convert a second one of the component data structures into a stroke shape including third vector data; convert the fill shape and the stroke shape to a set of non-overlapping contiguous regions by specifying an order of a set of contours defining the set of non-overlapping contiguous regions; and generate embroidery data structures using the fill shape and the stroke shape.
  • 12. An article of manufacture as defined in claim 11, wherein the instructions are to cause the processor to specify a brush type and a color to convert the first one of the component data structures.
  • 13. An article of manufacture as defined in claim 11, wherein the instructions are to cause the processor to specify at least one of a pen color, a pen width, an end cap, or a join type to convert the second one of the component data structures.
  • 14. An article of manufacture as defined in claim 11, wherein the instructions are to cause the processor to convert the fill shape and the stroke shape by removing a redundancy in the component data structures corresponding to a location within the image data.
RELATED APPLICATIONS

This patent arises from a continuation of U.S. patent application Ser. No. 14/174,540, filed Feb. 6, 2014, entitled “Printer Driver Systems and Methods for Automatic Generation of Embroidery Designs,” which is a continuation of U.S. patent application Ser. No. 13/346,338 (now U.S. Pat. No. 8,660,683), filed Jan. 9, 2012, entitled “Printer Driver Systems and Methods for Automatic Generation of Embroidery Designs,” which is a continuation of U.S. patent application Ser. No. 11/556,008 (now U.S. Pat. No. 8,095,232), filed on Nov. 2, 2006, entitled “Printer Driver Systems and Methods for Automatic Generation of Embroidery Designs,” which claims priority from U.S. Provisional Patent Application No. 60/732,831, filed on Nov. 2, 2005, entitled “Printer Driver Systems and Methods for Automatic Generation of Embroidery Designs.” The entireties of U.S. patent application Ser. No. 14/174,540, U.S. patent application Ser. No. 13/346,338, U.S. patent application Ser. No. 11/556,008, and U.S. Provisional Patent Application No. 60/732,831 are hereby incorporated by reference.

US Referenced Citations (30)
Number Name Date Kind
4991524 Ozaki Feb 1991 A
5191536 Komuro et al. Mar 1993 A
5320054 Asano Jun 1994 A
5410976 Matsubara May 1995 A
5823127 Mizuno Oct 1998 A
5880963 Futamura Mar 1999 A
6010238 Kotaki Jan 2000 A
6192292 Taguchi Feb 2001 B1
6356648 Taguchi Mar 2002 B1
6397120 Goldman May 2002 B1
6629015 Yamada Sep 2003 B2
6690988 Kaymer et al. Feb 2004 B2
6968255 Dimaridis et al. Nov 2005 B1
7228195 Hagino Jun 2007 B2
8095232 Goldman et al. Jan 2012 B2
8660683 Goldman et al. Feb 2014 B2
20020007228 Goldman Jan 2002 A1
20020038162 Yamada Mar 2002 A1
20030074100 Kaymer et al. Apr 2003 A1
20030212470 Kaymer Nov 2003 A1
20040243272 Goldman Dec 2004 A1
20040243273 Goldman Dec 2004 A1
20040243274 Goldman Dec 2004 A1
20040243275 Goldman Dec 2004 A1
20050182508 Niimi et al. Aug 2005 A1
20050234584 Mizuno et al. Oct 2005 A1
20060096510 Kuki et al. May 2006 A1
20100106283 Harvill et al. Apr 2010 A1
20100108754 Kahn May 2010 A1
20140156054 Goldman et al. Jun 2014 A1
Non-Patent Literature Citations (11)
Entry
Song et al., “Algorithms for Vector Graphic Optimization and Compression,” Advances of Computer Graphics, 2006, pp. 665-672, Springer-Verlag, Berlin/Heidelberg. (8 pages).
“Definition of: printer driver,” http://www.pcmag.com/encyclopedia/term/49695/printer-driver, retrieved from the internet on Jun. 12, 2015, 2 pages.
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 14/174,540, Dec. 2, 2014, 20 pages.
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 14/174,540, Jun. 15, 2015, 16 pages.
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 11/556,008, Jul. 24, 2009, 16 pages.
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 11/556,008, Nov. 3, 2009, 20 pages.
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 11/556,008, Jun. 9, 2010, 14 pages.
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 11/556,008, Sep. 7, 2011, 13 pages.
United States Patent and Trademark Office, “Non-Final Office Action,” issued in connection with U.S. Appl. No. 13/346,338, Aug. 1, 2012, 19 pages.
United States Patent and Trademark Office, “Final Office Action,” issued in connection with U.S. Appl. No. 13/346,338, Apr. 22, 2013, 15 pages.
United States Patent and Trademark Office, “Notice of Allowance,” issued in connection with U.S. Appl. No. 13/346,338, Oct. 8, 2013, 18 pages.
Related Publications (1)
Number Date Country
20160040340 A1 Feb 2016 US
Provisional Applications (1)
Number Date Country
60732831 Nov 2005 US
Continuations (3)
Number Date Country
Parent 14174540 Feb 2014 US
Child 14886383 US
Parent 13346338 Jan 2012 US
Child 14174540 US
Parent 11556008 Nov 2006 US
Child 13346338 US