The present disclosure relates to systems and methods to create a mesh geometry for three dimensional beveled shapes. More particularly, the present disclosure relates to systems and methods for creating a polygonal mesh geometry, based on a straight skeleton graph of an outline shape of an object.
One of the common techniques to define three dimensional (3D) objects in a 3D modeling application is to extrude two dimensional (2D) outlines. The technique is often used to create 3D text or to give depth to various graphical elements. Specifically, a shape is sampled by an arbitrary algorithm to discretize it, creating a set of points, each connected to the next point by a line segment. In the extrusion, each of these line segments builds one quadrangle and forms one of its four edges. The other three edges are created by cloning the discretized shape in the extrusion direction and connecting each line segment's start and end points to the corresponding points of the cloned line segment. The created geometry can be defined in distinct parts: the surfaces created on the inside of the outlines, called “caps”; and the surfaces that build the extrusion of the outlines, called “hulls”.
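By way of illustration only, the extrusion of a discretized outline into hull quadrangles described above may be sketched as follows; the function and variable names are hypothetical and are not part of the disclosed embodiments.

```python
# Illustrative sketch: extruding a discretized 2D outline into the "hull"
# quadrangles described above. Names are hypothetical.
from typing import List, Tuple

Point2 = Tuple[float, float]
Point3 = Tuple[float, float, float]

def extrude_outline(outline: List[Point2], depth: float):
    """Return 3D points and quadrangles forming the hull of the extrusion."""
    n = len(outline)
    # The original outline at depth 0 and its clone moved in the extrusion direction.
    points: List[Point3] = ([(x, y, 0.0) for x, y in outline]
                            + [(x, y, depth) for x, y in outline])
    quads = []
    for i in range(n):
        j = (i + 1) % n  # the next point; the outline is a closed loop
        # One quadrangle per outline segment: the segment, its clone, and the
        # two edges connecting corresponding start and end points.
        quads.append((i, j, n + j, n + i))
    return points, quads
```

The caps would additionally require a triangulation of the outline interiors, which is outside the scope of this sketch.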
To improve the aesthetic of the created 3D shape and to emulate real world shapes, the border edge between the caps and the hulls can be beveled, also referred to as chamfered. When combined with state of the art image synthesis techniques, these beveled edges reflect light to a virtual camera, which greatly improves visual fidelity, as real world objects rarely have perfect edges and thus also reflect real world light in a similar way.
Creating the mesh geometry for the parts that make up the beveled shape is very similar to creating the extruded geometry. This bevel operation can, for example, be performed by the following operations: clone the outline; shrink the outline locally in a direction that points to the inside of the original outline as far as the bevel size requires (this process is called an inverse offset, and the outline formed as a result of the inverse offset is called an inverse offset outline); and connect the original outline line segments with the shrunk island (i.e., the inverse offset outline) as described for extrusion. These operations can be applied multiple times to create additional rings of geometry in the direction of the inverse offset, for example, to allow rounded bevel shapes by placing all of the sub-steps on an arc.
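A minimal sketch of the naive form of this inverse offset, assuming each outline point can simply be moved along its inner angle bisector (clockwise outline, y-up coordinates), is given below; the names are hypothetical, and, as discussed in the detailed description, this naive approach breaks down once the offset distance becomes large enough.

```python
import math
from typing import List, Tuple

Point2 = Tuple[float, float]

def _normalize(v: Point2) -> Point2:
    length = math.hypot(v[0], v[1]) or 1e-9
    return (v[0] / length, v[1] / length)

def naive_inverse_offset(outline: List[Point2], distance: float) -> List[Point2]:
    """Move each point of a clockwise outline inward along its angle bisector.
    Illustrative only: no events are detected, so segments may flip for large offsets."""
    n = len(outline)
    inset = []
    for i in range(n):
        px, py = outline[i - 1]          # previous point (wraps around the closed loop)
        cx, cy = outline[i]
        nx, ny = outline[(i + 1) % n]    # next point
        d1 = _normalize((cx - px, cy - py))
        d2 = _normalize((nx - cx, ny - cy))
        # Inward normals of the two adjacent segments (clockwise outline, y-up).
        n1 = (d1[1], -d1[0])
        n2 = (d2[1], -d2[0])
        bisector = _normalize((n1[0] + n2[0], n1[1] + n2[1]))
        # Scale so the perpendicular distance to both adjacent segments equals `distance`.
        scale = distance / max(1e-9, bisector[0] * n1[0] + bisector[1] * n1[1])
        inset.append((cx + bisector[0] * scale, cy + bisector[1] * scale))
    return inset
```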
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art, or suggestions of the prior art, by inclusion in this section.
One embodiment provides a computer-implemented method for polygonal mesh geometry extraction for a bevel operation in a modeling application. The method comprises: receiving an original shape outline; determining a straight skeleton graph of the original shape outline, the straight skeleton graph comprising a plurality of edges; determining one or more inverse offset outlines of the original shape outline based on the straight skeleton graph; determining one or more polygons based on a union of the straight skeleton graph, the original shape outline, and the one or more inverse offset outlines, the one or more polygons including one or more graph polygons and one or more sub-polygons; and generating a beveled shape of the original shape outline based on the one or more polygons.
Another embodiment provides a system for polygonal mesh geometry extraction for a bevel operation in a modeling application. The system comprises one or more processors and at least one non-transitory computer readable medium storing instructions which, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving an original shape outline; determining a straight skeleton graph of the original shape outline, the straight skeleton graph comprising a plurality of edges; determining one or more inverse offset outlines of the original shape outline based on the straight skeleton graph; determining one or more polygons based on a union of the straight skeleton graph, the original shape outline, and the one or more inverse offset outlines, the one or more polygons including one or more graph polygons and one or more sub-polygons; and generating a beveled shape of the original shape outline based on the one or more polygons.
Another embodiment provides at least one non-transitory computer readable medium for polygonal mesh geometry extraction for a bevel operation in a modeling application. The at least one non-transitory computer readable medium stores instructions which, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving an original shape outline; determining a straight skeleton graph of the original shape outline, the straight skeleton graph comprising a plurality of edges; determining one or more inverse offset outlines of the original shape outline based on the straight skeleton graph; determining one or more polygons based on a union of the straight skeleton graph, the original shape outline, and the one or more inverse offset outlines, the one or more polygons including one or more graph polygons and one or more sub-polygons; and generating a beveled shape of the original shape outline based on the one or more polygons.
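For orientation only, the operations recited in the preceding embodiments may be read as the following high-level flow. The helper callables are hypothetical placeholders for the steps detailed later in this description and are passed in rather than defined here.

```python
from typing import Callable, List, Sequence

def generate_beveled_shape(original_outline: Sequence,
                           offsets: Sequence[float],
                           compute_straight_skeleton: Callable,
                           extract_inverse_offset: Callable,
                           extract_polygons: Callable,
                           build_beveled_mesh: Callable):
    """Illustrative flow of the recited operations; all helpers are placeholders."""
    # Determine the straight skeleton graph (a plurality of edges and nodes).
    skeleton = compute_straight_skeleton(original_outline)
    # Determine one or more inverse offset outlines based on the skeleton.
    inverse_offsets: List = [extract_inverse_offset(skeleton, d) for d in offsets]
    # Determine polygons from the union of the skeleton, the original outline,
    # and the inverse offset outlines (graph polygons and sub-polygons).
    polygons = extract_polygons(skeleton, original_outline, inverse_offsets)
    # Generate the beveled shape based on the polygons.
    return build_beveled_mesh(original_outline, polygons)
```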
Additional objects and advantages of the disclosed embodiments will be set forth in part in the description that follows, and in part will be apparent from the description, or may be learned by practice of the disclosed embodiments. The objects and advantages of the disclosed embodiments will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate various exemplary embodiments and together with the description, serve to explain the principles of the disclosed embodiments.
The following embodiments describe systems and methods for creating a mesh geometry for 3D beveled shapes, and, more particularly, for creating a closed polygonal mesh without overlapping artifacts.
In a 3D modeling application, the inverse offset operation during the generation of bevel geometry can become a complicated problem in convex areas, as the outline points are moved independently of one another. This can move them closer to each other with increasing offset distance, to the point of flipping the orientation of the line segments connecting them, as shown in
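A small, self-contained check of this failure mode can make the overlap artifact concrete (hypothetical names, for illustration only): after a naive per-point inset, a segment whose endpoints have moved past each other reverses its direction relative to the original segment.

```python
from typing import Tuple

Point2 = Tuple[float, float]

def segment_flipped(orig_a: Point2, orig_b: Point2,
                    inset_a: Point2, inset_b: Point2) -> bool:
    """True if the inset segment points opposite to the original segment,
    i.e., its endpoints have moved past each other (the overlap artifact)."""
    ox, oy = orig_b[0] - orig_a[0], orig_b[1] - orig_a[1]
    ix, iy = inset_b[0] - inset_a[0], inset_b[1] - inset_a[1]
    return ox * ix + oy * iy < 0.0  # a negative dot product indicates a flipped orientation
```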
One example algorithm that may solve the above-described problem calculates the “straight skeleton” of the outline, called a straight skeleton graph. The straight skeleton graph contains the path of each outline point with increasing offset distance from its original position. The straight skeleton detects and resolves all events that would create the previously mentioned problems, providing a data structure from which an overlap-free inverse offset at any offset distance can be extracted.
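One way such an extraction could look is sketched below, under the simplifying assumptions that each skeleton edge stores its endpoint positions and the offsets at which they occur, and that a traced point's position varies linearly with the offset along the edge; the names are hypothetical, and assembling the interpolated points into ordered outlines is omitted.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point2 = Tuple[float, float]

@dataclass
class SkeletonEdge:
    start_pos: Point2
    end_pos: Point2
    start_offset: float  # offset distance at which the traced point starts on this edge
    end_offset: float    # offset distance at which the edge ends (the event node)

def point_at_offset(edge: SkeletonEdge, d: float) -> Optional[Point2]:
    """Position where the inverse offset outline at distance d crosses this edge,
    or None if the edge does not span that offset."""
    if not (edge.start_offset <= d <= edge.end_offset):
        return None
    span = edge.end_offset - edge.start_offset
    t = 0.0 if span == 0.0 else (d - edge.start_offset) / span
    return (edge.start_pos[0] + t * (edge.end_pos[0] - edge.start_pos[0]),
            edge.start_pos[1] + t * (edge.end_pos[1] - edge.start_pos[1]))
```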
The technique using the straight skeleton graph can be applied to creating the inverse offset of an outline in cases where the inverse offset of an outline is the final output of the operation. Such application might pose another problem, as the conventional approaches to mesh geometry generation for beveled shapes require the original outline and the inverse offset outline to have the same number of points and line segments, making it easy to connect the outline segments with the inverse offset outline segments to create the final geometry. With the technique using the straight skeleton graph, the inverse offset outline will have a different number of points and/or line segments, so the matching cannot be performed as easily once at least one event that would otherwise create an overlap has been processed by the straight skeleton. These overlap events are all internal crossings of the straight skeleton graph with three or more lines connected to them. A new approach, applicable to a straight skeleton graph, is needed to extract the offset geometry.
The benefits of the techniques proposed in the current disclosure include: (1) an approach to create a mesh geometry used for 3D beveled shapes that have been created out of a 2D shape without overlapping artifacts (e.g.,
Starting from an outline described by clockwise-ordered 2D points and the internal straight skeleton graph of that outline, the goal of the proposed techniques is to create a closed polygonal mesh, consisting mainly of quadrangles and triangles, whose edges are the union of the straight skeleton of the outline and multiple inverse offset outlines with arbitrarily defined offsets. The resultant polygonal mesh should have the following structure: polygons are defined by a list of indices, which in turn reference a list of 2D positions. Each point of the output mesh may be stored only once, and all polygons using that position may reference the same point. Such an arrangement guarantees a connected polygonal mesh.
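A minimal sketch of such an indexed mesh structure is shown below. The class and method names are hypothetical; in particular, the sketch assumes that shared points can be identified by exact coordinate equality, whereas a practical implementation might weld points within a tolerance.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point2 = Tuple[float, float]

@dataclass
class IndexedMesh:
    """Polygons are lists of indices into a shared list of 2D positions,
    so polygons that use the same position reference the same point."""
    positions: List[Point2] = field(default_factory=list)
    polygons: List[List[int]] = field(default_factory=list)
    _index_of: Dict[Point2, int] = field(default_factory=dict)

    def add_polygon(self, points: List[Point2]) -> None:
        indices = []
        for p in points:
            if p not in self._index_of:            # store each position only once
                self._index_of[p] = len(self.positions)
                self.positions.append(p)
            indices.append(self._index_of[p])      # reuse the shared index otherwise
        self.polygons.append(indices)
```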
The subject matter of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments. An embodiment or implementation described herein as “exemplary” is not to be construed as preferred or advantageous, for example, over other embodiments or implementations; rather, it is intended to reflect or indicate that the embodiment(s) is/are “example” embodiment(s). Subject matter may be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any exemplary embodiments set forth herein; exemplary embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware or any combination thereof. The following detailed description is, therefore, not intended to be taken in a limiting sense.
Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of exemplary embodiments in whole or in part.
The terminology used below may be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the present disclosure. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section. It should also be noted that all numeric values disclosed herein may have a variation of ±10% (unless a different variation is specified) from the disclosed numeric value. Further, all relative terms such as “about,” “substantially,” “approximately,” etc. are used to indicate a possible variation of ±10% (unless noted otherwise or another variation is specified).
Referring now to the appended drawings,
The proposed mesh geometry generation process may be performed locally at the client device 510 by the modeling application 520 (i.e., by the geometry generation engine 515 that is part of the modeling application 520) and/or by the geometry generation engine 535 residing in the server 530. In the client device 510, the geometry generation engine 515 may function as a part of the modeling application 520. The client device 510 may be a computing device consistent with or similar to the computing device depicted in
The geometry generation engine 515 may be part of a software application that is installed on the client device 510. For example, the geometry generation engine 515 may be part of a modeling application 520. Likewise, the geometry generation engine 515 may be implemented with any software application 520 in which a need for geometry generation may arise, or may itself be a standalone application in connection with another software application in need of such geometry generation and/or related parameters.
As shown by step 710, the modeling application 520 may enable a user to select the desired outline and degree of bevel applicable to the received original shape outline. The original shape outline may be an outline of text, as shown in the figures previously described, or may be an outline of an icon, an object, or other graphical element.
As explained above, at step 720, the geometry generation engine 515 or 535 may determine a straight skeleton graph of the original shape outline, wherein the straight skeleton graph comprises a plurality of edges. The straight skeleton graph may be determined using any now known or future-developed technique. Regardless of the technique used, the straight skeleton graph must meet a number of requirements: (1) each event in the calculation of the straight skeleton graph should be represented as a node in the graph (also known as a crossing), and each path an outline point took to reach that event should be represented as an edge; (2) for each edge, the start and end nodes must be known; (3) each graph node (i.e., each node on the straight skeleton graph) must be traversable from an incoming edge to the edge next to it in counterclockwise order; (4) each graph node needs to have the offset stored at which the event happened and the 2D position at which it happened; and (5) a graph node that was the start of an outline point needs to be distinguishable.
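The following sketch shows one hypothetical data structure satisfying requirements (1) through (5); it is illustrative only and uses identity-based node and edge objects rather than any particular straight skeleton implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point2 = Tuple[float, float]

# Requirement (1): events are nodes; the paths outline points take are edges.
@dataclass(eq=False)  # identity-based equality: each node is an individual graph element
class GraphNode:
    position: Point2            # requirement (4): 2D position of the event
    offset: float               # requirement (4): offset at which the event happened
    is_outline_node: bool       # requirement (5): marks nodes that start at outline points
    edges: List["GraphEdge"] = field(default_factory=list)  # stored in counterclockwise order

@dataclass(eq=False)
class GraphEdge:
    start: GraphNode            # requirement (2): start node is known
    end: GraphNode              # requirement (2): end node is known

    def other_end(self, node: GraphNode) -> GraphNode:
        return self.end if node is self.start else self.start

def next_edge_ccw(node: GraphNode, incoming: GraphEdge) -> GraphEdge:
    """Requirement (3): traverse from an incoming edge to the edge next to it
    in counterclockwise order around the node."""
    i = node.edges.index(incoming)
    return node.edges[(i + 1) % len(node.edges)]
```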
With continuing reference to
In step 920, the geometry generation engine 515 or 535 may determine an edge connected to the outline node, and in step 925, may traverse the edge to determine an internal node. In step 930, the geometry generation engine 515 or 535 may set the determined internal node as a current internal node, and in step 935, may store the current internal node in the same node list. In step 940, the geometry generation engine 515 or 535 may determine another edge connected to the current internal node. In one embodiment, the another edge is in a counterclockwise order from the edge traversed to reach the internal node. In step 945, the geometry generation engine 515 or 535 may determine whether traversal of the another edge determined in step 940 leads to another internal node or another outline node. If the traversed edge leads to another internal node, the geometry generation engine 515 or 535 may set the another internal node as the current internal node in step 955, and then the method 900 returns to step 935 for another iteration. On the other hand, if the traversed edge leads to another outline node, signaling that the graph polygon extraction of the particular graph polygon being processed is complete, the geometry generation engine 515 or 535 may store the another outline node in the node list in step 950.
Step 910 of the method 900 is to determine an outline node on an original shape outline, for example, the original shape outline 830 in
In step 915, an outline node, such as one of outline nodes 820A-820E in
The edge connected to the outline node, determined in step 920, is an edge forming a part of the straight skeleton graph. For example, in
Once the edge connected to the outline node is determined in step 920, that edge is traversed in order to determine an internal node. The internal node may be, for example, the internal node 825A in
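Assuming the hypothetical GraphNode and GraphEdge structure sketched above, the traversal of method 900 could be expressed as follows; this is an illustrative reading, not the only possible implementation.

```python
def extract_graph_polygon(outline_node, first_edge):
    """Walk one graph polygon: start at an outline node, repeatedly take the
    next edge in counterclockwise order, and stop at another outline node."""
    nodes = [outline_node]                     # steps 910/915: store the starting outline node
    edge = first_edge                          # step 920: an edge connected to the outline node
    current = edge.other_end(outline_node)     # step 925: traverse the edge to an internal node
    while not current.is_outline_node:         # step 945: internal node or outline node?
        nodes.append(current)                  # steps 930/935: store the current internal node
        edge = next_edge_ccw(current, edge)    # step 940: the next edge, counterclockwise
        current = edge.other_end(current)      # traverse the edge; step 955 loops back if internal
    nodes.append(current)                      # step 950: store the closing outline node
    return nodes
```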
However, in step 1216, the geometry generation engine 515 or 535 may determine that the current node is a subdivision node. Should this be the case, in step 1226 the geometry generation engine 515 or 535 may assign (i.e., add) the subdivision node to the active stack entry. In step 1228, the geometry generation engine 515 or 535 may determine whether a next node is associated with a higher offset or a lower offset than an offset of the subdivision node.
If it is determined at step 1228 that the next node has a higher offset, in step 1232 the geometry generation engine 515 or 535 may go up to a next stack entry and designate that next stack entry as the active stack entry, or, if already at the topmost stack entry, add a new stack entry at the top of the stack and designate that stack entry as the active stack entry. In step 1234, the geometry generation engine 515 or 535 may assign (i.e., add) the subdivision node to the active stack entry.
If it is determined at step 1228 that the next node has a lower offset, the geometry generation engine 515 or 535 in step 1230 may go down to a next stack entry and designate that next stack entry as the active stack entry. Then in step 1234, the geometry generation engine 515 or 535 may assign (i.e., add) the subdivision node to the active stack entry. In step 1236, the geometry generation engine 515 or 535 may determine whether the next node (for which an offset has been determined to be higher or lower) is a last node. If the next node is not a last node, in step 1238 the geometry generation engine 515 or 535 may designate the next node as the current node, and the method 1200 loops back to step 1216 to process that newly-designated current node. However, if the next node is a last node, the geometry generation engine 515 or 535 may add that last node to a lowest stack in step 1224.
Upon performing the method 1200 for the graph polygons, every stack may contain a list of nodes that define an offset slice of the corresponding graph polygon. These lists may be split into sub-polygons (i.e., final polygons) by going through each stack's node list and splitting them between nodes that have the same offset.
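A simplified sketch of this stack walk is given below, under the assumptions that the branch of step 1216 not excerpted here simply adds a non-subdivision node to the active stack entry, and that `is_subdivision` and `offset_of` are hypothetical accessors for the node properties described above.

```python
from typing import Callable, List, Sequence

def split_into_offset_slices(polygon_nodes: Sequence,
                             is_subdivision: Callable,
                             offset_of: Callable) -> List[List]:
    """Each returned stack entry lists the nodes of one offset slice of a graph polygon."""
    stack: List[List] = [[]]   # stack entries, from the lowest (index 0) to the topmost
    active = 0                 # index of the active stack entry
    for i, node in enumerate(polygon_nodes):
        if i == len(polygon_nodes) - 1:
            stack[0].append(node)              # step 1224: the last node joins the lowest entry
            break
        stack[active].append(node)             # step 1226 (and the non-subdivision branch)
        if is_subdivision(node):
            next_node = polygon_nodes[i + 1]   # step 1228: compare offsets with the next node
            if offset_of(next_node) > offset_of(node):
                if active == len(stack) - 1:
                    stack.append([])           # step 1232: add a new entry at the top if needed
                active += 1                    # go up to the next stack entry
            else:
                active = max(active - 1, 0)    # step 1230: go down to the next stack entry
            stack[active].append(node)         # step 1234: the subdivision node joins that entry too
    return stack
```

The node lists collected in each entry could then be split between nodes that share the same offset to obtain the sub-polygons, as stated above.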
The same method as shown in
In a networked deployment, the computer system 1400 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 1400 can also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In a particular implementation, the computer system 1400 can be implemented using electronic devices that provide voice, video, or data communication. Further, while a single computer system 1400 is illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
As illustrated in
The computer system 1400 may include a memory 1404 that can communicate via a bus 1408. The memory 1404 may be a main memory, a static memory, or a dynamic memory. The memory 1404 may include, but is not limited to computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one implementation, the memory 1404 includes a cache or random-access memory for the processor 1402. In alternative implementations, the memory 1404 is separate from the processor 1402, such as a cache memory of a processor, the system memory, or other memory. The memory 1404 may be an external storage device or database for storing data. Examples include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data. The memory 1404 is operable to store instructions executable by the processor 1402. The functions, acts or tasks illustrated in the figures or described herein may be performed by the programmed processor 1402 executing the instructions stored in the memory 1404. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firm-ware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
As shown, the computer system 1400 may further include a display 1410, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 1410 may act as an interface for the user to see the functioning of the processor 1402, or specifically as an interface with the software stored in the memory 1404 or in the drive unit 1406.
Additionally or alternatively, the computer system 1400 may include an input device 1412 configured to allow a user to interact with any of the components of system 1400. The input device 1412 may be a number pad, a keyboard, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control, or any other device operative to interact with the computer system 1400.
The computer system 1400 may also or alternatively include a disk or optical drive unit 1406. The disk drive unit 1406 may include a computer-readable medium 1422 in which one or more sets of instructions 1424, e.g. software, can be embedded. Further, the instructions 1424 may embody one or more of the methods or logic as described herein. The instructions 1424 may reside completely or partially within the memory 1404 and/or within the processor 1402 during execution by the computer system 1400. The memory 1404 and the processor 1402 also may include computer-readable media as discussed above.
In some systems, a computer-readable medium 1422 includes instructions 1424 or receives and executes instructions 1424 responsive to a propagated signal so that a device connected to a network 1450 can communicate voice, video, audio, images, or any other data over the network 1450. Further, the instructions 1424 may be transmitted or received over the network 1450 via a communication port or interface 1420, and/or using a bus 1408. The communication port or interface 1420 may be a part of the processor 1402 or may be a separate component. The communication port 1420 may be created in software or may be a physical connection in hardware. The communication port 1420 may be configured to connect with a network 1450, external media, the display 1410, or any other components in computer system 1400, or combinations thereof. The connection with the network 1450 may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed below. Likewise, the additional connections with other components of the computer system 1400 may be physical connections or may be established wirelessly. The network 1450 may alternatively be directly connected to the bus 1408.
While the computer-readable medium 1422 is shown to be a single medium, the term “computer-readable medium” may include a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” may also include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein. The computer-readable medium 1422 may be non-transitory, and may be tangible.
The computer-readable medium 1422 can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. The computer-readable medium 1422 can be a random-access memory or other volatile re-writable memory. Additionally or alternatively, the computer-readable medium 1422 can include a magneto-optical or optical medium, such as a disk, tape, or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
In an alternative implementation, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various implementations can broadly include a variety of electronic and computer systems. One or more implementations described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
The computer system 1400 may be connected to one or more networks 1450. The network 1450 may define one or more networks including wired or wireless networks. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, or WiMax network. Further, such networks may include a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to TCP/IP based networking protocols. The network 1450 may include wide area networks (WAN), such as the Internet, local area networks (LAN), campus area networks, metropolitan area networks, a direct connection such as through a Universal Serial Bus (USB) port, or any other networks that may allow for data communication. The network 1450 may be configured to couple one computing device to another computing device to enable communication of data between the devices. The network 1450 may generally be enabled to employ any form of machine-readable media for communicating information from one device to another. The network 1450 may include communication methods by which information may travel between computing devices. The network 1450 may be divided into sub-networks. The sub-networks may allow access to all of the other components connected thereto or the sub-networks may restrict access between the components. The network 1450 may be regarded as a public or private network connection and may include, for example, a virtual private network or an encryption or other security mechanism employed over the public Internet, or the like.
In accordance with various implementations of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. Further, in an exemplary, non-limited implementation, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
Although the present specification describes components and functions that may be implemented in particular implementations with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.
It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the disclosed embodiments are not limited to any particular implementation or programming technique and that the disclosed embodiments may be implemented using any appropriate techniques for implementing the functionality described herein. The disclosed embodiments are not limited to any particular programming language or operating system.
It should be appreciated that in the above description of exemplary embodiments, various features of the present disclosure are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiment requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment.
Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the present disclosure, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the disclosed techniques.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Thus, while there has been described what are believed to be the preferred embodiments, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the present disclosure, and it is intended to claim all such changes and modifications as falling within the scope of the present disclosure. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present disclosure.
The above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other implementations, which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description. While various implementations of the disclosure have been described, it will be apparent to those of ordinary skill in the art that many more implementations are possible within the scope of the disclosure. Accordingly, the disclosure is not to be restricted except in light of the attached claims and their equivalents.