Historically, 3D editing required substantial up-front training. Professional tools, such as Maya (http://www.autodesk.com/products/maya), Softimage (http://www.autodesk.com/products/softimage), and SolidWorks (http://www.solidworks.com), are very powerful, but also take years to master.
The first publicly recognized breakthrough in terms of learnability was probably SketchUp (http://www.sketchup.com), which allows less experienced users to create and edit 3D models. More recently, TinkerCAD (http://www.tinkercad.com) set new standards in learnability. The main user interface strategy behind TinkerCAD is to offer only those tools that are required for manipulating volumetric objects.
In parallel to this main evolution of 3D editors, researchers and software engineers have created 3D editors designed specifically with ease-of-use in mind, such as Teddy [Takeo Igarashi, Satoshi Matsuoka, Hidehiko Tanaka. Teddy: a sketching interface for 3D freeform design. In Proc. SIGGRAPH 1999] and follow-up projects (e.g., the Plushie paper [Yuki Mori, Takeo Igarashi. Plushie: an interactive design system for plush toys. In Proc. SIGGRAPH 2007] and patent [Takeo Igarashi, Yuki Mori. Three-dimensional shape conversion system, three-dimensional shape conversion method, and program for conversion of three-dimensional shape. US patent number 2009/0040224A1. Feb. 12, 2009]). However, these systems fall short in the sense that the set of objects they allow users to create and edit tends to be a small subset of the models other 3D editors are capable of creating. Teddy, for example, is limited to producing rounded objects.
Yet another line of easy-to-use 3D editors is construction kit editors, such as MecaBricks (http://mecabricks.com) for the LEGO universe. Editors of this type allow arranging parts, but do not allow modifying parts or making new parts, which confines users to a universe of premeditated elements. Similarly, the level editors of various video games, such as the Unreal Editor (https://en.wikipedia.org/wiki/Unreal_Engine) or the Portal 2 editor (https://en.wikipedia.org/wiki/Portal_2), allow users to assemble worlds easily, but only from the elements previously designed for the particular game world. Also, all simulation tends to take place in a mode that is separate from editing, i.e., the game itself. Similarly, physical window managers, such as BumpTop [Anand Agarawala, Ravin Balakrishnan. Keepin' it Real: Pushing the Desktop Metaphor with Physics, Piles and the Pen. In Proceedings of CHI 2006], and sandbox games, such as Garry's Mod (http://www.garrysmod.com) and Minecraft (https://minecraft.net), allow users to create and edit 3D objects/game worlds; they are limited to placing predefined elements, though. Unlike game editors, the primary purpose of 3D editors, including the ones presented in this disclosure, is to create objects for use outside the game world. The same holds for configurators and customizers, the primary purpose of which is to define or configure goods in the physical world.
Similar to Teddy, the 3D editor FlatFab (http://www.flatfab.com) is limited to a specific type of 3D model, in particular volumetric 3D objects approximated as intersecting cross sections. FlatFab allows for efficient use, although its gesture language is hard to discover, arguably making it unsuitable for inexperienced users. Similarly, Autodesk 123D Make (http://www.123dapp.com/make) achieves ease-of-use by limiting users to spars-and-frames approximations of volumes. SketchChair (http://sketchchair.cc) is easy to use, but limited to making chairs.
The objective behind the present invention is to advance ease-of-use beyond traditional general-purpose 3D editors, without falling into the trap of over-specialization.
The present disclosure presents a family of interactive 3D editors, i.e., software programs that allow users to create/edit 3D models; unlike, for example, games, the outcome of the interaction is a designed object. We consider general 3D editors, as well as programs that allow customizing products, also referred to as configurators or customizers, and we refer to the whole as 3D editors. The key challenge in designing 3D editors is to make them powerful, yet also usable. Achieving only one of the two objectives is easy. At one end of the resulting spectrum, general-purpose 3D editors, such as Autodesk Maya (http://www.autodesk.com/products/maya), allow creating wide ranges of geometry, but take months to learn. At the other end of the spectrum, specialized editors, such as Teddy, are easy to learn, yet are also limited in terms of what they allow designing. The inventive concept innovates on 3D editing by substantially increasing ease-of-use without sacrificing expressive power. (1) The inventive concept introduces 3D editors designed as “physics sandboxes”, i.e., environments that simulate a place governed by physics, such as gravity, inertia, etc. The benefit of this approach is that it leverages users' knowledge of the physical world. (2) The inventive concept replaces aspects that would distract from the notion of a place governed by physics, such as traditional alignment techniques, manual grouping, etc. It instead proposes concepts that align well with the notion of a physical world or that eliminate the necessity for the aspects that clash with a physical world, such as automatic alignment and automatic view management. (3) The inventive concept optionally targets one or more specific fabrication machines. This allows it to focus the interaction on the creation of contents the target fabrication machines are capable of fabricating, which reduces user interface complexity without losing expressive power. (4) The inventive concept offers smart content elements for selected groups of the targeted fabrication machines that embody useful domain knowledge, such as stability and material efficiency, allowing inexperienced users to solve common problems with ease.
The present invention implements a new class of 3D editors. In one aspect, the inventive concept pertains to software programs designed with ease-of-use in mind so as to address a broad range of users, including those with no prior knowledge of 3D editing. It attempts to achieve this as follows.
First, some of its embodiments attempt to improve ease-of-use by leveraging users' knowledge of the physical world. By implementing a radical “what-you-see-is-what-you-get” approach, these embodiments display, animate, and simulate objects in a realistic way during editing, including realistic rendering and realistic physics. In some embodiments, these simulations are active during editing, i.e., without a specific rendering or preview mode. The objective of physical realism is twofold. First, the present invention is designed for use by a broad audience, including inexperienced users; the only knowledge truly inexperienced users can be expected to have is knowledge about the physical world. Second, for some of the more specialized embodiments that target physical fabrication machines (see below), making the editing experience resemble the experience of physically fabricating and assembling the object is intended to create excitement during editing similar to the excitement users may experience when physically fabricating and assembling the object.
Second, some of its embodiments improve ease-of-use by eliminating some of the traditional hurdles involved in 3D editing, such as alignment, grouping, and view navigation (see below).
Third, some embodiments of the present invention increase ease-of-use by limiting their functionality to specific classes of personal fabrication machines, such as devices capable of cutting sheets of physical material. Three-axis laser cutters, for example, do not allow fabricating arbitrary 3D models, but only a reduced set, such as 3D models assembled from flat plates (the term 3-axis used herein refers to laser cutters the cutting head of which can be height adjusted, then moved in a 2D plane during cutting). Milling machines, as another example, may be subject to additional constraints, such as a minimum cutting curvature. Cutting devices with additional degrees of freedom will typically offer additional possibilities. 3D printers may generally be more capable in terms of producing three-dimensional objects, yet be subject to more specific limitations, e.g., in terms of their ability to produce overhanging structures, and so on. Some embodiments exploit these limitations imposed by the respective fabrication machine by offering appropriately reduced functionality, aimed at matching what the fabrication device is capable of creating. Note that this does not fall into the same trap as the specialized 3D editors mentioned earlier: the limitations in terms of expressiveness are already imposed by the fabrication device; implementing the same limitations in the 3D editor does not further limit the design space, so ease-of-use is gained without further reducing the design space.
Fourth, certain embodiments further improve ease-of-use by implementing domain knowledge about the targeted fabrication machine(s) and the physics the resulting creations are subject to, such as the know-how for creating a box or a certain type of hinge on a particular fabrication machine, or for implementing spheres of different sizes, etc.
Limiting functionality to a fabrication machine and implementing domain knowledge are both motivated by the fact that fabrication machines have recently become available to a much broader audience of users (caused by the expiration of several of the initial patents). As a result, the potential user base of such machines now includes not only traditional users, i.e., engineers in industry and the more recent tech-enthusiastic “makers”, but also increasingly “consumers” (aka “casual makers”), i.e., users who are interested in the physical objects these personal fabrication machines allow them to make, but lack the training or enthusiasm to dive into the functioning of the machines and the software. A technique that allows users to create and/or edit 3D models for specific fabrication machines is thus desired and the present invention is aimed at addressing this need.
Fifth, certain embodiments integrate with a repository. Some embodiments further implement an attract mode, a tutorial mode, a demo mode, or any combination thereof, making them suitable for deployment as part of the home screen/landing page.
The term 3D editor or editor used herein generally refers to software programs that allow users to create or edit 3D objects. However, large parts of the invention also apply to editing 2D objects, such as objects cut out of plates, 1D objects, such as sticks, beams, or bars cut to length, as well as defining the behavior of any such objects over time (sometimes referred to as “4D editing”), etc.
The term computer used herein refers to a wide range of devices with computational abilities, including, but not limited to, servers, data centers, workstations, mini and micro computers, personal computers, laptops, tablets, mobile phones, smart phones, PDAs, mobile devices, computational appliances, eBook readers, wearable devices, embedded devices, micro controllers, custom electronic devices, a networked system, any combination thereof, etc. Typically, a computer includes a processor, memory, and input/output devices.
The term system or software or software system used herein typically means a software program or system of software programs with optional hardware elements that, among other functions, allows users to create or edit models. The software may be running on a computer located with the user, a computer located elsewhere (such as a web server or a machine connected to one), etc. The software may run locally, over a network, in a web browser or similar environment, etc. (In some cases, the term ‘system’ refers to the entire system, including the fabrication machine.)
Objects that users create, edit, or otherwise manipulate using the computer use the following nomenclature:
Object:=the physical object the user is trying to create.
Model:=virtual representation of the object. The term object is sometimes used as a synonym of model if unambiguous.
Scene:=all contents visible/editable in the current session, i.e., an in-between version of the model.
Part:=smallest entity that can be edited (even though it may be possible to turn it into multiple parts, e.g., by subdividing it).
Compound:=multiple parts that are connected (e.g., rigidly by joints or in a way allowing sub-assemblies to move with respect to each other, e.g., forming a mechanism) so that editing functions can potentially be applied to multiple or all of them at once.
Assembly:=a part or a compound.
Stage:=the view in which the scene is displayed, defined by position and parameters of a virtual camera, as well as a backdrop.
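By way of a non-limiting illustration, the following Python sketch (all class and field names are hypothetical and chosen for this sketch only) shows one possible in-memory representation of the nomenclature defined above; an assembly is either a single part or a compound of connected parts, and assemblies within a scene are not connected to each other.

    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Part:
        """Smallest entity that can be edited."""
        name: str
        mesh: object = None  # placeholder for the part's geometry

    @dataclass
    class Compound:
        """Multiple parts connected, e.g., rigidly by joints or via a mechanism."""
        parts: List[Part] = field(default_factory=list)
        joints: List[tuple] = field(default_factory=list)  # (part_a, part_b, joint type)

    Assembly = Union[Part, Compound]  # an assembly is a part or a compound

    @dataclass
    class Scene:
        """All contents visible/editable in the current session."""
        assemblies: List[Assembly] = field(default_factory=list)

    # Example: a scene holding two assemblies that are not connected to each other.
    scene = Scene(assemblies=[
        Part("front plate"),
        Compound(parts=[Part("side"), Part("top")],
                 joints=[("side", "top", "finger joint")]),
    ])
    print(len(scene.assemblies))  # 2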
Those of the disclosed embodiments that refer specifically to a fabrication machine employ a process based on the notion of three “worlds”. (1) The physical world is where the object will eventually be created. Given that the object will be fabricated from one or more materials and using one or more targeted fabrication machines, these determine the nature of the resulting objects. (2) A virtual world in terms of the targeted fabrication device(s) and materials is how the software may choose to show users what they may later fabricate in a “what-you-see-is-what-you-will-get” fashion. When designing from plywood to be processed using a 3-axis laser cutter, for example, users typically see some sort of rendition of that. (3) A virtual world in terms of abstract graphics. Some interactions allow users to interact in terms of geometries that are independent of materials and of what the targeted fabrication machines are capable of producing. One example is that users may add a sphere to the scene while working with an embodiment that embodies the constraints of a machine not capable of creating spheres. The editors that are part of the present invention may respond in various ways, such as by rendering this assembly in the virtual world (e.g., here, a sphere), by rendering a (somewhat) corresponding part in terms of the targeted fabrication device(s), or some combination thereof.
The terms personal fabrication machines or simply fabrication machines may include computer-controlled 3D printers, milling machines, laser cutters, etc. (for a more inclusive list see below). The combination of computer and fabrication machine allows users to use computers to create and edit electronically readable descriptions of what to fabricate and then send these descriptions to one or more fabrication machines, which causes the fabrication machines to physically fabricate what is specified in the descriptions. One type of such representation is a 3D model or simply model, e.g., a description that defines the shape of one or multiple parts and in many cases also how these parts spatially relate to each other in 3D (this includes 1D and 2D objects as special cases).
The term fabrication used herein refers to the process of creating physical objects using a fabrication machine. The term fabrication machines used herein refers to computer-controlled machines that are designed to produce physical output. Fabrication machines can be of any type, including machines that cut up or remove material from a given block or sheet of material (subtractive fabrication), machines that build up an object by adding material (additive fabrication), machines that deform a given block, sheet, or string of material (formative fabrication), e.g., vacuum forming, punch presses, certain types of industrial robots and robot arms, and hybrid machines that offer combinations of abilities of the above, such as machines that cut and fold materials (e.g., LaserOrigami [Stefanie Mueller, Bastian Kruck, and Patrick Baudisch. 2013. LaserOrigami: laser-cutting 3D objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, N.Y., USA, 2585-2592. DOI=http://dx.doi.org/10.1145/2470654.2481358]), machines that cut and weld materials (e.g., LaserStacker [Udayan Umapathi, Hsiang-Ting Chen, Stefanie Mueller, Ludwig Wall, Anna Seufert, and Patrick Baudisch. 2015. LaserStacker: Fabricating 3D Objects by Laser Cutting and Welding. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15). ACM, New York, N.Y., USA, 575-582. DOI=http://dx.doi.org/10.1145/2807442.2807512]), or any combination thereof.
Subtractive machines include those that are primarily designed to etch, cut into, or cut through sheets of material, such as certain types of laser cutters, knife-based cutters, water jet cutters, plasma cutters, scissors, CNC milling machines, etc., i.e., machines that move a cutting device primarily in a plane (2 axes, 2 degrees of freedom), optionally with the ability to lift the cutting device (3 axes, 3 degrees of freedom). The same applies to machines that instead move the workpiece, or some combination thereof, during the cutting process.
Subtractive machines based on the same concepts may also offer additional functionalities and degrees of freedom, such as cutting devices that can not only be moved, but also rotated around one or multiple axes (roll, pitch, and yaw, etc.), or that rotate the workpiece around one or more axes, or any combination thereof. Devices with high degrees of freedom are included, as well as those based on robotic arms in the wider sense. Also included are hybrid machines that include any of the above functionalities. Also included are combinations of machines, such as a laser cutter or milling machine and a 3D printer (or any other combination) that may produce an object together.
Machines that produce physical output under computer control used herein may also include processes that produce a physical output without direct control of a computer. For example, the “machine” may be one or multiple humans holding one or multiple cutting devices, such as scissors, knives, lasers, etc., who manufacture physical objects while following instructions. The “machine” also includes combinations of such human-based devices with fabrication machines.
As used herein, “a targeted fabrication machine” includes machines that implement multiple fabrication processes, such as additive plus subtractive, etc. or combinations of machines that together provide such hybrid functionality.
While the inventive concept is generally targeted at fabricating objects, many of the disclosed systems and interaction techniques also apply to scenarios where users will not end up fabricating. These concepts, e.g., non-modal alignment, non-alignment, automatic view-management, and automatic grouping, also apply to 3D editors not primarily designed for fabrication, as well as to other types of interactive software programs, such as simulation and games, productivity software, office software, drawing and painting programs, etc.
Some aspects of the invention at hand also apply to additive machines. These include various types of 3D printers (fused deposition modeling, laser sintering, stereolithography and resin printers, inkjet-based printers, etc.), injection molding machines, etc.
Some of the techniques disclosed in this disclosure refer to objects made from individual parts to be assembled later (e.g., hybrid compounds, etc.). These more specialized techniques apply to various application scenarios. (1) For fabrication machines that bear very few constraints on the shapes they can produce (e.g., additive manufacturing using laser sintering), these techniques can be used for cases where users choose to break down an object into parts, e.g., to parallelize fabrication, to reduce the need for support material, or to create objects larger than the print volume of the fabrication machine. (2) For those types of fabrication machines that are limited to producing certain types of geometries, the techniques reveal these limitations to the user, help them deal with the limitations, and give them a clear sense of what to expect when they fabricate in the end. Three-axis laser cutters, for example, tend to be used to cut flat plates from sheets of material (technically such cutters tend to be able to etch and engrave as well; however, users may choose not to use that functionality for a variety of reasons, such as speed and accuracy). By surfacing these limitations during the interactive experience, the respective techniques clarify expectations and make the interactive experience more similar to the experience with the physical parts fabricated later.
Screens or displays referred to herein explicitly or implicitly may also include any type of computer display used with any of the computing devices described above, including LCD, any type of OLED, CRTs, projection, etc., as well as any type of 3D display, such as individual screens capable of displaying alternating stereo images, or any virtual reality or augmented reality display, including headsets, projection systems, retinal displays, or any type of volumetric or holographic display technology, etc.
A movement engine 20 performs movement operations on assemblies, such as tilt, rotation, and drag. In some embodiments, the movement engine 20 uses the physics engine 10 to apply physics properties to the movement. A rendering engine 24 performs the processing of scenes and the formatting of data for rendering and displaying the scenes. An alignment engine 26 performs operations on assemblies to align assemblies or to non-align assemblies. A boxel engine 28 performs operations on boxels. The operations include, for example, embedding a boxel into a scene. A connection engine 30 performs operations to connect assemblies together or disconnect assemblies. The connection engine 30 performs operations to snap or unsnap assemblies. In some embodiments, the connection engine 30 snaps or unsnaps assemblies in response to alignment or non-alignment operations of the alignment engine 26. A view generating engine 32 performs operations for generating detail views, zooming in, zooming out, and the like. The view generating engine 32 performs semantic zooming. The rendering engine 24 may use data generated by the view generating engine 32. A calibration engine 34 performs operations for calibrating machines, such as fabrication machines, that use data exported from the 3D editor 100. An export engine 36 exports data, for example, to a fabrication machine. An import engine 38 receives data, such as from a fabrication device.
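As a non-limiting illustration of this modular structure, the following Python sketch (names are hypothetical; the reference numerals in the comments refer to the engines described above) shows how such a 3D editor could be composed from its engines:

    class Engine:
        # each engine keeps a reference to the editor so that engines can cooperate,
        # e.g., the movement engine delegating to the physics engine
        def __init__(self, editor):
            self.editor = editor

    class PhysicsEngine(Engine): pass         # physics engine 10
    class MovementEngine(Engine): pass        # movement engine 20: tilt, rotation, drag
    class RenderingEngine(Engine): pass       # rendering engine 24
    class AlignmentEngine(Engine): pass       # alignment engine 26: align / non-align
    class BoxelEngine(Engine): pass           # boxel engine 28
    class ConnectionEngine(Engine): pass      # connection engine 30: snap / unsnap
    class ViewGeneratingEngine(Engine): pass  # view generating engine 32: semantic zoom
    class CalibrationEngine(Engine): pass     # calibration engine 34
    class ExportEngine(Engine): pass          # export engine 36
    class ImportEngine(Engine): pass          # import engine 38

    class Editor3D:
        """3D editor 100 wired together from the engines above."""
        def __init__(self):
            self.physics = PhysicsEngine(self)
            self.movement = MovementEngine(self)
            self.rendering = RenderingEngine(self)
            self.alignment = AlignmentEngine(self)
            self.boxels = BoxelEngine(self)
            self.connections = ConnectionEngine(self)
            self.views = ViewGeneratingEngine(self)
            self.calibration = CalibrationEngine(self)
            self.exporter = ExportEngine(self)
            self.importer = ImportEngine(self)

    editor = Editor3D()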
The concepts disclosed in this section apply to generic 3D editors as well as to those 3D editors that target one or more specific fabrication machines.
First, to leverage users' knowledge about the physical world, the embodiments disclosed build on the notion of “what-you-see-is-what-you-get”, i.e., 3D editor 100 displays and animates objects in a realistic way including (various levels of) realistic rendering and (various levels of) realistic physics. The present invention may achieve physical realism by applying any or all of the following concepts.
As illustrated by
Assemblies are distinct from parts in a number of ways. During user interactions with the scene, assemblies behave as distinct physical entities. In most cases, all parts within an assembly are physically connected with one another, while assemblies are generally not physically connected to other assemblies. Since the scene-assembly-part model defines explicitly what is connected, it allows placing two assemblies in immediate physical contact (zero gap; or even pressing them against each other) without causing them to become physically connected. Connecting two assemblies into one generally involves placing a joint (or mechanism, see below). Joining part 2.1 and part 1.1 from
Connections within an assembly may be rigid or compliant. For embodiments that use this scene-assembly-part model to represent objects to be fabricated, rigid connections tend to correspond to physical counterparts on the respective fabrication machine, such as joints. For a 2-3 axis laser cutter or similar device, for example, such rigid connections may represent finger joints, cross joints, butterfly joints, captured nut connectors, etc.; some of these connections may be glued or welded, etc., or just jammed together and held by friction. Compliant elements may, for example, include living hinges, as disclosed below. Some embodiments represent assemblies consisting of elements that can move with respect to each other in a constrained way (mechanisms, such as axles and bearings) as a single assembly; other embodiments represent them as multiple assemblies.
Most embodiments include one or more tools that allow users to move assemblies, i.e., whichever part of the assembly users touch or click, they always interact with the entire assembly. Examples of such tools include the gravity-based tilt tool and the inertia/friction-based yaw tool, all types of stacking tools, etc. as disclosed later in this document.
When assemblies are moved into each other, the assemblies do not penetrate each other (as is commonly the case with 3D editors in the prior art) but collide and push each other away. Someone of skill in the art will appreciate that this can be accomplished with the help of an off-the-shelf physics engine.
As a result, embodiments of the scene-assembly-part model tend to offer two very different types of interactions and tools. In order to allow users to manipulate assemblies in a scene, tools will generally refer to tools based on gravity, inertia, and friction, such as the gravity-based tilt tool and the inertia/friction-based yaw tool. In order to allow users to manipulate parts (or combinations of multiple parts which we may call sub-assemblies) in an assembly, tools may also include non-physical tools, such as push-pull, etc.
If exported to a fabrication machine, all parts of an assembly tend to physically hold together either right away, or when assembled. Two different assemblies in contrast do not hold together.
Based on this scene-assembly-part model, assemblies also tend to have their own coordinate system, as described in the following.
Global coordinate systems/local coordinate systems. Some embodiments may use a single global grid, as is common with 3D editors in general. This grid can be rendered onto the working surface, effectively forming the scene's backdrop at all times, as common for the vast majority of 3D editors in use today. Based on this, the grid may be used for alignment in that objects can be snapped into it.
The global grid, however, makes only limited sense in the context of the principles laid out earlier in this disclosure, such as physics in the editor. If two assemblies collide, for example, and one gets bumped away, it translates and rotates in a way governed by physics; as a result it will typically not land in a meaningful location on some grid (unless the respective embodiment intentionally snaps it), but at an arbitrary location and in some arbitrary rotation. This suggests that things can get out of alignment easily.
Some embodiments therefore drop the concept of a global coordinate system and grid. Instead, such embodiments define a coordinate system and grid only within an assembly, i.e., only where parts are physically connected thereby “physically” maintaining their alignment (
At 1901, 3D editor 100 receives a user selection of one or more assemblies. At 1902, 3D editor 100 starts dragging the selected assemblies in the scene. At 1903, 3D editor 100 determines the current reference assembly, for example, based on proximity. At 1904, 3D editor 100 retrieves properties of the reference assembly, such as a coordinate system. At 1905, 3D editor 100 applies an operation to the selected assemblies using current properties of the reference assembly, such as the coordinate system. At 1906, 3D editor 100 re-renders the scene. At 1907, 3D editor 100 determines whether the user is still dragging the assemblies. If the determination at 1907 is the user is still dragging the assemblies, 3D editor 100 returns to the determination at 1903. Otherwise, if the determination at 1907 is the user is no longer dragging the assemblies, 3D editor 100 applies at 1908 the final operation to the selected assemblies referring to the current reference coordinate system. At 1909, 3D editor 100 renders the scene.
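A non-limiting Python sketch of this flow is given below (simplified to 2D and to grid snapping; all function and field names are hypothetical). The reference assembly is re-resolved by proximity on every drag update (step 1903), and the dragged assembly is positioned in the reference assembly's local grid (steps 1904-1905):

    import math

    class Assembly:
        def __init__(self, name, origin, grid=1.0):
            # 'origin' and 'grid' stand in for the assembly's local coordinate system
            self.name, self.origin, self.grid = name, origin, grid

    def nearest_reference(assemblies, point):
        """Step 1903: pick the reference assembly by proximity to the drag position."""
        return min(assemblies, key=lambda a: math.dist(a.origin, point))

    def snap_to_local_grid(point, ref):
        """Step 1905: express the operation in the reference assembly's coordinates."""
        return tuple(o + round((p - o) / ref.grid) * ref.grid
                     for p, o in zip(point, ref.origin))

    def drag(dragged, others, drag_path):
        for p in drag_path:                               # steps 1902/1907: while dragging
            ref = nearest_reference(others, p)            # step 1903
            dragged.origin = snap_to_local_grid(p, ref)   # steps 1904-1905
            # step 1906: re-render the scene here
        return dragged.origin                             # steps 1908-1909: final operation

    a = Assembly("dragged", (0.0, 0.0))
    b = Assembly("table",   (10.0, 0.0), grid=2.0)
    c = Assembly("box",     (0.0, 10.0), grid=0.5)
    print(drag(a, [b, c], [(3.0, 1.0), (8.5, 0.7)]))      # ends snapped to b's 2.0 grid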
Showing sizes and achieving precision by entering numbers. Especially for embodiments that display no global grid that would communicate a coordinate system to the user, different embodiments may choose different strategies for communicating dimensions of parts and assemblies to the user, as illustrated by
The embodiment shown in (
As shown in
Animated transitions. To keep users oriented, embodiments may use animated transitions. Such transitions may, for example, take place in the context of tools that result in movement without the user dragging the object to its final destination, such as tools that “snap” to a destination. Examples include certain types of alignment tools, assembly tools (“attach this assembly to that assembly”), scaling and rotation tools, and disassembly tools, etc. and includes versions that operate on a plurality of assemblies at once. Some embodiments emphasize the transition using appropriate sounds.
Realistic textures including transparent and translucent materials. The system 100 may choose materials and/or allow users to choose materials, allowing the system 100 to render each part in its actual surface texture, translucency, reflectance, bump maps, etc., or any combination thereof. Someone skilled in the art will appreciate that this is commonly accomplished with the help of shaders. Image pyramids and mip-mapping help maintain performance as the scene is zoomed in and out. Embodiments targeted at specific fabrication machines may choose to limit the selection to materials that are available for use with the targeted device, such as sheets of plywood, acrylic, Delrin, etc. for a 3-axis laser cutter, ABS and PLA for an entry-level FDM 3D printer, steel and glass for a high-end sintering 3D printer, and so on.
Some embodiments offer textures that change between multiple representations when zoomed (“semantic zooming”). A map may, for example, stop showing any streets but highways when zoomed out past a certain point. One approach is to set a fixed threshold for the size of structures, number of features, amount of detail, etc. A system 100 performing the zooming may then assess every feature and stop rendering the feature if the feature falls below the threshold.
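A minimal sketch of such threshold-based semantic zooming follows (Python; the feature sizes and threshold are illustrative assumptions): any feature whose projected size falls below a fixed pixel threshold is simply not rendered at the current zoom level.

    def visible_features(features, pixels_per_unit, min_pixels=4):
        # keep only features whose on-screen size stays above the fixed threshold
        return [f for f in features if f["size"] * pixels_per_unit >= min_pixels]

    features = [{"name": "finger joint", "size": 0.2},
                {"name": "plate",        "size": 10.0}]
    print(visible_features(features, pixels_per_unit=40))  # zoomed in: both rendered
    print(visible_features(features, pixels_per_unit=1))   # zoomed out: only the plate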
Realistic physical properties. In some embodiments, the physics engine 10 may consider the specific mass of the materials used in assemblies; iron would, for example, generally be heavier than acrylic and wood. This would manifest itself during collisions (heavy objects are likely to bump light ones away), when computing friction, deformations (see discussion of bending living hinges), etc. In some embodiments, the physics engine 10 may consider friction coefficients; assemblies made from Delrin then tend to slide further and tumble less than objects made from wood or even rubber. In some embodiments, the physics engine 10 may consider compliance (e.g., when computing bending and stretching of assemblies, such as living hinges, etc.), elasticity/damping (e.g., when determining the wiggling after a collision or when the user relaxes an applied force), shear strength, Young's modulus, etc.
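By way of illustration only, such per-material constants could be represented as follows (Python sketch; the numeric values are rough, generic reference values and are merely illustrative, not prescribed by the disclosure):

    from dataclasses import dataclass

    @dataclass
    class Material:
        name: str
        density: float         # kg/m^3, drives mass and hence collision response
        friction: float        # sliding friction coefficient; low values slide further
        youngs_modulus: float  # Pa, drives bending of compliant parts such as living hinges

    MATERIALS = {
        "plywood": Material("plywood", 600.0, 0.4, 9e9),
        "acrylic": Material("acrylic", 1180.0, 0.5, 3e9),
        "steel":   Material("steel", 7850.0, 0.6, 200e9),
    }

    def mass(material_name, volume_m3):
        # mass = density * volume; used, e.g., to decide which object bumps the other away
        return MATERIALS[material_name].density * volume_m3

    print(mass("steel", 0.001), mass("plywood", 0.001))  # 7.85 kg vs. 0.6 kg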
Fabrication-specific artifacts: If the model is intended for fabrication on a fabrication machine, some embodiments consider this process by picking textures so as to represent expected fabrication artifacts or by simulating fabrication artifacts. They may render the result onto the model's (regular and/or bump map) textures and, in the case of exceptionally large artifacts, into the object geometry. When processing plywood using a laser cutter, for example, realistic textures may include traces resulting from fumes and burning, and may consider the direction of air suction, etc. When processing acrylic using a laser cutter, for example, realistic textures may include buckling and molten edges. For plasma cutters, edges may contain burrs. Some embodiments may allow users to specify additional options for pre-/post-processing artifacts, such as the result of a plasma cutter after applying a deburring tool, the use of a laser cutter after adding, cutting, and removing masking tape, sanding, sand blasting, chemical processing (e.g., using acetone fumes for certain plastics), etc.
Realistic work environment where the modeling takes place (aka realistic stage). This could be a desk or a workshop or similar.
As shown in
The system 100 may allow users to load and save their models, e.g., by storing them in a personal file system or on a server. Alternatively, the system 100 may allow users to keep some, multiple, or all of their models in their work environment.
Realistic graphics including light and shadows. Objects may be rendered in a realistic perspective, field of view, and zoom. In some embodiments, the system 100 renders shadows, such as soft shadows. In some embodiments, the system 100 renders shadows in real-time, e.g., by means of real-time shadow-mapping. In various embodiments, the system 100 employs even more sophisticated approaches to computing lighting in order to obtain additional realism with clear or reflective objects and assemblies in the scene. In such a system 100, a piece of straight or curved acrylic may produce reflections, caustic effects, spectral effects, etc.
Realistic sounds: when objects collide or scrape across a surface, etc., in some embodiments, the system 100 renders matching sounds. In some embodiments, collision events trigger stored sounds (e.g., looked up in an array, hash, trivial hash, or a look-up table, etc.); other embodiments may generate the sounds by simulating the events taking place in the editor. Sound generation may consider the materials and/or size and/or weight of the involved objects. For even higher realism, in some embodiments, the system 100 renders sounds taking the objects contained in the scene and their placement with respect to each other and the environment into consideration (sound rendering). In some embodiments, the system 100 adds made-up sounds to interactions that would otherwise produce no or no perceivable sound (such as picking up an object).
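One possible lookup-based variant is sketched below (Python; the table contents and file names are hypothetical): a stored clip is selected by material and size class, and its playback volume is scaled with the impact speed reported by the physics engine.

    SOUNDS = {
        ("plywood", "small"): "plywood_tap.wav",
        ("plywood", "large"): "plywood_thud.wav",
        ("acrylic", "small"): "acrylic_click.wav",
    }

    def collision_sound(material, size_class, impact_speed, max_speed=5.0):
        clip = SOUNDS.get((material, size_class), "generic_knock.wav")  # fallback clip
        volume = min(1.0, impact_speed / max_speed)  # louder for faster impacts
        return clip, volume

    print(collision_sound("plywood", "large", impact_speed=2.0))  # ('plywood_thud.wav', 0.4)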
Natural input. Some embodiments of the system 100 are designed to work on a computing system with various input/output capabilities. While different embodiments may use different input/output mappings, the system 100 is designed so that inputs take place in the most natural way and with the most intuitive mapping from input to display space. In particular: (a) direct touch: here users interact with assemblies by tapping or dragging them directly on a touch-sensitive screen; this can be done using a finger, a stylus, a digitizer, etc.; it can be emulated if desired using an indirect input device, such as a mouse, and a pointer. (b) Absolute six degree-of-freedom input (as used in virtual reality): here users interact with assemblies by acquiring them in three-dimensional space where they tap or drag them directly. This embodiment goes together with a corresponding 3D display, such as a head-mounted display (e.g., Oculus Rift, Vive, etc. with 3 DoF tracking, such as roll, pitch, and yaw, or 6 DoF tracking/real walking, such as x, y, z, roll, pitch, yaw) so that users acquire objects in three-space where they see them in three-space.
Some embodiments are designed to run across these different input/output systems, e.g., including 2D direct touch and 3D with 6 DoF input. To allow for this, all interactions may be designed for (single) direct touch input in the first place, as this tends to be available on a large number of computing devices available today or can be emulated using an indirect input device controlling an on-screen pointer. If the platform offers additional input capabilities, the system 100 may include additional functionality or speed-ups, such as the ability to also rotate objects using multi-touch or to select objects behind other objects in 6 degrees of freedom (DOF).
Resolving ambiguity in the interaction. The present invention allows users to manipulate assemblies by means of tools, such as scaling tools or texture tools. In various embodiments, the system 100 includes tools that contain all information required to perform the operation, such as a scaling tool that gathers the new scale as part of the interaction. Such tools can show the outcome during the interaction, making it easy for users to see what they are about to get. In various embodiments, the system 100 may also include tools that use more parameters than what the respective tools allow users to enter as part of the interaction. Some tools may allow users to provide the additional parameters before the tool is applied, which allows the tool to still provide feedback as the user interacts. Yet other tools allow entering the additional parameters after the tool interaction. In order to provide feedback along the way, the tools may assign default values for the missing parameters or may try to guess the values of the parameters. In this case, the tools may allow users to fix or tweak the results in case the default or guessed values were incorrect. Where the tool chose to connect two parts using finger joints, for example, the user may afterwards replace the finger joints with a skeleton+façade structure. Tools may also ask for the missing parameters. Tools may also explore multiple possible values for the parameters and allow users to choose afterwards, e.g., by rendering the respective outcomes and offering users to choose from them (building on suggestive interfaces [Takeo Igarashi and John Hughes. Chateau: A Suggestive Interface for 3D Drawing. In Proc. UIST 2001. Pages 173-181]).
Some embodiments include all of these strategies, while other embodiments may include a subset of these strategies.
To generate the picture-in-picture views, the system 100, for example, clones the scene graph and processes each clone individually. The picture-in-picture views may be scaled down geometrically or “semantically”, i.e., so that the key features that make them different are highlighted.
In this embodiment, the system 100 computes the interpretations and estimates a probability for each one, for example, based on default settings, statistics of past use, etc. In the shown example, the system 100 may have come up with more than the three shown interpretations, but the system 100 may have chosen not to render them. The option of a straight cut through the front plate and a diagonal cut through the side plate, for example, appears unlikely, and when the system 100 determines that its probability is below an internally defined “tell me” threshold, the system 100 may simply suppress this option, as was the case in the shown example.
At 301, 3D editor 100 computes a set of plausible value combinations for non-defined parameters. At 302, 3D editor 100 computes probabilities for all value combinations. At 303, 3D editor 100 selects a non-processed value combination from the set. At 304, 3D editor 100 determines whether the non-processed value combination is above a “tell me” threshold. If the determination at 304 is that the non-processed value combination is above a “tell me” threshold, 3D editor 100 determines, at 305, whether the non-processed value combination is the highest value. If the determination at 305 is that the non-processed value combination is the highest value, the 3D editor 100 renders at 306 the tool application into the main view. If the determination at 305 is that the non-processed value combination is not the highest value, the 3D editor 100 renders at 307 the tool application into the secondary view. After the rendering at 306 or 307, 3D editor 100 determines at 308 whether there are additional value combinations. If the determination at 308 is that there are additional value combinations, 3D editor 100 proceeds to compute at 302 as described above. Otherwise, if the determination at 308 is that there are no additional value combinations, 3D editor 100 ends this process. If the determination at 304 is that the non-processed value combination is not above a “tell me” threshold, 3D editor 100 proceeds to the determination at 308 described above.
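The following Python sketch (hypothetical names; the probabilities are illustrative) summarizes this flow: interpretations are ranked by estimated probability, the most likely one is rendered into the main view, the remaining ones above the “tell me” threshold go into secondary (e.g., picture-in-picture) views, and the rest are suppressed.

    def present_interpretations(candidates, tell_me_threshold=0.1):
        candidates = sorted(candidates, key=lambda c: c["probability"], reverse=True)
        main, secondary = None, []
        for c in candidates:
            if c["probability"] < tell_me_threshold:  # step 304: suppress unlikely options
                continue
            if main is None:
                main = c                              # step 306: render into main view
            else:
                secondary.append(c)                   # step 307: render into secondary view
        return main, secondary

    options = [{"name": "straight cut, both plates",     "probability": 0.60},
               {"name": "diagonal cut, both plates",     "probability": 0.30},
               {"name": "straight front, diagonal side", "probability": 0.05}]
    print(present_interpretations(options))  # the 0.05 option is suppressed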
Configuring proactive behavior. In order to allow the system to deal with uncertainty, the system 100 may perform some or all of its computation in terms of estimated probabilities, utilities, and penalties [Donald Knuth. The TeX Book. Chapter on line break algorithm]. Based on these, the system 100 makes decisions on the user's behalf. Different users have different preferences with respect to proactive system behaviors. To accommodate this, the system 100 may maintain one or more configurable “tell me” thresholds; when one or more of these thresholds is exceeded, the system 100 may make the respective suggestion (e.g., suggesting to add structural support). Similarly, the system 100 may maintain one or more configurable “do it” thresholds; when one or more of these thresholds are exceeded, the system 100 takes immediate action.
In some cases, alternative solutions can be arranged into a multi-step workflow. In the case of
Add-assembly tool based on gravity. Collisions: Some embodiments of the present invention may simulate additional aspects of physics, such as collision, in order to achieve a stronger sense of immersion and realism.
This particular interaction may also take place as a way of starting a new modeling session (the user may, for example, have selected to create a “new” scene), i.e., to pre-populate the stage with a simple object that illustrates the laws of the editor's world: (a) a simple cube appears, (b) drops following gravity, and (c) bounces off the ground until it comes to rest (
Simulation of moving parts and physical constraints. Some embodiments may allow users to edit and/or operate assemblies with moving parts, such as levers, wheels and axles, pulleys, inclined planes, wedges, screws, gears, or other mechanisms resulting from these or combinations of these. Some embodiments may offer specialized tools for editing or operating mechanisms. Other embodiments may instead or in addition offer versions of those tools that are generally used for moving things around, so that these can also operate or modify mechanisms.
Simulation of deformation and springiness: Some embodiments may allow for compliant materials resulting in non-rigid parts and assemblies. While no material and thus assembly is ever fully rigid, many assemblies are rigid enough to result in only very small deformations. An embodiment may thus choose to render all deformations or just those large enough to be noticeable.
Structure and arrangement of parts into an assembly has a major influence on the amount of resulting deformation. For example, as shown in
If forces apply, compliant parts and assemblies tend to deform. In some embodiments, the system 100 computes these forces and renders the resulting deformation.
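As a purely illustrative example of how material and structure determine deformation (not the solver used by any particular embodiment), the textbook cantilever formula delta = F*L^3/(3*E*I) already captures the qualitative behavior; a fuller implementation would typically use a finite-element solver, as discussed below.

    def cantilever_tip_deflection(force_n, length_m, youngs_modulus_pa, width_m, thickness_m):
        # second moment of area of a flat rectangular plate, I = w*t^3/12
        I = width_m * thickness_m ** 3 / 12.0
        return force_n * length_m ** 3 / (3.0 * youngs_modulus_pa * I)

    # A 200 mm x 40 mm x 3 mm plywood strip (E ~ 9 GPa) loaded with 1 N at the tip
    # sags by roughly 3 mm; the same strip mounted edge-on deflects far less.
    print(cantilever_tip_deflection(1.0, 0.2, 9e9, 0.04, 0.003))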
Some embodiments support objects deformable in two or more dimensions.
One way of computing the deformation of compliant parts and assemblies is by means of specialized components that implement their own deformation as a function of the forces applied. A more general way of implementing deformation is by means of general-purpose solvers for deformation, such as those based on finite element analysis. The page https://en.wikipedia.org/wiki/List_of_finite_element_software_packages lists software libraries that perform such analyses, such as Agros2D, CalculiX, Code_Saturne, DIANA_FEA, deal.II, DUNE, Elmer, etc.; someone skilled in the art would appreciate that the better-suited engines are those that allow for large deformations. Suitable subsystems can also be adopted from engines commonly used as part of computer games. Some embodiments push physics and simulation further, e.g., including dynamic aspects, such as oscillation.
While all materials and parts may have some level of compliance, embodiments may include various tools to allow users to explicitly create, insert, and modify compliant parts or regions of parts.
To allow users to interact with compliant parts and assemblies, some embodiments include tools that make certain regions compliant or rigid or change their rigidity, such as the ones shown in
Although compliant behavior is described in the context of gravity, forces may also be applied by the user, e.g., using tools.
The concept of physical interaction as disclosed above results in a consistent user experience. Yet, 3D editors traditionally contain several aspects that do not align well with the notion of a world based on physics—in particular view management and alignment. In the following, novel approaches to view management and alignment are described that are consistent with a world based on physics.
Realistic view navigation. Many traditional 3D editors allow for 6-degree-of-freedom view navigation (as in three degrees of rotation and three degrees of translation), some with additional zoom or dolly zoom (which changes the perspective between parallel projection and strong foreshortening). Because of the large number of degrees of freedom, some navigation tools can involve complex user actions (such as press-and-hold the middle mouse button, then drag) and thus can be hard to learn and error-prone to use; users may end up with an undesired perspective or even looking away from the object or zooming in so far that the screen is filled with a single color or zooming out until nothing can be seen [Susanne Jul and George W. Furnas. 1998. Critical zones in desert fog: aids to multiscale navigation. In Proceedings of the 11th annual ACM symposium on User interface software and technology (UIST '98). ACM, New York, N.Y., USA, 97-106. DOI=http://dx.doi.org/10.1145/288392.288578]. Some embodiments of the system 100 may support some version or subset of this traditional style of navigation.
Other embodiments of the system 100, however, may offer assembly-based view management. The main idea here is that embodiments offer one or more tools that allow users to inspect assemblies by manipulating the assembly, rather than by manipulating the camera (with the camera, of course, traditionally representing the user's eyes). In accordance with what users may need to inspect, different embodiments offer different sets of inspection tools. If users may need to inspect detail, for example, an embodiment may include a close-up tool that allows picking up an assembly and moving it closer to the camera; the same functionality would traditionally be achieved by zooming the camera. Alternatively, e.g., an embodiment handling scenes of low complexity and/or on large displays may refrain from offering such a tool. If users may need to inspect an assembly from different sides, an embodiment may allow rotating assemblies in terms of yaw; this might be used as a substitute for (some aspects of) the traditional camera orbiting mode. If users may need to inspect an assembly from above and below, an embodiment may allow tilting assemblies; this might be used as a substitute for (some other aspects of) the traditional camera orbiting. If users may need to inspect the internal structure of assemblies, an embodiment may allow viewing it in a semi-transparent representation, as a wireframe graphic, or as some sort of explosion diagram, etc.; this might be used as a substitute for global rendering settings.
Some embodiments include a single assembly-based view navigation tool, others include multiple specialized tools, yet others include one or more tools that allow users to manipulate multiple degrees of freedom at once. For example, a mouse-based or single-touch-based system may offer a tilt/yaw tool that allows rotating an assembly around its tilt and yaw axes in one operation (e.g., adjust yaw by dragging sideways and adjust tilt by dragging up/down, in the case of a 2D input device). A multi-touch based embodiment may also allow, for example, close-up viewing using a pinch-to-zoom gesture or similar. As another example, an embodiment may use a 6DoF VR controller or comparable device to allow users to manipulate a large number of degrees of freedom at once, such as tilt, yaw, roll, horizontal pan, vertical pan, and close-up, or any subset of these. Other embodiments may invoke assembly-based view navigation tools by tapping, clicking, or simply pushing a button, e.g., to bring a selected assembly into close-up viewing.
Such operations can be complemented with additional automated aspects, such as up-close viewing while also tilting and/or yawing the assembly so as to be perpendicular to the camera and/or scaling it to fill a particular percentage of the screen. Such operations may also establish a mode within which a particular set of functions or tools is made available, such that a pointing device controls tilt and yaw of assemblies as long as they are in up-close viewing mode, i.e., until the mode ends.
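For illustration, a close-up tool of this kind could be sketched as follows (Python; all names are hypothetical): rather than moving the camera, the assembly is moved to a point a fixed distance in front of the camera, and the stored offset allows it to snap back when the tool is released.

    def close_up(assembly_position, camera_position, camera_forward, distance=0.3):
        # target point 'distance' in front of the camera, along its viewing direction
        target = tuple(c + distance * f for c, f in zip(camera_position, camera_forward))
        # offset to remember so the assembly can animate back afterwards
        offset = tuple(t - p for t, p in zip(target, assembly_position))
        return target, offset

    target, offset = close_up((0.0, 0.0, 0.0), (0.0, 0.5, 2.0), (0.0, 0.0, -1.0))
    print(target)  # where the assembly is shown for inspection
    print(offset)  # reversed when the close-up ends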
In some embodiments, the system 100 includes assembly-based view navigation exclusively. However, hybrid and redundant embodiments are possible too. Such embodiments may, for example, combine an assembly-based close-up tool with traditional orbiting, etc.
Different embodiments of the system 100 implement assembly-based view navigation in different ways. An assembly-based view navigation operation may be inherently temporary, so that manipulated assemblies move or animate back automatically, e.g., once the tool is released. Alternatively, assemblies may permanently come to rest in the intended position. Especially the latter allows assembly-based view navigation to be unified with other tools, in the sense that any tool that moves assemblies around can implicitly serve as an assembly-based view navigation tool. Gravity-enabled tilt-tools and gravity-enabled yaw-tools, for example, may be considered tools for manipulating assemblies (e.g., as part of a workflow that positions assemblies before fusing them with other assemblies)—but these tools can also be considered assembly-based view navigation tools.
Analogously, the temporary versions can also be integrated with other tools. A tool that adds texture to assemblies, for example, may temporarily position those assemblies for up-close viewing while users position the texture; the assembly may then, for example, automatically snap back to its previous position and orientation.
The primary benefit of assembly-based view navigation is that it largely eliminates the need to manipulate or even think about the view. In traditional view navigation, users need to switch back and forth between manipulating the scene and manipulating the view. Even worse, they need to learn each aspect separately, which may include the difficulty of controlling six or more degrees of freedom using a 2D input device, such as a mouse or touch. Assembly-based view navigation, in contrast, allows users to only think and learn about manipulating objects, which then does double duty for manipulating scene and view. Furthermore, the integration makes it easy to complement tools that manipulate assemblies with just the right amount of view manipulation, such as only to automatically tilt and yaw an assembly into position so as to allow manipulating the front, etc. As a result, some embodiments may choose not to offer an equivalent for the traditional 6 DoF view navigation, thereby eliminating one of the hurdles that traditionally made it difficult for new users to get into 3D editing.
Finally, assembly-based view navigation integrates particularly well with the concept of a “realistic” 3D editor as already described throughout this disclosure. Traditional camera-based view navigation contains a lot of interactions that clash with the notion of a physical world, e.g., when users fly around an object or duck through a virtual table surface in order to view an object from below. In contrast, the notion of picking up an object and turning it in one's hand in order to view it from different sides tends to be consistent with the notion of a physical world.
Note that the concept of assembly-based view navigation emerges from the notion of a world containing a realistic work environment and multiple assemblies. The traditional concept of 3D editing, in contrast, is generally based on a single assembly and a stage with no discernible work environment. In such a world, which effectively consists only of a single chunk of contents and the camera, manipulating the single assembly can be indistinguishable from manipulating the camera.
Different embodiments may include different types of assembly-based view navigation tools. In the following, a few specialized tools, including gravity-enabled tilt-tools and gravity-enabled yaw-tools, are described.
Gravity-enabled tilt-tools. As illustrated by
Gravity/friction-enabled yaw tool. Embodiments that support multi-touch may allow users to rotate assemblies in terms of yaw, for example by acquiring the assembly with two fingers such as thumb and index finger and performing a rotation with the respective pair of fingers. Embodiments that include input with sufficient degrees of freedom, such as a VR controller may also rotate assemblies by acquiring the object, rotating the controller, and dropping the assembly. Alternatively, embodiments that support spatial input of at least two degrees of freedom (such as x, y on single or multi touch systems, mouse/trackball/d-pad/joystick etc. input, game controller, VR controller, etc.) may rotate assemblies in terms of yaw using the algorithm described in
Note how two different dragging paths generally result in different final orientations of the dragged assembly, even if the user's input covers the same net vector. The dragged object may be subject to inertia, potentially causing it to swing, oscillate, or overshoot. Some embodiments may apply appropriate damping to prevent this.
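A minimal 2D sketch of such a friction/inertia-based yaw step is given below (Python; all names and constants are illustrative assumptions): dragging at a contact point offset from the center of mass produces a torque that yaws the assembly, while dragging through the center of mass merely translates it.

    def yaw_step(contact_offset, drag_vector, inertia=1.0, damping=0.8):
        ox, oy = contact_offset     # contact point relative to the center of mass
        dx, dy = drag_vector        # pointer movement during this frame
        torque = ox * dy - oy * dx  # z-component of the 2D cross product offset x drag
        return damping * torque / inertia  # yaw change (radians) applied this frame

    # Dragging to the right while holding the assembly above its center of mass
    # rotates it; dragging through the center of mass produces no rotation.
    print(yaw_step((0.0, 0.1), (0.05, 0.0)))  # small clockwise yaw
    print(yaw_step((0.0, 0.0), (0.05, 0.0)))  # 0.0, pure translation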
To allow the same tool to be used to merely move objects, some embodiments may help users to move an assembly without rotating it by eliminating small friction forces (e.g., by thresholding them) or by virtually expanding the touch contact region over the center of mass.
Friction forces may apply not only to yaw, but also to tilt and roll. Some embodiments may consider these components of the friction force, thereby allowing the dragged assembly to tilt, roll, tumble, or even flip over during dragging. Other embodiments may eliminate this by eliminating the respective components of the friction force vector, or by projecting down the contact point of the user's input into the same plane as the center of mass. Some embodiments may apply the friction force also to the assemblies the dragged assembly is in contact with, allowing this class of tools to be used to affect the rest of the scene, e.g., to align assemblies.
Another embodiment of the yaw tool considers inertia instead of or in addition to friction. This allows these versions to work without the dragged assembly being in physical contact with the work surface or another assembly. Strong damping helps prevent the object from spinning perpetually. Gravity/friction-based yaw tools can be combined with gravity-based tilt/roll tools. The inertia-based version is particularly useful here.
As discussed above, these and other tools can be combined with automatic snapback, with additional up-close viewing, with each other, etc.
Many of today's mobile devices, such as phones and tablets, contain accelerometers capable of detecting when the device is being shaken. Some embodiments use this to allow users to interact with the scene's assemblies. One approach is to consider the stage as being statically coupled to the phone's casing, so that any movement of the phone results in a corresponding movement of the stage.
Automatic stage scaling: As a means to reduce the necessity to trigger up-close viewing repeatedly, some embodiments may allow manipulating the user's view onto the stage, such as zoom, pan, and tilt. In many situations, the objective is to make “good use” of the stage, i.e., (1) to fit all assemblies into the stage or (2) to make sure that each assembly is at least partially visible on stage (so it can be pulled in if necessary), but other view management strategies are possible. Within those constraints, the stage could, for example, be zoomed in as far as possible so as to minimize the necessity for manual zooming.
The optimality of the view onto the stage may change as users manipulate assemblies. As shown in
In some embodiments, the system 100 chooses to scale the stage automatically. Using
All of the above can be done for tilt and yaw as well.
In some embodiments, the 3D editor 100 may complement zooming the stage with additional actions, such as the (e.g., automatic) addition or removal of size reference objects, e.g., replacing or complementing the cup shown earlier with a size reference object appropriate for the new scale, such as a chair. In some embodiments, the 3D editor 100 may also modify the stage, e.g., transition from assemblies being placed on a desktop work environment to a workshop environment, where (large) objects are placed on the floor. These transitions may be animated, including effects where the desk may appear out of or disappear into the workshop's ground.
Some devices may offer additional view management features. For example, some mobile devices, such as phones and tablets, etc. may allow users to rotate the view by rotating the screen between a portrait and a landscape orientation. Some embodiments may implement this in a “transparent” way, i.e., so that the arrangement and visibility of contents of the stage is affected as little as possible. Other embodiments may use this to allow users to intentionally view contents up-close, e.g., by zooming in, e.g., also to switch between a view that contains editor and additional contents, such as a repository, to an isolated view of the editor.
Some embodiments may offer ways of rearranging assemblies on stage. This may be triggered manually by the user, e.g., by pushing a button, performing a gesture, etc., or automatically by the system. In some embodiments, for example, the 3D editor 100 may move assemblies closer together in order to allow the system to zoom in the stage. In other embodiments, the 3D editor 100 may rearrange objects to keep them visible, e.g., by moving them out from being fully or partially occluded by another assembly or from being fully or partially off screen. Such embodiments typically determine patches of empty space in 2D screen space, map them back to 3D world space, and then move assemblies to an appropriate empty patch, e.g., by considering which patch is closest.
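A coarse sketch of such a rearrangement step is shown below. It rasterizes screen space into a grid, marks cells covered by the 2D bounding boxes of other assemblies, and returns the nearest free cell for the occluded assembly; grid resolution, the bounding-box representation, and the final 2D-to-3D unprojection are simplifying assumptions.

```python
from itertools import product

def nearest_empty_patch(occupied_boxes, target_box, screen_w, screen_h, cell=40):
    """Find the free grid cell closest to target_box's center.

    occupied_boxes: list of (x0, y0, x1, y1) screen-space bounding boxes of
                    other assemblies.
    target_box: (x0, y0, x1, y1) of the assembly to be moved.
    Returns the (x, y) screen position of the chosen cell center, or None.
    """
    cols, rows = screen_w // cell, screen_h // cell
    free = [[True] * cols for _ in range(rows)]

    # Mark every cell covered by another assembly as occupied.
    for (x0, y0, x1, y1) in occupied_boxes:
        for r, c in product(range(rows), range(cols)):
            cx0, cy0 = c * cell, r * cell
            if not (x1 < cx0 or x0 > cx0 + cell or y1 < cy0 or y0 > cy0 + cell):
                free[r][c] = False

    tx = (target_box[0] + target_box[2]) / 2
    ty = (target_box[1] + target_box[3]) / 2

    best, best_d = None, float('inf')
    for r, c in product(range(rows), range(cols)):
        if free[r][c]:
            px, py = c * cell + cell / 2, r * cell + cell / 2
            d = (px - tx) ** 2 + (py - ty) ** 2
            if d < best_d:
                best, best_d = (px, py), d
    # In a full implementation, best would now be unprojected back into
    # 3D world space (e.g., onto the stage plane) and the assembly moved there.
    return best


if __name__ == "__main__":
    others = [(0, 0, 300, 300)]          # a large assembly occluding the corner
    hidden = (100, 100, 180, 180)        # the occluded assembly to relocate
    print(nearest_empty_patch(others, hidden, screen_w=800, screen_h=600))
```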
Exaggerated realism. So far, strategies for creating systems that behave realistically have been described. In contrast, some embodiments may choose to exaggerate realism for additional visual clarity or to attract an audience with an affinity for this type of effect, such as video gamers, kids, cartoon readers, or simply people who like the style. Embodiments may, for example, implement cartoon-like physics effects [Bay-Wei Chang, David Ungar. Animation: From Cartoons to the User Interface. In Proc. UIST 1993] to emphasize animation with anticipation and follow-through, objects stretching when being accelerated or falling lower-parts-first, parts subjected to forces deforming or disassembling temporarily, etc. The option to render using exaggerated realism applies to all interactions and effects described in this disclosure.
Fast, cartoon-like transitions. Along the same lines, some embodiments may employ a cartoon-like style for rendering movement in the 3D editor 100, i.e., those transitions that could also be animated. These embodiments move objects quickly (often immediately or in real-time, i.e. within a fraction of a second) using the technique described in [Patrick Baudisch, Desney Tan, Maxime Collomb, Dan Robbins, Ken Hinckley, Maneesh Agrawala, Shengdong Zhao, and Gonzalo Ramos. 2006. Phosphor: explaining transitions in the user interface using afterglow effects. In Proceedings of the 19th annual ACM symposium on User interface software and technology (UIST '06). ACM, New York, N.Y., USA, 169-178. DOI=http://dx.doi.org/10.1145/1166253.1166280], i.e., they show some sort of fading trail to inform users about the transition that just took place.
The benefit of this style of rendering movement is that it avoids slowing users down, as traditional animation techniques do. Some embodiments will complement the transition with appropriate sounds. This style of visualization can be applied to all interactions or system actions that could otherwise be rendered using animation. In contrast, this approach is less useful for interactions that inherently “animate”, such as dragging.
The concepts disclosed in this section apply to generic 3D editors as well as to those 3D editors that target one or more specific fabrication machines.
Non-modal Alignment. Alignment plays a key role in 3D editing (and in particular in those systems aiming at fabrication). For example, when users want to move two plates together so as to add a joint, the two plates need to attach directly edge-to-edge.
In 3D editors, alignment has traditionally been achieved with the help of (magnetic) snapping [Eric Bier. Snap-dragging in three dimensions. In I3D '90 Proceedings of the 1990 symposium on Interactive 3D graphics, pp. 193-204] and some embodiments of the present invention may implement this approach. Magnetic snapping translates or rotates an assembly as soon as the user has moved it closer to that snap position than a certain epsilon, such as closer than 10 pixels (or 5 mm) or closer than 5 degrees of rotation.
Traditional snapping, however, is limited in that it makes it impossible to almost-align two assemblies, as they would snap together as soon as they get close enough. Thus traditional snapping tends to be accompanied by a mechanism for deactivating snapping; in Microsoft Office applications, for example, snapping can often be deactivated by holding down the <Alt> key. Alternative approaches (e.g., on computers that do not offer a keyboard) include using a gesture or GUI button instead. Any and all of these, however, require users to learn the respective interaction and thus tend to result in a learnability/discoverability hurdle—sometimes to the extent that many users never (bother to) find out about this option.
Some embodiments may therefore instead use an alignment method not based on magnetic snapping, but on temporarily stopping a dragged assembly (snap-and-go [Patrick Baudisch, Adam Eversole, Paul Hellya. System and method for aligning objects using non-linear pointer movement. U.S. Pat. No. 7,293,246 B2. Nov. 6, 2007]). This technique, designed for use with mice and other indirect input devices, temporarily slows down a dragged object to a speed of zero while in alignment, making it easy to align because the technique effectively enlarges the position corresponding to the aligned position in the space of the input device. Unfortunately, snap-and-go only works for indirect input devices, such as mice.
The inventive concept, which is referred to as space curvature, includes a novel alignment method that works not only with indirect input devices, but also with direct input devices, i.e., devices that establish a 1:1 mapping between input space and display space, such as touchscreens/direct touch, pen/stylus input, virtual reality controllers, etc. As used herein, screen space is also referred to as output space. As illustrated by
One way of implementing space curvature is by determining the space of intended output positions and mapping it to the space of possible input positions.
What users thus see is that a dragged assembly stops at a snap target, then lags behind as the input device, such as the user's finger, continues on, then starts to move slightly faster than the input device until it catches up and gets ahead of the user's input device. The assembly thus reaches the next snap position before the input device and stops again.
Each individual snap target can be given a different “intensity”, i.e., a different size in input space. The bigger that intensity/size, the easier the snap target will be to acquire. Some embodiments may thus choose to assign intensities/sizes according to how likely a target is to be acquired or according to how important it is to avoid the respective type of user error. In one application scenario, the system 100 may complement a tool that scales assemblies with space curvature, making it very easy to scale the assembly to integer sizes in terms of centimeters and, to a lesser extent, to integer sizes in terms of millimeters.
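The following one-dimensional sketch illustrates one way such a mapping could be implemented (targets, plateau widths/intensities, and the axis range are illustrative assumptions, not a definitive implementation): each snap target occupies a plateau of a certain width in input space, within which the output sticks to the target; between plateaus, the remaining input range is stretched linearly so that the dragged value catches up and every output position remains reachable.

```python
def make_space_curvature(targets, length):
    """Build a monotone input->output mapping for direct input devices.

    targets: list of (position, intensity) pairs; intensity is the plateau
             width that the snap target occupies in input space.
    length:  length of the (identical) input and output ranges.
    Returns a function mapping a raw input coordinate to an output coordinate.
    """
    targets = sorted(targets)
    # Each plateau maps a whole input interval onto the single target value.
    plateaus = [(max(0.0, t - w / 2.0), min(length, t + w / 2.0), t)
                for t, w in targets]

    # Linear segments between plateaus re-distribute the remaining output.
    segments = []  # (in_start, in_end, out_start, out_end)
    prev_in, prev_out = 0.0, 0.0
    for in_start, in_end, t in plateaus:
        if in_start > prev_in:
            segments.append((prev_in, in_start, prev_out, t))
        segments.append((in_start, in_end, t, t))
        prev_in, prev_out = in_end, t
    if prev_in < length:
        segments.append((prev_in, length, prev_out, length))

    def warp(x):
        x = min(max(x, 0.0), length)
        for in_start, in_end, out_start, out_end in segments:
            if x <= in_end:
                if in_end == in_start:          # degenerate plateau
                    return out_start
                f = (x - in_start) / (in_end - in_start)
                return out_start + f * (out_end - out_start)
        return length

    return warp


if __name__ == "__main__":
    # Snap targets every 10 mm, each 4 mm "wide" in input space.
    warp = make_space_curvature([(10, 4), (20, 4), (30, 4)], length=40)
    for raw in (8, 10, 12, 15, 18):
        print(raw, "->", round(warp(raw), 2))
```

Note how, in the example run, the output already sits at 10 while the raw input moves from 8 to 12 (the plateau), and reaches 20 when the raw input is only at 18 (catching up), which corresponds to the user-visible behavior described above.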
Ideally, space curvature receives “raw” input from the input device (as opposed to input already rounded to the closest pixel), i.e., with input precision in the sub-pixel range, as this makes sure that all positions remain accessible.
Unlike magnetic snapping, space curvature never causes dragged assemblies to perform any jerky movements. Some embodiments may still add emphasis to certain moments in the interaction, such as the moment an assembly aligns itself. Some embodiments may add a shake effect resembling magnetic snapping; other embodiments may use other channels, such as sound or visual stimuli, haptic, etc.
Space curvature for higher dimensionalities works in direct analogy to the algorithms described in 2D. The same way 2D space curvature maps rectangles or quadrilaterals to rectangles, 3D space curvature maps cubes or cubes with an indented corner to cubes, and so on, e.g., hypercubes or hypercubes with an indented corner to hypercubes in 4D.
The mapping behind space curvature can also be implemented as a sequence of separate steps—one for each dimension. In 2D, for example, such a “dimension-by-dimension” implementation might first align x and then align y. In
In a similar fashion, alignment in rotation and translation can be combined by concatenating the two mappings, i.e., the output of the first becomes input to the second.
For the point alignment version, the two alignment steps are performed separately, but the non-aligned x/y coordinates are also kept, so the final stage now has four x/y pairs as input. The algorithm then determines how much offset the alignment method has produced in each dimension and uses one to limit the other.
The space curvature algorithm, as described above, leads to a slight jerkiness when manipulating an assembly, as the algorithm causes a repeated stop-and-go. For applications where a smooth appearance is preferable, certain embodiments may therefore instead implement space curvature with a “delayed” update mechanism. As users manipulate the assembly (here using the example of scaling), the size of the assembly on screen adjusts continuously and only the attached (numeric or symbolic) scale display updates in the expected stop-and-go fashion. Users adjust the size of the assembly with an eye on the scale display and, when the desired value is reached, they stop. The scale display now shows the intended value; however, the scale of the actual assembly on screen will in most cases be inaccurate. The system 100 therefore adjusts the geometry to match the value suggested by the scale display after the interaction ends. The system 100 may do so right as the interaction ends or at a later, less conspicuous moment, e.g., faded slowly over time, at the beginning of the next manipulation of that assembly, the manipulation of some other assembly, etc.
This approach may be used to help users acquire small targets, e.g., when the user touches the screen, drags one or more fingers across the screen or uses any other type of pointing input. Acquiring a small target is equivalent to aligning the pointer or finger etc. with the target and this can be achieved by applying the algorithm disclosed above to the pointer or finger etc. position.
Visualizing (non) alignment. It is often important for users to know whether or not two sub-assemblies in an assembly are indeed aligned. Some embodiments may follow a traditional approach and visually explain alignment. If the left edges of two parts happen to be aligned, for example, some embodiments may connect these two edges with an additional line. However, such lines and similar displays tend to clutter the display—especially when there is a lot of alignment (as tends to be the case with certain fabrication machines, such as laser cutters, etc.). This clutter may make it harder to understand a scene.
An alternative approach is proposed that inverts the design problem: rather than illustrating alignment, it illustrates non-alignment (which may be referred to as Heisenberg misalignment). The respective embodiments do so as follows. If two assemblies were aligned explicitly (using an explicit alignment tool or a tool that manipulates assemblies and aligns them as a side effect of the interaction), then these embodiments render them as aligned. Otherwise, however, these embodiments visually exaggerate the (potentially tiny) offsets to visually clarify the non-alignment. One way of achieving this is to make the rendered version not correspond to the data model. In such an embodiment, the on-screen rendition of a non-aligned assembly will look different (i.e., more extreme) than, for example, the export of the same assembly to a fabrication device.
Alternatively, and maybe more commonly, the same approach is used to show whether two collocated sub-assemblies form a single (connected) assembly or whether they are two separate assemblies that just happen to be collocated. In the example shown in
All of these effects can be thought of and visualized as repulsive forces between the two assemblies, with a certain amount of damping. The wiggling may be actual animation or may be implemented fully or in part using afterglow effects as described above in this disclosure. While this approach makes assemblies appear misaligned, in the internal data model/scene graph the assemblies may (however unlikely) actually happen to be aligned; one way of achieving this effect is to introduce the misalignment only when rendering the scene graph.
As illustrated by
There are many different possible strategies for determining the size of offsets.
There are multiple ways for implementing non-alignment. One approach is to simply determine the degrees of freedom and animate along those. Another approach is to insert appropriately chosen springs into the model and let a physics engine or something similar perform the animation. In assemblies that contain sequences of multiple non-aligned objects, e.g., a stack multiple boxes high, some embodiments prevent offsets from accumulating by choosing offsets in alternating directions. In this and all subsequent examples, wiggling may be implemented using traditional animation or by rendering a trail, as discussed earlier, or any combination thereof.
If, at 4309, a wiggling operation is to be performed, 3D editor 100 determines at 4320 the repulsion force based on position proximity. At 4321, 3D editor 100 inserts a spring that has the determined repulsion force. At 4322, 3D editor 100 sets damping. At 4323, 3D editor 100 animates the scene according to physics. At 4324, 3D editor 100 determines if the speed is less than a threshold. If the determination at 4324 is that the speed is not less than the threshold, 3D editor 100 continues to animate at 4323. Otherwise, if the determination at 4324 is that the speed is less than the threshold, 3D editor 100 ends the wiggling operation.
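The flow described above can be translated almost literally into a few lines of explicit spring-damper integration; the sketch below wiggles two nearly aligned, but unconnected assemblies apart along one axis and terminates once the speed drops below a threshold. Spring constant, damping, and time step are arbitrary illustrative values.

```python
def wiggle(offset, gap, stiffness=40.0, damping=6.0, dt=1.0 / 60.0,
           speed_threshold=1e-3, max_steps=10_000):
    """Animate a 1D repulsion between two collocated, unconnected assemblies.

    offset: current (tiny) misalignment between the two assemblies.
    gap:    exaggerated offset the repulsion force pushes towards.
    Returns the trajectory of offsets, ending once the speed drops below
    the threshold (the termination test of the flow above).
    """
    velocity = 0.0
    trajectory = [offset]
    for _ in range(max_steps):
        # Repulsion force based on proximity: the closer to perfect
        # alignment, the harder the spring pushes towards the target gap.
        force = stiffness * (gap - offset) - damping * velocity
        velocity += force * dt          # integrate acceleration (unit mass)
        offset += velocity * dt         # integrate velocity
        trajectory.append(offset)
        if abs(velocity) < speed_threshold:
            break
    return trajectory


if __name__ == "__main__":
    path = wiggle(offset=0.0, gap=2.0)   # exaggerate a 0 mm offset to ~2 mm
    print(len(path), round(path[-1], 3))
```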
5.5.5. Non-Grouping with Compounds
Functionality for compounds. Compounds of the present invention are assemblies that offer additional functionality on the assembly as a whole. Examples include joints and mechanisms. A box consisting of six rectilinear plates connected with finger joints along all sides may be considered a compound as well if we add additional functionality, such as allowing the entire assembly to scale along one or more of its principal axes, causing four of its plates to all scale at the same time and four of the finger joints to be recomputed so as to fit the new dimensions, etc. Other compounds may offer this type of scaling functionality as well.
This type of “box-specific” functionality can be made available explicitly by defining that this compound is a box (either by importing the compound pre-grouped as a box, by selecting the six plates and “grouping” them, or by picking some sort of “define object” function as done, for example, in Macromedia Flash Version 2), allowing the system to apply its box-specific functions.
Another approach is to perform the grouping automatically. Any two parts physically connected by a joint or mechanism, for example, suggest that the two parts have a relationship to each other and thus may be manipulated together. The explicit approach, in contrast, can lead to difficulties (1) with inexperienced users, who may struggle with the concept of grouping/models organized in the form of hierarchies; in particular, users may construct compounds from individual parts (e.g., a box by assembling six sides) and the compound functionality never becomes available; and (2) when compounds have to be ungrouped in order to customize them, causing the special compound functionality to disappear.
Implicit “bottom-up” compound functionality. Some embodiments therefore implement the additional compound functionality in an implicit way. Objects are only “loosely” assembled from parts. When users try to manipulate an assembly/compound, a set of specialized filters (one for each type of additional functionality) analyzes the object (either at this moment or earlier, caching the results). Each filter determines whether its prerequisites are met and, if so, offers its functionality. Example: a five-sided box, i.e., a box with no top. When scaling the box by pulling one of the sides outwards, a filter determines that there are other connected parts and scales these connected parts accordingly [Eric Saund, David Fleet, Daniel Lamer, and James Mahoney. 2003. Perceptually-supported image editing of text and graphics. In Proceedings of the 16th annual ACM symposium on User interface software and technology (UIST '03). ACM, New York, N.Y., USA, 183-192. DOI=http://dx.doi.org/10.1145/964696]. (Additional example: after assembling six plates so that they form a box, scaling this box down may convert the box to a stack of plates.)
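A minimal sketch of such a filter architecture might look as follows; the part representation, the box-detection prerequisite, and the offered actions are all illustrative assumptions.

```python
class Part:
    def __init__(self, name, normal):
        self.name = name
        self.normal = normal      # outward face normal, e.g. (1, 0, 0)


class BoxFilter:
    """Offers box-specific scaling if the selection looks like an (open) box."""

    name = "scale-as-box"

    def applies(self, parts):
        # Prerequisite (simplified): five or six plates whose normals cover
        # at least five of the six principal directions.
        normals = {p.normal for p in parts}
        return len(parts) in (5, 6) and len(normals) >= 5

    def actions(self, parts):
        return ["scale along x", "scale along y", "scale along z"]


def available_compound_actions(parts, filters):
    """Run every filter; collect the functionality whose prerequisites hold."""
    actions = []
    for f in filters:
        if f.applies(parts):
            actions.extend((f.name, a) for a in f.actions(parts))
    return actions


if __name__ == "__main__":
    five_sided_box = [
        Part("bottom", (0, 0, -1)), Part("left", (-1, 0, 0)),
        Part("right", (1, 0, 0)), Part("front", (0, -1, 0)),
        Part("back", (0, 1, 0)),
    ]
    print(available_compound_actions(five_sided_box, [BoxFilter()]))
```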
As discussed earlier, some embodiments of the inventive concept increase ease-of-use by limiting their functionality to specific classes of personal fabrication machines, such as laser cutters. Many such devices do not allow fabricating arbitrary 3D models, but only a reduced set, such as 3D models assembled from flat plates, in the case of 3-axis laser cutters. The present invention exploits the limitations imposed by the fabrication machine by offering appropriately reduced functionality, aimed at matching what the fabrication device is capable of creating. In some sense, the present invention builds on editors for construction kits (e.g., MecaBricks); unlike construction kits, however, the “parts” are user-defined within the constraints of the targeted fabrication machines. While most functions of the present invention are illustrated using the example of a 3-axis laser cutter or compatible device (plasma cutter, water jet cutter, in some cases milling machines, etc.), the inventive contributions in this section apply to other fabrication machines as well.
Calibration. Some machines are subject to calibration. Laser cutters, for example, may burn away a certain amount of material during cutting (aka “kerf”). To make sure that objects fabricated by the system come out at the intended dimensions, embodiments may offer calibration tools.
Similarly, embodiments may offer a test strip for determining cutting intensity. The strip contains holes to be cut with different intensities (power settings and/or speed). Users fabricate the strip, find out which holes actually cut through, and enter the ID of the weakest one that still cut all the way through (or the strongest that failed etc.) into a GUI dialog. This allows the system to calibrate the power settings of subsequent cuts by simply using the weakest setting that still cut (plus an optional fudge factor).
Changing material thickness. When designing an assembly and then changing the material thickness, some dimensions of the assembly may change. In the case of a box, for example, the space inside of the box may change, the outer size of the box may change, or any combination thereof. In various embodiments, the 3D editor 100 may ask or guess whether the user prefers the additional material to grow away from the inner surface (so as to preserve the inner diameter of boxes etc., relevant, e.g., when storing certain-sized objects inside), away from the outer surface (so as to maintain fit into an external enclosure), around the middle of the material (e.g., for centered parts), or any other combination. The 3D editor 100 may try to guess based on scene geometry. The 3D editor 100 may also disambiguate by asking the user. Different embodiments may offer such functionality also in the form of various change thickness tools that either change insides or outsides, etc.
As discussed earlier, one way of creating 3D objects is by fabricating parts from material that is largely two-dimensional and then assembling these parts into the 3D object. Such 2D parts will typically be fabricated using two or three-axis subtractive devices, such as certain types of laser cutters, knife-based cutters, water jet cutters, plasma cutters, scissors, CNC milling machines, etc. However, this approach may also be used with other fabrication methods, such as additive methods including 3D printing, if these are used to make largely flat parts (e.g., to reduce the use of support material, to fabricate faster, to make larger objects, etc.).
As discussed earlier, the most convenient way of manipulating the models describing 3D objects to be fabricated using a 2-3 axis laser cutter etc. is by means of manipulating a 3D representation of the model, e.g., in a 3D editor.
Unfortunately, such a 3D model may not always be available. Instead, especially in shared repositories, models for laser cutters etc. are most commonly (designed and) shared in 2D formats (e.g., G-Code or .svg). Such formats describe models in the form of a cutting path across a 2D plane, as this is what the fabrication machines are able to execute. This may, for example, be described at the lowest possible level, i.e. G-Code, which actually tells the machine where to move its tool head, etc. G-Code tends to be hard to view and manipulate for humans, though. To make such 2D models slightly easier for humans to view and manipulate, they tend to be edited and shared in line drawing or vector graphics formats (e.g., encoded using the .svg file format, or the file formats of Adobe Illustrator, InkScape, OpenDraw, PowerPoint, etc.). As someone skilled in the art would appreciate, such 2D line formats can be converted back and forth to a G-Code (e.g., by the fabrication machine's printer driver), which allows using 2D drawing formats almost interchangeably with G-Code. Still, editing in any 2D format is difficult, as it requires users to mentally decompose the 3D object to be created into two dimensional parts.
To address the issue of models for 2-3 axis fabrication machines often being shared in 2D formats, while they would be easier to edit in 3D, we propose a method for converting 2D representations of 3D models for laser cutting etc. to 3D representations. Our algorithm implements a multi-stage pipeline that includes, among others, a conversion of 2D drawings to parts, auto assembly of those parts as far as possible, optional artificial intelligence steps that try to guess how to best resolve ambiguity, and a final resolution of ambiguity by user interaction.
Ultimately, this 2D import combines with 3D editing and 3D export into the workflow shown in
The physical assembly creates non-planar (aka “three-dimensional”) physical objects. This is accomplished by folding, bending, extruding, etc. flat parts or by assembling multiple flat parts, e.g., stacking, gluing, screwing, welding them, or by joining parts using appropriate types of joints, such as snap fits, press fits, etc.
The 3D editing step will typically take place interactively in a 3D editor, such as the one described throughout this disclosure. Alternatively, users might manipulate 3D models using other means, such as scripts.
During 3D export to 2D, such 3D models can then be (automatically) converted to a cutting path, as represented e.g., in G-Code. This may be accomplished through an intermediate 2D representation (as illustrated by
In the following, we disclose a method for converting 2D representations of 3D models to 3D representations (
The disclosed conversion process may be performed in full or just parts of it. For example, the import format may be a 2D drawing, in which case we skip the first step of converting the cutting path. Or we may leave out steps at the end and export an in-between data format. We may even skip the automatic part in part or entirely, starting with manual assembly. The algorithm may also perform some steps in alternative orders. The algorithm may, for example, determine material thickness earlier in the process.
If the model is in a machine-specific cutting path representation, the proposed invention renders it out into a 2D line drawing. In the case of G-Code, for example, we place a matching brush at some starting point and then move the brush according to the G-Code instructions, one at a time, as illustrated by
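A minimal sketch of this rendering step is shown below; it interprets only straight G0/G1 moves and collects the cutting moves into polylines. Real G-Code dialects (arcs, units, laser on/off commands, etc.) would require additional handling; the dialect assumed here is purely illustrative.

```python
def gcode_to_polylines(gcode):
    """Convert straight-line G-code moves into 2D polylines.

    G0 (rapid/pen-up) moves start a new polyline; G1 (cutting/pen-down)
    moves extend the current one. Returns a list of lists of (x, y) points.
    """
    polylines, current = [], []
    x = y = 0.0
    for line in gcode.splitlines():
        fields = line.strip().split()
        if not fields or fields[0] not in ("G0", "G1"):
            continue
        for field in fields[1:]:
            if field.startswith("X"):
                x = float(field[1:])
            elif field.startswith("Y"):
                y = float(field[1:])
        if fields[0] == "G0":
            if len(current) > 1:
                polylines.append(current)
            current = [(x, y)]          # travel move: start a new contour
        else:
            current.append((x, y))      # cutting move: extend the contour
    if len(current) > 1:
        polylines.append(current)
    return polylines


if __name__ == "__main__":
    sample = """G0 X0 Y0
G1 X10 Y0
G1 X10 Y10
G1 X0 Y10
G1 X0 Y0"""
    print(gcode_to_polylines(sample))
```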
The algorithm now converts the line drawing into a set of segments; this set will contain all parts of the 3D model, as well as scrap pieces surrounding the parts. Someone skilled in the art will appreciate that image segmentation is a standard problem and can be solved for line drawings, e.g., by tracing connected lines and grouping the resulting closed contours/surfaces as individual segments. The algorithm proceeds as follows: For each segment, store a set of neighbor segments, i.e., the set of all segments it shares at least one line segment with. Sort segments according to their hierarchy in a data structure, which could be a tree structure, where the outermost/biggest segments are located higher in the hierarchy than the segments they contain. The algorithm now sorts all top-level segments directly below a root element.
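The containment hierarchy could be sketched as follows; segments are represented by their closed contours, containment is tested on a representative point using a standard point-in-polygon test, and each segment's parent is the smallest segment containing it (top-level segments end up directly below the root). The contour representation is a simplifying assumption.

```python
def point_in_polygon(pt, polygon):
    """Standard even-odd ray casting test."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def build_hierarchy(segments):
    """Sort segments into a containment tree below a 'root' element.

    segments: dict name -> contour (list of (x, y) points, closed implicitly).
    Returns dict name -> parent name ('root' for top-level segments).
    """
    def area(c):
        return abs(sum(x1 * y2 - x2 * y1
                       for (x1, y1), (x2, y2) in zip(c, c[1:] + c[:1]))) / 2

    parents = {}
    for name, contour in segments.items():
        probe = contour[0]
        # The smallest segment that contains this one becomes its parent.
        candidates = [(area(c), other) for other, c in segments.items()
                      if other != name and point_in_polygon(probe, c)]
        parents[name] = min(candidates)[1] if candidates else "root"
    return parents


if __name__ == "__main__":
    sheet = [(0, 0), (100, 0), (100, 100), (0, 100)]
    part = [(10, 10), (40, 10), (40, 40), (10, 40)]
    hole = [(20, 20), (30, 20), (30, 30), (20, 30)]
    print(build_hierarchy({"sheet": sheet, "part": part, "hole": hole}))
```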
Step 3: Classification of Segments as Parts vs. Scrap
Next, the algorithm classifies which of these elements are likely to be part of the model and which are likely scrap.
One simple possible algorithm is shown in
This algorithm handles all cases well where objects are laid out without touching (
Humans and software may, however, create shared contours for the purpose of optimization, i.e., to reduce cutting time and especially material consumption.
In order to identify cases of shared contours, as for example in
In such a case of doubt, the algorithm may perform additional heuristic tests on each segment, i.e., tests indicating how likely a segment, considered in isolation, appears to be viable as a part. The algorithm may consider any subset of the following features: (a) Segments larger than their neighboring segments tend to be parts, as users and programs tend to minimize scrap. (b) Segments that are identical to segments that have already been classified as parts tend to be parts. (c) Segments bearing engravings tend to be parts. (d) Segments that are largely convex (i.e., the ratio between the segment surface and the surface of its convex hull is close to one) tend to be parts. (e) Segments with incisions the width of the material thickness suggest the use of cross joints and are thus likely parts.
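Such heuristic tests might be combined into a simple weighted score, as sketched below; the features implemented (relative size, identity with known parts, engravings, convexity) follow the list above, while the weights and the pre-computed feature representation are illustrative assumptions.

```python
def convexity(area, hull_area):
    return area / hull_area if hull_area else 0.0


def part_likelihood(segment, neighbors):
    """Score how plausible it is that a segment is a part (vs. scrap).

    segment / neighbors are dicts with pre-computed geometric features:
    'area', 'hull_area', 'has_engraving', 'matches_known_part'.
    Returns a score in [0, 1]; higher means more likely a part.
    """
    score, weight_sum = 0.0, 0.0

    # (a) Larger than its neighbors -> likely a part.
    if neighbors:
        bigger = sum(segment['area'] > n['area'] for n in neighbors)
        score += 1.0 * (bigger / len(neighbors))
        weight_sum += 1.0

    # (b) Identical to an already classified part -> likely a part.
    score += 2.0 * (1.0 if segment.get('matches_known_part') else 0.0)
    weight_sum += 2.0

    # (c) Bears engravings -> likely a part.
    score += 1.0 * (1.0 if segment.get('has_engraving') else 0.0)
    weight_sum += 1.0

    # (d) Largely convex -> likely a part.
    score += 1.0 * convexity(segment['area'], segment['hull_area'])
    weight_sum += 1.0

    return score / weight_sum


if __name__ == "__main__":
    plate = {'area': 90, 'hull_area': 100, 'has_engraving': True,
             'matches_known_part': False}
    scrap = {'area': 20, 'hull_area': 80, 'has_engraving': False,
             'matches_known_part': False}
    print(round(part_likelihood(plate, [scrap]), 2),
          round(part_likelihood(scrap, [plate]), 2))
```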
The shape of many joints reflects the thickness of the material sheet they are to be fabricated from. The width of the incision that makes a cross joint, for example, is the material thickness. The length of the fingers of a 90 degree finger joint, for example, commonly corresponds to the material thickness.
The following naïve algorithm (flowchart in
Some embodiments may filter first, i.e., start by creating an array or hash-map of plausible values and then count line segments that have plausible lengths. Thickness estimation can also be run earlier in the algorithm, as early as the G-Code stage, which does contain line segment lengths. More elaborate versions of the algorithm limit their analysis to line segments that appear to be part of a joint (see below), which eliminates a lot of noise.
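The idea of counting plausible line-segment lengths can be sketched as a histogram vote; the plausible thickness range and the rounding resolution assumed below are illustrative.

```python
from collections import Counter
from math import dist


def estimate_material_thickness(polylines, plausible=(1.0, 12.0), step=0.5):
    """Guess material thickness from the most common plausible segment length.

    polylines: list of lists of (x, y) points (the imported 2D drawing).
    plausible: (min, max) range of thicknesses considered plausible, in mm.
    step:      rounding resolution of the histogram bins, in mm.
    Returns the winning thickness or None if no plausible segment was found.
    """
    votes = Counter()
    for poly in polylines:
        for a, b in zip(poly, poly[1:]):
            length = dist(a, b)
            if plausible[0] <= length <= plausible[1]:
                votes[round(length / step) * step] += 1
    if not votes:
        return None
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    # A finger-jointed edge: fingers 4 mm wide, 4 mm deep (the material thickness).
    edge = [(0, 0), (4, 0), (4, 4), (8, 4), (8, 0), (12, 0),
            (12, 4), (16, 4), (16, 0), (20, 0)]
    print(estimate_material_thickness([edge]))
```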
Next, the algorithm locates joints. Joints are typically inspired by carpentry. Examples of joints include finger joints, cross joints, butterfly joints, and, in general but not limited to, any joint the signature of which involves a combination of outside contour and cutouts, etc. The exact nature of the joints varies by cutting tool.
Our algorithm proceeds by locating contours that suggest a joint (or, rather, one half of it—we will refer to these as half joints). For each joint type, our algorithm implements one classifier, i.e., a piece of code (e.g., a class) that is passed a part and that searches this part for one or more half-joints. A notch joint classifier, for example, may look for straight incisions into an edge of the part. A finger joint classifier may look for notches along an edge.
For each identified potential half joint, joint classifiers return the joint's characteristic parameters, such as the depth and width of teeth and gaps for finger joints. We will refer to these as a joint's signature, as it may be useful in identifying the matching half-joint.
Joint classifiers may also return an estimated probability that this region actually is a half joint. A long row of finger joints, for example, will return a high estimated probability, as this pattern is unlikely to occur outside of finger joints. A single incision, in contrast, may be part of a notch joint; it may also be part of many other things, so the estimated probability will be lower.
Our algorithm calls each of its joint classifiers on all parts and stores the results for each joint type.
Our algorithm implements joint classifiers by matching G-code-like descriptions of the half joint—such as “left, straight, left, straight, right, straight, right, straight, repeat” for finger joints or “right, straight, left, straight by material thickness, left, straight, right” for cross half joints—on the contours of all individual parts.
Next, the algorithm tries to match joints. Naïve embodiments try to match all possible pairs of two half joints, resulting in a time complexity of O(n²) (with n being the number of half joints found earlier). A more elegant embodiment stores half joints in an appropriate data structure, such as an array sorted by half joint signature for O(n*log(n)) or a hash table for O(n) complexity, thereby locating only those matches that fit. Contours, such as “right, 1 cm straight, left, 1 cm straight, left, 1 cm straight, right” are easily hashed, e.g., converted first to a string such as R10L10L10R and then hashed.
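The signature-based matching could be sketched as follows; the way the counterpart's signature is derived (same runs, mirrored turns) is a simplification that ignores kerf, tolerances, and joint-type-specific rules.

```python
from collections import defaultdict


def signature(turns_and_runs):
    """Encode a half joint as a string such as 'R10L10L10R'.

    turns_and_runs: sequence of 'L'/'R' turns and straight run lengths (mm).
    """
    return "".join(str(t) for t in turns_and_runs)


def mate_signature(turns_and_runs):
    """The signature the matching half joint is expected to have.

    For the simple joints assumed here, the counterpart traces the same
    run lengths with opposite turns (a simplifying assumption; real joint
    types may need joint-specific rules, kerf compensation, tolerances, etc.).
    """
    flipped = [{'L': 'R', 'R': 'L'}.get(t, t) for t in turns_and_runs]
    return signature(flipped)


def match_half_joints(half_joints):
    """Pair up half joints in roughly O(n) using a signature hash map.

    half_joints: list of (part_id, turns_and_runs).
    Returns a list of (part_id_a, part_id_b) candidate pairs.
    """
    by_signature = defaultdict(list)
    for part_id, contour in half_joints:
        by_signature[signature(contour)].append(part_id)

    pairs = []
    for part_id, contour in half_joints:
        for other in by_signature.get(mate_signature(contour), []):
            if other != part_id and (other, part_id) not in pairs:
                pairs.append((part_id, other))
    return pairs


if __name__ == "__main__":
    joints = [
        ("plate-A", ['R', 10, 'L', 10, 'L', 10, 'R']),
        ("plate-B", ['L', 10, 'R', 10, 'R', 10, 'L']),
        ("plate-C", ['R', 25, 'L']),
    ]
    print(match_half_joints(joints))
```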
The matching produces the data structure shown in
The algorithm now clusters identical parts, i.e., it locates parts defined by the same contour, eliminates all but one from the joint graph, and increments the counter of the remaining copy for every copy deleted. Matching two contours can be done by comparing one contour, stored as a string, with all rotated versions of the other contour's fingerprint (fast algorithms use sub-string searching).
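The rotation-invariant comparison can be sketched with the classic doubled-string trick, assuming fingerprints are stored as strings with a consistent starting direction:

```python
def same_contour(fingerprint_a, fingerprint_b):
    """Test whether two contour fingerprints describe the same outline,
    allowing for the contour to start at a different point (rotation).

    Uses the classic trick: a is a cyclic rotation of b iff it appears as a
    substring of b concatenated with itself. Fingerprints are assumed to be
    strings such as 'R10L20R10L20'.
    """
    if len(fingerprint_a) != len(fingerprint_b):
        return False
    return fingerprint_a in (fingerprint_b + fingerprint_b)


if __name__ == "__main__":
    print(same_contour("R10L20R10L20", "R10L20R10L20"))   # identical
    print(same_contour("L20R10L20R10", "R10L20R10L20"))   # rotated start point
    print(same_contour("R10L20R10L30", "R10L20R10L20"))   # different part
```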
The algorithm now searches the joint graph for uniquely defined joints, i.e., pairs that have only a single match (with redundant parts counting as one), and combines these parts into sub-assemblies. Our algorithm proceeds as follows: (1) the algorithm starts with a list of assemblies:=the list of all parts. (2) Iterate over all pairs of assemblies and their half-joints: (3) if there is exactly one match, assemble the two parts, i.e., (4) remove them from assemblies, (5) join the two half joints and (6) add the newly created assembly to assemblies. See flowchart in
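The numbered steps above translate into a short greedy loop; the sketch below only tracks which parts end up in the same assembly and omits the actual geometric joining.

```python
def assemble_unique_matches(parts, matches):
    """Greedily merge parts whose half joints have exactly one counterpart.

    parts:   list of part ids.
    matches: dict half_joint_id -> list of matching half_joint ids, where a
             half joint id is a (part_id, joint_index) tuple.
    Returns a list of assemblies, each a set of part ids.
    """
    assemblies = [{p} for p in parts]              # (1) start with all parts

    def find(part_id):
        return next(a for a in assemblies if part_id in a)

    changed = True
    while changed:                                  # (2) iterate over pairs
        changed = False
        for (part_a, _), counterparts in matches.items():
            if len(counterparts) != 1:              # (3) only unique matches
                continue
            part_b = counterparts[0][0]
            asm_a, asm_b = find(part_a), find(part_b)
            if asm_a is not asm_b:                  # (4)-(6) merge assemblies
                assemblies.remove(asm_a)
                assemblies.remove(asm_b)
                assemblies.append(asm_a | asm_b)
                changed = True
    return assemblies


if __name__ == "__main__":
    parts = ["bottom", "left", "right", "lid"]
    matches = {
        ("bottom", 0): [("left", 0)],               # unique: join
        ("bottom", 1): [("right", 0)],              # unique: join
        ("lid", 0): [("left", 1), ("right", 1)],    # ambiguous: leave alone
    }
    print(assemble_unique_matches(parts, matches))
```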
Optionally, the system may start by giving users the opportunity to verify any of the automatic processing steps, such as segmentation and classification. As illustrated by
Some embodiments then group identical parts (e.g., by stacking them) and assemble what can be assembled with sufficient confidence, resulting in a scene like the one shown in
While some embodiments may try to continue to assemble for the user, e.g., by making suggestions, other embodiments prefer to simply support users as they “drive”.
One approach is to allow users to assemble with a simple click or tap on the object to connect to, causing the two objects to join.
As illustrated by
Some embodiments offer additional tools to help simplify assembly. Selecting the swap tool and clicking or tapping on a part or assembly causes the system to highlight all plates or assemblies in the scene the currently selected assembly can be swapped with (i.e., those with identical joint signatures). The flip tool flips a part or assembly that was attached using a symmetric joint. The inside-out tool changes the chirality of an assembly by flipping all angled joints.
A 3D editor 100 targeted at fabrication may perform its main loop as (1) receive input from the user, (2) use it to process a scene containing models/assemblies, (3) repeat. In addition, the 3D editor 100 typically includes means for converting one or more parts, assemblies, or scenes into a file format/“code” that can be sent to a fabrication machine. For a laser cutter, for example, this may mean to decompose a 3D model into 2D plates as shown in
The export of 3D models to 2D machines serves an important purpose, because it allows users to edit their models in 3D, which is much easier than today's most common way of editing such models directly in a 2D line format (e.g., Adobe Illustrator, InkScape, OpenDraw, PowerPoint, or any other program that allows editing 2D line drawings). While editing in 2D line formats requires users to mentally convert back and forth between the 3D object they want to create and the 2D format the fabrication device accepts, editing in 3D and exporting to 2D automatically dramatically simplifies this process for users, thus making this type of fabrication accessible to a much wider audience.
The export engine 36 may perform some or all of the process flow. At 4801, 3D editor 100 creates the nodes of the object from the scene graph. At 4802, 3D editor 100 generates an export list and a parts list for the object. At 4803, 3D editor 100 prepares the export data for the node from the export list and the parts list. The 3D editor 100 begins the export at 4803. At 4811, the 3D editor 100 determines whether the node is a foreign object. If the determination at 4811 is that the node is a foreign object, the 3D editor 100 generates at 4812 the parts list of the node, and ends the export. Otherwise, if the determination at 4811 is that the node is not a foreign object, the 3D editor 100 determines at 4813 whether the node is a primitive (e.g., a plate). If the determination at 4813 is that the node is a primitive, 3D editor 100 exports at 4814 the list for the primitive, and ends the export. Otherwise, if the determination at 4813 is that the node is not a primitive, 3D editor 100 determines at 4815 whether the node contains unprocessed sub-nodes. If the determination at 4815 is that the node does not contain unprocessed sub-nodes, the 3D editor 100 ends the export. Otherwise, if the determination at 4815 is that the node contains unprocessed sub-nodes, the 3D editor 100 extracts the sub-node from the node at 4816, exports the sub-node at 4817, and continues back to the determination at 4815. At 4804, 3D editor 100 generates the layout data of the object. The 3D editor 100 begins the layout at 4804. At 4821, 3D editor 100 calculates the bounding box for each primitive in the export list. At 4822, 3D editor 100 creates a trivial layout by packing bounding boxes. At 4824, 3D editor 100 optionally optimizes the layout. The 3D editor 100 ends the layout. At 4805, 3D editor 100 sends the data to the fabrication device.
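The process flow can be summarized in a compact sketch; node kinds, sizes, and the row-based packing below are illustrative assumptions, and the optional layout optimization step is omitted.

```python
class Node:
    def __init__(self, name, kind, size=(0, 0), children=()):
        self.name = name
        self.kind = kind            # 'primitive', 'foreign', or 'group'
        self.size = size            # 2D bounding box (w, h) of a primitive
        self.children = list(children)


def export_node(node, export_list, parts_list):
    """Recursive export step mirroring the flow described above."""
    if node.kind == 'foreign':
        parts_list.append(node.name)            # goes on the parts (shopping) list
    elif node.kind == 'primitive':
        export_list.append(node)                # will be cut from sheet stock
    else:
        for child in node.children:             # process unprocessed sub-nodes
            export_node(child, export_list, parts_list)


def trivial_layout(export_list, sheet_width, spacing=2):
    """Pack primitive bounding boxes left to right, wrapping into rows."""
    placements, x, y, row_height = [], 0, 0, 0
    for node in export_list:
        w, h = node.size
        if x + w > sheet_width:                 # start a new row
            x, y = 0, y + row_height + spacing
            row_height = 0
        placements.append((node.name, x, y))
        x += w + spacing
        row_height = max(row_height, h)
    return placements


if __name__ == "__main__":
    box = Node("box", "group", children=[
        Node("bottom", "primitive", (100, 60)),
        Node("side", "primitive", (100, 40)),
        Node("servo", "foreign"),
    ])
    export_list, parts_list = [], []
    export_node(box, export_list, parts_list)
    print(parts_list)
    print(trivial_layout(export_list, sheet_width=150))
```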
Some embodiments perform the conversion of 3D to 2D on export, i.e., the scene graph itself still consists of generic 3D objects during editing. Other embodiments, however, implement the conversion to the target fabrication machine earlier, in particular the moment objects are being created and modified, so that the display of parts, assemblies, and scene is already in the form and visual appearance of the target fabrication machine. This very strong form of WYSIWYG offers a series of benefits: it allows users to already see what they will get and catch unexpected effects early on, but also to use their judgment as to whether the result can be expected to perform as intended.
Note how the algorithm described in
As described above, some fabrication machines (such as 2-3 axis subtractive devices, e.g., 2-3 axis laser cutters, 3-axis milling machines, 2-3 axis water jet cutters, etc.) tend to produce objects in multiple parts, thus require assembly. Some embodiments will automate assembly, e.g., using robotic arms. For this purpose, the assembling machine(s) will require information on how to do so. Other embodiments will require human users (or whoever offers assembly as a service) to assemble objects by hand. To help these humans perform the job with ease, many embodiments incorporate assembly instructions into fabricated parts. Different embodiments will accomplish this in different ways.
(e) This version places markers on the “support” material next to parts, so as to not interfere with the look of the final object at all (even though at the expense of requiring users to match parts up while still located inside the cut sheet). (f) This version places markers outside the actual parts, but on little perforated tabs that initially stay with the respective part, but that users will break off once the part has been placed in its intended context, so that the marker information on the tab is no longer required (and now often in the way of further assembly). Since etched markers can be hard to read or take time to create, some embodiments cut markers into the tabs.
Since joint-joint and joint-part labels require quite a bit of effort in terms of locating the matching part (O(n²)), some embodiments instead opt for a labeling scheme that allows users to find the matching parts faster.
Some embodiments complement the techniques listed above with an approach to preventing erroneous assembly by preventing non-matching parts from being assembled. The algorithm accomplishes this by making each pair of joints distinct, so that each joint on a part has only one matching counterpart. For finger joints, for example, this can be accomplished by varying the sequence of finger widths. For butterfly joints, this can be accomplished by using differently shaped cutouts, e.g., copying the different joints from a jigsaw puzzle. This can be accomplished with an algorithm as simple as one that enumerates the joints in the scene, then maps each number to a joint pattern. There are many possible mappings, such as representing the joint ID as a binary number and then assigning each finger a small width for a zero and a larger width for a one at the respective position in the finger joint. Another approach is to randomize joint parameters, which can be further improved by checking newly generated joints against the joints already in the current object (ideally in all possible orientations).
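The binary-number mapping mentioned above can be sketched in a few lines; the finger count and the two widths are illustrative, and a real implementation would additionally have to normalize the widths to the available edge length.

```python
def distinct_finger_widths(joint_id, fingers=8, small=4.0, large=6.0):
    """Derive a unique sequence of finger widths from a joint's ID.

    The joint ID is written as a binary number; a 0 bit yields a small
    finger, a 1 bit a larger one, so no two joints in the scene share the
    same finger pattern and mis-assembly becomes physically impossible.
    Widths are in millimetres and purely illustrative; with 8 fingers this
    scheme distinguishes up to 256 joints.
    """
    bits = [(joint_id >> i) & 1 for i in range(fingers)]
    return [large if b else small for b in bits]


if __name__ == "__main__":
    # Three joints in a scene get three mutually incompatible finger patterns.
    for joint_id in range(3):
        print(joint_id, distinct_finger_widths(joint_id))
```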
As someone of skill in the art will appreciate, some technologies, such as laser cutting, can leave residue on the parts, e.g., as a result of heat and smoke caused by the laser. This effect can be reduced by covering the sheets to be cut with a protective film, such as adhesive plastic foil or masking tape, etc. Unfortunately, this requires users to (generally manually) remove the film after cutting (and before or after assembly). Users generally accomplish this by inserting a fingernail or thin blade between film and part, a time-consuming and tedious process.
Native primitives allow users to create parts that the targeted fabrication machine is able to fabricate as a single part.
There are many different ways how such native primitives can be added.
Predefined compounds are objects that the intended fabrication machine is not able to fabricate in one part, but that it can fabricate from multiple parts, e.g., to be assembled by the user. Some embodiments may offer various predefined compounds as shown in
There are many different ways in which the predefined compounds can be added.
Predefined/imported assemblies. Some embodiments include libraries of commonly used assemblies and/or allow importing assemblies form one or more repositories (see also the section on integration with repositories further down).
Ambiguous assemblies. Some assemblies allow for multiple different types of implementations on the targeted fabrication machine. In this case we refer to them as ambiguous assemblies.
When such an assembly is created, the user may specify which representation to create (for example, by picking a specific tool that will always create stacks of plates or by picking a more generic tool that allows the choice to be input) or the system may choose. The embodiment may consider various parameters in making its decision, such as the size of the sphere, forces it is expected to take, etc. (and of course the targeted fabrication machine). Embodiments may allow users to create specific implementations, to change the implementation of an ambiguous assembly later, and/or to create generic implementations that may change their own implementation when requirements change, e.g., when the assembly is scaled or loads change.
In
Embodiments may offer ambiguous assemblies (and predefined assemblies in general) in various forms. They may offer a specific implementation, a generic implementation that the system picks when inserted, a generic implementation that keeps changing as requirements change, or a generic one that stays abstract and generic and is not implemented until fabrication time. At the same time, the object may be displayed in the editor as a specific implementation, as a generic abstract thing, or as an overlay of both. So far, good experiences have been made with showing everything in the domain of the target fabrication machine, in line with the “what you see is what you get” concept described earlier.
Approximate assemblies. In some cases, some or all of the implementations on the target fabrication machine only approximate the desired shape. As with other ambiguous assemblies, the system 100 may consider several parameters when making its decision; for approximate assemblies, in some embodiments the system 100 also considers how closely the respective implementation approximates the desired assembly when deciding which implementation to pick.
Foreign assemblies. Sometimes users' models involve “foreign” assemblies, i.e., parts of compounds that will not be fabricated by the targeted fabrication machines. There are several reasons why these assemblies cannot be fabricated. (1) The targeted fabrication machine may not be capable of fabricating the respective shape (3-axis laser cutters, for example, cannot form screws), (2) The targeted fabrication machine may not be capable of processing the respective materials (100 W laser cutters, for example, generally cannot process steel objects), (3) The targeted fabrication machine may not be capable of producing the required level of complexity (e.g., servomotors or LEDs), or (4) the foreign assembly may simply not be part of the object, but part of the environment the object is supposed to interact with (e.g., a water bottle and a bike frame, in case the object is a bottle mount for a bike).
Some embodiments represent foreign assemblies in the model, as this (1) tends to make the model more comprehensible by showing assemblies to be fabricated in their full context, (2) often allows connecting sub-assemblies that would otherwise be disconnected, and (3) helps create contraptions that physically connect to foreign assemblies (e.g., the bike bottle mount can be created by subtracting the bottle and the bike frame from a generic holder (plus/minus offsets for play)).
To allow users to include foreign assemblies in their models, some embodiments may allow users to import foreign assemblies from a library of commonly used foreign assemblies. They may allow including additional objects by importing models from an online repository of 3D models of 3D printed models (e.g., http://thingyverse.com) or to upload 3D objects scanned (e.g., using a 3D scanner, a depth camera, or one or more regular cameras) or modeled elsewhere (e.g., in any of the 3D editors mentioned earlier).
To allow foreign assemblies to perform more realistically in various physics simulations, embodiments may allow annotating them with additional meta information, such as weight, stiffness, texture, specific weight/buoyancy, translucency, optical density, etc., import them together with such annotations, or look up such annotations elsewhere. To clarify what will be fabricated and what will not, the embodiments may render foreign assemblies differently, e.g., de-saturated or translucent, etc.
Smart components are assemblies with optional additional meta information that embodies engineering knowledge, thus allowing users to create assemblies that would normally require an engineering background.
Different embodiments use different approaches to embody the engineering knowledge. Some embodiments define components using a few numbers and symbols, such as a few constants stored in a .json or .xml file. Other embodiments consider a type of component as a class and each smart component as an object, as in object-oriented programming (and arguably, even the approach of just storing a few named variables can be considered a special case of object construction).
Someone skilled in the art will appreciate that there are many ways to implement, store, and load such class descriptions and object descriptions. Some embodiments may implement a subset of the functionality of a class by trying to represent all relevant data in the form of (member) variables; other embodiments may (also) contain executable code or script, such as a subset of (1) the visual representation in the editor (e.g., in the form of a 3D graphics file, e.g., in .stl format or a textured format, such as .obj or similar, and/or code that expresses this), (2) the fabrication instructions required to create the mount for the smart component (e.g., an assembly in the editor's own format, an .stl format or G-Code, or similar, or a piece of code that expresses this), (3) a behavior that the resulting assembly is able to perform (e.g., the rotation in the case of the servo motor; this can be encoded as the axis representing the rotation center, as code performing the animation, etc.), (4) a filter that determines what objects the smart component can be combined with, (5) an animation to be played back when mounting the smart component, (6) an animation to be played back when un-mounting the smart component, etc.
Note that a smart component may contain a foreign object and may even look like a foreign object (as is the case with the servomotor smart component discussed above), yet the additional behavior makes it more than a foreign object. The smart servomotor component embedded into the dog's knees in
The inherently object-oriented nature of smart components also allows expressing relationships between different smart components using a variety of ways, including inheritance. Some embodiments will use this to group smart components by functionality. One type of resulting functionality is that the system 100 may include tools that replace smart components with functionally equivalent, yet slightly different smart components. A make stronger tool, for example, may replace a weak servomotor component with a stronger one, often with little modification of the overall assembly's geometry; a make more precise tool may replace smart components with more precise ones; a make cheaper tool may replace smart components with cheaper ones, and so on. Some embodiments will extend this concept beyond smart components (or consider everything a smart component for that matter), allowing these tools to be applied more broadly to an assembly or scene, then attempting to replace all sub-assemblies that implement such a make_stronger() method with appropriate sub-assemblies.
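An object-oriented sketch of a smart component, including a make_stronger()-style replacement, might look as follows; all dimensions, asset names, and mount parameters are illustrative assumptions rather than actual component data.

```python
class SmartComponent:
    """Base class: a smart component bundles geometry with engineering knowledge."""

    def visual_representation(self):
        raise NotImplementedError      # e.g., a mesh file shown in the editor

    def mount_geometry(self, material_thickness):
        raise NotImplementedError      # fabrication instructions for the mount

    def can_attach_to(self, assembly):
        return True                    # filter: what it may be combined with

    def behavior(self):
        return None                    # e.g., an axis the simulation can rotate


class ServoMotorComponent(SmartComponent):
    def __init__(self, torque_ncm=25, body=(23.0, 12.0, 29.0)):
        self.torque_ncm = torque_ncm
        self.body = body               # width, depth, height of the servo body

    def visual_representation(self):
        return "servo_sg90.obj"        # illustrative asset name

    def mount_geometry(self, material_thickness):
        w, d, _ = self.body
        # A plate cutout slightly larger than the servo body (0.2 mm play),
        # sized for the given sheet thickness.
        return {"cutout": (w + 0.2, d + 0.2), "plate_thickness": material_thickness}

    def can_attach_to(self, assembly):
        return assembly.get("type") == "plate"

    def behavior(self):
        return {"rotation_axis": "z", "range_deg": 180}

    def make_stronger(self):
        # Replace with a functionally equivalent, stronger sibling.
        return ServoMotorComponent(torque_ncm=110, body=(40.0, 20.0, 38.0))


if __name__ == "__main__":
    servo = ServoMotorComponent()
    print(servo.mount_geometry(material_thickness=4.0))
    print(servo.make_stronger().torque_ncm)
```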
The servomotor component is just an example of a rotating smart actuator component. Many embodiments will offer multiple, if not dozens, hundreds, or thousands of smart components, such as additional actuator components (e.g., linear servomotor components or lift table components), additional rotating components (e.g., screws, axles with bearings or ball bearings, nails, universal joints), one of hundreds of carpenter joints (captured nut, etc.), or mechanisms, such as rack-and-pinion mechanisms or mechanisms from collections such as the book 507 Mechanical Movements. In that sense, finger joints, notch joints, and gluing can also be considered smart components.
Predefined joints and mechanisms. While users often know what final shape they are trying to achieve (say, a bike with two wheels, frame, saddle, fork), the challenge often lies in the technical construction, i.e., how to get the individual parts to be arranged the desired way and especially to do so in a way that is structurally sound. This difficulty arises especially when designing for fabrication machines that are limited to simple primitives, such as flat plates, as they require users to decompose their desired design into such primitives and then add contraptions that hold these primitives together once fabricated. The inventive concept addresses this by offering tools and libraries that contain domain knowledge on the joints and mechanisms common for the intended platform. For 3-axis laser cutters, for example, one particular embodiment may offer, among others, finger joints and notch joints, living hinges, as well as pairs of gears and similar mechanisms.
While most of this section focuses on 3-axis laser cutters and related machines, the same techniques apply to other machines. If the target machine is a milling machine, for example, embodiments generally consider the size of the mill bit; they may also offer additional carpenter's joints (
5.7.1. Joining Assemblies that do not Directly Touch Yet
Joints implemented using special techniques. Some embodiments may offer joints implemented using special techniques. For certain materials (e.g., acrylic) on certain machines (e.g., a 3-axis laser cutter or better), some embodiments may offer bending the material (as described in [LaserOrigami]). For certain materials (e.g., acrylic) on certain machines (e.g., a 3-axis laser cutter or better), some embodiments may offer welding parts together (as described in [LaserStacker]).
Automatic creation of joints and mechanisms. The inventive concept allows users to arrange assemblies with respect to each other in a way that causes the system to create a physical connection between the assemblies. Some embodiments may create a connection automatically if two assemblies are arranged in a certain way with respect to each other, e.g., stacked, intersecting, at a 90 degree angle, etc. In this case, the system 100 automatically creates a joint at the intersection, connecting the two assemblies. The system 100 may consider various parameters when determining what connection to create, such as the materials of the assemblies, material thickness, distances and angles to be bridged, the types of joints currently in use in the model, especially those (if any) explicitly defined by the users, and the system's built-in knowledge about structural and mechanical engineering.
Connecting assemblies at a distance. A specific challenge for users is to place two (or more) assemblies with respect to each other even though these assemblies would not normally touch each other. When designing a camera holder for phones, for example, a user may have created the camera mount and the phone mount separately, but now this user needs to connect the two mounts in a way resulting in the correct placement, causing the camera held by the camera mount to face the phone held by the phone mount. Embodiments following the more traditional 3D editing approach may simply allow moving the two assemblies together, then connecting them.
Embodiments employing gravity, in contrast, require a support structure that bridges the gap between the two. Such embodiments may offer a connect tool that allows placing one (or more) assemblies with respect to one another, causing the system to automatically create a support structure between the two (i.e., another assembly) that holds them in place with respect to each other, when necessary.
Some embodiments may modify one or both of the assemblies to make them easier to connect. The connection itself may be a primitive or a compound, such as a truss. It may connect two or more objects along the shortest connecting path, or it may create a connection better aligned with the nature of the object, e.g., the main axes of which align with the axes of the connected objects [Yuki Koyama, Shinjiro Sueda, Emma Steinhardt, Takeo Igarashi, Ariel Shamir, and Wojciech Matusik. 2015. AutoConnect: computational design of 3D-printable connectors. ACM Trans. Graph. 34, 6, Article 231 (October 2015), 11 pages. DOI=http://dx.doi.org/10.1145/2816795.2818060]
There are many ways to connect assemblies.
The chewing gum tool is shown in
Another embodiment of a chewing gum tool comprises distinct phases. The tool first connects an assembly to another assembly; this phase ends by the user ending the drag interaction, e.g., by lifting the finger off the touch screen or releasing the mouse button. Then, as a second phase, the user picks up the assembly again and drags it into its intended position, whereby the tool (continuously) creates the necessary connector.
The functionality of connecting can also be offered using multiple tools, in order to improve discoverability. In
Another approach to connecting two assemblies is by marking what areas the user wants joined. The user may start by applying a mark to the first assembly and then a mark to the second assembly. The system 100 now determines how to move one of the assemblies so as to make its mark touch the mark on the other piece with a high degree of overlap.
The approach to connecting two assemblies shown in
Among other criteria, the shape of the mark can be used to clarify how the two assemblies are supposed to be aligned. The complexity of a mark depends on the amount of ambiguity in the scene. A point-shaped mark, e.g., created by the user clicking or tapping each assembly once, determines that two locations are supposed to touch, but leaves the rotation undefined. This is sufficient if one or both assemblies are rotationally symmetric or if the system 100 can figure out what rotation the user expects, e.g., based on what is physically possible or leads to good alignment, high esthetics, etc. Such one-click/one-tap marks are also sufficient (1) if the two assemblies already bear a joint that defines the rotation or (2) if the plate to be added is symmetric, so it does not make much of a difference.
The system 100 may consider the two marks literally as two locations when determining the mapping. Alternatively, the system 100 may consider the marks only as an indication of which surface of an assembly to connect, in which case the system 100 may work out the exact mapping automatically, e.g., so as to achieve the best alignment between the two assemblies. The system 100 may also extract just enough information from the clicks/taps, so as to decide which of two edges the user might be referring to, etc.
The system 100 may allow users to enter strokes to provide additional position/rotation information to the system. This can be useful when trying to place an assembly in a manual way overriding automatic alignment, e.g., when assembling at an uncommon location or rotation (not along the edges, etc.).
When connecting two assemblies, there are several different ways of setting up “posts/connectors” on the two assemblies.
Once a post/connector has been added, the system 100 may allow users to modify the post/connector.
As shown in
If the user “connects” an object resting on the ground, the system 100 may still generate a (special type of) “connector” as shown in
Automatic construction of support structures. If users create structures that involve large forces, there is a risk of breakage. When creating a chair, for example, shearing forces when the user is pushing backwards may break off the chair's legs. Embodiments may analyze such forces (e.g., using finite element analysis) and automatically create support structures where they deem them necessary.
The following figures show tools for combining multiple parts into a single part and for subdividing a part into multiple parts.
To allow users to create assemblies that are largely defined on a grid even faster, some embodiments offer what are referred to herein as boxels. Boxels are components with their height, width, and depth being integer units on some grid. Most embodiments will define boxels on a 3-dimensional, Cartesian grid, as illustrated by
They do not have to be, though, and might instead be defined on a rectilinear grid (
Boxel assemblies can be manufactured using a wide range of fabrication machines, including additive devices, such as 3D printers. That said, many of them, such as the cubic boxels mentioned above, can also be manufactured using subtractive devices, such as milling machines, laser cutters, etc. They can also be manufactured using 2- or 3-axis subtractive devices, such as 2-3 axis laser cutters, milling machines, water jet cutters, etc. A cubic boxel, for example, may be assembled from six plates, e.g., by gluing the plates together or by using any type of joint that allows plates to be mounted at the required angles, e.g., finger joints, bending, living hinges, miter joints, dowels, an internal skeleton, welding, etc.
Boxels come in different types, but they all feature at least one, and typically multiple, standard connectors that allow them to be connected to another boxel of compatible type. Different embodiments may offer different types of connectors, such as round holes that allow the boxel to be connected to another boxel using a dowel, 3D printed pins, threads, snap fits, Velcro, gluing, etc. Alternatively, on appropriate machines, connectors may offer no particular joint mechanism and instead be “connected” in software by uniting the geometries of the adjacent boxels, so that the resulting geometry is simply manufactured in one piece.
5.8.1. Building with Cubic Boxels
In this disclosure, the focus is on cubic boxels. This particular design allows two boxels to be connected in many of the various ways listed above, some of which are illustrated by
While we will continue to illustrate the concept using the example of cubic boxels that tessellate space in the arrangement of a 3D Cartesian grid, other embodiments may offer other types of boxels and tessellations, such as tetrahedral and octahedral boxels that tessellate 3D space in the form of a tetrahedral-octahedral honeycomb structure (e.g., using their triangular walls as connectors), or any other set of 3D primitives that together allow tessellating 3-space.
Boxels can be added to a scene in many ways and using a wide range of input devices.
Some embodiments may make such a first isolated boxel align itself with some sort of global grid. Other embodiments, more aligned with the multiple-assemblies approach discussed throughout this disclosure, may use local grids instead, thus allowing for arbitrary placement of such first isolated boxels. However, such a boxel will then typically define a coordinate system for boxels subsequently added to it.
The main strength of boxels comes to fruition when additional boxels are added to an assembly and in particular to boxels added previously.
The interaction of adding a boxel to an existing assembly of boxels is as simple as it is for a number of reasons. First, the existing boxel defines a grid (or is itself part of a grid) and that uniquely reduces the act of placing a new boxel to the act of selecting a connector. This, however, can be accomplished very quickly even with a reasonably inaccurate input device (e.g., a touch screen), because a boxel surface will often be large enough to allow for easy targeting. Second, selecting a connector can typically be accomplished with a 2D pointing device, despite the scene being 3D, e.g., by means of ray casting the pointer into the scene, as someone of skill in the art would appreciate. This reduces 3D targeting to 2D targeting, making targeting easy. Third, if the new boxel is fully symmetric, i.e., if all surfaces are identical and themselves fully symmetric, then it does not matter how the boxel is rotated, so that attaching the boxel is all it takes.
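Purely for illustration, the ray-casting step just mentioned might be sketched as follows (a simple ray-marching approximation with assumed names; an exact grid traversal such as Amanatides-Woo would be more robust):

# The 2D pointer is unprojected into a ray; the ray is marched through the boxel
# grid; the first occupied cell plus the face it was entered through identify the
# clicked connector.
import numpy as np

def pick_connector(origin, direction, occupied, step=0.05, max_dist=100.0):
    """occupied: set of (i, j, k) grid cells that contain a boxel (unit cells).
    Returns (cell, face_normal) or None."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    p = np.asarray(origin, float)
    prev_cell = None
    for _ in range(int(max_dist / step)):
        cell = tuple(int(c) for c in np.floor(p))
        if cell in occupied:
            if prev_cell is None:
                return cell, None                  # ray started inside a boxel
            normal = tuple(int(a - b) for a, b in zip(prev_cell, cell))
            return cell, normal                    # face shared with the last empty cell
        prev_cell = cell
        p = p + d * step
    return None

# Example: a single boxel at the origin cell, pointer ray coming from above.
print(pick_connector([0.5, 0.5, 5.0], [0.0, 0.0, -1.0], {(0, 0, 0)}))
# -> ((0, 0, 0), (0, 0, 1)): the top face/connector was clicked.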
In order to allow adding boxels that are not fully symmetric, tools may either allow users to select a connector on the new boxel as part of the operation (e.g., using a higher-DoF controller as input device to rotate the new boxel into position before attaching it), or attach the new boxel via some default or guessed connector and in some default or guessed orientation and offer additional tools or mechanisms for rotating the boxel afterwards, so as to connect using other connectors or in different orientations.
Different embodiments of the system 100 may choose different strategies for making the union of two boxels work with the global grid. A particularly simple strategy aligns the planes in the centers of each boxel's walls with the grid, so that the walls of adjacent cubic boxels line up automatically.
As shown in
As illustrated by
Removing boxels may cause an assembly to become disconnected (in particular when users remove an entire plane). Some embodiments of the system 100 may offer tools that respond by indeed breaking the assembly into two or more smaller assemblies. Other embodiments of the system 100 may include tools that automatically shift the otherwise separated boxels towards the rest of the assembly and reconnect them there. A separate knife tool may instead be used to actually break down assemblies into multiple smaller assemblies.
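A minimal sketch (an assumption for illustration, not the disclosed implementation) of the check behind this behavior: after removing boxels, flood-fill the remaining occupied cells under face connectivity; more than one connected component means the assembly would fall apart:

from collections import deque

FACE_NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def connected_components(occupied):
    """occupied: set of (i, j, k) boxel cells. Returns a list of components (sets)."""
    remaining, components = set(occupied), []
    while remaining:
        seed = remaining.pop()
        component, frontier = {seed}, deque([seed])
        while frontier:
            x, y, z = frontier.popleft()
            for dx, dy, dz in FACE_NEIGHBORS:
                n = (x + dx, y + dy, z + dz)
                if n in remaining:
                    remaining.remove(n)
                    component.add(n)
                    frontier.append(n)
        components.append(component)
    return components

# Removing the middle boxel of a 3-boxel row leaves two components:
print(len(connected_components({(0, 0, 0), (2, 0, 0)})))  # 2 -> split or reconnect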
The boxel tools described in this disclosure may refer to a single boxel, as is the case in most of our illustrations. However, to enable more efficient manipulation, many embodiments may allow configuring tools to instead apply to larger “scopes”. This can be accomplished using a range of interfaces, including simple graphical user interface dialogs, such as the one shown in
(f) When brush scope has been selected, subsequent operations will affect the boxel/connector pointed to, as well as additional boxels in its immediate vicinity. Which boxels are affected is defined by a brush, which users typically select before performing the operation.
In the examples shown in
Different embodiments of the system 100 may use different ways of offering brushes to the user, e.g., in menus of sorts that contains swatches each of which represents a brush.
Different embodiments of the system 100 may include different ways for defining a brush. Simpler embodiments may allow users to generate a brush by defining a small number of parameters, e.g., radius in boxels and roundness, i.e., whether boxels on the perimeter are automatically rounded, etc. Some embodiments of the system 100 may allow users to enter these parameters using a collection of GUI widgets, such as combo boxes, numerical text entry fields, sliders, etc.
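For illustration only (names are hypothetical), a brush generated from just these two parameters might be represented as a set of grid offsets relative to the pointed-to boxel:

def make_brush(radius, rounded=True):
    """Return the brush as a set of (dx, dy) offsets relative to the pointed-to boxel."""
    offsets = set()
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            if rounded and dx * dx + dy * dy > radius * radius:
                continue  # outside the inscribed circle: skip to round the perimeter
            offsets.add((dx, dy))
    return offsets

square = make_brush(2, rounded=False)   # 5x5 block of boxels
disc = make_brush(2, rounded=True)      # same block with the corner boxels removed
print(len(square), len(disc))           # 25 13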
While it is conceivable to apply add boxel on a brush by indeed stacking the brush geometry onto the assembly pointed to, this may not always produce the best results. The interaction shown in
In addition, some embodiments of the system 100 may allow annotating brushes with additional parameters. A defined front boxel allows tools to rotate the brush during use so as to always have the front facing forward during a stroke/drag interaction or when tracing a path. This allows creating well-defined effects at the beginning and end of such paths.
Some embodiments of the system 100 may instead or in addition achieve a similar effect by offering a convolve tool. We can use
As illustrated by
There are many different ways of covering a path with boxels, such as Bresenham's line algorithm and its large number of variants. Preferably, we would pick a variant that results in a path of boxels connected under face connectivity, i.e., so that each boxel except the first and last has at least one of its up/down/left/right/front/back neighbors as predecessor and one as successor, which prevents the fabricated result from falling apart. Other embodiments use algorithms from 2D or 3D painting programs. Some embodiments may also allow stroking the path with a brush, using all the concepts disclosed earlier.
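One possible variant (a sketch under assumed names, not necessarily the preferred one, and simpler than Bresenham) walks from start to end one axis-aligned step at a time, always along the axis with the largest remaining distance, so that consecutive boxels always share a face:

def boxel_line(start, end):
    path, current = [tuple(start)], list(start)
    while tuple(current) != tuple(end):
        # pick the axis with the largest remaining distance and step one boxel
        deltas = [end[i] - current[i] for i in range(3)]
        axis = max(range(3), key=lambda i: abs(deltas[i]))
        current[axis] += 1 if deltas[axis] > 0 else -1
        path.append(tuple(current))
    return path

print(boxel_line((0, 0, 0), (3, 2, 0)))
# [(0,0,0), (1,0,0), (2,0,0), (2,1,0), (3,1,0), (3,2,0)] - every step shares a face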
The boxel concept offers an excellent mechanism for users to add functionality to their assemblies. As illustrated by
As illustrated by
Unlike traditional construction kits that put electronics in boxes, most embodiments of the present invention will optimize boxels for the context of their current assembly before fabricating them. For example, as already mentioned, boxels will generally not be fabricated as individual boxes, but as an assembly where only the outer walls will be produced. Also, a cubic battery boxel may deliver electricity to any of its six neighbors; when fabricated, however, only those connectors will be executed that actually connect to something. Also, some embodiments may hint at the presence of electric connectors between boxels during editing, yet run a single long cable pair through many boxels when fabricating, thus leaving out any electric plugs that users may assume connect the boxels (the same way that many embodiments will leave out the center walls between two boxels). Similarly, boxels may offer optional axles that only manifest themselves if something is connected to them.
Note that many boxels refer to an abstract concept, rather than to a specific implementation. Many embodiments will prefer this approach because the actual implementation may not be of importance to the users, so leaving out the detail may make for a cleaner user interface.
The same concept also holds for decorative boxels, such as boxels engraved with the depiction of one or more eyes, boxel with engraved patterns, boxels with cut patterns, etc.
Specific applications tend to be based on certain characteristic sets of boxels and optionally other assemblies, which we call boxel kits.
Mirrors: In order to allow users to design even faster, many embodiments tend to include components that help users replicate similar structures quickly.
Some embodiments may choose to consider one side of the mirror as the original and one as the copy. They proceed by deleting everything on the copy side and replacing it with a (mirrored) copy of the original side. In order to eliminate the need for users to make this choice, many embodiments will bypass this decision by instead considering both sides “original”, i.e., mirroring each side to the other side and uniting it with any geometry present there.
Some tools may execute the functionality of mirror boxels only once, typically at the moment the boxel is added or embedded. Such immediate tools create the copy, unite it with the assembly, (optionally show some brief (visual) explanation of what took place, e.g., by flashing the mirror plane) and be done. Changes applied to the assembly at a later point will then typically not be mirrored/may require embedding the mirror boxel again in order to update the sides. In contrast, the mirror boxels added or embedded using persistent tools stay in the assembly (often including their visualization), so that geometry added later will appear on all sides of the mirror at once.
Some embodiments of the system 100 allow manipulating mirrors just like any other piece of geometry, some even with the same tools, e.g., moving mirrors, rotating them, or even bending them. At the same time, some embodiments of the system 100 may include a good selection of different predefined mirror boxels to cover a wide range of cases without requiring such tweaking.
Different embodiments of the system 100 may choose different ways of implementing mirror boxels. Those built on a scene graph model (
Some embodiments will push the concept of copies/symbolic links further by extending the metaphor of a mirror to something more like what might be called a portal. As illustrated by
If multiple different originals should be used, a different icon should be used for each. Along the lines of what we said about computing unions earlier, the portal-based approach also works without a dedicated original connector, i.e., all connectors belonging to the same set then bear the same type of icon without any specific one being highlighted. Here, users may connect geometry to any connector and it will simply appear on all other sides in the respective orientation.
Connector-based symmetry tools can be implemented just like what we discussed for mirrors, i.e., by adding copies and/or symbolic links and transform nodes into the scene graph.
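Purely for illustration, the scene graph implementation just mentioned might look as follows (the node structure and names are assumptions, not the disclosed data model): the original sub-assembly appears a second time under a node whose transform is a reflection, so that later edits to the original automatically show up mirrored.

import numpy as np

class Node:
    def __init__(self, transform=None, geometry=None, children=None, link=None):
        self.transform = np.eye(4) if transform is None else transform
        self.geometry = geometry        # e.g. a mesh; None for pure group/transform nodes
        self.children = children or []
        self.link = link                # symbolic link to another node instead of a copy

def reflection_across_plane_x(x0):
    """4x4 transform mirroring geometry across the plane x = x0."""
    m = np.diag([-1.0, 1.0, 1.0, 1.0])
    m[0, 3] = 2.0 * x0
    return m

original = Node(geometry="left_half_mesh")
mirrored = Node(transform=reflection_across_plane_x(0.0), link=original)
assembly = Node(children=[original, mirrored])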
The approach of placing and transforming labeled connectors can be used to implement new symmetry tools and boxels, including the mirror tools discussed earlier. Some embodiments may offer tools that allow users to start with any assembly that features two or more connectors, place (2D or 3D) ‘icons’ onto at least two of the connectors, and transform them. The result may then be stored, shared, and used as a new type of mirror “boxel” (with optional visual effects added for illustration). In particular, all of the mirror boxels shown in
Two or more mirrors generally lead to a series of infinite reflections, thus an infinite amount of geometry may be produced. Infinite geometry is generally not tractable. However, there are special cases where systems can handle this case. With the mirror boxels shown in
Another approach to handling infinite reflections is to scale down subsequent copies, so that the scale of new geometry quickly drops below a threshold and can be eliminated from further processing. The result is fractals—a useful approach to creating complex geometry quickly. In some embodiments, the system 100 may include fractals by using boxels or other assemblies bearing connectors, with the icons on the clone connectors being scaled down.
Another approach is to limit geometry generation by (manually) adding a maximum recursion depth. Instead of this (recursive) approach to generating geometry, we may also generate geometry using an iterative approach, e.g., clone boxels that produce a defined number of copies.
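As an illustration of these termination criteria (a sketch with hypothetical parameter names, combining the scale-threshold and maximum-depth approaches described above), clone generation might proceed as follows:

def generate_clones(base_scale=1.0, scale_factor=0.5, offset=1.0,
                    max_depth=16, min_scale=0.05, position=0.0, depth=0):
    """Return a list of (position, scale) pairs for the generated copies.
    Generation stops when the maximum depth is reached or the scale of the
    next copy drops below the threshold."""
    if depth >= max_depth or base_scale < min_scale:
        return []
    placement = (position, base_scale)
    return [placement] + generate_clones(base_scale * scale_factor, scale_factor,
                                         offset, max_depth, min_scale,
                                         position + offset * base_scale, depth + 1)

print(generate_clones())
# [(0.0, 1.0), (1.0, 0.5), (1.5, 0.25), (1.75, 0.125), (1.875, 0.0625)]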
Another approach is to create clones until the generated geometry spatially collides with some end boxel. This allows users to control the generation of boxels, e.g., by creating infinite geometry, and then embedding an end boxel into it. By allowing users to tweak the end boxel, we can allow users to tween boxels, i.e., to create a sequence of boxels the tweaked parameter(s) of which form some sort of interpolation from clone boxel to end boxel.
Some embodiments of the system 100 help users create objects for a known specific purpose by offering collections of predefined assemblies, which we call building blocks, bundled into what we call kits.
To help users figure out how to proceed, building blocks may bear pairs of (2D or 3D) ‘icons’ conceptually similar to the ones we discussed in the context of clone connectors. These icons are hints to the user that suggest how building blocks can/should be connected. The two legs to the left of
The orientation of the ‘A’ furthermore defines a suggested transformation under which the legs should be mounted. Here these icons make sure that all legs will attach to the body in the same “downwards” orientation. However, icons can do more in that they may specify any of the transformations discussed earlier, such as translation, rotation, scale, skew, affine transformations, homographies, etc. Scale, for example, could be used to make legs shrink automatically when mounted to a small body or for robotic creatures with smaller front legs.
This particular example chooses a symmetric icon; this makes sense in the context of leg C, which is functionally symmetric along this dimension. With respect to legs A and B, this could be improved by using an asymmetric icon, which would then also give legs a default orientation, so that some of the legs would be mirrored automatically when mounted, so as to all (roughly) face the same direction.
Icons serve two purposes: first, to inform users about suggested options and, second, to help mount building blocks. The latter is particularly useful on systems operated using low-DoF input devices, where the icons can take care of what might otherwise require substantial tweaking. Still, icons define defaults only, intended to serve as suggestions, so that users may tweak the transformations of building blocks after mounting. Also, most embodiments will allow users to add or embed building blocks anywhere else as well, so as to produce an even larger set of choices.
Some embodiments may clone building blocks automatically when added to an assembly so as to always have a spare building block at hand; others may expect users to do this manually.
The icons on building blocks can be implemented similarly to how clone connectors can be implemented, and again the icons define the default transformation. Similarly, building blocks may be copied or linked, causing subsequent operations on one leg to affect just this one leg or all legs of this type.
One approach to creating kits is to allow users to simply create a scene consisting of multiple assemblies, optionally placing groups of icons onto connectors, and to save the scene.
Ultimately, specialized boxels are just special cases of building blocks and can be handled the same way. To make their use predictable, embodiments may want to make sure specialized boxels are mounted in a standardized way, such as in an orientation that makes whichever side is considered “functional” face away from the side mounted, which also assures that the functional side is visible afterwards, etc. This can be accomplished by having boxel designers place ‘icons’ (as described above) onto their creations. During mounting, these are then placed onto a default corresponding icon that is (automatically) placed on the clicked/tapped connector. If the user should be dissatisfied with this orientation, appropriate tools allow users to rotate boxels afterwards.
As shown in
Oftentimes, a single boxel may be enough to offer multiple functionalities (as long as these do not physically collide). It is of course possible to offer such compound functional boxels, e.g., a boxel that contains two servo motors. Since this tends to lead to an exponential inflation in the number of specialized boxels offered to the user, some embodiments will instead allow users to combine the functionality of two boxels into one. As illustrated by
Boxel embedding is a very powerful concept, because it elevates the concept of boxels from a rather specific construction kit to a general way of wrapping components, i.e., a way of implementing the aforementioned smart components. To allow boxels to play the role of generic smart components, the respective embodiments embed boxels such that they first strip the embedded boxel of the “blank” boxel it is built on, and then embed only what is left over, i.e., the functional geometry. When embedding into a boxel, this makes no difference, as the boxel geometry of the embedded boxel would not add anything when combining both blank boxels by, for example, computing their union. When embedding into the surface of a non-boxel assembly, however, stripping the embedded boxel of its box geometry is useful, because it prevents the box geometry from sticking out of the resulting assembly.
How the algorithm combines geometry depends on the underlying implementation of the boxel. Boxels may, for example, maintain one data structure to hold additive geometry (such as an STL file that describes the appearance of the boxel), in which case the additive geometries of both boxels would be combined (e.g., by performing a union/OR of their geometries). Similarly, the algorithm would unite the subtractive geometries of both boxels, so as to create the necessary cut-outs for both functionalities.
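Purely as an illustration of this additive/subtractive combination (the data model is an assumption, and the union/difference helpers are placeholders for whatever CSG backend an embodiment uses):

from dataclasses import dataclass, field

@dataclass
class BoxelGeometry:
    additive: list = field(default_factory=list)     # e.g. meshes to be materialized
    subtractive: list = field(default_factory=list)  # e.g. cut-outs, holes, pockets

def combine(a: BoxelGeometry, b: BoxelGeometry) -> BoxelGeometry:
    # unite the additive parts of both boxels, and likewise the subtractive parts
    return BoxelGeometry(additive=a.additive + b.additive,
                         subtractive=a.subtractive + b.subtractive)

def fabricate(g: BoxelGeometry, csg):
    """csg: any object providing union(meshes) and difference(a, b); the fabricated
    shape is the united additive geometry minus the united subtractive geometry."""
    solid = csg.union(g.additive)
    return csg.difference(solid, csg.union(g.subtractive)) if g.subtractive else solid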
Defining the smart component as a boxel in the first place is useful for two reasons. First, it allows instantiating its contents on its own, i.e., it allows placing the smart component into the scene by itself, because it now has its own assembly; by itself, the four-magnet smart component would fall apart. This matters, for example, when we want to offer smart components as part of kits. Second, the blank boxel serves as a geometric reference. By embedding into a boxel, the creator of the boxel demonstrates how to translate, rotate, tilt, etc. with respect to at least this one standard geometry. This defines a default transform that can often be applied as is when embedding.
This approach also makes it easy to define functional boxels. Users simply embed their functional contents into a default boxel, optionally place icons to define how the component is to be mounted to other boxels, and invoke some make boxel function that groups the result into a boxel and/or stores it as a single boxel.
Finally, some embodiments of the system 100 will implement the approach discussed above not based on boxels, but based on connectors, i.e., users create a functional connector by embedding into a connector-shaped plate; embedding the functional connector into an assembly produces largely the same effect. However, adding the functional connector to the scene results in just the plate being added to the scene.
The system 100 may offer a large number of such specialized boxel components. Many embodiments will therefore choose to handle these boxels not as part of the core system but as add-ons that can be added, modified, and deleted without recompiling or redeploying the core system. These are referred to as boxel assets. Assets can be stored in modular and easy-to-modify ways, e.g., as combinations of graphics, mark-up, and code, etc.; they can typically be loaded dynamically on demand, and if properly indexed they can be searched.
There are many ways to construct or allow users to construct boxel assets.
The asset may now be stored in a file system or asset database, shared, indexed, searched for, and most of all inserted into other 3D scenes, where it can now help users construct quickly.
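For illustration only (the file layout and field names below are assumptions, not a prescribed format), such an asset might be a small mark-up descriptor that references geometry files and can be loaded and searched dynamically:

import json, pathlib

def load_boxel_asset(path):
    """Read an asset descriptor such as:
    {"name": "servo boxel", "icon": "servo.png",
     "additive": "servo_body.stl", "subtractive": "servo_cutout.stl",
     "connectors": ["+x", "-x", "+y", "-y", "+z", "-z"],
     "tags": ["actuator", "electronics"]}
    """
    descriptor = json.loads(pathlib.Path(path).read_text())
    descriptor["path"] = str(path)   # remember where it came from
    return descriptor

def search_assets(assets, query):
    """Very small stand-in for an indexed search: match against name and tags."""
    q = query.lower()
    return [a for a in assets if q in a["name"].lower()
            or any(q in t.lower() for t in a.get("tags", []))]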
While most boxels fit into one grid, i.e., they simply continue the coordinate system they fit into, some embodiments may (also) offer boxels that deliberately break with the raster. This allows users to generate specific 3D structures and, more generally, to break out of the raster paradigm and instead implement aspects of vector-based/beam-based construction.
Mixing boxels of different sizes allows creating structures of varying levels of detail, thereby combining the benefits of small and large boxels, i.e., speed and detail. One possible approach is to combine boxels whose sizes form a geometric series, i.e., boxels measuring 1×1×1 of some unit, ½×½×½ units, and so on, as this allows creating structures shaped like octrees (
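The single subdivision step behind such octree-like structures might look as follows (an illustrative sketch; names are hypothetical):

def subdivide(origin, size):
    """origin: corner of the boxel, size: its edge length.
    Returns the eight (origin, size/2) children filling the same volume."""
    half = size / 2.0
    return [((origin[0] + dx * half, origin[1] + dy * half, origin[2] + dz * half), half)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

print(len(subdivide((0.0, 0.0, 0.0), 1.0)))  # 8 boxels of size 0.5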
As illustrated by
The most common workflow involving boxels is for users to start by designing their overall geometry using boxels, as this is very fast, and then to refine the resulting geometry. This workflow is already supported by the above description of how to create a boxel assembly from scratch, and by the fact that the other tools discussed earlier do not mind if the assembly being manipulated consists of boxels.
The challenge thus is how to continue applying boxel tools to a boxel assembly after it has been processed using non-boxel tools (or how to apply them even to assemblies created entirely using non-boxel tools). (Another reason for enabling this is that it allows using the same tools for boxel and non-boxel assemblies, resulting in a simpler user interface than if all tools had to be offered as generic and boxel-specific versions.)
If the existing base assembly offers a connector matching the boxel to be added (e.g., a plate of the right size in the case of cubic boxels), the add boxel tool can simply apply to it, and this connector thereby defines the coordinate system/grid for all subsequent boxel operations branching off it.
Otherwise, the existing base assembly does not offer a matching connector, i.e., there is a mismatch between the coordinate system/grid required by the boxel to be added and the coordinate system/grid suggested by the base assembly. Embodiments may handle this situation by supporting one or more of the following approaches.
First approach: butt both coordinate systems together. If the existing assembly offers a sub-assembly that generally allows mounting the boxel to be added, yet leaves one or more dimensions of how exactly to do so undefined, most embodiments will allow their users to still perform the operation by guessing the underdetermined parameters based on analyzing the base assembly or by reverting to default values, such as centering the added boxel, etc. Users may correct incorrect guesses or default values subsequently using appropriate tools, such as yaw tools and move tools.
For example, an embodiment may allow users to mount a cubic boxel to a large square plate, by rotating the boxel so as to align with the square plate in terms of yaw rotation. However, the exact position in terms of x/y translation of the boxel with respect to the large square plate would be unspecified. Some embodiments may here mount to a default position, such as the center of the plate or the position clicked or tapped by the user, etc., and users may tweak that position later.
Similarly, adding a cubic boxel to a round plate may leave (translation and) orientation unspecified. Again, the embodiment may pick a default (position and) orientation and users may tweak (position and) orientation manually afterwards.
Second approach: base coordinate system wins. In this approach, the system may modify the boxel to be added so as to fit the coordinate system defined by the connector of the base assembly. If the connector of the base geometry is a large square plate, the system may scale at least the two relevant dimensions of the boxel so as to fit that size (optionally also the third dimension, e.g., to preserve the boxel's aspect ratio, etc.). Following this approach, a user attempting to add a cube boxel to a round plate may find the boxel morphed into the shape of an appropriately sized cylinder before being mounted.
Third approach: boxel coordinate system wins. Change the geometry of the base assembly so as to fit the boxel. This is possible, but will probably find less application.
Finally, if the existing base assembly should not offer a connector and if even an improvised connection seems to result in a poor overall construction (e.g., arguably, mounting a cubic boxel to a living-hinge base geometry), some embodiments will let the add boxel operation indeed fail (and instead prompt the users to first create some sort of connector). Other embodiments may go ahead and add the boxel by either automatically mounting a connector into the base geometry or by simply creating the poorly constructed connection (if only for the purpose of giving the user the chance to understand the issue, before undoing the operation, fixing the issue, and redoing it).
To allow users to continue applying boxel tools later on, some embodiments offer tools to address specific boxel subsets.
There are many different ways of storing boxel-based assemblies in computer memory. While it is possible to store boxels one at a time in an arbitrary data structure (array, list, look-up table, hash, scene graph, etc.) many embodiments will try to capture the overall structure of each assembly in order to achieve a data structure that makes it computationally inexpensive to modify large numbers of boxels at once.
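As a simple baseline for illustration (not necessarily the approach referenced in the following figure; all names are assumptions), boxels might be stored in a dictionary keyed by their integer grid coordinate, which keeps neighbor lookups constant-time and makes bulk edits a matter of iterating the keys:

class BoxelStore:
    def __init__(self):
        self.cells = {}                      # (i, j, k) -> boxel type / payload

    def add(self, cell, boxel_type="blank"):
        self.cells[cell] = boxel_type

    def remove(self, cell):
        self.cells.pop(cell, None)

    def translate_all(self, offset):
        """Bulk edit: shift the whole assembly on the grid in one pass."""
        dx, dy, dz = offset
        self.cells = {(i + dx, j + dy, k + dz): t
                      for (i, j, k), t in self.cells.items()}

    def neighbors(self, cell):
        i, j, k = cell
        steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        return [(i + di, j + dj, k + dk) for di, dj, dk in steps
                if (i + di, j + dj, k + dk) in self.cells]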
One particularly efficient approach is shown in
(Many embodiments will choose to show boxel boundaries all the time or at least when the user has selected a boxel tool, as seeing boxel boundaries can help users make sense of the geometry and target the right connectors in subsequent operations. Some embodiments will offer the option to show the boxel grid in the fabricated result, e.g., by engraving otherwise invisible boxel boundaries.)
Boxels also allow users to sculpt terrain or entire 3D models based on a 2D or 3D raster.
Other smoothing tools may offer different types of smoothing operations, such as smooth curves in one dimension. Someone skilled in the art will appreciate that these can be implemented using Bezier or B-spline interpolation. On 2-3 axis subtractive machines, such as laser cutters, the result can be implemented, for example, using long strips of living hinges. Other smoothing tools may locally subdivide boxels into smaller and smaller boxels, the length of which is determined by (linear, bilinear, bicubic, etc.) interpolation between the length of the original-size boxels. Yet other tools may smooth by interpolating in 2D, e.g., using NURBS. On 2-3 axis subtractive machines, such as laser cutters, the result can be implemented, for example, by placing a 2D-stretched (auxetic) façade over the top surface.
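Purely as an illustration of the "subdivide and interpolate" variant just described, restricted to one dimension (function names are hypothetical): each original boxel column is split into several thinner columns whose heights are linearly interpolated between the original column heights.

import numpy as np

def smooth_heights(heights, factor=4):
    """Subdivide a row of boxel column heights and linearly interpolate between them."""
    x = np.arange(len(heights))
    fine_x = np.linspace(0, len(heights) - 1, (len(heights) - 1) * factor + 1)
    return np.interp(fine_x, x, heights)

print(smooth_heights([1, 4, 2], factor=2))  # [1.  2.5 4.  3.  2. ]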
Many embodiments will allow users to edit smooth terrain structures as well, e.g., using the same modeling tools as before smoothing.
This sculpting process can be applied to multiple or even all sides of a boxel assembly, allowing users to sculpt 3D structures. Users may model an approximation of a sphere, for example, by starting with a multi-boxel cube and successively pulling out surface centers and pushing in corners.
Some embodiments may offer additional tools for shaping boxels, typically to shape the façade of a boxel assembly.
Brushing the perimeter again (c) causes the tool to implement the next level of roundness, here, for example, rounded corners implemented as living hinges.
Brushing the perimeter yet again (d) causes the tool to implement the yet next level of roundness, here, for example, rounded corners that start to cut into the immediately adjacent boxels along the edge. In the shown case this results in a cylindrical assembly, so no more rounding can be achieved and additional applications of the round tool would have no effect; for larger assemblies the process could be continued though.
Instead, brushing the surface on the inside of the assembly (e) causes the inside to be rounded (e.g. again using prisms then quarter circles, etc.)
Alternatively, clicking/tapping individual edges will round that particular edge (which most embodiments will implement by allowing users to click/tap close to the edge, e.g., based on a Voronoi tessellation of the screen surface area of the assembly (and some surrounding blank space) into regions, one per edge).
Some embodiments of the system 100 simply increment some “roundness” counter associated with each boxel edge and look up an associated rounding style from an array, look-up table, or similar. In order to help users achieve a homogeneous look, other embodiments will offer tools that increment only the first boxel edge they encounter and will increment subsequent edges only if that would get them to the same level of roundness as the first one (or, in yet another version of the tool, to the same or a lower level of roundness).
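For illustration only (the style names and variables below are hypothetical), such a per-edge roundness counter with a style look-up table might be sketched as:

ROUNDING_STYLES = ["sharp", "miter", "living_hinge", "quarter_circle"]
edge_roundness = {}   # edge id -> counter

def apply_round_tool(edge_id):
    # advance the edge to the next rounding style, clamping at the last entry
    level = min(edge_roundness.get(edge_id, 0) + 1, len(ROUNDING_STYLES) - 1)
    edge_roundness[edge_id] = level
    return ROUNDING_STYLES[level]

print(apply_round_tool("edge_7"))  # miter
print(apply_round_tool("edge_7"))  # living_hinge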
What this particular round tool accomplishes by means of multiple applications, other embodiments will accomplish with multiple tools, such as a separate miter edge tool, a rounded edge tool, and so on.
The create bend tool allows creating curved sub-assemblies. As shown in
The boxel clone tool is similar to the add boxel tool in that it allows adding additional boxels to an assembly. However, the boxel clone tool adds boxels of the type the user is building on, i.e., it proceeds as follows: user pointing input, determine which connector was clicked or tapped, determine the “reference” boxel the connector belongs to, determine the type of the reference boxel, create a new boxel of that type, give that new boxel the same orientation as the reference boxel, translate it so as to be adjacent to the clicked or tapped connector, attach the new boxel to the reference boxel. For asymmetric boxels, this may try to mount incompatible connector types to each other. To overcome this, the boxel clone tool may mirror the new boxel before attaching it. Clone boxel tools may support all the additional interactions discussed earlier, such as dragging, painting, or brushing. The clone tool thereby, to a certain extent, generalizes the concept of boxels in that it allows picking a wide range of assemblies and building with them in a boxel-like fashion.
Consequently, the flowchart of the clone boxel tool is identical to that of the add boxel tool, except that add boxel is replaced with clone boxel.
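For illustration only, the flow just described might be sketched as follows (the data model and helper names are assumptions, not the disclosed implementation):

from dataclasses import dataclass

@dataclass
class Boxel:
    type: str
    cell: tuple                     # (i, j, k) grid position
    orientation: tuple = (0, 0, 0)  # same representation as the reference boxel
    symmetric: bool = True

def clone_boxel(assembly, reference: Boxel, connector_normal):
    """assembly: dict mapping cells to Boxels; connector_normal: face of the
    reference boxel that was clicked, e.g. (0, 0, 1) for its top connector."""
    new_cell = tuple(c + n for c, n in zip(reference.cell, connector_normal))
    new_boxel = Boxel(reference.type, new_cell, reference.orientation, reference.symmetric)
    if not new_boxel.symmetric:
        pass  # a real tool would mirror the boxel here to mate compatible connectors
    assembly[new_cell] = new_boxel  # attach the clone next to the clicked connector
    return new_boxel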
The 3D editor 100 described above can be provided to the user in various forms and embodiments. The 3D editor 100 can be stand-alone or integrated into a system. The 3D editor 100 may be implemented in various forms, including, but not limited to, a native application, a networked application, an app, a web app, etc. (when we say “app”, we refer to any of these).
The shown embodiment integrates the editor with a central location in which 3D models are stored and managed (aka a repository). In this particular example, editor and repository may focus exclusively on specific types of models (e.g., 3D models) for a specific type of machine (such as 3-axis laser cutting of flat sheets), but other embodiments may offer different selections.
There are many ways how embodiments may integrate the editor into an app. The editor may be linked from the home/landing page, can be started from a menu, can be visible by default, and so on. The same holds for the repository, which may be linked from the home/landing page, can be started from a menu, can be visible by default, be invoked through a search function, and so on.
The detail view may perform one of several different routines featuring one or more models.
This particular embodiment of the detail view may be used to perform some or all of the following functions (other embodiments may choose to perform this using multiple elements or implement only a subset). (1) Get users interested (in video games, this is called attract mode; see en.wikipedia.org/wiki/Glossary_of_video_game_terms). (2) Contents: over time it may show one or more actual models. (3) It teaches users the use of tools and how to create certain models. (4) It may serve as an editor. If users bring the detail view into focus (e.g., move the pointer over the detail view area, touch the detail view area, or focus on the detail view in a virtual reality view, etc.), this embodiment may allow them to edit the model. In one embodiment, the action in the detail view may simply stop, allowing users to take it from there and make their own objects.
This particular embodiment also integrates the detail view (in particular the editor) with the overview (in particular the repository) in that users can transfer contents from one to the other. For example, the system 100 may allow users to select objects from the repository to be loaded into the detail view/editor. Selection may take place by clicking a model in the overview/repository, by tapping an associated button, by dragging it into the detail view, selecting a function from a menu, performing a gesture, etc. The loaded contents may replace the current contents or it may be added to the contents as additional contents, next to whatever is being worked on. Note how this also helps create content from multiple existing models (aka “remixing”), in that users may drag in multiple models to then assemble them or their parts into a new model.
Similarly, objects may move from the overview/repository into the detail view, e.g., to demonstrate this object and/or the editing process behind it.
Similarly, objects may move from the detail view to the overview, e.g., to offer one or more demonstrated objects to the user (as shown in
Similarly, users may drag models from the 3D editor 100 into the repository. This may include models the users edit, i.e., new, original contents. When this happens, the system 100 may display the model in the overview (e.g., by moving existing contents aside or by removing a model from this view). The 3D editor 100 may also save the model more permanently, e.g., on the server with the other models or locally on the user's computer. As part of this, the system 100 may ask the user to log in or create an account. (File formats may contain 3D geometry and/or 2D geometry and/or target machine-specific information; these may be saved in the same file.)
While the above illustrates this using the example of 3D models for laser cutting, the entire process around the home screen/landing page, detail view, and overview may be performed with models for other fabrication processes and/or generic 3D editing.
When switching from an imported 2D layout, parts may animate towards their positions in the 3D model. Similarly, when exporting the 2D layout, parts may animate into their export layout.
Computer system 510 may be coupled via bus 505 to a display 512 for displaying information to a computer user. An input device 511 such as a keyboard, touchscreen, and/or mouse is coupled to bus 505 for communicating information and command selections from the user to processor 501. The combination of these components allows the user to communicate with the system. In some systems, bus 505 represents multiple specialized buses, for example. The user may be, for example, the User or System Administrator.
Computer system 510 also includes a network interface 504 coupled with bus 505. Network interface 504 may provide two-way data communication between computer system 510 and a local network 520. The network interface 504 may be a wireless or wired connection, for example. Computer system 510 can send and receive information through the network interface 504 across a local area network, an Intranet, a cellular network, or the Internet, for example. One example implementation may include a browser executing on a computing system 510 that renders interactive presentations that integrate with remote server applications as described above. In the Internet example, a browser, for example, may access data and features on backend systems that may reside on multiple different hardware servers 531-535 across the network. Servers 531-535 and server applications may also reside in a cloud computing environment, for example. Servers 531-535 may execute the Algorithm and the 3D editor system 100 and store the associated code and the databases described above. Servers 531-535 may have a similar architecture as computing system 510.
Reference in the specification to “one embodiment”, “an embodiment”, “various embodiments” or “some embodiments” means that a particular feature, structure, or characteristic described in connection with these embodiments is included in at least one embodiment of the invention, and such references in various places in the specification are not necessarily all referring to the same embodiment.
Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.
However, all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but not limited to, any type of disk including magnetic memory, solid state memory, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.
In addition, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims.
As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “on” includes “in” and “on” unless the context clearly dictates otherwise.
While particular embodiments and applications of the present invention have been illustrated and described herein, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatuses of the present invention without departing from the spirit and scope of the invention as it is defined in the appended claims.
All publications, patents, and patent applications cited herein are hereby incorporated by reference in their entirety for all purposes to the same extent as if each individual publication, patent, or patent application were specifically and individually indicated to be so incorporated by reference.
This application claims the benefit under 35 USC § 119 to U.S. provisional patent application Ser. No. 62/363,735, filed on Jul. 18, 2016, which is incorporated by reference herein in its entirety. This application claims the benefit under 35 USC § 119 to U.S. provisional patent application Ser. No. 62/517,898, filed on Jun. 10, 2017, which is incorporated by reference herein in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2017/042686 | 7/18/2017 | WO | 00 |