Examples of the present disclosure generally relate to projection systems and methods. More particularly, examples of the present disclosure relate to a projection system, and a software application and/or a method of projecting an image.
Projection systems used to project an image have been used in a number of applications. For example, movie theaters commonly use video projectors to project a sequence of images on a screen. Lithography can use a projection system to expose photosensitive material to an image to thereby pattern the photosensitive material. More recently, three-dimensional (3D) printing can use a projection system to project an image of each slice of the object being printed into a photosensitive liquid. The characteristics of projection systems used in these examples can vary depending on the requirements of the application.
In some examples, a system is provided. The system includes a pixelated light source and an optical relay. The pixelated light source includes an array of spatial light modulator pixels. Each spatial light modulator pixel is individually controllable to selectively project a beam of light. The optical relay includes an optically reflective surface and an actuator coupled to the optically reflective surface. The actuator is configured to move the optically reflective surface. The pixelated light source and the optical relay are configured such that one or more beams projected from the pixelated light source are reflected off of the optically reflective surface and form an image of the optical relay in a focal plane. Movement of the optically reflective surface causes the respective beams to be at varying locations in the focal plane.
In other examples, a method is provided. A convex optically reflective surface is moved. One or more beams are projected from a pixelated light source based on a position of the convex optically reflective surface. The one or more beams are reflected off of the convex optically reflective surface towards a target. Movement of the convex optically reflective surface varies respective one or more angles of reflection of the one or more beams reflected off of the convex optically reflective surface.
In yet other examples, a non-transitory storage medium stores instructions. When the instructions are executed by a processor, the execution causes the processor to perform operations comprising: controlling movement of an actuator, the actuator being connected to a convex optically reflective surface; receiving positional information of the convex optically reflective surface from an encoder; and controlling a pixelated light source to selectively project one or more beams based on the positional information. The one or more beams are incident on the convex optically reflective surface. Movement of the convex optically reflective surface varies respective one or more angles of reflection of the one or more beams reflected off of the convex optically reflective surface.
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to described examples, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only examples and are therefore not to be considered limiting of its scope, and may admit to other equally effective implementations.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements and features of one example may be beneficially incorporated in other examples without further recitation.
Examples described herein provide a projection system, and a software application and/or a method of projecting an image. The projection system can be static relative to the surface or substrate on which the image is projected. The projection system can include a convex reflective surface that is rotated off-axis in a way that beams of light projected onto the convex reflective surface are reflected to varying locations. The varying locations can be along a path, e.g., respective circular paths, and the locations can form an address grid that is used to form one or more bitmaps used to project the image. In some examples, the address grid and operation of the projection system can permit redundancy, binary and/or greyscale imaging, imaging with different wavelengths of light, sub-pixel edge control, and/or other benefits.
Various different examples are described below. Although multiple features of different examples may be described together in a process flow or system, the multiple features can each be implemented separately or individually and/or in a different process flow or different system. Additionally, various process flows or operations are described as being performed in an order; other examples can implement process flows or operations in different orders and/or with more or fewer operations.
Projection System
The pixelated light source 102 includes an array of spatial light modulators. Each spatial light modulator may be referred to as a pixel, and hence, the pixelated light source 102 can be said to include an array of pixels. The array of spatial light modulators can include, but is not limited to, digital micromirrors, liquid crystal displays (LCDs), liquid crystal over silicon (LCoS) devices, an array of light emitting diodes, an array of vertical cavity surface-emitting laser (VCSEL) devices, ferroelectric liquid crystal on silicon (FLCoS) devices, and microshutters. Each spatial light modulator is individually controllable and is configured to selectively project a beam. The pixelated light source 102 can include one or more discrete devices that can collectively form the array of spatial light modulators. The size of the array of spatial light modulators (e.g., the number of pixels) may vary based on the size and resolution of the image to be projected incident on the substrate 107, for example. In some examples, the pixelated light source 102 is or includes a digital micromirror device (DMD) having an array of spatial light modulators that is of a given size, e.g., 1080×1920.
The relay 104 includes a lens 109, a concave reflective surface 110, and a convex reflective surface 112. The relay 104 has an object of the relay, which in this example, is the light source 102, and has an image of the relay, which in this example, is an object plane of the projection lens 106. The image of the relay in this example is also an intermediate focal plane. The image of the relay can be directed at a target, which in this example is the projection lens 106 or other lens, and in other examples, can be the substrate 107 (e.g., without a lens or other optics intervening between the relay 104 and the substrate 107). The lens 109 of the relay 104 may be any appropriately shaped transparent material, such as a glass lens. The concave reflective surface 110 and the convex reflective surface 112 may each be a mirror. In some examples, the relay 104 is an Offner relay. The lens 109 and concave reflective surface 110 can be physically mounted to, e.g., a housing of the relay 104 to maintain distances and operability of the relay 104 as described herein. The dimensions and shapes of the lens 109, concave reflective surface 110, and convex reflective surface 112 can be determined based on various considerations of the projection system 100, including aspects described herein, form factors, focal points, etc. In some examples, the convex reflective surface 112 can have a shape corresponding to a portion of an ellipsoid. In some examples, the convex reflective surface 112 can have a non-uniform radius of curvature.
The convex reflective surface 112 is attached to an axle 114 at a connection point 116 on a backside of the convex reflective surface 112. An axis 118 is normal to a tangential surface of the convex reflective surface 112 at the connection point 116. The axle 114 is at a non-zero angle 120 to the axis 118 (e.g., also referred to herein as “off axis”). In some examples, the connection point 116 is a center of mass of the convex reflective surface 112 along directions perpendicular to the axle 114.
An actuator 122 (e.g., a motor) is attached to the axle 114. The actuator 122 can be physically mounted to, e.g., the housing of the relay 104 like the lens 109 and concave reflective surface 110. The actuator 122 is configured to rotate 124 the axle 114. Rotation 124 of the axle 114 causes the convex reflective surface 112 to rotate around the connection point 116. If the connection point 116 is a center of mass along directions perpendicular to the axle 114, vibrations caused by the rotation 124 of the axle 114 and convex reflective surface 112 can be reduced or avoided. In some examples, the actuator 122 is configured to continuously rotate the axle 114 when the actuator 122 is driving the rotation, while in other examples, the actuator 122 is configured for step-wise rotation of the axle 114 when the actuator 122 is driving the rotation.
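The effect of rotating the convex reflective surface 112 about an off-axis axle can be sketched numerically. The following minimal model (an illustrative sketch, not part of the disclosure; all function names are assumptions) takes the axle along +z, lets the surface normal at the connection point precess around the axle at the non-zero angle 120, and applies the standard reflection formula r = d − 2(d·n)n. The reflected direction then sweeps a cone of half-angle twice the tilt, which traces a circular path in a downstream focal plane.

```python
import math

def rotate_normal(tilt_rad, phase_rad):
    """Unit normal of the mirror as the axle rotates.

    The axle is taken along +z; the surface normal at the connection
    point is tilted from the axle by `tilt_rad` (the off-axis angle)
    and precesses around it with rotation phase `phase_rad`.
    """
    s, c = math.sin(tilt_rad), math.cos(tilt_rad)
    return (s * math.cos(phase_rad), s * math.sin(phase_rad), c)

def reflect(d, n):
    """Reflect direction d off a surface with unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A beam arriving along -z; as the axle turns, the reflected beam
# sweeps a cone of half-angle 2*tilt around +z.
tilt = math.radians(1.0)
incoming = (0.0, 0.0, -1.0)
for phase_deg in (0, 90, 180, 270):
    r = reflect(incoming, rotate_normal(tilt, math.radians(phase_deg)))
    # the transverse components (r[0], r[1]) rotate with the phase
```

At phase 0 the reflected direction is (sin 2·tilt, 0, cos 2·tilt), consistent with the angle-doubling behavior of specular reflection.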
An encoder 126 is positioned, in the illustrated example, to view the backside of the convex reflective surface 112 and determine a rotational position of the convex reflective surface 112. As illustrated, the encoder 126 is disposed on the actuator 122 and is positioned with a view 128 of the backside of the convex reflective surface 112. The encoder 126 can be positioned differently in other examples. Notches or other identifying marks can be formed in or on the backside of the convex reflective surface 112. The encoder 126 can view the notches or identifying marks and process the view to determine the rotational position of the convex reflective surface 112 at any given instance. In other examples, the encoder 126 can be positioned to view a side of the axle 114 or a lateral edge of the convex reflective surface 112, where, in such examples, the side of the axle 114 or lateral edge of the convex reflective surface 112 has notches or other identifying marks, respectively.
The controller 108 includes a processor 160, memory 162, storage 164, input/output (I/O) interfaces 166, support circuits 168, and an interconnect 170. Each of the processor 160, memory 162, storage 164, I/O interfaces 166, and support circuits 168 is connected to the interconnect 170. The I/O interfaces 166 are communicatively coupled to the pixelated light source 102 via a first data path 167, the actuator 122 via a second data path 169, the encoder 126 via a feedback path 171, and any other I/O devices (not illustrated) (for example, keyboard, display, touchscreen, and mouse devices). The first data path 167, second data path 169, and feedback path 171 are described and illustrated as separate, although in some examples, these paths may be or may be considered a same data path.
The processor 160 is configured to retrieve and execute instruction code stored in the memory 162 and/or storage 164. The processor 160 may be one of any form of processors, such as a general purpose processor, a central processing unit (CPU), a graphics processing unit (GPU), or the like. The processor 160 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, etc.
The instruction code includes a projection control application. The projection control application, when executed by the processor 160, causes the processor 160 to control the actuator 122 via the second data path 169 and the pixelated light source 102 via the first data path 167 and to receive positional information from the encoder 126 via the feedback path 171. The projection control application can be based on application data, such as a bitmap file described in further detail below. The instruction code can further include a generation application that, when executed by the processor, generates application data on which the projection control application operates (e.g., a bitmap file) from another form of application data (e.g., another image file). Similarly, the processor 160 may be configured to cause the application data to be stored in the memory 162 and to retrieve the application data stored in the memory 162.
The processor 160 can control the pixelated light source 102 and actuator 122 via communications via the interconnect 170, I/O interfaces 166, and data paths 167, 169 and/or may receive data from the encoder 126 via communications via the interconnect 170, I/O interfaces 166, and feedback path 171. The interconnect 170 is operable to transmit instruction code and application data between the processor 160, I/O interfaces 166, storage 164, and memory 162.
The memory 162 is generally included to be representative of any non-transitory memory (e.g., random access memory (RAM) (like static RAM and dynamic RAM), read-only memory (ROM), etc.), which may be volatile and/or non-volatile, and, in operation, is operable to store one or more software applications and data for use by the processor 160. Storage 164 is generally included to be representative of any non-transitory, non-volatile memory, such as a hard disk drive, solid-state storage drive (SSD), etc. Although shown as a single unit, the storage 164 may be a combination of fixed and/or removable storage devices, such as fixed disk drives, floppy disk drives, hard disk drives, flash memory storage drives, tape drives, removable memory cards, CD-ROM, optical storage, etc. configured to store non-volatile data. The memory 162 may store instruction code (e.g., as described above) that is capable of being executed by the processor 160. In some examples, the instruction code may additionally and/or alternatively be stored in the storage 164.
The support circuits 168 are also connected to the interconnect 170 for supporting the processor 160 and/or are connected to other components for support thereof. The support circuits 168 may include a cache, power supplies, clock circuits, input/output circuitry, and the like.
The controller 108 can cause the actuator 122 to start, continue, and stop rotation of the axle 114 and convex reflective surface 112. The controller 108 can receive positional information from the encoder 126 relating to the rotational position of the convex reflective surface 112. The controller 108 can control the pixelated light source 102 to project light from various spatial light modulator pixels. The controller 108 can control which spatial light modulator pixels project light and when, which may be based on the positional information from the encoder 126.
Before describing the operation of the projection system 100 of
When the pixelated light source 102 is self-emitting, such as a pixelated light emitting diode (LED) array or a pixelated VCSEL array, the wavelength of the light of the pixelated light source 102 can be selected based on the application. For example, the wavelength(s) may be red, green, or blue when using multiple LEDs and multiple relays in a theater application, or red, green, and blue when using a single self-emissive pixelated light source 102 and a single relay in a theater application. As another example, the wavelength may be less than 450 nm, such as near ultraviolet, for photolithography or 3D printing applications. When the pixelated light source 102 includes a reflective device (such as a DMD or LCoS device) or a shutter device, an illuminator may be implemented to illuminate the pixelated light source 102, where the light source of the illuminator may be an LED or a laser capable of producing light having a predetermined wavelength. In some examples, the light source can have a wavelength of red, green, or blue visible light or can have multiple wavelengths of light, such as red, green, and blue light. In some examples, the predetermined wavelength is in the blue or near ultraviolet (UV) range, such as less than about 450 nm. The wavelength of the light generated by the light source can be based on the application of the projection system 100, such as dependent on a response of a photosensitive material in lithography or 3D printing or on visible light to be projected in a movie theater. The illuminator may be configured to focus the beam generated by the light source, and passed through the aperture and lens, onto the spatial light modulator pixel. The projection lens 106 following the relay 104 may have any magnification suitable for the given application.
In operation, based on control by the controller 108, each spatial light modulator pixel is at an "on" position or an "off" position. During operation, a beam is produced by the light source and is directed, through the illuminator containing an aperture and lens, to the pixelated light source 102. In some cases, optical design and/or form factor compaction may require a prism assembly or mirror. The beam is directed from and focused by the illuminator to the spatial light modulator pixel of the spatial light modulator pixel assembly. When the beam reaches the spatial light modulator pixel, the spatial light modulator pixel in an "on" position reflects the beam through the relay 104 and subsequently through the projection lens 106. As used herein, a beam projected from the pixelated light source 102 for a limited duration may also be referred to as a "shot." The spatial light modulator pixels that are at an "off" position reflect the beam to the light dump instead of projecting the beam from the pixelated light source 102.
In some examples, a spatial light modulator pixel assembly is part of a digital micromirror device (DMD) that includes mirrors, which are the spatial light modulator pixels. In some examples, the DMD includes 1920×1080 mirrors. In some examples, the DMD includes more than about 4,000,000 mirrors.
One or more beams 150 can be projected from the pixelated light source 102 (e.g., from the object of the relay). In the illustrated example, the pixelated light source 102 also includes a transparent material through which the one or more beams 150 are projected. The transparent material can be, e.g., part of a protective housing of the pixelated light source 102 and can be glass. The transparent material has an interior surface 172 (e.g., interior to the housing) and an exterior surface 174. The one or more beams 150 are projected through the transparent material in a direction such that the one or more beams 150 are incident on the interior surface 172 and are subsequently incident on the exterior surface 174. The beams 150 are then passed through the lens 109. The lens 109 has a first surface 176 on which the beams 150 are incident and a second surface 178 on which the beams 150 are subsequently incident.
The beams 150 are incident on and reflected from a first surface 180 of the concave reflective surface 110 towards the convex reflective surface 112 as beams 152. The beams 152 are incident on and reflected from the convex reflective surface 112 towards the concave reflective surface 110 as beams 154. The beams 154 are incident on and reflected from a second surface 182 of the concave reflective surface 110 towards the projection lens 106 as beams 156. It is noted that the first surface 180 and second surface 182 may be a same surface but are identified separately for ease of subsequent description. The beams 156 are then passed through the lens 109. The lens 109 has a first surface 184 on which the beams 156 are incident and a second surface 186 on which the beams 156 are subsequently incident. It is noted that the first surface 176 and second surface 186 may be a same surface, and that the second surface 178 and first surface 184 may be a same surface. These surfaces are identified separately for ease of subsequent description. The beams 156 are then incident on the projection lens 106, in which an intermediate focal plane is disposed. This intermediate focal plane is also the image of the relay and object of the projection lens 106. The image of the relay can be a 1× magnification of the object of the relay in some examples.
The image of the relay is then projected by the projection lens 106 onto the substrate 107 as an image of the projection lens. The projection lens 106 can magnify or shrink the image of the relay to the image of the projection lens. For example, for a lithography process, the image of the relay can be shrunk to the image of the projection lens (e.g., which shrinks a pixel size), and the projection lens 106 can have a magnification of less than 1, and more particularly, can have a magnification in a range from 0.2× to 0.5×. For a 3D printing application, the image of the relay can be maintained or increased in size to the image of the projection lens, and the projection lens 106 can have a magnification equal to or greater than 1, and more particularly, can have a magnification that is greater than 10×, e.g., to enable part sizes 10× larger than the light source 102. For a theater application, the image of the relay can be increased in size to the image of the projection lens, and the projection lens 106 can have a magnification equal to or greater than 1, and more particularly, can have a magnification that is greater than 50× (e.g., in a range from 50× to 500×), e.g., for magnifying an image onto a large screen. The relay 104 can magnify, maintain a same image size, or shrink the object of the relay (e.g., from the light source 102) to be the image of the relay.
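The combined scaling above reduces to a product of magnifications. The short sketch below (illustrative only; the pixel size and magnification values are assumptions, not from the disclosure) works through the arithmetic for the lithography and theater cases, assuming a 1× relay as described earlier.

```python
def projected_pixel_size(source_pixel_um, relay_mag, projection_mag):
    """Pixel size at the substrate: source pixel size scaled by the
    relay magnification and the projection-lens magnification."""
    return source_pixel_um * relay_mag * projection_mag

# Illustrative numbers: a 10 um source pixel through a 1x relay.
litho = projected_pixel_size(10.0, 1.0, 0.3)      # 3.0 um (shrunk for lithography)
theater = projected_pixel_size(10.0, 1.0, 100.0)  # 1000.0 um, i.e., 1 mm on a screen
```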
Various spatial light modulator pixels can be turned on and off during operation (e.g., at the direction of the controller 108) to project the beams 150. Further, the convex reflective surface 112 can be rotated by the actuator 122 rotating 124 the axle 114 (e.g., at the direction of the controller 108) during operation. The pixelated light source 102, lens 109, concave reflective surface 110, and projection lens 106 can remain static or unmoving during operation. More generally, the projection system 100 can remain static or unmoving relative to the image projected at the substrate 107 (e.g., a photosensitive material in 3D printing or lithography, or a screen surface in a movie theater).
Rotation of the convex reflective surface 112 can permit beams 154 to be incident on different locations on the concave reflective surface 110 and, thereby, beams 156 to be incident at different locations in the image of the relay (e.g., in the intermediate focal plane).
The paths 206-1, 206-2 are generally circular and encircle the respective path centers 202-1, 202-2. In some examples, some distortion between pixels and distortion of an individual path may occur due to changes of angles of incidence possibly being non-uniform resulting from geometries of various reflective surfaces. Small offsets can maintain low distortions and low levels of wavefront errors. For example, for a 20 mm sized DMD, with 2560×1600 pixels, the image field can be displaced by up to 10 pixels without significant distortion.
The paths 206-1, 206-2 have respective radii 208 from the path centers 202-1, 202-2, each radius 208 being half a diameter of the respective path 206-1, 206-2. A pitch 210 is between the path centers 202-1, 202-2. The radii 208 for the beams can generally be equal, with some differences due to possible distortion noted previously. The variable placement beam centroids 204-1, 204-2 can move in parallel along the paths 206-1, 206-2 such that the pitch 210 is maintained along the paths 206-1, 206-2 at any given instance. Any path 206-1, 206-2 can encircle any number of other path centers (e.g., radii 208 of the paths 206-1, 206-2 can be greater than a pitch 210 between path centers 202-1, 202-2). A ratio (r/P) of the radius 208 to the pitch 210 can affect various patterns of paths of beams, some of which are illustrated and described below.
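The parallel motion of the beam centroids can be verified with a few lines of arithmetic. In the sketch below (an illustrative model, not from the disclosure; the numeric values are assumptions), two centroids move on circular paths of the same radius and phase, and the distance between them equals the pitch of their path centers at every rotational phase, including when r/P > 1 so that each path encircles the other path center.

```python
import math

def centroid(center, radius, phase_rad):
    """Beam centroid on its circular path at a given rotation phase."""
    cx, cy = center
    return (cx + radius * math.cos(phase_rad), cy + radius * math.sin(phase_rad))

centers = [(0.0, 0.0), (1.0, 0.0)]  # pitch P = 1 between path centers
r = 2.5                             # radius r > P: each path encircles the other center

# All centroids move in parallel (same radius, same phase), so the
# spacing between any two centroids equals the pitch of their path
# centers at every instant.
for deg in range(0, 360, 15):
    a = centroid(centers[0], r, math.radians(deg))
    b = centroid(centers[1], r, math.radians(deg))
    dist = math.hypot(a[0] - b[0], a[1] - b[1])
    # dist == 1.0 for every phase
```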
An intensity threshold 410 is defined corresponding to a desired result (e.g., a desired intensity dosage received by a photosensitive material, a desired intensity at a surface for viewing, etc.). In many instances, the desired result is a function of accumulated dosage (e.g., which can be a function of time, or a function of the number of shots or overlapping image flashes), such as by integrating a continuous exposure (or a multiplicity of shots). As an example, a photosensitive material in lithography or 3D printing can react (e.g., cross-link) as a result of receiving a dosage above the intensity threshold 410, which dosage may be integrated as a function of time for a continuous exposure by a beam or over multiple discrete exposures by the beam at the variable placement beam centroid 404 and other nearby positions within close enough proximity to allow an overlapping dose (generally half the beam's pixel size (e.g., full-width-half-max intensity level)). Similarly, a perceived intensity by a human eye in a theater application can integrate the intensity at the variable placement beam centroid 404. An above-threshold area 412 is shown on, e.g., the substrate 107 where the beam has a dosage or intensity above the intensity threshold 410 to illustrate the extent of the exposure by the beam. Different beams having different intensity distributions and/or different thresholds can achieve different above-threshold areas.
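Dose accumulation and thresholding can be illustrated with a toy model (an illustrative sketch, not part of the disclosure; the Gaussian beam profile, peak, sigma, and threshold values are all assumptions): summing per-shot intensities at a point and comparing the total against the threshold shows how a location can be below threshold after one shot but above it after two overlapping shots.

```python
import math

def gaussian_dose(x, y, cx, cy, peak=1.0, sigma=0.5):
    """Per-shot intensity at (x, y) of a beam centered at (cx, cy),
    modeled as a 2D Gaussian (an assumed profile for illustration)."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return peak * math.exp(-r2 / (2 * sigma ** 2))

def accumulated(x, y, shots):
    """Total dosage at (x, y) after a sequence of shot centroids."""
    return sum(gaussian_dose(x, y, cx, cy) for cx, cy in shots)

threshold = 1.5           # illustrative intensity threshold
shots = [(0.0, 0.0)] * 2  # two overlapping shots at the same centroid

# One shot leaves the centroid below threshold (1.0 < 1.5); the
# accumulated dose of two shots crosses it (2.0 > 1.5), so the
# material at (0, 0) would react.
```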
The table below provides prescriptions for various components of the relay 104 of
Various modifications can be made to the projection system 100 of
General Operation
Rotation of the convex reflective surface 112 can permit variably locating where beams can be incident in the image of the relay, which can permit redundancy, more precise edge definition in an image, binary or greyscale imaging, and/or imaging with different wavelengths of light. Various examples are provided to illustrate these aspects.
At block 604, an image file including data indicating the image to be projected at the substrate is obtained. Generally, an image to be projected at the substrate can be modeled (e.g., in computer-aided design (CAD) software) in a two-dimensional (2D) or three-dimensional (3D) space or can be captured (e.g., as a picture or part of a video). Modeling or capturing an image can generate an image file (e.g., a .GDS design file, a .STL file, a .MOV file, a .JPG file, a .MP4 file, etc.). More specifically, the image file can be a .GDS file for a lithography application, a .STL file for a layer slice for a 3D printer application, or any image or movie file for a theater application.
At block 606, a bitmap file is generated based on the address grid and the image file. Generating a bitmap is described below. A bitmap file can have one or more bitmaps that populate the address grid. Each bitmap can correspond to a given intensity dose of light, a given wavelength of light, and/or a given rotational position of the convex reflective surface 112. A person having ordinary skill in the art will readily understand how multiple bitmaps can be generated to be imaged in a sequence to implement, e.g., greyscale imaging by image convolution, imaging with multiple wavelengths of light, or a combination thereof.
The image file can be decomposed into one or more polygons, where each respective polygon has, if appropriate, a dosage and/or wavelength of light to be imaged. The polygon(s) are then overlaid onto the address grid. The address points of the address grid overlaid by a polygon are transformed into a bitmap based on a rotational position of the convex reflective surface 112 and, if appropriate, having the given dosage and/or wavelength. Generally, the bitmap indicates which spatial light modulator pixels to turn on given the rotational position of the convex reflective surface 112 and, if appropriate, the given dosage and/or wavelength. The bitmap file can include a sequence of bitmaps, where the bitmaps can have a same or different rotational position, dosage of light, and/or wavelength of light. The sequence of bitmaps can be used to implement binary imaging, greyscale imaging (e.g., using image convolution), and/or imaging with different wavelengths of light.
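The overlay step can be sketched as a point-in-polygon test over the address grid (a minimal illustration, not the disclosure's actual generation application; the function names and the toy 4×4 grid are assumptions):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: True if (x, y) lies inside the polygon
    given as a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal through y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def bitmap_for_position(address_points, poly):
    """One bitmap: the address points (hence the pixels to turn on at
    this rotational position) that fall inside the polygon."""
    return {pt for pt in address_points if point_in_polygon(pt[0], pt[1], poly)}

grid = [(x, y) for x in range(4) for y in range(4)]       # toy 4x4 address grid
square = [(0.5, 0.5), (2.5, 0.5), (2.5, 2.5), (0.5, 2.5)]
on_pixels = bitmap_for_position(grid, square)             # {(1,1),(1,2),(2,1),(2,2)}
```

A sequence of such bitmaps, one per rotational position (and, if appropriate, per dosage or wavelength), would then make up the bitmap file.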
The operations of blocks 602-606 can be performed by a processor executing instruction code of a generation application, as described above. The processor can be the processor 160 of the controller 108, as stated previously, or another processor. The instruction code of the generation application can be stored on the memory 162 and/or storage 164 (e.g., when the generation application is executed on the controller 108), or on different memory or storage (e.g., when the generation application is executed by a different processor).
At block 608, the bitmap file is executed by the projection system. For example, the controller 108 can execute the bitmap file, e.g., by execution of the projection control application as described above. Execution of the bitmap file by the controller 108 causes the controller to control the pixelated light source 102 (e.g., selectively controlling spatial light modulator pixels to turn on and off) and to control the actuator 122 and thereby the rotation of the convex reflective surface 112. The controller 108 causes the actuator 122 to rotate the axle 114, which rotates the convex reflective surface 112. The encoder 126 reads positional information from the convex reflective surface 112 and feeds back that positional information to the controller 108. The controller 108 turns on spatial light modulator pixels to project respective beams based on a bitmap of the bitmap file and the corresponding rotational position of the convex reflective surface 112.
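The feedback-driven execution described above can be sketched as a simple loop (an illustrative sketch only; the device interfaces and stub classes below are hypothetical stand-ins for the encoder 126, actuator 122, and pixelated light source 102, not APIs from the disclosure):

```python
def execute_bitmap_file(bitmap_file, encoder, light_source, actuator, revolutions=1):
    """Minimal execution loop: start rotation, then fire each bitmap
    when the encoder reports its associated rotational position.

    `bitmap_file` maps rotational positions to sets of pixel indices.
    """
    actuator.start_rotation()
    for _ in range(revolutions):
        for position, pixels in bitmap_file.items():
            while encoder.read_position() != position:
                pass                      # wait for the mirror to reach position
            light_source.project(pixels)  # turn on the listed pixels (a "shot")
    actuator.stop_rotation()

class StubActuator:
    def start_rotation(self): self.running = True
    def stop_rotation(self): self.running = False

class StubEncoder:
    """Cycles through positions 0..3 on successive reads, mimicking rotation."""
    def __init__(self): self.t = -1
    def read_position(self):
        self.t += 1
        return self.t % 4

class StubLightSource:
    def __init__(self): self.shots = []
    def project(self, pixels): self.shots.append(frozenset(pixels))

source = StubLightSource()
execute_bitmap_file({0: {1, 2}, 1: {3}, 2: set(), 3: {4}},
                    StubEncoder(), source, StubActuator())
```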
The bitmap file can be generated at block 606 a priori before being provided to the projection system for execution at block 608, or can be generated at block 606 at runtime by the projection system substantially contemporaneously with execution at block 608. The operation of block 608 can be performed by a processor executing instruction code of a projection control application, as described above. The processor can be the processor 160 of the controller 108, as stated previously. The instruction code of the projection control application can be stored on the memory 162 and/or storage 164 (e.g., when the projection control application is executed on the controller 108), or on different memory or storage (e.g., when the projection control application is executed by a different processor).
Orthogonal Address Grid
The following examples show orthogonal address grids that can be obtained and implemented by a projection system. Various address grids and arrays of path centers of paths can be conceptualized in the context of being in the image of the relay (e.g., the intermediate focal plane). This may provide a reference point of comparison between an address grid and an array of path centers of paths for simplicity and clarity in some circumstances. An address grid and corresponding features can be implemented using a transform irrespective of the substrate on which the image of the relay is subsequently projected, in some examples.
The exposure locations 702 of the paths 704 form address points of the address grid in
Each of the address points of the address grid of
This ratio and the exposure locations 902 can result in the address grid having address points with a linear density that is two times greater than the linear density of the array of path centers of paths 904, and with an areal density that is four times greater than the areal density of the array of path centers. The pitch between the address points can be half of the pitch (e.g., a sub-pixel pitch) between neighboring path centers. Each of the address points can be exposed by any of two beams, which can permit two times redundancy.
This ratio and the exposure locations can result in the address grid having address points with a linear density that is approximately 1.414 (e.g., √2) times greater than the linear density of the array of path centers of the paths 1004, and with an areal density that is two times greater than the areal density of the array of path centers. The pitch between the address points can be approximately 0.707 times the pitch between path centers of paths 1004. Each of the address points can be exposed by any of four beams, which can permit four times redundancy.
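As a non-limiting illustration (not part of the disclosure), the relationship between the address-point pitch and the resulting linear and areal density multipliers described above can be sketched as follows:

```python
# Illustrative sketch: relating address-grid pitch (as a fraction of the
# path-center pitch) to the linear and areal density multipliers of the
# address grid relative to the array of path centers.

def address_grid_densities(pitch_ratio: float) -> dict:
    """Given the address-point pitch as a fraction of the path-center
    pitch, return the linear and areal density multipliers."""
    linear = 1.0 / pitch_ratio   # address points per unit length
    areal = linear ** 2          # address points per unit area
    return {"linear": linear, "areal": areal}

# Half-pitch example: address-point pitch is half the path-center pitch,
# giving a 2x linear density and a 4x areal density.
half = address_grid_densities(0.5)

# Diagonal example: pitch is approximately 0.707 (1/sqrt(2)) of the
# path-center pitch, giving a ~1.414x linear density and a 2x areal density.
diag = address_grid_densities(2 ** -0.5)
```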
Binary Projection
In some examples, the projection system can implement binary projection. In binary projection, each beam projected by the pixelated light source 102 has a same light intensity and is projected for a same duration. Each beam has a same dose per exposure. Binary projection can be implemented to achieve a binary image or a greyscale image.
A binary image can be achieved using binary projection where each address point receives a same dosage or no dosage. A received dosage can be accumulated. The circumstances under which an address point is exposed can vary. Referring back to
Similarly,
The first bitmap is to be executed when the convex reflective surface 112 is in the first position. The second bitmap is to be executed when the convex reflective surface 112 is in the second position. The third bitmap is to be executed when the convex reflective surface 112 is in the third position. The fourth bitmap is to be executed when the convex reflective surface 112 is in the fourth position.
Executing the bitmap file that includes the first, second, third, and fourth bitmaps includes turning on spatial light modulator pixels as indicated by the first, second, third, and fourth bitmaps when the convex reflective surface 112 is in the first, second, third, and fourth position, respectively. The spatial light modulator pixels indicated by having solid line paths in
The dose projected by each spatial light modulator pixel when turned on is a same dose di in this example. Hence, after one revolution of the convex reflective surface 112, each address point within the polygon 1102 receives an accumulated dosage of 4di since each such address point is exposed to the dose di four times within one revolution. The address points within a polygon can receive a greater accumulated dosage by executing the bitmap file for multiple revolutions of the convex reflective surface 112.
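As a non-limiting illustration (not part of the disclosure), the accumulation of the binary dose di over revolutions of the convex reflective surface can be sketched as follows, assuming each address point is exposed once per bitmap:

```python
# Illustrative sketch: accumulating a same dose d_i at an address point
# over the bitmaps executed in one or more revolutions of the convex
# reflective surface.

def accumulated_dose(exposures_per_revolution: int, revolutions: int,
                     dose_per_exposure: float) -> float:
    """Total dose at an address point exposed once per bitmap execution."""
    return exposures_per_revolution * revolutions * dose_per_exposure

d_i = 1.0
one_rev = accumulated_dose(4, 1, d_i)    # 4 * d_i after one revolution
three_rev = accumulated_dose(4, 3, d_i)  # greater dose via more revolutions
```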
A greyscale image can be achieved using binary projection where each address point can receive a dosage along a predetermined scale. A received dosage can be accumulated. The circumstances under which an address point is exposed can vary. Referring back to
A person having ordinary skill in the art will readily understand the indicated bitmaps and execution in
Redundancy
In some examples, the projection system can have redundancy. Each address point in an address grid can be exposed by multiple spatial light modulator pixels. This can permit an address point to be exposed if a spatial light modulator pixel is defective.
Referring back to
Referring to previous address grids, the address grid of
Greyscale Projection
In some examples, the projection system can implement greyscale projection. In greyscale projection, a beam projected by the pixelated light source 102 can have any of a number of doses, which can result from different light intensities, different exposure durations, or a combination thereof. Greyscale projection can be implemented to achieve a greyscale image. In some examples, greyscale projection can be implemented to achieve a binary image.
A greyscale image can be achieved using greyscale projection where each address point may receive any of a number of different dosages. A received dosage can be accumulated. The circumstances under which an address point is exposed can vary. Referring back to
To illustrate this further with reference to
Executing the bitmap file that includes the first and second bitmaps includes turning on spatial light modulator pixels as indicated by the first and second bitmaps when the convex reflective surface 112 is in the first and second position, respectively, as described previously. The dose projected by each spatial light modulator pixel when turned on when the first bitmap is executed (when the convex reflective surface 112 is in the first position) is a first bitwise dose di (e.g., 2⁰di), and the dose projected by each spatial light modulator pixel when turned on when the second bitmap is executed (when the convex reflective surface 112 is in the second position) is a second bitwise dose 2di (e.g., 2¹di). Hence, after one revolution of the convex reflective surface 112, each address point within the dosage polygons 1302, 1304, 1306 can receive an accumulated dosage of 0, di, 2di, or 3di. For example, address points within both the polygons 1312b, 1314 (which corresponds to 3× dosage polygon 1306) receive an accumulated dosage of 3di as a result of two exposures, the first exposure having a dose of di (
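As a non-limiting illustration (not part of the disclosure), the binary-weighted dose scheme described above can be sketched as follows: bitmap k delivers a dose of 2^k × di, so N bitmaps can produce any accumulated dose from 0 to (2^N − 1) × di in increments of di:

```python
# Illustrative sketch: binary-weighted bitmap doses. The pixel is "on" in
# the bitmaps listed in on_bits; bitmap k contributes a dose of 2**k * d_i.

def accumulated_dose(on_bits, d_i=1.0):
    """Sum 2**k * d_i over the bitmaps (indexed by k) with the pixel on."""
    return sum((2 ** k) * d_i for k in on_bits)

# Two bitmaps (doses d_i and 2*d_i) reach 0, d_i, 2*d_i, or 3*d_i:
levels = sorted({accumulated_dose(bits) for bits in ([], [0], [1], [0, 1])})
```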
Sub-Pixel Edge Control
Depending on the application (e.g., 3D printing, lithography, video projection, etc.), sub-pixel edge control of an image can be achieved by differing mechanisms. In some examples, sub-pixel edge control can be achieved using greyscale imaging. In some examples, sub-pixel edge control can be achieved using a sub-pixel resolution address grid.
Above-threshold area 1422 (e.g., due to the 2D nature of the distribution as described with respect to
As indicated in
Different Wavelengths
In some examples, the projection system can implement projection with multiple wavelengths of light. In projection with multiple wavelengths, a beam projected by the pixelated light source 102 can have any of a number of wavelengths, which can further have different light intensities or dosages. Projection with multiple wavelengths can be implemented in applications such as in video projection (e.g., in movie theaters) and in applications where a photosensitive material on which an image is projected has different reactions to different wavelengths.
Projection with multiple wavelengths can be performed as described above with respect to greyscale projection, except with different wavelengths of light being projected instead of or in addition to the different doses of light. Referring back to
Some examples can achieve imaging using different wavelengths with each wavelength being capable of being imaged along a greyscale. For example, video generally requires each of red, green, and blue light to have 6-bit to 8-bit levels of greyscale (e.g., 64 to 256 increments of greyscale). To achieve 8-bit levels of greyscale for a single wavelength of light using the example of
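As a non-limiting illustration (not part of the disclosure), the number of greyscale increments obtainable per wavelength from binary-weighted bitmaps can be sketched as follows:

```python
# Illustrative sketch: N binary-weighted bitmaps (bit planes) yield 2**N
# distinct accumulated doses (including zero) for a given wavelength.

def greyscale_levels(bit_planes: int) -> int:
    """Number of distinct greyscale increments from N bit planes."""
    return 2 ** bit_planes

six_bit = greyscale_levels(6)    # 64 increments of greyscale
eight_bit = greyscale_levels(8)  # 256 increments of greyscale
```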
Irregular Address Grid
Some examples can implement an irregular address grid. An irregular address grid can be a non-orthogonal address grid that includes address points along overlapping circular paths of beams.
An example similar to the example of
Even further, an analysis can be performed to identify possible exposure locations that can be skipped or omitted to further improve uniformity of intensity of exposure. Referring to the example of
In an example, exposure locations 1602 at locations 14 through 17, locations 60 through 62, locations 89 through 91, and locations 136 through 138 can be skipped or omitted. The exposure locations 1602 can be located at angles from the positive direction y-axis (+Y) in ranges from about 30 degrees to about 39 degrees, from about 140 degrees to about 147 degrees, from about 210 degrees to about 217 degrees, and from about 323 degrees to about 330 degrees.
In another example, exposure locations 1602 at locations 24 through 25, locations 52 through 53, locations 99 through 101, and locations 127 through 128 can be skipped or omitted. The exposure locations 1602 can be located at angles from the positive direction y-axis (+Y) in ranges from about 54 degrees to about 59 degrees, from about 121 degrees to about 126 degrees, from about 234 degrees to about 241 degrees, and from about 301 degrees to about 306 degrees.
In these examples of omitting or skipping exposure locations 1602, the uniformity of the intensity of exposure can be increased (e.g., to having a variation lower than 1.78% rms). Other examples and patterns can omit or skip different exposure locations.
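As a non-limiting illustration (not part of the disclosure), skipping exposure locations whose angle from the positive y-axis falls within chosen ranges can be sketched as follows; the ranges below are those of the first example above, and in practice would be derived from a uniformity analysis:

```python
# Illustrative sketch: retain only exposure locations whose angle from the
# +Y axis falls outside the skip ranges identified by a uniformity analysis.
# These ranges correspond to the first example above (locations 14-17,
# 60-62, 89-91, and 136-138).

SKIP_RANGES_DEG = [(30, 39), (140, 147), (210, 217), (323, 330)]

def keep_location(angle_deg: float) -> bool:
    """True if the exposure location is retained (not skipped)."""
    return not any(lo <= angle_deg <= hi for lo, hi in SKIP_RANGES_DEG)

# Filter a set of candidate exposure-location angles:
kept = [a for a in range(0, 360, 5) if keep_location(a)]
```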
Referring back to the example of
Edge deviation can depend on the ratio of the radius to the pitch (r/P) in examples like
Additionally, binary projection, greyscale projection, and projection with different wavelengths can be implemented using an irregular address grid. The principles described above with respect to these projection techniques apply similarly to an irregular address grid, and hence, further description is omitted here.
Example Applications
The projection system and techniques described above can be applied to any application in which a static projection system is implemented, for example. In some examples, a resolution requirement for a projection system can be high, and a field size of the pixelated light source 102 (e.g., a DMD in this example) is larger than the image to be obtained. For example, a large array DMD (such as one having 4K×2K pixels) projecting 100 μm pixels would yield a field size of 400 mm×200 mm. If the size of the image to be obtained is less than 400 mm×200 mm, then a technique described herein to generate a fine-pitch address grid without scanning the pixelated light source 102 may be implemented. Various examples below are described in the context of the pixelated light source 102 being a DMD; other examples can implement other light sources. Specific applications are described below, but other applications are within the scope of other examples.
3D Printing
Any number of permutations of aspects described above can be implemented for 3D printing. An orthogonal or irregular address grid, binary or greyscale imaging and/or projection, and single or multiple wavelengths of light can be implemented as described above.
When projecting the exposures with a DMD, the frame rate for binary images can be 10 kHz, so the cumulative time for 150 exposures is 15 milliseconds. This enables processing 67 design layers per second. This short process time makes the techniques described herein suitable for applications needing a rapid lithography exposure with sub-pixel edge placement accuracy from a static system, such as a continuous pull 3D printer (e.g., Carbon 3D). At 67 design layers per second, a 3D printer operating to a 5 μm design grid could therefore print at a Z-height velocity of 1.2 meters per hour. To achieve 5 μm X/Y edge placement accuracy, the DMD pixel size can be set to 125 μm, and the irregular address grid of
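As a non-limiting illustration (not part of the disclosure), the throughput arithmetic above can be verified as follows:

```python
# Illustrative check of the 3D-printing throughput figures stated above:
# a 10 kHz binary frame rate, 150 exposures per layer, and a 5 um design
# grid (layer height).

frame_rate_hz = 10_000
exposures_per_layer = 150
layer_height_um = 5

layer_time_s = exposures_per_layer / frame_rate_hz   # 0.015 s (15 ms)
layers_per_s = 1 / layer_time_s                      # ~67 layers per second
z_velocity_m_per_h = layers_per_s * layer_height_um * 1e-6 * 3600
# ~1.2 meters per hour of Z-height velocity
```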
Lithography
Any number of permutations of aspects described above can be implemented for lithography. An orthogonal or irregular address grid, binary or greyscale imaging and/or projection, and single or multiple wavelengths of light can be implemented as described above. The example below is described to further illustrate aspects concerning different responses in a photosensitive material from different wavelengths of light.
As shown by
Video Projection
In digital cinema, the resolution of the screen is generally limited to the array size of the projector DMD; for example, the Texas Instruments Cinema 4K DMD has an array size of 4096×2160. Considering the case of
To obtain three colors (e.g., red, green, and blue), three DMDs can be implemented, where each DMD projects one color of red, green, or blue; alternatively, the convex reflective surface 112 can rotate at 10,080 revolutions per minute (RPM), which enables three colors (e.g., red, green, and blue), each having 8-bit levels of greyscale, at 28 frames per second.
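As a non-limiting illustration (not part of the disclosure), the relation between the rotation rate and the frame rate stated above can be sketched as follows, under our assumption (chosen to match the stated figures, and not taken from the disclosure) that each color frame takes two revolutions of the convex reflective surface:

```python
# Illustrative check of the video-projection figures stated above.
# revs_per_color_frame = 2 is an assumption of this sketch, chosen so that
# 10,080 RPM with three colors yields the stated 28 frames per second.

rpm = 10_080
colors = 3
revs_per_color_frame = 2  # assumed for illustration

revs_per_s = rpm / 60                                 # 168 revolutions/s
fps = revs_per_s / (colors * revs_per_color_frame)    # 28 frames/s
```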
While the foregoing is directed to examples of the present disclosure, other and further examples of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.