Method for reducing perceived optical distortion

Information

  • Patent Grant
  • Patent Number
    9,557,813
  • Date Filed
    Monday, June 30, 2014
  • Date Issued
    Tuesday, January 31, 2017
Abstract
One variation of a method for reducing perceived optical distortion of light output through a dynamic tactile interface includes: rendering an image on a digital display coupled to a substrate opposite a tactile layer, the tactile layer defining a tactile surface, a peripheral region, and a deformable region adjacent the peripheral region, disconnected from the substrate, and operable between a retracted setting and an expanded setting, the deformable region substantially flush with the peripheral region in the retracted setting and offset above the peripheral region in the expanded setting; estimating a viewing position of a user relative to the digital display; transitioning the deformable region from the retracted setting into the expanded setting; and modifying the portion of the image rendered on the digital display according to the estimated viewing position of the user and a profile of the tactile surface across the deformable region in the expanded setting.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. Provisional Application No. 61/841,176, filed on 28 Jun. 2013, which is incorporated in its entirety by this reference.


TECHNICAL FIELD

This invention relates generally to touch-sensitive displays, and more specifically to a new and useful method for reducing perceived optical distortion of light output through a dynamic tactile interface in the field of touch-sensitive displays.





BRIEF DESCRIPTION OF THE FIGURES


FIG. 1 is a flowchart representation of a method of one embodiment of the invention;



FIG. 2 is a flowchart representation of one variation of the method;



FIG. 3 is a flowchart representation of one variation of the method;



FIG. 4 is a flowchart representation of one variation of the method;



FIG. 5 is a flowchart representation of one variation of the method; and



FIG. 6 is a flowchart representation of one variation of the method.





DESCRIPTION OF THE EMBODIMENTS

The following description of the embodiments of the invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention.


1. Method and Variation


As shown in FIG. 1, a method S100 for reducing perceived optical distortion of light output through a dynamic tactile interface includes: rendering an image on a digital display in Block S110. The digital display is coupled to a substrate opposite a tactile layer, the substrate and tactile layer are substantially transparent, and the tactile layer defines a tactile surface, a deformable region, and a peripheral region. The peripheral region is adjacent the deformable region and is coupled to the substrate opposite the tactile surface. The deformable region is disconnected from the substrate and is operable between a retracted setting and an expanded setting, the tactile surface at the deformable region substantially flush with the tactile surface at the peripheral region in the retracted setting and offset from the tactile surface at the peripheral region in the expanded setting. The method S100 further includes: estimating a viewing position of a user relative to the digital display in Block S120; transitioning the deformable region from the retracted setting into the expanded setting in Block S130; and modifying the portion of the image rendered on the digital display according to the estimated viewing position of the user and a profile of the tactile surface across the deformable region in the expanded setting in Block S140.


As shown in FIG. 5, one variation of the method S100 includes transitioning the deformable region from a retracted setting into an expanded setting in Block S130, the tactile surface at the deformable region offset above the tactile surface at the peripheral region in the expanded setting; at a first time, estimating a first viewing position of a user relative to the digital display in Block S120; substantially at the first time, rendering an image on the digital display based on the first viewing position and a profile of the tactile surface across the deformable region in the expanded setting, the image including a portion rendered on the digital display adjacent the deformable region in the expanded setting in Block S110; at a second time succeeding the first time, estimating a second viewing position of the user relative to the digital display in Block S120; and modifying a position of the portion of the image rendered on the digital display adjacent the deformable region based on a difference between the first viewing position and the second viewing position in Block S140.


2. Applications and Dynamic Tactile Interface


As shown in FIG. 2, the method S100 functions to reduce a user's perceived optical distortion of an image rendered on a digital display, such as resulting from reconfiguration of one or more deformable regions of a dynamic tactile interface arranged over the digital display, based on a predicted or estimated viewing position of the user relative to the digital display.


The method S100 can therefore be implemented on a computing device incorporating a dynamic tactile interface described in U.S. patent application Ser. No. 11/969,848, filed on 4 Jan. 2008, in U.S. patent application Ser. No. 12/319,334, filed on 5 Jan. 2009, in U.S. patent application Ser. No. 12/497,622, filed on 3 Jul. 2009, in U.S. patent application Ser. No. 12/652,704, filed on 5 Jan. 2010, in U.S. patent application Ser. No. 12/652,708, filed on 5 Jan. 2010, in U.S. patent application Ser. No. 12/830,426, filed on 5 Jul. 2010, in U.S. patent application Ser. No. 12/830,430, filed on 5 Jul. 2010, which are incorporated in their entireties by this reference. For example, the method S100 can be implemented on a smartphone, tablet, mobile phone, personal data assistant (PDA), personal navigation device, personal media player, camera, watch, and/or gaming controller incorporating a dynamic tactile interface. The method S100 can additionally or alternatively be implemented on an automotive console, desktop computer, laptop computer, television, radio, desk phone, light switch or lighting control box, cooking equipment, a (dashboard) display within a vehicle, a commercial display, or any other suitable computing device incorporating a dynamic tactile interface. The digital display can include a touchscreen configured to both output an image and to detect an input, such as by a finger or by a stylus. Alternatively, the computing device can include the digital display that is a discrete display coupled to a touch sensor, such as an optical, capacitive, or resistive touch sensor.


In particular, the method S100 can be implemented on a computing device that includes a digital display coupled to a substrate opposite a tactile layer, and the method S100 can interface with a displacement device to displace a volume of fluid from a reservoir into a cavity adjacent a deformable region of the tactile layer (e.g., Block S130), thereby expanding the cavity and transitioning the deformable region into an expanded setting. In the expanded setting, the tactile surface at the deformable region is thus elevated above the tactile surface at the peripheral region such that an effective thickness of the tactile layer across the deformable region exceeds an effective thickness of the tactile layer across the peripheral region of the dynamic tactile interface.


The substrate, the tactile layer, and the volume of fluid (hereinafter the "dynamic tactile layer") can each be substantially transparent such that images (or "frames") rendered on the digital display can be visible to a user through the substrate, tactile layer, and fluid arranged over the digital display. However, the substrate, the tactile layer, and the fluid can each exhibit a refractive index that differs from that of air such that expansion of one or more deformable regions into expanded settings yields variations in thickness across the dynamic tactile layer and thus non-uniform distortion (e.g., refraction) of light output from the digital display through the dynamic tactile layer. In particular, transition of a deformable region of the dynamic tactile layer from the retracted setting into the expanded setting can cause a user to visually detect optical distortion of an image rendered on the digital display, and the method S100 can therefore modify an image rendered on the digital display prior to transition of the deformable region into the expanded setting to reduce a user's perceived optical distortion of the image once the deformable region enters the expanded setting. The method S100 can also systematically (e.g., cyclically) refresh the digital display with modifications of the image to compensate for a dynamically changing profile of the dynamic tactile layer throughout transition of the deformable region from the retracted setting into the expanded setting, and vice versa.


Furthermore, a user's viewing position relative to the digital display (e.g., the user's viewing angle to and/or viewing distance from the digital display) can dictate how light output through the dynamic tactile layer is perceived by the user, and the user's viewing position relative to the digital display can change over time as the user interacts and interfaces with the computing device such that the perceived distortion of light through the dynamic tactile layer changes dynamically during such time. The method S100 can therefore modify (e.g., refresh, update) an image rendered on the digital display to compensate for a change in the user's viewing position relative to the digital display, such as when one or more deformable regions of the dynamic tactile layer is in the expanded setting or is transitioning between the expanded and retracted settings.


In particular, the method S100 can modify an image and/or refresh the digital display within the computing device to reduce or limit perceived light scattering effects, perceived internal reflection of regions of the image, perceived refraction and/or diffraction of the image, perceived directional or preferential light transmission or emission through the substrate (e.g., in favor of more uniform scattering, diffraction, reflection, and/or refraction of light), perceived chromatic dispersion of light transmitted through the dynamic tactile layer, and/or other perceived optical distortions or parallax effects of the displayed image. The method S100 can therefore predict (or estimate) a user viewing position (in Block S120), control a vertical position (e.g., height) of a deformable region (in Block S130), and modify the image displayed on the digital display (in Block S140)—based on the predicted user viewing position and the current position of the deformable region—to reduce and/or minimize optical distortion of the image output by the digital display as perceived by the user. In one example, the method S100 linearly stretches the image—rendered on the digital display—horizontally and/or vertically about a predicted point of focus of the user on the digital display. In another example, the method S100 translates (i.e., shifts laterally or vertically on the digital display) a subregion of the image adjacent (e.g., lying under) a deformable region based on an angle and distance of the user to the deformable region or to the digital display. In yet another example, the method S100 linearly or nonlinearly scales (i.e., alters a size of) a subregion of the image adjacent the deformable region to offset preferential magnification of the subregion of the image by the adjacent deformable region in the expanded setting.
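
For illustration only, the following Python sketch applies the two simplest corrections named above: a linear stretch of key-center coordinates about a predicted point of focus, and a lateral shift of a single subregion. The function names, coordinates, and scale factors are assumptions for the sketch, not values taken from the patent.

```python
# Illustrative only: linearly stretch key-center coordinates about a predicted
# point of focus, and shift one subregion's center by a lateral offset computed
# elsewhere from the user's angle and distance. All values are assumptions.

def stretch_about_focus(point, focus, kx, ky):
    """Scale an (x, y) pixel coordinate about the user's predicted point of focus."""
    return (focus[0] + (point[0] - focus[0]) * kx,
            focus[1] + (point[1] - focus[1]) * ky)

def shift_subregion(point, dx, dy):
    """Translate a subregion center laterally and/or vertically on the display."""
    return (point[0] + dx, point[1] + dy)

key_center = (300.0, 820.0)
focus = (360.0, 640.0)
print(stretch_about_focus(key_center, focus, 1.02, 1.0))  # stretched about the focus
print(shift_subregion(key_center, -3.0, 1.5))             # shifted toward the line of sight
```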


In addition to the position of a deformable region and the user's angle and/or distance from the digital display, the method S100 can additionally or alternatively account for (average) refractive indices, wavelength-specific refractive indices, Abbe numbers, chromatic dispersion of different wavelengths of light, and/or other optical properties of materials within the dynamic tactile interface and/or the digital display to dictate compensation of optical distortion of all or a portion of the image rendered on the digital display. The method S100 can also account for mechanical properties of materials of the dynamic tactile interface, a thickness of a cover glass of the digital display, colors and/or a brightness of the rendered image, a thickness or other geometry of the substrate, tactile layer, and/or deformable region(s), a gap between the digital display and the substrate, an orientation of the digital display relative to the user, and/or a shape and/or height of a deformable region, a change in thickness across the deformable region of the tactile layer between expanded and retracted settings, etc. to dictate compensation of optical distortion of all or a portion of the image rendered on the digital display, such as to limit, reduce, and/or substantially eliminate optical distortion of the displayed image as perceived by a user. Therefore, the method S100 can adjust regions of a displayed image based on the viewing position of the user relative to the digital display, optical properties, mechanical properties, and geometries of components of the computing device, and a three-dimensional profile (e.g., shape) of one or more deformable regions across the tactile layer of the dynamic tactile interface.


As described above, the method S100 can repeat systematically to accommodate changes in the user's viewing position relative to the digital display and/or changes in the position of one or more deformable regions over time. In one example implementation, the method S100 estimates or measures a new position of a deformable region (such as described in U.S. patent application Ser. No. 13/896,090, filed on 16 May 2013, which is incorporated in its entirety by this reference), executes a process to estimate (e.g., predict, calculate) the viewing position of the user at a refresh rate, generates a new (or updates an existing) transfer matrix for modifying the image based on the new position of the deformable region and the predicted viewing position of the user, and applies the transfer matrix to the image and renders the updated image on the digital display at the refresh rate. The method S100 can therefore update the image substantially in real-time by cyclically capturing and implementing new user viewing position and/or deformable region position data. In another example implementation, the method S100 generates and applies a new (or updates an existing) transfer matrix to the displayed image in response to a change in the predicted user viewing distance that exceeds a threshold distance change (e.g., more than 0.5″) and/or in response to a change in predicted user viewing angle that exceeds a threshold angle change (e.g., more than 5°). In a similar example implementation, the method S100 generates and applies a new (or updates an existing) transfer matrix to the displayed image in response to a setting change of one or more deformable regions, such as if a deformable region transitions from the expanded setting to the retracted setting, transitions from the retracted setting to the expanded setting, or transitions into an intermediate position between the retracted and expanded settings. However, the method S100 can update or modify the image rendered on the digital display in response to any other trigger(s) and/or threshold event(s).
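
The refresh-trigger logic described above can be sketched as follows, using the example thresholds cited (a viewing-distance change of more than 0.5 inch or a viewing-angle change of more than 5°). The function name and tuple layout are assumptions; generation of the transfer matrix itself is omitted.

```python
# Illustrative refresh-trigger logic only; not the patented implementation.

DIST_THRESHOLD_IN = 0.5    # re-render if viewing distance changes by more than 0.5"
ANGLE_THRESHOLD_DEG = 5.0  # or if the viewing angle changes by more than 5 degrees

def needs_refresh(prev_view, curr_view, deformable_setting_changed):
    """prev_view / curr_view are (viewing_distance_in, viewing_angle_deg) estimates."""
    if deformable_setting_changed:  # a deformable region changed setting
        return True
    if abs(curr_view[0] - prev_view[0]) > DIST_THRESHOLD_IN:
        return True
    return abs(curr_view[1] - prev_view[1]) > ANGLE_THRESHOLD_DEG

print(needs_refresh((12.0, 10.0), (12.3, 11.0), False))  # False: below both thresholds
print(needs_refresh((12.0, 10.0), (12.3, 16.5), False))  # True: angle changed by > 5 degrees
```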


3. Image


Block S110 of the method S100 recites rendering an image on a digital display. Generally, Block S110 functions to render an (initial) image on the digital display, such as when the deformable region is in the retracted setting. In one example, the tactile surface at the deformable region can be substantially flush (i.e., in plane) with the tactile surface at the peripheral region in the retracted setting, and Block S110 can control the digital display within the computing device to render a stock (i.e., standard, unadjusted) image on the digital display. Thus, in this example, Block S130 can expand the deformable region to elevate the tactile layer at the deformable region to a position offset above the peripheral region, and Block S140 can update the digital display to render a new or adjusted image that compensates for optical effects of the deformable region in the expanded setting. In an alternative example, the tactile surface at the deformable region is arranged at a first position above the tactile surface at the peripheral region in the expanded setting, and Block S110 outputs a first adjusted image that compensates for optical irregularities across the dynamic tactile layer stemming from a difference in effective thickness of the dynamic tactile layer across the peripheral region and the deformable region. In this alternative example, Block S130 can expand the deformable region to a second position further elevated above the tactile layer at the peripheral region, and Block S140 can update the digital display to render an adjusted image that compensates for different optical effects of the deformable region in the expanded setting. In a similar alternative example, the tactile surface at the deformable region can be retracted below the tactile surface at the peripheral region in the retracted setting, and Block S110 can output a first adjusted image that compensates for optical distortion of the digital display resulting from the concave form of the tactile surface across the retracted deformable region. In this example, Block S130 can expand the deformable region into a position offset above the deformable region in the retracted setting, such as flush with or offset above the peripheral region, and Block S140 can output a second adjusted image that compensates for non-uniform transmission of light through the dynamic tactile layer across the peripheral region and the deformable region in the expanded setting.


As described in U.S. patent application Ser. No. 12/319,334, the dynamic tactile interface can include the tactile layer that defines a peripheral region and a deformable region, the peripheral region adjacent the deformable region and coupled to the substrate opposite a tactile surface. The deformable region can also cooperate with the substrate to define a cavity, a displacement device can be coupled to the cavity via a fluid channel defined within the substrate, and actuation of the displacement device can pump fluid into and out of the cavity to expand and retract the deformable region, respectively. As described in U.S. patent application Ser. No. 12/319,334, the dynamic tactile interface can also include multiple deformable regions that can be transitioned between retracted and expanded settings in unison and/or independently, such as through actuation of various valves arranged between one or more displacement devices and one or more cavities and/or fluid channels.


In one implementation, the dynamic tactile interface includes an array of deformable regions patterned across the digital display in a keyboard arrangement. In one example of this implementation, Block S110 controls the digital display to render an initial image of a home screen for a smartphone incorporating the dynamic tactile interface when each deformable region of the dynamic tactile layer is set in the retracted setting. In this example, once a user selects a native text-input application (e.g., a native SMS text messaging application, an email application, a calendar application, a web browser application including a search bar), Block S110 controls the digital display to render a new image of an interface including a 26-key virtual alphanumeric keyboard at a first time, Block S130 transitions a set of deformable regions—each arranged over and aligned with a key of the virtual keyboard—into the expanded setting over a period of time (e.g., two seconds) following the first time, and Block S140 modifies the position (and size) of one or more displayed keys (based on the user's viewing position) to mitigate perceived misalignment of the keys due to an effective variation in thickness of the tactile layer across the deformable regions.


In another example of the foregoing implementation, the method S100 is implemented within a road vehicle including a console display and a dynamic tactile interface arranged over the console display. In this example, once a user turns the vehicle on, Block S110 controls the console display to render an image of a virtual stereo control interface including multiple stereo control keys (e.g., volume, play, track forward, rewind, saved radio stations, etc.), and Block S130 controls a displacement device to transition a set of deformable regions—each substantially aligned with a virtual key of the virtual stereo control interface rendered on the digital display—within the dynamic tactile layer into the expanded setting. In this example, Block S130 interfaces with one or more cameras arranged within the vehicle to track the eyes of occupants of the vehicle, including when and from what position within the vehicle an occupant looks at the console display, and Block S140 modifies the position of one or more virtual keys rendered on the digital display to improve the perceived alignment of each virtual key with a corresponding deformable region for occupants of the vehicle who look directly at the console display. In this example, Block S140 can prioritize modification of the rendered image of the virtual stereo control interface, such as by prioritizing modification of the image for the driver of the vehicle first, then a front-row passenger of the vehicle, and then rear-seat passengers of the vehicle, etc.


Block S110 can therefore interface with the digital display to render the initial image on the digital display. For example, Block S110 can execute on a display driver controlling the digital display or on a processor electrically coupled to the digital display driver within the computing device. Block S110 can output the initial image that defines a complete display area of the digital display, such as by assembling the initial image from icons, stock frames, digital still photographs, text, stock borders, and/or stock figures, etc. Block S110 can also generate the initial image including virtual input regions that visually demarcate one or more input-sensitive regions of a touchscreen (e.g., a digital display with an integrated touch sensor) that renders the initial image, and Block S110 can place the virtual input regions within the initial image such that the virtual input regions substantially align with corresponding deformable regions of the dynamic tactile interface in the retracted setting. In one example and as described above, Block S110 can control a touchscreen integrated with the computing device to display a set of virtual alphanumeric keys (e.g., A, B, C, . . . , X, Y, Z) that define an alphanumeric keyboard such that an input on the tactile surface (arranged over the touchscreen) over a particular virtual alphanumeric key triggers input of a corresponding alphanumeric symbol into the computing device. In another example, Block S110 can interface with the digital display to render an image of a home screen with a set of icons, each icon in the set of icons corresponding to a native application installed on the computing device (e.g., smartphone) and displayed adjacent a corresponding deformable region.


As described above, Block S110 can control the digital display to render a ‘standard’ image for a dynamic tactile interface with a flush and continuous tactile surface (e.g., for the dynamic tactile interface with all deformable regions in the retracted setting), the standard image not adjusted to compensate for a user's perceived optical distortion because the substantially uniform effective thickness of the dynamic tactile layer yields substantially uniform optical distortion of the image broadcast through the dynamic tactile layer.


Alternatively, Block S110 can implement one or more methods or techniques described below to adjust the initial image based on the user's viewing position relative to the computing device, such as when the tactile surface at the deformable region is offset from the tactile surface at the peripheral region in the retracted setting and/or while the deformable region transitions into the expanded setting. For example, Block S120 can execute prior to or in conjunction with Block S110 to predict a viewing position of a user relative to the digital display, and Block S110 can implement methods or techniques as in Block S140 described below to transform a ‘standard’ image into an initial modified image that compensates for optical distortions precipitated by a non-uniform surface profile of the tactile surface (in the retracted setting) based on the predicted viewing position of the user. However, Block S110 can function in any other way to render (or to interface with a digital display to render) an image on a digital display coupled to a substrate opposite a tactile layer.


4. Optical Distortion


Generally, given a first imaginary line extending from a first point (e.g., a first pixel) on the digital display normal to (a dominant face of) the digital display and bisecting a line between the user's eyes, the user may perceive the first point projected through the dynamic tactile layer without substantial optical distortion. Specifically, the angle of incidence of light output from the digital display—through the dynamic tactile layer—to the user is approximately 0° at the first point, and the angle of refraction of this light output from the first point of the digital display is approximately 0°, as indicated by Snell's law, which recites








sin θ1 / sin θ2 = v1 / v2 = n2 / n1








wherein each θ defines an angle measured from a normal boundary between the digital display and the dynamic tactile layer (or between the dynamic tactile layer and air), wherein each v represents a velocity of light through the respective medium in meters per second, and wherein n defines the refractive index of the respective medium.
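
For reference, a minimal Python helper that evaluates this relation (n1 sin θ1 = n2 sin θ2) for a single boundary; the example indices are illustrative and are not values specified by the patent.

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of refraction (degrees) from Snell's law: n1*sin(theta1) = n2*sin(theta2).
    Returns math.nan beyond the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return math.nan
    return math.degrees(math.asin(s))

# e.g., light leaving an acrylic-like tactile layer (n ~ 1.49, an assumed value) into air (n = 1.0)
print(refraction_angle(20.0, 1.49, 1.0))  # bends away from the normal, ~30.6 degrees
```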


However, for a second point (e.g., second pixel) on the digital display offset from the first point, a second imaginary line extending from the second point to a center of the line between the user's eyes may not be normal to the dominant face of the digital display. Specifically, the angle of incidence of light output from the second point of the digital display—through the dynamic tactile layer—to the user is non-zero (i.e., greater than 0°) in at least one of two planes perpendicular to the plane of the dominant face of the display. Therefore, light output from the second point of the digital display may refract (i.e., bend) across the junction between the digital display and the dynamic tactile layer and again across the junction between the dynamic tactile layer and air (e.g., at the tactile surface), thereby shifting a perceived position and/or light intensity of the second point on the digital display relative to the first point for the user. Furthermore, a thickness of the dynamic tactile layer over the second point of the digital display can affect a total distance between the actual second point of the digital display and the perceived position of the second point for the user. In particular, the offset between the real and perceived position of the second point can be directly proportional to a thickness of the dynamic tactile layer over (and around) the second point. Therefore, as the deformable regions within the dynamic tactile layer transition from the retracted setting into the expanded setting (or between other settings) and the effective thickness of the dynamic tactile layer becomes non-uniform over its breadth, refraction of light through the dynamic tactile layer also becomes non-uniform, thereby yielding distortion of an image rendered on the digital display as offsets between real and perceived position of points across the rendered image vary. For example, a user may perceive greater optical distortion of light output from a third point on the digital display, the third point at a distance from the first point greater than a distance between the first point and the second point.
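
A rough feel for this proportionality can be obtained from an idealized flat-layer approximation (an assumption for illustration, not the patent's optical model), in which a ray crossing a uniform layer of thickness t at incidence angle θ1 is displaced laterally by t·sin(θ1 − θ2)/cos(θ2).

```python
import math

def lateral_shift_mm(thickness_mm, theta1_deg, n):
    """Idealized parallel-slab approximation: lateral displacement of a ray crossing a
    flat layer of the given thickness and refractive index at incidence angle theta1,
    d = t * sin(theta1 - theta2) / cos(theta2)."""
    t1 = math.radians(theta1_deg)
    t2 = math.asin(math.sin(t1) / n)   # refraction into the layer
    return thickness_mm * math.sin(t1 - t2) / math.cos(t2)

# A thicker layer over an expanded deformable region yields a larger apparent offset.
print(lateral_shift_mm(1.0, 30.0, 1.5))  # ~0.19 mm
print(lateral_shift_mm(2.0, 30.0, 1.5))  # ~0.39 mm, proportional to thickness
```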


Furthermore, light output from the digital display proximal the second point may exhibit greater internal reflection than light output from the digital display proximal the first point, thereby resulting in perceived lower intensity of light output at the second point compared to light output at the first point. Varying thickness of the dynamic tactile layer across its breadth due to expansion of one or more deformable regions may further yield non-uniform internal reflection of light output from the digital display and thus perception of non-uniform light intensity across the display. Similarly, a surface roughness of the tactile surface across a deformable region may be affected by a position of the deformable region, particularly along a junction between the deformable region and an adjacent peripheral region of the tactile surface, which may change how the tactile layer scatters and/or diffuses light transmitted through the tactile layer. Diffusion of light through the tactile layer may also be dependent on an angle of incidence of light—output from the display—onto a boundary between the display and the substrate and/or onto a boundary between the substrate and the tactile layer.


Light output from the digital display proximal the second point may also exhibit greater chromatic dispersion or other optical effects than light output from the digital display proximal the first point, resulting in further perceived optical distortion of the second point compared to the first point. In particular, the substrate, the fluid, the tactile layer, the display, and/or air may exhibit different indices of refraction for light across the visible spectrum such that light transmitted from the digital display through the dynamic tactile layer may bend by different degrees for various wavelengths of the light, thereby separating light output from the digital display by wavelength. Such an effect may be substantially minimal across planar regions of the tactile surface but may be relatively significant and perceptible at a junction between a peripheral region and a deformable region.


Therefore, differences between average and/or wavelength-specific refractive indices of various materials of the dynamic tactile interface and the digital display, geometries (e.g., thicknesses) of and offsets between materials within the dynamic tactile layer and the display, orientation of the components of the dynamic tactile layer and the digital display (e.g., orientation of pixels in the digital display relative to a fluid channel within the dynamic tactile layer), etc. can yield non-uniform chromatic dispersion of different wavelengths of light output from the digital display and variations in degree and/or type of perceived optical distortion of light emitted across the digital display.


In one implementation, the method S100 models (or predicts, estimates) a user's perception of optical distortion of light emitted from each particular point (e.g., pixel or group of pixels) on the digital display based on an angle (or angles) and a distance from the user's eyes to each particular point on the digital display. For example, the method S100 can select a particular point on the digital display through which a line normal to the dominant face of the digital display can be drawn to the (center of the) user's eyes and define this particular point as a point of minimal perceived optical distortion on the digital display and set this particular point as an anchor point. In this example, the method S100 can then apply a known geometry (e.g., width, length) of the digital display and/or of the dynamic tactile interface to estimate an angle and a distance of the user's eyes to each other point across the digital display (such as based on an orientation of the computing device) and to model perceived optical distortion of light emitted from these points according to these angles, these distances, and a current profile of the tactile surface. As described below, Block S140 (and Block S110) can thus implement such information to modify the displayed image to reduce perceived optical distortion of the image rendered across the digital display.
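
A simplified sketch of this per-point model follows, assuming a known pixel pitch and an eye position expressed in display coordinates; the names and numeric values are illustrative assumptions only.

```python
import numpy as np

def per_pixel_viewing_geometry(width_px, height_px, px_pitch_mm, eye_xy_mm, eye_dist_mm):
    """Hypothetical sketch: given an eye position over the display plane (x, y in mm,
    measured from the top-left pixel) and its normal distance to the display, return
    per-pixel viewing distance (mm) and incidence angle (deg) for every pixel."""
    xs = (np.arange(width_px) + 0.5) * px_pitch_mm
    ys = (np.arange(height_px) + 0.5) * px_pitch_mm
    gx, gy = np.meshgrid(xs, ys)
    dx, dy = gx - eye_xy_mm[0], gy - eye_xy_mm[1]
    lateral = np.hypot(dx, dy)
    dist = np.hypot(lateral, eye_dist_mm)
    angle = np.degrees(np.arctan2(lateral, eye_dist_mm))  # 0 deg at the anchor point
    return dist, angle

dist, angle = per_pixel_viewing_geometry(1080, 1920, 0.06, (32.4, 57.6), 300.0)
print(angle.min(), angle.max())  # minimal distortion near the anchor point, growing outward
```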


Furthermore, a shape and/or height (e.g., vertical position, internal or external radius of curvature) of a deformable region of the tactile layer can also affect types of optical distortion (i.e., reflection, refraction, diffraction, chromatic dispersion, etc.) that occur across the dynamic tactile layer by locally altering an effective thickness to the dynamic tactile layer and/or by introducing new material (e.g., fluid) into the dynamic tactile layer. As described below, Block S140 can further modify the image rendered on the digital display according to a known position, shape, height, etc. of one or more deformable regions of the tactile layer to reduce perception of non-uniform optical distortion across the deformable region(s). For example, Block S140 can implement one or more techniques described in U.S. patent application Ser. No. 13/896,090 to detect a height and/or shape of a deformable region based on an output of a capacitive touch sensor coupled to the dynamic tactile layer. Block S140 can also modify an image rendered on the digital display according to optical properties of the fluid, optical properties of the tactile layer under different strains or deformations, etc. to render an image—which yields a reduced perception of optical distortion by the dynamic tactile layer—on the display.


Therefore, perceived optical distortion of an image rendered on the digital display can be dependent on an angle between the user's eyes and points across the digital display, a distance from the user's eyes to the points across the digital display, and/or an orientation of the digital display relative to the user. Block S110 can therefore function to render an initial image, Block S120 can function to collect, predict, and/or estimate user eye position data, Block S130 can function to control a profile of the tactile surface of the dynamic tactile layer, and Block S140 can function to update or modify the image based on (a change in) the position of the user and/or (a change in) the profile of the tactile surface to compensate for corresponding actual or perceived changes in refraction, diffraction, reflection, etc. of light output from the display.


5. Viewing Position


Block S120 of the method S100 recites estimating a viewing position of a user relative to the digital display. Block S120 functions to detect an angle (e.g., in each of two planes) and/or a distance of the user's eyes to one or more points on the digital display, such as to each pixel or cluster of adjacent pixels of the display. Block S120 can also detect the user's point of focus on the digital display, the orientation of the computing device (and the dynamic tactile interface) relative to the user's eyes or to gravity, etc.


In one implementation, Block S120 interfaces with a forward-facing camera adjacent the digital display within the computing device to capture (or retrieve) a photographic image of the user viewing the digital display, and Block S120 further implements machine vision (e.g., object recognition) and/or machine learning techniques to identify the user's eyes in the photographic image. Once the user's eyes (or pupils, cheekbones, eyelids, and/or eyebrows, etc. or an outline of the user's head, etc.) are identified in the photographic image, Block S120 can estimate a real distance between the user's eyes and the camera. For example, Block S120 can count a number of pixels between the centers of the user's pupils as shown in the photographic image, predict a real pupil center distance of the user, and calculate a real distance from the camera to the user's eyes based on the number of pixels between the user's pupils and the predicted real pupil center distance of the user, such as according to a parametric function specific to the camera arranged within the computing device. In this example, Block S120 can process the photographic image of the user to detect an age, gender, and/or ethnicity, etc. of the user and then generate a prediction for the real pupil center distance of the user based on these data using a parametric function, a lookup table, etc. Alternatively, Block S120 can apply a static (i.e., preset) estimated real distance between eyes of users to estimate the distance between the user's eyes and the camera. Yet alternatively, Block S120 can implement a standard viewing distance for the computing device to estimate the user's viewing position relative to the camera. For example, for the computing device that includes a smartphone (or other computing device with a display less than sixteen square inches in size), Block S120 can apply a typical viewing distance of twelve inches between a user's eyes and a surface of the smartphone in determining the user's viewing position. In another example, for the computing device that includes a tablet (or other computing device with a display between sixteen and 100 square inches in size), Block S120 can apply a typical viewing distance of fifteen inches between a user's eyes and a surface of the tablet in determining the user's viewing position. In yet another example, for the computing device that includes a laptop (or other computing device with a display between 100 and 200 square inches in size), Block S120 can apply a typical viewing distance of eighteen inches between a user's eyes and a surface of the laptop in determining the user's viewing position. However, Block S120 can function in any other way to calculate, predict, or estimate a distance between the user's eyes and the camera (or digital display or other surface) of the computing device.
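
One common way to phrase the pupil-distance estimate is a pinhole-camera relation; the sketch below assumes a camera focal length expressed in pixels and a nominal 63 mm interpupillary distance, neither of which is specified by the patent.

```python
def viewing_distance_mm(pupil_px, focal_px, pupil_mm=63.0):
    """Pinhole-camera estimate (an assumption, not the patent's calibration):
    distance = focal_length_px * real_pupil_separation / pupil_separation_px.
    63 mm is a commonly cited adult interpupillary distance; a per-user value
    (e.g., predicted from demographics, as described above) could be substituted."""
    return focal_px * pupil_mm / pupil_px

# e.g., pupils 180 px apart as seen by a front camera with an assumed 1000 px focal length
print(viewing_distance_mm(180.0, 1000.0))  # ~350 mm, about 14 inches
```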


In this implementation, Block S120 can also estimate (e.g., calculate, extrapolate) an angle between the user's eyes and the camera along one or more axes based on a location of the user's eyes within the image. For example, Block S120 can (effectively) apply a virtual crosshatch centered on the photographic image (and corresponding to perpendicular centerlines of a lens of the camera), and Block S120 can determine that the user's viewing angle is 0° about a horizontal axis of the camera if a virtual horizontal centerline extending across the user's pupils within the photographic image lies substantially over a horizontal line of the virtual crosshatch, and Block S120 can determine that the user's viewing angle is 0° about a vertical axis of the camera if a vertical centerline centered between the user's pupils within the photographic image lies substantially over a vertical line of the virtual crosshatch, as shown in FIG. 3.


However, as in this implementation, Block S120 can also determine one or more non-normal viewing angles of the user relative to the camera. For example, Block S120 can also interface with a gyroscope and/or accelerometer within the computing device (shown in FIG. 2) to determine a current orientation of the computing device, such as if the computing device is currently held in a portrait orientation or a landscape orientation. Alternatively, Block S120 can implement any suitable machine vision technique to identify a second feature of the user's face (e.g., a mouth, a nose, eyebrows, a chin, etc.) in the photographic image and to determine an orientation of the camera (and therefore of the computing device) relative to the user based on a common relation (i.e., position) between eyes and the second feature on a human face. Block S120 can then implement this orientation of the camera relative to the user to determine that the user's viewing angle about the horizontal axis of the camera is less than 0° if the virtual horizontal centerline extending across the user's pupils within the photographic image lies below the horizontal line of the virtual crosshatch applied to the photographic image and that the user's viewing angle about the horizontal axis of the camera is greater than 0° about the horizontal axis if the centerline between the user's pupils lies above the horizontal line of the virtual crosshatch of the photographic image. Therefore, Block S120 can estimate the pitch of the user relative to a center point of the camera and about a horizontal axis of the camera based on a position of the user's eyes within a photographic image captured by a camera within the computing device.


Block S120 can also determine the user's viewing angle about the vertical axis of the camera. For example, Block S120 can implement the detected orientation of the camera relative to the user to determine that the user's viewing angle about the vertical axis of the camera is less than 0° if the virtual vertical centerline between the user's pupils within the photographic image lies to the right of the vertical line of the virtual crosshatch applied to the photographic image, and Block S120 can determine that the user's viewing angle about the vertical axis of the camera is greater than 0° if the centerline between the user's pupils lies to the left of the vertical line of the virtual crosshatch in the photographic image. Therefore, Block S120 can estimate the yaw of the user relative to the center point of the camera and about a vertical axis of the camera based on a position of the user's eyes within the photographic image.


Therefore, as in this implementation, Block S120 can estimate angles between the user's eyes and the camera along two perpendicular axes based on the position of the user's eyes within a photographic image, as shown in FIG. 1.
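
A compact sketch of such an angle estimate follows, assuming the pupil-midpoint offset from the virtual crosshatch is mapped to angles through a known camera focal length in pixels; the patent itself only describes sign and zero determination via the crosshatch, so the arctangent mapping and the numeric values here are added assumptions.

```python
import math

def pitch_yaw_deg(pupil_mid_xy, image_size, focal_px):
    """Hypothetical sketch: signed viewing angles about the camera's horizontal and
    vertical axes from the midpoint between the detected pupils. (0, 0) when the
    midpoint lies on the virtual crosshatch at the image center; signs follow the
    convention described above and would be flipped per device orientation."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    pitch = math.degrees(math.atan2(cy - pupil_mid_xy[1], focal_px))  # about the horizontal axis
    yaw = math.degrees(math.atan2(cx - pupil_mid_xy[0], focal_px))    # about the vertical axis
    return pitch, yaw

# pupils right of and above the image center -> positive pitch, negative yaw
print(pitch_yaw_deg((700.0, 350.0), (1280, 960), 1000.0))
```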


Block S120 can also set positive and negative directions for the user's viewing angle relative to the camera along the horizontal and vertical planes based on the orientation of the computing device and/or the orientation of the user to the camera, and Block S120 can also define horizontal and vertical planes of the camera based on the orientation of the computing device and/or the orientation of the user to the camera. For example, Block S120 can implement machine vision techniques, as described above, to identify the user's eyes and a second feature of the user within the photographic image of the user, detect a “roll” or angular position of the user relative to the camera based on the position of the eyes and the second feature of the user within the photographic image, and then define the horizontal and vertical planes of the camera according to the angular position of the user to the camera (i.e., not parallel to short and long sides of the computing device, an optical sensor within the camera, and/or the photographic image). Specifically, Block S120 can function as described above to detect an angular position of the user's eyes about an axis extending outward from (and normal to) the camera, and Block S120 can further define axes of the camera (and the computing device or the photographic image) according to the angular position of the user.


Block S120 can further transform the distance between the camera and the user's eyes, a known position of the camera relative to the digital display, the angle of the user's eyes about the horizontal axis of the camera, and/or the angle of the user's eyes about the vertical axis of the camera, etc. into user viewing angles at discrete positions (e.g., points, discrete areas) across the digital display. For example, as shown in FIG. 3, Block S120 can transform the user's viewing angles and distance from the camera into viewing angles (about perpendicular horizontal and vertical axes) at each pixel (or cluster of pixels) across the digital display. Therefore, Block S120 can generate a viewing position matrix defining an angle between the eyes of the user and the digital display along a first axis (e.g., the ‘x’ or horizontal axis) of the tactile layer over the digital display and an angle between the eyes of the user and the digital display along a second axis of the tactile layer for each discrete position in a set of discrete positions across the tactile layer based on a position of the eyes within the photographic image.
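
A hypothetical construction of such a viewing position matrix follows, assuming the camera-to-display offset and a sparse grid of discrete display positions are known in millimeters; the coordinate conventions and values are assumptions for the sketch.

```python
import numpy as np

def viewing_position_matrix(eye_cam_mm, cam_to_display_origin_mm, grid_mm):
    """Hypothetical sketch of the viewing position matrix: for each discrete position
    (x, y) on the display plane, the signed viewing angles (deg) about the display's
    horizontal and vertical axes toward the eye point. eye_cam_mm is the eye position
    in camera coordinates; the fixed camera-to-display offset is assumed known from
    the device geometry."""
    eye = np.asarray(eye_cam_mm) + np.asarray(cam_to_display_origin_mm)
    pts = np.asarray(grid_mm)            # shape (rows, cols, 2): x, y per discrete position
    dx = eye[0] - pts[..., 0]
    dy = eye[1] - pts[..., 1]
    dz = eye[2]                          # normal distance from the eye to the display plane
    angle_about_vertical = np.degrees(np.arctan2(dx, dz))
    angle_about_horizontal = np.degrees(np.arctan2(dy, dz))
    return np.stack([angle_about_horizontal, angle_about_vertical], axis=-1)

xs, ys = np.meshgrid(np.linspace(0, 64.8, 9), np.linspace(0, 115.2, 16))
grid = np.stack([xs, ys], axis=-1)
m = viewing_position_matrix((0.0, -10.0, 300.0), (32.4, 8.0, 0.0), grid)
print(m.shape)  # (16, 9, 2): two viewing angles per discrete position
```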


In another implementation, Block S120 can interface with a speaker within the computing device to output a low-frequency sound and then interface with a microphone within the computing device to receive reflected sounds originating from the speaker. Block S120 can then implement sonic imaging techniques to generate a virtual map of the user's facial features (e.g., nose, eye sockets, forehead) and to estimate the angle and/or distance of the user's eyes to a reference point on the computing device (e.g., the microphone). Block S120 can then calculate the user's viewing angle (i.e., viewing position) relative to each pixel, group of pixels, or other regions across the digital display, such as based on known dimensions of the computing device, known dimensions of the digital display, and/or known dimensions of the dynamic tactile layer within the computing device. For example, Block S120 can generate a reference position matrix including the angle and/or distance of the user's eyes to the reference point on the computing device, cross the reference position matrix with a display region matrix (e.g., specifying the location of each pixel of the digital display relative to the reference point), and thus generate a viewing position matrix specifying viewing angles for each region of the digital display, as shown in FIG. 2.


Block S120 can also implement eye-tracking methods to identify the user's current point of focus on the digital display. However, Block S120 can interface with any other component within, coupled to, or in communication with the computing device and can implement any other method or technique to determine the viewing position of the user relative to the digital display. Block S120 can also calculate the user's viewing position substantially in real-time and can update continuously—such as at a sampling rate of 10 Hz—to capture changes in the user's viewing position relative to the computing device (or to the digital display, to the dynamic tactile layer, and/or to the camera, etc.).


6. Deformable Region


Block S130 of the method S100 recites transitioning the deformable region from a retracted setting into an expanded setting. Generally, Block S130 functions to alter the position of one or more deformable regions of the dynamic tactile interface, thereby transiently creating a tactile formation across the tactile surface of the dynamic tactile layer (and over the digital display).


In one implementation in which the deformable region cooperates with the substrate of the dynamic tactile layer to define a cavity and in which the cavity is coupled to a displacement device via a fluid channel, Block S130 can control the displacement device (e.g., a positive displacement pump) to displace fluid from a reservoir, through the fluid channel, and into the cavity, thereby expanding the tactile layer at the deformable region to a position offset above the tactile surface at the peripheral region. Block S130 can further interface with the displacement device (or with multiple displacement devices) to transition multiple deformable regions from the retracted setting to the expanded setting in unison, such as a set of deformable regions arranged over a virtual alphanumeric keyboard rendered on the digital display below to provide tactile guidance to the user as the user enters text into the computing device. Block S130 can therefore implement techniques and/or interface with components as described in U.S. patent application Ser. No. 13/481,676, filed on 25 May 2012, which is incorporated herein in its entirety by this reference.


In this implementation, Block S130 can control the displacement device to displace a preset volume of fluid into the fluid channel fluidly coupled to the cavity cooperatively defined by the substrate and the deformable region to expand the deformable region to a target height offset above the peripheral region. For example, Block S130 can displace 0.2 mL of fluid into the fluid channel to transition the deformable region to a known three-dimensional convex profile at a known maximum height above the peripheral region (e.g., 1.8 mm), such as at a known external radius of curvature. Block S130 can also track or measure a volume of fluid displaced into the dynamic tactile layer and then calculate a profile across the tactile surface based on this volume of fluid, an elasticity or other mechanical property of the tactile layer, and a known position and footprint of each expanded deformable region defined in the dynamic tactile layer. For example, Block S130 can implement a lookup table to access a stored surface profile model of the dynamic tactile layer based on a volume of fluid pumped into the dynamic tactile layer. In another example, Block S130 can execute a parametric model to calculate the profile across the tactile surface based on the foregoing parameters. Yet alternatively, Block S130 can interface with a capacitive touch sensor coupled to the dynamic tactile layer to detect a vertical position of discrete positions across the tactile layer, such as described in U.S. patent application Ser. No. 13/896,090.
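
As a sketch of the lookup-table approach, the following assumes an illustrative calibration from displaced fluid volume to peak height of the deformable region; the listed values are placeholders anchored only to the 0.2 mL and 1.8 mm figures cited above, not calibration data from the patent.

```python
import bisect

# Hypothetical calibration table: displaced fluid volume (mL) -> peak height of the
# deformable region above the peripheral region (mm). Values are illustrative only;
# the patent contemplates a stored surface-profile model or a parametric model.
VOLUME_ML = [0.00, 0.05, 0.10, 0.15, 0.20]
PEAK_MM   = [0.00, 0.60, 1.10, 1.50, 1.80]

def peak_height_mm(volume_ml):
    """Linearly interpolate the expected peak height for a displaced fluid volume."""
    volume_ml = min(max(volume_ml, VOLUME_ML[0]), VOLUME_ML[-1])
    i = bisect.bisect_left(VOLUME_ML, volume_ml)
    if i == 0:
        return PEAK_MM[0]
    v0, v1 = VOLUME_ML[i - 1], VOLUME_ML[i]
    h0, h1 = PEAK_MM[i - 1], PEAK_MM[i]
    return h0 + (h1 - h0) * (volume_ml - v0) / (v1 - v0)

print(peak_height_mm(0.20))  # 1.8 mm at the full 0.2 mL displacement cited above
```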


However, Block S130 can function in any other way and can control any other component within the computing device to transition the deformable region from the retracted setting to the expanded setting and to monitor a profile of the tactile surface.


7. Image Modification


Block S140 of the method S100 recites modifying the portion of the image rendered on the digital display according to the estimated viewing position of the user and a profile of the tactile surface across the deformable region in the expanded setting. Generally, Block S140 functions to modify the image rendered on the digital display based on the viewing position of the user relative to (discrete positions across) the digital display and the profile of the tactile surface of the dynamic tactile layer to compensate for optical distortion of the image transmitted through the dynamic tactile layer to the user. In particular, Block S140 functions to modify the image to reduce the user's perception of distortion of the image rendered on the digital display.


As described above, differences in the indices of refraction of various materials of the dynamic tactile layer and the digital display and variations in the effective thickness of the dynamic tactile layer (i.e., between peripheral regions and deformable regions of the tactile layer) may yield optical distortion of an image rendered on the digital display across all regions of the digital display except a single point (i.e., pixel) in direct (i.e., normal) line of sight of the user. In particular, when the user views a ray of light—transmitted through the dynamic tactile layer—at an acute angle (i.e., a non-normal angle) between the user's eyes and the digital display, the user may perceive an origin (e.g., pixel) of the ray of light that is other than the true origin of the ray of light due to the thickness of the dynamic tactile layer that is arranged over the digital display. However, the distance between the true origin of a ray of light and the perceived origin of the ray of light may depend on an effective thickness of the dynamic tactile layer (and an effective thickness and material property of each of the substrate, fluid, and tactile layer of the dynamic tactile layer) such that the dynamic tactile layer with expanded deformable regions distorts an image rendered on the digital display. In one example, the tactile surface at the deformable region is substantially flush (i.e., in-plane) with the tactile surface at the peripheral region in the retracted setting, and the tactile surface at the deformable region is elevated (or offset) above the peripheral region when the deformable region transitions into the expanded setting. Thus, in the expanded setting, the deformable region can define a portion of the dynamic tactile layer of a (varying and) greater effective thickness relative to a portion of the dynamic tactile interface at the peripheral region. In this example, dissimilarity between the refractive index of materials of the dynamic tactile interface and air around the computing device can thus yield a greater distance between a perceived origin of a first ray of light and the true origin of the first ray of light transmitted through the deformable region in the expanded setting than for a second ray of light transmitted through the peripheral region for an equivalent viewing position of the user. For example, the dynamic tactile layer can exhibit a lensing effect locally across the deformable region in the expanded setting, thereby preferentially distorting light emitted from the digital display proximal the deformable region over light emitted from the digital display proximal the peripheral region. In particular, the user's viewing angle (and distance) to a region of the digital display (the “source”) adjacent the deformable region (the “lens”) can affect the user's perception of light emitted from the digital display (even if the user views the digital display at an angle of substantially 0°). Furthermore, as described above, the deformable region can be transitioned to various heights above (and/or below) the peripheral region, and the height and shape (i.e., “profile”) of the deformable region in the expanded setting can further affect distortion of light through the dynamic tactile layer and therefore the user's perception of an image rendered on the digital display.


Therefore, in one implementation, Block S140 can modify the image rendered on the digital display (or render a second image based on the original image on the digital display) to compensate for a non-uniform optical distortion (e.g., refraction) of light transmitted through the dynamic tactile layer due to a change in the position of one or more deformable regions. For example, Block S140 can update the image as a deformable region of the dynamic tactile layer transitions from the retracted setting into the expanded setting to compensate for a local difference in optical distortion of light transmitted from the digital display to the user's eyes through the deformable region. In another example, Block S140 updates the image as a deformable region transitions from the expanded setting into the retracted setting to compensate for a return to more uniform optical distortion of light transmitted through the dynamic tactile layer. In another implementation, Block S140 can modify the image rendered on the digital display to compensate for a change in the user's viewing position relative to the digital display to compensate for non-uniform optical distortion of light transmitted through the dynamic tactile layer due to a non-uniform surface profile of the dynamic tactile layer. Block S140 can therefore update, modify, or replace an image rendered on the digital display in response to a change in position of a deformable region of the dynamic tactile layer and/or based on a change in a viewing position of the user relative to the digital display (when one or more deformable regions of the dynamic tactile layer is in a position other than flush with the peripheral region). Therefore, as in these implementations, Block S140 can translate and/or scale regions of the image rendered on the digital display, such as shown in FIG. 4, to offset optical effects resulting from light transmission through the dynamic tactile layer.


In one implementation, Block S110 displays a frame of a messaging application—including alphanumeric keys of an alphanumeric keyboard—on a digital display integrated into a computing device (e.g., a smartphone, a tablet, or other mobile computing device), and Block S120 determines that the user is holding the computing device in a landscape orientation based on an output of a gyroscope and/or accelerometer integrated into the computing device, sets a viewing center for the user at the effective center of the output surface of the digital display, and selects a preset user viewing distance of twelve inches from the output surface of the digital display. Block S130 then transitions select deformable regions of the dynamic tactile interface into the expanded setting, each of the select deformable regions adjacent an alphanumeric key rendered on the digital display, and Block S140 horizontally stretches a portion of the rendered image corresponding to the alphanumeric keyboard with the portion of the image anchored at the effective center of the digital display and the magnitude of the horizontal expansion of the image corresponding to the user's viewing distance. For example, Block S140 can leave a first alphanumeric key proximal the user's viewing center in a substantially unchanged position in the second image, translate a second alphanumeric key laterally adjacent the first alphanumeric key horizontally away from the first alphanumeric key by a first distance in the second image, and translate a third alphanumeric key—laterally adjacent the second alphanumeric key opposite the first alphanumeric key—horizontally away from the second alphanumeric key by a second distance greater than the first distance in the second image. In particular, in this implementation, Block S140 can scale the image horizontally from an anchor point of the image that is set based on a preset (or predicted) user viewing center. Thus, in this implementation, Block S140 can adjust the image to accommodate predicted refraction effects of various deformable regions arranged across the digital display based on a static preset user viewing distance and/or user viewing center.
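
A minimal sketch of this horizontal scaling about the viewing center follows; the 1.5% scaling factor and its dependence on viewing distance are illustrative assumptions, not values from the patent.

```python
def scale_keys_about_center(key_centers_x, center_x, viewing_distance_in, ref_distance_in=12.0):
    """Hypothetical sketch: horizontally re-position key centers about the user's
    viewing center, with a scaling magnitude that grows as the user moves closer
    than the reference viewing distance. The 1.5% factor is an assumption."""
    k = 1.0 + 0.015 * (ref_distance_in / max(viewing_distance_in, 1e-6))
    return [center_x + (x - center_x) * k for x in key_centers_x]

keys = [100.0, 360.0, 620.0]          # e.g., three key centers in pixels
print(scale_keys_about_center(keys, 360.0, 12.0))
# the key at the viewing center stays put; keys farther from it move farther
```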


In the foregoing implementation, Block S120 can alternatively interface with a forward-facing camera integrated into the computing device to capture a photographic image and analyze the photographic image as described above to identify a user viewing distance relative to a reference point on the computing device. In this implementation, Block S140 can apply the predicted user viewing distance to set a horizontal scaling magnitude corresponding to the user's viewing distance and thus scale the image rendered on the digital display relative to a scaling anchor point (based on a predicted user viewing center) substantially in real-time. Block S140 can therefore apply a linear scaling (and/or translation) value or a non-linear scaling (and/or translation) model to the image to shift portions of the image to compensate for the user's viewing position relative to the dynamic tactile layer and the position of one or more deformable regions across the tactile surface.
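

One rough way to estimate such a viewing distance from a forward-facing camera frame is the pinhole-camera relation between detected eye separation and an assumed interpupillary distance; the sketch below uses illustrative constants (focal length, interpupillary distance) that are assumptions, not values from the specification.
```python
import math

# Rough sketch of estimating a viewing distance from detected eye positions in
# a forward-facing camera frame (pinhole-camera assumption; all constants are
# illustrative).

def estimate_viewing_distance(eye_px_left, eye_px_right,
                              focal_length_px=600.0, ipd_mm=63.0):
    """Estimate camera-to-user distance (mm) from eye pixel coordinates."""
    pixel_separation = math.dist(eye_px_left, eye_px_right)
    if pixel_separation <= 0:
        return None
    # Similar triangles: real separation / distance = pixel separation / focal length.
    return ipd_mm * focal_length_px / pixel_separation

print(estimate_viewing_distance((300.0, 240.0), (380.0, 242.0)))
```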


As described above and shown in FIG. 3, Block S120 can further analyze the photographic image to predict the user's viewing center (i.e., point of focus), such as by interfacing with a forward-facing camera and implementing an eye tracking technique to analyze an output of the camera, and Block S140 can set a scaling anchor point at the user's point of focus on the digital display. Block S140 can thus modify a scaling magnitude and a scaling anchor point location substantially in real-time to accommodate changes in the user's viewing angle, distance, and/or point of focus relative to the digital display over time.


In another implementation, Block S140 can scale and/or transform only regions of the displayed image substantially adjacent a deformable region in an elevated position (i.e., in an expanded setting). For example, the tactile surface at the deformable region can define a three-dimensional convex domed geometry in the expanded setting, and Block S140 can translate a corresponding alphanumeric key displayed adjacent the deformable region such that a ray traced from the center of the corresponding alphanumeric key to the user's eyes (e.g., to the center of the bridge of the user's nose) passes through the deformable region normal to the tactile surface, such as shown in FIG. 4. In this example, Block S140 can translate a first region of the displayed image over a static second region of the image to substantially minimize perceived optical translation of the center of the first region due to a shift in the position of a deformable region adjacent the first region and/or due to a change in the user's viewing position. In this example, Block S120 can select a translation matrix defining viewing angles for various regions of the image for a current viewing angle and position of the user relative to a point on the computing device (e.g., relative to the camera, relative to a center of the digital display, relative to the user's viewing center or point of focus), and Block S140 can implement the translation matrix to translate select portions of the image rendered on the digital display adjacent expanded deformable regions. Alternatively, Block S120 can select a translation matrix from a group of available translation matrices based on an orientation of the computing device, a user viewing angle, position, and/or point of focus (e.g., within predefined ranges) and/or settings of deformable regions of the dynamic tactile layer, etc., and Block S140 can implement the selected translation matrix to translate select portions of the image rendered on the digital display adjacent expanded deformable regions. Yet alternatively, Block S120 can generate a translation matrix based on an orientation of the computing device, a user viewing angle, a user viewing position, and/or a user point of focus, etc. (e.g., extrapolated from an output of a forward-facing camera), and Block S140 can implement the translation matrix accordingly. In particular, Block S140 can estimate a change in perceived position of a portion of the image, by the user, through the deformable region between the retracted setting and the expanded setting based on an estimated three-dimensional surface profile and the estimated viewing position of the user, can calculate a translation magnitude value and a translation direction to compensate for the change in perceived position of the portion of the image by the user, and can translate the portion of the image rendered on the digital display based on the translation magnitude value and the translation direction accordingly.
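

A short geometric sketch of the ray condition described above follows, under simplifying assumptions that are not taken from the specification: the display is placed in the z = 0 plane, the dome is treated as spherical so a ray is normal to its surface exactly when it passes through the dome's center of curvature, and the key is moved to the point where the eye-to-center line pierces the display plane.
```python
import numpy as np

# Geometric sketch (assumed coordinate frame, not the claimed implementation).

def key_target_position(eye, dome_center):
    """Return the (x, y) display position at which to center the key so the
    ray from the key to the eye passes through the spherical dome normally."""
    eye = np.asarray(eye, dtype=float)
    dome_center = np.asarray(dome_center, dtype=float)
    # Parameter t along the line eye -> dome_center at which z reaches 0.
    t = eye[2] / (eye[2] - dome_center[2])
    hit = eye + t * (dome_center - eye)
    return hit[:2]

# Example with illustrative numbers: eyes 300 mm above the display and offset
# to the left; the dome's center of curvature 2 mm above the display at x = 40 mm.
print(key_target_position(eye=(-50.0, 0.0, 300.0), dome_center=(40.0, 0.0, 2.0)))
```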


Block S140 can implement similar techniques to select and/or generate a scaling matrix for one or more regions of the displayed image. For example, as described above, the deformable region in the expanded setting can define a three-dimensional convex domed geometry—mimicking a lens—over a region of the digital display such that the deformable region magnifies an adjacent region of an image rendered on the digital display. In particular, the magnification power of the deformable region in the expanded setting can be based on the shape (e.g., the vertical height, perimeter geometry, attachment geometry, optical and mechanical material properties, etc.) of the deformable region, a distance between the tactile layer and an output surface of the digital display, and the user's viewing angle (and distance) to the deformable region, etc. Block S140 can thus account for such variables to scale a portion of the region of the image adjacent the deformable region to compensate for magnification of the portion of the image by the expanded deformable region. Block S140 can therefore estimate a three-dimensional surface profile of the tactile layer across the deformable region in the expanded setting (e.g., based on a magnitude of a volume of fluid displaced into the dynamic tactile layer), estimate a non-uniform magnification of the digital display across the portion of the image by the deformable region in the expanded setting based on the estimated three-dimensional surface profile, and non-uniformly scale the portion of the image, rendered on the digital display, to compensate for the non-uniform magnification of the digital display by the deformable region in the expanded setting.
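

As one hedged illustration of estimating the dome profile from displaced fluid and deriving a compensating scale, the sketch below models the expanded region as a spherical cap and maps apex height to magnification with an assumed linear coefficient; a fuller treatment would vary the scale factor with radius across the dome rather than applying a single factor at the apex.
```python
import numpy as np

# Sketch only: spherical-cap model and linear height-to-magnification mapping
# are assumptions for illustration.

def cap_height_from_volume(volume_mm3, footprint_radius_mm):
    """Solve V = (pi*h/6)*(3*a^2 + h^2) for the cap height h by bisection."""
    a = footprint_radius_mm
    lo, hi = 0.0, 2.0 * a
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        v = np.pi * mid / 6.0 * (3.0 * a ** 2 + mid ** 2)
        lo, hi = (mid, hi) if v < volume_mm3 else (lo, mid)
    return 0.5 * (lo + hi)

def compensating_scale(volume_mm3, footprint_radius_mm, mag_per_mm=0.15):
    """Return the scale factor applied to the adjacent image region."""
    h = cap_height_from_volume(volume_mm3, footprint_radius_mm)
    magnification = 1.0 + mag_per_mm * h   # assumed empirical mapping
    return 1.0 / magnification             # shrink to offset magnification

print(compensating_scale(volume_mm3=8.0, footprint_radius_mm=4.0))
```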


In an example of the foregoing implementation, Block S110 includes rendering—on a digital display within a mobile computing device—a first image of a locked home screen with a set of icons, each original icon in the set of icons corresponding to a native application installed on the mobile computing device. In this implementation, when the mobile computing device is unlocked, Block S130 expands a set of deformable regions into expanded settings, each deformable region adjacent an original icon rendered on the display, and Block S140 generates a second image (or “frame”) including scaled and translated versions of the icons (“second icons”) based on the user's viewing position relative to the mobile computing device and renders the second image on the digital display such that the second icons appear to the user as substantially undistorted and substantially centered on corresponding deformable regions—substantially regardless of the user's viewing position—as the deformable regions expand. For example, for each first icon, Block S140 can set a scaling value for the first icon based on a vertical position of the corresponding (i.e., adjacent) deformable region in the expanded setting, set a translation magnitude value and a translation direction for the first icon based on a vertical position of the corresponding deformable region in the expanded setting and the estimated viewing position of the user relative to the corresponding deformable region, and then apply the scaling value uniformly across the first icon and translate the first icon according to the translation magnitude value and the translation direction in the first image to generate the second image.
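

A compact sketch of computing one icon's second-frame parameters is shown below; the shrink coefficient, the parallax-style translation rule, and its sign are assumptions chosen for illustration, not values or rules stated in the specification.
```python
import numpy as np

# Sketch: one uniform scale per icon based on the button's vertical position,
# plus a translation whose magnitude grows with the button height and the
# viewer's off-axis angle (illustrative constants and sign convention).

def second_frame_icon(icon_center, button_height_mm, eye_xy, eye_height_mm,
                      shrink_per_mm=0.12):
    icon_center = np.asarray(icon_center, dtype=float)
    eye_xy = np.asarray(eye_xy, dtype=float)
    # Uniform scale chosen to offset magnification by the raised button.
    scale = 1.0 / (1.0 + shrink_per_mm * button_height_mm)
    # Parallax-style translation: shift roughly height * tan(view angle) along
    # the direction away from the viewer's projected eye position.
    offset = icon_center - eye_xy
    distance = np.linalg.norm(offset)
    direction = offset / distance if distance > 0 else np.zeros(2)
    magnitude = button_height_mm * (distance / eye_height_mm)
    return scale, icon_center + magnitude * direction

print(second_frame_icon(icon_center=(30.0, 80.0), button_height_mm=0.8,
                        eye_xy=(0.0, 0.0), eye_height_mm=300.0))
```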


Furthermore, in the second image, Block S140 can shift to black a set of pixels in the digital display corresponding to an area of a first icon excluding an area of an intersection of the first icon and the second icon as rendered on the digital display, as shown in FIG. 6. In particular, distortion of light through the dynamic tactile layer may be greatest proximal a perimeter of a deformable region in the expanded setting, such as to a degree that chromatic dispersion (e.g., a “rainbow effect”) around an expanded deformable region becomes noticeable to the human eye at a standard viewing distance to the digital display. Therefore, Block S140 can generate the second image that excludes visual data (i.e., includes only black) in areas corresponding to high degrees of optical distortion and not necessary to transmit comprehensible visual data of the corresponding icon. Alternatively, Block S140 can reduce a brightness (or light intensity) of the image at a region corresponding to an area of the first icon excluding an area of an intersection of the first icon and the second icon to reduce a perceived intensity of chromatic dispersion of light transmitted through the dynamic tactile layer. For example, Block S140 can uniformly or non-uniformly reduce brightness of a backlight across this region of the display. Similarly, Block S140 can set a color of this region of the image to a single wavelength of light (e.g., 512 nm) or to a limited range of wavelengths of light (e.g., 509 nm to 515 nm) to limit chromatic dispersion of light transmitted through the dynamic tactile layer. Alternatively, Block S140 can extend a background color, background pattern, etc. of an adjacent portion of the image into this region of the image (i.e., into the region of the updated image behind the icon in the original image).
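

The masking operation described above reduces to a simple boolean difference between the original icon's footprint and the translated icon's footprint; the sketch below models both icons as axis-aligned rectangles purely for illustration.
```python
import numpy as np

# Sketch of blacking out pixels that belonged to the original icon but fall
# outside its scaled/translated counterpart.

def rect_mask(shape, x0, y0, x1, y1):
    mask = np.zeros(shape, dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask

frame = np.full((120, 120, 3), 200, dtype=np.uint8)       # light-gray background
first_icon = rect_mask(frame.shape[:2], 30, 30, 70, 70)    # icon in the original frame
second_icon = rect_mask(frame.shape[:2], 36, 34, 76, 74)   # icon in the second frame

# Area of the first icon excluding its intersection with the second icon.
to_black = first_icon & ~second_icon
frame[to_black] = 0

print(int(to_black.sum()), "pixels shifted to black")
```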


In one variation, Block S130 retracts the deformable region from the expanded setting, in which the deformable region is substantially flush with the peripheral region, to the retracted setting below the peripheral region such that the deformable region demagnifies an adjacent portion of the image. In this variation, Block S140 can thus expand the adjacent portion of the image to compensate for a user's perceived demagnification of the portion of the image by the deformable region in the retracted setting. In one example, Block S140 can (linearly or nonlinearly) expand an image of an icon rendered on the display adjacent the deformable region to compensate for the transition of the deformable region into the retracted setting, such as by expanding the image of the icon over a background color or background pattern rendered on the display.


Therefore, Block S140 can prioritize one or more portions of the image rendered on the display for modification to compensate for optical distortion of the image due to a non-planar three-dimensional profile of the tactile surface. For example, Block S140 can prioritize icons for native applications on a home screen rendered on the display or icons for alphanumeric keys rendered on the display for scaling and/or translation to compensate for a varying thickness and/or material composition of the dynamic tactile layer.


In another implementation, Block S120 generates a reference position matrix including the angle and distance of the user's eyes to a reference point on the computing device, as described above, and Block S120 crosses the reference position matrix with a display region matrix (e.g., specifying the location of each pixel of the digital display relative to the reference point) to generate a viewing position matrix specifying the user's viewing angle(s) for each portion (e.g., pixel, cluster of pixels) across the digital display, as shown in FIG. 2. In this implementation, Block S130 can generate a tactile surface position matrix defining the position of regions across the tactile surface, such as regions adjacent corresponding portions of the digital display, as shown in FIG. 2. For example, Block S130 can generate a tactile surface position matrix that specifies the vertical distance between a portion of the digital display and a corresponding region of the tactile surface, such as based on a lookup table of deformable region positions corresponding to fluid pressure within corresponding fluid channels defined within the substrate, based on deformable region positions detected with a capacitive sensor coupled to the dynamic tactile layer as described in U.S. patent application Ser. No. 13/896,090, or based on a volume(s) of fluid pumped into the dynamic tactile layer. Finally, in this implementation, Block S140 can cross the viewing position matrix with the tactile surface position matrix to generate a transfer matrix, as shown in FIG. 2, and Block S140 can apply the transfer matrix to the image currently rendered on the digital display (i.e., the “original image”) to generate an updated image (i.e., the “second image”) including portions of the original image defined in new positions across the second image to compensate for a change in optical distortion of the original image resulting from a shift in the profile of the tactile surface and/or from a shift in the user's viewing position relative to the digital display.
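

A minimal numerical sketch of this matrix pipeline, under strong simplifying assumptions (a single eye position, one dome-shaped surface bump, parallax-only displacement, nearest-neighbor resampling, and illustrative units), is shown below; the constants and combination rule are not taken from the specification.
```python
import numpy as np

# Sketch of: per-pixel viewing-angle map, per-pixel tactile-surface height map,
# and a displacement ("transfer") map applied to the original image.

h, w = 60, 80                                   # display resolution (pixels)
eye = np.array([40.0, -20.0, 900.0])            # eye position in pixel units above display
ys, xs = np.mgrid[0:h, 0:w].astype(float)

# Viewing-position map: tangent of the view angle at each pixel, per axis.
tan_x = (xs - eye[0]) / eye[2]
tan_y = (ys - eye[1]) / eye[2]

# Tactile-surface position map: a single raised dome, zero elsewhere.
cx, cy, radius, height = 55.0, 30.0, 10.0, 3.0
r2 = (xs - cx) ** 2 + (ys - cy) ** 2
surface = np.where(r2 < radius ** 2, height * (1.0 - r2 / radius ** 2), 0.0)

# Transfer map: apparent parallax shift ~ surface height * tan(view angle);
# sample the original image from the shifted location to pre-compensate.
src_x = np.clip(np.rint(xs + surface * tan_x), 0, w - 1).astype(int)
src_y = np.clip(np.rint(ys + surface * tan_y), 0, h - 1).astype(int)

original = np.random.randint(0, 255, (h, w), dtype=np.uint8)
second_image = original[src_y, src_x]
print(second_image.shape)
```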


As shown in FIG. 2, Block S140 can also generate an optical properties matrix that includes optical property information for one or more layers or materials within the dynamic tactile interface, such as refractive indices of the tactile layer, the substrate, and/or the fluid and a modulus of elasticity of the tactile layer. Block S140 can also generate a geometry matrix that includes geometry information of the dynamic tactile interface, such as the locations and geometries of fluid channels within the substrate, the locations and geometries of support members with corresponding cavities and adjacent corresponding deformable regions, and/or attachment point geometries for each deformable region, etc. Block S140 can thus cross the viewing position matrix and the tactile surface position matrix with an optical properties matrix and/or a geometry matrix to generate the transfer matrix, as shown in FIG. 2, that defines how an original image rendered on the digital display is to be modified to compensate for a change in optical distortion of the original image.


Block S130 can therefore select or generate a geometry matrix defining a profile of the tactile surface for the deformable region in the expanded setting for each discrete position (e.g., discrete area) in a set of discrete positions across the tactile surface, and Block S140 can transform the (original) image rendered on the digital display according to the viewing position matrix and the geometry matrix to generate a second image and then render the second image on the digital display. Block S130 can likewise generate the geometry matrix defining an effective thickness of the tactile layer across the peripheral region and the set of deformable regions in the expanded setting for each discrete position in the set of discrete positions, and Block S140 can transform the (original) image into the second image further based on an index of refraction of the tactile layer and an index of refraction of a fluid arranged between the deformable region and the substrate.
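

One way the refractive indices can enter such a transform is through the standard lateral-displacement formula for a transparent slab; the sketch below sums that displacement over a tactile-layer contribution and a fluid contribution at a given viewing angle, with thicknesses and indices chosen only as plausible illustrative values.
```python
import math

# Sketch of the refraction term: a layer of thickness t and index n shifts the
# apparent position of an underlying pixel laterally at off-normal viewing angles.

def lateral_shift_mm(theta_deg, layers):
    """layers: iterable of (thickness_mm, refractive_index) pairs."""
    theta = math.radians(theta_deg)
    s, c = math.sin(theta), math.cos(theta)
    shift = 0.0
    for thickness, n in layers:
        # Standard slab displacement: t * sin(theta) * (1 - cos(theta)/sqrt(n^2 - sin^2(theta))).
        shift += thickness * s * (1.0 - c / math.sqrt(n * n - s * s))
    return shift

# Example: 1.0 mm tactile layer (n ~ 1.41) over 0.3 mm of fluid (n ~ 1.40),
# viewed 30 degrees off-normal.
print(lateral_shift_mm(30.0, [(1.0, 1.41), (0.3, 1.40)]))
```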


Block S140 can further map virtual pixels of the second image to real pixels in the digital display. In particular, in translating and/or scaling the original image to generate the updated second image, Block S140 can translate a virtual pixel of the original image to a virtual position within the second image corresponding to a nearest real pixel of the digital display. However, Block S140 can map scaled and/or translated portions of the original image to real positions of pixels of the digital display to generate the second image in any other suitable way.
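

A tiny sketch of snapping translated virtual positions to real pixel indices (simple rounding with clamping to assumed panel bounds) follows.
```python
import numpy as np

# Sketch: round virtual (x, y) positions to the nearest real pixel and clamp
# to the panel bounds.

def to_real_pixels(virtual_xy, width, height):
    xy = np.rint(np.asarray(virtual_xy, dtype=float)).astype(int)
    xy[:, 0] = np.clip(xy[:, 0], 0, width - 1)
    xy[:, 1] = np.clip(xy[:, 1], 0, height - 1)
    return xy

print(to_real_pixels([[10.4, 3.7], [799.9, -0.2]], width=800, height=480))
```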


Block S140 can also apply any of the viewing position matrix, the tactile surface position matrix, the optical properties matrix, and/or the geometry matrix to generate a light level matrix, and Block S140 can implement the light level matrix to adjust the brightness of one or more regions of the image to compensate for internal reflection and/or other optical effects that reduce perceived light intensity across one or more regions of the image due to a change in a position of a deformable region and/or a shift in the user's viewing position relative to the digital display. However, Block S140 can function in any other way to modify the image to reduce, limit, compensate for, and/or substantially eliminate perceived optical distortion of an image rendered on the digital display.


In one implementation, Block S140 selects a transfer matrix from a set of stored transfer matrices, wherein each transfer matrix in the set of stored transfer matrices is based on empirical optical data. For example, empirical optical data can be generated by displaying an image (e.g., a black and white grid) on a display of a test dynamic tactile interface, transitioning one or more deformable regions of the test dynamic tactile interface between vertical elevations (e.g., fully expanded, partially expanded, fully retracted), moving a camera to various positions (e.g., different angles and distances) over the digital display, and capturing a photographic image of all or a portion of the digital display at each camera position and at various deformable region elevations. Each photographic image can then be compared to the displayed image (of known two-dimensional content) to generate a matrix approximating perceived optical distortion of an image rendered on the digital display as captured by a camera (like a human user) at a corresponding position over the digital display and for a known elevation of the deformable region(s). Each perceived optical distortion matrix can then be manipulated (e.g., inverted) to generate a transfer matrix corresponding to a particular user viewing position relative to the digital display and to an elevation of each deformable region. A set of such transfer matrices corresponding to various viewing positions and deformable region elevations can thus be uploaded to and stored on the dynamic tactile interface. Block S140 can thus receive a user viewing position from Block S120 and a deformable region elevation from Block S130, and Block S140 can select a particular transfer matrix—from the set of stored transfer matrices—that best matches the received user viewing position and the received deformable region elevation(s). Block S140 can then apply the selected transfer matrix to an image currently rendered on the digital display to generate a new image that compensates for optical distortion of the image by the dynamic tactile layer and/or that compensates for a shift in the user's viewing position relative to the digital display.
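

A best-match lookup of this kind can be as simple as a nearest-neighbor search over the stored calibration conditions; the sketch below uses a tiny hypothetical calibration set, placeholder matrices, and an assumed weight that trades off position error against elevation error.
```python
import numpy as np

# Sketch of selecting the stored transfer matrix whose recorded capture
# conditions best match the current viewing position and button elevation.

stored = [
    # (viewing position (x, y, z) in mm, button elevation in mm, transfer matrix)
    ((0.0,  0.0, 300.0), 0.0, np.eye(3)),
    ((0.0,  0.0, 300.0), 1.0, np.eye(3) * 1.02),
    ((80.0, 40.0, 250.0), 1.0, np.eye(3) * 1.05),
]

def select_transfer_matrix(view_pos, elevation, elevation_weight=50.0):
    view_pos = np.asarray(view_pos, dtype=float)
    def cost(entry):
        pos, elev, _ = entry
        # Weighted distance combining viewing-position and elevation mismatch.
        return np.linalg.norm(view_pos - np.asarray(pos)) + \
               elevation_weight * abs(elevation - elev)
    return min(stored, key=cost)[2]

print(select_transfer_matrix((70.0, 30.0, 260.0), 0.9))
```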


In the foregoing implementation, Block S140 can alternatively (linearly) interpolate a composite transfer matrix based on two or more stored transfer matrices corresponding to user viewing positions and/or deformable region elevations similar to (e.g., “near”) the current user viewing position as determined in Block S120 and the deformable region elevation(s) set in Block S130. Block S140 can thus apply a ‘stock’ transfer matrix or a composite (e.g., interpolated) transfer matrix—based on multiple stock transfer matrices—to an image currently rendered on the digital display to reduce perceived optical distortion of the image by the user. However, Block S140 can function in any other way and/or apply any other suitable technique or method to modify the image displayed on the digital display.
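

A linear blend between two bracketing stored matrices is one simple form such an interpolation could take; the sketch below blends by button elevation only, with placeholder matrices and elevations.
```python
import numpy as np

# Sketch of linearly interpolating a composite transfer matrix between two
# stored matrices bracketing the current button elevation.

def interpolate_transfer(elev, elev_lo, matrix_lo, elev_hi, matrix_hi):
    t = np.clip((elev - elev_lo) / (elev_hi - elev_lo), 0.0, 1.0)
    return (1.0 - t) * np.asarray(matrix_lo) + t * np.asarray(matrix_hi)

print(interpolate_transfer(0.6, 0.0, np.eye(3), 1.0, np.eye(3) * 1.02))
```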


7. Variations


In one variation of the method S100, Block S110 includes rendering the image on the digital display at a first time, Block S130 includes transitioning the deformable region into the expanded setting over a period of time succeeding the first time, the period of time terminating in a second time, and Block S140 includes cyclically refreshing an image rendered on the digital display according to a position of the deformable region over the peripheral region during the period of time. Generally, in this variation, the method S100 can cyclically detect the user's viewing position and update an image rendered on the digital display of the computing device to compensate for optical distortion of the image transmitted through the dynamic tactile layer, which may include one or more deformable regions in expanded settings (i.e., in positions other than flush with the peripheral region).


In one example, Block S140 predicts a first perceived parallax of each pixel in the digital display by the user at a first time for the deformable region in the retracted setting, predicts a second perceived parallax of each pixel in the digital display by the user at a second time succeeding the first time for the deformable region in the expanded setting, and modifies the image to compensate for a change in perceived parallax of each pixel of the digital display from the first perceived parallax to the second perceived parallax. In a similar example, Block S130 includes transitioning the deformable region into the expanded setting at a first time, Block S110 includes rendering the image on the digital display substantially at the first time, Block S120 includes estimating a first viewing position of the user relative to the digital display substantially at the first time and estimating a second viewing position of the user relative to the digital display different from the first viewing position at a second time succeeding the first time, and Block S140 includes refreshing the image rendered on the digital display according to a difference between the first viewing position and the second viewing position.


Blocks S120 and S140 can thus repeat this process cyclically, such as at a refresh rate of the digital display, at an image capture rate of a camera arranged within the computing device, or at another preset rate (e.g., 10 Hz), etc. For example, Block S140 can refresh a transfer matrix—implemented as described above to adjust an image rendered on the digital display and defining transformation of images rendered on the digital display based on an estimated position of the user relative to the digital display—at the refresh rate. In this example, Block S140 can estimate a first perceived parallax of the image at the first time based on a first set of viewing angles of the user to a set of discrete positions across the digital display, estimate a second perceived parallax of the image at the second time based on a second set of viewing angles of the user to the set of discrete positions across the digital display, and translate and scale a portion of the image to compensate for a difference between the first perceived parallax and the second perceived parallax. However, in this variation, Blocks S110, S120, S130, and S140 can repeat at any other rate and cooperate in any other way to adjust an image rendered on the digital display to compensate for a change in the user's viewing position and/or a position of one or more deformable regions within the dynamic tactile layer, which may otherwise yield perception—by the user—of optical distortion of the image rendered on the display.
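

The cyclic refresh reduces to a fixed-rate loop that re-estimates the viewing position, rebuilds the transfer matrix, and re-renders; in the sketch below the three helper functions are placeholders standing in for Blocks S120 and S140, and the 10 Hz rate is one of the example rates mentioned above.
```python
import time

# Sketch of a fixed-rate refresh loop (placeholder helpers, illustrative rate).

def estimate_viewing_position():            # placeholder for Block S120
    return (0.0, 0.0, 300.0)

def build_transfer_matrix(view_pos):        # placeholder for Block S140 setup
    return view_pos

def rerender(transfer_matrix):              # placeholder for the display update
    pass

def run_refresh_loop(rate_hz=10.0, cycles=3):
    period = 1.0 / rate_hz
    for _ in range(cycles):
        start = time.monotonic()
        rerender(build_transfer_matrix(estimate_viewing_position()))
        # Sleep out the remainder of the frame period, if any.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

run_refresh_loop()
```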


In another variation, the digital display includes in-pixel photodetectors configured to output signals corresponding to incident light on the digital display. In this variation, Block S140 can implement signals output from the in-pixel photodetectors to sense internal reflectance and/or other optical effects within the digital display and/or within the dynamic tactile interface. Block S140 can thus interface with the in-pixel photodetectors to measure local optical effects (e.g., internal reflection) within the digital display and/or within the dynamic tactile interface based on outputs of the in-pixel photodetectors, and Block S140 can calibrate an algorithm or any of the foregoing matrices based on these measured optical effects. For example, Block S140 can interface with the in-pixel photodetectors to calibrate a retracted setting algorithm and/or matrix defining internal reflection of light across the dynamic tactile layer-digital display assembly for a set of deformable regions in the retracted setting and to similarly calibrate an expanded setting algorithm and/or matrix defining internal reflection of light across the dynamic tactile layer-digital display assembly for a set of deformable regions in the expanded setting. In this example, Block S140 can then adjust a brightness across an image rendered on the digital display based on a difference between the expanded setting algorithm and/or matrix and the retracted setting algorithm and/or matrix as the set of deformable regions transitions from the retracted setting into the expanded setting. However, the method S100 can function in any other way to reduce perceived optical distortion of light output through a dynamic tactile interface.


The systems and methods of the embodiments can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application, applet, host, server, network, website, communication service, communication interface, native application, frame, iframe, hardware/firmware/software elements of a user computer or mobile device, or any suitable combination thereof, and with apparatuses and networks of the type described above. The instructions can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component can be a processor, though any suitable dedicated hardware device can (alternatively or additionally) execute the instructions.


As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.

Claims
  • 1. A method for reducing perceived optical distortion of light output through a dynamic tactile interface, the method comprising: rendering an image on a digital display coupled to a substrate opposite a tactile layer, the substrate and the tactile layer being substantially transparent, the tactile layer defining a tactile surface, a peripheral region, and a deformable region, the peripheral region adjacent the deformable region and coupled to the substrate opposite the tactile surface, the deformable region disconnected from the substrate and operable between a retracted setting and an expanded setting, the tactile surface at the deformable region being substantially flush with the tactile surface at the peripheral region in the retracted setting and offset above the tactile surface at the peripheral region in the expanded setting; estimating a viewing position of a user relative to the digital display; transitioning the deformable region from the retracted setting into the expanded setting; and modifying a portion of the image rendered on the digital display according to the estimated viewing position of the user and a profile of the tactile surface across the deformable region in the expanded setting, wherein estimating the viewing position of the user comprises capturing a photographic image of the user through a forward-facing camera arranged within a computing device coupled to the dynamic tactile interface, detecting eyes of the user within the photographic image, and generating a viewing position matrix defining an angle between the eyes of the user and the digital display along a first axis of the tactile layer and an angle between the eyes of the user and the digital display along a second axis of the tactile layer for each discrete position in a set of discrete positions across the tactile layer based on a position of the eyes within the photographic image; and modifying the portion of the image rendered on the digital display comprises selecting a geometry matrix defining a profile of the tactile surface for the deformable region in the expanded setting for each discrete position in the set of discrete positions, transforming the image rendered on the digital display according to the viewing position matrix and the geometry matrix to generate a second image, and rendering the second image on the digital display.
  • 2. The method of claim 1, wherein rendering the image on the digital display comprises rendering a first frame comprising an icon for an input key of a computing device coupled to the dynamic tactile interface, the icon substantially aligned with the deformable region in the retracted setting in the first frame; and wherein modifying the portion of the image comprises generating a second frame comprising a scaled and translated version of the icon and rendering the second frame on the digital display in response to transitioning the deformable region into the expanded setting.
  • 3. The method of claim 2, wherein modifying the portion of the image comprises shifting to black a set of pixels in the digital display, the set of pixels corresponding to an area of a first icon excluding an area of an intersection of the first icon and a second icon as rendered on the digital display.
  • 4. The method of claim 2, wherein modifying the portion of the image comprises non-uniformly reducing backlight intensity of a set of pixels in the digital display, the set of pixels corresponding to an area of a first icon excluding an area of an intersection of the first icon and a second icon as rendered on the digital display.
  • 5. The method of claim 2, wherein scaling the icon and translating the icon from the first frame to generate the second frame comprises setting a scaling value based on a vertical position of the deformable region in the expanded setting, setting a translation magnitude value and a translation direction based on a vertical position of the deformable region in the expanded setting and the estimated viewing position of the user, applying the scaling value uniformly across the icon, and translating the icon according to the translation magnitude value and the translation direction in the first frame to generate the second frame.
  • 6. The method of claim 1, wherein transitioning the deformable region from the retracted setting into the expanded setting comprises displacing a volume of fluid through a fluid channel to expand the deformable region, the fluid channel fluidly coupled to a cavity cooperatively defined by the substrate and the deformable region.
  • 7. The method of claim 6, wherein modifying the portion of the image rendered on the digital display comprises estimating a three-dimensional surface profile of the tactile layer across the deformable region in the expanded setting based on a magnitude of the volume of fluid, estimating a non-uniform magnification of the portion of the image by the deformable region in the expanded setting based on the estimated three-dimensional surface profile, and non-uniformly scaling the portion of the image rendered on the digital display to compensate for the non-uniform magnification of the portion of the image rendered on the digital display.
  • 8. The method of claim 7, wherein modifying the portion of the image rendered on the digital display comprises estimating a change in perceived position of the portion of the image by the user through the deformable region between the retracted setting and the expanded setting based on the estimated three-dimensional surface profile and the estimated viewing position of the user, calculating a translation magnitude value and a translation direction to compensate for the change in perceived position of the portion of the image by the user, and translating the portion of the image rendered on the digital display based on the translation magnitude value and the translation direction.
  • 9. The method of claim 1, wherein transforming the image rendered on the digital display comprises transforming the image into the second image further based on an index of refraction of the tactile layer and an index of refraction of a fluid arranged between the deformable region and the substrate.
  • 10. The method of claim 1, wherein transitioning the deformable region from the retracted setting into the expanded setting comprises transitioning a set of deformable regions from the retracted setting into the expanded setting; and wherein selecting the geometry matrix comprises generating the geometry matrix defining an effective thickness of the tactile layer across the peripheral region and the set of deformable regions in the expanded setting for each discrete position in the set of discrete positions.
  • 11. The method of claim 1, wherein generating the viewing position matrix comprises translating a position of the eyes within the photographic image to a viewing angle of the eyes to the forward-facing camera, translating a distance between the eyes within the photographic image to a viewing distance between the eyes and the forward-facing camera, and transforming the viewing angle and the viewing distance into an angle between the eyes of the user along the first axis and an angle between the eyes of the user along the second axis for each discrete position in the set of discrete positions across the tactile layer based on a location of each discrete position in the set of discrete positions across the tactile layer.
  • 12. The method of claim 1, wherein rendering the image on the digital display comprises rendering the image on the digital display at a first time; wherein transitioning the deformable region from the retracted setting into the expanded setting comprises transitioning the deformable region into the expanded setting over a period of time succeeding the first time, the period of time terminating in a second time; and wherein modifying the portion of the image rendered on the digital display comprises cyclically refreshing an image rendered on the display according to a position of the deformable region over the peripheral region during the period of time.
  • 13. The method of claim 12, wherein modifying the portion of the image comprises predicting a first perceived parallax of each pixel in the digital display by the user at the first time for the deformable region in the retracted setting, predicting a second perceived parallax of each pixel in the digital display by the user at the second time for the deformable region in the expanded setting, and modifying the image to compensate for a change in perceived parallax of each pixel of the digital display from the first perceived parallax to the second perceived parallax.
  • 14. The method of claim 1, wherein transitioning the deformable region from the retracted setting into the expanded setting comprises transitioning the deformable region into the expanded setting at a first time; wherein rendering the image on the digital display comprises rendering the image on the digital display substantially at the first time; wherein estimating the viewing position of the user relative to the digital display comprises estimating a first viewing position of the user relative to the digital display substantially at the first time and estimating a second viewing position of the user relative to the digital display different from the first viewing position at a second time succeeding the first time; and wherein modifying the portion of the image rendered on the digital display comprises refreshing the image rendered on the digital display according to a difference between the first viewing position and the second viewing position.
  • 15. The method of claim 14, wherein estimating the first viewing position of the user and the second viewing position of the user comprises estimating a new viewing position of the user relative to the digital display at a refresh rate, and wherein modifying the portion of the image rendered on the digital display comprises refreshing a transfer matrix at the refresh rate, the transfer matrix defining transformation of images rendered on the digital display based on an estimated position of the user to the digital display.
  • 16. A method for reducing perceived optical distortion of light output through a dynamic tactile interface, the dynamic tactile interface including a digital display coupled to a substrate opposite a tactile layer, the substrate and the tactile layer being substantially transparent, the tactile layer defining a tactile surface, a peripheral region, and a deformable region, the peripheral region adjacent the deformable region and coupled to the substrate opposite the tactile surface, the deformable region disconnected from the substrate, the method comprising: transitioning the deformable region from a retracted setting into an expanded setting, the tactile surface at the deformable region being substantially flush with the tactile surface at the peripheral region in the retracted setting and offset from the tactile surface at the peripheral region in the expanded setting; and at a first time, estimating a first viewing position of a user relative to the digital display; substantially at the first time, rendering an image on the digital display based on the first viewing position and a profile of the tactile surface across the deformable region in the expanded setting, the image comprising a portion rendered on the digital display adjacent the deformable region in the expanded setting; at a second time succeeding the first time, estimating a second viewing position of the user relative to the digital display; and modifying a position of the portion of the image rendered on the display adjacent the deformable region based on a difference between the first viewing position and the second viewing position, wherein estimating the first viewing position or estimating the second viewing position of the user comprises capturing a photographic image of the user through a camera arranged within a computing device coupled to the dynamic tactile interface, detecting eyes of the user within the photographic image, and generating a viewing position matrix defining an angle between the eyes of the user and the digital display along a first axis of the tactile layer and an angle between the eyes of the user and the digital display along a second axis of the tactile layer for each discrete position in a set of discrete positions across the tactile layer based on a position of the eyes within the photographic image; and modifying the portion of the image rendered on the digital display comprises selecting a geometry matrix defining a profile of the tactile surface for the deformable region in the expanded setting for each discrete position in the set of discrete positions, transforming the image rendered on the digital display according to the viewing position matrix and the geometry matrix to generate a second image, and rendering the second image on the digital display.
  • 17. The method of claim 16, wherein modifying the position of the portion of the image rendered on the display further comprises estimating a first perceived parallax of the image at the first time based on a first set of viewing angles, estimating a second perceived parallax of the image at the second time based on a second set of viewing angles of the user to the set of discrete positions across the digital display, and translating and scaling the portion of the image to compensate for a difference between the first perceived parallax and the second perceived parallax.
  • 18. The method of claim 16, wherein transitioning the deformable region from the retracted setting into the expanded setting comprises displacing a preset volume of fluid into a fluid channel fluidly coupled to a cavity cooperatively defined by the substrate and the deformable region to expand the deformable region to a target height offset above the peripheral region.
  • 19. A method for reducing perceived optical distortion of light output through a dynamic tactile interface, the method comprising: rendering an image on a digital display, the digital display coupled to a tactile layer defining a tactile surface, a peripheral region, and a deformable region; estimating a viewing position of a user relative to the digital display; transitioning the deformable region from a retracted setting into an expanded setting, the tactile surface at the deformable region substantially flush with the tactile surface at the peripheral region in the retracted setting and offset from the tactile surface at the peripheral region in the expanded setting; and in response to transitioning the deformable region into the expanded setting, modifying the image rendered on the digital display according to the estimated viewing position of the user to compensate for a surface profile of the tactile layer at the deformable region in the expanded setting, wherein estimating the viewing position of the user comprises capturing a photographic image of the user through a forward-facing camera arranged within a computing device coupled to the dynamic tactile interface, detecting eyes of the user within the photographic image, and generating a viewing position matrix defining an angle between the eyes of the user and the digital display along a first axis of the tactile layer and an angle between the eyes of the user and the digital display along a second axis of the tactile layer for each discrete position in a set of discrete positions across the tactile layer based on a position of the eyes within the photographic image; and modifying the image rendered on the digital display comprises selecting a geometry matrix defining a profile of the tactile surface for the deformable region in the expanded setting for each discrete position in the set of discrete positions, transforming the image rendered on the digital display according to the viewing position matrix and the geometry matrix to generate a second image, and rendering the second image on the digital display.
20110254789 Ciesla et al. Oct 2011 A1
20120032886 Ciesla et al. Feb 2012 A1
20120038583 Westhues et al. Feb 2012 A1
20120043191 Kessler et al. Feb 2012 A1
20120044277 Adachi Feb 2012 A1
20120056846 Zaliva Mar 2012 A1
20120062483 Ciesla et al. Mar 2012 A1
20120080302 Kim et al. Apr 2012 A1
20120098789 Ciesla et al. Apr 2012 A1
20120105333 Maschmeyer et al. May 2012 A1
20120120357 Jiroku May 2012 A1
20120154324 Wright et al. Jun 2012 A1
20120193211 Ciesla et al. Aug 2012 A1
20120200528 Ciesla et al. Aug 2012 A1
20120200529 Ciesla et al. Aug 2012 A1
20120206364 Ciesla et al. Aug 2012 A1
20120218213 Ciesla et al. Aug 2012 A1
20120218214 Ciesla et al. Aug 2012 A1
20120223914 Ciesla et al. Sep 2012 A1
20120235935 Ciesla et al. Sep 2012 A1
20120242607 Ciesla et al. Sep 2012 A1
20120306787 Ciesla et al. Dec 2012 A1
20130019207 Rothkopf et al. Jan 2013 A1
20130127790 Wassvik May 2013 A1
20130141118 Guard Jun 2013 A1
20130215035 Guard Aug 2013 A1
20130275888 Williamson et al. Oct 2013 A1
20140043291 Ciesla et al. Feb 2014 A1
20140132532 Yairi et al. May 2014 A1
20140160044 Yairi et al. Jun 2014 A1
20140160063 Yairi et al. Jun 2014 A1
20140160064 Yairi et al. Jun 2014 A1
20140176489 Park Jun 2014 A1
20150009150 Cho et al. Jan 2015 A1
20150015573 Burtzlaff Jan 2015 A1
20150091834 Johnson Apr 2015 A1
20150091870 Ciesla et al. Apr 2015 A1
20150138110 Yairi et al. May 2015 A1
20150145657 Levesque et al. May 2015 A1
20150205419 Calub et al. Jul 2015 A1
20150293591 Yairi et al. Oct 2015 A1
Foreign Referenced Citations (29)
Number Date Country
1260525 Jul 2000 CN
1530818 Sep 2004 CN
1882460 Dec 2006 CN
2000884 Dec 2008 EP
S63164122 Jul 1988 JP
10255106 Sep 1998 JP
H10255106 Sep 1998 JP
2006268068 Oct 2006 JP
2006285785 Oct 2006 JP
2009064357 Mar 2009 JP
2010039602 Feb 2010 JP
2010072743 Apr 2010 JP
2011508935 Mar 2011 JP
20000010511 Feb 2000 KR
100677624 Jan 2007 KR
2004028955 Apr 2004 WO
2006082020 Aug 2006 WO
2009002605 Dec 2008 WO
2009044027 Apr 2009 WO
2009067572 May 2009 WO
2009088985 Jul 2009 WO
2010077382 Jul 2010 WO
2010078596 Jul 2010 WO
2010078597 Jul 2010 WO
2011003113 Jan 2011 WO
2011087816 Jul 2011 WO
2011087817 Jul 2011 WO
2011112984 Sep 2011 WO
2011133604 Oct 2011 WO
Non-Patent Literature Citations (5)
Entry
Essilor, "Ophthalmic Optic Files Materials," Essilor International, Ser 145, Paris, France, Mar. 1997, pp. 1-29 [retrieved on Nov. 18, 2014]. Retrieved from the Internet: <http://www.essiloracademy.eu/sites/default/files/9.Materials.pdf>.
Lind, "Two Decades of Negative Thermal Expansion Research: Where Do We Stand?" Department of Chemistry, The University of Toledo, Materials 2012, 5, pp. 1125-1154; doi:10.3390/ma5061125, Jun. 20, 2012 [retrieved on Nov. 18, 2014]. Retrieved from the Internet: <https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=materials-05-01125.pdf>.
“Sharp Develops and Will Mass Produce New System LCD with Embedded Optical Sensors to Provide Input Capabilities Including Touch Screen and Scanner Functions,” Sharp Press Release, Aug. 31, 2007, 3 pages, downloaded from the Internet at: http://sharp-world.com/corporate/news/070831.html.
Jeong et al., "Tunable Microdoublet Lens Array," Optical Society of America, Optics Express, vol. 12, No. 11, May 31, 2004, 7 pages.
Preumont, A. Vibration Control of Active Structures: An Introduction, Jul. 2011.
Related Publications (1)
Number Date Country
20150177906 A1 Jun 2015 US
Provisional Applications (1)
Number Date Country
61841176 Jun 2013 US