People are utilizing electronic devices, particularly portable electronic devices, for an increasing number and variety of tasks. In many instances, these electronic devices provide increasingly realistic images and video, and in some instances even present three-dimensional views. Often, however, the realism of the generated or displayed image is limited by the information available to the device. For example, a device might render an image using a predetermined lighting approach with shadowing performed from a specific angle. Other devices or applications might render graphical information using a first lighting approach when that image is viewed during daylight hours at a current location of the device and a second lighting approach when that image is instead viewed during night-time hours. Such lighting approaches do not, however, take into account the actual lighting around the device. If the device is attempting to display a realistic image in the present location, for example, the device does not properly light and/or shade the image based on ambient lighting conditions. For example, in known systems, where the device is capturing an image of a person and wants to overlay a digital outfit or other such image information over the person's image, the overlay will likely not blend well with the image of the person. This can be caused by the rendered image and overlay not representing the actual lighting around the device.
Various embodiments in accordance with the present disclosure will be described with reference to the drawings, in which:
FIGS. 3(a), 3(b) and 3(c) illustrate example approaches of providing occlusions with an imaging sensor to determine lighting directions in accordance with various embodiments;
FIGS. 4(a) and 4(b) illustrate an example approach of determining a direction from which to light and/or shade a rendered object that can be used in accordance with various embodiments;
FIGS. 5(a) and 5(b) illustrate an example rendering including appropriate shading that can be used in accordance with various embodiments;
Systems and methods in accordance with various embodiments of the present disclosure overcome one or more of the above-referenced and other deficiencies in conventional approaches of processing and/or displaying graphical content on an electronic device. In particular, various embodiments provide for the determination of a relative position of at least one light source detectable by an electronic device. By determining the relative position of the light source, the electronic device (or another device, service or process) can render or otherwise process graphical elements based at least in part upon the lighting and/or shading that would result from a light source at that location. Images or other graphical elements displayed on a device can be enhanced with virtual shadows, for example, that are rendered according to the determined location of the ambient (or other) light surrounding the device.
In one example, an electronic device might capture an image of at least one object within a viewable area of a camera of the device. In rendering the object for display on the electronic device, a determined position of a light source emitting light onto the electronic device can be used to properly light and/or shade the graphical element such that the displayed image appears more realistic to the user, as the object lighting and/or shading is virtually the same as if the object was being illuminated by the actual light source at the determined location. For example, if an image of a ball was captured and rendered on a user's tablet computer where the light source was determined to be on the left side of the ball, the present system would render the ball to include more light on the left side of the ball and more shading or shadow on the right side of the ball. Similarly, if a user or application of the electronic device attempts to overlay a graphical element on the object image to appear to be a part of the original image, the overlay would also include the proper lighting and/or shading as that graphical element would be lighted by the light source. If the object is not a captured image but instead a rendered image, such as an element of a video game or media file, the position of the light source can be used to light and/or shade the object such that the object appears more realistic. Various other applications and services can utilize the determined position of a light source for other purposes as well, as discussed and suggested elsewhere herein.
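By way of illustration only, the following sketch shows one way such a diffuse ("Lambertian") shading step might be implemented once a light direction has been determined; the function and parameter names are illustrative and not part of any particular embodiment.

```python
import numpy as np

def lambert_shade(base_color, surface_normal, light_dir, ambient=0.2):
    """Shade a surface point with simple Lambertian (diffuse) lighting.

    base_color     -- RGB color of the surface, values in [0, 1]
    surface_normal -- normal of the surface at the point being shaded
    light_dir      -- vector pointing from the point toward the light source
    ambient        -- fraction of the base color kept even in full shadow
    """
    n = np.asarray(surface_normal, float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    diffuse = max(float(np.dot(n, l)), 0.0)       # facets facing away receive no direct light
    intensity = ambient + (1.0 - ambient) * diffuse
    return np.clip(np.asarray(base_color, float) * intensity, 0.0, 1.0)

# With the light determined to be on the left, the left-facing side of a ball is
# rendered brighter than the right-facing side.
light = np.array([-1.0, 0.0, 0.0])
print(lambert_shade([0.8, 0.2, 0.2], [-1.0, 0.0, 0.0], light))   # lit (left) side
print(lambert_shade([0.8, 0.2, 0.2], [1.0, 0.0, 0.0], light))    # shaded (right) side
```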
In at least some embodiments, an occlusion (or obfuscation) is utilized with a sensor in order to generate a detectable shadow. The occlusion can comprise, for example, an elongated bar, a paint marker, a plastic disc, a printed symbol or any other such element that can be positioned relative to a light sensor or other imaging object. As described below in more detail, by knowing the relative position and/or separation of the occlusion with respect to the sensor, a vector calculation or other such process can be used to determine the approximate direction from which the light source is projecting (referred to herein as “projection direction”) based on the position of the shadow cast by the occlusion on the sensor.
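As a minimal sketch of such a vector calculation (assuming a planar sensor, a known occlusion position and separation, and a measured shadow centroid), the projection direction can be estimated as the normalized vector running from the shadow position back through the occlusion:

```python
import numpy as np

def light_direction_from_shadow(occlusion_xy, occlusion_height, shadow_xy):
    """Estimate the unit vector pointing from the sensor toward the light source.

    occlusion_xy     -- (x, y) position of the occlusion projected onto the sensor plane
    occlusion_height -- separation between the occlusion and the sensor surface
    shadow_xy        -- (x, y) centroid of the shadow cast by the occlusion on the sensor

    The light ray passes through the occlusion and lands at the shadow position, so
    reversing that ray (shadow -> occlusion, extended upward) points toward the source.
    """
    occlusion = np.array([occlusion_xy[0], occlusion_xy[1], occlusion_height], float)
    shadow = np.array([shadow_xy[0], shadow_xy[1], 0.0], float)
    direction = occlusion - shadow
    return direction / np.linalg.norm(direction)

# A shadow displaced to the +x side of the occlusion implies light arriving from the -x side.
print(light_direction_from_shadow((0.0, 0.0), 2.0, (1.5, 0.0)))
```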
If the electronic device has at least two sensors or imaging elements each capable of making such a projection direction determination, the information from the multiple sensors can be utilized to determine a position of the light source in relation to the electronic device in three dimensions, such that a distance, as well as a relative projection direction, of each light source can be determined. Such an approach enables a three-dimensional lighting model to be developed which can be used to render graphical elements. In many cases, the object or element being rendered or processed by the electronic device will be at some distance from the actual device (either physically or virtually). By knowing the position of the object relative to the light source in three dimensions, the object rendered by the electronic device can be illuminated and/or shaded based on the projection direction of the light source relative to the object itself, and not necessarily based on the projection direction of the light source relative to the electronic device.
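One simple way to combine two such projection-direction estimates is to treat each as a ray leaving its sensor and take the midpoint of the shortest segment connecting the two rays; the sketch below assumes the sensor positions are known in a common device coordinate frame.

```python
import numpy as np

def triangulate_light(p1, d1, p2, d2):
    """Estimate a light source position from two sensors' projection directions.

    p1, p2 -- positions of the two sensors in the device coordinate frame
    d1, d2 -- direction vectors from each sensor toward the light (need not be unit length)

    Returns the midpoint of the shortest segment connecting the two rays, a reasonable
    estimate when measurement noise keeps the rays from intersecting exactly, or None
    when the rays are effectively parallel (a very distant source).
    """
    p1, d1, p2, d2 = (np.asarray(v, float) for v in (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Two sensors 0.1 units apart both see a light up and in front of the device.
print(triangulate_light([0, 0, 0], [0, 0.6, 0.8], [0.1, 0, 0], [-0.1, 0.6, 0.8]))  # ~[0, 0.6, 0.8]
```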
Various other approaches can be used to determine the relative projection direction of a light source in accordance with other embodiments. For example, a device can utilize a number of different light paths to obtain intensity information from various directions. By analyzing the relative intensity from each direction, the device can generate a three-dimensional lighting model, or at least determine the approximate direction of at least one light source. The paths can be provided using any appropriate element, such as optical fibers or transmissive apertures as described below.
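A coarse way to turn such per-path intensity readings into a direction estimate is an intensity-weighted average of the paths' known viewing directions; the sketch below assumes each path's outward direction has been characterized in advance.

```python
import numpy as np

def dominant_light_direction(path_directions, intensities):
    """Estimate the dominant light direction from a set of discrete light paths.

    path_directions -- one unit vector per optical path (e.g., per fiber or aperture)
                       giving the direction that path looks out from the device
    intensities     -- measured light intensity arriving along each path
    """
    dirs = np.asarray(path_directions, float)
    w = np.asarray(intensities, float)
    combined = (dirs * w[:, None]).sum(axis=0)     # weight each direction by its intensity
    norm = np.linalg.norm(combined)
    return combined / norm if norm > 0 else None

# Three paths looking up, left and right; the left-facing path sees the most light,
# so the estimate leans strongly to the left.
paths = [[0, 0, 1], [-1, 0, 0], [1, 0, 0]]
print(dominant_light_direction(paths, [0.3, 0.9, 0.1]))
```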
In addition to determining the projection direction of one or more light sources, various approaches may also be used to determine a type of light source projecting light. For example, penumbral blur of a shadow cast on the sensor by an occlusion can be used to determine whether the light source is a point light source, such as a light emitting diode (LED), or a non-point light source, such as the sun. Penumbral blurring is primarily a function of two variables: the angular extent of the light source, and the distance between the casting object (in this description, the occlusion) and the surface on which the shadow is cast. Penumbral blur increases as the light source is made larger or the occlusion object is moved away from the surface. By determining the penumbral blur of the occlusion shadow cast on the sensor, a similar blurring can be applied to shadows rendered by various embodiments described herein.
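Under a small-angle approximation, the penumbra width is roughly the occlusion-to-sensor separation multiplied by the angular extent of the source, so a measured blur can be inverted to characterize the source; the classification threshold below is purely illustrative.

```python
import math

def source_angular_extent(penumbra_width, occlusion_distance):
    """Infer the angular extent (radians) of a light source from measured penumbral blur.

    penumbra_width     -- width of the blurred shadow edge measured on the sensor
    occlusion_distance -- separation between the occlusion and the sensor surface
    """
    return math.atan2(penumbra_width, occlusion_distance)

def classify_source(penumbra_width, occlusion_distance, point_threshold_deg=1.0):
    """Label a source as 'point' (e.g., an LED) or 'extended' by its apparent angular size."""
    extent_deg = math.degrees(source_angular_extent(penumbra_width, occlusion_distance))
    return "point" if extent_deg < point_threshold_deg else "extended"

# A sharp shadow edge implies a small, point-like source; a soft edge implies an
# extended source, and a similar blur can then be applied to rendered shadows.
print(classify_source(penumbra_width=0.002, occlusion_distance=2.0))   # -> 'point'
print(classify_source(penumbra_width=0.050, occlusion_distance=2.0))   # -> 'extended'
```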
The ability to determine positions and types of various light sources relative to objects rendered by the electronic device can also assist with other applications. For example, shadows can be removed from images that are captured by a device. The ability to remove shadowing can be used to improve image quality as well as assist with processes such as facial recognition and image analysis.
Various other applications, processes and uses are presented below with respect to the various embodiments.
The electronic device 100 can have a number of other input mechanisms, such as at least one front image capture element 104 positioned on the front of the device and at least one back image capture element (not shown) positioned on the back of the device such that, with sufficient lenses and/or optics, the user device 100 is able to capture image information in substantially any direction about the computing device. The electronic device 100 can also include at least one microphone 106 or other audio capture device capable of capturing audio data, such as words spoken by a user of the device. The example device also includes at least one position and/or orientation determining element 108. Such an element can include, for example, an accelerometer or gyroscope operable to detect an orientation and/or change in orientation of the user device 100. An orientation determining element also can include an electronic or digital compass, which can indicate a direction (e.g., north or south) in which the device is determined to be pointing (e.g., with respect to a primary axis or other such aspect). A location determining element also can include or comprise a global positioning system (GPS) or similar positioning element operable to determine relative coordinates for a position of the computing device. Various embodiments can include one or more such elements in any appropriate combination. As should be understood, the algorithms or mechanisms used for determining relative position and/or orientation can depend at least in part upon the selection of elements available to the device.
In the example of
In some embodiments, a device is able to use at least one camera 210 to determine the approximate location or projection direction of a primary light source, such as by performing image analysis on at least one captured image. In order to improve the realism of the shading, the device can utilize at least a second camera 212 and also determine the relative location/position of the light source from a second position. By combining the position/location of the light source as determined by the first camera 210 and second camera 212, the device 200 may be used to determine a three-dimensional location of the light source 204. As will be discussed later herein, the ability to determine a position of a light source relative to the device in three dimensions enables lighting, shading and glint to be applied properly to an image where an object in the image appears to be located at a distance from the device, such that the lighting might be different on the object than on the device.
There can be various problems or disadvantages, however, to attempting to determine light position using standard image analysis. For example, a sensor might be oversaturated when capturing an image that includes the light source, which prevents the sensor from capturing an accurate image. Further, the captured image might show multiple white regions, only one of which corresponds to an actual light source, and it can be difficult for the device to distinguish between, or properly interpret, those regions. Further, a light source that illuminates a device and/or imaged object might not actually be captured in the field of view of a camera at the current orientation. As described herein, various embodiments overcome these identified problems and disadvantages.
FIGS. 3(a)-3(c) illustrate examples of sensors and/or sensor assemblies 300 that can be used in accordance with various embodiments to determine the approximate direction and/or location of one or more light sources around a device. In this example, a sensor 304 is able to capture light using, for example, an array of pixels. The sensor can capture intensity, color and/or other such aspects of visible light or other radiation (e.g., infrared radiation) incident on, or otherwise directed to, the sensor. In one embodiment, the sensor 304 can be positioned relative to a lens element 302, which can be a focusing lens, glass plate, transparent plastic disc or other such element capable of transmitting light while protecting the sensor from scratches, debris, or other potential damage that could result from the sensor being otherwise exposed. As mentioned above, an electronic device can have one or more such sensors that are each capable of capturing light from at least one direction, or range of directions, around the device. The sensor 300 in
In
The position of the shadow can be determined using any appropriate image or intensity analysis algorithm. For example, an algorithm can be executed on an image captured using the sensor, wherein the algorithm attempts to locate drops or reductions in intensity level over a region corresponding in size, shape and/or position to an occlusion creating the shadow, here the elongated element. In some embodiments, the algorithm can begin analyzing the image at the location of the elongated element in an attempt to more quickly determine the direction and thus reduce the amount of the image that must be processed. In some embodiments, only a portion of the shadow is analyzed until the direction can be determined with a reasonable amount of certainty. In some embodiments, a shadow direction is only determined/registered when there is a minimum level of intensity variation corresponding to the shadow. Various other determinations can be utilized as well within the scope of the various embodiments.
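A brute-force version of such an analysis, adequate for a small sensor, scans for the occlusion-sized window with the lowest mean intensity and rejects the result if the drop relative to the rest of the image is too small; the sizes and threshold below are illustrative.

```python
import numpy as np

def locate_shadow(intensity, occlusion_shape, min_drop=0.3):
    """Find the most likely shadow position in a captured intensity image.

    intensity       -- 2-D array of pixel intensities from the sensor
    occlusion_shape -- (rows, cols) size of the shadow the occlusion is expected to cast
    min_drop        -- minimum fractional intensity drop (vs. the image mean) required
                       before a shadow is registered at all

    Returns the (row, col) of the darkest occlusion-sized window, or None if no region
    is dark enough to be treated as a shadow.
    """
    h, w = occlusion_shape
    rows, cols = intensity.shape
    best_pos, best_mean = None, None
    for r in range(rows - h + 1):                   # brute-force sliding window
        for c in range(cols - w + 1):
            m = intensity[r:r + h, c:c + w].mean()
            if best_mean is None or m < best_mean:
                best_mean, best_pos = m, (r, c)
    if best_mean is None or best_mean > (1.0 - min_drop) * intensity.mean():
        return None                                 # intensity variation too small to register
    return best_pos

# Synthetic frame: uniform illumination except for a dark 3x3 patch cast by the occlusion.
frame = np.full((12, 12), 200.0)
frame[4:7, 6:9] = 40.0
print(locate_shadow(frame, (3, 3)))                 # -> (4, 6)
```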
FIG. 3(b) illustrates the example sensor assembly 300 with a different type of occlusion. In this example, a single “dot” 310 or other such feature or marker is painted on, adhered to, activated, embedded in or otherwise positioned relative to the sensor assembly. In this example, the marker is positioned at a distance from the sensor 304 at a fixed position approximately co-planar with the upper protective surface 302. The marker can be made of any appropriate material, such as paint or a sticker attached to, or printed on, the sensor assembly or a plastic or other such member attached to, or formed in, the sensor assembly 300. In other embodiments, the marker 310 may be selectively activatable, such that the device can cause it to become at least partially opaque and cast a shadow on the sensor. When not activated, the marker 310 may be transparent. In still further embodiments, the marker 310 may be movable such that it can be activated at one or more selected positions on the protective surface 302 with respect to the sensor 304. Further, although illustrated as a rounded disc or hemispherical element, it should be understood that the marker can have any appropriate shape, such as may help to more easily determine a location of a shadow formed by the marker. Similar to the elongated member 306 in
There are certain advantages to using a smaller occlusion. For example, the reduced size of the occlusion also reduces the amount of the view of the sensor (e.g., camera) that is potentially blocked by the occlusion. Further, a point occlusion can give a more accurate indication of the direction of the light source, as an elongated member will cast an elongated shadow rather than a quasi-point shadow. A benefit to an elongated member, however, is that light from an oblique angle can still cast a shadow on the sensor even if the shadow from the end of the member falls outside the region of the sensor. For a dot-like occlusion, light at oblique angles can cause the shadow to fall outside the area of the sensor, which then can prevent calculation of the light direction based on the shadow.
As illustrated in
FIG. 3(c) illustrates an example wherein there are multiple occlusions 320, 322, 324, 325 positioned relative to the sensor 304. In order to avoid blocking a portion of the field of view of the sensor with an occlusion, approaches in accordance with various embodiments position the occlusion(s) outside the field of view, such as near a periphery of the assembly 300. A downside to moving an occlusion towards an edge, however, is that the effective detection range for a light source is shifted to one side, such that some information will be lost for more oblique angles. An approach illustrated in
If a single shadow falls on the sensor, however, it can be difficult to determine, in at least some situations, which occlusion corresponds to the shadow. For example, in an arrangement such as that in
If the sensor is large enough and/or the resolution high enough, the use of multiple occlusions can also help to calculate the direction of the light source in three dimensions. For example, the ray from a point light source will be incident at each occlusion at a slightly different angle. If the sensor assembly is able to detect this difference, the device can determine the distance to the light source in addition to the direction in two dimensions, as determined using the planar sensor. In some embodiments, a single moving occlusion can be used that is only in the field of view when the sensor is being used for light detection, for example, and the change in angle of the shadow with respect to the occlusion as the occlusion moves across the sensor, or appears at different locations, can be used to determine distance to the light source.
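For two occlusions held at the same height, the slightly different shadow offsets can be converted into a source position by intersecting the two shadow-to-occlusion rays; the planar sketch below assumes the occlusions and their shadows lie along a single axis of the sensor.

```python
import numpy as np

def light_position_from_two_shadows(occ1_x, occ2_x, height, shadow1_x, shadow2_x):
    """Estimate the light position (x, z) above the sensor from two occlusion shadows.

    occ1_x, occ2_x       -- x positions of two occlusions held at the same height
    height               -- separation between the occlusions and the sensor surface
    shadow1_x, shadow2_x -- x positions of the shadows they cast on the sensor

    Each shadow/occlusion pair defines a ray toward the source; a nearby source makes
    the rays converge, and their intersection gives its position. Parallel rays (a very
    distant source) yield no finite intersection and return None.
    """
    d1 = np.array([occ1_x - shadow1_x, height], float)    # ray direction from shadow 1
    d2 = np.array([occ2_x - shadow2_x, height], float)    # ray direction from shadow 2
    a = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])      # solve shadow1 + t*d1 == shadow2 + u*d2
    b = np.array([shadow2_x - shadow1_x, 0.0])
    if abs(np.linalg.det(a)) < 1e-9:
        return None
    t, _ = np.linalg.solve(a, b)
    return np.array([shadow1_x, 0.0]) + t * d1

# Occlusions at x = -5 and x = +5, held 2 units above the sensor, cast shadows at
# -5.2 and +5.2: the rays meet at a source roughly 52 units above the sensor's center.
print(light_position_from_two_shadows(-5.0, 5.0, 2.0, -5.2, 5.2))   # -> [ 0. 52.]
```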
In some embodiments, the number of occlusions is increased, or spacing between occlusions adjusted, such that different shapes are not needed. For example, if the occlusions are in a ring-based orientation about the periphery, then the shadows that are cast will form a portion of a ring that can be used to determine which occlusions are forming the shadows and thus the direction of the light. In some embodiments, a single ring can be used about a periphery of the assembly that will form a portion of a ring-shaped shadow on the sensor over a range of angles, such that the direction of the light source can be determined without blocking the field of view or requiring more complex image analysis for pattern matching or other such aspects.
In approaches such as those illustrated in
In embodiments where the occlusion is selectively activated, the sensor may take two successive images, one with the occlusion and one without. The image with the occlusion can be used to determine the projection direction of the light source while the image without the occlusion can be used for rendering. In other embodiments, the occlusion may not be completely opaque, thereby improving the ability to reconstruct an image as the cast shadow also includes some information from the image itself. In such an embodiment, the opacity of the occlusion may be altered to determine the relative intensity of the light source, in addition to the projection direction. In other embodiments, the occlusion may be opaque only to one or more colors, frequencies, intensities and/or portions of the spectrum. For example, a filter may be utilized as the occlusion such that it is opaque to blue light so that light information from an object in the green and red channels passes through the occlusion to the sensor. The shadow cast in the blue channel can be used to determine the projection direction while the information in the other channels can be used to render the image. In addition, by utilizing information in adjacent pixels, the blue channel can be reconstructed for rendering. Various other approaches can be used as well within the scope of the various embodiments. It will be appreciated that occlusions may also be generated using any type of filter and not just a color filter. Filters in any light spectrum may be utilized to generate an occlusion that can be used to determine a projection direction of a light source.
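One crude way to reconstruct the filtered channel, assuming the pass-through channels remain locally correlated with the blocked one, is to scale an unfiltered channel inside the shadow by the color ratio observed in a ring just outside it; the sketch below is only one of many possible approaches.

```python
import numpy as np

def reconstruct_blue_channel(image, shadow_mask, pad=2):
    """Rebuild blue-channel values under the shadow of a blue-blocking filter occlusion.

    image       -- H x W x 3 float array (R, G, B); red and green pass through the filter,
                   blue is attenuated inside the occlusion's shadow
    shadow_mask -- H x W boolean array (non-empty) marking pixels covered by the blue shadow
    pad         -- how far, in pixels, around the shadow to sample for the estimate
    """
    image = image.astype(float).copy()
    ys, xs = np.where(shadow_mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, image.shape[1])
    ring = np.zeros_like(shadow_mask)
    ring[y0:y1, x0:x1] = True
    ring &= ~shadow_mask                                     # border region with unattenuated blue
    ratio = image[ring, 2].mean() / max(image[ring, 1].mean(), 1e-6)
    image[shadow_mask, 2] = image[shadow_mask, 1] * ratio    # estimate blue from green
    return image
```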
As mentioned, it can be desirable in at least some embodiments to utilize an occlusion that gives three-dimensional information regarding a position of a light source. As illustrated in the situation 400 of
As discussed, the ability to render images with realistic lighting and shading can be desirable in a number of different situations. In the example of
Further, graphical elements or overlays added to captured (or rendered) images or video can be improved by ensuring that those additions match the lighting of the image or video. For example,
In one application, a user might have the ability to overlay graphical content on the captured (or rendered) image. In this example, as illustrated in
In addition to shading based on the determined projection direction of the light source, embodiments may also generate glint on the rendered objects based on that direction. In these embodiments, the device may also determine the type of object onto which glint is applied. For example, the device may determine if the object is a human eye, glass, metal, etc. and apply an appropriate level and representation of glint to the object. As illustrated in
At a time prior to the example process 600, an occlusion is positioned relative to a sensor and an orientation between the occlusion and the sensor can be determined. As discussed, this can include any appropriate type of occlusion positioned with respect to a sensor such that over a range of incident light, the occlusion will cast a shadow on at least a portion of the sensor. Positioning and orientation can include, for example, determining the location on the sensor corresponding to the center point of the occlusion based on an imaginary line placed orthogonal to the primary plane of the sensor, as well as the separation between the occlusion and the sensor along that line. This information then can be stored in an appropriate location, such as to permanent storage on the electronic device associated with the sensor.
During operation of the device, as illustrated in the example process 600, a request can be received to render image information using a display element of the device 602. This image information can include, for example, graphical content corresponding to a two- or three-dimensional model to be added to, overlaid or rendered as part of an image. At around the time in which the image information is to be rendered, the sensor can attempt to capture lighting information 604, including light incident on the device from at least one light source. This may include activating the occlusion such that its shadow will be cast on the sensor. The device can analyze the captured lighting information to attempt to locate at least one shadow in the captured light information and determine a position where that shadow was cast on the capturing sensor by the occlusion 606. Once determined, the shadow position and the relative position of the occlusion to the sensor can be used to calculate and/or determine the approximate projection direction, type of light source (e.g., point or non-point) and/or position of the light source responsible for the shadow 608. Based on the determined projection direction of the light source, a lighting and/or shading process can be applied to the image content to be rendered such that the content appears to the user as if the content was being lit by the determined light source 610. As part of the lighting or shading process, the appropriate penumbral blur may be applied such that the lighting and shading added by the process 600 matches that of other lighting or shading included in the image. In other examples, glint may also be applied as part of the lighting or shading process 600 to further increase the realistic nature of the rendered image.
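The steps of this example process can be compressed into a short, self-contained sketch: capture (here, synthesize) a lighting frame containing the occlusion's shadow, locate the shadow, recover the projection direction and use that direction to shade rendered content. All sizes and values below are illustrative.

```python
import numpy as np

OCC_XY, OCC_HEIGHT = (6.0, 6.0), 2.0    # occlusion centered over a 12x12 sensor, 2 units above it

# Step 1: lighting information captured with the occlusion active (shadow displaced toward +x).
frame = np.full((12, 12), 200.0)
frame[5:8, 8:11] = 40.0

# Step 2: locate the shadow as the darkest occlusion-sized (3x3) window.
best, shadow_rc = None, None
for r in range(10):
    for c in range(10):
        m = frame[r:r + 3, c:c + 3].mean()
        if best is None or m < best:
            best, shadow_rc = m, (r, c)
shadow_center = (shadow_rc[1] + 1.0, shadow_rc[0] + 1.0)          # (x, y) of shadow center

# Step 3: projection direction runs from the shadow center up through the occlusion.
d = np.array([OCC_XY[0] - shadow_center[0], OCC_XY[1] - shadow_center[1], OCC_HEIGHT])
light_dir = d / np.linalg.norm(d)

# Step 4: shade a viewer-facing surface of the rendered content using that direction.
normal = np.array([0.0, 0.0, 1.0])
diffuse = max(float(np.dot(normal, light_dir)), 0.0)
print("light direction:", np.round(light_dir, 3), " diffuse factor:", round(diffuse, 3))
```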
As mentioned above, it can be desirable in at least some embodiments to attempt to determine the actual location of the light source relative to the device, as opposed to determining only the projection direction of the light source. For example, consider the situation wherein light from a source is incident on a device along a given direction, and the projection direction of the light source can be used to illuminate some aspect of the object to be rendered or otherwise displayed on a display element of the device. In such an example, the lighting, shading and/or glint effect on the object might be significantly different for a person viewing the device head on in comparison to a person viewing the device from another location as their perspective of the object is different at each position. Thus, simply determining the relative projection direction of a light source may not be sufficient for lighting in all circumstances. Similarly, only determining the relative projection direction of the light source from the device may not be sufficient, as it may make the object appear as if the object were lighted along a fixed direction, which would be significantly different than if the object were lighted from the actual relative position.
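A standard way to model such view-dependent glint (not specific to any particular embodiment) is a Blinn-Phong specular term, which depends on both the determined light direction and the viewer's direction, with a material-dependent shininess value controlling how tight the highlight is for eyes, glass or metal versus matte surfaces.

```python
import numpy as np

def glint_intensity(normal, light_dir, view_dir, shininess=64.0):
    """Compute a simple Blinn-Phong specular ("glint") term for one surface point.

    normal    -- unit surface normal at the point (e.g., on an eye or a metal surface)
    light_dir -- unit vector from the point toward the determined light source
    view_dir  -- unit vector from the point toward the viewer/camera
    shininess -- higher values give a smaller, sharper highlight (glass, metal, eyes);
                 lower values give a broader one (matte materials)
    """
    n, l, v = (np.asarray(x, float) for x in (normal, light_dir, view_dir))
    h = l + v
    h = h / np.linalg.norm(h)                  # half-vector between light and view directions
    return max(float(np.dot(n, h)), 0.0) ** shininess

# A surface whose normal bisects the light and view directions glints at full strength;
# moving the viewer changes the highlight even though the light has not moved.
print(glint_intensity([0, 0, 1], [0.707, 0, 0.707], [-0.707, 0, 0.707]))   # -> 1.0
print(glint_intensity([0, 0, 1], [0.707, 0, 0.707], [0, 0, 1]))            # much dimmer highlight
```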
In some embodiments, the occlusion can be turned on and off based upon the current mode of operation, orientation of the device or other such aspects. For example, if a camera is being used to take a picture, the occlusion might be turned off. A camera on the other side used to determine lighting and shadows, however, might have the occlusion turned on. In other examples, the occlusion may be activated when an accelerometer or orientation element detects that the object has moved a predetermined amount such that the projection direction of the surrounding light source(s) should be re-determined. In a gaming mode where everything is being rendered by the device, the occlusion might be activated for each sensor (e.g., camera). If the device is overlaying graphics over a captured video, the camera capturing the video might have the occlusion deactivated, while at least one other camera used for capturing lighting information might have the occlusion activated. While some occlusions might be activated by moving parts or other mechanical approaches, in some embodiments a number of pixels might be activated, such as in an electronic-ink type display, in order to provide an occlusion when needed. Various other such approaches can be utilized as well within the scope of the various embodiments.
In some embodiments, an occlusion might not be needed to determine the shadow direction. For example, a device can have a button or indentation (e.g., a speaker area or recessed input) that will provide some indication of the projection direction of the incoming light based upon the shadows created by those features. If the device has one or more cameras (e.g., with wide-angle or fisheye lenses) that are able to image such a feature, the device can utilize those inputs to attempt to determine the projection direction of at least a primary light source. In some devices, a camera can attempt to analyze the shadow on a lip, edge or other such area around the periphery of the camera lens to attempt to detect a projection direction of incident light. Various other such components can be utilized as well for such purposes.
As discussed, determining a projection direction or relative position of at least one light source can help to more accurately render any of a number of different types of graphical elements displayed or otherwise presented by an electronic device. As described above, such an approach can enable a game or other rendered video presentation to appear to be more realistic, as the lighting or shading of the graphical elements can match the lighting or shading that would result if the element were actually a physical element in the vicinity of the user or device, for example. Similarly, if a user is capturing video of objects (e.g., people) near the device, the device can overlay graphical elements on top of the image of those objects with a similar shading, blur and glint, such that the elements will appear as if they are actually on, or a part of, the objects being captured.
In at least some embodiments, the user will be able to rotate or tilt the device, and the rendered image, including shadows and/or lighting, will adjust accordingly. For example, in the maze example of
In the situation where a user is viewing video captured by another device, such as may be connected over a network such as the Internet, the other device might determine and communicate relative lighting information such that any image captured by that other device and transmitted to a user device can have graphical information overlaid that can be lighted or shaded according to the light surrounding the other device, as opposed to the user device. Such an approach can enable the user device to overlay graphical elements over video from remote sources that are shaded according to the lighting near that remote source (so the overlay shading matches the captured video). Similarly, if video was captured at a time in the past, that video could have lighting information stored along with it, or at least associated with the video file, such that at a subsequent point the user device can add graphical elements that are shaded accordingly. For example, an application might allow a user to change the costume on a television character. If the lighting information for that character in an episode was determined and saved, any of a number of different users at different times could change costumes or other such elements that then could be shaded to match the conditions in which the episode was filmed. In another example, where the images were captured in the morning at a tourist site and the user visited the site in the afternoon, the images displayed to that user would be rendered to reflect the position of the sun in the afternoon.
In a navigational or mapping application, for example, the ability to shade an image based on current conditions can also improve the realism of the image. For example, an application might be able to approximate a relative position of the sun to a certain location, which can be used to render a three-dimensional view of that location with appropriate lighting based on time of day, day of the month, etc. Such an approach, however, will not be able to compensate for changes such as cloudiness, other light sources, etc. For example, a mapping application might overlay information over a building being viewed by the device. In order to properly shade the image of the building, it can be desirable to adjust for the amount of light actually being received from the sun in the current direction. Further, there could be other light sources such as spotlights or stadium lighting that can significantly affect the appearance of the building, which can be captured by the device. In some cases, information such as compass and GPS information can be used to assist in the lighting determinations, in order to obtain a primary direction of the sun at the current place, time and direction even if the sun is blocked by clouds at the present time. Further, if the building is in the shade of a larger building, it can be desirable to shade the building accordingly even though the sun is out and facing a given side of the building.
As discussed, being able to determine the relative position and type of a light source and a relative position of an object being lit by that source enables a 3D model of the environment around a user device to be generated. If the user device has more than one camera able to image an object, or has a stereoscopic or other such element, the device can capture three-dimensional information about an object being imaged. For example, the device can capture information about the profile of a person's nose in addition to the shape from a direct view. Thus, not only can the device light the object from a position corresponding to the light source when rendering, but can also light any graphical elements according to the actual shape of that object. This information can be utilized with any appropriate graphics program, such as by submitting the information as a request to an OpenGL API, whereby the appropriate lighting and shading can be performed using the three-dimensional information.
Being able to generate such a model can have other benefits as well. For example, a user such as a photographer can capture an image of an object such as another person. By being able to determine the direction of lighting, and potentially the intensity and/or other such aspects, the device can determine the location of various shadows or shading and can make adjustments accordingly. For example, the device might be able to utilize an algorithm to remove shadows, highlights, glint or otherwise adjust the brightness or contrast of portions of an image digitally based upon the relative location of the light source. In other embodiments, the device might apply a longer exposure or otherwise perform different capture approaches to areas in low light in order to obtain additional color information. For example, the device can capture a portion of the image that is in the sun with a first set of optical settings and a second portion of the image that is not in the sun with a second set of optical settings. Such a setting could be applied automatically for captured images to minimize or remove shadowing or decrease the variations in intensity, etc.
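A minimal form of such a digital adjustment, assuming a shadow mask has already been derived from the determined light position, is a gain correction that moves the mean of the shadowed pixels toward the mean of the lit pixels:

```python
import numpy as np

def lift_shadows(image, shadow_mask, strength=0.6):
    """Reduce the intensity difference between shadowed and lit regions of an image.

    image       -- H x W (or H x W x 3) array of pixel values in [0, 255]
    shadow_mask -- boolean array marking pixels estimated to lie in shadow, e.g. on the
                   side of a subject facing away from the determined light source
    strength    -- 0 leaves the image unchanged, 1 fully matches the lit-region mean
    """
    out = image.astype(float).copy()
    lit_mean = out[~shadow_mask].mean()
    shadow_mean = out[shadow_mask].mean()
    gain = 1.0 + strength * (lit_mean / max(shadow_mean, 1e-6) - 1.0)
    out[shadow_mask] *= gain                     # brighten only the shadowed pixels
    return np.clip(out, 0, 255)

# Half of a synthetic frame is in shadow; lifting it reduces the contrast between halves.
img = np.hstack([np.full((4, 4), 180.0), np.full((4, 4), 60.0)])
mask = np.zeros((4, 8), bool)
mask[:, 4:] = True
print(lift_shadows(img, mask).astype(int))       # shadowed half rises from 60 to 132
```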
Such processes also can be used with other applications, such as image or facial recognition. For example, certain facial recognition algorithms have difficulty identifying a person if half of that person's face is covered in shadow. If the device performing the recognition has access to lighting information as discussed elsewhere herein, the device can make any necessary adjustments in order to improve the recognition process. For example, the device can attempt to remove the shadows or analyze based only on that portion that is in the light. In some embodiments, the device can attempt a “mirroring” process whereby any section that is likely covered in shadow can be replaced or merged with similar portions of the other side of that person's face in order to provide the points needed for proper recognition. In some embodiments, at least one front-facing camera can be used to attempt to recognize a current user of the device.
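A sketch of such a mirroring step, assuming the face has already been detected and roughly centered on its vertical axis of symmetry, simply copies a horizontally flipped version of the lit half over the shadowed half before recognition is attempted:

```python
import numpy as np

def mirror_shadowed_half(face, shadow_on_left):
    """Replace the shadowed half of a roughly centered face image with a mirror of the lit half.

    face           -- H x W (grayscale) or H x W x 3 array containing the face, approximately
                      centered on its vertical axis of symmetry
    shadow_on_left -- True if the left half of the face (as seen in the image) is in shadow,
                      e.g. because the light source was determined to be on the right
    """
    out = face.copy()
    mid = face.shape[1] // 2
    if shadow_on_left:
        out[:, :mid] = face[:, -mid:][:, ::-1]   # mirror the lit right half onto the left
    else:
        out[:, -mid:] = face[:, :mid][:, ::-1]   # mirror the lit left half onto the right
    return out

# A toy 4x4 "face" dark on the left; mirroring the lit right half restores symmetry.
toy = np.array([[10, 10, 200, 220]] * 4)
print(mirror_shadowed_half(toy, shadow_on_left=True))
```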
Accordingly, in at least some embodiments it can be desirable to have imaging elements and/or sensors at various positions around the device not only to be able to generate a three-dimensional model of lighting around the device, or at least determine the relative positions of light sources around the device, but also to capture image information in various directions around the device. The desire to include a number of cameras or sensors, however, can increase the cost and/or complexity of the device.
In some embodiments, the angular range of each fiber at least partially overlaps the range of one or more adjacent fibers, such that interpolation of lighting information between ranges can be improved. In other embodiments, each optical fiber 804 is actually a fiber bundle comprised of multiple individual fibers. Each individual fiber can be tapered or angled at the receiving end, for example, such that each individual fiber of a bundle captures light from a slightly different direction while only running a single bundle to that location. If each individual fiber then directs light to at least one unique pixel, an improved model of surrounding ambient light can be generated based on the additional data points. Such an approach also has the added benefit that none of the main sensors (e.g., cameras) on the device are obscured by an occlusion as discussed elsewhere herein. Further, if the fiber ends are substantially flush with the edge of the device casing there may be no need for lenses or other such elements.
In some embodiments, however, the desire to keep the size of the device as small as possible can outweigh the cost of multiple sensors or other such elements. For example, even though the size of each optical fiber in
As discussed, the device in many embodiments will include at least one image capture element/sensor 1008 such as a camera, ambient light sensor or infrared sensor that is able to image objects or at least capture light in the vicinity of the device. It should be understood that image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc. Further, a device can include the ability to start and/or stop image capture, such as when receiving a command from a user, application or other device. The device also can include one or more orientation and/or location determining elements 1012, such as an accelerometer, gyroscope, electronic compass or GPS device as discussed above. These elements can be in communication with the processor in order to provide the processor with positioning and/or orientation data.
In some embodiments, the computing device 1000 of
The illustrative environment includes at least one application server 1108 and a data store 1110. It should be understood that there can be several application servers, layers or other elements, processes or components, which may be chained or otherwise configured, which can interact to perform tasks such as obtaining data from an appropriate data store. As used herein the term “data store” refers to any device or combination of devices capable of storing, accessing and retrieving data, which may include any combination and number of data servers, databases, data storage devices and data storage media, in any standard, distributed or clustered environment. The application server can include any appropriate hardware and software for integrating with the data store as needed to execute aspects of one or more applications for the client device, handling a majority of the data access and business logic for an application. The application server provides access control services in cooperation with the data store and is able to generate content such as text, graphics, audio and/or video to be transferred to the user, which may be served to the user by the Web server in the form of HTML, XML or another appropriate structured language in this example. The handling of all requests and responses, as well as the delivery of content between the client device 1102 and the application server 1108, can be handled by the Web server 1106. It should be understood that the Web and application servers are not required and are merely example components, as structured code discussed herein can be executed on any appropriate device or host machine as discussed elsewhere herein.
The data store 1110 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store illustrated includes mechanisms for storing production data 1112 and user information 1116, which can be used to serve content. The data store also is shown to include a mechanism for storing log data 1114, which can be used for purposes such as reporting and analysis. It should be understood that there can be many other aspects that may need to be stored in the data store, such as for page image information and access right information, which can be stored in any of the above listed mechanisms as appropriate or in additional mechanisms in the data store 1110. The data store 1110 is operable, through logic associated therewith, to receive instructions from the application server 1108 and obtain, update or otherwise process data in response thereto. In one example, a user might submit a search request for a certain type of item. In this case, the data store might access the user information to verify the identity of the user and can access the catalog detail information to obtain information about items of that type. The information then can be returned to the user, such as in a results listing on a Web page that the user is able to view via a browser on the user device 1102. Information for a particular item of interest can be viewed in a dedicated page or window of the browser.
Each server typically will include an operating system that provides executable program instructions for the general administration and operation of that server and typically will include a computer-readable medium storing instructions that, when executed by a processor of the server, allow the server to perform its intended functions. Suitable implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
The environment in one embodiment is a distributed computing environment utilizing several computer systems and components that are interconnected via communication links, using one or more computer networks or direct connections. However, it will be appreciated by those of ordinary skill in the art that such a system could operate equally well in a system having fewer or a greater number of components than are illustrated in
An environment such as that illustrated in
As discussed above, the various embodiments can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems and other devices capable of communicating via a network.
Various aspects also can be implemented as part of at least one service or Web service, such as may be part of a service-oriented architecture. Services such as Web services can communicate using any appropriate type of messaging, such as by using messages in extensible markup language (XML) format and exchanged using an appropriate protocol such as SOAP (derived from the “Simple Object Access Protocol”). Processes provided or executed by such services can be written in any appropriate language, such as the Web Services Description Language (WSDL). Using a language such as WSDL allows for functionality such as the automated generation of client-side code in various SOAP frameworks.
Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, CIFS and AppleTalk. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network and any combination thereof.
In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase® and IBM®.
The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen or keypad) and at least one output device (e.g., a display device, printer or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.
Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.) and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets) or both. Further, connection to other computing devices such as network input/output devices may be employed.
Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.