1. Field of the Invention
The present invention relates generally to the field of three-dimensional imagery.
2. Background Art
Human beings perceive depth by using binocular vision to view the same scene from two slightly different perspectives. Depth can be simulated in two-dimensional images by capturing two different images of a scene, where each image provides the perspective that would be seen by one of the viewer's eyes. The different images are combined to create a single three-dimensional ("3-D") image for a viewer, who typically wears special 3-D eyeglasses, commonly with a red filter for one eye and a cyan filter for the other.
However, this approach requires that two separate two-dimensional images of a scene be captured. The images must be captured using special camera arrangements, involving either a pair of cameras or a single camera that can be moved between two positions in rapid succession. Alternatively, special camera equipment, such as a stereo camera with two pairs of lenses and image sensors, must be used. Moreover, this approach works only for images with a limited field of view: the separate images present a scene with a limited field of view, as opposed to the full field of view of a panoramic image. Therefore, this approach cannot be used to create panoramic three-dimensional images.
Accordingly, new methods and systems for creating three-dimensional images using a single two-dimensional image of a scene are needed. New methods and systems for creating panoramic three-dimensional images using existing image capturing technology are also needed.
Embodiments of the present invention relate generally to synthesizing three-dimensional images, and more particularly, to synthesizing a three-dimensional image from a two-dimensional image.
In one embodiment of the present invention, there is provided a method for synthesizing three-dimensional images that includes generating a displacement map, based on distance data, for a first two-dimensional image using a computer-based system. The distance data represents a distance between an image object and a first position. A shifted version of the first two-dimensional image is produced based on the displacement map. Then, the first two-dimensional image and the shifted version of the first two-dimensional image are combined to produce a first three-dimensional image.
In another embodiment of the present invention, there is provided a system for synthesizing three-dimensional images that includes a displacement map generator, an image shifter, and an image synthesizer system. The displacement map generator generates a displacement map, based on distance data, for a first two-dimensional image. The distance data represents a distance between an image object and a first position. The image shifter produces a shifted version of the first two-dimensional image based on the displacement map. The image synthesizer system then combines the first two-dimensional image and the shifted version of the first two-dimensional image to produce a first three-dimensional image.
Further embodiments of the present invention enable displaying three-dimensional images, forming panoramic three-dimensional images by combining two three-dimensional images, and producing additional panoramic three-dimensional images based on additional positions.
Embodiments of the present invention may be implemented using hardware, software, or a combination thereof and may be implemented in one or more computer systems or other processing systems.
Further embodiments, features, and advantages of the present invention, as well as the structure and operation of the various embodiments, are described in detail below with reference to the accompanying drawings. It is noted that the invention is not limited to the specific embodiments described herein. Such embodiments are presented herein for illustrative purposes only. Additional embodiments will be apparent to persons skilled in the relevant art(s) based on the information contained herein.
Embodiments of the present invention are described, by way of example only, with reference to the accompanying drawings. In the drawings, like reference numbers may indicate identical or functionally similar elements. The drawing in which an element first appears is typically indicated by the leftmost digit or digits in the corresponding reference number. Further, the accompanying drawings, which are incorporated herein and form part of the specification, illustrate the embodiments of the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the relevant art(s) to make and use the invention.
While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those skilled in the art with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the invention would be of significant utility.
The present invention relates to synthesizing three-dimensional images and more particularly, to synthesizing a three-dimensional image from a two-dimensional image. In the detailed description herein, references to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Client 110 communicates with one or more servers 150, for example, across network 170. Although only servers 150-152 are shown, more servers may be used as necessary. Network 170 can be any network or combination of networks that can carry data communication. Such a network can include, but is not limited to, a local area network, metropolitan area network, and/or wide area network such as the Internet. Client 110 can be a general-purpose computer with a processor, local memory, a display, and one or more input devices such as a keyboard or a mouse. Alternatively, client 110 can be a specialized computing device such as, for example, a mobile handset. Servers 150-152, similarly, can be implemented using any general-purpose computer capable of serving data to client 110.
Client 110 executes an image viewer 120, the operation of which is further described herein. Image viewer 120 may be implemented on any type of computing device. Such computing device can include, but is not limited to, a personal computer, mobile device such as a mobile phone, workstation, embedded system, game console, television, set-top box, or any other computing device. Further, a computing device can include, but is not limited to, a device having a processor and memory for executing and storing instructions. Software may include one or more applications and an operating system. Hardware can include, but is not limited to, a processor, memory, and a graphical user interface display. The computing device may also have multiple processors and multiple shared or separate memory components. For example, the computing device may be a clustered computing environment or server farm.
In an embodiment, images retrieved and presented by image viewer 120 are panoramas, for example, in the form of panoramic images or panoramic image tiles. In a further embodiment, images retrieved and presented by image viewer 120 are three-dimensional images, including panoramic three-dimensional images that can be presented on the client display. The client display can be any type of electronic display for viewing images or any type of rendering device adapted to display three-dimensional images. Further description of image viewer 120 and its operation as a panorama viewer can be found in commonly owned U.S. patent application Ser. No. 11/754,267, which is incorporated by reference herein in its entirety.
In an embodiment, image viewer 120 can be a standalone application, or it can be executed within a browser 115, such as Google Chrome or Microsoft Internet Explorer. Image viewer 120, for example, can be executed as a script within browser 115, as a plug-in within browser 115, or as a program which executes within a browser plug-in, such as the Adobe (Macromedia) Flash plug-in. In an embodiment, image viewer 120 is integrated with a mapping service, such as the one described in U.S. Pat. No. 7,158,878, “DIGITAL MAPPING SYSTEM,” which is incorporated by reference herein in its entirety.
Mapping service 210 displays a visual representation of a map, e.g., as a viewport into a grid of map tiles. Mapping service 210 is implemented using a combination of markup and scripting elements, e.g., using HTML and JavaScript. As the viewport is moved, mapping service 210 requests additional map tiles 220 from server(s) 150, assuming the requested map tiles have not already been cached in local cache memory. Notably, the server(s) which serve map tiles 220 can be the same or different server(s) from the server(s) which serve image tiles 140 or the other data involved herein.
In an embodiment, image viewer 120 includes three-dimensional (“3-D”) image synthesis functionality 212.
In an embodiment, displacement map generator 310, image shifter 320, image synthesizer system 330, and 3-D imaging module 350 can be implemented in software, firmware, hardware, or a combination thereof. Embodiments of displacement map generator 310, image shifter 320, image synthesizer system 330, and 3-D imaging module 350, or portions thereof, can also be implemented as computer-readable code executed on one or more computing devices capable of carrying out the functionality described herein. Examples of computing devices include, but are not limited to, a central processing unit, an application-specific integrated circuit, or other type of computing device having at least one processor and memory.
In an embodiment, configuration information 130 includes distance data associated with a two-dimensional image. In an embodiment, the distance data describes the proximity of an image object to a first position. The first position can be, for example, the position of a camera used to capture the image. In an embodiment, the surface of the image object may be represented as a collection of points. Each point, in turn, may be represented as a vector, storing the point's distance from the camera and its angle relative to the direction in which the camera is pointed. Such information may be collected using a laser range finder in combination with the camera taking the image.
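By way of illustration only, one such point representation might be sketched as follows in Python; the class name and fields are hypothetical assumptions, not taken from the source.

```python
from dataclasses import dataclass
import math

@dataclass
class DepthSample:
    """One surface point stored as a vector: a distance from the camera
    (the first position) and angles relative to the camera's viewing
    direction. Names and fields are illustrative assumptions."""
    distance: float   # e.g., meters from the camera
    azimuth: float    # radians, horizontal angle from the viewing direction
    elevation: float  # radians, vertical angle from the viewing direction

    def to_camera_xyz(self):
        """Convert the (distance, angle) vector to camera-centered coordinates."""
        x = self.distance * math.cos(self.elevation) * math.sin(self.azimuth)
        y = self.distance * math.sin(self.elevation)
        z = self.distance * math.cos(self.elevation) * math.cos(self.azimuth)
        return x, y, z
```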
Although some formats may be more advantageous than others, embodiments are not limited to any particular format for storing the distance data. In an embodiment, such distance data may be sent from server(s) 150 of system 100, illustrated in FIG. 1, to image viewer 120.
The distance data may be collected in a variety of ways, including, but not limited to, using a laser range finder and image matching. In an embodiment, camera arrangements employing two or more cameras, spaced slightly apart yet looking at the same scene, may be used. According to an embodiment, image matching is used to analyze slight differences between the images captured by each camera in order to determine the distance at each point in the images. In another embodiment, the distance information may be compiled by using a single video camera, mounted on a vehicle and traveling at a particular velocity, to capture images of scenes as the vehicle moves forward. By using image matching, successive frames of the captured images may be compared to estimate the distances between the objects and the camera. For example, image objects located farther from the camera position will stay in the frame longer than image objects located closer to the camera position.
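As a rough illustration of the video-based approach, the following Python sketch estimates relative depth from the apparent motion between two successive frames using dense optical flow. It assumes a laterally translating camera, where pixel motion is roughly inversely proportional to depth; this is an illustrative approximation, not the exact method described here.

```python
import cv2
import numpy as np

def relative_depth_from_motion(prev_frame, next_frame, eps=1e-6):
    """Estimate relative depth from two consecutive video frames.
    Distant objects move little between frames; nearby objects move a lot,
    so 1 / |flow| serves as a crude relative depth map (assumption)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return 1.0 / (magnitude + eps)
```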
Displacement map generator 310 uses the distance data, from configuration information 130 and associated with a two-dimensional image from image tiles 140, to generate a displacement map for the two-dimensional image. In an embodiment, the distance data may be stored in memory accessible by image viewer 120. As previously mentioned, distance data may be sent from server(s) 150 to image viewer 120 as a depth map comprising a grid of discrete values, where each element of the grid corresponds to a pixel of the two-dimensional image, according to an embodiment. For example, the elements of the depth map contain the distance from a first position, such as a camera position, to the image object represented in the two-dimensional image. A displacement map is generated using the distance values derived from the depth map: for each pixel of the two-dimensional image, the distance or depth value of the pixel is used to generate a displacement value for the pixel. Further, the generated displacement value for a pixel is inversely proportional to the distance value of the pixel. The computation performed by displacement map generator 310 to generate a displacement value for each pixel of the two-dimensional image can be illustrated by the following expression:
d(x, y) = α / D(x, y)

The term d(x,y) of the expression represents the displacement value stored as an element in the displacement map corresponding to a pixel, and the term D(x,y) represents the depth or distance value of the pixel, where x and y denote the coordinate location of the pixel in the image. The term α (alpha) is a constant scale factor that can be used by displacement map generator 310 to control the displacement value, according to an embodiment. The scale factor also controls the degree of the three-dimensional effect in a three-dimensional image produced by image synthesizer system 330, discussed in further detail below.
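A minimal Python sketch of this computation, assuming the depth map arrives as a two-dimensional array of per-pixel distances, might look as follows; the function name and default scale factor are illustrative assumptions.

```python
import numpy as np

def generate_displacement_map(depth_map, alpha=8.0):
    """Compute d(x, y) = alpha / D(x, y) for every pixel.
    `alpha` is the constant scale factor controlling the strength of the
    three-dimensional effect; its default here is an assumption."""
    depth = np.asarray(depth_map, dtype=np.float64)
    # Guard against zero distances so the division stays finite.
    depth = np.maximum(depth, 1e-6)
    # Displacement is inversely proportional to distance, so nearby
    # objects receive larger shifts than distant ones.
    return alpha / depth
```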
Image shifter 320 is configured to use the displacement map generated by displacement map generator 310 to produce a shifted version of the two-dimensional image. In an embodiment, image shifter 320 is configured to laterally shift each of the plurality of pixels of the two-dimensional image by the displacement value, stored as an element of the displacement map corresponding to each of the plurality of pixels. For example, for a pixel located at coordinates (x,y) in the two-dimensional image, shifting the pixel laterally can be expressed as (x−d(x,y)), where d(x,y) is the displacement value as discussed above. In this example, the result of the operation (x−d(x,y)) may be a non-integer. In that case, an appropriate interpolation method, for example, but not limited to, natural neighbor (n-n), bi-linear, and bi-cubic, may be used. The appropriate interpolation method to use will depend on factors such as the available processing power. In addition, for the case of panoramic images, which tend to be spherical panoramas, wraparound along the x-axis is allowed by adding a width value, based on the dimensions of the spherical panorama, to the expression (x−d(x,y)). Therefore, if (x−d(x,y)) is less than zero, the expression (x−d(x,y)+width) is used in its place.
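The lateral shift, including interpolation for fractional positions and the x-axis wraparound used for spherical panoramas, could be sketched as follows. This is an illustrative implementation under assumed conventions (a gather-style shift with bi-linear interpolation along x); the function signature is an assumption.

```python
import numpy as np

def shift_image(image, displacement, wraparound=False):
    """Sample each output pixel at (x, y) from (x - d(x, y), y) in the
    source image, interpolating linearly between the two nearest columns
    when the source position is fractional."""
    height, width = image.shape[:2]
    ys, xs = np.mgrid[0:height, 0:width]
    src_x = xs - displacement  # may be fractional
    if wraparound:
        # Spherical panoramas wrap around along the x-axis.
        src_x = np.mod(src_x, width)
    else:
        src_x = np.clip(src_x, 0.0, width - 1.0)
    x0 = np.floor(src_x).astype(int)
    x1 = (x0 + 1) % width if wraparound else np.minimum(x0 + 1, width - 1)
    frac = src_x - x0
    if image.ndim == 3:
        frac = frac[..., None]  # broadcast over color channels
    out = (1.0 - frac) * image[ys, x0] + frac * image[ys, x1]
    return out.astype(image.dtype)
```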
Image synthesizer system 330 combines the two-dimensional image and the shifted version of the two-dimensional image, produced by image shifter 320, to produce a three-dimensional image. In an embodiment, image synthesizer system 330 is configured to filter one or more first color channels of the two-dimensional image to produce a first component of the three-dimensional image. Color channels may include, for example, red, green, and blue (RGB) color channels associated with the RGB color model or standard for displaying color images. For example, image synthesizer system 330 can filter green and blue color channels of the two-dimensional image to produce the first component of the three-dimensional image. In an embodiment, image synthesizer system 330 is further configured to filter a second color channel, chromatically opposite to the first color channel, of the shifted version of the two-dimensional image to produce a second component of the three-dimensional image. Continuing with the previous example, image synthesizer system 330 can filter the red channel of the shifted version of the two-dimensional image to produce the second component. In an embodiment, image synthesizer system 330 is further configured to combine the one or more first color channels of the first component with the second color channel of the second component to produce the three-dimensional color image.
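Continuing the example, a red/cyan anaglyph combination could be sketched as follows, assuming both inputs are H x W x 3 RGB arrays: the red channel is taken from the shifted version and the green and blue channels from the original.

```python
import numpy as np

def synthesize_anaglyph(image, shifted_image):
    """Combine the original image and its shifted version into a red/cyan
    anaglyph: red from the shifted image, green and blue from the original."""
    anaglyph = np.empty_like(image)
    anaglyph[..., 0] = shifted_image[..., 0]  # red channel (second component)
    anaglyph[..., 1:] = image[..., 1:]        # green/blue channels (first component)
    return anaglyph
```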
Notably, image synthesizer system 330 is not limited to configurations or embodiments that produce three-dimensional images by filtering color channels. Once the shifted version of a two-dimensional image is produced by image shifter 320, image synthesizer system 330 can be configured to operate with a variety of techniques for presenting three-dimensional images, such as, for example and without limitation, specialized three-dimensional displays, LCD shutter eyeglasses, and polarized displays with polarized eyeglasses. Other methods and techniques for viewing and displaying three-dimensional images would be known to a person of ordinary skill in the relevant art.
In an embodiment, rendering device 340 is configured to display the three-dimensional image produced by image synthesizer system 330. Rendering device 340 can be any visual rendering device, including any type of display system that transforms display information, such as geometric, viewpoint, texture, lighting, and shading information, into a visual image. The image can be, for example, a digital image or raster graphics image, and can be displayed in either two or three dimensions.
In an embodiment, 3-D imaging module 350 is configured to combine two or more three-dimensional images to form a panoramic three-dimensional image, including panoramic three-dimensional images that comprise a 360-degree view of a scene. In an embodiment, 3-D imaging module 350 is configured to produce additional panoramic three-dimensional images based on additional positions, such as, for example, additional camera positions used to capture images representing different scenes. In an embodiment, image viewer 120 can be configured to view panoramic three-dimensional images via rendering device 340.
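For illustration only, combining two already-aligned three-dimensional images into a wider panoramic strip might be sketched as below. Real panorama construction would also involve projection, alignment, and blending, so this is a simplification under stated assumptions; the function and parameter names are hypothetical.

```python
import numpy as np

def combine_into_panorama(left_3d, right_3d, overlap=0):
    """Form a wider panoramic 3-D image from two adjacent 3-D images,
    assuming they are already aligned and share `overlap` columns."""
    if overlap:
        # Average the shared columns to soften the seam (illustrative only).
        seam = (left_3d[:, -overlap:].astype(np.float64) +
                right_3d[:, :overlap].astype(np.float64)) / 2.0
        return np.hstack([left_3d[:, :-overlap],
                          seam.astype(left_3d.dtype),
                          right_3d[:, overlap:]])
    return np.hstack([left_3d, right_3d])
```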
As described above, embodiments of displacement map generator 310, image shifter 320, image synthesizer system 330, and 3-D imaging module 350 can be operated solely at image viewer 120 at client 110. Alternatively, embodiments can be operated solely at the server side via 3-D image synthesis functionality 252, whether entirely at a single server, such as server 150, or distributed among multiple servers. For example, various components of 3-D image synthesis functionality 252, including any one or combination of embodiments of displacement map generator 310, image shifter 320, image synthesizer system 330, and 3-D imaging module 350, may be distributed among servers 150-152.
In an embodiment, mapping service 210 can request that browser 115 proceed to download a program 230 for image viewer 120 from server(s) 150 and to instantiate any plug-in necessary to run program 230. Program 230 may be a Flash file or some other form of executable content. Image viewer 120 executes and operates as described above. In addition, configuration information 130 and even image tiles 140, including panoramic image tiles, can be retrieved by mapping service 210 and passed to image viewer 120. Image viewer 120 and mapping service 210 communicate so as to coordinate the operation of the user interface elements, to allow the user to interact with either image viewer 120 or mapping service 210, and to have the change in location or orientation reflected in both.
As described above, embodiments of the present invention can be operated according to a client-server configuration. Alternatively, embodiments can be operated solely at the client, with configuration information 130, including distance data, image tiles 140, and map tiles 220 all available at the client. For example, configuration information 130, image tiles 140, and map tiles 220 may be stored in a storage medium accessible by client 110, such as a CD-ROM or hard drive. Accordingly, no communication with server(s) 150 would be needed.
Benefits of method 400, among others, are that it can be applied to panoramic images, works independently of the viewing direction of the scene represented in an image, and can be implemented quickly while maintaining a good balance between efficiency and visual integrity, leading to an improved user experience.
Method 400 begins in step 402, which includes generating a displacement map, based on distance data, for a first two-dimensional image. Step 402 may be performed, for example, by displacement map generator 310 of FIG. 3.
In an embodiment, distance data represents, for each of a plurality of pixels of the first two-dimensional image, the distance from the first position to the image object represented in the first two-dimensional image at each pixel. Accordingly, generating the displacement map includes calculating a displacement value for each of the plurality of pixels. As described above, the displacement value is inversely proportional to the distance from the first position to the image object represented in the first two-dimensional image at each pixel, according to an embodiment. Also as described above, the displacement value for each of the plurality of pixels can be based on a constant scale factor, according to an embodiment. In an embodiment, the displacement map is based on one or more of laser range data and image matching, as described above.
Method 400 proceeds to step 404, which includes producing a shifted version of the first two-dimensional image based on the displacement map. Step 404 may be performed, for example, by image shifter 320. In an embodiment, producing a shifted version of the first two-dimensional image includes laterally shifting each of the plurality of pixels of the two-dimensional image by the corresponding displacement value of each of the plurality of pixels.
Subsequently, in step 406, method 400 includes combining the first two-dimensional image and the shifted version of the first two-dimensional image to produce a first three-dimensional image. Step 406 may be tailored to comport with a variety of techniques for creating and displaying three-dimensional imagery, such as, for example and without limitation, specialized three-dimensional displays, LCD shutter eyeglasses, and polarized displays with polarized eyeglasses.
For example, in one embodiment, step 406 includes producing a first and a second component of the first three-dimensional image and combining them to produce the first three-dimensional image.
In one embodiment, step 406 includes filtering one or more first color channels of the first two-dimensional image to produce a first component of the first three-dimensional image. For example, the green and blue color channels of image 510 can be filtered in step 406 to produce image 530, according to an embodiment. Step 406 further includes filtering a second color channel, chromatically opposite to the first color channel, of the shifted version of the first two-dimensional image to produce a second component of the first three-dimensional image. For example, the red color channel of the shifted version of image 510 can be filtered in step 406 to produce image 520, according to an embodiment. Step 406 further includes combining the one or more first color channels of the first component of the first three-dimensional image with the second color channel of the second component of the first three-dimensional image to produce the first three-dimensional image. For example, images 520 and 530 are combined to produce the three-dimensional image 540. Step 406 may be performed, for example, by image synthesizer system 330.
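Tying the hypothetical sketches above together, an end-to-end pass over steps 402-406 might look like the following; all data here are stand-ins, and the function names reuse the illustrative sketches defined earlier.

```python
import numpy as np

# Stand-in inputs: a 2-D RGB image and a per-pixel depth map.
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(1.0, 50.0, (480, 640))

displacement = generate_displacement_map(depth, alpha=8.0)    # step 402
shifted = shift_image(image, displacement, wraparound=False)  # step 404
anaglyph = synthesize_anaglyph(image, shifted)                # step 406
```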
In further embodiments, method 400 includes additional steps, which are not shown in FIG. 4.
In another embodiment, method 400 includes producing a second three-dimensional image based on a second two-dimensional image (e.g., using 3-D imaging module 350 of FIG. 3) and combining the first and second three-dimensional images to form a panoramic three-dimensional image.
Services such as Google Maps are capable of displaying street level images of geographical locations. The images, known on Google Maps as "Street View," typically comprise photographs of buildings and other features, and allow a user to view a geographic location from a street level perspective (e.g., as would a person walking on the street at the geographic location), as compared to a top-down map perspective. In one aspect, street level images are panoramic images, such as 360-degree panoramas centered at the geographic location associated with an image. The panoramic street-level view may be created by stitching together a plurality of photographs representing different perspectives from a geographical vantage point.
In an embodiment, mapping service 210 of FIG. 2 displays an avatar icon on the map at a geographic location for which street level images are available, and a user can select the avatar icon to view the corresponding street level panorama.
In an embodiment, as image viewer 120 is instantiated by mapping service 210, image viewer 120 is presented in the form of a viewport embedded in an informational balloon window associated with the avatar icon. The orientation of the visual representation of the panorama within the viewport matches the orientation of the avatar icon. As the user manipulates the visual representation of the panorama within the viewport, image viewer 120 informs the mapping service of any changes in orientation or location so that the mapping service can update the orientation and location of the avatar icon. Likewise, as the user manipulates the orientation or location of the avatar icon within mapping service 210, mapping service 210 informs image viewer 120 so that the image viewer 120 can update its visual representation.
In an embodiment, the viewport of image viewer 120 presents a panoramic image of the selected area. The user can click and drag around on the image to look around 360 degrees. For example, the viewport can present a variety of user interface elements that are added to the underlying panorama. These elements include navigation inputs such as, for example, zoom and panning controls (e.g., navigation buttons) on the left side of the viewport and annotations in the form of lines/bars, arrows, and text that are provided directly in the panorama itself. Further description of mapping service 210 and its operation in the context of Street View panoramas can be found in commonly owned U.S. patent application Ser. No. 11/754,267, which is incorporated by reference herein in its entirety.
Conventionally, the panoramic images presented in the viewport of image viewer 120 have been two-dimensional panoramic images. In an embodiment, in addition to the user interface elements discussed above (e.g., navigation buttons), the viewport of image viewer 120 further includes a 3-D user interface element (e.g., user interface element 610 in FIG. 6) that the user can select to view the panorama as a three-dimensional image.
Aspects of the present invention, or any part(s) or function(s) thereof, may be implemented using hardware, software, or a combination thereof, and may be implemented in one or more computer systems or other processing systems.
If programmable logic is used, such logic may execute on a commercially available processing platform or a special purpose device. One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multi-core multiprocessor systems, minicomputers, mainframe computers, computer linked or clustered with distributed functions, as well as pervasive or miniature computers that may be embedded into virtually any device.
For instance, at least one processor device and a memory may be used to implement the above described embodiments. A processor device may be a single processor, a plurality of processors, or combinations thereof. Processor devices may have one or more processor “cores.”
Various embodiments of the invention are described in terms of this example computer system 700. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter.
Processor device 704 may be a special purpose or a general purpose processor device. As will be appreciated by persons skilled in the relevant art, processor device 704 may also be a single processor in a multi-core/multiprocessor system, with such a system operating alone or in a cluster of computing devices, such as a server farm. Processor device 704 is connected to a communication infrastructure 706, for example, a bus, message queue, network, or multi-core message-passing scheme.
Computer system 700 also includes a main memory 708, for example, random access memory (RAM), and may also include a secondary memory 710. Secondary memory 710 may include, for example, a hard disk drive 712 and/or a removable storage drive 714. Removable storage drive 714 may comprise a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. The removable storage drive 714 reads from and/or writes to a removable storage unit 718 in a well-known manner. Removable storage unit 718 may comprise a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 714. As will be appreciated by persons skilled in the relevant art, removable storage unit 718 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative implementations, secondary memory 710 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 700. Such means may include, for example, a removable storage unit 722 and an interface 720. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 722 and interfaces 720 which allow software and data to be transferred from the removable storage unit 722 to computer system 700.
Computer system 700 may also include a communications interface 724. Communications interface 724 allows software and data to be transferred between computer system 700 and external devices. Communications interface 724 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like. Software and data transferred via communications interface 724 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 724. These signals may be provided to communications interface 724 via a communications path 726. Communications path 726 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels.
In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as removable storage unit 718, removable storage unit 722, and a hard disk installed in hard disk drive 712. Computer program medium and computer usable medium may also refer to memories, such as main memory 708 and secondary memory 710, which may be memory semiconductors (e.g. DRAMs, etc.).
Computer programs (also called computer control logic) are stored in main memory 708 and/or secondary memory 710. Computer programs may also be received via communications interface 724. Such computer programs, when executed, enable computer system 700 to implement the present invention as discussed herein. In particular, the computer programs, when executed, enable processor device 704 to implement the processes of the present invention, such as the stages in the method illustrated by flowchart 400 of FIG. 4, discussed above.
Embodiments of the invention also may be directed to computer program products comprising software stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes the data processing device(s) to operate as described herein. Embodiments of the invention employ any computer useable or readable medium. Examples of computer useable media include, but are not limited to, primary storage devices (e.g., any type of random access memory) and secondary storage devices (e.g., hard drives, floppy disks, CD-ROMs, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnological storage devices, etc.).
As would be understood by a person skilled in the art based on the teachings herein, several variations of the above-described features of synthesizing three-dimensional images can be envisioned. These variations are within the scope of embodiments of the present invention. For the purpose of illustration only and not limitation, a few variations are provided herein. For example, one skilled in the art can envision several variations for generating a displacement map as in step 402 of method 400 of FIG. 4.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way.
The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.