The invention relates generally to the field of digital imaging, and in particular to the transmission of digital images and other content.
Various methods are available to share digital images between two parties. One known method is to attach a digital file comprising a digital image as part of an electronic message, for example, e-mail. When the recipient receives the electronic message, the digital file can be detached and the image viewed. Another known method employs on-line service providers, for example Ofoto, Inc. On-line service providers support websites/databases, which permit a user to store/access/share digital images between two or more parties. For example, using a website, a user can arrange a collection of images which can be viewed by individuals authorized by the user. These authorized individuals can view the collection of images and can order prints of the images. While such systems may have achieved certain degrees of success in their particular applications, some systems have disadvantages.
For example, some systems require the use of a computer, and therefore, the user needs to be computer literate to send/receive an image. Even where a user is proficient with a computer, such systems typically require the user to execute a number of steps in order to successfully transfer an image to a recipient. First, a connection must be established between the device having the content and the computer. Second, a connection must be established between the computer and the remote image server. Third, a user typically must provide some form of identification and authentication to access the site so that digital images or other data can be transferred to the site. Fourth, a user must then identify each image that is to be transferred from a server to the remote destination. Fifth, the user must identify the remote destination and sixth, the user typically must provide some form of confirmation that the user does indeed wish to provide the digital image or other content to the remote destination. It will be appreciated that with each additional step required in this process, users become increasingly less likely to share images in this fashion.
Accordingly, systems have been developed that have made image content sharing easier. For example, the Kodak EASYSHARE digital cameras sold by the Eastman Kodak Company incorporate a designated share button. When a user of the camera determines that the user wishes to share digital image content stored therein by sending the image content to a remote address, the user presses the share button and this causes a list of addresses that is preprogrammed into the camera to appear. The user selects from among the addresses in the list, destinations to which the selected image is to be sent. When the camera is next connected to a personal computer, EASYSHARE image management software on the computer causes such images to be automatically transmitted to each of the selected addresses. The system is exceptionally popular with consumers and has proven commercial value.
Recently, cellular telephones that incorporate digital cameras, or other devices that are otherwise capable of sharing image content, have become increasingly popular. Such cellular telephones allow users to share images or other content by way of establishing a communication link between cellular telephones using conventional dialing or speed dialing capabilities and then transferring the digital images or other content by way of the connection. Such cellular telephone based systems also typically allow a user to indicate that a particular image is to be sent to a particular e-mail address that has been prerecorded in the cellular telephone.
It will be appreciated that such methods require a user of such a digital camera or cell phone to take a number of steps to transmit digital image content. What is desired is a further reduction in the number of steps that the user must take to cause a device to transmit digital image content to a remote destination.
U.S. Patent Application Publication No. 20030184793, entitled “Method and Apparatus for Uploading Content from a Device to a Remote Network Location” filed by Pineau on Mar. 14, 2002, describes techniques for uploading content (such as a digital photograph) from a content upload device to a content server over a communication network and for automatically forwarding the content from the content server to one or more remote destinations. A user of the content upload device may cause the content upload device to upload the content to the server by initiating a single action, such as pressing a single button on the content upload device, and without providing information identifying the user to the content upload device. Upon receiving the content, the content server may add the content to a queue, referred to as a content outbox, associated with the user. The content server may automatically forward the content in the user's content outbox to one or more remote destinations specified by preferences associated with the user's content outbox. It will be appreciated, however, that while the content is transferred with the depression of a single button, the determination of how, where and with whom the content is transmitted is made automatically based upon the user's stored preferences. There is no opportunity for a user to change the distribution pattern defined by those preferences for a particular image. Thus such an approach does not provide a flexible solution that provides for convenient decision making for individual images.
Accordingly, a need exists for an image content sharing device and method of sharing images between at least two parties, which can be used with a computer but does not require the use of a computer to send/receive images and which is adapted to facilitate the process of designating how an image is to be shared with remote users.
A further need exists in the art for image sharing devices that provide such increased functionality while maintaining a small size; for example, the relatively small size of many cellular telephones, digital cameras, portable image sharing devices and the like is promoted as a convenience and lifestyle advantage. Thus, what is also needed in the art is an image content sharing device that enables rapid and easy sharing of image content but that does not increase the size, cost or complexity of an image sharing device.
In one aspect of the invention, an image content sharing device is provided. The image content sharing device has a display, a memory, a user input capable of receiving more than one user input action and of providing a user input signal indicative of each of the more than one user input actions and a controller. The controller is operable in an image content presentation mode, wherein the controller causes image content to be presented on the display and at least one other mode with the controller being adapted so that when the controller is in the image content presentation mode and detects a user input signal, the controller determines at least one destination from among more than one possible destination based upon the user input signal detected and arranges for the presented image content to be automatically transmitted to the at least one destination, with the controller further being operable in at least one other mode, so that when the controller is in the at least one other mode and the controller detects the same user input signal, the controller responds thereto in a manner that is different from the manner in which the controller responds to the user input signal when the controller is in the image content presentation mode.
In another aspect of the invention, an image content sharing device is provided. The image content sharing device comprises a display, a user input circuit having a plurality of inputs adapted to provide differentiable input signals with each differentiable input signal being generated in response to a different user input action and a controller. The controller is operable to receive the set of differentiable input signals and to use the sensed input signals to perform a set of operations, including causing image content to be presented on the display. The controller is further operable during presentation of the image content, to sense at least a portion of the same set of differentiable input signals and to arrange for the image content to be transmitted to a particular destination selected from among more than one possible destination based upon the sensed differentiable input signals.
In still another aspect of the invention, a method for operating an image content sharing device is provided. In accordance with the method, image content is presented and at least one user input action during presentation of the image content is detected. A destination, from among more than one possible destination, is determined for sharing the presented image content based upon the user input action detected during the display of the digital image content and it is arranged for the presented image content to be transmitted to the determined destination without further user input action.
Scene lens system 23 can have one or more elements and can be of a fixed focus type or can be manually or automatically adjustable. In the example embodiment shown in
Scene lens system 23 can provide a fixed zoom or a variable zoom capability. In the embodiment shown, lens driver 25 is further adapted to provide such a zoom magnification by adjusting the position of one or more mobile elements (not shown) relative to one or more stationary elements (not shown) of scene lens system 23 based upon signals from signal processor 26, an automatic range finder system 27, and/or controller 32. Controller 32 can determine a zoom setting based upon manual inputs made using user input system 34 or in other ways. Scene lens system 23 can employ other known arrangements for providing an adjustable zoom, including for example a manual adjustment system.
Light from the scene 8 that is focused by scene lens system 23 onto scene image sensor 24 is converted into image signals representing an image of the scene. Scene image sensor 24 can comprise a charge coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, or any other electronic image sensor known to those of ordinary skill in the art. The image signals can be in digital or analog form.
Signal processor 26 receives image signals from scene image sensor 24 and transforms the image signals into image content in the form of digital data. As used herein, image content includes, without limitation, any form of digital data that can be used to represent a still image, a sequence of still images, combinations of still images, or video segments and sequences, that is, any form of image, portions of images or combinations of images that can be reconstituted into a human perceptible form, including forms that are perceived as providing motion images, such as image sequences and image streams. Where the digital image data comprises a stream of apparently moving images, the digital image data can comprise image data stored in an interleaved or interlaced image form, a sequence of still images, and/or other forms known to those of skill in the art of digital video.
Signal processor 26 can apply various image processing algorithms to the image signals when forming image content. These can include but are not limited to color and exposure balancing, interpolation and compression. Where the image signals are in the form of analog signals, signal processor 26 also converts these analog signals into a digital form. In certain embodiments of the invention, signal processor 26 can be adapted to process the image signal so that the image content formed thereby appears to have been captured at a different zoom setting than that actually provided by the optical lens system. This can be done by using a subset of the image signals from scene image sensor 24 and interpolating the subset of the image signals to form the digital image. This is known generally in the art as “digital zoom”. Such digital zoom can be used to provide electronically controllable zoom adjustment in fixed focus, manual focus, and even automatically adjustable focus systems.
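By way of illustration, the digital zoom operation described above can be sketched as follows. This is a hypothetical sketch only: the 2x zoom factor, the pixel-replication upsampling (a crude stand-in for interpolation) and all function names are assumptions introduced for the example, not the patent's specified algorithm.

```python
def digital_zoom(pixels, factor):
    """Simulate digital zoom: crop the central 1/factor portion of the
    image (the subset of image signals) and upsample it back to the
    original size by replicating each pixel factor x factor times."""
    h, w = len(pixels), len(pixels[0])
    ch, cw = h // factor, w // factor          # size of the cropped subset
    top, left = (h - ch) // 2, (w - cw) // 2   # centre the crop
    crop = [row[left:left + cw] for row in pixels[top:top + ch]]
    zoomed = []
    for row in crop:
        # Replicate each cropped pixel horizontally, then the row vertically.
        expanded = [p for p in row for _ in range(factor)]
        zoomed.extend([list(expanded) for _ in range(factor)])
    return zoomed

# A 4x4 test image zoomed by a factor of 2 keeps its original 4x4 size.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
zoomed = digital_zoom(image, 2)
```

Note that the output has the same dimensions as the input even though only the central subset of the image signals contributes to it, which is the apparent-zoom effect the passage describes.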
Controller 32 controls the operation of the image content sharing device 10 including, but not limited to, a scene image capture system 22, display 30 and memory such as memory 40. Controller 32 can comprise a microprocessor such as a programmable general purpose microprocessor, a dedicated micro-processor or micro-controller, a combination of discrete components or any other system that can be used to control operation of image content sharing device 10. During operation, controller 32 causes image sensor 24, signal processor 26, display 30 and memory 40 to capture, present, store and/or transmit digital image content in response to signals received from a user input system 34, from signal processor 26 and from optional sensors 36.
Controller 32 cooperates with a user input system 34 to allow image content sharing device 10 to interact with a user. User input system 34 can comprise any form of transducer, switch, sensor or other device capable of receiving or sensing an input action of a user and converting this input action into a user input signal that can be used by controller 32 in operating image content sharing device 10. For example, user input system 34 can comprise a touch screen input, a touch pad input, a 4-way switch, a 6-way switch, an 8-way switch, a stylus system, a trackball system, a joystick system, a voice recognition system, a gesture recognition system or other such systems.
In the digital camera/cellular phone 12 embodiment of image content sharing device 10 shown in
It will be appreciated that the user input signal provided by user input system 34 comprises one or more signals from which controller 32 can determine what user input actions a user of image content sharing device 10 is taking at any given moment. In this regard, each transducer or other device that is capable of receiving or sensing a user input action causes user input system 34 to generate an input signal that is differentiable in that controller 32 can use the input signal to determine which transducer or other device has been actuated by a user and/or how that device has been actuated, or what has been sensed.
Sensors 36 are optional and can include light sensors and other sensors known in the art that can be used to detect conditions in the environment surrounding image content sharing device 10 and to convert this information into a form that can be used by controller 32 in governing operation of image content sharing device 10. Sensors 36 can include audio sensors adapted to capture sounds. Such audio sensors can be of conventional design or can be capable of providing controllably focused audio capture such as the audio zoom system described in U.S. Pat. No. 4,862,278, entitled “Video Camera Microphone with Zoom Variable Acoustic Focus”, filed by Dann et al. on Oct. 14, 1986. Sensors 36 can also include biometric sensors adapted to detect characteristics of a user for security and affective imaging purposes. Where a need for illumination is determined, controller 32 can cause a source of artificial illumination 37 such as a light, strobe, or flash system to emit light.
Controller 32 generates a capture signal that causes digital image content to be captured when a trigger condition is detected. Typically, the trigger condition occurs when a user depresses capture button 60; however, controller 32 can instead determine that a trigger condition exists at a particular time, or at a particular time after capture button 60 is depressed. Alternatively, controller 32 can determine that a trigger condition exists when optional sensors 36 detect certain environmental conditions, such as optical or radio frequency signals. Further, controller 32 can determine that a trigger condition exists based upon affective signals obtained from the physiology of a user.
Controller 32 can also be used to generate metadata in association with the digital image content. Metadata is data that is related to digital image content or to a portion of such digital image content but that is not necessarily observable in the image content itself. In this regard, controller 32 can receive signals from signal processor 26, camera user input system 34 and other sensors 36 and can generate optional metadata based upon such signals. The metadata can include but is not limited to information such as the time, date and location that the digital image content was captured or otherwise formed, the type of image sensor 24, mode setting information, integration time information, lens system 23 setting information that characterizes the process used to capture or create the digital image content and processes, methods and algorithms used by image content sharing device 10 to form the scene image. The metadata can also include but is not limited to any other information determined by controller 32 or stored in any memory in image content sharing device 10 such as information that identifies image content sharing device 10, and/or instructions for rendering or otherwise processing the digital image with which the metadata is associated. The metadata can also comprise an instruction to incorporate a particular message into digital image content when presented. Such a message can be a text message to be rendered when the digital image content is presented or rendered. The metadata can also include audio signals. The metadata can further include digital image data. In one embodiment of the invention, where digital zoom is used to form the image content from a subset of one or more captured images, the metadata can include image data from portions of an image that are not incorporated into the subset of the digital image that is used to form the digital image. The metadata can also include any other information entered into image content sharing device 10.
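The kind of metadata record described above can be sketched as follows. This is an illustrative sketch only; the field names, the function name and the string formats are assumptions introduced for the example, and do not represent a metadata format defined by the patent or by any standard.

```python
def build_metadata(capture_time, device_id, mode_setting, message=None):
    """Assemble a metadata record to associate with digital image
    content. All field names here are hypothetical illustrations."""
    meta = {
        "capture_time": capture_time,    # time/date the content was captured
        "device_id": device_id,          # identifies image content sharing device 10
        "mode_setting": mode_setting,    # mode setting used for the capture
    }
    if message is not None:
        # Instruction to render a text message when the content is presented.
        meta["presentation_message"] = message
    return meta

record = build_metadata("2005-06-01T12:00:00Z", "camera-001", "auto",
                        message="Happy birthday!")
```

A record of this kind could then travel with the digital image content and be interpreted at presentation time, as the passage describes for rendering instructions and embedded messages.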
The digital image content and optional metadata can be stored in a compressed form. For example, where the digital image content comprises a sequence of still images, the still images can be stored in a compressed form such as by using the JPEG (Joint Photographic Experts Group) ISO 10918-1 (ITU-T.81) standard. This JPEG compressed image data is stored using the so-called “Exif” image format defined in the Exchangeable Image File Format version 2.2 published by the Japan Electronics and Information Technology Industries Association, JEITA CP-3451. Similarly, other compression systems such as the MPEG-4 (Moving Picture Experts Group) or Apple QuickTime™ standard can be used to store digital image content in a video form. Other image compression and storage forms can be used.
The digital image content and metadata can be stored in a memory such as memory 40. Memory 40 can include conventional memory devices including solid state, magnetic, optical or other data storage devices. Memory 40 can be fixed within image content sharing device 10 or it can be removable. In the embodiment of
In the embodiment shown in
Communication circuit 54 can be used to receive image content from external sources, including but not limited to a host computer (not shown), network (not shown), a separate digital image capture device or an image storage device. Such image content can be of a type that is captured using an external digital image capture system, or can be in whole or in part generated electronically, such as can be generated by digital image creation systems or digital image processing system. In this regard, it will be appreciated that while, in
For example, where communication circuit 54 is adapted to communicate by way of a cellular telephone network, communication circuit 54 can be associated with a cellular telephone number or other identifying number that, for example, another user of the cellular telephone network, such as the user of a telephone equipped with a digital camera, can use to establish a communication link with image content sharing device 10. In such an embodiment, controller 32 can cause communication circuit 54 to transmit signals causing an image to be captured by a separate image content sharing device 10 and can cause the separate image content sharing device 10 to transmit digital image content that can be received by communication circuit 54. In another alternative, image content can be conveyed to image content sharing device 10 when such images are captured by a separate image content sharing device and recorded on a removable memory 48 that is operatively associated with memory interface 50. Accordingly, there are a variety of ways in which image content sharing device 10 can obtain image content.
It will further be appreciated that, in certain embodiments, communication circuit 54 can provide other information to controller 32, such as data that can be used for creating metadata, and other information and instructions, for example signals from a remote control device (not shown) such as a remote capture button (not shown), and can operate image content sharing device 10 in accordance with such signals.
Display 30 can comprise, for example, a color liquid crystal display (LCD), an organic light emitting diode (OLED) display, also known as an organic electro-luminescent display (OELD), or other type of video display. Display 30 can be external as is shown in
Signal processor 26 and/or controller 32 can also cooperate to generate other images such as text, graphics, icons and other information for presentation on display 30 that can allow interactive communication between controller 32 and a user of image content sharing device 10, with display 30 providing information to the user of image content sharing device 10 and the user of image content sharing device 10 using user input system 34 to interactively provide information to image content sharing device 10. Image content sharing device 10 can also have other displays such as a segmented LCD or LED display (not shown), which can also permit signal processor 26 and/or controller 32 to provide information to a user. This capability is used for a variety of purposes such as establishing modes of operation, entering control settings and user preferences, and providing warnings and instructions to a user of image content sharing device 10.
Other systems such as known circuits, lights and actuators for generating visual signals, audio signals, vibrations, haptic feedback and/or other forms of human perceptible signals can also be incorporated into image content sharing device 10 for use in providing information, feedback and warnings to the user of image content sharing device 10.
Typically, display 30 has less imaging resolution than image sensor 24. Accordingly, in such embodiments, signal processor 26 and/or controller 32 are adapted to present the image content by forming evaluation content which has an appearance that corresponds to image content in image content sharing device 10 and is adapted for presentation on display 30. In one example of this type, signal processor 26 reduces the resolution of the image content captured by image capture system 22 when forming evaluation images adapted for presentation on display 30. Down sampling and other conventional techniques for reducing the overall imaging resolution can be used. For example, resampling techniques such as are described in commonly assigned U.S. Pat. No. 5,164,831, “Electronic Still Camera Providing Multi-Format Storage Of Full And Reduced Resolution Images” filed by Kuchta et al. on Mar. 15, 1990, can be used. The evaluation content can optionally be stored in a memory such as memory 40. The evaluation content can be adapted to be provided to an optional display driver 28 that can be used to drive display 30. Alternatively, the evaluation content can be converted into signals that can be transmitted by signal processor 26 in a form that directly causes display 30 to present the evaluation images. Where this is done, display driver 28 can be omitted.
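One way the resolution reduction described above might be sketched is simple stride subsampling, one conventional reduction technique. This is an illustrative sketch only; the function name and the subsampling approach are assumptions for the example, not the resampling method of the cited patent.

```python
def make_evaluation_image(pixels, display_w, display_h):
    """Form evaluation content for a low-resolution display by sampling
    the high-resolution image at regular strides (simple subsampling)."""
    h, w = len(pixels), len(pixels[0])
    ys = [int(y * h / display_h) for y in range(display_h)]  # source rows
    xs = [int(x * w / display_w) for x in range(display_w)]  # source columns
    return [[pixels[y][x] for x in xs] for y in ys]

# An 8x8 "sensor" image reduced to fit a hypothetical 2x2 display.
sensor = [[r * 8 + c for c in range(8)] for r in range(8)]
evaluation = make_evaluation_image(sensor, 2, 2)
```

The evaluation content retains the overall appearance of the captured image at the display's lower resolution, which is the correspondence the passage requires.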
In the embodiment shown in
As noted above, the capture process is executed in response to controller 32 determining that a trigger condition exists. In the embodiment of
During capture and/or during an optional verification process, the image content or associated evaluation content is presented on display 30 so that users can verify that image content being captured or image content that has been captured has an acceptable appearance.
When image content sharing device 10 is in any mode other than the image content presentation mode (step 70), controller 32 is adapted to receive user input signals from user input system 34 (step 72) and to take a standard action in response to the user input (step 76). However, as will be explained in greater detail below, when image content sharing device 10 is in an image content presentation mode (step 70) and detects a user input signal indicating that a user input action has been taken (step 76), controller 32 executes a sharing operation. In accordance with the invention, the sharing operation comprises determining a destination for transmitting digital image content that is currently being presented on display 30 (step 78) and arranging for such content to be transmitted to that destination (step 80). It will be appreciated that this approach enables a user to arrange for image content to be shared with a selected user by making a single user input action. This approach also offers the advantage of not requiring that image content sharing device 10 incorporate designated user inputs to allow for such functionality and thus enables image content sharing device 10 to provide this functionality without unnecessarily increasing the size or complexity of the image sharing device.
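The mode-dependent handling of a single user input signal described above can be sketched as follows. This is a hypothetical sketch, not the patent's implementation; the mode names, key values, callbacks and look-up table contents are all assumptions introduced for the example.

```python
def handle_input(mode, key, destinations, transmit, standard_action):
    """Respond to the same user input signal differently depending on
    the current mode. In the image content presentation mode a key
    press selects a destination and triggers transmission; in any other
    mode the key keeps its ordinary function (e.g. dialing a digit)."""
    if mode == "presentation":                  # presentation mode branch
        destination = destinations.get(key)     # determine the destination
        if destination is not None:
            return transmit(destination)        # arrange transmission
        return None                             # key has no destination mapped
    return standard_action(key)                 # standard response in other modes

sent = []
result = handle_input(
    "presentation", "1",
    {"1": "mike@example.com"},                  # hypothetical destination LUT
    transmit=lambda d: sent.append(d) or "sent to " + d,
    standard_action=lambda k: "dialed " + k,
)
```

In a mode other than presentation, the same key "1" would instead yield the standard action (here, dialing), which is the dual-use behavior the passage describes.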
Controller 32 can determine that it is to enter into an image presentation mode when it detects any condition in which controller 32 is to operate in a mode of operation wherein content is to be presented on display 30 or on any other display with which image content sharing device 10 is associated (step 70). For example, controller 32 can enter an image content presentation mode when a user of image content sharing device 10 actuates mode select button 67 to select an image content presentation mode wherein digital image content is presented on display 30. Alternatively, where image content sharing device 10 comprises an image capture system 22, image content can be presented during capture, or during a verification process after capture. It will be appreciated that an image content presentation mode can be entered in other ways.
If during the image content presentation mode, controller 32 detects a user input signal from user input system 34 indicating that a user has made a user input action, such as where controller 32 detects a signal indicating that as illustrated in
Controller 32 determines destination information for the currently presented image content based upon which key is pressed (step 76). There are two ways in which this can be done. In the embodiment of
There are a variety of ways in which controller 32 can determine destination information based upon the detected user input action. In the embodiment of
Such a look up table or other data structure can be created manually, by entering information that defines associations between particular user input actions and particular destinations or groups of destinations using keypad 69 or some other type of user input system 34. Alternatively, associations between particular destinations and particular user input actions can be established using a personal computer or other convenient input device. The personal computer or other convenient input device can then format the associations into a LUT or other data structure that provides such associations and transfer the associations to the image content sharing device.
The LUT or other data structure can also be automatically established and/or supplemented by automatically building associations between specific user input actions and remote destinations that have shared digital image content with image content sharing device 10 or with other devices such as a personal computer to which image content sharing device 10 is commonly connected. For example, controller 32 and/or communication circuit 54 can be adapted to automatically extract destination information from communications that are used to send image content to image sharing device 10 and can build a LUT or other data structure that associates a user input action with such destinations. Such a system can be further adapted to organize the LUT based upon the frequency and/or the nature of such sharing. For example, destinations can be prioritized or otherwise organized so that the LUT or other data structure associates the most convenient forms of user input action with destinations that are more frequently used for image content sharing. In another example, destinations can be prioritized or otherwise organized so that the LUT or other data structure associates particular forms of user input action with particular destinations based upon the nature of image content exchanges between the image sharing device and the destinations.
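The frequency-based prioritization described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the idea of ranking senders by how often they have shared content and assigning them to keys in order is taken from the passage, but the function name, the key labels and the addresses are hypothetical.

```python
from collections import Counter

def build_share_lut(incoming_senders, keys):
    """Automatically build a key -> destination look-up table, assigning
    the most convenient keys to the destinations that have shared image
    content with the device most frequently."""
    ranked = [sender for sender, _ in Counter(incoming_senders).most_common()]
    return dict(zip(keys, ranked))

# Destination information extracted from incoming image-sharing communications.
history = ["mom@example.com", "mike@example.com", "mom@example.com"]
lut = build_share_lut(history, keys=["1", "2", "3"])
```

Here the most frequent sender is bound to key "1", the most convenient input action in this hypothetical layout; a real device could refine the ranking using the nature of the exchanges as the passage also suggests.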
Once a LUT or other data structure associating particular user input actions with destinations is defined, controller 32 will monitor the user input signal from user input system 34 to detect such user input actions when image content is presented. In the example embodiment illustrated in
During the sharing process, controller 32 further arranges for the digital image content to be shared with each destination (step 80). In one embodiment of the invention, this is done by controller 32 causing communication circuit 54 to establish a wired or wireless communication link with each destination and to transfer the digital image content to each destination directly, by way of a server such as a telecommunication provider, an internet server, a wired or wireless communication server, or a network of retail kiosks or other commercial terminals providing a communication path between image content sharing device 10 and a destination, or by way of a third party provider.
In another example embodiment, illustrated in
In embodiments where an intermediate device 90 is used to transfer digital image content to a remote server, controller 32 can be adapted to provide information other than physical or virtual address information, phone numbers or the like, in order to cause intermediate device 90 to transfer the digital image content to the destination. In one example embodiment of this type, where an image content sharing device 10 such as a digital camera is provided that is adapted to upload images when docked in a docking station associated with a personal computer, controller 32 can be adapted to arrange for the digital image content to be transferred to a destination by causing digital image content to be associated with destination metadata that is determined based upon the detected user input action.
In this embodiment, intermediate device 90 has an intermediate device controller 92, an intermediate device communication circuit 94 and an intermediate device memory 96 with an intermediate look up table that associates the destination data with addresses or other information that can be used to help transmit the digital image content to a preferred destination. Accordingly, in such an embodiment, controller 32 need only provide digital image data and data designating a destination from which intermediate device controller 92 can determine an actual address or other information, and can cause intermediate device communication circuit 94 to transmit the digital image content to addresses that are determined in accordance with the destination data provided by controller 32. Such destination information can comprise any form of information that can be provided to intermediate device controller 92 to indicate destinations that were selected by a user of image content sharing device 10 during the image content presentation mode. As shown in
The destination data can comprise, for example, data that characterizes any user input signal received by controller 32 when in an image content review mode, or data that characterizes only selected portions of any such user input signal. Such destination data can also comprise, for example, a code representing a portion of the intermediate device look up table, or an image, graphic symbol or character representing a person or other destination. Such an approach can be useful where, for example, a user does not want to store actual address information in a portable image content sharing device 10 that could be lost or stolen.
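A minimal sketch of the intermediate look up table is given below: the sharing device transmits only an opaque destination token, and the intermediate device resolves that token to an actual address before sending, so no address information need be stored on the portable device. The tokens, transports and addresses shown are illustrative assumptions.

```python
# Hypothetical intermediate look up table held in the intermediate device's
# memory (memory 96 in the text). The sharing device never sees these
# addresses; it supplies only the opaque token on the left.

INTERMEDIATE_LOOKUP = {
    "dest:mike":   ("email", "mike@example.com"),
    "dest:family": ("mms",   "+1-555-0100"),
}

def resolve_destination(token):
    """Map a destination token to a (transport, address) pair, or None if unknown."""
    return INTERMEDIATE_LOOKUP.get(token)
```

Because resolution happens only on the intermediate device, loss or theft of the portable device exposes tokens but no actual addresses.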
It may be useful for image content sharing device 10 to provide a user with a graphic indication to confirm that the image will be sent to the designated recipient. In the embodiment of
As is also shown in the example of
It will be appreciated that destination indicators 106 can be modified to generate a distinctly different output after selection than before, such as by changing the appearance of an image presented on image display 108a when, for example, key 69a is pressed. This can be used to provide a graphic indication of the destinations to which the presented image content is to be transmitted, as discussed above.
In certain alternate embodiments, a portion of a display 30 can be used to provide destination indicators 106, or, alternatively, destination indicators 106 can be provided on display 30 as an overlay while image content is also presented on display 30. In still other alternate embodiments, destination indicators 106 can provide other forms of signaling such as audio and tactile signals to indicate to a user that a particular user input action will cause presented image content to be transmitted to a particular destination.
As noted above, user input system 34 can be provided that is adapted to sense audio signals, such as by monitoring an audio type sensor 36 and adapting user input signals to incorporate sensed audio data. Where this is done, the user input signal can be representative of such audio signals, and controller 32 can be adapted, alone or in combination with signal processor 26 or with other known circuits and systems, to interpret the audio signals when controller 32 is in an image content presentation mode and to determine when the audio signals indicate that an image is to be sent to a particular destination. For example, in one embodiment, the command “send to mike” can cause controller 32 to arrange for the digital image content to be transmitted to a destination associated with “Mike”. In another example embodiment, the command “press key 1” or “quick send key 1” can cause controller 32 to send digital image content to a destination or group of destinations associated with the number 1 key 69a on, for example, key pad 69 of the embodiment of
Such audio commands can be interpreted using conventional voice recognition technology and algorithms to convert audio signals into known commands or data, where controller 32 is adapted for such a purpose. Alternatively, a user can preprogram image content sharing device 10 with certain patterns of audio signals comprising spoken words, which can be stored in a memory such as memory 40. In this latter alternative, when controller 32 is operated in an image content presentation mode, controller 32 is adapted to monitor audio signals proximate to image content sharing device 10 for audible signals that conform to such prerecorded patterns. Where such signals are sensed, controller 32 can be adapted to execute a response to such a command.
It will be appreciated that, even during an image content presentation mode, it is not necessary that each transducer or other device used in user input system 34 be dedicated to the sharing function. Instead, it will often be the case that selected ones of the user input transducers or other devices will not be used for such a purpose but will instead provide a consistent functionality across many modes of operation. For example, in the embodiment of
Image content sharing devices 10 that capture video type digital image content in real time and present an evaluation video stream of the video type digital image content in real time are well known. This too is one example of an image presentation mode. In one embodiment of the invention, controller 32 can be adapted to monitor or detect user input signals and to determine a destination to which the currently presented video stream is to be sent. This enables rapid sharing of video image content in real time with a minimum amount of user involvement in making arrangements for sharing the images, particularly where making such arrangements could interrupt the capture of the digital image content or where the user of the image content sharing device cannot risk being distracted by the task of making such arrangements.
In the embodiments above, image content sharing device 10 has been generally illustrated in the form of digital camera/cellular telephone 12. However, image content sharing device 10 can comprise any form of device meeting the limitations of the claims, including but not limited to conventional digital cameras, personal computers, personal digital assistants, digital picture frames, digital photo albums, and the like.
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.