The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate an implementation of the invention and, together with the description, explain the invention. In the drawings,
First camera 110 and second camera 120 may be any kind of camera that is capable of capturing images digitally, i.e., converting light into electric charge and processing it into electronic signals, so-called picture data. In this document, picture data captured from one picture is defined as one frame, which will be described in more detail further on. Such a camera may be a video camera, a camera for still photography, or an image sensor, such as a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) sensor. First and second cameras 110 and 120 may also include a single-chip camera, i.e., a camera in which all logic is placed in the camera module. A single-chip camera only requires a power supply, a lens, and a clock source in order to operate.
Second camera 120 may be the same camera type as first camera 110 or another camera type. As shown in
Electronic device 100 may be configured to store frames and/or send frames to displaying device 150 via a communication network, such as a radio access network. The latter is implementable, for example, in video telephony. The frames may or may not be sent in real time. Displaying device 150 may include any device capable of displaying frames, such as a communication device or a mobile phone, including or being configured to be connected to a display 160. Display 160 may be configured to display frames received from electronic device 100.
A user of electronic device 100 may operate first and second cameras 110, 120, substantially simultaneously, to capture images. Captured images may be stored and/or sent, for example, to displaying device 150. Electronic device 100 may be positioned such that first camera 110 points towards a first object 170 while second camera 120 points towards a second object 180.
An exemplary scenario in which implementations of the camera function may be used includes a traveller, Amelie, using a mobile phone including first and second cameras 110, 120 to call a friend, Bert, who may be using a mobile phone. Assume Amelie wishes to show herself to Bert, in front of a building or other landmark, in the display of Bert's mobile phone. In this case, first camera 110 may capture an image that includes first object 170, e.g., Amelie, and second camera 120 may substantially simultaneously capture an image that includes second object 180, e.g., the building.
In another exemplary scenario, assume a deaf person, Charlotte, wishes to communicate using sign language while using a mobile phone including first and second cameras 110, 120 to call her friend, David, who may be using a mobile phone. Assume Charlotte wishes to show a golf club or other item to David, while discussing the item with David, in the display of David's mobile phone. In this case, first camera 110 may capture images that include Charlotte's hands signing in sign language, and second camera 120 may substantially simultaneously capture images that include another object, the golf club, for instance.
A further example is a reporter, Edwin, who may use his mobile phone including first and second cameras 110, 120 to transmit a report to a news channel. Edwin may report a story and the news channel may broadcast the transmitted video call directly, for example, in real time. In this case, first camera 110 may capture images that include Edwin, and second camera 120 may substantially simultaneously capture images that include another object, e.g., the news scene visible to Edwin.
A yet further example is a researcher, Aase, who may use a mobile phone including first and second cameras 110, 120 to contact a colleague, Guillaume. Aase may wish to discuss research findings with Guillaume, for instance. In this case, first camera 110 may capture images that include Aase, and second camera 120 may substantially simultaneously capture images that include another object, e.g., notes regarding the research findings, lying on a table in front of Aase.
Electronic device 100 may include a frame merging unit 210, to which each one of first and second cameras 110, 120 may transmit the respective first and second frames. Frame merging unit 210 may merge the first frame (including first object 170) and the second frame (including second object 180) to form a single merged frame. In one implementation, the merge may be performed by making one of the frames smaller and placing the reduced frame on top of the larger frame. For example, a display of the merged frame may resemble a picture-in-picture frame. An example of merged images can be seen in display 160 depicted in
In one implementation, arrangement of the first and second frames to form the merged frame may include placement of the two frames in any configurable arrangement. For example, one frame may be superimposed over another. Another arrangement includes a dual-frame, split-screen display, e.g., side-by-side, top/bottom, etc. Another arrangement includes cropping of one or both of the frames. In one implementation, the user of electronic device 100 may select the arrangement of the frames relative to one another within the merged frame for display. In another implementation, a user of displaying device 150 may select the arrangement of the frames for display. The merged frame arrangement may be altered before, during, or after transmission, for example, during a call. The arrangement of the merged frames may be varied as a function of time.
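The superimposed ("picture-in-picture") arrangement described above can be sketched in a few lines. The following is a minimal illustration, not the claimed implementation: frames are modeled as lists of pixel rows, and the function name, pixel representation, and placement parameters are hypothetical.

```python
def merge_picture_in_picture(large, small, top=0, left=0):
    """Composite a smaller frame onto a larger one (picture-in-picture).

    Frames are modeled as lists of rows of pixel values; the small
    frame overwrites the large frame's pixels starting at (top, left).
    """
    merged = [row[:] for row in large]          # copy the large frame
    for r, small_row in enumerate(small):
        for c, pixel in enumerate(small_row):
            merged[top + r][left + c] = pixel   # superimpose small frame
    return merged

# Example: a 4x4 "large" frame with a 2x2 "small" frame in one corner.
large = [[0] * 4 for _ in range(4)]
small = [[1, 1], [1, 1]]
merged = merge_picture_in_picture(large, small, top=0, left=2)
```

The same routine could place the small frame at any configurable position; split-screen arrangements would instead allocate disjoint regions of the merged frame to each source.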
Video may consist of several individual frames which are displayed to a user substantially in a time-dependent sequence. First camera 110 and second camera 120 may use the same clock 220, and preferably operate at the same frame rate. Clock 220 may be used to synchronize and operate first camera 110 and second camera 120, and clock 220 may be used to instruct when to retrieve a frame. First camera 110 may include a first sensor 222 configured to generate a notification when the first frame is ready for merging, and second camera 120 may include a second sensor 224 configured to generate a notification when the second frame is ready for merging. The notifying may be performed by, for example, sending a signal to frame merging unit 210. The frame rate is defined herein as the number of frames that are displayed per unit time, e.g., per second. For instance, 15 frames per second may be used. Frame merging unit 210 may wait until it has received the frame ready signal from both cameras before merging the frames including the captured images. First camera 110 may include or be configured to connect to a first buffer 230 or memory, and second camera 120 may include or be configured to connect to a second buffer 240, into which the frames may be stored prior to being merged in the merge process. In another implementation, first camera 110 and second camera 120 may operate at frame rates that differ. In that case, the camera having the highest frame rate may set the pace. Then, when a frame is ready from the camera with the highest frame rate, that frame will be merged with the most recent frame from the camera with the lower frame rate.
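The wait-for-both behavior of frame merging unit 210 can be sketched as follows. This is a simplified, hypothetical model (the class name, camera labels, and notification method are illustrative, not part of the described device): the unit buffers the latest frame from each camera and merges only once both cameras have signalled that a frame is ready.

```python
class FrameMergingUnit:
    """Sketch of a merging unit that waits for a frame-ready
    notification from both cameras before merging."""

    def __init__(self):
        self.latest = {"first": None, "second": None}  # buffered frames
        self.ready = {"first": False, "second": False}
        self.merged = []  # sequence of merged frames produced so far

    def notify_frame_ready(self, camera, frame):
        """Called when a camera's sensor signals a frame is ready."""
        self.latest[camera] = frame
        self.ready[camera] = True
        # Merge only once both cameras have signalled a ready frame.
        if all(self.ready.values()):
            self.merged.append((self.latest["first"], self.latest["second"]))
            self.ready = {"first": False, "second": False}

unit = FrameMergingUnit()
unit.notify_frame_ready("first", "frame A1")   # waits for second camera
unit.notify_frame_ready("second", "frame B1")  # both ready -> merge
```

With a shared clock and equal frame rates, the two notifications arrive in lockstep; with differing rates, the faster camera's notifications would instead trigger the merge, re-using the slower camera's buffered frame.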
The image dimensions of the frames to be used in a communication session may be determined via a negotiation at the start-up of the communication session between the portable communication device and the receiving communication device, which may be standardized. The merged frame may be of the same resolution as the negotiated one. For standard video telephony, this may be accomplished using, for example, Quarter Common Intermediate Format (QCIF) (176×144 pixels). The smaller frame within the merged frame may have any size up to about QCIF, but preferably the smaller frame would have a size of about a quarter of the merged frame; for video telephony, for example, that may be about QQCIF (88×72 pixels).
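The relationship between the negotiated frame size and the small inset frame reduces to simple arithmetic: halving each dimension yields a frame with a quarter of the area, which is how QQCIF relates to QCIF. A minimal sketch (the function name is illustrative):

```python
QCIF = (176, 144)  # negotiated frame size for standard video telephony

def quarter_size(size):
    """Halve each dimension, giving a frame a quarter of the area."""
    width, height = size
    return (width // 2, height // 2)

QQCIF = quarter_size(QCIF)  # (88, 72): a suitable small-frame size
```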
For example, two different approaches to obtaining the smaller frame may be used. A first technique is to set the resolution of the camera to the small size at the outset. Another technique is to resize the output data from the camera when merging the two frames. For video telephony, which may include real-time communication, it is beneficial to minimize time delays between the endpoints. Since resizing is time-consuming, setting the resolution of the camera may be the preferable solution. However, the resolution of the camera may be changed if the camera which produces the small frame is to be switched. The communication might, for example, change such that the frame from first camera 110 becomes more important to the user of displaying device 150. Then it may be desirable to switch so that the frame from first camera 110 will be visible in the large area and the frame from second camera 120 in the small area.
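The second technique above, resizing camera output at merge time, might in its simplest form be nearest-neighbour decimation. The following is a hypothetical sketch (function name and toy frame are illustrative), shown mainly to make concrete why resizing costs time per frame compared with configuring the camera resolution up front:

```python
def downscale(frame, factor=2):
    """Naive nearest-neighbour decimation: keep every `factor`-th pixel
    of every `factor`-th row. Factor 2 maps QCIF (176x144) to
    QQCIF (88x72)."""
    return [row[::factor] for row in frame[::factor]]

# A toy 4x4 frame downscaled to 2x2.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
small = downscale(frame, factor=2)
```

Even this cheapest form touches every retained pixel on every frame, whereas setting the camera to the small resolution makes the frame arrive already sized for the inset area.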
Electronic device 100 may further include an encoder 250. Encoder 250 may be used before sending the frames to displaying device 150. The frames, including the merged frames, may be sent to encoder 250 for encoding. Encoder 250 may read the merged frames and encode the merged frames according to a suitable standard, such as H.263, MPEG-4, or another type of encoding. Encoder 250 may not detect any difference between a non-merged frame and a merged frame, therefore permitting any encoder to be used.
Electronic device 100 may include a transmitter 260 to be used if the user of electronic device 100 wishes to send the merged frames to displaying device 150. A communication session, such as a video telephony session, may be started between electronic device 100 and displaying device 150.
The merged, and possibly encoded, frames may then be transmitted to displaying device 150, using the set-up communication session between electronic device 100 and displaying device 150, as indicated by a dashed arrow 190 in
Displaying device 150 may receive the merged frames and decode the merged frames, if the merged frames have been encoded. Displaying device 150 may display a merged picture based on the received merged frames comprising first object 170 and second object 180 in display 160. The merged picture may be displayed in accordance with the image size and/or dimensions of the frames negotiated during start-up of the communication session.
Components associated with each of respective first and second cameras 110, 120 may generate a notification when a frame is ready for merging (act 302).
The first frame (or sequence of first frames) and the second frame (or sequence of second frames) may be merged into one merged frame (or sequence of merged frames), which merged frame (or sequence of merged frames) may include both the captured image of first object 170 and the captured image of second object 180 (act 303). The merge process may be achieved by resizing one of the first and second frames to a smaller size and arranging the smaller of the two frames over the larger of the two frames, for example. The smaller frame may be obtained by, e.g., setting the resolution in the associated camera to the small size or resizing the output data from the associated camera when merging the two frames. The merging may be performed after both first camera 110 has generated a notification that the first frame is ready for merging and second camera 120 has generated a notification that the second frame is ready for merging. First camera 110 and second camera 120 may operate at different frame rates. In that case, the pace of the highest frame rate may be used as the merging rate.
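When the two cameras run at different frame rates and the faster rate sets the merging pace, each faster-camera frame is paired with the most recent frame the slower camera has produced. A hedged sketch of that pairing, using hypothetical capture timestamps in milliseconds (the function name and timestamps are illustrative):

```python
import bisect

def pair_at_fast_rate(fast_times, slow_times):
    """Pair each capture time of the faster camera with the index of
    the most recent frame the slower camera has produced so far.
    Fast frames captured before any slow frame exists are skipped.
    Timestamps are assumed sorted ascending."""
    pairs = []
    for i, t in enumerate(fast_times):
        j = bisect.bisect_right(slow_times, t) - 1  # latest slow frame <= t
        if j >= 0:
            pairs.append((i, j))
    return pairs

# A ~15 fps camera (about 66 ms apart) paired with a ~7.5 fps camera:
pairs = pair_at_fast_rate([0, 66, 133, 200], [0, 133])
```

Each slow-camera frame is thus re-used in the merged output until the next one becomes ready, so the merged sequence advances at the faster camera's rate.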
The merged frame may be sent to displaying device 150, which may simultaneously display, via its display 160, the captured image of first object 170 and the captured image of second object 180 (act 304). The merged frame may be sent via a communications network, for example, a radio access network. The merged frame or sequence of frames may be sent in real time to displaying device 150. This may, e.g., be used for video telephony communication with displaying device 150.
The present frame merging mechanism can be implemented through one or more processors, such as a processor 270 in electronic device 100 depicted in
It should be emphasized that the term "comprises/comprising," when used in this specification, is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
The present invention is not limited to the above-described preferred embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the invention, which is defined by the appended claims.
This application claims priority under 35 U.S.C. § 119 based on U.S. Provisional Application Ser. No. 60/828,091, filed Oct. 4, 2006, the disclosure of which is incorporated herein by reference.
Number | Date | Country
---|---|---
60828091 | Oct. 2006 | US