As technology advances, computing devices are becoming equipped with ever greater capabilities and features. Faster processors, larger memories, and improved network bandwidth have made resource-intensive functionalities possible even on mobile devices, such as smartphones, tablet devices, and the like. Video chatting, for example, has evolved substantially and has become an increasingly popular pastime. Users can now video chat not only on home or office computers, but also on smaller mobile devices. Though convenient, mobile devices typically have smaller display screens, leaving little room to display video feeds. In a typical video conversation, a live video feed of the user of a device is displayed along with the live video feeds of one or more other users, and the user's own live feed can take up a significant amount of space, or real estate, on the display, especially on a mobile device. Because a user's main focus during a video chat is the live video feeds of the other users, rather than his own, it can be advantageous to provide solutions for altering, or otherwise controlling, the display of a user's own video feed.
Moreover, as users continue to demand more from their mobile devices, such as displaying or presenting a large number of images, text items, and/or other like objects, it can be difficult for users to maneuver the display and identify the various objects. Thus, it can be advantageous to provide solutions for altering, or otherwise controlling, the display of objects on a display screen, such that a user can focus on certain objects while moving others out of focus or immediate view.
This relates to systems, methods, and devices for controlling the display of content.
In some embodiments, systems and methods for controlling the display of live video feeds are provided. This can include adjusting the display size of at least one of the live video feeds during a video chat. For example, a user can employ a device to conduct the chat with a remote user. The device can display the live video feed of the user himself as well as a live video feed of the remote user. Because the user may not necessarily desire to view his own feed during the chat, the display of his own feed can be altered when a predefined time elapses after the chat initiates. The alteration can, for example, include decreasing the display size of the user's own feed. As another example, the alteration can include completely removing the user's own feed from the display. In either example, the alteration can additionally include increasing the display size of the remote user's feed. Thus, more screen space can be made available for the video feeds of other users.
In some embodiments, systems and methods for controlling the display of objects are provided. This can include displaying objects in different regions of a display screen, and adjusting the display of the objects in the different regions in different manners. The regions can include a first region proximate the center of the screen, which can be designated as a foreground and/or focused region, and a second region surrounding the first region, which can be designated as a background and/or unfocused region. For example, a user can employ a device to conduct a group chat with a subset of users attending a multi-user online event. In this example, the members in the group chat can be represented by a first set of objects (e.g., live video feed windows or indicators) in a first region of the screen, and those who are in the event but who are not participating in the group chat, can be represented by a second set of objects (e.g., similar live feed windows or indicators) in a second region of the screen. As another example, the user can employ the device to view a plurality of images or text items, with some in the first region of the screen, and others in the second region of the screen. Because the number of displayed objects can be large, the display of the objects in the different regions can be altered in different manners in response to one or more user inputs (e.g., touch screen inputs, mouse click inputs, or the like). In at least one embodiment, the display size of objects in the first region can be increased, whereas the display size of objects in the second region can be substantially simultaneously decreased. Moreover, the objects in the first region can also be displaced toward the second region (e.g., moved away from the center of the screen), and the objects in the second region can be displaced toward the first region (e.g., moved toward the center of the screen). In at least one embodiment, the display quality of objects in the first region can be improved, whereas the display quality of the objects in the second region can be substantially simultaneously degraded. In any of these embodiments, the alteration of the display of the various objects can allow a user to focus on select objects, while moving others out of focus or immediate view.
In at least one embodiment, a method for controlling the display of video feeds during a video chat may be provided. The method may include displaying a video feed in a first manner, altering the display of the video feed when a predefined time elapses, and maintaining the altered display of the video feed until at least one predetermined event occurs.
In at least one embodiment, a method for controlling the display of objects may be provided. The method may include displaying a plurality of objects on a display screen. A first set of the plurality of objects may be displayed in a first region of the display screen, and a second set of the plurality of objects may be displayed in a second region of the display screen. The method may also include receiving a user input to adjust display of at least one of the first set of objects and the second set of objects, and adjusting display of the first set of objects and the second set of objects in different manners based on the user input.
In at least one embodiment, a system for controlling the display of video feeds during a video chat may be provided. The system may include a display configured to display video feeds, and a controller configured to instruct the display to display at least one video feed, cause the display to alter the display of the at least one video feed when a predefined time elapses, and direct the display to maintain the altered display of the at least one video feed until at least one predetermined event occurs.
In at least one embodiment, a system for controlling the display of objects may be provided. The system may include a display screen configured to display objects, and a controller configured to cause the display screen to display a plurality of objects such that a first set of the plurality of objects is displayed in a first region of the display screen, and a second set of the plurality of objects is displayed in a second region of the display screen. The controller may also be configured to receive a user input to adjust display of at least one of the first set of objects and the second set of objects, and direct the display screen to adjust the display of the first set of objects and the second set of objects in different manners based on the user input.
In at least one embodiment, a computer readable medium may be provided. The computer readable medium may include computer readable instructions that, when executed by an electronic device, cause the electronic device to display a video feed in a first manner, alter the display of the video feed when a predefined time elapses, and maintain the altered display of the video feed until at least one predetermined event occurs.
In at least one embodiment, a computer readable medium may be provided. The computer readable medium may include computer readable instructions that, when executed by an electronic device, cause the electronic device to display a plurality of objects on a display screen. A first set of the plurality of objects may be displayed in a first region of the display screen, and a second set of the plurality of objects may be displayed in a second region of the display screen. The computer readable medium may also include computer readable instructions that, when executed by an electronic device, cause the electronic device to receive a user input to adjust display of at least one of the first set of objects and the second set of objects, and adjust display of the first set of objects and the second set of objects in different manners based on the user input.
The above and other features of the present invention, its nature and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings.
In accordance with at least one embodiment, users can interact with one another via user devices. For example, each user can interact with other users via a respective user device.
User device 100 can include any suitable type of electronic device operative to communicate with other devices. For example, user device 100 can include a personal computer (e.g., a desktop personal computer or a laptop personal computer), a portable communications device (e.g., a cellular telephone, a personal e-mail or messaging device, a pocket-sized personal computer, a personal digital assistant (PDA)), or any other suitable device capable of communicating with other devices.
Control circuitry 101 can include any processing circuitry or processor operative to control the operations and performance of user device 100. Storage 102 and memory 103 can be combined, and can include one or more storage mediums or memory components.
Communications circuitry 104 can include any suitable communications circuitry capable of connecting to a communications network, and transmitting and receiving communications (e.g., voice or data) to and from other devices within the communications network. Communications circuitry 104 can be configured to interface with the communications network using any suitable communications protocol. For example, communications circuitry 104 can employ Wi-Fi (e.g., an 802.11 protocol), Bluetooth®, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communication systems), cellular networks (e.g., GSM, AMPS, GPRS, CDMA, EV-DO, EDGE, 3GSM, DECT, IS-136/TDMA, iDen, LTE, or any other suitable cellular network or protocol), infrared, TCP/IP (e.g., any of the protocols used in each of the TCP/IP layers), HTTP, BitTorrent, FTP, RTP, RTSP, SSH, Voice over IP (VOIP), any other communications protocol, or any combination thereof. In at least one embodiment, communications circuitry 104 can be configured to provide wired communications paths for user device 100.
Input interface 105 can include any suitable mechanism or component capable of receiving inputs from a user. In at least one embodiment, input interface 105 can include a camera 106 and a microphone 107. Input interface 105 can also include a controller, a joystick, a keyboard, a mouse, any other suitable mechanism for receiving user inputs, or any combination thereof. Input interface 105 can also include circuitry configured to at least one of convert, encode, and decode analog signals and other signals into digital data. One or more mechanisms or components in input interface 105 can also be electrically coupled with control circuitry 101, storage 102, memory 103, communications circuitry 104, any other suitable components within device 100, or any combination thereof.
Camera 106 can include any suitable component capable of detecting images. For example, camera 106 can detect single pictures or video frames. Camera 106 can include any suitable type of sensor capable of detecting images. In at least one embodiment, camera 106 can include a lens, one or more sensors that generate electrical signals, and circuitry that processes the generated electrical signals. These sensors can, for example, be provided on a charge-coupled device (CCD) integrated circuit. Camera 106 can be electrically coupled with control circuitry 101, storage 102, memory 103, communications circuitry 104, any other suitable components within device 100, or any combination thereof.
Microphone 107 can include any suitable component capable of detecting audio signals. For example, microphone 107 can include any suitable type of sensor capable of detecting audio signals. In at least one embodiment, microphone 107 can include one or more sensors that generate electrical signals, and circuitry that processes the generated electrical signals. Microphone 107 can also be electrically coupled with control circuitry 101, storage 102, memory 103, communications circuitry 104, any other suitable components within device 100, or any combination thereof.
Output interface 108 can include any suitable mechanism or component capable of providing outputs to a user. In at least one embodiment, output interface 108 can include a display 109 and a speaker 110. Output interface 108 can also include circuitry configured to at least one of convert, encode, and decode digital data into analog signals and other signals. For example, output interface 108 can include circuitry configured to convert digital data into analog signals for use by an external display or speaker. Any mechanism or component in output interface 108 can be electrically coupled with control circuitry 101, storage 102, memory 103, communications circuitry 104, any other suitable components within device 100, or any combination thereof.
Display 109 can include any suitable mechanism capable of displaying visual content (e.g., images or indicators that represent data). For example, display 109 can include a thin-film transistor liquid crystal display (LCD), an organic liquid crystal display (OLCD), a plasma display, a surface-conduction electron-emitter display (SED), an organic light-emitting diode (OLED) display, or any other suitable type of display. Display 109 can display images stored in device 100 (e.g., stored in storage 102 or memory 103), images captured by device 100 (e.g., captured by camera 106), or images received by device 100 (e.g., images received using communications circuitry 104). In at least one embodiment, display 109 can display communication images received by communications circuitry 104 from other devices (e.g., other devices similar to device 100). Display 109 can be electrically coupled with control circuitry 101, storage 102, memory 103, communications circuitry 104, any other suitable components within device 100, or any combination thereof.
Speaker 110 can include any suitable mechanism capable of providing audio content. For example, speaker 110 can include a speaker for broadcasting audio content to a general area (e.g., a room in which device 100 is located). As another example, speaker 110 can include headphones or earbuds capable of broadcasting audio content directly to a user in private. Speaker 110 can be electrically coupled with control circuitry 101, storage 102, memory 103, communications circuitry 104, any other suitable components within device 100, or any combination thereof.
In at least one embodiment, the user's own video image can be minimized. In these embodiments, a system can cause the image of a user's webcam to be displayed at one scale at the start of a chat or conversation, and can subsequently cause the image to be automatically reduced to a different (e.g., smaller) scale or to a button, or even to be removed entirely, after a period of time elapses.
The system can allow the image to be re-enlarged (e.g., to its original scale) or to reappear in response to a touch, click, or other user input or activation. The image can be re-enlarged or can reappear for a short duration that may be just enough to allow the user to confirm his or her view of self (e.g., to allow the user to confirm that the view being published is to the user's satisfaction), and then may be reduced or made to disappear again thereafter.
In at least one embodiment, the system can include one or more algorithms or logic that detect, or assist in detecting, substantial movement of the user, the appearance of a second head (e.g., of a person other than the user), the disappearance of the head of the user (e.g., when he or she walks away from the device), significant changes in the overall brightness or darkness, or the like. In response to any of these, the system can similarly re-enlarge or redisplay the webcam image, and can subsequently reduce the image or remove the image from display after a predefined time elapses.
In some embodiments, the system can be turned off or deactivated for those users who always desire to see their own self-image during a conversation. In at least one embodiment, the system can also allow a user to provide an input (e.g., by holding down a particular button, or by clicking, tapping, or otherwise selecting a reduced webcam image) to override the short term enlargement or reappearance of the image. In some embodiments, this action can additionally cause the conversation to end.
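By way of a non-limiting illustration, the minimize-then-briefly-restore behavior described above can be sketched in code. The following TypeScript sketch assumes a browser environment in which the user's own feed is rendered in an HTML element; the class name `SelfViewController`, the timing values, and the scaling approach are illustrative assumptions, not required implementations.

```typescript
// Hypothetical sketch of the self-view minimization described above.
// Assumes the local webcam feed is rendered in an HTMLElement; all
// names, timing values, and scale factors are illustrative.
class SelfViewController {
  private shrinkTimer: number | undefined;
  private restoreTimer: number | undefined;

  constructor(
    private readonly selfView: HTMLElement,
    private readonly shrinkAfterMs = 15_000, // time before auto-minimizing
    private readonly restoreForMs = 5_000,   // brief self-confirmation window
    private enabled = true,                  // feature can be turned OFF
  ) {}

  /** Call when the conversation starts: show full size, then minimize. */
  start(): void {
    this.showFull();
    if (this.enabled) {
      this.shrinkTimer = window.setTimeout(() => this.minimize(), this.shrinkAfterMs);
    }
  }

  /** Call on tap/click of the minimized self view: restore briefly. */
  onUserActivation(): void {
    if (!this.enabled) return;
    window.clearTimeout(this.restoreTimer);
    this.showFull();
    this.restoreTimer = window.setTimeout(() => this.minimize(), this.restoreForMs);
  }

  /** Deactivate for users who always want to see their own image. */
  setEnabled(enabled: boolean): void {
    this.enabled = enabled;
    if (!enabled) {
      window.clearTimeout(this.shrinkTimer);
      window.clearTimeout(this.restoreTimer);
      this.showFull();
    }
  }

  private showFull(): void {
    this.selfView.style.display = 'block';
    this.selfView.style.transform = 'scale(1)';
  }

  private minimize(): void {
    // Reduce to a smaller scale; 'display: none' would remove it entirely.
    this.selfView.style.transform = 'scale(0.25)';
  }
}
```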
As shown in the accompanying figures, a screen 200 of a device can display a live video feed 210 of the user of the device (e.g., in a window 212), as well as a live video feed 220 of a remote user, during a video conversation.
Because the user of the device may not necessarily need to view a live feed of himself, it is advantageous to automatically alter the display of the user's own video feed during the video conversation. Thus, in at least one embodiment, a system can be provided to control how screen 200 displays video feeds 210 and 220. The system can be implemented as software, hardware, or any combination thereof. For example, the system can be implemented as software, can be executed by a processor of the device (e.g., control circuitry 101 of device 100), and, in at least one embodiment, can cause screen 200 to alter the display size of one or more of video feeds 210 and 220.
The system can be configured to automatically alter the display in any suitable manner. For example, the system can be configured to cause video feed 210 to be scaled, or otherwise displayed on screen 200 in a smaller size, after a predetermined time elapses from the initiation of the conversation. The predetermined time can be any suitable value (e.g., 10 seconds, 15 seconds, 1 minute, etc.), and can be set by the user or can be preset and stored in the device by the provider of the device or the system. In embodiments that allow the user to set the predetermined time, the system can include a user interface (e.g., a settings or administrative window) that can be displayed on screen 200, and that can allow the user to modify various system settings.
Additionally, or alternatively, the system can cause video feed 210 to be entirely removed from screen 200 (e.g., by making it become invisible or non-existent). In this way, the user can focus entirely on the video feed 220 without being distracted by his own video.
In at least one embodiment, the system can initially cause video feed 210 to be reduced in size, and can subsequently cause video feed 210 to be removed from screen 200 entirely after a further predefined time elapses.
Because the user may occasionally wish to review his live video during the conversation, the system can be configured to rescale or increase the display size of video feed 210 after it has been reduced in size or removed from screen 200.
In at least one embodiment, the system can monitor for one or more user inputs to determine whether to increase the display size of video feed 210 or cause video feed 210 to reappear on screen 200. A suitable user input can include a touch screen input, a mouse click input, a voice command, or the like. In response to receiving a suitable user input, the system can cause screen 200 to either increase the display size of video feed 210 (if it has only previously been decreased in size, but not entirely removed), such as back to its original size, or cause video feed 210 to reappear on screen 200 (if it has previously been removed entirely).
Additionally, or alternatively, the system can monitor for changes in video feed 210, and can determine whether to re-enlarge or re-display video feed 210 based on these changes. More particularly, the system can include one or more image or video analysis algorithms configured to analyze the video captured by the camera of the device. As one example, the system can include one or more algorithms for detecting substantial movement in the captured video feed (e.g., substantial movement of the user, such as getting up). As another example, the system can include one or more algorithms for detecting the presence of additional users (e.g., the presence of a head of a second person within the camera's field of view). As yet another example, the system can include one or more algorithms for detecting changes in the brightness or darkness of the captured feed. In response to detecting one or more suitable changes in the captured feed, the system can cause screen 200 to either increase the display size of video feed 210 (if it has only previously been decreased in size, but not entirely removed) or cause video feed 210 to reappear on screen 200 (if it has previously been removed entirely).
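The description above does not prescribe particular detection algorithms. As one hedged illustration of the brightness trigger alone, the following TypeScript sketch periodically samples the captured feed into a canvas and compares the average luminance between samples; the threshold, sampling interval, downsample size, and function name are assumptions. A movement or second-head trigger would typically build on frame differencing or a face detector and is not shown.

```typescript
// One plausible approach to the brightness-change trigger: periodically
// downsample the captured feed into a canvas, compute average luminance,
// and fire a callback when it shifts by more than a threshold.
function watchBrightness(
  video: HTMLVideoElement,
  onSignificantChange: () => void,
  threshold = 40,    // 0-255 luminance units; illustrative value
  intervalMs = 500,
): () => void {
  const canvas = document.createElement('canvas');
  canvas.width = 64;  // a coarse downsample suffices for a global average
  canvas.height = 48;
  const ctx = canvas.getContext('2d')!;
  let previous: number | undefined;

  const id = window.setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
    let sum = 0;
    for (let i = 0; i < data.length; i += 4) {
      // Rec. 601 luma approximation from the RGB samples.
      sum += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    }
    const average = sum / (data.length / 4);
    if (previous !== undefined && Math.abs(average - previous) > threshold) {
      onSignificantChange(); // e.g., re-enlarge or re-display the self view
    }
    previous = average;
  }, intervalMs);

  return () => window.clearInterval(id); // call to stop watching
}
```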
In at least one embodiment, the system can cause video feed 210 to be re-enlarged or re-displayed for only a short predefined time or duration (e.g., 5 seconds, 10 seconds, etc.). This predefined time can again be set by the system or the user (e.g., via a user interface), and can be sufficient to allow the user to review or confirm the live video of himself. After this predefined time elapses, the system can again cause video feed 210 to be one or more of reduced in size and removed from screen 200.
It should be appreciated that the feature of automatically altering the display of video feed 210 can be turned ON or OFF by the user (e.g., via a system user interface). For example, if the user desires to view his own video feed at all times during the conversation, the user can turn this feature OFF.
In at least one embodiment, the system can also be configured to allow the user to end a conversation session by selecting video feed 210 or window 212. More particularly, the system can be configured to end the video conversation if a suitable user selection (e.g., via touch screen selection, mouse click selection, or the like) of window 212 is received.
As briefly indicated above, it can be advantageous to provide systems and methods for controlling the display of objects on a display screen such that a user can focus on select objects, while moving others out of focus or immediate view.
In at least one embodiment, a system can divide a screen (e.g., virtually) into focused and unfocused regions (e.g., foreground and background regions). Objects displayed in the unfocused background may not necessarily be unfocused (e.g., unclear), but may be masked or otherwise diminished in visual presence, and may be presented with slightly lesser priority than objects displayed in the focused regions.
As one example, a user may be in a large scale online event with multiple users, and may be participating in a group video chat conversation with a select number of the users. The system can display the indicators (e.g., live video feeds) of users in the group differently than the indicators of users not in the group. In particular, the system can display the indicators of those in the group chat as combined or positioned close to one another within a focused region set near the center of the display. In contrast, the system can display the indicators of those users not in the group as smaller (e.g., thumbnail) video images in the unfocused background, arrayed around or surrounding the group chat indicators.
To allow users to manipulate the sizes of the various indicators (or, more generally, objects) and to move them between the focused and unfocused regions, the system can be configured to receive inputs. For example, the system can be configured to detect universal pinching commands (e.g., where two or more fingers of a user slide on a touchscreen to effect zooming in or out of displayed information), selections of an image scaling slider tool, or the like. In response to detecting a zoom out operation (e.g., a pinch operation on a touchscreen of the device) at or near the center of the display, the system can correspondingly reduce the size of the various objects displayed in the focused region. In at least one embodiment, these various objects can be reduced in size until they are no longer displayed (e.g., reduced sufficiently that they vanish at a “vanishing point” at or near the center of the display). Meanwhile, or substantially simultaneously with the reduction and vanishing of the focused objects, the surrounding objects in the unfocused region can be increased in size and/or visual quality, and can appear to fill in the space of the focused region by moving in toward the center of the display. This can give the effect of increasing the priority of these objects and decreasing the priority of the formerly focused objects (e.g., similar to how a kaleidoscope works).
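A minimal sketch of this oppositional behavior, assuming each displayed object is an HTML element positioned relative to the screen center, follows; the single pinch factor and the linear position interpolation are illustrative choices, not requirements of the embodiments.

```typescript
// Sketch of "oppositional" zoom: one pinch factor drives the focused and
// unfocused regions in opposite directions. All names are illustrative.
interface DisplayedObject {
  element: HTMLElement;
  baseX: number;    // object's center offset from screen center at pinch = 1
  baseY: number;
  focused: boolean; // true for the focused region, false for the unfocused one
}

function applyOppositionalZoom(objects: DisplayedObject[], pinch: number): void {
  // pinch > 1: zoom in (focused objects grow; unfocused shrink, move outward).
  // pinch < 1: zoom out (focused objects shrink toward a central vanishing
  // point; unfocused objects grow and drift inward to fill the vacated space).
  const p = Math.min(Math.max(pinch, 0.01), 100); // keep both factors finite
  for (const obj of objects) {
    const scale = obj.focused ? p : 1 / p; // opposite scaling per region
    const x = obj.baseX * p;               // all positions drift with the gesture
    const y = obj.baseY * p;
    obj.element.style.transform = `translate(${x}px, ${y}px) scale(${scale})`;
    // Objects reduced far enough effectively vanish from the screen.
    obj.element.style.visibility = scale <= 0.05 ? 'hidden' : 'visible';
  }
}
```

With these choices, a pinch-in (pinch < 1) shrinks focused objects toward a vanishing point at the center while the surrounding objects enlarge and move inward, and a pinch-out reverses both effects, consistent with the description above.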
It should be appreciated that the objects can include any type of displayable element. That is, the system can be configured to control the display and transition of any type of displayable data between focused and unfocused regions. For example, the system can display a list of search results (e.g., text-based, image-based, or the like) on a mobile device screen such that a first search result appears as a focused object (e.g., as a rectangle filling a portion or the entirety of the visual area of the display) and remaining results are presented in an unfocused region. As the screen is “pinched” to effect a zoom out operation at or near the center of the display, the first search result can shrink, and can be rearranged within a shrinking object that might either stay in the center of the screen or move toward a peripheral portion of the screen (e.g., the top of the screen), while one or more of the remaining results or associated content come into view, transitioning from the unfocused region to the focused region.
The manipulation of the objects can be characterized as “oppositional” zoom. That is, while an object or objects displayed at or near the center of a display (e.g., in the focused region) may be enlarged in response to a zoom in operation (e.g., a pinch out operation on a touch screen), a surrounding object or objects (e.g., in the unfocused region) may be reduced in size, such that some or all of the space that was previously occupied by the surrounding objects is allocated to the center objects.
It can be helpful to visualize this oppositional zoom effect by imagining a map (e.g., a geographical map) that can have some portions enlarged and other portions diminished depending on whether a zoom in or a zoom out operation is performed and on the location of the zoom operation. For example, a zoom out operation at the center of the map can cause that center area to decrease in size while simultaneously, or substantially simultaneously, enlarging the surrounding areas, in opposition.
It can further be helpful to visualize oppositional zoom by imagining a flat map that can be manipulated over an imaginary spherical model having an imaginary hole on and through its surface. In response to a zoom in operation on a portion of the flat map at the imaginary hole, that portion of the map can enlarge or increasingly stretch over the surface of the spherical model, such that the map appears to transition between three-dimensional (“3D”) and two-dimensional (“2D”) states, providing an appearance of foreshortening and reduced distance view. Conversely, in response to a zoom out operation on an area of the map at the imaginary hole, that area of the map may reduce in size toward a vanishing point, while simultaneously drawing or otherwise stretching surrounding areas of the map over the spherical model surface, as if the map were being pulled or sucked into the hole. It should be appreciated that either of these zoom operations can be performed dynamically, and, when released, the map can retract or otherwise “relax” back to its normal flat state.
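One simple way to approximate this spherical visualization numerically is to treat a point's flat-map distance from the zoom site as arc length over a sphere and display the corresponding projected radius. The following TypeScript sketch is an illustrative approximation only; the formula and the sphere radius parameter are assumptions rather than anything prescribed by the description above.

```typescript
// Illustrative approximation of the "flat map over a sphere" visual:
// a point at flat distance r from the zoom site is wrapped over a sphere
// of radius R and displayed at its projected (foreshortened) radius.
function sphericalWrapRadius(r: number, sphereRadius: number): number {
  const arc = Math.min(r / sphereRadius, Math.PI / 2); // clamp at the "equator"
  return sphereRadius * Math.sin(arc); // projected radius after wrapping
}

// Points near the zoom site stay roughly flat; distant points are pulled
// in strongly, as if the map were stretched over the sphere's far side.
console.log(sphericalWrapRadius(50, 200).toFixed(1));  // ≈ 49.5 (near: ~flat)
console.log(sphericalWrapRadius(300, 200).toFixed(1)); // ≈ 199.5 (foreshortened)
```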
As described above, the system may provide oppositional zoom operations that can be effected using pinch operations (e.g., pinch operations on a touch screen display), where content (e.g., objects or other suitable data) can be pinched such that it is enlarged or shrunken (or collapsed) depending on the direction of the pinch (e.g., pinch-in or pinch-out), and where content beyond the edges of the display (e.g., content that is virtually stored or lying beyond the display area, and that would otherwise be displayed if the display were larger) can be simultaneously or substantially simultaneously pulled or drawn into the display screen area. The system may also provide a combination of these oppositional zoom operations. In some embodiments, the content being pinched in may collapse or shrink, and the content nearest but external to the site of the pinch may be enlarged the most (i.e., more so than content farther away from the pinch site), such that there is a balance between the enlarging (or stretching) of the content nearest the pinch site and the pulling in and enlarging of new content lying beyond the edges of the display.
In some embodiments, e.g., when content is laid out or oriented in columns (e.g., as in a spreadsheet having rows and columns of data or objects), a zoom operation, such as a pinching operation on a touch screen, may only cause the content to be displaced or pulled in a single dimension. For example, when a pinch-in operation is performed on a particular area or spot of a column of displayed data or objects, the objects above and below the pinched area or spot (e.g., at the upper and lower edges of the column) may be drawn or pulled in, while the content at or near the pinched area shrinks or collapses.
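A hedged sketch of this single-dimension behavior, assuming a column of rows with known base heights, follows; the linear falloff from the pinch site and the function name are illustrative choices.

```typescript
// Sketch of one-dimensional oppositional pinch: rows nearest the pinch
// shrink the most, the effect falls off linearly toward the column's
// edges, and the reduced total height lets off-screen rows scroll in.
function pinchColumnHeights(
  baseHeights: number[],
  pinchIndex: number,
  pinchScale: number, // < 1 for a pinch-in
): number[] {
  const maxDist = Math.max(pinchIndex, baseHeights.length - 1 - pinchIndex) || 1;
  return baseHeights.map((h, i) => {
    const t = Math.abs(i - pinchIndex) / maxDist; // 0 at pinch site, 1 at edges
    const scale = pinchScale + (1 - pinchScale) * t;
    return h * scale;
  });
}

// The total column height shrinks around the pinch site, so rows previously
// beyond the upper and lower edges of the display come into view.
console.log(pinchColumnHeights([40, 40, 40, 40, 40], 2, 0.5));
// -> [40, 30, 20, 30, 40]
```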
As shown in the accompanying figures, a screen 300 of a device can display a plurality of objects in multiple regions, including a first set of objects 312, 314, and 316 in a first region 310 proximate the center of the screen, and a second set of objects 351-355 in a second region 350 surrounding region 310.
The objects can include any of images, video feeds (or video feed indicators), text, and the like. As one example, a user can use the device to participate in a multi-user event with other users over a communication network. Each object in screen 300 can be an indicator (e.g., an image or a live webcam feed) that represents each user in the event. The event can allow the user to engage in conversation with some, but not all, of the users in the event. For example, the user can be in a group conversation with a subset of those in the event, the subset being represented by objects 312, 314, and 316. Whereas these objects can be bunched together in region 310 at or near the center of the screen, objects 351-355, which can represent other users not in the group chat, can be scattered about in region 350.
As another example, the user can use the device to view images (e.g., photos) or text data (e.g., search results). The main photo, photos, or text data (e.g., represented by objects 312, 314, and 316) can be displayed prominently and enlarged proximate the center of the screen in region 310, and the remaining images or text data (e.g., represented by objects 351-355) may be scattered about around the center of the screen.
In each of the examples above, the user may find it difficult to navigate the screen and/or identify the various objects. For example, to conduct a video chat with users that are in the event but not currently in the group chat, the user may have to scroll the screen to find other users of interest. As another example, to view other images or text items not currently displayed in the display area, the user may have to scroll the screen. Moreover, because a large number of objects may need to be displayed at a time, the objects displayed in region 350 can be small and difficult to see or distinguish. Thus, a system can be provided to allow a user to control, or otherwise alter or adjust, the display sizes of the objects displayed in regions 310 and 350.
The system can allow altering of the display of the objects in screen 300 via one or more user inputs. In at least one embodiment, the system can allow a user to zoom in or out of region 310, which can alter the display sizes of objects 312, 314, and 316 as well as those of objects 351-355. Any suitable manner of adjusting the display sizes of the objects can be employed. For example, the universal pinching technique (e.g., where two or more fingers are pressed against a touchscreen and moved in various directions), a slide or zoom input, or the like can be employed for zooming in and out of the objects.
Although not shown, it should be appreciated that the changes can be reversed during a zoom in operation. For example, objects 312, 314, and 316 can be enlarged during a zoom in operation, and objects 351-355 can be substantially simultaneously scaled down. Moreover, objects 351-355 can also be displaced farther away from the center of screen 300, or even vanish or disappear, so as to accommodate expanded objects 312, 314, and 316.
In at least one embodiment, during a zoom out operation, the size of the objects in region 310 can be decreased until they disappear or vanish from screen 300 altogether.
In at least one embodiment, rather than causing the objects in region 310 to disappear, the system can instead displace these objects to one or more other areas of screen 300.
Moreover, although regions 310 and 350 have been described above as being foreground or background regions associated with objects being displayed in either larger or smaller sizes, it should be appreciated that regions 310 and 350 can be designated in other suitable manners. For example, in at least one embodiment, region 310 can be a “focused” region, and region 350 can be an “unfocused” region. In these embodiments, objects 312, 314, and 316 can be displayed more clearly or in a different color or shade than objects 351-355. For example, in the example described above where screen 300 is displayed on a device during a multi-user event, the indicators in region 350 can be displayed in lighter color (e.g., faded, blurry, or the like), or can otherwise be displayed to have a low visual priority, whereas the indicators in region 310 can be displayed in full color, or can otherwise be displayed to have a high visual priority. Moreover, in some embodiments, region 310 can be both a foreground region as well as a focused region, with objects 312, 314, and 316 being displayed both larger and clearer than objects 351-355. For example, in the example described above where images or text data are displayed on screen 300, the indicators in region 350 can be displayed in lighter color and smaller, whereas the indicators in region 310 can be displayed in full color and larger.
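As a non-limiting illustration of such visual prioritization in a browser environment, the following TypeScript sketch applies CSS filter and opacity values to fade and blur unfocused objects while leaving focused objects in full color; the specific filter values and function name are assumptions.

```typescript
// Lower the visual priority of unfocused objects by fading and blurring
// them; focused objects are shown in full color in the foreground.
function setVisualPriority(element: HTMLElement, focused: boolean): void {
  if (focused) {
    element.style.filter = 'none';   // full color, full clarity
    element.style.opacity = '1';
    element.style.zIndex = '2';      // foreground
  } else {
    element.style.filter = 'blur(2px) saturate(50%)'; // faded and blurry
    element.style.opacity = '0.6';
    element.style.zIndex = '1';      // background
  }
}
```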
While various embodiments are described with respect to particular objects, such as objects 312, 314, and 316 and objects 351-355, it should be appreciated that the advantageous effects of zooming in and out can be applied to any graphical representation that can be scaled. For example, the effects of zooming in and out can apply to a geographical map. In this example, a zoom in or enlargement operation (e.g., via the universal pinching method) on a particular portion or location of a displayed map image can enlarge that particular image portion, and substantially simultaneously scale down the image area surrounding the particular portion. The specific portion of the displayed image to be enlarged can be selected by a user (e.g., by using a boundary selection or drawing tool to identify the area to be enlarged), and any remaining image portions surrounding this selected area can be subjected to the scaling down. Similarly, a zoom out operation on the selected image portion can instead cause it to scale down, while image areas surrounding this image portion enlarge and progress to fill in the area of the smaller image portion.
In various embodiments, the advantageous effects of zooming in and out of a screen or display described above can be applied to three-dimensional (“3D”) graphical representations.
While representation 402 can be subjected to a zoom in or zoom out operation as described above, rather than the representation and its objects being scaled in 2D (e.g., as described above with respect to screen 300, where objects are simply made smaller or larger), representation 402 can be manipulated, scaled, or otherwise adjusted in 3D during a zoom operation such that an actual 3D perspective view can be provided.
As the zoom in operation continues to progress, the objects in representation 452 may be displayed differently.
Thus, similar to how the objects in screen 300 can be manipulated in scale via zoom operations, objects in 3D can be similarly manipulated. It should be appreciated that representation 452 can also be subjected to a zoom out operation, which can reverse the adjustments described above.
An illustrative process for controlling the display of video feeds during a video chat can begin by displaying a video feed in a first manner (e.g., displaying video feed 210 on screen 200 at an original size).
At step 506, the process can include altering the display of the video feed when a predefined time elapses. For example, the process can include altering the display of video feed 210 when a predefined time elapses, as described above.
At step 508, the process can include maintaining the altered display of the video feed until at least one predetermined event occurs. For example, the process can include maintaining the altered display of video feed 210 until at least one predetermined event occurs. In at least one embodiment, the predetermined event can include receiving a user input, such as a touch screen input, a mouse click, or the like, as described above. In at least one embodiment, the predetermined event can additionally, or alternatively, include detecting one or more changes in the captured video feed (e.g., substantial movement, the appearance of a second person, or a change in overall brightness), as also described above.
In response to the occurrence of the at least one predetermined event, the process can also include re-displaying the video feed in the first manner. For example, if the video feed has been decreased in size, the process can include re-enlarging the display size of the video feed. As another example, if the video feed has been removed from the display screen, the process can include re-displaying the video feed in its original size. It should be appreciated that, after the video feed is re-displayed, the process can repeat the alteration of the display of the video feed when the predefined time elapses again after the re-display.
At step 606, the process can include receiving a user input to adjust the display of at least one of the first set of objects and the second set of objects. For example, the process can include receiving a user input to adjust the display of at least one of the first set of objects (e.g., objects 312, 314, and 316) and the second set of objects (e.g., objects 351-355). The user input can include a pinch command on the display screen, movement of a display size slider button, a voice command, or the like.
At step 608, the process can include adjusting the display of the first set of objects and the second set of objects in different manners in response to receiving the user input. For example, the process can include adjusting the display of the first set of objects 312, 314, and 316 and the second set of objects 351-355 in different manners. As described above, the adjusting can include scaling the two sets of objects in opposite directions, displacing the objects toward or away from the center of the screen, and/or changing the relative display quality or visual priority of the objects in the two regions.
It should be appreciated that the various embodiments described above can be implemented by software, but can also be implemented in hardware or a combination of hardware and software. The various systems described above can also be embodied as computer readable code on a computer readable medium. The computer readable medium can be any data storage device that can store data, and that can thereafter be read by a computer system. Examples of a computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
The above described embodiments are presented for purposes of illustration only, and not of limitation.