This disclosure generally relates to the capture and sharing of video content.
The use of computing devices has greatly increased in recent years. Computing devices such as tablet computers, smart phones, cellular phones, and netbook computers are now commonplace throughout society. Computing devices are also integrated into other devices, such as, for example, cars, planes, household appliances, and thermostats. With this increase in the number of computing devices, the amount of information shared between devices also has greatly increased. For example, users often capture video content with their computing devices and share the captured video content. However, capturing and sharing video content are generally separate processes, which is cumbersome and inefficient for users.
Techniques, methods, and systems are disclosed herein for efficiently capturing and sharing video content with a mobile computing system. A messaging application executing on the mobile computing system can present a page of a graphical user interface (GUI) on a touchscreen of the device, and a user can interact with the GUI within the page to capture different segments of video content. Within the page, the user can initiate the capture of new segments of video content through interaction with the GUI, while previously captured segments continue to be displayed within the page of the GUI. The different segments can be displayed within the page of the GUI, and the user can edit the different segments and compose a single video content file from one or more of the different segments displayed within the page of the GUI. When the single video content file is composed from one or more different segments of captured video content, the single video content file can be appended to a message that the user shares with other users through the messaging application.
In a first aspect, a method includes capturing a first segment of video content, and displaying the video content of the captured first segment in a first page of a graphical user interface (GUI) of an application executing on a mobile device as the first segment is being captured, where the GUI is displayed on a touchscreen. A second segment of video content is captured, where the second segment is not temporally contiguous with the first segment, and, as the second segment is being captured, a first static screenshot of a frame of the first segment of video content is displayed in the first page of the GUI and the video content of the captured second segment is also displayed in the first page of the GUI.
Implementations can include one or more of the following features, alone or in any combination with each other. For example, after the second segment is captured, the first static screenshot and a second static screenshot of a frame of the second segment of video content can be displayed in the first page of the GUI. A single file of video content that includes the first and second segments can be generated.
The first and second static screenshots can be displayed in the first page of the GUI in a predetermined order, and a user's interaction with at least one of the static screenshots can be received on the touchscreen, and in response to the received user's interaction, the first and second static screenshots can be displayed within the first page in a user-determined order different from the predetermined order. Then, the single file of video content can include the content of the first and second segments arranged in the user-determined order. The predetermined order can be the order in which the segments were captured. The first and second segments of video content can be ordered for playback in the single file of video content in the user-determined order.
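By way of illustration only, the ordering behavior described above can be modeled as in the following sketch, in which segments begin in the predetermined (capture) order and a user interaction supplies a user-determined order before the single file is composed. The names `Segment`, `userDeterminedOrder`, and `composeInOrder` are assumptions made for this sketch, not terms of this disclosure.

```kotlin
// Illustrative model of segment ordering; all names are hypothetical.
data class Segment(val id: Int, val captureIndex: Int, val frames: List<String>)

// Segments start in the predetermined order, i.e., the order of capture.
fun predeterminedOrder(segments: List<Segment>): List<Segment> =
    segments.sortedBy { it.captureIndex }

// A user's interaction with the static screenshots yields a new ordering,
// expressed here as a list of segment ids.
fun userDeterminedOrder(segments: List<Segment>, idsInNewOrder: List<Int>): List<Segment> =
    idsInNewOrder.mapNotNull { id -> segments.find { it.id == id } }

// The single file of video content plays the segments back in the current order.
fun composeInOrder(ordered: List<Segment>): List<String> =
    ordered.flatMap { it.frames }

fun main() {
    val segs = listOf(
        Segment(id = 1, captureIndex = 0, frames = listOf("a1", "a2")),
        Segment(id = 2, captureIndex = 1, frames = listOf("b1"))
    )
    // The user reorders so that the second-captured segment plays first.
    println(composeInOrder(userDeterminedOrder(segs, listOf(2, 1)))) // [b1, a1, a2]
}
```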
A user's selection of one of the first or second static screenshots can be received on the touchscreen, and, in response to receiving the user selection, the video content corresponding to the selected static screenshot in the page can be played back while displaying the non-selected static screenshot in the page.
After generating the single file of video content that includes the first and second segments, a first user input to the GUI in the first page can be received. In response to receiving the first user input to the GUI, a text entry GUI for receiving text input by the user for inclusion in a message to be sent to other users can be displayed in a second page of the GUI of the application. Also, in response to receiving the first user input to the GUI, the single file of video content can be attached to the message to be sent to other users. The first user input to the GUI can include a single touch of the touchscreen. The message that includes the single file of video content can be sent to be broadcast to other users through a social media platform in which the user and the other users participate.
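A minimal sketch of the single-touch flow described above, assuming a simple two-page model: one user input both attaches the composed video file to a draft message and advances to the second, text-entry page. `MessageComposer`, `DraftMessage`, and the page labels are hypothetical names, not part of this disclosure.

```kotlin
// Hypothetical single-touch flow: attach the video and open text entry together.
data class VideoFile(val path: String)
data class DraftMessage(var text: String = "", var attachment: VideoFile? = null)

class MessageComposer {
    var currentPage = "capture" // "capture" (first page) or "textEntry" (second page)
        private set
    val draft = DraftMessage()

    // Invoked on the single touch received in the first page.
    fun onDoneTouched(composedVideo: VideoFile) {
        draft.attachment = composedVideo // attach the single file of video content
        currentPage = "textEntry"        // display the text entry GUI
    }
}

fun main() {
    val composer = MessageComposer()
    composer.onDoneTouched(VideoFile("/tmp/combined.mp4"))
    println(composer.currentPage)      // textEntry
    println(composer.draft.attachment) // VideoFile(path=/tmp/combined.mp4)
}
```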
A user interface element that is selectable to initiate the capture of the segments of video content can be displayed in the second page of the GUI of the application.
Three or more segments of video content that are not temporally contiguous with each other can be captured. Static screenshots corresponding to the three or more segments of video content can be displayed within the first page in a predetermined order. A user's selection, on the touchscreen, of at least one of the static screenshots can be received, and, in response to the received user's selection, the selected static screenshot can be deleted from the display. A user's interaction, on the touchscreen, with at least one of the displayed static screenshots can be received, and, in response to the received user's interaction, the static screenshots that have not been deleted can be displayed within the page in a user-determined order different from the predetermined order. Then, the single file of video content can include the content of the segments of video content corresponding to the displayed static screenshots, after the selected static screenshot has been deleted, arranged in the user-determined order.
In another general aspect, a mobile computing system includes a camera configured for capturing segments of video content that are not temporally contiguous, a display including a touchscreen configured for displaying the captured segments of the video content and for displaying static screenshots of the captured segments of the video content and for receiving interactions by a user with the touchscreen, one or more memory devices configured for storing executable instructions, and one or more processors configured for executing the instructions. Execution of the instructions causes the system to execute an application on the mobile computing system for receiving and displaying a stream of messages that have been broadcast by other users. Execution of the application includes capturing a plurality of segments of video content that are not temporally contiguous in a page of a graphical user interface (GUI) of the application, displaying the video content of the captured segments in the page of the GUI on the touchscreen as the segments are being captured, while displaying static screenshots of frames of one or more other captured segments in the page, generating a single file of video content that includes the video content of two or more of the segments, in response to receiving a user's interaction with one or more of the static screenshots through the touchscreen, receiving, through the GUI of the application, text input by the user for inclusion in a message to be broadcast to other users, and attaching the single file of video content to the message to be broadcast.
In another general aspect, a mobile computing system includes a camera configured for capturing segments of video content that are not temporally contiguous, a display including a touchscreen configured for displaying the captured segments of the video content and for displaying static screenshots of the captured segments of the video content and for receiving user interactions with the touchscreen, one or more memory devices configured for storing executable instructions, and one or more processors configured for executing the instructions. Execution of the instructions causes the system to capture a first segment of video content, display the video content of the captured first segment in a first page of a graphical user interface (GUI) of an application executing on the mobile computing system as the first segment is being captured, where the GUI is displayed on the touchscreen, capture a second segment of video content, where the second segment is not temporally contiguous with the first segment, and, as the second segment is being captured, display a first static screenshot of a frame of the first segment of video content in the first page of the GUI and display the video content of the captured second segment in the first page of the GUI.
Implementations can include one or more of the following features, alone or in any combination with each other. For example, the display, the one or more memory devices, and the one or more processors can be integrated into a first single housing, and the camera can be located in a second housing peripheral to the first housing, while the camera communicates with the one or more processors over a wireless link.
Execution of the instructions can further cause the system to, after the second segment is captured, display, in the first page of the GUI, the first static screenshot and a second static screenshot of a frame of the second segment of video content. Execution of the instructions can further cause the system to generate a single file of video content that includes the first and second segments. Execution of the instructions can further cause the system to display, in the first page of the GUI, the first and second static screenshots within the page in a predetermined order, and receive a user's interaction, on the touchscreen, with at least one of the static screenshots. In response to the received user's interaction, the first and second static screenshots can be displayed within the first page in a user-determined order different from the predetermined order, and the single file of video content can include the content of the first and second segments arranged in the user-determined order. The predetermined order can be the order in which the segments were captured.
Execution of the instructions can further cause the system to order the first and second segments of video content for playback in the single file of video content in the user-determined order.
Execution of the instructions can further cause the system to receive a user's selection, on the touchscreen, of one of the first or second static screenshots, and, in response to receiving the user selection, the video content corresponding to the selected static screenshot can be played back in the page while displaying the non-selected static screenshot in the page.
Execution of the instructions can further cause the system to receive a first user input to the GUI in the first page after generating the single file of video content that includes the first and second segments. In response to receiving the first user input to the GUI, a text entry GUI for receiving text input by the user for inclusion in a message to be sent to other users can be displayed, in a second page of the GUI of the application, and the single file of video content can be attached to the message to be sent to other users. The first user input to the GUI can include a single touch of the touchscreen.
Execution of the instructions can further cause the system to capture three or more segments of video content that are not temporally contiguous with each other, to display static screenshots corresponding to the three or more segments of video content within the first page in a predetermined order, to receive a user's selection, on the touchscreen, of at least one of the static screenshots, and, in response to the received user's selection, to delete the selected static screenshot from the display. A user's interaction, on the touchscreen, with at least one of the displayed static screenshots can be received, and, in response to the received user's interaction, the static screenshots that have not been deleted can be displayed within the page in a user-determined order different from the predetermined order. Then, the single file of video content can include the content of the segments of video content corresponding to the displayed static screenshots, after the selected static screenshot has been deleted, arranged in the user-determined order.
The mobile computing system 100 can include a camera 106 for capturing video content and for capturing still images. A display 108, at least a portion of which can include a touchscreen 110, can display a graphical user interface 112, with which a user of the system 100 can interact with the video content captured by the camera 106. A content editor module 114 can be used to manipulate and edit video content displayed in the GUI 112, and a message composer module 116 can be used to compose a message that includes video content and that is to be sent from the system 100. A network interface 118 can provide an interface between the system 100 and a network 120 (e.g., the Internet), so that a message can be sent from the system 100 through the network 120 to another user.
In some implementations, elements of the mobile computing system 100 can be physically separated from each other, yet communicate with each other via wireless or wired links. For example, in one implementation, the camera 106 could be mounted on a helmet (or another wearable device, e.g., eyeglasses) worn by the user, while the GUI 112 is presented on a display 108 of a smart phone, tablet, or a wrist-mounted wearable device carried by the user. In such an implementation, the different physical elements of the system can communicate with each other via a short range wireless communications protocol (e.g., Bluetooth). In another example implementation, the camera 106 could be mounted on a movable vehicle, such as, for example, a model automobile or a flying drone, while the GUI 112 is presented on a display 108 of a smart phone, tablet, etc. that is in communication with the moving vehicle.
The GUI 500 can include a user interface element 512 that can be toggled to turn a flash on and off. The GUI 500 can include a user interface element 514 that can be selected to play video content that has been previously captured. For example, a user may tap on the user interface element 508 to select the element and then may select user interface element 514 to play the previously-captured video content represented by the UI element 508. The GUI 500 may include a user interface element 516 that may be selected to indicate that the user has finished capturing video content within the GUI 500 and wishes to return to a different page of the application that provides the GUI 500 to do something with the captured video content (e.g., send a message with the captured video content). In some implementations, the UI element 516 can be selected to display a page of the application in which text characters can be input for a message to one or more other users and to automatically attach or embed the captured video content in a message. The GUI 500 may include a user interface element 518 that can be selected to delete any previously captured video content that is represented in the GUI 500 (e.g., the video content represented by the UI element 508) and to return the user to a different UI.
The different segments of video content represented by the UI elements 608, 610 can be captured by the user pressing and holding down the user interface element 606 (e.g., by pressing a finger to the UI element 606) for a period of time while pointing the camera at a scene and then releasing the UI element 606 (e.g., by lifting the finger off the UI element 606) to capture a first segment of video content represented by the UI element 608. Then, the user can again press and hold down the user interface element 606 for a second period of time while pointing the camera at a different scene and then release the UI element 606 to capture the second segment of video content represented by the UI element 610. The different segments of video content captured within the GUI 600 can be represented by UI elements 608, 610 displayed in a dock 612 used to display a plurality of UI elements representing different segments of captured video content.
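The press-and-hold interaction described above can be sketched as a small state machine, assuming simplified stand-ins for the recorder and clock: a press begins a recording, and the corresponding release ends it and appends the finished segment to the dock. All names here are illustrative.

```kotlin
// Sketch of the press-and-hold capture gesture; names are hypothetical.
class SegmentCapture {
    private var recordingStartMs: Long? = null
    val dock = mutableListOf<LongRange>() // each captured segment kept as a time range

    fun onPress(nowMs: Long) {   // finger pressed to the capture UI element
        if (recordingStartMs == null) recordingStartMs = nowMs
    }

    fun onRelease(nowMs: Long) { // finger lifted off the capture UI element
        val start = recordingStartMs ?: return
        dock.add(start..nowMs)   // the finished segment appears in the dock
        recordingStartMs = null
    }
}

fun main() {
    val capture = SegmentCapture()
    capture.onPress(0); capture.onRelease(4_000)      // first segment
    capture.onPress(9_000); capture.onRelease(16_000) // second, non-contiguous segment
    println(capture.dock) // [0..4000, 9000..16000]
}
```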
The user can review previously-captured video content segments within the GUI 600. For example, a user can select a UI element 608 in the dock 612 (e.g., by tapping on the UI element 608), and then, once the UI element is selected, the user can select a playback UI element 614 to play the video content segment represented by the UI element 608 in the viewfinder portion 602 of the GUI 600.
The different segments of captured video content represented by the UI elements 608, 610 can be combined into a single video file that can be attached to a message sent by the user to one or more other users, e.g., a message that is composed in the same application executing on the system 100 that is used to capture and edit segments of video content and that is broadcast to other users through a social media platform in which the user and the other users participate. For example, the different segments of captured video content can be played sequentially in the single video file in the order in which they appear in the dock 612. For example, content of the left-most segment 608 in the dock 612 can be played first in the single video file, followed by content of the next segment 610 in the ordered list of segments in the dock 612. The order of the segments in the dock 612 can be re-arranged by a user. For example, a user may select a segment (e.g., by pressing a finger to the UI element representing the segment) and may drag the segment to a different position in the order of segments in the dock 612. Additionally, segments of captured video shown in the dock 612 can be deleted, so they are not included in the single video file that is prepared from the captured video segment(s) in the dock 612. For example, a user may select a segment (e.g., by pressing a finger to the UI element representing the segment) and may drag the segment upward or downward away from the dock 612 to delete the selected segment.
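The two drag gestures described above, reordering within the dock and dragging away to delete, can be sketched as simple list operations, with indices standing in for touch positions; the function names are assumptions for illustration.

```kotlin
// Illustrative dock operations; a horizontal drag reorders, a vertical drag deletes.
fun <T> MutableList<T>.dragToPosition(from: Int, to: Int) {
    val item = removeAt(from)
    add(to.coerceIn(0, size), item)
}

fun <T> MutableList<T>.dragAwayToDelete(index: Int) {
    if (index in indices) removeAt(index)
}

fun main() {
    val dock = mutableListOf("seg608", "seg610", "seg611")
    dock.dragToPosition(from = 2, to = 0) // user drags the last segment to the front
    dock.dragAwayToDelete(1)              // user drags a segment up, away from the dock
    println(dock)                         // [seg611, seg610]
}
```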
The different segments of video content represented by the UI elements 708, 710 can be captured by the user pressing and holding down the user interface element 706 (e.g., by pressing a finger to the UI element 706) for a period of time while pointing the camera at a scene and then releasing the UI element 706 (e.g., by lifting the finger off the UI element 706) to capture a first segment of video content represented by the UI element 708. Then, the user can again press and hold down the user interface element 706 for a second period of time while pointing the camera at a different scene and then release the UI element 706 to capture the second segment of video content represented by the UI element 710. The different segments of video content captured within the GUI 700 can be represented by UI elements 708, 710 displayed in a dock 712 that can display a plurality of UI elements representing different segments of captured video content. While the GUI 700 is used to capture an additional, third, segment of video content, a placeholder UI element 714 can be displayed in the dock 712 to show where the video content currently being captured will be displayed when the capture is complete. Additionally, while video content is being captured in the viewfinder 702, a UI element 716 can display a duration of the video content segment currently being captured in the viewfinder 702, and indications of the length of the previously-captured video content segments can be turned off.
As explained above, the video content segments represented by UI elements 804, 806, 808 can be combined into a single video file for attachment to a message to be sent from the mobile computing system 100. A user can edit the order of the different segments within the single video file by using the touchscreen interface to rearrange the order of the UI elements 804, 806, 808 within the dock 812. In some implementations, the duration of the single video file to be attached to the outgoing message can be limited to a predetermined amount of time (e.g., 30 seconds). Thus, in some implementations, the sum of the durations of the plurality of different video content segments displayed in the dock 812 can be limited to the predetermined amount of time. The content editor 114 may automatically limit the duration of the last-captured video content segment, such that the camera automatically stops recording video content when the sum of the durations of the different video content segments would exceed the predetermined amount of time.
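The duration cap can be sketched as follows, assuming the example 30-second limit; `remainingSeconds` and `shouldStopRecording` are hypothetical names, and a real recorder would consult such a check while capture is in progress.

```kotlin
// Sketch of the predetermined duration limit; the 30-second value is the example above.
const val MAX_TOTAL_SECONDS = 30.0

// Time still available for the next segment, given segments already in the dock.
fun remainingSeconds(existingDurations: List<Double>): Double =
    (MAX_TOTAL_SECONDS - existingDurations.sum()).coerceAtLeast(0.0)

// The camera stops automatically once the running segment exhausts the budget.
fun shouldStopRecording(existingDurations: List<Double>, currentElapsed: Double): Boolean =
    currentElapsed >= remainingSeconds(existingDurations)

fun main() {
    val dock = listOf(4.0, 7.0, 12.0)       // 23 seconds already captured
    println(remainingSeconds(dock))         // 7.0
    println(shouldStopRecording(dock, 6.5)) // false: still within the limit
    println(shouldStopRecording(dock, 7.0)) // true: recording stops here
}
```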
A user also can remove UI elements from the dock 812, thereby deleting the corresponding video content segment from inclusion in the single video content file. For example, a UI element 804, 806, 808 can be removed from the dock 812 by selecting the UI element and then moving it in a vertical direction away from the dock. In this manner, when the duration of the single video file to be attached to the outgoing message is limited to a predetermined amount of time, a user may reclaim time within the single video content file being composed by deleting one or more individual video content segments.
The GUI 800 can include a UI element 814 that represents a time of a frame in the single video file that is composed from the plurality of video content segments, where the frame is displayed in the viewfinder 802. For example, the GUI 800 can include a horizontal progress bar 815 that extends, with increasing length, from the left side of the GUI 800 toward a right side of the GUI as different segments corresponding to UI elements 804, 806, 808 are selected and/or played back in the GUI 800. For example, in GUI 800, a total of 30 seconds of video content in three different video content segments is represented by the different UI elements 804, 806, 808 displayed in the dock 812. When UI element 804, which corresponds to four seconds of video content, is selected, and 30 seconds is the maximum duration of the single video content file or when a total of 30 seconds of video content is represented by the UI elements 804, 806, 808 displayed in the dock 812, the progress bar 815 can extend 4/30ths of the horizontal distance across the GUI 800, or across the horizontal width of the viewfinder 802, or across some other predetermined maximum extent of the UI element 814. The predetermined maximum extent of the UI element 814 can correspond to the predetermined maximum duration of video content that can be included in the single video content file or, in another implementation, can correspond to the total duration of video content represented by the UI elements 804, 806, 808 displayed in the dock 812. If the user selects the UI element 816 to play back the video content segment corresponding to the selected UI element 804, the progress bar may progress from 0% to 13.3% (i.e., 4/30ths) of the way across its maximum extent as the video content of the segment is played back. When the video content corresponding to the UI element 808 is selected and played back, the progress bar may progress from 36.7% (i.e., 11/30ths) to 100% of the way across its maximum extent as the video content of the segment is played back.
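The progress-bar arithmetic above reduces to each segment occupying a fraction of the bar equal to its share of the maximum (or total) duration. The sketch below reproduces the example figures, assuming segment durations of 4, 7, and 19 seconds against a 30-second extent; the function names are illustrative.

```kotlin
// Worked example of the progress-bar fractions described above.
fun barStartFraction(durations: List<Double>, index: Int, total: Double): Double =
    durations.take(index).sum() / total

fun barEndFraction(durations: List<Double>, index: Int, total: Double): Double =
    durations.take(index + 1).sum() / total

fun main() {
    val durations = listOf(4.0, 7.0, 19.0) // three segments totaling 30 seconds
    val total = 30.0
    // First segment: the bar runs from 0% to 13.3% (4/30) of its maximum extent.
    println(barEndFraction(durations, 0, total))   // 0.1333...
    // Third segment: the bar runs from 36.7% (11/30) to 100%.
    println(barStartFraction(durations, 2, total)) // 0.3666...
    println(barEndFraction(durations, 2, total))   // 1.0
}
```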
In some implementations, the GUI 800 can include one or more UI elements that include second horizontal progress bars 818, each of which corresponds to a screenshot 804, 806, 808 of a captured video segment. The horizontal progress bar 818 that corresponds to the screenshot of a particular captured video segment can be illuminated and/or can have a length that increases from left to right as the video content of the particular captured segment is played back.
Once the single video content file or the individual video content segment has been selected for editing, the GUI 1100 can display a UI element 1104 that includes a plurality of frames of the selected content. The plurality of frames in the UI element 1104 can be displayed in an order in which the frames are played when the content is rendered. A user may interact with the UI element 1104 to trim the length of the selected content, for example by selecting a start and end frame and then selecting a UI element 1106 to indicate that the user is finished with the editing process. In some implementations, the UI element 1104 may display a predetermined number of frames of the selected video content. The user may scroll backward and forward in the frames of the content, for example, by interacting with edges 1108, 1110 of the UI element 1104, and the user may select start and end frames with which to trim the length of the video content by tapping the start and end frames in the touchscreen 110 that displays the GUI 1100 when they are displayed in the UI element 1104.
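A hedged sketch of the trimming interaction described above: the user's selected start and end frames bound the span that survives the trim. Representing decoded frames as a list, and the inclusive index convention, are assumptions made here for illustration.

```kotlin
// Sketch of trimming selected content to a user-chosen start and end frame.
fun <Frame> trim(frames: List<Frame>, startIndex: Int, endIndex: Int): List<Frame> {
    require(startIndex in frames.indices && endIndex in frames.indices && startIndex <= endIndex) {
        "start and end frames must lie within the content"
    }
    return frames.subList(startIndex, endIndex + 1).toList()
}

fun main() {
    val frames = (0 until 10).map { "frame$it" }
    println(trim(frames, startIndex = 2, endIndex = 6)) // keeps frames 2 through 6
}
```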
Once the user has selected the start and end frame of the trimmed video content of the single video content file or the individual video content segment, a user can select a user interface element 1106 to indicate that the trimming process is complete. In the case of trimming an individual video content segment, selection of the user interface element 1106 can return the user to a GUI similar to GUI 600, GUI 800, or GUI 1000, in which a plurality of video content segments, including the trimmed video content segment, are displayed. In the case of trimming a single video content file composed of a plurality of different video content segments, selection of the user interface element 1106 can call up a new GUI with which the user can add additional information to the message that will be sent with the single video content file attached.
The message that is sent by the user can be a message that is broadcast to other users through a social media platform in which the user and the other users participate. The other users can be users that are linked to, that are friends with, or that follow the user through the social media platform. When the message is received by the other users, it can be inserted into chronological timelines of messages received by the other users within a user interface of the social media application, where receipt of a message by another user is based on a follower-subscription model, in which only other users who subscribe to the messages sent out by the user receive the sent message for insertion into their timelines.
In one or more implementations, the messaging platform 1400 is a platform for facilitating real-time communication between one or more entities. For example, the messaging platform 1400 may store millions of accounts of individuals, businesses, and/or other entities (e.g., pseudonym accounts, novelty accounts, etc.). One or more users of each account may use the messaging platform 1400 to send messages to other accounts inside and/or outside of the messaging platform 1400. The messaging platform 1400 may be configured to enable users to communicate in “real-time”, i.e., to converse with other users with a minimal delay and to conduct a conversation with one or more other users during concurrent sessions. In other words, the messaging platform 1400 may allow a user to broadcast messages and may display the messages to one or more other users within a reasonable time frame so as to facilitate a live conversation between the users. Recipients of a message may have a predefined graph relationship with an account of the user broadcasting the message (e.g., based on an asymmetric graph representing accounts as nodes and edges between accounts as relationships). In one or more implementations, the user is not an account holder or is not logged in to an account of the messaging platform 1400. In this case, the messaging platform 1400 may be configured to allow the user to broadcast messages and/or to utilize other functionality of the messaging platform 1400 by associating the user with a temporary account or identifier.
In one or more implementations, the SOR module 1416 includes functionality to generate one or more content groups, each including content associated with a subset of unviewed messages of an account of the messaging platform 1400. Relationships between accounts of the messaging platform 1400 can be represented by a connection graph.
The connection graph 1450 is a data structure representing relationships (i.e., connections) between one or more accounts. The connection graph 1450 represents accounts as nodes and relationships as edges connecting one or more nodes. A relationship may refer to any association between the accounts (e.g., following, friending, subscribing, tracking, liking, tagging, etc.). The edges of the connection graph 1450 may be directed and/or undirected based on the type of relationship (e.g., bidirectional, unidirectional), in accordance with various implementations of the invention.
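The structure described above can be sketched as a small directed graph, with accounts as nodes and typed edges as relationships; the class and relationship names below are illustrative assumptions, not the platform's actual data model.

```kotlin
// Minimal sketch of a connection graph: accounts as nodes, relationships as edges.
enum class Relationship { FOLLOWING, FRIENDING, SUBSCRIBING, TRACKING, LIKING, TAGGING }

data class Edge(val from: String, val to: String, val type: Relationship)

class ConnectionGraph {
    private val edges = mutableListOf<Edge>()

    fun connect(from: String, to: String, type: Relationship) {
        edges.add(Edge(from, to, type))
    }

    // Accounts the given account points at (e.g., accounts it follows).
    fun outgoing(account: String): List<String> =
        edges.filter { it.from == account }.map { it.to }

    // Accounts pointing at the given account (e.g., its followers).
    fun incoming(account: String): List<String> =
        edges.filter { it.to == account }.map { it.from }
}

fun main() {
    val graph = ConnectionGraph()
    graph.connect("alice", "bob", Relationship.FOLLOWING) // directed edge
    graph.connect("bob", "alice", Relationship.FOLLOWING) // reciprocal, so bidirectional
    println(graph.outgoing("alice")) // [bob]
    println(graph.incoming("alice")) // [bob]
}
```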
Many messaging platforms include functionality to broadcast streams of messages to one or more accounts based at least partially on a connection graph representing relationships between those accounts.
The strength of relationship from the recipient account to the connected account is not limited to any particular form. A strength of relationship can be a numerical value, e.g., a value in the range 0 through 10 with 0 representing a weakest relationship and 10 a strongest relationship. The numerical values may be continuous in a range or limited to discrete values in a range, e.g., integer values within the range 0 through 10 and/or percentage values within a range of 0 through 100 percent. An indication of a strength of relationship can be a text indicator. A text indicator can specify strength levels, e.g., “Lowest”, “Low”, “Medium”, “High”, “Highest”. A text indicator can be descriptive, e.g., “Casual”, “Highly Interested”, “Friend”, “Fan”, “Acquainted”. A text indicator can describe a relationship category, e.g., “Friend”, “Family”, “News”, “Professional”. Each indication can correspond with a predefined numeric (or other) strength of relationship value.
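As a sketch, the text indicators above can be backed by predefined numeric strengths on the 0-through-10 scale; the particular correspondences chosen below are assumptions for illustration only.

```kotlin
// Illustrative mapping from text indicators to numeric strength values.
val strengthLevels = mapOf(
    "Lowest" to 0, "Low" to 3, "Medium" to 5, "High" to 8, "Highest" to 10
)

fun strengthOf(indicator: String): Int =
    strengthLevels[indicator] ?: error("unknown strength indicator: $indicator")

fun main() {
    println(strengthOf("Medium"))  // 5
    println(strengthOf("Highest")) // 10
}
```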
A strength of relationship can reflect the subjective or objective interests or preferences of the user of the recipient account. For example, a user of the recipient account may have a mild interest in the connected account and initially select a moderately strong relationship. Then after consuming content from the connected account, the recipient account may upgrade the relationship to a stronger relationship or downgrade it to a weaker one. A user possessing a particularly strong interest in the connected account (e.g., a celebrity or unique news or information source) can choose a relationship of the highest strength (e.g., “Highly Interested” or 10 on a scale of 10).
A relative strength or weakness of a relationship can be interpreted depending on a particular user content presentation. For example, it may be determined that weak relationships are more correlated with interest, while strong relationships are more correlated with social connections. Thus, an interest-based timeline (e.g., a discover timeline) can weight weak relationships more heavily, while a social-based timeline can weight strong relationships more heavily.
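That weighting idea can be sketched as follows, assuming the 0-through-10 strength scale above; the linear weighting is one possible choice for illustration, not the platform's actual formula.

```kotlin
// Sketch: interest-based timelines weight weak relationships more heavily,
// social-based timelines weight strong ones.
enum class TimelineKind { INTEREST, SOCIAL }

fun relationshipWeight(strength: Int, kind: TimelineKind): Double = when (kind) {
    TimelineKind.INTEREST -> (10 - strength) / 10.0 // weak relationships score higher
    TimelineKind.SOCIAL -> strength / 10.0          // strong relationships score higher
}

fun main() {
    println(relationshipWeight(2, TimelineKind.INTEREST)) // 0.8
    println(relationshipWeight(2, TimelineKind.SOCIAL))   // 0.2
}
```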
In addition, a strength of relationship can be used to establish a negative correlation. For example, if a user is blocking a particular account, then the messaging platform 1400 can also block messages from accounts with strong relationships to the particular account and messages from accounts to which the particular account has a strong relationship.
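A sketch of that negative correlation: blocking one account also blocks accounts with a sufficiently strong relationship to it, in either direction. The strength threshold and the map-based adjacency representation are assumptions for illustration.

```kotlin
// Expand a block on one account to strongly related accounts; names are hypothetical.
fun expandedBlockSet(
    blocked: String,
    strengths: Map<Pair<String, String>, Int>, // (from, to) -> strength on a 0-10 scale
    threshold: Int = 8
): Set<String> {
    val related = strengths
        .filter { (pair, strength) ->
            strength >= threshold && (pair.first == blocked || pair.second == blocked)
        }
        .flatMap { (pair, _) -> listOf(pair.first, pair.second) }
    return (related + blocked).toSet()
}

fun main() {
    val strengths = mapOf(
        ("troll" to "ally") to 9,    // strong relationship from the blocked account
        ("fan" to "troll") to 10,    // strong relationship to the blocked account
        ("troll" to "stranger") to 2 // weak: not swept into the block
    )
    println(expandedBlockSet("troll", strengths)) // [troll, ally, fan]
}
```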
The example computing device 1500 includes a processing device (e.g., a processor) 1502, a main memory 1504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1506 (e.g., flash memory, static random access memory (SRAM)) and a data storage device 1518, which communicate with each other via a bus 1530.
Processing device 1502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 1502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 1502 is configured to execute instructions 1526 (e.g., instructions for an application ranking system) for performing the operations and steps discussed herein.
The computing device 1500 may further include a network interface device 1508 which may communicate with a network 1520. The computing device 1500 also may include a video display unit 1510 (e.g., a liquid crystal display (LCD), a light-emitting diode (LED), or organic light emitting diode (OLED) display), an alphanumeric input device 1512 (e.g., a keyboard), a cursor control device 1514 (e.g., a trackball, a trackpad, or a mouse) and a sound generation device 1516 (e.g., a speaker). In one implementation, the video display unit 1510, the alphanumeric input device 1512, and the cursor control device 1514 may be combined into a single component or device (e.g., an LCD touch screen).
The data storage device 1518 may include a computer-readable storage medium 1528 on which is stored one or more sets of instructions 1526 (e.g., instructions for the application ranking system) embodying any one or more of the methodologies or functions described herein. The instructions 1526 may also reside, completely or at least partially, within the main memory 1504 and/or within the processing device 1502 during execution thereof by the computing device 1500, the main memory 1504 and the processing device 1502 also constituting computer-readable media. The instructions may further be transmitted or received over a network 1520 via the network interface device 1508.
While the computer-readable storage medium 1528 is shown in an example implementation to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media and magnetic media.
In the above description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that implementations of the disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description.
Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “identifying,” “determining,” “calculating,” “updating,” “transmitting,” “receiving,” “generating,” “changing,” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Implementations of the disclosure also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memory, or any type of media suitable for storing electronic instructions.
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an implementation” or “one implementation” throughout is not intended to mean the same implementation unless described as such. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The above description sets forth numerous specific details such as examples of specific systems, components, methods and so forth, in order to provide a good understanding of several implementations of the present disclosure. It will be apparent to one skilled in the art, however, that at least some implementations of the present disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present disclosure. Thus, the specific details set forth above are merely examples. Particular implementations may vary from these example details and still be contemplated to be within the scope of the present disclosure.
It is to be understood that the above description is intended to be illustrative and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the aspects enumerated below, along with the full scope of equivalents to which such aspects are entitled.
This application is a non-provisional of, and claims priority to, U.S. Patent Application No. 62/108,533, filed on Jan. 27, 2015, entitled “VIDEO CAPTURE AND SHARING” which is incorporated by reference herein in its entirety.