Computer systems generally employ a display or multiple displays that are mounted on a support stand and/or incorporated into a component of the computer systems. Users may view files displayed on the displays while providing user inputs using devices such as a keyboard and a mouse.
According to examples of the present disclosure, user experience of computer system users may be enhanced by employing multiple displays that facilitate a more intuitive way of interacting with user interface elements representing files. In more detail,
At block 110, files are received by the computer system. According to examples of the present disclosure, the terms “received”, “receiving”, “receive”, and the like, may include the computer system accessing the files from a computer-readable storage medium (e.g., memory device, cloud-based shared storage, etc.), or obtaining the files from a remote computer system. For example, the files may be accessed or obtained via any suitable wired or wireless connection, such as WI-FI, BLUETOOTH®, Near Field Communication (NFC), wide area network (e.g., Internet) connection, electrical cables, electrical leads, etc.
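By way of illustration only, block 110 may be backed by logic along the following lines, which assumes that files are accessed from a local folder or obtained over an Internet connection; the file extensions and helper names are examples rather than part of the present disclosure.

```python
import pathlib
import urllib.request

# Example media types; any suitable file types may be used.
MEDIA_EXTENSIONS = {".jpg", ".png", ".mp4", ".mp3"}

def receive_local_files(directory):
    """Access files from a computer-readable storage medium (block 110)."""
    return [path for path in pathlib.Path(directory).iterdir()
            if path.suffix.lower() in MEDIA_EXTENSIONS]

def receive_remote_file(url, destination):
    """Obtain a file from a remote computer system via a wired or wireless connection."""
    with urllib.request.urlopen(url) as response, open(destination, "wb") as out:
        out.write(response.read())
    return destination
```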
At block 120, a first user interface that includes multiple user interface elements is displayed on the first display of the computer system. The user interface elements represent the files received at block 110.
At block 130, a first user gesture selecting a selected user interface element from the multiple user interface elements is detected. At block 140, in response to detecting the first user gesture, a second user interface is generated and displayed on the second display of the computer system. The second user interface may include a detailed representation of the file represented by the selected user interface element.
At block 150, a second user gesture interacting with the selected user interface element is detected. At block 160, in response to detecting the second user gesture, the first user interface on the first display is updated to display the interaction with the selected user interface element. The terms “interaction”, “interact”, “interacting”, and the like, may refer generally to any user operation for any suitable purpose, such as organizing, editing, grouping, moving or dragging, resizing (e.g., expanding or contracting), rotating, updating attribute information, etc.
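A minimal sketch of how blocks 130 to 160 might be wired together is shown below; the display objects and their render/update hooks are assumptions for illustration, not a definitive implementation.

```python
class DualDisplayController:
    """Routes detected user gestures to the first and second displays (blocks 130-160)."""

    def __init__(self, first_display, second_display):
        self.first_display = first_display    # assumed to expose update_element()
        self.second_display = second_display  # assumed to expose render_detail()
        self.selected_element = None

    def on_selection_gesture(self, element):
        # Blocks 130-140: a first user gesture selects an element, so a detailed
        # representation of its file is generated and shown on the second display.
        self.selected_element = element
        self.second_display.render_detail(element.file_path)

    def on_interaction_gesture(self, interaction):
        # Blocks 150-160: a second user gesture interacts with the selected element
        # (e.g., move, resize, rotate), so the first user interface is updated.
        if self.selected_element is not None:
            interaction.apply(self.selected_element)
            self.first_display.update_element(self.selected_element)
```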
Example process 100 may be used for any suitable application. For example, the computer system may be used as a media hub to facilitate intuitive and interactive organization of media files, such as image files, video files, audio files, etc. The multiple user interface elements displayed on the first display may be thumbnails of the media files, and the detailed representation may be a high quality representation of the file represented by the selected user interface element (e.g., high resolution image or video).
The terms “user gesture”, “first user gesture”, “second user gesture”, or the like, may refer generally to any suitable operation performed by a user on the first display, or in proximity to the first display, such as a tap gesture, double-tap gesture, drag gesture, release gesture, click or double-click gesture, drag-and-drop gesture, etc. For example, a user gesture may be detected using any suitable approach, such as via a touch sensitive surface of the first display, etc.
The computer system employing process 100 may be used in a standalone mode, examples of which will be described in further detail with reference to
Computer System
To facilitate an ergonomic way for file viewing and interaction, first display 210 and second display 220 may be disposed substantially perpendicular to each other. For example, first display 210 may be disposed substantially horizontally with respect to a user for interaction. In this case, first display 210 may have a touch sensitive surface that replaces input devices such as a keyboard, mouse, etc. A user gesture detected via the touch sensitive surface may also be referred to as a “touch gesture.” Any suitable touch technology may be used, such as resistive, capacitive, acoustic wave, infrared (IR), strain gauge, optical, acoustic pulse recognition, etc. First display 210, also known as a “touch mat” and “multi-touch surface”, may be implemented using a tablet computer with multi-touch capabilities.
Second display 220 may be disposed substantially vertically with respect to the user, such as by mounting second display 220 onto a substantially upright member for easy viewing by the user. Second display 220 may be a touch sensitive display (like first display 210), or a non-touch sensitive display implemented using any suitable display technology, such as liquid crystal display (LCD), light emitting polymer display (LPD), light emitting diode (LED) display, etc.
First display 210 displays first user interface 212, and second display 220 displays second user interface 222. First user interface 212 includes user interface elements 214-1 to 214-3, which will also be collectively referred to as “user interface elements 214” or individually as a general “user interface element 214.” User interface elements 214 may be any suitable elements that represent files and are selectable for interaction, such as thumbnails, icons, buttons, models, low-resolution representations, or a combination thereof. The term “selectable” may generally refer to user interface element 214 being capable of being chosen, from multiple user interface elements 214, for the interaction.
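As a concrete illustration only, a user interface element 214 could be modelled by a small record such as the following; the field names are assumptions chosen for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class UserInterfaceElement:
    """Illustrative model of a selectable user interface element 214."""
    file_path: str                                   # file represented by the element
    thumbnail: bytes = b""                           # low-resolution representation for display
    position: tuple = (0, 0)                         # placement on first user interface 212
    size: tuple = (120, 90)                          # on-screen size; resizable via gestures
    attributes: dict = field(default_factory=dict)   # e.g., timestamp, location, tags
    selected: bool = False                           # whether currently chosen for interaction
```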
In relation to block 120 in
Content of image or video files may be analysed using any suitable approach, such as using a content recognition engine that employs image processing techniques (e.g., feature extraction, object recognition, etc.). The result of content analysis may be a subject (e.g., a person's face, etc.) or an object (e.g., a landmark, attraction, etc.) automatically recognized from the image or video files. Attribute information of image files with a particular subject may then be updated, such as by adding a tag with the subject's name. Similarly, if a particular landmark (e.g., Eiffel Tower) is recognized, the image files may be tagged with the landmark or associated location (e.g., Paris).
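One possible way to apply such content analysis is sketched below; `recognize_subjects` stands in for a content recognition engine and is assumed to return the names of subjects or landmarks found in a file, with elements shaped like the record sketched above.

```python
def tag_recognized_subjects(elements, recognize_subjects):
    """Update attribute information of elements based on content analysis results."""
    for element in elements:
        # e.g., ["Eiffel Tower"] or ["Alice"]; empty list if nothing is recognized.
        subjects = recognize_subjects(element.file_path)
        if subjects:
            element.attributes.setdefault("tags", []).extend(subjects)
```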
Computer system 200 may then order user interface elements 214 according to the attribute information.
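For example, the ordering might be implemented as a simple sort over the attribute information, as in the following sketch (the attribute names are examples only):

```python
def order_elements(elements, attribute, default=""):
    """Order user interface elements 214 by a piece of attribute information.

    Elements missing the attribute are grouped after those that have it.
    """
    return sorted(
        elements,
        key=lambda element: (attribute not in element.attributes,
                             element.attributes.get(attribute, default)),
    )

# Example usage: order thumbnails chronologically.
# ordered = order_elements(elements, "timestamp")
```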
Although not shown in
In the case of user interface elements 214 representing audio files, metadata and/or content of the audio files may also be analysed to automatically extract attribute information such as genre, artist, album, etc. User interface elements 214 of the audio files may then be ordered based on the extracted attribute information (e.g., according to genre, etc.).
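A grouping of audio elements by extracted genre might look like the sketch below; `extract_audio_attributes` is a stand-in for whichever metadata or content analyser is used.

```python
from collections import defaultdict

def group_audio_elements_by_genre(elements, extract_audio_attributes):
    """Group user interface elements of audio files by extracted genre."""
    groups = defaultdict(list)
    for element in elements:
        # Assumed to return, e.g., {"genre": "Jazz", "artist": ..., "album": ...}.
        attributes = extract_audio_attributes(element.file_path)
        element.attributes.update(attributes)
        groups[attributes.get("genre", "Unknown")].append(element)
    return groups
```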
User Gestures
Referring to blocks 130 to 140 in
Representation 224 may be a detailed or high quality representation, such as a high resolution image, or a snippet of a video or audio that is played on second display 220. In the example in
Further, referring to blocks 150 and 160 in
User gestures 260 may be detected via first display 210 based on contact made by the user, such as using finger or fingers, stylus, pointing device, etc. For example, user gesture 260 moving selected user interface element 214-3 may be detected by determining whether contact with first display 210 has been made at the first position to select user interface element 214-3 (e.g., detecting a “finger-down” event), whether the contact has been moved (e.g., detecting a “finger-dragging” event), whether the contact has ceased at the second position (e.g., detecting a “finger-up” event), etc.
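The event sequence described above could be tracked with a small detector such as the following sketch; the event names follow the description, while the hit-testing and move callbacks are assumed.

```python
class DragGestureDetector:
    """Detects a drag gesture on first display 210 from touch events."""

    def __init__(self, hit_test, on_move):
        self.hit_test = hit_test        # maps a touch position to a UI element (or None)
        self.on_move = on_move          # called with (element, new_position)
        self.active_element = None

    def handle_event(self, event_type, position):
        if event_type == "finger-down":
            # Contact made at the first position: select the element under the touch.
            self.active_element = self.hit_test(position)
        elif event_type == "finger-dragging" and self.active_element is not None:
            # Contact moved: drag the selected element along with the touch.
            self.on_move(self.active_element, position)
        elif event_type == "finger-up":
            # Contact ceased at the second position: complete the drag.
            self.active_element = None
```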
In the example in
Collaboration Mode
As will be explained with reference to
The terms “local” and “remote” are used herein arbitrarily, for convenience and clarity in identifying the computer systems and their users that are involved in the collaboration mode. The roles of local computer system 200A and remote computer system 200B may be reversed. Further, the designation of either “A” or “B” after a given reference numeral only indicates that the particular component being referenced belongs to local computer system 200A or remote computer system 200B, respectively. Although two computer systems 200A and 200B are shown in
When operating in the collaboration mode, users may view the same user interfaces, i.e. local first user interface 212A corresponds with (e.g., mirrors) remote first user interface 212B, and local second user interface 222A with remote second user interface 222B. To enhance user interactivity during the collaboration mode, sensor unit 240A may capture information of user gestures 260 detected at local computer system 200A for projection at remote computer system 200B, and vice versa. This allows the users to provide real-time feedback through projector 230A/230B.
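For illustration, gesture information could be exchanged between the two computer systems as small length-prefixed JSON messages over a network connection, as in the sketch below; the transport and field names are assumptions, not a prescribed protocol.

```python
import json

def send_gesture_info(connection, gesture_type, position, element_id=None):
    """Send captured user-gesture information to the peer computer system."""
    message = json.dumps({
        "type": gesture_type,      # e.g., "finger-down", "finger-dragging", "finger-up"
        "position": position,      # coordinates on the first display
        "element": element_id,     # identifier of the selected element, if any
    }).encode("utf-8")
    connection.sendall(len(message).to_bytes(4, "big") + message)

def receive_gesture_info(connection):
    """Receive gesture information at the peer for projection onto its first display."""
    length = int.from_bytes(connection.recv(4), "big")
    payload = b""
    while len(payload) < length:
        payload += connection.recv(length - len(payload))
    return json.loads(payload.decode("utf-8"))
```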
In more detail, sensor unit 240A may capture information of user gesture 260 at local computer system 200A for transmission to remote computer system 200B. Projector 230B at remote computer system 200B may then project an image of detected user gesture 260 onto first display 210B (see “Projected user gesture 510” shown in dotted lines in
Projector 230A at local computer system 200A may then project an image of the feedback gesture 520 onto first display 210A (see “Projected feedback gesture 530” in
Sensor unit 240 may include any suitable sensor or sensors, such as a depth sensor, a three dimensional (3D) user interface sensor, an ambient light sensor, etc. In some examples, the depth sensor may gather information to identify a user's hand, such as by detecting its presence, shape, contours, motion, 3D depth, or any combination thereof. The 3D user interface sensor may be used for tracking the user's hand. The ambient light sensor may be used to measure the intensity of light of the environment surrounding computer system 200 in order to adjust settings of the depth sensor and/or 3D user interface sensor. Projector 230A/230B may be implemented using any suitable technology, such as digital light processing (DLP), liquid crystal on silicon (LCoS), etc. Light projected by projector 230A/230B may be reflected off a highly reflective surface (e.g., mirror, etc.) onto first display 210A/210B.
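As one possible illustration, the ambient light measurement could drive depth-sensor settings roughly as follows; the thresholds and attribute names are assumptions for the sketch.

```python
def adjust_depth_sensor(ambient_lux, depth_sensor):
    """Adjust depth sensor settings based on measured ambient light intensity."""
    if ambient_lux < 50:        # dim environment: longer exposure, higher IR gain
        depth_sensor.exposure = "long"
        depth_sensor.ir_gain = 0.9
    elif ambient_lux < 500:     # typical indoor lighting
        depth_sensor.exposure = "medium"
        depth_sensor.ir_gain = 0.6
    else:                       # bright environment: shorter exposure, lower IR gain
        depth_sensor.exposure = "short"
        depth_sensor.ir_gain = 0.3
```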
To further enhance interaction during the collaboration, camera unit 250A/250B may be used to capture an image or video of the respective users. The captured image or video may then be projected on a 3D object called “wedge” 540A/540B. The “wedge” may be any suitable physical 3D object with a surface on which an image or video may be projected, and may be in any suitable shape and size. An image or video of the local user at local computer system 200A may be captured by camera unit 250A and projected on wedge 540B at remote computer system 200B. Similarly, an image or video of the remote user at remote computer system 200B may be captured by camera unit 250B and projected on wedge 540A at local computer system 200A. Wedge 540A/540B may be implemented using any suitable 3D object on which the captured image or video may be projected. In practice, wedge 540A/540B may be moveable with respect to first display 210A/210B, for example to avoid obstructing user interface elements 214 on first user interface 212A/212B. The position of wedge 540A/540B on first display 210A/210B may be localized using sensors (e.g., in sensor unit 240A/240B and/or wedge 540A/540B) for projector 230A/230B to project the relevant image or video.
At blocks 610 and 620, local computer system 200A receives files and displays first user interface 212A on first display 210A. First user interface 212A includes user interface elements 214 that represent the received files (e.g., media files) and are each selectable for interaction via first display 210A.
At blocks 630 and 640, in response to detecting user gesture 260 selecting and interacting with user interface element 214-3, local computer system 200A updates first user interface 212A based on the interaction. At block 650, local computer system 200A generates and displays second user interface 222A on second display 220A. Second user interface 222A may include representation 224 of the selected user interface element 214-3 (e.g., high quality representation). Information associated with the selection and interaction may be sent to remote computer system 200B, which may then update first user interface 212B and/or second user interface 222B accordingly.
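A rough sketch of the local-side handling of blocks 630 to 660 is given below; `local_ui`, `detail_display`, and `send_info` are assumed interfaces standing in for first user interface 212A, second display 220A, and the transport between the two computer systems.

```python
def handle_local_gesture(gesture, local_ui, detail_display, send_info):
    """Process a detected user gesture at local computer system 200A (blocks 630-660)."""
    element = local_ui.element_at(gesture.position)
    if element is None:
        return
    # Blocks 630-640: apply the interaction and update first user interface 212A.
    local_ui.apply_interaction(element, gesture)
    # Block 650: show the detailed representation on the local second display.
    detail_display.render_detail(element.file_path)
    # Block 660: forward information about the gesture to remote computer system 200B.
    send_info({"type": gesture.type,
               "position": gesture.position,
               "element": element.file_path})
```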
At blocks 660 and 670, local computer system 200A sends information associated with detected user gesture 260 to remote computer system 200B. As discussed with reference to
At remote computer system 200B, the received information may then be processed and user gesture 260 projected onto first display 210B using projector 230B (see projected user gesture 510 in
At block 680, remote computer system 200B sends information associated with feedback gesture 520 to local computer system 200A. At block 690, local computer system 200A may process the received information to project feedback gesture 520 onto first display 210A using projector 230A (see projected feedback gesture 530 in
Computer System
Processor 710 is to perform processes described herein with reference to
Peripherals interface 740 connects processor 710 to first display 210, second display 220, projector 230, sensor unit 240, camera unit 250, and wedge 540 for processor 710 to perform processes described herein with reference to
The techniques introduced above can be implemented in special-purpose hardwired circuitry, in software and/or firmware in conjunction with programmable circuitry, or in a combination thereof. Special-purpose hardwired circuitry may be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), and others. The term ‘processor’ is to be interpreted broadly to include a processing unit, ASIC, logic unit, or programmable gate array etc.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.
Those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure.
Software and/or firmware to implement the techniques introduced here may be stored on a non-transitory computer-readable storage medium and may be executed by one or more general-purpose or special-purpose programmable microprocessors. A “computer-readable storage medium”, as the term is used herein, includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant (PDA), mobile device, manufacturing tool, any device with a set of one or more processors, etc.). For example, a computer-readable storage medium includes recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
The drawings are only illustrations of an example, wherein the units or procedures shown in the drawings are not necessarily essential for implementing the present disclosure. Those skilled in the art will understand that the units in the device in the examples can be arranged in the device as described, or can be alternatively located in one or more devices different from that in the examples. The units in the examples described can be combined into one module or further divided into a plurality of sub-units.
As used herein, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” Also, the term “couple” or “couples” is intended to mean either an indirect or direct connection. Thus, if a first device communicatively couples to a second device, that connection may be through a direct electrical or mechanical connection, through an indirect electrical or mechanical connection via other devices and connections, through an optical electrical connection, or through a wireless electrical connection.
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.