The present application relates to systems and methods for managing a presentation service, and more particularly, to systems and methods for automatically displaying presentation contents on a display device based on identification information of a user captured by a terminal device associated with the display device.
A presentation is a common form of communication and is routinely used in meetings, conferences, conventions, etc. In a typical presentation, a presenter conveys information to an audience, usually by a speech accompanied by presentation materials, such as a series of visual exhibits, which are often projected by a projector onto a large screen. In order to show the visual exhibits using the projector, electronic file(s) containing the visual exhibits need to be loaded onto a computer connected to the projector prior to the presentation. In current presentation systems, the overall presentation experience is often not smooth, particularly in the preparation stage (e.g., connecting a presentation device to the projector, finding the right presentation deck to present, etc.) and the transition stage (e.g., switching presentation devices or presentation decks for different presenters). Many glitches can occur during presentation setup and transition using current systems. For example, when a single presenter presents using a personal device (e.g., a laptop computer) that needs to be coupled to the projector, issues such as unmatched connection adaptors, unmatched display setup (e.g., unmatched display resolution), limited battery life, etc., may arise. In cases with multiple presenters, the chance of such glitches likely increases due to, for example, repeated switching of presentation devices. As a result, presentation time is wasted, and the effect of the presentation is compromised.
One workaround to reduce the occurrences of the above-mentioned glitches is to use a dedicated presentation device and preload presentation materials before the meeting, conference, etc. This workaround may avoid onsite switching of presentation devices to avoid potential issues associated therewith. However, in cases with multiple presenters, the presenters may still experience difficulties in finding and selecting the right presentation materials corresponding to their respective presentation topics. Thus, a dedicated person is required to manually organize the presentation materials to ensure that they are properly ordered on the dedicated device and may even need to manually switch the presentation materials during the presentations.
In one aspect, a system for managing a presentation service is provided. The system may include at least one memory storing computer-executable instructions and at least one processor in communication with the memory. The computer-executable instructions, when executed by the processor, cause the processor to perform operations. The operations may include receiving, from a terminal device, identification information of a user. The operations may also include determining whether the user enrolls in the presentation service based on the identification information. In response to a determination that the user enrolls in the presentation service, the operations may include selecting presentation information associated with the user in a content database. The operations may further include causing a display device associated with the terminal device to display at least part of the presentation information.
In another aspect, a method for managing a presentation service is provided. The method may include receiving, from a terminal device, identification information of a user. The method may also include determining whether the user enrolls in the presentation service based on the identification information. In response to a determination that the user enrolls in the presentation service, the method may include selecting presentation information associated with the user in a content database. The method may further include causing a display device associated with the terminal device to display at least part of the presentation information.
In a further aspect, a non-transitory computer-readable medium storing instructions is provided. The instructions, when executed by at least one processor, cause the processor to perform a method for managing presentation services. The method may include receiving, from a terminal device, identification information of a user. The method may also include determining whether the user enrolls in the presentation service based on the identification information. In response to a determination that the user enrolls in the presentation service, the method may include selecting presentation information associated with the user in a content database. The method may further include causing a display device associated with the terminal device to display at least part of the presentation information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
Embodiments of the present disclosure provide a presentation service to address issues and glitches often experienced during a presentation. In some embodiments, the presentation service may use a server to store presentation information and automatically select the correct presentation information based on identification information of a user (e.g., a presenter, speaker, performer, or the like). The server may be coupled to at least one terminal device configured to capture the identification information. A user may preload (e.g., upload via a web interface, email to a dedicated service account, etc.) his/her presentation information to the server in advance of the meeting, conference, etc. When the user is about to present, the user may simply walk to the stage or otherwise signal the terminal device in the meeting room, which may capture identification information of the user. Based on the identification information, the server may determine whether the user enrolls in the presentation service, and if so, automatically select corresponding presentation information associated with the user. The selected presentation information may be downloaded to the terminal device, which may in turn show the presentation information on a connected display device. In some embodiments, the server may generate a media stream based on the selected presentation information and send the media stream to a networked display device for display. Embodiments of the disclosure avoid the need to switch presentation devices and manually organize/select presentation information, thereby improving presentation efficiency and enhancing the presentation experience.
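For illustration only, the service flow described above (capture identification information, check enrollment, select content, direct it to the associated display) may be sketched as follows. All function and variable names here are hypothetical and are not part of the disclosure:

```python
# Illustrative sketch only; names (handle_identification, enrolled_users,
# content_db, display_for_terminal) are assumptions for exposition.

def handle_identification(terminal_id, identification_info,
                          enrolled_users, content_db, display_for_terminal):
    """Return a display instruction if the user is enrolled, else None."""
    user_id = enrolled_users.get(identification_info)  # enrollment check
    if user_id is None:
        return None  # user not enrolled: the service stops for this user
    presentation = content_db.get(user_id)  # select associated content
    display_id = display_for_terminal[terminal_id]  # association information
    # In a real system, the file or a media stream would be sent here.
    return "show {} on {}".format(presentation, display_id)
```

In this sketch, the association between a terminal device and a display device is modeled as a simple lookup table, standing in for the association information maintained by the server.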
As used herein, a “presentation” refers to any suitable form of conveying or communicating information, including, for example, a speech, a talk, a performance, a product demonstration, a show, an announcement, an introduction, a lecture, a debate, or the like. “Presentation information” refers to any information related to a presentation, including, for example, a slide deck, a video, an audio recording, a piece of music, a movie clip, a document, a spreadsheet, or any other form of materials that assist in conveying or communicating the topic(s), idea(s), and/or content(s) of the presentation. In some embodiments, presentation information may be compiled or otherwise organized in an electronic file, such as a PowerPoint file developed by Microsoft, a Keynote file developed by Apple, a PDF file developed by Adobe, etc.
Terminal device 120/130 may be used as a frontend device configured to capture identification information of a user. The identification information may be used to identify a specific user and/or determine whether the user enrolls in or registers to the presentation service. In some embodiments, one or more terminal devices 120/130 may communicate with server 102 via communication links such as network connections.
The one or more display devices 122/132 may include a projector, a TV, an LED display panel, or the like. As shown in
In some embodiments, a display device may be associated with a terminal device, or vice versa. For example, display device 122 may be directly connected to terminal device 120, thereby associated with terminal device 120. In this case, server 102 may or may not store association information between terminal device 120 and display device 122. In another example, terminal device 130 and display device 132 may be collocated in a meeting room, a conference hall, etc., such that terminal device 130 may capture identification information of a user, based on which presentation information can be displayed on display device 132. In this case, display device 132 may be associated with terminal device 130 through association information maintained by server 102, regardless of whether terminal device 130 can establish a local connection with display device 132. In some embodiments, association information may be obtained by server 102 during a registration process of a terminal device (e.g., with information of its associated display device(s)) and/or a display device (e.g., with information of its associated terminal device(s)).
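The association information described above can be pictured, purely as an illustration, as a table that server 102 populates during device registration. The function name and data layout below are hypothetical:

```python
# Illustrative sketch of association information maintained by a server
# such as server 102; register_association is a hypothetical helper.

def register_association(associations, terminal_id, display_id):
    """Record that a display device is associated with a terminal device,
    e.g., during a registration process of either device."""
    associations.setdefault(terminal_id, set()).add(display_id)
    return associations
```

A terminal device may thus be associated with one or more collocated display devices, and the server may consult this table regardless of whether the two devices share a local connection.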
Terminal device 120/130 may be implemented as a stand-alone hardware device, an add-on hardware device, or an integrated hardware component of another device. For example, terminal device 130 may be a stand-alone device separate from display device 132. In another example, terminal device 120 may be an add-on device to display device 122 or integrated as part of display device 122. In some embodiments, a display device may be configured to implement the functionalities of a terminal device. For example, one or more functions of terminal device 120 may be implemented as built-in functions of display device 122. In some embodiments, a terminal device may be implemented by installing a software application to a general-purpose computer, such as a desktop, a laptop, a workstation, a smart phone, a tablet, or the like. In this way, part or all of the functions of terminal device 120 may be implemented by executing the software application.
As shown in
In some embodiments, ID server 112 may include an identity database that stores identification information related to the users of the presentation service disclosed herein, such as various types of user credentials (e.g., user ID, user name, login ID, access code, etc.) and/or biological features of the users. The identification information may be stored in the identity database as records, which may be queried, retrieved, compared with, and/or matched to identification information captured by terminal device 120/130. For example, ID server 112 may include a matching engine configured to match a user credential and/or a biological feature obtained based on identification information of a user captured by terminal device 120/130 with records stored in the identity database. In some embodiments, terminal device 120 may extract the biological feature (e.g., vocal feature, facial feature, fingerprint pattern, iris feature, etc.) from the captured identification information (e.g., voice, video, etc.). In other embodiments, terminal device 120/130 may send the captured identification information to ID server 112, and ID server 112 may extract the biological feature based on the captured identification information. The extracted biological feature may be used for identifying the user, for example, determining whether the user enrolls in the presentation service managed by server 102. For example, the matching engine may match the extracted biological feature against all previously collected biological features related to the user. To improve the efficiency of this matching, ID server 112 may retrieve corresponding biological features from the identity database based on, for example, known identities of multiple users, and then match the extracted biological feature against a smaller set of biological features.
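One common way such a matching engine may operate, shown here purely as an illustrative sketch, is to represent each biological feature as a numeric vector and compare an extracted feature against stored records by a similarity score. The vector representation, the cosine-similarity measure, and the threshold value are assumptions for exposition, not details of the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_feature(extracted, records, threshold=0.9):
    """Return the user ID whose stored feature best matches the extracted
    feature above a threshold, or None if no record matches."""
    best_id, best_score = None, threshold
    for user_id, stored in records.items():
        score = cosine_similarity(extracted, stored)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id
```

Restricting `records` to a pre-filtered candidate set corresponds to the efficiency improvement described above, where the extracted feature is matched against a smaller set of stored features.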
In some embodiments, the matching engine may be implemented at terminal device 120/130 (e.g., one or more cameras, one or more microphones, or the like) and/or at presentation server 110.
Content server 114 may include a content database that manages presentation information such as electronic files. In some embodiments, content server 114 may also manage metadata related to the files. Content server 114 may provide APIs for retrieving, uploading, or downloading files. The retrieving API may receive an authenticated ID of a user, date and time, one or more identifications of terminal devices, and other metadata, and may use the received information to select one or more target files associated with the user. The selected target file(s) may be sent to rendering server 116 and/or back to a terminal device (e.g., terminal device 120). In embodiments in which rendering server 116 is provided, rendering server 116 may receive the target file(s) from the retrieving API and open and render the content therein. Rendering server 116 may also permit user interactions (e.g., advancing slides, returning to previous slides, playing video and/or audio, operating audience polls, etc.) during the presentation. In some embodiments, rendering server 116 may automatically launch an appropriate application to open a file based on, for example, the file suffix of a target file. For example, rendering server 116 may launch Microsoft PowerPoint for handling a .pptx file, Microsoft Word for a .docx file, Adobe Acrobat Reader for a .pdf file, or the like. In some embodiments, a similar rendering engine may additionally or alternatively be implemented on a terminal device or a display device. For example, when display device 132 includes a rendering engine, display device 132 may be directly connected to presentation server 110 even if rendering server 116 is not present.
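The selection logic of the retrieving API can be sketched, for illustration only, as a filter over file records using the authenticated user ID and the request time. The record fields (`user_id`, `talk_start`, `talk_end`) and the function name are hypothetical assumptions:

```python
# Illustrative sketch of a retrieving API's selection step; field and
# function names are assumptions, not part of the disclosure.
from datetime import datetime

def retrieve_files(content_db, user_id, now):
    """Select target file records for a user, preferring files whose
    designated talk time window covers the request time."""
    candidates = [rec for rec in content_db if rec["user_id"] == user_id]
    timely = [rec for rec in candidates
              if rec["talk_start"] <= now <= rec["talk_end"]]
    return timely or candidates  # fall back to all of the user's files
```

Using the request time as a filter reflects the idea that metadata such as a designated talk time can raise the probability of selecting the correct file when a user has multiple decks stored on the server.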
Terminal device 120/130 may include hardware components configured to capture certain identification information related to a user, such as a smartcard reader that directly retrieves a user credential, biological information acquisition device(s) such as a microphone and/or a camera to capture a user's voice and/or image to extract vocal features and/or facial features, or the like. These biological features may be used to identify the user and obtain a unique ID associated with the user. As discussed above, the identification process may occur at the terminal device side and/or at the server side. In some embodiments, the user credential may be directly obtained by terminal device 120/130 and sent to ID server 112. In some embodiments, the biological features may be sent to ID server 112.
In some embodiments, one or more terminal devices may be connected to server 102 via network connections. Server 102 may assign a unique ID to each connected terminal device. A terminal device may further be associated with one or more collocated display devices. A display device may be locally connected (e.g., wired and/or wirelessly) to the terminal device(s) and/or directly connected to presentation server 110. In embodiments where a display device, such as display device 132, is directly connected to presentation server 110, the collocated terminal device (e.g., terminal device 130) may be explicitly associated with the display device, and presentation server 110 may maintain the corresponding association information. A terminal device may further be equipped with an operating system and optionally one or more applications in communication with presentation server 110. One or more of the applications may further handle files sent from presentation server 110 in one or more formats and/or may manage user interactions during the presentation process. Accordingly, terminal device 120/130 may include, at least in part, a rendering engine similar to rendering server 116.
Terminal device 120/130 may be aggregated or disaggregated and may include one or more dedicated devices and/or be integrated with one or more other devices. For example, terminal device 120 may comprise a dedicated computer with an internal and/or external camera and/or a microphone, a hardware module integrated with display device 122, etc.
The integration or separation of component servers may affect the information flow of identification information related to the user. For example, in the embodiment shown in
In the embodiment shown in
As shown in
Communication interface 320 may be configured to send information to and receive information from other components of system 100 or 200 via communication links indicated by arrowed lines shown in
Processor 310 may include one or more processing devices configured to perform functions of the disclosed methods. Processor 310 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, graphic processor, or microcontroller. In some embodiments, processor 310 may include a single core or multiple cores executing parallel processes simultaneously. For example, processor 310 can be a single-core processor configured with virtual processing technologies. In certain embodiments, processor 310 uses logical processors to simultaneously execute and control multiple processes. Processor 310 can implement virtual machine technologies, or other known technologies to provide the ability to execute, control, run, manipulate, and store multiple software processes, applications, programs, etc. In another embodiment, processor 310 may include a multiple-core processor arrangement (e.g., dual core, quad core, etc.) configured to provide parallel processing functionalities that allow server 300 to execute multiple processes simultaneously. As discussed in further detail below, processor 310 may be specially configured with one or more modules for performing method steps and functions of the disclosed embodiments. It is appreciated that other types of processor arrangements can be implemented that provide for the capabilities disclosed herein.
Memory 330 may include a volatile or non-volatile, magnetic, semiconductor-based, tape-based, optical, removable, non-removable, or other type of storage device or tangible and/or non-transitory computer-readable medium that stores one or more executable programs, such as a presentation service application and an operating system. The programs may include software that, when executed by processor 310, performs functions of one or more modules to be discussed in greater detail below. The programs may also include communication software that, when executed by processor 310, provides communications using communication interface 320, such as web browser software, tablet or smart handheld device networking software, etc.
As shown in
Processor 310 may include a content module 314 configured to implement the functions of content server 114. For example, content module 314 may be configured to receive presentation information from a user through user uploading, store the presentation information in a content database (e.g., in memory 330), search for a particular record of presentation information in the content database based on user identity or identification information of the user, and select and retrieve the presentation information associated with the user from the content database.
Processor 310 may include a rendering module 316 configured to implement the functions of rendering server 116. For example, rendering module 316 may be configured to automatically launch an application to open a file containing the presentation information associated with a user based on, for example, the file suffix of the file. In another example, rendering module 316 may be configured to facilitate user interactions (e.g., advancing slides, returning to previous slides, playing video and/or audio, operating audience polls, etc.) during the presentation.
It is noted that although
In some embodiments, processor 440 may include a rendering engine 450 configured to perform rendering and/or user interaction functions, similar to rendering module 316. In some embodiments, terminal device 400 may include a display device 460 that implements functions of display device 122/132. In some embodiments, display device 122/132 may be equipped with one or more components of terminal device 400 shown in
In some embodiments, the overall work flow of system 100/200 may include two phases: a preparation phase and a presentation phase.
In the preparation phase, a user may upload his/her presentation information to server 110/210, for example, via file upload APIs. Referring to
In step 550, processor 310 may request the user to send some metadata. The metadata may be used to retrieve the user's file(s) during the presentation phase. In one example, the metadata may include a company ID associated with the user, a designated talk time, an indicator of audience scope (e.g., public, open only to company employees, etc.), or the like. The metadata may be used to organize the user's file(s) and may increase the probability of selecting a correct file from a plurality of files during file retrieval, since a user may have multiple presentation decks stored in presentation server 110/210. In some embodiments, the designated talk time may also serve as an indicator of presentation file expiry such that the user's file(s) can be deleted when they expire.
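The expiry behavior mentioned above can be pictured, as an illustrative sketch only, as a periodic purge that drops records whose designated talk time has passed. The record layout and function name are hypothetical:

```python
# Illustrative sketch; purge_expired and the talk_end field are
# assumptions used to picture metadata-driven file expiry.
from datetime import datetime

def purge_expired(content_db, now):
    """Keep only records whose designated talk time has not yet passed."""
    return [rec for rec in content_db if rec["talk_end"] >= now]
```

A server might run such a purge on a schedule so that preloaded presentation files do not accumulate after their designated talk times.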
In step 560, processor 310 may receive presentation information such as presentation files provided by the user. In some embodiments, the user may upload the file(s) via a web interface or may email the file(s) (e.g., as an attachment) to a designated email account (e.g., with a specific subject line), optionally with metadata in one or more specific content formats.
In step 610, processor 310 may receive identification information related to a user captured by one or more terminal devices 120/130. For example, a user may walk up to the stage and/or speak a predetermined hot word (e.g., “Hello Melo”) such that his/her face may be captured and/or voice recorded by the terminal device(s). In step 620, processor 310 may extract biological features such as vocal and/or facial features from the captured identification information. In step 630, processor 310 may match the biological features with previously obtained biological features (e.g., using ID module 312 and/or comparing the biological features against a local database). If a matching biological feature is located, processor 310 may obtain a unique ID for the user and determine that the user is enrolled in step 640. If the captured biological features do not match any record, then processor 310 may determine that the user is not enrolled, and method 600 proceeds to step 650, where the presentation service stops for the user. Optionally, the requesting time and/or the ID of the terminal device may be captured. Such information may be sent to presentation server 110/210 to facilitate retrieving the user's file(s). In another example, the vocal and/or facial features may be extracted at the terminal device side. In such an example, the vocal and/or facial features may be sent to presentation server 110/210.
In step 660, processor 310 may select presentation information associated with the user from the content database. If multiple candidate files exist, the file list may be obtained.
In step 670, the selected file may be retrieved from the content database and sent to the terminal device(s). If multiple candidate files exist, the file list may be returned to the terminal device(s) such that the user may select the correct file from the list (e.g., via a voice or gesture interaction). Depending on whether the display device is locally connected to the terminal device (e.g., display device 122 and terminal device 120) and if the terminal device is capable of rendering the retrieved file, the file may be downloaded by the terminal device and processed locally (e.g., shown by terminal device 120 and display device 122 in
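Steps 660 and 670 can be summarized, as an illustrative sketch only, by a small dispatch rule: a single matching file is selected for display, while multiple candidates are returned as a list for the user to choose from. The function name and the dictionary shape are hypothetical:

```python
# Illustrative sketch of the single-candidate vs. multi-candidate
# handling in steps 660/670; names are assumptions for exposition.

def select_or_list(candidates):
    """Auto-select a lone match; otherwise return the list so the user
    can pick the correct file (e.g., via voice or gesture interaction)."""
    if len(candidates) == 1:
        return {"action": "display", "file": candidates[0]}
    return {"action": "choose", "files": candidates}
```

Whichever branch is taken, the resulting file may then be rendered locally by the terminal device or remotely by the rendering server, depending on the connectivity and capabilities described above.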
In step 680, processor 310 may process user interaction using rendering module 316. The user interactions may include voices and/or gestures, in addition to conventional mouse, keyboard, remote-control, and other existing input devices. In embodiments where terminal device 120/130 has a microphone and a camera to capture the user's voice and gestures, rendering module 316 may utilize advances in voice and gesture recognition (e.g., using deep neural networks with pre-trained models) such that natural voice- and gesture-based interaction may be used.
In some embodiments, the presentation service may be implemented in a centralized manner, e.g., as one or more cloud-based services. For example, a business or entity may deploy the presentation service in a private cloud and/or using dedicated server(s), thereby using a centralized embodiment. In some embodiments, the presentation server and the terminal device(s) may be integrated together to form an integrated presentation server (IPS). Such an IPS device may be connected to a local network and may be shared by local network users. Such an embodiment may be preferable, for example, for a small business having a limited number of display devices.
Multiple IPS devices may be aggregated through certain distributed protocols (e.g., for handling database replication, file storage, and retrieval), thereby forming a distributed presentation service such that users may use the service across multiple meeting rooms/display devices, as shown in
Another aspect of the disclosure is directed to a non-transitory computer-readable medium storing instructions which, when executed, cause one or more processors to perform the methods, as discussed above. The computer-readable medium may include volatile or non-volatile, magnetic, semiconductor-based, tape-based, optical, removable, non-removable, or other types of computer-readable media or computer-readable storage devices. For example, the computer-readable medium may be the storage device or the memory module having the computer instructions stored thereon, as disclosed. The computer-readable medium may be a disc, a flash drive, or a solid-state drive having the computer instructions stored thereon.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed system and related methods. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed system and related methods.
It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.
This application claims the benefits of priority to U.S. Provisional Application No. 62/618,700, filed Jan. 18, 2018, the entire contents of which are expressly incorporated herein by reference.