The present application relates, generally, to devices, methods and systems for interacting with media displays, and, more particularly, to devices, systems and methods that provide interactive hardware for supplying display data to a non-interactive display.
Collaborative and interactive work spaces have been made available to remote workers through software that allows for screen sharing and collaborative editing of documents. For instance, a manager can provide an on-line platform for collaborative real-time document editing. In these and similar implementations, each participant must use a computer or tablet device and access the collaborative platform in order to collaborate.
One drawback of the current collaboration systems is that they fail to replicate one of the most common ways of generating collaborative work product: collaboration in front of a whiteboard. When groups use a whiteboard, they are able to sketch ideas, identify links between concepts and manipulate data in an intuitive, natural way.
Present collaborative systems implement the collaborations within a digital space on a small computer screen. Each user, including those currently directing the collaborative work effort, is confined to directing his or her attention to the user interface to record ideas. Additionally, the participants of the collaborative session are grouped around one or more computers and are not able to interact in a group dynamic. As a result, the spontaneity and dynamism of collaborating before a blackboard or whiteboard is lost.
It is with respect to these and other considerations that the disclosure made herein is presented.
In one or more implementations, the present application provides a system and method for displaying content on an electronic display device. A computing device having at least one processor is configured by executing code stored on non-transitory processor readable media. An electronic display device is configured to display content received directly or indirectly from the computing device. A wireless output device is configured to emit light signals and sound signals in response to an actuation of the wireless output device. Further, a sensor component is configured with a light receptor to receive the light signals, a sound receptor to receive the sound signals, and a communications module configured to communicate with the computing device. The light signals and sound signals from the wireless output device are received by the sensor component and are usable to determine a respective location of the wireless output device relative to output of the electronic display device. The computing device uses the respective location to generate output that is received by the electronic display device, thereby simulating the wireless output device operating as a writing device as a function of the generated output.
Further aspects of the present disclosure will be more readily appreciated upon review of the detailed description of its various implementations, described below, when taken in conjunction with the accompanying drawings, of which:
By way of overview and introduction, the present application includes components in a system and method for simple-to-use and intuitive collaboration. In one or more implementations, a wireless output device (referred to herein, generally, as a “virtual marker”) is formatted in the shape of a writing implement and configured to output sound and light signals while in use, as opposed to a traditional writing tool that deposits, for example, inorganic, organic or special pigment on a substrate. In operation, a user “writes” by pressing the stylus portion of the virtual marker in direct contact with a display device, such as a television or computer monitor, to depress the stylus portion into the virtual marker, thereby causing the virtual marker to emit the sound and light signals. The virtual marker can be configured with a rechargeable battery to provide a power source for the sound and light emitting components.
Although many of the examples and implementations shown and described herein provide the virtual marker as an output device, exclusively, the application is not so limited. In one or more implementations, the virtual marker can be configured with various components, such as a microprocessor, storage memory, a communications module, a location module, a camera, or other components. For example, a processor provided with the virtual marker can be configured by code executing therein to receive data from one or more input sensors, such as of a pressure-sensitive type capable of detecting levels of pressure exerted at a movable tip which extends from the distal end of the virtual marker. In addition (or in the alternative), and as shown and described in greater detail herein, the tip portion of the virtual marker is capable of moving in and out of at least a portion of the inner circumference of the stylus housing (
In one or more implementations of the present application, the audio and light signals emitted by the virtual marker are received by a sensor component and processed to convert the signals to coordinates representing a relative location of the tip of the output device on the display device. The coordinates or, alternatively, representations of the audio and light signals are transmitted to a computing device (e.g., a mobile computing device) that is communicatively coupled, such as wirelessly, to the sensor component. For example, the sensor component can be configured with BLUETOOTH, Wi-Fi or other wireless connectivity, to transmit to the computing device. In operation, the coordinates are usable in one or more software applications operating on the computing device to provide output (e.g., video output) to the display device. In one or more implementations, video output to the display device at least partially represents the “writing” of the output device. As a user traverses the tip or stylus of the virtual marker on the display device at respective positions, lines are displayed on the display device at the respective positions substantially in real-time. Images of the output can be maintained, such as in storable data files in the computing device or in remote storage (e.g., in cloud-based storage), for future access.
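As a non-limiting sketch of the foregoing, the following Python example illustrates one way a software application on the computing device could group incoming coordinate samples into strokes for display substantially in real-time; the class names and the sample format are hypothetical and are used for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Stroke:
    points: List[Tuple[float, float]] = field(default_factory=list)


class StrokeBuilder:
    """Groups incoming (x, y, tip_down) samples into strokes: a new stroke
    begins when the tip transitions from lifted to pressed, and consecutive
    pressed samples extend the current stroke."""

    def __init__(self) -> None:
        self.strokes: List[Stroke] = []
        self._current: Optional[Stroke] = None

    def add_sample(self, x: float, y: float, tip_down: bool) -> None:
        if tip_down:
            if self._current is None:
                self._current = Stroke()
                self.strokes.append(self._current)
            self._current.points.append((x, y))
        else:
            self._current = None  # tip lifted: close the current stroke


# Example: three pressed samples followed by a lift yield one 3-point stroke.
builder = StrokeBuilder()
for x, y, down in [(10, 10, True), (12, 11, True), (15, 13, True), (15, 13, False)]:
    builder.add_sample(x, y, down)
print(len(builder.strokes), len(builder.strokes[0].points))  # -> 1 3
```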
As used herein, a “display device” represents an output device configured to present information in a visual form. A display device may be of several types, such as cathode ray tube (“CRT”), light emitting diode (“LED”) backlit, liquid crystal display (“LCD”), plasma, or any one of a plurality of projection display devices.
In one or more implementations, one or more software applications operating on the computing device can provide functionality in connection with information associated with the respective coordinates as the marker traverses an area on the display device. For example, optical character recognition functionality can be provided for converting a user's writings to machine-encoded text. Other functionality can include live, real-time sharing and conferencing capability.
Thus, the present application can leverage technology configured within the virtual marker (e.g., light and sound emitters), together with the sensor component and the computing device, to provide a new form of live and interactive functionality that can transform any output display into a high-end video conferencing suite complete with an interactive whiteboard.
In one or more implementations, use of a display, such as a monitor, television or other display output device, can be implemented in the present application in various ways, such as via an Internet media extender provided by APPLE TV, ROKU, AMAZON FIRE TV or GOOGLE CHROMECAST. As used herein, an Internet media extender refers, generally, to a category of devices that provide for content to be streamed to a home theater (e.g., television, surround sound devices, and the like). The content can be provided from a remote source, such as a computing device and/or through one or more virtual markers that incorporate, or additionally include, a camera and/or microphone located remotely and communicating over the Internet. The present application facilitates integrating the input capabilities of the virtual marker so as to allow a simulation of directly inputting information or data onto the display device of a home theater (e.g., a television) using a combination of components shown and described herein.
Referring to
Computing devices 108 can have the ability to send and receive data wirelessly, such as across a communication network, and can be equipped with web browser(s), software applications, or other means, to provide received data on the devices. By way of example, computing device 108 may be, but is not limited to, a smartphone or other mobile computing device, a media extender, a personal computer or a tablet. Other computing devices which can communicate over a global computer network, such as palmtop computers, personal digital assistants (PDAs) and mass-marketed Internet access devices such as WebTV, can be used. In addition, the hardware arrangement of the present invention is not limited to devices that are physically wired to the communication network; wireless communication can be provided between wireless devices and data processing apparatuses (e.g., servers). In addition, system 100 can include a computing device 108 that is communicatively coupled to display device 104, such as directly or indirectly via a high-definition multimedia interface (“HDMI”) or other wired or wireless connection.
As shown in a particular and non-limiting example in
As noted herein, the virtual marker 102 can be configured with a function selector 308, which can include a selection mechanism and indicators for the user of the virtual marker 102. For example, the function selector 308 can be used to control the color of a simulated line drawn by the virtual marker 102. In addition, a user can turn, tap or otherwise transmit commands via the virtual marker 102 to control other properties, such as line thickness, style (e.g., dotted, dashed) and brush type simulated by the virtual marker 102. Furthermore, a visual or audio indicator can be provided that allows the user to know in advance a particular setting of the virtual marker 102. For example, an LED or other light changing device is provided to give a visual indication of the type of functionality exhibited by the virtual marker 102. As an additional function, the virtual marker 102 can be used as an eraser, thereby generating data indicating a location on the screen at which to delete content. In one or more implementations, the frequency and/or rate of infrared light signals emitted by the virtual marker 102 can be set to represent one or more settings of the function selector 308. For example, infrared light pulsed at 30 millisecond (ms) intervals represents one color (e.g., blue), while a 40 ms interval represents a different color (e.g., red). Other customization can be similarly provided as a function of the frequency of light pulses.
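By way of a non-limiting illustration only, the following Python sketch shows how a receiving application might map a measured pulse interval to a marker setting, using the example intervals above (30 ms and 40 ms); the table contents, the tolerance value and the function name are hypothetical and are not required by the present application.

```python
# Illustrative sketch only: maps the interval between received infrared
# pulses to a marker setting, using the example values from above
# (30 ms -> blue, 40 ms -> red).
PULSE_INTERVAL_SETTINGS_MS = {
    30: {"color": "blue"},
    40: {"color": "red"},
}


def decode_pulse_interval(interval_ms: float, tolerance_ms: float = 2.0) -> dict:
    """Return the marker setting whose nominal interval is closest to the
    measured interval, provided it falls within the tolerance."""
    nominal = min(PULSE_INTERVAL_SETTINGS_MS, key=lambda t: abs(t - interval_ms))
    if abs(nominal - interval_ms) <= tolerance_ms:
        return PULSE_INTERVAL_SETTINGS_MS[nominal]
    return {}  # unknown interval: leave the current setting unchanged


print(decode_pulse_interval(29.4))  # -> {'color': 'blue'}
```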
In one or more implementations, the virtual marker 102 is configured with a rechargeable battery and is bundled with a charging station 602 (
Without limitation to any theory or design of wireless power transmission, the charging station 602 can include a first induction coil that receives power to create an electromagnetic field within the well of the charging station 602. The virtual marker 102 can be configured to include a second induction coil that can take power provided by the electromagnetic field and convert such energy to electric current to charge the power source of the virtual marker. In this way, the induction coils function as an electrical transformer. The induction coils can be made of any suitable material, such as silver-plated copper or aluminum.
By emitting infrared light and ultrasonic sound, the virtual marker 102 simultaneously provides information relative to the position of the virtual marker 102 on the display device 104. In operation, this can be provided in response to an initial calibration procedure in which the virtual marker 102 is registered to specific points that are displayed on the output display device 104. For example, four points can be displayed on display device 104, and the user taps the virtual marker 102 on the respective points. As the user taps the virtual marker 102 on a point, the marker causes one or more infrared light emitting diodes (“I/R LEDs”) to emit infrared light and an ultrasonic transmitter to emit ultrasonic sound. In one or more implementations, a plurality of I/R LEDs are configured with the virtual marker 102 (e.g., 4 I/R LEDs) to increase output coverage, such as to a full 360°. The sensor component 106 can be configured with an infrared receiving module and an audio receiving module. The sensor component 106 can be further configured with a communications module, such as for BLUETOOTH or Wi-Fi connectivity. In one or more implementations, the audio receiving module includes two microphones that are each configured to detect ultrasonic audio frequencies and are physically spaced apart within the sensor component 106. In one or more implementations, the infrared light functions as a synchronizing signal and represents a starting time at which light/sound signals are transmitted from the virtual marker 102. Thereafter, the ultrasonic sound is received first by the respective microphone positioned closest to the virtual marker 102, and second by the respective microphone positioned farther away. The time values associated with the received infrared light, the first received ultrasonic sound signal and the second received ultrasonic sound signal are usable to provide a form of triangulation representing the specific location of the virtual marker 102 relative to the display device 104. By registering the virtual marker 102 to respective positions on the display device 104 (e.g., points that are highlighted on the display device 104), the relative location of the virtual marker 102 can be calculated and a series of X/Y coordinates can be transmitted to or calculated by the computing device 108.
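As a non-limiting illustration of the calibration described above, the following Python sketch fits a simple affine mapping from raw sensor coordinates to display coordinates using the points tapped during registration; an affine fit is only one possible choice, and the function names and example values are hypothetical.

```python
import numpy as np


def fit_affine(raw_pts, display_pts):
    """Least-squares affine map from raw sensor coordinates to display
    coordinates, fitted from the calibration taps (e.g., four points).

    raw_pts, display_pts: sequences of (x, y) pairs of equal length.
    Returns a 2x3 matrix M such that [x_disp, y_disp] = M @ [x_raw, y_raw, 1].
    """
    A = np.array([[x, y, 1.0] for x, y in raw_pts])   # N x 3
    B = np.array(display_pts, dtype=float)            # N x 2
    M, *_ = np.linalg.lstsq(A, B, rcond=None)         # solution is 3 x 2
    return M.T                                        # 2 x 3


def apply_affine(M, point):
    x, y = point
    return tuple(M @ np.array([x, y, 1.0]))


# Example: hypothetical raw readings collected while tapping four points
# displayed at the corners of a 1920 x 1080 output.
raw = [(120, 80), (900, 85), (905, 560), (118, 555)]
display = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
M = fit_affine(raw, display)
print(apply_affine(M, (510, 320)))  # approximate display coordinates
```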
The following is an example formula for converting time differences of audio signals received by the first (e.g., “Left”) and second (e.g., “Right”) microphones in an example sensor component 106 into specific X/Y coordinates:
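One possible formulation, presented here only as an assumption-laden sketch and not necessarily the formula used in any particular implementation, treats the infrared flash as time zero and intersects the two distance circles defined by each microphone's time of flight, with the microphones assumed to sit symmetrically about the origin of the sensor component 106.

```python
import math

SPEED_OF_SOUND_MM_PER_S = 343_000.0  # approximate, at room temperature


def marker_position(t_left_s, t_right_s, mic_separation_mm):
    """Locate the marker from the times of flight of the ultrasonic pulse.

    t_left_s / t_right_s: seconds between the infrared synchronizing flash
    (taken as time zero) and reception at the left / right microphone.
    The microphones are assumed to lie on the X axis at (-b, 0) and (+b, 0),
    where b is half the separation; Y grows away from the sensor component.
    """
    b = mic_separation_mm / 2.0
    d_left = SPEED_OF_SOUND_MM_PER_S * t_left_s
    d_right = SPEED_OF_SOUND_MM_PER_S * t_right_s
    x = (d_left ** 2 - d_right ** 2) / (4.0 * b)
    y_sq = d_left ** 2 - (x + b) ** 2
    y = math.sqrt(max(y_sq, 0.0))  # clamp small negatives caused by noise
    return x, y


# Example: the pulse reaches the left microphone 2.0 ms after the IR flash
# and the right microphone 2.3 ms after, with microphones 400 mm apart.
print(marker_position(0.0020, 0.0023, 400.0))
```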
In one or more implementations, the sensor component 106 can be configured with a processor configured by code executing therein to calculate X/Y coordinates, and further configured to wirelessly communicate the coordinates and other information to the computing device 108. For example, a series of bytes can be transmitted from the sensor component 106 to the computing device 108. An example data format can include five bytes: {BYTE1, BYTE2, BYTE3, BYTE4, BYTE5}. BYTE1 can be a bit field which contains information representing color and whether a tip of the virtual marker 102 is pressed or not. Example BYTE1 can include 8 bits, {Bit0, Bit1, Bit2, Bit3, Bit4, Bit5, Bit6, Bit7}, representing: Bit0—tip on/off; Bit1, Bit2—color (00—Red, 01—Green, 10—Blue, 11—Wipe). Other bits may simply be unused. Continuing with this example, BYTE2 can represent the higher byte of the X coordinate, while BYTE3 represents the lower byte of the X coordinate. BYTE4 can represent the higher byte of the Y coordinate and BYTE5 can represent the lower byte of the Y coordinate. For example, the following five bytes are transmitted from the sensor component 106 to the computing device 108: {5, 31, 201, 19, 198}, in which the tip is depressed (e.g., is writing), the color is blue, and the coordinates are X=31*256+201=8137 mm, Y=19*256+198=5062 mm.
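As a non-limiting illustration, the following Python sketch decodes the example five-byte packet described above; the bit ordering of the color field is an assumption chosen to be consistent with the worked example, and the function name is hypothetical.

```python
def decode_packet(packet):
    """Decode the example five-byte packet {BYTE1..BYTE5} described above.

    Assumed layout: BYTE1 Bit0 = tip pressed; Bit2..Bit1 = color
    (00 Red, 01 Green, 10 Blue, 11 Wipe); BYTE2/BYTE3 = high/low bytes
    of X; BYTE4/BYTE5 = high/low bytes of Y.
    """
    colors = ("Red", "Green", "Blue", "Wipe")
    b1, x_hi, x_lo, y_hi, y_lo = packet
    tip_pressed = bool(b1 & 0x01)
    color = colors[(b1 >> 1) & 0x03]
    x_mm = x_hi * 256 + x_lo
    y_mm = y_hi * 256 + y_lo
    return {"tip_pressed": tip_pressed, "color": color, "x_mm": x_mm, "y_mm": y_mm}


# Worked example from above: {5, 31, 201, 19, 198}
print(decode_packet((5, 31, 201, 19, 198)))
# -> {'tip_pressed': True, 'color': 'Blue', 'x_mm': 8137, 'y_mm': 5062}
```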
Furthermore, in operation, as coordinate information is received and/or calculated by the computing device 108, a conversion can take place to transform the coordinates to pixels. An application executing on the computing device 108, for example, and that outputs video to the display device 104 can inform the computing device 108 of the respective resolution of the display device 104. Screen resolution information is usable to identify respective pixels relative to the calculated coordinates, and to simulate the drawing of the lines by the virtual marker 102.
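By way of a hedged, non-limiting sketch, the following Python function shows one way calibrated coordinates expressed in millimeters could be scaled to pixel positions once the application has reported the screen resolution; the physical dimensions and the function name are hypothetical example values.

```python
def mm_to_pixels(x_mm, y_mm, active_width_mm, active_height_mm,
                 resolution_px=(1920, 1080)):
    """Scale calibrated marker coordinates (in millimeters) to a pixel
    position on the display, given the physical size of the calibrated
    writing area and the screen resolution reported by the application.
    Values are clamped to the visible screen."""
    width_px, height_px = resolution_px
    px = round(x_mm / active_width_mm * (width_px - 1))
    py = round(y_mm / active_height_mm * (height_px - 1))
    return min(max(px, 0), width_px - 1), min(max(py, 0), height_px - 1)


# Example on a hypothetical display whose calibrated writing area measures
# 1210 mm x 680 mm: a point near the center maps near (960, 540).
print(mm_to_pixels(605.0, 340.0, 1210.0, 680.0))
```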
Thus, light and sound signals emitted from the virtual marker 102 and received by a light receiver and a plurality of microphones configured in the sensor component 106 can be used for triangulation and for determining the location of the virtual marker 102 at a point on the display device 104 as the virtual marker 102 traverses the display. The information can be transmitted by the sensor component 106 to a computing device 108, and the computing device 108, operating one or more software applications, can display video or other content on the display device 104 substantially in real-time. In addition to displaying lines on the display device 104 (such as shown in
Turning now to
The process begins at step 1002, in which a wireless charger, connected to a wall adaptor for power, charges the virtual marker 102. Further, the sensor component 106 is mounted over the display device 104 and powered, for example, via USB or a wall adapter (step 1004). Alternatively, the sensor component 106 can be configured with a rechargeable battery and be charged, including via a wireless charger, prior to use. At step 1006, the virtual marker 102 emits ultrasonic sound as it glides over the display device 104. This indicates to the sensor component 106 that the virtual marker 102 is active. Infrared light from the I/R LED and the ultrasonic emissions are used for triangulation to determine the relative position of the virtual marker 102 on the display device 104. At step 1008, the sensor component 106 transmits the coordinates of the virtual marker 102 to the computing device 108 operating a software application (e.g., a tvOS app) via BLUETOOTH (“BLE”) and/or Wi-Fi. At step 1010, video output is transmitted via a media extender, which sends the output to the screen of the display device 104 (step 1012).
In the event that a user has not yet registered and, accordingly, is not authorized to operate the application executing on the computing device, a registration process occurs at step 1128. During the registration process, communication between the computing device operating the software application and another device, such as a server computer, takes place to complete the process (step 1130). For example, a problem may occur, such as a username already being in use, a password that does not conform to a pre-determined standard, or virtually any other issue with a registration step. In such a case, an error message that represents the problem may be transmitted at step 1134. If no error occurs, then a welcome message is provided at step 1132, and an email confirmation is transmitted at step 1136. In the event that a user has already registered but has lost or forgotten his or her password, then, at step 1138, a process occurs for the user to have his or her password reset or provided, such as by an email password reset process at step 1140.
The present application provides an improvement to the functioning of computing devices by allowing a user to simulate drawing onto a display device that is not formatted to receive input, such as a non-interactive display. In one or more implementations, the computing device can send or transmit, through local or remote data channels using common networking protocols, data relating to the current state of the canvas as well as data initiated by a virtual marker 102 to a second computing device or remote computing platform 106. This remote computing platform can be connected to a secondary display device and used to update that display in real time so as to simulate the effect of the virtual marker 102 being used directly on that respective screen. In this way, individuals in remote locations can collaboratively use their display devices to work on a single canvas while each user's input is captured and displayed to all collaborators in real time.
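As a non-limiting sketch of such sharing, the following Python example broadcasts a single marker event to a set of connected collaborators as newline-delimited JSON; the wire format, transport and names are assumptions chosen for illustration, and any common networking protocol could be substituted.

```python
import json
import socket
from typing import Iterable


def broadcast_marker_event(peers: Iterable[socket.socket], event: dict) -> None:
    """Send one marker event (e.g., a decoded packet plus a timestamp) to
    every connected collaborator as a newline-delimited JSON message, so
    each remote display can redraw the shared canvas in real time."""
    payload = (json.dumps(event) + "\n").encode("utf-8")
    for peer in peers:
        try:
            peer.sendall(payload)
        except OSError:
            pass  # a dropped peer should not interrupt the local session


# Example event for a blue stroke point, tip pressed; no peers connected yet.
example_event = {"tip_pressed": True, "color": "Blue",
                 "x_mm": 8137, "y_mm": 5062, "t_ms": 120}
broadcast_marker_event([], example_event)
```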
Moreover, computing devices 108 can communicate with data processing apparatuses using data connections, which are respectively coupled to communication network 110. Communication network 110 can be any communication network, but is typically the Internet or some other global computer network. Data connections can be any known arrangement for accessing communication network 110, such as the public Internet, a private Internet (e.g., a VPN), a dedicated Internet connection, or a dial-up serial line interface protocol/point-to-point protocol (SLIP/PPP), integrated services digital network (ISDN), dedicated leased-line service, broadband (cable) access, frame relay, digital subscriber line (DSL), asynchronous transfer mode (ATM) or other access techniques.
One or more of the respective devices 102, 104, 106, 108 and 112 can be configured with memory, which is coupled to microprocessor(s). The memory may be used for storing data, metadata, and programs for execution by the microprocessor(s). The memory may include one or more of volatile and non-volatile memories, such as Random Access Memory (“RAM”), Read Only Memory (“ROM”), Flash, Phase Change Memory (“PCM”), or other types. The devices can include an audio input/output subsystem, which may include one or more microphones and/or speakers. One or more of the respective devices 102, 104, 106, 108 and 112 can be configured to include one or more wireless transceivers, such as an IEEE 802.11 transceiver, an infrared transceiver, a BLUETOOTH transceiver, a wireless cellular telephony transceiver (e.g., 1G, 2G, 3G, 4G), or another wireless protocol to connect the data processing system with another device, external component, or a network. In addition, a gyroscope/accelerometer can be provided.
One or more of the respective devices 102, 104, 106, 108 and 112 can also be configured to include one or more input or output (“I/O”) devices and interfaces which are provided to allow a user to provide input to, receive output from, and otherwise transfer data to and from the system. These I/O devices may include a mouse, a keypad or keyboard, a touch panel or multi-touch input panel, a camera, a network interface, a modem, other known I/O devices or a combination of such I/O devices. The touch input panel may be a single-touch input panel, which is activated with a stylus or a finger, or a multi-touch input panel, which is activated by one finger, a stylus or multiple fingers. The panel is capable of distinguishing between one or two or three or more touches and is capable of providing inputs derived from those touches to one or more of the respective devices.
It will be apparent from this description that aspects of the application may be embodied, at least in part, in software. That is, the computer-implemented methods may be carried out in a computer system or other data processing system in response to its processor or processing system executing sequences of instructions contained in a memory, such as memory or other machine-readable storage medium. The software may further be transmitted or received over a network (not shown) via a network interface device. In various implementations, hardwired circuitry may be used in combination with the software instructions to implement the present implementations. Thus, the techniques are not limited to any specific combination of hardware circuitry and software, or to any particular source for the instructions.
In one or more implementations, the present application provides improved processing techniques to prevent packet loss, to improve handling interruptions in communications, to reduce or eliminate latency and other issues associated with wireless technology. For example, in one or more implementations Real Time Streaming Protocol (RTSP) can be implemented, for example, for sharing output associated with a camera, microphone and/or other output devices configured with a computing device. In addition to RTSP, one or more implementations of the present application can be configured to use Web Real-Time Communication (“WebRTC”) to support browser-to-browser applications.
In one implementation, WebRTC is shown with regard to communications between user computing devices 108 (such as a CHROME BOOK and a mobile computing device, e.g., a smartphone), supporting browser-to-browser applications and P2P functionality. In addition, RTSP is utilized in connection with user computing devices 108 and/or an Internet media extender, thereby enabling presentation of audio/video content from devices 108 on television 104.
In one or more implementations, the computing device 108 is configured to save each respective X/Y coordinate, including by transmitting the information to a remote storage device. The information can be saved as an array of coordinates over time, thereby enabling reproduction of respective “drawings” or output from the virtual marker 102 to be displayed on a display device 104 in the future.
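As a non-limiting illustration, the following Python sketch records timestamped samples and persists them to a JSON file so that a drawing session can be reproduced later; the file format and class name are hypothetical.

```python
import json
import time


class StrokeRecorder:
    """Accumulates timestamped X/Y samples so a drawing session can be
    stored (locally or in remote/cloud storage) and replayed later."""

    def __init__(self):
        self.samples = []

    def record(self, x, y, tip_pressed, color):
        self.samples.append({"t": time.time(), "x": x, "y": y,
                             "tip": tip_pressed, "color": color})

    def save(self, path):
        with open(path, "w", encoding="utf-8") as fh:
            json.dump(self.samples, fh)

    @staticmethod
    def load(path):
        with open(path, "r", encoding="utf-8") as fh:
            return json.load(fh)


# Example: record two samples and persist them for later reproduction.
rec = StrokeRecorder()
rec.record(8137, 5062, True, "Blue")
rec.record(8140, 5070, True, "Blue")
rec.save("session.json")
print(len(StrokeRecorder.load("session.json")))  # -> 2
```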
In one or more implementations of the present patent application, a processor configured with code processes information representing a selection event that occurred in the display unit. For example, a user makes a selection in a remote control software application operating on his or her mobile computing device (e.g., iPhone) in a portion of the display unit while the interactive media content in the display unit is provided therein. The processing that occurs can be to determine at least a relative time and location of the selection event that occurred in the second portion of the display. The information representing the selection event can be stored in one or more databases that are accessible to at least one computing device. The selection of an item can be processed to enable the interaction with at least a portion of the interactive media content at one of the remote devices associated with the selection event. This enables results of a respective interaction associated with the selection event to be viewable or otherwise provided at one particular remote device, but not viewable or otherwise provided at others of the remote devices.
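As a hedged, non-limiting sketch, the following Python example represents a selection event with its relative time, location and originating device, and routes the results of the associated interaction only to that device; all names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class SelectionEvent:
    """One selection event: when and where it occurred in the display
    unit, and which remote device originated it."""
    device_id: str
    t_ms: int
    x: float
    y: float


def results_visible_to(event: SelectionEvent, device_id: str) -> bool:
    """Results of an interaction are provided only at the remote device
    associated with the selection event, and not at the other devices."""
    return device_id == event.device_id


event = SelectionEvent(device_id="phone-A", t_ms=4150, x=0.62, y=0.31)
print(results_visible_to(event, "phone-A"))  # -> True
print(results_visible_to(event, "phone-B"))  # -> False
```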
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any implementation or of what can be claimed, but rather as descriptions of features that can be specific to particular implementations of particular inventions. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features can be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination can be directed to a sub-combination or variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be noted that use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another, or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for the use of the ordinal term).
Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
Particular implementations of the subject matter described in this specification have been described. Other implementations are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing can be advantageous.
Publications and references to known registered marks representing various systems are cited throughout this application, the disclosures of which are incorporated herein by reference. Citation of the above publications or documents is not intended as an admission that any of the foregoing is pertinent prior art, nor does it constitute any admission as to the contents or date of these publications or documents. All references cited herein are incorporated by reference to the same extent as if each individual publication and reference had been specifically and individually indicated to be incorporated by reference.
While the invention has been particularly shown and described with reference to a preferred implementation thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.