Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57.
In the context of computing devices, such as touch sensing devices, some graphical user interfaces allow a user to add graphical annotations. For example, some graphical user interface applications have “pen” or “highlighter” functions that allow a user to add visual markups. The annotations, such as the “pen” or “highlighter” annotations, have permanence in that a user must manually remove the annotations to restore the graphical user interface to a previous state. For example, after making an annotation, a user may have to access an “eraser” or an “undo” element to manually remove the previous annotations.
The systems, methods, and devices described herein each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this disclosure, several non-limiting features will now be discussed briefly.
According to an embodiment, a system is disclosed comprising: a media sharing server comprising one or more first hardware processors configured to execute first computer-readable instructions to: receive media from a media sharing computing device; and cause presentation, on a user computing device, of the media as a media presentation; and a pointer event server comprising one or more second hardware processors configured to execute second computer-readable instructions to: receive, from a multi-touch sensing computing device, a first set of touch user inputs for a first pointer; cause presentation, on the media presentation of the user computing device, of a first annotation visualization corresponding to the first set of touch user inputs; receive, from the multi-touch sensing computing device, a first event that indicates that the first pointer is released; initiate a first fadeout timer based at least in part on the first event that indicates that the first pointer is released; before the first fadeout timer expires, receive, from the multi-touch sensing computing device, a second set of touch user inputs for a second pointer; cause presentation, on the media presentation of the user computing device, of a second annotation visualization corresponding to the second set of touch user inputs; receive, from the multi-touch sensing computing device, a second event that indicates that the second pointer is released; initiate a second fadeout timer that replaces the first fadeout timer, wherein initiation of the second fadeout timer is based at least in part on the second event that indicates that the second pointer is released; determine that the second fadeout timer expired; and cause presentation, on the media presentation of the user computing device, of a nearly simultaneous fadeout of the first annotation visualization and the second annotation visualization.
According to an aspect, the media sharing computing device and the multi-touch sensing computing device can be the same device.
According to another aspect, the user computing device and the multi-touch sensing computing device can be the same device.
According to yet another aspect, the media sharing server and the media sharing computing device can be the same device.
According to yet another aspect, the one or more second hardware processors can be further configured to: receive, from the multi-touch sensing computing device, a third set of touch user inputs; and cause presentation, on the media presentation of the user computing device, of a third annotation visualization corresponding to the third set of touch user inputs.
According to yet another aspect, the one or more second hardware processors can be further configured to: determine, from the third set of touch user inputs, a subset of touch user inputs that occur within a threshold period of time and within a threshold area; determine, using an exponential equation, a fadeout time period based at least in part on the subset of touch user inputs; and cause presentation, on the media presentation of the user computing device, of a fadeout of the third annotation visualization according to the fadeout time period.
According to yet another aspect, the one or more second hardware processors can be further configured to: identify that the third set of touch user inputs are located within a persistence area; and mark the third annotation visualization as a persistent visualization.
According to yet another aspect, the one or more second hardware processors can be further configured to: store, in a non-transitory computer storage medium, a plurality of pointer events, wherein each pointer event from the plurality of pointer events corresponds to a touch user input from the first set of touch user inputs, and wherein each pointer event from the plurality of pointer events comprises a timestamp; receive, from a second user computing device, a playback request; and cause presentation, on a second media presentation of the second user computing device, of a fourth annotation visualization according to each timestamp and pointer event from the plurality of pointer events.
According to yet another aspect, the first annotation visualization can include a first animation.
According to yet another aspect, causing presentation of the first annotation visualization may comprise presentation of a first animation.
According to yet another aspect, presentation of a first frame of the first animation may comprise a first point and a second point, wherein presentation of the first point can be thicker than presentation of the second point.
According to yet another aspect, presentation of a first frame of the first animation may comprise a first point and a second point, wherein presentation of the first point can be thicker than presentation of the second point, and wherein the first point may comprise a first contrast and the second point may comprise a second contrast.
According to yet another aspect, the first fadeout of the first annotation visualization can begin at a starting point of the first animation and can end at an ending point of the first animation.
According to yet another aspect, the first fadeout of the first annotation visualization can occur without a direct user command for the first fadeout.
According to an embodiment, a system is disclosed comprising: a non-transitory computer storage medium configured to at least store computer-readable instructions; and one or more hardware processors in communication with the non-transitory computer storage medium, the one or more hardware processors configured to execute the computer-readable instructions to at least: receive a first set of user inputs for a first pointer; cause presentation, on a user computing device, of media as a media presentation; cause presentation, on the media presentation of the user computing device, of a first annotation visualization corresponding to the first set of user inputs; receive a first event that indicates that the first pointer is released; initiate a first fadeout timer based at least in part on the first event that indicates that the first pointer is released; before the first fadeout timer expires, receive a second set of user inputs for a second pointer; cause presentation, on the media presentation of the user computing device, of a second annotation visualization corresponding to the second set of user inputs; receive a second event that indicates that the second pointer is released; initiate a second fadeout timer that replaces the first fadeout timer, wherein initiation of the second fadeout timer is based at least in part on the second event that indicates that the second pointer is released; determine that the second fadeout timer expired; and cause presentation, on the media presentation of the user computing device, of a first fadeout of the first annotation visualization and a second fadeout of the second annotation visualization.
According to an aspect, the one or more hardware processors can be further configured to: receive a third set of user inputs; and cause presentation, on the media presentation of the user computing device, of a third annotation visualization corresponding to the third set of user inputs.
According to another aspect, the one or more hardware processors can be further configured to: determine, from the third set of user inputs, a subset of user inputs that occur within a threshold period of time and within a threshold area; determine, using an exponential equation, a fadeout time period based at least in part on the subset of user inputs; and cause presentation, on the media presentation of the user computing device, of a fadeout of the third annotation visualization according to the fadeout time period.
According to yet another aspect, the one or more hardware processors can be further configured to: identify that the third set of user inputs are located within a persistence area; and mark the third annotation visualization as a persistent visualization.
According to yet another aspect, the user computing device can include an interactive touch monitor.
According to yet another aspect, the one or more hardware processors can be further configured to: store, in a second non-transitory computer storage medium, a plurality of pointer events, wherein each pointer event from the plurality of pointer events corresponds to a user input from the first set of user inputs, and wherein each pointer event from the plurality of pointer events comprises a timestamp; receive, from a second user computing device, a playback request; and cause presentation, on a second media presentation of the second user computing device, of a fourth annotation visualization according to each timestamp and pointer event from the plurality of pointer events.
According to yet another aspect, causing presentation of the first annotation visualization may comprise presentation of a first animation.
According to yet another aspect, presentation of a first frame of the first animation may comprise a first point and a second point, wherein presentation of the first point can be thicker than presentation of the second point, and wherein the first point may comprise a first contrast and the second point may comprise a second contrast.
According to yet another aspect, the fadeout of the first annotation visualization can begin at a starting point of the first animation and can end at an ending point of the first animation.
According to yet another aspect, the first set of user inputs can correspond to touch user inputs or mouse user inputs.
According to yet another aspect, the one or more hardware processors can be further configured to: receive a third set of user inputs; cause presentation, on the media presentation of the user computing device, of a third annotation visualization corresponding to the third set of user inputs; identify that the third set of user inputs are located within a persistence area; and mark the third annotation visualization as a persistent visualization.
According to yet another aspect, the first fadeout of the first annotation visualization can occur without a direct user command for the first fadeout.
In various embodiments, systems or computer systems are disclosed that comprise a computer readable storage medium having program instructions embodied therewith, and one or more processors configured to execute the program instructions to cause the one or more processors to perform operations comprising one or more aspects of the above- or below-described embodiments (including one or more aspects of the appended claims).
In various embodiments, methods are disclosed in which one or more aspects of the above- or below-described embodiments (including one or more aspects of the appended claims) are implemented or performed. The methods can be implemented by one or more processors executing program instructions.
In various embodiments, computer program products comprising a computer readable storage medium are disclosed, wherein the computer readable storage medium has program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations comprising one or more aspects of the above- or below-described embodiments (including one or more aspects of the appended claims).
Although certain preferred embodiments and examples are disclosed below, inventive subject matter extends beyond the specifically disclosed embodiments to other alternative embodiments or uses and to modifications and equivalents thereof. Thus, the scope of the claims appended hereto is not limited by any of the particular embodiments described below. For example, in any method or process disclosed herein, the acts or operations of the method or process may be performed in any suitable sequence and are not necessarily limited to any particular disclosed sequence. Various operations may be described as multiple discrete operations in turn, in a manner that may be helpful in understanding certain embodiments; however, the order of description should not be construed to imply that these operations are order dependent. Additionally, the structures, systems, or devices described herein may be embodied as integrated components or as separate components. For purposes of comparing various embodiments, certain aspects and advantages of these embodiments are described. Not necessarily all such aspects or advantages are achieved by any particular embodiment. Thus, for example, various embodiments may be carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other aspects or advantages as may also be taught or suggested herein. Throughout the description, the same reference numerals are used to identify corresponding elements.
As described above, existing graphical user interfaces can allow a user to add graphical annotations. For example, a user can mark up a graphical user interface with annotations, such as a presenter marking up a presentation document or display. After an annotation has been added, the user must perform subsequent interactions to remove the previous annotations. Such subsequent interactions can be slow, cumbersome, or can detract from the user or presentation experience.
Disclosed herein are systems, apparatuses, and methods that may be used to advantageously improve graphical user interface annotations. Instead of requiring specific user interactions to remove graphical user interface annotations, a graphical user interface system can include logic to remove the annotations after a timeout period. While a user computing device is operating in an annotation mode, when user input is received that corresponds to a pointer (such as from a finger, pen, or mouse), an annotation visualization (such as a stroke) can be presented in the graphical user interface. There can be additional logic that causes the annotation visualization to be presented differently over time. For example, the annotation visualization (such as the stroke) can have an end that is presented with a first contrast or a first thickness that is different from another portion of the annotation visualization, or the presentation of the annotation visualization can get progressively thicker or can have a second contrast until a final thickness or contrast is achieved. Once an event is received that indicates that the pointer is released, a timer can be initiated. If the timer expires, then the annotation visualization can be removed, such as by fading out. However, if additional user input is received within the timer period, the timer can be reset and additional annotation visualizations can be presented. Once a timer has expired, one or more annotation visualizations can be removed. The presentation of the annotation visualizations and the removal of the annotation visualizations can be animated. The animations of the annotation visualizations can be configured for presentation in a manner that distinguishes, to viewers, additions of annotations from removals of those annotations, which can be especially useful during media sharing where viewers can be remote from the presenter.
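By way of a non-limiting illustration of this timer-driven behavior, the following TypeScript sketch shows one possible client-side implementation of the annotation lifecycle described above. The class and identifier names (for example, AnnotationLayer, Stroke, and FADEOUT_MS) are hypothetical assumptions rather than elements of this disclosure, and the three-second timeout is only one of the configurable periods contemplated herein.

```typescript
// Minimal sketch of the fadeout-timer logic described above. All names
// (AnnotationLayer, FADEOUT_MS, Stroke) are illustrative assumptions.
interface Stroke {
  points: { x: number; y: number }[];
  faded: boolean;
}

class AnnotationLayer {
  private strokes: Map<number, Stroke> = new Map();
  private fadeoutTimer: ReturnType<typeof setTimeout> | null = null;
  private static readonly FADEOUT_MS = 3000; // configurable timeout period

  // Pointer pressed or moved: extend the stroke and cancel any pending fadeout.
  onPointerInput(pointerId: number, x: number, y: number): void {
    if (this.fadeoutTimer !== null) {
      clearTimeout(this.fadeoutTimer); // new input delays removal
      this.fadeoutTimer = null;
    }
    const stroke = this.strokes.get(pointerId) ?? { points: [], faded: false };
    stroke.points.push({ x, y });
    this.strokes.set(pointerId, stroke);
    this.render();
  }

  // Pointer released: start a fresh fadeout timer (replacing any earlier one).
  onPointerReleased(): void {
    if (this.fadeoutTimer !== null) clearTimeout(this.fadeoutTimer);
    this.fadeoutTimer = setTimeout(() => this.fadeOutAll(), AnnotationLayer.FADEOUT_MS);
  }

  // Timer expired: fade out every visible annotation visualization together.
  private fadeOutAll(): void {
    this.strokes.forEach((stroke) => (stroke.faded = true));
    this.fadeoutTimer = null;
    this.render(); // the removal itself can be animated, e.g., by easing opacity to zero
  }

  private render(): void {
    /* draw the strokes onto the media presentation */
  }
}
```

In this sketch, any new pointer input cancels a pending timer, so removal of the annotation visualizations is delayed for as long as the user keeps annotating, consistent with the behavior described above.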
In order to facilitate an understanding of the systems, apparatuses, and methods discussed herein, term(s) are defined below. The term(s) defined below, as well as other terms used herein, should be construed broadly to include the provided definition(s), the ordinary and customary meaning of the term(s), or any other implied meaning for the respective term(s). Thus, the definition(s) below do not limit the meaning of these term(s), but only provide example definitions.
Media (Presentation): Any type of content that is presented on a computing device, such as, but not limited to, any type of electronic document, a text document, a word processing document, a spreadsheet document, an application, a video, an image, or a web page. Example media or media presentation can include at least a portion of a desktop, screen, or display of a computing device such as the windows or applications within the desktop, screen, or display of the computing device.
The systems, apparatuses, and methods that may be used to advantageously improve graphical user interface annotations can be applied to a media sharing context. For example, media on a first computing device can be shared or presented on one or more second computing devices. While the media from the first computing device is being shared, an annotation graphical user interface on the first computing device can be used on the shared media. Thus, annotation visualizations on the first computing device can also be shared with the second computing device(s). Moreover, the techniques for removing the annotation visualizations after a timeout period described herein can also be applied and propagated to the second computing device(s).
The systems, apparatuses, and techniques described herein may improve graphical user interfaces or computer technology. Instead of graphical user interfaces with slow, cumbersome, or inefficient annotation capabilities, improved graphical user interfaces can include efficient presentation logic for annotations. The improved annotation user interfaces can allow users, such as presenters, to quickly or efficiently make annotations without having to access different menu or graphical user interface options (such as “eraser” or “undo” graphical user interface features). The systems, apparatuses, and techniques described herein can enable users to progress through media faster or to interact with user interfaces faster than existing systems allow. In the context of touch sensing devices, efficient graphical user interface features can be especially beneficial. For example, when making annotations with touch user input (such as by a finger or pen), it may be cumbersome to provide a user with additional user interface options to perform different annotation options (such as with “eraser” or “undo” user interface options). The configuration of animations of the annotation visualizations can also advantageously distinguish, to viewer(s), additions of annotations from removals of those annotations. Thus, the improved annotation features described herein can improve graphical user interfaces or touch-sensing computer technology.
The fadeout of annotation visualizations can occur without a direct user command for the fadeout. Example direct user commands can include an eraser or undo command. For example, instead of receiving an explicit “eraser” or “undo” command, the systems, apparatuses, and techniques described herein can cause a fadeout of an annotation visualization based on a fadeout timer. As described herein, a fadeout timer can count down while user input is not received. Upon expiration of the fadeout timer, the fadeout can occur. Thus, the fadeout can occur without a direct user command for the fadeout.
In a media sharing context, the graphical user interface annotation features can be implemented using distributed servers, such as separate media sharing and pointer event server(s). The distributed servers can improve performance of media sharing or the propagation of annotation visualizations by allowing the respective media sharing and annotation processes to operate independently of one another, such that a bottleneck in one of the processes is less likely to affect the other. Thus, the improved annotation features described herein can improve media-sharing and annotation computer technology.
The graphical user interface system 200 can include a media sharing server 206, a pointer event server 208, and a pointer event metadata storage 210. The media sharing server 206 can receive media from any of the computing devices 100, 102, 202. The media sharing server 206 can cause presentation of the media as a media presentation on the computing devices 100, 102, 202. The pointer event server 208 can receive user input from any of the computing devices 100, 102, 202. The pointer event server 208 can cause the presentation of annotation visualizations on any of the computing devices 100, 102, 202 based on the user input. In some embodiments, locally received user input can cause the presentation of an annotation visualization on the respective computing device that received the user input without needing to communicate with an external server. The annotation visualizations can be removed on the computing devices 100, 102, 202 after a timer expires without receiving additional user input. The pointer event server 208 can store some user interaction data corresponding to the user input, such as event data, in the pointer event metadata storage 210. The pointer event server 208 can cause the presentation of annotation visualizations based on some of the event data or other data in the pointer event metadata storage 210.
In some embodiments, presentation of annotation visualizations can occur without the pointer event server 208. For example, a computing device that receives pointer user input, such as the computing device 100, can present a corresponding annotation visualization based on the locally received user input. However, other computing devices, such as the computing devices 102, 202, can present a corresponding annotation visualization based on communication with the pointer event server 208.
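As a hedged sketch of this division of labor, a pointer event could be rendered locally and simultaneously forwarded to the pointer event server 208 for propagation to the other computing devices. The wire format and function names below are illustrative assumptions only and are not defined by this disclosure.

```typescript
// Hypothetical wire format for pointer events exchanged with the pointer
// event server 208; the field names are assumptions for illustration.
interface AnnotationPointerEvent {
  pointerId: number;
  kind: "down" | "move" | "up";
  x: number;
  y: number;
  timestamp: number; // retained so annotations can be played back later
}

// Render immediately on the device that received the input, then forward
// the event so the server can propagate it to the other computing devices.
function handleLocalPointerEvent(
  ev: AnnotationPointerEvent,
  sendToServer: (ev: AnnotationPointerEvent) => void
): void {
  renderAnnotation(ev); // local feedback without a server round trip
  sendToServer(ev);     // remote devices (e.g., 102, 202) receive it via the server
}

function renderAnnotation(ev: AnnotationPointerEvent): void {
  /* draw or extend the stroke for ev.pointerId on the local display */
}
```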
While multiple computing devices 100, 102, 202 are shown in
In
In
In the graphical user interface 300 of
In
Beginning at block 402, media can be received. For example, the media sharing server 206 can receive media from a media sharing computing device, such as the computing device 100. As described herein, the computing device 100 can be a multi-touch sensing computing device. Example media sharing can include sharing display information, such as in the context of screen mirroring. Examples of media are described herein, such as with respect to
At block 404, media can be presented. For example, the media sharing server 206 can cause presentation of the media from the previous block 402. In particular, the media sharing server 206 can cause presentation of the media on a computing device such as a user computing device. As described herein, the user computing device can be a multi-touch sensing computing device. The media sharing server 206 and the media sharing computing device can be the same device. Examples of the presentation of media are described herein, such as with respect to
The blocks 402, 404 can be executed in a loop as shown. For example, as updates to the media are transmitted to the media sharing server 206, the presentation of the media on one or more computing devices can update subsequently or continuously. As described herein, the presentation of the media may be handled by the media sharing server 206, which may occur independently of the graphical user interface annotations that can be overlaid on the media presentation. Thus, while the media blocks 402, 404 are shown at the beginning of the example method 400, the media blocks 402, 404 can occur at any point in the method 400.
At block 406, user input can be received. For example, the pointer event server 208 can receive user input from the computing device 100. Example user input can include a set of touch user inputs for a pointer. Examples of user input are described herein, such as with respect to
At block 408, an annotation visualization can be presented. For example, the pointer event server 208 can cause presentation of an annotation visualization on a computing device, such as the computing device(s) 100, 102, 202. The presentation of the annotation visualization can be overlaid on the media presentation of a computing device, such as the computing device(s) 100, 102, 202. The annotation visualization can correspond to the user input (such as the set of touch user inputs) received at the previous block 406. Examples of annotation visualizations are described herein, such as with respect to
Presentation of the annotation visualization can include an animation. The animation can include a series of frames. A frame of the animation can include a first point and a second point, which can be connected by a line. Presentation of the first point can be thicker than presentation of the second point. The first point can include a first contrast and the second point can include a second contrast. Additional details regarding the animation of annotation visualizations are described herein, such as with respect to
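As a non-limiting sketch of such an animation frame, the following TypeScript function tapers a stroke drawn on an HTML canvas so that one point is presented thicker and at higher contrast than another. The direction of the taper, the interpolation, and the constants are assumptions chosen for illustration.

```typescript
// Illustrative taper: later points in the stroke are drawn thicker and at
// higher contrast (opacity) than earlier ones. The direction of the taper,
// the interpolation, and the constants are assumptions, not requirements.
function drawTaperedStroke(
  ctx: CanvasRenderingContext2D,
  points: { x: number; y: number }[],
  maxWidth = 6
): void {
  for (let i = 1; i < points.length; i++) {
    const t = i / (points.length - 1); // 0 at one end of the stroke, 1 at the other
    ctx.lineWidth = 1 + t * (maxWidth - 1); // one point thicker than the other
    ctx.globalAlpha = 0.3 + 0.7 * t;        // differing contrast between points
    ctx.beginPath();
    ctx.moveTo(points[i - 1].x, points[i - 1].y);
    ctx.lineTo(points[i].x, points[i].y);
    ctx.stroke();
  }
}
```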
In some embodiments, presentation of the annotation visualization can occur after the corresponding user input has been received, such as in the context of a playback of the media presentation or the annotation visualization. For example, the pointer event server 208 can cause a playback of the presentation of an annotation visualization minutes, hours, or days after the corresponding user input for the annotation was received. The pointer event server 208 can retrieve pointer events from the pointer event metadata storage 210. After a playback request is received from a computing device, the pointer event server 208 can cause presentation, on the media presentation of the computing device, of an annotation visualization according to each respective timestamp and pointer event from the pointer event metadata storage 210.
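A hedged sketch of such a playback, reusing the hypothetical AnnotationPointerEvent type from the earlier sketch, might replay the stored events in timestamp order while preserving the originally recorded pacing:

```typescript
// Replays stored pointer events in timestamp order, preserving the
// originally recorded pacing between events. Reuses the hypothetical
// AnnotationPointerEvent type from the earlier sketch.
async function playbackAnnotations(
  events: AnnotationPointerEvent[], // e.g., loaded from the pointer event metadata storage 210
  render: (ev: AnnotationPointerEvent) => void
): Promise<void> {
  const sorted = [...events].sort((a, b) => a.timestamp - b.timestamp);
  for (let i = 0; i < sorted.length; i++) {
    if (i > 0) {
      const gapMs = sorted[i].timestamp - sorted[i - 1].timestamp;
      await new Promise((resolve) => setTimeout(resolve, gapMs)); // wait out the recorded gap
    }
    render(sorted[i]); // re-create the annotation visualization as originally drawn
  }
}
```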
At block 410, an event can be received for a pointer release. For example, the pointer event server 208 can receive an event (such as a first event) from the computing device 100 that indicates that a pointer (such as a first pointer) is released. An example release of a pointer can be caused by a user removing their finger or a pen from a touch sensing device or by releasing a selection of a user input device such as a mouse. The release of the pointer can indicate that user input for an annotation has at least temporarily stopped.
At block 412, a fadeout timer can be set. For example, the pointer event server 208 can initiate a fadeout timer (such as a first fadeout timer) based at least in part on the received event that indicates that a pointer was released. An example fadeout timer can be for a period of time, such as, but not limited to, 1, 2, 3, 4, 5, or 6 seconds. The period of time can be configurable. If the fadeout timer expires, then the pointer event server 208 may cause a fadeout of the annotation visualization, which is described in further detail below. However, if additional user input is received after a loop back to block 406, then the fadeout timer may be reset.
In some embodiments, instead of block 412 (or the next blocks 414 or 416) being implemented by the pointer event server 208, one or more of these blocks can be implemented by a computing device, such as the computing device(s) 100, 102, 202. The logic for implementing delay can be on the client side. A computing device, such as the computing device(s) 100, 102, 202, can locally keep track of fadeout or delay of a fadeout with respect to an annotation visualization.
In particular, at a return to block 406 and before the first fadeout timer expires, the pointer event server 208 can receive from the computing device 100 second user input (such as a second set of touch user inputs) for a second pointer. At a return to block 408, the pointer event server 208 can cause presentation, on the media presentation, of a second annotation visualization corresponding to the second user input. At a return to block 410, the pointer event server 208 can receive a second event from the computing device 100 that indicates that a second pointer is released. As a result of the receipt of the second event, at a return to block 412, the pointer event server 208 can initiate a second fadeout timer that replaces the first fadeout timer. In some embodiments, instead of the first fadeout timer being replaced, the first fadeout timer can be reset. A fadeout timer can be reset when a new pointer event is registered. This solution can be robust since it can still work even in the case where a pointer event is missed. The fadeout of one or more annotation visualizations can be delayed so long as additional user input is received within a threshold period of time.
At block 414, it can be determined that a fadeout timer has expired. For example, the pointer event server 208 can determine that the first or second fadeout timer has expired. In particular, the pointer event server 208 can determine that a period of time has elapsed (such as 3 seconds) without the pointer event server 208 receiving additional user input to further delay removal of an annotation visualization.
In particular, the fadeout of annotation visualizations can occur without a direct user command for the fadeout. Instead of an explicit “eraser” or “undo” command, the pointer event server 208 can cause a fadeout of an annotation visualization after the fadeout timer has expired. The fadeout timer can count down while user input is not received. Upon expiration of the fadeout timer, the pointer event server 208 can cause the fadeout. The pointer event server 208 can cause the fadeout without a direct user command for the fadeout.
At block 416, the fadeout can be presented. For example, the pointer event server 208 can cause presentation, on the media presentation of a computing device, of a fadeout (or removal) of one or more annotation visualizations. In some embodiments, the pointer event server 208 can cause a near-simultaneous fadeout (or removal), on the media presentation of a computing device, of the one or more annotation visualizations. For example, if first and second annotation visualizations are presented as first and second ovals, then the first and second ovals can fade out (or be removed) at approximately the same time in the same graphical user interface. The fadeout of the annotation visualization can begin at a starting point of the animation when the annotation visualization was added and can end at an ending point of the same animation. Additional details of the fadeout are described in further detail herein such as with respect to
In some embodiments, the pointer event server 208 can keep track of active pointers. The pointer event server 208 can determine if there are one or more active pointers. The pointer event server 208 can cancel a fadeout timer (if active) when the number of active pointers becomes nonzero, and activate the fadeout timer when the number of active pointers returns to zero.
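One possible shape for this bookkeeping is sketched below. The class name and callback wiring are assumptions; the only behavior taken from this description is that the timer is cancelled when the active-pointer count leaves zero and started when it returns to zero.

```typescript
// Sketch of the active-pointer bookkeeping: cancel the fadeout timer when
// the count leaves zero, start it when the count returns to zero. The
// class name and callback wiring are illustrative assumptions.
class PointerTracker {
  private activePointers = new Set<number>();

  constructor(
    private startFadeoutTimer: () => void,
    private cancelFadeoutTimer: () => void
  ) {}

  onPointerDown(pointerId: number): void {
    if (this.activePointers.size === 0) {
      this.cancelFadeoutTimer(); // count stops being zero: suspend any pending fadeout
    }
    this.activePointers.add(pointerId);
  }

  onPointerUp(pointerId: number): void {
    this.activePointers.delete(pointerId);
    if (this.activePointers.size === 0) {
      this.startFadeoutTimer(); // count returned to zero: begin the countdown
    }
  }
}
```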
In some embodiments, presentation of the fadeout can be based on logic with respect to specific user input. For example, where user input corresponds to a long touch event within a threshold area (such as a user holding their finger or pen down in a specific area), additional fadeout logic can be applied such that the fadeout can be accelerated with respect to the long touch event. The pointer event server 208 can determine, from a set of user inputs, a subset of touch user inputs that occur within a threshold period of time and within a threshold area (for example, any set of touch events that occur longer than one second within a one centimeter area). The subset of touch user inputs can thus indicate a long touch event. The pointer event server 208 can determine, using an equation (such as a linear or exponential equation), a fadeout time period based at least in part on the subset of touch user inputs. For example, where the pointer event server 208 uses an exponential equation such as an exponential decay function, touch input up to 1 second as input to the equation may output a corresponding fadeout time period of 500 milliseconds, but any touch input beyond 1 or 2 seconds may result in a corresponding fadeout time period of a much smaller amount such as 1 millisecond or even mere microseconds. The pointer event server 208 can cause presentation, on the media presentation of the computing device, of a fadeout of the annotation visualization according to the fadeout time period, which can accelerate the fadeout animation for a viewer and provide a more efficient user experience by possibly eliminating long and cumbersome fadeout animations.
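The following sketch shows one exponential-decay mapping consistent with the example figures above (roughly 500 milliseconds of fadeout at one second of holding, collapsing toward one millisecond beyond two seconds). The specific constants are illustrative assumptions rather than values taken from this disclosure.

```typescript
// One exponential-decay mapping consistent with the example figures above:
// roughly 500 ms of fadeout at one second of holding, collapsing toward
// about 1 ms beyond two seconds. The constants are illustrative assumptions.
function fadeoutPeriodMs(holdDurationSec: number, baseMs = 500, decayRate = 6.2): number {
  const excessSec = Math.max(0, holdDurationSec - 1); // holding beyond ~1 s accelerates the fadeout
  return Math.max(1, baseMs * Math.exp(-decayRate * excessSec));
}
```

Under these assumed constants, fadeoutPeriodMs(1) returns 500 while fadeoutPeriodMs(2) returns approximately 1, so a long touch is removed almost immediately once its fadeout begins.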
In some embodiments, at the fadeout-related blocks 412, 414, or 416, a determination can be made whether a fadeout or a fadeout timer should be applied. For example, the pointer event server 208 can determine whether a fadeout or a fadeout timer should be applied based at least on whether the annotation visualization or the corresponding user input occurred within a persistence area. Additional details regarding a persistence area are described in greater detail below with respect to
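A minimal sketch of such a persistence-area determination, assuming the area is a simple rectangle, could be the following; an annotation whose inputs satisfy this check could then be marked persistent so that no fadeout timer is applied to it.

```typescript
// Sketch of a persistence-area check, assuming the area is a simple
// rectangle: inputs landing inside it mark the visualization persistent,
// so no fadeout timer is applied to it.
interface Rect { x: number; y: number; width: number; height: number }

function isWithinPersistenceArea(
  inputs: { x: number; y: number }[],
  area: Rect
): boolean {
  return inputs.every(
    (p) =>
      p.x >= area.x && p.x <= area.x + area.width &&
      p.y >= area.y && p.y <= area.y + area.height
  );
}
```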
Some of the blocks 402, 404, 406, 408, 410, 412, 414, 416 of the example method 400 can correspond to the annotation pseudo-algorithm(s) described in the below Tables 1, 2, or 3.
The apparatus 600 allows an object 7 that is brought into close vicinity of, or in contact with, the touch surface 4 to interact with the propagating light at the point of touch. In this interaction, part of the light may be scattered by the object 7, part of the light may be absorbed by the object 7, and part of the light may continue to propagate in its original direction across the panel 1. Thus, the touching object 7 causes a local frustration of the total internal reflection, which leads to a decrease in the energy (or equivalently, the power or intensity) of the transmitted light, as indicated by the thinned lines “i” downstream of the touching objects 7 in
The emitters 2 can be distributed along the perimeter of the touch surface 4 to generate a corresponding number of light sheets inside the panel 1. Each light sheet can be formed as a beam of light that expands (as a “fan beam”) in the plane of the panel 1 while propagating in the panel 1 from a respective incoupling region/point on the panel 1. The detectors 3 can be distributed along the perimeter of the touch surface 4 to receive the light from the emitters 2 at a number of spaced-apart outcoupling regions/points on the panel 1. The incoupling and outcoupling regions/points can refer to the positions where the beams enter and leave, respectively, the panel 1. The light from each emitter 2 can propagate inside the panel 1 to a number of different detectors 3 on a plurality of light propagation paths D. Even if the light propagation paths D correspond to light that propagates by internal reflections inside the panel 1, the light propagation paths D may conceptually be represented as “detection lines” that extend across the touch surface 4 between pairs of emitters 2 and detectors 3, as shown in
The detectors 3 can collectively provide an output signal, which can be received or sampled by a signal processor 10. The output signal can contain a number of sub-signals, also denoted “projection signals”, each representing the energy of light emitted by a certain light emitter 2 or received by a certain light detector 3. Depending on implementation, the signal processor 10 may need to process the output signal for separation of the individual projection signals. The projection signals can represent the received energy, intensity or power of light received by the detectors 3 on the individual detection lines D. Whenever an object touches a detection line, the received energy on this detection line is decreased or “attenuated.”
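Purely as an illustrative sketch of how the signal processor 10 might quantify this attenuation, the received energy on each detection line can be compared against a touch-free reference measurement. The logarithmic measure used below is a common convention and is an assumption here, not a statement of this disclosure.

```typescript
// Illustrative quantification of per-detection-line attenuation: compare
// each line's received energy to a touch-free reference measurement. The
// logarithmic measure is a common convention and an assumption here.
function detectionLineAttenuations(received: number[], reference: number[]): number[] {
  return received.map((energy, line) => {
    const transmission = energy / reference[line]; // close to 1.0 when the line is untouched
    return -Math.log(Math.max(transmission, 1e-9)); // larger values indicate stronger touches
  });
}
```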
The signal processor 10 may be configured to process the projection signals so as to determine a property of the touching objects, such as a position (e.g., in the x, y coordinate system shown in
In the illustrated example, the apparatus 600 can also include a controller 12 which can be connected to selectively control the activation of the emitters 2 and, possibly, the readout of data from the detectors 3. Depending on the implementation, the emitters 2 or detectors 3 may be activated in sequence or concurrently. The signal processor 10 and the controller 12 may be configured as separate units, or they may be incorporated in a single unit. One or both of the signal processor 10 and the controller 12 may be at least partially implemented by software executed by a hardware processor 14.
The memory 806 may contain computer program instructions (grouped as modules or components in some embodiments) that the hardware processor(s) 804 executes in order to implement one or more embodiments. The memory 806 generally includes RAM, ROM or other persistent, auxiliary or non-transitory computer-readable media. The memory 806 may store an operating system that provides computer program instructions for use by the hardware processor(s) 804 in the general administration and operation of the computing system 800. The memory 806 may further include computer program instructions and other information for implementing aspects of the present disclosure. In addition, memory 806 may include or communicate with the storage device 810. A storage device 810, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to the bus 802 for storing information, data, or instructions.
The memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by hardware processor(s) 804. Such instructions, when stored in storage media accessible to hardware processor(s) 804, render the computer system 800 into a special-purpose machine that is customized to perform the operations specified in the instructions.
In general, the word “instructions,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software modules, possibly having entry and exit points, written in a programming language, such as, but not limited to, Java, Scala, Lua, C, C++, or C#. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, but not limited to, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, or may be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices by their hardware processor(s) may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, or may be comprised of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the instructions described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as the storage device 810. Volatile media includes dynamic memory, such as the main memory 806. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Computing system 800 also includes a communication interface 818 coupled to the bus 802. Communication interface 818 provides two-way data communication to the network 822. For example, the communication interface 818 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information via cellular, packet radio, GSM, GPRS, CDMA, WiFi, satellite, radio, RF, radio modems, ZigBee, XBee, XRF, XTend, Bluetooth, WPAN, line of sight, satellite relay, or any other wireless data link.
The computing system 800 can send messages and receive data, including program code, through the network 822 and the communication interface 818. A computing system 800 may communicate with other computing devices 830 via the network 822.
The computing system 800 may include a distributed computing environment including several computer systems that are interconnected using one or more computer networks. The computing system 800 could also operate within a computing environment having a fewer or greater number of devices than are illustrated in
The various illustrative logical blocks, modules, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A hardware processor can include electrical circuitry or digital logic circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
The steps of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module stored in one or more memory devices and executed by one or more processors, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An example storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The storage medium can be volatile or nonvolatile. The processor and the storage medium can reside in an ASIC.
Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements or states. Thus, such conditional language is not generally intended to imply that features, elements or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
This application claims benefit of U.S. Provisional Patent Application Ser. No. 62/775,270 entitled “Touch Sensing Device and Annotation Graphical User Interface” filed Dec. 4, 2018, which is hereby incorporated by reference in its entirety.