The present description relates generally to processing input from an input device, such as an electronic stylus or pen/pencil, and/or from touch inputs, and to presenting such input in a graphical interface.
Interaction with electronic devices can be performed using various input devices, such as touch screen displays, touch-sensitive surfaces, remote controls, mice and other input devices. Touch-sensitive surfaces and touch screen displays, in particular, have become increasingly popular input devices, as has providing handwritten input using such input devices. Providing a graphical interface for presenting handwritten input has unique challenges.
Certain features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several embodiments of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Handwritten content in an electronic device can be received as input from touch inputs and/or from an electronic stylus or pen/pencil. Existing approaches to rendering handwritten input may restrict the amount of handwritten input based on the size of a display of a given electronic device, limiting the handwritten input to the dimensions of that display. In the subject handwritten input rendering system, a graphical canvas may be provided to enable low-latency rendering of handwritten input, which may then be copied over to other graphical content upon cessation of the handwritten input. In particular, implementations of the subject technology can render a scene and/or text, perform zooming in response to detecting an appropriate gesture, tile the current scene at the zoom setting, and switch to the graphical canvas for writing in a lower-latency mode. Additionally, the graphical canvas allows long (e.g., continuous) handwritten input to be rendered within the graphical canvas and subsequently copied over to the current scene once the handwritten input ceases.
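For illustration only, the following is a minimal sketch, in Swift with hypothetical type names, of the mode switch implied by the description above: handwriting begins on a low-latency canvas overlay, and rendering returns to the tiled scene when input ceases. None of these names are prescribed by the present description.

```swift
// Hypothetical rendering modes implied by the description: the scene is
// normally drawn as tiles; while handwriting is in flight, drawing switches
// to a low-latency canvas overlay, and strokes are merged back afterwards.
enum RenderingMode {
    case tiledScene          // the zoomed, tiled scene
    case lowLatencyCanvas    // transparent overlay used while strokes are in flight
}

struct RenderingPipeline {
    private(set) var mode: RenderingMode = .tiledScene

    // Called on stylus-down (or initial touch): switch to the overlay.
    mutating func handwritingBegan() { mode = .lowLatencyCanvas }

    // Called on stylus-up (or touch release): strokes are composited back
    // into the tiled scene and normal rendering resumes.
    mutating func handwritingEnded() { mode = .tiledScene }
}
```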
The network environment 100 includes an electronic device 110 and a server 120 that may be included in a group of servers 130. The network 106 may communicatively (directly or indirectly) couple, for example, the electronic device 110 with the server 120 and/or the group of servers 130. In one or more implementations, the network 106 may be an interconnected network of devices that may include, or may be communicatively coupled to, the Internet. For explanatory purposes, the network environment 100 is illustrated in FIG. 1 as including a single electronic device 110 and a single server 120; however, the network environment 100 may include any number of electronic devices and any number of servers.
The electronic device 110 may include a touchscreen and may be, for example, a portable computing device such as a laptop computer that includes a touchscreen, a smartphone that includes a touchscreen, a peripheral device that includes a touchscreen (e.g., a digital camera, headphones), a tablet device that includes a touchscreen, a wearable device that includes a touchscreen (such as a watch, a band, and the like), any other appropriate device that includes, for example, a touchscreen, or any electronic device with a touchpad. In one or more implementations, the electronic device 110 may not include a touchscreen but may support touchscreen-like gestures, such as in a virtual reality or augmented reality environment. In one or more implementations, the electronic device 110 may include a touchpad.
The electronic device 110 may include one or more contact intensity sensors. A contact intensity sensor may include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force and/or pressure of a contact on a touch-sensitive surface). In an example, a contact intensity sensor can receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. Further, the electronic device 110 can also include at least one contact intensity sensor that is collocated with, or proximate to, a touch-sensitive surface. The electronic device 110, in one example, may also include at least one contact intensity sensor that is located on the back of the electronic device 110, opposite the touchscreen, which may be located on the front of the electronic device 110.
An intensity of a contact on a touch-sensitive surface (e.g., touchscreen, touchpad, etc.) can refer to a force or a pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface. Intensity of a contact can be determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or a sum) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Thus, it is appreciated that the contact intensity sensors provided by the electronic device 110 can provide a respective force measurement and/or a respective pressure measurement of a given contact on the touch-sensitive surface.
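By way of illustration, the following sketch shows one way force measurements from multiple sensors could be combined into an estimated contact force using a weighted average, as described above. The `ForceSample` type and its weighting scheme are illustrative assumptions, not a device calibration.

```swift
// A hypothetical per-sensor reading: a raw force value plus a weight,
// e.g., based on the sensor's proximity to the contact point.
struct ForceSample {
    let newtons: Double
    let weight: Double
}

// Combine readings from several force sensors into one estimated contact
// force via a weighted average (a plain sum is the all-weights-equal case).
func estimatedContactForce(from samples: [ForceSample]) -> Double {
    let totalWeight = samples.reduce(0) { $0 + $1.weight }
    guard totalWeight > 0 else { return 0 }
    let weightedSum = samples.reduce(0) { $0 + $1.newtons * $1.weight }
    return weightedSum / totalWeight
}
```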
The electronic device 110 may implement the subject system to render handwritten input in a graphical canvas, and may be configured to receive handwritten input via different input methods, including touch input or input from an electronic stylus or pen/pencil. The electronic device 110 may implement the example software architecture for rendering handwritten input that is discussed further below with respect to FIG. 2.
The server 120 may be part of a network of computers or the group of servers 130, such as in a cloud computing or data center implementation. The server 120 and/or the group of servers 130 may store data, such as handwritten content, photos, music, text, web pages and/or content provided therein, etc., that may be accessible on the electronic device 110. Furthermore, handwritten content generated on the electronic device 110 may be stored on the server 120 and/or the group of servers 130.
As illustrated in FIG. 2, the example software architecture includes a graphics rendering system 205 and a touch input system 220, with the graphics rendering system 205 including an upper layer drawing framework 210 and a lower latency graphics library 215.
The upper layer drawing framework 210 and the lower latency graphics library 215 can communicate with a GPU driver 250 to provide requests for rendering graphical content (e.g., handwritten input, etc.) onto a display of the electronic device 110. The GPU driver 250, in turn, communicates with a graphics processing unit (GPU) of the electronic device 110 to provide instructions for rendering the graphical content (e.g., based on the requests of the upper layer drawing framework 210 and/or the lower latency graphics library 215).
The touch input system 220 receives input strokes corresponding to handwritten input from a user. In one or more implementations, the touch input system 220 determines, for a given input stroke, the time, location, direction, stroke pressure, and/or stroke force for the input stroke. Stroke pressure, as mentioned herein, can refer to a measurement of pressure (e.g., force per unit area) of a contact (e.g., a finger contact or a stylus contact) corresponding to a stroke input on a given touch-sensitive surface (e.g., touchscreen, touchpad, etc.). In an example, the touch input system 220 samples multiple points within a stroke and records a timestamp for each sampled point. Each point within the stroke may include additional data such as location/proximity, stroke pressure, and/or stroke force. In an example, an input stroke can refer to stroke data received starting at stylus down (or an initial touch input) and ending at stylus up (or a touch release), and, for each input stroke, a set of points that are part of the stroke is sampled. The touch input system 220 sends input strokes to the graphics rendering system 205, which may further process them using the upper layer drawing framework 210 and/or the lower latency graphics library 215 to render the input strokes. Examples of rendering graphics in connection with providing a graphical canvas for displaying handwritten input are described below with respect to FIGS. 4 and 5.
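As a non-authoritative sketch, the per-point stroke data described above might be modeled as follows; the `StrokePoint` and `InputStroke` types and their field names are assumptions for illustration, not a layout prescribed by the present description.

```swift
import Foundation

// One sampled point within a stroke: a timestamp plus location, and
// optionally stroke pressure and/or stroke force where hardware reports them.
struct StrokePoint {
    let timestamp: TimeInterval
    let x: Double
    let y: Double
    let pressure: Double?
    let force: Double?
}

// An input stroke spans stylus-down (or initial touch) to stylus-up
// (or touch release), holding the set of points sampled in between.
struct InputStroke {
    private(set) var points: [StrokePoint] = []
    mutating func append(_ point: StrokePoint) { points.append(point) }
}

// Usage: begin a stroke at stylus-down and append sampled points.
var stroke = InputStroke()
stroke.append(StrokePoint(timestamp: Date().timeIntervalSince1970,
                          x: 12.0, y: 34.0, pressure: 0.5, force: nil))
```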
As further shown in FIG. 2, the upper layer drawing framework 210 renders a current scene, which may include graphical and/or textual content, as a set of tiles held in a memory space corresponding to the upper layer drawing framework 210.
Upon receiving touch input (e.g., from an electronic stylus or pen/pencil) that indicates received input stroke data, image data from the tiles is copied over into a memory space corresponding to the lower latency graphics library 215 for lower-latency rendering. This memory space stores graphics data, including the copied-over image data, to be rendered to a graphical canvas.
The lower latency graphics library 215 displays a graphics context, which is transparent, on the screen of the electronic device 110 in response to the received touch input. The copied-over image data from the tiles is then rendered by the lower latency graphics library 215 for display within this transparent graphics context. As used herein, a graphics context (or context) refers to a destination for drawing graphics, and may also be referred to as a graphical canvas.
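The hand-off into the graphical canvas described above might be sketched as follows, under assumed types; the `TileImage` and `GraphicalCanvas` names are hypothetical stand-ins for the tile image data and the lower-latency memory space, not the framework's actual API.

```swift
// Hypothetical tile image data copied from the upper-layer framework.
struct TileImage {
    let originX: Int
    let originY: Int
    let pixels: [UInt8]
}

// Hypothetical stand-in for the transparent graphics context (the canvas).
final class GraphicalCanvas {
    private(set) var backingStore: [TileImage] = []
    private(set) var isPresented = false

    // Copy tile image data into this canvas's own memory space so the
    // low-latency path can draw without touching the tile memory.
    func adopt(tiles: [TileImage]) {
        backingStore = tiles
    }

    // Present the (initially transparent) context over the scene.
    func present() { isPresented = true }
}
```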
The upper layer drawing framework 210 renders an image corresponding to image data of a current scene (402) in a first view. The image may include graphical data and/or textual data and/or handwritten content. The touch input system 220 receives a touch input corresponding to input stroke data for handwritten input (404) on the rendered image. In response to the touch input, the graphics rendering system 205 copies (406) the image data from the tiles into a memory space of the lower latency graphics library 215 in order to render the image data in a graphical canvas. The graphics rendering system 205, using the lower latency graphics library 215, provides for display the image data in the graphical canvas in a second view. In an implementation, the graphical canvas is overlaid on at least a portion of the rendered image (408). The graphics rendering system 205, using the lower latency graphics library 215, renders the received input stroke data in the graphical canvas, where the input stroke data is continuous with the (initial) touch input (410).
If the touch input is determined to have ended (412) (e.g., based on a touch release event), the graphics rendering system 205 copies graphics data, including the input stroke data, from the memory space of the lower latency graphics library 215 to memory of the upper layer drawing framework 210 corresponding to tiles of the current scene (414). The upper layer drawing framework 210, using a compositing algorithm, merges the copied-over graphics data with the image data of the tiles. The upper layer drawing framework 210 then provides for display (416) the merged input stroke data and tiles in the first view.
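One possible sketch of the copy-back and merge of steps (412)-(416), with assumed types: `compositeOver` is a placeholder for whatever compositing algorithm the upper layer drawing framework actually applies, which the present description does not specify.

```swift
// Merge stroke graphics copied back from the canvas's memory space into a
// tile's image data, pixel by pixel, using a caller-supplied composite rule.
func mergeBack(strokePixels: [UInt8],
               into tilePixels: inout [UInt8],
               compositeOver: (UInt8, UInt8) -> UInt8) {
    for index in tilePixels.indices where index < strokePixels.count {
        tilePixels[index] = compositeOver(strokePixels[index], tilePixels[index])
    }
}

// Usage: a simple "stroke wins wherever it is non-transparent" composite.
var tile: [UInt8] = [0, 0, 0, 0]
mergeBack(strokePixels: [9, 0, 9, 0], into: &tile) { stroke, background in
    stroke > 0 ? stroke : background
}
```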
In some instances, based on received touch input, the first view can be scrolled past an initial area corresponding to the set of tiles provided in the first view (e.g., rendered by the upper layer drawing framework 210), such as when the user is viewing other parts of a document with handwritten content. In these instances, the tiles from the first view (e.g., provided by the upper layer drawing framework 210) can be flushed out of memory in order to load new tiles corresponding to the new area based on the scroll movement. In other instances, tiles may be flushed out of memory after a particular memory threshold is reached in order to improve system performance and/or memory utilization. An example process of implementing this type of tile management is described below with respect to FIG. 5.
The graphics rendering system 205, based on received touch input from the touch input system 220, detects that scrolling past an initial area of a set of tiles has occurred (502). The set of tiles may be rendered by the upper layer drawing framework 210. The graphics rendering system 205 then flushes tiles out of memory (504) and loads new tiles into memory (508), where the new tiles correspond to the new area based on the scrolling. Alternatively, if scrolling past the initial area of the graphical canvas has not occurred (502) but a memory threshold has been reached (506), the graphics rendering system 205 flushes tiles according to their distance from the current view, flushing the tiles that are furthest from the current view (510). In an example, this distance can be determined based on the distance of each tile from a centroid of the current view (e.g., the center of the area including the current set of tiles in memory). In yet another example, the graphics rendering system 205 can assign a value to each tile based on a metric associated with the complexity of rendering a given tile. In this example, tiles that require greater complexity (e.g., computing resources) in order to be rendered are favored over other, less complex, tiles, such that the other tiles would be flushed out of memory before the more complex tiles.
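The two flush policies described above might be sketched as follows, with assumed fields (`CachedTile` and its members are hypothetical): the distance-based policy evicts the tiles furthest from the view centroid, while the complexity-weighted variant flushes cheap-to-re-render tiles first so that costly tiles stay resident longer.

```swift
// Hypothetical cached tile: its center position plus a metric for the
// complexity (e.g., computing resources) of re-rendering it.
struct CachedTile {
    let centerX: Double
    let centerY: Double
    let renderCost: Double
}

// Distance policy: pick the `count` tiles furthest from the view centroid.
func tilesToFlush(_ tiles: [CachedTile],
                  viewCentroidX: Double,
                  viewCentroidY: Double,
                  count: Int) -> [CachedTile] {
    func distance(_ t: CachedTile) -> Double {
        let dx = t.centerX - viewCentroidX
        let dy = t.centerY - viewCentroidY
        return (dx * dx + dy * dy).squareRoot()
    }
    // Furthest-first: greatest distance from the centroid is flushed first.
    return Array(tiles.sorted { distance($0) > distance($1) }.prefix(count))
}

// Complexity-weighted variant: cheapest-to-re-render tiles are flushed
// first, favoring tiles whose rendering is more complex.
func tilesToFlushFavoringComplex(_ tiles: [CachedTile], count: Int) -> [CachedTile] {
    Array(tiles.sorted { $0.renderCost < $1.renderCost }.prefix(count))
}
```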
The bus 608 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 600. In one or more implementations, the bus 608 communicatively connects the one or more processing unit(s) 612 with the ROM 610, the system memory 604, and the permanent storage device 602. From these various memory units, the one or more processing unit(s) 612 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 612 can be a single processor or a multi-core processor in different implementations.
The ROM 610 stores static data and instructions that are needed by the one or more processing unit(s) 612 and other modules of the electronic system 600. The permanent storage device 602, on the other hand, may be a read-and-write memory device. The permanent storage device 602 may be a non-volatile memory unit that stores instructions and data even when the electronic system 600 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 602.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 602. Like the permanent storage device 602, the system memory 604 may be a read-and-write memory device. However, unlike the permanent storage device 602, the system memory 604 may be a volatile read-and-write memory, such as random access memory. The system memory 604 may store any of the instructions and data that one or more processing unit(s) 612 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 604, the permanent storage device 602, and/or the ROM 610. From these various memory units, the one or more processing unit(s) 612 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 608 also connects to the input and output device interfaces 614 and 606. The input device interface 614 enables a user to communicate information and select commands to the electronic system 600. Input devices that may be used with the input device interface 614 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 606 may enable, for example, the display of images generated by electronic system 600. Output devices that may be used with the output device interface 606 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in FIG. 6, the bus 608 also couples the electronic system 600 to one or more networks through one or more network interfaces. In this manner, the electronic system 600 can be a part of a network of computers, such as a local area network or a wide area network, or of a network of networks, such as the Internet.
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to”, “operable to”, and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof, and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to the other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
The present application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/514,773, entitled “PROVIDING A GRAPHICAL CANVAS FOR HANDWRITTEN INPUT,” filed Jun. 2, 2017, which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility Patent Application for all purposes.