Using attributes for font recommendations

Information

  • Patent Grant
  • Patent Number
    11,537,262
  • Date Filed
    Wednesday, July 20, 2016
  • Date Issued
    Tuesday, December 27, 2022
Abstract
A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor to execute the instructions to perform operations that include receiving data representing one or more user-selected item attributes. The data includes one of at least four selectable interest levels for each of the one or more user-selected item attributes. Operations also include identifying one or more items representative of the selected interest level for each of the one or more user-selected item attributes, and initiating delivery of data representing the identified one or more items for user selection.
Description
BACKGROUND

This description relates to improving font selection efficiency by providing selectable attributes of fonts to a user. Upon selection of one or more attributes, fonts are identified and presented that reflect the selections.


In proportion to the astronomical growth of available text content, for example via the Internet, the demand to express such content has grown. Similar to the variety of products provided by online stores, content authors, publishers, graphic designers, etc. have grown accustomed to having a vast variety of fonts with which to present text. However, this virtual explosion in the sheer number of usable fonts can become overwhelming and can easily saturate an individual attempting to find and select a font to present textual content. Faced with such an overabundance of information, an individual's decision-making abilities can be hampered, causing frustration.


SUMMARY

The systems and techniques described can aid individuals such as designers (e.g., website designers) by efficiently recommending fonts that reflect particular attributes (e.g., happiness, trustworthiness, etc.) identified by the designer. Rather than a simple binary indication of using a font attribute or not (e.g., font should include happiness or font should not include happiness), the designer provides a level of interest for each selected attribute. For example, happiness may be reflected in the recommended fonts or strongly reflected in the recommended fonts. By allowing a designer to select which attributes should be reflected in recommended fonts and an interest level in each selected attribute, the designer can have a desired topic (e.g., emotion) conveyed in font recommendations and not have to laboriously scan through hundreds if not thousands of fonts which may or may not be relevant to the design task at hand. Further, by efficiently providing a list of highly relevant font recommendations, a designer may select and license multiple fonts rather than just one reasonable font selected from a multitude of irrelevant fonts.


In one aspect, a computing device implemented method includes receiving data representing one or more user-selected item attributes. The data includes one of at least four selectable interest levels for each of the one or more user-selected item attributes. The method also includes identifying one or more items representative of the selected interest level for each of the one or more user-selected item attributes, and initiating delivery of data representing the identified one or more items for user selection.


Implementations may include one or more of the following features. The item may be a font. Two of the at least four selectable interest levels may represent interest in having the item attribute reflected in the one or more identified items. Two of the at least four selectable interest levels may represent interest in the item attribute being absent in the one or more identified items. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may employ a deep learning machine. One or more biased survey questions may be used to train the deep learning machine. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may employ an ensemble of deep learning machines. The ensemble of deep learning machines may be trained using data that represents a listing of items for each of the user-selected attributes, wherein the data that represents the item listing is weighted. The data that represents the item listing may be weighted by a biquadratic curve. Data that represents fonts located at the start and end of the item listing may be similarly weighted. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may include multiplying a numerical value representing an attribute's presence in an item and a numerical value representing the selected interest level for the attribute. At least one of the item attributes may be identified from survey data as being user-selectable.


In another aspect, a system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor to execute the instructions to perform operations that include receiving data representing one or more user-selected item attributes. The data includes one of at least four selectable interest levels for each of the one or more user-selected item attributes. Operations also include identifying one or more items representative of the selected interest level for each of the one or more user-selected item attributes, and initiating delivery of data representing the identified one or more items for user selection.


Implementations may include one or more of the following features. The item may be a font. Two of the at least four selectable interest levels may represent interest in having the item attribute reflected in the one or more identified items. Two of the at least four selectable interest levels may represent interest in the item attribute being absent in the one or more identified items. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may employ a deep learning machine. One or more biased survey questions may be used to train the deep learning machine. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may employ an ensemble of deep learning machines. The ensemble of deep learning machines may be trained using data that represents a listing of items for each of the user-selected attributes, wherein the data that represents the item listing is weighted. The data that represents the item listing may be weighted by a biquadratic curve. Data that represents fonts located at the start and end of the item listing may be similarly weighted. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may include multiplying a numerical value representing an attribute's presence in an item and a numerical value representing the selected interest level for the attribute. At least one of the item attributes may be identified from survey data as being user-selectable.


In still another aspect, one or more computer readable media store instructions that are executable by a processing device and that, upon such execution, cause the processing device to perform operations that include receiving data representing one or more user-selected item attributes. The data includes one of at least four selectable interest levels for each of the one or more user-selected item attributes. Operations also include identifying one or more items representative of the selected interest level for each of the one or more user-selected item attributes, and initiating delivery of data representing the identified one or more items for user selection.


Implementations may include one or more of the following features. The item may be a font. Two of the at least four selectable interest levels may represent interest in having the item attribute reflected in the one or more identified items. Two of the at least four selectable interest levels may represent interest in the item attribute being absent in the one or more identified items. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may employ a deep learning machine. One or more biased survey questions may be used to train the deep learning machine. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may employ an ensemble of deep learning machines. The ensemble of deep learning machines may be trained using data that represents a listing of items for each of the user-selected attributes, wherein the data that represents the item listing is weighted. The data that represents the item listing may be weighted by a biquadratic curve. Data that represents fonts located at the start and end of the item listing may be similarly weighted. Identifying the one or more items representative of the selected interest level for each of the user-selected item attributes may include multiplying a numerical value representing an attribute's presence in an item and a numerical value representing the selected interest level for the attribute. At least one of the item attributes may be identified from survey data as being user-selectable.


These and other aspects, features, and various combinations may be expressed as methods, apparatus, systems, means for performing functions, program products, etc.


Other features and advantages will be apparent from the description and the claims.





DESCRIPTION OF DRAWINGS


FIG. 1 illustrates a smartphone presenting textual content in multiple fonts.



FIGS. 2-5 illustrate an interface that presents font recommendations based upon user-selected font attributes and levels of interest.



FIG. 6 is a block diagram of a network environment including a font service provider that manages font recommendations.



FIG. 7 is a block diagram of a font service provider.



FIG. 8 is a graphical representation of training learning machines to provide font recommendations.



FIG. 9 is a graphical representation for identifying fonts from selected attributes.



FIG. 10 is a flowchart of operations for identifying font recommendations from user-selected font attributes and levels of interest.



FIG. 11 illustrates an example of a computing device and a mobile computing device that can be used to implement the techniques described here.





DETAILED DESCRIPTION

Referring to FIG. 1, a computing device (e.g., a mobile smartphone 100) includes a display 102 that allows a user to view (and in some cases create, edit, etc.) various types of content such as text, via one or more applications. Along with presenting different content from a variety of sources (e.g., Internet sites), browsers and other types of applications (e.g., word processors) may use different types of fonts to present a desired effect. For example, web assets (e.g., web pages, web sites, web based advertisements, etc.) may be developed that use particular fonts to quickly catch the attention of a viewer. With an ever-increasing number of fonts at a designer's disposal (e.g., for web asset development, adjusting presented text, etc.), selecting an appropriate font for the project at hand could become a noticeably time-consuming task. To reduce such a potential time sink, one or more techniques may be implemented to recommend available fonts (e.g., for purchase, license, etc.) to a designer. A list of font attributes can be presented to allow a web asset designer to select one or more of the attributes as a basis for identifying font recommendations. From the recommendations, the user could select one or more fonts for use in his or her development effort. In the illustrated example, a portion 104 of the display 102 presents a web page for an establishment (e.g., an arcade). The name of the arcade 106 is presented in a font that has attributes that could be considered as emotionally uplifting and project a positive connotation (e.g., for attracting arcade goers). Just as this font may suggest favorable feelings, other types of fonts may induce other feelings such as negative, neutral, or other emotions (e.g., a relatively neutral font presenting other information 108 on another page portion). To present the displayed text, the designer may have consciously selected the font 106 to invoke such an emotion from the viewer. Selecting a font to achieve such an end result could take a considerable amount of time based on the number of potential candidate fonts (e.g., thousands of fonts). Rather than laboriously searching for this font, the designer could select a representative attribute (from a list of attributes) to trigger a process of recommending fonts that reflect the selected attribute. For example, by selecting an attribute associated with happiness, the font 106 can be identified and presented to the user as a recommendation (along with other fonts that convey happiness). Additionally, by presenting such a collection of selectable fonts that share the same attribute or attributes (e.g., happiness), the probability increases that the designer may license, purchase, etc. more fonts than originally planned, thereby potentially assisting the designer with other portions of this development and/or other developments and increasing sales for a font provider.


Referring to FIG. 2, a computer system 200 used for asset development is shown that includes a display 202 presenting a split screen that includes a menu of user-selectable font attributes 204 and a panel 206 that lists font recommendations based upon the selected attribute (or attributes). To present this information in this example, the computer system 200 includes an operating system 206 and executes a browser 208 that exchanges information with a variety of data sources (e.g., a font provider from which the content of the menu 204 and panel 206 originates). In this example, the menu 204 includes a list of thirty-one attributes (e.g., artistic, happy, sad, etc.) that are selectable by a user by using a pointing device (e.g., a mouse-driven pointer positioned over an attribute and selected). In some situations, a single attribute (e.g., “Happy”) may be selected to indicate the type of fonts being sought; however, the user can also select multiple attributes (e.g., “Happy” and “Legible”). In general, each of the attributes included in the menu 204 can be considered as a quality that can be graphically represented in one or more fonts. In this particular example, the attribute labeled “Happy” is selected and a horizontal bar 208 is presented to graphically indicate the selection (along with highlighting the text label of the attribute).


Along with selecting an attribute, a level of interest in the particular attribute may also be selected by the user by interacting with the menu 204. For example, by selecting the attribute once (with the pointing device), a first level of interest is selected (as indicated with a left portion 210 of the bar 208). By selecting the first level of interest, in this example fonts that somewhat reflect the corresponding attribute (“Happy”) are identified. This first level of interest indicates that candidate fonts can convey a weaker representation of the attribute compared to a second level of interest that would strongly reflect the attribute. In this example, only the first level of interest is selected and the second level has not been selected, as indicated by a non-highlighted right portion 212 of the bar 208.


In response to the attribute selection in the menu 204, the panel 206 presents a listing of fonts identified as representing the selected attribute (e.g., fonts that convey a happy emotion). In this example, the fonts are ordered from top (of the pane 206) to bottom, with the upper fonts (e.g., font 214) identified as providing better matches and lower fonts (e.g., font 216) identified as providing a lesser match. The panel 206 also includes a graphical representation 218 of the selected attribute and an indication (e.g., a horizontal bar) of the selected level of interest. Similar to having attribute selection adjustments cause changes to the recommended fonts listed in the pane 206, adjusting the level of interest can also change the listed fonts. For example, referring to FIG. 3, the level of interest in the selected attribute (e.g., the “Happy” attribute) is increased and correspondingly the list of selectable recommended fonts changes. In this instance, the attribute is selected a second time (via user interactions with a pointing device) in the menu 204 and the right portion 212 of the horizontal bar 208 is graphically highlighted to indicate the interest in being recommended fonts that strongly reflect the “Happy” attribute. Based upon this selection, the fonts presented in pane 206 change and a different font (e.g., font 300) now appears at the top position of the selectable font recommendation list due to the desire to be presented with font recommendations that strongly reflect the attribute. Similarly, changes appear while progressing down the list, including the font in the last presented position (e.g., font 302). To further assist the user with regard to the attribute and level of interest being used to identify the fonts, an updated graphical representation 304 is included in the pane 206 that indicates both the selected attribute and the changed level of interest (e.g., a second level of interest).


Referring to FIG. 4, similar to adjusting the level of interest in a font attribute, the number of attributes selected for identifying font recommendations may be adjusted. In this illustration, along with selecting one attribute (“Happy”), a second attribute (“Legible”) is selected. Further, the level of interest for each of these attributes is independently selected. In this example, a second level of interest is selected for both of the attributes, as separately indicated by graphical horizontal bars 400 and 402 provided in menu 204. Similar graphical representations 404 and 406 are included in the pane 206 that presents the list of fonts that reflect these multiple attributes and the respective level of interest selected for each attribute. Compared to the font lists presented in FIGS. 2 and 3, the fonts presented in FIG. 4 clearly reflect the selection of the “Legible” attribute. The characters included in each recommended font have more sharply defined shapes and are quickly recognizable. While being more legible (with respect to the font listing presented in FIGS. 2 and 3), the fonts also include shapes that convey an uplifting emotion as directed by the selection of the “Happy” attribute.


Referring to FIG. 5, attributes may be selected so that fonts are identified that are absent graphical characteristics of an attribute. By providing this functionality, a designer can clearly indicate what graphical qualities are not of interest for a project under development. As illustrated, selecting the absence of an attribute in menu 204 can be performed in a manner similar to selecting an attribute. As demonstrated in previous figures, one attribute 500 (e.g., “Happy”) is selected having a second level of interest (by using a pointing device to twice select the attribute) as indicated by a horizontal bar 502. One or more techniques may be employed to allow a user to select that a particular attribute is undesired and should be absent from recommended fonts. For example, another selectable graphic can be presented to allow the user to indicate his or her interest in not having an attribute reflected in font recommendations. In this illustrated arrangement, a graphic 504 (e.g., the word “Not”) appears next to an attribute 506 (e.g., the term “Artistic”) when the pointer (controlled by the pointing device) hovers in a location to the left of an attribute. By selecting the graphic 504 (with the pointing device), the user indicates that graphical aspects of the attribute 506 should not be represented in the font recommendations. Additionally, a level of interest can similarly be selected when requesting the absence of an attribute. As represented by horizontal bar 208, the graphic 504 has been twice selected to set a second level, which represents a strong interest in not having the attribute 506 represented in font recommendations. A lesser level of interest may similarly be chosen by selecting the graphic 504 once. With the attributes and levels of interest selected, font recommendations are produced and presented on the pane 206. Additionally, attribute graphics 508 and 510 are included in the pane 206 and illustrate the selected attributes and their respective levels of interest. Further, a selectable graphic 512, 514 is included with each of the attribute graphics to allow the user to remove the corresponding attribute entirely from being used for identifying font recommendations.


Referring to FIG. 6, a computing environment 600 is presented that includes a computer system 602 that a user (e.g., designer) may interact with (using a keyboard, a pointing device, etc.) for gathering information (e.g., font attributes, level of interest, etc.) and presenting information (e.g., font recommendations). In this arrangement, a browser application 604 provides an interface for collecting information from the user and presenting information; however, other types of applications and processes may be employed. In one possible arrangement, the computer system 602 may execute a software agent 606 that collects information associated with fonts (e.g., previous attribute and level of interest selections, fonts selected from recommendations, etc.). In some arrangements, the software agent 606 may solicit information from a user (e.g., initiate surveys, etc.), or collect information in an unsolicited manner (e.g., collect pointing device movements, click data, etc.). Such agents can be considered a software module that is executable in a substantially autonomous manner. For example, upon being provided access to the computer system 602, a software agent may operate without considerable user interaction. By operating in a somewhat flexible manner, the software agent can adaptively address font information needs. The software agent 606 may operate in a somewhat persistent manner to identify information such as font attributes, levels of interest, selected recommended fonts, etc. For example, the software agent 606 may execute in a substantially continuous manner.


In the presented environment 600, font information 608 (e.g., selected attribute(s), level(s) of interest, etc.) is sent over one or more networks (e.g., the Internet 610) to a font service provider 612 for processing (e.g., identifying fonts for recommendation, etc.). After the provided information is processed to identify fonts to recommend, one or more techniques may be implemented to provide the recommendations to the computer system 602 or other computing devices. For example, one or more files may be produced by the font service provider 612 to send font recommendations 614 to the computer system 602. In some arrangements, the font service provider 612 may also provide the software agents to the computing devices in order to perform operations, such as collecting font attribute related information (e.g., selected recommended fonts, etc.), as needed. Agents delivered from the font service provider 612 may also provide other functions, for example, collecting other types of information such as sales information or survey responses to assist in characterizing various fonts with respect to different attributes.


To process and store information associated with font attributes being provided by the computer system 602, the font service provider 612 typically needs access to one or more libraries of fonts, font information, etc. that may be stored locally, remotely, etc. For example, font libraries and libraries of font information may be stored in a storage device 616 (e.g., one or more hard drives, CD-ROMs, etc.) on site. Being accessible by a server 618, the libraries may be used, along with information provided from computing devices, software agents, etc., to collect font attribute and level of interest information, identify font recommendations, provide the font recommendations to end users (e.g., via the pane 206), etc. Although illustrated as being stored in a single storage device 616, collections of font information may be retained by the font service provider 612 using numerous storage techniques and devices. Lists of fonts, attributes and related information can also be stored (e.g., on the storage device 616) for later retrieval and use. The font service provider 612 may also access font information at separate locations as needed. For example, along with identifying font recommendations for the computer system 602, the server 618 may be used to collect needed information from one or more sources external to the font service provider 612 (e.g., via the Internet 610).


Along with collecting and processing font attributes and providing font recommendations, the font service provider 612 may provide other functions. For example, multiple fonts may be determined to be similar by the font service provider 612 as described in U.S. patent application Ser. No. 14/046,609, entitled “Analyzing Font Similarity for Presentation”, filed 4 Oct. 2013, and U.S. patent application Ser. No. 14/694,494, entitled “Using Similarity for Grouping Fonts and Individuals for Recommendations”, filed 23 Apr. 2015, both of which are incorporated by reference in their entirety. The font service provider 612 may also provide the functionality of characterizing and pairing fonts (based upon one or more rules) as described in U.S. patent application Ser. No. 14/690,260, entitled “Pairing Fonts for Presentation”, filed 17 Apr. 2015, which is also incorporated by reference in its entirety. In some arrangements, one or more of these functions may be provided on one or more user interfaces (UIs), application program interfaces (APIs), etc. By employing these technologies, additional functionality may be provided along with recommended fonts that may more likely satisfy the interests of end users. To provide such functionality, the server 618 executes a font recommendation manager 620, which, in general, identifies font recommendations based upon attributes and levels of interest selected by a user. The font recommendation manager 620 may also provide other functionality such as collecting information and identifying attributes as being associated with particular fonts. Further, the strength with which each attribute is graphically reflected in a particular font, along with how the font ranks among other fonts based upon the selected attributes and levels of interest, can be determined. To collect and use additional information in these determinations, the font service provider 612 may perform operations (e.g., tracking, monitoring, etc.) regarding other user interactions. For example, records may be stored (e.g., in storage device 616) that reflect particular fonts that have been requested, licensed, etc. and provided to particular users, etc.


The environment 600 may utilize various types of architectures to provide this functionality. For example, to process information (e.g., provided font information 608, survey data, monitored user interactions, etc.) to prepare font recommendations, etc., the environment may employ one or more knowledge-based systems such as an expert system. In general, such expert systems are designed to solve relatively complex problems by using reasoning techniques that may employ conditional statements (e.g., if-then rules). In some arrangements, such expert systems may use a two sub-system design, in which one system component stores structured and/or unstructured information (e.g., a knowledge base) and a second system component applies rules, etc. to the stored information (e.g., an inference engine) to determine results of interest (e.g., font recommendations).
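To make the two sub-system split concrete, the following minimal sketch (in Python) shows a knowledge base of stored facts and an inference engine that applies if-then rules to those facts; the rule contents, font names, and thresholds are purely hypothetical and are not taken from the description.

```python
# Minimal sketch of a two sub-system expert design: a knowledge base of
# stored facts plus an inference engine applying if-then rules.
# All names, values, and rules here are hypothetical.

knowledge_base = {
    "ArcadeFont": {"happy": 0.9, "legible": 0.4},
    "ReportFont": {"happy": 0.2, "legible": 0.9},
}

rules = [
    # (condition, conclusion) pairs; each condition inspects the stored facts.
    (lambda facts: facts["happy"] > 0.7, "recommend for upbeat designs"),
    (lambda facts: facts["legible"] > 0.8, "recommend for body text"),
]

def infer(font_name):
    """Apply every rule whose condition holds for the named font."""
    facts = knowledge_base[font_name]
    return [conclusion for condition, conclusion in rules if condition(facts)]

print(infer("ArcadeFont"))  # ['recommend for upbeat designs']
```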


Referring to FIG. 7, the font recommendation manager 620 (which includes a number of modules) is executed by the server 618 present at the font service provider 612. In this arrangement, the font recommendation manager 620 includes an information collector 700 that is capable of receiving data that represents how attributes are reflected in particular fonts. For example, survey data may be collected that provides how individuals (e.g., users, designers, etc.) view attribute/font relationships. In this arrangement, such data may be previously stored (e.g., surveys stored in a collected information database 702) and retrieved from the storage device 616. Data representing such survey information may also be retrieved from one or more sources external to the font service provider 612; for example, such information may be attained from one or more storage devices of a survey manager (e.g., an entity separate from the font service provider 612). Along with survey information, the storage device 616 (or other storage devices at the font service provider 612) may contain a font database 704 that includes information about numerous fonts, font attributes, etc. From the information stored in the font database 704, data may be retrieved to produce surveys for determining how particular attributes are reflected in particular fonts. For example, the font database 704 may include data that represents various types of font families (e.g., Times New Roman, Arial, etc.) that typically include a set of fonts (e.g., regular, italic, bold, bold italic, etc.). Data for each font may represent a set of individual character shapes (glyphs). Such glyphs generally share various design features (e.g., geometry, stroke thickness, serifs, size, etc.) associated with the font. To represent such fonts, one or more techniques may be utilized; for example, outline-based representations may be adopted in which lines and curves are used to define the borders of glyphs. Along with different design features, fonts may differ based on functional aspects, such as the languages (e.g., English, Chinese, Russian, etc.) for which the fonts are used. Typically, fonts are scalable for a variety of sizes (e.g., for presenting by various computing devices) and may be represented in one or more formats. For example, scalable outline fonts may be represented in a format that includes data structures capable of supporting a variety of typographic visual symbols of many languages.


In this example, attribute engines 706 are included in the font recommendation manager 620 and use information in the collected information database 702 (e.g., survey data, etc.) to identify fonts that reflect attributes, levels of interest associated with attributes, etc. In one arrangement, survey data and font data are used to train the attribute engines 706 to determine attributes for other fonts. For example, the attribute engines may determine a numerical value for each of the multiple attributes present in a font (to characterize the font). For example, for a font including graphical features considered to convey an uplifting emotion, a large numerical value may be determined and assigned to the “Happy” attribute. However, if the shapes and scripts of the font are fanciful, a relatively low value may be assigned to the “Legible” attribute. By having multiple numerical values represent the font, the attribute values can be grouped into a vector quantity. Referring back to the list of attributes presented in menu 204, such a vector may contain values for thirty-one attributes; however, a larger or smaller number of attributes may be represented in the vector for a font.
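As a concrete sketch of such a characterization, the snippet below represents one font as a fixed-length vector with one entry per attribute; the attribute names, the value assignments, and the truncated three-entry vector are illustrative stand-ins rather than values taken from the figures.

```python
import numpy as np

# Hypothetical attribute ordering; the menu 204 described above lists thirty-one.
ATTRIBUTES = ["happy", "legible", "artistic"]  # ...extended to 31 entries in practice

# One value per attribute, each in [0, 1]: a large value means the attribute is
# strongly reflected in the font's glyphs, a small value means it is not.
uplifting_display_font = np.array([0.85, 0.30, 0.60])

# Grouping the values into a vector lets later stages (storage, ranking, training)
# treat every font uniformly.
print(dict(zip(ATTRIBUTES, uplifting_display_font)))
```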


Once determined, the attribute vector quantities for each font can be stored for later retrieval and use. In this arrangement, a font attribute database 708 is stored in the storage device 616 and retains the vector quantities and potentially other information (e.g., attribute definitions, logs regarding attribute updates, etc.). To produce the lists of font recommendations, the font recommendation manager 620 includes a font ranker 710. Along with preparing the recommendations for sending to a user device such as the computer system 200 (shown in FIG. 2), the font ranker 710 may perform other operations such as storing determined lists of font recommendations for future retrieval and use. For example, the storage device 616 includes a font recommendation database 712 that stores previously determined recommended font lists. By storing such data, computation efficiency can be improved by using a previously determined recommendation rather than re-computing a list of recommended fonts.


Referring to FIG. 8, data flows are graphically presented that illustrate operations performed by the font recommendation manager 620. One or more techniques may be employed by the manager to determine font attributes for fonts and process font attributes to make font recommendations. For example, one or more forms of artificial intelligence, such as machine learning, can be employed such that a computing process or device may learn to determine attributes for fonts. To provide this functionality, machine learning may employ techniques such as regression to characterize fonts by the font attributes. Upon being trained, a learning machine may be capable of outputting a numerical value that represents the amount an attribute is reflected in a font. Input to the trained learning machine may take one or more forms. In one arrangement, survey data and representations of fonts may provide the input. The survey data can include information that associates fonts to different attributes (as determined by the survey takers). For the input that represents the fonts used in the surveys, representations of the font itself may be provided (e.g., bitmaps of font characters). Numerical representations of fonts may also be used as input to the learning machine (e.g., particular features that uniquely describe each font). For example, font features (e.g., fifty-eight separate features) can be utilized such as the features described in U.S. patent application Ser. No. 14/694,494, entitled “Using Similarity for Grouping Fonts and Individuals for Recommendations”, filed 23 Apr. 2015, and U.S. patent application Ser. No. 14/690,260, entitled “Pairing Fonts for Presentation”, filed 17 Apr. 2015, both of which are incorporated by reference in their entirety. From this information, the learning machine can output a series of numerical values that represent each of the attributes (e.g., a value reflecting the “Happy” attribute, a value reflecting the “Artistic” attribute, etc.). For example, each output value may range from 0 to 1.0, in which low values (e.g., 0.2) represent that the font does not convey a particular attribute and higher values (e.g., 0.8, 1.0) indicate that the attribute is strongly represented in the font. Various techniques may be employed to provide font information; for example, one or more files may be provided from which font features may be produced. For example, a file including outline information of a font (e.g., an OpenType font file or “.otf” file) may be input into a machine learning system and used to produce font features (from the font outlines). In some arrangements, the input file (or files) may be used by a renderer included in the machine learning system to produce an image (e.g., one or more bitmap images) to be used for feature determination.
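A minimal regression sketch of this feature-to-attribute mapping appears below. The fifty-eight-dimensional inputs and thirty-one-dimensional targets mirror the numbers mentioned above, but the data is randomly generated and the small multilayer network only stands in for the deep learning machines actually described; treat it as an assumption-laden illustration rather than the described architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in training data: 58 numerical features per font (inputs) and
# 31 attribute values per font derived from survey responses (targets).
n_fonts, n_features, n_attributes = 200, 58, 31
font_features = rng.random((n_fonts, n_features))
attribute_targets = rng.random((n_fonts, n_attributes))

# A small multilayer network trained by minimizing squared prediction error.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(font_features, attribute_targets)

# For a new font, the trained model outputs one value per attribute, roughly in
# the 0-1 range (e.g., ~0.2 "does not convey", ~0.8 "strongly conveys").
new_font = rng.random((1, n_features))
predicted_attributes = model.predict(new_font)
print(predicted_attributes.shape)  # (1, 31)
```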


To implement such an environment, one or more machine learning techniques may be employed. For example, supervised learning techniques may be implemented in which training is based on a desired output that is known for an input. Supervised learning can be considered an attempt to map inputs to outputs and then estimate outputs for previously unseen inputs (a newly introduced input). Unsupervised learning techniques may also be used in which training is provided from known inputs but unknown outputs. Reinforcement learning techniques may also be employed in which the system can be considered as learning from consequences of actions taken (e.g., inputs values are known and feedback provides a performance measure). In some arrangements, the implemented technique may employ two or more of these methodologies.


In some arrangements, neural network techniques may be implemented using the font data (e.g., vectors of numerical values that represent features of the fonts, survey data, etc.) to invoke training algorithms for automatically learning the fonts and related information. Such neural networks typically employ a number of layers. Once the layers and number of units for each layer are defined, weights and thresholds of the neural network are typically set to minimize the prediction error through training of the network. Such techniques for minimizing error can be considered as fitting a model (represented by the network) to training data. By using the font data (e.g., font feature vectors), a function may be defined that quantifies error (e.g., a squared error function used in regression techniques). By minimizing error, a neural network may be developed that is capable of determining attributes for an input font. Other factors may also be accounted for during neural network development. For example, a model may too closely attempt to fit data (e.g., fitting a curve to the extent that the modeling of an overall function is degraded). Such overfitting of a neural network may occur during the model training, and one or more techniques may be implemented to reduce its effects.


One type of machine learning referred to as deep learning may be utilized in which a set of algorithms attempt to model high-level abstractions in data by using model architectures, with complex structures or otherwise, composed of multiple non-linear transformations. Such deep learning techniques can be considered as being based on learning representations of data. In general, deep learning techniques can be considered as using a cascade of many layers of nonlinear processing units for feature extraction and transformation. The next layer uses the output from the previous layer as input. The algorithms may be supervised, unsupervised, combinations of supervised and unsupervised, etc. The techniques are based on the learning of multiple levels of features or representations of the data (e.g., font features). As such multiple layers of nonlinear processing units along with supervised or unsupervised learning of representations can be employed at each layer, with the layers forming a hierarchy from low-level to high-level features. By employing such layers, a number of parameterized transformations are used as data propagates from the input layer to the output layer.


Employing such machine learning techniques, a considerable amount of survey data and font information (e.g., one or more vectors of data representing font features such as fifty-eight features) may be used as input to produce an output that represents font attributes. For example, an output data vector may provide a numerical value for each of the thirty-one attributes listed in the menu 204.


As illustrated in the figure, a data flow 800 represents information being provided to a learning machine 802 for producing an output vector of “N” attribute values (wherein N=31 to provide an attribute value for each of the attributes listed in menu 204). In this example, a set of training fonts 806 (e.g., fifty-eight features for each training font) is input into the learning machine 802 to determine an output attribute vector. In one arrangement, the font training set 806 includes approximately 1200 fonts (e.g., 1233 fonts) that cover a variety of different font styles. These 1200 fonts are also used to collect survey data in which survey takers select how particular fonts relate to attributes (e.g., the thirty-one attributes listed in menu 204). For example, the fonts may be clustered to form 411 groups in which group members are of similar type. Fonts from different groups can be selected and used to form the basis of survey questions. In some instances, pairs of fonts from the same group are used to produce survey questions. In some arrangements, a set of survey questions (e.g., five thousand questions, ten thousand questions, etc.) are used for each attribute (e.g., the thirty-one attributes); thereby a considerable total number of questions are prepared (e.g., one hundred fifty-five thousand, three hundred ten thousand, etc.). In some instances, bias may be applied to survey questions, for example, predefined notions of attributes, fonts, etc. may be included in survey questions (e.g., a pre-survey notion that one or more particular fonts would be considered to reflect a particular attribute, multiple attributes, an absence of an attribute, etc.). Such biased questions can also provide a bolstering effect; for example, by steering questions towards particular fonts and attributes, fewer questions may be needed to identify relationships. In some arrangements, separate surveys may be used for selecting one or more attributes, vetting one or more attributes, etc. For example, questions may be posed to survey takers for identifying particular attributes that better describe qualities reflected in a font (e.g., an attribute labeled “confident” may provide a better description of font qualities than an attribute labeled “self-reliant”). Other techniques such as machine learning techniques may be employed to optimize the selection of attributes for use in describing font qualities. For example, by using different sets of attributes to train a learning machine, appropriate attributes may emerge and be identified based on the output attributes selected by the trained machine to represent one or more input fonts. To execute the surveys, various techniques may be employed, for example, a crowdsourcing Internet marketplace such as the Amazon Mechanical Turk. Responses from the surveys are used to create survey data 808 that is input into the learning machine 802. Along with the fonts used to produce the surveys, additional fonts may be provided as input. In this example, approximately eight-hundred additional fonts are input as an expanded font training set to the learning machine 802. To provide this data, features of the expanded training set 810 are input and are similar in type to the features used for the training set 806 (e.g., fifty-eight features are used to characterize each font). With this data, the learning machine produces an output for each input font.
As illustrated in this example, a vector of numerical values representing each of the attributes (e.g., thirty-one attributes) is output for each font (e.g., 1233 training set fonts + 800 expanded training set fonts = 2033 total fonts) to create a collection of vectors 812. In some arrangements, the data included in the vector collection 812 may be reviewed and edited prior to additional processing; for example, one or more individuals may review the attribute values for one or more of the fonts and make adjustments (e.g., change values, etc.).


Armed with this output data, the font recommendation manager 620 can use this font attribute information to train another learning machine for producing attribute vectors for any other font (e.g., thousands, tens of thousands, etc. of other fonts). The training and use of this second learning machine is illustrated in data flow 814. In this example, training data is prepared by sorting the collection of 2033 vectors 812 (that provide thirty-one attribute values for each font) into a collection of vectors 814 that represent how each font is associated with each attribute. As illustrated, collection 814 includes thirty-one vectors (one for each attribute), and each vector includes 2033 values (one value for each font). For each attribute, the fonts are sorted (e.g., in descending order) such that the upper fonts are most closely associated with the attribute (e.g., have large numerical attribute values) and fonts lower on the list are not as closely associated with the attribute (e.g., have relatively small numerical attribute values). Similar sorting is executed by the font recommendation manager 620 for the vectors of the other attributes in the collection 814. Along with sorting, other processing may be executed on the values in each of the attribute vectors prior to being used in learning machine training. For example, unequal weights may be applied to the vector values to magnify errors for the upper and lower portions of the vectors. In one example, a biquadratic equation is applied to each vector (e.g., a multiplier value of 10 is applied to values located in the upper and lower portions of the vector while a multiplier value of 1 is applied to values located at the middle portion of the vector) to magnify error at the upper and lower portions of the vector for training the learning machine. By applying such weighting 816, or other types of weighting and processing, the ability of the learning machine to match other fonts to the attributes may be improved. The weighted vectors are provided to train an ensemble learning machine 820, which includes a number (e.g., five, eight, etc.) of learning machines (e.g., the attribute engines 706). In this example, each individual learning machine included in the ensemble 820 performs similar tasks to compute an attribute vector (e.g., a vector of 31 values that represent the 31 attributes presented in menu 204) for a font (e.g., represented by font features 822) input into the ensemble learning machine 820. Here, the learning machines in the ensemble 820 are similar (e.g., all five are deep learning machines); however, in some arrangements two or more of the machines may employ different architectures. After an output vector is computed by each of the learning machines, the results of the machines may be processed. For example, an output attribute vector 824 is attained in this example by averaging the numerical values of corresponding attributes in each output vector of the learning machines of the ensemble 820. As illustrated in the figure, attribute 1 of the vector 824 is determined by averaging the output quantities for attribute 1 for each of the five learning machines of ensemble 820. Numerical values for attributes two through thirty-one of vector 824 are determined by performing similar averaging operations to compute the attribute values for font 822 from the trained learning machine ensemble 820; however, other processing techniques may be employed to determine the attribute values.
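The weighting and ensemble-averaging steps might be sketched as follows. The description only gives example multipliers (10 at the extremes of the sorted vector, 1 in the middle), so the quartic curve below is an assumed interpolation between those two figures, and the random arrays stand in for real attribute values and machine outputs.

```python
import numpy as np

def biquadratic_weights(n, end_weight=10.0, mid_weight=1.0):
    """Quartic (biquadratic) curve over the positions of a sorted attribute
    vector: heaviest at the start and end, lightest in the middle. The exact
    curve is an assumption consistent with the 10x/1x example multipliers."""
    x = np.linspace(-1.0, 1.0, n)   # -1 at the top of the sorted list, +1 at the bottom
    return mid_weight + (end_weight - mid_weight) * x ** 4

rng = np.random.default_rng(0)

# Stand-in sorted attribute values for one attribute across 2033 fonts; during
# training these weights would scale the error so mistakes near either end of
# the ranking are magnified relative to the middle.
sorted_values = np.sort(rng.random(2033))[::-1]
weights = biquadratic_weights(len(sorted_values))

# Ensemble step: each trained machine outputs a 31-value attribute vector for an
# input font; corresponding entries are averaged to form the final vector.
ensemble_outputs = rng.random((5, 31))          # e.g., five learning machines
attribute_vector = ensemble_outputs.mean(axis=0)
print(round(weights[0], 1), round(weights[1016], 1), attribute_vector.shape)  # 10.0 1.0 (31,)
```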


Referring to FIG. 9, upon determining the attribute values for a font, these values can be used to calculate a score (via the font ranker 710) for ranking the font based upon the attribute(s) selected and the level(s) of interest selected by a user (via the menu 204). In one example, weighted versions of the attribute values can be summed and the quantity normalized to compute a score. In some arrangements, additional information may be used for scoring individual fonts. For example, sales information may be combined with the attribute data to assign a score to a font. With the score assigned, the fonts can then be ranked based upon their scores and presented. For example, higher ranked fonts may be presented initially in a list (e.g., at the top of a list presented on the pane 206) followed by lower ranked fonts (e.g., towards the bottom of the list presented in the pane 206). Provided the ranked list, the user can select one or more fonts based upon their appearance relative to the selected attribute(s) and level(s) of interest.


In the figure, one technique is illustrated by a data flow 900 for computing a score for the calculated attributes (output by the ensemble learning machine 820) for the font features 822. From the attribute vector quantity 824, an attribute difference 902 is calculated as the difference between each attribute value and a value that represents the level of interest selected for that attribute (e.g., in menu 204). For example, one value (e.g., a value of 0.0) can be used if the selected level of interest indicates that the attribute is to be reflected in the recommended fonts, or another value (e.g., a value of 1.0) can be used if the selected level of interest indicates that the attribute should be absent from the recommended fonts. Once calculated, the attribute differences 902 can be weighted based upon the selected level of interest. For example, a selection multiplier 904 can be applied to each attribute difference based upon the level of interest selected for that attribute. As illustrated in this example, one multiplier value (e.g., 1) can be applied if a high level of interest (e.g., the second level) has been selected, and a smaller multiplier (e.g., ½) is applied if a lower level of interest (e.g., the first level) has been selected. For the instances in which an attribute was not selected for use, another multiplier value (e.g., 0) can be applied. In this case, the zero value multiplier in effect removes the corresponding attribute from being used in the score computation. Next, a single value is attained from the weighted attribute differences by aggregating the values. In the figure, a weighted score 906 is attained by summing the weighted attribute differences and normalizing the value by dividing by the sum of the selection multipliers. In other examples, different operations, normalizing techniques, etc. may be employed.
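A sketch of that computation is shown below, using the example values from the description (a target of 0.0 when an attribute should be reflected, 1.0 when it should be absent, and multipliers of 1, ½, and 0 for strong, weak, and no interest); the function and variable names are illustrative.

```python
def weighted_score(attribute_values, selections):
    """attribute_values: attribute -> value in [0, 1] for one font.
    selections: attribute -> (wants_attribute, interest_level) where
    interest_level is 1 (weaker) or 2 (stronger); unselected attributes are
    simply omitted, which matches applying a multiplier of 0.
    Returns the normalized sum of weighted attribute differences."""
    numerator, denominator = 0.0, 0.0
    for attribute, (wants_attribute, interest_level) in selections.items():
        target = 0.0 if wants_attribute else 1.0          # example values from the text
        difference = abs(attribute_values[attribute] - target)
        multiplier = 1.0 if interest_level == 2 else 0.5
        numerator += multiplier * difference
        denominator += multiplier
    return numerator / denominator if denominator else 0.0

font = {"happy": 0.9, "legible": 0.7, "artistic": 0.8}
choices = {"happy": (True, 2), "artistic": (False, 2)}    # strongly "Happy", strongly "Not Artistic"
print(round(weighted_score(font, choices), 2))            # (0.9 + 0.2) / 2 = 0.55
```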


In this arrangement, upon a single quantity (e.g., a normalized sum of weighted attribute differences) being determined, other information is applied to calculate a final score for the font. For example, sales data associated with each font can be applied such that frequently licensed fonts are more highly ranked compared to fonts that are rarely licensed. To incorporate such sales data, various techniques may be employed; for example, a numerical score (e.g., ranging from 0.0 for low sales to 1.0 for considerable sales) may be assigned to each font. In this example, a final score is calculated from the weighted score 906 for the font, the assigned sales score, and the number of attributes selected in the menu 204. Equation 908 represents the situation in which multiple attributes have been selected in the menu or in which only a single attribute is selected but with a high level of interest (e.g., a second level of interest). Equation 910 represents the scenario in which only a single attribute is selected and a lower level of interest is selected (e.g., a first level of interest). Equation 912 governs when no attributes have been selected. In this situation, the final score for ranking the font is equal to the sales score assigned to that font. As such, if no attributes are of interest for ranking the fonts in pane 206, other data such as sales information is used such that a ranking is always presented. With this information determined (e.g., by the font ranker 710), the font recommendation manager 620 can identify and rank font recommendations for delivery to the requesting user device (e.g., computer system 602) for presentation to a designer for review and selection of one or more of the recommended fonts.


Similar to identifying one or more fonts based upon a selected attribute (or attributes) and a level of interest (or levels of interest), these techniques may be employed in other endeavors. Other than fonts, other types of items may be identified; for example, different types of graphical items such as photographs, emojis (small digital images or icons used to express an idea, emotion, etc. in electronic communications such as email and text messages), and page layouts for electronic assets (e.g., electronic documents, web pages, web sites, etc.). Attributes for such items may be similar to or different from the attributes used for the fonts. For example, an attribute similar to “Happy” may be used for emojis; however, other descriptive attributes (e.g., urban setting, rural setting, etc.) may be used for items that are photographs. For such items, features may be determined and used for machine learning; however, just the attributes may be used along with the identified items for machine learning in some arrangements. Defining such features for these items may be accomplished by one or more techniques. For example, one or more autoencoder techniques may be used to reduce dimensionality to define features. In some arrangements, the autoencoder may use non-linear transformation techniques and be used for training one or more learning machines, during deployment of the learning machine(s), or both. Linear transformation techniques may be used alone or in concert with non-linear techniques for feature development. For example, a linear transformation technique such as principal component analysis may be used to reduce dimensionality and extract features, e.g., from photographs.
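As one example of the linear route mentioned above, a principal component analysis pass can reduce raw pixel data to a compact feature vector; the image size, photo count, and choice of fifty-eight components below are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in data: 500 photographs, each flattened to a 4096-value pixel vector
# (e.g., 64x64 grayscale); real inputs would be actual image data.
photos = rng.random((500, 4096))

# Linear dimensionality reduction; the 58 components simply echo the number of
# handcrafted font features mentioned earlier and are not a value from the text.
pca = PCA(n_components=58)
photo_features = pca.fit_transform(photos)

print(photo_features.shape)               # (500, 58)
print(pca.explained_variance_ratio_[:3])  # variance captured by the leading components
```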


Referring to FIG. 10, a flowchart 1000 represents operations of a font recommendation manager (e.g., the font recommendation manager 620 shown in FIG. 6) being executed by a computing device (e.g., the server 618 located at the font service provider 612). Operations of the font recommendation manager 620 are typically executed by a single computing device (e.g., the server 618); however, operations may be executed by multiple computing devices. Along with being executed at a single site (e.g., the font service provider 612), the execution of operations may be distributed among two or more locations. In some arrangements, a portion of the operations may be executed at a user device (e.g., the computing device 602), an intermediate computing device (e.g., located between a user device and the font service provider), one or more computing devices located external to the font service provider 612, etc.


Operations of the font recommendation manager 620 may include receiving 1002 data representing one or more user-selected font attributes. The data includes one of at least four selectable interest levels for each of the one or more user-selected item attributes. For example, the item attributes may be font attributes. Presented with a listing of font attributes (e.g., the thirty-one attributes shown in menu 204), a user can select at least one attribute (e.g., “Happy”) and one of four interest levels associated with the attribute (e.g., first level “Happy”, second level “Happy”, first level “Not Happy”, and second level “Not Happy”). In this example, the first level reflects a lesser level of interest compared to the second level of interest. Operations may also include identifying 1004 one or more items representative of the selected interest level for each of the one or more user-selected item attributes. Again, for the example in which items are fonts, based upon the attributes and the level of interest for each attribute, scores may be calculated for fonts, and a ranked list of the fonts (based on the assigned scores) may be produced (e.g., highly ranked fonts appear high on the recommendation list and low ranking fonts appear towards the bottom of the list). Operations may also include initiating 1006 delivery of data representing the identified one or more items for selection by the user. Again, for the example in which items are fonts, upon identifying a list of recommended fonts (ordered by their individual ranking), the list may be provided to a computing device of a designer for review and selection of a recommended font. By improving the efficiency of delivering fonts of interest to a designer, additional fonts may be selected to assist the designer with other projects while increasing the licensing frequency of more fonts.
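Pulling the three operations together, the short sketch below strings receiving (1002), identifying (1004), and initiating delivery (1006) into one flow; the score_font helper is a hypothetical stand-in for the FIG. 9 scoring and ranking machinery, and the catalog contents are invented.

```python
from typing import Dict, List, Tuple

# selections: attribute -> (wants_attribute, interest_level in {1, 2})
Selections = Dict[str, Tuple[bool, int]]

def score_font(attributes: Dict[str, float], selections: Selections) -> float:
    """Hypothetical stand-in for the scoring step (normalized weighted differences)."""
    total, weight_sum = 0.0, 0.0
    for name, (wanted, level) in selections.items():
        target = 0.0 if wanted else 1.0
        multiplier = 1.0 if level == 2 else 0.5
        total += multiplier * abs(attributes.get(name, 0.0) - target)
        weight_sum += multiplier
    return total / weight_sum if weight_sum else 0.0

def recommend(catalog: Dict[str, Dict[str, float]], selections: Selections) -> List[str]:
    # 1002: receive the user-selected attributes and interest levels (selections)
    # 1004: identify items representative of the selections by scoring and ranking
    ranked = sorted(catalog, key=lambda f: score_font(catalog[f], selections), reverse=True)
    # 1006: initiate delivery of the ranked list for user selection
    return ranked

catalog = {"ArcadeFont": {"happy": 0.9}, "ReportFont": {"happy": 0.2}}
print(recommend(catalog, {"happy": (True, 2)}))  # ['ArcadeFont', 'ReportFont']
```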



FIG. 11 shows an example computing device 1100 and an example mobile computing device 1150, which can be used to implement the techniques described herein. For example, a portion or all of the operations of the font recommendation manager 620 (shown in FIG. 6) may be executed by the computing device 1100 and/or the mobile computing device 1150. Computing device 1100 is intended to represent various forms of digital computers, including, e.g., laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 1150 is intended to represent various forms of mobile devices, including, e.g., personal digital assistants, tablet computing devices, cellular telephones, smartphones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the techniques described and/or claimed in this document.


Computing device 1100 includes processor 1102, memory 1104, storage device 1106, high-speed interface 1108 connecting to memory 1104 and high-speed expansion ports 1110, and low-speed interface 1112 connecting to low-speed bus 1114 and storage device 1106. Each of components 1102, 1104, 1106, 1108, 1110, and 1112 is interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. Processor 1102 can process instructions for execution within computing device 1100, including instructions stored in memory 1104 or on storage device 1106 to display graphical data for a GUI on an external input/output device, including, e.g., display 1116 coupled to high-speed interface 1108. In other implementations, multiple processors and/or multiple busses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 1100 can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).


Memory 1104 stores data within computing device 1100. In one implementation, memory 1104 is a volatile memory unit or units. In another implementation, memory 1104 is a non-volatile memory unit or units. Memory 1104 also can be another form of computer-readable medium (e.g., a magnetic or optical disk.) Memory 1104 may be non-transitory.


Storage device 1106 is capable of providing mass storage for computing device 1100. In one implementation, storage device 1106 can be or contain a computer-readable medium (e.g., a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory or other similar solid state memory device, or an array of devices, such as devices in a storage area network or other configurations.) A computer program product can be tangibly embodied in a data carrier. The computer program product also can contain instructions that, when executed, perform one or more methods (e.g., those described above.) The data carrier is a computer- or machine-readable medium (e.g., memory 1104, storage device 1106, memory on processor 1102, and the like.)


High-speed controller 1108 manages bandwidth-intensive operations for computing device 1100, while low-speed controller 1112 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In one implementation, high-speed controller 1108 is coupled to memory 1104, display 1116 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 1110, which can accept various expansion cards (not shown). In this implementation, low-speed controller 1112 is coupled to storage device 1106 and low-speed expansion port 1114. The low-speed expansion port, which can include various communication ports (e.g., USB, Bluetooth®, Ethernet, wireless Ethernet), can be coupled to one or more input/output devices (e.g., a keyboard, a pointing device, a scanner, or a networking device including a switch or router, e.g., through a network adapter.)


Computing device 1100 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as standard server 1120, or multiple times in a group of such servers. It also can be implemented as part of rack server system 1124. In addition or as an alternative, it can be implemented in a personal computer (e.g., laptop computer 1122.) In some examples, components from computing device 1100 can be combined with other components in a mobile device (not shown), e.g., device 1150. Each of such devices can contain one or more of computing device 1100, 1150, and an entire system can be made up of multiple computing devices 1100, 1150 communicating with each other.


Computing device 1150 includes processor 1152, memory 1164, an input/output device (e.g., display 1154), communication interface 1166, and transceiver 1168, among other components. Device 1150 also can be provided with a storage device (e.g., a microdrive or other device) to provide additional storage. Each of components 1150, 1152, 1164, 1154, 1166, and 1168 is interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.


Processor 1152 can execute instructions within computing device 1150, including instructions stored in memory 1164. The processor can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor can provide, for example, for coordination of the other components of device 1150, e.g., control of user interfaces, applications run by device 1150, and wireless communication by device 1150.


Processor 1152 can communicate with a user through control interface 1158 and display interface 1156 coupled to display 1154. Display 1154 can be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. Display interface 1156 can comprise appropriate circuitry for driving display 1154 to present graphical and other data to a user. Control interface 1158 can receive commands from a user and convert them for submission to processor 1152. In addition, external interface 1162 can communicate with processor 1152, so as to enable near area communication of device 1150 with other devices. External interface 1162 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces also can be used.


Memory 1164 stores data within computing device 1150. Memory 1164 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 1174 also can be provided and connected to device 1150 through expansion interface 1172, which can include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 1174 can provide extra storage space for device 1150, or also can store applications or other data for device 1150. Specifically, expansion memory 1174 can include instructions to carry out or supplement the processes described above, and can include secure data also. Thus, for example, expansion memory 1174 can be provided as a security module for device 1150, and can be programmed with instructions that permit secure use of device 1150. In addition, secure applications can be provided through the SIMM cards, along with additional data, (e.g., placing identifying data on the SIMM card in a non-hackable manner.)


The memory can include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in a data carrier. The computer program product contains instructions that, when executed, perform one or more methods, e.g., those described above. The data carrier is a computer- or machine-readable medium (e.g., memory 1164, expansion memory 1174, and/or memory on processor 1152), which can be received, for example, over transceiver 1168 or external interface 1162.


Device 1150 can communicate wirelessly through communication interface 1166, which can include digital signal processing circuitry where necessary. Communication interface 1166 can provide for communications under various modes or protocols (e.g., GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others.) Such communication can occur, for example, through radio-frequency transceiver 1168. In addition, short-range communication can occur, e.g., using a Bluetooth®, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 1170 can provide additional navigation- and location-related wireless data to device 1150, which can be used as appropriate by applications running on device 1150. Sensors and modules such as cameras, microphones, compasses, accelerometers (for orientation sensing), etc. may be included in the device.


Device 1150 also can communicate audibly using audio codec 1160, which can receive spoken data from a user and convert it to usable digital data. Audio codec 1160 can likewise generate audible sound for a user, (e.g., through a speaker in a handset of device 1150.) Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, and the like) and also can include sound generated by applications operating on device 1150.


Computing device 1150 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as cellular telephone 1180. It also can be implemented as part of smartphone 1182, a personal digital assistant, or other similar mobile device.


Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor. The programmable processor can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.


These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to a computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions.


To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a device for displaying data to the user (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor), and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.


The systems and techniques described here can be implemented in a computing system that includes a backend component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a frontend component (e.g., a client computer having a user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or a combination of such backend, middleware, or frontend components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.


The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.


In some implementations, the engines described herein can be separated, combined or incorporated into a single or combined engine. The engines depicted in the figures are not intended to limit the systems described here to the software architectures shown in the figures.


A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the processes and techniques described herein. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps can be provided, or steps can be eliminated, from the described flows, and other components can be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.

Claims
  • 1. A computing device implemented method comprising: receiving, by the computing device, data representing one or more user-selected font attributes, the data includes one or more selected interest levels for each of the one or more user-selected font attributes, the one or more interest levels for each of the one or more user-selected font attributes is changeable after selection of the respective font attribute, and the one or more selected interest levels represent interest in having the one or more user-selected font attributes being present or being absent in identified fonts;identifying, by a learning machine, a deep learning machine, or an ensemble of deep learning machines, one or more fonts representative of the one or more selected interest levels for each of the one or more user-selected font attributes, wherein the learning machine, the deep learning machine, or the ensemble of deep learning machines is trained using data that represents one or more fonts having the one or more user-selected font attributes, and wherein each of the identified one or more fonts is represented by a vector of numerical values; andinitiating delivery of data representing the identified one or more fonts for user selection.
  • 2. The computing device implemented method of claim 1, wherein the ensemble of deep learning machines is trained using data that represents a listing of fonts for each of the user-selected font attributes, wherein the data that represents the font listing is weighted.
  • 3. The computing device implemented method of claim 2, wherein the data that represents the font listing is weighted by a biquadratic curve.
  • 4. The computing device implemented method of claim 2, wherein data that represents fonts located at the start and end of the font listing are similarly weighted.
  • 5. The computing device implemented method of claim 1, wherein identifying the one or more fonts representative of the selected interest level for each of the user-selected font attributes includes multiplying a numerical value representing an attribute's presence in a font and a numerical value representing the selected interest level for the attribute.
  • 6. The computing device implemented method of claim 1, wherein at least one of the font attributes is identified from survey data for being user-selectable.
  • 7. A system comprising: a computing device comprising:a memory configured to store instructions; anda processor to execute the instructions to perform operations comprising: receiving, by the computing device, data representing one or more user-selected font attributes, the data includes one or more selected interest levels for each of the one or more user-selected font attributes, the one or more interest levels for each of the one or more user-selected font attributes is changeable after selection of the respective font attribute, and the one or more selected interest levels represent interest in having the one or more user-selected font attributes being present or being absent in identified fonts;identifying, by a learning machine, a deep learning machine, or an ensemble of deep learning machines, one or more fonts representative of the one or more selected interest levels for each of the one or more user-selected font attributes, wherein the learning machine, the deep learning machine, or the ensemble of deep learning machines is trained using data that represents one or more fonts having the one or more user-selected font attributes, and wherein each of the identified one or more fonts is represented by a vector of numerical values; andinitiating delivery of data representing the identified one or more fonts for user selection.
  • 8. The system of claim 7, wherein the ensemble of deep learning machines is trained using data that represents a listing of fonts for each of the user-selected font attributes, wherein the data that represents the font listing is weighted.
  • 9. The system of claim 8, wherein the data that represents the font listing is weighted by a biquadratic curve.
  • 10. The system of claim 8, wherein data that represents fonts located at the start and end of the font listing are similarly weighted.
  • 11. The system of claim 7, wherein identifying the one or more fonts representative of the selected interest level for each of the user-selected font attributes includes multiplying a numerical value representing an attribute's presence in a font and a numerical value representing the selected interest level for the attribute.
  • 12. The system of claim 7, wherein at least one of the font attributes is identified from survey data for being user-selectable.
  • 13. One or more non-transitory computer readable media storing instructions that are executable by a processing device, and upon such execution cause the processing device to perform operations comprising: receiving, by the processing device, data representing one or more user-selected font attributes, the data includes one or more selected interest levels for each of the one or more user-selected font attributes, the one or more interest levels for each of the one or more user-selected font attributes is changeable after selection of the respective font attribute, and the one or more selected interest levels represent interest in having the one or more user-selected font attributes being present or being absent in identified fonts;identifying, by a learning machine, a deep learning machine, or an ensemble of deep learning machines, one or more fonts representative of the one or more selected interest levels for each of the one or more user-selected font attributes, wherein the learning machine, the deep learning machine, or the ensemble of deep learning machines is trained using data that represents one or more fonts having the one or more user-selected font attributes, and wherein each of the identified one or more fonts is represented by a vector of numerical values; andinitiating delivery of data representing the identified one or more fonts for user selection.
  • 14. The non-transitory computer readable media of claim 13, wherein the ensemble of deep learning machines is trained using data that represents a listing of fonts for each of the user-selected font attributes, wherein the data that represents the font listing is weighted.
  • 15. The non-transitory computer readable media of claim 14, wherein the data that represents the font listing is weighted by a biquadratic curve.
  • 16. The non-transitory computer readable media of claim 14, wherein data that represents fonts located at the start and end of the font listing are similarly weighted.
  • 17. The non-transitory computer readable media of claim 13, wherein identifying the one or more fonts representative of the selected interest level for each of the user-selected font attributes includes multiplying a numerical value representing an attribute's presence in a font and a numerical value representing the selected interest level for the attribute.
  • 18. The non-transitory computer readable media of claim 13, wherein at least one of the font attributes is identified from survey data for being user-selectable.
CLAIM OF PRIORITY

This application claims priority under 35 USC § 119(e) to U.S. Patent Application Ser. No. 62/195,165, filed on Jul. 21, 2015, the entire contents of which are hereby incorporated by reference.

US Referenced Citations (351)
Number Name Date Kind
4244657 Wasylyk Jan 1981 A
4998210 Kadono et al. Mar 1991 A
5263132 Parker et al. Nov 1993 A
5347266 Bauman et al. Sep 1994 A
5412771 Fenwick May 1995 A
5416898 Opstad et al. May 1995 A
5444829 Kawabata et al. Aug 1995 A
5453938 Gohara et al. Sep 1995 A
5526477 McConnell et al. Jun 1996 A
5528742 Moore et al. Jun 1996 A
5533174 Flowers et al. Jul 1996 A
5586242 McQueen et al. Dec 1996 A
5606649 Tai Feb 1997 A
5619721 Maruko Apr 1997 A
5630028 DeMeo May 1997 A
5737599 Rowe et al. Apr 1998 A
5748975 Van De Vanter May 1998 A
5757384 Ikeda May 1998 A
5761395 Miyazaki et al. Jun 1998 A
5781714 Collins et al. Jul 1998 A
5877776 Beaman et al. Mar 1999 A
5926189 Beaman et al. Jul 1999 A
5940581 Lipton Aug 1999 A
5995718 Hiraike Nov 1999 A
6012071 Krishna et al. Jan 2000 A
6016142 Chang Jan 2000 A
6031549 Hayes-Roth Feb 2000 A
6044205 Reed et al. Mar 2000 A
6065008 Simon et al. May 2000 A
6073147 Chan et al. Jun 2000 A
6111654 Cartier Aug 2000 A
6141002 Kanungo et al. Oct 2000 A
6167441 Himmel Dec 2000 A
6249908 Stamm Jun 2001 B1
6252671 Peng et al. Jun 2001 B1
6282327 Betrisey Aug 2001 B1
6313920 Dresevic et al. Nov 2001 B1
6320587 Funyu Nov 2001 B1
6323864 Kaul et al. Nov 2001 B1
6330577 Kim Dec 2001 B1
6343301 Halt et al. Jan 2002 B1
6426751 Patel Jul 2002 B1
6490051 Nguyen et al. Dec 2002 B1
6512531 Gartland Jan 2003 B1
6522330 Kobayashi Feb 2003 B2
6522347 Tsuji Feb 2003 B1
6583789 Carlson et al. Jun 2003 B1
6601009 Florschuetz Jul 2003 B2
6657625 Chik et al. Dec 2003 B1
6675358 Kido Jan 2004 B1
6678688 Unruh Jan 2004 B1
6687879 Teshima Feb 2004 B1
6704116 Abulhab Mar 2004 B1
6704648 Naik et al. Mar 2004 B1
6718519 Taieb Apr 2004 B1
6738526 Betrisey May 2004 B1
6754875 Paradies Jun 2004 B1
6760029 Phinney et al. Jul 2004 B1
6771267 Muller Aug 2004 B1
6810504 Cooper et al. Oct 2004 B2
6813747 Taieb Nov 2004 B1
6853980 Ying et al. Feb 2005 B1
6856317 Konsella et al. Feb 2005 B2
6882344 Hayes et al. Apr 2005 B1
6901427 Teshima May 2005 B2
6907444 Narasimhan et al. Jun 2005 B2
6952210 Renner et al. Oct 2005 B1
6992671 Corona Jan 2006 B1
6993538 Gray Jan 2006 B2
7050079 Estrada et al. May 2006 B1
7064757 Opstad et al. Jun 2006 B1
7064758 Chik et al. Jun 2006 B2
7155672 Adler et al. Dec 2006 B1
7184046 Hawkins Feb 2007 B1
7188313 Hughes et al. Mar 2007 B2
7228501 Brown et al. Jun 2007 B2
7231602 Truelove et al. Jun 2007 B1
7346845 Teshima et al. Mar 2008 B2
7373140 Matsumoto May 2008 B1
7477988 Dorum Jan 2009 B2
7492365 Corbin et al. Feb 2009 B2
7505040 Stamm et al. Mar 2009 B2
7539939 Schomer May 2009 B1
7552008 Newstrom et al. Jun 2009 B2
7580038 Chik et al. Aug 2009 B2
7583397 Smith Sep 2009 B2
7636885 Merz et al. Dec 2009 B2
7701458 Sahuc et al. Apr 2010 B2
7735020 Chaudhri Jun 2010 B2
7752222 Cierniak Jul 2010 B1
7768513 Klassen Aug 2010 B2
7836094 Ornstein et al. Nov 2010 B2
7882432 Nishikawa et al. Feb 2011 B2
7937658 Lunde May 2011 B1
7944447 Clegg et al. May 2011 B2
7958448 Fattic et al. Jun 2011 B2
7987244 Lewis et al. Jul 2011 B1
8098250 Clegg et al. Jan 2012 B2
8116791 Agiv Feb 2012 B2
8201088 Levantovsky Jun 2012 B2
8201093 Tuli Jun 2012 B2
8306356 Bever Nov 2012 B1
8381115 Tranchant et al. Feb 2013 B2
8413051 Bacus et al. Apr 2013 B2
8464318 Hallak Jun 2013 B1
8601374 Parham Dec 2013 B2
8643542 Wendel Feb 2014 B2
8643652 Kaplan Feb 2014 B2
8644810 Boyle Feb 2014 B1
8689101 Fux Apr 2014 B2
8707208 DiCamillo Apr 2014 B2
8731905 Tsang May 2014 B1
9063682 Bradshaw Jun 2015 B1
9317777 Kaasila Apr 2016 B2
9319444 Levantovsky Apr 2016 B2
9432671 Campanelli Aug 2016 B2
9449126 Genoni Sep 2016 B1
9483445 Joshi et al. Nov 2016 B1
9569865 Kaasila et al. Feb 2017 B2
9576196 Natarajan Feb 2017 B1
9626337 Kaasila et al. Apr 2017 B2
9691169 Kaasila Jun 2017 B2
9805288 Kaasila Oct 2017 B2
9817615 Seguin et al. Nov 2017 B2
10007863 Pereira et al. Jun 2018 B1
10032072 Tran Jul 2018 B1
10115215 Matteson Oct 2018 B2
10140261 Yang Nov 2018 B2
10157332 Gray Dec 2018 B1
10733529 Tran et al. Aug 2020 B1
10867241 Rogers et al. Dec 2020 B1
11334750 Arilla et al. May 2022 B2
20010021937 Cicchitelli et al. Sep 2001 A1
20010052901 Kawabata et al. Dec 2001 A1
20020010725 Mo Jan 2002 A1
20020029232 Bobrow et al. Mar 2002 A1
20020033824 Stamm Mar 2002 A1
20020052916 Kloba et al. May 2002 A1
20020057853 Usami May 2002 A1
20020059344 Britton et al. May 2002 A1
20020087702 Mori Jul 2002 A1
20020093506 Hobson Jul 2002 A1
20020120721 Eilers et al. Aug 2002 A1
20020122594 Goldberg et al. Sep 2002 A1
20020174186 Hashimoto et al. Nov 2002 A1
20020194261 Teshima Dec 2002 A1
20030014545 Broussard et al. Jan 2003 A1
20030076350 Vu Apr 2003 A1
20030197698 Perry et al. Oct 2003 A1
20040025118 Renner Feb 2004 A1
20040088657 Brown et al. May 2004 A1
20040119714 Everett Jun 2004 A1
20040177056 Davis et al. Sep 2004 A1
20040189643 Frisken et al. Sep 2004 A1
20040207627 Konsella et al. Oct 2004 A1
20040233198 Kubo Nov 2004 A1
20050015307 Simpson et al. Jan 2005 A1
20050033814 Ota Feb 2005 A1
20050094173 Engelman et al. May 2005 A1
20050111045 Imai May 2005 A1
20050128508 Greef et al. Jun 2005 A1
20050149942 Venkatraman Jul 2005 A1
20050190186 Klassen Sep 2005 A1
20050193336 Fux et al. Sep 2005 A1
20050200871 Miyata Sep 2005 A1
20050264570 Stamm Dec 2005 A1
20050270553 Kawara Dec 2005 A1
20050275656 Corbin et al. Dec 2005 A1
20060010371 Shur et al. Jan 2006 A1
20060017731 Matskewich et al. Jan 2006 A1
20060061790 Miura Mar 2006 A1
20060072136 Hodder et al. Apr 2006 A1
20060072137 Nishikawa et al. Apr 2006 A1
20060072162 Nakamura Apr 2006 A1
20060103653 Chik et al. May 2006 A1
20060103654 Chik et al. May 2006 A1
20060168639 Gan Jul 2006 A1
20060241861 Takashima Oct 2006 A1
20060245727 Nakano et al. Nov 2006 A1
20060253395 Corbell Nov 2006 A1
20060267986 Bae et al. Nov 2006 A1
20060269137 Evans Nov 2006 A1
20060285138 Merz et al. Dec 2006 A1
20070002016 Cho et al. Jan 2007 A1
20070006076 Cheng Jan 2007 A1
20070008309 Sahuc et al. Jan 2007 A1
20070024626 Kagle et al. Feb 2007 A1
20070050419 Weyl et al. Mar 2007 A1
20070055931 Zaima Mar 2007 A1
20070139412 Stamm Jun 2007 A1
20070139413 Stamm et al. Jun 2007 A1
20070159646 Adelberg et al. Jul 2007 A1
20070172199 Kobayashi Jul 2007 A1
20070211062 Engleman et al. Sep 2007 A1
20070283047 Theis et al. Dec 2007 A1
20080028304 Levantovsky et al. Jan 2008 A1
20080030502 Chapman Feb 2008 A1
20080115046 Yamaguchi May 2008 A1
20080118151 Bouguet et al. May 2008 A1
20080154911 Cheng Jun 2008 A1
20080222734 Redlich et al. Sep 2008 A1
20080243837 Davis Oct 2008 A1
20080282186 Basavaraju Nov 2008 A1
20080303822 Taylor Dec 2008 A1
20080306916 Gonzalez et al. Dec 2008 A1
20090031220 Tranchant Jan 2009 A1
20090037492 Baitalmal et al. Feb 2009 A1
20090037523 Kolke et al. Feb 2009 A1
20090063964 Huang Mar 2009 A1
20090070128 McCauley et al. Mar 2009 A1
20090097049 Cho Apr 2009 A1
20090100074 Joung et al. Apr 2009 A1
20090119678 Shih May 2009 A1
20090158134 Wang Jun 2009 A1
20090171766 Schiff et al. Jul 2009 A1
20090183069 Duggan et al. Jul 2009 A1
20090275351 Jeung et al. Nov 2009 A1
20090287998 Kalra Nov 2009 A1
20090290813 He Nov 2009 A1
20090303241 Priyadarshi et al. Dec 2009 A1
20090307585 Tranchant et al. Dec 2009 A1
20100014104 Soord Jan 2010 A1
20100066763 MacDougall Mar 2010 A1
20100088606 Kanno Apr 2010 A1
20100088694 Peng Apr 2010 A1
20100091024 Myadam Apr 2010 A1
20100115454 Tuli May 2010 A1
20100164984 Rane Jul 2010 A1
20100218086 Howell et al. Aug 2010 A1
20100231598 Hernandez et al. Sep 2010 A1
20100275161 DiCamillo Oct 2010 A1
20100321393 Levantovsky Dec 2010 A1
20110029103 Mann et al. Feb 2011 A1
20110032074 Novack et al. Feb 2011 A1
20110090229 Bacus et al. Apr 2011 A1
20110090230 Bacus et al. Apr 2011 A1
20110093565 Bacus et al. Apr 2011 A1
20110115797 Kaplan May 2011 A1
20110131153 Grim, III Jun 2011 A1
20110188761 Boutros et al. Aug 2011 A1
20110203000 Bacus et al. Aug 2011 A1
20110238495 Kang Sep 2011 A1
20110258535 Adler, III et al. Oct 2011 A1
20110271180 Lee Nov 2011 A1
20110276872 Kataria Nov 2011 A1
20110289407 Naik Nov 2011 A1
20110310432 Waki Dec 2011 A1
20120001922 Escher et al. Jan 2012 A1
20120016964 Veen et al. Jan 2012 A1
20120033874 Perronnin Feb 2012 A1
20120066590 Harris et al. Mar 2012 A1
20120072978 DeLuca Mar 2012 A1
20120092345 Joshi et al. Apr 2012 A1
20120102176 Lee et al. Apr 2012 A1
20120102391 Lee Apr 2012 A1
20120127069 Santhiveeran et al. May 2012 A1
20120134590 Petrou May 2012 A1
20120215640 Ramer et al. Aug 2012 A1
20120269425 Marchesotti Oct 2012 A1
20120269441 Marchesotti et al. Oct 2012 A1
20120288190 Tang Nov 2012 A1
20120306852 Taylor Dec 2012 A1
20120307263 Ichikawa et al. Dec 2012 A1
20120323694 Lita et al. Dec 2012 A1
20120323971 Pasupuleti Dec 2012 A1
20130033498 Linnerud Feb 2013 A1
20130067319 Olszewski et al. Mar 2013 A1
20130120396 Kaplan May 2013 A1
20130127872 Kaplan May 2013 A1
20130156302 Rodriguez Serrano et al. Jun 2013 A1
20130163027 Shustef Jun 2013 A1
20130179761 Cho Jul 2013 A1
20130185028 Sullivan Jul 2013 A1
20130215126 Roberts Aug 2013 A1
20130215133 Gould et al. Aug 2013 A1
20130321617 Lehmann Dec 2013 A1
20130326348 Ip et al. Dec 2013 A1
20140025756 Kamens Jan 2014 A1
20140047329 Levantovsky et al. Feb 2014 A1
20140052801 Zuo et al. Feb 2014 A1
20140059054 Liu et al. Feb 2014 A1
20140089348 Vollmert Mar 2014 A1
20140136957 Kaasila et al. May 2014 A1
20140153012 Seguin Jun 2014 A1
20140176563 Kaasila et al. Jun 2014 A1
20140195903 Kaasila et al. Jul 2014 A1
20140279039 Systrom et al. Sep 2014 A1
20140282055 Engel et al. Sep 2014 A1
20140358802 Biswas Dec 2014 A1
20150020212 Demaree Jan 2015 A1
20150030238 Yang et al. Jan 2015 A1
20150036919 Bourdev et al. Feb 2015 A1
20150055855 Rodriguez et al. Feb 2015 A1
20150062140 Levantovsky et al. Mar 2015 A1
20150074522 Harned, III et al. Mar 2015 A1
20150097842 Kaasila Apr 2015 A1
20150100882 Severenuk Apr 2015 A1
20150146020 Imaizumi et al. May 2015 A1
20150154002 Weinstein et al. Jun 2015 A1
20150178476 Horton Jun 2015 A1
20150193386 Wurtz Jul 2015 A1
20150220494 Qin et al. Aug 2015 A1
20150278167 Arnold et al. Oct 2015 A1
20150339273 Yang et al. Nov 2015 A1
20150339276 Bloem et al. Nov 2015 A1
20150339543 Campanelli Nov 2015 A1
20150348297 Kaasila Dec 2015 A1
20160078656 Borson et al. Mar 2016 A1
20160092439 Ichimi Mar 2016 A1
20160140952 Graham May 2016 A1
20160170940 Levantovsky Jun 2016 A1
20160171343 Kaasila et al. Jun 2016 A1
20160182606 Kaasila et al. Jun 2016 A1
20160246762 Eaton Aug 2016 A1
20160307156 Burner Oct 2016 A1
20160307347 Matteson et al. Oct 2016 A1
20160314377 Vieira et al. Oct 2016 A1
20160321217 Ikemoto et al. Nov 2016 A1
20160344282 Hausler Nov 2016 A1
20160344828 Hausler et al. Nov 2016 A1
20160350336 Checka Dec 2016 A1
20160371232 Ellis et al. Dec 2016 A1
20170011279 Soldevila et al. Jan 2017 A1
20170017778 Ford et al. Jan 2017 A1
20170024641 Wierzynski Jan 2017 A1
20170039445 Tredoux et al. Feb 2017 A1
20170098138 Wang et al. Apr 2017 A1
20170098140 Wang et al. Apr 2017 A1
20170124503 Bastide May 2017 A1
20170237723 Gupta et al. Aug 2017 A1
20170357877 Lin Dec 2017 A1
20180039605 Pao et al. Feb 2018 A1
20180075455 Kumnick et al. Mar 2018 A1
20180082156 Jin et al. Mar 2018 A1
20180097763 Garcia et al. Apr 2018 A1
20180109368 Johnson et al. Apr 2018 A1
20180144256 Saxena et al. May 2018 A1
20180165554 Zhang Jun 2018 A1
20180203851 Wu et al. Jul 2018 A1
20180253988 Kanayama et al. Sep 2018 A1
20180285696 Eigen et al. Oct 2018 A1
20180285965 Kaasila et al. Oct 2018 A1
20180332140 Bullock Nov 2018 A1
20180341907 Tucker et al. Nov 2018 A1
20180349527 Li et al. Dec 2018 A1
20180373921 Di Carlo Dec 2018 A1
20190019087 Fukui Jan 2019 A1
20190073537 Arilla et al. Mar 2019 A1
20190095763 Arilla et al. Mar 2019 A1
20190130232 Kaasila et al. May 2019 A1
20200219274 Afridi et al. Jul 2020 A1
Foreign Referenced Citations (21)
Number Date Country
0949574 Oct 1999 EP
2166488 Mar 2010 EP
2857983 Apr 2015 EP
06-258982 Sep 1994 JP
H10-124030 May 1998 JP
2002-507289 Mar 2002 JP
2003-288184 Oct 2003 JP
05-215915 Aug 2005 JP
05-217816 Aug 2005 JP
2007-011733 Jan 2007 JP
2009-545064 Dec 2009 JP
5140997 Nov 2012 JP
544595 Aug 2003 TW
200511041 Mar 2005 TW
WO 9423379 Oct 1994 WO
WO 9900747 Jan 1999 WO
WO 0191088 Nov 2001 WO
WO 03023614 Mar 2003 WO
WO 04012099 Feb 2004 WO
WO 05001675 Jan 2005 WO
WO 2008013720 Jan 2008 WO
Non-Patent Literature Citations (54)
Entry
Ramanathan et al. “A Novel Technique for English Font Recognition Using Support Vector Machines,” 2009 International Conference on Advances in Recent Technologies in Communication and Computing, 2009, pp. 766-769.
Ramanathan et al., “Tamil Font Recognition Using Gabor Filters and Support Vector Machines,” 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, 2009, pp. 613-615.
“A first experiment with multicoloured web fonts,” Manufactura Independente website, Feb. 28, 2011, Retrieved from the internet: http://blog.manufacturaindependente.org/2011/02/a-first-experiment-with-multicoloured-web-fonts/.
“Announcing Speakeasy: A new open-source language tool from Typekit,” Oct. 28, 2010, on-line http://blog.typekit.com/2010/10/28/announcing-speakeasy-a-new-open-source-language-tool-from-typekit/.
“Colorfont/v1,” Feb. 28, 2011, retrieved from the internet: http://manufacturaindependente.com/colorfont/v1/.
“Flash CS4 Professional ActionScript 2.0”, 2007, retrieved on http://help.adobe.com/en_US/AS2LCR/Flash_10.0/help.html?content=00000284.html on Aug. 31, 2015.
“photofont.com—Use photofonts,” Sep. 2, 2012, retrieved from the internet: http://web.archive.org/web/20120902021143/http://photofont.com/photofont/use/web.
“Saffron Type System”, retrieved from the internet Nov. 12, 2014, 7 pages.
Adobe Systems Incorporated, “PostScript Language Reference—Third Edition,” Feb. 1999, pp. 313-390.
Adobe Systems Incorporated, “The Type 42 Font Format Specification,” Technical Note #5012, Jul. 31, 1998, pp. 1-24.
Adobe Systems Incorporated, “To Unicode Mapping File Tutorial,” Adobe Technical Note, XP002348387, May 2003.
Apple Computers, “The True Type Font File,” Oct. 27, 2000, pp. 1-17.
Celik et al., “W3C, CSS3 Module: Fonts,” W3C Working Draft, Jul. 31, 2001, pp. 1-30.
Doughty, Mike, “Using OpenType® Fonts with Adobe® InDesign®,” Jun. 11, 2012 retrieved from the internet: http://webarchive.org/web/20121223032924/http://www.sketchpad.net/opentype-indesign.htm (retrieved Sep. 22, 2014), 2 pages.
European Search Report, 13179728.4, dated Sep. 10, 2015, 3 pages.
European Search Report, 14184499.3, dated Jul. 13, 2015, 7 pages.
European Search Report, 14187549.2, dated Jul. 30, 2015, 7 pages.
Extensis, Suitcase 10.2, Quick Start Guide for Macintosh, 2001, 23 pgs.
Font Pair, [online]. “Font Pair”, Jan. 20, 2015, Retrieved from URL: http://web.archive.org/web/20150120231122/http://fontpair.co/, 31 pages.
Forums.macrumors.com' [online]. “which one is the main Fonts folder ?” May 19, 2006, [retrieved on Jun. 19, 2017]. Retrieved from the Internet: URL<https://forums.macrumors.com/threads/which-one-is-the-main-fontsfolder.202284/>, 7 pages.
George Margulis, “Optical Character Recognition: Classification of Handwritten Digits and Computer Fonts”, Aug. 1, 2014, URL: https://web.archive.org/web/20140801114017/http://cs229.stanford.edu/proj2011/Margulis-OpticalCharacterRecognition.pdf.
Goswami, Gautum, “Quite ‘Writly’ Said!,” One Brick at a Time, Aug. 21, 2006, Retrieved from the internet: :http://gautamg.wordpress.com/2006/08/21/quj.te-writely-said/ (retrieved on Sep. 23, 2013), 3 pages.
International Preliminary Report on Patentability issued in PCT application No. PCT/US2013/071519 dated Jun. 9, 2015, 8 pages.
International Preliminary Report on Patentability issued in PCT application No. PCT/US2015/066145 dated Jun. 20, 2017, 7 pages.
International Preliminary Report on Patentability issued in PCT application No. PCT/US2016/023282, dated Oct. 26, 2017, 9 pages.
International Search Report & Written Opinion issued in PCT application No. PCT/US10/01272, dated Jun. 15, 2010, 6 pages.
International Search Report & Written Opinion issued in PCT application No. PCT/US2011/034050 dated Jul. 15, 2011, 13 pages.
International Search Report & Written Opinion, PCT/US2013/026051, dated Jun. 5, 2013, 9 pages.
International Search Report & Written Opinion, PCT/US2013/071519, dated Mar. 5, 2014, 12 pages.
International Search Report & Written Opinion, PCT/US2013/076917, dated Jul. 9, 2014, 11 pages.
International Search Report & Written Opinion, PCT/US2014/010786, dated Sep. 30, 2014, 9 pages.
International Search Report & Written Opinion, PCT/US2016/023282, dated Oct. 7, 2016, 16 pages.
Japanese Office Action, 2009-521768, dated Aug. 28, 2012.
Japanese Office Action, 2013-508184, dated Apr. 1, 2015.
Ma Wei-Ying et al., “Framework for adaptive content delivery in heterogeneous network environments”, Jan. 24, 2000, Retrieved from the Internet: http://www.cooltown.hp.com/papers/adcon/MMCN2000.
Open Text Exceed, User's Guide, Version 14, Nov. 2009, 372 pgs.
Saurabh, Kataria et al., “Font retrieval on a large scale: An experimental study”, 2010 17th IEEE International Conference on Image Processing (ICIP 2010); Sep. 26-29, 2010; Hong Kong, China, IEEE, Piscataway, NJ, USA, Sep. 26, 2010, pp. 2177-2180.
Supplementary European Search Report, European Patent Office, European patent application No. EP 07796924, dated Dec. 27, 2010, 8 pages.
TrueType Fundamentals, Microsoft Typography, Nov. 1997, pp. 1-17.
Typeconnection, [online]. “typeconnection”, Feb. 26, 2015, Retrieved from URL: http://web.archive.org/web/20150226074717/http://www.typeconnection.com/stepl.php, 4 pages.
Universal Type Server, Upgrading from Suitcase Server, Sep. 29, 2009, 18 pgs.
Wenzel, Martin, “An Introduction to OpenType Substitution Features,” Dec. 26, 2012, Retrieved from the internet: http://web.archive.org/web/20121226233317/http://ilovetypography.com/OpenType/opentype-features. Html (retrieved on Sep. 18, 2014), 12 pages.
Written Opposition to the grant of Japanese Patent No. 6097214 by Masatake Fujii, dated Sep. 12, 2017, 97 pages, with partial English translation.
European Search Report in European Application No. 18193233.6, dated Nov. 11, 2018, 8 pages.
European Search Report in European Application No. 18197313.2, dated Nov. 30, 2018, 7 pages.
Chen et al., “Detecting and reading text in natural scenes,” Proceedings of the 2004 IEEE Computer Society Conference Vision and Pattern Recognition; Publication [online]. 2004 [retrieved Dec. 16, 2018]. Retrieved from the Internet: <URL:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.334.2715&rep=rep1&type=pdf>; pp. 1-8.
International Search Report & Written Opinion in International Application No. PCT/US18/58191, dated Feb. 19, 2019, 17 pages.
Koren et al., “Visualization of labeled data using linear transformations.” IEEE Symposium on Information Visualization, 2003 (IEEE Cat. No. 03TH8714).
Liu, “Visual Exploration and Comparative Analytics of Multidimensional Data Sets”, Graduate Program in Computer Science and Engineering, The Ohio State University, 2016, 210 pages.
Shusen, et al. “Distortion-Guided Structure-Driven Interactive Exploration of High-Dimensional Data,” Computer Graphics Forum., 2014, 33(3):101-110.
Wu et al., “Stochastic neighbor projection on manifold for feature extraction.” Neurocomputing, 2011, 74(17):780-2789.
O'Donovan et al., “Exploratory Font Selection Using Crowdsourced Attributes,” ACT TOG, Jul. 2014, 33(4): 9 pages.
Www.dgp.toronto.edu [online], “Supplemental Material: Exploratory Font Selection Using Crowdsourced Attributes,” available on or before May 12, 2014, via Internet Archive: Wayback Machine URL<https://web.archive.org/web/20140512101752/http://www.dgp.toronto.edu/˜donovan/font/supplemental.pdf>, retrieved on Jun. 28, 2021, URL<http://www.dgp.toronto.edu/˜donovan/font/supplemental.pdf>, 9 pages.
Wikipedia.com [online], ““Feature selection,”” Wikipedia, Sep. 19, 2017, retrieved on Oct. 19, 2021, retrieved from URL <https://en.wikipedia.org/w/index.php? title=Feature selection&oldid=801416585>, 15 pages.
Provisional Applications (1)
Number Date Country
62195165 Jul 2015 US