Websites typically include a collection of webpages published on web servers for access via the Internet. A webpage is a document composed according to a web design language for rendering and display in a web browser. Examples of web design languages include Hypertext Markup Language (“HTML”) and Extensible Markup Language (“XML”). Various components of a webpage can identify content as well as the manner in which text, images, videos, or other types of content are rendered and displayed on the webpage. A webpage can also be linked to other webpages via hyperlinks. When a user clicks on a hyperlink on a webpage, a web browser can retrieve the new webpage identified in the hyperlink and render and display the new webpage in place of the original webpage in the web browser.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Developing a website typically starts with a design team generating a design of various webpages of the website. The design can include arrangements of various user interface (“UI”) components for rendering and displaying text, image, video, or other types of content on a webpage as well as desired functionalities of such UI components. For example, a design of a webpage can include a title component having a text property for containing a string value for use as a caption for the title component. In another example, a design can also include a button component with a text property for containing a label for the button component as well as an appearance property that defines a visual appearance of the button component (e.g., primary, secondary, etc.). In further examples, the design of a webpage can also include image, link action, paragraph, video, or other suitable types of UI components with corresponding properties.
Upon completion of the design, the design team can pass the design of the webpage to a prototype team for functionalizing the various UI components of the design using, for instance, pseudocode. For example, the prototype team can generate data structures that describe suitable rendering and display of the UI components as well as functionalize the UI components on the webpage. Upon completion of prototyping the webpage according to the design, the prototype team can send the prototyped webpage back to the design team for verification. Upon receiving feedback from the design team, the prototype team can revise and reconfigure the prototyped webpage and send the revised webpage to the design team for further feedback. Such a feedback and revision process can be repeated multiple times until the prototyped webpage is satisfactory to the design team. Subsequently, a production team can convert the prototyped webpage to HTML, XML, or other suitable types of web design code for deployment on web servers.
The foregoing process for developing a website or webpage can have certain drawbacks. First, the process can be error prone because communications between the design team and the prototype team can sometimes be distorted such that the intent of the webpage design is misunderstood, misinterpreted, or misconstrued during prototyping. Messages, or the meanings of messages, from one team or team member to another can mutate when the messages are transmitted, repeated, paraphrased, or responded to multiple times. Second, the process involves having the design team provide feedback to the prototype team multiple times to adjust the prototyped webpage. Such repetitive operations to converge on a satisfactory design can be labor intensive and costly.
Several embodiments of the disclosed technology can address certain aspects of the foregoing drawbacks by implementing at least partially automated prototyping of designs of webpages. In some implementations, a design team (or other suitable entities) can generate a data schema for defining various UI components of designs for webpages. For example, a design team can define a data schema for a button component to include definitions of appearance, text, state, or other types of properties of the button component and possible values of such properties. The appearance property can include a property value that describes a color, shading, or other visual features of a button component as a primary or secondary appearance. The text property can include a text value (e.g., a text string) that is a label for the button component. The data schema can also include a state property having a value of, for instance, enabled or disabled, as well as other suitable types of properties for a button component. In other embodiments, the data schema can also be generated automatically using existing webpages or via other suitable techniques.
In certain embodiments, the data schema can also include a child property that can be configured to define one or more levels of nested child or sub-components in a UI component. For example, one child property can include a button component subordinate to a login component while another child property can include a heading component subordinate to an image component. The sub-components can also further include additional subordinate components of their own with additional levels of nesting. As described in more detail later, such nested child properties can be used to identify multiple levels of sub-components of a webpage design to facilitate automated generation of data structures that describe the webpage design.
Using the data schema, a data generator can be configured to generate one or more sets of component data of UI components according to the defined data schema. For instance, in the example above, the data generator can be configured to generate component data for multiple button components that have different appearances and/or text-value labels according to the defined data schema. Each set of component data can include an appearance value and a text value corresponding to the appearance and text properties, respectively. Using both the data schema and the component data, a data visualizer can be configured to create screenshots or other suitable types of images of the button components with the respective appearances and text values as labels. For instance, the data visualizer can generate an image of a button (e.g., a rectangular shape) having an appearance defined by the appearance value (e.g., primary) and a label defined by the text value generated by the data generator (e.g., “Hello World!”).
A model developer can be configured to develop a recognition model of the various UI components defined in the data schema using both the component data and the screenshots generated using the component data as training datasets. In certain implementations, the model developer can be configured to identify the various UI components on the screenshots based on the training datasets using a “neural network” or “artificial neural network” configured to “learn” or progressively improve performance of tasks by studying known examples. The neural network can include multiple layers of objects generally referred to as “neurons” or “artificial neurons.” Each neuron can be configured to perform a function, such as a non-linear activation function, based on one or more inputs via corresponding connections. Artificial neurons and connections typically have a contribution value that adjusts as learning proceeds. The contribution value increases or decreases the strength of an input at a connection. Typically, artificial neurons are organized in layers. Different layers may perform different kinds of transformations on respective inputs. Signals typically travel from an input layer to an output layer, possibly after traversing one or more intermediate layers. Thus, by using a neural network, the model developer can provide a recognition model that can be used by a prototype developer to automatically convert a design for a webpage into a data structure suitable for generating HTML, XML, or other web design codes for the webpage. In additional implementations, the model developer can be configured to generate the recognition model based on user-provided rules or via other suitable techniques.
In operation, a prototype developer can be configured to use the recognition model from the model developer to automatically generate a data structure that describes a design for a webpage via computer vision. For instance, the design team can generate a screenshot or image that includes multiple UI components for a design of a webpage. Upon receiving the screenshot or image, for example, via a camera or scanner, the prototype developer can be configured to determine whether an area of the screenshot or image contains nested UI components based on the recognition model. For instance, the recognition model of a card component can indicate one or more possible sub-components, such as a title, image, or paragraph sub-component. Based on the indication, the prototype developer can be configured to search the area and determine whether a title or image is found. Upon finding a title or image, the prototype developer can indicate that the area includes nested UI components. Otherwise, the prototype developer can indicate that the area contains no nested UI components.
In response to determining that the area does not contain any nested UI components, the prototype developer can be configured to identify and recognize the UI component on the screenshot or image based on visual appearances of the UI component using the recognition model. For example, the prototype developer can identify that a UI component having a rectangular shape and a label of text within the rectangular shape corresponds to a button component. The prototype developer can then be configured to identify various properties of the button component. For example, the prototype developer can identify an appearance of the button component based on a color, shading, or other suitable parameters of the rectangular shape. The prototype developer can also identify a text property by identifying the label of text via Optical Character Recognition (“OCR”). Upon completion of identifying the button component and associated properties, the prototype developer can be configured to convert the identified UI component into a data structure having various properties and property values that describe the UI component. As such, the prototype developer can identify the UI component as a button component and use a data structure with appearance and text properties to describe the appearance and label of the recognized button component.
In response to determining that the area does contain nested UI components, the prototype developer can be configured to identify the next largest sub-area in the focused area and determine whether the sub-area contains nested UI components. Upon determining that the sub-area does not contain nested UI components, the prototype developer can be configured to recognize the UI component and convert the recognized UI component into another data structure, as described above. Upon determining that the sub-area does contain nested UI components, the prototype developer can be configured to repeat the foregoing processes until no more nested UI components are found. As such, by implementing the foregoing recognition and conversion procedures, the prototype developer can be configured to convert the screenshot or image of the design for the webpage into a data structure in, for instance, pseudocode that describes the various UI components on the webpage. A production team can then develop and deploy HTML, XML, or other suitable web design codes for the webpage based on the data structure from the prototype developer.
Several embodiments of the disclosed technology can at least reduce risks of miscommunication between the design team and the prototype team. By using the prototype developer, a design for a webpage from the design team can be automatically converted into a data structure in, for instance, pseudocode. As such, communications between the design team and the prototype team can be at least reduced or even eliminated. The foregoing technique can also reduce the time and effort for developing the webpage by at least reducing the feedback and adjustment operations between the design and prototype teams. As such, costs for developing webpages and websites can be reduced when compared to using the feedback and adjustment process.
Certain embodiments of systems, devices, components, modules, routines, data structures, and processes for automatic conversion of webpage designs to data structures are described below. In the following description, specific details of components are included to provide a thorough understanding of certain embodiments of the disclosed technology. A person skilled in the relevant art will also understand that the technology can have additional embodiments. The technology can also be practiced without several of the details of the embodiments described below.
As used herein, a UI component can be a user interface element designed for rendering and displaying corresponding types of content on a webpage. For example, a UI component can include a button component designed to render and display a toggle button on a webpage. In another example, a UI component can include a table component designed to render and display an array of data on a webpage. In yet another example, a UI component can also include a user interface element designed to render and display a dialog, video, image, paragraph, or other suitable types of content.
Also used herein, a data schema can be a diagrammatic representation of a data structure that can be used to describe a UI component. A data schema can identify various properties of a data structure that describes a UI component as well as possible values of the properties. A data structure is a manifestation of a corresponding data schema used in a data resource. For instance, the following illustrative listing, in which the property titles and descriptions are representative, can be an example data schema for a button component:
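    {
      "$schema": "http://json-schema.org/draft-07/schema",
      "$id": "http://example.com/button.json",
      "type": "object",
      "title": "A button schema",
      "required": ["appearance", "text"],
      "properties": {
        "appearance": {
          "type": "string",
          "title": "Appearance",
          "description": "The appearance of the button.",
          "enum": ["primary", "secondary"]
        },
        "text": {
          "type": "string",
          "title": "Text",
          "description": "The label displayed on the button."
        }
      }
    }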
As shown above, the example data schema can include a source (i.e., “http://json-schema.org/draft-07/schema”), an identifier (i.e., “http://example.com/button.json”), a type (i.e., “object”), a title (i.e., “A button schema”), and an identification of one or more required properties (i.e., “appearance” and “text”). The data schema can also define each of the properties, such as “appearance,” with a corresponding type, title, description, and possible values, i.e., “primary” and “secondary.” Using the foregoing data schema, component data can be generated for a button component having a syntax according to the defined data schema. For instance, the following is an illustrative example of component data for a button component with corresponding appearance and text property values (the identifier field name and the specific values shown are representative):
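    {
      "component": "button",
      "appearance": "primary",
      "text": "Hello World!"
    }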
A data schema can be defined by a designer or a design team, and/or generated at least partially automatically based on, for instance, existing webpages or via other suitable techniques.
Typical webpage development involves multiple rounds of feedback and adjustment of a prototyped design for a webpage between a design team and a prototype team. Such a development process can be error prone and costly. Several embodiments of the disclosed technology can address certain aspects of the foregoing drawbacks by implementing at least partially automated prototyping of designs of webpages. By using a prototype developer, a design for a webpage from the design team can be automatically converted into a data structure using a recognition model. As such, communications between the design team and the prototype team can be at least reduced or even eliminated. The disclosed technique can also reduce the time and effort for developing the webpage by at least reducing the feedback and adjustment operations between the design and prototype teams. As such, costs for developing webpages and websites can be reduced when compared to using the feedback and adjustment process, as described in more detail below.
Components within a system may take different forms. As one example, a system comprising a first component, a second component, and a third component can, without limitation, encompass a system that has the first component being a property in source code, the second component being a binary compiled library, and the third component being a thread created at runtime. A computer program, procedure, or process may be compiled into object, intermediate, or machine code and presented for execution by one or more processors of a personal computer, a network server, a laptop computer, a smartphone, and/or other suitable computing devices.
Equally, components may include hardware circuitry. A person of ordinary skill in the art would recognize that hardware may be considered fossilized software, and software may be considered liquefied hardware. As just one example, software instructions in a component may be burned to a Programmable Logic Array circuit or may be designed as a hardware circuit with appropriate integrated circuits. Equally, hardware may be emulated by software. Various implementations of source, intermediate, and/or object code and associated data may be stored in a computer memory that includes read-only memory, random-access memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other suitable computer readable storage media excluding propagated signals.
In certain embodiments, a designer 101 can use a computing device 102 having a schema designer 103, a data generator 104, a data visualizer 106, and a model developer 116 operatively coupled to one another and to a data store 108.
The computing device 102 can be configured to facilitate the designer 101 in performing various tasks. For example, the computing device 102 can facilitate the designer 101 in composing, modifying, or performing other suitable actions on a data schema 110. In other examples, the computing device 102 can also facilitate the designer 101 in performing various computational, communication, or other suitable types of tasks. In the illustrated embodiment, the computing device 102 includes a desktop computer. In other embodiments, the computing device 102 can also include a laptop computer, a tablet, a smartphone, or other suitable types of electronic devices with additional and/or different hardware/software components.
The schema designer 103 can be configured to facilitate composition or modification of a data schema 110 by the designer 101. In one implementation, the schema designer 103 can include a text editor. In other implementations, the schema designer 103 can include another suitable type of application for receiving input from the designer 101 and generating a data schema 110 according to the received input.
The data generator 104 can be configured to generate one or more sets of component data 112 of UI components according to the data schema 110. For instance, the following is an illustrative example of component data 112 for a button component:
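    {
      "component": "button",
      "appearance": "primary",
      "text": "Hello World!"
    }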
As shown above, the example button component can be a button that has a “primary” appearance and a displayed label of “Hello World!” The data generator 104 can also generate additional and different component data 112 based on the same data schema 110. For instance, the following is another illustrative example of component data 112 for another button component:
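    {
      "component": "button",
      "appearance": "secondary",
      "text": "Hello Pluto!"
    }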
Thus, the example button component above has a different appearance and label, i.e., “secondary” and “Hello Pluto!,” than those of the first example button component.
Upon generating the set of component data 112, the data generator 104 can transmit the component data 112 to the data visualizer 106 for generating screenshots 114 (or other suitable types of images) of UI components based on the component data 112. In certain embodiments, the data visualizer 106 can be configured to identify an image template 107 (e.g., “button”) based on an identifier of the UI component in the component data 112. The data visualizer 106 can then format a copy of the image template 107 using the property values of the various properties defined in the component data 112 to generate the screenshot 114.
Upon generating the screenshots 114 based on the component data 112, the data visualizer 106 can be configured to transmit both the screenshots 114 and the component data 112 to the model developer 116 for generating a recognition model 118 that correlates visual features of the screenshots 114 to one or more of the UI components based on (i) the generated component data 112 and (ii) the set of screenshots 114 generated based on the component data 112. In certain implementations, the model developer 116 can be configured to identify the various UI components on the screenshots based on the training datasets using a “neural network” or “artificial neural network” configured to “learn” or progressively improve performance of tasks by studying known examples. The neural network can include multiple layers of objects generally referred to as “neurons” or “artificial neurons.” Each neuron can be configured to perform a function, such as a non-linear activation function, based on one or more inputs via corresponding connections. Artificial neurons and connections typically have a contribution value that adjusts as learning proceeds. The contribution value increases or decreases the strength of an input at a connection.
Typically, artificial neurons are organized in layers. Different layers may perform different kinds of transformations on respective inputs. Signals typically travel from an input layer to an output layer, possibly after traversing one or more intermediate layers. Thus, by using a neural network, the model developer 116 can provide a recognition model 118 that can be used by the prototype developer 120 to automatically convert a design for a webpage into a data structure suitable for generating HTML, XML, or other web design codes for the webpage.
The webpage designer 140 can be configured to provide facilities, such as template galleries of UI components and menus, that the designer 101 can use to compose a design 141 for a webpage. One suitable webpage designer 140 is Adobe Photoshop provided by Adobe of San Jose, Calif.
In certain embodiments, the prototype developer 120 can include an interface module 122 operatively coupled to an analysis module 124.
The interface module 122 can be configured to capture an image of the design 141. For example, in one embodiment, the interface module 122 can include a camera driver configured to capture a snapshot of the design 141 on a whiteboard, piece of paper, or other media via a camera or scanner (not shown). In other embodiments, the interface module 122 can be configured to capture the snapshot of the design 141 by receiving the design 141 as an image or other suitable type of electronic file. In further embodiments, the interface module 122 can be configured to target one or more manually identified websites and capture screenshots of such websites instead of receiving the design 141 from the webpage designer 140. The captured screenshots can then be processed by the analysis module 124 as described below and compiled into a library of webpage designs or used for other suitable purposes. Upon receiving the design 141, the interface module 122 can be configured to pass the design 141 to the analysis module 124 for further processing.
The analysis module 124 can be configured to analyze one or more areas of the received design 141 and recognize UI components and sub-components on the design 141 based on the recognition model 118 from the data store 108. For example, the analysis module 124 can be configured to determine whether an area of the screenshot or image of the design 141 contains nested UI components. For instance, the design 141 can include a card component 142 that contains multiple nested sub-components, as described below.
In response to determining that the area does not contain any nested UI components, the analysis module 124 can be configured to identify and recognize the UI component on the screenshot or image based on visual appearances of the UI component using the recognition model 118. For example, the analysis module 124 can identify an area corresponding to the image component 143 based on visual features included in that area. The analysis module 124 can then be configured to identify various properties of the image component. In the illustrated example, the analysis module 124 can identify that a source property of the image component 143 includes a string, i.e., “https://placehold.it/400x220/414141”, that is a source of the image in the image component 143. In another example, the analysis module 124 can also identify that the paragraph component 148 can include a size property, e.g., a number of words, and a text property containing text processed via Optical Character Recognition (“OCR”). In further examples, the analysis module 124 can also generate multiple candidate UI components based on the recognition model 118. For instance, the analysis module 124 can identify that an area corresponding to the image component 143 can be either an image component or a badge component. In certain implementations, the analysis module 124 can be configured to output all candidate UI components to the designer 101 for selection. In other implementations, the analysis module 124 can be configured to select a closest match (e.g., the image component) based on the visual features and optionally output the other candidates (e.g., the badge component) as one or more alternates.
In response to determining that the area does contain nested UI components, the analysis module 124 can be configured to identify the next largest sub-area in the focused area and determine whether the sub-area contains nested UI components. For example, upon identifying that the card component 142 includes an image component 143, a badge component 144, a heading component 146, a paragraph component 148, and a linked-action component 150, the analysis module 124 can be configured to determine whether the image component 143 includes additional sub-components. Upon determining that the sub-area, e.g., the image component 143, does not contain nested UI components, the analysis module 124 can be configured to recognize the UI component and convert the recognized UI component into a data structure. In one embodiment, the data structure can be generated according to the data schema 110 described above.
Upon determining that the sub-area also contains nested UI components, the analysis module 124 can be configured to repeat the foregoing processes until no more nested UI components are found. As such, by implementing the foregoing recognition and conversion procedures, the analysis module 124 can be configured to convert the screenshot or image of the design 141 for the webpage into a data structure in, for instance, pseudocode that describes the various UI components on the webpage. The following is an illustrative example of such a data structure converted from the screenshot of the design 141, in which a “children” property carries the nested sub-components and any property names or values not expressly described herein are representative only:
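    {
      "component": "card",
      "id": "card-1",
      "children": [
        {
          "component": "image",
          "id": "image-1",
          "src": "https://placehold.it/400x220/414141"
        },
        {
          "component": "badge",
          "id": "badge-1",
          "text": "LOREM"
        },
        {
          "component": "heading",
          "id": "heading-1"
        },
        {
          "component": "paragraph",
          "id": "paragraph-1"
        },
        {
          "component": "linked-action",
          "id": "linked-action-1"
        }
      ]
    }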
As shown above, the prototype developer 120 can be configured to identify that the design 141 includes a card component 142 that has several sub-components, i.e., the image component 143, the badge component 144, the heading component 146, the paragraph component 148, and the linked-action component 150, each identified individually with a component ID, e.g., “image-1.” The data structure above can also define and describe various properties of the component and sub-components. For instance, the badge component 144 can include a text property with a value of “LOREM.” As such, based on the data structure, an optional webpage coder 130 or a production team can then develop and deploy HTML, XML, or other suitable web design codes for the webpage.
Several embodiments of the disclosed technology can thus at least reduce risks of miscommunication between the design team and the prototype team. By using the prototype developer 120, a design 141 for a webpage from the designer 101 can be automatically converted into a data structure in, for instance, pseudocode. As such, communications between the designer 101 and the prototype team can be at least reduced or even eliminated. The foregoing technique can also reduce the time and effort for developing the webpage by at least reducing the feedback and adjustment operations between the design and prototype teams. As such, costs for developing webpages and websites can be reduced when compared to using the feedback and adjustment process.
The process 200 can also include generating component data of UI components based on the defined data schema at stage 204. The component data can be generated to comply with a syntax of the defined data schema. As such, the component data can include at least an identifier of the UI component as well as one or more properties and associated property values as defined in the data schema. The process 200 can further include generating screenshots or other suitable types of images of the UI components using the generated component data at stage 206. The screenshots can be generated by using image templates and formatting such image templates according to the property values of the one or more properties of the UI component, as described above.
The process 200 can further include developing a recognition model for recognizing various screenshots of UI components at stage 208. In certain implementations, a neural network can be used to develop the recognition model based on both the component data and the corresponding screenshots generated based on the component data. In other implementations, the recognition model can be generated in other suitable manners, as described above.
In a very basic configuration 302, the computing device 300 can include one or more processors 304 and a system memory 306.
Depending on the desired configuration, the processor 304 can be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 304 can include one or more levels of caching, such as a level-one cache 310 and a level-two cache 312, a processor core 314, and registers 316. An example processor core 314 can include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 318 can also be used with the processor 304, or in some implementations the memory controller 318 can be an internal part of the processor 304.
Depending on the desired configuration, the system memory 306 can be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 306 can include an operating system 320, one or more applications 322, and program data 324. Together, these components form the basic configuration 302 described above.
The computing device 300 can have additional features or functionality, and additional interfaces to facilitate communications between basic configuration 302 and any other devices and interfaces. For example, a bus/interface controller 330 can be used to facilitate communications between the basic configuration 302 and one or more data storage devices 332 via a storage interface bus 334. The data storage devices 332 can be removable storage devices 336, non-removable storage devices 338, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media can include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The term “computer readable storage media” or “computer readable storage device” excludes propagated signals and communication media.
The system memory 306, removable storage devices 336, and non-removable storage devices 338 are examples of computer readable storage media. Computer readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media which can be used to store the desired information and which can be accessed by the computing device 300. Any such computer readable storage media can be a part of the computing device 300. The term “computer readable storage medium” excludes propagated signals and communication media.
The computing device 300 can also include an interface bus 340 for facilitating communication from various interface devices (e.g., output devices 342, peripheral interfaces 344, and communication devices 346) to the basic configuration 302 via bus/interface controller 330. Example output devices 342 include a graphics processing unit 348 and an audio processing unit 350, which can be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 352. Example peripheral interfaces 344 include a serial interface controller 354 or a parallel interface controller 356, which can be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 358. An example communication device 346 includes a network controller 360, which can be arranged to facilitate communications with one or more other computing devices 362 over a network communication link via one or more communication ports 364.
The network communication link can be one example of a communication media. Communication media can typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media. A “modulated data signal” can be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein can include both storage media and communication media.
The computing device 300 can be implemented as a portion of a small-form-factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. The computing device 300 can also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
From the foregoing, it will be appreciated that specific embodiments of the disclosure have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. In addition, many of the elements of one embodiment may be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the technology is not limited except as by the appended claims.