Embodiments of the present disclosure relate generally to search engines and, more particularly, but not by way of limitation, to using a specially configured search engine to generate congruent item replacements for design motifs.
In recent years, users have increasingly used image-based search sites to find items (e.g., couches, pillows, sinks, tables, clothes) to decorate their homes, offices, etc. However, images of the items are a poor substitute for the items themselves, since the users cannot inspect the items in real life, e.g., in a showroom. Thus, the users are forced to make decisions based only on the provided images and description data. Further, some users, such as clothing designers or interior designers, seek to create arrangements of items that aesthetically function together in a design or style motif. For example, a clothing designer may select shoes, pants, a shirt, a watch, and a hat to create a stylish outfit, or an interior designer may select a couch, chair, table, vase, and floor lamp to create a living room style motif. While it is difficult to judge individual items using only images, it is far more complex, if not impossible, for an average user to create an aesthetically pleasing arrangement of items using online images because the user cannot examine how the items look next to one another. Additionally, the user may not have the requisite design experience to create stylish arrangements of items.
Various ones of the appended drawings merely illustrate example embodiments of the present disclosure and should not be considered as limiting its scope.
The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.
A non-expert user can submit style criteria specifying a desired style for an environment selected by the non-expert user. A design expert can review the criteria data and select a plurality of items on a website and arrange the items in a visual representation such as a floor plan or image of a room. The non-expert user can create a request to change some or all of the items selected and arranged by the expert user. For example, the non-expert user may opt to view a similar arrangement of items but at a lower cost. The design replacement system can access a data structure, such as a network graph, to find congruent replacement items to include in the design. The replacement items can be identified taking into account the class or type of items (e.g., a chair class, a lamp class), a constraint value (e.g., lower price, different materials), and user interactions (e.g., saves, likes) with the items on the website. Once identified, the replacement items replace the initial items in the design. In this way, the non-expert user can find replacement items while maintaining the arrangement of items as arranged by the expert, and further maintaining the selection and style of the item classes as selected by the expert user.
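As a non-limiting illustration of this flow, the following Python sketch shows one way replacement items could be chosen per item class under a price constraint and ranked by user interactions. The class and function names (Item, find_replacements, interaction_count) are hypothetical and are not taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    item_class: str         # e.g., "chair", "lamp"
    price: float
    interaction_count: int  # e.g., saves/likes on the network site

def find_replacements(design_items, catalog, max_price):
    """For each item in the design, choose the most-interacted-with catalog
    item of the same class that satisfies the price constraint."""
    replacements = {}
    for original in design_items:
        candidates = [c for c in catalog
                      if c.item_class == original.item_class
                      and c.price <= max_price
                      and c.item_id != original.item_id]
        if candidates:
            replacements[original.item_id] = max(
                candidates, key=lambda c: c.interaction_count)
    return replacements
```

Because replacements are keyed by the identifier of the item they replace, the arrangement of items in the visual representation is preserved while only the items themselves change.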
With reference to
In various implementations, the client device 110 comprises a computing device that includes at least a display and communication capabilities that provide access to the networked system 102 via the network 104. The client device 110 comprises, but is not limited to, a remote device, work station, computer, general purpose computer, Internet appliance, hand-held device, wireless device, portable device, wearable computer, cellular or mobile phone, personal digital assistant (PDA), smart phone, tablet, ultrabook, netbook, laptop, desktop, multi-processor system, microprocessor-based or programmable consumer electronics, game console, set-top box, network personal computer (PC), mini-computer, and so forth. In an example embodiment, the client device 110 comprises one or more of a touch screen, accelerometer, gyroscope, biometric sensor, camera, microphone, Global Positioning System (GPS) device, and the like.
The client device 110 communicates with the network 104 via a wired or wireless connection. For example, one or more portions of the network 104 comprises an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a Wireless Fidelity (WI-FI®) network, a Worldwide Interoperability for Microwave Access (WiMax) network, another type of network, or any suitable combination thereof.
In some example embodiments, the client device 110 includes one or more of the applications 114 (also referred to as “apps”) such as, but not limited to, web browsers, book reader apps (operable to read e-books), media apps (operable to present various media forms including audio and video), fitness apps, biometric monitoring apps, messaging apps, and electronic mail (email) apps. In some implementations, the client application 114 includes various components operable to present information to the user and communicate with the networked system 102.
The web client 112 accesses the various systems of the networked system 102 via the web interface supported by a web server 122. Similarly, the programmatic client 116 and client application 114 access the various services and functions provided by the networked system 102 via the programmatic interface provided by an application program interface (API) server 120.
Users (e.g., the user 106) comprise a person, a machine, or other means of interacting with the client device 110. In some example embodiments, the user 106 is not part of the network architecture 100, but interacts with the network architecture 100 via the client device 110 or another means. For instance, the user 106 provides input (e.g., selections of items or other style criteria, or slider generated requests) to the client device 110 and the input is communicated to the networked system 102 via the network 104. In this instance, the networked system 102, in response to receiving the input from the user 106, communicates information to the client device 110 via the network 104 to be presented to the user 106. In this way, the user 106 can interact with the networked system 102 using the client device 110.
The API server 120 and the web server 122 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 140. The application server 140 can host a support server 150, which can comprise one or more modules or applications 114, each of which can be embodied as hardware, software, firmware, or any combination thereof. The support server 150 can provide data that is not locally stored on the client device 110. For example, the item catalog, item images, and access to the data structure can be provided to the design replacement system 114 via the support server 150. The application server 140 is, in turn, shown to be coupled to a database server 124 that facilitates access to one or more information storage repositories, such as database 126. In an example embodiment, the database 126 comprises item data, images of items, user profiles, criteria data for styles, saved visual representations displaying arranged items, and the network graph data structure. In some embodiments, some or all of the data stored in database 126 is stored within client device 110 for local access that does not require network communications. Additionally, a third party application 132, executing on third party server 130, is shown as having programmatic access to the networked system 102 via the programmatic interface provided by the API server 120. For example, the third party application 132, utilizing information retrieved from the networked system 102, supports one or more features or functions on a website hosted by the third party.
Application server 140 can further host a network site 155, which may be a website that displays webpages for items. The network site 155 has an integrated search engine that allows users 106 to search for items. A webpage for a given item may display one or more images of the item and descriptive data (e.g., title, description of the item, style data), and may further be configured to allow a user 106 to purchase the item through the network site 155. Further, users 106 may have user profiles on the network site 155 that the users 106 can access by logging in using respective username and password combinations for the respective users 106. Further, each user profile may have an image gallery in which users 106 can save items. For example, an image gallery may be an image board, and each item webpage may have a “Save to my gallery” button that saves an item or webpage to the gallery in the user 106's profile on network site 155.
Further, while the client-server-based network architecture 100 shown in
As illustrated, the design replacement engine 114 comprises a criteria engine 200, a model engine 205, and a replacement engine 210. The criteria engine 200 manages receiving criteria data, e.g., a design motif, from a user 106. The model engine 205 is configured to manage generating representations of the physical items, such as the physical items placed in a two-dimensional representation of a room or the physical items placed in a floor plan showing the respective locations of the physical items in a room. The replacement engine 210 is configured to receive a request for replacement items, and identify items that satisfy criteria data and one or more attributes specified in the request.
At operation 310, the criteria engine 200 receives selections of items from a second user, made in accordance with the design motif. For example, after the first user selects his/her criteria data, including a design motif, the criteria data is transmitted to an additional client device of an expert user. The expert user is a user 106 deemed a design expert, e.g., deemed a design expert by the first user. In some example embodiments, the first user explicitly chooses a specific design expert as the expert user through a user interface provided by the design replacement system 114. In some example embodiments, once the design motif from the first user is known, the criteria engine 200 accesses a table in database 126 to select designers that specialize in the design motif of the first user. In some example embodiments, per the table in database 126, a plurality of designers are available and the first user can select one of the designers from the plurality as the expert user. Alternatively, the designer having the highest customer reviews is selected as the expert user, according to some example embodiments.
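A minimal sketch of this expert-selection step is shown below, assuming a designers table with hypothetical "specialties" and "review_score" fields; the disclosure does not specify the actual schema.

```python
def select_expert(designers, design_motif, chosen_by_user=None):
    """Return the explicitly chosen designer if one was picked by the first
    user; otherwise the designer specializing in the motif with the highest
    customer review score."""
    if chosen_by_user is not None:
        return chosen_by_user
    specialists = [d for d in designers if design_motif in d["specialties"]]
    if not specialists:
        return None
    return max(specialists, key=lambda d: d["review_score"])
```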
Once the expert user receives the design motif of the first user, the expert user can select items (e.g., couches, chairs, rugs, lamps) that, in the expert user's opinion, satisfy the design motif of the first user. The expert user can further arrange the selected items in a visual representation of an environment. For example, the first user, in the criteria data, may have specified that the design motif is for a given room. The expert user can use a floor plan of the given room to specify locations of the selected items as further discussed below. Further, the expert user can arrange the items in a 3D rendered model or photo of the room, e.g., by overlaying images of the selected items in a mock-up of the given room.
At operation 315, the model engine 205 displays the selected items in a design presentation. For example, the floor plan or image of the room comprising the arranged items is transmitted and displayed on a client device 110 of the first user.
At operation 320, the replacement engine 210 receives a request to modify one or more selected items relative to a specified attribute of the items. For example, the items initially selected and arranged by the expert user may not satisfy the first user. The first user can use a graphical user interface (GUI) element, such as a slider, to request different items that still match the design motif. The GUI element can be configured to select a new value from a range of values for a given attribute. For example, the first set of physical items arranged by the expert user may be too expensive for the first user. The first user, wanting the same design motif for the arranged items but at a lower cost range, can use the GUI element to select a lower value for the price attribute. The request is then received by the replacement engine 210 and processed at operation 325.
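One possible shape for the request generated by the GUI element is sketched below; the field names are illustrative only, as the disclosure does not specify a payload format.

```python
# Hypothetical modification request created when the first user moves the slider.
request = {
    "design_id": "design-123",   # arrangement prepared by the expert user
    "attribute": "price",        # attribute controlled by the GUI element
    "new_value": 2,              # newly selected level on the slider's range of values
    "item_ids": None,            # None = request replacements for all items in the design
}
```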
At operation 325, the replacement engine 210 generates different items as replacement items to include in the design. Notably, the request for replacement items is not merely a filter that cleaves off more expensive items from items displayed in the arrangement. Instead, the request is for different items that satisfy the new specified price attribute, and additionally satisfy the design motif in the arrangement as configured by the expert user. In this way, the first user can review searchable options of items in a way that maintains the design motif or style, and further maintains the expert arrangement of the expert user. At operation 330, the model engine 205 displays the replacement items in the representation of an environment (e.g., photo of a room, 3D rendered model of a room, floor plan of a room).
At operation 415, the replacement engine 210 determines the highest ranked items within the matching items that satisfy the constraint. For example, from the identified item nodes matching the new value, the highest ranked item nodes, according to user interactions, are selected at operation 415, according to some example embodiments. The user interactions are interactions of users 106 on a network site 155 (e.g., website, mobile device application) in which the items available for arrangement are displayed and searchable online. Examples of user interactions include views of a page (e.g., webpage, app page) that displays a given item, and saves of an item to a user 106's image gallery on the website. Accordingly, the highest ranked item nodes would be webpages of items that have the most page views (e.g., top ten most viewed), or items that have been most saved to a user 106's image gallery on the website to bookmark the respective items. At operation 420, the highest ranked of the matching items are displayed within the arranged design as replacement items. As discussed, a replacement item of a given class or type replaces another item of the same class or type (e.g., a lamp replaces a lamp). In some example embodiments, the highest ranking items are first shown in the arrangement, and the first user 106 can toggle to the other replacement items using toggle GUI controls as discussed below in further detail.
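A hedged sketch of the ranking in operation 415 follows. The disclosure names page views and gallery saves as interaction signals; summing them into one score is an assumption made only for illustration.

```python
def rank_candidates(candidate_nodes, top_n=10):
    """Rank item nodes that satisfy the constraint by user-interaction counts."""
    def interaction_score(node):
        # page views of the item's webpage plus saves to users' image galleries
        return node["page_views"] + node["gallery_saves"]
    return sorted(candidate_nodes, key=interaction_score, reverse=True)[:top_n]
```

The highest-ranked candidate of each class would be shown first in the arrangement, with the remaining candidates reachable through the toggle GUI controls.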
In some example embodiments, the model engine 205 displays a 3D model of the room based on a floor plan or measurements received from the user 106 (e.g., the non-expert user that selects the design motif). Further, the expert user selects one or more items, as discussed above, and then places them in the 3D model of the room via the model engine 205. The model engine 205 then outputs a 3D render of the objects in the room for display to the user 106 (e.g., non-expert user).
The user interface 1000 further comprises a GUI element 1015 that the user 106 can use to create a request for different items to replace the items in the design. In particular, the GUI element 1015 is a slider that is movable along a range of values, according to some example embodiments. The top of the range corresponds to the most expensive items and the bottom of the range corresponds to the least expensive items. The example GUI element 1015 has five notches that correspond to levels or thresholds that a user 106 can select to find replacement items for a given selected level. The available items (e.g., items in the catalog) may be pre-categorized in the data structure according to the five levels.
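For illustration, the five notch levels of the slider might map to price tiers as in the sketch below; the boundary values here are invented, and catalog items would be pre-categorized into these levels in the data structure.

```python
# Illustrative mapping from slider notches to price tiers; tier boundaries are assumptions.
PRICE_TIERS = {
    1: (0, 50),                # least expensive
    2: (50, 150),
    3: (150, 400),
    4: (400, 1000),
    5: (1000, float("inf")),   # most expensive
}

def tier_for_price(price):
    """Return the slider level whose price range contains the given price."""
    for level, (low, high) in PRICE_TIERS.items():
        if low <= price < high:
            return level
    return None
```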
Further, a user 106 may use the GUI element 1015 to find a replacement item for an individual item, instead of finding a full set of replacement items. For example, assume the user 106 is satisfied with the look and cost of the vase 1110C and lamps 1110A and 1110B; however, the user 106 thinks the stool 1110D costs too much or otherwise does not like the way it looks. The user 106 may select GUI element 1025 to individually select the stool 1110D for replacement. In particular, the user 106 may select GUI element 1025, and move the GUI element 1015 to a new level by sliding it down the range of values. As a result, new replacement items are found just for the stool 1110D. Other items may have similar toggle elements (e.g., GUI elements 1020) and individual selector elements (e.g., GUI element 1025) to more finely tune a design.
Interactions between users 106 and items are represented by edges, lines, or arrows connecting the nodes. As discussed above, the interactions may be user interactions between a user 106 of a network site 155 and items displayed on the network site 155. For example, the expert node 1205 has “liked” vase 1215 and vase 1240 on a network site 155, and the interactions are displayed as relationship arrows from expert node 1205 to vase 1215 and vase 1240. Similarly, the non-expert user 1210 has “liked” vase 1220 and 1230. Further, other users 1255 may likewise interact with nodes, as illustrated by the interactions arrow from other users 1255 to vase 1225.
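The network graph of user nodes, item nodes, and interaction edges could be represented, for example, with the networkx library, as in the sketch below. This is one possible representation; the attribute names and the price-tier assignments are assumptions made for illustration.

```python
import networkx as nx

graph = nx.Graph()

# User nodes
graph.add_node("expert_1205", kind="user", role="expert")
graph.add_node("user_1210", kind="user", role="non-expert")
graph.add_node("other_users_1255", kind="user", role="other")

# Item nodes (tier values are illustrative)
for vase in ("vase_1215", "vase_1220", "vase_1225", "vase_1230", "vase_1240"):
    graph.add_node(vase, kind="item", item_class="vase", tier="medium")

# Interaction edges ("likes" and other interactions on the network site)
graph.add_edge("expert_1205", "vase_1215", interaction="like")
graph.add_edge("expert_1205", "vase_1240", interaction="like")
graph.add_edge("user_1210", "vase_1220", interaction="like")
graph.add_edge("user_1210", "vase_1230", interaction="like")
graph.add_edge("other_users_1255", "vase_1225", interaction="like")
```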
Although only vase nodes are displayed for clarity, it is appreciated that the network graph data structure 1200 may comprise nodes for every item available or viewable on the network site 155. Assuming, as an example, the user 106 is submitting a request to find different items for the vase in a given design (e.g.,
Further, responsive to the request, item nodes having the specified attribute are identified. Thus, in the example illustrated, medium tier items 1250, which have medium cost, are identified, and high end items 1245 are excluded. Further, responsive to the request, user interactions are used to identify specific replacement items, as discussed with reference to
Further, as discussed above, other attributes of nodes can be used to find replacement items. For example, assume other users 1255 of the network site 155 most like, save, highly review, or purchase vase 1225. Further assume vase 1235 was included in the first arrangement of items and the user 106 submitted a request for replacement items. Responsive to the request, the replacement engine 210 may select the most interacted with item as the replacement item. Thus, the item node for vase 1225 is identified and used to replace vase 1235. The user 106 may then toggle to the second most interacted with item using toggle controls as discussed above.
In some example embodiments, replacement items are the items that are most visually similar to the initial items. Visual similarity can be learned by applying machine learning (e.g., neural networks, support vector machines) to images of the item nodes and assigning similarity scores to items that have similar shape or form with respect to one another. For example, assume vase 1235 is initially included in a design by the expert user. Further assume that the replacement engine 210 has used a machine learning module to create visual similarity scores for each of the item nodes 1215, 1220, 1225, which are from the medium tier items 1250. The similarity score may be stored in a relationship (e.g., edge or arrow) between nodes. Responsive to the request for new different items at a new value (e.g., medium price value), the replacement engine 210 may select, as a replacement item, the item node that has the highest visual similarity score to vase 1235. In the example of
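A hedged sketch of this selection step follows. The similarity scores are assumed to have been precomputed (e.g., by a neural network over item images) and stored on edges between nodes; the numeric values here are invented for illustration and do not come from the disclosure.

```python
def most_similar_replacement(original_id, candidate_ids, similarity):
    """Pick the candidate with the highest precomputed visual-similarity score
    to the original item; `similarity` is keyed by (original, candidate) pairs."""
    return max(candidate_ids, key=lambda c: similarity[(original_id, c)])

# Invented example scores among the medium-tier vase nodes.
similarity = {
    ("vase_1235", "vase_1215"): 0.91,
    ("vase_1235", "vase_1220"): 0.62,
    ("vase_1235", "vase_1225"): 0.78,
}
replacement = most_similar_replacement(
    "vase_1235", ["vase_1215", "vase_1220", "vase_1225"], similarity)
```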
Further, in some example embodiments, edges may connect item nodes that correspond to items depicted in the same image. For example, an image of a living room may depict items such as lamp_1, chair_1, and footstool_1. The nodes that correspond to lamp_1, chair_1, and footstool_1 then have edges connecting them to each other, denoting the relationship of appearing in the same image together.
The edge relationship of appearing in the same image together can be used to find replacement items per a design motif, according to some example embodiments. Continuing the example, further assume that a different image of a different living room depicts lamp_2, chair_1, and footstool_2 (where chair_1 is the same chair that appears in both images of the different living rooms). The item node corresponding to chair_1 then may have edges that connect it to lamp_1, lamp_2, footstool_1, and footstool_2. If a user seeks to replace lamp_1, lamp_2 may be recommended because chair_1 has been depicted with both lamp_1 and lamp_2, albeit in different images. That is, in other words, a query is generated to request any lamps that appear with items that are themselves depicted with lamp_1. As a result, because chair_1 is depicted with lamp_2 in the different image of the different living room, lamp_2 is recommended as a replacement item. This approach can capture relationships between items that do not necessarily share the same attributes. For example, lamp_1, chair_1, and footstool_1 may all be of different design styles, but still be selected by an expert designer as being stylish together. Likewise for lamp_2, chair_2, and footstool_2; that is, lamp_2, chair_2, and footstool_2 may share attributes or may have entirely different attributes but nonetheless are assumed to be related to each other because a designer has arranged them together. More complex queries are also available in the appeared-together approach (e.g., requesting a replacement item using an appeared-together-in-the-same-image relationship and further requesting that the replacement item be less than a certain dollar amount).
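The appeared-together query can be sketched as a simple two-hop traversal of the co-appearance edges, as below. The graph layout mirrors the lamp/chair/footstool example above; the class-by-prefix check and the optional price filter (the "more complex query" variant) are simplifications for illustration.

```python
def lamps_appearing_with_neighbors(graph, lamp_id, prices=None, max_price=None):
    """Recommend lamps that share a co-appearance edge with an item that is
    itself depicted with `lamp_id`, optionally filtered by price."""
    recommendations = set()
    for neighbor in graph[lamp_id]:            # items depicted with lamp_id
        for candidate in graph[neighbor]:      # items depicted with those items
            if candidate == lamp_id or not candidate.startswith("lamp"):
                continue
            if max_price is not None and prices is not None \
                    and prices.get(candidate, 0) > max_price:
                continue
            recommendations.add(candidate)
    return recommendations

# Co-appearance edges derived from the two living-room images in the example.
co_appearance = {
    "lamp_1": {"chair_1", "footstool_1"},
    "chair_1": {"lamp_1", "footstool_1", "lamp_2", "footstool_2"},
    "footstool_1": {"lamp_1", "chair_1"},
    "lamp_2": {"chair_1", "footstool_2"},
    "footstool_2": {"chair_1", "lamp_2"},
}
print(lamps_appearing_with_neighbors(co_appearance, "lamp_1"))  # {'lamp_2'}
```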
Further, in some example embodiments, the item(s) suggested need not necessarily replace a previously suggested item. In particular, for example, the edge relationships of the design graph may be used to suggest a new item of a type different from those already selected. Using the example above, an expert has selected chair_1 and footstool_1. The user may request a lamp that matches chair_1 and footstool_1. In response to the user's request, the design graph is utilized to recommend lamp_1 based on lamp_1 appearing in images with chair_1 and footstool_1 (e.g., thereby sharing edges in the design graph).
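Under the same assumptions as the previous sketch, this suggestion variant reduces to intersecting the co-appearance neighborhoods of the already-selected items and keeping candidates of the requested class; the function name and prefix check are hypothetical.

```python
def suggest_new_item(graph, selected, wanted_class_prefix="lamp"):
    """Suggest an item of a new type that shares co-appearance edges with every
    already-selected item."""
    candidates = set.intersection(*(graph[s] for s in selected))
    return {c for c in candidates if c.startswith(wanted_class_prefix)}

co_appearance = {
    "chair_1": {"lamp_1", "footstool_1", "lamp_2", "footstool_2"},
    "footstool_1": {"lamp_1", "chair_1"},
}
print(suggest_new_item(co_appearance, ["chair_1", "footstool_1"]))  # {'lamp_1'}
```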
Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules can constitute either software modules (e.g., code embodied on a machine-readable medium) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and can be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) can be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In some embodiments, a hardware module can be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module can include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware module can be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module can include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware modules become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) can be driven by cost and time considerations.
Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein, “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules can be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications can be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module can perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module can then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules can also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The various operations of example methods described herein can be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors.
Similarly, the methods described herein can be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method can be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network 104 (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)).
The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented modules can be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented modules are distributed across a number of geographic locations.
The modules, methods, applications 114 and so forth described in conjunction with
The machine 1300 can include processors 1310, memory/storage 1330, and I/O components 1350, which can be configured to communicate with each other such as via a bus 1302. In an example embodiment, the processors 1310 (e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) can include, for example, processor 1312 and processor 1314 that may execute instructions 1316. The term “processor” is intended to include multi-core processor 1310 that may comprise two or more independent processors 1312, 1314 (sometimes referred to as “cores”) that can execute instructions 1316 contemporaneously. Although
The memory/storage 1330 can include a memory 1332, such as a main memory, or other memory storage, and a storage unit 1336, both accessible to the processors 1310 such as via the bus 1302. The storage unit 1336 and memory 1332 store the instructions 1316, embodying any one or more of the methodologies or functions described herein. The instructions 1316 can also reside, completely or partially, within the memory 1332, within the storage unit 1336, within at least one of the processors 1310 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1300. Accordingly, the memory 1332, the storage unit 1336, and the memory of the processors 1310 are examples of machine-readable media.
As used herein, the term “machine-readable medium” means a device able to store instructions 1316 and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., electrically erasable programmable read-only memory (EEPROM)) or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions 1316. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions 1316) for execution by a machine (e.g., machine 1300), such that the instructions 1316, when executed by one or more processors of the machine 1300 (e.g., processors 1310), cause the machine 1300 to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se.
The I/O components 1350 can include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 1350 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 1350 can include many other components that are not shown in
In further example embodiments, the I/O components 1350 can include biometric components 1356, motion components 1358, environmental components 1360, or position components 1362 among a wide array of other components. For example, the biometric components 1356 can include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components 1358 can include acceleration sensor components (e.g., an accelerometer), gravitation sensor components, rotation sensor components (e.g., a gyroscope), and so forth. The environmental components 1360 can include, for example, illumination sensor components (e.g., a photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., a barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensor components (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 1362 can include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication can be implemented using a wide variety of technologies. The I/O components 1350 may include communication components 1364 operable to couple the machine 1300 to a network 1380 or devices 1370 via a coupling 1382 and a coupling 1372, respectively. For example, the communication components 1364 include a network interface component or other suitable device to interface with the network 1380. In further examples, communication components 1364 include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. The devices 1370 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)).
Moreover, the communication components 1364 can detect identifiers or include components operable to detect identifiers. For example, the communication components 1364 can include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as a Universal Product Code (UPC) bar code, multi-dimensional bar codes such as a Quick Response (QR) code, Aztec Code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components 1364, such as location via Internet Protocol (IP) geo-location, location via WI-FI® signal triangulation, location via detecting a BLUETOOTH® or NFC beacon signal that may indicate a particular location, and so forth.
In various example embodiments, one or more portions of the network 1380 can be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks. For example, the network 1380 or a portion of the network 1380 may include a wireless or cellular network, and the coupling 1382 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling 1382 can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology.
The instructions 1316 can be transmitted or received over the network 1380 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 1364) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, the instructions 1316 can be transmitted or received using a transmission medium via the coupling 1372 (e.g., a peer-to-peer coupling) to devices 1370. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 1316 for execution by the machine 1300, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.
The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.