This description relates to techniques for adaptive rendering by using dynamically adjustable distance fields.
With the increased use of electronically presented content for conveying information, more electronic displays are being incorporated into objects (e.g., vehicle dashboards, entertainment systems, cellular telephones, eReaders, etc.) or produced for standalone use (e.g., televisions, computer displays, etc.). With such a variety of uses, electronic displays may be found in nearly every geographical location, in stationary applications (e.g., presenting imagery in homes, offices, etc.), mobile applications (e.g., presenting imagery in cars, airplanes, etc.), etc. Further, such displays may be used for presenting various types of content, such as still imagery; textual content such as electronic mail (email), documents, web pages, electronic books (ebooks), and magazines; and video, along with other types of content such as audio.
The systems and techniques described here relate to producing and using distance fields for presenting glyphs based upon environmental conditions, and to potentially adjusting the distance field rendering process to dynamically provide a reasonably consistent viewing experience to a viewer.
In one aspect, a computer-implemented method includes receiving data representing a portion of a graphical object, and receiving data representative of one or more environmental conditions. For the portion of the graphical object, the method includes defining a field of scalar values to present the graphical object on a display, wherein each scalar value is based on a distance between the portion of the graphical object and a corresponding point. The method also includes calculating one or more visual property values based on the scalar values and the one or more environmental conditions, and presenting the graphical object using the calculated one or more visual property values.
Implementations may include one or more of the following features. Calculating the one or more visual property values may include using a modulation function for mapping the scalar values to pixel values. Calculating the one or more visual property values may include adjusting a parameter of the modulation function, based on the one or more environmental conditions, for mapping the scalar values to pixel values. The parameter may represent one of stroke weight and edge sharpness. The modulation function may be a continuous stroke modulation. One of the one or more environmental conditions may represent the physical orientation of the display, ambient light, etc. The corresponding point may represent a pixel of the display, a sub-pixel of the display, etc. Each scalar value may be based on a distance to the portion of the graphical object nearest to the corresponding point. The graphical object may be a glyph. Environmental information may include user-related information such as a user-specified preference, presence of a user, etc.
In another aspect, a system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor to execute the instructions to perform operations that include receiving data representing a portion of a graphical object, and, receiving data representative of one or more environmental conditions. For the portion of the graphical object, operations include defining a field of scalar values to present the graphical object on a display, wherein each scalar value is based on a distance between the portion of the graphical object and a corresponding point. Operations also include calculating one or more visual property values based on the scalar values and the one or more environmental conditions, and presenting the graphical object using the calculated one or more visual property values.
Implementations may include one or more of the following features. Calculating the one or more visual property values may include using a modulation function for mapping the scalar values to pixel values. Calculating the one or more visual property values may include adjusting a parameter of the modulation function, based on the one or more environmental conditions, for mapping the scalar values to pixel values. The parameter may represent one of stroke weight and edge sharpness. The modulation function may be a continuous stroke modulation. One of the one or more environmental conditions may represent the physical orientation of the display, ambient light, etc. The corresponding point may represent a pixel of the display, a sub-pixel of the display, etc. Each scalar value may be based on a distance to the portion of the graphical object nearest to the corresponding point. The graphical object may be a glyph. Environmental information may include user-related information such as a user-specified preference, presence of a user, etc.
In another aspect, one or more computer readable media store instructions that are executable by a processing device and that, upon such execution, cause the processing device to perform operations that include receiving data representing a portion of a graphical object and receiving data representative of one or more environmental conditions. For the portion of the graphical object, operations include defining a field of scalar values for presenting the graphical object on a display, wherein each scalar value is based on a distance between the portion of the graphical object and a corresponding point. Operations also include calculating one or more visual property values based on the scalar values and the one or more environmental conditions, and presenting the graphical object using the calculated one or more visual property values.
Implementations may include one or more of the following features. Calculating the one or more visual property values may include using a modulation function for mapping the scalar values to pixel values. Calculating the one or more visual property values may include adjusting a parameter of the modulation function, based on the one or more environmental conditions, for mapping the scalar values to pixel values. The parameter may represent one of stroke weight and edge sharpness. The modulation function may be a continuous stroke modulation. One of the one or more environmental conditions may represent the physical orientation of the display, ambient light, etc. The corresponding point may represent a pixel of the display, a sub-pixel of the display, etc. Each scalar value may be based on a distance to the portion of the graphical object nearest to the corresponding point. The graphical object may be a glyph. Environmental information may include user-related information such as a user-specified preference, presence of a user, etc.
These and other aspects and features and various combinations of them may be expressed as methods, apparatus, systems, means for performing functions, program products, and in other ways.
Other features and advantages will be apparent from the description and the claims.
To sense environmental conditions that may affect the presentation of content, one or more techniques and methodologies may be implemented. For example, passive and active sensor technology may be utilized to collect information reflective of the environmental conditions experienced by electronic displays. In this illustrated example, a sensor 206 (e.g., a light sensor) is embedded into the dashboard of the vehicle 200 at a location that is relatively proximate to the electronic display 202. In some arrangements, one or more such sensors may be located closer to or farther from the electronic display. Sensors may also be included in the electronic display itself; for example, one or more light sensors may be incorporated such that their sensing surfaces are substantially flush with the surface of the electronic display. Sensors and/or arrays of sensors may be mounted throughout the vehicle 200 for collecting such information (e.g., sensing devices, sensing material, etc. may be embedded into windows of the vehicle, mounted onto various internal and external surfaces of the vehicle, etc.). Sensing functionality may also be provided from other devices that include sensors not incorporated into the vehicle; for example, the sensing capability of computing devices (e.g., a cellular telephone 208) may be exploited for collecting environmental conditions. Once collected, the computing device may provide the collected information for assessing the environmental conditions (e.g., incident ambient light) being experienced by the electronic display. In the illustrated example, the cellular telephone 208 may collect and provide environmental condition information to assess the current conditions being experienced by the electronic display 202. Various types of technology may be used to provide this information; for example, one or more wireless links (e.g., radio frequency, light emissions, etc.) may be established and protocols (e.g., Bluetooth, etc.) used to provide the collected information.
Along with natural conditions (e.g., ambient light, etc.), environmental conditions may also include other types of information. For example, information associated with one or more viewers of the electronic display may be collected and used for presenting content. Viewer-related information may be collected, for example, from the viewer or from information sources associated with the viewer. With reference to the illustrated vehicle 200, information may be collected for estimating the perspective at which the viewer sees the electronic display 202. For example, information may be provided based upon actions of the viewer (e.g., the position of a car seat 210 used by the viewer, any adjustments to the position of the seat as controlled by the viewer, etc.). In some arrangements, multiple viewers (e.g., present in the vehicle 200) may be monitored and one or more displays may be adjusted (e.g., adjusting the content rendering on the respective display being viewed). For example, a heads-up display may be adjusted for the driver of a vehicle while a display incorporated into the rear of the driver's seat may be adjusted for a backseat viewer. Viewer activity may also be considered an environmental condition that can be monitored and provide a trigger event for adjusting the rendering of content on one or more displays. Such activities may be associated with controlling conditions internal or external to the vehicle 200 (e.g., in view of weather conditions, time of day, season of year, etc.). For example, lighting conditions within the cabin of the vehicle 200 (e.g., turning on one or more lights, raising or lowering the roof of a convertible vehicle, etc.) may be controlled by the viewer and used to represent the environmental conditions. In some arrangements, viewer activities may also include relatively simple viewer movements. For example, the eyes of a viewer (e.g., the driver of a vehicle) may be tracked (e.g., by a visual eye tracking system incorporated into the dashboard of a vehicle) and corresponding adjustments executed to the rendering of display content (e.g., adjusting content rendering during time periods when the driver is focused on the display).
Other information may also be collected that is associated with one or more viewers of the electronic display. For example, characteristics of each viewer (e.g., height, gender, location in a vehicle, one or more quantities representing their eyesight, etc.) may be collected, along with information about the viewer's vision (e.g., whether the viewer wears prescription glasses, contacts, or sunglasses, has one or more medical conditions, etc.). Viewer characteristics may also be passively collected, as compared to being actively provided by the viewer. For example, the presence, identity, etc. of a viewer may be detected using one or more techniques. In one arrangement, a facial recognition system (e.g., incorporated into the vehicle, a device residing within the vehicle, etc.) may be used to detect the face of one or more viewers (e.g., the driver of the vehicle). The facial expression of the viewer may also be identified by the system and corresponding action taken (e.g., if the viewer's eyes are squinted or if an angry facial expression is detected, appropriately adjust the rendering of the content presented on the electronic display). One or more feedback techniques may be implemented to adjust content rendering based upon, for example, viewer reaction to previous adjustments (e.g., the facial expression of an angry viewer changes to indicate pleasure, more intense anger, etc.). Other types of information may also be collected from the viewer; for example, user preferences may be collected from a viewer, provided by the system, etc. for adjusting content rendering. Audio signals such as speech may also be collected (e.g., from one or more audio sensors) and used to determine if content rendering should be adjusted to assist the viewer. Other types of audio content may also be collected; for example, audio signals may be collected from other passengers in the vehicle to determine if rendering should be adjusted (e.g., if many passengers are talking in the vehicle, the content rendering may be adjusted to ease the driver's ability to read the content). Audio content may also be collected external to the vehicle to provide a measure of the vehicle's environment (e.g., in a busy urban setting, in a relatively quiet rural location, etc.). Position information provided from one or more systems (e.g., a global positioning system (GPS)) present within the vehicle and/or located external to the vehicle may be used to provide information regarding environmental conditions (e.g., position of the vehicle, direction of travel, etc.) and used to determine if content rendering should be adjusted. In this particular example, a content rendering engine 212 is included within the dashboard of the vehicle 200 and processes the provided environmental information and correspondingly adjusts the presented content, if needed. One or more computing devices incorporated into the vehicle 200 may provide a portion of the functionality of the content rendering engine 212. Computing devices separate from the vehicle may also be used to provide the functionality; for example, one or more computing devices external to the vehicle (e.g., one or more remotely located servers) may be used in isolation or in concert with the computational capability included in the vehicle. One or more devices present within the vehicle (e.g., the cellular telephone 208) may be utilized for providing the functionality of the content rendering engine 212.
Environmental conditions may also include other types of detected information, such as information associated with the platform within which content is being displayed. For example, objects such as traffic signs, construction site warning lights, store fronts, etc. may be detected (e.g., by one or more image collecting devices incorporated into the exterior or interior of a vehicle) and have representations prepared for presenting to occupants of the vehicle (e.g., the driver). Based upon the identified content, the rendering of the corresponding representations may be adjusted, for example to quickly grab the attention of the vehicle driver (e.g., to warn that the vehicle is approaching a construction site, a potential or impending accident with another car, etc.). In some arrangements, input provided by an occupant (e.g., indicating an interest in finding a particular restaurant, style of restaurant, etc.) may be used to signify when rendering adjustments should be executed (e.g., when a Chinese restaurant is detected by the vehicle cameras, rendering is adjusted to alert the driver to the nearby restaurant).
Different conventions may be used for indicating whether a point is located inside or outside the outline of the glyph. For one convention, negative distances may be used to define points located exterior to the outline of the glyph and positive distances used to define points located interior to the glyph outline. Once distances are determined, the distance field values may be used to determine the visual property values (e.g., a gray scale value, a density value, etc.) for the individual points (e.g., pixels, RGB sub-pixels, etc.). For example, one or more thresholds may be defined for use with the distance field values to determine the visual property values to be presented for the corresponding point or points. In one arrangement, one threshold may define a cutoff distance beyond which a minimum visual property value is assigned (e.g., a fully transparent density value of 0). Similarly, another threshold may define an interior cutoff distance within which a maximum visual property value is assigned (e.g., a completely opaque density value of 255). Along with using thresholds to establish clear cutoff distances, one or more techniques may be implemented for mapping distances (from a distance field) to visual property values (e.g., pixel values, sub-pixel values, etc.). For example, distance values from a distance field such as an adaptively sampled distance field (ADF) may be processed (e.g., by anti-aliasing processing) to reduce the jaggedness of edges and corners being presented. Such ADFs may be considered explicit ADFs that are produced by using a top-down spatial subdivision to generate a spatial hierarchy of explicit cells, where each explicit cell contains a set of sampled distance values. One or more reconstruction techniques may be implemented to reconstruct the distance field within each explicit cell and map the reconstructed distances to appropriate density values. Alternatively, rather than initially producing cells, implicit ADF cells may be generated during rendering (e.g., in an on-demand manner). For this technique, preprocessing is executed (e.g., on data that represents a glyph) and implicit ADF cells are generated and rendered by first reconstructing the distance field within the implicit ADF cell and then mapping the reconstructed distances to the appropriate density values.
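As a concrete illustration of these conventions and cutoffs, the following Python sketch computes the distance from a pixel center to a single straight outline segment and maps a signed distance to an 8-bit density value using two thresholds. The segment geometry, the cutoff values, and all names are illustrative assumptions rather than details drawn from this description; determining the sign (interior versus exterior) depends on the full glyph outline and is assumed to be available.

    import math

    def distance_to_segment(px, py, ax, ay, bx, by):
        # Unsigned distance from the point (px, py) to the line segment
        # with endpoints (ax, ay) and (bx, by).
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / float(dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))            # clamp to the segment
        cx, cy = ax + t * dx, ay + t * dy    # closest point on the segment
        return math.hypot(px - cx, py - cy)

    def density_from_signed_distance(d, outside_cutoff=-1.5, inside_cutoff=0.5):
        # Convention from above: positive d is interior to the outline,
        # negative d is exterior. Beyond outside_cutoff the point is fully
        # transparent (0); within inside_cutoff it is completely opaque
        # (255); between the two cutoffs the density ramps linearly.
        if d <= outside_cutoff:
            return 0
        if d >= inside_cutoff:
            return 255
        return round(255 * (d - outside_cutoff) / (inside_cutoff - outside_cutoff))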
One or more techniques may be implemented for mapping distances from a distance field (e.g., an ADF) to glyph pixel values or other types of visual property values. In general, mapping such quantities can be considered as changing a value that represents one quantity (e.g., distance) into another value that represents another quantity (e.g., a numerical value that represents a pixel color, density, etc.). For one technique, a modulation function may be applied to the scalar distance field values to produce visual property values such as pixel values (e.g., for a bitmap image of the glyph). In one arrangement, the modulation function may be a continuous stroke modulation (CSM) that is capable of modulating one or more visual properties when producing pixel values. For example, one or more parameters of the CSM may be used to control the mapping of the distance field values to pixel values (or other types of visual property values). Such parameters may represent various visual properties and other aspects for presenting glyphs or other types of graphical objects. For example, one CSM parameter may represent stroke weight and another parameter may represent edge sharpness. By changing these parameters, the appearance of glyphs, characters, fonts, etc. may be adjusted to appear, for example, sharper, blurred, thinner, thicker, etc.
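One possible shape for such a modulation function is sketched below; this is an assumed formulation for illustration only, not the actual CSM formulation. Here the stroke weight parameter shifts the apparent glyph edge outward or inward, and the edge sharpness parameter scales the width of the transition band between transparent and opaque.

    def modulate(d, stroke_weight=0.0, edge_sharpness=1.0):
        # d is a signed distance field value (positive inside the outline).
        # stroke_weight > 0 dilates the strokes (thicker glyphs) while
        # stroke_weight < 0 erodes them; larger edge_sharpness narrows the
        # transition band (crisper edges) while smaller values soften it.
        v = 0.5 + edge_sharpness * (d + stroke_weight)
        v = max(0.0, min(1.0, v))            # clamp to [0, 1]
        return round(255 * v)                # 8-bit density value

Under this assumed formulation, modulate(d, stroke_weight=0.3) would render heavier strokes, while modulate(d, edge_sharpness=0.5) would produce a softer, more blurred edge.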
Various techniques may be implemented for selecting information associated with applying a modulation function to map distance field values to visual property values. For example, one or more parameters of the modulation function may be selected, one or more values for the individual parameters may be selected, etc. In one arrangement, selections may be executed through an interface that presents appropriate information to a user (e.g., selectable modulation function parameters, selectable parameter values, one or more data fields for user-entered parameter values, etc.). Other types of information and data may also be used for selecting parameters, determining parameter values, etc. For example, one or more environmental conditions may be used for selecting parameters (e.g., stroke weight, edge sharpness, etc.), parameter values, etc. for mapping distance field values to corresponding pixel values. In concert with environmental conditions, other types of conditions may factor into selecting parameters (e.g., modulation function parameters), selecting or determining values (e.g., parameter values), etc. For example, user preferences (e.g., user-specified preferences), user-related information (e.g., system-collected viewer characteristics, viewer-provided characteristics, detected presence of a particular viewer, etc.), properties of a display or a display device (e.g., display size, resolution, etc.), display characteristics (e.g., foreground and background colors, etc.), font type (e.g., a scalable font such as an OpenType font), and font characteristics (e.g., point size, font attributes such as a bold typeface, etc.) may be used in the determinations.
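For instance, a selection policy might map a measured ambient light level and a user preference to modulation function parameter values. The sketch below shows one such policy; the lux thresholds, parameter values, and names are entirely assumed for illustration.

    def select_csm_parameters(ambient_lux, user_weight_offset=0.0):
        # Assumed policy: brighter ambient light calls for heavier strokes
        # and sharper edges to preserve legibility; dim light calls for
        # lighter strokes and softer edges.
        if ambient_lux > 10000:     # e.g., direct sunlight on the display
            weight, sharpness = 0.4, 2.0
        elif ambient_lux > 500:     # ordinary indoor or daylight viewing
            weight, sharpness = 0.1, 1.2
        else:                       # dim cabin or nighttime viewing
            weight, sharpness = -0.1, 0.8
        return weight + user_weight_offset, sharpness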
Once the modulation function parameters (e.g., CSM parameters), parameter values, etc. are determined, one or more operations may be executed. For example, along with using the parameters with the modulation function to map distance field values to pixel values, the parameters may be stored for later retrieval and use. Such storage may be local to the device presenting the text, or may reside at one or more devices external to the presentation device. For example, the parameters, information associated with the parameters, etc. may be remotely stored at a server (or a collection of servers) for use in other situations. In some arrangements, the parameters and associated information may be provided as a service through a service provider or other type of entity. Different types of storage techniques may be implemented; for example, one or more tables may be employed to store the parameters and corresponding information. In some arrangements, the parameters and/or associated information (e.g., parameter values) may be stored with one or more assets. For example, the parameters and associated information may be stored in a file that also stores an asset (e.g., CSM parameters may be stored in a file that contains the data of an electronic document). By commonly storing the parameters and the asset, one or more operations could be executed that use both sets of stored information. For example, along with presenting the stored asset (e.g., the text of an electronic document), the stored parameters and related information (e.g., parameter values) may be presented (e.g., to a user). By presenting such information, the user can interact with the presented information (e.g., edit, adjust, etc. parameters and parameter values through a user interface, application, etc.) for adjusting the presented text (e.g., substantially in real time) as desired.
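As one illustration of commonly storing parameters with an asset, a file for an electronic document might carry the modulation parameters alongside the document data so both can be retrieved and edited together. The layout below is purely hypothetical.

    import json

    document_file = {
        "asset": {"type": "text", "body": "The quick brown fox..."},
        "csm_parameters": {"stroke_weight": 0.1, "edge_sharpness": 1.2},
    }
    with open("document_with_parameters.json", "w") as f:
        json.dump(document_file, f, indent=2)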
To demonstrate a distance field representation, consider a glyph segment presented on a grid of display pixels, as illustrated in the figure.
The modulation function, one or more parameters of the modulation function, etc. may be based upon one or more conditions such as environmental conditions. For example, based on an environmental condition, one or more modulation functions may be selected for assigning visual property values to pixels based upon their corresponding distance field values. Similarly, one or more parameters associated with the modulation function (or multiple modulation functions) may be selected, adjusted, etc. based upon a condition such as an environmental condition.
In the illustrated example, the distance field and associated distance field values can be considered as being fixed after being calculated. Since the pixels of the grid 402 correspond to display pixels (e.g., pixels of a computing device display), the relationship between the glyph segment 400 and the pixels typically remains unchanged. For example, the minimum distance between each pixel of the grid 402 and the closest point of the glyph segment may be constant, along with the corresponding scalar values of a calculated distance field. As such, the mapping of the scalar values of the distance field to corresponding visual property values may only change based upon the technique used to perform the mapping (e.g., the modulation function, parameters of the modulation function, etc.). However, in some situations, the relationship between the grid pixels and the presented glyph segment may change, thereby causing the distance field, the scalar values of the distance field, etc. to change. For example, changes to one or both of the endpoints used to define the minimum distance between a pixel and a glyph segment can cause the corresponding scalar value of the distance field to change. In one situation that could cause such a change, the orientation of the grid may be changed while still providing substantially the same viewing perspective of the presented glyph. For example, a computing device presenting the glyph (e.g., a tablet computing device) may be physically rotated 90° clockwise (as indicated by the arched line 408). To account for this orientation change and still present the same view, the glyph segment may be similarly rotated (e.g., rotated 90° clockwise). However, as illustrated in the figure, while the glyph segment 400 has been rotated to present the same view, the distance between the grid pixels and the glyph segment may change. For example, while the pixel 404 still resides at the same location of the display, due to the 90° clockwise rotation the pixel 404 is now located differently with respect to the glyph segment 400. As such, the distance between the center of the pixel 404 and the closest point of the glyph segment 400 has changed (as represented by line 410). In this example, the distance has decreased (as indicated by comparing the length of line 406 to the line 410). As such, the distance field needs to be recalculated based upon this orientation change of the computing device. Further, the mapping of the recalculated scalar values of the distance field to corresponding visual characteristics (e.g., gray scale values) may need to be similarly recalculated. For illustration in this example, based upon the new distance value (represented by the line 410) indicating that pixel 404 is closer to the glyph segment 400 than the previous distance value (represented by the line 406), a darker gray scale level is assigned to the pixel.
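The recalculation described above can be sketched as follows, with assumed segment endpoints, grid center, and pixel center. After the glyph segment is rotated 90° clockwise about the grid center, the distance from an unchanged pixel center to the segment generally differs from its previous value, so the field must be recomputed.

    import math

    def rotate_90_cw(x, y, cx, cy):
        # Rotate the point (x, y) 90 degrees clockwise about (cx, cy).
        return cx + (y - cy), cy - (x - cx)

    def distance_to_segment(px, py, ax, ay, bx, by):
        # Unsigned distance from (px, py) to the segment (ax, ay)-(bx, by).
        dx, dy = bx - ax, by - ay
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    cx, cy = 4.0, 4.0                              # center of an 8x8 pixel grid
    (ax, ay), (bx, by) = (1.0, 2.0), (6.0, 3.0)    # original segment endpoints
    px, py = 2.5, 5.5                              # a pixel center that does not move
    old_d = distance_to_segment(px, py, ax, ay, bx, by)
    nax, nay = rotate_90_cw(ax, ay, cx, cy)        # endpoints after rotation
    nbx, nby = rotate_90_cw(bx, by, cx, cy)
    new_d = distance_to_segment(px, py, nax, nay, nbx, nby)
    # old_d != new_d in general, so the scalar field (and the gray scale
    # values mapped from it) must be recomputed after the orientation change.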
Other situations may similarly occur in which the distance field may need to be recalculated. For example, the geometry of individual pixels and the segments (sub-pixels) that form individual pixels may impact image rendering.
In some arrangements, pixels and sub-pixels may have symmetric geometries, such as the collection of hexagon-shaped sub-pixels 430 and the pixels and sub-pixels shown in the figure.
Along with operations associated with using a modulation function to map distance field values (e.g., selecting one or more parameters, setting and/or adjusting parameter values, etc.), one or more techniques and methodologies may be used by the content rendering engine 602 to present and adjust the presentation of content. For example, the content to be presented may be adjusted to improve its legibility based upon the provided environmental conditions. Adjustments may include changes to the rendering of the content being presented. For example, for textual content, the weight and sharpness of the text may be controlled. Similarly, the contrast between brighter and dimmer portions of the text may be adjusted to improve legibility. Linear and nonlinear operations associated with coding and decoding values such as luminance values (e.g., gamma correction) may similarly be adjusted for textual content. Geometrical shapes associated with text (e.g., line thickness, font type, etc.) along with visual characteristics (e.g., text color, shadowing, shading, font hinting, etc.) may be adjusted by the content rendering engine 602 due to changes in pixel geometry and/or one or more environmental conditions.
The techniques and methodologies for adjusting content presentation may also include adjusting parameters of the one or more electronic displays being used to present the content. For example, lighting parameters of a display (e.g., foreground lighting levels, back lighting levels, etc.), resolution of the display, the number of bits used to represent the color of a pixel (e.g., color depth), colors associated with the display (e.g., color maps), and other parameters may be changed for adjusting the presented content.
One or more operations and algorithms may be implemented to identify appropriate adjustments for content presentation. For example, based upon one or more of the provided environmental conditions and the content (e.g., text) to be presented, one or more substantially optimal rendering parameters (e.g., in addition to modulation function parameters) may be identified, along with appropriate values, by the content rendering engine 602. Once identified, the parameters may be used by the computer system 600, provided to one or more other computing devices, etc. for adjusting the content for presentation on one or more electronic displays. One or more techniques may be utilized to trigger the determination of presenting content with or without adjustments. For example, one or more detected events (e.g., a user input selection, etc.) may be defined to initiate the operations of the content rendering engine 602. Adjustment trigger events may also include device orientation changes that affect content layout presentation (e.g., changing between portrait and landscape displays) and changes in pixel geometry (e.g., changes between horizontal rows of RGB sub-pixel components and vertical columns). Presentation or presentation adjustments may also be determined and acted upon in a predefined manner. For example, adjustments may be determined and executed in a periodic manner (e.g., every second or fraction of a second) so that a viewer (or viewers) is given the impression that environmental conditions are periodically sampled and adjustments are regularly executed. In some arrangements, the frequency of the executed adjustments may be increased such that the viewer or viewers perceive the adjustments as occurring nearly in real time. Adjustments may also be executed during one or more particular time periods, for example, in a piecewise manner: adjustments may be executed more frequently during time periods when the experienced environmental conditions are more troublesome (e.g., lower incident angles of the sun during the summer) and less frequently during time periods when potentially troublesome environmental conditions are generally not experienced (e.g., periods of less glare).
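One way to realize this periodic, trigger-driven behavior is an update loop of the following assumed shape, in which sample_conditions and render are placeholders for condition gathering and for the rendering pipeline, respectively.

    import time

    def adjustment_loop(sample_conditions, render, period_s=0.5):
        # Sample the environment on a fixed period and re-render when a
        # trigger is observed (e.g., an orientation change or a large swing
        # in ambient light), so adjustments appear to occur nearly in real
        # time without re-rendering on every pass.
        last = None
        while True:
            conditions = sample_conditions()
            if conditions != last:          # trigger: conditions changed
                render(conditions)
                last = conditions
            time.sleep(period_s)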
Referring to the illustrated flowchart, a set of operations is shown that may be executed by a content rendering engine (e.g., the content rendering engine 602).
Operations may include receiving 702 data representing a portion of a graphical object. For example, a graphical object such as a glyph may be received and may be a portion of a character included in some text to be prepared for presentation. Operations may also include receiving 704 data representative of one or more environmental conditions. For example, the orientation of a device's display, the ambient light level incident upon one or more electronic displays, the position and viewing angle of one or more viewers, etc. may be received by a content rendering engine. Operations may also include, for the portion of the graphical object, defining 706 a field of scalar values to present the graphical object on a display. Each scalar value may be based on a distance between the portion of the graphical object and a corresponding point. For example, the scalar value may be based on a distance between a point such as a sampling point (e.g., the center of a pixel, sub-pixel, etc.) and the nearest point or edge of the graphical object, such as the outline of a glyph. In some situations, environmental changes (e.g., orientation changes to a display) may call for the distance field to be recalculated, while in other situations (e.g., changes in ambient light) the values of the distance field may remain static and a modulation function, a parameter of a modulation function, etc. may be changed, adjusted, etc. to address the variation in the environmental condition. As mentioned above, one or more conventions may be utilized to define the distance field values (e.g., a convention for defining positive values and negative values). Operations may also include calculating 708 one or more visual property values based on the scalar values and the one or more environmental conditions. For example, based upon an environmental condition, a modulation function (e.g., CSM), a modulation parameter (e.g., a stroke weight parameter), a parameter value, etc. may be selected and used with the scalar values of the distance field to define a visual property value such as a pixel value. Operations may also include presenting 710 the graphical object using the calculated one or more visual property values. For example, once the modulation function is selected, one or more parameters of the function are adjusted, etc. for an environmental condition, the modulation function is used to map the values of the distance field to pixel values to produce a bitmap of the graphical object (e.g., a glyph) for presentation with desirable visual characteristics (e.g., sharpened edges, softened edges, a dilated and/or eroded glyph outline, etc.). Along with improving computational efficiency by using one or more modulation functions to adjust the mapping of distance field values to pixel values, rather than recalculating the distance field values, the modulation functions, function parameters, etc. may be efficiently adjusted (e.g., based on an environmental condition, by a user, etc.) to provide additional flexibility. Further, in some arrangements, the operations may be executed over a relatively short period of time and in a repetitive manner such that presentation adjustments may be executed nearly in real time.
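Putting these operations together, a content rendering engine might implement the receive (702, 704), define (706), calculate (708), and present (710) flow along the lines of the sketch below, which composes the modulate and select_csm_parameters helpers sketched earlier. Here signed_distance stands in for an assumed helper that combines the unsigned segment distance with an interior/exterior sign test; all names are illustrative.

    def present_glyph(segment, pixel_centers, conditions, signed_distance):
        # 702/704: the glyph segment and the environmental conditions
        # arrive as arguments. 706: a scalar field is defined at the pixel
        # centers. 708: the field is mapped to pixel values with parameters
        # chosen for the conditions. 710: the bitmap is handed back for
        # presentation on the display.
        weight, sharpness = select_csm_parameters(conditions["ambient_lux"])
        bitmap = []
        for (px, py) in pixel_centers:
            d = signed_distance(px, py, segment)   # positive inside outline
            bitmap.append(modulate(d, weight, sharpness))
        return bitmap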
Computing device 800 includes processor 802, memory 804, storage device 806, high-speed interface 808 connecting to memory 804 and high-speed expansion ports 810, and low-speed interface 812 connecting to low-speed bus 814 and storage device 806. Components 802, 804, 806, 808, 810, and 812 are interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. Processor 802 can process instructions for execution within computing device 800, including instructions stored in memory 804 or on storage device 806, to display graphical data for a GUI on an external input/output device, including, e.g., display 816 coupled to high-speed interface 808. In other implementations, multiple processors and/or multiple buses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 800 can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
Memory 804 stores data within computing device 800. In one implementation, memory 804 is a volatile memory unit or units. In another implementation, memory 804 is a non-volatile memory unit or units. Memory 804 also can be another form of computer-readable medium, including, e.g., a magnetic or optical disk.
Storage device 806 is capable of providing mass storage for computing device 800. In one implementation, storage device 806 can be or contain a computer-readable medium, including, e.g., a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in a data carrier. The computer program product also can contain instructions that, when executed, perform one or more methods, including, e.g., those described above. The data carrier is a computer- or machine-readable medium, including, e.g., memory 804, storage device 806, memory on processor 802, and the like.
High-speed controller 808 manages bandwidth-intensive operations for computing device 800, while low-speed controller 812 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In one implementation, high-speed controller 808 is coupled to memory 804, display 816 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 810, which can accept various expansion cards (not shown). In this implementation, low-speed controller 812 is coupled to storage device 806 and low-speed expansion port 814. The low-speed expansion port, which can include various communication ports (e.g., USB, Bluetooth®, Ethernet, wireless Ethernet), can be coupled to one or more input/output devices, including, e.g., a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
Computing device 800 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as standard server 820, or multiple times in a group of such servers. It also can be implemented as part of rack server system 824. In addition or as an alternative, it can be implemented in a personal computer including, e.g., laptop computer 822. In some examples, components from computing device 800 can be combined with other components in a mobile device (not shown), including, e.g., device 850. Each of such devices can contain one or more of computing device 800, 850, and an entire system can be made up of multiple computing devices 800, 850 communicating with each other.
Computing device 850 includes processor 852, memory 864, an input/output device including, e.g., display 854, communication interface 866, and transceiver 868, among other components. Device 850 also can be provided with a storage device, including, e.g., a microdrive or other device, to provide additional storage. Components 850, 852, 864, 854, 866, and 868 are interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
Processor 852 can execute instructions within computing device 850, including instructions stored in memory 864. The processor can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor can provide, for example, for coordination of the other components of device 850, including, e.g., control of user interfaces, applications run by device 850, and wireless communication by device 850.
Processor 852 can communicate with a user through control interface 858 and display interface 856 coupled to display 854. Display 854 can be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. Display interface 856 can comprise appropriate circuitry for driving display 854 to present graphical and other data to a user. Control interface 858 can receive commands from a user and convert them for submission to processor 852. In addition, external interface 862 can communicate with processor 852, so as to enable near area communication of device 850 with other devices. External interface 862 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces also can be used.
Memory 864 stores data within computing device 850. Memory 864 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 874 also can be provided and connected to device 850 through expansion interface 872, which can include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 874 can provide extra storage space for device 850, or also can store applications or other data for device 850. Specifically, expansion memory 874 can include instructions to carry out or supplement the processes described above, and can include secure data also. Thus, for example, expansion memory 874 can be provided as a security module for device 850, and can be programmed with instructions that permit secure use of device 850. In addition, secure applications can be provided through the SIMM cards, along with additional data, including, e.g., placing identifying data on the SIMM card in a non-hackable manner.
The memory can include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in a data carrier. The computer program product contains instructions that, when executed, perform one or more methods, including, e.g., those described above. The data carrier is a computer- or machine-readable medium, including, e.g., memory 864, expansion memory 874, and/or memory on processor 852, which can be received, for example, over transceiver 868 or external interface 862.
Device 850 can communicate wirelessly through communication interface 866, which can include digital signal processing circuitry where necessary. Communication interface 866 can provide for communications under various modes or protocols, including, e.g., GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication can occur, for example, through radio-frequency transceiver 868. In addition, short-range communication can occur, including, e.g., using a Bluetooth®, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 870 can provide additional navigation- and location-related wireless data to device 850, which can be used as appropriate by applications running on device 850.
Device 850 also can communicate audibly using audio codec 860, which can receive spoken data from a user and convert it to usable digital data. Audio codec 860 can likewise generate audible sound for a user, including, e.g., through a speaker, e.g., in a handset of device 850. Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, and the like) and also can include sound generated by applications operating on device 850.
Computing device 850 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as cellular telephone 880. It also can be implemented as part of smartphone 882, a personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to a computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying data to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be a form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in a form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or a combination of such back end, middleware, or front end components. The components of the system can be interconnected by a form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In some implementations, the engines described herein can be separated, combined or incorporated into a single or combined engine. The engines depicted in the figures are not intended to limit the systems described here to the software architectures shown in the figures.
Processes described herein and variations thereof (referred to as “the processes”) include functionality to ensure that party privacy is protected. To this end, the processes may be programmed to confirm that a user's membership in a social networking account is publicly known before divulging, to another party, that the user is a member. Likewise, the processes may be programmed to confirm that information about a party is publicly known before divulging that information to another party, or even before incorporating that information into a social graph.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the processes and techniques described herein. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps can be provided, or steps can be eliminated, from the described flows, and other components can be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.