This disclosure relates to providing user feedback regarding a boundary of displayed content.
Devices such as mobile devices and desktop computers are configured to display image content such as documents, e-mails, and pictures on a screen. In some instances, rather than displaying the entire image content, the screen displays a portion of the image content. For example, rather than displaying every single page in a document, the screen may display only the first page when the document is opened. To transition from one portion of the image content to another portion of the image content, the user may scroll the image content in two dimensions, e.g., up-down or right-left.
The devices may also allow the user to zoom in or zoom out of the displayed image content. Zooming into the image content magnifies part of the image content. Zooming out of the image content displays larger amounts of the image content on a reduced scale.
There may be a limit to how much a user can scroll and zoom the displayed image content. For example, if the screen is displaying the first page, the user may not be allowed to scroll further up. If the screen is displaying the last page, the user may not be able to scroll further down. There may also be practical limitations on how far the user can zoom in or zoom out of the image content. For example, the device may limit the user from zooming in any further than 1600% or zooming out any further than 10% of the displayed image content.
In one example, aspects of this disclosure are directed to a computer-readable storage medium comprising instructions that cause one or more processors of a computing device to receive a request that is based upon a user gesture to extend an image content portion of image content beyond a boundary of the image content, wherein the image content portion is currently displayed on a display screen and within the boundary of the image content, and responsive to receiving the request, distort one or more visible attributes of the image content portion that is displayed on the display screen to indicate recognition of the request and to further indicate that the request will not be processed to extend the image content portion beyond the boundary of the image content.
In another example, aspects of this disclosure are directed to a method comprising receiving, with at least one processor, a request that is based upon a user gesture to extend an image content portion beyond a boundary of the image content, wherein the image content portion is currently displayed on a display screen and within the boundary of the image content, and responsive to receiving the request, distorting, with the at least one processor, one or more visible attributes of the image content portion that is displayed on the display screen to indicate recognition of the request and to further indicate that the request will not be processed to extend the image content portion beyond the boundary of the image content.
In another example, aspects of this disclosure are directed to a device comprising at least one processor configured to receive a request that is based upon a user gesture to extend an image content portion beyond a boundary of the image content, wherein the image content portion is currently displayed on a display screen and within the boundary of the image content, and means for distorting one or more visible attributes of the image content portion that is displayed on the display screen to indicate recognition of the request and to further indicate that the request will not be processed to extend the image content portion beyond the boundary of the image content, in response to the request.
Aspects of this disclosure may provide some advantages. The distortion of the visible attributes of the content may indicate to the user that the user is attempting to extend a portion of the image content beyond a content boundary. In aspects of this disclosure, the distortion of the visible attributes of the content provides the user an indication that his or her request to extend beyond the boundary is recognized. Otherwise, the user may not know that the device recognized the attempt and may conclude that the device is malfunctioning.
The details of one or more embodiments of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
Certain aspects of the disclosure are directed to techniques to provide a user of a device with an indication that he or she has reached a boundary of image content on a display screen of the device. Examples of the boundary of image content include a scroll boundary and a zoom boundary. Users of devices, such as mobile devices, may perform scroll and zoom functions with respect to the image content presented on a display screen. Scrolling the image content can be performed in one or two dimensions (up-down or right-left), and provides the user with additional image content. Zooming into the image content magnifies part of the image content. Zooming out of the image content displays larger amounts of the image content on a reduced scale. Zooming may be considered as scrolling in a third dimension where the image content appears closer (zoom in) or further away (zoom out).
The scroll and zoom functions are typically bounded. When at the end of the image content, the user cannot scroll the image content any further down. Similarly, when at the top of the image content, the user cannot scroll the image content any further up. The zoom functions may be bounded by practical limitations of the device. The device may support magnification only up to a certain level, and may not support additional magnification. Similarly, the device may be limited in the amount of the image content it can display while keeping the content recognizable to the user.
In aspects of this disclosure, when a user attempts to extend the image content beyond these example viewable boundaries, e.g., a scroll boundary or a zoom boundary, the device may distort one or more visible attributes of the image content to indicate to the user that he or she has reached such a boundary. Visible attributes of the image content may be considered as the manner in which the image content is displayed. For example, when the user attempts to extend the image content beyond a boundary, the device may warp, curve, or shade at least some parts of the image content in response to the user's indication to extend a portion of the image content beyond the content's boundary. Warping or curving may include some distortion of at least some parts of the portion of the image content. Shading may include changing the color or brightness, e.g., lighting, of at least some parts of the portion of the image content to distort the portion of the image content.
A user gesture, as used in this disclosure, may be considered as any technique to scroll the displayed image content portion, e.g., image content portions 4, upward, downward, leftward, rightward, or any possible combinational direction, e.g., diagonally. As described in more detail below, a user gesture may also be considered as any technique to zoom-in or zoom-out of the displayed image content portion.
The user gesture may be submitted via a user interface. Examples of the user interface include, but are not limited to, display screen 6 itself, in examples where display screen 6 is a touch screen, as well as a keyboard, a mouse, one or more buttons, a trackball, or any other type of input mechanism. As one example, the user may utilize a stylus pen or one of the user's digits, such as the index finger, and place the stylus pen or digit on display screen 6, in examples where display screen 6 is a touch screen. The user may then provide a gesture such as dragging the digit or stylus pen upwards on display screen 6 to scroll image content portion 4A upwards. The user may scroll image content portion 4A downward, rightward, leftward, or diagonally in a substantially similar manner. As another example, the user may utilize the trackball and rotate the trackball with an up, down, right, left, or diagonal gesture to scroll image content portion 4A upward, downward, rightward, leftward, or diagonally.
It should be noted that in some instances, depending on the input mechanism, image content portion 4A may scroll in the opposite direction from the user gesture. However, the scrolling of image content portion 4A may still be based on the type of user gesture entered by the user. For example, if the user enters the user gesture via a mouse attached to a desktop computer, when the user scrolls downwards via the mouse, image content portion 4A may scroll upwards. Similarly, when the user scrolls upwards via the mouse, image content portion 4A may scroll downwards; when the user scrolls rightward via the mouse, image content portion 4A may scroll leftward; and when the user scrolls leftward, image content portion 4A may scroll rightward. Aspects of this disclosure are described in the context of image content portion 4A moving in the same direction as the user gesture. However, aspects of this disclosure should not be considered limited as such.
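For purposes of illustration only, the following short Python sketch suggests one way the mapping between a gesture direction and the resulting scroll direction of image content portion 4A might be expressed. The function and dictionary names are hypothetical and are not part of the original description; the mouse behavior shown merely mirrors the example above.

    # Hypothetical mapping from the direction of a user gesture to the
    # direction in which the displayed content portion scrolls.
    OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

    def content_scroll_direction(gesture_direction, input_mechanism):
        """Return the direction in which the displayed content portion moves."""
        if input_mechanism == "mouse":
            # As in the example above, content scrolls opposite to a mouse gesture.
            return OPPOSITE[gesture_direction]
        # Touch screen, stylus, or trackball: content follows the gesture.
        return gesture_direction

    # A downward mouse gesture scrolls the content portion upward.
    assert content_scroll_direction("down", "mouse") == "up"
    assert content_scroll_direction("up", "touch_screen") == "up"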
Furthermore, it should be noted that the example techniques to scroll image content portions 4 are provided for illustration purposes only and should not be considered as limiting. In general, aspects of this disclosure may be applicable to any technique to allow a user to scroll image content portions 4 in a vertical direction, horizontal direction, right direction, left direction, diagonal direction, or in any combinational direction, e.g., in a circle.
In some instances, after the user has scrolled to a scroll boundary, the user may not realize that he or she has reached the scroll boundary. Scrolling beyond a scroll boundary may not be possible because there is no additional image content to be displayed. The user may, nevertheless, keep trying to scroll further than the scroll boundary. For example, the user may try to scroll image content portion 4B upwards, not realizing that image content portion 4B is at the scroll boundary. This may cause the user to become frustrated because the user may believe that his or her request for additional scrolling is not being recognized and may conclude that the device is malfunctioning.
In some aspects of this disclosure, one or more processors within the device that displays image content 2 and image content portions 4 on display screen 6 may receive a request based upon a user gesture to extend image content portions 4 beyond a scroll boundary. In response to the request, the one or more processors may distort one or more visible attributes of image content portions 4 to indicate recognition of the request and to further indicate that the request will not be processed to extend image content portions 4 beyond the scroll boundary. Examples of distorting the visible attributes include, but are not limited to, warping, curving, and shading at least some of image content portions 4. Warping or curving may include some distortion of at least some parts of the portion of the image content. Shading may include changing the color or brightness, e.g., lighting, of at least some parts of the portion of the image content to distort the portion of the image content.
In some examples, the one or more processors may distort the one or more visible attributes of image content portions 4 for a brief moment, e.g., for one second or less; however, the one or more processors may distort the visible attributes for other lengths of time. At the conclusion of the moment, e.g., after one second, the processors may remove the distortion of the visible attributes.
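For purposes of illustration only, the following Python sketch shows one way the brief distortion and its subsequent removal might be scheduled. The one-second duration matches the example above, and the callback names apply_distortion and remove_distortion are hypothetical placeholders.

    import threading

    DISTORTION_DURATION_S = 1.0  # example duration; other lengths of time are possible

    def flash_distortion(apply_distortion, remove_distortion,
                         duration_s=DISTORTION_DURATION_S):
        """Apply a visible distortion, then remove it after a brief moment."""
        apply_distortion()
        # Schedule removal of the distortion without blocking further input handling.
        threading.Timer(duration_s, remove_distortion).start()

    # Example usage with placeholder callbacks.
    flash_distortion(lambda: print("warp top edge of content portion"),
                     lambda: print("restore original display"))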
As one example, when the user attempts to further extend image content portion 4C downward beyond the scroll boundary, the one or more processors may warp, curve, and/or shade at least some parts of image content portion 4C to distort parts of image content portion 4C. The one or more processors may similarly warp, curve, and/or shade at least some parts of image content portions 4B, 4D, and 4E if the user attempts to further scroll beyond the upward, leftward, and rightward scroll boundaries, respectively, to distort parts of image content portions 4B, 4D, and 4E.
As another example, the user may request to extend image content portion 4B beyond the top scroll boundary.
As another example, the user may request to extend image content portion 4D beyond the left scroll boundary. In response, the one or more processors may italicize at least a part of image content portion 4D to indicate that the user is attempting to scroll beyond a scroll boundary. However, the user may not see the italicized part of image content portion 4D, and may again request to extend image content portion 4D beyond the left scroll boundary. In some of these instances, the one or more processors may further distort visible attributes of image content portion 4D.
The distortion of the visible attributes may indicate to the user that the user is attempting to extend an image content portion, for example, but not limited to, one of image content portions 4, beyond the scroll boundary. Moreover, the distortion of the visible attributes may indicate to the user that the user's request to extend an image content portion beyond the scroll boundary is recognized, but will not be processed. In this manner, the user may recognize that the device is operating correctly, but the request to extend an image content portion will not be processed because the image content portion is at the scroll boundary.
In some instances, the user may desire to zoom into image content 8 to magnify some portion of image content 8. Similarly, the user may desire to zoom out of the image content that is currently displayed to display larger amounts of image content 8. However, the zoom functions may be bounded by practical limitations. Image content 8 may be magnified only up to a certain level, and may not be magnified any further. Similarly, there may be a limit on the amount of image content 8 that can be displayed and still be recognizable by the user.
To zoom into or out of image content 8, the user may provide a user gesture in a substantially similar manner as described above. As one example, display screen 6 may display a zoom in button and a zoom out button. The user may tap the location on display screen 6 that displays the zoom in button to zoom in, and may tap the location on display screen 6 that displays the zoom out button to zoom out, in examples where display screen 6 is a touch screen. As another example, the user may place two digits, e.g., the index finger and thumb, on display screen 6. The user may then provide a multi-touch user gesture of extending the index finger and thumb in opposite directions, relative to each other, to zoom in.
However, like scrolling, there may be a boundary beyond which the user cannot zoom in or zoom out any further. The boundary beyond which the user cannot zoom in or zoom out may be referred to as a zoom boundary. The zoom boundary may be a function of the practical limitations of zooming. As one example, the user may not be allowed to magnify, e.g., zoom in, by more than 1600%. As another example, the user may not be allowed to zoom out to less than 10%. In these examples, the zoom boundaries may be 1600% and 10%.
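For purposes of illustration only, the following Python sketch evaluates a requested zoom level against the example zoom boundaries of 10% and 1600% described above. The function name and the decision to hold the level at the boundary are illustrative assumptions rather than required behavior.

    MIN_ZOOM_PERCENT = 10     # example zoom-out boundary
    MAX_ZOOM_PERCENT = 1600   # example zoom-in boundary

    def evaluate_zoom_request(requested_percent):
        """Return (allowed_percent, beyond_boundary) for a requested zoom level."""
        if requested_percent > MAX_ZOOM_PERCENT:
            return MAX_ZOOM_PERCENT, True   # request exceeds the zoom-in boundary
        if requested_percent < MIN_ZOOM_PERCENT:
            return MIN_ZOOM_PERCENT, True   # request exceeds the zoom-out boundary
        return requested_percent, False

    # A request to zoom to 2000% is held at the boundary and flagged so that
    # the device can distort the displayed content instead of zooming further.
    print(evaluate_zoom_request(2000))   # (1600, True)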
In some aspects of this disclosure, one or more processors within the device that displays image content 8 and image content portions 10A-10C on display screen 6 may receive a request based upon a user gesture to extend image content portions 10B and 10C beyond a zoom boundary. In response to the request, the one or more processors may distort one or more visible attributes of image content portions 10B and 10C to indicate recognition of the request and to further indicate that the request will not be processed to extend image content portions 10B and 10C beyond the zoom boundary. Examples of distorting visible attributes include, but are not limited to, warping, curving, and shading at least some of image content portions 10B and 10C. Additional examples of distorting visible attributes include, but are not limited to, bolding, italicizing, underlining, and the like, as well as any combination thereof.
Device 20 may include additional components that are not shown.
Display screen 12 may be substantially similar to display screen 6.
User interface 18 allows a user of device 20 to interact with device 20. Examples of user interface 18 include a keypad embedded on device 20, a keyboard, a mouse, one or more buttons, a trackball, or any other type of input mechanism that allows the user to interact with device 20. In some examples, user interface 18 may allow the user to provide the user gesture to scroll the image content or zoom into or out of the image content.
In some examples, display screen 12 may provide some or all of the functionality of user interface 18. For example, display screen 12 may be a touch screen that allows the user to interact with device 20. In these examples, user interface 18 may be formed within display screen 12. In some examples where display screen 12 provides some or all of the functionality of user interface 18, user interface 18 may not be necessary on device 20.
However, in some examples where display screen 12 provides some or all of the functionality of user interface 18, device 20 may still include user interface 18 for additional ways for the user to interact with device 20.
One or more processors 14 may include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or integrated logic circuitry. One or more processors 14 may execute applications stored on storage device 16. For ease of description, aspects of this disclosure are described in the context of a single processor 14. However, it should be understood that aspects of this disclosure described with a single processor 14 may be implemented in one or more processors. When processor 14 executes the applications, processor 14 may generate image content such as image content 2.
In addition to storing applications that are executed by processor 14, storage device 16 may also include instructions that cause processor 14, beyond boundary determination module 15, and attribute distortion module 17 to perform various functions ascribed to processor 14, beyond boundary determination module 15, and attribute distortion module 17 in this disclosure. Storage device 16 may be a computer-readable, machine-readable, or processor-readable storage medium that comprises instructions that cause one or more processors, e.g., processor 14, beyond boundary determination module 15, and attribute distortion module 17, to perform various functions.
Storage device 16 may include any volatile, non-volatile, magnetic, optical, or electrical media, such as a random access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), electrically-erasable programmable ROM (EEPROM), flash memory, or any other digital media. Storage device 16 may be considered as a non-transitory storage medium. The term “non-transitory” means that storage device 16 is not a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that storage device 16 is non-movable. As one example, storage device 16 may be removed from device 20, and moved to another device. As another example, a storage device, substantially similar to storage device 16, may be inserted into device 20.
As described above, in some instances, the user may attempt to scroll image content beyond a scroll boundary or to zoom image content beyond a zoom boundary. As used in this disclosure, the term boundary may include both or either of the scroll boundary and the zoom boundary. Processor 14 may be configured to receive the request that is based upon a user gesture to scroll or zoom an image content portion, such as image content portion 4A.
In some examples, the boundary of the image content, such as the scroll boundary, may be defined by the ends of the image content, e.g., locations within the image content beyond which there is no image content. In some examples, the boundary of the image content, such as the zoom boundary, may be defined by the practical limitations of device 20. Processor 14 may be configured to identify the boundary, e.g., the scroll boundary and/or the zoom boundary, based on the type of application executed by processor 14 that generated the image content. Processor 14 may provide such boundary information to beyond boundary determination module 15.
In addition, processor 14 may provide the request to extend the image content to beyond boundary determination module 15. Beyond boundary determination module 15 may be configured to determine whether the request to extend the image content portion includes a request to extend the image content portion beyond the boundary of the image content. For example, beyond boundary determination module 15 may compare the request to extend the image content portion with the boundary of the image content to determine whether the request to extend the image content portion includes a request to extend the image content portion beyond the boundary of the image content.
If the request includes a request to extend the image content portion beyond the boundary of the image content, beyond boundary determination module 15 may indicate to attribute distortion module 17 that the user is requesting to extend the image content portion beyond the boundary of the image content. In response to the request, attribute distortion module 17 may be configured to distort one or more visible attributes of the image content portion to indicate recognition of the request and to further indicate that the request will not be processed to extend the image content beyond the boundary of the image content. Non-limiting examples of the functionality of attribute distortion module 17 include distorting one or more visible attributes such as warping, curving, or shading parts of the image content portion or the entire image content portion.
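For purposes of illustration only, the following Python sketch suggests how a beyond-boundary determination for a vertical scroll request might be performed and how the result might be handed to a distortion component. The class names, coordinate convention, and stub are hypothetical and correspond only loosely to beyond boundary determination module 15 and attribute distortion module 17.

    class BeyondBoundaryDetermination:
        """Compares a requested scroll with the vertical boundary of the content."""

        def __init__(self, content_top, content_bottom, attribute_distortion):
            self.content_top = content_top        # e.g., y-offset 0
            self.content_bottom = content_bottom  # e.g., total content height
            self.attribute_distortion = attribute_distortion

        def handle_scroll_request(self, view_top, view_height, delta_y):
            """delta_y < 0 scrolls up, delta_y > 0 scrolls down."""
            new_top = view_top + delta_y
            new_bottom = new_top + view_height
            beyond = new_top < self.content_top or new_bottom > self.content_bottom
            if beyond:
                # The request will not be processed; indicate recognition instead.
                self.attribute_distortion.distort(edge="top" if delta_y < 0 else "bottom")
                return view_top   # the view remains at the scroll boundary
            return new_top        # normal scrolling

    class AttributeDistortionStub:
        def distort(self, edge):
            print("distort content near the", edge, "scroll boundary")

    checker = BeyondBoundaryDetermination(0, 1000, AttributeDistortionStub())
    print(checker.handle_scroll_request(view_top=0, view_height=400, delta_y=-50))  # stays at 0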
Attribute distortion module 17 may distort one or more visible attributes of the image content portion at a location substantially close to the boundary when the user requests to extend the image content portion beyond a boundary. For example, if the user attempts to scroll the image content portion above the top scroll boundary, as determined by beyond boundary determination module 15, attribute distortion module 17 may warp the top part of the image content portion. As another example, if the user attempts to zoom into the image content portion beyond the zoom boundary, as determined by beyond boundary determination module 15, attribute distortion module 17 may shade the middle part of the image content portion. Attribute distortion module 17 may distort, e.g., warp, curve, or shade, parts of the image content portion when the user attempts to extend the image content portion beyond the bottom, right, left, or zoom-out boundaries in a substantially similar fashion. Warping, curving, and shading are provided merely as examples of distortions to the visible attributes. In some examples, attribute distortion module 17 may be configured to distort the visible attributes in a manner different from warping, curving, and/or shading.
In some examples, attribute distortion module 17 may be configured to distort the one or more visible attributes based on a characteristic of the user gesture to extend the image content portion beyond the boundary. The characteristic of the user gesture may include characteristics such as how fast the user applied the user gesture, how many times the user applied the user gesture, the location of the user gesture, e.g., the starting and ending locations of the user gesture, an amount by which the user requested to extend the image content beyond the boundary, and the like. The user gesture characteristics may be identified by processor 14. Processor 14 may provide the user gesture characteristics to attribute distortion module 17. In some instances, attribute distortion module 17 may be configured to distort the one or more visible attributes more for a given user gesture characteristic than for other user gesture characteristics.
As one example, the user may provide a user gesture to scroll an image content portion upwards when the image content portion is at the scroll boundary. If the user gesture started at the bottom of display screen 12 and extended all the way to the top of display screen 12, attribute distortion module 17 may warp at least some of the image content portion more than the amount that attribute distortion module 17 would warp at least some of the image content portion if the user gesture started at the middle of display screen 12 and extended almost to the top of display screen 12.
As another example, the user may provide a user gesture to zoom into an image content portion when the image content portion is at the zoom boundary. The user gesture may be tapping a location of display screen 12 that displays a zoom in button. If the user repeatedly tapped the zoom in button, at a relatively high tapping frequency, attribute distortion module 17 may shade at least some of the image content portion more than the amount that attribute distortion module 17 would shade at least some of the image content portion if there were fewer taps at a lower tapping frequency.
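For purposes of illustration only, the following Python sketch scales a distortion magnitude by example user gesture characteristics, such as the fraction of the screen traversed by a drag or the rate of repeated taps on a zoom button. The weighting factors and names are illustrative assumptions.

    def distortion_amount(drag_fraction=0.0, taps=0, tap_frequency_hz=0.0,
                          base_amount=1.0):
        """Return a distortion magnitude based on user gesture characteristics.

        drag_fraction: portion of the screen traversed by the drag (0.0 to 1.0).
        taps / tap_frequency_hz: repeated zoom-button taps and their rate.
        """
        amount = base_amount
        amount += 2.0 * drag_fraction            # longer drags distort more
        amount += 0.2 * taps * tap_frequency_hz  # rapid repeated taps distort more
        return amount

    # A drag spanning the full screen distorts more than a half-screen drag.
    print(distortion_amount(drag_fraction=1.0) > distortion_amount(drag_fraction=0.5))  # True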
As described above, attribute distortion module 17 may be configured to distort one or more visible attributes of the image content portions when processor 14 receives a request to extend an image content portion beyond a boundary of the image content, as may be determined by beyond boundary determination module 15. As one example, to distort the one or more visible attributes of the image content portion, attribute distortion module 17 may distort primitives that represent the image content portion.
To display the image content, including the image content portion, processor 14 may map the image content to a plurality of primitives. The primitives may be lines or polygons such as triangles and rectangles. For purposes of illustration, aspects of this disclosure are described in the context of the primitives being triangles, although aspects of this disclosure are not limited to examples where the primitives are triangles.
Processor 14 may map the image content to a triangle mesh on display screen 12. The triangle mesh may include a plurality of triangles, where each triangle includes a portion of display screen 12. Processor 14 may map each of the plurality of triangles to the image content, including the image content portion. Each triangle in the triangle mesh may be defined by the location of its vertices on display screen 12. The vertices may be defined in two dimensions (2-D) or three dimensions (3-D) based on the type of image content. For example, some graphical image content may be defined in 3-D or 2-D, and documental image content may be defined in 2-D.
To warp or curve a part of the image content portion or the entire image content portion, attribute distortion module 17 may displace the vertices of the triangles that represent the image content portion. For example, attribute distortion module 17 may distort the vertex location of one or more triangles that represent the image content portion that is being extended beyond the boundary. The distortion of the vertex location may be performed in 2-D or 3-D based on the desired distortion of the one or more visible attributes. For example, distortion of the vertex location for curving may be performed in 2-D and distortion of the vertex location for warping may be performed in 3-D.
To shade a part of the image content portion or the entire image content portion, attribute distortion module 17 may distort the color or brightness of one or more triangles that represent the image content portion. The distortion of the shading of the one or more triangles may be performed in 2-D.
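For purposes of illustration only, the following Python sketch displaces the vertices of triangles near a boundary to warp part of a content portion and darkens a triangle's color to shade it. The vertex layout, the falloff function, and the shading factor are illustrative assumptions.

    def warp_vertices(vertices, boundary_y, max_offset):
        """Displace 2-D vertices (x, y) near boundary_y to warp the content.

        Vertices closer to the boundary are displaced more, producing a curved
        or warped appearance near the edge being extended.
        """
        warped = []
        for x, y in vertices:
            falloff = max(0.0, 1.0 - abs(y - boundary_y) / 100.0)  # illustrative falloff
            warped.append((x, y + max_offset * falloff))
        return warped

    def shade_triangle(color_rgb, factor=0.7):
        """Darken a triangle's color to shade part of the content portion."""
        r, g, b = color_rgb
        return (int(r * factor), int(g * factor), int(b * factor))

    # Warp three vertices near a top boundary at y = 0, then shade a triangle.
    print(warp_vertices([(10, 0), (20, 50), (30, 200)], boundary_y=0, max_offset=8))
    print(shade_triangle((200, 180, 160)))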
In some examples, the amount by which attribute distortion module 17 displaces one or more primitives, e.g., triangles, may be based on the user gesture characteristics, as described above. As one example, the displacement of the one or more primitives may be localized at the location where the user entered the user gesture. As another example, the displacement of the one or more primitives may be based on the direction and/or magnitude of the user gesture. The magnitude of the user gesture may be considered as one of the user gesture characteristics.
For instance, attribute distortion module 17 may displace, color, or brighten the one or more triangles that represent the image content portion based on the number of times the user entered the user gesture and/or the location of the user gesture. If the user gesture started at the bottom of the image content portion on display screen 12 and extended to the top of display screen 12, and the image content portion was at the scroll boundary, attribute distortion module 17 may displace the one or more triangles that represent the image content portion more than if the user gesture started at the middle of the image content portion and extended to the top of display screen 12. In another instance, for every time that the user enters a user gesture to zoom into the image content portion, when the image content portion is at the zoom boundary, attribute distortion module 17 may brighten more and more parts of the image content portion, or brighten parts of the image content portion more and more.
The displacement of the one or more primitives, e.g., triangles, and/or the changes in the color or brightness of the one or more primitives may indicate to the user that the image content portion is at a boundary, e.g., scroll boundary or zoom boundary. Such distortions in the visible attributes of the image content portion may indicate recognition of the request to extend the image content portion beyond the boundary, and may also indicate that the request will not be processed.
In some examples, the user of device 20, or some other entity, may select the manner in which attribute distortion module 17 will distort the image content portion in response to a request to extend the image content portion beyond a boundary. The user may select the primary distortion that is to be applied to the image content portion when the user requests to extend the image content portion beyond a boundary. The user may also select other distortions that are to be applied to the image content portion after at least one user request to extend the image content portion beyond a boundary.
For example, the user may select curving as the primary distortion that is applied to the image content portion when the user requests to extend the image content portion beyond a boundary. The user may select shading as the secondary distortion that is applied to the image content portion when the user requests to extend the image content portion beyond a boundary. At the first instance when the user requests to extend the image content beyond a boundary, attribute distortion module 17 may curve the image content portion. If the user attempts again to extend the image content beyond the boundary, attribute distortion module 17 may shade the image content portion.
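For purposes of illustration only, the following Python sketch applies a user-selected primary distortion on the first attempt to extend beyond a boundary and a secondary distortion on subsequent attempts, as in the curving and shading example above. The class and attribute names are hypothetical.

    class DistortionSelector:
        """Chooses a primary or secondary distortion for repeated attempts."""

        def __init__(self, primary="curve", secondary="shade"):
            self.primary = primary
            self.secondary = secondary
            self.attempts = 0

        def next_distortion(self):
            self.attempts += 1
            return self.primary if self.attempts == 1 else self.secondary

    selector = DistortionSelector(primary="curve", secondary="shade")
    print(selector.next_distortion())  # "curve" on the first attempt
    print(selector.next_distortion())  # "shade" on a subsequent attempt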
It should be noted that in some examples, attribute distortion module 17 may remove the distortions to the one or more visible attributes after a brief moment. The user may then enter a subsequent user gesture to extend the image content portion after attribute distortion module 17 has removed the distortions to the visible attributes. However, aspects of this disclosure are not so limited. In some examples, the user may enter a subsequent user gesture before attribute distortion module 17 has removed the distortions to the one or more visible attributes.
Attribute distortion module 17 and beyond boundary determination module 15 may be implemented in hardware, software, firmware, or a combination thereof. For example, attribute distortion module 17 and beyond boundary determination module 15 may be implemented in a microprocessor, a controller, a DSP, an ASIC, an FPGA, or equivalent discrete or integrated logic circuitry.
In some examples, in addition to distorting one or more visible attributes of the image content portion, device 20 may also provide non-visual indicators responsive to the request to extend the image content portion beyond a boundary of the image content. Non-limiting examples of the non-visual indicators include vibrations and sounds. As one example, in response to the request to extend the image content portion beyond the boundary of the image content, processor 14 may cause device 20 to vibrate. The vibration of device 20 may indicate recognition of the request and indicate that the request will not be processed. As another example, processor 14 may cause a speaker of user interface 18 to produce a sound, such as a “boing” sound, or any other sound, in response to the request to extend the image content portion beyond the boundary of the image content. Other examples of non-visual indicators may be possible and may be provided in response to the request to extend the image content portion beyond the boundary of the image content, in accordance with aspects of this disclosure. The non-visual indicators may work in conjunction with the visual indicators, e.g., distortion of the visible attributes, to indicate to the user that the image content portion is at a boundary, e.g., scroll or zoom boundary.
As one example, the user may enter a user gesture via digit 23A of the user's hand to extend image content portion 22 beyond a boundary.
As one example, attribute distortion module 17 may distort image content portion 22, as illustrated by image content portion 24.
For example, the user may enter a first user gesture to extend image content portion 22 beyond the scroll boundary, as illustrated by digit 23A.
It should be noted that in some examples, before the subsequent user gesture, the distortion of image content portion 24 may be removed. For example, the image content may be displayed in a substantially similar manner as image content portion 22.
In some examples, the user gesture may start by the user placing a digit at the middle of the bottom of image content portion 22 and dragging the digit in an upward direction. In response, attribute distortion module 17 may distort image content portion 22, as illustrated by image content portion 26.
Furthermore, although digit 23A and digit 23B are shown as located on different parts of the image content, aspects of this disclosure are not so limited. In some examples, digit 23A and digit 23B may be located in the same location. For example, during subsequent user gestures, the user may place the digit, or any of the other input mechanisms, e.g., mouse location, stylus pen, or other input mechanisms, in a substantially similar location.
Responsive to the request, one or more visible attributes of the image content portion may be distorted (30). The distortion of the one or more visible attributes may be performed by a means for distorting. The distortion of the one or more visible attributes may indicate recognition of the request. The distortion of the one or more visible attributes may also indicate that the request will not be processed to extend the image content portion beyond the boundary of the image content.
In some examples, in addition to distorting one or more visible attributes of the image content portion, non-visual indicators may be provided in response to the request to extend the image content portion beyond the boundary of the image content (38). Examples of the non-visual indicators include vibrating the device and/or providing a sound from the device. After the distortion to the primitives and/or at the conclusion of the non-visual indicators, the distortions to the image content may be removed (40).
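For purposes of illustration only, the following Python sketch ties the steps described above into a single flow: determining whether a request extends beyond a boundary, distorting visible attributes (30), optionally providing a non-visual indicator (38), and removing the distortion (40). The callback names are hypothetical placeholders.

    def handle_extend_request(request, is_beyond_boundary, distort, undistort,
                              non_visual_indicator=None):
        """Process a request to extend an image content portion (illustrative)."""
        if not is_beyond_boundary(request):
            return "processed"           # normal scroll or zoom
        distort(request)                 # distort visible attributes (30)
        if non_visual_indicator:
            non_visual_indicator()       # e.g., vibrate or play a sound (38)
        undistort()                      # remove the distortions afterwards (40)
        return "not_processed"

    # Example usage with placeholder callbacks.
    result = handle_extend_request(
        {"direction": "up", "amount": 40},
        is_beyond_boundary=lambda request: True,
        distort=lambda request: print("warp content near the top scroll boundary"),
        undistort=lambda: print("restore original display"),
        non_visual_indicator=lambda: print("vibrate device"),
    )
    print(result)  # "not_processed"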
Conventional devices may not be equipped to provide a user with an indication that the user is requesting to extend an image content portion beyond the boundary of the image content. In some conventional devices that may provide an indication that the user is requesting to extend an image content portion beyond the boundary of the image content, such indications may not be easily seen by the user. Aspects of this disclosure may provide users with a clear indication that the user is requesting to extend an image content portion beyond the boundary of the image content.
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. Various features described as modules, units or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices or other hardware devices. In some cases, various features of electronic circuitry may be implemented as one or more integrated circuit devices, such as an integrated circuit chip or chipset.
If implemented in hardware, this disclosure may be directed to an apparatus such a processor or an integrated circuit device, such as an integrated circuit chip or chipset. Alternatively or additionally, if implemented in software or firmware, the techniques may be realized at least in part by a computer-readable data storage medium comprising instructions that, when executed, cause a processor to perform one or more of the methods described above. For example, the computer-readable data storage medium may store such instructions for execution by a processor.
A computer-readable medium may form part of a computer program product, which may include packaging materials. A computer-readable medium may comprise a computer data storage medium such as RAM, ROM, NVRAM, EEPROM, FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.
The code or instructions may be software and/or firmware executed by processing circuitry including one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, functionality described in this disclosure may be provided within software modules or hardware modules.
Various aspects have been described in this disclosure. These and other aspects are within the scope of the following claims.
This application is a continuation of U.S. application Ser. No. 12/847,335, filed Jul. 30, 2010, the entire contents of which is incorporated herein by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | 12847335 | Jul 2010 | US
Child | 13250648 | | US