Examples described herein relate to a system and method for an image-centric mobile application.
An electronic personal display is a mobile computing device that displays information to a user. While an electronic personal display may be capable of many of the functions of a personal computer, a user can typically interact directly with an electronic personal display without the use of a keyboard that is separate from, or coupled to but distinct from, the electronic personal display itself. Some examples of electronic personal displays include mobile digital devices/tablet computers (e.g., Apple iPad®, Microsoft® Surface™, Samsung Galaxy Tab®, and the like), handheld multimedia smartphones (e.g., Apple iPhone®, Samsung Galaxy S®, and the like), and handheld electronic readers (e-readers) (e.g., Amazon Kindle®, Barnes and Noble Nook®, Kobo Aura HD, Kobo Aura H2O, Kobo GLO, and the like).
Some electronic personal display devices are purpose-built devices designed to perform especially well at displaying digitally stored content for reading or viewing thereon. For example, a purpose-built device may include a display that reduces glare, performs well in high lighting conditions, and/or mimics the look of text as presented via actual discrete pages of paper. While such purpose-built devices may excel at displaying content for a user to read, they may also perform other functions, such as displaying images, emitting audio, recording audio, and web surfing, among others.
Electronic personal displays are among numerous kinds of consumer devices that can receive services and utilize resources across a network service. Such devices can operate applications or provide other functionality that links a device to a particular account of a specific service. For example, electronic reader (e-reader) devices typically link to an online bookstore, and media playback devices often include applications that enable the user to access an online media electronic library (or e-library). In this context, user accounts can enable the user to receive the full benefit and functionality of the device.
Yet further, such devices may incorporate a touch screen display having integrated touch sensors and touch sensing functionality, whereby user input commands via touch-based gestures are received thereon.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate various embodiments and, together with the Description of Embodiments, serve to explain principles discussed below. The drawings referred to in this brief description of the drawings should not be understood as being drawn to scale unless specifically noted.
A method and system for an image-centric mobile application are provided. The method includes accessing a possible title image database comprising a plurality of images of book covers presentable to a user on a display, presenting one or more of the plurality of images to the user on the display, receiving a user touch input associated with the one or more of the plurality of images presented to the user, and learning user preferences associated with the one or more of the plurality of images presented to the user based on the user touch input.
In one embodiment, a binary decision process can be implemented to allow a user to make a simple “yes” or “no” decision about a particular book cover image. In a sense, the image-centric mobile application described herein functions as a platform that enables a user to make quick, image-based decisions about whether or not they are interested in a book title. In one embodiment, the image-centric decision process described herein can be used to provide content to a user in a unique and fun way.
In one embodiment, a user is presented with one book cover at a time, and the image of the book cover is displayed on an e-reading device. If the user likes the image, a simple swipe to the right enables the user to peruse options associated with the title, including adding the title to a wish list, making an automatic purchase, viewing the back cover, and the like.
In the event the user is not interested in the image, a left swipe on the e-reading device signals disinterest, and the title image will not be shown to the user again. In this example, a simple left swipe on the e-reading device provides feedback that the user is not interested in that particular title, based on the image provided.
Embodiments described herein increase the discoverability of titles for a user, break the user out of a filter bubble, increase browsing speed, and replicate the physical act of browsing in a brick-and-mortar store. Additionally, the simple yes-or-no decision making appeals to a broader age group than traditional media discovery methods and makes purchasing of media easier with a simple one-action purchase. Embodiments described herein help develop the use of wish lists and create additional revenue channels for media sales.
In one embodiment, the image-centric application enables a user to judge a book by its cover. The interface and process described herein reduce the e-book decision-making process to its essence: a book cover, the reader, and a decision. In one embodiment, a queue of e-books is ready and available as input to the process. The input may be sourced from an e-book store, or externally, based on various “bestseller” lists or publicly available blogger lists.
“E-books” are a form of electronic publication content stored in digital format in a computer non-transitory memory, viewable on a computing device having display functionality. An e-book can correspond to, or mimic, the paginated format of a printed publication for viewing, such as provided by printed literary works (e.g., novels) and periodicals (e.g., magazines, comic books, journals, etc.). Optionally, some e-books may have chapter designations, as well as content that corresponds to graphics or images (e.g., such as in the case of magazines or comic books).
Multi-function devices, such as cellular-telephony or messaging devices, can utilize specialized applications (e.g., specialized e-reading application software) to view e-books in a format that mimics the paginated printed publication. Still further, some devices (sometimes labeled as “e-readers”) can display digitally-stored content in a more reading-centric manner, while also providing, via a user input interface, the ability to manipulate that content for viewing, such as via discrete pages arranged sequentially (that is, pagination) corresponding to an intended or natural reading progression, or flow, of the content therein.
An “e-reading device”, variously referred to herein as an electronic personal display or mobile computing device, can refer to any computing device that can display or otherwise render an e-book. By way of example, an e-reading device can include a mobile computing device on which an e-reading application can be executed to render content that includes e-books (e.g., comic books, magazines, etc.). Such mobile computing devices can include, for example, a multi-functional computing device for cellular telephony/messaging (e.g., feature phone or smart phone), a tablet computer device, an ultra-mobile computing device, or a wearable computing device with a form factor of a wearable accessory device (e.g., smart watch or bracelet, eyewear integrated with a computing device, etc.). As another example, an e-reading device can include an e-reader device, such as a purpose-built device that is optimized for an e-reading experience (e.g., with E-ink displays).
System 100 includes an electronic personal display device, shown by way of example as an e-reading device 110, and a network service 120. The network service 120 can include multiple servers and other computing resources that provide various services in connection with one or more applications that are installed on the e-reading device 110. By way of example, in one implementation, the network service 120 can provide e-book services that communicate with the e-reading device 110. The e-book services provided through network service 120 can, for example, include services in which e-books are sold, shared, downloaded and/or stored. More generally, the network service 120 can provide various other content services, including content rendering services (e.g., streaming media) or other network-application environments or services.
The e-reading device 110 can correspond to any electronic personal display device on which applications and application resources (e.g., e-books, media files, documents) can be rendered and consumed. For example, the e-reading device 110 can correspond to a tablet or a telephony/messaging device (e.g., smart phone). In one implementation, for example, e-reading device 110 can run an e-reader application that links the device to the network service 120 and enables e-books provided through the service to be viewed and consumed. In another implementation, the e-reading device 110 can run a media playback or streaming application that receives files or streaming data from the network service 120. By way of example, the e-reading device 110 can be equipped with hardware and software to optimize certain application activities, such as reading electronic content (e.g., e-books). For example, the e-reading device 110 can have a tablet-like form factor, although variations are possible. In some cases, the e-reading device 110 can also have an E-ink display.
In additional detail, the network service 120 can include a device interface 128, a resource store 122 and a user account store 124. The user account store 124 can associate the e-reading device 110 with a user and with an account 125. The account 125 can also be associated with one or more application resources (e.g., e-books), which can be stored in the resource store 122. The device interface 128 can handle requests from the e-reading device 110, and further interface the requests of the device with services and functionality of the network service 120. The device interface 128 can utilize information provided with a user account 125 in order to enable services, such as purchasing downloads or determining what e-books and content items are associated with the user device. Additionally, the device interface 128 can provide the e-reading device 110 with access to the resource store 122, which can include, for example, an online store. The device interface 128 can handle input to identify content items (e.g., e-books), and further to link content items to the account 125 of the user.
Yet further, the user account store 124 can retain metadata for individual accounts 125 to identify resources that have been purchased or made available for consumption for a given account. The e-reading device 110 may be associated with the user account 125, and multiple devices may be associated with the same account. As described in greater detail below, the e-reading device 110 can store resources (e.g., e-books) that are purchased or otherwise made available to the user of the e-reading device 110, as well as archive e-books and other digital content items that have been purchased for the user account 125 but are not stored on the particular computing device.
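By way of non-limiting illustration only, the following Python sketch shows one way the account and resource relationships described above could be modeled: a single account 125 associated with multiple devices, with purchased resources retained as account metadata rather than tied to any one device. The class and method names are assumptions made for illustration and do not reflect an actual implementation of network service 120.

from dataclasses import dataclass, field

@dataclass
class UserAccount:
    account_id: str
    device_ids: set = field(default_factory=set)   # multiple devices, one account
    purchased: set = field(default_factory=set)    # e-book ids owned by the account

class NetworkService:
    def __init__(self):
        self.accounts = {}      # user account store 124
        self.resources = {}     # resource store 122: e-book id -> content

    def register_device(self, account_id, device_id):
        acct = self.accounts.setdefault(account_id, UserAccount(account_id))
        acct.device_ids.add(device_id)

    def purchase(self, account_id, ebook_id):
        # Device interface 128 role: link a content item to the user's account 125.
        self.accounts[account_id].purchased.add(ebook_id)

    def library_for(self, account_id):
        # Items available to any device on the account, whether or not
        # they are currently stored on that device (archived copies).
        return self.accounts[account_id].purchased

service = NetworkService()
service.register_device("account-125", "device-110")
service.register_device("account-125", "tablet-2")
service.purchase("account-125", "ebook-1")
print(service.library_for("account-125"))   # -> {'ebook-1'}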
With reference to an example of
E-reading device 110 can also include one or more motion sensors 130 arranged to detect motion imparted thereto, such as by a user while reading or in accessing associated functionality. In general, the motion sensor(s) 130 may be selected from one or more of a number of motion recognition sensors, such as but not limited to, an accelerometer, a magnetometer, a gyroscope and a camera. Further still, motion sensor 130 may incorporate or apply some combination of the latter motion recognition sensors.
E-reading device 110 further includes motion gesture logic 137 to interpret user input motions as commands based on detection of the input motions by motion sensor(s) 130. For example, input motions performed on e-reading device 110 such as a tilt, a shake, a rotation, a swivel or partial rotation and an inversion may be detected via motion sensors 130 and interpreted as respective commands by motion gesture logic 137.
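As a minimal sketch only, the following Python fragment illustrates the kind of mapping motion gesture logic 137 might perform, classifying raw readings from motion sensor(s) 130 into motions and then into commands. The thresholds, field names, and command assignments are all illustrative assumptions.

def interpret_motion(reading):
    # "reading" is a hypothetical sample combining accelerometer and gyroscope data.
    ax, ay, az = reading["accel"]              # assumed axes, in m/s^2
    if az < -9.0:
        return "INVERSION"                     # device turned face-down
    if abs(ax) > 20.0:
        return "SHAKE"
    if abs(reading["gyro_z"]) > 3.0:
        return "SWIVEL"                        # partial rotation about the z-axis
    if abs(ay) > 4.0:
        return "TILT"
    return None

# Hypothetical command assignment for each recognized input motion.
COMMANDS = {"SHAKE": "undo", "TILT": "page_turn",
            "SWIVEL": "rotate_view", "INVERSION": "sleep"}

reading = {"accel": (0.2, 5.1, 9.6), "gyro_z": 0.1}
print(COMMANDS.get(interpret_motion(reading)))   # -> page_turn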
In some embodiments, the e-reading device 110 includes features for providing functionality related to displaying paginated content. The e-reading device 110 can include page transitioning logic 115, which enables the user to transition through paginated content. The e-reading device 110 can display pages from e-books, and enable the user to transition from one page state to another. In particular, an e-book can provide content that is rendered sequentially in pages, and the e-book can display page states in the form of single pages, multiple pages or portions thereof. Accordingly, a given page state can coincide with, for example, a single page, or two or more pages displayed at once. The page transitioning logic 115 can operate to enable the user to transition from a given page state to another page state. In the specific example embodiment where a given page state coincides with a single page, each page state corresponds to one page of the digitally constructed series of pages paginated to comprise, in one embodiment, an e-book. In some implementations, the page transitioning logic 115 enables single page transitions, chapter transitions, or cluster transitions (multiple pages at one time).
The page transitioning logic 115 can be responsive to various kinds of interfaces and actions in order to enable page transitioning. In one implementation, the user can signal a page transition event to transition page states by, for example, interacting with the touch-sensing region of the display screen 116. For example, the user may swipe the surface of the display screen 116 in a particular direction (e.g., up, down, left, or right) to indicate a sequential direction of a page transition. In variations, the user can specify different kinds of page transitioning input (e.g., single page turns, multiple page turns, chapter turns, etc.) through different kinds of input. Additionally, the page turn input of the user can be provided with a magnitude to indicate the extent (e.g., the number of pages) of the page state transition.
For example, a user can touch and hold the surface of the display screen 116 in order to cause a cluster or chapter page state transition, while a tap in the same region can effect a single page state transition (e.g., from one page to the next in sequence). In another example, a user can specify page turns of different kinds or magnitudes through single taps, sequenced taps or patterned taps on the touch sensing region of the display screen 116. Although discussed in context of “taps” herein, it is contemplated that a gesture action provided in sufficient proximity to touch sensors of display screen 116, without physically touching thereon, may also register as a “contact” with display screen 116, to accomplish a similar effect as a tap, and such embodiments are also encompassed by the description herein.
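The following Python sketch, offered only as an illustration under assumed input labels and page numbering, shows how page transitioning logic 115 might map different kinds of touch input to page state transitions of different magnitude.

CHAPTER_STARTS = [0, 12, 30, 55]   # hypothetical first page of each chapter

def next_page_state(current_page, input_kind, direction=+1, total_pages=80):
    if input_kind == "tap":                    # single page state transition
        step = 1
    elif input_kind == "touch_and_hold":       # cluster/chapter transition
        later = [p for p in CHAPTER_STARTS if (p - current_page) * direction > 0]
        target = (later[0] if direction > 0 else later[-1]) if later else current_page
        return max(0, min(target, total_pages - 1))
    elif input_kind == "multi_tap":            # patterned taps: assumed larger fixed magnitude
        step = 5
    else:
        step = 0
    return max(0, min(current_page + direction * step, total_pages - 1))

print(next_page_state(10, "tap"))              # -> 11
print(next_page_state(10, "touch_and_hold"))   # -> 12 (next chapter start)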
According to some embodiments, the e-reading device 110 includes display sensor logic 135 to detect and interpret user input or user input commands made through interaction with the touch sensors 138. By way of example, display sensor logic 135 can detect a user making contact with the touch-sensing region of the display screen 116, otherwise known as a touch event. More specifically, display sensor logic 135 can detect touch events such as a tap, an initial tap held in contact with display screen 116 for longer than some pre-defined threshold duration of time (otherwise known as a “long press” or a “long touch”), multiple taps performed either sequentially or generally simultaneously, swiping gesture actions made through user interaction with the touch sensing region of the display screen 116, or any combination of these gesture actions. Although referred to herein as a “touch” or a tap, it should be appreciated that in some design implementations, sufficient proximity to the screen surface, even without actual physical contact, may register a “contact” or a “touch event”. Furthermore, display sensor logic 135 can interpret such interactions in a variety of ways. For example, each such interaction may be interpreted as a particular type of user input associated with a respective input command, execution of which may trigger a change in state of display 116.
The term “sustained touch” is also used herein and refers to a touch event that is held in sustained contact with display screen 116, during which sustained contact period the user or observer may take additional input actions, including gestures, on display screen 116 contemporaneously with the sustained contact. Thus a long touch is distinguishable from a sustained touch, in that the former only requires a touch event to be held for some pre-defined threshold duration of time, upon expiration of which an associated input command may be automatically triggered.
In one implementation, display sensor logic 135 implements operations to monitor for the user contacting or superimposing upon, using a finger, thumb or stylus, a surface of display 116 coinciding with a placement of one or more touch sensor components 138, that is, a touch event, and also detects and correlates a particular gesture (e.g., pinching, swiping, tapping, etc.) as a particular type of input or user action. Display sensor logic 135 may also sense directionality of a user gesture action so as to distinguish between, for example, leftward, rightward, upward, downward and diagonal swipes along a surface portion of display screen 116 for the purpose of associating respective input commands therewith.
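As one non-limiting example, the classification performed by display sensor logic 135 might resemble the following Python sketch, which distinguishes a tap, a long press, and a directional swipe from a touch event's start point, end point, and duration. The pixel and time thresholds are assumptions chosen for illustration.

LONG_PRESS_SECS = 0.8     # assumed pre-defined threshold for a "long press"
SWIPE_MIN_PIXELS = 40     # assumed minimum travel to count as a swipe

def classify_touch(x0, y0, x1, y1, duration_secs):
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) >= SWIPE_MIN_PIXELS:
        # Directionality: dominant axis determines the swipe direction.
        if abs(dx) >= abs(dy):
            return "swipe_right" if dx > 0 else "swipe_left"
        return "swipe_down" if dy > 0 else "swipe_up"
    if duration_secs >= LONG_PRESS_SECS:
        return "long_press"
    return "tap"

print(classify_touch(200, 300, 120, 305, 0.2))   # -> swipe_left
print(classify_touch(150, 150, 152, 149, 1.1))   # -> long_press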
Processor 210 can implement functionality using the logic and instructions stored in memory 250. Additionally, in some implementations, processor 210 utilizes the network interface 220 to communicate with the network service 120 (see
In some implementations, display 116 can correspond to, for example, a liquid crystal display (LCD) or light emitting diode (LED) display that illuminates in order to provide content generated from processor 210. In some implementations, display 116 can be touch-sensitive. For example, in some embodiments, one or more of the touch sensor components 138 may be integrated with display 116. In other embodiments, the touch sensor components 138 may be provided (e.g., as a layer) above or below display 116 such that individual touch sensor components 138 track different regions of display 116. Further, in some variations, display 116 can correspond to an electronic paper type display, which mimics conventional paper in the manner in which content is displayed. Examples of such display technologies include electrophoretic displays, electro-wetting displays, and electro-fluidic displays.
Processor 210 can receive input from various sources, including touch sensor components 138, display 116, keystroke input 209 such as from a virtual or rendered keyboard, and other input mechanisms 299 (e.g., buttons, mouse, microphone, etc.). With reference to examples described herein, processor 210 can respond to input detected at the touch sensor components 138. In some embodiments, processor 210 responds to inputs from the touch sensor components 138 in order to facilitate or enhance e-book activities such as generating e-book content on display 116, performing page transitions of the displayed e-book content, powering off the device 110 and/or display 116, activating a screen saver, launching or closing an application, and/or otherwise altering a state of display 116.
In some embodiments, memory 250 may store display sensor logic 135 that monitors for user interactions detected through the touch sensor components 138, and further processes the user interactions as a particular input or type of input. In an alternative embodiment, display sensor logic module 135 may be integrated with the touch sensor components 138. For example, the touch sensor components 138 can be provided as a modular component that includes integrated circuits or other hardware logic, and such resources can provide some or all of display sensor logic 135. In variations, some or all of display sensor logic 135 may be implemented with processor 210 (which utilizes instructions stored in memory 250), or with an alternative processing resource.
E-reading device 110 further includes wireless connectivity subsystem 213, comprising a wireless communication receiver, a transmitter, and associated components, such as one or more embedded or internal antenna elements, local oscillators, and a processing module such as a digital signal processor (DSP) (not shown). As will be apparent to those skilled in the field of communications, the particular design of wireless connectivity subsystem 213 depends on the communication network in which computing device 110 is intended to operate, such as in accordance with Wi-Fi, Bluetooth, Near Field Communication (NFC) communication protocols, and the like.
Image-centric application module 275 can be implemented as a software module, comprising instructions stored in memory 250, on mobile computing device 110. One or more embodiments of image-centric application module 275 described herein may be implemented using programmatic modules or components, a portion of a program, or software in conjunction with one or more hardware component(s) capable of performing one or more stated tasks or functions. As used herein, such a module or component can exist on a hardware component independently of other modules or components. Alternatively, a module or component can be a shared element or process of other modules, programs or machines.
Display screen 116 of computing device 110 includes touch functionality whereby user input commands may be accomplished via gesture actions performed at display screen 116. In the context of reading digitally rendered pages comprising the content of an e-book, for example, some common input commands accomplished via gesture actions received at display screen 116 may include page turns, making annotations, adjusting illumination levels or contrast of the device display screen, and re-sizing the font of text in the content.
In one embodiment, a binary decision process can be implemented to allow a user to make a simple “yes” or “no” decision about a particular book cover image. In a sense, the image-centric mobile application described herein functions as a platform that enables a user to make quick, image-based decisions about whether or not they are interested in a book title. In one embodiment, the image-centric decision process described herein can be used to provide content to a user in a unique and fun way.
In one embodiment, a list of possible titles 305 is maintained, and from this list a user is presented, by the image presenter 350, with one book cover at a time; the image of the book cover is provided 340 on an e-reading device. If the user likes the image, a simple swipe to the right enables the user to peruse options associated with the title, including adding the title to a wish list, making an automatic purchase, viewing the back cover, and the like. In one embodiment, a swipe left or right constitutes user input 320.
In the event the user is not interested in the image, a left swipe on the e-reading device signals disinterest, and the title image will not be shown to the user again. In one embodiment, the learning engine 310 maintains a record of the user's swipes for particular images and, in some embodiments, can learn a user's preferences for title images. In this example, a simple left swipe on the e-reading device provides feedback that the user is not interested in that particular title, based on the image provided.
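A minimal Python sketch of this interaction between the learning engine 310 and the image presenter 350, with illustrative names and structure, might look as follows: left-swiped titles are recorded and excluded from future presentation.

class LearningEngine:
    def __init__(self):
        self.history = {}          # title id -> "left" or "right"

    def record(self, title_id, swipe):
        self.history[title_id] = swipe

    def rejected(self, title_id):
        return self.history.get(title_id) == "left"

def present_next(possible_titles, engine):
    # Image presenter 350 role: surface one cover at a time, skipping any
    # title the user has already rejected with a left swipe.
    for title_id in possible_titles:
        if not engine.rejected(title_id):
            return title_id
    return None

engine = LearningEngine()
engine.record("title-42", "left")
print(present_next(["title-42", "title-7"], engine))   # -> title-7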
Embodiments described herein increase the discoverability of titles for a user, break the user out of a filter bubble, increase browsing speed, and replicate the physical act of browsing in a brick-and-mortar store. Additionally, the simple yes-or-no decision making appeals to a broader age group than traditional media discovery methods and makes purchasing of media easier with a simple one-action purchase. Embodiments described herein help develop the use of wish lists and create additional revenue channels for media sales.
In one embodiment, the image-centric application 275 enables a user to judge a book by its cover. The interface and process described herein reduce the e-book decision-making process to its essence: a book cover 420, the reader display 116, and a decision. In one embodiment, a queue of e-books 305 is ready and available as input to the process. The input may be sourced from an e-book store, or externally, based on various “bestseller” lists or publicly available blogger lists.
In one embodiment, a user has the choice of entering a left swipe 430 or a right swipe 440 to make a simple yes-or-no decision. In an alternate embodiment, a yes key 480 or a no key 475 can be used to make a binary decision about a book cover image.
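The sketch below illustrates, under assumed event names, how the left swipe 430, right swipe 440, no key 475, and yes key 480 could all funnel into a single binary decision, so that the rest of the application sees only a “yes” or a “no”.

def binary_decision(event):
    # Swipes and dedicated keys are normalized to one yes/no outcome.
    if event in ("swipe_right", "yes_key"):
        return "yes"
    if event in ("swipe_left", "no_key"):
        return "no"
    return None     # any other input is ignored by this decision screen

for evt in ("swipe_right", "no_key", "long_press"):
    print(evt, "->", binary_decision(evt))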
At 502, method 500 includes accessing a possible title image database comprising a plurality of images of book covers presentable to a user on a display. In one embodiment, the list of possible book covers is customized for the user based on the user's reading history.
At 504, method 500 includes presenting one or more of the plurality of images to the user on the display. In one embodiment, the image is of a front cover of a book.
At 506, method 500 includes receiving a user touch input associated with the one or more of the plurality of images presented to the user. In one embodiment, 506 includes receiving a left or right swipe. In another embodiment, 506 includes receiving an up or down swipe. In another embodiment, 506 includes receiving a touch selection of a yes or no touch sensitive button.
At 508, method 500 includes learning user preferences associated with the one or more of the plurality of images presented to the user based on the user touch input. In one embodiment, the images presented at 504 are selected based on the user preferences learned at 508.
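One simple, hypothetical way the learning of 508 could feed the selection of 504 is sketched below in Python: right and left swipes are tallied against cover attributes (e.g., genre), and the tallies rank which covers to present next. The attribute scheme and scoring rule are assumptions made for illustration, not a description of the claimed method.

from collections import defaultdict

class PreferenceModel:
    def __init__(self):
        self.score = defaultdict(int)     # attribute -> liked-minus-disliked count

    def learn(self, attributes, liked):
        # 508: update preferences from one swipe on one cover.
        for attr in attributes:
            self.score[attr] += 1 if liked else -1

    def rank(self, candidates):
        # 504: order candidate covers (title id -> attribute list), best-liked first.
        return sorted(candidates,
                      key=lambda t: -sum(self.score[a] for a in candidates[t]))

model = PreferenceModel()
model.learn({"mystery", "series"}, liked=True)    # right swipe
model.learn({"romance"}, liked=False)             # left swipe
print(model.rank({"A": ["mystery"], "B": ["romance"], "C": ["series", "mystery"]}))
# -> ['C', 'A', 'B']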
In one embodiment, method 500 includes determining a left swipe or a right swipe on the display as a user input associated with the one or more of the plurality of images presented to the user, wherein, in response to receiving the right swipe, the method provides the user a back cover image of a media title associated with the one or more of the plurality of images.
In one embodiment of method 500, in response to the input receiver receiving the left swipe, the learning process of step 508 prevents the one or more of the plurality of images presented to the user from being displayed again.
In one embodiment of method 500, in response to the input receiver receiving the right swipe, the learning process of step 508 enables the user to add a media title associated with the one or more of the plurality of images to a wish list associated with the user.
In one embodiment of method 500, in response to the input receiver receiving a right swipe, the learning process of step 508 enables the user to buy a media title associated with the one or more of the plurality of images.
In one embodiment of method 500, in response to the input receiver receiving a right swipe, the learning process of step 508 provides the user a summary of a media title associated with the one or more of the plurality of images.
In one embodiment of method 500, in response to the input receiver receiving a right swipe, the learning process of step 508 provides the user a back cover image of a media title associated with the one or more of the plurality of images.
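The right-swipe options enumerated above can be viewed as a simple dispatch, as in the following illustrative Python sketch; the option names mirror the text, and the function bodies are placeholders rather than real purchase or wish-list logic.

def add_to_wish_list(title): print(f"wish-listed {title}")
def buy(title):              print(f"purchased {title}")
def show_summary(title):     print(f"summary of {title}")
def show_back_cover(title):  print(f"back cover of {title}")

# Each follow-up choice after a right swipe routes to one action.
RIGHT_SWIPE_OPTIONS = {
    "wish_list": add_to_wish_list,
    "buy": buy,
    "summary": show_summary,
    "back_cover": show_back_cover,
}

def on_right_swipe(title, choice):
    RIGHT_SWIPE_OPTIONS[choice](title)

on_right_swipe("title-7", "wish_list")   # -> wish-listed title-7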
With reference now to
System 600 of
System 600 also includes data storage features such as a computer usable volatile memory 608, e.g., random access memory (RAM), coupled to bus 604 for storing information and instructions for processors 210A, 210B, and 210C. System 600 also includes computer usable non-volatile memory 610, e.g., read only memory (ROM), coupled to bus 604 for storing static information and instructions for processors 210A, 210B, and 210C. Also present in system 600 is a data storage unit 612 (e.g., a magnetic or optical disk and disk drive) coupled to bus 604 for storing information and instructions.
Computer system 600 of
System 600 also includes or couples with display 116 for visibly displaying information such as alphanumeric text and graphic images. In some embodiments, system 600 also includes or couples with one or more optional touch sensors 138 for communicating information, cursor control, gesture input, command selection, and/or other user input to processor 210A or one or more of the processors in a multi-processor embodiment. In some embodiments, system 600 also includes or couples with one or more optional speakers 150 for emitting audio output. In some embodiments, system 600 also includes or couples with an optional microphone 160 for receiving/capturing audio inputs. In some embodiments, system 600 also includes or couples with an optional digital camera 170 for receiving/capturing digital images as an input.
Optional touch sensor(s) 138 allow a user of computer system 600 (e.g., a user of an e-reader of which computer system 600 is a part) to dynamically signal the movement of a visible symbol (cursor) on display 116 and to indicate user selections of selectable items displayed. In some embodiments, other implementations of a cursor control device and/or user input device may also be included to provide input to computer system 600; a variety of these are well known and include trackballs, keypads, directional keys, and the like.
System 600 is also well suited to having a cursor directed or user input received by other means such as, for example, voice commands received via microphone 160. System 600 also includes an input/output (I/O) device 620 for coupling system 600 with external entities. For example, in one embodiment, I/O device 620 is a modem for enabling wired communications, or a modem and radio for enabling wireless communications, between system 600 and an external device and/or an external network such as, but not limited to, the Internet. I/O device 620 may include a short-range wireless radio such as a Bluetooth® radio, a Wi-Fi radio (e.g., a radio compliant with Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards), or the like.
Referring still to
In some embodiments, all or portions of various embodiments described herein are stored, for example, as an application 624 and/or module 626 in memory locations within RAM 608, ROM 610, computer-readable storage media within data storage unit 612, peripheral computer-readable storage media 602, and/or other tangible computer readable storage media.
Although illustrative embodiments have been described in detail herein with reference to the accompanying drawings, variations to specific embodiments and details are encompassed by this disclosure. It is intended that the scope of embodiments described herein be defined by claims and their equivalents. Furthermore, it is contemplated that a particular feature described, either individually or as part of an embodiment, can be combined with other individually described features, or parts of other embodiments.