In general, this document describes systems and techniques for presenting electronic images such as digital or digitized photographs.
Digital photography has simplified taking, viewing, and printing photographs. Photographs can be taken using either high-end equipment, such as digital single lens reflex (SLR) cameras, or lower resolution cameras, including point-and-shoot cameras and cellular telephones with suitable capabilities. Photographs can be transferred from the cameras to other media, including computers, printers, and storage devices, either individually as files or collectively as folders containing multiple files.
Software applications, such as iPhoto (manufactured by Apple Computer, Inc. of Cupertino, Calif.), can be used to arrange, display, and edit digital photographs obtained from a camera or any other electronic image in a digital format. Such software applications provide a user in possession of a large repository of photographs with the capabilities to organize, view, and edit the photographs. Users can organize photographs into albums and create slide shows to view the albums. Software manufacturers regularly add features to the software so that frequent operations, including transferring photographs from a camera to a computer and arranging and displaying the photographs, are relatively easy for an average user to perform.
In one example, a system can upload multiple albums of images, display each album as a thumbnail in a user interface, represent each album by an image in the album, and allow a user to scan the images in the album by moving a cursor across the thumbnail representing the album.
In one aspect, a computer implemented method is described. The method includes displaying, within a user interface, a view pane having a vertical direction and a horizontal direction, displaying, within the view pane, a poster frame represented by a bounded region, the poster frame representing a container, the container including several objects, wherein each object has an associated location identifier, and grouping two or more objects determined to be in sufficient proximity based on comparing the associated location identifiers of the grouped objects.
This, and other aspects, can include one or more of the following features. The object can represent a digital image. The grouping can be performed in response to user input. Each object can be captured using a device. The location identifier can be associated with the object before the object is captured. The location identifier can be associated with the object after the object is captured. The associated location identifier can be altered based on user input. The location identifier can be obtained from a global positioning system. The location identifier can include at least one of latitude and longitude.
In another aspect, a medium bearing instructions to enable one or more machines to perform operations is described. The operations include displaying, within a user interface, a view pane having a vertical direction and a horizontal direction, displaying, within the view pane, a poster frame represented by a bounded region, the poster frame representing a container, the container including several objects, wherein each object has an associated location identifier, and grouping two or more objects determined to be in sufficient proximity based on comparing the associated location identifiers of the grouped objects.
This, and other aspects, can include one or more of the following features. The object can represent a digital image. The grouping can be performed in response to user input. Each object can be captured using a device. The location identifier can be associated with the object before the object is captured. The location identifier can be associated with the object after the object is captured. The associated location identifier can be altered based on user input. The location identifier can be obtained from a global positioning system. The location identifier can include at least one of latitude and longitude.
In another aspect, a computer implemented method performed by a digital imaging device is described. The method includes receiving from a remote source geographic location information relating to a location of the digital imaging device, and associating the received geographic location information with one or more digital image objects captured by the digital imaging device at or near a location at which the geographic location information was received from the remote source.
This, and other aspects, can include one or more of the following features. The digital imaging device can include at least one of a digital image camera and a digital video camera. Receiving geographic information from the remote source can include receiving Global Positioning System (GPS) information from an orbiting satellite. Receiving geographic information from the remote source can include receiving signals from a terrestrial-based system. The terrestrial-based system can include one or more of a Long Range Navigation (LORAN) system and a cellular telephone network. Receiving geographic information from the remote source can include receiving information from a human user. The digital image object can include at least one of a digital photograph and a digital video.
In another aspect, a medium bearing instructions to enable one or more machines to perform operations is described. The operations include receiving from a remote source geographic location information relating to a location of the digital imaging device, and associating the received geographic location information with one or more digital image objects captured by the digital imaging device at or near a location at which the geographic location information was received from the remote source.
This, and other aspects, can include one or more of the following features. The digital imaging device can include at least one of a digital image camera and a digital video camera. Receiving geographic information from the remote source can include receiving Global Positioning System (GPS) information from an orbiting satellite. Receiving geographic information from the remote source can include receiving signals from a terrestrial-based system. The terrestrial-based system can include one or more of a Long Range Navigation (LORAN) system and a cellular telephone network. Receiving geographic information from the remote source can include receiving information from a human user. The digital image object can include at least one of a digital photograph and a digital video.
In another aspect, a computer implemented method is described. The method includes receiving information including one or more digital image objects and geographic location information associated with one or more of the digital image objects, and grouping together two or more digital image objects having associated geographic location information indicating that the two or more digital image objects were captured at or near a substantially same location.
This, and other aspects, can include one or more of the following features. The geographic location information for a digital image object can be contained within the digital image object. The digital image object can include at least one of a digital photograph and a digital video. The method can further include applying a common image processing operation to all digital image objects within a same group. The common image processing operation can include one or more of viewing, editing, moving, filtering, and storing. The method can further include comparing respective geographic location information associated with two digital image objects to determine if the two digital image objects were captured at respective locations sufficiently close to be deemed the substantially same location.
In another aspect, a medium bearing instructions to enable one or more machines to perform operations is described. The operations include receiving information including one or more digital image objects and geographic location information associated with one or more of the digital image objects, and grouping together two or more digital image objects having associated geographic location information indicating that the two or more digital image objects were captured at or near a substantially same location.
This, and other aspects, can include one or more of the following features. The geographic location information for a digital image object can be contained within the digital image object. The digital image object can include at least one of a digital photograph and a digital video. The operations can further include applying a common image processing operation to all digital image objects within a same group. The common image processing operation can include one or more of viewing, editing, moving, filtering, and storing. The operations can further include comparing respective geographic location information associated with two digital image objects to determine if the two digital image objects were captured at respective locations sufficiently close to be deemed the substantially same location.
The systems and techniques described here may provide one or more of the following advantages. Several images taken over a period of time can be grouped and collectively uploaded as albums. Each album can be a container represented by a poster frame on a user interface, where the poster frame displays an image from the container. This can meaningfully represent a container containing images and allow users to identify the container based on the representative image depicting the container. Further, each container can be represented by a poster frame, and the poster frames representing containers can be arranged within the user interface to indicate the chronological order in which the images were taken. The poster frames can be wrapped within the viewable area of the user interface to avoid horizontal scrolling within the user interface to access poster frames. In addition, the images within a poster frame can be viewed by placing a cursor, operated by a pointing device such as a mouse, at a desired position on the poster frame representing the container. Multiple images in a container can be scanned by moving the cursor across the poster frame representing the container. Furthermore, the management of large repositories of images can be simplified. In addition, images can be grouped based on one or more key words associated with the images, e.g., location.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
The user interface 100 can include an information pane 120. The information pane 120 can display metadata related to the most recently accessed poster frames 110. In some implementations, the information pane 120 can display metadata related to the poster frame 110 currently being accessed. For example, a poster frame 110 can display multiple images taken at several time instances. The information pane 120 can display information including the time stamps of the first and last images in the container represented by a poster frame 110, the number of images in the container, the size of the container (e.g., in gigabytes), and the like.
The user interface 100 can include a tool bar 125. The tool bar 125 can include one or more user control buttons 130. The user control buttons 130 can be configured to perform operations including rotate, scan, start slide show, and the like upon activation, e.g., clicking by a user. The tool bar 125 can also include a slider 135 configured to alter the dimensions of a poster frame based on user input. In some implementations, the slider 135 can include a pointer 137 that can be moved. The position of the pointer 137 on the slider 135 can correspond to the dimensions of a poster frame 110. A user can alter the position of the pointer 137 using the cursor on the display device. In some implementations, the user can move the pointer 137 on the slider 135 by placing the cursor on the pointer 137 and dragging the pointer 137. In response to a change in the position of the pointer 137 on the slider 135, the dimensions of each poster frame 110 can be altered. A cursor can be represented by a conventional display 145 when positioned away from the poster frame 110. The conventional display can include an arrow.
In some implementations, a poster frame 110 can be represented by one of the images in the container that the poster frame 110 represents. When the container that the poster frame 110 represents is first uploaded for display on the view pane 105, the first image in the container can be assigned to represent the poster frame 110. Alternatively, any image in the container can be assigned to represent the poster frame 110. In some implementations, a user can rate the images in a container. The ratings of the images can be tracked and the poster frame 110 can be represented by the image with the highest rating. In other implementations, the user interactions with a container can be tracked. For example, a user may view one or more images in a container more often than other images in the container. An image viewed more often than the others can be used to represent the poster frame 110. In some implementations, a higher resolution image can be assigned to represent the container. In other implementations, a user can assign an image to represent a poster frame 110. The image representing a poster frame 110 can change over time due to one or more factors including addition of new images, deletion of old images, frequency of viewing, and the like.
The containers can be arranged in an order that can depend on factors including a name assigned to the container, a time stamp on the images in the container, and the like. Names can be assigned to containers by the cameras with which the images in the containers were taken. In a default implementation, the containers can be uploaded under the same names as those assigned to the containers by the cameras. The containers can be displayed chronologically in the order in which the images in the containers were taken based on the time stamp on each image and/or each container. Alternatively, the containers can be displayed alphabetically based on the container names.
In some implementations, the poster frames 110 can be arranged in an order beginning from a position substantially adjacent to the left vertical edge of the view pane 105. The first poster frame 110 can be displayed substantially adjacent to the top left hand corner of the view pane 105. A new poster frame 110 can be positioned to the right of a previously displayed poster frame 110 in the same row as the first poster frame 110. In this manner, the poster frames 110 can be arranged from left to right in a row. The default horizontal and vertical dimensions of all the poster frames 110 can be pre-determined and can be uniform. In a default implementation, the assigned horizontal and vertical dimensions may correspond to a central location of the pointer 137 on the slider 135. Two frames displayed on the same row can be separated by a predetermined space.
In some implementations, as poster frames 110 are arranged in a row, each frame separated by a system assigned space, the sum of the horizontal dimensions of the poster frames 110 in a row and the spaces between the poster frames 110 in the row can exceed the available horizontal dimension of the view pane 105. Consequently, a poster frame 110 can be positioned substantially adjacent to the right vertical edge of the view pane 105. In such cases, the next poster frame 110 can be wrapped and displayed as the first poster frame 110 in a new row vertically displaced from the first row. The position of the first poster frame 110 in a new row can be substantially vertically aligned with that of the first poster frame 110 in the previous row. The space between rows can be pre-determined and uniform for all rows. Thus, multiple poster frames 110 can be arranged within the horizontal viewable region of a view pane 105. In this manner, the need to scroll horizontally to view poster frames 110 that are outside the viewing area of the view pane 105 can be avoided. In addition, the order of display of the poster frames 110 can correspond to an order in which the images in the corresponding containers were taken. The progression of time can correspond to the position of the poster frames 110 going from left to right in the horizontal direction and top to bottom in the vertical direction.
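The row-wrapping arrangement described above can be expressed as a simple layout computation. The following is a minimal, non-authoritative sketch in Python, assuming hypothetical names (pane_width, frame_width, and so on) and uniform frame dimensions and spacing:

```python
def layout_poster_frames(num_frames, pane_width, frame_width, frame_height,
                         h_space, v_space):
    """Arrange poster frames left to right, wrapping to a new row whenever the
    next frame would extend past the right edge of the view pane."""
    positions = []  # (x, y) of the top-left corner of each poster frame
    x = y = 0
    for _ in range(num_frames):
        if x > 0 and x + frame_width > pane_width:
            x = 0                        # wrap: start a new, left-aligned row
            y += frame_height + v_space  # vertically displaced from the prior row
        positions.append((x, y))
        x += frame_width + h_space       # next frame to the right, fixed gap
    return positions
```

The same computation can be re-run whenever a poster frame is added, removed, or resized, so that the remaining frames reflow while the display order is preserved.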
A user may wish to alter the order of display of poster frames 110 in the view pane 105. Such alterations can include adding a new poster frame 110, removing, repositioning, resizing a displayed poster frame 110, and the like. In a default implementation, containers can be detected and uploaded in the view pane 105. A file can be identified to be an image based on the file type, e.g., JPG, TIFF, GIF, DWG, and the like. All the detected containers can be displayed in the view pane 105. In other implementations, a user can select the containers that the user wishes to display in the view pane 105. In some implementations, uploading and displaying containers as poster frames 110 can be a combination of automatic detection and choices by a user.
A user may wish to remove one or more poster frames 110 displayed in the view pane 105. The poster frames 110 that the user wishes to remove may be adjacent to each other. Alternatively, the positions of the poster frames 110 may be non-adjacent to each other on a same row or on different rows. The poster frames 110 can be selected individually or as a group. In some implementations, the user can remove the poster frames 110 by pressing the “Delete” key on a keyboard. In other implementations, the user may drag the selected poster frames 110 and drop them into a location outside the view pane 105 (e.g., Trash, Recycle Bin). When a poster frame 110 is deleted from display, the remaining poster frames 110 can be repositioned to occupy the void created by the deleted poster frame 110. For example, if two rows of poster frames 110, each row containing five poster frames 110, are displayed in a view pane and if a user deletes the fourth poster frame 110 in the first row, the fifth poster frame 110 can be repositioned in the first row to occupy the void created by the deleted frame. Further, the first poster frame 110 in the second row can be repositioned to become the fifth poster frame 110 in the first row. In this manner, all poster frames 110 in a view pane 105 can be displayed as a continuous sequence.
In some implementations, a user can change the position of a poster frame 110 in the view pane 105. A user can select a poster frame 110, drag the poster frame 110 from a present position and insert the poster frame 110 in a new position. Further, the position of all the poster frames 110 can be shifted either to the right, to a new row, or as required so that all poster frames 110 in a view pane are displayed as a continuous sequence.
When the sum of the vertical dimensions of poster frames 110 in rows and the spaces between the rows exceeds the vertical dimension of the view pane 105, a vertical scroll bar 140 can be incorporated in the user interface 100 to permit vertical scrolling to view poster frames that lie outside the area of the view pane 105. In some implementations, the contents of the view pane 105 can be vertically scrolled by placing a cursor on the vertical scroll bar 140 and dragging the bar. Alternatively, or in addition, a keyboard can be used to vertically scroll the view pane 105. A user can vertically scroll one or more rows by pressing a single key (e.g., arrow key) or a combination of keys (e.g., “command”+“home”, “command”+“end”, and the like). In other implementations, the user can pan the view pane 105 by placing the cursor anywhere on the view pane 105 and dragging the pane in a vertical direction.
In some implementations, moving the pointer 137 on the slider 135 from the left of the user interface 100 to the right of the user interface 100 can cause an increase in the dimensions of each poster frame 110, and vice versa. As the dimensions of poster frames 110 in a row are increased using the slider 135, the horizontal and vertical dimensions of each poster frame 110 can be uniformly increased. The space between frames in the same row and between rows can also be uniformly increased to maintain the aesthetics of display and simplify viewing. In other implementations, the space between frames may be constant. As the dimensions of poster frames 110 in a row increase, the horizontal dimension of the row also increases. The horizontal dimension of the view pane 105 may be insufficient to display the poster frames 110 of larger dimensions in the same row. In such cases, the poster frame 110 on the right extreme of a row can be wrapped to the next row. All frames in the view pane 105 can be repositioned to accommodate the displaced frame while maintaining the order in which the poster frames 110 are displayed.
In some implementations, metadata related to each poster frame 110 can be displayed adjacent to each poster frame 110, for example, in the space between two rows. The metadata can include the name of the poster frame 110, which can be either a system default name or a user-defined name, a time stamp, the number of photos in the poster frame, and the like. When a user deletes or repositions a poster frame 110, the metadata corresponding to the poster frame 110 can also be deleted or repositioned, respectively.
A poster frame 110 that corresponds to a container can include one or more images. In some implementations, the images in a container can be photographs that may have been taken over a period of time. Cameras used to take photographs can store the photographs chronologically, with the earliest taken photograph stored first. Alternatively, the order can be alphabetical, based on the file name assigned to each photograph. The photographs can be imported in the same order in which the photographs are saved in the camera. Subsequently, the order in which the photographs are stored can be altered based on user input. Such alterations can include re-arranging the position of the photograph in the container, changing the name associated with a photograph and arranging the photographs alphabetically, and the like. In other implementations, the images in a container can be electronic images such as CAD drawings. The drawings can be assigned file names either automatically or based on user input. The drawings can be imported in an alphabetical order based on the assigned file name. Subsequently, the order can be altered by operations including altering the file name, re-arranging the position of the drawing, and the like. When the poster frames 110 are displayed in the view pane 105, previewing the images contained in each poster frame 110 can be enabled. In response to placing a cursor at a position on the poster frame 110, an image contained in the poster frame 110 can be displayed in place of the image assigned to represent the poster frame 110.
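One way to realize the preview behavior described above is to map the horizontal cursor position within the poster frame to an image index, with each image owning an equal slice of the frame's width. The sketch below is only an illustration under that assumption; the names (cursor_x, frame_left, and so on) are hypothetical:

```python
def image_index_for_cursor(cursor_x, frame_left, frame_width, num_images):
    """Map the cursor's horizontal position within a poster frame to the index
    of the image in the container to display in place of the representative image."""
    if num_images == 0:
        return None
    fraction = (cursor_x - frame_left) / frame_width  # 0.0 at left edge, 1.0 at right
    fraction = min(max(fraction, 0.0), 1.0)           # clamp to the frame bounds
    return min(int(fraction * num_images), num_images - 1)
```

For example, with a 200-pixel-wide frame containing 10 images, cursor positions 0 through 19 would preview the first image, positions 20 through 39 the second image, and so on.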
In some implementations, when the cursor is scanned across a poster frame 110 and moved away from the poster frame 110, the display of the poster frame 110 can be restored to the image assigned to represent the poster frame 110. In other implementations, the display of the poster frame 110 can be restored to the image assigned to represent the poster frame 110 depending on the position of the cursor on the poster frame. In other implementations, the user can be provided with an option to either preview images in a container represented by a poster frame by scanning over the poster frame or to only view the image assigned to represent the poster frame 110 when the cursor is scanned across the poster frame 110. In other implementations, the most recent image in the poster frame 110 that was previewed by scanning can be displayed. In other implementations, a user can choose an image to represent a poster frame. The user may position the cursor at a location on the poster frame to preview the image in the poster frame. The user can set the previewed image to represent the poster frame by striking a key, e.g., the “Command” key. Alternatively, the user can set the image to represent the poster frame using the pointing device to operate the cursor. A cursor can be operated using virtually any suitable pointing device (e.g., mouse, track ball, stylus, touch screen, touch pad). The images in the container can be previewed by simply moving the cursor across the poster frame 110 using the pointing device without requiring additional operation, such as clicking a mouse at any position on the poster frame 110 representing the container.
In some implementations, as the user moves the cursor across a poster frame 110, the display of the cursor can be altered from a conventional display (e.g., an arrow) to a specific display, e.g., an arrow including an image. Upon detecting that the cursor has been positioned over a poster frame 110, the display of the cursor can be automatically changed from the conventional display to the specific display. This can indicate that a poster frame 110 is being previewed. In some implementations, the specific display can be defined by the system. In other implementations, the specific display can be altered by the user. For example, the user can have a database of displays. The user can use one of the displays as the specific display. In other implementations, the user can define a specific display for each poster frame. Alternatively, the user can define multiple displays for the same poster frame. The user can define a first specific display for a first group of poster frames and a second specific display for a second group of poster frames. In some implementations, a plurality of specific displays can be configured such that the specific display of the cursor is altered based on a relationship between the images being previewed. For example, the specific display, during preview, of images in a container that share a common attribute value, such as a date when the images were created, can be common. The relationship between images that share a common specific display can be predetermined. Alternatively, the relationship can be specified by a user. In some implementations, the specific display and the conventional display can be simultaneously displayed when the cursor is positioned over a poster frame. When the cursor is moved away from the poster frame, only the conventional display can be displayed.
In addition, in some implementations, a preview scroll bar 155 can be displayed on a poster frame 110 to enable previewing the images in the container represented by the poster frame 110.
The preview scroll bar 155 can include a preview pointer 160 within the bounded region of the preview scroll bar 155. A user can alter the position of a preview pointer 160 in the preview scroll bar 155 using the cursor operated by the suitable pointing device. The position of the preview pointer 160 in the preview scroll bar 155 can correspond to an image in the container such that as the position of the preview pointer 160 in the preview scroll bar 155 is altered, the image displayed in the bounded region of the poster frame 110 is also altered. In some implementations, the size of the preview pointer 160 in the preview scroll bar 155 can correspond to the number of images in the container represented by the poster frame 110. A user can move the preview pointer 160 using the pointing device, e.g., by positioning the cursor over the preview pointer 160, clicking a mouse, and dragging the preview pointer 160. As the preview pointer 160 is moved, an image in the container corresponding to the position of the preview pointer 160 can be displayed within the bounded region of the poster frame 110. In this manner, the images in the container can be previewed. In other implementations, the scroll bar 155 can include advance tools 165 on the edges of the preview scroll bar 155. The advance tools 165 on the edges of the preview scroll bar 155 can be configured to advance the images in the container. For example, if the orientation of the scroll bar is horizontal, by clicking on the advance tool on the left edge of the scroll bar using the pointing device, the user can step through each image in the container until the user views the first image in the container. Similarly, by clicking on the advance tool on the right edge of the scroll bar using the pointing device, the user can step through each image in the container until the user views the last image in the container. In this manner, the scroll bar can be further configured to enable a user to step through the images in the container one at a time.
The number of images that each poster frame 110 can contain is limited only by available storage space. The dimensions of a poster frame 110 can remain constant regardless of the number of images in the container represented by the poster frame 110. In a poster frame 110 displayed on a display device, a physical space (e.g., one or more pixels) in the horizontal dimension of the poster frame 110 can represent an image. The physical space representing an image in a container containing few images may be larger when compared to that representing an image in a container containing several images. If the resolution of the cursor is less than the physical space representing an image, then the same image can be previewed by placing the cursor at multiple adjacent positions on the poster frame 110. For example, if a container contains only two images, the first image can be previewed if the cursor is placed at any location on the left half of the poster frame 110 representing the container. Similarly, the second image can be previewed if the cursor is placed at any location on the right half of the poster frame 110 representing the container. Conversely, if a poster frame 110 represents several images, the smallest unit of physical space of the display device may be greater than the physical space required to represent an image. In such cases, if the resolution of the cursor is greater than the physical space representing an image, the physical space occupied by a cursor may span more than one image. Consequently, it may not be possible to preview all the images in the container when the cursor is scanned horizontally across the poster frame 110 representing the container.
In some implementations, while previewing a container, certain images in a container can be skipped if the resolution of the cursor is greater than the physical space representing each image in the container. In some implementations, one or more images can be skipped based on the order in which the images are stored. For example, when the cursor is moved by a distance equal to the resolution of the cursor (e.g., 1 pixel), two images may be skipped. In this example, as the cursor is moved from the left edge to the right edge of the poster frame 110, the first, fourth, seventh image, and so on, may be displayed in place of the image assigned to represent the poster frame 110. In some implementations, the size of the images can be used to skip images during previews. For example, high resolution images are generally files of larger sizes. All the high resolution images in a container may be displayed during a preview. Low resolution images may be excluded from the preview. In some implementations, the previewed images can be those images that have a higher rating than other images in the container. In some implementations, a rounding algorithm can be used to choose photos that can either be included or excluded from the preview.
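The skipping behavior can be illustrated with a stride-based selection: when a container holds more images than there are distinguishable cursor positions across the frame, only an evenly spaced subset is previewable. This is a hedged sketch rather than the actual rounding algorithm referred to above:

```python
def previewable_indices(num_images, frame_width_px, cursor_resolution_px=1):
    """Choose which images can be reached while scanning the cursor across the
    poster frame: at most one image per distinguishable cursor position."""
    max_positions = max(frame_width_px // cursor_resolution_px, 1)
    if num_images <= max_positions:
        return list(range(num_images))    # every image can be previewed
    stride = num_images / max_positions   # e.g. show the 1st, 4th, 7th image, ...
    return [int(i * stride) for i in range(max_positions)]
```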
In some implementations, the tool bar can include a zoom control button. When the resolution of the cursor is greater than the physical space representing each image in a container, the zoom control button can be used to increase the granularity of the poster frame. For example, the zoom control button can be used to enlarge the poster frame while the number of images in the container remains the same. In this manner, the physical space representing each image can be increased to either equal or exceed the resolution of the cursor. In such implementations, upon zooming the poster frame, more images in the container represented by the poster frame can be previewed by moving the cursor across the poster frame. In some implementations, the zoom control button can be activated by positioning the cursor over the zoom control button and clicking the mouse or other pointing device used to operate the cursor. Alternatively, the zoom control button can be activated by a key stroke on a keyboard.
In some implementations, the speed at which a cursor is scanned across a poster frame 110 can be higher than the speed at which the display of images in a poster frame 110 can be updated. If the speed at which the cursor is scanned across a poster frame 110 is greater than a threshold, certain images can be displayed for a preview while other images can be skipped. The images chosen for display can be based on factors including a position of the image in the order of storage, size of the image, ratings of the image, and the like. In some implementations, if the speed at which the cursor is scanned is sufficiently high, then no image in a container may be previewed.
In some implementations, an image in a container can be chosen by placing the cursor over the poster frame representing the container and clicking the mouse. Alternatively, or in addition, an image can be chosen by placing the cursor over the poster frame representing the container and selecting a key on a keyboard, e.g., the “Enter” key. Additionally, when an image in a container in a poster frame 110 is previewed, successive images can subsequently be previewed using the keys on the keyboard. For example, the user can place a cursor on a poster frame 110. In response, an image in the container can be displayed corresponding to the location of the cursor in the poster frame 110. Subsequently, the user can use keys on a keyboard (e.g., arrow keys) to preview successive images stored in the container. In some implementations, by pressing the right arrow key, the user can scan from the beginning of the container to the end of the container. Conversely, the user can scan from the end to the beginning of the container using the left arrow key. In other implementations, any combination of keys can be used to scan successive photos in the container. In addition, keys and/or key sequences can be used to jump to the beginning or end of a container from anywhere in the container. Such keys can include the “Home” key, the “End” key, and the like. In addition, keys and key sequences can also be used to jump from one container to another, e.g., “Command”+“Home” key to jump to the first container, “Command”+“End” key to jump to the last container, tab key to jump from one container to the next, and the like.
In some implementations, a user can split a container into multiple containers using a key stroke. For example, a user previewing the images in a container can place the cursor at any position on the container. Subsequently, the user can strike a key, e.g., the “Command” key. In response, the container can be split into two containers, where each container can be represented by a poster frame. When a container represented by a poster frame 110 is split into two containers, each container represented by a respective poster frame 110, the poster frames 110 in the view pane 105 can be re-positioned to accommodate the new poster frame 110. Such re-positioning can include moving poster frames in the same row, moving a poster frame to a different row, creating a new row containing one or more poster frames, and the like. In this manner, the sequence in which the poster frames 110 are displayed can be retained. A new container can further be divided into two or more containers. In some implementations, the number of containers into which one container can be divided can be specified by a user. In some implementations, the cursor can be positioned at a location on a first poster frame. An image corresponding to the location of the cursor can be displayed within the bounded region of the first poster frame. When a user strikes a key to split the container represented by the first poster frame, the first of the two split containers can include all the images from the start of the first container to the image that was being previewed. The second of the two containers can include the remainder of the photographs in the first container. In some implementations, when a first container is split, each of the split containers can contain half the number of images of the first container. In other implementations, when a first container is divided into a number of containers specified by the user, each split container can contain the same number of images. In other implementations, the number of images in each split container can be specified by the user.
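A splitting operation of the kind described above can be sketched as follows; the split point is taken to be the index of the image being previewed when the key is struck. This is an assumption-laden illustration rather than the actual implementation:

```python
def split_container(images, previewed_index):
    """Split one container into two at the previewed image: the first split
    container keeps the images from the start of the container up to and
    including the previewed image; the second keeps the remainder."""
    first = images[:previewed_index + 1]
    second = images[previewed_index + 1:]
    return [part for part in (first, second) if part]  # drop an empty container
```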
In some implementations, key words can be associated with poster frames 110. For example, all poster frames that represent containers containing photographs that were taken during a time frame (e.g., the same week) can be associated with common key words. The poster frames can be identified based on the key words, and poster frames 110 associated with the same key words can be manipulated as a group, e.g., displayed on a view pane, deleted, merged, and the like. Alternatively, a user can provide key words to poster frames 110. For example, a user may take photographs at an event that occurs at regular intervals of time, e.g., every week. A user may associate a name with the photographs taken during the event. Subsequently, the user can identify all containers represented by poster frames 110 using the name. In another example, the images may correspond to CAD drawings where groups of drawings represent different parts of a machine. A user may assign key words denoting a part of the machine to the images corresponding to the part.
The orientation of the images 205 depends on the orientation of the camera used to take the photographs 205 (e.g., landscape or portrait). In a default implementation, the horizontal and vertical dimensions of an image 205 in landscape orientation can equal the horizontal and vertical dimensions of a poster frame 110 displayed in a landscape orientation in the view pane 105. The horizontal and vertical dimensions of an image 205 in portrait orientation can equal the vertical and horizontal dimensions of a poster frame 110, respectively, displayed in the view pane 105. The space separating two adjacent images 205 can equal the space separating two adjacent poster frames 110. The space separating two rows of images 205 can equal the space separating two rows of poster frames 110. Images 205 displayed in a row can be in either landscape orientation or portrait orientation. In some implementations, the bottom edges of all the images 205 in a row can be aligned. In such implementations, the top edge of the images 205 in the row may or may not be aligned depending upon the orientations of the images 205 positioned in that row. Alternatively, in some implementations, the top edges of all the images 205 in a row can be aligned.
In some implementations, the space occupied by the images 205 in a container, displayed across one or more rows, may exceed the vertical dimension of the view pane 105. In such implementations, a vertical scroll bar can be incorporated in the user interface 100 so the user can scroll the view pane 105 to access images 205 that are positioned outside the viewing area of the view pane 105. A user can use either the pointing device (e.g., mouse, track ball, stylus, touch pad, touch screen, near contact screen) that controls the cursor, a keyboard, or a combination of both to operate the vertical scroll bar and scroll the view pane 105.
In some implementations, when a user clicks on a poster frame 110, the images 205 contained in the poster frame 110 can be displayed in the order in which they are stored. The order can be based on the time when each image 205 was taken. In some implementations, one or more images 205 in a container can be compared and boundaries 215 within a container can be recommended. In some implementations, the times at which the images 205 in the container were taken can be compared. For example, a user may have taken a group of photographs 205 on a first day. Subsequently, the user may have taken a second group of photographs 205 on a second day. The user may upload both groups of photographs 205 simultaneously. Initially, both groups of photographs 205 may be displayed as belonging to the same container. The time stamps on the photographs 205 may be compared and a recommendation may be presented to split the container into two groups, the first group containing the photographs 205 taken on the first day and the second group containing the photographs 205 taken on the second day.
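The time-stamp comparison described above can be sketched as a scan for large gaps between consecutive images; the 12-hour gap threshold below is purely an assumed value for illustration, and each image is assumed to carry a `timestamp` attribute:

```python
from datetime import timedelta

def recommend_boundaries(images, gap=timedelta(hours=12)):
    """Recommend boundary positions before any image whose time stamp is
    separated from the previous image's time stamp by more than `gap`
    (e.g., photographs taken on different days).  `images` is assumed to be
    sorted chronologically."""
    boundaries = []
    for i in range(1, len(images)):
        if images[i].timestamp - images[i - 1].timestamp > gap:
            boundaries.append(i)  # recommend a boundary 215 before image i
    return boundaries
```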
In another example, the images 205 in a container may be compared based on the content of the images 205. A container may contain a first group of images 205 containing a blue background and a second group of images 205 containing a green background. The backgrounds of the images 205 can be compared, the images 205 with common content (e.g., a common background) can be grouped, and a recommendation may be presented that the images 205 in the two groups may belong to separate containers. In some implementations, one or more combinations of content of images 205 and metadata associated with the images 205 can be used in making comparisons.
In some implementations, the recommendation to split a container into two groups can be presented by altering a display of the portion of the view pane 105 on which the thumbnails, representing the images 205 identified as belonging to the same group, are positioned.
In some implementations, it may be determined that images 205 in a container can belong to multiple groups. In such cases, the display of the view pane 105 can be changed such that images 205 identified as belonging to the same group have a common background, regardless of the number of groups. Images 205 identified as belonging to the same group can be adjacently positioned in the same row or separately on the same or different rows.
In some implementations, in addition to providing a recommendation to split a container into two or more containers based on the view pane 105 display, a user can be provided with mechanisms to accept or reject the recommendations or, alternatively, make user-modifications to the groups in a container. In some implementations, an “OK” button can be displayed at the boundary. A user can accept the boundary by positioning the cursor on the “OK” button and clicking the mouse configured to operate the cursor. In some implementations, when a user positions a cursor on a boundary 215, a merge icon 220 (e.g., a “+” sign) can be displayed at the boundary 215. If a user clicks on the merge icon 220, the two groups separated by the boundary 215 can be merged into the same group. Upon merging, the background display of the view pane 105 for the two groups can be changed to be uniform.
In implementations with no boundaries in a container, when a user identifies a boundary 215 between a first and a second image 205 in the container, the images 205 from the beginning of the container to the first image 205 can be grouped to create a first container. Similarly, the images 205 from the second image 205 to the end of the container can be grouped to create a second container. Subsequently, when a view pane 105 displaying poster frames 110 representing containers is displayed, what was originally one poster frame 110 can be displayed as two poster frames 110, each poster frame 110 representing a container containing images 205 of the first and second groups, respectively.
In some implementations, one or more boundaries 215 may already be identified in a container. In such implementations, the user can specify a boundary 215 between two images 205 in a group by positioning the cursor on and clicking a split icon 230 between the two images 205 in the group. A first group including the images 205 beginning from the first image 205 of the group to the first of the two images 205 between which the user specified a boundary 215 can be created. A second group including the images 205 beginning from the second of the two images 205 between which the user specified the boundary 215 to the last image 205 of the group can be created. In other implementations, a user can drag an image 205 from one group and include the image 205 in another group. The user can drag the images 205 across boundaries 215 by operations including drag and drop using the pointing device used to operate the cursor, cut and paste using the keyboard, or combinations of the pointing device and the keyboard. In this manner, a user can split images 205 in a container into one or more containers.
Subsequent to grouping images 205 into containers, when the poster frames 110 representing containers are displayed on the view pane 105, each group that was created in a container can be displayed as a new poster frame 110. The new poster frame 110 for each group can be positioned at or adjacent to the same location as the poster frame 110 for the original container. The remaining poster frames 110 in the view pane 105 can be repositioned such that the order of display of poster frames 110, which can represent the time line in which the images 205 in each container were taken, is maintained.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the subject matter. For example, as the slider 135 on the user interface 100 is operated to reduce the size of the thumbnails representing frames (e.g., poster frames 110, thumbnails representing images 205), the horizontal dimension of a row of thumbnails can be decreased. In some implementations, thumbnails from one row can be repositioned to another row so that the horizontal dimension of the rows equals the horizontal dimension of the view pane 105. In other implementations, even if the horizontal dimension of the row decreases due to decrease in thumbnail dimensions, each thumbnail can be retained in the same position on the row.
In some implementations, moving the pointer 137 on the slider 135 to the right can cause an increase in the dimensions of the thumbnails. In such implementations, when the pointer 137 on the slider 135 is positioned at the right extreme of the slider 135, each thumbnail (e.g., poster frame, thumbnail representing an image 205) in the view pane 105 can occupy the entire view pane 105. In such implementations, a navigation mechanism may be incorporated into the tool bar 125 so that a user may navigate to access the thumbnails on the view pane 105.
In some implementations, the user can view each image 205 in a container in the view pane 105 by choosing the image 205. When a user views one of the images 205 in the container, the remainder of the images 205 in the container can be displayed as thumbnails in an additional pane above the view pane 105. In such implementations, the user can choose the next image 205 that the user wishes to view from the additional pane displayed above the view pane 105.
In some implementations, the two dimensional time line may correspond to a vertical positioning of thumbnails. For example, the poster frames 110 can be arranged vertically in columns. When the sum of the vertical dimensions of the poster frames 110 and the spaces between the frames exceeds the vertical dimension of the view pane 105, subsequent poster frames can be positioned in a new, horizontally displaced column. The first poster frame 110 of the new column can be substantially vertically aligned with the first poster frame 110 of the previous column. In this manner, vertical scrolling to access poster frames outside the viewing area of the view pane 105 can be avoided. When the space occupied by the columns exceeds the horizontal dimension of the view pane 105, a horizontal scroll bar can be incorporated in the user interface 100 to allow the user to navigate to access columns of poster frames 110 that may lie outside the viewing area of the view pane 105.
In some implementations, thumbnails representing images 205 can also be displayed in columns. In other implementations, the horizontal or vertical display of poster frames and/or images 205 can be based on user input.
In some implementations, two or more poster frames 110 displayed on the view pane 105 can be merged. In other implementations, when a user scans a mouse across a poster frame 110, two images 205 positioned consecutively in the container represented by the poster frame 110 can be displayed on the frame such that the first of the two images 205 is displayed on the left half of the poster frame 110 and the second image 205 is displayed on the right half. Based on the display, the user can create boundaries 215 between the two images 205. In such implementations, a container can be split into two containers, such that the first split container contains images 205 beginning from the start of the container to the first image 205, while the second split container contains images 205 from the second image 205 to the end of the container. Subsequently, each split container can be represented by a separate poster frame 110.
In some implementations, each container can be represented by more than one frame. A second slider 135 may be incorporated in the tool bar 125 and operatively coupled to change the number of poster frames 110 used to represent a container. For example, a user may position the slider 135 such that a container is represented by two poster frames 110. In such cases, when the user positions the cursor over one of the two frames 110, a first image 205 corresponding to the position of the cursor on the poster frame 110 can be displayed on the first poster frame 110. An image 205 adjacent to the displayed image 205 can be displayed on the second poster frame 110. Based on the display, the user can create boundaries 215 between two images 205.
In some implementations, a user can create a new container while previewing a container by scanning the cursor across the poster frame 110. When a user creates a new container, an icon representing a new container can be displayed on the project pane 115. When the user positions the cursor on the poster frame 110, an image 205 corresponding to the position of the cursor on the poster frame 110 can be displayed. The user can include the image 205 in the new container by operations including drag and drop using the pointing device, copy and paste using the keyboard, or combinations of pointing device and keyboard operations. In this manner, the user can create one or more containers of images 205 chosen from different containers represented by poster frames 110 on the view pane 105.
The dimensions of the user interface 100 can be altered based on user input using a pointing device to operate a cursor, a keyboard, or both. In some implementations, altering the dimensions of the user interface 100 causes the dimensions of the thumbnails in the view pane 105 in the user interface 100 to be changed. In other implementations, despite a change in the dimensions of the user interface 100, the dimensions of the thumbnails remain unaltered.
In some implementations, a view pane 105 may represent folders containing files. As a user scrolls across the poster frame 110, metadata associated with the document in the folder (e.g., file name, date of creation, last date of editing, and the like) can be displayed on the poster frame 110. In other implementations, each poster frame 110 can represent a document, e.g., a text document. As the user scrolls across the poster frame 110, each page in the document can be displayed on the poster frame 110. In this manner, a user may be able to preview the contents of the text document. In other implementations, the file can be an Adobe PDF file and each page in the PDF file can be displayed on the poster frame, the file can be a Microsoft Power Point file and each slide in the Power Point file can be displayed on the poster frame, the file can be a Microsoft Excel file and each spreadsheet in the Excel file can be displayed on the poster frame, and the like.
In some implementations, the user interface including the view pane and the poster frames representing containers of images can be viewed on virtually any suitable display device connected to the storage device on which the images are stored. The display device can include a computer monitor, an LCD screen, a projection screen, and the like. Alternatively, or in addition, the user interface and the images can be transmitted over a network (e.g., wired, wireless, internet, and the like) for display on a remote display device. In some implementations, the images to be viewed can be stored locally and can be viewed from a remote location. A system in the remote location can be operatively coupled to the local system to communicate over a network, e.g., the internet. The local system can be a server on which the images are stored and on which the user interface and related software are installed. The remote system can be a computer connected to the internet. A user at the remote system can enter a uniform resource locator (URL) pointing to the server in a web browser. In response, the local system can present the remote system with the user interface. Using the user interface, a user in the remote location can preview images. In some implementations, the images may reside on the local system. A user at the remote system can preview the images in the local system. In other implementations, the user at the remote system can preview images stored in the remote system using the user interface transmitted to the remote system from the local system over the network. In some implementations, a first user at a first remote location can perform operations including previewing images in the local or first remote system, creating containers of images, and the like, and subsequently transmit the containers with images to the local system. Subsequently, a second user wishing to view the images created by the first user can establish a connection with the local system. The local system can transmit the user interface to the second user. In this manner, the second user at the second remote location can view the contents of the containers created by the first user. In other implementations, the first user can transmit the containers containing images to the second user. The second user can access the user interface in the local system to view the images in the containers stored in the second user's remote system. Alternatively, the second user can access the images stored in the first user's system and preview the images using the user interface transmitted to the second user from the local system. In this manner, images stored in one location can be viewed and manipulated at a different location.
In other implementations, the location identifier can be associated with an object after the object is captured. For example, after the user captures a digital image, the user can add the location identifier to the captured image using the device with which the image was captured. Alternatively, when the user uploads the objects into a user interface for editing, the user can add location identifiers to the objects. In some implementations, the device, e.g., the camera used to capture images, can include a GPS system that can receive and associate location information, such as latitude, longitude, and the like, with a captured image. In this manner, several objects can be captured and a location identifier can be associated with each captured object.
Subsequently, a user can group two or more objects based on the location identifiers. In some implementations, a first location identifier corresponding to a first object and a second location identifier corresponding to a second object can be compared. In instances where the user captured the first and second objects from the same or sufficiently near locations, the first and second location identifiers can be identical. In instances where the user moved from a first to a second location before capturing the first and second objects, respectively, the first and second location identifiers can be different. The first and second location identifiers can be compared at 615. For example, if the first and second location identifiers are latitude and longitude values, a difference between the latitude values and longitude values can be determined. Alternatively, if the first location identifier and the second location identifier are reference distances from a point, then the distance between the two identifiers can be determined. The proximity of the locations where the first and second objects were captured can be determined at 620. In some implementations, if the result of comparing the two identifiers is less than a pre-set threshold, then the locations of the first and second objects can be deemed to be in sufficient proximity to each other to be regarded as essentially the same location. For example, if the first and second identifiers each correspond to a distance from a point, and the distance between the first and second identifiers is less than a pre-set distance, then the locations where the first and second objects were captured are determined to be in proximity to each other. If the locations of two objects are determined to be in proximity to each other, then the objects can be grouped at 625. The determination of whether two objects are in sufficient proximity to be regarded as being at the same location can be performed essentially in real time by a device used to capture the objects (e.g., a digital camera) or subsequently in an editing or other software environment executing on a computer system.
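The comparison and grouping steps (615 through 625) can be sketched as follows for latitude/longitude identifiers. The planar distance approximation and the 1 km threshold are assumptions made only for illustration; each object is assumed to expose a `location` attribute holding a (latitude, longitude) pair:

```python
import math

def within_proximity(loc_a, loc_b, threshold_km=1.0):
    """Compare two (latitude, longitude) identifiers and decide whether they
    are close enough to be regarded as essentially the same location."""
    lat_a, lon_a = loc_a
    lat_b, lon_b = loc_b
    # Rough planar approximation: about 111 km per degree of latitude.
    d_lat_km = (lat_a - lat_b) * 111.0
    d_lon_km = (lon_a - lon_b) * 111.0 * math.cos(math.radians((lat_a + lat_b) / 2))
    return math.hypot(d_lat_km, d_lon_km) < threshold_km

def group_by_location(objects, threshold_km=1.0):
    """Place each object into the first existing group whose representative
    location is within the threshold; otherwise start a new group."""
    groups = []  # list of (representative_location, [objects])
    for obj in objects:
        for representative, members in groups:
            if within_proximity(obj.location, representative, threshold_km):
                members.append(obj)
                break
        else:
            groups.append((obj.location, [obj]))
    return [members for _, members in groups]
```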
Such comparison of the locations of objects based on the associated location identifiers can be extended to any number of objects, and the objects can be grouped into one or more groups. Operations including viewing, editing, moving, filtering, storing, and the like, can be performed on a group, which, in turn, can cause the operations to be performed on each object in the group. In some implementations, a digital imaging device can receive geographic location information relating to a location of the digital imaging device from a remote source. The digital imaging device can include a digital camera, a digital video camera, a cellular telephone configured to capture digital images, and the like. The geographic location information can include a latitude, a longitude, or virtually any other information that can characterize location. The geographic location information can be GPS data received from an orbiting satellite. Alternatively, or in addition, the geographic location information can be received from a terrestrial-based system such as a Long Range Navigation (LORAN) system, a cellular telephone network, or both. The geographic location information can also be received from a human user such as a user of the digital imaging device or a user who can access the digital imaging device, e.g., over a network that can be either wired or wireless. For example, a first user in possession of geographic location information can transmit the information to the digital imaging device, which can be in the possession of a second user. The received geographic location information can be associated with one or more digital image objects captured by the digital imaging device when the device is at or near the location at which the geographic location information was received. For example, if the digital imaging device is located at a first location, a digital image of an object is captured at the first location, and geographic location information related to the first location is received from a remote source, the geographic location information can be associated with the digital image object. This can enable a user, a system, or both to identify the digital image object based on the geographic location information. Further, the geographic location information can be applied to several digital image objects captured at the same or a substantially same location. Operations including viewing, editing, grouping, storing, and the like, can be performed on all digital image objects that share common geographic location information. In some implementations, two or more digital imaging devices can have respective geographic location information indicating that the two devices are at the same location or at two locations that are substantially near each other. In such implementations, when digital images are retrieved from each device, the images from the two devices can be automatically grouped, because each image is associated with geographic location information indicating that it was captured at the same or a substantially same location.
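Extending the pairwise comparison to any number of objects could, for example, take the form of a simple greedy grouping, shown below as an illustrative sketch only; the grouping strategy, names, and threshold are assumptions, and each object is assumed to carry a location attribute holding a (latitude, longitude) pair.

    # Illustrative greedy grouping: each object joins the first existing group whose
    # representative location is within the threshold; otherwise it starts a new group.
    def group_by_location(objects, threshold_degrees=0.001):
        def close(a, b):
            return (abs(a[0] - b[0]) < threshold_degrees and
                    abs(a[1] - b[1]) < threshold_degrees)

        groups = []
        for obj in objects:
            for group in groups:
                if close(group[0].location, obj.location):
                    group.append(obj)
                    break
            else:
                groups.append([obj])
        return groups

Group-level operations such as viewing, editing, or storing could then be applied by iterating over a group and performing the operation on each object it contains.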
In some implementations, two or more digital image objects can be associated with geographic location information indicating that the two objects were captured at or near a substantially same location. The two or more objects can be grouped together based on the associated geographic location information. The geographic location information can be contained within the digital image object. For example, a device with which the object was captured can include the location information. All digital image objects captured at or near a substantially same location can be associated with the same location information. Subsequently, when the device is moved, the location information can be updated, e.g., by receiving updated information from a remote source such as a GPS satellite. All image objects captured at or near the substantially same updated location can be associated with the updated location information. In examples where the image objects were captured at different locations, the associated geographic location information for one object can be compared with that of another object to determine whether the image objects were captured at locations that can be deemed sufficiently close to each other. For example, if the geographic location information relates to a latitude and longitude, a first latitude and a first longitude related to a first object can be compared with a second latitude and a second longitude related to a second object. If the comparison, e.g., subtraction, results in values that are less than a threshold, the image objects can be deemed to be at substantially the same location. In some implementations, geographic location information relating to two or more locations can be collected and grouped to create a geographic location information group. For example, different points in a city, such as San Francisco, can have different geographic location information, e.g., different latitudes and longitudes. The latitudes and longitudes of the different points can be collected and grouped under, e.g., “San Francisco latitudes and longitudes.” In some implementations, a look-up table stored as “San Francisco latitudes and longitudes” can include the latitudes and longitudes of several points in San Francisco. If the geographic location information associated with an object is the same as, or substantially close to, the geographic location information of a point stored in the geographic location information group, then it can be determined that the image object was captured at a point in the group. In this manner, objects captured at several locations within a group of locations can be grouped.
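The look-up-table idea above can be illustrated with a sketch in which a named group holds the coordinates of several points and an object's location is tested against each of them; the coordinates, threshold, and names are illustrative assumptions rather than data from this description.

    # Illustrative "San Francisco latitudes and longitudes" group stored as a look-up table.
    SAN_FRANCISCO_POINTS = [
        (37.8080, -122.4177),  # example point near the waterfront
        (37.7694, -122.4862),  # example point near a park
        (37.7793, -122.4193),  # example point near the civic center
    ]

    def captured_in_group(location, group_points, threshold_degrees=0.01):
        # True if the object's location matches, or is substantially close to,
        # any point stored in the geographic location information group.
        lat, lon = location
        return any(abs(lat - p_lat) < threshold_degrees and abs(lon - p_lon) < threshold_degrees
                   for p_lat, p_lon in group_points)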
In some implementations, the geographic location information can be obtained from a storage device operatively coupled to the system on which the software is installed. Alternatively, the geographic location information can be stored at a remote location, which can be accessed to retrieve the information. In some implementations, a user can assign the geographic location information. For example, the user can access a map of the location where the user captured or desires to capture the image objects. Such a map can be displayed on a screen in the user interface. The geographic location information, e.g., latitude and longitude, for each point on the map can be known. The user can select a poster frame representing an object from a portion of the user interface and select a position on the map by methods including clicking, clicking and dragging, and the like. In response to the user positioning an object at a location on the map, the latitude and longitude associated with the position on the map can be associated with the object. In this manner, a user can associate geographic location information with an object. Accordingly, other implementations are within the scope of the following claims.
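To further illustrate the map-based assignment described above, the following hypothetical sketch assumes a rectangular map image whose corner coordinates are known and a simple linear mapping from a click position to latitude and longitude; all names, parameters, and bounds are assumptions made for the example.

    # Illustrative mapping from a click position on a map image to (latitude, longitude).
    def pixel_to_lat_lon(x, y, map_width, map_height, north, south, west, east):
        lon = west + (x / map_width) * (east - west)
        lat = north - (y / map_height) * (north - south)
        return (lat, lon)

    def drop_object_on_map(image, x, y, map_width, map_height, bounds):
        # Associate the latitude/longitude of the chosen map position with the image object.
        north, south, west, east = bounds
        image.location = pixel_to_lat_lon(x, y, map_width, map_height, north, south, west, east)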
This application claims the benefit of the filing date of U.S. application Ser. No. 11/685,672, filed on Mar. 13, 2007, and entitled “Interactive Image Thumbnails”, the entire disclosure of which is incorporated herein by reference.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11685672 | Mar 2007 | US |
| Child | 11760684 | | US |