There are many sets of pictures and videos available that have been rated and standardized in various ways. These can be immensely useful as stimuli in psychological experiments.
- Picture sets
- Fractals and natural scenes
- Amsterdam library of object images (ALOI)
- Bank of standardized stimuli (BOSS)
- Bonin et al.'s set of 299 pictures
- CAT dataset
- CIPR still images
- Dartmouth database of children’s faces
- Ecological alternative to Snodgrass and Vanderwart
- Geneva affective picture database (GAPED)
- Graspable objects and matched non-objects
- Hatfield image test
- International affective pictures system (IAPS)
- Face place
- FaceScrub
- Migo et al.'s photos with similarity information
- Natural scenes collection (nature/campus scenes)
- Nencki Affective Picture System (NAPS)
- Nishimoto et al.'s set of 360 pictures
- Normative ratings for flanker stimuli
- Novel Object and Unusual Name Database (NOUN)
- A pool of pairs of related objects (POPORO)
- Psychological image collection at Stirling (PICS)
- Revised Snodgrass and Vanderwart object pictorial set
- Segmentation evaluation database
- Snodgrass and Vanderwart object pictorial set
- UB KinFace Database
- UPenn natural image database
- What do saliency models predict?
- Video sets
- Faces and motion Exeter database (FAMED)
- Umla-Runge et al.'s action video clips
Tips for creators
If you have created a stimulus set yourself, here are a few tips:
- Choose a license for distributing your work. As you can see in the list below, many creators do not specify a license. Unlicensed work is non-free by default, which is problematic for usage and sharing. For a useful guide, see http://creativecommons.org/choose/.
- Upload your work to an easily accessible location. Do not upload your stimuli as supplementary information on a (paywalled) publisher's website. A good site for uploading academic material is FigShare.
If you are looking for software to present stimuli, take a look at OpenSesame, a free and graphical experiment builder.
Fractals and natural scenes
Description: A collection of fractals and natural images with visual-search targets (a small F or H) embedded in them. The fractals are original. The natural images are adapted from the UPenn natural image database. For each image, a pupillary-luminance map and a saliency map are available (see paper for details).
License: Creative Commons Attribution
Reference: Mathôt, S., Siebold, A., Donk, M., & Vitu, F. (2015). Large pupils predict goal-driven eye movements. Journal of Experimental Psychology: General, 144(3), 513-521. doi:10.1037/a0039168
Amsterdam library of object images (ALOI)
Description: An extensive set of photos of small objects. The viewing angles and lighting conditions (illumination angle and color) have been systematically varied for each object. Stereo images ("3D") are also included.
Reference: Geusebroek, J. M., Burghouts, G. J., & Smeulders, A. W. M. (2005). The Amsterdam library of object images. International Journal of Computer Vision, 61(1), 103–112.
Bank of standardized stimuli (BOSS)
Description: A large set of full-color photos of various objects. Highly recommended.
Reference 1: Brodeur, M. B., Dionne-Dostie, E., Montreuil, T., & Lepage, M. (2010). The bank of standardized stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research. PLoS ONE, 5(5), e10773.
Reference 2: O'Sullivan, M., Lepage, M., Bouras, M., Montreuil, T., & Brodeur, M. B. (2012). North-American norms for name disagreement: Pictorial stimuli naming discrepancies. PLoS ONE, 7(10), e47802.
Reference 3: Brodeur, M. B., Kehayia, E., Dion-Lessard, G., Chauret, M., Montreuil, T., Dionne-Dostie, E., & Lepage, M. (2012). The bank of standardized stimuli (BOSS): comparison between French and English norms. Behavior Research Methods.
Bonin et al.'s set of 299 pictures
Description: A set of 299 black-and-white line drawings, with various normative ratings, such as name agreement (in French) and naming latency.
Reference: Bonin, P., Peereman, R., Malardier, N., Méot, A., & Chalard, M. (2003). A new set of 299 pictures for psycholinguistic studies: French norms for name agreement, image agreement, conceptual familiarity, visual complexity, image variability, age of acquisition, and naming latencies. Behavior Research Methods, Instruments, & Computers, 35(1), 158-167.
CAT dataset
Description: A database of 10,000 cat images. All images are annotated with nine points, indicating facial landmarks.
Reference: Zhang, W., Sun, J., & Tang, X. (2008). Cat Head Detection - How to Effectively Exploit Shape and Texture Features, Proc. of European Conf. Computer Vision, 4, 802-816.
CIPR still images
Description: A collection of images; no additional information is provided.
Dartmouth database of children’s faces
Description: Contains images of 40 male and 40 female models between the ages of 6 and 16. Models are photographed on a black background and are wearing black bibs and black hats to cover hair and ears. They are photographed from 5 different camera angles and pose 8 different facial expressions. Models were rated by independent raters and are ranked for the overall believability of their poses.
License: Dartmouth College (custom license)
Reference: Dalrymple, K.A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children's Faces: Acquisition and validation of a new face stimulus set. PLoS ONE, 8(11), e79131. doi:10.1371/journal.pone.0079131
Ecological alternative to Snodgrass and Vanderwart
A template based on this set is included with OpenSesame.
Description: A set of 360 colour photographs of objects, animals, and scenes. Various normative ratings are available.
License: Creative Commons Attribution
Link: http://dx.plos.org/10.1371/journal.pone.0037527 (see Supporting information for download links)
Reference: Moreno-Martínez, F. J., & Montoro, P. R. (2012). An ecological alternative to Snodgrass & Vanderwart: 360 high quality colour images with norms for seven psycholinguistic variables. PLoS ONE, 7(5), e37527.
Geneva affective picture database (GAPED)
Description: Mostly scenes with a strong valence.
Reference: Dan-Glauser, E. S., & Scherer, K. R. (2011). The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance. Behavior Research Methods. doi:10.3758/s13428-011-0064-1
Graspable objects and matched non-objects
Description: A set of graspable objects and non-objects. Each non-object is matched to a real object in terms of shape and texture, the latter by means of a texture-synthesis algorithm.
License: Mixed license, see download page for details
Reference: van der Linden, L., Mathôt, S., Vitu, F. (2015). The role of object affordances and center of gravity in eye movements towards isolated daily-life objects. Journal of Vision. 15(5), 1-18. doi:10.1167/15.5.8
Hatfield image test
Description: A set of high-quality colour photographs, chosen to span a wide range of categories and naming difficulties.
License: Described as 'free from copyright', but you do need to apply via an online form.
Reference: Adlington, R. L., Laws, K. R., & Gale, T. M. (2009). The Hatfield image test: A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology, 31(6), 731-753.
International affective pictures system (IAPS)
Description: A collection of pictures that have been rated on valence, arousal and dominance. Unfortunately, you can't download these pictures directly, but you have to put in a request (see the link below). They are freely available for non-profit research, though.
Reference: Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International affective picture system (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8. University of Florida, Gainesville, FL.
Face place
Description: Multiple images, with different views, emotions, and disguises, of over 200 individuals of many different races.
Reference: Righi, G., Peissig, J. J., & Tarr, M. J. (2012). Recognizing disguised faces. Visual Cognition, 20(2), 143-169. doi:10.1080/13506285.2012.654624
FaceScrub
Description: FaceScrub comprises a total of 107,818 face images of 530 male and female celebrities, with about 200 images per person. The images were retrieved from the Internet and were taken in real-world situations (uncontrolled conditions). Name and gender annotations of the faces are included.
Reference: Ng, H. W., & Winkler, S. (2014). A data-driven approach to cleaning large face datasets. Proc. IEEE International Conference on Image Processing (ICIP). Retrieved from http://vintage.winklerbros.net/Publications/icip2014a.pdf
Migo et al.'s photos with similarity information
Description: A collection of gray-scale photos, with similarity ratings for pairs of objects within a set (for example, the similarity between pairs of different pens). The idea is very nice, but unfortunately the quality of the photos is fairly low. Also, the stimuli are paywalled together with the paper.
Reference: Migo, E. M., Montaldi, D., & Mayes, A. R. (2013). A visual object stimulus database with standardized similarity information. Behavior Research Methods, 45(2), 344–354. doi:10.3758/s13428-012-0255-4
Natural scenes collection (nature/campus scenes)
Description: Natural images of nature scenes containing no man-made objects or people (nature scene collection) and university campus scenes containing cars, buildings, and people (campus scene collection).
Reference (for the campus scene collection): Burge, J., & Geisler, W. S. (2011). Optimal defocus estimation in individual natural images. Proceedings of the National Academy of Sciences, 108(40), 16849–16854.
Reference (for the natural scene collection): Geisler, W. S., & Perry, J. S. (2011). Statistics for optimal point prediction in natural images. Journal of Vision.
Nencki Affective Picture System (NAPS)
Description: A large set of photos with normative ratings of valence, arousal, and approach-avoidance. The stimuli are not directly downloadable, but are available upon request.
License: A custom license that permits academic use.
Reference: Marchewka, A., Zurawski, L., Jednoróg, K., & Grabowska, A. (2013). The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database. Behavior Research Methods. doi:10.3758/s13428-013-0379-1
Nishimoto et al.'s set of 360 pictures
Description: A collection of 360 black-and-white line drawings, with various norms for Japanese.
Reference: Nishimoto, T., Ueda, T., Miyawaki, K., Une, Y., & Takahashi, M. (2012). The role of imagery-related properties in picture naming: A newly standardized set of 360 pictures for Japanese. Behavior Research Methods.
Normative ratings for flanker stimuli
Description: A set of shapes, symbols, letters, false fonts, and digits with various normative ratings.
License: Creative Commons Attribution
Reference: Chanceaux, M., Mathôt, S., & Grainger, J. (2014). Effects of number, complexity, and familiarity of flankers on crowded letter identification. Journal of Vision, 14(6), 7. doi:10.1167/14.6.7
Novel Object and Unusual Name Database (NOUN)
Description: Database of images of novel, unusual objects for experimental research. The database includes 64 primary stimuli and a collection of 10 novel categories, each including three exemplars.
Reference: Horst, J. S., & Hout, M. C. (in press). The Novel Object and Unusual Name (NOUN) Database: A collection of novel images for use in experimental research. Behavior Research Methods.
A pool of pairs of related objects (POPORO)
Description: A collection of pairs of objects with norms for semantic relatedness. Validated using both behavioural measures and EEG.
Reference: Kovalenko, L.Y., Chaumon, M., & Busch, N.A. (2012). A pool of pairs of related objects (POPORO) for investigating visual semantic integration: Behavioral and electrophysiological validation. Brain Topography. doi:10.1007/s10548-011-0216-8
Psychological image collection at Stirling (PICS)
Description: A diverse collection of stimuli, including faces, objects and textures. Some ratings for the faces are also available.
Reference: The makers request that you simply cite the link above.
Revised Snodgrass and Vanderwart object pictorial set
Description: Full-color line drawings, derived from the original Snodgrass and Vanderwart set.
Link: http://wiki.cnbc.cmu.edu/Objects (Unofficial)
Reference: Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and Vanderwart’s object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33(2), 217-236.
Segmentation evaluation database
Description: A large database of grayscale photos of one or two objects collected from various sources. The primary goal of this database is to test image-segmentation algorithms.
License: Varies per photo.
Snodgrass and Vanderwart object pictorial set
Description: A classic set of black-and-white line drawings. Apparently there are licensing issues and, although this is a widely known picture set, it is hard to get hold of. I would recommend using the revised set by Rossion & Pourtois instead.
Link: http://wiki.cnbc.cmu.edu/Objects (Unofficial)
Reference: Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning & Memory, 6(2), 174-215. doi:10.1037/0278-7393.6.2.174
UB KinFace Database
Description: A large set of photos of people with information about kinship (e.g., the person in photo X is the father of the person in photo Y).
License: Free for academic use
Reference: Xia, S., Shao, M., & Fu, Y. (2011). Kinship verification through transfer learning. International Joint Conferences on Artificial Intelligence, 2539-2544.
UPenn natural image database
Description: A collection of photos of the Okavango Delta of Botswana, a savanna habitat where humans (and their eyes) presumably evolved. It's a bit quirky, perhaps, but I like the concept.
Reference: Tkačik, G., Garrigan, P., Ratliff, C., Milčinski, G., Klein, J. M., Seyfarth, L. H., Sterling, P., et al. (2011). Natural images from the birthplace of the human eye. PLoS ONE, 6(6), e20409. doi:10.1371/journal.pone.0020409
What do saliency models predict?
Description: A set of images that includes subjective saliency ratings.
Reference: Koehler, K., Guo, F., Zhang, S., & Eckstein, M. P. (2014). What do saliency models predict? Journal of Vision, 14(3), 14, 1-27. doi:10.1167/14.3.14
Faces and motion Exeter database (FAMED)
Description: A set of videos of 32 speaking male actors. Different viewpoints, headgear, and facial expressions are included.
License: Restrictive, but free for academic use.
Umla-Runge et al.'s action video clips
Description: 784 videos of actions with familiarity ratings for Eastern and Western cultures.
Reference: Umla-Runge, K., Zimmer, H. D., Fu, X., & Wang, L. (2012). An action video clip database rated for familiarity in China and Germany. Behavior Research Methods.