

To find visually similar results for a Pin, we compute similarity scores between its image features and the features of billions of other Pins.

The core of our visual search system is how we represent images. In close collaboration with members of the Berkeley Vision and Learning Center, we use deep learning to learn powerful image features from our richly annotated dataset of billions of Pins curated by Pinners. These features can then be used to compute a similarity score between any two images. For the past couple of months, we’ve been experimenting with improving Related Pins with these visual signals, as detailed in our latest white paper, released today. The system was built in just a few months by a team of four engineers.
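To make the idea concrete, here is a minimal sketch of scoring two images by their learned features. It assumes each image is represented as a dense feature vector and uses cosine similarity as the metric; the actual feature dimensionality and distance measure used in production may differ.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity score between two image feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical deep features for two images (dimensionality is illustrative)
rng = np.random.default_rng(0)
feat_a = rng.standard_normal(4096)
feat_b = rng.standard_normal(4096)

score = cosine_similarity(feat_a, feat_b)
```

At Pinterest's scale, this pairwise score would not be computed exhaustively against billions of Pins; an approximate nearest-neighbor index over the feature vectors would typically serve the top results.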
