About a month ago I shared a web app that let you compare magazine covers using image hashes.
https://news.ycombinator.com/item?id=46518106

In that thread, Samin100 suggested giving CLIP and DINOv2 a shot for better results. I had no idea what those were, but researching them led me to learn about vision transformers: DINOv2 was created by Meta, and CLIP by OpenAI.
The updated version of the magazine comparison tool lets you use those two models (Photo = DINOv2, Design = CLIP).
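For anyone curious, here's roughly what comparing two covers with one of these models boils down to: embed each image, then take the cosine similarity of the vectors. This is a minimal sketch using the Hugging Face transformers library, not the app's actual code, and the model name and file names are just illustrative:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical cover files
    images = [Image.open("cover_a.jpg"), Image.open("cover_b.jpg")]
    inputs = processor(images=images, return_tensors="pt")

    with torch.no_grad():
        emb = model.get_image_features(**inputs)  # one vector per image
    emb = emb / emb.norm(dim=-1, keepdim=True)    # L2-normalize

    similarity = (emb[0] @ emb[1]).item()         # cosine similarity in [-1, 1]
    print(f"cosine similarity: {similarity:.3f}")

The same pattern works for DINOv2 by swapping in its image model and pooling its output features.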
I've personally really enjoyed the journey through New Yorker covers:
Bikes
https://shoplurker.com/labs/img-compare/match?model=vt&cover...