Image content retrieval¶
CLIP-TPU project introduction¶
CLIP, which stands for Contrastive Language-Image Pre-training, is a text-image pre-training model released by OpenAI and trained with contrastive learning. CLIP delivers impressive performance on zero-shot text-image retrieval, zero-shot image classification, text-to-image generation guidance, open-vocabulary detection and segmentation, and other tasks.
The CLIP-TPU project ports the CLIP algorithm to the TPU, so users can use this project to verify CLIP's actual performance on the AIBOX-1684X.
Project repository link: CLIP-TPU
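At inference time, CLIP's zero-shot tasks reduce to comparing L2-normalized image and text embeddings by cosine similarity. The sketch below illustrates that scoring step only; it uses NumPy with random dummy vectors in place of CLIP's real encoders, and the dimensions, logit scale, and function names are illustrative assumptions, not the CLIP-TPU project's API:

```python
import numpy as np

def zero_shot_scores(image_emb: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Score one image embedding against N text-prompt embeddings.

    CLIP compares L2-normalized embeddings via cosine similarity;
    a softmax over the scaled similarities yields class probabilities.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                    # cosine similarity per prompt
    logits = sims * 100.0               # CLIP-style learned logit scale (~100)
    exp = np.exp(logits - logits.max()) # numerically stable softmax
    return exp / exp.sum()

# Dummy 4-dim embeddings standing in for real CLIP outputs (typically 512-dim).
rng = np.random.default_rng(0)
image = rng.normal(size=4)
prompts = rng.normal(size=(3, 4))       # e.g. "a photo of a {cat, dog, car}"
probs = zero_shot_scores(image, prompts)
print(probs.sum())                      # probabilities sum to 1
```

The same similarity matrix, computed over a batch of images and a batch of captions, is what drives zero-shot text-image retrieval.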
ImageSearch-tpu project introduction¶
The ImageSearch-tpu project implements CLIP-based search over large image collections and provides a WebUI for users.
Project repository link: ImageSearch-tpu
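Searching a large gallery with CLIP amounts to embedding every image once, then answering each text query with a nearest-neighbor lookup in embedding space. A minimal sketch of that lookup, assuming the gallery embeddings are precomputed and held in a matrix (the names, sizes, and brute-force search are illustrative assumptions, not ImageSearch-tpu's implementation):

```python
import numpy as np

def top_k_images(query_emb: np.ndarray, gallery: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k gallery embeddings most similar to the query.

    gallery: (N, D) matrix of image embeddings, one row per indexed image.
    query_emb: (D,) embedding of the text query.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                        # cosine similarity to every image
    return np.argsort(-sims)[:k]        # best-matching indices first

# Toy gallery of 100 random 8-dim vectors; a real index would hold CLIP vectors.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 8))
query = gallery[42] + 0.01 * rng.normal(size=8)  # near-duplicate of image 42
print(top_k_images(query, gallery)[0])           # -> 42
```

For galleries too large for a brute-force matrix product, the same idea is typically backed by an approximate nearest-neighbor index.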