
docs: update FAQ CLIP search explanation (#12986)

Author: bo0tzz (committed by GitHub)
Date: 2024-09-27 19:07:00 +02:00
Parent: 7c15e11efc
Commit: dbe542803f


@@ -187,7 +187,7 @@ However, when the trash is emptied, the files will re-appear in the main timeline
 ### How does smart search work?
-Immich uses CLIP models. For more information about CLIP and its capabilities, read about it [here](https://openai.com/research/clip).
+Immich uses CLIP models. An ML model converts each image to an "embedding", which is essentially a string of numbers that semantically encodes what is in the image. The same is done for the text that you enter when you do a search, and that text embedding is then compared with those of the images to find similar ones. As such, there are no "tags", "labels", or "descriptions" generated that you can look at. For more information about CLIP and its capabilities, read about it [here](https://openai.com/research/clip).
 ### How does facial recognition work?
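
To make the new FAQ text concrete, here is a minimal sketch of the embedding comparison it describes, written in Python with numpy. This is illustrative only, not Immich's actual implementation: the random vectors stand in for real CLIP encoder outputs, and `EMBEDDING_DIM = 512` is just a common CLIP embedding size.

```python
# Minimal sketch of CLIP-style smart search: compare a text embedding
# against precomputed image embeddings by cosine similarity.
# NOT Immich's actual code; the random vectors below are placeholders
# standing in for real CLIP encoder outputs.
import numpy as np

EMBEDDING_DIM = 512  # a common CLIP embedding size (assumption for this sketch)
rng = np.random.default_rng(0)

# Pretend these came from the CLIP image encoder, one vector per photo.
image_embeddings = rng.normal(size=(1000, EMBEDDING_DIM))

# Pretend this came from the CLIP text encoder for the search query.
query_embedding = rng.normal(size=EMBEDDING_DIM)

def normalize(v: np.ndarray) -> np.ndarray:
    """Scale vectors to unit length so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

images = normalize(image_embeddings)
query = normalize(query_embedding)

# Cosine similarity between the query and every image embedding.
scores = images @ query

# Indices of the five most similar photos, best first.
top5 = np.argsort(scores)[::-1][:5]
print(top5, scores[top5])
```

In a real deployment the image embeddings are computed once at import time and stored (Immich keeps them in its database), so only the query text needs to be encoded when a search runs.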