* feat: add support for docker images
Issue #724
A new config section under "HTTP" called "Compat" is added, which
currently takes a list of legacy media types to accept as compatible.
https://github.com/opencontainers/image-spec/blob/main/media-types.md#compatibility-matrix
Only "docker2s2" (Docker Manifest V2 Schema V2) is currently supported.
Garbage collection also needs to be made aware of non-OCI compatible
layer types.
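For illustration only, the "Compat" section might look roughly like this
(key casing and surrounding fields are a sketch, not taken from zot's docs):

    {
      "http": {
        "address": "127.0.0.1",
        "port": "8080",
        "compat": ["docker2s2"]
      }
    }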
feat: add cve support for non-OCI compatible layer types
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
*
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: add more docker compat tests
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat: add additional validation checks for non-OCI images
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* ci: make "full" images docker-compatible
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Exit early for directories and for known non-blob file names.
This avoids the digest-validation logic and log message generation.
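A minimal sketch of the idea in Go (the file names below are just the usual
OCI layout entries given as examples, not an exhaustive list):

    package storage

    import "io/fs"

    // isBlobCandidate reports whether an entry is worth running digest
    // validation on: directories and well-known non-blob files are skipped
    // up front, before any digest parsing or logging happens.
    func isBlobCandidate(entry fs.DirEntry) bool {
        if entry.IsDir() {
            return false
        }
        switch entry.Name() {
        case "index.json", "oci-layout": // known non-blob file names (examples)
            return false
        }
        return true
    }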
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
cache.HasBlob() looks in both buckets: duplicates and original blobs.
Because we want to check whether the blob is in the original bucket, use
cache.GetBlob() instead, since it looks only in the original bucket.
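Roughly the distinction, as a sketch (signatures are illustrative; zot's
cache driver uses its own types):

    package cache

    // Driver is a cut-down, illustrative view of the cache interface.
    type Driver interface {
        // HasBlob matches entries in either bucket (original or duplicates).
        HasBlob(digest, path string) bool
        // GetBlob consults only the original-blob bucket and returns its path.
        GetBlob(digest string) (string, error)
    }

    // originalBlobPath returns the canonical blob location, or "" when the
    // digest is only present as a duplicate (or not cached at all).
    func originalBlobPath(c Driver, digest string) string {
        path, err := c.GetBlob(digest)
        if err != nil {
            return ""
        }
        return path
    }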
Signed-off-by: Eusebiu Petu <petu.eusebiu@gmail.com>
In case of delete by tag, only the tag is removed; the manifest itself remains accessible by digest.
In case of delete by digest, the manifest is removed completely (provided it is not used by an index or another reference).
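The "delete by tag" half can be sketched like this (illustrative, not zot's
actual code), since in an OCI layout a tag is just a descriptor annotation
in index.json:

    package storage

    import (
        ispec "github.com/opencontainers/image-spec/specs-go/v1"
    )

    // untag drops only the descriptor carrying the given tag from index.json;
    // the manifest blob itself is untouched and stays pullable by digest.
    func untag(index *ispec.Index, tag string) {
        kept := index.Manifests[:0]
        for _, desc := range index.Manifests {
            if desc.Annotations[ispec.AnnotationRefName] == tag {
                continue // remove the tag reference only
            }
            kept = append(kept, desc)
        }
        index.Manifests = kept
    }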
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
When a client pushes an image, zot's inline dedupe
tries to find the blob path corresponding to the digest of the blob
currently being pushed; if it is found in the cache,
zot makes a symbolic link to that cache entry and reports
to the client that the blob already exists at that location.
Before this patch authorization was not applied to this process, meaning
that a user could copy blobs without having permissions on the source repo.
Added a rule that the client must have read permission on the source repo
before deduping; otherwise just Stat() the blob and return the corresponding status code.
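A sketch of the new rule (the helper and names below are hypothetical, not
zot's actual API):

    package storage

    import "strings"

    // repoFromBlobPath is a hypothetical helper: derive the source repo name
    // from a cached blob path such as "some/repo/blobs/sha256/<digest>".
    func repoFromBlobPath(blobPath string) string {
        idx := strings.Index(blobPath, "/blobs/")
        if idx < 0 {
            return ""
        }
        return blobPath[:idx]
    }

    // canDedupe applies the rule described above: only symlink to the cached
    // blob when the client can read the source repo; otherwise the caller
    // should just Stat() the blob and answer with the matching status code.
    func canDedupe(userCanRead func(repo string) bool, cachedBlobPath string) bool {
        srcRepo := repoFromBlobPath(cachedBlobPath)
        return srcRepo != "" && userCanRead(srcRepo)
    }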
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(oras)!: remove ORAS artifact references support
ORAS artifacts/references predate OCI dist-spec 1.1.0, which now provides the
same functionality and is likely to see wider adoption.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: update to released official images
So that they are unlikely to be deleted.
*-rc images may be cleaned up over time.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Instead of reading entire files before calculating their digests,
stream them by using their Reader method.
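For illustration, streaming a digest with go-digest looks roughly like this
(the file path is hypothetical):

    package main

    import (
        "fmt"
        "log"
        "os"

        godigest "github.com/opencontainers/go-digest"
    )

    func main() {
        // Open the blob and let the digester consume it as a stream instead
        // of reading the whole file into memory first.
        file, err := os.Open("/path/to/blob") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        defer file.Close()

        dgst, err := godigest.FromReader(file)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(dgst.String())
    }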
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Wait for workers to finish before exiting.
This should fix tests reporting they couldn't remove rootDir because it is still being
written to by tasks.
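The shape of the fix, as a generic sketch (not the actual scheduler code):

    package scheduler

    import "sync"

    // runAndWait starts each task in its own goroutine and only returns once
    // every worker is done, so nothing keeps writing under rootDir after the
    // caller starts tearing it down.
    func runAndWait(tasks []func()) {
        var wg sync.WaitGroup
        for _, task := range tasks {
            wg.Add(1)
            go func(run func()) {
                defer wg.Done()
                run()
            }(task)
        }
        wg.Wait()
    }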
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check if the layout has been updated after the last recorded change in the DB
- If so, the repo is parsed and updated in the DB; otherwise it is skipped (see the sketch below)
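A minimal sketch of that check (names are illustrative; the real MetaDB
accessor may differ):

    package meta

    import (
        "io/fs"
        "path/filepath"
        "time"
    )

    // layoutLastModified walks the repo's layout and returns the newest mtime.
    func layoutLastModified(repoDir string) time.Time {
        var newest time.Time
        _ = filepath.WalkDir(repoDir, func(_ string, entry fs.DirEntry, err error) error {
            if err != nil {
                return nil // best effort: skip unreadable entries
            }
            if info, err := entry.Info(); err == nil && info.ModTime().After(newest) {
                newest = info.ModTime()
            }
            return nil
        })
        return newest
    }

    // shouldReparse reports whether the layout changed after the last update
    // recorded in MetaDB for this repo.
    func shouldReparse(repoDir string, lastDBUpdate time.Time) bool {
        return layoutLastModified(repoDir).After(lastDBUpdate)
    }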
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
There is no need to run dedupe/restore blobs for images being pushed or synced while
the dedupe task is running; they are already deduped/restored inline.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
fix(gc): deduped blobs were being cleaned because they keep the modTime of
the original blobs; fixed by updating the modTime when hard linking
the blobs.
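Roughly what that fix amounts to (a sketch, not the actual dedupe code):

    package storage

    import (
        "os"
        "time"
    )

    // dedupeByLink hard-links the cached original blob to the new location and
    // refreshes the modification time, so GC's delay check no longer judges
    // the deduped entry by the original blob's age.
    func dedupeByLink(originalBlobPath, dedupedBlobPath string) error {
        if err := os.Link(originalBlobPath, dedupedBlobPath); err != nil {
            return err
        }
        now := time.Now()
        return os.Chtimes(dedupedBlobPath, now, now)
    }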
fix(gc): zot failed to parse rootDir at startup when using s3 storage
because there are no files under rootDir and we cannot create empty dirs
on s3; fixed by creating an empty file under rootDir.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
GC stress on s3 storage now uses minio for ci/cd builds
and localstack for nightly builds.
fix(gc): make sure we don't remove a repo if there are blobs
being uploaded or the number of blobs gc'ed is not 0.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Unified both the local and s3 ImageStore logic into a single ImageStore.
Added a new driver interface for common file/dir manipulations,
to be implemented by the different storage types (sketched below).
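An illustrative sketch of what such a driver interface can look like (the
method set below is hypothetical, not zot's exact definition):

    package storage

    import (
        "io"
        "time"
    )

    // FileInfo is a minimal stand-in for the metadata a driver returns.
    type FileInfo interface {
        Path() string
        Size() int64
        ModTime() time.Time
        IsDir() bool
    }

    // Driver captures the common file/dir operations that both the local
    // filesystem backend and the s3 backend implement, so a single ImageStore
    // can be built on top of either.
    type Driver interface {
        EnsureDir(path string) error
        DirExists(path string) bool
        Reader(path string, offset int64) (io.ReadCloser, error)
        WriteFile(path string, content []byte) (int, error)
        Delete(path string) error
        Link(src, dest string) error
        Stat(path string) (FileInfo, error)
        Walk(path string, fn func(FileInfo) error) error
    }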
refactor(gc): drop the umoci dependency, implement internal gc.
Added a retentionDelay config option that specifies
the garbage collection delay for images without tags;
this will also clean manifests which are part of an index image
(multiarch) and no longer exist.
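For illustration, retentionDelay might be set roughly like this (the option
name follows the text above; the surrounding keys and values are a sketch,
not taken from zot's docs):

    {
      "storage": {
        "rootDirectory": "/tmp/zot",
        "gc": true,
        "gcDelay": "1h",
        "retentionDelay": "24h"
      }
    }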
fix(dedupe): skip blobs under the .sync/ directory.
If startup dedupe is running while sync is also running,
ignore blobs under sync's temporary storage.
fix(storage): do not allow image index modifications.
When deleting a manifest, verify that it is not part of a multiarch image
and return a MethodNotAllowed error to the client if it is;
we don't want to modify multiarch images.
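A sketch of that check (illustrative, not zot's actual code):

    package storage

    import (
        godigest "github.com/opencontainers/go-digest"
        ispec "github.com/opencontainers/image-spec/specs-go/v1"
    )

    // isPartOfMultiarchImage reports whether the manifest being deleted is
    // still referenced by any image index (multiarch image); if it is, the
    // caller should reply with MethodNotAllowed instead of deleting it.
    func isPartOfMultiarchImage(indexes []ispec.Index, manifestDigest godigest.Digest) bool {
        for _, index := range indexes {
            for _, desc := range index.Manifests {
                if desc.Digest == manifestDigest {
                    return true
                }
            }
        }
        return false
    }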
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>