* feat: add support for docker images
Issue #724
A new config section under "HTTP" called "Compat" is added, which
currently takes a list of compatible legacy media-types.
https://github.com/opencontainers/image-spec/blob/main/media-types.md#compatibility-matrix
Only "docker2s2" (Docker Manifest V2 Schema 2) is currently supported.
Garbage collection also needs to be made aware of non-OCI-compatible
layer types.
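A rough sketch of how this section might map onto the config model; the Go types and field names below are illustrative, not zot's exact ones:

```go
package config

// MediaCompatibility names a legacy compatibility mode accepted under
// the "HTTP" -> "Compat" config section.
type MediaCompatibility string

// "docker2s2" is the only value currently supported.
const DockerManifestV2SchemaV2 MediaCompatibility = "docker2s2"

// HTTPConfig sketches the HTTP section; only Compat is new here.
type HTTPConfig struct {
	Address string
	Port    string
	Compat  []MediaCompatibility // e.g. {"docker2s2"}
}
```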
feat: add cve support for non-OCI compatible layer types
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: add more docker compat tests
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat: add additional validation checks for non-OCI images
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* ci: make "full" images docker-compatible
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Exit early for directories and for known non-blob file names.
This avoids the digest-validation logic and log-message generation.
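A minimal sketch of the early exit, assuming a filepath.WalkDir-style traversal; the skip list below is illustrative, not zot's exact set of known names:

```go
package storage

import (
	"io/fs"
	"path/filepath"
)

// walkBlobs visits only files that may be blobs, returning early for
// directories and well-known non-blob file names, so digest validation
// and log-message generation never run for them.
func walkBlobs(root string, visit func(path string) error) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil // folders are never blobs
		}
		switch d.Name() {
		case "index.json", "oci-layout": // known non-blob file names (illustrative)
			return nil
		}
		return visit(path) // only candidate blobs reach digest validation
	})
}
```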
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Looks like we didn't have many GC tests for retaining multiarch images.
I added more data to the existing image retention tests, in addition to the new GC tests.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
cache.HasBlob() looks in both buckets: duplicates and original blobs.
Because we want to check whether the blob is in the original bucket, use
cache.GetBlob() instead, since it looks only in the original bucket.
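A simplified sketch of the difference; the interface below is illustrative of zot's cache, not its exact signatures:

```go
package cache

import godigest "github.com/opencontainers/go-digest"

// Cache sketches the two lookups being contrasted.
type Cache interface {
	// HasBlob reports a hit from EITHER bucket (duplicates or originals).
	HasBlob(digest godigest.Digest, path string) bool
	// GetBlob consults ONLY the original-blobs bucket.
	GetBlob(digest godigest.Digest) (string, error)
}

// originalBlobPath is what the call site effectively wants: a path only
// if the blob exists in the original bucket.
func originalBlobPath(c Cache, digest godigest.Digest) (string, bool) {
	path, err := c.GetBlob(digest)
	if err != nil {
		return "", false // present at most as a duplicate, or missing entirely
	}
	return path, true
}
```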
Signed-off-by: Eusebiu Petu <petu.eusebiu@gmail.com>
In case of delete by tag, only the tag is removed; the manifest itself remains accessible by digest.
In case of delete by digest, the manifest is removed completely (provided it is not used by an index or another reference).
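For illustration, the two flows against the dist-spec manifests endpoint; the repo name, tag, and digest here are made up:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	base := "http://localhost:8080/v2/myrepo/manifests/"

	// Delete by tag: only the tag goes away; the manifest stays pullable by digest.
	byTag, _ := http.NewRequest(http.MethodDelete, base+"v1.0", nil)

	// Delete by digest: the manifest itself is removed, unless an index or
	// another reference still uses it.
	byDigest, _ := http.NewRequest(http.MethodDelete,
		base+"sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", nil)

	for _, req := range []*http.Request{byTag, byDigest} {
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println(err)
			continue
		}
		resp.Body.Close()
		fmt.Println(req.URL.Path, "->", resp.Status)
	}
}
```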
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
When a client pushes an image, zot's inline dedupe
tries to find the blob path corresponding to the digest of the blob
currently being pushed; if it is found in the cache,
zot makes a symbolic link to that cache entry and reports
to the client that the blob already exists at that location.
Before this patch, authorization was not applied to this process, meaning
that a user could copy blobs without having permissions on the source repo.
Added a rule that the client must have read permission on the source repo
before deduping; otherwise just Stat() the blob and return the corresponding status code.
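A rough sketch of the rule; canRead, the symlink step, and the error mapping are simplified stand-ins for zot's actual authorization and storage code:

```go
package dedupe

import "os"

// canRead stands in for zot's access-control check (hypothetical).
func canRead(user, repo string) bool {
	return user == "admin" // placeholder policy
}

// tryDedupe links to the cached blob only when the client can read the
// repo that blob belongs to; otherwise it just Stat()s and reports.
func tryDedupe(user, srcRepo, cachedPath, dstPath string) (bool, error) {
	if !canRead(user, srcRepo) {
		// Not authorized to reuse the source repo's blob: no cross-repo
		// symlink; the Stat() result maps to the status code returned.
		_, err := os.Stat(dstPath)
		return false, err
	}
	// Authorized: dedupe via symbolic link and report "blob exists".
	return true, os.Symlink(cachedPath, dstPath)
}
```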
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(oras)!: remove ORAS artifact references support
ORAS artifacts/references predated OCI dist-spec 1.1.0, which now offers the
same functionality and is likely to see wider adoption.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: update to released official images
So that they are unlikely to be deleted.
*-rc images may be cleaned up over time.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
This causes the "fair" scheduler to run it too often, to the detriment of other generators.
The intention was to run it every 2 hours, but the measurement unit for 7200 was not specified.
Add more logs, including the generator name, to make this kind of issue easier to troubleshoot in the future.
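The likely shape of the bug, for illustration: in Go, a bare integer used as a time.Duration means nanoseconds:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	wrong := time.Duration(7200) // 7200 nanoseconds: fires almost continuously
	right := 7200 * time.Second  // the intended 2 hours
	fmt.Println(wrong, right)    // prints: 7.2µs 2h0m0s
}
```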
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
- update cve tests
- update scrub tests
- update tests for parsing storage and loading into meta DB
- update controller tests
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Instead of reading entire files before calculating their digests,
stream them by using their Reader method.
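A minimal sketch of the streaming approach, using go-digest's FromReader; the file name is made up:

```go
package main

import (
	"fmt"
	"os"

	godigest "github.com/opencontainers/go-digest"
)

// digestOf streams the file through the hash instead of loading the
// whole content into memory first.
func digestOf(path string) (godigest.Digest, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	// FromReader hashes chunk by chunk; memory use stays constant
	// regardless of the blob's size.
	return godigest.FromReader(f)
}

func main() {
	fmt.Println(digestOf("layer.tar.gz"))
}
```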
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as argument to scheduler.RunScheduler()
- running scheduler.Shutdown()
Because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the worker pool.
Solved by keeping a quit channel and listening on both the quit channel and ctx.Done(),
then closing the worker channel and the scheduler afterwards.
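A condensed sketch of the fix; zot's scheduler carries more state, and nextTask() here is a hypothetical placeholder:

```go
package scheduler

import "context"

type Task interface{ DoWork(ctx context.Context) error }

type Scheduler struct {
	workerChan chan Task
	quit       chan struct{}
}

func (s *Scheduler) nextTask() Task { return nil } // placeholder

func (s *Scheduler) RunScheduler(ctx context.Context) {
	go func() {
		// The worker channel is closed in exactly one place, after either
		// exit signal, so pushing tasks can no longer race the close.
		defer close(s.workerChan)

		for {
			select {
			case <-ctx.Done(): // caller canceled the context
				return
			case <-s.quit: // Shutdown() was called
				return
			default:
				s.workerChan <- s.nextTask()
			}
		}
	}()
}

// Shutdown signals the run loop instead of racing it.
func (s *Scheduler) Shutdown() { close(s.quit) }
```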
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this, we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by calling Shutdown().
Simplify things by getting rid of the external context in RunScheduler():
keep an internal context in the scheduler itself and pass it down to all tasks.
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Wait for workers to finish before exiting.
This should fix tests reporting they couldn't remove rootDir because it was
still being written to by tasks.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check whether the layout has been updated after the last recorded change in the DB
- If so, the repo is parsed and updated in the DB; otherwise it is skipped (see the sketch below)
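A sketch of the startup check; layoutLastModified, LastUpdated, and parseRepo are hypothetical helpers standing in for zot's storage and MetaDB calls:

```go
package meta

import "time"

// maybeParseRepo skips repos whose on-disk layout has not changed since
// the last recorded update in the MetaDB.
func maybeParseRepo(repo string, db interface {
	LastUpdated(repo string) time.Time
}) error {
	onDisk, err := layoutLastModified(repo) // newest modTime under the repo dir
	if err != nil {
		return err
	}
	if !onDisk.After(db.LastUpdated(repo)) {
		return nil // layout unchanged since last parse: skip
	}
	return parseRepo(repo) // re-parse and update the DB entry
}

// Hypothetical stubs for illustration only.
func layoutLastModified(repo string) (time.Time, error) { return time.Time{}, nil }
func parseRepo(repo string) error                       { return nil }
```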
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
- Cosign supports 2 types of signature formats:
  1. Using a tag -> each new signature of the same manifest is
     added as a new layer of the signature manifest carrying that
     specific tag ("{algorithm}-{digest_of_signed_manifest}.sig")
  2. Using referrers -> each new signature of the same manifest is
     added as a new manifest
- For adding these cosign signatures to metadb, we reserved index 0 of the
  list of cosign signatures for tag-based signatures. When a new tag-based
  signature is added for the same manifest, the element at the first position
  in its list of cosign signatures (in metadb) is updated/overwritten.
  When a new cosign signature (using referrers) is added for the same
  manifest, the new signature is appended to the list of cosign
  signatures.
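A toy model of the bookkeeping described above; the SignatureInfo type and the empty-list handling are simplified relative to metadb:

```go
package meta

// SignatureInfo is an illustrative stand-in for metadb's signature record.
type SignatureInfo struct {
	Digest string
}

// addCosignSignature reserves index 0 for the tag-based signature
// (overwritten on every new tag-based push) and appends referrers-based
// signatures after it.
func addCosignSignature(sigs []SignatureInfo, sig SignatureInfo, tagBased bool) []SignatureInfo {
	if tagBased {
		if len(sigs) == 0 {
			return []SignatureInfo{sig}
		}
		sigs[0] = sig // update/overwrite the reserved slot
		return sigs
	}
	return append(sigs, sig) // referrers-based: always append
}
```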
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
Which could be imported independently. See more details:
1. "zotregistry.io/zot/pkg/test/common" - currently used as
   tcommon "zotregistry.io/zot/pkg/test/common" - inside pkg/test
   test "zotregistry.io/zot/pkg/test/common" - in tests
   . "zotregistry.io/zot/pkg/test/common" - in tests
   Decouple zb from code in pkg/test in order to keep the binary size small.
2. "zotregistry.io/zot/pkg/test/image-utils" - currently used as
   . "zotregistry.io/zot/pkg/test/image-utils"
3. "zotregistry.io/zot/pkg/test/deprecated" - currently used as
   "zotregistry.io/zot/pkg/test/deprecated"
   This one will be replaced gradually by image-utils in the future.
4. "zotregistry.io/zot/pkg/test/signature" - (cosign + notation) used as
   "zotregistry.io/zot/pkg/test/signature"
5. "zotregistry.io/zot/pkg/test/auth" - (bearer + oidc) currently used as
   authutils "zotregistry.io/zot/pkg/test/auth"
6. "zotregistry.io/zot/pkg/test/oci-utils" - currently used as
   ociutils "zotregistry.io/zot/pkg/test/oci-utils"
Some unused functions were removed, some were replaced, and in
a few cases specific functions were moved to the files they were used in.
Added an interface for the StoreController; this reduces the number of imports
of the entire image store, decreasing binary size for tests.
If the zb code were still coupled with pkg/test, this would have been reflected in zb's size.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
No need to run dedupe/restore on blobs for images being pushed or synced while
the dedupe task is running; they are already deduped/restored inline.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- implement scrub also for S3 storage by replacing umoci
- change scrub implementation for ImageIndex
- take the `Subject` into consideration when running scrub
- remove test code relying on the umoci library. Since we started
  relying on images in test/data, and we create our own images using
  go code, we can obtain digests by other means.

(cherry picked from commit 489d4e2d23c1b4e48799283f8281024bbef6123f)
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
fix(gc): deduped blobs were being cleaned because they carry the modTime of
the original blobs; fixed by updating the modTime when hard-linking
the blobs.
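A sketch of the fix, assuming local storage where dedupe uses hard links:

```go
package storage

import (
	"os"
	"time"
)

// dedupeBlob hard-links the duplicate to the original, then refreshes the
// modTime so GC judges the new entry by its own age rather than the
// original blob's.
func dedupeBlob(originalPath, dupPath string) error {
	if err := os.Link(originalPath, dupPath); err != nil {
		return err
	}
	now := time.Now()
	return os.Chtimes(dupPath, now, now)
}
```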
fix(gc): parsing rootDir failed at zot startup when using s3 storage,
because there are no files under rootDir and we cannot create empty dirs
on s3; fixed by creating an empty file under rootDir.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
GC stress on s3 storage now uses minio for ci/cd builds
and localstack for nightly builds.
fix(gc): make sure we don't remove a repo if there are blobs
being uploaded or if the number of blobs gc'ed is not 0.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>