* feat: add support for docker images
Issue #724
A new config section under "HTTP" called "Compat" is added, which
currently takes a list of possible compatible legacy media types.
https://github.com/opencontainers/image-spec/blob/main/media-types.md#compatibility-matrix
Only "docker2s2" (Docker Manifest V2 Schema 2) is currently supported.
Garbage collection also needs to be made aware of non-OCI compatible
layer types.
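A rough sketch of what the new section might map to in Go config structs (field names and layout here are assumptions for illustration, not zot's actual definitions):

```go
// Illustrative sketch only; zot's real config types may differ.
package config

type HTTPConfig struct {
	Address string
	Port    string
	// Compat lists accepted legacy media types, e.g. "docker2s2"
	// (Docker Manifest V2 Schema 2).
	Compat []string
}
```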
feat: add cve support for non-OCI compatible layer types
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
*
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: add more docker compat tests
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat: add additional validation checks for non-OCI images
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* ci: make "full" images docker-compatible
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Looks like we didn't have many GC tests for retaining multiarch images.
I added more data to the existing image retention tests, in addition to the new GC tests.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
When a client pushes an image, zot's inline dedupe tries to find the blob path
corresponding to the digest of the blob currently being pushed; if it is found
in the cache, zot creates a symbolic link to that cache entry and reports to
the client that the blob already exists at that location.
Before this patch, authorization was not applied to this process, meaning that
a user could copy blobs without having permissions on the source repo.
Added a rule saying that the client must have read permission on the source repo
before deduping; otherwise just Stat() the blob and return the corresponding
status code. See the sketch below.
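A minimal sketch of the added rule (names like AccessController/CanReadRepo are illustrative, not zot's actual identifiers):

```go
package storage

// Illustrative only: the real zot types and call sites differ.
type AccessController interface {
	CanReadRepo(username, repo string) bool
}

// canDedupe reports whether a blob found in the dedupe cache under srcRepo
// may be reused (symlinked) for the current push by this user. Without read
// permission on the source repo, the caller should just Stat() the blob and
// return the corresponding status code instead.
func canDedupe(ac AccessController, username, srcRepo string) bool {
	return ac.CanReadRepo(username, srcRepo)
}
```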
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(oras)!: remove ORAS artifact references support
ORAS artifacts/references predate OCI dist-spec 1.1.0, which now provides the
same functionality and is likely to see wider adoption.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: update to released official images
So that they are unlikely to be deleted.
*-rc images may be cleaned up over time.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
We use the chartmuseum library for handling bearer requests, but it does not
implement the token spec: it expects the "scope" parameter to be given on
every request, even for the /v2/ route, which doesn't represent a resource.
Handle the /v2/ route inside our own code, as sketched below.
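A minimal sketch of the idea, assuming a plain net/http handler (not zot's actual routing code): for the /v2/ base route the bearer challenge is issued without a scope.

```go
package server

import (
	"fmt"
	"net/http"
)

// challengeV2 answers GET /v2/ with a bearer challenge that omits "scope",
// since the base route does not represent a resource.
func challengeV2(w http.ResponseWriter, realm, service string) {
	header := fmt.Sprintf("Bearer realm=%q,service=%q", realm, service)
	w.Header().Set("WWW-Authenticate", header)
	w.WriteHeader(http.StatusUnauthorized)
}
```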
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- update cve tests
- update scrub tests
- update tests for parsing storage and loading into meta DB
- update controller tests
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Instead of reading entire files before calculating their digests,
stream them by using their Reader method.
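A sketch of the streaming approach, assuming the opencontainers/go-digest module (the helper name is illustrative):

```go
package storage

import (
	"io"
	"os"

	godigest "github.com/opencontainers/go-digest"
)

// blobDigest computes a blob's digest by streaming the file through the
// digester instead of reading the whole file into memory first.
func blobDigest(path string) (godigest.Digest, error) {
	fh, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer fh.Close()

	digester := godigest.SHA256.Digester()
	if _, err := io.Copy(digester.Hash(), fh); err != nil {
		return "", err
	}

	return digester.Digest(), nil
}
```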
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
For the CLI the output is similar to:

CRITICAL 0, HIGH 1, MEDIUM 1, LOW 0, UNKNOWN 0, TOTAL 2

ID             SEVERITY  TITLE
CVE-2023-0464  HIGH      openssl: Denial of service by excessive resou...
CVE-2023-0465  MEDIUM    openssl: Invalid certificate policies in leaf...
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as argument to scheduler.RunScheduler()
- calling scheduler.Shutdown()
Because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the pool workers.
Solved by keeping a quit channel and listening on both the quit channel and ctx.Done(),
then closing the worker channel and the scheduler afterwards.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by calling Shutdown().
Simplify things by getting rid of the external context in RunScheduler();
keep an internal context in the scheduler itself and pass it down to all tasks,
as in the sketch below.
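A minimal sketch of the single-shutdown design (illustrative types, not zot's actual scheduler code):

```go
package scheduler

import (
	"context"
	"sync"
)

type Task interface {
	DoWork(ctx context.Context) error
}

type Scheduler struct {
	tasks  chan Task
	ctx    context.Context
	cancel context.CancelFunc
	once   sync.Once
}

func NewScheduler() *Scheduler {
	ctx, cancel := context.WithCancel(context.Background())

	return &Scheduler{tasks: make(chan Task), ctx: ctx, cancel: cancel}
}

// RunScheduler no longer takes an external context; the internal one is the
// single source of truth and is passed down to every task.
func (s *Scheduler) RunScheduler() {
	go func() {
		for {
			select {
			case <-s.ctx.Done(): // Shutdown() is the only way to stop
				return
			case task := <-s.tasks:
				_ = task.DoWork(s.ctx)
			}
		}
	}()
}

// Shutdown cancels the internal context exactly once.
func (s *Scheduler) Shutdown() {
	s.once.Do(s.cancel)
}
```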
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Wait for workers to finish before exiting.
This should fix tests reporting that they couldn't remove rootDir because it is
still being written to by tasks. A sketch of the pattern is below.
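A sketch of the waiting pattern, assuming a worker pool like the scheduler's (illustrative, not the actual code):

```go
package scheduler

import "sync"

// waitForWorkers returns only after every worker goroutine has exited, so
// nothing is still writing under rootDir when tests clean it up.
func waitForWorkers(numWorkers int, work func()) {
	var wg sync.WaitGroup

	for i := 0; i < numWorkers; i++ {
		wg.Add(1)

		go func() {
			defer wg.Done()
			work()
		}()
	}

	wg.Wait()
}
```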
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check whether the layout has been updated after the last change recorded in the DB
- If that is the case, the repo is parsed and updated in the DB; otherwise it is skipped (see the sketch below)
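A minimal sketch of the startup check, assuming both timestamps are available (names are illustrative):

```go
package meta

import "time"

// shouldParseRepo reports whether the repo layout changed after the last
// update recorded for it in MetaDB; unchanged repos are skipped at startup.
func shouldParseRepo(layoutLastModified, metaDBLastUpdated time.Time) bool {
	return layoutLastModified.After(metaDBLastUpdated)
}
```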
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
- Cosign supports 2 types of signature formats:
  1. Using a tag -> each new signature of the same manifest is
     added as a new layer of the signature manifest having that
     specific tag ("{algorithm}-{digest_of_signed_manifest}.sig")
  2. Using referrers -> each new signature of the same manifest is
     added as a new manifest
- For adding these cosign signatures to metadb, we reserved index 0 of the
  list of cosign signatures for tag-based signatures. When a new tag-based
  signature is added for the same manifest, the element on the first position
  in its list of cosign signatures (in metadb) is updated/overwritten.
  When a new cosign signature (using referrers) is added for the same
  manifest, this new signature is appended to the list of cosign
  signatures, as sketched below.
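A sketch of the bookkeeping described above (types and names are illustrative, not metadb's actual ones):

```go
package meta

type SignatureInfo struct {
	SignatureManifestDigest string
	LayersInfo              []string
}

// addCosignSignature keeps index 0 reserved for the tag-based signature and
// appends referrers-based signatures after it.
func addCosignSignature(sigs []SignatureInfo, newSig SignatureInfo, tagBased bool) []SignatureInfo {
	if tagBased {
		if len(sigs) == 0 {
			return []SignatureInfo{newSig}
		}

		sigs[0] = newSig // overwrite the reserved tag-based slot

		return sigs
	}

	return append(sigs, newSig)
}
```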
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
ci(notation): update to latest notation version
fix(sync): add layers info when syncing signatures
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
Which could be imported independently. See more details:
1. "zotregistry.io/zot/pkg/test/common" - currently used as
   tcommon "zotregistry.io/zot/pkg/test/common" - inside pkg/test
   test "zotregistry.io/zot/pkg/test/common" - in tests
   . "zotregistry.io/zot/pkg/test/common" - in tests
   Decouple zb from code in pkg/test in order to keep the binary size small.
2. "zotregistry.io/zot/pkg/test/image-utils" - currently used as
   . "zotregistry.io/zot/pkg/test/image-utils"
3. "zotregistry.io/zot/pkg/test/deprecated" - currently used as
   "zotregistry.io/zot/pkg/test/deprecated"
   This one will be replaced gradually by image-utils in the future.
4. "zotregistry.io/zot/pkg/test/signature" - (cosign + notation) used as
   "zotregistry.io/zot/pkg/test/signature"
5. "zotregistry.io/zot/pkg/test/auth" - (bearer + oidc) currently used as
   authutils "zotregistry.io/zot/pkg/test/auth"
6. "zotregistry.io/zot/pkg/test/oci-utils" - currently used as
   ociutils "zotregistry.io/zot/pkg/test/oci-utils"
Some unused functions were removed, some were replaced, and in
a few cases specific functions were moved to the files they were used in.
Added an interface for the StoreController; this reduces the number of imports
of the entire image store, decreasing the binary size for tests (see the sketch below).
If the zb code were still coupled with pkg/test, this would have been reflected in the zb size.
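A minimal sketch of the decoupling idea (interface and method names are illustrative, not the actual zot ones):

```go
package ociutils

// Test helpers accept narrow interfaces instead of the concrete image store,
// so importing them does not pull the full storage implementation into zb.
type ImageStore interface {
	GetImageManifest(repo, reference string) ([]byte, string, string, error)
}

type StoreController interface {
	GetImageStore(repo string) ImageStore
}
```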
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
No need to run dedupe/restore on blobs of images being pushed or synced while
the dedupe task is running; they are already deduped/restored inline.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- implement scrub also for S3 storage by replacing umoci
- change scrub implementation for ImageIndex
- take the `Subject` into consideration when running scrub
- remove test code relying on the umoci library. Since we started
relying on images in test/data, and we create our own images using
Go code, we can obtain digests by other means. (cherry picked from commit 489d4e2d23c1b4e48799283f8281024bbef6123f)
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
fix(gc): deduped blobs were being cleaned up because they carry the modTime of
the original blobs; fixed by updating the modTime when hard linking
the blobs, as sketched below.
fix(gc): failing to parse rootDir at zot startup when using S3 storage,
because there are no files under rootDir and we cannot create empty dirs
on S3; fixed by creating an empty file under rootDir.
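A sketch of the first fix (helper name is illustrative, not zot's actual code): refresh the modTime when the dedupe hard link is created so GC does not see the deduped blob as being as old as the original.

```go
package storage

import (
	"os"
	"time"
)

// dedupeBlob hard-links dst to the original blob and refreshes the modTime so
// the GC retention window is measured from the time of deduplication.
func dedupeBlob(originalBlob, dst string) error {
	if err := os.Link(originalBlob, dst); err != nil {
		return err
	}

	now := time.Now()

	return os.Chtimes(dst, now, now)
}
```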
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
1. Move the existing CVE DB download generator/task logic under the cve package
2. Add a new CVE scanner task generator and task type to run in the background, as well as tests for it
3. Move the CVE cache into its own package
4. Add CVE scanner methods to check if an entry is present in the cache and to retrieve the results
5. Modify the FilterTags MetaDB method to not exit on the first error.
This is needed in order to pass all tags to the generator,
instead of the generator stopping at the first set of invalid data (see the sketch after this list)
6. Integrate the new scanning task generator with the existing zot code.
7. Fix an issue where the CVE scan results for multiarch images were not cached
8. Rewrite some of the older CVE tests to use the new image-utils test package
9. Use the CVE scanner as an attribute of the controller instead of CveInfo.
Remove the CVE DB update functionality from CveInfo; it is now responsible,
as the name states, only for providing CVE information.
10. The logic to get the maximum severity and CVE count for image summaries now uses only the scanner cache.
11. Remove the GetCVESummaryForImage method from CveInfo as it was only used in tests
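For item 5, a sketch of the behavior change (names are illustrative): collect per-tag errors instead of returning on the first one, so every tag still reaches the generator.

```go
package meta

import "errors"

// filterTags applies accept to every tag; invalid tags are skipped but no
// longer abort the walk, and their errors are aggregated for the caller.
func filterTags(tags []string, accept func(tag string) error) ([]string, error) {
	var (
		kept []string
		errs error
	)

	for _, tag := range tags {
		if err := accept(tag); err != nil {
			errs = errors.Join(errs, err)

			continue
		}

		kept = append(kept, tag)
	}

	return kept, errs
}
```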
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
1. Only scan CVEs for images returned by GraphQL calls.
Since pagination was refactored to account for image indexes, we had started
to run the CVE scanner before pagination was applied, resulting in
decreased zot performance when CVE information was requested.
2. Increase the in-memory cache of CVE results to 1M digests, from 10K.
3. Update the CVE model to use CVSS severity values in our code.
Previously we relied upon the strings returned by trivy directly,
and the sorting they implemented.
Since CVE severities are standardized, we don't need to pass around
an adapter object just for pagination and sorting purposes anymore.
This also improves our testing since we don't mock the sorting functions anymore.
4. Fix a flaky CLI test not waiting for the zot service to start.
5. Add the search build label on search/cve tests which were missing it.
6. The boltdb Update method was used in a few places where View was supposed to be called (see the sketch after this list).
7. Add logs for the start and finish of MetaDB parsing.
8. Avoid unmarshalling twice to obtain annotations for multiarch images.
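For item 6, a sketch of the difference (bucket/key names are illustrative, not MetaDB's actual ones): read-only lookups should run inside View, which uses a read transaction, rather than Update, which serializes all writers.

```go
package meta

import bolt "go.etcd.io/bbolt"

// getRepoMeta reads a repo entry without taking the writer lock.
func getRepoMeta(db *bolt.DB, repo string) ([]byte, error) {
	var blob []byte

	err := db.View(func(tx *bolt.Tx) error { // read-only transaction
		bucket := tx.Bucket([]byte("RepoMetadataBucket"))
		if bucket == nil {
			return nil
		}

		blob = append([]byte{}, bucket.Get([]byte(repo))...)

		return nil
	})

	return blob, err
}
```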
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>