The Trivy library now supports multiple locations from which to download the DBs.
The zot code has been updated to properly call the updated library functions.
If at some point we want to support multiple Trivy DBs in zot, we can look into it further.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
There are 2 remaining exceptions that I am aware of:
1. The tests under test/blackbox/cve.bats
2. One of the CLI tests checking that the server attempts to download the databases
from the default URL
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat: add support for docker images
Issue #724
A new config section called "Compat" is added under "HTTP"; it currently
takes a list of compatible legacy media types.
https://github.com/opencontainers/image-spec/blob/main/media-types.md#compatibility-matrix
Only "docker2s2" (Docker Manifest V2 Schema V2) is currently supported.
Garbage collection also needs to be made aware of non-OCI compatible
layer types.
feat: add cve support for non-OCI compatible layer types
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: add more docker compat tests
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat: add additional validation checks for non-OCI images
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* ci: make "full" images docker-compatible
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
chore: upgrade trivy to v0.55.2, also update the logic of waiting for zot to start in some jobs
Seems like there's an increase in the time zot requires to start before servicing requests.
From my GitHub observations it is better to check using curl instead of relying on hardcoded 5s or 10s values.
The logic in .github/workflows/cluster.yaml seems to be old and out of date.
Even on main right now there is only 1 out of 3 zots actually running.
The other 2 are actually erroring: Error: operation timeout: boltdb file is already in use, path '/tmp/zot/cache.db'
This is unrelated to this PR; I am seeing the same issue in older workflow runs that still have logs available.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
when a UI client tries to view image details
for an image with multiple tags pointing to the same digest
we make a query to dynamodb having duplicate keys (same digest)
resulting in an error, and the client is redirected back to the image
overview.
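A minimal sketch of the kind of key deduplication that avoids this; the helper name is illustrative, not the actual metadb code:

```go
package meta

// dedupeDigests drops repeated digests so a DynamoDB batch request never
// contains duplicate keys, which DynamoDB rejects with a validation error.
func dedupeDigests(digests []string) []string {
	seen := make(map[string]struct{}, len(digests))
	unique := make([]string, 0, len(digests))

	for _, digest := range digests {
		if _, ok := seen[digest]; ok {
			continue
		}

		seen[digest] = struct{}{}
		unique = append(unique, digest)
	}

	return unique
}
```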
closes: #2464
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Fall back to the Created field and the History entries in the image config
only if the annotation "org.opencontainers.image.created" is not available
closes #2210
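A rough sketch of that precedence using the upstream image-spec types; the function name and string return are assumptions:

```go
package image

import (
	"time"

	ispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// imageCreated prefers the "org.opencontainers.image.created" annotation and
// only falls back to the config's Created field when the annotation is absent
// (the real change also walks the config History entries as a last resort).
func imageCreated(manifest ispec.Manifest, config ispec.Image) string {
	if created := manifest.Annotations[ispec.AnnotationCreated]; created != "" {
		return created
	}

	if config.Created != nil {
		return config.Created.Format(time.RFC3339)
	}

	return ""
}
```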
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
This causes the "fair" scheduler to run it too often in the detriment of other generators.
The intention was to run it every 2 hours but the measurement unit for 7200 was not specified.
Add more logs, including showing a generator name, in order to troubleshoot this kind of issues easier in the future.
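If the interval is a Go time.Duration, this is the classic unit pitfall; a small illustration, not the actual generator code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A bare integer converted to time.Duration is nanoseconds,
	// so 7200 means 7.2 microseconds and the task becomes due almost immediately.
	wrong := time.Duration(7200)

	// The intended "every 2 hours" needs an explicit unit.
	right := 7200 * time.Second

	fmt.Println(wrong, right) // 7.2µs 2h0m0s
}
```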
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
- update cve tests
- update scrub tests
- update tests for parsing storage and loading into meta DB
- update controller tests
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
For the CLI, the output is similar to:
CRITICAL 0, HIGH 1, MEDIUM 1, LOW 0, UNKNOWN 0, TOTAL 2
ID SEVERITY TITLE
CVE-2023-0464 HIGH openssl: Denial of service by excessive resou...
CVE-2023-0465 MEDIUM openssl: Invalid certificate policies in leaf...
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
- added a new field 'IsDeletable' to the graphql ImageSummary struct.
- apply CORS on the DeleteManifest route
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(scheduler): data race when pushing new tasks
the problem here is that the scheduler can be closed in two ways:
- canceling the context given as argument to scheduler.RunScheduler()
- running scheduler.Shutdown()
because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the worker pool.
solved that by keeping a quit channel, listening on both the quit channel and ctx.Done(),
and closing the worker chan and the scheduler afterwards.
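A stripped-down sketch of that pattern; the real scheduler has more state and these names are simplified:

```go
package scheduler

import "context"

// Task is a simplified stand-in for the scheduler's task type.
type Task func()

type Scheduler struct {
	workerChan chan Task     // consumed by the worker pool
	ready      chan Task     // tasks ready to be dispatched
	quit       chan struct{} // closed by Shutdown()
}

// dispatch stops on either the context being canceled or Shutdown() closing
// the quit channel, and closes the worker chan only afterwards, so no send
// can race with the close.
func (s *Scheduler) dispatch(ctx context.Context) {
	defer close(s.workerChan)

	for {
		select {
		case <-ctx.Done():
			return
		case <-s.quit:
			return
		case task := <-s.ready:
			s.workerChan <- task
		}
	}
}
```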
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
before this we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by calling Shutdown().
simplify things by getting rid of the external context in RunScheduler().
keep an internal context in the scheduler itself and pass it down to all tasks.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
wait for workers to finish before exiting
should fix tests reporting they could not remove rootDir because it is still being
written to by tasks
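In Go this is the usual sync.WaitGroup pattern; a minimal sketch, not the exact scheduler code:

```go
package scheduler

import (
	"context"
	"sync"
)

// runWorkers starts the worker pool and blocks until every worker has
// returned, so nothing is still writing under rootDir when the caller exits.
func runWorkers(ctx context.Context, numWorkers int, tasks <-chan func()) {
	var wg sync.WaitGroup

	for i := 0; i < numWorkers; i++ {
		wg.Add(1)

		go func() {
			defer wg.Done()

			for {
				select {
				case <-ctx.Done():
					return
				case task, ok := <-tasks:
					if !ok {
						return
					}

					task()
				}
			}
		}()
	}

	wg.Wait() // wait for workers to finish before exiting
}
```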
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check if the layout has been updated after the last recorded change in the DB
- If this is the case, the repo is parsed and updated in the DB, otherwise it is skipped
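A sketch of the startup check, with hypothetical names standing in for the real MetaDB and storage calls:

```go
package meta

import "time"

// shouldParseRepo reports whether the layout of a repo changed after the last
// update recorded in MetaDB; unchanged repos are skipped during startup.
func shouldParseRepo(layoutModified, lastMetaDBUpdate time.Time) bool {
	// A zero timestamp means the repo was never recorded, so parse it.
	if lastMetaDBUpdate.IsZero() {
		return true
	}

	return layoutModified.After(lastMetaDBUpdate)
}
```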
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
- Cosign supports two types of signature formats:
1. Using a tag -> each new signature of the same manifest is
added as a new layer of the signature manifest having that
specific tag ("{algorithm}-{digest_of_signed_manifest}.sig")
2. Using referrers -> each new signature of the same manifest is
added as a new manifest
- For adding these cosign signatures to metadb, we reserved index 0 of the
list of cosign signatures for tag-based signatures. When a new tag-based
signature is added for the same manifest, the element in the first position
of its list of cosign signatures (in metadb) will be updated/overwritten.
When a new cosign signature (using referrers) is added for the same
manifest, this new signature will be appended to the list of cosign
signatures.
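A sketch of that index-0 reservation over a plain slice; the types are simplified stand-ins for the actual metadb structures:

```go
package meta

// SignatureInfo is a simplified stand-in for the metadb signature entry.
type SignatureInfo struct {
	Digest string
	Layers []string
}

// addCosignSignature keeps index 0 reserved for the tag-based signature and
// overwrites it in place, while referrers-based signatures are appended.
func addCosignSignature(sigs []SignatureInfo, sig SignatureInfo, tagBased bool) []SignatureInfo {
	if tagBased {
		if len(sigs) == 0 {
			return []SignatureInfo{sig}
		}

		sigs[0] = sig // overwrite the reserved tag-based slot

		return sigs
	}

	return append(sigs, sig)
}
```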
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
ci(notation): update to latest notation version
fix(sync): add layers info when syncing signatures
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>