When a client pushes an image, zot's inline dedupe looks up the blob path corresponding to the digest of the blob currently being pushed. If it is found in the cache, zot creates a symbolic link to that cache entry and reports to the client that the blob already exists at that location.
Before this patch, authorization was not applied to this process, meaning that a user could copy blobs without having permissions on the source repo.
Added a rule which says that the client should have read permissions on the source repo before deduping; otherwise just Stat() the blob and return the corresponding status code.
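A minimal sketch of the rule in Go, with illustrative helper names (not zot's actual API):

```go
package main

import (
	"net/http"
	"os"
)

// lookupDedupeCache would query the dedupe cache for an existing path
// holding a blob with the given digest (elided here).
func lookupDedupeCache(digest string) (srcPath string, found bool) { return "", false }

// canRead would consult the access-control policy (elided here).
func canRead(user, repo string) bool { return false }

// repoOf would derive the source repo name from a cached blob path (elided here).
func repoOf(blobPath string) string { return "" }

// checkBlob reports to the client whether a blob already exists.
func checkBlob(user, digest, dstPath string) int {
	if srcPath, found := lookupDedupeCache(digest); found && canRead(user, repoOf(srcPath)) {
		// Read permission on the source repo: dedupe by symlinking
		// and report that the blob already exists at this location.
		_ = os.Symlink(srcPath, dstPath)
		return http.StatusOK
	}
	// No cache hit, or no read permission on the source repo: do not
	// dedupe; just Stat() the blob and return the matching status code.
	if _, err := os.Stat(dstPath); err != nil {
		return http.StatusNotFound // client will upload the blob itself
	}
	return http.StatusOK
}

func main() {}
```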
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(oras)!: remove ORAS artifact references support
ORAS artifacts/references predate OCI dist-spec 1.1.0, which now provides the same functionality and is likely to see wider adoption.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: update to released official images
So that they are unlikely to be deleted.
*-rc images may be cleaned up over time.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Instead of reading entire files before calculating their digests, stream them by using their Reader method.
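A minimal sketch of the streaming approach, using the github.com/opencontainers/go-digest package (the file handling here is illustrative):

```go
package main

import (
	"fmt"
	"os"

	godigest "github.com/opencontainers/go-digest"
)

// digestFile streams the file through the digester instead of
// loading the whole file into memory first.
func digestFile(path string) (godigest.Digest, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	// FromReader consumes the reader incrementally, so memory use
	// stays constant regardless of the blob size.
	return godigest.FromReader(f)
}

func main() {
	d, err := digestFile("/path/to/blob")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(d)
}
```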
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
For the CLI, the output is similar to:
CRITICAL 0, HIGH 1, MEDIUM 1, LOW 0, UNKNOWN 0, TOTAL 2

ID             SEVERITY  TITLE
CVE-2023-0464  HIGH      openssl: Denial of service by excessive resou...
CVE-2023-0465  MEDIUM    openssl: Invalid certificate policies in leaf...
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as an argument to scheduler.RunScheduler()
- running scheduler.Shutdown()
Because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the pool workers.
Solved that by keeping a quit channel and listening on both the quit channel and ctx.Done(),
and closing the worker channel and the scheduler afterwards.
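A minimal sketch of the quit-channel pattern, with simplified names (not the exact zot scheduler code):

```go
package main

import (
	"context"
	"sync"
)

type Task interface{ DoWork(ctx context.Context) error }

type Scheduler struct {
	workerChan chan Task
	quit       chan struct{}
	once       sync.Once
}

func NewScheduler() *Scheduler {
	return &Scheduler{workerChan: make(chan Task), quit: make(chan struct{})}
}

// RunScheduler pushes tasks until either the caller's context is
// canceled or Shutdown() closes the quit channel; only this goroutine
// closes the worker channel, so pushes never race with the close.
func (s *Scheduler) RunScheduler(ctx context.Context, next func() Task) {
	go func() {
		defer close(s.workerChan)
		for {
			// (a real scheduler would wait/sleep when no task is ready)
			task := next()
			if task == nil {
				continue
			}
			select {
			case s.workerChan <- task:
			case <-ctx.Done():
				return
			case <-s.quit:
				return
			}
		}
	}()
}

// Shutdown closes the quit channel exactly once.
func (s *Scheduler) Shutdown() { s.once.Do(func() { close(s.quit) }) }

func main() {}
```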
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this, we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by running Shutdown().
Simplify things by getting rid of the external context in RunScheduler():
keep an internal context in the scheduler itself and pass it down to all tasks.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Wait for workers to finish before exiting.
This should fix tests reporting that they couldn't remove rootDir because it was
still being written to by tasks.
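A minimal sketch combining the single internal-context shutdown with waiting for workers, using illustrative names:

```go
package main

import (
	"context"
	"sync"
)

type Scheduler struct {
	ctx     context.Context
	cancel  context.CancelFunc
	workers sync.WaitGroup
	tasks   chan func(ctx context.Context)
}

func NewScheduler(numWorkers int) *Scheduler {
	ctx, cancel := context.WithCancel(context.Background())
	s := &Scheduler{ctx: ctx, cancel: cancel, tasks: make(chan func(context.Context))}
	for i := 0; i < numWorkers; i++ {
		s.workers.Add(1)
		go func() {
			defer s.workers.Done()
			for task := range s.tasks {
				task(s.ctx) // the internal context is passed down to all tasks
			}
		}()
	}
	return s
}

// Shutdown is now the single way to stop the scheduler: it cancels the
// internal context, closes the task channel, and waits for the workers
// to finish so nothing is still writing to rootDir on exit.
func (s *Scheduler) Shutdown() {
	s.cancel()
	close(s.tasks)
	s.workers.Wait()
}

func main() {
	s := NewScheduler(2)
	s.tasks <- func(ctx context.Context) { /* do work, honor ctx */ }
	s.Shutdown()
}
```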
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check if the layout has been updated after the last recorded change in the DB
- If this is the case, the repo is parsed and updated in the DB; otherwise it's skipped
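A minimal sketch of the startup check, with hypothetical MetaDB/layout accessors:

```go
package main

import (
	"fmt"
	"time"
)

// lastDBUpdate would return the last update time recorded in MetaDB
// for the repo (elided).
func lastDBUpdate(repo string) time.Time { return time.Time{} }

// layoutModTime would return the modification time of the repo's
// layout on disk (elided).
func layoutModTime(repo string) time.Time { return time.Now() }

// parseRepo would re-parse the repo and update MetaDB (elided).
func parseRepo(repo string) error { return nil }

func syncRepoMeta(repo string) error {
	// Only repos whose layout changed after the last recorded DB
	// update are parsed again; everything else is skipped.
	if layoutModTime(repo).After(lastDBUpdate(repo)) {
		return parseRepo(repo)
	}
	fmt.Println("skipping", repo, "- MetaDB already up to date")
	return nil
}

func main() { _ = syncRepoMeta("alpine") }
```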
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
No need to run dedupe/restore blobs for images being pushed or synced while
the dedupe task is running; they are already deduped/restored inline.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
fix(gc): deduped blobs were being cleaned up because they carry the modTime of
the original blobs; fixed by updating the modTime when hard linking
the blobs.
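A minimal sketch of the modTime fix, using only the standard library (paths are illustrative):

```go
package main

import (
	"os"
	"time"
)

// dedupeBlob hard-links dstPath to the already-stored original blob
// and refreshes the modTime, so GC does not treat the new blob as if
// it were as old as the original it links to.
func dedupeBlob(originalPath, dstPath string) error {
	if err := os.Link(originalPath, dstPath); err != nil {
		return err
	}
	// Hard links share the original file's timestamps; bump the
	// modTime so GC's retention-delay check uses the link time.
	now := time.Now()
	return os.Chtimes(dstPath, now, now)
}

func main() {
	_ = dedupeBlob("/root/repo-a/blobs/sha256/abc", "/root/repo-b/blobs/sha256/abc")
}
```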
fix(gc): zot failed to parse rootDir at startup when using s3 storage,
because there are no files under rootDir and we cannot create empty dirs
on s3; fixed by creating an empty file under rootDir.
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
1. Move the existing CVE DB download generator/task logic under the cve package
2. Add a new CVE scanner task generator and task type to run in the background, as well as tests for it
3. Move the CVE cache into its own package
4. Add CVE scanner methods to check if an entry is present in the cache and to retrieve the results (see the sketch after this list)
5. Modify the FilterTags MetaDB method to not exit on the first error
This is needed in order to pass all tags to the generator,
instead of the generator stopping at the first set of invalid data
6. Integrate the new scanning task generator with the existing zot code
7. Fix an issue where the CVE scan results for multiarch images were not cached
8. Rewrite some of the older CVE tests to use the new image-utils test package
9. Use the CVE scanner as an attribute of the controller instead of CveInfo.
Remove the CVE DB update functionality from CveInfo; it is now responsible,
as the name states, only for providing CVE information
10. The logic to get the maximum severity and CVE count for image summaries now uses only the scanner cache
11. Remove the GetCVESummaryForImage method from CveInfo, as it was only used in tests
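A minimal sketch of the cache-facing methods from item 4, with illustrative names and result shape (not the exact zot signatures):

```go
package main

import (
	"fmt"
	"sync"
)

// CVERecord stands in for a scan result (severity counts, IDs, ...).
type CVERecord struct {
	Count int
}

// cveCache is a digest-keyed result cache, as described in items 3 and 4.
type cveCache struct {
	mu      sync.RWMutex
	results map[string]CVERecord
}

// IsResultCached checks whether a scan result exists for the digest
// without computing anything.
func (c *cveCache) IsResultCached(digest string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	_, ok := c.results[digest]
	return ok
}

// GetCachedResult retrieves the stored result; the bool reports
// whether the digest was present at all.
func (c *cveCache) GetCachedResult(digest string) (CVERecord, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	rec, ok := c.results[digest]
	return rec, ok
}

func main() {
	c := &cveCache{results: map[string]CVERecord{"sha256:abc": {Count: 2}}}
	if rec, ok := c.GetCachedResult("sha256:abc"); ok {
		fmt.Println("cached CVE count:", rec.Count)
	}
}
```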
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
1. Only scan CVEs for images returned by graphql calls
Since pagination was refactored to account for image indexes, we had started
to run the CVE scanner before pagination was applied, resulting in
decreased zot performance whenever CVE information was requested
2. Increase the in-memory cache of CVE results to 1M digests, from 10K
3. Update the CVE model to use standardized CVSS severity values in our code (see the sketch after this list)
Previously we relied upon the strings returned by trivy directly,
and the sorting they implemented.
Since CVE severities are standardized, we don't need to pass around
an adapter object just for pagination and sorting purposes anymore.
This also improves our testing, since we don't mock the sorting functions anymore
4. Fix a flaky CLI test not waiting for the zot service to start
5. Add the search build label to search/cve tests which were missing it
6. Fix a few places where the boltdb Update method was used when View was supposed to be called
7. Add logs for the start and finish of MetaDB parsing
8. Avoid unmarshalling twice to obtain annotations for multiarch images
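A minimal sketch of sorting on standardized severity values instead of trivy's strings (the rank mapping is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// severityRank orders the standardized CVSS severity levels, so no
// adapter object is needed for pagination and sorting.
var severityRank = map[string]int{
	"UNKNOWN": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4,
}

type CVE struct {
	ID       string
	Severity string
}

// sortBySeverity puts the most severe entries first.
func sortBySeverity(cves []CVE) {
	sort.Slice(cves, func(i, j int) bool {
		return severityRank[cves[i].Severity] > severityRank[cves[j].Severity]
	})
}

func main() {
	cves := []CVE{{"CVE-2023-0465", "MEDIUM"}, {"CVE-2023-0464", "HIGH"}}
	sortBySeverity(cves)
	fmt.Println(cves)
}
```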
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
- using the secrets manager for storing public keys and certificates
- adding a default truststore for notation verification and uploading all certificates to this default truststore
- removing the `truststoreName` query param from the notation API for uploading certificates
(cherry picked from commit eafcc1a213)
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>
Unified both local and s3 ImageStore logic into a single ImageStore.
Added a new driver interface for common file/dir manipulations,
to be implemented by different storage types.
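A minimal sketch of the split, with an illustrative method set (not zot's exact interface):

```go
package main

import "io"

// StorageDriver is the common file/dir surface an ImageStore needs;
// a local-filesystem driver and an s3 driver would both implement it.
type StorageDriver interface {
	Stat(path string) (size int64, err error)
	Reader(path string, offset int64) (io.ReadCloser, error)
	Writer(path string, append bool) (io.WriteCloser, error)
	Link(src, dst string) error
	Delete(path string) error
	List(dir string) ([]string, error)
	EnsureDir(path string) error
}

// ImageStore holds the logic shared by local and s3 storage and
// delegates all file/dir manipulation to the driver.
type ImageStore struct {
	rootDir string
	driver  StorageDriver
}

func main() {}
```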
refactor(gc): drop umoci dependency, implement internal gc
Added a retentionDelay config option that specifies
the garbage collection delay for images without tags.
This will also clean manifests which are part of an index image
(multiarch) that no longer exists.
fix(dedupe): skip blobs under the .sync/ directory
If startup dedupe is running while sync is also running,
ignore blobs under sync's temporary storage.
fix(storage): do not allow image index modifications
When deleting a manifest, verify that it is not part of a multiarch image,
and return a MethodNotAllowed error to the client if it is;
we don't want to modify multiarch images.
This change introduces OpenID authn using providers such as GitHub,
GitLab, Google and Dex.
User sessions are now used for web clients to identify
and persist an authenticated user's session, thus not requiring every request to
use credentials.
Another change is the API key feature: users can create/revoke their API keys and use them
to authenticate from CLI clients such as skopeo.
e.g.:
login:
/auth/login?provider=github
/auth/login?provider=gitlab
and so on
logout:
/auth/logout
redirectURL:
/auth/callback/github
/auth/callback/gitlab
and so on
If the network policy doesn't allow inbound connections, this callback won't work!
For more info, read the documentation added in this commit.
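A hedged example of authenticating with an API key from a client, assuming the key is sent as the basic-auth password (hostname and username are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Query the registry API with an API key instead of a web session.
	req, err := http.NewRequest(http.MethodGet, "https://zot.example.com/v2/_catalog", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// The API key is presented as the basic-auth password, the same
	// way a CLI client such as skopeo would send credentials.
	req.SetBasicAuth("myuser", os.Getenv("ZOT_APIKEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```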
Signed-off-by: Alex Stan <alexandrustan96@yahoo.ro>
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Alex Stan <alexandrustan96@yahoo.ro>
When pushing manifests, zot validates that the blobs (layers + config blob) are
present in the repo; currently it opens (in the case of filesystem storage) or downloads
(in the case of cloud storage) each blob.
Fixed that by adding a new storage method, ImageStore.CheckBlobPresence(), which
checks blob presence without consulting the cache the way the ImageStore.CheckBlob() method does.
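A minimal sketch of the difference, with illustrative internals: CheckBlobPresence() only Stat()s the blob inside the repo, without the cache lookup (and potential open/download) that CheckBlob() performs:

```go
package main

import (
	"os"
	"path"
)

type ImageStore struct {
	rootDir string
}

func (is *ImageStore) blobPath(repo, digest string) string {
	return path.Join(is.rootDir, repo, "blobs", "sha256", digest)
}

// CheckBlobPresence answers "is this blob in the repo?" with a plain
// existence check: no cache lookup, no open/download of the blob.
func (is *ImageStore) CheckBlobPresence(repo, digest string) (bool, error) {
	if _, err := os.Stat(is.blobPath(repo, digest)); err != nil {
		if os.IsNotExist(err) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

func main() {}
```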
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Initial code was contributed by Bogdan BIVOLARU <104334+bogdanbiv@users.noreply.github.com>
Moved implementation from a separate db to repodb by Andrei Aaron <aaaron@luxoft.com>
Not done yet:
- run/test dynamodb implementation, only boltdb was tested
- add additional coverage for existing functionality
- add web-based APIs to toggle the stars/bookmarks on/off
Initially a graphql mutation was discussed for the missing API, but
we decided REST endpoints would be better suited for configuration.
feat(userdb): complete functionality for userdb integration
- dynamodb: roll back changes to a user's starred repos in case increasing the total star count fails
- dynamodb: increment/decrement repo stars in repometa when a user stars/unstars a repo
- dynamodb: check that anonymous user permissions are working as intended
- common tests: handle anonymous users
- RepoMeta2RepoSummary: set IsStarred and IsBookmarked
feat(userdb): REST API calls for toggling stars/bookmarks on/off (see the sketch below)
test(userdb): blackbox tests
test(userdb): move the preferences tests into a separate file with specific build tags
feat(repodb): add is-starred and is-bookmarked fields to repo-meta
- removed the duplicated logic for determining if a repo is starred/bookmarked
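A minimal sketch of the toggle semantics, with illustrative in-memory types; the real dynamodb implementation also rolls back the user's starred list if updating the total star count fails:

```go
package main

import "fmt"

type userData struct {
	starredRepos map[string]bool
}

type repoMeta struct {
	Stars int
}

// ToggleStar flips the user's star for a repo and keeps the repo's
// total star count in sync, returning the new state.
func ToggleStar(user *userData, meta *repoMeta, repo string) string {
	if user.starredRepos[repo] {
		delete(user.starredRepos, repo)
		meta.Stars--
		return "unstarred"
	}
	user.starredRepos[repo] = true
	meta.Stars++
	return "starred"
}

func main() {
	u := &userData{starredRepos: map[string]bool{}}
	m := &repoMeta{}
	fmt.Println(ToggleStar(u, m, "alpine"), m.Stars) // starred 1
	fmt.Println(ToggleStar(u, m, "alpine"), m.Stars) // unstarred 0
}
```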
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
Co-authored-by: Andrei Aaron <aaaron@luxoft.com>
- before, the download count for a manifest and the repo star count were lost after a reload
- now we keep these values when we reset the repo-meta structure
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
Changed repodb to store more information about referrers, as needed by the referrers query.
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>