* feat: add support for docker images
Issue #724
A new config section under "HTTP" called "Compat" is added, which
currently takes a list of possible compatible legacy media types.
https://github.com/opencontainers/image-spec/blob/main/media-types.md#compatibility-matrix
Only "docker2s2" (Docker Manifest V2 Schema V2) is currently supported.
Garbage collection also needs to be made aware of non-OCI compatible
layer types.
feat: add cve support for non-OCI compatible layer types
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: add more docker compat tests
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat: add additional validation checks for non-OCI images
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* ci: make "full" images docker-compatible
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
In case of delete by tag, only the tag is removed; the manifest itself continues to be accessible by digest.
In case of delete by digest, the manifest is completely removed (provided it is not used by an index or another reference).
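A rough Go sketch of these delete semantics; the Store interface and helper names are hypothetical, not zot's actual storage API:

```go
package storage

import (
	"errors"
	"strings"
)

// Hypothetical storage interface used only for this sketch.
type Store interface {
	Untag(repo, tag string) error
	DeleteManifest(repo, digest string) error
	IsReferenced(repo, digest string) bool
}

var errManifestReferenced = errors.New("manifest is still referenced")

func isDigest(reference string) bool {
	return strings.HasPrefix(reference, "sha256:")
}

func deleteImage(store Store, repo, reference string) error {
	if isDigest(reference) {
		// Delete by digest: remove the manifest entirely, unless an index
		// or another reference still points at it.
		if store.IsReferenced(repo, reference) {
			return errManifestReferenced
		}

		return store.DeleteManifest(repo, reference)
	}

	// Delete by tag: only the tag mapping is removed; the manifest stays
	// reachable by digest.
	return store.Untag(repo, reference)
}
```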
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
fix(authn): configurable hashing/encryption keys used to secure cookies
If they are not configured, zot will generate a random hashing key at startup,
invalidating all cookies if zot is restarted. closes: #2526
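A minimal sketch of the fallback, assuming gorilla/securecookie for key generation; the function and parameter names are illustrative, not zot's actual config schema:

```go
package api

import "github.com/gorilla/securecookie"

// sessionHashKey returns the configured cookie hash key if one was set,
// otherwise a random key generated at startup. A random key means every
// restart invalidates previously issued cookies.
func sessionHashKey(configuredKey string) []byte {
	if configuredKey != "" {
		return []byte(configuredKey)
	}

	// 64 bytes is the recommended size for securecookie HMAC hash keys.
	return securecookie.GenerateRandomKey(64)
}
```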
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
When a client pushes an image, zot's inline dedupe
tries to find the blob path corresponding to the blob digest
that is currently being pushed; if it is found in the cache,
zot makes a symbolic link to that cache entry and reports
to the client that the blob already exists at that location.
Before this patch, authorization was not applied to this process, meaning
that a user could copy blobs without having permissions on the source repo.
Added a rule which says that the client must have read permissions on the source repo
before deduping; otherwise just Stat() the blob and return the corresponding status code.
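A hedged sketch of the added rule; the AccessControl/BlobStore interfaces and helper names are hypothetical, not zot's actual code:

```go
package api

import (
	"context"
	"net/http"
)

// Hypothetical interfaces used only for this sketch.
type AccessControl interface {
	CanRead(ctx context.Context, repo string) bool
}

type BlobStore interface {
	StatBlob(repo, digest string) (int64, error)
	DedupeBlob(srcRepo, dstRepo, digest string) error
}

func headBlob(ctx context.Context, ac AccessControl, store BlobStore,
	dstRepo, srcRepo, digest string,
) (int, error) {
	if !ac.CanRead(ctx, srcRepo) {
		// No read permission on the repo holding the cached blob:
		// do not dedupe across repos, just Stat() the blob locally.
		if _, err := store.StatBlob(dstRepo, digest); err != nil {
			return http.StatusNotFound, err
		}

		return http.StatusOK, nil
	}

	// Read permission granted: it is safe to link against the cached entry.
	return http.StatusOK, store.DedupeBlob(srcRepo, dstRepo, digest)
}
```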
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Petu Eusebiu <peusebiu@cisco.com>
This commit includes support for periodic repo sync in a scale-out
cluster.
Before this commit, all cluster members would sync all the repos as
the config is shared.
With this change, in periodic sync, the cluster member checks whether
it manages the repo. If it does not manage the repo, it will skip the
sync.
This commit also includes a unit test for on-demand sync, but
there are no logic changes for it as it is implicitly handled by the
proxying logic.
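A small sketch of the skip logic, assuming repo ownership is derived from a siphash of the repo name (the key values and helper names are illustrative):

```go
package sync

import "github.com/dchest/siphash"

// repoOwner maps a repository name onto a cluster member index; this mirrors
// the scale-out ownership rule, with illustrative (zero) siphash keys.
func repoOwner(repo string, memberCount uint64) uint64 {
	return siphash.Hash(0, 0, []byte(repo)) % memberCount
}

// reposToSync keeps only the repositories managed by the local member, so a
// periodic sync on this instance skips everything owned by other members.
func reposToSync(allRepos []string, localMember, memberCount uint64) []string {
	var owned []string

	for _, repo := range allRepos {
		if repoOwner(repo, memberCount) != localMember {
			continue // another member manages this repo; skip syncing it
		}

		owned = append(owned, repo)
	}

	return owned
}
```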
Signed-off-by: Vishwas Rajashekar <vrajashe@cisco.com>
* feat(cluster): initial commit for scale-out cluster
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat(cluster): support shared storage scale out
This change introduces support for shared-storage-backed
zot cluster scale-out.
New feature
Multiple stateless zot instances can run using the same shared
storage backend, where each instance handles a specific set
of repositories based on a siphash of the repository name; this improves
scale as the load is distributed across multiple instances.
For a given config, there will only be one instance that can perform
dist-spec read/write on a given repository.
What's changed?
- introduced a transparent request proxy for dist-spec endpoints based on
the siphash of the repository name (sketched below).
- new config for the scale-out cluster that specifies the list of
cluster members.
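A rough sketch of the routing idea, assuming github.com/dchest/siphash and an illustrative config shape; this is not zot's exact proxy implementation:

```go
package cluster

import (
	"net/http"
	"net/http/httputil"
	"net/url"

	"github.com/dchest/siphash"
)

// Config is an illustrative stand-in for the scale-out cluster config.
type Config struct {
	Members    []string // e.g. ["http://zot0:8080", "http://zot1:8080"]
	LocalIndex int      // index of this instance in Members
}

// ownerIndex picks the single member responsible for a repository.
func (c Config) ownerIndex(repo string) int {
	return int(siphash.Hash(0, 0, []byte(repo)) % uint64(len(c.Members)))
}

// Handle serves a dist-spec request locally if this instance owns the repo,
// otherwise transparently proxies it to the owning member.
func (c Config) Handle(w http.ResponseWriter, r *http.Request, repo string, local http.Handler) {
	owner := c.ownerIndex(repo)
	if owner == c.LocalIndex {
		local.ServeHTTP(w, r)

		return
	}

	target, err := url.Parse(c.Members[owner])
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)

		return
	}

	httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}
```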
Signed-off-by: Vishwas Rajashekar <vrajashe@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Signed-off-by: Vishwas Rajashekar <vrajashe@cisco.com>
Co-authored-by: Ramkumar Chinchani <rchincha@cisco.com>
* fix(oras)!: remove ORAS artifact references support
ORAS artifacts/references predated OCI dist-spec 1.1.0, which now has the
same functionality and is likely to see wider adoption.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* test: update to released official images
So that they are unlikely to be deleted.
*-rc images may be cleaned up over time.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
BREAKING CHANGE: the dist spec version in the config files needs to be bumped to 1.1.0
in order for the config verification to pass without warnings.
Also fix 1 dependabot alert for helm.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Refactor and add more coverage to address flaky coverage in the case where sessions
which are already deleted are flagged as expired/for deletion.
See coverage drop in pkg/api/cookiestore.go:
8e68255946/indirect-changes
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
This causes the "fair" scheduler to run it too often, to the detriment of other generators.
The intention was to run it every 2 hours, but the measurement unit for 7200 was not specified.
Add more logs, including showing the generator name, in order to make this kind of issue easier to troubleshoot in the future.
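For context, a tiny sketch of the unit mix-up: a bare integer converted to time.Duration is interpreted as nanoseconds, so 7200 without a unit is nowhere near 2 hours:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A bare integer cast to time.Duration means nanoseconds, which is why
	// the generator was rescheduled almost immediately.
	wrong := time.Duration(7200) // 7200ns
	right := 7200 * time.Second  // the intended 2 hours

	fmt.Println(wrong, right) // "7.2µs 2h0m0s"
}
```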
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
We use the chartmuseum lib for handling bearer requests, which does not
implement the token spec fully: it expects the "scope" parameter
to be given on every request, even for the /v2/ route, which doesn't represent
a resource.
Handle this /v2/ route inside our code.
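A minimal sketch of the special-casing, with an illustrative helper name; the actual handler differs:

```go
package api

import (
	"fmt"
	"net/http"
)

// requestedScope returns the token scope to ask for. The /v2/ base endpoint
// is not a resource, so no scope is requested for it; only repository routes
// carry a "repository:<name>:<action>" scope.
func requestedScope(r *http.Request, repo, action string) string {
	if r.URL.Path == "/v2/" || r.URL.Path == "/v2" {
		return "" // base API check: authentication only, no resource scope
	}

	return fmt.Sprintf("repository:%s:%s", repo, action)
}
```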
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- update cve tests
- update scrub tests
- update tests for parsing storage and loading into meta DB
- update controller tests
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
We cannot assume the LDAP server will have group attributes programmed
every time. So handle it accordingly.
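A small sketch of tolerating missing group attributes, assuming a go-ldap style client and the "memberOf" attribute; both are assumptions, not necessarily what zot uses:

```go
package ldap

import ldapv3 "github.com/go-ldap/ldap/v3"

// userGroups returns the groups found on an LDAP entry. An entry without the
// group attribute simply yields an empty slice, meaning "no groups", rather
// than being treated as an authentication failure.
func userGroups(entry *ldapv3.Entry) []string {
	if entry == nil {
		return nil
	}

	return entry.GetAttributeValues("memberOf")
}
```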
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
We expect panics in the server/datapath to be few and far between.
So the backtraces are more valuable now.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Init the shutdown routine after controller.Init().
Check for nil values before stopping the HTTP server and the task scheduler.
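A minimal sketch of the defensive shutdown; the Controller fields shown are illustrative:

```go
package api

import (
	"context"
	"net/http"
)

type Scheduler interface{ Shutdown() }

type Controller struct {
	httpServer    *http.Server
	taskScheduler Scheduler
}

// Shutdown only stops components that were actually initialized, so calling
// it before (or during a failed) Init() does not panic on nil values.
func (c *Controller) Shutdown() {
	if c.taskScheduler != nil {
		c.taskScheduler.Shutdown()
	}

	if c.httpServer != nil {
		_ = c.httpServer.Shutdown(context.Background())
	}
}
```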
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- added a new field 'IsDeletable' to the graphql ImageSummary struct
- apply CORS on the DeleteManifest route
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as argument to scheduler.RunScheduler()
- running scheduler.Shutdown()
Because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the pool workers.
Solved that by keeping a quit channel and listening on both the quit channel and ctx.Done(),
and closing the worker chan and scheduler afterwards.
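A rough sketch of the approach; the Scheduler fields and dispatch loop are illustrative, not the exact zot scheduler:

```go
package scheduler

import "context"

type Task func()

type Scheduler struct {
	pending    chan Task
	workerChan chan Task
	quit       chan struct{}
}

// dispatch is the only goroutine that writes to and closes workerChan, so
// Shutdown() can no longer race with tasks being pushed to the workers.
func (s *Scheduler) dispatch(ctx context.Context) {
	defer close(s.workerChan)

	for {
		select {
		case <-ctx.Done(): // external cancellation
			return
		case <-s.quit: // Shutdown() was called
			return
		case task := <-s.pending:
			s.workerChan <- task
		}
	}
}

func (s *Scheduler) Shutdown() {
	close(s.quit)
}
```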
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this we could stop the scheduler either by closing the context
provided to RunScheduler(ctx) or by running Shutdown().
Simplify things by getting rid of the external context in RunScheduler();
keep an internal context in the scheduler itself and pass it down to all tasks.
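A minimal sketch of the single-shutdown shape, with illustrative names:

```go
package scheduler

import "context"

type Scheduler struct {
	ctx    context.Context
	cancel context.CancelFunc
}

func NewScheduler() *Scheduler {
	ctx, cancel := context.WithCancel(context.Background())

	return &Scheduler{ctx: ctx, cancel: cancel}
}

// RunScheduler no longer takes an external context; the internal one is the
// only stop signal and is what gets passed down to tasks.
func (s *Scheduler) RunScheduler() {
	go func() {
		<-s.ctx.Done()
		// drain workers, release resources...
	}()
}

// Shutdown is the single way to stop the scheduler.
func (s *Scheduler) Shutdown() {
	s.cancel()
}
```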
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Wait for workers to finish before exiting.
This should fix tests reporting they couldn't remove rootDir because it's being
written to by tasks.
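A small sketch of waiting on workers with a sync.WaitGroup; names are illustrative:

```go
package scheduler

import "sync"

type workerPool struct {
	wg sync.WaitGroup
}

// run starts the workers and tracks each one in the WaitGroup.
func (p *workerPool) run(tasks <-chan func(), workers int) {
	for i := 0; i < workers; i++ {
		p.wg.Add(1)

		go func() {
			defer p.wg.Done()

			for task := range tasks {
				task()
			}
		}()
	}
}

// wait blocks until every worker has returned, so callers (and tests) can
// safely remove rootDir afterwards.
func (p *workerPool) wait() {
	p.wg.Wait()
}
```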
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check if the layout has been updated after the last recorded change in the DB
- If this is the case, the repo is parsed and updated in the DB; otherwise it's skipped (see the sketch below)
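A minimal sketch of the startup check, with hypothetical helper names:

```go
package meta

import "time"

// needsReparse reports whether the on-disk layout changed after the last
// update recorded in MetaDB for that repo.
func needsReparse(layoutModTime, lastDBUpdate time.Time) bool {
	return layoutModTime.After(lastDBUpdate)
}

// parseStorage re-parses only the repos whose layout is newer than the DB.
func parseStorage(repos []string,
	layoutModTime func(string) time.Time,
	lastDBUpdate func(string) time.Time,
	parse func(string),
) {
	for _, repo := range repos {
		if !needsReparse(layoutModTime(repo), lastDBUpdate(repo)) {
			continue // MetaDB is already up to date for this repo
		}

		parse(repo)
	}
}
```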
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
ci(notation): update to latest notation version
fix(sync): add layers info when syncing signatures
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>