* feat: add redis cache support
https://github.com/project-zot/zot/pull/2005
Fixes https://github.com/project-zot/zot/issues/2004
* feat: add redis cache support
Currently, we have DynamoDB as the remote shared cache, but it is ideal only
for the cloud use case.
For the on-prem use case, add support for Redis.
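As a rough sketch of the idea, a Redis-backed cache can map a blob digest to the set of known duplicate paths; the type names and key layout below are assumptions for illustration, not zot's actual code:

```go
// Illustrative only: type names and key layout are assumed, not zot's code.
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

type redisCache struct {
	client    *redis.Client
	keyPrefix string // namespaces zot keys inside a shared Redis DB
}

// PutBlob records a filesystem path holding the blob with this digest.
func (c *redisCache) PutBlob(ctx context.Context, digest, path string) error {
	return c.client.SAdd(ctx, c.keyPrefix+digest, path).Err()
}

// GetBlob returns one known path for the digest (used for dedupe lookups).
func (c *redisCache) GetBlob(ctx context.Context, digest string) (string, error) {
	return c.client.SRandMember(ctx, c.keyPrefix+digest).Result()
}

func main() {
	cache := &redisCache{
		client:    redis.NewClient(&redis.Options{Addr: "localhost:6379"}),
		keyPrefix: "zot:blobs:",
	}

	ctx := context.Background()
	_ = cache.PutBlob(ctx, "sha256:abc", "/var/lib/zot/blobs/sha256/abc")

	path, err := cache.GetBlob(ctx, "sha256:abc")
	fmt.Println(path, err)
}
```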
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat(redis): add blackbox tests for redis
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* feat(redis): dummy implementation of MetaDB interface for redis cache
Signed-off-by: Alexei Dodon <adodon@cisco.com>
* feat: check validity of driver configuration on metadb instantiation
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat: multiple fixes for redis cache driver implementation
- add missing method GetAllBlobs
- add redis cache tests, with and without mocking
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): redis implementation for MetaDB
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): use redsync to block concurrent write access to the redis DB
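A minimal sketch of this locking pattern, assuming redsync v4 with its go-redis v9 pool adapter (the mutex name is a placeholder):

```go
package main

import (
	"context"
	"log"

	"github.com/go-redsync/redsync/v4"
	"github.com/go-redsync/redsync/v4/redis/goredis/v9"
	"github.com/redis/go-redis/v9"
)

func main() {
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	rs := redsync.New(goredis.NewPool(client))

	// One named mutex per protected resource; Lock retries until the
	// distributed lock is acquired or the retry budget is exhausted.
	mutex := rs.NewMutex("zot:lock:cache")
	if err := mutex.Lock(); err != nil {
		log.Fatal(err)
	}
	defer func() {
		if ok, err := mutex.Unlock(); !ok || err != nil {
			log.Printf("failed to release lock: %v", err)
		}
	}()

	// ...perform the Redis write while holding the lock...
	_ = client.Set(context.Background(), "zot:example", "value", 0).Err()
}
```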
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): update .github/workflows/cluster.yaml to also test redis
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(metadb): add keyPrefix parameter for redis and remove unneeded method meta.Create()
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): support RedisCluster configuration and add unit tests
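For reference, a go-redis cluster connection looks roughly like this; the node addresses are placeholders and zot's config keys may differ:

```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	// ClusterClient routes commands to the right shard automatically.
	client := redis.NewClusterClient(&redis.ClusterOptions{
		Addrs: []string{"node1:6379", "node2:6379", "node3:6379"},
	})

	if err := client.Ping(context.Background()).Err(); err != nil {
		log.Fatalf("redis cluster not reachable: %v", err)
	}
}
```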
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): more tests for redis metadb implementation
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): add more examples and update examples/README.md
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): move option parsing and redis client initialization under pkg/api/config/redis
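A sketch of what such a centralized constructor could look like, assuming URL-based configuration; the package and helper names are hypothetical:

```go
// Package rediscfg: hypothetical shape of a shared client constructor.
package rediscfg

import "github.com/redis/go-redis/v9"

// NewClient parses a connection URL such as "redis://localhost:6379/0"
// and returns a ready-to-use client.
func NewClient(url string) (*redis.Client, error) {
	opts, err := redis.ParseURL(url)
	if err != nil {
		return nil, err
	}
	return redis.NewClient(opts), nil
}
```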
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* chore(cachedb): move Cache interface to pkg/storage/types
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): reorganize code in pkg/storage/cache.go
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): call redis.SetLogger() with the zot logger as parameter
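go-redis accepts any value with a Printf(ctx, format, args...) method, so the adapter can be a thin wrapper; assuming a zerolog-based logger (zot's logger wraps zerolog), a sketch:

```go
package main

import (
	"context"
	"os"

	"github.com/redis/go-redis/v9"
	"github.com/rs/zerolog"
)

// redisLogger adapts a zerolog.Logger to the Printf-style interface
// that redis.SetLogger expects.
type redisLogger struct {
	log zerolog.Logger
}

func (l redisLogger) Printf(ctx context.Context, format string, v ...interface{}) {
	l.log.Debug().Msgf(format, v...)
}

func main() {
	redis.SetLogger(redisLogger{log: zerolog.New(os.Stdout)})
}
```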
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(redis): rename pkg/meta/redisdb to pkg/meta/redis
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Signed-off-by: Alexei Dodon <adodon@cisco.com>
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Co-authored-by: a <a@tuxpa.in>
Co-authored-by: Ramkumar Chinchani <rchincha@cisco.com>
Co-authored-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: Alexei Dodon <adodon@cisco.com>
* chore: update zui version
Signed-off-by: Ramkumar Chinchani <rchincha.dev@gmail.com>
* chore: upload zap scan artifacts with different names for different scanned images
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha.dev@gmail.com>
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Co-authored-by: Andrei Aaron <aaaron@luxoft.com>
chore: upgrade trivy to v0.55.2, also update the logic of waiting for zot to start in some jobs
It seems there is an increase in the time zot requires to start before servicing requests.
From my GitHub observations, it is better to check readiness using curl instead of relying on hardcoded 5s or 10s sleeps.
The logic in .github/workflows/cluster.yaml seems to be old and out of date.
Even on main right now, only 1 out of 3 zots is actually running.
The other 2 are erroring out with: Error: operation timeout: boltdb file is already in use, path '/tmp/zot/cache.db'
This is unrelated to this PR; I am seeing the same issue in older workflow runs whose logs are still available.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
* feat(cluster): initial commit for scale-out cluster
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
* feat(cluster): support shared storage scale out
This change introduces support for scaling out a zot cluster backed by
shared storage.
New feature
Multiple stateless zot instances can run against the same shared
storage backend, with each instance serving a specific subset of
repositories selected by a siphash of the repository name; this improves
scale because the load is distributed across multiple instances.
For a given config, only one instance can perform dist-spec
reads/writes on a given repository.
What's changed?
- introduced a transparent request proxy for dist-spec endpoints based on
the siphash of the repository name (see the sketch after this list).
- new config for the scale-out cluster that specifies the list of
cluster members.
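A sketch of the member-selection idea using github.com/dchest/siphash; the hash key and member list are placeholders, and zot's actual routing code will differ:

```go
package main

import (
	"fmt"

	"github.com/dchest/siphash"
)

// targetMember maps a repository name onto one cluster member; all instances
// share the same key so they agree on which member owns which repository.
func targetMember(repo string, members []string) string {
	const k0, k1 = 0x0706050403020100, 0x0f0e0d0c0b0a0908 // shared 128-bit key
	sum := siphash.Hash(k0, k1, []byte(repo))
	return members[sum%uint64(len(members))]
}

func main() {
	members := []string{"zot0:8080", "zot1:8080", "zot2:8080"}
	// Requests for this repository are proxied to the same member every time.
	fmt.Println(targetMember("library/alpine", members))
}
```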
Signed-off-by: Vishwas Rajashekar <vrajashe@cisco.com>
---------
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Signed-off-by: Vishwas Rajashekar <vrajashe@cisco.com>
Co-authored-by: Ramkumar Chinchani <rchincha@cisco.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as argument to scheduler.RunScheduler()
- calling scheduler.Shutdown()
Because of this, a shutdown can trigger a data race between checking scheduler.inShutdown()
and actually pushing tasks to the worker pool.
Solved by keeping a quit channel and listening on both the quit channel and ctx.Done(),
then closing the worker channel and shutting the scheduler down afterwards.
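A minimal sketch of the pattern described above, with assumed names rather than zot's actual scheduler types; the point is that a single goroutine owns the workers channel, so closing it can no longer race with pushes:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Assumed shape, not zot's actual scheduler.
type Scheduler struct {
	tasks   chan func()
	workers chan func()
	quit    chan struct{}
}

func (s *Scheduler) dispatch(ctx context.Context) {
	defer close(s.workers) // only the dispatcher ever closes this
	for {
		select {
		case <-ctx.Done(): // external cancellation
			return
		case <-s.quit: // Shutdown() was called
			return
		case task := <-s.tasks:
			s.workers <- task
		}
	}
}

func (s *Scheduler) Shutdown() { close(s.quit) }

func main() {
	s := &Scheduler{
		tasks:   make(chan func()),
		workers: make(chan func(), 1),
		quit:    make(chan struct{}),
	}
	go s.dispatch(context.Background())

	s.tasks <- func() { fmt.Println("task ran") }
	(<-s.workers)()

	s.Shutdown()
	time.Sleep(10 * time.Millisecond) // give dispatch time to observe quit
}
```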
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this, we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by calling Shutdown().
Simplify things by getting rid of the external context in RunScheduler():
keep an internal context in the scheduler itself and pass it down to all tasks.
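A sketch of the resulting shape, with assumed names: the scheduler owns its context and Shutdown() becomes the single way to stop it:

```go
package main

import "context"

// Assumed shape: every task receives the scheduler's internal context.
type Scheduler struct {
	ctx    context.Context
	cancel context.CancelFunc
}

func NewScheduler() *Scheduler {
	ctx, cancel := context.WithCancel(context.Background())
	return &Scheduler{ctx: ctx, cancel: cancel}
}

// RunScheduler no longer takes an external context.
func (s *Scheduler) RunScheduler() {
	go func() {
		<-s.ctx.Done() // fires once Shutdown() is called
		// stop workers, drain task generators, etc.
	}()
}

func (s *Scheduler) Shutdown() { s.cancel() }

func main() {
	s := NewScheduler()
	s.RunScheduler()
	s.Shutdown()
}
```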
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* feat(sync): local tmp store
Signed-off-by: a <a@tuxpa.in>
* fix(sync): various fixes for s3+remote storage feature
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: a <a@tuxpa.in>
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: a <a@tuxpa.in>
ci(notation): update to latest notation version
fix(sync): add layers info when syncing signatures
Signed-off-by: Andreea-Lupu <andreealupu1470@yahoo.com>