Fall back to the Created field and the History entries in the image config
only if the annotation "org.opencontainers.image.created" is not available.
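A minimal sketch of the lookup order, assuming the OCI image-spec types
(ispec is github.com/opencontainers/image-spec/specs-go/v1; the function
name is hypothetical):

    // Prefer the annotation; fall back to the config's Created field,
    // then to the newest History entry that carries a timestamp.
    func imageCreated(annotations map[string]string, config ispec.Image) *time.Time {
        if created, ok := annotations[ispec.AnnotationCreated]; ok {
            if t, err := time.Parse(time.RFC3339, created); err == nil {
                return &t
            }
        }

        if config.Created != nil {
            return config.Created
        }

        for i := len(config.History) - 1; i >= 0; i-- {
            if c := config.History[i].Created; c != nil {
                return c
            }
        }

        return nil
    }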
closes #2210
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
This causes the "fair" scheduler to run it too often, to the detriment of other generators.
The intention was to run it every 2 hours, but the measurement unit for 7200 was not specified.
Add more logs, including the generator name, to make troubleshooting this kind of issue easier in the future.
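A minimal sketch of the likely pitfall, assuming the interval was built by
converting a bare integer to time.Duration:

    // A bare conversion is read as nanoseconds, so 7200 means 7.2µs and
    // the generator is rescheduled almost continuously.
    tooOften := time.Duration(7200)

    // Attaching the unit gives the intended interval of 2 hours.
    every2h := 7200 * time.Second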
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Generators are now ordered by rank in the priority queue.
The rank computation formula is:
- 100/(1+generated_task_count) for high priority tasks
- 10/(1+generated_task_count) for medium priority tasks
- 1/(1+generated_task_count) for low priority tasks
Note that the ranks are used when comparing generators both with the same priority and with different priorities.
So now we are:
- giving all generators with the same priority an opportunity to take turns generating tasks
- giving roughly 1 low-priority and 10 medium-priority tasks the opportunity to run for every 100 high-priority tasks
After a generator generates a task, the generators are reordered in the priority queue based on rank.
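A minimal sketch of the formula (names hypothetical):

    // rank implements the formula above: weight 100 for high, 10 for
    // medium, 1 for low, divided by one plus the generated-task count.
    func rank(priorityWeight float64, generatedTaskCount int) float64 {
        return priorityWeight / float64(1+generatedTaskCount)
    }

For example, a high-priority generator that has already produced 100 tasks
ranks at 100/101, just below a fresh low-priority generator at 1/1, which is
what lets the occasional low-priority task through.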
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
We use the chartmuseum library for handling bearer requests, but it does not
fully implement the token spec: mainly, it expects the "scope" parameter
to be given on every request, even for the /v2/ route, which doesn't represent
a resource.
Handle the /v2/ route inside our code.
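A minimal sketch of the idea (the handler shape and repoFromPath are
hypothetical): the base /v2/ route names no resource, so its challenge
carries no scope, while other routes keep theirs.

    func authChallenge(r *http.Request, realm, service string) string {
        if r.URL.Path == "/v2/" {
            return fmt.Sprintf("Bearer realm=%q,service=%q", realm, service)
        }

        scope := fmt.Sprintf("repository:%s:pull", repoFromPath(r.URL.Path))

        return fmt.Sprintf("Bearer realm=%q,service=%q,scope=%q", realm, service, scope)
    }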
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- update cve tests
- update scrub tests
- update tests for parsing storage and loading into meta DB
- update controller tests
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Instead of reading entire files before calculating their digests,
stream them by using their Reader method.
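A minimal sketch of the streaming pattern, using
github.com/opencontainers/go-digest (imported as godigest; the wrapper
function is hypothetical):

    // FromReader streams the file through the hash instead of buffering
    // the whole content in memory first.
    func fileDigest(path string) (godigest.Digest, error) {
        fh, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer fh.Close()

        return godigest.FromReader(fh)
    }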
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
We cannot assume the LDAP server will have group attributes programmed
every time, so handle that case accordingly.
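A minimal sketch, assuming github.com/go-ldap/ldap/v3 and a hypothetical
entry variable holding the search result:

    // GetAttributeValues returns an empty slice when the attribute is
    // absent, so missing groups degrade to "no groups" rather than a panic.
    groups := []string{}
    if entry != nil {
        groups = entry.GetAttributeValues("memberOf")
    }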
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
We expect panics in the server/datapath to be few and far between,
so the backtraces are all the more valuable now.
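A minimal sketch of a recovery middleware that preserves the backtrace
(names hypothetical):

    func recoverer(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if err := recover(); err != nil {
                    // Panics are rare, so logging the full stack trace
                    // is worth the verbosity.
                    log.Printf("panic: %v\n%s", err, debug.Stack())
                    w.WriteHeader(http.StatusInternalServerError)
                }
            }()

            next.ServeHTTP(w, r)
        })
    }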
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Init the shutdown routine after controller.Init().
Check for nil values before stopping the HTTP server and the task scheduler.
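A minimal sketch of the guard (field names hypothetical): a controller that
fails early in Init() may leave some components nil.

    func (c *Controller) Shutdown() {
        if c.taskScheduler != nil {
            c.taskScheduler.Shutdown()
        }

        if c.Server != nil {
            _ = c.Server.Shutdown(context.Background())
        }
    }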
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
For the CLI, the output is similar to:
CRITICAL 0, HIGH 1, MEDIUM 1, LOW 0, UNKNOWN 0, TOTAL 2

ID             SEVERITY  TITLE
CVE-2023-0464  HIGH      openssl: Denial of service by excessive resou...
CVE-2023-0465  MEDIUM    openssl: Invalid certificate policies in leaf...
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
- add a new field 'IsDeletable' to the GraphQL ImageSummary struct (sketched below)
- apply CORS on the DeleteManifest route
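A minimal sketch of the model change (gqlgen-style; placement and tags
hypothetical):

    // IsDeletable reports whether the caller is allowed to delete the
    // image, so clients can show or hide the delete action.
    type ImageSummary struct {
        // existing fields elided
        IsDeletable *bool `json:"IsDeletable"`
    }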
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as an argument to scheduler.RunScheduler()
- running scheduler.Shutdown()
Because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the worker pool.
Solved by keeping a quit channel and listening on both the quit channel and ctx.Done(),
then closing the worker channel and the scheduler afterwards, as sketched below.
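A minimal sketch of the pattern (type and method names hypothetical): a single
goroutine both watches for shutdown and performs the pushes, so the worker
channel can only be closed after the last push.

    func (s *Scheduler) run(ctx context.Context) {
        // Only this goroutine closes workerChan, and only after one of
        // the shutdown paths has been observed here, so a push can never
        // race with the close.
        defer close(s.workerChan)

        for {
            select {
            case <-ctx.Done(): // external cancellation
                return
            case <-s.quit: // closed by Shutdown()
                return
            case s.workerChan <- s.nextTask():
            }
        }
    }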
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by running Shutdown().
Simplify things by getting rid of the external context in RunScheduler().
Keep an internal context in the scheduler itself and pass it down to all tasks.
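A minimal sketch of the shape after the refactor (names hypothetical):

    // The scheduler owns its context, so Shutdown() is the single stop
    // mechanism and tasks inherit s.ctx.
    type Scheduler struct {
        ctx    context.Context
        cancel context.CancelFunc
    }

    func NewScheduler() *Scheduler {
        ctx, cancel := context.WithCancel(context.Background())

        return &Scheduler{ctx: ctx, cancel: cancel}
    }

    func (s *Scheduler) RunScheduler() {
        go s.run(s.ctx) // tasks receive s.ctx and stop on Shutdown()
    }

    func (s *Scheduler) Shutdown() {
        s.cancel()
    }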
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* feat(sync): local tmp store
Signed-off-by: a <a@tuxpa.in>
* fix(sync): various fixes for s3+remote storage feature
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: a <a@tuxpa.in>
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
Co-authored-by: a <a@tuxpa.in>
Wait for workers to finish before exiting.
Should fix tests reporting that they couldn't remove rootDir because it is still
being written to by tasks.
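A minimal sketch of the wait (names hypothetical):

    func (s *Scheduler) startWorkers() {
        for i := 0; i < s.numWorkers; i++ {
            s.wg.Add(1)

            go func() {
                defer s.wg.Done()

                for task := range s.workerChan {
                    task.DoWork(s.ctx)
                }
            }()
        }
    }

    func (s *Scheduler) Shutdown() {
        close(s.quit)
        // Block until every worker has returned; nothing writes to
        // rootDir after this point.
        s.wg.Wait()
    }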
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- MetaDB stores the time of the last update of a repo
- During startup we check if the layout has been updated after the last recorded change in the DB
- If this is the case, the repo is parsed and updated in the DB; otherwise it is skipped, as sketched below
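A minimal sketch of the startup check (names hypothetical):

    // Parse the repo only when the layout changed after the last update
    // recorded in MetaDB; unchanged repos are skipped.
    func shouldParseRepo(layoutModTime, dbLastUpdated time.Time) bool {
        return layoutModTime.After(dbLastUpdated)
    }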
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
In the case of an already existing meta DB without the pushTimestamp field,
its value would be 0 until the image is updated. Check for zero values and update them
with time.Now() so that the retention logic won't remove them.
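A minimal sketch of the backfill (names hypothetical):

    // Treat a zero push timestamp as "pushed now" so the retention logic
    // does not immediately consider the image expired.
    func fixPushTimestamp(ts time.Time) time.Time {
        if ts.IsZero() {
            return time.Now()
        }

        return ts
    }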
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>