Since the scheduler no longer executes generators in a fixed order, and the scrub logic was refactored,
the scrub tasks may or may not complete in the expected time.
Increase the sleep times used when searching for task results in zot logs.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
This causes the "fair" scheduler to run it too often, to the detriment of other generators.
The intention was to run it every 2 hours, but the measurement unit for 7200 was not specified.
Add more logs, including the generator name, in order to troubleshoot this kind of issue more easily in the future.
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Generators are now ordered by rank in the priority queue.
The rank computation formula is:
- 100/(1+generated_task_count) for high priority tasks
- 10/(1+generated_task_count) for medium priority tasks
- 1/(1+generated_task_count) for low priority tasks
Note the ranks are used when comparing generators both with the same priority and with different priorities.
So now we are:
- giving an opportunity to all generators with the same priority to take turns generating tasks
- giving roughly 1 low priority task and 10 medium priority tasks the opportunity to run for every 100 high priority tasks.
After a generator generates a task, the generators are reordered in the priority queue based on rank.
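A minimal sketch of this rank-and-reorder idea, using Go's container/heap (names such as generatorItem and priorityWeight are illustrative, not the actual zot types):

```go
package main

import (
	"container/heap"
	"fmt"
)

// generatorItem is a hypothetical heap entry; the field names are illustrative.
type generatorItem struct {
	name           string
	priorityWeight float64 // 100 for high, 10 for medium, 1 for low priority
	generatedCount int     // number of tasks generated so far
}

// rank decreases as a generator produces more tasks, letting peers take turns.
func (g *generatorItem) rank() float64 {
	return g.priorityWeight / float64(1+g.generatedCount)
}

// generatorQueue orders generators by descending rank.
type generatorQueue []*generatorItem

func (q generatorQueue) Len() int            { return len(q) }
func (q generatorQueue) Less(i, j int) bool  { return q[i].rank() > q[j].rank() }
func (q generatorQueue) Swap(i, j int)       { q[i], q[j] = q[j], q[i] }
func (q *generatorQueue) Push(x interface{}) { *q = append(*q, x.(*generatorItem)) }
func (q *generatorQueue) Pop() interface{} {
	old := *q
	n := len(old)
	item := old[n-1]
	*q = old[:n-1]

	return item
}

func main() {
	q := &generatorQueue{
		{name: "high", priorityWeight: 100},
		{name: "medium", priorityWeight: 10},
		{name: "low", priorityWeight: 1},
	}
	heap.Init(q)

	// Repeatedly pick the top-ranked generator, "generate" a task, and re-heapify.
	for i := 0; i < 10; i++ {
		top := (*q)[0]
		top.generatedCount++
		heap.Fix(q, 0)
		fmt.Println("ran task from", top.name)
	}
}
```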
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
We use the chartmuseum lib for handling bearer requests, but it does not
fully implement the token spec: it expects the "scope" parameter to be given
on every request, even for the /v2/ route, which does not represent a resource.
Handle this /v2/ route inside our code.
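A hedged sketch of that special-casing as a plain net/http middleware (the validate callback, challenge format, and scope parsing are assumptions, not the actual zot code):

```go
package api

import (
	"fmt"
	"net/http"
	"strings"
)

// checkToken is a placeholder for the actual bearer token validation (in zot
// it is delegated to the chartmuseum auth lib); an empty requiredScope means
// the token only has to be valid, with no particular access scope.
type checkToken func(r *http.Request, requiredScope string) error

// bearerMiddleware is a hypothetical sketch, not the zot implementation.
func bearerMiddleware(validate checkToken, realm, service string) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			requiredScope := ""

			// The base /v2/ endpoint is not a resource, so no scope is required;
			// any other /v2/... route maps to a repository scope (simplified here).
			if r.URL.Path != "/v2/" && r.URL.Path != "/v2" {
				name := strings.TrimPrefix(r.URL.Path, "/v2/")
				requiredScope = fmt.Sprintf("repository:%s:pull", name)
			}

			if err := validate(r, requiredScope); err != nil {
				challenge := fmt.Sprintf("Bearer realm=%q,service=%q", realm, service)
				if requiredScope != "" {
					challenge += fmt.Sprintf(",scope=%q", requiredScope)
				}

				w.Header().Set("WWW-Authenticate", challenge)
				http.Error(w, "unauthorized", http.StatusUnauthorized)

				return
			}

			next.ServeHTTP(w, r)
		})
	}
}
```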
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
- update cve tests
- update scrub tests
- update tests for parsing storage and loading into meta DB
- update controller tests
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Instead of reading entire files before calculating their digests,
stream them by using their Reader method.
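A minimal sketch of the streaming approach using the opencontainers go-digest package (the blobReader interface stands in for whatever storage type exposes the Reader method mentioned above):

```go
package storage

import (
	"io"

	godigest "github.com/opencontainers/go-digest"
)

// blobReader is an illustrative stand-in for a type with a Reader method.
type blobReader interface {
	Reader() (io.ReadCloser, error)
}

// computeDigest streams the blob through the digester instead of reading the
// entire file into memory first.
func computeDigest(blob blobReader) (godigest.Digest, error) {
	reader, err := blob.Reader()
	if err != nil {
		return "", err
	}
	defer reader.Close()

	// FromReader consumes the stream incrementally and returns the digest.
	return godigest.FromReader(reader)
}
```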
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
We cannot assume the LDAP server will have group attributes programmed
every time. So handle it accordingly.
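A minimal defensive sketch using the go-ldap entry API (the memberOf attribute name and the surrounding wiring are assumptions):

```go
package api

import ldap "github.com/go-ldap/ldap/v3"

// getUserGroups returns the user's groups when the LDAP entry has them and an
// empty slice otherwise, so callers never depend on attributes that may be
// missing on the server.
func getUserGroups(entry *ldap.Entry) []string {
	if entry == nil {
		return []string{}
	}

	// Some LDAP servers have no group attributes configured at all;
	// GetAttributeValues simply returns an empty slice in that case.
	groups := entry.GetAttributeValues("memberOf")
	if groups == nil {
		return []string{}
	}

	return groups
}
```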
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
We expect panics in the server/datapath to be few and far between.
So the backtraces are more valuable now.
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Init the shutdown routine after controller.Init().
Check for nil values before stopping the http server and task scheduler.
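A hedged sketch of those nil checks (the Controller fields shown are illustrative, not the actual ones):

```go
package api

import (
	"context"
	"net/http"
	"time"
)

// Controller is a stand-in; the real field names differ.
type Controller struct {
	Server    *http.Server
	Scheduler interface{ Shutdown() }
}

// Shutdown is wired up only after controller.Init() has run, and it tolerates
// a partially initialized controller by checking for nil before stopping anything.
func (c *Controller) Shutdown() {
	if c.Server != nil {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		_ = c.Server.Shutdown(ctx)
	}

	if c.Scheduler != nil {
		c.Scheduler.Shutdown()
	}
}
```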
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
For the CLI the output is similar to:
CRITICAL 0, HIGH 1, MEDIUM 1, LOW 0, UNKNOWN 0, TOTAL 2
ID SEVERITY TITLE
CVE-2023-0464 HIGH openssl: Denial of service by excessive resou...
CVE-2023-0465 MEDIUM openssl: Invalid certificate policies in leaf...
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
- added a new field 'IsDeletable' to the graphql ImageSummary struct
- applied CORS on the DeleteManifest route
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* fix(scheduler): data race when pushing new tasks
The problem here is that the scheduler can be closed in two ways:
- canceling the context given as an argument to scheduler.RunScheduler()
- running scheduler.Shutdown()
Because of this, shutdown can trigger a data race between calling scheduler.inShutdown()
and actually pushing tasks to the pool workers.
Solved that by keeping a quit channel and listening on both the quit channel and ctx.Done(),
then closing the worker chan and the scheduler afterwards.
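A minimal sketch of the quit-channel pattern described above (names and the ticker-driven dispatch loop are illustrative, not the actual zot scheduler):

```go
package scheduler

import (
	"context"
	"sync"
	"time"
)

// Task and the field names below are illustrative.
type Task interface {
	DoWork(ctx context.Context) error
}

type Scheduler struct {
	workerChan chan Task
	quit       chan struct{}
	once       sync.Once
}

func NewScheduler(numWorkers int) *Scheduler {
	return &Scheduler{
		workerChan: make(chan Task, numWorkers),
		quit:       make(chan struct{}),
	}
}

// RunScheduler runs the dispatch loop until either the context is canceled or
// Shutdown() is called; the worker channel is closed only after the loop has
// exited, so pushing tasks and closing the channel can no longer race.
func (s *Scheduler) RunScheduler(ctx context.Context) {
	go func() {
		defer close(s.workerChan)

		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case <-s.quit:
				return
			case <-ticker.C:
				if task := s.nextTask(); task != nil {
					s.workerChan <- task
				}
			}
		}
	}()
}

// Shutdown signals the dispatch loop instead of touching channels it does not own.
func (s *Scheduler) Shutdown() {
	s.once.Do(func() { close(s.quit) })
}

// nextTask would pick the next generated task; omitted in this sketch.
func (s *Scheduler) nextTask() Task { return nil }
```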
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
* refactor(scheduler): refactor into a single shutdown
Before this we could stop the scheduler either by canceling the context
provided to RunScheduler(ctx) or by running Shutdown().
Simplify things by getting rid of the external context in RunScheduler().
Keep an internal context in the scheduler itself and pass it down to all tasks.
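A hedged sketch of the resulting single-shutdown shape, with the context owned internally (names are illustrative):

```go
package scheduler

import (
	"context"
	"sync"
)

// Scheduler owns its own context instead of receiving one in RunScheduler.
type Scheduler struct {
	ctx      context.Context
	cancel   context.CancelFunc
	stopOnce sync.Once
}

func NewScheduler() *Scheduler {
	ctx, cancel := context.WithCancel(context.Background())

	return &Scheduler{ctx: ctx, cancel: cancel}
}

// RunScheduler no longer takes an external context; the internal one is
// passed down to every task, so everything stops through Shutdown() alone.
func (s *Scheduler) RunScheduler() {
	go func() {
		for {
			select {
			case <-s.ctx.Done():
				return
			default:
				s.runNextTask(s.ctx)
			}
		}
	}()
}

// Shutdown is now the single way to stop the scheduler.
func (s *Scheduler) Shutdown() {
	s.stopOnce.Do(s.cancel)
}

// runNextTask would execute the next scheduled task with the given context.
func (s *Scheduler) runNextTask(ctx context.Context) {}
```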
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
---------
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>