// Copyright 2019 Gitea. All rights reserved.
// SPDX-License-Identifier: MIT

package task

import (
	"context"
	"fmt"

	admin_model "code.gitea.io/gitea/models/admin"
	"code.gitea.io/gitea/models/db"
	repo_model "code.gitea.io/gitea/models/repo"
	user_model "code.gitea.io/gitea/models/user"
	"code.gitea.io/gitea/modules/graceful"
	"code.gitea.io/gitea/modules/json"
	"code.gitea.io/gitea/modules/log"
	base "code.gitea.io/gitea/modules/migration"
	"code.gitea.io/gitea/modules/queue"
	"code.gitea.io/gitea/modules/secret"
	"code.gitea.io/gitea/modules/setting"
	"code.gitea.io/gitea/modules/structs"
	"code.gitea.io/gitea/modules/timeutil"
	"code.gitea.io/gitea/modules/util"
	repo_service "code.gitea.io/gitea/services/repository"
)
// taskQueue is a global queue of tasks
var taskQueue *queue.WorkerPoolQueue[*admin_model.Task]
// Run runs a task according to its type.
func Run(ctx context.Context, t *admin_model.Task) error {
	switch t.Type {
	case structs.TaskTypeMigrateRepo:
		return runMigrateTask(ctx, t)
	default:
		return fmt.Errorf("unknown task type: %d", t.Type)
	}
}
// Init starts the task service: it creates the task queue and runs it, so
// queued but unfinished tasks are picked up and executed.
func Init() error {
taskQueue = queue.CreateSimpleQueue(graceful.GetManager().ShutdownContext(), "task", handler)
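	// CreateSimpleQueue returns nil when the queue cannot be created
	// (e.g. bad configuration), so fail early instead of panicking later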
if taskQueue == nil {
return fmt.Errorf("unable to create task queue")
}
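	// start the queue workers; the graceful manager cancels them on shutdown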
go graceful.GetManager().RunWithCancel(taskQueue)
return nil
}
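// handler is the task queue worker function. It runs each queued task and
// returns nil, so failed tasks are only logged and never re-queued.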
func handler(items ...*admin_model.Task) []*admin_model.Task {
for _, task := range items {
if err := Run(db.DefaultContext, task); err != nil {
log.Error("Run task failed: %v", err)
}
}
return nil
}
// MigrateRepository creates a migrate task for the given repository and
// pushes it onto the task queue.
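//
// A minimal usage sketch (caller wiring assumed; doer is the acting user
// and u the owner of the new repository):
//
//	opts := base.MigrateOptions{
//		CloneAddr: "https://example.com/owner/repo.git",
//		RepoName:  "repo",
//	}
//	if err := task.MigrateRepository(ctx, doer, u, opts); err != nil {
//		log.Error("MigrateRepository: %v", err)
//	}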
func MigrateRepository(ctx context.Context, doer, u *user_model.User, opts base.MigrateOptions) error {
task, err := CreateMigrateTask(ctx, doer, u, opts)
if err != nil {
return err
}
return taskQueue.Push(task)
}
// CreateMigrateTask creates a migrate task together with the destination
// repository record that the migration will fill in.
func CreateMigrateTask(ctx context.Context, doer, u *user_model.User, opts base.MigrateOptions) (*admin_model.Task, error) {
// encrypt credentials for persistence
var err error
opts.CloneAddrEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.CloneAddr)
if err != nil {
return nil, err
}
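	// strip any credentials from the plaintext URL before it is persisted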
opts.CloneAddr = util.SanitizeCredentialURLs(opts.CloneAddr)
opts.AuthPasswordEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.AuthPassword)
if err != nil {
return nil, err
}
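	// drop the plaintext secrets: only the encrypted values are stored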
opts.AuthPassword = ""
opts.AuthTokenEncrypted, err = secret.EncryptSecret(setting.SecretKey, opts.AuthToken)
if err != nil {
return nil, err
}
opts.AuthToken = ""
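	// the sanitized, encrypted options become the persisted task payload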
bs, err := json.Marshal(&opts)
if err != nil {
return nil, err
}
task := &admin_model.Task{
DoerID: doer.ID,
OwnerID: u.ID,
Type: structs.TaskTypeMigrateRepo,
Status: structs.TaskStatusQueued,
PayloadContent: string(bs),
}
if err := admin_model.CreateTask(ctx, task); err != nil {
return nil, err
}
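	// create the destination repository up front, marked as being migrated;
	// the migrate task fills it in asynchronously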
repo, err := repo_service.CreateRepositoryDirectly(ctx, doer, u, repo_service.CreateRepoOptions{
Name: opts.RepoName,
Description: opts.Description,
OriginalURL: opts.OriginalURL,
GitServiceType: opts.GitServiceType,
IsPrivate: opts.Private || setting.Repository.ForcePrivate,
IsMirror: opts.Mirror,
Status: repo_model.RepositoryBeingMigrated,
})
if err != nil {
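		// repository creation failed: mark the task itself as failed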
task.EndTime = timeutil.TimeStampNow()
task.Status = structs.TaskStatusFailed
err2 := task.UpdateCols(ctx, "end_time", "status")
if err2 != nil {
log.Error("UpdateCols Failed: %v", err2.Error())
}
return nil, err
}
task.RepoID = repo.ID
if err = task.UpdateCols(ctx, "repo_id"); err != nil {
return nil, err
}
return task, nil
}
// RetryMigrateTask retries a failed migrate task for the given repository.
func RetryMigrateTask(ctx context.Context, repoID int64) error {
migratingTask, err := admin_model.GetMigratingTask(ctx, repoID)
if err != nil {
log.Error("GetMigratingTask: %v", err)
return err
}
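	// a task that is already queued or running needs no retry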
if migratingTask.Status == structs.TaskStatusQueued || migratingTask.Status == structs.TaskStatusRunning {
return nil
}
	// TODO: remove the storage/database garbage left behind by the failed task
	// Reset the task status and message so the task can run again
migratingTask.Status = structs.TaskStatusQueued
migratingTask.Message = ""
if err = migratingTask.UpdateCols(ctx, "status", "message"); err != nil {
log.Error("task.UpdateCols failed: %v", err)
return err
}
return taskQueue.Push(migratingTask)
}
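// SetMigrateTaskMessage updates the message of the migrating task for the
// given repository.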
func SetMigrateTaskMessage(ctx context.Context, repoID int64, message string) error {
migratingTask, err := admin_model.GetMigratingTask(ctx, repoID)
if err != nil {
log.Error("GetMigratingTask: %v", err)
return err
}
migratingTask.Message = message
if err = migratingTask.UpdateCols(ctx, "message"); err != nil {
log.Error("task.UpdateCols failed: %v", err)
return err
}
return nil
}