feat(repodb): Implement RepoDB for image specific information using boltdb/dynamodb (#979)

* feat(repodb): implement a DB for image specific information using boltdb

(cherry picked from commit e3cb60b856)

Some other fixes/improvements on top (Andrei)

Global search: The last updated attribute on repo level is now computed correctly.
Global search: Fix and enhance tests: validate more fields, and fix CVE verification logic
RepoListWithNewestImage: The vendors and platforms at repo level no longer contain duplicate entries
CVE: scan OCIUncompressedLayer instead of skipping them (used in tests)
bug(repodb): do not try to increment download counters for signatures

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

Add filtering to global search API (Laurentiu)

(cherry picked from commit a87976d635ea876fe8ced532e8adb7c3bb24098f)

Original work by Laurentiu Niculae <niculae.laurentiu1@gmail.com>

Fix pagination bug

 - when the limit was bigger than the repo count, the result contained empty entries
 - now correctly returns at most the available number of repo results (see the sketch below)
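
A minimal sketch of the clamping behavior described above, using illustrative names rather than the actual RepoDB pagination types:

package pagination

// RepoMeta is a stand-in for the repo metadata entries returned by RepoDB
// (illustrative only, not the real type).
type RepoMeta struct {
	Name string
}

// paginate returns at most limit repos starting at offset; the window is
// clamped to the available results instead of being padded with empty entries.
func paginate(repos []RepoMeta, offset, limit int) []RepoMeta {
	if offset < 0 || limit < 0 || offset >= len(repos) {
		return []RepoMeta{}
	}

	end := offset + limit
	if limit == 0 || end > len(repos) {
		end = len(repos)
	}

	return repos[offset:end]
}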

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

Add history to the fields returned from RepoDB

Consolidate fields used in packages
- pkg/extensions/search/common/common_test
- pkg/extensions/search/common/common
Refactor duplicate code in GlobalSearch verification
Add vulnerability scan results to image:tag reply

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

Refactor ExpandedRepoInfo to using RepoDB

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit fd7dc85c3a9d028fd8860d3791cad4df769ed005)

Init RepoDB at startup
 - sync with storage
 - ignore images without a tag

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit 359898facd6541b2aa99ee95080f7aabf28c2650)

Update request to get image:tag to use repodb

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

Sync RepoDB logging
 - added logging for errors

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit 2e128f4d01712b34c70b5468285100b0657001bb)

sync-repodb minor error checking fix

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

Improve tests for syncing RepoDB with storage

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit b18408c6d64e01312849fc18b929e3a2a7931e9e)

Update scoring rule for repos
  - now prioritizes matches toward the end of the repo name (see the sketch below)
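
A rough sketch of the scoring idea with an assumed rank function; lower scores rank higher, and the real RepoDB scoring code may differ:

package score

import "strings"

// rankQueryMatch returns a score for how well query matches repoName; lower is
// better, -1 means no match. Matches that end closer to the end of the repo
// path (i.e. the image name itself) get better scores.
func rankQueryMatch(repoName, query string) int {
	index := strings.LastIndex(repoName, query)
	if index == -1 {
		return -1
	}

	// distance between the end of the match and the end of the name;
	// an exact suffix match ("registry/app" vs query "app") scores 0
	return len(repoName) - (index + len(query))
}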

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit 6961346ccf02223132b3b12a2132c80bd1b6b33c)

Upgrade search filters to permit multiple values
  - multiple values for os and arch (see the sketch below)
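
A small sketch of what multi-valued filters amount to, using assumed field names rather than the generated GraphQL input types:

package filter

// Filter mirrors the upgraded search filter shape: several accepted values per
// field instead of a single one (field names are illustrative).
type Filter struct {
	Os   []string
	Arch []string
}

// Matches reports whether an image's platform satisfies the filter; an empty
// value list means "accept everything" for that field.
func (f Filter) Matches(imageOs, imageArch string) bool {
	return matchesAny(f.Os, imageOs) && matchesAny(f.Arch, imageArch)
}

func matchesAny(accepted []string, value string) bool {
	if len(accepted) == 0 {
		return true
	}

	for _, candidate := range accepted {
		if candidate == value {
			return true
		}
	}

	return false
}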

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit 3ffb72c6fc0587ff827a03fe4f76a13b27b876a0)

feature(repodb): add pagination for RepoListWithNewestImage

Signed-off-by: Alex Stan <alexandrustan96@yahoo.ro>
(cherry picked from commit 32c917f2dc65363b0856345289353559a8027aee)

test(fix): fix tests failing since repodb is used for listing all repos

1. One of the tests was verifying disk/OCI-related errors and is not applicable
2. Another test was actually broken in an older PR: the default store and
the substore were using the same repo names (the substore ones were unprefixed),
which should not be the case. This caused a single entry to show up
in the RepoDB instead of two separate entries, one for each test image
Root cause in: b61aff62cd (diff-b86e11fa5a3102b336caebec3b30a9d35e26af554dd8658f124dba2404b7d24aR88)

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

chore: move code responsible for transforming objects to gql_generated types to a separate package

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

Process input for global search
  - Clean input: query and filter strings
  - Add validation for global search input (see the sketch below)
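
A hedged sketch of the kind of cleaning and validation meant here; the helper names and the length cap are assumptions, not the actual resolver code:

package searchinput

import (
	"errors"
	"strings"
)

// errInvalidQuery and the 256-character cap are illustrative, not the real
// limits used by the resolver.
var errInvalidQuery = errors.New("global search: invalid query")

// CleanQuery normalizes the raw query string before it reaches the database.
func CleanQuery(query string) string {
	return strings.ToLower(strings.TrimSpace(query))
}

// ValidateQuery rejects obviously malformed input early.
func ValidateQuery(query string) error {
	if strings.ContainsAny(query, "\n\r") || len(query) > 256 {
		return errInvalidQuery
	}

	return nil
}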

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit f1ca8670fbe4a4a327ea25cf459237dbf23bb78a)

fix: only call cve scanning for data shown to the user

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

GQL omit scanning for CVE if field is not required

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit 5479ce45d6cb2abcf5fbccadeaf6f3393c3f6bf1)

Fix filtering logic in RepoDB
  - the filter parameter was set to false instead of being calculated from the latest image

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit a82d2327e34e5da617af0b7ca78a2dba90999f0a)

bug(repodb): Checking signature returns error if signed image is not found
  - we consider a signature image an orphan when the image it signs is not found
  - we need this to ignore such signatures in certain cases

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
(cherry picked from commit d0418505f76467accd8e1ee34fcc2b2a165efae5)

feat(repodb): CVE logic to use repoDB

Also update some method signatures to remove usage of:
github.com/google/go-containerregistry/pkg/v1

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

* feat(repodb): refactor repodb update logic

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* fix(repodb): minor fixes

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): move repodb logic inside meta directory under pkg

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): replace the factory class for repodb initialization with a factory method

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): simplify repodb configuration
  - repodb now shares config parameters with the cache
  - config is taken directly from the storage config (see the sketch below)
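
The selection logic that CreateRepoDBDriver (later in this diff) implements boils down to the following shape; the types here are simplified stand-ins, not the real config structs:

package repodbconfig

// StorageConfig is a trimmed-down stand-in for the zot storage configuration;
// only the fields relevant to the shared cache/RepoDB settings are shown.
type StorageConfig struct {
	RootDirectory string
	RemoteCache   bool
	CacheDriver   map[string]interface{}
}

// DriverChoice describes which RepoDB backend to build and with what parameters.
type DriverChoice struct {
	Kind   string // "boltdb" or "dynamodb"
	Params map[string]interface{}
}

// ChooseRepoDBDriver mirrors the idea from the diff: a remote cache means
// RepoDB also goes to DynamoDB, reusing the same cacheDriver block; otherwise
// a local BoltDB file lives under the storage root directory.
func ChooseRepoDBDriver(cfg StorageConfig) DriverChoice {
	if cfg.RemoteCache {
		return DriverChoice{Kind: "dynamodb", Params: cfg.CacheDriver}
	}

	return DriverChoice{
		Kind:   "boltdb",
		Params: map[string]interface{}{"rootDir": cfg.RootDirectory},
	}
}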

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* fix(authors): fix authors information to work properly with repodb

Ideally this commit would be squashed into the repodb commit,
but as-is it is easier to cherry-pick onto other branches

Signed-off-by: Andrei Aaron <andaaron@cisco.com>

* feat(repodb): dynamodb support for repodb
  - clean-up repodb code + coverage improvements

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(dynamo): tables used by dynamo are created automatically if they don't exist
  - if the table already exists, nothing happens (see the sketch below)
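
A minimal sketch of idempotent table creation with aws-sdk-go-v2 (already a go.mod dependency): treating ResourceInUseException as success gives the "if the table already exists, nothing happens" behavior. The table and key names are illustrative, not necessarily what the dynamodb-wrapper uses:

package dynamosetup

import (
	"context"
	"errors"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// ensureTable creates a table keyed by Digest and ignores the error returned
// when the table already exists, so startup is idempotent.
func ensureTable(ctx context.Context, client *dynamodb.Client, tableName string) error {
	_, err := client.CreateTable(ctx, &dynamodb.CreateTableInput{
		TableName: aws.String(tableName),
		AttributeDefinitions: []types.AttributeDefinition{
			{AttributeName: aws.String("Digest"), AttributeType: types.ScalarAttributeTypeS},
		},
		KeySchema: []types.KeySchemaElement{
			{AttributeName: aws.String("Digest"), KeyType: types.KeyTypeHash},
		},
		BillingMode: types.BillingModePayPerRequest,
	})

	var alreadyExists *types.ResourceInUseException
	if errors.As(err, &alreadyExists) {
		return nil // table exists, nothing to do
	}

	return err
}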

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* test(repodb): coverage tests
  - minor fix for CVEListForImage to make the tests pass

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): add descriptor with media type

  - to represent images and multi-arch images

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): support signatures on repo level

  - added to follow the behavior of signing and signature verification tools
    that work on a manifest level for each repo
  - all images with different tags but the same manifest will be signed at once (see the sketch below)
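
A simplified sketch of what repo-level signature bookkeeping looks like; the types are assumptions, not the actual RepoDB structures:

package signatures

// SignatureInfo is a simplified stand-in for the signature metadata stored in RepoDB.
type SignatureInfo struct {
	Tool   string // e.g. "cosign" or "notation"
	Digest string // digest of the signature manifest
}

// RepoSignatures keys signatures by the digest of the signed manifest, at repo
// level: every tag resolving to that manifest is considered signed at once,
// which matches how cosign/notation attach signatures to manifests.
type RepoSignatures map[string][]SignatureInfo

// IsSigned reports whether any signature exists for the given manifest digest.
func (rs RepoSignatures) IsSigned(manifestDigest string) bool {
	return len(rs[manifestDigest]) > 0
}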

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): old repodb version migration support

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): tests for coverage

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): WIP fixing tests

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* feat(repodb): work on patchRepoDB tests

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* fix(repodb): create dynamo tables only for linux/amd64

Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>

* fix(ci): fix a typo in ci-cd.yml

Signed-off-by: Andrei Aaron <aaaron@luxoft.com>

Signed-off-by: Andrei Aaron <andaaron@cisco.com>
Signed-off-by: Laurentiu Niculae <niculae.laurentiu1@gmail.com>
Signed-off-by: Andrei Aaron <aaaron@luxoft.com>
Co-authored-by: Andrei Aaron <andaaron@cisco.com>
Co-authored-by: Andrei Aaron <aaaron@luxoft.com>
LaurentiuNiculae authored 2023-01-09 22:37:44 +02:00, committed by GitHub
parent f69b104838
commit f408df0dac
61 changed files with 13863 additions and 2488 deletions

@ -81,6 +81,8 @@ jobs:
echo "Startup complete" echo "Startup complete"
aws dynamodb --endpoint-url http://localhost:4566 --region "us-east-2" create-table --table-name BlobTable --attribute-definitions AttributeName=Digest,AttributeType=S --key-schema AttributeName=Digest,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5 aws dynamodb --endpoint-url http://localhost:4566 --region "us-east-2" create-table --table-name BlobTable --attribute-definitions AttributeName=Digest,AttributeType=S --key-schema AttributeName=Digest,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
aws dynamodb --endpoint-url http://localhost:4566 --region "us-east-2" create-table --table-name RepoMetadataTable --attribute-definitions AttributeName=RepoName,AttributeType=S --key-schema AttributeName=RepoName,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
aws dynamodb --endpoint-url http://localhost:4566 --region "us-east-2" create-table --table-name ManifestDataTable --attribute-definitions AttributeName=Digest,AttributeType=S --key-schema AttributeName=Digest,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
env:
AWS_ACCESS_KEY_ID: fake
AWS_SECRET_ACCESS_KEY: fake

@ -3,61 +3,76 @@ package errors
import "errors" import "errors"
var ( var (
ErrBadConfig = errors.New("config: invalid config") ErrBadConfig = errors.New("config: invalid config")
ErrCliBadConfig = errors.New("cli: bad config") ErrCliBadConfig = errors.New("cli: bad config")
ErrRepoNotFound = errors.New("repository: not found") ErrRepoNotFound = errors.New("repository: not found")
ErrRepoIsNotDir = errors.New("repository: not a directory") ErrRepoIsNotDir = errors.New("repository: not a directory")
ErrRepoBadVersion = errors.New("repository: unsupported layout version") ErrRepoBadVersion = errors.New("repository: unsupported layout version")
ErrManifestNotFound = errors.New("manifest: not found") ErrManifestNotFound = errors.New("manifest: not found")
ErrBadManifest = errors.New("manifest: invalid contents") ErrBadManifest = errors.New("manifest: invalid contents")
ErrBadIndex = errors.New("index: invalid contents") ErrBadIndex = errors.New("index: invalid contents")
ErrUploadNotFound = errors.New("uploads: not found") ErrUploadNotFound = errors.New("uploads: not found")
ErrBadUploadRange = errors.New("uploads: bad range") ErrBadUploadRange = errors.New("uploads: bad range")
ErrBlobNotFound = errors.New("blob: not found") ErrBlobNotFound = errors.New("blob: not found")
ErrBadBlob = errors.New("blob: bad blob") ErrBadBlob = errors.New("blob: bad blob")
ErrBadBlobDigest = errors.New("blob: bad blob digest") ErrBadBlobDigest = errors.New("blob: bad blob digest")
ErrUnknownCode = errors.New("error: unknown error code") ErrUnknownCode = errors.New("error: unknown error code")
ErrBadCACert = errors.New("tls: invalid ca cert") ErrBadCACert = errors.New("tls: invalid ca cert")
ErrBadUser = errors.New("auth: non-existent user") ErrBadUser = errors.New("auth: non-existent user")
ErrEntriesExceeded = errors.New("ldap: too many entries returned") ErrEntriesExceeded = errors.New("ldap: too many entries returned")
ErrLDAPEmptyPassphrase = errors.New("ldap: empty passphrase") ErrLDAPEmptyPassphrase = errors.New("ldap: empty passphrase")
ErrLDAPBadConn = errors.New("ldap: bad connection") ErrLDAPBadConn = errors.New("ldap: bad connection")
ErrLDAPConfig = errors.New("config: invalid LDAP configuration") ErrLDAPConfig = errors.New("config: invalid LDAP configuration")
ErrCacheRootBucket = errors.New("cache: unable to create/update root bucket") ErrCacheRootBucket = errors.New("cache: unable to create/update root bucket")
ErrCacheNoBucket = errors.New("cache: unable to find bucket") ErrCacheNoBucket = errors.New("cache: unable to find bucket")
ErrCacheMiss = errors.New("cache: miss") ErrCacheMiss = errors.New("cache: miss")
ErrRequireCred = errors.New("ldap: bind credentials required") ErrRequireCred = errors.New("ldap: bind credentials required")
ErrInvalidCred = errors.New("ldap: invalid credentials") ErrInvalidCred = errors.New("ldap: invalid credentials")
ErrEmptyJSON = errors.New("cli: config json is empty") ErrEmptyJSON = errors.New("cli: config json is empty")
ErrInvalidArgs = errors.New("cli: Invalid Arguments") ErrInvalidArgs = errors.New("cli: Invalid Arguments")
ErrInvalidFlagsCombination = errors.New("cli: Invalid combination of flags") ErrInvalidFlagsCombination = errors.New("cli: Invalid combination of flags")
ErrInvalidURL = errors.New("cli: invalid URL format") ErrInvalidURL = errors.New("cli: invalid URL format")
ErrUnauthorizedAccess = errors.New("auth: unauthorized access. check credentials") ErrUnauthorizedAccess = errors.New("auth: unauthorized access. check credentials")
ErrCannotResetConfigKey = errors.New("cli: cannot reset given config key") ErrCannotResetConfigKey = errors.New("cli: cannot reset given config key")
ErrConfigNotFound = errors.New("cli: config with the given name does not exist") ErrConfigNotFound = errors.New("cli: config with the given name does not exist")
ErrNoURLProvided = errors.New("cli: no URL provided in argument or via config") ErrNoURLProvided = errors.New("cli: no URL provided in argument or via config")
ErrIllegalConfigKey = errors.New("cli: given config key is not allowed") ErrIllegalConfigKey = errors.New("cli: given config key is not allowed")
ErrScanNotSupported = errors.New("search: scanning of image media type not supported") ErrScanNotSupported = errors.New("search: scanning of image media type not supported")
ErrCLITimeout = errors.New("cli: Query timed out while waiting for results") ErrCLITimeout = errors.New("cli: Query timed out while waiting for results")
ErrDuplicateConfigName = errors.New("cli: cli config name already added") ErrDuplicateConfigName = errors.New("cli: cli config name already added")
ErrInvalidRoute = errors.New("routes: invalid route prefix") ErrInvalidRoute = errors.New("routes: invalid route prefix")
ErrImgStoreNotFound = errors.New("routes: image store not found corresponding to given route") ErrImgStoreNotFound = errors.New("routes: image store not found corresponding to given route")
ErrEmptyValue = errors.New("cache: empty value") ErrEmptyValue = errors.New("cache: empty value")
ErrEmptyRepoList = errors.New("search: no repository found") ErrEmptyRepoList = errors.New("search: no repository found")
ErrInvalidRepositoryName = errors.New("routes: not a repository name") ErrInvalidRepositoryName = errors.New("routes: not a repository name")
ErrSyncMissingCatalog = errors.New("sync: couldn't fetch upstream registry's catalog") ErrSyncMissingCatalog = errors.New("sync: couldn't fetch upstream registry's catalog")
ErrMethodNotSupported = errors.New("storage: method not supported") ErrMethodNotSupported = errors.New("storage: method not supported")
ErrInvalidMetric = errors.New("metrics: invalid metric func") ErrInvalidMetric = errors.New("metrics: invalid metric func")
ErrInjected = errors.New("test: injected failure") ErrInjected = errors.New("test: injected failure")
ErrSyncInvalidUpstreamURL = errors.New("sync: upstream url not found in sync config") ErrSyncInvalidUpstreamURL = errors.New("sync: upstream url not found in sync config")
ErrRegistryNoContent = errors.New("sync: could not find a Content that matches localRepo") ErrRegistryNoContent = errors.New("sync: could not find a Content that matches localRepo")
ErrSyncReferrerNotFound = errors.New("sync: couldn't find upstream referrer") ErrSyncReferrerNotFound = errors.New("sync: couldn't find upstream referrer")
ErrSyncReferrer = errors.New("sync: failed to get upstream referrer") ErrSyncReferrer = errors.New("sync: failed to get upstream referrer")
ErrImageLintAnnotations = errors.New("routes: lint checks failed") ErrImageLintAnnotations = errors.New("routes: lint checks failed")
ErrParsingAuthHeader = errors.New("auth: failed parsing authorization header") ErrParsingAuthHeader = errors.New("auth: failed parsing authorization header")
ErrBadType = errors.New("core: invalid type") ErrBadType = errors.New("core: invalid type")
ErrParsingHTTPHeader = errors.New("routes: invalid HTTP header") ErrParsingHTTPHeader = errors.New("routes: invalid HTTP header")
ErrBadRange = errors.New("storage: bad range") ErrBadRange = errors.New("storage: bad range")
ErrBadLayerCount = errors.New("manifest: layers count doesn't correspond to config history") ErrBadLayerCount = errors.New("manifest: layers count doesn't correspond to config history")
ErrManifestConflict = errors.New("manifest: multiple manifests found") ErrManifestConflict = errors.New("manifest: multiple manifests found")
ErrManifestMetaNotFound = errors.New("repodb: image metadata not found for given manifest digest")
ErrManifestDataNotFound = errors.New("repodb: image data not found for given manifest digest")
ErrRepoMetaNotFound = errors.New("repodb: repo metadata not found for given repo name")
ErrTagMetaNotFound = errors.New("repodb: tag metadata not found for given repo and tag names")
ErrTypeAssertionFailed = errors.New("storage: failed DatabaseDriver type assertion")
ErrInvalidRequestParams = errors.New("resolver: parameter sent has invalid value")
ErrOrphanSignature = errors.New("repodb: signature detected but signed image doesn't exit")
ErrBadCtxFormat = errors.New("type assertion failed")
ErrEmptyRepoName = errors.New("repodb: repo name can't be empty string")
ErrEmptyTag = errors.New("repodb: tag can't be empty string")
ErrEmptyDigest = errors.New("repodb: digest can't be empty string")
ErrInvalidRepoTagFormat = errors.New("invalid format for tag search, not following repo:tag")
ErrLimitIsNegative = errors.New("pageturner: limit has negative value")
ErrOffsetIsNegative = errors.New("pageturner: offset has negative value")
ErrSortCriteriaNotSupported = errors.New("pageturner: the sort criteria is not supported")
)

@ -16,7 +16,10 @@
"name": "dynamodb", "name": "dynamodb",
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "BlobTable" "cacheTablename": "ZotBlobTable",
"repoMetaTablename": "ZotRepoMetadataTable",
"manifestDataTablename": "ZotManifestDataTable",
"versionTablename": "ZotVersion"
} }
}, },
"http": { "http": {

@ -17,7 +17,10 @@
"name": "dynamodb", "name": "dynamodb",
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "BlobTable" "cacheTablename": "ZotBlobTable",
"repoMetaTablename": "ZotRepoMetadataTable",
"manifestDataTablename": "ZotManifestDataTable",
"versionTablename": "ZotVersion"
} }
}, },
"http": { "http": {

@ -15,7 +15,7 @@
"name": "dynamodb", "name": "dynamodb",
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "MainTable" "cacheTablename": "MainTable"
}, },
"subPaths": { "subPaths": {
"/a": { "/a": {
@ -59,7 +59,7 @@
"name": "dynamodb", "name": "dynamodb",
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "cTable" "cacheTablename": "cTable"
} }
} }
} }

go.mod

@ -54,6 +54,7 @@ require (
github.com/aquasecurity/trivy v0.0.0-00010101000000-000000000000
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.17.9
github.com/containers/image/v5 v5.23.0
github.com/gobwas/glob v0.2.3
github.com/notaryproject/notation-go v0.12.0-beta.1
github.com/opencontainers/distribution-spec/specs-go v0.0.0-20220620172159-4ab4752c3b86
github.com/sigstore/cosign v1.13.1
@ -207,7 +208,6 @@ require (
github.com/go-playground/validator/v10 v10.11.0 // indirect
github.com/go-redis/redis/v8 v8.11.5 // indirect
github.com/go-restruct/restruct v0.0.0-20191227155143-5734170a48a1 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt v3.2.2+incompatible // indirect
github.com/golang-jwt/jwt/v4 v4.4.2 // indirect

@ -24,6 +24,10 @@ import (
ext "zotregistry.io/zot/pkg/extensions" ext "zotregistry.io/zot/pkg/extensions"
"zotregistry.io/zot/pkg/extensions/monitoring" "zotregistry.io/zot/pkg/extensions/monitoring"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
"zotregistry.io/zot/pkg/meta/repodb/repodbfactory"
"zotregistry.io/zot/pkg/scheduler" "zotregistry.io/zot/pkg/scheduler"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
"zotregistry.io/zot/pkg/storage/cache" "zotregistry.io/zot/pkg/storage/cache"
@ -40,6 +44,7 @@ const (
type Controller struct { type Controller struct {
Config *config.Config Config *config.Config
Router *mux.Router Router *mux.Router
RepoDB repodb.RepoDB
StoreController storage.StoreController StoreController storage.StoreController
Log log.Logger Log log.Logger
Audit *log.Logger Audit *log.Logger
@ -162,6 +167,12 @@ func (c *Controller) Run(reloadCtx context.Context) error {
return err return err
} }
if err := c.InitRepoDB(reloadCtx); err != nil {
return err
}
c.StartBackgroundTasks(reloadCtx)
monitoring.SetServerInfo(c.Metrics, c.Config.Commit, c.Config.BinaryType, c.Config.GoVersion, monitoring.SetServerInfo(c.Metrics, c.Config.Commit, c.Config.BinaryType, c.Config.GoVersion,
c.Config.DistSpecVersion) c.Config.DistSpecVersion)
@ -248,7 +259,7 @@ func (c *Controller) Run(reloadCtx context.Context) error {
return server.Serve(listener) return server.Serve(listener)
} }
func (c *Controller) InitImageStore(reloadCtx context.Context) error { func (c *Controller) InitImageStore(ctx context.Context) error {
c.StoreController = storage.StoreController{} c.StoreController = storage.StoreController{}
linter := ext.GetLinter(c.Config, c.Log) linter := ext.GetLinter(c.Config, c.Log)
@ -327,8 +338,6 @@ func (c *Controller) InitImageStore(reloadCtx context.Context) error {
} }
} }
c.StartBackgroundTasks(reloadCtx)
return nil return nil
} }
@ -464,7 +473,7 @@ func CreateCacheDatabaseDriver(storageConfig config.StorageConfig, log log.Logge
dynamoParams := cache.DynamoDBDriverParameters{} dynamoParams := cache.DynamoDBDriverParameters{}
dynamoParams.Endpoint, _ = storageConfig.CacheDriver["endpoint"].(string) dynamoParams.Endpoint, _ = storageConfig.CacheDriver["endpoint"].(string)
dynamoParams.Region, _ = storageConfig.CacheDriver["region"].(string) dynamoParams.Region, _ = storageConfig.CacheDriver["region"].(string)
dynamoParams.TableName, _ = storageConfig.CacheDriver["tablename"].(string) dynamoParams.TableName, _ = storageConfig.CacheDriver["cachetablename"].(string)
driver, _ := storage.Create("dynamodb", dynamoParams, log) driver, _ := storage.Create("dynamodb", dynamoParams, log)
@ -477,6 +486,99 @@ func CreateCacheDatabaseDriver(storageConfig config.StorageConfig, log log.Logge
return nil return nil
} }
func (c *Controller) InitRepoDB(reloadCtx context.Context) error {
if c.Config.Extensions != nil && c.Config.Extensions.Search != nil && *c.Config.Extensions.Search.Enable {
driver, err := CreateRepoDBDriver(c.Config.Storage.StorageConfig, c.Log) //nolint:contextcheck
if err != nil {
return err
}
err = driver.PatchDB()
if err != nil {
return err
}
err = repodb.SyncRepoDB(driver, c.StoreController, c.Log)
if err != nil {
return err
}
c.RepoDB = driver
}
return nil
}
func CreateRepoDBDriver(storageConfig config.StorageConfig, log log.Logger) (repodb.RepoDB, error) {
if storageConfig.RemoteCache {
dynamoParams := getDynamoParams(storageConfig.CacheDriver, log)
return repodbfactory.Create("dynamodb", dynamoParams) //nolint:contextcheck
}
params := bolt.DBParameters{}
params.RootDir = storageConfig.RootDirectory
return repodbfactory.Create("boltdb", params) //nolint:contextcheck
}
func getDynamoParams(cacheDriverConfig map[string]interface{}, log log.Logger) dynamoParams.DBDriverParameters {
allParametersOk := true
endpoint, ok := toStringIfOk(cacheDriverConfig, "endpoint", log)
allParametersOk = allParametersOk && ok
region, ok := toStringIfOk(cacheDriverConfig, "region", log)
allParametersOk = allParametersOk && ok
repoMetaTablename, ok := toStringIfOk(cacheDriverConfig, "repometatablename", log)
allParametersOk = allParametersOk && ok
manifestDataTablename, ok := toStringIfOk(cacheDriverConfig, "manifestdatatablename", log)
allParametersOk = allParametersOk && ok
versionTablename, ok := toStringIfOk(cacheDriverConfig, "versiontablename", log)
allParametersOk = allParametersOk && ok
if !allParametersOk {
panic("dynamo parameters are not specified correctly, can't proceede")
}
return dynamoParams.DBDriverParameters{
Endpoint: endpoint,
Region: region,
RepoMetaTablename: repoMetaTablename,
ManifestDataTablename: manifestDataTablename,
VersionTablename: versionTablename,
}
}
func toStringIfOk(cacheDriverConfig map[string]interface{}, param string, log log.Logger) (string, bool) {
val, ok := cacheDriverConfig[param]
if !ok {
log.Error().Msgf("parsing CacheDriver config failed, field '%s' is not present", param)
return "", false
}
str, ok := val.(string)
if !ok {
log.Error().Msgf("parsing CacheDriver config failed, parameter '%s' isn't a string", param)
return "", false
}
if str == "" {
log.Error().Msgf("parsing CacheDriver config failed, field '%s' is is empty", param)
return "", false
}
return str, ok
}
func (c *Controller) LoadNewConfig(reloadCtx context.Context, config *config.Config) { func (c *Controller) LoadNewConfig(reloadCtx context.Context, config *config.Config) {
// reload access control config // reload access control config
c.Config.AccessControl = config.AccessControl c.Config.AccessControl = config.AccessControl
@ -514,7 +616,7 @@ func (c *Controller) StartBackgroundTasks(reloadCtx context.Context) {
// Enable extensions if extension config is provided for DefaultStore // Enable extensions if extension config is provided for DefaultStore
if c.Config != nil && c.Config.Extensions != nil { if c.Config != nil && c.Config.Extensions != nil {
ext.EnableMetricsExtension(c.Config, c.Log, c.Config.Storage.RootDirectory) ext.EnableMetricsExtension(c.Config, c.Log, c.Config.Storage.RootDirectory)
ext.EnableSearchExtension(c.Config, c.Log, c.StoreController) ext.EnableSearchExtension(c.Config, c.StoreController, c.RepoDB, c.Log)
} }
if c.Config.Storage.SubPaths != nil { if c.Config.Storage.SubPaths != nil {

@ -162,10 +162,13 @@ func TestCreateCacheDatabaseDriver(t *testing.T) {
} }
conf.Storage.CacheDriver = map[string]interface{}{ conf.Storage.CacheDriver = map[string]interface{}{
"name": "dynamodb", "name": "dynamodb",
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "BlobTable", "cacheTablename": "BlobTable",
"repoMetaTablename": "RepoMetadataTable",
"manifestDataTablename": "ManifestDataTable",
"versionTablename": "Version",
} }
driver := api.CreateCacheDatabaseDriver(conf.Storage.StorageConfig, log) driver := api.CreateCacheDatabaseDriver(conf.Storage.StorageConfig, log)
@ -174,19 +177,25 @@ func TestCreateCacheDatabaseDriver(t *testing.T) {
// negative test cases // negative test cases
conf.Storage.CacheDriver = map[string]interface{}{ conf.Storage.CacheDriver = map[string]interface{}{
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "BlobTable", "cacheTablename": "BlobTable",
"repoMetaTablename": "RepoMetadataTable",
"manifestDataTablename": "ManifestDataTable",
"versionTablename": "Version",
} }
driver = api.CreateCacheDatabaseDriver(conf.Storage.StorageConfig, log) driver = api.CreateCacheDatabaseDriver(conf.Storage.StorageConfig, log)
So(driver, ShouldBeNil) So(driver, ShouldBeNil)
conf.Storage.CacheDriver = map[string]interface{}{ conf.Storage.CacheDriver = map[string]interface{}{
"name": "dummy", "name": "dummy",
"endpoint": "http://localhost:4566", "endpoint": "http://localhost:4566",
"region": "us-east-2", "region": "us-east-2",
"tableName": "BlobTable", "cacheTablename": "BlobTable",
"repoMetaTablename": "RepoMetadataTable",
"manifestDataTablename": "ManifestDataTable",
"versionTablename": "Version",
} }
driver = api.CreateCacheDatabaseDriver(conf.Storage.StorageConfig, log) driver = api.CreateCacheDatabaseDriver(conf.Storage.StorageConfig, log)
@ -194,6 +203,50 @@ func TestCreateCacheDatabaseDriver(t *testing.T) {
}) })
} }
func TestCreateRepoDBDriver(t *testing.T) {
Convey("Test CreateCacheDatabaseDriver dynamo", t, func() {
log := log.NewLogger("debug", "")
dir := t.TempDir()
conf := config.New()
conf.Storage.RootDirectory = dir
conf.Storage.Dedupe = true
conf.Storage.RemoteCache = true
conf.Storage.StorageDriver = map[string]interface{}{
"name": "s3",
"rootdirectory": "/zot",
"region": "us-east-2",
"bucket": "zot-storage",
"secure": true,
"skipverify": false,
}
conf.Storage.CacheDriver = map[string]interface{}{
"name": "dummy",
"endpoint": "http://localhost:4566",
"region": "us-east-2",
"cachetablename": "BlobTable",
"repometatablename": "RepoMetadataTable",
"manifestdatatablename": "ManifestDataTable",
}
testFunc := func() { _, _ = api.CreateRepoDBDriver(conf.Storage.StorageConfig, log) }
So(testFunc, ShouldPanic)
conf.Storage.CacheDriver = map[string]interface{}{
"name": "dummy",
"endpoint": "http://localhost:4566",
"region": "us-east-2",
"cachetablename": "",
"repometatablename": "RepoMetadataTable",
"manifestdatatablename": "ManifestDataTable",
"versiontablename": 1,
}
testFunc = func() { _, _ = api.CreateRepoDBDriver(conf.Storage.StorageConfig, log) }
So(testFunc, ShouldPanic)
})
}
func TestRunAlreadyRunningServer(t *testing.T) { func TestRunAlreadyRunningServer(t *testing.T) {
Convey("Run server on unavailable port", t, func() { Convey("Run server on unavailable port", t, func() {
port := test.GetFreePort() port := test.GetFreePort()
@ -6454,6 +6507,7 @@ func TestSearchRoutes(t *testing.T) {
repoName := "testrepo" repoName := "testrepo"
inaccessibleRepo := "inaccessible" inaccessibleRepo := "inaccessible"
cfg, layers, manifest, err := test.GetImageComponents(10000) cfg, layers, manifest, err := test.GetImageComponents(10000)
So(err, ShouldBeNil) So(err, ShouldBeNil)
@ -6515,7 +6569,7 @@ func TestSearchRoutes(t *testing.T) {
Policies: []config.Policy{ Policies: []config.Policy{
{ {
Users: []string{user1}, Users: []string{user1},
Actions: []string{"read"}, Actions: []string{"read", "create"},
}, },
}, },
DefaultPolicy: []string{}, DefaultPolicy: []string{},
@ -6523,8 +6577,8 @@ func TestSearchRoutes(t *testing.T) {
inaccessibleRepo: config.PolicyGroup{ inaccessibleRepo: config.PolicyGroup{
Policies: []config.Policy{ Policies: []config.Policy{
{ {
Users: []string{}, Users: []string{user1},
Actions: []string{}, Actions: []string{"create"},
}, },
}, },
DefaultPolicy: []string{}, DefaultPolicy: []string{},
@ -6542,9 +6596,38 @@ func TestSearchRoutes(t *testing.T) {
cm.StartAndWait(port) cm.StartAndWait(port)
defer cm.StopServer() defer cm.StopServer()
cfg, layers, manifest, err := test.GetImageComponents(10000)
So(err, ShouldBeNil)
err = test.UploadImageWithBasicAuth(
test.Image{
Config: cfg,
Layers: layers,
Manifest: manifest,
Tag: "latest",
}, baseURL, repoName,
user1, password1)
So(err, ShouldBeNil)
// data for the inaccessible repo
cfg, layers, manifest, err = test.GetImageComponents(10000)
So(err, ShouldBeNil)
err = test.UploadImageWithBasicAuth(
test.Image{
Config: cfg,
Layers: layers,
Manifest: manifest,
Tag: "latest",
}, baseURL, inaccessibleRepo,
user1, password1)
So(err, ShouldBeNil)
query := ` query := `
{ {
GlobalSearch(query:""){ GlobalSearch(query:"testrepo"){
Repos { Repos {
Name Name
Score Score
@ -6569,24 +6652,41 @@ func TestSearchRoutes(t *testing.T) {
So(resp, ShouldNotBeNil) So(resp, ShouldNotBeNil)
So(resp.StatusCode(), ShouldEqual, http.StatusUnauthorized) So(resp.StatusCode(), ShouldEqual, http.StatusUnauthorized)
// credentials for user unauthorized to access repo conf.AccessControl = &config.AccessControlConfig{
user2 := "notWorking" Repositories: config.Repositories{
password2 := "notWorking" repoName: config.PolicyGroup{
testString2 := getCredString(user2, password2) Policies: []config.Policy{
htpasswdPath2 := test.MakeHtpasswdFileFromString(testString2) {
defer os.Remove(htpasswdPath2) Users: []string{user1},
Actions: []string{},
ctlr.Config.HTTP.Auth = &config.AuthConfig{ },
HTPasswd: config.AuthHTPasswd{ },
Path: htpasswdPath2, DefaultPolicy: []string{},
},
inaccessibleRepo: config.PolicyGroup{
Policies: []config.Policy{
{
Users: []string{},
Actions: []string{},
},
},
DefaultPolicy: []string{},
},
},
AdminPolicy: config.Policy{
Users: []string{},
Actions: []string{},
}, },
} }
// authenticated, but no access to resource // authenticated, but no access to resource
resp, err = resty.R().SetBasicAuth(user2, password2).Get(baseURL + constants.FullSearchPrefix + resp, err = resty.R().SetBasicAuth(user1, password1).Get(baseURL + constants.FullSearchPrefix +
"?query=" + url.QueryEscape(query)) "?query=" + url.QueryEscape(query))
So(err, ShouldBeNil) So(err, ShouldBeNil)
So(resp, ShouldNotBeNil) So(resp, ShouldNotBeNil)
So(resp.StatusCode(), ShouldEqual, http.StatusUnauthorized) So(resp.StatusCode(), ShouldEqual, http.StatusOK)
So(string(resp.Body()), ShouldNotContainSubstring, repoName)
So(string(resp.Body()), ShouldNotContainSubstring, inaccessibleRepo)
}) })
}) })
} }

@ -35,6 +35,7 @@ import (
ext "zotregistry.io/zot/pkg/extensions" ext "zotregistry.io/zot/pkg/extensions"
"zotregistry.io/zot/pkg/extensions/sync" "zotregistry.io/zot/pkg/extensions/sync"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
repoDBUpdate "zotregistry.io/zot/pkg/meta/repodb/update"
localCtx "zotregistry.io/zot/pkg/requestcontext" localCtx "zotregistry.io/zot/pkg/requestcontext"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
"zotregistry.io/zot/pkg/test" //nolint:goimports "zotregistry.io/zot/pkg/test" //nolint:goimports
@ -124,7 +125,7 @@ func (rh *RouteHandler) SetupRoutes() {
} else { } else {
// extended build // extended build
ext.SetupMetricsRoutes(rh.c.Config, rh.c.Router, rh.c.StoreController, rh.c.Log) ext.SetupMetricsRoutes(rh.c.Config, rh.c.Router, rh.c.StoreController, rh.c.Log)
ext.SetupSearchRoutes(rh.c.Config, rh.c.Router, rh.c.StoreController, rh.c.Log) ext.SetupSearchRoutes(rh.c.Config, rh.c.Router, rh.c.StoreController, rh.c.RepoDB, rh.c.Log)
gqlPlayground.SetupGQLPlaygroundRoutes(rh.c.Config, rh.c.Router, rh.c.StoreController, rh.c.Log) gqlPlayground.SetupGQLPlaygroundRoutes(rh.c.Config, rh.c.Router, rh.c.StoreController, rh.c.Log)
} }
} }
@ -401,6 +402,18 @@ func (rh *RouteHandler) GetManifest(response http.ResponseWriter, request *http.
return return
} }
if rh.c.RepoDB != nil {
err := repoDBUpdate.OnGetManifest(name, reference, digest, content, rh.c.StoreController, rh.c.RepoDB, rh.c.Log)
if errors.Is(err, zerr.ErrOrphanSignature) {
rh.c.Log.Error().Err(err).Msgf("image is an orphan signature")
} else if err != nil {
response.WriteHeader(http.StatusInternalServerError)
return
}
}
response.Header().Set(constants.DistContentDigestKey, digest.String()) response.Header().Set(constants.DistContentDigestKey, digest.String())
response.Header().Set("Content-Length", fmt.Sprintf("%d", len(content))) response.Header().Set("Content-Length", fmt.Sprintf("%d", len(content)))
response.Header().Set("Content-Type", mediaType) response.Header().Set("Content-Type", mediaType)
@ -601,6 +614,18 @@ func (rh *RouteHandler) UpdateManifest(response http.ResponseWriter, request *ht
return return
} }
if rh.c.RepoDB != nil {
err := repoDBUpdate.OnUpdateManifest(name, reference, mediaType, digest, body, rh.c.StoreController, rh.c.RepoDB,
rh.c.Log)
if errors.Is(err, zerr.ErrOrphanSignature) {
rh.c.Log.Error().Err(err).Msgf("pushed image is an orphan signature")
} else if err != nil {
response.WriteHeader(http.StatusInternalServerError)
return
}
}
response.Header().Set("Location", fmt.Sprintf("/v2/%s/manifests/%s", name, digest)) response.Header().Set("Location", fmt.Sprintf("/v2/%s/manifests/%s", name, digest))
response.Header().Set(constants.DistContentDigestKey, digest.String()) response.Header().Set(constants.DistContentDigestKey, digest.String())
response.WriteHeader(http.StatusCreated) response.WriteHeader(http.StatusCreated)
@ -647,6 +672,25 @@ func (rh *RouteHandler) DeleteManifest(response http.ResponseWriter, request *ht
detectCollision = acCtx.CanDetectManifestCollision(name) detectCollision = acCtx.CanDetectManifestCollision(name)
} }
manifestBlob, manifestDigest, mediaType, err := imgStore.GetImageManifest(name, reference)
if err != nil {
if errors.Is(err, zerr.ErrRepoNotFound) { //nolint:gocritic // errorslint conflicts with gocritic:IfElseChain
WriteJSON(response, http.StatusBadRequest,
NewErrorList(NewError(NAME_UNKNOWN, map[string]string{"name": name})))
} else if errors.Is(err, zerr.ErrManifestNotFound) {
WriteJSON(response, http.StatusNotFound,
NewErrorList(NewError(MANIFEST_UNKNOWN, map[string]string{"reference": reference})))
} else if errors.Is(err, zerr.ErrBadManifest) {
WriteJSON(response, http.StatusBadRequest,
NewErrorList(NewError(UNSUPPORTED, map[string]string{"reference": reference})))
} else {
rh.c.Log.Error().Err(err).Msg("unexpected error")
response.WriteHeader(http.StatusInternalServerError)
}
return
}
err = imgStore.DeleteImageManifest(name, reference, detectCollision) err = imgStore.DeleteImageManifest(name, reference, detectCollision)
if err != nil { if err != nil {
if errors.Is(err, zerr.ErrRepoNotFound) { //nolint:gocritic // errorslint conflicts with gocritic:IfElseChain if errors.Is(err, zerr.ErrRepoNotFound) { //nolint:gocritic // errorslint conflicts with gocritic:IfElseChain
@ -669,6 +713,18 @@ func (rh *RouteHandler) DeleteManifest(response http.ResponseWriter, request *ht
return return
} }
if rh.c.RepoDB != nil {
err := repoDBUpdate.OnDeleteManifest(name, reference, mediaType, manifestDigest, manifestBlob,
rh.c.StoreController, rh.c.RepoDB, rh.c.Log)
if errors.Is(err, zerr.ErrOrphanSignature) {
rh.c.Log.Error().Err(err).Msgf("pushed image is an orphan signature")
} else if err != nil {
response.WriteHeader(http.StatusInternalServerError)
return
}
}
response.WriteHeader(http.StatusAccepted) response.WriteHeader(http.StatusAccepted)
} }

@ -8,7 +8,6 @@ import (
"context" "context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io"
"log" "log"
"os" "os"
"os/exec" "os/exec"
@ -31,7 +30,6 @@ import (
zotErrors "zotregistry.io/zot/errors" zotErrors "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/api" "zotregistry.io/zot/pkg/api"
"zotregistry.io/zot/pkg/api/config" "zotregistry.io/zot/pkg/api/config"
"zotregistry.io/zot/pkg/api/constants"
extconf "zotregistry.io/zot/pkg/extensions/config" extconf "zotregistry.io/zot/pkg/extensions/config"
"zotregistry.io/zot/pkg/test" "zotregistry.io/zot/pkg/test"
) )
@ -1438,129 +1436,43 @@ func TestServerResponse(t *testing.T) {
} }
func TestServerResponseGQLWithoutPermissions(t *testing.T) { func TestServerResponseGQLWithoutPermissions(t *testing.T) {
port := test.GetFreePort() Convey("Test accessing a blobs folder without having permissions fails fast", t, func() {
url := test.GetBaseURL(port) port := test.GetFreePort()
conf := config.New() conf := config.New()
conf.HTTP.Port = port conf.HTTP.Port = port
dir := t.TempDir() dir := t.TempDir()
err := test.CopyFiles("../../test/data/zot-test", path.Join(dir, "zot-test")) err := test.CopyFiles("../../test/data/zot-test", path.Join(dir, "zot-test"))
if err != nil {
panic(err)
}
err = os.Chmod(path.Join(dir, "zot-test", "blobs"), 0o000)
if err != nil {
panic(err)
}
conf.Storage.RootDirectory = dir
cveConfig := &extconf.CVEConfig{
UpdateInterval: 2,
}
defaultVal := true
searchConfig := &extconf.SearchConfig{
BaseConfig: extconf.BaseConfig{Enable: &defaultVal},
CVE: cveConfig,
}
conf.Extensions = &extconf.ExtensionConfig{
Search: searchConfig,
}
logFile, err := os.CreateTemp(t.TempDir(), "zot-log*.txt")
if err != nil {
panic(err)
}
logPath := logFile.Name()
defer os.Remove(logPath)
writers := io.MultiWriter(os.Stdout, logFile)
ctlr := api.NewController(conf)
ctlr.Log.Logger = ctlr.Log.Output(writers)
go func(controller *api.Controller) {
// this blocks
if err := controller.Run(context.Background()); err != nil {
return
}
}(ctlr)
// wait till ready
for {
res, err := resty.R().Get(url + constants.FullSearchPrefix)
if err == nil && res.StatusCode() == 422 {
break
}
time.Sleep(100 * time.Millisecond)
}
_, err = test.ReadLogFileAndSearchString(logPath, "DB update completed, next update scheduled", 90*time.Second)
if err != nil {
panic(err)
}
defer func(controller *api.Controller) {
err = os.Chmod(path.Join(dir, "zot-test", "blobs"), 0o777)
if err != nil { if err != nil {
panic(err) panic(err)
} }
ctx := context.Background()
_ = controller.Server.Shutdown(ctx)
}(ctlr)
Convey("Test all images", t, func() { err = os.Chmod(path.Join(dir, "zot-test", "blobs"), 0o000)
args := []string{"imagetest"} if err != nil {
configPath := makeConfigFile(fmt.Sprintf(`{"configs":[{"_name":"imagetest","url":"%s","showspinner":false}]}`, url)) panic(err)
defer os.Remove(configPath) }
cveCmd := NewImageCommand(new(searchService))
buff := bytes.NewBufferString("")
cveCmd.SetOut(buff)
cveCmd.SetErr(buff)
cveCmd.SetArgs(args)
err = cveCmd.Execute()
So(err, ShouldNotBeNil)
})
Convey("Test all images verbose", t, func() { defer func() {
args := []string{"imagetest", "--verbose"} err = os.Chmod(path.Join(dir, "zot-test", "blobs"), 0o777)
configPath := makeConfigFile(fmt.Sprintf(`{"configs":[{"_name":"imagetest","url":"%s","showspinner":false}]}`, url)) if err != nil {
defer os.Remove(configPath) panic(err)
cmd := NewImageCommand(new(searchService)) }
buff := bytes.NewBufferString("") }()
cmd.SetOut(buff)
cmd.SetErr(buff)
cmd.SetArgs(args)
err := cmd.Execute()
So(err, ShouldNotBeNil)
})
Convey("Test image by name", t, func() { conf.Storage.RootDirectory = dir
args := []string{"imagetest", "--name", "zot-test"} defaultVal := true
configPath := makeConfigFile(fmt.Sprintf(`{"configs":[{"_name":"imagetest","url":"%s","showspinner":false}]}`, url)) searchConfig := &extconf.SearchConfig{
defer os.Remove(configPath) BaseConfig: extconf.BaseConfig{Enable: &defaultVal},
cmd := NewImageCommand(new(searchService)) }
buff := bytes.NewBufferString("") conf.Extensions = &extconf.ExtensionConfig{
cmd.SetOut(buff) Search: searchConfig,
cmd.SetErr(buff) }
cmd.SetArgs(args)
err := cmd.Execute()
So(err, ShouldNotBeNil)
})
Convey("Test image by digest", t, func() { ctlr := api.NewController(conf)
args := []string{"imagetest", "--digest", test.GetTestBlobDigest("zot-test", "manifest").Encoded()} if err := ctlr.Run(context.Background()); err != nil {
configPath := makeConfigFile(fmt.Sprintf(`{"configs":[{"_name":"imagetest","url":"%s","showspinner":false}]}`, url)) So(err, ShouldNotBeNil)
defer os.Remove(configPath) }
cmd := NewImageCommand(new(searchService))
buff := bytes.NewBufferString("")
cmd.SetOut(buff)
cmd.SetErr(buff)
cmd.SetArgs(args)
err := cmd.Execute()
So(err, ShouldNotBeNil)
}) })
} }

@ -238,7 +238,7 @@ func TestVerify(t *testing.T) {
"name":"dynamodb", "name":"dynamodb",
"endpoint":"http://localhost:4566", "endpoint":"http://localhost:4566",
"region":"us-east-2", "region":"us-east-2",
"tableName":"BlobTable" "cacheTablename":"BlobTable"
} }
}, },
"http":{ "http":{
@ -305,7 +305,7 @@ func TestVerify(t *testing.T) {
"name":"dynamodb", "name":"dynamodb",
"endpoint":"http://localhost:4566", "endpoint":"http://localhost:4566",
"region":"us-east-2", "region":"us-east-2",
"tableName":"BlobTable" "cacheTablename":"BlobTable"
}, },
"storageDriver":{ "storageDriver":{
"name":"s3", "name":"s3",
@ -389,7 +389,7 @@ func TestVerify(t *testing.T) {
"name":"dynamodb", "name":"dynamodb",
"endpoint":"http://localhost:4566", "endpoint":"http://localhost:4566",
"region":"us-east-2", "region":"us-east-2",
"tableName":"BlobTable" "cacheTablename":"BlobTable"
} }
} }
} }
@ -468,7 +468,7 @@ func TestVerify(t *testing.T) {
"name":"dynamodb", "name":"dynamodb",
"endpoint":"http://localhost:4566", "endpoint":"http://localhost:4566",
"region":"us-east-2", "region":"us-east-2",
"tableName":"BlobTable" "cacheTablename":"BlobTable"
} }
} }
} }

@ -16,6 +16,7 @@ import (
cveinfo "zotregistry.io/zot/pkg/extensions/search/cve" cveinfo "zotregistry.io/zot/pkg/extensions/search/cve"
"zotregistry.io/zot/pkg/extensions/search/gql_generated" "zotregistry.io/zot/pkg/extensions/search/gql_generated"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
) )
@ -24,7 +25,9 @@ import (
// The library doesn't seem to handle concurrency very well internally. // The library doesn't seem to handle concurrency very well internally.
var cveInfo cveinfo.CveInfo //nolint:gochecknoglobals var cveInfo cveinfo.CveInfo //nolint:gochecknoglobals
func EnableSearchExtension(config *config.Config, log log.Logger, storeController storage.StoreController) { func EnableSearchExtension(config *config.Config, storeController storage.StoreController,
repoDB repodb.RepoDB, log log.Logger,
) {
if config.Extensions.Search != nil && *config.Extensions.Search.Enable && config.Extensions.Search.CVE != nil { if config.Extensions.Search != nil && *config.Extensions.Search.Enable && config.Extensions.Search.CVE != nil {
defaultUpdateInterval, _ := time.ParseDuration("2h") defaultUpdateInterval, _ := time.ParseDuration("2h")
@ -34,7 +37,7 @@ func EnableSearchExtension(config *config.Config, log log.Logger, storeControlle
log.Warn().Msg("CVE update interval set to too-short interval < 2h, changing update duration to 2 hours and continuing.") //nolint:lll // gofumpt conflicts with lll log.Warn().Msg("CVE update interval set to too-short interval < 2h, changing update duration to 2 hours and continuing.") //nolint:lll // gofumpt conflicts with lll
} }
cveInfo = cveinfo.NewCVEInfo(storeController, log) cveInfo = cveinfo.NewCVEInfo(storeController, repoDB, log)
go func() { go func() {
err := downloadTrivyDB(log, config.Extensions.Search.CVE.UpdateInterval) err := downloadTrivyDB(log, config.Extensions.Search.CVE.UpdateInterval)
@ -63,7 +66,7 @@ func downloadTrivyDB(log log.Logger, updateInterval time.Duration) error {
} }
func SetupSearchRoutes(config *config.Config, router *mux.Router, storeController storage.StoreController, func SetupSearchRoutes(config *config.Config, router *mux.Router, storeController storage.StoreController,
log log.Logger, repoDB repodb.RepoDB, log log.Logger,
) { ) {
log.Info().Msg("setting up search routes") log.Info().Msg("setting up search routes")
@ -74,12 +77,12 @@ func SetupSearchRoutes(config *config.Config, router *mux.Router, storeControlle
// cveinfo should already be initialized by this time // cveinfo should already be initialized by this time
// as EnableSearchExtension is supposed to be called earlier, but let's be sure // as EnableSearchExtension is supposed to be called earlier, but let's be sure
if cveInfo == nil { if cveInfo == nil {
cveInfo = cveinfo.NewCVEInfo(storeController, log) cveInfo = cveinfo.NewCVEInfo(storeController, repoDB, log)
} }
resConfig = search.GetResolverConfig(log, storeController, cveInfo) resConfig = search.GetResolverConfig(log, storeController, repoDB, cveInfo)
} else { } else {
resConfig = search.GetResolverConfig(log, storeController, nil) resConfig = search.GetResolverConfig(log, storeController, repoDB, nil)
} }
graphqlPrefix := router.PathPrefix(constants.FullSearchPrefix).Methods("OPTIONS", "GET", "POST") graphqlPrefix := router.PathPrefix(constants.FullSearchPrefix).Methods("OPTIONS", "GET", "POST")

@ -9,18 +9,21 @@ import (
"zotregistry.io/zot/pkg/api/config" "zotregistry.io/zot/pkg/api/config"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
) )
// EnableSearchExtension ... // EnableSearchExtension ...
func EnableSearchExtension(config *config.Config, log log.Logger, storeController storage.StoreController) { func EnableSearchExtension(config *config.Config, storeController storage.StoreController,
repoDB repodb.RepoDB, log log.Logger,
) {
log.Warn().Msg("skipping enabling search extension because given zot binary doesn't include this feature," + log.Warn().Msg("skipping enabling search extension because given zot binary doesn't include this feature," +
"please build a binary that does so") "please build a binary that does so")
} }
// SetupSearchRoutes ... // SetupSearchRoutes ...
func SetupSearchRoutes(conf *config.Config, router *mux.Router, func SetupSearchRoutes(config *config.Config, router *mux.Router, storeController storage.StoreController,
storeController storage.StoreController, log log.Logger, repoDB repodb.RepoDB, log log.Logger,
) { ) {
log.Warn().Msg("skipping setting up search routes because given zot binary doesn't include this feature," + log.Warn().Msg("skipping setting up search routes because given zot binary doesn't include this feature," +
"please build a binary that does so") "please build a binary that does so")

@ -18,6 +18,7 @@ const (
LabelAnnotationCreated = "org.label-schema.build-date"
LabelAnnotationVendor = "org.label-schema.vendor"
LabelAnnotationDescription = "org.label-schema.description"
LabelAnnotationLicenses = "org.label-schema.license"
LabelAnnotationTitle = "org.label-schema.name"
LabelAnnotationDocumentation = "org.label-schema.usage"
LabelAnnotationSource = "org.label-schema.vcs-url"
@ -192,6 +193,10 @@ func GetDescription(annotations map[string]string) string {
return GetAnnotationValue(annotations, ispec.AnnotationDescription, LabelAnnotationDescription)
}
func GetLicenses(annotations map[string]string) string {
return GetAnnotationValue(annotations, ispec.AnnotationLicenses, LabelAnnotationLicenses)
}
func GetVendor(annotations map[string]string) string {
return GetAnnotationValue(annotations, ispec.AnnotationVendor, LabelAnnotationVendor)
}
@ -220,12 +225,6 @@ func GetCategories(labels map[string]string) string {
return categories
}
func GetLicenses(annotations map[string]string) string {
licenses := annotations[ispec.AnnotationLicenses]
return licenses
}
func GetAnnotations(annotations, labels map[string]string) ImageAnnotations {
description := GetDescription(annotations)
if description == "" {

File diff suppressed because it is too large

@ -7,6 +7,7 @@ import (
"fmt" "fmt"
"path" "path"
"strconv" "strconv"
"strings"
"time" "time"
notreg "github.com/notaryproject/notation-go/registry" notreg "github.com/notaryproject/notation-go/registry"
@ -22,7 +23,7 @@ type OciLayoutUtils interface { //nolint: interfacebloat
GetImageManifest(repo string, reference string) (ispec.Manifest, godigest.Digest, error) GetImageManifest(repo string, reference string) (ispec.Manifest, godigest.Digest, error)
GetImageManifests(repo string) ([]ispec.Descriptor, error) GetImageManifests(repo string) ([]ispec.Descriptor, error)
GetImageBlobManifest(repo string, digest godigest.Digest) (ispec.Manifest, error) GetImageBlobManifest(repo string, digest godigest.Digest) (ispec.Manifest, error)
GetImageInfo(repo string, digest godigest.Digest) (ispec.Image, error) GetImageInfo(repo string, configDigest godigest.Digest) (ispec.Image, error)
GetImageTagsWithTimestamp(repo string) ([]TagInfo, error) GetImageTagsWithTimestamp(repo string) ([]TagInfo, error)
GetImagePlatform(imageInfo ispec.Image) (string, string) GetImagePlatform(imageInfo ispec.Image) (string, string)
GetImageManifestSize(repo string, manifestDigest godigest.Digest) int64 GetImageManifestSize(repo string, manifestDigest godigest.Digest) int64
@ -147,7 +148,7 @@ func (olu BaseOciLayoutUtils) GetImageBlobManifest(repo string, digest godigest.
return blobIndex, nil return blobIndex, nil
} }
func (olu BaseOciLayoutUtils) GetImageInfo(repo string, digest godigest.Digest) (ispec.Image, error) { func (olu BaseOciLayoutUtils) GetImageInfo(repo string, configDigest godigest.Digest) (ispec.Image, error) {
var imageInfo ispec.Image var imageInfo ispec.Image
var lockLatency time.Time var lockLatency time.Time
@ -157,7 +158,7 @@ func (olu BaseOciLayoutUtils) GetImageInfo(repo string, digest godigest.Digest)
imageStore.RLock(&lockLatency) imageStore.RLock(&lockLatency)
defer imageStore.RUnlock(&lockLatency) defer imageStore.RUnlock(&lockLatency)
blobBuf, err := imageStore.GetBlobContent(repo, digest) blobBuf, err := imageStore.GetBlobContent(repo, configDigest)
if err != nil { if err != nil {
olu.Log.Error().Err(err).Msg("unable to open image layers file") olu.Log.Error().Err(err).Msg("unable to open image layers file")
@ -230,6 +231,10 @@ func (olu BaseOciLayoutUtils) checkNotarySignature(name string, digest godigest.
// check cosign signature corresponding to manifest. // check cosign signature corresponding to manifest.
func (olu BaseOciLayoutUtils) checkCosignSignature(name string, digest godigest.Digest) bool { func (olu BaseOciLayoutUtils) checkCosignSignature(name string, digest godigest.Digest) bool {
if digest.Validate() != nil {
return false
}
imageStore := olu.StoreController.GetImageStore(name) imageStore := olu.StoreController.GetImageStore(name)
// if manifest is signed using cosign mechanism, cosign adds a new manifest. // if manifest is signed using cosign mechanism, cosign adds a new manifest.
@ -342,8 +347,8 @@ func (olu BaseOciLayoutUtils) GetExpandedRepoInfo(name string) (RepoInfo, error)
return RepoInfo{}, err return RepoInfo{}, err
} }
repoPlatforms := make([]OsArch, 0) repoVendorsSet := make(map[string]bool, len(manifestList))
repoVendors := make([]string, 0, len(manifestList)) repoPlatformsSet := make(map[string]OsArch, len(manifestList))
var lastUpdatedImageSummary ImageSummary var lastUpdatedImageSummary ImageSummary
@ -381,13 +386,16 @@ func (olu BaseOciLayoutUtils) GetExpandedRepoInfo(name string) (RepoInfo, error)
continue continue
} }
os, arch := olu.GetImagePlatform(imageConfigInfo) opSys, arch := olu.GetImagePlatform(imageConfigInfo)
osArch := OsArch{ osArch := OsArch{
Os: os, Os: opSys,
Arch: arch, Arch: arch,
} }
repoPlatforms = append(repoPlatforms, osArch) if opSys != "" || arch != "" {
osArchString := strings.TrimSpace(fmt.Sprintf("%s %s", opSys, arch))
repoPlatformsSet[osArchString] = osArch
}
layers := make([]LayerSummary, 0) layers := make([]LayerSummary, 0)
@ -410,7 +418,53 @@ func (olu BaseOciLayoutUtils) GetExpandedRepoInfo(name string) (RepoInfo, error)
// get image info from manifest annotation, if not found get from image config labels. // get image info from manifest annotation, if not found get from image config labels.
annotations := GetAnnotations(manifest.Annotations, imageConfigInfo.Config.Labels) annotations := GetAnnotations(manifest.Annotations, imageConfigInfo.Config.Labels)
repoVendors = append(repoVendors, annotations.Vendor) if annotations.Vendor != "" {
repoVendorsSet[annotations.Vendor] = true
}
imageConfigHistory := imageConfigInfo.History
allHistory := []LayerHistory{}
if len(imageConfigHistory) == 0 {
for _, layer := range layers {
allHistory = append(allHistory, LayerHistory{
Layer: layer,
HistoryDescription: HistoryDescription{},
})
}
} else {
// iterator over manifest layers
var layersIterator int
// since we are appending pointers, it is important to iterate with an index over the slice
for i := range imageConfigHistory {
allHistory = append(allHistory, LayerHistory{
HistoryDescription: HistoryDescription{
Created: *imageConfigHistory[i].Created,
CreatedBy: imageConfigHistory[i].CreatedBy,
Author: imageConfigHistory[i].Author,
Comment: imageConfigHistory[i].Comment,
EmptyLayer: imageConfigHistory[i].EmptyLayer,
},
})
if imageConfigHistory[i].EmptyLayer {
continue
}
if layersIterator+1 > len(layers) {
olu.Log.Error().Err(errors.ErrBadLayerCount).
Msgf("error on creating layer history for imaeg %s %s", name, man.Digest)
break
}
allHistory[i].Layer = layers[layersIterator]
layersIterator++
}
}
olu.Log.Debug().Msgf("all history %v", allHistory)
size := strconv.Itoa(int(imageSize)) size := strconv.Itoa(int(imageSize))
manifestDigest := man.Digest.String() manifestDigest := man.Digest.String()
@ -436,6 +490,7 @@ func (olu BaseOciLayoutUtils) GetExpandedRepoInfo(name string) (RepoInfo, error)
Labels: annotations.Labels, Labels: annotations.Labels,
Source: annotations.Source, Source: annotations.Source,
Layers: layers, Layers: layers,
History: allHistory,
} }
imageSummaries = append(imageSummaries, imageSummary) imageSummaries = append(imageSummaries, imageSummary)
@ -453,6 +508,19 @@ func (olu BaseOciLayoutUtils) GetExpandedRepoInfo(name string) (RepoInfo, error)
size := strconv.FormatInt(repoSize, 10) size := strconv.FormatInt(repoSize, 10)
repoPlatforms := make([]OsArch, 0, len(repoPlatformsSet))
for _, osArch := range repoPlatformsSet {
repoPlatforms = append(repoPlatforms, osArch)
}
repoVendors := make([]string, 0, len(repoVendorsSet))
for vendor := range repoVendorsSet {
vendor := vendor
repoVendors = append(repoVendors, vendor)
}
summary := RepoSummary{ summary := RepoSummary{
Name: name, Name: name,
LastUpdated: lastUpdatedTag.Timestamp, LastUpdated: lastUpdatedTag.Timestamp,


@ -0,0 +1,75 @@
package convert_test
import (
"context"
"encoding/json"
"errors"
"testing"
"github.com/99designs/gqlgen/graphql"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
. "github.com/smartystreets/goconvey/convey"
"zotregistry.io/zot/pkg/extensions/search/convert"
cveinfo "zotregistry.io/zot/pkg/extensions/search/cve"
"zotregistry.io/zot/pkg/meta/repodb"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
"zotregistry.io/zot/pkg/test/mocks"
)
var ErrTestError = errors.New("TestError")
func TestConvertErrors(t *testing.T) {
Convey("", t, func() {
repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{
RootDir: t.TempDir(),
})
So(err, ShouldBeNil)
configBlob, err := json.Marshal(ispec.Image{})
So(err, ShouldBeNil)
manifestBlob, err := json.Marshal(ispec.Manifest{
Layers: []ispec.Descriptor{
{
MediaType: ispec.MediaTypeImageLayerGzip,
Size: 0,
Digest: godigest.NewDigestFromEncoded(godigest.SHA256, "digest"),
},
},
})
So(err, ShouldBeNil)
repoMeta11 := repodb.ManifestMetadata{
ManifestBlob: manifestBlob,
ConfigBlob: configBlob,
}
digest11 := godigest.FromString("abc1")
err = repoDB.SetManifestMeta("repo1", digest11, repoMeta11)
So(err, ShouldBeNil)
err = repoDB.SetRepoTag("repo1", "0.1.0", digest11, ispec.MediaTypeImageManifest)
So(err, ShouldBeNil)
repoMetas, manifestMetaMap, err := repoDB.SearchRepos(context.Background(), "", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldBeNil)
ctx := graphql.WithResponseContext(context.Background(),
graphql.DefaultErrorPresenter, graphql.DefaultRecover)
_ = convert.RepoMeta2RepoSummary(
ctx,
repoMetas[0],
manifestMetaMap,
convert.SkipQGLField{},
mocks.CveInfoMock{
GetCVESummaryForImageFn: func(image string) (cveinfo.ImageCVESummary, error) {
return cveinfo.ImageCVESummary{}, ErrTestError
},
},
)
So(graphql.GetErrors(ctx).Error(), ShouldContainSubstring, "unable to run vulnerability scan on tag")
})
}


@ -0,0 +1,280 @@
package convert
import (
"strconv"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/extensions/search/common"
"zotregistry.io/zot/pkg/extensions/search/gql_generated"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
)
func BuildImageInfo(repo string, tag string, manifestDigest godigest.Digest,
manifest ispec.Manifest, imageConfig ispec.Image, isSigned bool,
) *gql_generated.ImageSummary {
layers := []*gql_generated.LayerSummary{}
size := int64(0)
log := log.NewLogger("debug", "")
allHistory := []*gql_generated.LayerHistory{}
formattedManifestDigest := manifestDigest.String()
configDigest := manifest.Config.Digest.String()
annotations := common.GetAnnotations(manifest.Annotations, imageConfig.Config.Labels)
lastUpdated := common.GetImageLastUpdated(imageConfig)
authors := annotations.Authors
if authors == "" {
authors = imageConfig.Author
}
history := imageConfig.History
if len(history) == 0 {
for _, layer := range manifest.Layers {
size += layer.Size
digest := layer.Digest.String()
layerSize := strconv.FormatInt(layer.Size, 10)
layer := &gql_generated.LayerSummary{
Size: &layerSize,
Digest: &digest,
}
layers = append(
layers,
layer,
)
allHistory = append(allHistory, &gql_generated.LayerHistory{
Layer: layer,
HistoryDescription: &gql_generated.HistoryDescription{},
})
}
formattedSize := strconv.FormatInt(size, 10)
imageInfo := &gql_generated.ImageSummary{
RepoName: &repo,
Tag: &tag,
Digest: &formattedManifestDigest,
ConfigDigest: &configDigest,
Size: &formattedSize,
Layers: layers,
History: allHistory,
Vendor: &annotations.Vendor,
Description: &annotations.Description,
Title: &annotations.Title,
Documentation: &annotations.Documentation,
Licenses: &annotations.Licenses,
Labels: &annotations.Labels,
Source: &annotations.Source,
Authors: &authors,
LastUpdated: &lastUpdated,
IsSigned: &isSigned,
Platform: &gql_generated.OsArch{
Os: &imageConfig.OS,
Arch: &imageConfig.Architecture,
},
}
return imageInfo
}
// iterator over manifest layers
var layersIterator int
// since we are appending pointers, it is important to iterate with an index over the slice
for i := range history {
allHistory = append(allHistory, &gql_generated.LayerHistory{
HistoryDescription: &gql_generated.HistoryDescription{
Created: history[i].Created,
CreatedBy: &history[i].CreatedBy,
Author: &history[i].Author,
Comment: &history[i].Comment,
EmptyLayer: &history[i].EmptyLayer,
},
})
if history[i].EmptyLayer {
continue
}
if layersIterator+1 > len(manifest.Layers) {
formattedSize := strconv.FormatInt(size, 10)
log.Error().Err(zerr.ErrBadLayerCount).Msg("error on creating layer history for ImageSummary")
return &gql_generated.ImageSummary{
RepoName: &repo,
Tag: &tag,
Digest: &formattedManifestDigest,
ConfigDigest: &configDigest,
Size: &formattedSize,
Layers: layers,
History: allHistory,
Vendor: &annotations.Vendor,
Description: &annotations.Description,
Title: &annotations.Title,
Documentation: &annotations.Documentation,
Licenses: &annotations.Licenses,
Labels: &annotations.Labels,
Source: &annotations.Source,
Authors: &authors,
LastUpdated: &lastUpdated,
IsSigned: &isSigned,
Platform: &gql_generated.OsArch{
Os: &imageConfig.OS,
Arch: &imageConfig.Architecture,
},
}
}
size += manifest.Layers[layersIterator].Size
digest := manifest.Layers[layersIterator].Digest.String()
layerSize := strconv.FormatInt(manifest.Layers[layersIterator].Size, 10)
layer := &gql_generated.LayerSummary{
Size: &layerSize,
Digest: &digest,
}
layers = append(
layers,
layer,
)
allHistory[i].Layer = layer
layersIterator++
}
formattedSize := strconv.FormatInt(size, 10)
imageInfo := &gql_generated.ImageSummary{
RepoName: &repo,
Tag: &tag,
Digest: &formattedManifestDigest,
ConfigDigest: &configDigest,
Size: &formattedSize,
Layers: layers,
History: allHistory,
Vendor: &annotations.Vendor,
Description: &annotations.Description,
Title: &annotations.Title,
Documentation: &annotations.Documentation,
Licenses: &annotations.Licenses,
Labels: &annotations.Labels,
Source: &annotations.Source,
Authors: &authors,
LastUpdated: &lastUpdated,
IsSigned: &isSigned,
Platform: &gql_generated.OsArch{
Os: &imageConfig.OS,
Arch: &imageConfig.Architecture,
},
}
return imageInfo
}
// updateRepoBlobsMap adds all the image blobs and their respective size to the repo blobs map
// and returns the total size of the image.
func updateRepoBlobsMap(manifestDigest string, manifestSize int64, configDigest string, configSize int64,
layers []ispec.Descriptor, repoBlob2Size map[string]int64,
) int64 {
imgSize := int64(0)
// add config size
imgSize += configSize
repoBlob2Size[configDigest] = configSize
// add manifest size
imgSize += manifestSize
repoBlob2Size[manifestDigest] = manifestSize
// add layers size
for _, layer := range layers {
repoBlob2Size[layer.Digest.String()] = layer.Size
imgSize += layer.Size
}
return imgSize
}
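To make the size deduplication concrete, here is a hypothetical white-box test sketch (not part of this change; all digests and sizes are invented): a layer shared by two images is stored once in repoBlob2Size, so it is counted once at repo level while each per-image size still includes it.

package convert

import (
	"testing"

	ispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestUpdateRepoBlobsMapSketch(t *testing.T) {
	repoBlob2Size := map[string]int64{}
	// Placeholder digest, not a real content digest.
	sharedLayer := ispec.Descriptor{Digest: "sha256:layerA", Size: 1000}

	img1 := updateRepoBlobsMap("sha256:manifest1", 500, "sha256:config1", 300,
		[]ispec.Descriptor{sharedLayer}, repoBlob2Size)
	img2 := updateRepoBlobsMap("sha256:manifest2", 500, "sha256:config2", 300,
		[]ispec.Descriptor{sharedLayer}, repoBlob2Size)

	if img1 != 1800 || img2 != 1800 {
		t.Fatalf("unexpected image sizes: %d %d", img1, img2)
	}

	// The shared layer blob appears once in the map, so the repo total is
	// 2600 (two configs + two manifests + one layer), not 3600.
	var repoSize int64
	for _, blobSize := range repoBlob2Size {
		repoSize += blobSize
	}

	if repoSize != 2600 {
		t.Fatalf("unexpected repo size: %d", repoSize)
	}
}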
func getLayersSummaries(manifestContent ispec.Manifest) []*gql_generated.LayerSummary {
layers := make([]*gql_generated.LayerSummary, 0, len(manifestContent.Layers))
for _, layer := range manifestContent.Layers {
size := strconv.FormatInt(layer.Size, 10)
digest := layer.Digest.String()
layers = append(layers, &gql_generated.LayerSummary{
Size: &size,
Digest: &digest,
})
}
return layers
}
func getAllHistory(manifestContent ispec.Manifest, configContent ispec.Image) (
[]*gql_generated.LayerHistory, error,
) {
allHistory := []*gql_generated.LayerHistory{}
layerSummaries := getLayersSummaries(manifestContent)
history := configContent.History
if len(history) == 0 {
// We don't have any image history metadata
// let's make do with just the layer metadata
for _, layer := range layerSummaries {
allHistory = append(allHistory, &gql_generated.LayerHistory{
Layer: layer,
HistoryDescription: &gql_generated.HistoryDescription{},
})
}
return allHistory, nil
}
// Iterator over manifest layers
var layersIterator int
// Since we are appending pointers, it is important to iterate with an index over the slice
for i := range history {
allHistory = append(allHistory, &gql_generated.LayerHistory{
HistoryDescription: &gql_generated.HistoryDescription{
Created: history[i].Created,
CreatedBy: &history[i].CreatedBy,
Author: &history[i].Author,
Comment: &history[i].Comment,
EmptyLayer: &history[i].EmptyLayer,
},
})
if history[i].EmptyLayer {
continue
}
if layersIterator+1 > len(manifestContent.Layers) {
return allHistory, zerr.ErrBadLayerCount
}
allHistory[i].Layer = layerSummaries[layersIterator]
layersIterator++
}
return allHistory, nil
}
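The empty-layer alignment above is easiest to see on a small example: entries marked EmptyLayer get only a description, the remaining entries consume manifest layers in order, and a history claiming more non-empty layers than the manifest carries yields ErrBadLayerCount. A hypothetical white-box test sketch with invented values:

package convert

import (
	"testing"

	ispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestGetAllHistorySketch(t *testing.T) {
	manifest := ispec.Manifest{
		Layers: []ispec.Descriptor{
			{Digest: "sha256:base", Size: 100},
			{Digest: "sha256:app", Size: 200},
		},
	}

	config := ispec.Image{
		History: []ispec.History{
			{CreatedBy: "ADD rootfs"},                          // consumes sha256:base
			{CreatedBy: "ENV PATH=/usr/bin", EmptyLayer: true}, // gets no layer
			{CreatedBy: "COPY app /app"},                       // consumes sha256:app
		},
	}

	history, err := getAllHistory(manifest, config)
	if err != nil {
		t.Fatal(err)
	}

	// Three history entries, with only the empty-layer one left without a layer.
	if len(history) != 3 || history[1].Layer != nil {
		t.Fatalf("unexpected history alignment: %v", history)
	}
}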
func imageHasSignatures(signatures repodb.ManifestSignatures) bool {
// (sigType, signatures)
for _, sigs := range signatures {
if len(sigs) > 0 {
return true
}
}
return false
}


@ -0,0 +1,546 @@
package convert
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"time"
"github.com/99designs/gqlgen/graphql"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/vektah/gqlparser/v2/gqlerror"
"zotregistry.io/zot/pkg/extensions/search/common"
cveinfo "zotregistry.io/zot/pkg/extensions/search/cve"
"zotregistry.io/zot/pkg/extensions/search/gql_generated"
"zotregistry.io/zot/pkg/meta/repodb"
)
type SkipQGLField struct {
Vulnerabilities bool
}
func RepoMeta2RepoSummary(ctx context.Context, repoMeta repodb.RepoMetadata,
manifestMetaMap map[string]repodb.ManifestMetadata, skip SkipQGLField, cveInfo cveinfo.CveInfo,
) *gql_generated.RepoSummary {
var (
repoLastUpdatedTimestamp = time.Time{}
repoPlatformsSet = map[string]*gql_generated.OsArch{}
repoVendorsSet = map[string]bool{}
lastUpdatedImageSummary *gql_generated.ImageSummary
repoStarCount = repoMeta.Stars
isBookmarked = false
isStarred = false
repoDownloadCount = 0
repoName = repoMeta.Name
// map used to keep track of all blobs of a repo without duplicates as
// some images may have the same layers
repoBlob2Size = make(map[string]int64, 10)
// made up of all manifests, configs and image layers
size = int64(0)
)
for tag, descriptor := range repoMeta.Tags {
var (
manifestContent ispec.Manifest
manifestDigest = descriptor.Digest
imageSignatures = repoMeta.Signatures[descriptor.Digest]
)
err := json.Unmarshal(manifestMetaMap[manifestDigest].ManifestBlob, &manifestContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("can't unmarshal manifest blob for image: %s:%s, manifest digest: %s, "+
"error: %s", repoMeta.Name, tag, manifestDigest, err.Error()))
continue
}
var configContent ispec.Image
err = json.Unmarshal(manifestMetaMap[manifestDigest].ConfigBlob, &configContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("can't unmarshal config blob for image: %s:%s, manifest digest: %s, error: %s",
repoMeta.Name, tag, manifestDigest, err.Error()))
continue
}
var (
tag = tag
isSigned = imageHasSignatures(imageSignatures)
configDigest = manifestContent.Config.Digest.String()
configSize = manifestContent.Config.Size
opSys = configContent.OS
arch = configContent.Architecture
osArch = gql_generated.OsArch{Os: &opSys, Arch: &arch}
imageLastUpdated = common.GetImageLastUpdated(configContent)
downloadCount = repoMeta.Statistics[descriptor.Digest].DownloadCount
size = updateRepoBlobsMap(
manifestDigest, int64(len(manifestMetaMap[manifestDigest].ManifestBlob)),
configDigest, configSize,
manifestContent.Layers,
repoBlob2Size)
imageSize = strconv.FormatInt(size, 10)
)
annotations := common.GetAnnotations(manifestContent.Annotations, configContent.Config.Labels)
authors := annotations.Authors
if authors == "" {
authors = configContent.Author
}
historyEntries, err := getAllHistory(manifestContent, configContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("error generating history on tag %s in repo %s: "+
"manifest digest: %s, error: %s", tag, repoMeta.Name, manifestDigest, err.Error()))
}
imageCveSummary := cveinfo.ImageCVESummary{}
imageSummary := gql_generated.ImageSummary{
RepoName: &repoName,
Tag: &tag,
Digest: &manifestDigest,
ConfigDigest: &configDigest,
LastUpdated: &imageLastUpdated,
IsSigned: &isSigned,
Size: &imageSize,
Platform: &osArch,
Vendor: &annotations.Vendor,
DownloadCount: &downloadCount,
Layers: getLayersSummaries(manifestContent),
Description: &annotations.Description,
Title: &annotations.Title,
Documentation: &annotations.Documentation,
Licenses: &annotations.Licenses,
Labels: &annotations.Labels,
Source: &annotations.Source,
Authors: &authors,
History: historyEntries,
Vulnerabilities: &gql_generated.ImageVulnerabilitySummary{
MaxSeverity: &imageCveSummary.MaxSeverity,
Count: &imageCveSummary.Count,
},
}
if annotations.Vendor != "" {
repoVendorsSet[annotations.Vendor] = true
}
if opSys != "" || arch != "" {
osArchString := strings.TrimSpace(fmt.Sprintf("%s %s", opSys, arch))
repoPlatformsSet[osArchString] = &gql_generated.OsArch{Os: &opSys, Arch: &arch}
}
if repoLastUpdatedTimestamp.Equal(time.Time{}) {
// initialize with first time value
repoLastUpdatedTimestamp = imageLastUpdated
lastUpdatedImageSummary = &imageSummary
} else if repoLastUpdatedTimestamp.Before(imageLastUpdated) {
repoLastUpdatedTimestamp = imageLastUpdated
lastUpdatedImageSummary = &imageSummary
}
repoDownloadCount += repoMeta.Statistics[descriptor.Digest].DownloadCount
}
// calculate repo size = sum all manifest, config and layer blobs sizes
for _, blobSize := range repoBlob2Size {
size += blobSize
}
repoSize := strconv.FormatInt(size, 10)
score := 0
repoPlatforms := make([]*gql_generated.OsArch, 0, len(repoPlatformsSet))
for _, osArch := range repoPlatformsSet {
repoPlatforms = append(repoPlatforms, osArch)
}
repoVendors := make([]*string, 0, len(repoVendorsSet))
for vendor := range repoVendorsSet {
vendor := vendor
repoVendors = append(repoVendors, &vendor)
}
// We only scan the latest image on the repo for performance reasons
// Check if vulnerability scanning is disabled
if cveInfo != nil && lastUpdatedImageSummary != nil && !skip.Vulnerabilities {
imageName := fmt.Sprintf("%s:%s", repoMeta.Name, *lastUpdatedImageSummary.Tag)
imageCveSummary, err := cveInfo.GetCVESummaryForImage(imageName)
if err != nil {
// Log the error, but we should still include the image in results
graphql.AddError(
ctx,
gqlerror.Errorf(
"unable to run vulnerability scan on tag %s in repo %s: error: %s",
*lastUpdatedImageSummary.Tag, repoMeta.Name, err.Error(),
),
)
}
lastUpdatedImageSummary.Vulnerabilities = &gql_generated.ImageVulnerabilitySummary{
MaxSeverity: &imageCveSummary.MaxSeverity,
Count: &imageCveSummary.Count,
}
}
return &gql_generated.RepoSummary{
Name: &repoName,
LastUpdated: &repoLastUpdatedTimestamp,
Size: &repoSize,
Platforms: repoPlatforms,
Vendors: repoVendors,
Score: &score,
NewestImage: lastUpdatedImageSummary,
DownloadCount: &repoDownloadCount,
StarCount: &repoStarCount,
IsBookmarked: &isBookmarked,
IsStarred: &isStarred,
}
}
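For orientation, a sketch of how a GraphQL resolver might feed RepoDB search results through this converter. The function below is hypothetical (it is not part of this change) and assumes SearchRepos is available on the repodb.RepoDB interface exactly as exercised by the convert test earlier in this diff, with skip and cveInfo coming from the resolver configuration.

// Hypothetical resolver-side wiring, shown only for illustration.
func globalSearchRepos(ctx context.Context, repoDB repodb.RepoDB, cveInfo cveinfo.CveInfo,
	query string, filter repodb.Filter, requestedPage repodb.PageInput, skip SkipQGLField,
) ([]*gql_generated.RepoSummary, error) {
	repoMetas, manifestMetaMap, err := repoDB.SearchRepos(ctx, query, filter, requestedPage)
	if err != nil {
		return nil, err
	}

	repos := make([]*gql_generated.RepoSummary, 0, len(repoMetas))
	for _, repoMeta := range repoMetas {
		repos = append(repos, RepoMeta2RepoSummary(ctx, repoMeta, manifestMetaMap, skip, cveInfo))
	}

	return repos, nil
}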
func RepoMeta2ImageSummaries(ctx context.Context, repoMeta repodb.RepoMetadata,
manifestMetaMap map[string]repodb.ManifestMetadata, skip SkipQGLField, cveInfo cveinfo.CveInfo,
) []*gql_generated.ImageSummary {
imageSummaries := make([]*gql_generated.ImageSummary, 0, len(repoMeta.Tags))
for tag, descriptor := range repoMeta.Tags {
var (
manifestContent ispec.Manifest
manifestDigest = descriptor.Digest
imageSignatures = repoMeta.Signatures[descriptor.Digest]
)
err := json.Unmarshal(manifestMetaMap[manifestDigest].ManifestBlob, &manifestContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("can't unmarshal manifest blob for image: %s:%s, "+
"manifest digest: %s, error: %s", repoMeta.Name, tag, manifestDigest, err.Error()))
continue
}
var configContent ispec.Image
err = json.Unmarshal(manifestMetaMap[manifestDigest].ConfigBlob, &configContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("can't unmarshal config blob for image: %s:%s, "+
"manifest digest: %s, error: %s", repoMeta.Name, tag, manifestDigest, err.Error()))
continue
}
imageCveSummary := cveinfo.ImageCVESummary{}
// Check if vulnerability scanning is disabled
if cveInfo != nil && !skip.Vulnerabilities {
imageName := fmt.Sprintf("%s:%s", repoMeta.Name, tag)
imageCveSummary, err = cveInfo.GetCVESummaryForImage(imageName)
if err != nil {
// Log the error, but we should still include the manifest in results
graphql.AddError(ctx, gqlerror.Errorf("unable to run vulnerability scan on tag %s in repo %s: "+
"manifest digest: %s, error: %s", tag, repoMeta.Name, manifestDigest, err.Error()))
}
}
imgSize := int64(0)
imgSize += manifestContent.Config.Size
imgSize += int64(len(manifestMetaMap[manifestDigest].ManifestBlob))
for _, layer := range manifestContent.Layers {
imgSize += layer.Size
}
var (
repoName = repoMeta.Name
tag = tag
configDigest = manifestContent.Config.Digest.String()
imageLastUpdated = common.GetImageLastUpdated(configContent)
isSigned = imageHasSignatures(imageSignatures)
imageSize = strconv.FormatInt(imgSize, 10)
os = configContent.OS
arch = configContent.Architecture
osArch = gql_generated.OsArch{Os: &os, Arch: &arch}
downloadCount = repoMeta.Statistics[descriptor.Digest].DownloadCount
)
annotations := common.GetAnnotations(manifestContent.Annotations, configContent.Config.Labels)
authors := annotations.Authors
if authors == "" {
authors = configContent.Author
}
historyEntries, err := getAllHistory(manifestContent, configContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("error generating history on tag %s in repo %s: "+
"manifest digest: %s, error: %s", tag, repoMeta.Name, manifestDigest, err.Error()))
}
imageSummary := gql_generated.ImageSummary{
RepoName: &repoName,
Tag: &tag,
Digest: &manifestDigest,
ConfigDigest: &configDigest,
LastUpdated: &imageLastUpdated,
IsSigned: &isSigned,
Size: &imageSize,
Platform: &osArch,
Vendor: &annotations.Vendor,
DownloadCount: &downloadCount,
Layers: getLayersSummaries(manifestContent),
Description: &annotations.Description,
Title: &annotations.Title,
Documentation: &annotations.Documentation,
Licenses: &annotations.Licenses,
Labels: &annotations.Labels,
Source: &annotations.Source,
Authors: &authors,
History: historyEntries,
Vulnerabilities: &gql_generated.ImageVulnerabilitySummary{
MaxSeverity: &imageCveSummary.MaxSeverity,
Count: &imageCveSummary.Count,
},
}
imageSummaries = append(imageSummaries, &imageSummary)
}
return imageSummaries
}
func RepoMeta2ExpandedRepoInfo(ctx context.Context, repoMeta repodb.RepoMetadata,
manifestMetaMap map[string]repodb.ManifestMetadata, skip SkipQGLField, cveInfo cveinfo.CveInfo,
) (*gql_generated.RepoSummary, []*gql_generated.ImageSummary) {
var (
repoLastUpdatedTimestamp = time.Time{}
repoPlatformsSet = map[string]*gql_generated.OsArch{}
repoVendorsSet = map[string]bool{}
lastUpdatedImageSummary *gql_generated.ImageSummary
repoStarCount = repoMeta.Stars
isBookmarked = false
isStarred = false
repoDownloadCount = 0
repoName = repoMeta.Name
// map used to keep track of all blobs of a repo without duplicates as
// some images may have the same layers
repoBlob2Size = make(map[string]int64, 10)
// made up of all manifests, configs and image layers
size = int64(0)
imageSummaries = make([]*gql_generated.ImageSummary, 0, len(repoMeta.Tags))
)
for tag, descriptor := range repoMeta.Tags {
var (
manifestContent ispec.Manifest
manifestDigest = descriptor.Digest
imageSignatures = repoMeta.Signatures[descriptor.Digest]
)
err := json.Unmarshal(manifestMetaMap[manifestDigest].ManifestBlob, &manifestContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("can't unmarshal manifest blob for image: %s:%s, manifest digest: %s, "+
"error: %s", repoMeta.Name, tag, manifestDigest, err.Error()))
continue
}
var configContent ispec.Image
err = json.Unmarshal(manifestMetaMap[manifestDigest].ConfigBlob, &configContent)
if err != nil {
graphql.AddError(ctx, gqlerror.Errorf("can't unmarshal config blob for image: %s:%s, manifest digest: %s, error: %s",
repoMeta.Name, tag, manifestDigest, err.Error()))
continue
}
var (
tag = tag
isSigned = imageHasSignatures(imageSignatures)
configDigest = manifestContent.Config.Digest.String()
configSize = manifestContent.Config.Size
opSys = configContent.OS
arch = configContent.Architecture
osArch = gql_generated.OsArch{Os: &opSys, Arch: &arch}
imageLastUpdated = common.GetImageLastUpdated(configContent)
downloadCount = repoMeta.Statistics[descriptor.Digest].DownloadCount
size = updateRepoBlobsMap(
manifestDigest, int64(len(manifestMetaMap[manifestDigest].ManifestBlob)),
configDigest, configSize,
manifestContent.Layers,
repoBlob2Size)
imageSize = strconv.FormatInt(size, 10)
)
annotations := common.GetAnnotations(manifestContent.Annotations, configContent.Config.Labels)
authors := annotations.Authors
if authors == "" {
authors = configContent.Author
}
imageCveSummary := cveinfo.ImageCVESummary{}
imageSummary := gql_generated.ImageSummary{
RepoName: &repoName,
Tag: &tag,
Digest: &manifestDigest,
ConfigDigest: &configDigest,
LastUpdated: &imageLastUpdated,
IsSigned: &isSigned,
Size: &imageSize,
Platform: &osArch,
Vendor: &annotations.Vendor,
DownloadCount: &downloadCount,
Layers: getLayersSummaries(manifestContent),
Description: &annotations.Description,
Title: &annotations.Title,
Documentation: &annotations.Documentation,
Licenses: &annotations.Licenses,
Labels: &annotations.Labels,
Source: &annotations.Source,
Authors: &authors,
Vulnerabilities: &gql_generated.ImageVulnerabilitySummary{
MaxSeverity: &imageCveSummary.MaxSeverity,
Count: &imageCveSummary.Count,
},
}
imageSummaries = append(imageSummaries, &imageSummary)
if annotations.Vendor != "" {
repoVendorsSet[annotations.Vendor] = true
}
if opSys != "" || arch != "" {
osArchString := strings.TrimSpace(fmt.Sprintf("%s %s", opSys, arch))
repoPlatformsSet[osArchString] = &gql_generated.OsArch{Os: &opSys, Arch: &arch}
}
if repoLastUpdatedTimestamp.Equal(time.Time{}) {
// initialize with first time value
repoLastUpdatedTimestamp = imageLastUpdated
lastUpdatedImageSummary = &imageSummary
} else if repoLastUpdatedTimestamp.Before(imageLastUpdated) {
repoLastUpdatedTimestamp = imageLastUpdated
lastUpdatedImageSummary = &imageSummary
}
repoDownloadCount += repoMeta.Statistics[descriptor.Digest].DownloadCount
}
// calculate repo size = sum all manifest, config and layer blobs sizes
for _, blobSize := range repoBlob2Size {
size += blobSize
}
repoSize := strconv.FormatInt(size, 10)
score := 0
repoPlatforms := make([]*gql_generated.OsArch, 0, len(repoPlatformsSet))
for _, osArch := range repoPlatformsSet {
repoPlatforms = append(repoPlatforms, osArch)
}
repoVendors := make([]*string, 0, len(repoVendorsSet))
for vendor := range repoVendorsSet {
vendor := vendor
repoVendors = append(repoVendors, &vendor)
}
// We only scan the latest image on the repo for performance reasons
// Check if vulnerability scanning is disabled
if cveInfo != nil && lastUpdatedImageSummary != nil && !skip.Vulnerabilities {
imageName := fmt.Sprintf("%s:%s", repoMeta.Name, *lastUpdatedImageSummary.Tag)
imageCveSummary, err := cveInfo.GetCVESummaryForImage(imageName)
if err != nil {
// Log the error, but we should still include the image in results
graphql.AddError(
ctx,
gqlerror.Errorf(
"unable to run vulnerability scan on tag %s in repo %s: error: %s",
*lastUpdatedImageSummary.Tag, repoMeta.Name, err.Error(),
),
)
}
lastUpdatedImageSummary.Vulnerabilities = &gql_generated.ImageVulnerabilitySummary{
MaxSeverity: &imageCveSummary.MaxSeverity,
Count: &imageCveSummary.Count,
}
}
summary := &gql_generated.RepoSummary{
Name: &repoName,
LastUpdated: &repoLastUpdatedTimestamp,
Size: &repoSize,
Platforms: repoPlatforms,
Vendors: repoVendors,
Score: &score,
NewestImage: lastUpdatedImageSummary,
DownloadCount: &repoDownloadCount,
StarCount: &repoStarCount,
IsBookmarked: &isBookmarked,
IsStarred: &isStarred,
}
return summary, imageSummaries
}
func GetPreloads(ctx context.Context) map[string]bool {
if !graphql.HasOperationContext(ctx) {
return map[string]bool{}
}
nestedPreloads := GetNestedPreloads(
graphql.GetOperationContext(ctx),
graphql.CollectFieldsCtx(ctx, nil),
"",
)
preloads := map[string]bool{}
for _, str := range nestedPreloads {
preloads[str] = true
}
return preloads
}
func GetNestedPreloads(ctx *graphql.OperationContext, fields []graphql.CollectedField, prefix string,
) []string {
preloads := []string{}
for _, column := range fields {
prefixColumn := GetPreloadString(prefix, column.Name)
preloads = append(preloads, prefixColumn)
preloads = append(preloads,
GetNestedPreloads(ctx, graphql.CollectFields(ctx, column.Selections, nil), prefixColumn)...,
)
}
return preloads
}
func GetPreloadString(prefix, name string) string {
if len(prefix) > 0 {
return prefix + "." + name
}
return name
}
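GetPreloads is what lets resolvers avoid work for fields the client never selected. A hypothetical helper tying it to SkipQGLField; the preload key below depends on the actual query shape and is only an assumed example.

// Hypothetical helper, for illustration only: skip the vulnerability scan
// when the requested fields do not include NewestImage.Vulnerabilities.
// The exact key depends on the query structure, so treat it as an assumption.
func skipScanIfNotRequested(ctx context.Context) SkipQGLField {
	preloads := GetPreloads(ctx)

	return SkipQGLField{
		Vulnerabilities: !preloads["Repos.NewestImage.Vulnerabilities"],
	}
}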


@ -1,6 +1,7 @@
package cveinfo package cveinfo
import ( import (
"encoding/json"
"fmt" "fmt"
godigest "github.com/opencontainers/go-digest" godigest "github.com/opencontainers/go-digest"
@ -10,6 +11,7 @@ import (
cvemodel "zotregistry.io/zot/pkg/extensions/search/cve/model" cvemodel "zotregistry.io/zot/pkg/extensions/search/cve/model"
"zotregistry.io/zot/pkg/extensions/search/cve/trivy" "zotregistry.io/zot/pkg/extensions/search/cve/trivy"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
) )
@ -40,30 +42,59 @@ type ImageCVESummary struct {
} }
type BaseCveInfo struct { type BaseCveInfo struct {
Log log.Logger Log log.Logger
Scanner Scanner Scanner Scanner
LayoutUtils common.OciLayoutUtils RepoDB repodb.RepoDB
} }
func NewCVEInfo(storeController storage.StoreController, log log.Logger) *BaseCveInfo { func NewCVEInfo(storeController storage.StoreController, repoDB repodb.RepoDB,
layoutUtils := common.NewBaseOciLayoutUtils(storeController, log) log log.Logger,
scanner := trivy.NewScanner(storeController, layoutUtils, log) ) *BaseCveInfo {
scanner := trivy.NewScanner(storeController, repoDB, log)
return &BaseCveInfo{Log: log, Scanner: scanner, LayoutUtils: layoutUtils} return &BaseCveInfo{
Log: log,
Scanner: scanner,
RepoDB: repoDB,
}
} }
func (cveinfo BaseCveInfo) GetImageListForCVE(repo, cveID string) ([]ImageInfoByCVE, error) { func (cveinfo BaseCveInfo) GetImageListForCVE(repo, cveID string) ([]ImageInfoByCVE, error) {
imgList := make([]ImageInfoByCVE, 0) imgList := make([]ImageInfoByCVE, 0)
manifests, err := cveinfo.LayoutUtils.GetImageManifests(repo) repoMeta, err := cveinfo.RepoDB.GetRepoMeta(repo)
if err != nil { if err != nil {
cveinfo.Log.Error().Err(err).Str("repo", repo).Msg("unable to get list of tags from repo") cveinfo.Log.Error().Err(err).Str("repo", repo).Str("cve-id", cveID).
Msg("unable to get list of tags from repo")
return imgList, err return imgList, err
} }
for _, manifest := range manifests { for tag, descriptor := range repoMeta.Tags {
tag := manifest.Annotations[ispec.AnnotationRefName] manifestDigestStr := descriptor.Digest
manifestDigest, err := godigest.Parse(manifestDigestStr)
if err != nil {
cveinfo.Log.Error().Err(err).Str("repo", repo).Str("tag", tag).
Str("cve-id", cveID).Str("digest", manifestDigestStr).Msg("unable to parse digest")
return nil, err
}
manifestMeta, err := cveinfo.RepoDB.GetManifestMeta(repo, manifestDigest)
if err != nil {
return nil, err
}
var manifestContent ispec.Manifest
err = json.Unmarshal(manifestMeta.ManifestBlob, &manifestContent)
if err != nil {
cveinfo.Log.Error().Err(err).Str("repo", repo).Str("tag", tag).
Str("cve-id", cveID).Msg("unable to unmashal manifest blob")
continue
}
image := fmt.Sprintf("%s:%s", repo, tag) image := fmt.Sprintf("%s:%s", repo, tag)
@ -79,19 +110,10 @@ func (cveinfo BaseCveInfo) GetImageListForCVE(repo, cveID string) ([]ImageInfoBy
for id := range cveMap { for id := range cveMap {
if id == cveID { if id == cveID {
digest := manifest.Digest
imageBlobManifest, err := cveinfo.LayoutUtils.GetImageBlobManifest(repo, digest)
if err != nil {
cveinfo.Log.Error().Err(err).Msg("unable to read image blob manifest")
return []ImageInfoByCVE{}, err
}
imgList = append(imgList, ImageInfoByCVE{ imgList = append(imgList, ImageInfoByCVE{
Tag: tag, Tag: tag,
Digest: digest, Digest: manifestDigest,
Manifest: imageBlobManifest, Manifest: manifestContent,
}) })
break break
@ -103,24 +125,59 @@ func (cveinfo BaseCveInfo) GetImageListForCVE(repo, cveID string) ([]ImageInfoBy
} }
func (cveinfo BaseCveInfo) GetImageListWithCVEFixed(repo, cveID string) ([]common.TagInfo, error) { func (cveinfo BaseCveInfo) GetImageListWithCVEFixed(repo, cveID string) ([]common.TagInfo, error) {
tagsInfo, err := cveinfo.LayoutUtils.GetImageTagsWithTimestamp(repo) repoMeta, err := cveinfo.RepoDB.GetRepoMeta(repo)
if err != nil { if err != nil {
cveinfo.Log.Error().Err(err).Str("repo", repo).Msg("unable to get list of tags from repo") cveinfo.Log.Error().Err(err).Str("repo", repo).Str("cve-id", cveID).
Msg("unable to get list of tags from repo")
return []common.TagInfo{}, err return []common.TagInfo{}, err
} }
vulnerableTags := make([]common.TagInfo, 0) vulnerableTags := make([]common.TagInfo, 0)
allTags := make([]common.TagInfo, 0)
var hasCVE bool for tag, descriptor := range repoMeta.Tags {
manifestDigestStr := descriptor.Digest
for _, tag := range tagsInfo { manifestDigest, err := godigest.Parse(manifestDigestStr)
image := fmt.Sprintf("%s:%s", repo, tag.Name) if err != nil {
tagInfo := common.TagInfo{Name: tag.Name, Timestamp: tag.Timestamp, Digest: tag.Digest} cveinfo.Log.Error().Err(err).Str("repo", repo).Str("tag", tag).
Str("cve-id", cveID).Str("digest", manifestDigestStr).Msg("unable to parse digest")
continue
}
manifestMeta, err := cveinfo.RepoDB.GetManifestMeta(repo, manifestDigest)
if err != nil {
cveinfo.Log.Error().Err(err).Str("repo", repo).Str("tag", tag).
Str("cve-id", cveID).Msg("unable to obtain manifest meta")
continue
}
var configContent ispec.Image
err = json.Unmarshal(manifestMeta.ConfigBlob, &configContent)
if err != nil {
cveinfo.Log.Error().Err(err).Str("repo", repo).Str("tag", tag).
Str("cve-id", cveID).Msg("unable to unmashal manifest blob")
continue
}
tagInfo := common.TagInfo{
Name: tag,
Timestamp: common.GetImageLastUpdated(configContent),
Digest: manifestDigest,
}
allTags = append(allTags, tagInfo)
image := fmt.Sprintf("%s:%s", repo, tag)
isValidImage, _ := cveinfo.Scanner.IsImageFormatScannable(image) isValidImage, _ := cveinfo.Scanner.IsImageFormatScannable(image)
if !isValidImage { if !isValidImage {
cveinfo.Log.Debug().Str("image", image). cveinfo.Log.Debug().Str("image", image).Str("cve-id", cveID).
Msg("image media type not supported for scanning, adding as a vulnerable image") Msg("image media type not supported for scanning, adding as a vulnerable image")
vulnerableTags = append(vulnerableTags, tagInfo) vulnerableTags = append(vulnerableTags, tagInfo)
@ -130,7 +187,7 @@ func (cveinfo BaseCveInfo) GetImageListWithCVEFixed(repo, cveID string) ([]commo
cveMap, err := cveinfo.Scanner.ScanImage(image) cveMap, err := cveinfo.Scanner.ScanImage(image)
if err != nil { if err != nil {
cveinfo.Log.Debug().Str("image", image). cveinfo.Log.Debug().Str("image", image).Str("cve-id", cveID).
Msg("scanning failed, adding as a vulnerable image") Msg("scanning failed, adding as a vulnerable image")
vulnerableTags = append(vulnerableTags, tagInfo) vulnerableTags = append(vulnerableTags, tagInfo)
@ -138,31 +195,24 @@ func (cveinfo BaseCveInfo) GetImageListWithCVEFixed(repo, cveID string) ([]commo
continue continue
} }
hasCVE = false if _, hasCVE := cveMap[cveID]; hasCVE {
for id := range cveMap {
if id == cveID {
hasCVE = true
break
}
}
if hasCVE {
vulnerableTags = append(vulnerableTags, tagInfo) vulnerableTags = append(vulnerableTags, tagInfo)
} }
} }
if len(vulnerableTags) != 0 { var fixedTags []common.TagInfo
cveinfo.Log.Info().Str("repo", repo).Msg("comparing fixed tags timestamp")
tagsInfo = common.GetFixedTags(tagsInfo, vulnerableTags) if len(vulnerableTags) != 0 {
cveinfo.Log.Info().Str("repo", repo).Str("cve-id", cveID).Msgf("Vulnerable tags: %v", vulnerableTags)
fixedTags = common.GetFixedTags(allTags, vulnerableTags)
cveinfo.Log.Info().Str("repo", repo).Str("cve-id", cveID).Msgf("Fixed tags: %v", fixedTags)
} else { } else {
cveinfo.Log.Info().Str("repo", repo).Str("cve-id", cveID). cveinfo.Log.Info().Str("repo", repo).Str("cve-id", cveID).
Msg("image does not contain any tag that have given cve") Msg("image does not contain any tag that have given cve")
fixedTags = allTags
} }
return tagsInfo, nil return fixedTags, nil
} }
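A hypothetical caller of the reworked method, only to show the returned shape: the fixed tags come back as common.TagInfo values (the CVE ID below is a placeholder).

// Hypothetical usage, for illustration only.
func printFixedTags(cveInfo BaseCveInfo, repo string) error {
	fixedTags, err := cveInfo.GetImageListWithCVEFixed(repo, "CVE-2022-00000")
	if err != nil {
		return err
	}

	for _, tagInfo := range fixedTags {
		fmt.Printf("%s:%s (digest %s) is not affected\n", repo, tagInfo.Name, tagInfo.Digest)
	}

	return nil
}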
func (cveinfo BaseCveInfo) GetCVEListForImage(image string) (map[string]cvemodel.CVE, error) { func (cveinfo BaseCveInfo) GetCVEListForImage(image string) (map[string]cvemodel.CVE, error) {

File diff suppressed because it is too large


@ -1,6 +1,7 @@
package trivy package trivy
import ( import (
"encoding/json"
"flag" "flag"
"path" "path"
"strings" "strings"
@ -11,13 +12,15 @@ import (
"github.com/aquasecurity/trivy/pkg/commands/operation" "github.com/aquasecurity/trivy/pkg/commands/operation"
"github.com/aquasecurity/trivy/pkg/types" "github.com/aquasecurity/trivy/pkg/types"
regTypes "github.com/google/go-containerregistry/pkg/v1/types" regTypes "github.com/google/go-containerregistry/pkg/v1/types"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1" ispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/urfave/cli/v2" "github.com/urfave/cli/v2"
"zotregistry.io/zot/errors" zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/extensions/search/common" "zotregistry.io/zot/pkg/extensions/search/common"
cvemodel "zotregistry.io/zot/pkg/extensions/search/cve/model" cvemodel "zotregistry.io/zot/pkg/extensions/search/cve/model"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
) )
@ -69,7 +72,7 @@ type cveTrivyController struct {
} }
type Scanner struct { type Scanner struct {
layoutUtils common.OciLayoutUtils repoDB repodb.RepoDB
cveController cveTrivyController cveController cveTrivyController
storeController storage.StoreController storeController storage.StoreController
log log.Logger log log.Logger
@ -77,7 +80,7 @@ type Scanner struct {
} }
func NewScanner(storeController storage.StoreController, func NewScanner(storeController storage.StoreController,
layoutUtils common.OciLayoutUtils, log log.Logger, repoDB repodb.RepoDB, log log.Logger,
) *Scanner { ) *Scanner {
cveController := cveTrivyController{} cveController := cveTrivyController{}
@ -107,7 +110,7 @@ func NewScanner(storeController storage.StoreController,
return &Scanner{ return &Scanner{
log: log, log: log,
layoutUtils: layoutUtils, repoDB: repoDB,
cveController: cveController, cveController: cveController,
storeController: storeController, storeController: storeController,
dbLock: &sync.Mutex{}, dbLock: &sync.Mutex{},
@ -146,36 +149,44 @@ func (scanner Scanner) getTrivyContext(image string) *trivyCtx {
func (scanner Scanner) IsImageFormatScannable(image string) (bool, error) { func (scanner Scanner) IsImageFormatScannable(image string) (bool, error) {
imageDir, inputTag := common.GetImageDirAndTag(image) imageDir, inputTag := common.GetImageDirAndTag(image)
manifests, err := scanner.layoutUtils.GetImageManifests(imageDir) repoMeta, err := scanner.repoDB.GetRepoMeta(imageDir)
if err != nil { if err != nil {
return false, err return false, err
} }
for _, manifest := range manifests { manifestDigestStr, ok := repoMeta.Tags[inputTag]
tag, ok := manifest.Annotations[ispec.AnnotationRefName] if !ok {
return false, zerr.ErrTagMetaNotFound
}
if ok && inputTag != "" && tag != inputTag { manifestDigest, err := godigest.Parse(manifestDigestStr.Digest)
continue if err != nil {
} return false, err
}
blobManifest, err := scanner.layoutUtils.GetImageBlobManifest(imageDir, manifest.Digest) manifestData, err := scanner.repoDB.GetManifestData(manifestDigest)
if err != nil { if err != nil {
return false, err return false, err
} }
imageLayers := blobManifest.Layers var manifestContent ispec.Manifest
for _, imageLayer := range imageLayers { err = json.Unmarshal(manifestData.ManifestBlob, &manifestContent)
switch imageLayer.MediaType { if err != nil {
case ispec.MediaTypeImageLayer, ispec.MediaTypeImageLayerGzip, string(regTypes.DockerLayer): scanner.log.Error().Err(err).Str("image", image).Msg("unable to unmarshal manifest blob")
return true, nil
default: return false, zerr.ErrScanNotSupported
scanner.log.Debug().Str("image", }
image).Msgf("image media type %s not supported for scanning", imageLayer.MediaType)
return false, errors.ErrScanNotSupported for _, imageLayer := range manifestContent.Layers {
} switch imageLayer.MediaType {
case ispec.MediaTypeImageLayerGzip, ispec.MediaTypeImageLayer, string(regTypes.DockerLayer):
return true, nil
default:
scanner.log.Debug().Str("image", image).
Msgf("image media type %s not supported for scanning", imageLayer.MediaType)
return false, zerr.ErrScanNotSupported
} }
} }
@ -185,7 +196,7 @@ func (scanner Scanner) IsImageFormatScannable(image string) (bool, error) {
func (scanner Scanner) ScanImage(image string) (map[string]cvemodel.CVE, error) { func (scanner Scanner) ScanImage(image string) (map[string]cvemodel.CVE, error) {
cveidMap := make(map[string]cvemodel.CVE) cveidMap := make(map[string]cvemodel.CVE)
scanner.log.Info().Str("image", image).Msg("scanning image") scanner.log.Debug().Str("image", image).Msg("scanning image")
tCtx := scanner.getTrivyContext(image) tCtx := scanner.getTrivyContext(image)


@ -16,6 +16,8 @@ import (
"zotregistry.io/zot/pkg/extensions/monitoring" "zotregistry.io/zot/pkg/extensions/monitoring"
"zotregistry.io/zot/pkg/extensions/search/common" "zotregistry.io/zot/pkg/extensions/search/common"
"zotregistry.io/zot/pkg/log" "zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
"zotregistry.io/zot/pkg/storage" "zotregistry.io/zot/pkg/storage"
"zotregistry.io/zot/pkg/storage/local" "zotregistry.io/zot/pkg/storage/local"
"zotregistry.io/zot/pkg/test" "zotregistry.io/zot/pkg/test"
@ -83,9 +85,15 @@ func TestMultipleStoragePath(t *testing.T) {
storeController.SubStore = subStore storeController.SubStore = subStore
layoutUtils := common.NewBaseOciLayoutUtils(storeController, log) repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{
RootDir: firstRootDir,
})
So(err, ShouldBeNil)
scanner := NewScanner(storeController, layoutUtils, log) err = repodb.SyncRepoDB(repoDB, storeController, log)
So(err, ShouldBeNil)
scanner := NewScanner(storeController, repoDB, log)
So(scanner.storeController.DefaultStore, ShouldNotBeNil) So(scanner.storeController.DefaultStore, ShouldNotBeNil)
So(scanner.storeController.SubStore, ShouldNotBeNil) So(scanner.storeController.SubStore, ShouldNotBeNil)


@ -64,6 +64,7 @@ type ComplexityRoot struct {
GlobalSearchResult struct { GlobalSearchResult struct {
Images func(childComplexity int) int Images func(childComplexity int) int
Layers func(childComplexity int) int Layers func(childComplexity int) int
Page func(childComplexity int) int
Repos func(childComplexity int) int Repos func(childComplexity int) int
} }
@ -126,19 +127,26 @@ type ComplexityRoot struct {
Name func(childComplexity int) int Name func(childComplexity int) int
} }
PageInfo struct {
NextPage func(childComplexity int) int
ObjectCount func(childComplexity int) int
Pages func(childComplexity int) int
PreviousPage func(childComplexity int) int
}
Query struct { Query struct {
BaseImageList func(childComplexity int, image string) int BaseImageList func(childComplexity int, image string) int
CVEListForImage func(childComplexity int, image string) int CVEListForImage func(childComplexity int, image string) int
DerivedImageList func(childComplexity int, image string) int DerivedImageList func(childComplexity int, image string) int
ExpandedRepoInfo func(childComplexity int, repo string) int ExpandedRepoInfo func(childComplexity int, repo string) int
GlobalSearch func(childComplexity int, query string) int GlobalSearch func(childComplexity int, query string, filter *Filter, requestedPage *PageInput) int
Image func(childComplexity int, image string) int Image func(childComplexity int, image string) int
ImageList func(childComplexity int, repo string) int ImageList func(childComplexity int, repo string) int
ImageListForCve func(childComplexity int, id string) int ImageListForCve func(childComplexity int, id string) int
ImageListForDigest func(childComplexity int, id string) int ImageListForDigest func(childComplexity int, id string) int
ImageListWithCVEFixed func(childComplexity int, id string, image string) int ImageListWithCVEFixed func(childComplexity int, id string, image string) int
Referrers func(childComplexity int, repo string, digest string, typeArg string) int Referrers func(childComplexity int, repo string, digest string, typeArg string) int
RepoListWithNewestImage func(childComplexity int) int RepoListWithNewestImage func(childComplexity int, requestedPage *PageInput) int
} }
Referrer struct { Referrer struct {
@ -157,6 +165,7 @@ type ComplexityRoot struct {
RepoSummary struct { RepoSummary struct {
DownloadCount func(childComplexity int) int DownloadCount func(childComplexity int) int
IsBookmarked func(childComplexity int) int IsBookmarked func(childComplexity int) int
IsStarred func(childComplexity int) int
LastUpdated func(childComplexity int) int LastUpdated func(childComplexity int) int
Name func(childComplexity int) int Name func(childComplexity int) int
NewestImage func(childComplexity int) int NewestImage func(childComplexity int) int
@ -173,10 +182,10 @@ type QueryResolver interface {
ImageListForCve(ctx context.Context, id string) ([]*ImageSummary, error) ImageListForCve(ctx context.Context, id string) ([]*ImageSummary, error)
ImageListWithCVEFixed(ctx context.Context, id string, image string) ([]*ImageSummary, error) ImageListWithCVEFixed(ctx context.Context, id string, image string) ([]*ImageSummary, error)
ImageListForDigest(ctx context.Context, id string) ([]*ImageSummary, error) ImageListForDigest(ctx context.Context, id string) ([]*ImageSummary, error)
RepoListWithNewestImage(ctx context.Context) ([]*RepoSummary, error) RepoListWithNewestImage(ctx context.Context, requestedPage *PageInput) ([]*RepoSummary, error)
ImageList(ctx context.Context, repo string) ([]*ImageSummary, error) ImageList(ctx context.Context, repo string) ([]*ImageSummary, error)
ExpandedRepoInfo(ctx context.Context, repo string) (*RepoInfo, error) ExpandedRepoInfo(ctx context.Context, repo string) (*RepoInfo, error)
GlobalSearch(ctx context.Context, query string) (*GlobalSearchResult, error) GlobalSearch(ctx context.Context, query string, filter *Filter, requestedPage *PageInput) (*GlobalSearchResult, error)
DerivedImageList(ctx context.Context, image string) ([]*ImageSummary, error) DerivedImageList(ctx context.Context, image string) ([]*ImageSummary, error)
BaseImageList(ctx context.Context, image string) ([]*ImageSummary, error) BaseImageList(ctx context.Context, image string) ([]*ImageSummary, error)
Image(ctx context.Context, image string) (*ImageSummary, error) Image(ctx context.Context, image string) (*ImageSummary, error)
@ -275,6 +284,13 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
return e.complexity.GlobalSearchResult.Layers(childComplexity), true return e.complexity.GlobalSearchResult.Layers(childComplexity), true
case "GlobalSearchResult.Page":
if e.complexity.GlobalSearchResult.Page == nil {
break
}
return e.complexity.GlobalSearchResult.Page(childComplexity), true
case "GlobalSearchResult.Repos": case "GlobalSearchResult.Repos":
if e.complexity.GlobalSearchResult.Repos == nil { if e.complexity.GlobalSearchResult.Repos == nil {
break break
@ -548,6 +564,34 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
return e.complexity.PackageInfo.Name(childComplexity), true return e.complexity.PackageInfo.Name(childComplexity), true
case "PageInfo.NextPage":
if e.complexity.PageInfo.NextPage == nil {
break
}
return e.complexity.PageInfo.NextPage(childComplexity), true
case "PageInfo.ObjectCount":
if e.complexity.PageInfo.ObjectCount == nil {
break
}
return e.complexity.PageInfo.ObjectCount(childComplexity), true
case "PageInfo.Pages":
if e.complexity.PageInfo.Pages == nil {
break
}
return e.complexity.PageInfo.Pages(childComplexity), true
case "PageInfo.PreviousPage":
if e.complexity.PageInfo.PreviousPage == nil {
break
}
return e.complexity.PageInfo.PreviousPage(childComplexity), true
case "Query.BaseImageList": case "Query.BaseImageList":
if e.complexity.Query.BaseImageList == nil { if e.complexity.Query.BaseImageList == nil {
break break
@ -606,7 +650,7 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
return 0, false return 0, false
} }
return e.complexity.Query.GlobalSearch(childComplexity, args["query"].(string)), true return e.complexity.Query.GlobalSearch(childComplexity, args["query"].(string), args["filter"].(*Filter), args["requestedPage"].(*PageInput)), true
case "Query.Image": case "Query.Image":
if e.complexity.Query.Image == nil { if e.complexity.Query.Image == nil {
@ -685,7 +729,12 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
break break
} }
return e.complexity.Query.RepoListWithNewestImage(childComplexity), true args, err := ec.field_Query_RepoListWithNewestImage_args(context.TODO(), rawArgs)
if err != nil {
return 0, false
}
return e.complexity.Query.RepoListWithNewestImage(childComplexity, args["requestedPage"].(*PageInput)), true
case "Referrer.Annotations": case "Referrer.Annotations":
if e.complexity.Referrer.Annotations == nil { if e.complexity.Referrer.Annotations == nil {
@ -750,6 +799,13 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
return e.complexity.RepoSummary.IsBookmarked(childComplexity), true return e.complexity.RepoSummary.IsBookmarked(childComplexity), true
case "RepoSummary.IsStarred":
if e.complexity.RepoSummary.IsStarred == nil {
break
}
return e.complexity.RepoSummary.IsStarred(childComplexity), true
case "RepoSummary.LastUpdated": case "RepoSummary.LastUpdated":
if e.complexity.RepoSummary.LastUpdated == nil { if e.complexity.RepoSummary.LastUpdated == nil {
break break
@ -813,7 +869,10 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
func (e *executableSchema) Exec(ctx context.Context) graphql.ResponseHandler { func (e *executableSchema) Exec(ctx context.Context) graphql.ResponseHandler {
rc := graphql.GetOperationContext(ctx) rc := graphql.GetOperationContext(ctx)
ec := executionContext{rc, e} ec := executionContext{rc, e}
inputUnmarshalMap := graphql.BuildUnmarshalerMap() inputUnmarshalMap := graphql.BuildUnmarshalerMap(
ec.unmarshalInputFilter,
ec.unmarshalInputPageInput,
)
first := true first := true
switch rc.Operation.Operation { switch rc.Operation.Operation {
@ -902,6 +961,7 @@ type RepoInfo {
Search everything. Can search Images, Repos and Layers Search everything. Can search Images, Repos and Layers
""" """
type GlobalSearchResult { type GlobalSearchResult {
Page: PageInfo
Images: [ImageSummary] Images: [ImageSummary]
Repos: [RepoSummary] Repos: [RepoSummary]
Layers: [LayerSummary] Layers: [LayerSummary]
@ -926,7 +986,7 @@ type ImageSummary {
DownloadCount: Int DownloadCount: Int
Layers: [LayerSummary] Layers: [LayerSummary]
Description: String Description: String
Licenses: String Licenses: String # The value of the annotation if present, 'unknown' otherwise.
Labels: String Labels: String
Title: String Title: String
Source: String Source: String
@ -952,10 +1012,11 @@ type RepoSummary {
Platforms: [OsArch] Platforms: [OsArch]
Vendors: [String] Vendors: [String]
Score: Int Score: Int
NewestImage: ImageSummary NewestImage: ImageSummary # Newest based on created timestamp
DownloadCount: Int DownloadCount: Int
StarCount: Int StarCount: Int
IsBookmarked: Boolean IsBookmarked: Boolean
IsStarred: Boolean
} }
# Currently the same as LayerInfo, we can refactor later # Currently the same as LayerInfo, we can refactor later
@ -1015,6 +1076,35 @@ type OsArch {
Arch: String Arch: String
} }
enum SortCriteria {
RELEVANCE
UPDATE_TIME
ALPHABETIC_ASC
ALPHABETIC_DSC
STARS
DOWNLOADS
}
type PageInfo {
ObjectCount: Int!
PreviousPage: Int
NextPage: Int
Pages: Int
}
# Pagination parameters
input PageInput {
limit: Int
offset: Int
sortBy: SortCriteria
}
input Filter {
Os: [String]
Arch: [String]
HasToBeSigned: Boolean
}
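The new inputs are easiest to read from the client side. A hypothetical query (field selection trimmed) that exercises filtering and pagination against the schema above, written as a Go string constant so it could be dropped into a test; the query text and values are illustrative only.

// Illustrative only; values are made up.
const exampleGlobalSearchQuery = `
{
  GlobalSearch(
    query: "repo",
    filter: {Os: ["linux"]},
    requestedPage: {limit: 2, offset: 0, sortBy: UPDATE_TIME}
  ) {
    Page { ObjectCount NextPage }
    Repos { Name LastUpdated NewestImage { Tag } }
  }
}
`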
type Query { type Query {
""" """
Returns a CVE list for the image specified in the argument Returns a CVE list for the image specified in the argument
@ -1039,7 +1129,7 @@ type Query {
""" """
Returns a list of repos with the newest tag within Returns a list of repos with the newest tag within
""" """
RepoListWithNewestImage: [RepoSummary!]! # Newest based on created timestamp RepoListWithNewestImage(requestedPage: PageInput): [RepoSummary!]! # Newest based on created timestamp
""" """
Returns all the images from the specified repo Returns all the images from the specified repo
@ -1054,7 +1144,7 @@ type Query {
""" """
Searches within repos, images, and layers Searches within repos, images, and layers
""" """
GlobalSearch(query: String!): GlobalSearchResult! GlobalSearch(query: String!, filter: Filter, requestedPage: PageInput): GlobalSearchResult!
""" """
List of images which use the argument image List of images which use the argument image
@ -1157,6 +1247,24 @@ func (ec *executionContext) field_Query_GlobalSearch_args(ctx context.Context, r
} }
} }
args["query"] = arg0 args["query"] = arg0
var arg1 *Filter
if tmp, ok := rawArgs["filter"]; ok {
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("filter"))
arg1, err = ec.unmarshalOFilter2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐFilter(ctx, tmp)
if err != nil {
return nil, err
}
}
args["filter"] = arg1
var arg2 *PageInput
if tmp, ok := rawArgs["requestedPage"]; ok {
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("requestedPage"))
arg2, err = ec.unmarshalOPageInput2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐPageInput(ctx, tmp)
if err != nil {
return nil, err
}
}
args["requestedPage"] = arg2
return args, nil return args, nil
} }
@ -1277,6 +1385,21 @@ func (ec *executionContext) field_Query_Referrers_args(ctx context.Context, rawA
return args, nil return args, nil
} }
func (ec *executionContext) field_Query_RepoListWithNewestImage_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) {
var err error
args := map[string]interface{}{}
var arg0 *PageInput
if tmp, ok := rawArgs["requestedPage"]; ok {
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("requestedPage"))
arg0, err = ec.unmarshalOPageInput2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐPageInput(ctx, tmp)
if err != nil {
return nil, err
}
}
args["requestedPage"] = arg0
return args, nil
}
func (ec *executionContext) field_Query___type_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { func (ec *executionContext) field_Query___type_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) {
var err error var err error
args := map[string]interface{}{} args := map[string]interface{}{}
@ -1719,6 +1842,57 @@ func (ec *executionContext) fieldContext_CVEResultForImage_CVEList(ctx context.C
return fc, nil return fc, nil
} }
func (ec *executionContext) _GlobalSearchResult_Page(ctx context.Context, field graphql.CollectedField, obj *GlobalSearchResult) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_GlobalSearchResult_Page(ctx, field)
if err != nil {
return graphql.Null
}
ctx = graphql.WithFieldContext(ctx, fc)
defer func() {
if r := recover(); r != nil {
ec.Error(ctx, ec.Recover(ctx, r))
ret = graphql.Null
}
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return obj.Page, nil
})
if err != nil {
ec.Error(ctx, err)
return graphql.Null
}
if resTmp == nil {
return graphql.Null
}
res := resTmp.(*PageInfo)
fc.Result = res
return ec.marshalOPageInfo2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐPageInfo(ctx, field.Selections, res)
}
func (ec *executionContext) fieldContext_GlobalSearchResult_Page(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) {
fc = &graphql.FieldContext{
Object: "GlobalSearchResult",
Field: field,
IsMethod: false,
IsResolver: false,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
switch field.Name {
case "ObjectCount":
return ec.fieldContext_PageInfo_ObjectCount(ctx, field)
case "PreviousPage":
return ec.fieldContext_PageInfo_PreviousPage(ctx, field)
case "NextPage":
return ec.fieldContext_PageInfo_NextPage(ctx, field)
case "Pages":
return ec.fieldContext_PageInfo_Pages(ctx, field)
}
return nil, fmt.Errorf("no field named %q was found under type PageInfo", field.Name)
},
}
return fc, nil
}
func (ec *executionContext) _GlobalSearchResult_Images(ctx context.Context, field graphql.CollectedField, obj *GlobalSearchResult) (ret graphql.Marshaler) { func (ec *executionContext) _GlobalSearchResult_Images(ctx context.Context, field graphql.CollectedField, obj *GlobalSearchResult) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_GlobalSearchResult_Images(ctx, field) fc, err := ec.fieldContext_GlobalSearchResult_Images(ctx, field)
if err != nil { if err != nil {
@ -1860,6 +2034,8 @@ func (ec *executionContext) fieldContext_GlobalSearchResult_Repos(ctx context.Co
return ec.fieldContext_RepoSummary_StarCount(ctx, field) return ec.fieldContext_RepoSummary_StarCount(ctx, field)
case "IsBookmarked": case "IsBookmarked":
return ec.fieldContext_RepoSummary_IsBookmarked(ctx, field) return ec.fieldContext_RepoSummary_IsBookmarked(ctx, field)
case "IsStarred":
return ec.fieldContext_RepoSummary_IsStarred(ctx, field)
} }
return nil, fmt.Errorf("no field named %q was found under type RepoSummary", field.Name) return nil, fmt.Errorf("no field named %q was found under type RepoSummary", field.Name)
}, },
@ -3520,6 +3696,173 @@ func (ec *executionContext) fieldContext_PackageInfo_FixedVersion(ctx context.Co
return fc, nil return fc, nil
} }
func (ec *executionContext) _PageInfo_ObjectCount(ctx context.Context, field graphql.CollectedField, obj *PageInfo) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_PageInfo_ObjectCount(ctx, field)
if err != nil {
return graphql.Null
}
ctx = graphql.WithFieldContext(ctx, fc)
defer func() {
if r := recover(); r != nil {
ec.Error(ctx, ec.Recover(ctx, r))
ret = graphql.Null
}
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return obj.ObjectCount, nil
})
if err != nil {
ec.Error(ctx, err)
return graphql.Null
}
if resTmp == nil {
if !graphql.HasFieldError(ctx, fc) {
ec.Errorf(ctx, "must not be null")
}
return graphql.Null
}
res := resTmp.(int)
fc.Result = res
return ec.marshalNInt2int(ctx, field.Selections, res)
}
func (ec *executionContext) fieldContext_PageInfo_ObjectCount(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) {
fc = &graphql.FieldContext{
Object: "PageInfo",
Field: field,
IsMethod: false,
IsResolver: false,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
return nil, errors.New("field of type Int does not have child fields")
},
}
return fc, nil
}
func (ec *executionContext) _PageInfo_PreviousPage(ctx context.Context, field graphql.CollectedField, obj *PageInfo) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_PageInfo_PreviousPage(ctx, field)
if err != nil {
return graphql.Null
}
ctx = graphql.WithFieldContext(ctx, fc)
defer func() {
if r := recover(); r != nil {
ec.Error(ctx, ec.Recover(ctx, r))
ret = graphql.Null
}
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return obj.PreviousPage, nil
})
if err != nil {
ec.Error(ctx, err)
return graphql.Null
}
if resTmp == nil {
return graphql.Null
}
res := resTmp.(*int)
fc.Result = res
return ec.marshalOInt2ᚖint(ctx, field.Selections, res)
}
func (ec *executionContext) fieldContext_PageInfo_PreviousPage(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) {
fc = &graphql.FieldContext{
Object: "PageInfo",
Field: field,
IsMethod: false,
IsResolver: false,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
return nil, errors.New("field of type Int does not have child fields")
},
}
return fc, nil
}
func (ec *executionContext) _PageInfo_NextPage(ctx context.Context, field graphql.CollectedField, obj *PageInfo) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_PageInfo_NextPage(ctx, field)
if err != nil {
return graphql.Null
}
ctx = graphql.WithFieldContext(ctx, fc)
defer func() {
if r := recover(); r != nil {
ec.Error(ctx, ec.Recover(ctx, r))
ret = graphql.Null
}
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return obj.NextPage, nil
})
if err != nil {
ec.Error(ctx, err)
return graphql.Null
}
if resTmp == nil {
return graphql.Null
}
res := resTmp.(*int)
fc.Result = res
return ec.marshalOInt2ᚖint(ctx, field.Selections, res)
}
func (ec *executionContext) fieldContext_PageInfo_NextPage(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) {
fc = &graphql.FieldContext{
Object: "PageInfo",
Field: field,
IsMethod: false,
IsResolver: false,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
return nil, errors.New("field of type Int does not have child fields")
},
}
return fc, nil
}
func (ec *executionContext) _PageInfo_Pages(ctx context.Context, field graphql.CollectedField, obj *PageInfo) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_PageInfo_Pages(ctx, field)
if err != nil {
return graphql.Null
}
ctx = graphql.WithFieldContext(ctx, fc)
defer func() {
if r := recover(); r != nil {
ec.Error(ctx, ec.Recover(ctx, r))
ret = graphql.Null
}
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return obj.Pages, nil
})
if err != nil {
ec.Error(ctx, err)
return graphql.Null
}
if resTmp == nil {
return graphql.Null
}
res := resTmp.(*int)
fc.Result = res
return ec.marshalOInt2ᚖint(ctx, field.Selections, res)
}
func (ec *executionContext) fieldContext_PageInfo_Pages(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) {
fc = &graphql.FieldContext{
Object: "PageInfo",
Field: field,
IsMethod: false,
IsResolver: false,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
return nil, errors.New("field of type Int does not have child fields")
},
}
return fc, nil
}
func (ec *executionContext) _Query_CVEListForImage(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_Query_CVEListForImage(ctx, field)
if err != nil {
@ -3883,7 +4226,7 @@ func (ec *executionContext) _Query_RepoListWithNewestImage(ctx context.Context,
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return ec.resolvers.Query().RepoListWithNewestImage(rctx)
return ec.resolvers.Query().RepoListWithNewestImage(rctx, fc.Args["requestedPage"].(*PageInput))
})
if err != nil {
ec.Error(ctx, err)
@ -3928,10 +4271,23 @@ func (ec *executionContext) fieldContext_Query_RepoListWithNewestImage(ctx conte
return ec.fieldContext_RepoSummary_StarCount(ctx, field)
case "IsBookmarked":
return ec.fieldContext_RepoSummary_IsBookmarked(ctx, field)
case "IsStarred":
return ec.fieldContext_RepoSummary_IsStarred(ctx, field)
}
return nil, fmt.Errorf("no field named %q was found under type RepoSummary", field.Name)
},
}
defer func() {
if r := recover(); r != nil {
err = ec.Recover(ctx, r)
ec.Error(ctx, err)
}
}()
ctx = graphql.WithFieldContext(ctx, fc)
if fc.Args, err = ec.field_Query_RepoListWithNewestImage_args(ctx, field.ArgumentMap(ec.Variables)); err != nil {
ec.Error(ctx, err)
return
}
return fc, nil
}
@ -4106,7 +4462,7 @@ func (ec *executionContext) _Query_GlobalSearch(ctx context.Context, field graph
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return ec.resolvers.Query().GlobalSearch(rctx, fc.Args["query"].(string))
return ec.resolvers.Query().GlobalSearch(rctx, fc.Args["query"].(string), fc.Args["filter"].(*Filter), fc.Args["requestedPage"].(*PageInput))
})
if err != nil {
ec.Error(ctx, err)
@ -4131,6 +4487,8 @@ func (ec *executionContext) fieldContext_Query_GlobalSearch(ctx context.Context,
IsResolver: true,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
switch field.Name {
case "Page":
return ec.fieldContext_GlobalSearchResult_Page(ctx, field)
case "Images": case "Images":
return ec.fieldContext_GlobalSearchResult_Images(ctx, field) return ec.fieldContext_GlobalSearchResult_Images(ctx, field)
case "Repos": case "Repos":
@ -4994,6 +5352,8 @@ func (ec *executionContext) fieldContext_RepoInfo_Summary(ctx context.Context, f
return ec.fieldContext_RepoSummary_StarCount(ctx, field)
case "IsBookmarked":
return ec.fieldContext_RepoSummary_IsBookmarked(ctx, field)
case "IsStarred":
return ec.fieldContext_RepoSummary_IsStarred(ctx, field)
}
return nil, fmt.Errorf("no field named %q was found under type RepoSummary", field.Name)
},
@ -5461,6 +5821,47 @@ func (ec *executionContext) fieldContext_RepoSummary_IsBookmarked(ctx context.Co
return fc, nil
}
func (ec *executionContext) _RepoSummary_IsStarred(ctx context.Context, field graphql.CollectedField, obj *RepoSummary) (ret graphql.Marshaler) {
fc, err := ec.fieldContext_RepoSummary_IsStarred(ctx, field)
if err != nil {
return graphql.Null
}
ctx = graphql.WithFieldContext(ctx, fc)
defer func() {
if r := recover(); r != nil {
ec.Error(ctx, ec.Recover(ctx, r))
ret = graphql.Null
}
}()
resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) {
ctx = rctx // use context from middleware stack in children
return obj.IsStarred, nil
})
if err != nil {
ec.Error(ctx, err)
return graphql.Null
}
if resTmp == nil {
return graphql.Null
}
res := resTmp.(*bool)
fc.Result = res
return ec.marshalOBoolean2ᚖbool(ctx, field.Selections, res)
}
func (ec *executionContext) fieldContext_RepoSummary_IsStarred(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) {
fc = &graphql.FieldContext{
Object: "RepoSummary",
Field: field,
IsMethod: false,
IsResolver: false,
Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) {
return nil, errors.New("field of type Boolean does not have child fields")
},
}
return fc, nil
}
func (ec *executionContext) ___Directive_name(ctx context.Context, field graphql.CollectedField, obj *introspection.Directive) (ret graphql.Marshaler) {
fc, err := ec.fieldContext___Directive_name(ctx, field)
if err != nil {
@ -7234,6 +7635,94 @@ func (ec *executionContext) fieldContext___Type_specifiedByURL(ctx context.Conte
// region **************************** input.gotpl *****************************
func (ec *executionContext) unmarshalInputFilter(ctx context.Context, obj interface{}) (Filter, error) {
var it Filter
asMap := map[string]interface{}{}
for k, v := range obj.(map[string]interface{}) {
asMap[k] = v
}
fieldsInOrder := [...]string{"Os", "Arch", "HasToBeSigned"}
for _, k := range fieldsInOrder {
v, ok := asMap[k]
if !ok {
continue
}
switch k {
case "Os":
var err error
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("Os"))
it.Os, err = ec.unmarshalOString2ᚕᚖstring(ctx, v)
if err != nil {
return it, err
}
case "Arch":
var err error
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("Arch"))
it.Arch, err = ec.unmarshalOString2ᚕᚖstring(ctx, v)
if err != nil {
return it, err
}
case "HasToBeSigned":
var err error
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("HasToBeSigned"))
it.HasToBeSigned, err = ec.unmarshalOBoolean2ᚖbool(ctx, v)
if err != nil {
return it, err
}
}
}
return it, nil
}
func (ec *executionContext) unmarshalInputPageInput(ctx context.Context, obj interface{}) (PageInput, error) {
var it PageInput
asMap := map[string]interface{}{}
for k, v := range obj.(map[string]interface{}) {
asMap[k] = v
}
fieldsInOrder := [...]string{"limit", "offset", "sortBy"}
for _, k := range fieldsInOrder {
v, ok := asMap[k]
if !ok {
continue
}
switch k {
case "limit":
var err error
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("limit"))
it.Limit, err = ec.unmarshalOInt2ᚖint(ctx, v)
if err != nil {
return it, err
}
case "offset":
var err error
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("offset"))
it.Offset, err = ec.unmarshalOInt2ᚖint(ctx, v)
if err != nil {
return it, err
}
case "sortBy":
var err error
ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("sortBy"))
it.SortBy, err = ec.unmarshalOSortCriteria2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐSortCriteria(ctx, v)
if err != nil {
return it, err
}
}
}
return it, nil
}
// endregion **************************** input.gotpl *****************************
// region ************************** interface.gotpl ***************************
@ -7351,6 +7840,10 @@ func (ec *executionContext) _GlobalSearchResult(ctx context.Context, sel ast.Sel
switch field.Name {
case "__typename":
out.Values[i] = graphql.MarshalString("GlobalSearchResult")
case "Page":
out.Values[i] = ec._GlobalSearchResult_Page(ctx, field, obj)
case "Images": case "Images":
out.Values[i] = ec._GlobalSearchResult_Images(ctx, field, obj) out.Values[i] = ec._GlobalSearchResult_Images(ctx, field, obj)
@ -7673,6 +8166,46 @@ func (ec *executionContext) _PackageInfo(ctx context.Context, sel ast.SelectionS
return out
}
var pageInfoImplementors = []string{"PageInfo"}
func (ec *executionContext) _PageInfo(ctx context.Context, sel ast.SelectionSet, obj *PageInfo) graphql.Marshaler {
fields := graphql.CollectFields(ec.OperationContext, sel, pageInfoImplementors)
out := graphql.NewFieldSet(fields)
var invalids uint32
for i, field := range fields {
switch field.Name {
case "__typename":
out.Values[i] = graphql.MarshalString("PageInfo")
case "ObjectCount":
out.Values[i] = ec._PageInfo_ObjectCount(ctx, field, obj)
if out.Values[i] == graphql.Null {
invalids++
}
case "PreviousPage":
out.Values[i] = ec._PageInfo_PreviousPage(ctx, field, obj)
case "NextPage":
out.Values[i] = ec._PageInfo_NextPage(ctx, field, obj)
case "Pages":
out.Values[i] = ec._PageInfo_Pages(ctx, field, obj)
default:
panic("unknown field " + strconv.Quote(field.Name))
}
}
out.Dispatch()
if invalids > 0 {
return graphql.Null
}
return out
}
var queryImplementors = []string{"Query"}
func (ec *executionContext) _Query(ctx context.Context, sel ast.SelectionSet) graphql.Marshaler {
@ -8093,6 +8626,10 @@ func (ec *executionContext) _RepoSummary(ctx context.Context, sel ast.SelectionS
out.Values[i] = ec._RepoSummary_IsBookmarked(ctx, field, obj)
case "IsStarred":
out.Values[i] = ec._RepoSummary_IsStarred(ctx, field, obj)
default:
panic("unknown field " + strconv.Quote(field.Name))
}
@ -8513,6 +9050,21 @@ func (ec *executionContext) marshalNImageSummary2ᚖzotregistryᚗioᚋzotᚋpkg
return ec._ImageSummary(ctx, sel, v)
}
func (ec *executionContext) unmarshalNInt2int(ctx context.Context, v interface{}) (int, error) {
res, err := graphql.UnmarshalInt(v)
return res, graphql.ErrorOnPath(ctx, err)
}
func (ec *executionContext) marshalNInt2int(ctx context.Context, sel ast.SelectionSet, v int) graphql.Marshaler {
res := graphql.MarshalInt(v)
if res == graphql.Null {
if !graphql.HasFieldError(ctx, graphql.GetFieldContext(ctx)) {
ec.Errorf(ctx, "the requested element is null which the schema does not allow")
}
}
return res
}
func (ec *executionContext) marshalNReferrer2ᚕᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐReferrer(ctx context.Context, sel ast.SelectionSet, v []*Referrer) graphql.Marshaler {
ret := make(graphql.Array, len(v))
var wg sync.WaitGroup
@ -8968,6 +9520,14 @@ func (ec *executionContext) marshalOCVE2ᚖzotregistryᚗioᚋzotᚋpkgᚋextens
return ec._CVE(ctx, sel, v)
}
func (ec *executionContext) unmarshalOFilter2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐFilter(ctx context.Context, v interface{}) (*Filter, error) {
if v == nil {
return nil, nil
}
res, err := ec.unmarshalInputFilter(ctx, v)
return &res, graphql.ErrorOnPath(ctx, err)
}
func (ec *executionContext) marshalOHistoryDescription2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐHistoryDescription(ctx context.Context, sel ast.SelectionSet, v *HistoryDescription) graphql.Marshaler {
if v == nil {
return graphql.Null
@ -9285,6 +9845,21 @@ func (ec *executionContext) marshalOPackageInfo2ᚖzotregistryᚗioᚋzotᚋpkg
return ec._PackageInfo(ctx, sel, v)
}
func (ec *executionContext) marshalOPageInfo2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐPageInfo(ctx context.Context, sel ast.SelectionSet, v *PageInfo) graphql.Marshaler {
if v == nil {
return graphql.Null
}
return ec._PageInfo(ctx, sel, v)
}
func (ec *executionContext) unmarshalOPageInput2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐPageInput(ctx context.Context, v interface{}) (*PageInput, error) {
if v == nil {
return nil, nil
}
res, err := ec.unmarshalInputPageInput(ctx, v)
return &res, graphql.ErrorOnPath(ctx, err)
}
func (ec *executionContext) marshalOReferrer2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐReferrer(ctx context.Context, sel ast.SelectionSet, v *Referrer) graphql.Marshaler {
if v == nil {
return graphql.Null
@ -9340,6 +9915,22 @@ func (ec *executionContext) marshalORepoSummary2ᚖzotregistryᚗioᚋzotᚋpkg
return ec._RepoSummary(ctx, sel, v)
}
func (ec *executionContext) unmarshalOSortCriteria2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐSortCriteria(ctx context.Context, v interface{}) (*SortCriteria, error) {
if v == nil {
return nil, nil
}
var res = new(SortCriteria)
err := res.UnmarshalGQL(v)
return res, graphql.ErrorOnPath(ctx, err)
}
func (ec *executionContext) marshalOSortCriteria2ᚖzotregistryᚗioᚋzotᚋpkgᚋextensionsᚋsearchᚋgql_generatedᚐSortCriteria(ctx context.Context, sel ast.SelectionSet, v *SortCriteria) graphql.Marshaler {
if v == nil {
return graphql.Null
}
return v
}
func (ec *executionContext) unmarshalOString2ᚕᚖstring(ctx context.Context, v interface{}) ([]*string, error) {
if v == nil {
return nil, nil


@ -3,6 +3,9 @@
package gql_generated
import (
"fmt"
"io"
"strconv"
"time" "time"
) )
@ -26,8 +29,15 @@ type CVEResultForImage struct {
CVEList []*Cve `json:"CVEList"`
}
type Filter struct {
Os []*string `json:"Os"`
Arch []*string `json:"Arch"`
HasToBeSigned *bool `json:"HasToBeSigned"`
}
// Search everything. Can search Images, Repos and Layers
type GlobalSearchResult struct {
Page *PageInfo `json:"Page"`
Images []*ImageSummary `json:"Images"`
Repos []*RepoSummary `json:"Repos"`
Layers []*LayerSummary `json:"Layers"`
@ -100,6 +110,19 @@ type PackageInfo struct {
FixedVersion *string `json:"FixedVersion"`
}
type PageInfo struct {
ObjectCount int `json:"ObjectCount"`
PreviousPage *int `json:"PreviousPage"`
NextPage *int `json:"NextPage"`
Pages *int `json:"Pages"`
}
type PageInput struct {
Limit *int `json:"limit"`
Offset *int `json:"offset"`
SortBy *SortCriteria `json:"sortBy"`
}
type Referrer struct {
MediaType *string `json:"MediaType"`
ArtifactType *string `json:"ArtifactType"`
@ -126,4 +149,54 @@ type RepoSummary struct {
DownloadCount *int `json:"DownloadCount"`
StarCount *int `json:"StarCount"`
IsBookmarked *bool `json:"IsBookmarked"`
IsStarred *bool `json:"IsStarred"`
}
type SortCriteria string
const (
SortCriteriaRelevance SortCriteria = "RELEVANCE"
SortCriteriaUpdateTime SortCriteria = "UPDATE_TIME"
SortCriteriaAlphabeticAsc SortCriteria = "ALPHABETIC_ASC"
SortCriteriaAlphabeticDsc SortCriteria = "ALPHABETIC_DSC"
SortCriteriaStars SortCriteria = "STARS"
SortCriteriaDownloads SortCriteria = "DOWNLOADS"
)
var AllSortCriteria = []SortCriteria{
SortCriteriaRelevance,
SortCriteriaUpdateTime,
SortCriteriaAlphabeticAsc,
SortCriteriaAlphabeticDsc,
SortCriteriaStars,
SortCriteriaDownloads,
}
func (e SortCriteria) IsValid() bool {
switch e {
case SortCriteriaRelevance, SortCriteriaUpdateTime, SortCriteriaAlphabeticAsc, SortCriteriaAlphabeticDsc, SortCriteriaStars, SortCriteriaDownloads:
return true
}
return false
}
func (e SortCriteria) String() string {
return string(e)
}
func (e *SortCriteria) UnmarshalGQL(v interface{}) error {
str, ok := v.(string)
if !ok {
return fmt.Errorf("enums must be strings")
}
*e = SortCriteria(str)
if !e.IsValid() {
return fmt.Errorf("%s is not a valid SortCriteria", str)
}
return nil
}
func (e SortCriteria) MarshalGQL(w io.Writer) {
fmt.Fprint(w, strconv.Quote(e.String()))
}

File diff suppressed because it is too large

File diff suppressed because it is too large

@ -42,6 +42,7 @@ type RepoInfo {
Search everything. Can search Images, Repos and Layers
"""
type GlobalSearchResult {
Page: PageInfo
Images: [ImageSummary]
Repos: [RepoSummary]
Layers: [LayerSummary]
@ -66,7 +67,7 @@ type ImageSummary {
DownloadCount: Int
Layers: [LayerSummary]
Description: String
Licenses: String
Licenses: String # The value of the annotation if present, 'unknown' otherwise.
Labels: String
Title: String
Source: String
@ -92,10 +93,11 @@ type RepoSummary {
Platforms: [OsArch]
Vendors: [String]
Score: Int
NewestImage: ImageSummary
NewestImage: ImageSummary # Newest based on created timestamp
DownloadCount: Int
StarCount: Int
IsBookmarked: Boolean
IsStarred: Boolean
}
# Currently the same as LayerInfo, we can refactor later
@ -155,6 +157,35 @@ type OsArch {
Arch: String
}
enum SortCriteria {
RELEVANCE
UPDATE_TIME
ALPHABETIC_ASC
ALPHABETIC_DSC
STARS
DOWNLOADS
}
type PageInfo {
ObjectCount: Int!
PreviousPage: Int
NextPage: Int
Pages: Int
}
# Pagination parameters
input PageInput {
limit: Int
offset: Int
sortBy: SortCriteria
}
input Filter {
Os: [String]
Arch: [String]
HasToBeSigned: Boolean
}
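
For reference, a minimal client sketch (not part of this diff) that exercises the Filter and PageInput inputs defined above through the GraphQL search endpoint. The host, port, and mount path of the search extension are assumptions and will differ per deployment; the repo fields selected (Name, LastUpdated) come from RepoSummary as used elsewhere in this change.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // GraphQL document exercising the new filter and requestedPage arguments.
    gqlQuery := `{
      GlobalSearch(query: "alpine", filter: {Os: ["linux"], Arch: ["amd64"]},
                   requestedPage: {limit: 10, offset: 0, sortBy: RELEVANCE}) {
        Page { ObjectCount NextPage }
        Repos { Name LastUpdated }
      }
    }`

    payload, err := json.Marshal(map[string]string{"query": gqlQuery})
    if err != nil {
        panic(err)
    }

    // NOTE: assumed endpoint; point it at wherever the zot search extension is served.
    resp, err := http.Post("http://localhost:8080/v2/_zot/ext/search",
        "application/json", bytes.NewReader(payload))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}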
type Query {
"""
Returns a CVE list for the image specified in the argument
@ -179,7 +210,7 @@ type Query {
""" """
Returns a list of repos with the newest tag within Returns a list of repos with the newest tag within
""" """
RepoListWithNewestImage: [RepoSummary!]! # Newest based on created timestamp RepoListWithNewestImage(requestedPage: PageInput): [RepoSummary!]! # Newest based on created timestamp
""" """
Returns all the images from the specified repo Returns all the images from the specified repo
@ -194,7 +225,7 @@ type Query {
""" """
Searches within repos, images, and layers Searches within repos, images, and layers
""" """
GlobalSearch(query: String!): GlobalSearchResult! GlobalSearch(query: String!, filter: Filter, requestedPage: PageInput): GlobalSearchResult!
""" """
List of images which use the argument image List of images which use the argument image


@ -6,26 +6,26 @@ package search
import (
"context"
"fmt"
"github.com/vektah/gqlparser/v2/gqlerror" "github.com/vektah/gqlparser/v2/gqlerror"
"zotregistry.io/zot/pkg/extensions/search/common" "zotregistry.io/zot/pkg/extensions/search/common"
"zotregistry.io/zot/pkg/extensions/search/convert"
"zotregistry.io/zot/pkg/extensions/search/gql_generated" "zotregistry.io/zot/pkg/extensions/search/gql_generated"
) )
// CVEListForImage is the resolver for the CVEListForImage field. // CVEListForImage is the resolver for the CVEListForImage field.
func (r *queryResolver) CVEListForImage(ctx context.Context, image string) (*gql_generated.CVEResultForImage, error) { func (r *queryResolver) CVEListForImage(ctx context.Context, image string) (*gql_generated.CVEResultForImage, error) {
cveidMap, err := r.cveInfo.GetCVEListForImage(image)
if err != nil {
return &gql_generated.CVEResultForImage{}, err
}
_, copyImgTag := common.GetImageDirAndTag(image)
if copyImgTag == "" {
return &gql_generated.CVEResultForImage{}, gqlerror.Errorf("no reference provided")
}
cveidMap, err := r.cveInfo.GetCVEListForImage(image)
if err != nil {
return &gql_generated.CVEResultForImage{}, err
}
cveids := []*gql_generated.Cve{}
for id, cveDetail := range cveidMap {
@ -95,7 +95,13 @@ func (r *queryResolver) ImageListForCve(ctx context.Context, id string) ([]*gql_
}
isSigned := olu.CheckManifestSignature(repo, imageByCVE.Digest)
imageInfo := BuildImageInfo(repo, imageByCVE.Tag, imageByCVE.Digest, imageByCVE.Manifest, imageConfig, isSigned)
imageInfo := convert.BuildImageInfo(
repo, imageByCVE.Tag,
imageByCVE.Digest,
imageByCVE.Manifest,
imageConfig,
isSigned,
)
affectedImages = append(
affectedImages,
@ -135,7 +141,7 @@ func (r *queryResolver) ImageListWithCVEFixed(ctx context.Context, id string, im
}
isSigned := olu.CheckManifestSignature(image, digest)
imageInfo := BuildImageInfo(image, tag.Name, digest, manifest, imageConfig, isSigned)
imageInfo := convert.BuildImageInfo(image, tag.Name, digest, manifest, imageConfig, isSigned)
unaffectedImages = append(unaffectedImages, imageInfo)
}
@ -192,41 +198,12 @@ func (r *queryResolver) ImageListForDigest(ctx context.Context, id string) ([]*g
}
// RepoListWithNewestImage is the resolver for the RepoListWithNewestImage field.
func (r *queryResolver) RepoListWithNewestImage(ctx context.Context) ([]*gql_generated.RepoSummary, error) {
func (r *queryResolver) RepoListWithNewestImage(ctx context.Context, requestedPage *gql_generated.PageInput) ([]*gql_generated.RepoSummary, error) {
r.log.Info().Msg("extension api: finding image list")
olu := common.NewBaseOciLayoutUtils(r.storeController, r.log)
reposSummary, err := repoListWithNewestImage(ctx, r.cveInfo, r.log, requestedPage, r.repoDB)
reposSummary := make([]*gql_generated.RepoSummary, 0)
repoList := []string{}
defaultRepoList, err := r.storeController.DefaultStore.GetRepositories()
if err != nil {
r.log.Error().Err(err).Msg("extension api: error extracting default store repo list")
r.log.Error().Err(err).Msg("unable to retrieve repo list")
return reposSummary, err
}
if len(defaultRepoList) > 0 {
repoList = append(repoList, defaultRepoList...)
}
subStore := r.storeController.SubStore
for _, store := range subStore {
subRepoList, err := store.GetRepositories()
if err != nil {
r.log.Error().Err(err).Msg("extension api: error extracting substore repo list")
return reposSummary, err
}
repoList = append(repoList, subRepoList...)
}
reposSummary, err = repoListWithNewestImage(ctx, repoList, olu, r.cveInfo, r.log)
if err != nil {
r.log.Error().Err(err).Msg("extension api: error extracting substore image list")
return reposSummary, err
}
@ -273,137 +250,27 @@ func (r *queryResolver) ImageList(ctx context.Context, repo string) ([]*gql_gene
// ExpandedRepoInfo is the resolver for the ExpandedRepoInfo field.
func (r *queryResolver) ExpandedRepoInfo(ctx context.Context, repo string) (*gql_generated.RepoInfo, error) {
olu := common.NewBaseOciLayoutUtils(r.storeController, r.log)
origRepoInfo, err := olu.GetExpandedRepoInfo(repo)
repoInfo, err := expandedRepoInfo(ctx, repo, r.repoDB, r.cveInfo, r.log)
return repoInfo, err
if err != nil {
r.log.Error().Err(err).Msgf("error getting repo '%s'", repo)
return &gql_generated.RepoInfo{}, err
}
// repos type is of common deep copy this to search
repoInfo := &gql_generated.RepoInfo{}
images := make([]*gql_generated.ImageSummary, 0)
summary := &gql_generated.RepoSummary{}
summary.LastUpdated = &origRepoInfo.Summary.LastUpdated
summary.Name = &origRepoInfo.Summary.Name
summary.Platforms = []*gql_generated.OsArch{}
summary.NewestImage = &gql_generated.ImageSummary{
RepoName: &origRepoInfo.Summary.NewestImage.RepoName,
Tag: &origRepoInfo.Summary.NewestImage.Tag,
LastUpdated: &origRepoInfo.Summary.NewestImage.LastUpdated,
Digest: &origRepoInfo.Summary.NewestImage.Digest,
ConfigDigest: &origRepoInfo.Summary.NewestImage.ConfigDigest,
IsSigned: &origRepoInfo.Summary.NewestImage.IsSigned,
Size: &origRepoInfo.Summary.NewestImage.Size,
Platform: &gql_generated.OsArch{
Os: &origRepoInfo.Summary.NewestImage.Platform.Os,
Arch: &origRepoInfo.Summary.NewestImage.Platform.Arch,
},
Vendor: &origRepoInfo.Summary.NewestImage.Vendor,
Score: &origRepoInfo.Summary.NewestImage.Score,
Description: &origRepoInfo.Summary.NewestImage.Description,
Title: &origRepoInfo.Summary.NewestImage.Title,
Documentation: &origRepoInfo.Summary.NewestImage.Documentation,
Licenses: &origRepoInfo.Summary.NewestImage.Licenses,
Labels: &origRepoInfo.Summary.NewestImage.Labels,
Source: &origRepoInfo.Summary.NewestImage.Source,
}
for _, platform := range origRepoInfo.Summary.Platforms {
platform := platform
summary.Platforms = append(summary.Platforms, &gql_generated.OsArch{
Os: &platform.Os,
Arch: &platform.Arch,
})
}
summary.Size = &origRepoInfo.Summary.Size
for _, vendor := range origRepoInfo.Summary.Vendors {
vendor := vendor
summary.Vendors = append(summary.Vendors, &vendor)
}
score := -1 // score not relevant for this query
summary.Score = &score
for _, image := range origRepoInfo.ImageSummaries {
tag := image.Tag
digest := image.Digest
configDigest := image.ConfigDigest
isSigned := image.IsSigned
size := image.Size
imageSummary := &gql_generated.ImageSummary{
Tag: &tag,
Digest: &digest,
ConfigDigest: &configDigest,
IsSigned: &isSigned,
RepoName: &repo,
}
layers := make([]*gql_generated.LayerSummary, 0)
for _, l := range image.Layers {
size := l.Size
digest := l.Digest
layerInfo := &gql_generated.LayerSummary{Digest: &digest, Size: &size}
layers = append(layers, layerInfo)
}
imageSummary.Layers = layers
imageSummary.Size = &size
images = append(images, imageSummary)
}
repoInfo.Summary = summary
repoInfo.Images = images
return repoInfo, nil
}
// GlobalSearch is the resolver for the GlobalSearch field.
func (r *queryResolver) GlobalSearch(ctx context.Context, query string) (*gql_generated.GlobalSearchResult, error) {
func (r *queryResolver) GlobalSearch(ctx context.Context, query string, filter *gql_generated.Filter, requestedPage *gql_generated.PageInput) (*gql_generated.GlobalSearchResult, error) {
query = cleanQuerry(query)
if err := validateGlobalSearchInput(query, filter, requestedPage); err != nil {
defaultStore := r.storeController.DefaultStore
olu := common.NewBaseOciLayoutUtils(r.storeController, r.log)
var name, tag string
_, err := fmt.Sscanf(query, "%s %s", &name, &tag)
if err != nil {
name = query
}
repoList, err := defaultStore.GetRepositories()
if err != nil {
r.log.Error().Err(err).Msg("unable to search repositories")
return &gql_generated.GlobalSearchResult{}, err
}
availableRepos, err := userAvailableRepos(ctx, repoList)
if err != nil {
r.log.Error().Err(err).Msg("unable to filter user available repositories")
return &gql_generated.GlobalSearchResult{}, err
}
repos, images, layers := globalSearch(availableRepos, name, tag, olu, r.cveInfo, r.log)
query = cleanQuery(query)
filter = cleanFilter(filter)
repos, images, layers, err := globalSearch(ctx, query, r.repoDB, filter, requestedPage, r.cveInfo, r.log)
return &gql_generated.GlobalSearchResult{
Images: images,
Repos: repos,
Layers: layers,
}, nil
}, err
}
// DependencyListForImage is the resolver for the DependencyListForImage field.
@ -563,23 +430,12 @@ func (r *queryResolver) BaseImageList(ctx context.Context, image string) ([]*gql
// Image is the resolver for the Image field.
func (r *queryResolver) Image(ctx context.Context, image string) (*gql_generated.ImageSummary, error) {
repo, tag := common.GetImageDirAndTag(image)
layoutUtils := common.NewBaseOciLayoutUtils(r.storeController, r.log)
if tag == "" { if tag == "" {
return &gql_generated.ImageSummary{}, gqlerror.Errorf("no reference provided") return &gql_generated.ImageSummary{}, gqlerror.Errorf("no reference provided")
} }
digest, manifest, imageConfig, err := extractImageDetails(ctx, layoutUtils, repo, tag, r.log) return getImageSummary(ctx, repo, tag, r.repoDB, r.cveInfo, r.log)
if err != nil {
r.log.Error().Err(err).Msg("unable to get image details")
return nil, err
}
isSigned := layoutUtils.CheckManifestSignature(repo, digest)
result := BuildImageInfo(repo, tag, digest, *manifest, *imageConfig, isSigned)
return result, nil
}
// Referrers is the resolver for the Referrers field.


@ -0,0 +1,893 @@
package bolt
import (
"context"
"encoding/json"
"os"
"path"
"strings"
"time"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/rs/zerolog"
bolt "go.etcd.io/bbolt"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
"zotregistry.io/zot/pkg/meta/repodb/common"
"zotregistry.io/zot/pkg/meta/repodb/version"
localCtx "zotregistry.io/zot/pkg/requestcontext"
)
type DBParameters struct {
RootDir string
}
type DBWrapper struct {
DB *bolt.DB
Patches []func(DB *bolt.DB) error
Log log.Logger
}
func NewBoltDBWrapper(params DBParameters) (*DBWrapper, error) {
const perms = 0o600
boltDB, err := bolt.Open(path.Join(params.RootDir, "repo.db"), perms, &bolt.Options{Timeout: time.Second * 10})
if err != nil {
return nil, err
}
err = boltDB.Update(func(transaction *bolt.Tx) error {
versionBuck, err := transaction.CreateBucketIfNotExists([]byte(repodb.VersionBucket))
if err != nil {
return err
}
err = versionBuck.Put([]byte(version.DBVersionKey), []byte(version.CurrentVersion))
if err != nil {
return err
}
_, err = transaction.CreateBucketIfNotExists([]byte(repodb.ManifestDataBucket))
if err != nil {
return err
}
_, err = transaction.CreateBucketIfNotExists([]byte(repodb.RepoMetadataBucket))
if err != nil {
return err
}
return nil
})
if err != nil {
return nil, err
}
return &DBWrapper{
DB: boltDB,
Patches: version.GetBoltDBPatches(),
Log: log.Logger{Logger: zerolog.New(os.Stdout)},
}, nil
}
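
As an illustration of how this wrapper is meant to be driven (not part of the change itself), a standalone sketch that records a tag and reads back the stored metadata; the repo name, tag, and digest below are placeholders.

package main

import (
    "fmt"
    "os"

    godigest "github.com/opencontainers/go-digest"
    ispec "github.com/opencontainers/image-spec/specs-go/v1"

    bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
)

func main() {
    // Create a scratch directory; NewBoltDBWrapper opens/creates repo.db inside it.
    rootDir, err := os.MkdirTemp("", "zot-repodb")
    if err != nil {
        panic(err)
    }
    defer os.RemoveAll(rootDir)

    repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{RootDir: rootDir})
    if err != nil {
        panic(err)
    }

    // Placeholder digest standing in for a real manifest digest.
    digest := godigest.FromString("example-manifest")

    // Map repo1:0.0.1 to the digest, then bump its download counter.
    if err := repoDB.SetRepoTag("repo1", "0.0.1", digest, ispec.MediaTypeImageManifest); err != nil {
        panic(err)
    }
    if err := repoDB.IncrementImageDownloads("repo1", "0.0.1"); err != nil {
        panic(err)
    }

    repoMeta, err := repoDB.GetRepoMeta("repo1")
    if err != nil {
        panic(err)
    }
    fmt.Println(repoMeta.Tags["0.0.1"].Digest, repoMeta.Statistics[digest.String()].DownloadCount)
}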
func (bdw DBWrapper) SetManifestData(manifestDigest godigest.Digest, manifestData repodb.ManifestData) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.ManifestDataBucket))
mdBlob, err := json.Marshal(manifestData)
if err != nil {
return errors.Wrapf(err, "repodb: error while calculating blob for manifest with digest %s", manifestDigest)
}
err = buck.Put([]byte(manifestDigest), mdBlob)
if err != nil {
return errors.Wrapf(err, "repodb: error while setting manifest data with for digest %s", manifestDigest)
}
return nil
})
return err
}
func (bdw DBWrapper) GetManifestData(manifestDigest godigest.Digest) (repodb.ManifestData, error) {
var manifestData repodb.ManifestData
err := bdw.DB.View(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.ManifestDataBucket))
mdBlob := buck.Get([]byte(manifestDigest))
if len(mdBlob) == 0 {
return zerr.ErrManifestDataNotFound
}
err := json.Unmarshal(mdBlob, &manifestData)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmashaling manifest meta for digest %s", manifestDigest)
}
return nil
})
return manifestData, err
}
func (bdw DBWrapper) SetManifestMeta(repo string, manifestDigest godigest.Digest, manifestMeta repodb.ManifestMetadata,
) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMeta := repodb.RepoMetadata{
Name: repo,
Tags: map[string]repodb.Descriptor{},
Statistics: map[string]repodb.DescriptorStatistics{},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob := repoBuck.Get([]byte(repo))
if len(repoMetaBlob) > 0 {
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
}
mdBlob, err := json.Marshal(repodb.ManifestData{
ManifestBlob: manifestMeta.ManifestBlob,
ConfigBlob: manifestMeta.ConfigBlob,
})
if err != nil {
return errors.Wrapf(err, "repodb: error while calculating blob for manifest with digest %s", manifestDigest)
}
err = dataBuck.Put([]byte(manifestDigest), mdBlob)
if err != nil {
return errors.Wrapf(err, "repodb: error while setting manifest meta with for digest %s", manifestDigest)
}
updatedRepoMeta := common.UpdateManifestMeta(repoMeta, manifestDigest, manifestMeta)
updatedRepoMetaBlob, err := json.Marshal(updatedRepoMeta)
if err != nil {
return errors.Wrapf(err, "repodb: error while calculating blob for updated repo meta '%s'", repo)
}
return repoBuck.Put([]byte(repo), updatedRepoMetaBlob)
})
return err
}
func (bdw DBWrapper) GetManifestMeta(repo string, manifestDigest godigest.Digest) (repodb.ManifestMetadata, error) {
var manifestMetadata repodb.ManifestMetadata
err := bdw.DB.View(func(tx *bolt.Tx) error {
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
mdBlob := dataBuck.Get([]byte(manifestDigest))
if len(mdBlob) == 0 {
return zerr.ErrManifestMetaNotFound
}
var manifestData repodb.ManifestData
err := json.Unmarshal(mdBlob, &manifestData)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmashaling manifest meta for digest %s", manifestDigest)
}
var repoMeta repodb.RepoMetadata
repoMetaBlob := repoBuck.Get([]byte(repo))
if len(repoMetaBlob) > 0 {
err = json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmashaling manifest meta for digest %s", manifestDigest)
}
}
manifestMetadata.ManifestBlob = manifestData.ManifestBlob
manifestMetadata.ConfigBlob = manifestData.ConfigBlob
manifestMetadata.DownloadCount = repoMeta.Statistics[manifestDigest.String()].DownloadCount
manifestMetadata.Signatures = repodb.ManifestSignatures{}
if repoMeta.Signatures[manifestDigest.String()] != nil {
manifestMetadata.Signatures = repoMeta.Signatures[manifestDigest.String()]
}
return nil
})
return manifestMetadata, err
}
func (bdw DBWrapper) SetRepoTag(repo string, tag string, manifestDigest godigest.Digest,
mediaType string,
) error {
if err := common.ValidateRepoTagInput(repo, tag, manifestDigest); err != nil {
return err
}
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
// object not found
if len(repoMetaBlob) == 0 {
// create a new object
repoMeta := repodb.RepoMetadata{
Name: repo,
Tags: map[string]repodb.Descriptor{
tag: {
Digest: manifestDigest.String(),
MediaType: mediaType,
},
},
Statistics: map[string]repodb.DescriptorStatistics{
manifestDigest.String(): {DownloadCount: 0},
},
Signatures: map[string]repodb.ManifestSignatures{
manifestDigest.String(): {},
},
}
repoMetaBlob, err := json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
}
// object found
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
repoMeta.Tags[tag] = repodb.Descriptor{
Digest: manifestDigest.String(),
MediaType: mediaType,
}
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) GetRepoMeta(repo string) (repodb.RepoMetadata, error) {
var repoMeta repodb.RepoMetadata
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
// object not found
if repoMetaBlob == nil {
return zerr.ErrRepoMetaNotFound
}
// object found
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
return nil
})
return repoMeta, err
}
func (bdw DBWrapper) DeleteRepoTag(repo string, tag string) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
// object not found
if repoMetaBlob == nil {
return nil
}
// object found
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
delete(repoMeta.Tags, tag)
if len(repoMeta.Tags) == 0 {
return buck.Delete([]byte(repo))
}
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) IncrementRepoStars(repo string) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
if repoMetaBlob == nil {
return zerr.ErrRepoMetaNotFound
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
repoMeta.Stars++
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) DecrementRepoStars(repo string) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
if repoMetaBlob == nil {
return zerr.ErrRepoMetaNotFound
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
if repoMeta.Stars > 0 {
repoMeta.Stars--
}
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) GetRepoStars(repo string) (int, error) {
stars := 0
err := bdw.DB.View(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
buck.Get([]byte(repo))
repoMetaBlob := buck.Get([]byte(repo))
if repoMetaBlob == nil {
return zerr.ErrRepoMetaNotFound
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
stars = repoMeta.Stars
return nil
})
return stars, err
}
func (bdw DBWrapper) GetMultipleRepoMeta(ctx context.Context, filter func(repoMeta repodb.RepoMetadata) bool,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, error) {
var (
foundRepos = make([]repodb.RepoMetadata, 0)
pageFinder repodb.PageFinder
)
pageFinder, err := repodb.NewBaseRepoPageFinder(requestedPage.Limit, requestedPage.Offset, requestedPage.SortBy)
if err != nil {
return nil, err
}
err = bdw.DB.View(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
cursor := buck.Cursor()
for repoName, repoMetaBlob := cursor.First(); repoName != nil; repoName, repoMetaBlob = cursor.Next() {
if ok, err := localCtx.RepoIsUserAvailable(ctx, string(repoName)); !ok || err != nil {
continue
}
repoMeta := repodb.RepoMetadata{}
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
if filter(repoMeta) {
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repoMeta,
})
}
}
foundRepos = pageFinder.Page()
return nil
})
return foundRepos, err
}
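
A hypothetical caller of GetMultipleRepoMeta, shown only to make the pagination contract concrete. The PageInput field values are illustrative, SortBy is left at its zero value, and the exact field types of repodb.PageInput are assumed from how it is used elsewhere in this change.

package example

import (
    "context"
    "strings"

    "zotregistry.io/zot/pkg/meta/repodb"
    bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
)

// listProdRepos keeps only repos whose name contains "prod" and asks for the
// first page of at most 10 results.
func listProdRepos(ctx context.Context, repoDB *bolt.DBWrapper) ([]repodb.RepoMetadata, error) {
    return repoDB.GetMultipleRepoMeta(ctx,
        func(repoMeta repodb.RepoMetadata) bool {
            return strings.Contains(repoMeta.Name, "prod")
        },
        repodb.PageInput{Limit: 10, Offset: 0}, // SortBy deliberately left unset
    )
}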
func (bdw DBWrapper) IncrementImageDownloads(repo string, reference string) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
if repoMetaBlob == nil {
return zerr.ErrManifestMetaNotFound
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
manifestDigest := reference
if !common.ReferenceIsDigest(reference) {
// search digest for tag
descriptor, found := repoMeta.Tags[reference]
if !found {
return zerr.ErrManifestMetaNotFound
}
manifestDigest = descriptor.Digest
}
manifestStatistics := repoMeta.Statistics[manifestDigest]
manifestStatistics.DownloadCount++
repoMeta.Statistics[manifestDigest] = manifestStatistics
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) AddManifestSignature(repo string, signedManifestDigest godigest.Digest,
sygMeta repodb.SignatureMetadata,
) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
if repoMetaBlob == nil {
return zerr.ErrManifestMetaNotFound
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
var (
manifestSignatures repodb.ManifestSignatures
found bool
)
if manifestSignatures, found = repoMeta.Signatures[signedManifestDigest.String()]; !found {
manifestSignatures = repodb.ManifestSignatures{}
}
signatureSlice := manifestSignatures[sygMeta.SignatureType]
if !common.SignatureAlreadyExists(signatureSlice, sygMeta) {
if sygMeta.SignatureType == repodb.NotationType {
signatureSlice = append(signatureSlice, repodb.SignatureInfo{
SignatureManifestDigest: sygMeta.SignatureDigest,
LayersInfo: sygMeta.LayersInfo,
})
} else if sygMeta.SignatureType == repodb.CosignType {
signatureSlice = []repodb.SignatureInfo{{
SignatureManifestDigest: sygMeta.SignatureDigest,
LayersInfo: sygMeta.LayersInfo,
}}
}
}
manifestSignatures[sygMeta.SignatureType] = signatureSlice
repoMeta.Signatures[signedManifestDigest.String()] = manifestSignatures
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) DeleteSignature(repo string, signedManifestDigest godigest.Digest,
sigMeta repodb.SignatureMetadata,
) error {
err := bdw.DB.Update(func(tx *bolt.Tx) error {
buck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMetaBlob := buck.Get([]byte(repo))
if repoMetaBlob == nil {
return zerr.ErrManifestMetaNotFound
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
sigType := sigMeta.SignatureType
var (
manifestSignatures repodb.ManifestSignatures
found bool
)
if manifestSignatures, found = repoMeta.Signatures[signedManifestDigest.String()]; !found {
return zerr.ErrManifestMetaNotFound
}
signatureSlice := manifestSignatures[sigType]
newSignatureSlice := make([]repodb.SignatureInfo, 0, len(signatureSlice)-1)
for _, sigDigest := range signatureSlice {
if sigDigest.SignatureManifestDigest != sigMeta.SignatureDigest {
newSignatureSlice = append(newSignatureSlice, sigDigest)
}
}
manifestSignatures[sigType] = newSignatureSlice
repoMeta.Signatures[signedManifestDigest.String()] = manifestSignatures
repoMetaBlob, err = json.Marshal(repoMeta)
if err != nil {
return err
}
return buck.Put([]byte(repo), repoMetaBlob)
})
return err
}
func (bdw DBWrapper) SearchRepos(ctx context.Context, searchText string, filter repodb.Filter,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
var (
foundRepos = make([]repodb.RepoMetadata, 0)
foundManifestMetadataMap = make(map[string]repodb.ManifestMetadata)
pageFinder repodb.PageFinder
)
pageFinder, err := repodb.NewBaseRepoPageFinder(requestedPage.Limit, requestedPage.Offset, requestedPage.SortBy)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
err = bdw.DB.View(func(tx *bolt.Tx) error {
var (
manifestMetadataMap = make(map[string]repodb.ManifestMetadata)
repoBuck = tx.Bucket([]byte(repodb.RepoMetadataBucket))
dataBuck = tx.Bucket([]byte(repodb.ManifestDataBucket))
)
cursor := repoBuck.Cursor()
for repoName, repoMetaBlob := cursor.First(); repoName != nil; repoName, repoMetaBlob = cursor.Next() {
if ok, err := localCtx.RepoIsUserAvailable(ctx, string(repoName)); !ok || err != nil {
continue
}
var repoMeta repodb.RepoMetadata
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
if score := common.ScoreRepoName(searchText, string(repoName)); score != -1 {
var (
// specific values used for sorting that need to be calculated based on all manifests from the repo
repoDownloads = 0
repoLastUpdated time.Time
firstImageChecked = true
osSet = map[string]bool{}
archSet = map[string]bool{}
isSigned = false
)
for _, descriptor := range repoMeta.Tags {
var manifestMeta repodb.ManifestMetadata
manifestMeta, manifestDownloaded := manifestMetadataMap[descriptor.Digest]
if !manifestDownloaded {
manifestMetaBlob := dataBuck.Get([]byte(descriptor.Digest))
if manifestMetaBlob == nil {
return zerr.ErrManifestMetaNotFound
}
err := json.Unmarshal(manifestMetaBlob, &manifestMeta)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmarshaling manifest metadata for digest %s", descriptor.Digest)
}
}
// get fields related to filtering
var configContent ispec.Image
err = json.Unmarshal(manifestMeta.ConfigBlob, &configContent)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmarshaling config content for digest %s", descriptor.Digest)
}
osSet[configContent.OS] = true
archSet[configContent.Architecture] = true
// get fields related to sorting
repoDownloads += repoMeta.Statistics[descriptor.Digest].DownloadCount
imageLastUpdated := common.GetImageLastUpdatedTimestamp(configContent)
if firstImageChecked || repoLastUpdated.Before(imageLastUpdated) {
repoLastUpdated = imageLastUpdated
firstImageChecked = false
isSigned = common.CheckIsSigned(repoMeta.Signatures[descriptor.Digest])
}
manifestMetadataMap[descriptor.Digest] = manifestMeta
}
repoFilterData := repodb.FilterData{
OsList: common.GetMapKeys(osSet),
ArchList: common.GetMapKeys(archSet),
IsSigned: isSigned,
}
if !common.AcceptedByFilter(filter, repoFilterData) {
continue
}
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repoMeta,
Score: score,
Downloads: repoDownloads,
UpdateTime: repoLastUpdated,
})
}
}
foundRepos = pageFinder.Page()
// keep just the manifestMeta we need
for _, repoMeta := range foundRepos {
for _, manifestDigest := range repoMeta.Tags {
foundManifestMetadataMap[manifestDigest.Digest] = manifestMetadataMap[manifestDigest.Digest]
}
}
return nil
})
return foundRepos, foundManifestMetadataMap, err
}
func (bdw DBWrapper) SearchTags(ctx context.Context, searchText string, filter repodb.Filter,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
var (
foundRepos = make([]repodb.RepoMetadata, 0)
foundManifestMetadataMap = make(map[string]repodb.ManifestMetadata)
pageFinder repodb.PageFinder
)
pageFinder, err := repodb.NewBaseImagePageFinder(requestedPage.Limit, requestedPage.Offset, requestedPage.SortBy)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
searchedRepo, searchedTag, err := common.GetRepoTag(searchText)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{},
errors.Wrap(err, "repodb: error while parsing search text, invalid format")
}
err = bdw.DB.View(func(tx *bolt.Tx) error {
var (
manifestMetadataMap = make(map[string]repodb.ManifestMetadata)
repoBuck = tx.Bucket([]byte(repodb.RepoMetadataBucket))
dataBuck = tx.Bucket([]byte(repodb.ManifestDataBucket))
cursor = repoBuck.Cursor()
)
repoName, repoMetaBlob := cursor.Seek([]byte(searchedRepo))
for ; repoName != nil; repoName, repoMetaBlob = cursor.Next() {
if ok, err := localCtx.RepoIsUserAvailable(ctx, string(repoName)); !ok || err != nil {
continue
}
repoMeta := repodb.RepoMetadata{}
err := json.Unmarshal(repoMetaBlob, &repoMeta)
if err != nil {
return err
}
if string(repoName) == searchedRepo {
matchedTags := make(map[string]repodb.Descriptor)
// take all manifestMetas
for tag, descriptor := range repoMeta.Tags {
if !strings.HasPrefix(tag, searchedTag) {
continue
}
matchedTags[tag] = descriptor
// in case tags reference the same manifest we don't download from DB multiple times
if manifestMeta, manifestExists := manifestMetadataMap[descriptor.Digest]; manifestExists {
manifestMetadataMap[descriptor.Digest] = manifestMeta
continue
}
manifestMetaBlob := dataBuck.Get([]byte(descriptor.Digest))
if manifestMetaBlob == nil {
return zerr.ErrManifestMetaNotFound
}
var manifestMeta repodb.ManifestMetadata
err := json.Unmarshal(manifestMetaBlob, &manifestMeta)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmashaling manifest metadata for digest %s", descriptor.Digest)
}
var configContent ispec.Image
err = json.Unmarshal(manifestMeta.ConfigBlob, &configContent)
if err != nil {
return errors.Wrapf(err, "repodb: error while unmashaling manifest metadata for digest %s", descriptor.Digest)
}
imageFilterData := repodb.FilterData{
OsList: []string{configContent.OS},
ArchList: []string{configContent.Architecture},
IsSigned: false,
}
if !common.AcceptedByFilter(filter, imageFilterData) {
delete(matchedTags, tag)
delete(manifestMetadataMap, descriptor.Digest)
continue
}
manifestMetadataMap[descriptor.Digest] = manifestMeta
}
repoMeta.Tags = matchedTags
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repoMeta,
})
}
}
foundRepos = pageFinder.Page()
// keep just the manifestMeta we need
for _, repoMeta := range foundRepos {
for _, descriptor := range repoMeta.Tags {
foundManifestMetadataMap[descriptor.Digest] = manifestMetadataMap[descriptor.Digest]
}
}
return nil
})
return foundRepos, foundManifestMetadataMap, err
}
func (bdw *DBWrapper) PatchDB() error {
var DBVersion string
err := bdw.DB.View(func(tx *bolt.Tx) error {
versionBuck := tx.Bucket([]byte(repodb.VersionBucket))
DBVersion = string(versionBuck.Get([]byte(version.DBVersionKey)))
return nil
})
if err != nil {
return errors.Wrapf(err, "patching the database failed, can't read db version")
}
if version.GetVersionIndex(DBVersion) == -1 {
return errors.New("DB has broken format, no version found")
}
for patchIndex, patch := range bdw.Patches {
if patchIndex < version.GetVersionIndex(DBVersion) {
continue
}
err := patch(bdw.DB)
if err != nil {
return err
}
}
return nil
}


@ -0,0 +1,479 @@
package bolt_test
import (
"context"
"encoding/json"
"os"
"testing"
"github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
. "github.com/smartystreets/goconvey/convey"
"go.etcd.io/bbolt"
"zotregistry.io/zot/pkg/meta/repodb"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
)
func TestWrapperErrors(t *testing.T) {
Convey("Errors", t, func() {
tmpDir := t.TempDir()
boltDBParams := bolt.DBParameters{RootDir: tmpDir}
boltdbWrapper, err := bolt.NewBoltDBWrapper(boltDBParams)
defer os.Remove("repo.db")
So(boltdbWrapper, ShouldNotBeNil)
So(err, ShouldBeNil)
repoMeta := repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err := json.Marshal(repoMeta)
So(err, ShouldBeNil)
Convey("GetManifestData", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
return dataBuck.Put([]byte("digest1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
_, err = boltdbWrapper.GetManifestData("digest1")
So(err, ShouldNotBeNil)
_, err = boltdbWrapper.GetManifestMeta("repo1", "digest1")
So(err, ShouldNotBeNil)
})
Convey("SetManifestMeta", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
err := dataBuck.Put([]byte("digest1"), repoMetaBlob)
if err != nil {
return err
}
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.SetManifestMeta("repo1", "digest1", repodb.ManifestMetadata{})
So(err, ShouldNotBeNil)
_, err = boltdbWrapper.GetManifestMeta("repo1", "digest1")
So(err, ShouldNotBeNil)
})
Convey("SetRepoTag", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.SetRepoTag("repo1", "tag", "digest", ispec.MediaTypeImageManifest)
So(err, ShouldNotBeNil)
})
Convey("DeleteRepoTag", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.DeleteRepoTag("repo1", "tag")
So(err, ShouldNotBeNil)
})
Convey("IncrementRepoStars", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.IncrementRepoStars("repo2")
So(err, ShouldNotBeNil)
err = boltdbWrapper.IncrementRepoStars("repo1")
So(err, ShouldNotBeNil)
})
Convey("DecrementRepoStars", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.DecrementRepoStars("repo2")
So(err, ShouldNotBeNil)
err = boltdbWrapper.DecrementRepoStars("repo1")
So(err, ShouldNotBeNil)
})
Convey("GetRepoStars", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
_, err = boltdbWrapper.GetRepoStars("repo1")
So(err, ShouldNotBeNil)
})
Convey("GetMultipleRepoMeta", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
_, err = boltdbWrapper.GetMultipleRepoMeta(context.TODO(), func(repoMeta repodb.RepoMetadata) bool {
return true
}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("IncrementImageDownloads", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.IncrementImageDownloads("repo2", "tag")
So(err, ShouldNotBeNil)
err = boltdbWrapper.IncrementImageDownloads("repo1", "tag")
So(err, ShouldNotBeNil)
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), repoMetaBlob)
})
So(err, ShouldBeNil)
err = boltdbWrapper.IncrementImageDownloads("repo1", "tag")
So(err, ShouldNotBeNil)
})
Convey("AddManifestSignature", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.AddManifestSignature("repo2", digest.FromString("dig"),
repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.AddManifestSignature("repo1", digest.FromString("dig"),
repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), repoMetaBlob)
})
So(err, ShouldBeNil)
// signatures not found
err = boltdbWrapper.AddManifestSignature("repo1", digest.FromString("dig"),
repodb.SignatureMetadata{})
So(err, ShouldBeNil)
//
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMeta := repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{},
Signatures: map[string]repodb.ManifestSignatures{
"digest1": {
"cosgin": {{}},
},
"digest2": {
"notation": {{}},
},
},
}
repoMetaBlob, err := json.Marshal(repoMeta)
So(err, ShouldBeNil)
return repoBuck.Put([]byte("repo1"), repoMetaBlob)
})
So(err, ShouldBeNil)
err = boltdbWrapper.AddManifestSignature("repo1", digest.FromString("dig"),
repodb.SignatureMetadata{
SignatureType: "cosign",
SignatureDigest: "digest1",
})
So(err, ShouldBeNil)
err = boltdbWrapper.AddManifestSignature("repo1", digest.FromString("dig"),
repodb.SignatureMetadata{
SignatureType: "notation",
SignatureDigest: "digest2",
})
So(err, ShouldBeNil)
})
Convey("DeleteSignature", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
err = boltdbWrapper.DeleteSignature("repo2", digest.FromString("dig"),
repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.DeleteSignature("repo1", digest.FromString("dig"),
repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
repoMeta := repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{},
Signatures: map[string]repodb.ManifestSignatures{
"digest1": {
"cosgin": []repodb.SignatureInfo{
{
SignatureManifestDigest: "sigDigest1",
},
{
SignatureManifestDigest: "sigDigest2",
},
},
},
"digest2": {
"notation": {{}},
},
},
}
repoMetaBlob, err := json.Marshal(repoMeta)
So(err, ShouldBeNil)
return repoBuck.Put([]byte("repo1"), repoMetaBlob)
})
So(err, ShouldBeNil)
err = boltdbWrapper.DeleteSignature("repo1", "digest1",
repodb.SignatureMetadata{
SignatureType: "cosgin",
SignatureDigest: "sigDigest2",
})
So(err, ShouldBeNil)
})
Convey("SearchRepos", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
_, _, err = boltdbWrapper.SearchRepos(context.Background(), "", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
err := dataBuck.Put([]byte("dig1"), []byte("wrong json"))
if err != nil {
return err
}
repoMeta := repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"tag1": {Digest: "dig1", MediaType: ispec.MediaTypeImageManifest},
},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err := json.Marshal(repoMeta)
So(err, ShouldBeNil)
err = repoBuck.Put([]byte("repo1"), repoMetaBlob)
if err != nil {
return err
}
repoMeta = repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"tag2": {Digest: "dig2", MediaType: ispec.MediaTypeImageManifest},
},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err = json.Marshal(repoMeta)
So(err, ShouldBeNil)
return repoBuck.Put([]byte("repo2"), repoMetaBlob)
})
So(err, ShouldBeNil)
_, _, err = boltdbWrapper.SearchRepos(context.Background(), "repo1", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
_, _, err = boltdbWrapper.SearchRepos(context.Background(), "repo2", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
manifestMeta := repodb.ManifestMetadata{
ManifestBlob: []byte("{}"),
ConfigBlob: []byte("wrong json"),
Signatures: repodb.ManifestSignatures{},
}
manifestMetaBlob, err := json.Marshal(manifestMeta)
if err != nil {
return err
}
err = dataBuck.Put([]byte("dig1"), manifestMetaBlob)
if err != nil {
return err
}
repoMeta = repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"tag1": {Digest: "dig1", MediaType: ispec.MediaTypeImageManifest},
},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err = json.Marshal(repoMeta)
So(err, ShouldBeNil)
return repoBuck.Put([]byte("repo1"), repoMetaBlob)
})
So(err, ShouldBeNil)
_, _, err = boltdbWrapper.SearchRepos(context.Background(), "repo1", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchTags", func() {
ctx := context.Background()
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
return repoBuck.Put([]byte("repo1"), []byte("wrong json"))
})
So(err, ShouldBeNil)
_, _, err = boltdbWrapper.SearchTags(ctx, "", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
_, _, err = boltdbWrapper.SearchTags(ctx, "repo1:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
err = boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
repoBuck := tx.Bucket([]byte(repodb.RepoMetadataBucket))
dataBuck := tx.Bucket([]byte(repodb.ManifestDataBucket))
manifestMeta := repodb.ManifestMetadata{
ManifestBlob: []byte("{}"),
ConfigBlob: []byte("wrong json"),
Signatures: repodb.ManifestSignatures{},
}
manifestMetaBlob, err := json.Marshal(manifestMeta)
if err != nil {
return err
}
err = dataBuck.Put([]byte("dig1"), manifestMetaBlob)
if err != nil {
return err
}
err = dataBuck.Put([]byte("wrongManifestData"), []byte("wrong json"))
if err != nil {
return err
}
// manifest data doesn't exist
repoMeta = repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"tag2": {Digest: "dig2", MediaType: ispec.MediaTypeImageManifest},
},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err = json.Marshal(repoMeta)
So(err, ShouldBeNil)
err = repoBuck.Put([]byte("repo1"), repoMetaBlob)
if err != nil {
return err
}
// manifest data is wrong
repoMeta = repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"tag2": {Digest: "wrongManifestData", MediaType: ispec.MediaTypeImageManifest},
},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err = json.Marshal(repoMeta)
So(err, ShouldBeNil)
err = repoBuck.Put([]byte("repo2"), repoMetaBlob)
if err != nil {
return err
}
repoMeta = repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"tag1": {Digest: "dig1", MediaType: ispec.MediaTypeImageManifest},
},
Signatures: map[string]repodb.ManifestSignatures{},
}
repoMetaBlob, err = json.Marshal(repoMeta)
So(err, ShouldBeNil)
return repoBuck.Put([]byte("repo3"), repoMetaBlob)
})
So(err, ShouldBeNil)
_, _, err = boltdbWrapper.SearchTags(ctx, "repo1:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
_, _, err = boltdbWrapper.SearchTags(ctx, "repo2:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
_, _, err = boltdbWrapper.SearchTags(ctx, "repo3:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
})
}

pkg/meta/repodb/common.go

@@ -0,0 +1,57 @@
package repodb
import (
"time"
)
// DetailedRepoMeta is an auxiliary structure used for sorting RepoMeta arrays by information
// that is not directly available in the RepoMetadata structure (e.g. values that need to be
// calculated by iterating the manifests).
type DetailedRepoMeta struct {
RepoMeta RepoMetadata
Score int
Downloads int
UpdateTime time.Time
}
func SortFunctions() map[SortCriteria]func(pageBuffer []DetailedRepoMeta) func(i, j int) bool {
return map[SortCriteria]func(pageBuffer []DetailedRepoMeta) func(i, j int) bool{
AlphabeticAsc: SortByAlphabeticAsc,
AlphabeticDsc: SortByAlphabeticDsc,
Relevance: SortByRelevance,
UpdateTime: SortByUpdateTime,
Downloads: SortByDownloads,
}
}
func SortByAlphabeticAsc(pageBuffer []DetailedRepoMeta) func(i, j int) bool {
return func(i, j int) bool {
return pageBuffer[i].RepoMeta.Name < pageBuffer[j].RepoMeta.Name
}
}
func SortByAlphabeticDsc(pageBuffer []DetailedRepoMeta) func(i, j int) bool {
return func(i, j int) bool {
return pageBuffer[i].RepoMeta.Name > pageBuffer[j].RepoMeta.Name
}
}
func SortByRelevance(pageBuffer []DetailedRepoMeta) func(i, j int) bool {
return func(i, j int) bool {
return pageBuffer[i].Score < pageBuffer[j].Score
}
}
// SortByUpdateTime returns a comparison function for sorting in descending order by update time.
func SortByUpdateTime(pageBuffer []DetailedRepoMeta) func(i, j int) bool {
return func(i, j int) bool {
return pageBuffer[i].UpdateTime.After(pageBuffer[j].UpdateTime)
}
}
// SortByDownloads returns a comparison function for sorting in descending order by download count.
func SortByDownloads(pageBuffer []DetailedRepoMeta) func(i, j int) bool {
return func(i, j int) bool {
return pageBuffer[i].Downloads > pageBuffer[j].Downloads
}
}
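These comparator builders are consumed through sort.Slice by the page finders that appear later in this change; for illustration, sorting by download count in descending order looks like:

// pageBuffer is assumed to be a []repodb.DetailedRepoMeta populated elsewhere.
sort.Slice(pageBuffer, repodb.SortFunctions()[repodb.Downloads](pageBuffer))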


@@ -0,0 +1,199 @@
package common
import (
"strings"
"time"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/meta/repodb"
)
func UpdateManifestMeta(repoMeta repodb.RepoMetadata, manifestDigest godigest.Digest,
manifestMeta repodb.ManifestMetadata,
) repodb.RepoMetadata {
updatedRepoMeta := repoMeta
updatedStatistics := repoMeta.Statistics[manifestDigest.String()]
updatedStatistics.DownloadCount = manifestMeta.DownloadCount
updatedRepoMeta.Statistics[manifestDigest.String()] = updatedStatistics
if manifestMeta.Signatures == nil {
manifestMeta.Signatures = repodb.ManifestSignatures{}
}
updatedRepoMeta.Signatures[manifestDigest.String()] = manifestMeta.Signatures
return updatedRepoMeta
}
func SignatureAlreadyExists(signatureSlice []repodb.SignatureInfo, sm repodb.SignatureMetadata) bool {
for _, sigInfo := range signatureSlice {
if sm.SignatureDigest == sigInfo.SignatureManifestDigest {
return true
}
}
return false
}
func ReferenceIsDigest(reference string) bool {
_, err := godigest.Parse(reference)
return err == nil
}
func ValidateRepoTagInput(repo, tag string, manifestDigest godigest.Digest) error {
if repo == "" {
return zerr.ErrEmptyRepoName
}
if tag == "" {
return zerr.ErrEmptyTag
}
if manifestDigest == "" {
return zerr.ErrEmptyDigest
}
return nil
}
func ScoreRepoName(searchText string, repoName string) int {
searchTextSlice := strings.Split(searchText, "/")
repoNameSlice := strings.Split(repoName, "/")
if len(searchTextSlice) > len(repoNameSlice) {
return -1
}
if len(searchTextSlice) == 1 {
// check if it matches the first or last name in the path
if index := strings.Index(repoNameSlice[len(repoNameSlice)-1], searchTextSlice[0]); index != -1 {
return index + 1
}
// repos that only match the first name in the path rank lower than those matching the last name in the path
if index := strings.Index(repoNameSlice[0], searchTextSlice[0]); index != -1 {
return (index + 1) * 10
}
return -1
}
if len(searchTextSlice) < len(repoNameSlice) &&
strings.HasPrefix(repoName, searchText) {
return 1
}
// searchText and repoName match perfectly up until the last name in path
for i := 0; i < len(searchTextSlice)-1; i++ {
if searchTextSlice[i] != repoNameSlice[i] {
return -1
}
}
// check the last
if index := strings.Index(repoNameSlice[len(repoNameSlice)-1], searchTextSlice[len(searchTextSlice)-1]); index != -1 {
return (index + 1)
}
return -1
}
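To make the scoring rule concrete, a few illustrative calls and the values they produce under the logic above (lower is better, -1 means no match; the repo names are hypothetical):

score := common.ScoreRepoName("zot", "project/zot")      // 1: match at index 0 of the last path element
score = common.ScoreRepoName("bui", "project/zot-build") // 5: match at index 4 of the last path element
score = common.ScoreRepoName("zot", "zot-project/image") // 10: only the first path element matches, penalized by a factor of 10
score = common.ScoreRepoName("foo", "project/zot")       // -1: no match anywhere
_ = score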
func GetImageLastUpdatedTimestamp(configContent ispec.Image) time.Time {
var timeStamp *time.Time
if configContent.Created != nil && !configContent.Created.IsZero() {
return *configContent.Created
}
if len(configContent.History) != 0 {
timeStamp = configContent.History[len(configContent.History)-1].Created
}
if timeStamp == nil {
timeStamp = &time.Time{}
}
return *timeStamp
}
func CheckIsSigned(signatures repodb.ManifestSignatures) bool {
for _, signatures := range signatures {
if len(signatures) > 0 {
return true
}
}
return false
}
func GetRepoTag(searchText string) (string, string, error) {
const repoTagCount = 2
splitSlice := strings.Split(searchText, ":")
if len(splitSlice) != repoTagCount {
return "", "", zerr.ErrInvalidRepoTagFormat
}
repo := strings.TrimSpace(splitSlice[0])
tag := strings.TrimSpace(splitSlice[1])
return repo, tag, nil
}
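GetRepoTag parses the "repo:tag" search text expected by SearchTags; illustrative calls (the image name is hypothetical):

repo, tag, err := common.GetRepoTag("alpine:3.17") // repo="alpine", tag="3.17", err=nil
repo, tag, err = common.GetRepoTag("alpine:")      // repo="alpine", tag="" (an empty prefix matches every tag), err=nil
_, _, err = common.GetRepoTag("alpine")            // err=zerr.ErrInvalidRepoTagFormat: no ':' separator
_, _, _ = repo, tag, err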
func GetMapKeys[K comparable, V any](genericMap map[K]V) []K {
keys := make([]K, 0, len(genericMap))
for k := range genericMap {
keys = append(keys, k)
}
return keys
}
// AcceptedByFilter checks that data contains at least one element of each filter
// criterion (os, arch) present in the filter.
func AcceptedByFilter(filter repodb.Filter, data repodb.FilterData) bool {
if filter.Arch != nil {
foundArch := false
for _, arch := range filter.Arch {
foundArch = foundArch || containsString(data.ArchList, *arch)
}
if !foundArch {
return false
}
}
if filter.Os != nil {
foundOs := false
for _, os := range filter.Os {
foundOs = foundOs || containsString(data.OsList, *os)
}
if !foundOs {
return false
}
}
if filter.HasToBeSigned != nil && *filter.HasToBeSigned != data.IsSigned {
return false
}
return true
}
func containsString(strSlice []string, str string) bool {
for _, val := range strSlice {
if strings.EqualFold(val, str) {
return true
}
}
return false
}
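A sketch of how a Filter is matched against the per-image FilterData built in SearchRepos/SearchTags, assuming the Filter fields are the pointer slices dereferenced above (nil means no constraint):

linux, amd64 := "linux", "amd64"
filter := repodb.Filter{
	Os:   []*string{&linux},
	Arch: []*string{&amd64},
}
data := repodb.FilterData{
	OsList:   []string{"linux"},
	ArchList: []string{"arm64"},
	IsSigned: true,
}
accepted := common.AcceptedByFilter(filter, data) // false: no entry in ArchList matches "amd64"
_ = accepted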


@@ -0,0 +1,453 @@
package dynamo_test
import (
"context"
"os"
"strings"
"testing"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
"github.com/rs/zerolog"
. "github.com/smartystreets/goconvey/convey"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
dynamo "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper"
"zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/iterator"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
)
func TestIterator(t *testing.T) {
const (
endpoint = "http://localhost:4566"
region = "us-east-2"
)
Convey("TestIterator", t, func() {
dynamoWrapper, err := dynamo.NewDynamoDBWrapper(dynamoParams.DBDriverParameters{
Endpoint: endpoint,
Region: region,
RepoMetaTablename: "RepoMetadataTable",
ManifestDataTablename: "ManifestDataTable",
VersionTablename: "Version",
})
So(err, ShouldBeNil)
So(dynamoWrapper.ResetManifestDataTable(), ShouldBeNil)
So(dynamoWrapper.ResetRepoMetaTable(), ShouldBeNil)
err = dynamoWrapper.SetRepoTag("repo1", "tag1", "manifestType", "manifestDigest1")
So(err, ShouldBeNil)
err = dynamoWrapper.SetRepoTag("repo2", "tag2", "manifestType", "manifestDigest2")
So(err, ShouldBeNil)
err = dynamoWrapper.SetRepoTag("repo3", "tag3", "manifestType", "manifestDigest3")
So(err, ShouldBeNil)
repoMetaAttributeIterator := iterator.NewBaseDynamoAttributesIterator(
dynamoWrapper.Client,
"RepoMetadataTable",
"RepoMetadata",
1,
log.Logger{Logger: zerolog.New(os.Stdout)},
)
attribute, err := repoMetaAttributeIterator.First(context.Background())
So(err, ShouldBeNil)
So(attribute, ShouldNotBeNil)
attribute, err = repoMetaAttributeIterator.Next(context.Background())
So(err, ShouldBeNil)
So(attribute, ShouldNotBeNil)
attribute, err = repoMetaAttributeIterator.Next(context.Background())
So(err, ShouldBeNil)
So(attribute, ShouldNotBeNil)
attribute, err = repoMetaAttributeIterator.Next(context.Background())
So(err, ShouldBeNil)
So(attribute, ShouldBeNil)
})
}
func TestIteratorErrors(t *testing.T) {
Convey("errors", t, func() {
customResolver := aws.EndpointResolverWithOptionsFunc(
func(service, region string, options ...interface{}) (aws.Endpoint, error) {
return aws.Endpoint{
PartitionID: "aws",
URL: "endpoint",
SigningRegion: region,
}, nil
})
cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion("region"),
config.WithEndpointResolverWithOptions(customResolver))
So(err, ShouldBeNil)
repoMetaAttributeIterator := iterator.NewBaseDynamoAttributesIterator(
dynamodb.NewFromConfig(cfg),
"RepoMetadataTable",
"RepoMetadata",
1,
log.Logger{Logger: zerolog.New(os.Stdout)},
)
_, err = repoMetaAttributeIterator.First(context.Background())
So(err, ShouldNotBeNil)
})
}
func TestWrapperErrors(t *testing.T) {
const (
endpoint = "http://localhost:4566"
region = "us-east-2"
)
ctx := context.Background()
Convey("Errors", t, func() {
dynamoWrapper, err := dynamo.NewDynamoDBWrapper(dynamoParams.DBDriverParameters{ //nolint:contextcheck
Endpoint: endpoint,
Region: region,
RepoMetaTablename: "RepoMetadataTable",
ManifestDataTablename: "ManifestDataTable",
VersionTablename: "Version",
})
So(err, ShouldBeNil)
So(dynamoWrapper.ResetManifestDataTable(), ShouldBeNil) //nolint:contextcheck
So(dynamoWrapper.ResetRepoMetaTable(), ShouldBeNil) //nolint:contextcheck
Convey("SetManifestData", func() {
dynamoWrapper.ManifestDataTablename = "WRONG table"
err := dynamoWrapper.SetManifestData("dig", repodb.ManifestData{})
So(err, ShouldNotBeNil)
})
Convey("GetManifestData", func() {
dynamoWrapper.ManifestDataTablename = "WRONG table"
_, err := dynamoWrapper.GetManifestData("dig")
So(err, ShouldNotBeNil)
})
Convey("GetManifestData unmarshal error", func() {
err := setBadManifestData(dynamoWrapper.Client, "dig")
So(err, ShouldBeNil)
_, err = dynamoWrapper.GetManifestData("dig")
So(err, ShouldNotBeNil)
})
Convey("SetManifestMeta GetRepoMeta error", func() {
err := setBadRepoMeta(dynamoWrapper.Client, "repo1")
So(err, ShouldBeNil)
err = dynamoWrapper.SetManifestMeta("repo1", "dig", repodb.ManifestMetadata{})
So(err, ShouldNotBeNil)
})
Convey("GetManifestMeta GetManifestData not found error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag", "dig", "")
So(err, ShouldBeNil)
_, err = dynamoWrapper.GetManifestMeta("repo", "dig")
So(err, ShouldNotBeNil)
})
Convey("GetManifestMeta GetRepoMeta Not Found error", func() {
err := dynamoWrapper.SetManifestData("dig", repodb.ManifestData{})
So(err, ShouldBeNil)
_, err = dynamoWrapper.GetManifestMeta("repoNotFound", "dig")
So(err, ShouldNotBeNil)
})
Convey("GetManifestMeta GetRepoMeta error", func() {
err := dynamoWrapper.SetManifestData("dig", repodb.ManifestData{})
So(err, ShouldBeNil)
err = setBadRepoMeta(dynamoWrapper.Client, "repo")
So(err, ShouldBeNil)
_, err = dynamoWrapper.GetManifestMeta("repo", "dig")
So(err, ShouldNotBeNil)
})
Convey("IncrementRepoStars GetRepoMeta error", func() {
err = dynamoWrapper.IncrementRepoStars("repo")
So(err, ShouldNotBeNil)
})
Convey("DecrementRepoStars GetRepoMeta error", func() {
err = dynamoWrapper.DecrementRepoStars("repo")
So(err, ShouldNotBeNil)
})
Convey("DeleteRepoTag Client.GetItem error", func() {
strSlice := make([]string, 10000)
repoName := strings.Join(strSlice, ".")
err = dynamoWrapper.DeleteRepoTag(repoName, "tag")
So(err, ShouldNotBeNil)
})
Convey("DeleteRepoTag unmarshal error", func() {
err = setBadRepoMeta(dynamoWrapper.Client, "repo")
So(err, ShouldBeNil)
err = dynamoWrapper.DeleteRepoTag("repo", "tag")
So(err, ShouldNotBeNil)
})
Convey("GetRepoMeta Client.GetItem error", func() {
strSlice := make([]string, 10000)
repoName := strings.Join(strSlice, ".")
_, err = dynamoWrapper.GetRepoMeta(repoName)
So(err, ShouldNotBeNil)
})
Convey("GetRepoMeta unmarshal error", func() {
err = setBadRepoMeta(dynamoWrapper.Client, "repo")
So(err, ShouldBeNil)
_, err = dynamoWrapper.GetRepoMeta("repo")
So(err, ShouldNotBeNil)
})
Convey("IncrementImageDownloads GetRepoMeta error", func() {
err = dynamoWrapper.IncrementImageDownloads("repoNotFound", "")
So(err, ShouldNotBeNil)
})
Convey("IncrementImageDownloads tag not found error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag", "dig", "")
So(err, ShouldBeNil)
err = dynamoWrapper.IncrementImageDownloads("repo", "notFoundTag")
So(err, ShouldNotBeNil)
})
Convey("IncrementImageDownloads GetManifestMeta error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag", "dig", "")
So(err, ShouldBeNil)
err = dynamoWrapper.IncrementImageDownloads("repo", "tag")
So(err, ShouldNotBeNil)
})
Convey("AddManifestSignature GetRepoMeta error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag", "dig", "")
So(err, ShouldBeNil)
err = dynamoWrapper.AddManifestSignature("repoNotFound", "tag", repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
})
Convey("AddManifestSignature ManifestSignatures signedManifestDigest not found error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag", "dig", "")
So(err, ShouldBeNil)
err = dynamoWrapper.AddManifestSignature("repo", "tagNotFound", repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
})
Convey("AddManifestSignature SignatureType repodb.NotationType", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag", "dig", "")
So(err, ShouldBeNil)
err = dynamoWrapper.AddManifestSignature("repo", "tagNotFound", repodb.SignatureMetadata{
SignatureType: "notation",
})
So(err, ShouldBeNil)
})
Convey("DeleteSignature GetRepoMeta error", func() {
err = dynamoWrapper.DeleteSignature("repoNotFound", "tagNotFound", repodb.SignatureMetadata{})
So(err, ShouldNotBeNil)
})
Convey("DeleteSignature sigDigest.SignatureManifestDigest != sigMeta.SignatureDigest true", func() {
err := setRepoMeta(dynamoWrapper.Client, repodb.RepoMetadata{
Name: "repo",
Signatures: map[string]repodb.ManifestSignatures{
"tag1": {
"cosign": []repodb.SignatureInfo{
{SignatureManifestDigest: "dig1"},
{SignatureManifestDigest: "dig2"},
},
},
},
})
So(err, ShouldBeNil)
err = dynamoWrapper.DeleteSignature("repo", "tag1", repodb.SignatureMetadata{
SignatureDigest: "dig2",
SignatureType: "cosign",
})
So(err, ShouldBeNil)
})
Convey("GetMultipleRepoMeta unmarshal error", func() {
err = setBadRepoMeta(dynamoWrapper.Client, "repo") //nolint:contextcheck
So(err, ShouldBeNil)
_, err = dynamoWrapper.GetMultipleRepoMeta(ctx, func(repoMeta repodb.RepoMetadata) bool { return true },
repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchRepos repoMeta unmarshal error", func() {
err = setBadRepoMeta(dynamoWrapper.Client, "repo") //nolint:contextcheck
So(err, ShouldBeNil)
_, _, err = dynamoWrapper.SearchRepos(ctx, "", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchRepos GetManifestMeta error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag1", "notFoundDigest", "") //nolint:contextcheck
So(err, ShouldBeNil)
_, _, err = dynamoWrapper.SearchRepos(ctx, "", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchRepos config unmarshal error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag1", "dig1", "") //nolint:contextcheck
So(err, ShouldBeNil)
err = dynamoWrapper.SetManifestData("dig1", repodb.ManifestData{ //nolint:contextcheck
ManifestBlob: []byte("{}"),
ConfigBlob: []byte("bad json"),
})
So(err, ShouldBeNil)
_, _, err = dynamoWrapper.SearchRepos(ctx, "", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchTags repoMeta unmarshal error", func() {
err = setBadRepoMeta(dynamoWrapper.Client, "repo") //nolint:contextcheck
So(err, ShouldBeNil)
_, _, err = dynamoWrapper.SearchTags(ctx, "repo:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchTags GetManifestMeta error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag1", "manifestNotFound", "") //nolint:contextcheck
So(err, ShouldBeNil)
_, _, err = dynamoWrapper.SearchTags(ctx, "repo:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
Convey("SearchTags config unmarshal error", func() {
err := dynamoWrapper.SetRepoTag("repo", "tag1", "dig1", "") //nolint:contextcheck
So(err, ShouldBeNil)
err = dynamoWrapper.SetManifestData( //nolint:contextcheck
"dig1",
repodb.ManifestData{
ManifestBlob: []byte("{}"),
ConfigBlob: []byte("bad json"),
},
)
So(err, ShouldBeNil)
_, _, err = dynamoWrapper.SearchTags(ctx, "repo:", repodb.Filter{}, repodb.PageInput{})
So(err, ShouldNotBeNil)
})
})
}
func setBadManifestData(client *dynamodb.Client, digest string) error {
mdAttributeValue, err := attributevalue.Marshal("string")
if err != nil {
return err
}
_, err = client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#MD": "ManifestData",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":ManifestData": mdAttributeValue,
},
Key: map[string]types.AttributeValue{
"Digest": &types.AttributeValueMemberS{
Value: digest,
},
},
TableName: aws.String("ManifestDataTable"),
UpdateExpression: aws.String("SET #MD = :ManifestData"),
})
return err
}
func setBadRepoMeta(client *dynamodb.Client, repoName string) error {
repoAttributeValue, err := attributevalue.Marshal("string")
if err != nil {
return err
}
_, err = client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#RM": "RepoMetadata",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":RepoMetadata": repoAttributeValue,
},
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{
Value: repoName,
},
},
TableName: aws.String("RepoMetadataTable"),
UpdateExpression: aws.String("SET #RM = :RepoMetadata"),
})
return err
}
func setRepoMeta(client *dynamodb.Client, repoMeta repodb.RepoMetadata) error {
repoAttributeValue, err := attributevalue.Marshal(repoMeta)
if err != nil {
return err
}
_, err = client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#RM": "RepoMetadata",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":RepoMetadata": repoAttributeValue,
},
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{
Value: repoMeta.Name,
},
},
TableName: aws.String("RepoMetadataTable"),
UpdateExpression: aws.String("SET #RM = :RepoMetadata"),
})
return err
}


@@ -0,0 +1,977 @@
package dynamo
import (
"context"
"encoding/json"
"os"
"strings"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/rs/zerolog"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb" //nolint:go-staticcheck
"zotregistry.io/zot/pkg/meta/repodb/common"
"zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/iterator"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
"zotregistry.io/zot/pkg/meta/repodb/version"
localCtx "zotregistry.io/zot/pkg/requestcontext"
)
type DBWrapper struct {
Client *dynamodb.Client
RepoMetaTablename string
ManifestDataTablename string
VersionTablename string
Patches []func(client *dynamodb.Client, tableNames map[string]string) error
Log log.Logger
}
func NewDynamoDBWrapper(params dynamoParams.DBDriverParameters) (*DBWrapper, error) {
// custom endpoint resolver pointing to the configured endpoint (e.g. localhost for local testing)
customResolver := aws.EndpointResolverWithOptionsFunc(
func(service, region string, options ...interface{}) (aws.Endpoint, error) {
return aws.Endpoint{
PartitionID: "aws",
URL: params.Endpoint,
SigningRegion: region,
}, nil
})
// Using the SDK's default configuration, loading additional config
// and credentials values from the environment variables, shared
// credentials, and shared configuration files
cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion(params.Region),
config.WithEndpointResolverWithOptions(customResolver))
if err != nil {
return nil, err
}
dynamoWrapper := DBWrapper{
Client: dynamodb.NewFromConfig(cfg),
RepoMetaTablename: params.RepoMetaTablename,
ManifestDataTablename: params.ManifestDataTablename,
VersionTablename: params.VersionTablename,
Patches: version.GetDynamoDBPatches(),
Log: log.Logger{Logger: zerolog.New(os.Stdout)},
}
err = dynamoWrapper.createVersionTable()
if err != nil {
return nil, err
}
err = dynamoWrapper.createRepoMetaTable()
if err != nil {
return nil, err
}
err = dynamoWrapper.createManifestDataTable()
if err != nil {
return nil, err
}
// Using the Config value, create the DynamoDB client
return &dynamoWrapper, nil
}
func (dwr DBWrapper) SetManifestData(manifestDigest godigest.Digest, manifestData repodb.ManifestData) error {
mdAttributeValue, err := attributevalue.Marshal(manifestData)
if err != nil {
return err
}
_, err = dwr.Client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#MD": "ManifestData",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":ManifestData": mdAttributeValue,
},
Key: map[string]types.AttributeValue{
"Digest": &types.AttributeValueMemberS{
Value: manifestDigest.String(),
},
},
TableName: aws.String(dwr.ManifestDataTablename),
UpdateExpression: aws.String("SET #MD = :ManifestData"),
})
return err
}
func (dwr DBWrapper) GetManifestData(manifestDigest godigest.Digest) (repodb.ManifestData, error) {
resp, err := dwr.Client.GetItem(context.Background(), &dynamodb.GetItemInput{
TableName: aws.String(dwr.ManifestDataTablename),
Key: map[string]types.AttributeValue{
"Digest": &types.AttributeValueMemberS{Value: manifestDigest.String()},
},
})
if err != nil {
return repodb.ManifestData{}, err
}
if resp.Item == nil {
return repodb.ManifestData{}, zerr.ErrManifestDataNotFound
}
var manifestData repodb.ManifestData
err = attributevalue.Unmarshal(resp.Item["ManifestData"], &manifestData)
if err != nil {
return repodb.ManifestData{}, err
}
return manifestData, nil
}
func (dwr DBWrapper) SetManifestMeta(repo string, manifestDigest godigest.Digest, manifestMeta repodb.ManifestMetadata,
) error {
if manifestMeta.Signatures == nil {
manifestMeta.Signatures = repodb.ManifestSignatures{}
}
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
if !errors.Is(err, zerr.ErrRepoMetaNotFound) {
return err
}
repoMeta = repodb.RepoMetadata{
Name: repo,
Tags: map[string]repodb.Descriptor{},
Statistics: map[string]repodb.DescriptorStatistics{},
Signatures: map[string]repodb.ManifestSignatures{},
}
}
err = dwr.SetManifestData(manifestDigest, repodb.ManifestData{
ManifestBlob: manifestMeta.ManifestBlob,
ConfigBlob: manifestMeta.ConfigBlob,
})
if err != nil {
return err
}
updatedRepoMeta := common.UpdateManifestMeta(repoMeta, manifestDigest, manifestMeta)
err = dwr.setRepoMeta(repo, updatedRepoMeta)
if err != nil {
return err
}
return err
}
func (dwr DBWrapper) GetManifestMeta(repo string, manifestDigest godigest.Digest,
) (repodb.ManifestMetadata, error) { //nolint:contextcheck
manifestData, err := dwr.GetManifestData(manifestDigest)
if err != nil {
if errors.Is(err, zerr.ErrManifestDataNotFound) {
return repodb.ManifestMetadata{}, zerr.ErrManifestMetaNotFound
}
return repodb.ManifestMetadata{},
errors.Wrapf(err, "error while constructing manifest meta for manifest '%s' from repo '%s'",
manifestDigest, repo)
}
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
if errors.Is(err, zerr.ErrRepoMetaNotFound) {
return repodb.ManifestMetadata{}, zerr.ErrManifestMetaNotFound
}
return repodb.ManifestMetadata{},
errors.Wrapf(err, "error while constructing manifest meta for manifest '%s' from repo '%s'",
manifestDigest, repo)
}
manifestMetadata := repodb.ManifestMetadata{}
manifestMetadata.ManifestBlob = manifestData.ManifestBlob
manifestMetadata.ConfigBlob = manifestData.ConfigBlob
manifestMetadata.DownloadCount = repoMeta.Statistics[manifestDigest.String()].DownloadCount
manifestMetadata.Signatures = repodb.ManifestSignatures{}
if repoMeta.Signatures[manifestDigest.String()] != nil {
manifestMetadata.Signatures = repoMeta.Signatures[manifestDigest.String()]
}
return manifestMetadata, nil
}
func (dwr DBWrapper) IncrementRepoStars(repo string) error {
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
return err
}
repoMeta.Stars++
err = dwr.setRepoMeta(repo, repoMeta)
return err
}
func (dwr DBWrapper) DecrementRepoStars(repo string) error {
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
return err
}
if repoMeta.Stars > 0 {
repoMeta.Stars--
}
err = dwr.setRepoMeta(repo, repoMeta)
return err
}
func (dwr DBWrapper) GetRepoStars(repo string) (int, error) {
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
return 0, err
}
return repoMeta.Stars, nil
}
func (dwr DBWrapper) SetRepoTag(repo string, tag string, manifestDigest godigest.Digest, mediaType string) error {
if err := common.ValidateRepoTagInput(repo, tag, manifestDigest); err != nil {
return err
}
resp, err := dwr.Client.GetItem(context.TODO(), &dynamodb.GetItemInput{
TableName: aws.String(dwr.RepoMetaTablename),
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{Value: repo},
},
})
if err != nil {
return err
}
repoMeta := repodb.RepoMetadata{
Name: repo,
Tags: map[string]repodb.Descriptor{},
Statistics: map[string]repodb.DescriptorStatistics{},
Signatures: map[string]repodb.ManifestSignatures{},
}
if resp.Item != nil {
err := attributevalue.Unmarshal(resp.Item["RepoMetadata"], &repoMeta)
if err != nil {
return err
}
}
repoMeta.Tags[tag] = repodb.Descriptor{
Digest: manifestDigest.String(),
MediaType: mediaType,
}
err = dwr.setRepoMeta(repo, repoMeta)
return err
}
func (dwr DBWrapper) DeleteRepoTag(repo string, tag string) error {
resp, err := dwr.Client.GetItem(context.TODO(), &dynamodb.GetItemInput{
TableName: aws.String(dwr.RepoMetaTablename),
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{Value: repo},
},
})
if err != nil {
return err
}
if resp.Item == nil {
return nil
}
var repoMeta repodb.RepoMetadata
err = attributevalue.Unmarshal(resp.Item["RepoMetadata"], &repoMeta)
if err != nil {
return err
}
delete(repoMeta.Tags, tag)
if len(repoMeta.Tags) == 0 {
_, err := dwr.Client.DeleteItem(context.Background(), &dynamodb.DeleteItemInput{
TableName: aws.String(dwr.RepoMetaTablename),
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{Value: repo},
},
})
return err
}
repoAttributeValue, err := attributevalue.Marshal(repoMeta)
if err != nil {
return err
}
_, err = dwr.Client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#RM": "RepoMetadata",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":RepoMetadata": repoAttributeValue,
},
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{
Value: repo,
},
},
TableName: aws.String(dwr.RepoMetaTablename),
UpdateExpression: aws.String("SET #RM = :RepoMetadata"),
})
return err
}
func (dwr DBWrapper) GetRepoMeta(repo string) (repodb.RepoMetadata, error) {
resp, err := dwr.Client.GetItem(context.TODO(), &dynamodb.GetItemInput{
TableName: aws.String(dwr.RepoMetaTablename),
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{Value: repo},
},
})
if err != nil {
return repodb.RepoMetadata{}, err
}
if resp.Item == nil {
return repodb.RepoMetadata{}, zerr.ErrRepoMetaNotFound
}
var repoMeta repodb.RepoMetadata
err = attributevalue.Unmarshal(resp.Item["RepoMetadata"], &repoMeta)
if err != nil {
return repodb.RepoMetadata{}, err
}
return repoMeta, nil
}
func (dwr DBWrapper) IncrementImageDownloads(repo string, reference string) error {
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
return err
}
manifestDigest := reference
if !common.ReferenceIsDigest(reference) {
// search digest for tag
descriptor, found := repoMeta.Tags[reference]
if !found {
return zerr.ErrManifestMetaNotFound
}
manifestDigest = descriptor.Digest
}
manifestMeta, err := dwr.GetManifestMeta(repo, godigest.Digest(manifestDigest))
if err != nil {
return err
}
manifestMeta.DownloadCount++
err = dwr.SetManifestMeta(repo, godigest.Digest(manifestDigest), manifestMeta)
return err
}
func (dwr DBWrapper) AddManifestSignature(repo string, signedManifestDigest godigest.Digest,
sygMeta repodb.SignatureMetadata,
) error {
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
return err
}
var (
manifestSignatures repodb.ManifestSignatures
found bool
)
if manifestSignatures, found = repoMeta.Signatures[signedManifestDigest.String()]; !found {
manifestSignatures = repodb.ManifestSignatures{}
}
signatureSlice := manifestSignatures[sygMeta.SignatureType]
if !common.SignatureAlreadyExists(signatureSlice, sygMeta) {
if sygMeta.SignatureType == repodb.NotationType {
signatureSlice = append(signatureSlice, repodb.SignatureInfo{
SignatureManifestDigest: sygMeta.SignatureDigest,
LayersInfo: sygMeta.LayersInfo,
})
} else if sygMeta.SignatureType == repodb.CosignType {
signatureSlice = []repodb.SignatureInfo{{
SignatureManifestDigest: sygMeta.SignatureDigest,
LayersInfo: sygMeta.LayersInfo,
}}
}
}
manifestSignatures[sygMeta.SignatureType] = signatureSlice
repoMeta.Signatures[signedManifestDigest.String()] = manifestSignatures
err = dwr.setRepoMeta(repoMeta.Name, repoMeta)
return err
}
func (dwr DBWrapper) DeleteSignature(repo string, signedManifestDigest godigest.Digest,
sigMeta repodb.SignatureMetadata,
) error {
repoMeta, err := dwr.GetRepoMeta(repo)
if err != nil {
return err
}
sigType := sigMeta.SignatureType
var (
manifestSignatures repodb.ManifestSignatures
found bool
)
if manifestSignatures, found = repoMeta.Signatures[signedManifestDigest.String()]; !found {
return zerr.ErrManifestMetaNotFound
}
signatureSlice := manifestSignatures[sigType]
newSignatureSlice := make([]repodb.SignatureInfo, 0, len(signatureSlice)-1)
for _, sigDigest := range signatureSlice {
if sigDigest.SignatureManifestDigest != sigMeta.SignatureDigest {
newSignatureSlice = append(newSignatureSlice, sigDigest)
}
}
manifestSignatures[sigType] = newSignatureSlice
repoMeta.Signatures[signedManifestDigest.String()] = manifestSignatures
err = dwr.setRepoMeta(repoMeta.Name, repoMeta)
return err
}
func (dwr DBWrapper) GetMultipleRepoMeta(ctx context.Context,
filter func(repoMeta repodb.RepoMetadata) bool, requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, error) {
var (
repoMetaAttributeIterator iterator.AttributesIterator
pageFinder repodb.PageFinder
)
repoMetaAttributeIterator = iterator.NewBaseDynamoAttributesIterator(
dwr.Client, dwr.RepoMetaTablename, "RepoMetadata", 0, dwr.Log,
)
pageFinder, err := repodb.NewBaseRepoPageFinder(requestedPage.Limit, requestedPage.Offset, requestedPage.SortBy)
if err != nil {
return nil, err
}
repoMetaAttribute, err := repoMetaAttributeIterator.First(ctx)
for ; repoMetaAttribute != nil; repoMetaAttribute, err = repoMetaAttributeIterator.Next(ctx) {
if err != nil {
// log
return []repodb.RepoMetadata{}, err
}
var repoMeta repodb.RepoMetadata
err := attributevalue.Unmarshal(repoMetaAttribute, &repoMeta)
if err != nil {
return []repodb.RepoMetadata{}, err
}
if ok, err := localCtx.RepoIsUserAvailable(ctx, repoMeta.Name); !ok || err != nil {
continue
}
if filter(repoMeta) {
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repoMeta,
})
}
}
foundRepos := pageFinder.Page()
return foundRepos, err
}
func (dwr DBWrapper) SearchRepos(ctx context.Context, searchText string, filter repodb.Filter,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
var (
foundManifestMetadataMap = make(map[string]repodb.ManifestMetadata)
manifestMetadataMap = make(map[string]repodb.ManifestMetadata)
repoMetaAttributeIterator iterator.AttributesIterator
pageFinder repodb.PageFinder
)
repoMetaAttributeIterator = iterator.NewBaseDynamoAttributesIterator(
dwr.Client, dwr.RepoMetaTablename, "RepoMetadata", 0, dwr.Log,
)
pageFinder, err := repodb.NewBaseRepoPageFinder(requestedPage.Limit, requestedPage.Offset, requestedPage.SortBy)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
repoMetaAttribute, err := repoMetaAttributeIterator.First(ctx)
for ; repoMetaAttribute != nil; repoMetaAttribute, err = repoMetaAttributeIterator.Next(ctx) {
if err != nil {
// log
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
var repoMeta repodb.RepoMetadata
err := attributevalue.Unmarshal(repoMetaAttribute, &repoMeta)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
if ok, err := localCtx.RepoIsUserAvailable(ctx, repoMeta.Name); !ok || err != nil {
continue
}
if score := common.ScoreRepoName(searchText, repoMeta.Name); score != -1 {
var (
// specific values used for sorting that need to be calculated based on all manifests from the repo
repoDownloads = 0
repoLastUpdated time.Time
firstImageChecked = true
osSet = map[string]bool{}
archSet = map[string]bool{}
isSigned = false
)
for _, descriptor := range repoMeta.Tags {
var manifestMeta repodb.ManifestMetadata
manifestMeta, manifestDownloaded := manifestMetadataMap[descriptor.Digest]
if !manifestDownloaded {
manifestMeta, err = dwr.GetManifestMeta(repoMeta.Name, godigest.Digest(descriptor.Digest)) //nolint:contextcheck
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{},
errors.Wrapf(err, "repodb: error while unmarshaling manifest metadata for digest %s", descriptor.Digest)
}
}
// get fields related to filtering
var configContent ispec.Image
err = json.Unmarshal(manifestMeta.ConfigBlob, &configContent)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{},
errors.Wrapf(err, "repodb: error while unmarshaling config content for digest %s", descriptor.Digest)
}
osSet[configContent.OS] = true
archSet[configContent.Architecture] = true
// get fields related to sorting
repoDownloads += repoMeta.Statistics[descriptor.Digest].DownloadCount
imageLastUpdated := common.GetImageLastUpdatedTimestamp(configContent)
if firstImageChecked || repoLastUpdated.Before(imageLastUpdated) {
repoLastUpdated = imageLastUpdated
firstImageChecked = false
isSigned = common.CheckIsSigned(manifestMeta.Signatures)
}
manifestMetadataMap[descriptor.Digest] = manifestMeta
}
repoFilterData := repodb.FilterData{
OsList: common.GetMapKeys(osSet),
ArchList: common.GetMapKeys(archSet),
IsSigned: isSigned,
}
if !common.AcceptedByFilter(filter, repoFilterData) {
continue
}
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repoMeta,
Score: score,
Downloads: repoDownloads,
UpdateTime: repoLastUpdated,
})
}
}
foundRepos := pageFinder.Page()
// keep just the manifestMeta we need
for _, repoMeta := range foundRepos {
for _, descriptor := range repoMeta.Tags {
foundManifestMetadataMap[descriptor.Digest] = manifestMetadataMap[descriptor.Digest]
}
}
return foundRepos, foundManifestMetadataMap, err
}
func (dwr DBWrapper) SearchTags(ctx context.Context, searchText string, filter repodb.Filter,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
var (
foundManifestMetadataMap = make(map[string]repodb.ManifestMetadata)
manifestMetadataMap = make(map[string]repodb.ManifestMetadata)
repoMetaAttributeIterator = iterator.NewBaseDynamoAttributesIterator(
dwr.Client, dwr.RepoMetaTablename, "RepoMetadata", 0, dwr.Log,
)
pageFinder repodb.PageFinder
)
pageFinder, err := repodb.NewBaseImagePageFinder(requestedPage.Limit, requestedPage.Offset, requestedPage.SortBy)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
searchedRepo, searchedTag, err := common.GetRepoTag(searchText)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{},
errors.Wrap(err, "repodb: error while parsing search text, invalid format")
}
repoMetaAttribute, err := repoMetaAttributeIterator.First(ctx)
for ; repoMetaAttribute != nil; repoMetaAttribute, err = repoMetaAttributeIterator.Next(ctx) {
if err != nil {
// log
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
var repoMeta repodb.RepoMetadata
err := attributevalue.Unmarshal(repoMetaAttribute, &repoMeta)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, err
}
if ok, err := localCtx.RepoIsUserAvailable(ctx, repoMeta.Name); !ok || err != nil {
continue
}
if repoMeta.Name == searchedRepo {
matchedTags := make(map[string]repodb.Descriptor)
// take all manifestMetas
for tag, descriptor := range repoMeta.Tags {
if !strings.HasPrefix(tag, searchedTag) {
continue
}
matchedTags[tag] = descriptor
// in case multiple tags reference the same manifest, avoid reading it from the DB more than once
if manifestMeta, manifestExists := manifestMetadataMap[descriptor.Digest]; manifestExists {
manifestMetadataMap[descriptor.Digest] = manifestMeta
continue
}
manifestMeta, err := dwr.GetManifestMeta(repoMeta.Name, godigest.Digest(descriptor.Digest)) //nolint:contextcheck
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{},
errors.Wrapf(err, "repodb: error while unmarshaling manifest metadata for digest %s", descriptor.Digest)
}
var configContent ispec.Image
err = json.Unmarshal(manifestMeta.ConfigBlob, &configContent)
if err != nil {
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{},
errors.Wrapf(err, "repodb: error while unmarshaling config content for digest %s", descriptor.Digest)
}
imageFilterData := repodb.FilterData{
OsList: []string{configContent.OS},
ArchList: []string{configContent.Architecture},
IsSigned: false,
}
if !common.AcceptedByFilter(filter, imageFilterData) {
delete(matchedTags, tag)
delete(manifestMetadataMap, descriptor.Digest)
continue
}
manifestMetadataMap[descriptor.Digest] = manifestMeta
}
repoMeta.Tags = matchedTags
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repoMeta,
})
}
}
foundRepos := pageFinder.Page()
// keep just the manifestMeta we need
for _, repoMeta := range foundRepos {
for _, descriptor := range repoMeta.Tags {
foundManifestMetadataMap[descriptor.Digest] = manifestMetadataMap[descriptor.Digest]
}
}
return foundRepos, foundManifestMetadataMap, err
}
func (dwr *DBWrapper) PatchDB() error {
DBVersion, err := dwr.getDBVersion()
if err != nil {
return errors.Wrapf(err, "patching dynamo failed, error retrieving database version")
}
if version.GetVersionIndex(DBVersion) == -1 {
return errors.New("DB has broken format, no version found")
}
for patchIndex, patch := range dwr.Patches {
if patchIndex < version.GetVersionIndex(DBVersion) {
continue
}
tableNames := map[string]string{
"RepoMetaTablename": dwr.RepoMetaTablename,
"ManifestDataTablename": dwr.ManifestDataTablename,
"VersionTablename": dwr.VersionTablename,
}
err := patch(dwr.Client, tableNames)
if err != nil {
return err
}
}
return nil
}
func (dwr DBWrapper) setRepoMeta(repo string, repoMeta repodb.RepoMetadata) error {
repoAttributeValue, err := attributevalue.Marshal(repoMeta)
if err != nil {
return err
}
_, err = dwr.Client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#RM": "RepoMetadata",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":RepoMetadata": repoAttributeValue,
},
Key: map[string]types.AttributeValue{
"RepoName": &types.AttributeValueMemberS{
Value: repo,
},
},
TableName: aws.String(dwr.RepoMetaTablename),
UpdateExpression: aws.String("SET #RM = :RepoMetadata"),
})
return err
}
func (dwr DBWrapper) createRepoMetaTable() error {
_, err := dwr.Client.CreateTable(context.Background(), &dynamodb.CreateTableInput{
TableName: aws.String(dwr.RepoMetaTablename),
AttributeDefinitions: []types.AttributeDefinition{
{
AttributeName: aws.String("RepoName"),
AttributeType: types.ScalarAttributeTypeS,
},
},
KeySchema: []types.KeySchemaElement{
{
AttributeName: aws.String("RepoName"),
KeyType: types.KeyTypeHash,
},
},
BillingMode: types.BillingModePayPerRequest,
})
if err != nil && strings.Contains(err.Error(), "Table already exists") {
return nil
}
return err
}
func (dwr DBWrapper) deleteRepoMetaTable() error {
_, err := dwr.Client.DeleteTable(context.Background(), &dynamodb.DeleteTableInput{
TableName: aws.String(dwr.RepoMetaTablename),
})
return err
}
func (dwr DBWrapper) ResetRepoMetaTable() error {
err := dwr.deleteRepoMetaTable()
if err != nil {
return err
}
return dwr.createRepoMetaTable()
}
func (dwr DBWrapper) createManifestDataTable() error {
_, err := dwr.Client.CreateTable(context.Background(), &dynamodb.CreateTableInput{
TableName: aws.String(dwr.ManifestDataTablename),
AttributeDefinitions: []types.AttributeDefinition{
{
AttributeName: aws.String("Digest"),
AttributeType: types.ScalarAttributeTypeS,
},
},
KeySchema: []types.KeySchemaElement{
{
AttributeName: aws.String("Digest"),
KeyType: types.KeyTypeHash,
},
},
BillingMode: types.BillingModePayPerRequest,
})
if err != nil && strings.Contains(err.Error(), "Table already exists") {
return nil
}
return err
}
func (dwr *DBWrapper) createVersionTable() error {
_, err := dwr.Client.CreateTable(context.Background(), &dynamodb.CreateTableInput{
TableName: aws.String(dwr.VersionTablename),
AttributeDefinitions: []types.AttributeDefinition{
{
AttributeName: aws.String("VersionKey"),
AttributeType: types.ScalarAttributeTypeS,
},
},
KeySchema: []types.KeySchemaElement{
{
AttributeName: aws.String("VersionKey"),
KeyType: types.KeyTypeHash,
},
},
BillingMode: types.BillingModePayPerRequest,
})
if err != nil && strings.Contains(err.Error(), "Table already exists") {
return nil
}
if err == nil {
mdAttributeValue, err := attributevalue.Marshal(version.CurrentVersion)
if err != nil {
return err
}
_, err = dwr.Client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#V": "Version",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":Version": mdAttributeValue,
},
Key: map[string]types.AttributeValue{
"VersionKey": &types.AttributeValueMemberS{
Value: version.DBVersionKey,
},
},
TableName: aws.String(dwr.VersionTablename),
UpdateExpression: aws.String("SET #V = :Version"),
})
if err != nil {
return err
}
}
return err
}
func (dwr *DBWrapper) getDBVersion() (string, error) {
resp, err := dwr.Client.GetItem(context.TODO(), &dynamodb.GetItemInput{
TableName: aws.String(dwr.VersionTablename),
Key: map[string]types.AttributeValue{
"VersionKey": &types.AttributeValueMemberS{Value: version.DBVersionKey},
},
})
if err != nil {
return "", err
}
if resp.Item == nil {
return "", nil
}
var version string
err = attributevalue.Unmarshal(resp.Item["Version"], &version)
if err != nil {
return "", err
}
return version, nil
}
func (dwr DBWrapper) deleteManifestDataTable() error {
_, err := dwr.Client.DeleteTable(context.Background(), &dynamodb.DeleteTableInput{
TableName: aws.String(dwr.ManifestDataTablename),
})
return err
}
func (dwr DBWrapper) ResetManifestDataTable() error {
err := dwr.deleteManifestDataTable()
if err != nil {
return err
}
return dwr.createManifestDataTable()
}
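For local development the wrapper can be pointed at any DynamoDB-compatible endpoint (the tests above use localstack on http://localhost:4566); a minimal construction sketch, with the table names taken from those tests:

dwr, err := dynamo.NewDynamoDBWrapper(dynamoParams.DBDriverParameters{
	Endpoint:              "http://localhost:4566",
	Region:                "us-east-2",
	RepoMetaTablename:     "RepoMetadataTable",
	ManifestDataTablename: "ManifestDataTable",
	VersionTablename:      "Version",
})
if err != nil {
	return err
}
if err := dwr.PatchDB(); err != nil { // apply any pending schema patches
	return err
}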


@@ -0,0 +1,99 @@
package iterator
import (
"context"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
"zotregistry.io/zot/pkg/log"
)
type AttributesIterator interface {
First(ctx context.Context) (types.AttributeValue, error)
Next(ctx context.Context) (types.AttributeValue, error)
}
type BaseAttributesIterator struct {
Client *dynamodb.Client
Table string
Attribute string
itemBuffer []map[string]types.AttributeValue
currentItemIndex int
lastEvaluatedKey map[string]types.AttributeValue
readLimit *int32
log log.Logger
}
func NewBaseDynamoAttributesIterator(client *dynamodb.Client, table, attribute string, maxReadLimit int32,
log log.Logger,
) *BaseAttributesIterator {
var readLimit *int32
if maxReadLimit > 0 {
readLimit = &maxReadLimit
}
return &BaseAttributesIterator{
Client: client,
Table: table,
Attribute: attribute,
itemBuffer: []map[string]types.AttributeValue{},
currentItemIndex: 0,
readLimit: readLimit,
log: log,
}
}
func (dii *BaseAttributesIterator) First(ctx context.Context) (types.AttributeValue, error) {
scanOutput, err := dii.Client.Scan(ctx, &dynamodb.ScanInput{
TableName: aws.String(dii.Table),
Limit: dii.readLimit,
})
if err != nil {
return nil, err
}
if len(scanOutput.Items) == 0 {
return nil, nil
}
dii.itemBuffer = scanOutput.Items
dii.lastEvaluatedKey = scanOutput.LastEvaluatedKey
dii.currentItemIndex = 1
return dii.itemBuffer[0][dii.Attribute], nil
}
func (dii *BaseAttributesIterator) Next(ctx context.Context) (types.AttributeValue, error) {
if len(dii.itemBuffer) <= dii.currentItemIndex {
if dii.lastEvaluatedKey == nil {
return nil, nil
}
scanOutput, err := dii.Client.Scan(ctx, &dynamodb.ScanInput{
TableName: aws.String(dii.Table),
ExclusiveStartKey: dii.lastEvaluatedKey,
})
if err != nil {
return nil, err
}
// all items have been scanned
if len(scanOutput.Items) == 0 {
return nil, nil
}
dii.itemBuffer = scanOutput.Items
dii.lastEvaluatedKey = scanOutput.LastEvaluatedKey
dii.currentItemIndex = 0
}
nextItem := dii.itemBuffer[dii.currentItemIndex][dii.Attribute]
dii.currentItemIndex++
return nextItem, nil
}
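For context, the iterator above drives a paged DynamoDB Scan: First issues the initial scan, Next serves items from the buffered page and re-scans with ExclusiveStartKey once the buffer runs out, and a nil attribute value signals that the table has been fully read. A minimal usage sketch follows; the table name, attribute name, read limit and the iterator import path are assumptions made for illustration only.

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"

	"zotregistry.io/zot/pkg/log"
	"zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/iterator" // assumed import path
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}

	// Walk every "RepoMetadata" attribute in the repo metadata table, up to 1000 items per scan page.
	attrIterator := iterator.NewBaseDynamoAttributesIterator(
		dynamodb.NewFromConfig(cfg), "RepoMetadataTable", "RepoMetadata", 1000, log.NewLogger("debug", ""),
	)

	attr, err := attrIterator.First(ctx)
	for ; err == nil && attr != nil; attr, err = attrIterator.Next(ctx) {
		fmt.Printf("%T\n", attr) // each attr is one types.AttributeValue holding a repo entry
	}
	if err != nil {
		panic(err)
	}
}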


@ -0,0 +1,5 @@
package params
type DBDriverParameters struct {
Endpoint, Region, RepoMetaTablename, ManifestDataTablename, VersionTablename string
}


@ -0,0 +1,241 @@
package repodb
import (
"sort"
"github.com/pkg/errors"
zerr "zotregistry.io/zot/errors"
)
// PageFinder permits keeping a pool of objects using Add
// and returning a specific page.
type PageFinder interface {
// Add appends a DetailedRepoMeta to the pool of objects to be paginated
Add(detailedRepoMeta DetailedRepoMeta)
Page() []RepoMetadata
Reset()
}
// RepoPageFinder implements PageFinder. It manages RepoMeta objects and calculates the page
// using the given limit, offset and sortBy option.
type RepoPageFinder struct {
limit int
offset int
sortBy SortCriteria
pageBuffer []DetailedRepoMeta
}
func NewBaseRepoPageFinder(limit, offset int, sortBy SortCriteria) (*RepoPageFinder, error) {
if sortBy == "" {
sortBy = AlphabeticAsc
}
if limit < 0 {
return nil, zerr.ErrLimitIsNegative
}
if offset < 0 {
return nil, zerr.ErrOffsetIsNegative
}
if _, found := SortFunctions()[sortBy]; !found {
return nil, errors.Wrapf(zerr.ErrSortCriteriaNotSupported, "sorting repos by '%s' is not supported", sortBy)
}
return &RepoPageFinder{
limit: limit,
offset: offset,
sortBy: sortBy,
pageBuffer: make([]DetailedRepoMeta, 0, limit),
}, nil
}
func (bpt *RepoPageFinder) Reset() {
bpt.pageBuffer = []DetailedRepoMeta{}
}
func (bpt *RepoPageFinder) Add(namedRepoMeta DetailedRepoMeta) {
bpt.pageBuffer = append(bpt.pageBuffer, namedRepoMeta)
}
func (bpt *RepoPageFinder) Page() []RepoMetadata {
if len(bpt.pageBuffer) == 0 {
return []RepoMetadata{}
}
sort.Slice(bpt.pageBuffer, SortFunctions()[bpt.sortBy](bpt.pageBuffer))
// the offset and limit are calculated in terms of repos counted
start := bpt.offset
end := bpt.offset + bpt.limit
// we'll return an empty array when the offset is greater than the number of elements
if start >= len(bpt.pageBuffer) {
start = len(bpt.pageBuffer)
end = start
}
if end >= len(bpt.pageBuffer) {
end = len(bpt.pageBuffer)
}
detailedReposPage := bpt.pageBuffer[start:end]
if start == 0 && end == 0 {
detailedReposPage = bpt.pageBuffer
}
repos := make([]RepoMetadata, 0, len(detailedReposPage))
for _, drm := range detailedReposPage {
repos = append(repos, drm.RepoMeta)
}
return repos
}
type ImagePageFinder struct {
limit int
offset int
sortBy SortCriteria
pageBuffer []DetailedRepoMeta
}
func NewBaseImagePageFinder(limit, offset int, sortBy SortCriteria) (*ImagePageFinder, error) {
if sortBy == "" {
sortBy = AlphabeticAsc
}
if limit < 0 {
return nil, zerr.ErrLimitIsNegative
}
if offset < 0 {
return nil, zerr.ErrOffsetIsNegative
}
if _, found := SortFunctions()[sortBy]; !found {
return nil, errors.Wrapf(zerr.ErrSortCriteriaNotSupported, "sorting repos by '%s' is not supported", sortBy)
}
return &ImagePageFinder{
limit: limit,
offset: offset,
sortBy: sortBy,
pageBuffer: make([]DetailedRepoMeta, 0, limit),
}, nil
}
func (bpt *ImagePageFinder) Reset() {
bpt.pageBuffer = []DetailedRepoMeta{}
}
func (bpt *ImagePageFinder) Add(namedRepoMeta DetailedRepoMeta) {
bpt.pageBuffer = append(bpt.pageBuffer, namedRepoMeta)
}
func (bpt *ImagePageFinder) Page() []RepoMetadata {
if len(bpt.pageBuffer) == 0 {
return []RepoMetadata{}
}
sort.Slice(bpt.pageBuffer, SortFunctions()[bpt.sortBy](bpt.pageBuffer))
repoStartIndex := 0
tagStartIndex := 0
// the offset and limit are calculated in terms of tags counted
remainingOffset := bpt.offset
remainingLimit := bpt.limit
// bring cursor to position in RepoMeta array
for _, drm := range bpt.pageBuffer {
if remainingOffset < len(drm.RepoMeta.Tags) {
tagStartIndex = remainingOffset
break
}
remainingOffset -= len(drm.RepoMeta.Tags)
repoStartIndex++
}
// offset is larger than the number of tags
if repoStartIndex >= len(bpt.pageBuffer) {
return []RepoMetadata{}
}
repos := make([]RepoMetadata, 0)
// finish counting remaining tags inside the first repo meta
partialTags := map[string]Descriptor{}
firstRepoMeta := bpt.pageBuffer[repoStartIndex].RepoMeta
tags := make([]string, 0, len(firstRepoMeta.Tags))
for k := range firstRepoMeta.Tags {
tags = append(tags, k)
}
sort.Strings(tags)
for i := tagStartIndex; i < len(tags); i++ {
tag := tags[i]
partialTags[tag] = firstRepoMeta.Tags[tag]
remainingLimit--
if remainingLimit == 0 {
firstRepoMeta.Tags = partialTags
repos = append(repos, firstRepoMeta)
return repos
}
}
firstRepoMeta.Tags = partialTags
repos = append(repos, firstRepoMeta)
repoStartIndex++
// continue with the remaining repos
for i := repoStartIndex; i < len(bpt.pageBuffer); i++ {
repoMeta := bpt.pageBuffer[i].RepoMeta
if len(repoMeta.Tags) > remainingLimit {
partialTags := map[string]Descriptor{}
tags := make([]string, 0, len(repoMeta.Tags))
for k := range repoMeta.Tags {
tags = append(tags, k)
}
sort.Strings(tags)
for _, tag := range tags {
partialTags[tag] = repoMeta.Tags[tag]
remainingLimit--
if remainingLimit == 0 {
repoMeta.Tags = partialTags
repos = append(repos, repoMeta)
break
}
}
return repos
}
// add the whole repo
repos = append(repos, repoMeta)
remainingLimit -= len(repoMeta.Tags)
if remainingLimit == 0 {
return repos
}
}
// we arrive here when the limit is bigger than the number of tags
return repos
}
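Note that RepoPageFinder pages over whole repos while ImagePageFinder pages over tags: the offset first walks past complete repos, then into the tags of the first repo on the page, and the limit may truncate a repo's tag map. A small sketch of driving ImagePageFinder directly; the repo names, tags and digests are invented, and only the constructors and types defined above are used.

package main

import (
	"fmt"

	"zotregistry.io/zot/pkg/meta/repodb"
)

func main() {
	// Two tags per page, starting after the first two tags overall, repos sorted by name.
	pageFinder, err := repodb.NewBaseImagePageFinder(2, 2, repodb.AlphabeticAsc)
	if err != nil {
		panic(err)
	}

	pageFinder.Add(repodb.DetailedRepoMeta{
		RepoMeta: repodb.RepoMetadata{
			Name: "repo1",
			Tags: map[string]repodb.Descriptor{
				"v1": {Digest: "digest-a"},
				"v2": {Digest: "digest-b"},
			},
		},
	})
	pageFinder.Add(repodb.DetailedRepoMeta{
		RepoMeta: repodb.RepoMetadata{
			Name: "repo2",
			Tags: map[string]repodb.Descriptor{
				"v1": {Digest: "digest-c"},
				"v2": {Digest: "digest-d"},
			},
		},
	})

	// The offset consumes repo1's two tags, so the page holds repo2 with both of its tags.
	for _, repoMeta := range pageFinder.Page() {
		fmt.Println(repoMeta.Name, len(repoMeta.Tags))
	}
}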


@ -0,0 +1,178 @@
package repodb_test
import (
"testing"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
. "github.com/smartystreets/goconvey/convey"
"zotregistry.io/zot/pkg/meta/repodb"
)
func TestPagination(t *testing.T) {
Convey("Repo Pagination", t, func() {
Convey("reset", func() {
pageFinder, err := repodb.NewBaseRepoPageFinder(1, 0, repodb.AlphabeticAsc)
So(err, ShouldBeNil)
So(pageFinder, ShouldNotBeNil)
pageFinder.Add(repodb.DetailedRepoMeta{})
pageFinder.Add(repodb.DetailedRepoMeta{})
pageFinder.Add(repodb.DetailedRepoMeta{})
pageFinder.Reset()
So(pageFinder.Page(), ShouldBeEmpty)
})
})
Convey("Image Pagination", t, func() {
Convey("create new pageFinder errors", func() {
pageFinder, err := repodb.NewBaseImagePageFinder(-1, 10, repodb.AlphabeticAsc)
So(pageFinder, ShouldBeNil)
So(err, ShouldNotBeNil)
pageFinder, err = repodb.NewBaseImagePageFinder(2, -1, repodb.AlphabeticAsc)
So(pageFinder, ShouldBeNil)
So(err, ShouldNotBeNil)
pageFinder, err = repodb.NewBaseImagePageFinder(2, 1, "wrong sorting criteria")
So(pageFinder, ShouldBeNil)
So(err, ShouldNotBeNil)
})
Convey("Reset", func() {
pageFinder, err := repodb.NewBaseImagePageFinder(1, 0, repodb.AlphabeticAsc)
So(err, ShouldBeNil)
So(pageFinder, ShouldNotBeNil)
pageFinder.Add(repodb.DetailedRepoMeta{})
pageFinder.Add(repodb.DetailedRepoMeta{})
pageFinder.Add(repodb.DetailedRepoMeta{})
pageFinder.Reset()
So(pageFinder.Page(), ShouldBeEmpty)
})
Convey("Page", func() {
Convey("limit < len(tags)", func() {
pageFinder, err := repodb.NewBaseImagePageFinder(5, 2, repodb.AlphabeticAsc)
So(err, ShouldBeNil)
So(pageFinder, ShouldNotBeNil)
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repodb.RepoMetadata{
Name: "repo1",
Tags: map[string]repodb.Descriptor{
"tag1": {
Digest: "dig1",
MediaType: ispec.MediaTypeImageManifest,
},
},
},
})
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repodb.RepoMetadata{
Name: "repo2",
Tags: map[string]repodb.Descriptor{
"Tag1": {
Digest: "dig1",
MediaType: ispec.MediaTypeImageManifest,
},
"Tag2": {
Digest: "dig2",
MediaType: ispec.MediaTypeImageManifest,
},
"Tag3": {
Digest: "dig3",
MediaType: ispec.MediaTypeImageManifest,
},
"Tag4": {
Digest: "dig4",
MediaType: ispec.MediaTypeImageManifest,
},
},
},
})
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repodb.RepoMetadata{
Name: "repo3",
Tags: map[string]repodb.Descriptor{
"Tag11": {
Digest: "dig11",
MediaType: ispec.MediaTypeImageManifest,
},
"Tag12": {
Digest: "dig12",
MediaType: ispec.MediaTypeImageManifest,
},
"Tag13": {
Digest: "dig13",
MediaType: ispec.MediaTypeImageManifest,
},
"Tag14": {
Digest: "dig14",
MediaType: ispec.MediaTypeImageManifest,
},
},
},
})
result := pageFinder.Page()
So(result[0].Tags, ShouldContainKey, "Tag2")
So(result[0].Tags, ShouldContainKey, "Tag3")
So(result[0].Tags, ShouldContainKey, "Tag4")
So(result[1].Tags, ShouldContainKey, "Tag11")
So(result[1].Tags, ShouldContainKey, "Tag12")
})
Convey("limit > len(tags)", func() {
pageFinder, err := repodb.NewBaseImagePageFinder(3, 0, repodb.AlphabeticAsc)
So(err, ShouldBeNil)
So(pageFinder, ShouldNotBeNil)
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repodb.RepoMetadata{
Name: "repo1",
Tags: map[string]repodb.Descriptor{
"tag1": {
Digest: "dig1",
MediaType: ispec.MediaTypeImageManifest,
},
},
},
})
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repodb.RepoMetadata{
Name: "repo2",
Tags: map[string]repodb.Descriptor{
"Tag1": {
Digest: "dig1",
MediaType: ispec.MediaTypeImageManifest,
},
},
},
})
pageFinder.Add(repodb.DetailedRepoMeta{
RepoMeta: repodb.RepoMetadata{
Name: "repo3",
Tags: map[string]repodb.Descriptor{
"Tag11": {
Digest: "dig11",
MediaType: ispec.MediaTypeImageManifest,
},
},
},
})
result := pageFinder.Page()
So(result[0].Tags, ShouldContainKey, "tag1")
So(result[1].Tags, ShouldContainKey, "Tag1")
So(result[2].Tags, ShouldContainKey, "Tag11")
})
})
})
}

pkg/meta/repodb/repodb.go (new file, 158 lines)

@ -0,0 +1,158 @@
package repodb
import (
"context"
godigest "github.com/opencontainers/go-digest"
)
// MetadataDB.
const (
ManifestDataBucket = "ManifestData"
UserMetadataBucket = "UserMeta"
RepoMetadataBucket = "RepoMetadata"
VersionBucket = "Version"
)
const (
SignaturesDirPath = "/tmp/zot/signatures"
SigKey = "dev.cosignproject.cosign/signature"
NotationType = "notation"
CosignType = "cosign"
)
type RepoDB interface { //nolint:interfacebloat
// IncrementRepoStars adds 1 to the star count of a repo
IncrementRepoStars(repo string) error
// DecrementRepoStars subtracts 1 from the star count of a repo
DecrementRepoStars(repo string) error
// GetRepoStars returns the total number of stars a repo has
GetRepoStars(repo string) (int, error)
// SetRepoTag sets the tag of a manifest in the tag list of a repo
SetRepoTag(repo string, tag string, manifestDigest godigest.Digest, mediaType string) error
// DeleteRepoTag deletes the tag from the tag list of a repo
DeleteRepoTag(repo string, tag string) error
// GetRepoMeta returns RepoMetadata of a repo from the database
GetRepoMeta(repo string) (RepoMetadata, error)
// GetMultipleRepoMeta returns information about all repositories as a slice of RepoMetadata values,
// filtered by the given filter function
GetMultipleRepoMeta(ctx context.Context, filter func(repoMeta RepoMetadata) bool, requestedPage PageInput) (
[]RepoMetadata, error)
// SetManifestData sets ManifestData for a given manifest in the database
SetManifestData(manifestDigest godigest.Digest, md ManifestData) error
// GetManifestData returns the manifest and its related config
GetManifestData(manifestDigest godigest.Digest) (ManifestData, error)
// GetManifestMeta returns ManifestMetadata for a given manifest from the database
GetManifestMeta(repo string, manifestDigest godigest.Digest) (ManifestMetadata, error)
// SetManifestMeta sets ManifestMetadata for a given manifest in the database
SetManifestMeta(repo string, manifestDigest godigest.Digest, mm ManifestMetadata) error
// IncrementImageDownloads adds 1 to the download count of an image
IncrementImageDownloads(repo string, reference string) error
// AddManifestSignature adds signature metadata to a given manifest in the database
AddManifestSignature(repo string, signedManifestDigest godigest.Digest, sm SignatureMetadata) error
// DeleteSignature deletes signature metadata for a given manifest from the database
DeleteSignature(repo string, signedManifestDigest godigest.Digest, sm SignatureMetadata) error
// SearchRepos searches for repos given a search string
SearchRepos(ctx context.Context, searchText string, filter Filter, requestedPage PageInput) (
[]RepoMetadata, map[string]ManifestMetadata, error)
// SearchTags searches for images (repo:tag) given a search string
SearchTags(ctx context.Context, searchText string, filter Filter, requestedPage PageInput) (
[]RepoMetadata, map[string]ManifestMetadata, error)
PatchDB() error
}
type ManifestMetadata struct {
ManifestBlob []byte
ConfigBlob []byte
DownloadCount int
Signatures ManifestSignatures
}
type ManifestData struct {
ManifestBlob []byte
ConfigBlob []byte
}
// Descriptor represents an image. Multiple images might have the same digests but different tags.
type Descriptor struct {
Digest string
MediaType string
}
type DescriptorStatistics struct {
DownloadCount int
}
type ManifestSignatures map[string][]SignatureInfo
type RepoMetadata struct {
Name string
Tags map[string]Descriptor
Statistics map[string]DescriptorStatistics
Signatures map[string]ManifestSignatures
Stars int
}
type LayerInfo struct {
LayerDigest string
LayerContent []byte
SignatureKey string
Signer string
}
type SignatureInfo struct {
SignatureManifestDigest string
LayersInfo []LayerInfo
}
type SignatureMetadata struct {
SignatureType string
SignatureDigest string
LayersInfo []LayerInfo
}
type SortCriteria string
const (
Relevance = SortCriteria("RELEVANCE")
UpdateTime = SortCriteria("UPDATE_TIME")
AlphabeticAsc = SortCriteria("ALPHABETIC_ASC")
AlphabeticDsc = SortCriteria("ALPHABETIC_DSC")
Stars = SortCriteria("STARS")
Downloads = SortCriteria("DOWNLOADS")
)
type PageInput struct {
Limit int
Offset int
SortBy SortCriteria
}
type Filter struct {
Os []*string
Arch []*string
HasToBeSigned *bool
}
type FilterData struct {
OsList []string
ArchList []string
IsSigned bool
}
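The interface splits into write paths that mirror registry events (SetManifestMeta, SetRepoTag, IncrementImageDownloads, AddManifestSignature) and read paths that back the search extension (GetRepoMeta, SearchRepos, SearchTags). Below is a hedged end-to-end sketch against the boltdb wrapper used elsewhere in this PR; the repo name, tag, blobs and digest are placeholders, and only methods declared above are called.

package main

import (
	"context"
	"fmt"

	godigest "github.com/opencontainers/go-digest"
	ispec "github.com/opencontainers/image-spec/specs-go/v1"

	"zotregistry.io/zot/pkg/meta/repodb"
	bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
)

func main() {
	repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{RootDir: "/tmp/zot-meta"})
	if err != nil {
		panic(err)
	}

	manifestBlob := []byte(`{"schemaVersion": 2}`) // placeholder manifest
	configBlob := []byte(`{}`)                     // placeholder config
	digest := godigest.FromBytes(manifestBlob)

	// Record the manifest, attach it to repo "alpine" under tag "3.17", then count one pull.
	if err := repoDB.SetManifestMeta("alpine", digest, repodb.ManifestMetadata{
		ManifestBlob: manifestBlob,
		ConfigBlob:   configBlob,
		Signatures:   repodb.ManifestSignatures{},
	}); err != nil {
		panic(err)
	}
	if err := repoDB.SetRepoTag("alpine", "3.17", digest, ispec.MediaTypeImageManifest); err != nil {
		panic(err)
	}
	if err := repoDB.IncrementImageDownloads("alpine", "3.17"); err != nil {
		panic(err)
	}

	repos, _, err := repoDB.SearchRepos(context.Background(), "alp",
		repodb.Filter{}, repodb.PageInput{Limit: 10, SortBy: repodb.AlphabeticAsc})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(repos), "repo(s) matched")
}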

File diff suppressed because it is too large


@ -0,0 +1,36 @@
package repodbfactory
import (
"zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/meta/repodb"
boltdb_wrapper "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
dynamodb_wrapper "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
)
func Create(dbtype string, parameters interface{}) (repodb.RepoDB, error) { //nolint:contextcheck
switch dbtype {
case "boltdb":
{
properParameters, ok := parameters.(boltdb_wrapper.DBParameters)
if !ok {
panic("failed type assertion")
}
return boltdb_wrapper.NewBoltDBWrapper(properParameters)
}
case "dynamodb":
{
properParameters, ok := parameters.(dynamoParams.DBDriverParameters)
if !ok {
panic("failed type assertion")
}
return dynamodb_wrapper.NewDynamoDBWrapper(properParameters)
}
default:
{
return nil, errors.ErrBadConfig
}
}
}


@ -0,0 +1,62 @@
package repodbfactory_test
import (
"os"
"testing"
. "github.com/smartystreets/goconvey/convey"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
"zotregistry.io/zot/pkg/meta/repodb/repodbfactory"
)
func TestCreateDynamo(t *testing.T) {
skipDynamo(t)
Convey("Create", t, func() {
dynamoDBDriverParams := dynamoParams.DBDriverParameters{
Endpoint: os.Getenv("DYNAMODBMOCK_ENDPOINT"),
RepoMetaTablename: "RepoMetadataTable",
ManifestDataTablename: "ManifestDataTable",
VersionTablename: "Version",
Region: "us-east-2",
}
repoDB, err := repodbfactory.Create("dynamodb", dynamoDBDriverParams)
So(repoDB, ShouldNotBeNil)
So(err, ShouldBeNil)
})
Convey("Fails", t, func() {
So(func() { _, _ = repodbfactory.Create("dynamodb", bolt.DBParameters{RootDir: "root"}) }, ShouldPanic)
repoDB, err := repodbfactory.Create("random", bolt.DBParameters{RootDir: "root"})
So(repoDB, ShouldBeNil)
So(err, ShouldNotBeNil)
})
}
func TestCreateBoltDB(t *testing.T) {
Convey("Create", t, func() {
rootDir := t.TempDir()
repoDB, err := repodbfactory.Create("boltdb", bolt.DBParameters{
RootDir: rootDir,
})
So(repoDB, ShouldNotBeNil)
So(err, ShouldBeNil)
})
Convey("fails", t, func() {
So(func() { _, _ = repodbfactory.Create("boltdb", dynamoParams.DBDriverParameters{}) }, ShouldPanic)
})
}
func skipDynamo(t *testing.T) {
t.Helper()
if os.Getenv("DYNAMODBMOCK_ENDPOINT") == "" {
t.Skip("Skipping testing without AWS DynamoDB mock server")
}
}


@ -0,0 +1,273 @@
package repodb
import (
"encoding/json"
"errors"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/storage"
)
// SyncRepoDB syncs all repos found in the root directory of the OCI layout that zot serves.
func SyncRepoDB(repoDB RepoDB, storeController storage.StoreController, log log.Logger) error {
allRepos, err := getAllRepos(storeController)
if err != nil {
rootDir := storeController.DefaultStore.RootDir()
log.Error().Err(err).Msgf("sync-repodb: failed to get all repo names present under %s", rootDir)
return err
}
for _, repo := range allRepos {
err := SyncRepo(repo, repoDB, storeController, log)
if err != nil {
log.Error().Err(err).Msgf("sync-repodb: failed to sync repo %s", repo)
return err
}
}
return nil
}
// SyncRepo reads the contents of a repo and syncs all images and signatures found.
func SyncRepo(repo string, repoDB RepoDB, storeController storage.StoreController, log log.Logger) error {
imageStore := storeController.GetImageStore(repo)
indexBlob, err := imageStore.GetIndexContent(repo)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to read index.json for repo %s", repo)
return err
}
var indexContent ispec.Index
err = json.Unmarshal(indexBlob, &indexContent)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to unmarshal index.json for repo %s", repo)
return err
}
err = resetRepoMetaTags(repo, repoDB, log)
if err != nil && !errors.Is(err, zerr.ErrRepoMetaNotFound) {
log.Error().Err(err).Msgf("sync-repo: failed to reset tag field in RepoMetadata for repo %s", repo)
return err
}
type foundSignatureData struct {
repo string
tag string
signatureType string
signedManifestDigest string
signatureDigest string
}
var signaturesFound []foundSignatureData
for _, manifest := range indexContent.Manifests {
tag, hasTag := manifest.Annotations[ispec.AnnotationRefName]
if !hasTag {
log.Warn().Msgf("sync-repo: image without tag found, will not be synced into RepoDB")
continue
}
manifestMetaIsPresent, err := isManifestMetaPresent(repo, manifest, repoDB)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: error checking manifestMeta in RepoDB")
return err
}
if manifestMetaIsPresent {
err = repoDB.SetRepoTag(repo, tag, manifest.Digest, manifest.MediaType)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to set repo tag for %s:%s", repo, tag)
return err
}
continue
}
manifestBlob, digest, _, err := imageStore.GetImageManifest(repo, manifest.Digest.String())
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to set repo tag for %s:%s", repo, tag)
return err
}
isSignature, signatureType, signedManifestDigest, err := storage.CheckIsImageSignature(repo,
manifestBlob, tag, storeController)
if err != nil {
if errors.Is(err, zerr.ErrOrphanSignature) {
continue
} else {
log.Error().Err(err).Msgf("sync-repo: failed checking if image is signature for %s:%s", repo, tag)
return err
}
}
if isSignature {
// We'll ignore signatures now because the order in which the signed image and signature are added into
// the DB matters. First we add the normal images then the signatures
signaturesFound = append(signaturesFound, foundSignatureData{
repo: repo,
tag: tag,
signatureType: signatureType,
signedManifestDigest: signedManifestDigest.String(),
signatureDigest: digest.String(),
})
continue
}
manifestData, err := NewManifestData(repo, manifestBlob, storeController)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to create manifest data for image %s:%s manifest digest %s ",
repo, tag, manifest.Digest.String())
return err
}
err = repoDB.SetManifestMeta(repo, manifest.Digest, ManifestMetadata{
ManifestBlob: manifestData.ManifestBlob,
ConfigBlob: manifestData.ConfigBlob,
DownloadCount: 0,
Signatures: ManifestSignatures{},
})
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to set manifest meta for image %s:%s manifest digest %s ",
repo, tag, manifest.Digest.String())
return err
}
err = repoDB.SetRepoTag(repo, tag, manifest.Digest, manifest.MediaType)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to repo tag for repo %s and tag %s",
repo, tag)
return err
}
}
// manage the signatures found
for _, sigData := range signaturesFound {
err := repoDB.AddManifestSignature(repo, godigest.Digest(sigData.signedManifestDigest), SignatureMetadata{
SignatureType: sigData.signatureType,
SignatureDigest: sigData.signatureDigest,
})
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed set signature meta for signed image %s:%s manifest digest %s ",
sigData.repo, sigData.tag, sigData.signedManifestDigest)
return err
}
}
return nil
}
// resetRepoMetaTags will delete all tags from a RepoMetadata entry.
func resetRepoMetaTags(repo string, repoDB RepoDB, log log.Logger) error {
repoMeta, err := repoDB.GetRepoMeta(repo)
if err != nil && !errors.Is(err, zerr.ErrRepoMetaNotFound) {
log.Error().Err(err).Msgf("sync-repo: failed to get RepoMeta for repo %s", repo)
return err
}
if errors.Is(err, zerr.ErrRepoMetaNotFound) {
log.Info().Msgf("sync-repo: RepoMeta not found for repo %s, new RepoMeta will be created", repo)
return nil
}
for tag := range repoMeta.Tags {
// We should have a way to delete all tags at once
err := repoDB.DeleteRepoTag(repo, tag)
if err != nil {
log.Error().Err(err).Msgf("sync-repo: failed to delete tag %s from RepoMeta for repo %s", tag, repo)
return err
}
}
return nil
}
func getAllRepos(storeController storage.StoreController) ([]string, error) {
allRepos, err := storeController.DefaultStore.GetRepositories()
if err != nil {
return nil, err
}
if storeController.SubStore != nil {
for _, store := range storeController.SubStore {
substoreRepos, err := store.GetRepositories()
if err != nil {
return nil, err
}
allRepos = append(allRepos, substoreRepos...)
}
}
return allRepos, nil
}
// isManifestMetaPresent checks if the manifest with a certain digest is present in a certain repo.
func isManifestMetaPresent(repo string, manifest ispec.Descriptor, repoDB RepoDB) (bool, error) {
_, err := repoDB.GetManifestMeta(repo, manifest.Digest)
if err != nil && !errors.Is(err, zerr.ErrManifestMetaNotFound) {
return false, err
}
if errors.Is(err, zerr.ErrManifestMetaNotFound) {
return false, nil
}
return true, nil
}
// NewManifestData takes raw data about an image and creates a new ManifestData object.
func NewManifestData(repoName string, manifestBlob []byte, storeController storage.StoreController,
) (ManifestData, error) {
var (
manifestContent ispec.Manifest
configContent ispec.Image
manifestData ManifestData
)
imgStore := storeController.GetImageStore(repoName)
err := json.Unmarshal(manifestBlob, &manifestContent)
if err != nil {
return ManifestData{}, err
}
configBlob, err := imgStore.GetBlobContent(repoName, manifestContent.Config.Digest)
if err != nil {
return ManifestData{}, err
}
err = json.Unmarshal(configBlob, &configContent)
if err != nil {
return ManifestData{}, err
}
manifestData.ManifestBlob = manifestBlob
manifestData.ConfigBlob = configBlob
return manifestData, nil
}
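SyncRepoDB is meant to run once at startup so the metadata DB catches up with whatever is already on disk: tags, manifest metadata and signature entries are rebuilt, untagged manifests are skipped and orphan signatures are ignored. A sketch of wiring it up follows; the paths are placeholders and the constructors are the same ones the tests below use.

package main

import (
	"zotregistry.io/zot/pkg/extensions/monitoring"
	"zotregistry.io/zot/pkg/log"
	"zotregistry.io/zot/pkg/meta/repodb"
	bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
	"zotregistry.io/zot/pkg/storage"
	"zotregistry.io/zot/pkg/storage/local"
)

func main() {
	logger := log.NewLogger("debug", "")
	metrics := monitoring.NewMetricsServer(false, logger)

	rootDir := "/var/lib/registry" // placeholder OCI root directory
	storeController := storage.StoreController{
		DefaultStore: local.NewImageStore(rootDir, false, 0, false, false, logger, metrics, nil, nil),
	}

	repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{RootDir: rootDir})
	if err != nil {
		panic(err)
	}

	// Walk every repo under the root directory and rebuild the metadata DB.
	if err := repodb.SyncRepoDB(repoDB, storeController, logger); err != nil {
		panic(err)
	}
}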


@ -0,0 +1,647 @@
package repodb_test
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"os"
"path"
"testing"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
oras "github.com/oras-project/artifacts-spec/specs-go/v1"
. "github.com/smartystreets/goconvey/convey"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/extensions/monitoring"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
dynamo "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
"zotregistry.io/zot/pkg/storage"
"zotregistry.io/zot/pkg/storage/local"
"zotregistry.io/zot/pkg/test"
"zotregistry.io/zot/pkg/test/mocks"
)
const repo = "repo"
var ErrTestError = errors.New("test error")
func TestSyncRepoDBErrors(t *testing.T) {
Convey("SyncRepoDB", t, func() {
imageStore := mocks.MockedImageStore{
GetIndexContentFn: func(repo string) ([]byte, error) {
return nil, ErrTestError
},
GetRepositoriesFn: func() ([]string, error) {
return []string{"repo1", "repo2"}, nil
},
}
storeController := storage.StoreController{DefaultStore: imageStore}
repoDB := mocks.RepoDBMock{}
// sync repo fail
err := repodb.SyncRepoDB(repoDB, storeController, log.NewLogger("debug", ""))
So(err, ShouldNotBeNil)
Convey("getAllRepos errors", func() {
imageStore1 := mocks.MockedImageStore{
GetRepositoriesFn: func() ([]string, error) {
return []string{"repo1", "repo2"}, nil
},
}
imageStore2 := mocks.MockedImageStore{
GetRepositoriesFn: func() ([]string, error) {
return nil, ErrTestError
},
}
storeController := storage.StoreController{
DefaultStore: imageStore1,
SubStore: map[string]storage.ImageStore{
"a": imageStore2,
},
}
err := repodb.SyncRepoDB(repoDB, storeController, log.NewLogger("debug", ""))
So(err, ShouldNotBeNil)
})
})
Convey("SyncRepo", t, func() {
imageStore := mocks.MockedImageStore{}
storeController := storage.StoreController{DefaultStore: &imageStore}
repoDB := mocks.RepoDBMock{}
log := log.NewLogger("debug", "")
Convey("imageStore.GetIndexContent errors", func() {
imageStore.GetIndexContentFn = func(repo string) ([]byte, error) {
return nil, ErrTestError
}
err := repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
Convey("json.Unmarshal errors", func() {
imageStore.GetIndexContentFn = func(repo string) ([]byte, error) {
return []byte("Invalid JSON"), nil
}
err := repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
Convey("resetRepoMetaTags errors", func() {
imageStore.GetIndexContentFn = func(repo string) ([]byte, error) {
return []byte("{}"), nil
}
Convey("repoDB.GetRepoMeta errors", func() {
repoDB.GetRepoMetaFn = func(repo string) (repodb.RepoMetadata, error) {
return repodb.RepoMetadata{}, ErrTestError
}
err := repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
Convey("repoDB.DeleteRepoTag errors", func() {
repoDB.GetRepoMetaFn = func(repo string) (repodb.RepoMetadata, error) {
return repodb.RepoMetadata{
Tags: map[string]repodb.Descriptor{
"digest1": {
Digest: "tag1",
MediaType: ispec.MediaTypeImageManifest,
},
},
}, nil
}
repoDB.DeleteRepoTagFn = func(repo, tag string) error {
return ErrTestError
}
err := repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
})
Convey("isManifestMetaPresent errors", func() {
indexContent := ispec.Index{
Manifests: []ispec.Descriptor{
{
Digest: godigest.FromString("manifest1"),
MediaType: ispec.MediaTypeImageManifest,
Annotations: map[string]string{
ispec.AnnotationRefName: "tag1",
},
},
},
}
indexBlob, err := json.Marshal(indexContent)
So(err, ShouldBeNil)
imageStore.GetIndexContentFn = func(repo string) ([]byte, error) {
return indexBlob, nil
}
Convey("repoDB.GetManifestMeta errors", func() {
repoDB.GetManifestMetaFn = func(repo string, manifestDigest godigest.Digest) (repodb.ManifestMetadata, error) {
return repodb.ManifestMetadata{}, ErrTestError
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
})
Convey("manifestMetaIsPresent true", func() {
indexContent := ispec.Index{
Manifests: []ispec.Descriptor{
{
Digest: godigest.FromString("manifest1"),
MediaType: ispec.MediaTypeImageManifest,
Annotations: map[string]string{
ispec.AnnotationRefName: "tag1",
},
},
},
}
indexBlob, err := json.Marshal(indexContent)
So(err, ShouldBeNil)
imageStore.GetIndexContentFn = func(repo string) ([]byte, error) {
return indexBlob, nil
}
Convey("repoDB.SetRepoTag", func() {
repoDB.SetRepoTagFn = func(repo, tag string, manifestDigest godigest.Digest, mediaType string) error {
return ErrTestError
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
})
Convey("manifestMetaIsPresent false", func() {
indexContent := ispec.Index{
Manifests: []ispec.Descriptor{
{
Digest: godigest.FromString("manifest1"),
MediaType: ispec.MediaTypeImageManifest,
Annotations: map[string]string{
ispec.AnnotationRefName: "tag1",
},
},
},
}
indexBlob, err := json.Marshal(indexContent)
So(err, ShouldBeNil)
imageStore.GetIndexContentFn = func(repo string) ([]byte, error) {
return indexBlob, nil
}
repoDB.GetManifestMetaFn = func(repo string, manifestDigest godigest.Digest) (repodb.ManifestMetadata, error) {
return repodb.ManifestMetadata{}, zerr.ErrManifestMetaNotFound
}
Convey("GetImageManifest errors", func() {
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return nil, "", "", ErrTestError
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
Convey("CheckIsImageSignature errors", func() {
// CheckIsImageSignature will fail because of an invalid JSON manifest
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return []byte("Invalid JSON"), "", "", nil
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
Convey("CheckIsImageSignature -> not signature", func() {
manifestContent := ispec.Manifest{}
manifestBlob, err := json.Marshal(manifestContent)
So(err, ShouldBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return manifestBlob, "", "", nil
}
Convey("imgStore.GetBlobContent errors", func() {
imageStore.GetBlobContentFn = func(repo string, digest godigest.Digest) ([]byte, error) {
return nil, ErrTestError
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
Convey("json.Unmarshal(configBlob errors", func() {
imageStore.GetBlobContentFn = func(repo string, digest godigest.Digest) ([]byte, error) {
return []byte("invalid JSON"), nil
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
})
Convey("CheckIsImageSignature -> is signature", func() {
manifestContent := oras.Manifest{
Subject: &oras.Descriptor{
Digest: "123",
},
}
manifestBlob, err := json.Marshal(manifestContent)
So(err, ShouldBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return manifestBlob, "", "", nil
}
repoDB.AddManifestSignatureFn = func(repo string, signedManifestDigest godigest.Digest,
sm repodb.SignatureMetadata,
) error {
return ErrTestError
}
err = repodb.SyncRepo("repo", repoDB, storeController, log)
So(err, ShouldNotBeNil)
})
})
})
}
func TestSyncRepoDBWithStorage(t *testing.T) {
Convey("Boltdb", t, func() {
rootDir := t.TempDir()
imageStore := local.NewImageStore(rootDir, false, 0, false, false,
log.NewLogger("debug", ""), monitoring.NewMetricsServer(false, log.NewLogger("debug", "")), nil, nil)
storeController := storage.StoreController{DefaultStore: imageStore}
manifests := []ispec.Manifest{}
for i := 0; i < 3; i++ {
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
manifests = append(manifests, manifest)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: fmt.Sprintf("tag%d", i),
},
repo,
storeController)
So(err, ShouldBeNil)
}
// add fake signature for tag1
signatureTag, err := test.GetCosignSignatureTagForManifest(manifests[1])
So(err, ShouldBeNil)
manifestBlob, err := json.Marshal(manifests[1])
So(err, ShouldBeNil)
signedManifestDigest := godigest.FromBytes(manifestBlob)
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: signatureTag,
},
repo,
storeController)
So(err, ShouldBeNil)
// remove tag2 from index.json
indexPath := path.Join(rootDir, repo, "index.json")
indexFile, err := os.Open(indexPath)
So(err, ShouldBeNil)
buf, err := io.ReadAll(indexFile)
So(err, ShouldBeNil)
var index ispec.Index
if err = json.Unmarshal(buf, &index); err == nil {
for _, manifest := range index.Manifests {
if val, ok := manifest.Annotations[ispec.AnnotationRefName]; ok && val == "tag2" {
delete(manifest.Annotations, ispec.AnnotationRefName)
break
}
}
}
buf, err = json.Marshal(index)
So(err, ShouldBeNil)
err = os.WriteFile(indexPath, buf, 0o600)
So(err, ShouldBeNil)
repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{
RootDir: rootDir,
})
So(err, ShouldBeNil)
err = repodb.SyncRepoDB(repoDB, storeController, log.NewLogger("debug", ""))
So(err, ShouldBeNil)
repos, err := repoDB.GetMultipleRepoMeta(
context.Background(),
func(repoMeta repodb.RepoMetadata) bool { return true },
repodb.PageInput{},
)
So(err, ShouldBeNil)
So(len(repos), ShouldEqual, 1)
So(len(repos[0].Tags), ShouldEqual, 2)
for _, descriptor := range repos[0].Tags {
manifestMeta, err := repoDB.GetManifestMeta(repo, godigest.Digest(descriptor.Digest))
So(err, ShouldBeNil)
So(manifestMeta.ManifestBlob, ShouldNotBeNil)
So(manifestMeta.ConfigBlob, ShouldNotBeNil)
if descriptor.Digest == signedManifestDigest.String() {
So(repos[0].Signatures[descriptor.Digest], ShouldNotBeEmpty)
So(manifestMeta.Signatures["cosign"], ShouldNotBeEmpty)
}
}
})
Convey("Ignore orphan signatures", t, func() {
rootDir := t.TempDir()
imageStore := local.NewImageStore(rootDir, false, 0, false, false,
log.NewLogger("debug", ""), monitoring.NewMetricsServer(false, log.NewLogger("debug", "")), nil, nil)
storeController := storage.StoreController{DefaultStore: imageStore}
// add an image
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: "tag1",
},
repo,
storeController)
So(err, ShouldBeNil)
// add mock cosign signature without pushing the signed image
_, _, manifest, err = test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
signatureTag, err := test.GetCosignSignatureTagForManifest(manifest)
So(err, ShouldBeNil)
// get the body of the signature
config, layers, manifest, err = test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: signatureTag,
},
repo,
storeController)
So(err, ShouldBeNil)
// test that we have only 1 image inside the repo
repoDB, err := bolt.NewBoltDBWrapper(bolt.DBParameters{
RootDir: rootDir,
})
So(err, ShouldBeNil)
err = repodb.SyncRepoDB(repoDB, storeController, log.NewLogger("debug", ""))
So(err, ShouldBeNil)
repos, err := repoDB.GetMultipleRepoMeta(
context.Background(),
func(repoMeta repodb.RepoMetadata) bool { return true },
repodb.PageInput{},
)
So(err, ShouldBeNil)
So(len(repos), ShouldEqual, 1)
So(repos[0].Tags, ShouldContainKey, "tag1")
So(repos[0].Tags, ShouldNotContainKey, signatureTag)
})
}
func TestSyncRepoDBDynamoWrapper(t *testing.T) {
skipIt(t)
Convey("Dynamodb", t, func() {
rootDir := t.TempDir()
imageStore := local.NewImageStore(rootDir, false, 0, false, false,
log.NewLogger("debug", ""), monitoring.NewMetricsServer(false, log.NewLogger("debug", "")), nil, nil)
storeController := storage.StoreController{DefaultStore: imageStore}
manifests := []ispec.Manifest{}
for i := 0; i < 3; i++ {
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
manifests = append(manifests, manifest)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: fmt.Sprintf("tag%d", i),
},
repo,
storeController)
So(err, ShouldBeNil)
}
// add fake signature for tag1
signatureTag, err := test.GetCosignSignatureTagForManifest(manifests[1])
So(err, ShouldBeNil)
manifestBlob, err := json.Marshal(manifests[1])
So(err, ShouldBeNil)
signedManifestDigest := godigest.FromBytes(manifestBlob)
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: signatureTag,
},
repo,
storeController)
So(err, ShouldBeNil)
// remove tag2 from index.json
indexPath := path.Join(rootDir, repo, "index.json")
indexFile, err := os.Open(indexPath)
So(err, ShouldBeNil)
buf, err := io.ReadAll(indexFile)
So(err, ShouldBeNil)
var index ispec.Index
if err = json.Unmarshal(buf, &index); err == nil {
for _, manifest := range index.Manifests {
if val, ok := manifest.Annotations[ispec.AnnotationRefName]; ok && val == "tag2" {
delete(manifest.Annotations, ispec.AnnotationRefName)
break
}
}
}
buf, err = json.Marshal(index)
So(err, ShouldBeNil)
err = os.WriteFile(indexPath, buf, 0o600)
So(err, ShouldBeNil)
dynamoWrapper, err := dynamo.NewDynamoDBWrapper(dynamoParams.DBDriverParameters{
Endpoint: os.Getenv("DYNAMODBMOCK_ENDPOINT"),
Region: "us-east-2",
RepoMetaTablename: "RepoMetadataTable",
ManifestDataTablename: "ManifestDataTable",
VersionTablename: "Version",
})
So(err, ShouldBeNil)
err = dynamoWrapper.ResetManifestDataTable()
So(err, ShouldBeNil)
err = dynamoWrapper.ResetRepoMetaTable()
So(err, ShouldBeNil)
err = repodb.SyncRepoDB(dynamoWrapper, storeController, log.NewLogger("debug", ""))
So(err, ShouldBeNil)
repos, err := dynamoWrapper.GetMultipleRepoMeta(
context.Background(),
func(repoMeta repodb.RepoMetadata) bool { return true },
repodb.PageInput{},
)
t.Logf("%#v", repos)
So(err, ShouldBeNil)
So(len(repos), ShouldEqual, 1)
So(len(repos[0].Tags), ShouldEqual, 2)
for _, descriptor := range repos[0].Tags {
manifestMeta, err := dynamoWrapper.GetManifestMeta(repo, godigest.Digest(descriptor.Digest))
So(err, ShouldBeNil)
So(manifestMeta.ManifestBlob, ShouldNotBeNil)
So(manifestMeta.ConfigBlob, ShouldNotBeNil)
if descriptor.Digest == signedManifestDigest.String() {
So(manifestMeta.Signatures, ShouldNotBeEmpty)
}
}
})
Convey("Ignore orphan signatures", t, func() {
rootDir := t.TempDir()
imageStore := local.NewImageStore(rootDir, false, 0, false, false,
log.NewLogger("debug", ""), monitoring.NewMetricsServer(false, log.NewLogger("debug", "")), nil, nil)
storeController := storage.StoreController{DefaultStore: imageStore}
// add an image
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: "tag1",
},
repo,
storeController)
So(err, ShouldBeNil)
// add mock cosign signature without pushing the signed image
_, _, manifest, err = test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
signatureTag, err := test.GetCosignSignatureTagForManifest(manifest)
So(err, ShouldBeNil)
// get the body of the signature
config, layers, manifest, err = test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(
test.Image{
Config: config,
Layers: layers,
Manifest: manifest,
Tag: signatureTag,
},
repo,
storeController)
So(err, ShouldBeNil)
// test that we have only 1 image inside the repo
repoDB, err := dynamo.NewDynamoDBWrapper(dynamoParams.DBDriverParameters{
Endpoint: os.Getenv("DYNAMODBMOCK_ENDPOINT"),
Region: "us-east-2",
RepoMetaTablename: "RepoMetadataTable",
ManifestDataTablename: "ManifestDataTable",
VersionTablename: "Version",
})
So(err, ShouldBeNil)
err = repodb.SyncRepoDB(repoDB, storeController, log.NewLogger("debug", ""))
So(err, ShouldBeNil)
repos, err := repoDB.GetMultipleRepoMeta(
context.Background(),
func(repoMeta repodb.RepoMetadata) bool { return true },
repodb.PageInput{},
)
So(err, ShouldBeNil)
t.Logf("%#v", repos)
So(len(repos), ShouldEqual, 1)
So(repos[0].Tags, ShouldContainKey, "tag1")
So(repos[0].Tags, ShouldNotContainKey, signatureTag)
})
}
func skipIt(t *testing.T) {
t.Helper()
if os.Getenv("S3MOCK_ENDPOINT") == "" {
t.Skip("Skipping testing without AWS S3 mock server")
}
}


@ -0,0 +1,205 @@
package update
import (
godigest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/log"
"zotregistry.io/zot/pkg/meta/repodb"
"zotregistry.io/zot/pkg/storage"
)
// OnUpdateManifest is called when a new manifest is added. It updates repodb according to the type
// of image pushed (normal images, signatures, etc.). In case of any errors, it makes sure to keep
// consistency between repodb and the image store.
func OnUpdateManifest(name, reference, mediaType string, digest godigest.Digest, body []byte,
storeController storage.StoreController, repoDB repodb.RepoDB, log log.Logger,
) error {
imgStore := storeController.GetImageStore(name)
// check if image is a signature
isSignature, signatureType, signedManifestDigest, err := storage.CheckIsImageSignature(name, body, reference,
storeController)
if err != nil {
if errors.Is(err, zerr.ErrOrphanSignature) {
log.Warn().Err(err).Msg("image has signature format but it doesn't sign any image")
return zerr.ErrOrphanSignature
}
log.Error().Err(err).Msg("can't check if image is a signature or not")
if err := imgStore.DeleteImageManifest(name, reference, false); err != nil {
log.Error().Err(err).Msgf("couldn't remove image manifest %s in repo %s", reference, name)
return err
}
return err
}
metadataSuccessfullySet := true
if isSignature {
err = repoDB.AddManifestSignature(name, signedManifestDigest, repodb.SignatureMetadata{
SignatureType: signatureType,
SignatureDigest: digest.String(),
})
if err != nil {
log.Error().Err(err).Msg("repodb: error while putting repo meta")
metadataSuccessfullySet = false
}
} else {
err := SetMetadataFromInput(name, reference, mediaType, digest, body,
storeController, repoDB, log)
if err != nil {
metadataSuccessfullySet = false
}
}
if !metadataSuccessfullySet {
log.Info().Msgf("uploding image meta was unsuccessful for tag %s in repo %s", reference, name)
if err := imgStore.DeleteImageManifest(name, reference, false); err != nil {
log.Error().Err(err).Msgf("couldn't remove image manifest %s in repo %s", reference, name)
return err
}
return err
}
return nil
}
// OnDeleteManifest is called when a manifest is deleted. It updates repodb according to the type
// of image pushed (normal images, signatures, etc.). In case of any errors, it makes sure to keep
// consistency between repodb and the image store.
func OnDeleteManifest(name, reference, mediaType string, digest godigest.Digest, manifestBlob []byte,
storeController storage.StoreController, repoDB repodb.RepoDB, log log.Logger,
) error {
imgStore := storeController.GetImageStore(name)
isSignature, signatureType, signedManifestDigest, err := storage.CheckIsImageSignature(name, manifestBlob,
reference, storeController)
if err != nil {
if errors.Is(err, zerr.ErrOrphanSignature) {
log.Warn().Err(err).Msg("image has signature format but it doesn't sign any image")
return zerr.ErrOrphanSignature
}
log.Error().Err(err).Msg("can't check if image is a signature or not")
return err
}
manageRepoMetaSuccessfully := true
if isSignature {
err = repoDB.DeleteSignature(name, signedManifestDigest, repodb.SignatureMetadata{
SignatureDigest: digest.String(),
SignatureType: signatureType,
})
if err != nil {
log.Error().Err(err).Msg("repodb: can't check if image is a signature or not")
manageRepoMetaSuccessfully = false
}
} else {
err = repoDB.DeleteRepoTag(name, reference)
if err != nil {
log.Info().Msg("repodb: restoring image store")
// restore image store
_, err := imgStore.PutImageManifest(name, reference, mediaType, manifestBlob)
if err != nil {
log.Error().Err(err).Msg("repodb: error while restoring image store, database is not consistent")
}
manageRepoMetaSuccessfully = false
}
}
if !manageRepoMetaSuccessfully {
log.Info().Msgf("repodb: deleting image meta was unsuccessful for tag %s in repo %s", reference, name)
return err
}
return nil
}
// OnGetManifest is called when a manifest is downloaded. It increments the download counter on that manifest.
func OnGetManifest(name, reference string, digest godigest.Digest, body []byte,
storeController storage.StoreController, repoDB repodb.RepoDB, log log.Logger,
) error {
// check if image is a signature
isSignature, _, _, err := storage.CheckIsImageSignature(name, body, reference,
storeController)
if err != nil {
if errors.Is(err, zerr.ErrOrphanSignature) {
log.Warn().Err(err).Msg("image has signature format but it doesn't sign any image")
return err
}
log.Error().Err(err).Msg("can't check if manifest is a signature or not")
return err
}
if !isSignature {
err := repoDB.IncrementImageDownloads(name, reference)
if err != nil {
log.Error().Err(err).Msg("unexpected error")
return err
}
}
return nil
}
// SetMetadataFromInput receives raw information about the manifest pushed and tries to set manifest metadata
// and update repo metadata by adding the current tag (in case the reference is a tag).
// The function expects an image manifest.
func SetMetadataFromInput(repo, reference, mediaType string, digest godigest.Digest, manifestBlob []byte,
storeController storage.StoreController, repoDB repodb.RepoDB, log log.Logger,
) error {
imageMetadata, err := repodb.NewManifestData(repo, manifestBlob, storeController)
if err != nil {
return err
}
err = repoDB.SetManifestMeta(repo, digest, repodb.ManifestMetadata{
ManifestBlob: imageMetadata.ManifestBlob,
ConfigBlob: imageMetadata.ConfigBlob,
DownloadCount: 0,
Signatures: repodb.ManifestSignatures{},
})
if err != nil {
log.Error().Err(err).Msg("repodb: error while putting image meta")
return err
}
if refferenceIsDigest(reference) {
return nil
}
err = repoDB.SetRepoTag(repo, reference, digest, mediaType)
if err != nil {
log.Error().Err(err).Msg("repodb: error while putting repo meta")
return err
}
return nil
}
func refferenceIsDigest(reference string) bool {
_, err := godigest.Parse(reference)
return err == nil
}
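These hooks are the glue between the manifest route handlers and RepoDB: OnUpdateManifest runs after a successful manifest PUT, OnDeleteManifest after a DELETE, and OnGetManifest after a GET. A hedged sketch of a caller follows; the wrapper functions and the package they sit in are hypothetical, and only the hook signatures defined above are relied on.

package routes // hypothetical caller package

import (
	godigest "github.com/opencontainers/go-digest"

	"zotregistry.io/zot/pkg/log"
	"zotregistry.io/zot/pkg/meta/repodb"
	repoDBUpdate "zotregistry.io/zot/pkg/meta/repodb/update"
	"zotregistry.io/zot/pkg/storage"
)

// afterManifestPush would be called once the manifest blob is persisted in the image store.
func afterManifestPush(repo, reference, mediaType string, body []byte,
	storeController storage.StoreController, repoDB repodb.RepoDB, logger log.Logger,
) error {
	digest := godigest.FromBytes(body)

	// Updates RepoDB, or rolls the manifest back out of the image store on failure.
	return repoDBUpdate.OnUpdateManifest(repo, reference, mediaType, digest, body,
		storeController, repoDB, logger)
}

// afterManifestPull would be called after a manifest GET is served.
func afterManifestPull(repo, reference string, body []byte,
	storeController storage.StoreController, repoDB repodb.RepoDB, logger log.Logger,
) error {
	digest := godigest.FromBytes(body)

	// Bumps the download counter unless the manifest is itself a signature.
	return repoDBUpdate.OnGetManifest(repo, reference, digest, body, storeController, repoDB, logger)
}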


@ -0,0 +1,185 @@
package update_test
import (
"encoding/json"
"errors"
"testing"
"time"
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
oras "github.com/oras-project/artifacts-spec/specs-go/v1"
. "github.com/smartystreets/goconvey/convey"
zerr "zotregistry.io/zot/errors"
"zotregistry.io/zot/pkg/extensions/monitoring"
"zotregistry.io/zot/pkg/log"
bolt_wrapper "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
repoDBUpdate "zotregistry.io/zot/pkg/meta/repodb/update"
"zotregistry.io/zot/pkg/storage"
"zotregistry.io/zot/pkg/storage/local"
"zotregistry.io/zot/pkg/test"
"zotregistry.io/zot/pkg/test/mocks"
)
var ErrTestError = errors.New("test error")
func TestOnUpdateManifest(t *testing.T) {
Convey("On UpdateManifest", t, func() {
rootDir := t.TempDir()
storeController := storage.StoreController{}
log := log.NewLogger("debug", "")
metrics := monitoring.NewMetricsServer(false, log)
storeController.DefaultStore = local.NewImageStore(rootDir, true, 1*time.Second,
true, true, log, metrics, nil, nil,
)
repoDB, err := bolt_wrapper.NewBoltDBWrapper(bolt_wrapper.DBParameters{
RootDir: rootDir,
})
So(err, ShouldBeNil)
config, layers, manifest, err := test.GetRandomImageComponents(100)
So(err, ShouldBeNil)
err = test.WriteImageToFileSystem(test.Image{Config: config, Manifest: manifest, Layers: layers, Tag: "tag1"},
"repo", storeController)
So(err, ShouldBeNil)
manifestBlob, err := json.Marshal(manifest)
So(err, ShouldBeNil)
digest := godigest.FromBytes(manifestBlob)
err = repoDBUpdate.OnUpdateManifest("repo", "tag1", "", digest, manifestBlob, storeController, repoDB, log)
So(err, ShouldBeNil)
repoMeta, err := repoDB.GetRepoMeta("repo")
So(err, ShouldBeNil)
So(repoMeta.Tags, ShouldContainKey, "tag1")
})
}
func TestUpdateErrors(t *testing.T) {
Convey("Update operations", t, func() {
Convey("On UpdateManifest", func() {
imageStore := mocks.MockedImageStore{}
storeController := storage.StoreController{DefaultStore: &imageStore}
repoDB := mocks.RepoDBMock{}
log := log.NewLogger("debug", "")
Convey("zerr.ErrOrphanSignature", func() {
manifestContent := oras.Manifest{
Subject: &oras.Descriptor{
Digest: "123",
},
}
manifestBlob, err := json.Marshal(manifestContent)
So(err, ShouldBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return []byte{}, "", "", zerr.ErrManifestNotFound
}
err = repoDBUpdate.OnUpdateManifest("repo", "tag1", "", "digest", manifestBlob,
storeController, repoDB, log)
So(err, ShouldNotBeNil)
})
})
Convey("On DeleteManifest", func() {
imageStore := mocks.MockedImageStore{}
storeController := storage.StoreController{DefaultStore: &imageStore}
repoDB := mocks.RepoDBMock{}
log := log.NewLogger("debug", "")
Convey("CheckIsImageSignature errors", func() {
manifestContent := oras.Manifest{
Subject: &oras.Descriptor{
Digest: "123",
},
}
manifestBlob, err := json.Marshal(manifestContent)
So(err, ShouldBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return []byte{}, "", "", zerr.ErrManifestNotFound
}
err = repoDBUpdate.OnDeleteManifest("repo", "tag1", "digest", "media", manifestBlob,
storeController, repoDB, log)
So(err, ShouldNotBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return []byte{}, "", "", ErrTestError
}
err = repoDBUpdate.OnDeleteManifest("repo", "tag1", "digest", "media", manifestBlob,
storeController, repoDB, log)
So(err, ShouldNotBeNil)
})
})
Convey("On GetManifest", func() {
imageStore := mocks.MockedImageStore{}
storeController := storage.StoreController{DefaultStore: &imageStore}
repoDB := mocks.RepoDBMock{}
log := log.NewLogger("debug", "")
Convey("CheckIsImageSignature errors", func() {
manifestContent := oras.Manifest{
Subject: &oras.Descriptor{
Digest: "123",
},
}
manifestBlob, err := json.Marshal(manifestContent)
So(err, ShouldBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return []byte{}, "", "", zerr.ErrManifestNotFound
}
err = repoDBUpdate.OnGetManifest("repo", "tag1", "digest", manifestBlob,
storeController, repoDB, log)
So(err, ShouldNotBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return []byte{}, "", "", ErrTestError
}
err = repoDBUpdate.OnGetManifest("repo", "tag1", "media", manifestBlob,
storeController, repoDB, log)
So(err, ShouldNotBeNil)
})
})
Convey("SetMetadataFromInput", func() {
imageStore := mocks.MockedImageStore{}
storeController := storage.StoreController{DefaultStore: &imageStore}
repoDB := mocks.RepoDBMock{}
log := log.NewLogger("debug", "")
err := repoDBUpdate.SetMetadataFromInput("repo", "ref", "digest", "", []byte("BadManifestBlob"),
storeController, repoDB, log)
So(err, ShouldNotBeNil)
// reference is digest
manifestContent := ispec.Manifest{}
manifestBlob, err := json.Marshal(manifestContent)
So(err, ShouldBeNil)
imageStore.GetImageManifestFn = func(repo, reference string) ([]byte, godigest.Digest, string, error) {
return manifestBlob, "", "", nil
}
imageStore.GetBlobContentFn = func(repo string, digest godigest.Digest) ([]byte, error) {
return []byte("{}"), nil
}
err = repoDBUpdate.SetMetadataFromInput("repo", string(godigest.FromString("reference")), "", "digest",
manifestBlob, storeController, repoDB, log)
So(err, ShouldBeNil)
})
})
}


@ -0,0 +1,31 @@
package version
const (
Version1 = "V1"
Version2 = "V2"
Version3 = "V3"
CurrentVersion = Version1
)
const (
versionV1Index = iota
versionV2Index
versionV3Index
)
const DBVersionKey = "DBVersion"
func GetVersionIndex(dbVersion string) int {
index, ok := map[string]int{
Version1: versionV1Index,
Version2: versionV2Index,
Version3: versionV3Index,
}[dbVersion]
if !ok {
return -1
}
return index
}
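GetVersionIndex turns a stored version string into an index so the wrappers can decide whether patches still need to run, with -1 flagging an empty or unknown version (which is why PatchDB fails on an empty DBVersion in the tests below). A hypothetical helper illustrating that check; the function name and its usage are assumptions, and only GetVersionIndex, Version1 and CurrentVersion come from this package.

package main

import (
	"fmt"

	"zotregistry.io/zot/pkg/meta/repodb/version"
)

// needsPatching reports whether a stored DB version is known and older than
// the version this binary writes (a hypothetical helper, not part of the PR).
func needsPatching(stored string) (bool, error) {
	storedIndex := version.GetVersionIndex(stored)
	if storedIndex == -1 {
		return false, fmt.Errorf("unknown or empty DB version %q", stored)
	}

	return storedIndex < version.GetVersionIndex(version.CurrentVersion), nil
}

func main() {
	ok, err := needsPatching(version.Version1)
	fmt.Println(ok, err) // prints "false <nil>" while CurrentVersion is still Version1
}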


@ -0,0 +1,14 @@
package version
import (
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"go.etcd.io/bbolt"
)
func GetBoltDBPatches() []func(DB *bbolt.DB) error {
return []func(DB *bbolt.DB) error{}
}
func GetDynamoDBPatches() []func(client *dynamodb.Client, tableNames map[string]string) error {
return []func(client *dynamodb.Client, tableNames map[string]string) error{}
}


@ -0,0 +1,194 @@
package version_test
import (
"context"
"errors"
"os"
"testing"
"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
"github.com/aws/aws-sdk-go-v2/service/dynamodb"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
"github.com/aws/aws-sdk-go/aws"
. "github.com/smartystreets/goconvey/convey"
"go.etcd.io/bbolt"
"zotregistry.io/zot/pkg/meta/repodb"
bolt "zotregistry.io/zot/pkg/meta/repodb/boltdb-wrapper"
dynamo "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper"
dynamoParams "zotregistry.io/zot/pkg/meta/repodb/dynamodb-wrapper/params"
"zotregistry.io/zot/pkg/meta/repodb/version"
)
var ErrTestError = errors.New("test error")
func TestVersioningBoltDB(t *testing.T) {
Convey("Tests", t, func() {
tmpDir := t.TempDir()
boltDBParams := bolt.DBParameters{RootDir: tmpDir}
boltdbWrapper, err := bolt.NewBoltDBWrapper(boltDBParams)
defer os.Remove("repo.db")
So(boltdbWrapper, ShouldNotBeNil)
So(err, ShouldBeNil)
boltdbWrapper.Patches = []func(DB *bbolt.DB) error{
func(DB *bbolt.DB) error {
return nil
},
}
Convey("success", func() {
boltdbWrapper.Patches = []func(DB *bbolt.DB) error{
func(DB *bbolt.DB) error { // V1 to V2
return nil
},
}
err := setBoltDBVersion(boltdbWrapper.DB, version.Version1)
So(err, ShouldBeNil)
err = boltdbWrapper.PatchDB()
So(err, ShouldBeNil)
})
Convey("DBVersion is empty", func() {
err := boltdbWrapper.DB.Update(func(tx *bbolt.Tx) error {
versionBuck := tx.Bucket([]byte(repodb.VersionBucket))
return versionBuck.Put([]byte(version.DBVersionKey), []byte(""))
})
So(err, ShouldBeNil)
err = boltdbWrapper.PatchDB()
So(err, ShouldNotBeNil)
})
Convey("iterate patches with skip", func() {
boltdbWrapper.Patches = []func(DB *bbolt.DB) error{
func(DB *bbolt.DB) error { // V1 to V2
return nil
},
func(DB *bbolt.DB) error { // V2 to V3
return nil
},
func(DB *bbolt.DB) error { // V3 to V4
return nil
},
}
err := setBoltDBVersion(boltdbWrapper.DB, version.Version1)
So(err, ShouldBeNil)
// we should skip the first patch
err = boltdbWrapper.PatchDB()
So(err, ShouldBeNil)
})
Convey("patch has error", func() {
boltdbWrapper.Patches = []func(DB *bbolt.DB) error{
func(DB *bbolt.DB) error { // V1 to V2
return ErrTestError
},
}
err = boltdbWrapper.PatchDB()
So(err, ShouldNotBeNil)
})
})
}
func setBoltDBVersion(db *bbolt.DB, vers string) error {
err := db.Update(func(tx *bbolt.Tx) error {
versionBuck := tx.Bucket([]byte(repodb.VersionBucket))
return versionBuck.Put([]byte(version.DBVersionKey), []byte(vers))
})
return err
}
func TestVersioningDynamoDB(t *testing.T) {
const (
endpoint = "http://localhost:4566"
region = "us-east-2"
)
Convey("Tests", t, func() {
dynamoWrapper, err := dynamo.NewDynamoDBWrapper(dynamoParams.DBDriverParameters{
Endpoint: endpoint,
Region: region,
RepoMetaTablename: "RepoMetadataTable",
ManifestDataTablename: "ManifestDataTable",
VersionTablename: "Version",
})
So(err, ShouldBeNil)
So(dynamoWrapper.ResetManifestDataTable(), ShouldBeNil)
So(dynamoWrapper.ResetRepoMetaTable(), ShouldBeNil)
Convey("DBVersion is empty", func() {
err := setDynamoDBVersion(dynamoWrapper.Client, "")
So(err, ShouldBeNil)
err = dynamoWrapper.PatchDB()
So(err, ShouldNotBeNil)
})
Convey("iterate patches with skip", func() {
dynamoWrapper.Patches = []func(client *dynamodb.Client, tableNames map[string]string) error{
func(client *dynamodb.Client, tableNames map[string]string) error { // V1 to V2
return nil
},
func(client *dynamodb.Client, tableNames map[string]string) error { // V2 to V3
return nil
},
func(client *dynamodb.Client, tableNames map[string]string) error { // V3 to V4
return nil
},
}
err := setDynamoDBVersion(dynamoWrapper.Client, version.Version1)
So(err, ShouldBeNil)
// we should skip the first patch
err = dynamoWrapper.PatchDB()
So(err, ShouldBeNil)
})
Convey("patch has error", func() {
dynamoWrapper.Patches = []func(client *dynamodb.Client, tableNames map[string]string) error{
func(client *dynamodb.Client, tableNames map[string]string) error { // V1 to V2
return ErrTestError
},
}
err = dynamoWrapper.PatchDB()
So(err, ShouldNotBeNil)
})
})
}
func setDynamoDBVersion(client *dynamodb.Client, vers string) error {
mdAttributeValue, err := attributevalue.Marshal(vers)
if err != nil {
return err
}
_, err = client.UpdateItem(context.TODO(), &dynamodb.UpdateItemInput{
ExpressionAttributeNames: map[string]string{
"#V": "Version",
},
ExpressionAttributeValues: map[string]types.AttributeValue{
":Version": mdAttributeValue,
},
Key: map[string]types.AttributeValue{
"VersionKey": &types.AttributeValueMemberS{
Value: version.DBVersionKey,
},
},
TableName: aws.String("Version"),
UpdateExpression: aws.String("SET #V = :Version"),
})
return err
}


@ -0,0 +1,28 @@
package requestcontext
import (
"context"
zerr "zotregistry.io/zot/errors"
)
func RepoIsUserAvailable(ctx context.Context, repoName string) (bool, error) {
authzCtxKey := GetContextKey()
if authCtx := ctx.Value(authzCtxKey); authCtx != nil {
acCtx, ok := authCtx.(AccessControlContext)
if !ok {
err := zerr.ErrBadCtxFormat
return false, err
}
if acCtx.IsAdmin || acCtx.CanReadRepo(repoName) {
return true, nil
}
return false, nil
}
return true, nil
}
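
A hedged sketch of how this check is typically consumed, not code from this diff: request handlers build a per-request filter so that RepoDB queries only return repositories the caller may read (it assumes the package is imported as localCtx and that repodb.RepoMetadata exposes the repository name as Name):

filterByAccess := func(repoMeta repodb.RepoMetadata) bool {
	available, err := localCtx.RepoIsUserAvailable(ctx, repoMeta.Name)

	return err == nil && available
}

repos, err := repoDB.GetMultipleRepoMeta(ctx, filterByAccess, requestedPage)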


@ -4,6 +4,7 @@ import (
"context"
glob "github.com/bmatcuk/doublestar/v4" //nolint:gci
"zotregistry.io/zot/errors"
)


@ -2,6 +2,7 @@ package cache
import (
"context"
+"strings"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/config"
@ -51,7 +52,7 @@ func (d *DynamoDBDriver) NewTable(tableName string) error {
WriteCapacityUnits: aws.Int64(5),
},
})
-if err != nil {
+if err != nil && !strings.Contains(err.Error(), "Table already exists") {
return err
}
@ -87,8 +88,15 @@ func NewDynamoDBCache(parameters interface{}, log zlog.Logger) Cache {
return nil
}
+driver := &DynamoDBDriver{client: dynamodb.NewFromConfig(cfg), tableName: properParameters.TableName, log: log}
+err = driver.NewTable(driver.tableName)
+if err != nil {
+log.Error().Err(err).Msgf("unable to create table for cache '%s'", driver.tableName)
+}
// Using the Config value, create the DynamoDB client
-return &DynamoDBDriver{client: dynamodb.NewFromConfig(cfg), tableName: properParameters.TableName, log: log}
+return driver
}
func (d *DynamoDBDriver) Name() string {

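The string match on "Table already exists" above works against localstack, but a typed check is more robust if the message ever changes; a possible alternative, sketched as a suggestion rather than what this change does (it relies on the standard errors package and the DynamoDB service's types package):

var alreadyExists *types.ResourceInUseException

if err != nil && !errors.As(err, &alreadyExists) {
	return err
}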

@ -8,6 +8,7 @@ import (
"strings"
"github.com/docker/distribution/registry/storage/driver"
+"github.com/gobwas/glob"
"github.com/notaryproject/notation-go"
godigest "github.com/opencontainers/go-digest"
imeta "github.com/opencontainers/image-spec/specs-go"
@ -664,3 +665,75 @@ func IsSupportedMediaType(mediaType string) bool {
mediaType == ispec.MediaTypeArtifactManifest ||
mediaType == oras.MediaTypeArtifactManifest
}
// CheckIsImageSignature checks if the given image (repo:tag) represents a signature. The function
// returns:
//
// - bool: whether the image is a signature or not
//
// - string: the type of signature
//
// - godigest.Digest: the digest of the image it signs
//
// - error: any errors that occur.
func CheckIsImageSignature(repoName string, manifestBlob []byte, reference string,
storeController StoreController,
) (bool, string, godigest.Digest, error) {
const cosign = "cosign"
var manifestContent oras.Manifest
err := json.Unmarshal(manifestBlob, &manifestContent)
if err != nil {
return false, "", "", err
}
// check notation signature
if manifestContent.Subject != nil {
imgStore := storeController.GetImageStore(repoName)
_, signedImageManifestDigest, _, err := imgStore.GetImageManifest(repoName,
manifestContent.Subject.Digest.String())
if err != nil {
if errors.Is(err, zerr.ErrManifestNotFound) {
return true, "notation", signedImageManifestDigest, zerr.ErrOrphanSignature
}
return false, "", "", err
}
return true, "notation", signedImageManifestDigest, nil
}
// check cosign
cosignTagRule := glob.MustCompile("sha256-*.sig")
if tag := reference; cosignTagRule.Match(reference) {
prefixLen := len("sha256-")
digestLen := 64
signedImageManifestDigestEncoded := tag[prefixLen : prefixLen+digestLen]
signedImageManifestDigest := godigest.NewDigestFromEncoded(godigest.SHA256,
signedImageManifestDigestEncoded)
imgStore := storeController.GetImageStore(repoName)
_, signedImageManifestDigest, _, err := imgStore.GetImageManifest(repoName,
signedImageManifestDigest.String())
if err != nil {
if errors.Is(err, zerr.ErrManifestNotFound) {
return true, cosign, signedImageManifestDigest, zerr.ErrOrphanSignature
}
return false, "", "", err
}
if signedImageManifestDigest.String() == "" {
return true, cosign, signedImageManifestDigest, zerr.ErrOrphanSignature
}
return true, cosign, signedImageManifestDigest, nil
}
return false, "", "", nil
}
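
A hedged sketch of how a caller might use this helper when recording a pushed manifest in RepoDB (illustration only, not taken from this diff; repoDB, manifestDigest and sigMeta are assumed to be in scope, with sigMeta standing for a repodb.SignatureMetadata built from the returned signature type):

isSignature, _, signedManifestDigest, err := CheckIsImageSignature(repo, manifestBlob, reference, storeController)
if err != nil {
	if errors.Is(err, zerr.ErrOrphanSignature) {
		// the signed manifest is gone; callers typically skip or clean up the entry
		return nil
	}

	return err
}

if isSignature {
	// store it as a signature of the signed manifest rather than as a regular tag
	return repoDB.AddManifestSignature(repo, signedManifestDigest, sigMeta)
}

// otherwise record it as a normal tag
return repoDB.SetRepoTag(repo, reference, manifestDigest, ispec.MediaTypeImageManifest)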


@ -1023,7 +1023,7 @@ func (is *ObjectStorage) checkCacheBlob(digest godigest.Digest) (string, error)
return dstRecord, nil
}
-func (is *ObjectStorage) copyBlob(repo string, blobPath string, dstRecord string) (int64, error) {
+func (is *ObjectStorage) copyBlob(repo string, blobPath, dstRecord string) (int64, error) {
if err := is.initRepo(repo); err != nil {
is.log.Error().Err(err).Str("repo", repo).Msg("unable to initialize an empty repo")


@ -706,7 +706,7 @@ func TestNegativeCasesObjectsStorage(t *testing.T) {
controller := api.NewController(conf)
So(controller, ShouldNotBeNil)
-err = controller.InitImageStore(context.TODO())
+err = controller.InitImageStore(context.Background())
So(err, ShouldBeNil)
})


@ -1,6 +1,7 @@
package test
import (
+"bytes"
"context"
"crypto/rand"
"encoding/json"
@ -12,16 +13,22 @@ import (
"net/http"
"net/url"
"os"
+"os/exec"
"path"
"strings"
"time"
godigest "github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go"
-imagespec "github.com/opencontainers/image-spec/specs-go/v1"
+ispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/umoci"
"github.com/phayes/freeport"
+"github.com/sigstore/cosign/cmd/cosign/cli/generate"
+"github.com/sigstore/cosign/cmd/cosign/cli/options"
+"github.com/sigstore/cosign/cmd/cosign/cli/sign"
"gopkg.in/resty.v1"
+"zotregistry.io/zot/pkg/storage"
)
const (
@ -59,8 +66,8 @@
)
type Image struct {
-Manifest imagespec.Manifest
+Manifest ispec.Manifest
-Config imagespec.Image
+Config ispec.Image
Layers [][]byte
Tag string
}
@ -219,6 +226,50 @@ func NewControllerManager(controller Controller) ControllerManager {
return cm
}
func WriteImageToFileSystem(image Image, repoName string, storeController storage.StoreController) error {
store := storeController.GetImageStore(repoName)
err := store.InitRepo(repoName)
if err != nil {
return err
}
for _, layerBlob := range image.Layers {
layerReader := bytes.NewReader(layerBlob)
layerDigest := godigest.FromBytes(layerBlob)
_, _, err = store.FullBlobUpload(repoName, layerReader, layerDigest)
if err != nil {
return err
}
}
configBlob, err := json.Marshal(image.Config)
if err != nil {
return err
}
configReader := bytes.NewReader(configBlob)
configDigest := godigest.FromBytes(configBlob)
_, _, err = store.FullBlobUpload(repoName, configReader, configDigest)
if err != nil {
return err
}
manifestBlob, err := json.Marshal(image.Manifest)
if err != nil {
return err
}
_, err = store.PutImageManifest(repoName, image.Tag, ispec.MediaTypeImageManifest, manifestBlob)
if err != nil {
return err
}
return nil
}
func WaitTillServerReady(url string) {
for {
_, err := resty.R().Get(url)
@ -241,7 +292,7 @@ func WaitTillTrivyDBDownloadStarted(rootDir string) {
}
// Adapted from https://gist.github.com/dopey/c69559607800d2f2f90b1b1ed4e550fb
-func randomString(n int) string {
+func RandomString(n int) string {
const letters = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-"
ret := make([]byte, n)
@ -261,14 +312,14 @@ func randomString(n int) string {
func GetRandomImageConfig() ([]byte, godigest.Digest) {
const maxLen = 16
-randomAuthor := randomString(maxLen)
+randomAuthor := RandomString(maxLen)
-config := imagespec.Image{
+config := ispec.Image{
-Platform: imagespec.Platform{
+Platform: ispec.Platform{
Architecture: "amd64",
OS: "linux",
},
-RootFS: imagespec.RootFS{
+RootFS: ispec.RootFS{
Type: "layers",
DiffIDs: []godigest.Digest{},
},
@ -286,7 +337,7 @@ func GetRandomImageConfig() ([]byte, godigest.Digest) {
}
func GetEmptyImageConfig() ([]byte, godigest.Digest) {
-config := imagespec.Image{}
+config := ispec.Image{}
configBlobContent, err := json.MarshalIndent(&config, "", "\t")
if err != nil {
@ -299,12 +350,12 @@ func GetEmptyImageConfig() ([]byte, godigest.Digest) {
}
func GetImageConfig() ([]byte, godigest.Digest) {
-config := imagespec.Image{
+config := ispec.Image{
-Platform: imagespec.Platform{
+Platform: ispec.Platform{
Architecture: "amd64",
OS: "linux",
},
-RootFS: imagespec.RootFS{
+RootFS: ispec.RootFS{
Type: "layers",
DiffIDs: []godigest.Digest{},
},
@ -355,7 +406,7 @@ func GetOciLayoutDigests(imagePath string) (godigest.Digest, godigest.Digest, go
panic(err)
}
-var manifest imagespec.Manifest
+var manifest ispec.Manifest
err = json.Unmarshal(manifestBuf, &manifest)
if err != nil {
@ -372,13 +423,13 @@ func GetOciLayoutDigests(imagePath string) (godigest.Digest, godigest.Digest, go
return manifestDigest, configDigest, layerDigest
}
-func GetImageComponents(layerSize int) (imagespec.Image, [][]byte, imagespec.Manifest, error) {
+func GetImageComponents(layerSize int) (ispec.Image, [][]byte, ispec.Manifest, error) {
-config := imagespec.Image{
+config := ispec.Image{
-Platform: imagespec.Platform{
+Platform: ispec.Platform{
Architecture: "amd64",
OS: "linux",
},
-RootFS: imagespec.RootFS{
+RootFS: ispec.RootFS{
Type: "layers",
DiffIDs: []godigest.Digest{},
},
@ -387,7 +438,7 @@ func GetImageComponents(layerSize int) (imagespec.Image, [][]byte, imagespec.Man
configBlob, err := json.Marshal(config)
if err = Error(err); err != nil {
-return imagespec.Image{}, [][]byte{}, imagespec.Manifest{}, err
+return ispec.Image{}, [][]byte{}, ispec.Manifest{}, err
}
configDigest := godigest.FromBytes(configBlob)
@ -398,16 +449,16 @@ func GetImageComponents(layerSize int) (imagespec.Image, [][]byte, imagespec.Man
schemaVersion := 2
-manifest := imagespec.Manifest{
+manifest := ispec.Manifest{
Versioned: specs.Versioned{
SchemaVersion: schemaVersion,
},
-Config: imagespec.Descriptor{
+Config: ispec.Descriptor{
MediaType: "application/vnd.oci.image.config.v1+json",
Digest: configDigest,
Size: int64(len(configBlob)),
},
-Layers: []imagespec.Descriptor{
+Layers: []ispec.Descriptor{
{
MediaType: "application/vnd.oci.image.layer.v1.tar",
Digest: godigest.FromBytes(layers[0]),
@ -419,6 +470,118 @@ func GetImageComponents(layerSize int) (imagespec.Image, [][]byte, imagespec.Man
return config, layers, manifest, nil
}
func GetRandomImageComponents(layerSize int) (ispec.Image, [][]byte, ispec.Manifest, error) {
config := ispec.Image{
Platform: ispec.Platform{
Architecture: "amd64",
OS: "linux",
},
RootFS: ispec.RootFS{
Type: "layers",
DiffIDs: []godigest.Digest{},
},
Author: "ZotUser",
}
configBlob, err := json.Marshal(config)
if err = Error(err); err != nil {
return ispec.Image{}, [][]byte{}, ispec.Manifest{}, err
}
configDigest := godigest.FromBytes(configBlob)
layer := make([]byte, layerSize)
_, err = rand.Read(layer)
if err != nil {
return ispec.Image{}, [][]byte{}, ispec.Manifest{}, err
}
layers := [][]byte{
layer,
}
schemaVersion := 2
manifest := ispec.Manifest{
Versioned: specs.Versioned{
SchemaVersion: schemaVersion,
},
Config: ispec.Descriptor{
MediaType: "application/vnd.oci.image.config.v1+json",
Digest: configDigest,
Size: int64(len(configBlob)),
},
Layers: []ispec.Descriptor{
{
MediaType: "application/vnd.oci.image.layer.v1.tar",
Digest: godigest.FromBytes(layers[0]),
Size: int64(len(layers[0])),
},
},
}
return config, layers, manifest, nil
}
func GetImageWithConfig(conf ispec.Image) (ispec.Image, [][]byte, ispec.Manifest, error) {
configBlob, err := json.Marshal(conf)
if err = Error(err); err != nil {
return ispec.Image{}, [][]byte{}, ispec.Manifest{}, err
}
configDigest := godigest.FromBytes(configBlob)
layerSize := 100
layer := make([]byte, layerSize)
_, err = rand.Read(layer)
if err != nil {
return ispec.Image{}, [][]byte{}, ispec.Manifest{}, err
}
layers := [][]byte{
layer,
}
schemaVersion := 2
manifest := ispec.Manifest{
Versioned: specs.Versioned{
SchemaVersion: schemaVersion,
},
Config: ispec.Descriptor{
MediaType: "application/vnd.oci.image.config.v1+json",
Digest: configDigest,
Size: int64(len(configBlob)),
},
Layers: []ispec.Descriptor{
{
MediaType: "application/vnd.oci.image.layer.v1.tar",
Digest: godigest.FromBytes(layers[0]),
Size: int64(len(layers[0])),
},
},
}
return conf, layers, manifest, nil
}
func GetCosignSignatureTagForManifest(manifest ispec.Manifest) (string, error) {
manifestBlob, err := json.Marshal(manifest)
if err != nil {
return "", err
}
manifestDigest := godigest.FromBytes(manifestBlob)
return GetCosignSignatureTagForDigest(manifestDigest), nil
}
func GetCosignSignatureTagForDigest(manifestDigest godigest.Digest) string {
return manifestDigest.Algorithm().String() + "-" + manifestDigest.Encoded() + ".sig"
}
func UploadImage(img Image, baseURL, repo string) error {
for _, blob := range img.Layers {
resp, err := resty.R().Post(baseURL + "/v2/" + repo + "/blobs/uploads/")
@ -463,7 +626,7 @@ func UploadImage(img Image, baseURL, repo string) error {
return err
}
-if ErrStatusCode(resp.StatusCode()) != http.StatusAccepted && ErrStatusCode(resp.StatusCode()) == -1 {
+if ErrStatusCode(resp.StatusCode()) != http.StatusAccepted || ErrStatusCode(resp.StatusCode()) == -1 {
return ErrPostBlob
}
@ -480,7 +643,7 @@ func UploadImage(img Image, baseURL, repo string) error {
return err
}
-if ErrStatusCode(resp.StatusCode()) != http.StatusCreated && ErrStatusCode(resp.StatusCode()) == -1 {
+if ErrStatusCode(resp.StatusCode()) != http.StatusCreated || ErrStatusCode(resp.StatusCode()) == -1 {
return ErrPostBlob
}
@ -498,7 +661,7 @@ func UploadImage(img Image, baseURL, repo string) error {
return err
}
-func UploadArtifact(baseURL, repo string, artifactManifest *imagespec.Artifact) error {
+func UploadArtifact(baseURL, repo string, artifactManifest *ispec.Artifact) error {
// put manifest
artifactManifestBlob, err := json.Marshal(artifactManifest)
if err != nil {
@ -508,7 +671,7 @@ func UploadArtifact(baseURL, repo string, artifactManifest *imagespec.Artifact)
artifactManifestDigest := godigest.FromBytes(artifactManifestBlob)
_, err = resty.R().
-SetHeader("Content-type", imagespec.MediaTypeArtifactManifest).
+SetHeader("Content-type", ispec.MediaTypeArtifactManifest).
SetBody(artifactManifestBlob).
Put(baseURL + "/v2/" + repo + "/manifests/" + artifactManifestDigest.String())
@ -567,3 +730,164 @@ func ReadLogFileAndSearchString(logPath string, stringToMatch string, timeout ti
}
}
}
func UploadImageWithBasicAuth(img Image, baseURL, repo, user, password string) error {
for _, blob := range img.Layers {
resp, err := resty.R().
SetBasicAuth(user, password).
Post(baseURL + "/v2/" + repo + "/blobs/uploads/")
if err != nil {
return err
}
if resp.StatusCode() != http.StatusAccepted {
return ErrPostBlob
}
loc := resp.Header().Get("Location")
digest := godigest.FromBytes(blob).String()
resp, err = resty.R().
SetBasicAuth(user, password).
SetHeader("Content-Length", fmt.Sprintf("%d", len(blob))).
SetHeader("Content-Type", "application/octet-stream").
SetQueryParam("digest", digest).
SetBody(blob).
Put(baseURL + loc)
if err != nil {
return err
}
if resp.StatusCode() != http.StatusCreated {
return ErrPutBlob
}
}
// upload config
cblob, err := json.Marshal(img.Config)
if err = Error(err); err != nil {
return err
}
cdigest := godigest.FromBytes(cblob)
resp, err := resty.R().
SetBasicAuth(user, password).
Post(baseURL + "/v2/" + repo + "/blobs/uploads/")
if err = Error(err); err != nil {
return err
}
if ErrStatusCode(resp.StatusCode()) != http.StatusAccepted || ErrStatusCode(resp.StatusCode()) == -1 {
return ErrPostBlob
}
loc := Location(baseURL, resp)
// uploading blob should get 201
resp, err = resty.R().
SetBasicAuth(user, password).
SetHeader("Content-Length", fmt.Sprintf("%d", len(cblob))).
SetHeader("Content-Type", "application/octet-stream").
SetQueryParam("digest", cdigest.String()).
SetBody(cblob).
Put(loc)
if err = Error(err); err != nil {
return err
}
if ErrStatusCode(resp.StatusCode()) != http.StatusCreated || ErrStatusCode(resp.StatusCode()) == -1 {
return ErrPostBlob
}
// put manifest
manifestBlob, err := json.Marshal(img.Manifest)
if err = Error(err); err != nil {
return err
}
_, err = resty.R().
SetBasicAuth(user, password).
SetHeader("Content-type", "application/vnd.oci.image.manifest.v1+json").
SetBody(manifestBlob).
Put(baseURL + "/v2/" + repo + "/manifests/" + img.Tag)
return err
}
func SignImageUsingCosign(repoTag, port string) error {
cwd, err := os.Getwd()
if err != nil {
return err
}
defer func() { _ = os.Chdir(cwd) }()
tdir, err := os.MkdirTemp("", "cosign")
if err != nil {
return err
}
defer os.RemoveAll(tdir)
_ = os.Chdir(tdir)
// generate a keypair
os.Setenv("COSIGN_PASSWORD", "")
err = generate.GenerateKeyPairCmd(context.TODO(), "", nil)
if err != nil {
return err
}
imageURL := fmt.Sprintf("localhost:%s/%s", port, repoTag)
// sign the image
return sign.SignCmd(&options.RootOptions{Verbose: true, Timeout: 1 * time.Minute},
options.KeyOpts{KeyRef: path.Join(tdir, "cosign.key"), PassFunc: generate.GetPass},
options.RegistryOptions{AllowInsecure: true},
map[string]interface{}{"tag": "1.0"},
[]string{imageURL},
"", "", true, "", "", "", false, false, "", true)
}
func SignImageUsingNotary(repoTag, port string) error {
cwd, err := os.Getwd()
if err != nil {
return err
}
defer func() { _ = os.Chdir(cwd) }()
tdir, err := os.MkdirTemp("", "notation")
if err != nil {
return err
}
defer os.RemoveAll(tdir)
_ = os.Chdir(tdir)
_, err = exec.LookPath("notation")
if err != nil {
return err
}
os.Setenv("XDG_CONFIG_HOME", tdir)
// generate a keypair
cmd := exec.Command("notation", "cert", "generate-test", "--trust", "notation-sign-test")
err = cmd.Run()
if err != nil {
return err
}
// sign the image
image := fmt.Sprintf("localhost:%s/%s", port, repoTag)
cmd = exec.Command("notation", "sign", "--key", "notation-sign-test", "--plain-http", image)
return cmd.Run()
}
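
Taken together, the helpers added in this file let a test build a throwaway image and publish it either straight to storage or over HTTP. A hedged usage sketch (not part of this diff; storeController, baseURL, user and password are assumed to already exist in the test):

config, layers, manifest, err := test.GetRandomImageComponents(100)
if err != nil {
	t.Fatal(err)
}

image := test.Image{Config: config, Layers: layers, Manifest: manifest, Tag: "0.0.1"}

// write the image directly into the store, bypassing the HTTP API
if err := test.WriteImageToFileSystem(image, "repo", storeController); err != nil {
	t.Fatal(err)
}

// or push the same image through the registry API with basic auth
if err := test.UploadImageWithBasicAuth(image, baseURL, "repo", user, password); err != nil {
	t.Fatal(err)
}

// the cosign signature for this manifest would be pushed under this tag
sigTag, err := test.GetCosignSignatureTagForManifest(manifest)
if err != nil {
	t.Fatal(err)
}
_ = sigTag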


@ -6,6 +6,7 @@ package test_test
import (
"context"
"encoding/json"
+"fmt"
"os"
"path"
"testing"
@ -14,6 +15,7 @@ import (
godigest "github.com/opencontainers/go-digest"
ispec "github.com/opencontainers/image-spec/specs-go/v1"
. "github.com/smartystreets/goconvey/convey"
+"golang.org/x/crypto/bcrypt"
"zotregistry.io/zot/pkg/api"
"zotregistry.io/zot/pkg/api/config"
@ -387,6 +389,78 @@ func TestUploadImage(t *testing.T) {
So(err, ShouldBeNil)
})
Convey("Upload image with authentication", t, func() {
tempDir := t.TempDir()
conf := config.New()
port := test.GetFreePort()
baseURL := test.GetBaseURL(port)
user1 := "test"
password1 := "test"
testString1 := getCredString(user1, password1)
htpasswdPath := test.MakeHtpasswdFileFromString(testString1)
defer os.Remove(htpasswdPath)
conf.HTTP.Auth = &config.AuthConfig{
HTPasswd: config.AuthHTPasswd{
Path: htpasswdPath,
},
}
conf.HTTP.Port = port
conf.AccessControl = &config.AccessControlConfig{
Repositories: config.Repositories{
"repo": config.PolicyGroup{
Policies: []config.Policy{
{
Users: []string{user1},
Actions: []string{"read", "create"},
},
},
DefaultPolicy: []string{},
},
"inaccessibleRepo": config.PolicyGroup{
Policies: []config.Policy{
{
Users: []string{user1},
Actions: []string{"create"},
},
},
DefaultPolicy: []string{},
},
},
AdminPolicy: config.Policy{
Users: []string{},
Actions: []string{},
},
}
ctlr := api.NewController(conf)
ctlr.Config.Storage.RootDirectory = tempDir
go startServer(ctlr)
defer stopServer(ctlr)
test.WaitTillServerReady(baseURL)
Convey("Request fail while pushing layer", func() {
err := test.UploadImageWithBasicAuth(test.Image{Layers: [][]byte{{1, 2, 3}}}, "badURL", "", "", "")
So(err, ShouldNotBeNil)
})
Convey("Request status is not StatusOk while pushing layer", func() {
err := test.UploadImageWithBasicAuth(test.Image{Layers: [][]byte{{1, 2, 3}}}, baseURL, "repo", "", "")
So(err, ShouldNotBeNil)
})
Convey("Request fail while pushing config", func() {
err := test.UploadImageWithBasicAuth(test.Image{}, "badURL", "", "", "")
So(err, ShouldNotBeNil)
})
Convey("Request status is not StatusOk while pushing config", func() {
err := test.UploadImageWithBasicAuth(test.Image{}, baseURL, "repo", "", "")
So(err, ShouldNotBeNil)
})
})
Convey("Blob upload wrong response status code", t, func() {
port := test.GetFreePort()
baseURL := test.GetBaseURL(port)
@ -481,6 +555,17 @@
})
}
func getCredString(username, password string) string {
hash, err := bcrypt.GenerateFromPassword([]byte(password), 10)
if err != nil {
panic(err)
}
usernameAndHash := fmt.Sprintf("%s:%s", username, string(hash))
return usernameAndHash
}
func TestInjectUploadImage(t *testing.T) {
Convey("Inject failures for unreachable lines", t, func() {
port := test.GetFreePort()
@ -566,6 +651,81 @@ func TestReadLogFileAndSearchString(t *testing.T) {
})
}
func TestInjectUploadImageWithBasicAuth(t *testing.T) {
Convey("Inject failures for unreachable lines", t, func() {
port := test.GetFreePort()
baseURL := test.GetBaseURL(port)
tempDir := t.TempDir()
conf := config.New()
conf.HTTP.Port = port
conf.Storage.RootDirectory = tempDir
user := "user"
password := "password"
testString := getCredString(user, password)
htpasswdPath := test.MakeHtpasswdFileFromString(testString)
defer os.Remove(htpasswdPath)
conf.HTTP.Auth = &config.AuthConfig{
HTPasswd: config.AuthHTPasswd{
Path: htpasswdPath,
},
}
ctlr := api.NewController(conf)
go startServer(ctlr)
defer stopServer(ctlr)
test.WaitTillServerReady(baseURL)
layerBlob := []byte("test")
layerPath := path.Join(tempDir, "test", ".uploads")
if _, err := os.Stat(layerPath); os.IsNotExist(err) {
err = os.MkdirAll(layerPath, 0o700)
if err != nil {
t.Fatal(err)
}
}
img := test.Image{
Layers: [][]byte{
layerBlob,
}, // invalid format that will result in an error
Config: ispec.Image{},
}
Convey("first marshal", func() {
injected := test.InjectFailure(0)
if injected {
err := test.UploadImageWithBasicAuth(img, baseURL, "test", "user", "password")
So(err, ShouldNotBeNil)
}
})
Convey("CreateBlobUpload POST call", func() {
injected := test.InjectFailure(1)
if injected {
err := test.UploadImageWithBasicAuth(img, baseURL, "test", "user", "password")
So(err, ShouldNotBeNil)
}
})
Convey("UpdateBlobUpload PUT call", func() {
injected := test.InjectFailure(3)
if injected {
err := test.UploadImageWithBasicAuth(img, baseURL, "test", "user", "password")
So(err, ShouldNotBeNil)
}
})
Convey("second marshal", func() {
injected := test.InjectFailure(5)
if injected {
err := test.UploadImageWithBasicAuth(img, baseURL, "test", "user", "password")
So(err, ShouldNotBeNil)
}
})
})
}
func startServer(c *api.Controller) {
// this blocks
ctx := context.Background()


@ -14,7 +14,6 @@ type OciLayoutUtilsMock struct {
GetImageInfoFn func(repo string, digest godigest.Digest) (ispec.Image, error)
GetImageTagsWithTimestampFn func(repo string) ([]common.TagInfo, error)
GetImagePlatformFn func(imageInfo ispec.Image) (string, string)
-GetImageVendorFn func(imageInfo ispec.Image) string
GetImageManifestSizeFn func(repo string, manifestDigest godigest.Digest) int64
GetImageConfigSizeFn func(repo string, manifestDigest godigest.Digest) int64
GetRepoLastUpdatedFn func(repo string) (common.TagInfo, error)
@ -81,14 +80,6 @@ func (olum OciLayoutUtilsMock) GetImagePlatform(imageInfo ispec.Image) (string,
return "", ""
}
-func (olum OciLayoutUtilsMock) GetImageVendor(imageInfo ispec.Image) string {
-if olum.GetImageVendorFn != nil {
-return olum.GetImageVendorFn(imageInfo)
-}
-return ""
-}
func (olum OciLayoutUtilsMock) GetImageManifestSize(repo string, manifestDigest godigest.Digest) int64 {
if olum.GetImageManifestSizeFn != nil {
return olum.GetImageManifestSizeFn(repo, manifestDigest)


@ -0,0 +1,263 @@
package mocks
import (
"context"
godigest "github.com/opencontainers/go-digest"
"zotregistry.io/zot/pkg/meta/repodb"
)
type RepoDBMock struct {
SetRepoDescriptionFn func(repo, description string) error
IncrementRepoStarsFn func(repo string) error
DecrementRepoStarsFn func(repo string) error
GetRepoStarsFn func(repo string) (int, error)
SetRepoLogoFn func(repo string, logoPath string) error
SetRepoTagFn func(repo string, tag string, manifestDigest godigest.Digest, mediaType string) error
DeleteRepoTagFn func(repo string, tag string) error
GetRepoMetaFn func(repo string) (repodb.RepoMetadata, error)
GetMultipleRepoMetaFn func(ctx context.Context, filter func(repoMeta repodb.RepoMetadata) bool,
requestedPage repodb.PageInput) ([]repodb.RepoMetadata, error)
GetManifestDataFn func(manifestDigest godigest.Digest) (repodb.ManifestData, error)
SetManifestDataFn func(manifestDigest godigest.Digest, mm repodb.ManifestData) error
GetManifestMetaFn func(repo string, manifestDigest godigest.Digest) (repodb.ManifestMetadata, error)
SetManifestMetaFn func(repo string, manifestDigest godigest.Digest, mm repodb.ManifestMetadata) error
IncrementImageDownloadsFn func(repo string, reference string) error
AddManifestSignatureFn func(repo string, signedManifestDigest godigest.Digest, sm repodb.SignatureMetadata) error
DeleteSignatureFn func(repo string, signedManifestDigest godigest.Digest, sm repodb.SignatureMetadata) error
SearchReposFn func(ctx context.Context, searchText string, filter repodb.Filter, requestedPage repodb.PageInput) (
[]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error)
SearchTagsFn func(ctx context.Context, searchText string, filter repodb.Filter, requestedPage repodb.PageInput) (
[]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error)
SearchDigestsFn func(ctx context.Context, searchText string, requestedPage repodb.PageInput) (
[]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error)
SearchLayersFn func(ctx context.Context, searchText string, requestedPage repodb.PageInput) (
[]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error)
SearchForAscendantImagesFn func(ctx context.Context, searchText string, requestedPage repodb.PageInput) (
[]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error)
SearchForDescendantImagesFn func(ctx context.Context, searchText string, requestedPage repodb.PageInput) (
[]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error)
PatchDBFn func() error
}
func (sdm RepoDBMock) SetRepoDescription(repo, description string) error {
if sdm.SetRepoDescriptionFn != nil {
return sdm.SetRepoDescriptionFn(repo, description)
}
return nil
}
func (sdm RepoDBMock) IncrementRepoStars(repo string) error {
if sdm.IncrementRepoStarsFn != nil {
return sdm.IncrementRepoStarsFn(repo)
}
return nil
}
func (sdm RepoDBMock) DecrementRepoStars(repo string) error {
if sdm.DecrementRepoStarsFn != nil {
return sdm.DecrementRepoStarsFn(repo)
}
return nil
}
func (sdm RepoDBMock) GetRepoStars(repo string) (int, error) {
if sdm.GetRepoStarsFn != nil {
return sdm.GetRepoStarsFn(repo)
}
return 0, nil
}
func (sdm RepoDBMock) SetRepoLogo(repo string, logoPath string) error {
if sdm.SetRepoLogoFn != nil {
return sdm.SetRepoLogoFn(repo, logoPath)
}
return nil
}
func (sdm RepoDBMock) SetRepoTag(repo string, tag string, manifestDigest godigest.Digest, mediaType string) error {
if sdm.SetRepoTagFn != nil {
return sdm.SetRepoTagFn(repo, tag, manifestDigest, mediaType)
}
return nil
}
func (sdm RepoDBMock) DeleteRepoTag(repo string, tag string) error {
if sdm.DeleteRepoTagFn != nil {
return sdm.DeleteRepoTagFn(repo, tag)
}
return nil
}
func (sdm RepoDBMock) GetRepoMeta(repo string) (repodb.RepoMetadata, error) {
if sdm.GetRepoMetaFn != nil {
return sdm.GetRepoMetaFn(repo)
}
return repodb.RepoMetadata{}, nil
}
func (sdm RepoDBMock) GetMultipleRepoMeta(ctx context.Context, filter func(repoMeta repodb.RepoMetadata) bool,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, error) {
if sdm.GetMultipleRepoMetaFn != nil {
return sdm.GetMultipleRepoMetaFn(ctx, filter, requestedPage)
}
return []repodb.RepoMetadata{}, nil
}
func (sdm RepoDBMock) GetManifestData(manifestDigest godigest.Digest) (repodb.ManifestData, error) {
if sdm.GetManifestDataFn != nil {
return sdm.GetManifestDataFn(manifestDigest)
}
return repodb.ManifestData{}, nil
}
func (sdm RepoDBMock) SetManifestData(manifestDigest godigest.Digest, md repodb.ManifestData) error {
if sdm.SetManifestDataFn != nil {
return sdm.SetManifestDataFn(manifestDigest, md)
}
return nil
}
func (sdm RepoDBMock) GetManifestMeta(repo string, manifestDigest godigest.Digest) (repodb.ManifestMetadata, error) {
if sdm.GetManifestMetaFn != nil {
return sdm.GetManifestMetaFn(repo, manifestDigest)
}
return repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) SetManifestMeta(repo string, manifestDigest godigest.Digest, mm repodb.ManifestMetadata) error {
if sdm.SetManifestMetaFn != nil {
return sdm.SetManifestMetaFn(repo, manifestDigest, mm)
}
return nil
}
func (sdm RepoDBMock) IncrementImageDownloads(repo string, reference string) error {
if sdm.IncrementImageDownloadsFn != nil {
return sdm.IncrementImageDownloadsFn(repo, reference)
}
return nil
}
func (sdm RepoDBMock) AddManifestSignature(repo string, signedManifestDigest godigest.Digest,
sm repodb.SignatureMetadata,
) error {
if sdm.AddManifestSignatureFn != nil {
return sdm.AddManifestSignatureFn(repo, signedManifestDigest, sm)
}
return nil
}
func (sdm RepoDBMock) DeleteSignature(repo string, signedManifestDigest godigest.Digest,
sm repodb.SignatureMetadata,
) error {
if sdm.DeleteSignatureFn != nil {
return sdm.DeleteSignatureFn(repo, signedManifestDigest, sm)
}
return nil
}
func (sdm RepoDBMock) SearchRepos(ctx context.Context, searchText string, filter repodb.Filter,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
if sdm.SearchReposFn != nil {
return sdm.SearchReposFn(ctx, searchText, filter, requestedPage)
}
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) SearchTags(ctx context.Context, searchText string, filter repodb.Filter,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
if sdm.SearchTagsFn != nil {
return sdm.SearchTagsFn(ctx, searchText, filter, requestedPage)
}
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) SearchDigests(ctx context.Context, searchText string, requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
if sdm.SearchDigestsFn != nil {
return sdm.SearchDigestsFn(ctx, searchText, requestedPage)
}
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) SearchLayers(ctx context.Context, searchText string, requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
if sdm.SearchLayersFn != nil {
return sdm.SearchLayersFn(ctx, searchText, requestedPage)
}
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) SearchForAscendantImages(ctx context.Context, searchText string, requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
if sdm.SearchForAscendantImagesFn != nil {
return sdm.SearchForAscendantImagesFn(ctx, searchText, requestedPage)
}
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) SearchForDescendantImages(ctx context.Context, searchText string,
requestedPage repodb.PageInput,
) ([]repodb.RepoMetadata, map[string]repodb.ManifestMetadata, error) {
if sdm.SearchForDescendantImagesFn != nil {
return sdm.SearchForDescendantImagesFn(ctx, searchText, requestedPage)
}
return []repodb.RepoMetadata{}, map[string]repodb.ManifestMetadata{}, nil
}
func (sdm RepoDBMock) PatchDB() error {
if sdm.PatchDBFn != nil {
return sdm.PatchDBFn()
}
return nil
}
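
A hedged example of how the mock above is meant to be used (illustration only; ErrTestError stands in for any sentinel error defined by the test): override only the call under test and let every other method fall back to its zero-value return.

mockRepoDB := mocks.RepoDBMock{
	GetRepoMetaFn: func(repo string) (repodb.RepoMetadata, error) {
		return repodb.RepoMetadata{}, ErrTestError
	},
}

// any code accepting the RepoDB interface now exercises the failing path for GetRepoMeta
_, err := mockRepoDB.GetRepoMeta("repo")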


@ -34,7 +34,10 @@ function setup() {
"name": "dynamodb",
"endpoint": "http://localhost:4566",
"region": "us-east-2",
-"tableName": "BlobTable"
+"cacheTablename": "BlobTable",
+"repoMetaTablename": "RepoMetadataTable",
+"manifestDataTablename": "ManifestDataTable",
+"versionTablename": "Version"
}
},
"http": {
@ -63,6 +66,8 @@
EOF
awslocal s3 --region "us-east-2" mb s3://zot-storage
awslocal dynamodb --region "us-east-2" create-table --table-name "BlobTable" --attribute-definitions AttributeName=Digest,AttributeType=S --key-schema AttributeName=Digest,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
+awslocal dynamodb --region "us-east-2" create-table --table-name "RepoMetadataTable" --attribute-definitions AttributeName=RepoName,AttributeType=S --key-schema AttributeName=RepoName,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
+awslocal dynamodb --region "us-east-2" create-table --table-name "ManifestDataTable" --attribute-definitions AttributeName=Digest,AttributeType=S --key-schema AttributeName=Digest,KeyType=HASH --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=5
zot_serve_strace ${zot_config_file}
wait_zot_reachable "http://127.0.0.1:8080/v2/_catalog"
}
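
When this setup misbehaves, it helps to confirm that all the tables were actually created in localstack before zot starts; a convenience check, not part of this change:

awslocal dynamodb --region "us-east-2" list-tables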