Because S3 doesn't support hard links, we store duplicated blobs
as empty files. When the original blob is deleted, its content is
moved to the next duplicated blob, and so on.
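A minimal sketch of the idea, assuming a hypothetical dedupe cache that records every path for a digest with the original first; the interfaces and names below are illustrative, not the actual zot implementation:

```go
// Sketch only: blobStore and cache are hypothetical interfaces.
package dedupe

import "errors"

type blobStore interface {
	// Move copies the object's content from src to dst and removes src.
	Move(src, dst string) error
	Delete(path string) error
}

type cache interface {
	// Duplicates returns all paths recorded for a digest, original first.
	Duplicates(digest string) ([]string, error)
	Remove(digest, path string) error
}

func deleteDedupedBlob(store blobStore, c cache, digest, path string) error {
	dups, err := c.Duplicates(digest)
	if err != nil {
		return err
	}
	if len(dups) == 0 {
		return errors.New("digest not found in cache")
	}

	if path == dups[0] && len(dups) > 1 {
		// Deleting the original: move its content into the next duplicate,
		// which until now was just an empty placeholder file.
		if err := store.Move(path, dups[1]); err != nil {
			return err
		}
	} else {
		// Deleting a placeholder (or the last remaining copy): just remove it.
		if err := store.Delete(path); err != nil {
			return err
		}
	}

	return c.Remove(digest, path)
}
```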
Signed-off-by: Petu Eusebiu <peusebiu@cisco.com>
PR (linter: upgrade linter version #405) triggered the lint job, which failed
with many errors generated by various linters. Configurations were added to
golangcilint.yaml and several refactorings were made to improve the
results of the linter.
The maintidx linter was disabled.
Signed-off-by: Alex Stan <alexandrustan96@yahoo.ro>
The directory created by `T.TempDir` is automatically removed when the
test and all its subtests complete.
Reference: https://pkg.go.dev/testing#T.TempDir
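For illustration, a minimal test using `T.TempDir` (the test name is hypothetical):

```go
package storage_test

import "testing"

func TestBlobStore(t *testing.T) {
	// No manual cleanup needed: the testing package removes this directory
	// automatically once the test and all its subtests complete.
	dir := t.TempDir()

	if dir == "" {
		t.Fatal("expected a temporary directory")
	}
}
```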
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Behavior controlled by configuration (default=off)
It is a trade-off between performance and consistency.
References:
[1] https://github.com/golang/go/issues/20599
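A minimal sketch of what the trade-off looks like, assuming the option simply toggles an fsync (`File.Sync`) after writes; the function and flag names here are illustrative:

```go
package storage

import "os"

// commitWrite writes data to path and, when commit is enabled, flushes it to
// stable storage with fsync. Skipping the fsync is faster but risks losing
// the write on a crash: that is the performance/consistency trade-off.
func commitWrite(path string, data []byte, commit bool) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err := f.Write(data); err != nil {
		return err
	}

	if commit {
		// See golang/go#20599 for why fsync can be surprisingly costly.
		if err := f.Sync(); err != nil {
			return err
		}
	}

	return nil
}
```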
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Previously, mount blob would look for the blob path provided in the HTTP request and try to hard link that path,
but ideally we should look up the path from our cache and hard link that particular path.
This commit does exactly that.
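A rough sketch of the corrected flow, with an illustrative cache interface (not the actual zot types):

```go
package storage

import "os"

// digestCache maps a blob digest to the canonical path recorded in our cache;
// the interface is illustrative.
type digestCache interface {
	GetBlobPath(digest string) (string, error)
}

// mountBlob hard links the blob for the given digest into dstPath. Instead of
// trusting the source path supplied in the HTTP request, we look up the path
// recorded in our own cache and hard link that one.
func mountBlob(c digestCache, digest, dstPath string) error {
	srcPath, err := c.GetBlobPath(digest)
	if err != nil {
		return err // digest unknown to this registry
	}

	return os.Link(srcPath, dstPath)
}
```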
Added support for multiple storage locations in zot by running multiple instances of zot in the background.
See examples/config-multiple.json for more info about the config.
Closes #181
In use cases where there are large images with shared layers across
repositories, clients may benefit from not re-uploading the same blobs
over and over again.
We ensure this by hard linking when the check-blob API is called.
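For illustration, a client-side sketch of the check-blob call (HEAD /v2/<name>/blobs/<digest> per the dist-spec) that lets a client skip re-uploading a shared layer:

```go
package client

import (
	"fmt"
	"net/http"
)

// blobExists issues the dist-spec check-blob call
// (HEAD /v2/<name>/blobs/<digest>). A client can call this before pushing a
// layer and skip the upload entirely when the registry already has the blob,
// even if it was originally pushed to a different repository.
func blobExists(registry, name, digest string) (bool, error) {
	url := fmt.Sprintf("%s/v2/%s/blobs/%s", registry, name, digest)

	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	return resp.StatusCode == http.StatusOK, nil
}
```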
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
The idea initially was to use Bazel to do our builds; however, the Go
build system is now good enough and our code base is entirely Go.
Bazel is also slowing down our Travis CI/CD pipeline.
The storage layer is protected with read-write locks.
However, we may be holding the locks over unnecessarily large critical
sections.
The typical workflow is that a blob is first uploaded via a per-client
private session-id meaning the blob is not publicly visible yet. When
the blob being uploaded is very large, the transfer takes a long time
while holding the lock.
Private session-id based uploads don't really need locks; we now hold locks
only when blobs are published after the upload is complete.
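A minimal sketch of the narrower locking, with illustrative types and paths: chunk writes go to a private session path without taking the store lock, and only the publish step takes the write lock.

```go
package storage

import (
	"os"
	"path/filepath"
	"sync"
)

type imageStore struct {
	rootDir string
	lock    sync.RWMutex
}

// writeChunk appends data to a per-client session upload. The upload lives
// under a private session path no other client can see, so no store lock is
// held here even for very large transfers.
func (is *imageStore) writeChunk(sessionID string, chunk []byte) error {
	path := filepath.Join(is.rootDir, ".uploads", sessionID)

	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = f.Write(chunk)

	return err
}

// finishUpload publishes the uploaded blob under its digest. Only this short
// step, which makes the blob publicly visible, takes the write lock.
func (is *imageStore) finishUpload(sessionID, digest string) error {
	is.lock.Lock()
	defer is.lock.Unlock()

	src := filepath.Join(is.rootDir, ".uploads", sessionID)
	dst := filepath.Join(is.rootDir, "blobs", digest)

	return os.Rename(src, dst)
}
```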
`dstRecord`: the blob path stored in the cache.
`dst`: the blob path that is being uploaded.
Currently, if the actual blob on disk has been removed by GC/delete, then while syncing the cache
`dst` is passed to the `DeleteBlob` function and the retry section is called continuously, because
`DeleteBlob` never deletes the `dst` path (it doesn't exist in the db). `dstRecord` should be passed
to `DeleteBlob` instead, because `dstRecord` is the actual blob path stored in the db.
If the `dst` and `dstRecord` path values are the same, this issue does not occur and `DeleteBlob`
deletes the blob info from the cache; but if they differ, `DeleteBlob` tries to delete the `dst`
path, which is not in the cache.
Note: boltdb's Delete method returns nil even when the value doesn't exist
(https://godoc.org/github.com/boltdb/bolt#Bucket.Delete)
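The gist of the fix, sketched with an illustrative cache interface:

```go
package storage

import "os"

// dedupeCache records deduplication entries; the interface is illustrative.
type dedupeCache interface {
	DeleteBlob(digest, path string) error
}

// pruneStaleDedupeEntry handles the case where the blob recorded in the cache
// no longer exists on disk (e.g. removed by GC or an explicit delete).
// dstRecord is the path stored in the cache; passing the upload path (dst)
// instead would be a silent no-op, since dst was never stored in the db and
// boltdb's Delete returns nil for missing keys, so the caller would retry
// forever.
func pruneStaleDedupeEntry(c dedupeCache, digest, dstRecord string) error {
	if _, err := os.Stat(dstRecord); os.IsNotExist(err) {
		return c.DeleteBlob(digest, dstRecord)
	}

	return nil
}
```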
We perform inline garbage collection of orphan blobs. However, the
dist-spec poses a problem because blobs begin their life as orphan blobs,
and only later is a manifest added which refers to them.
We use umoci's GC() to perform garbage collection; policy support
has recently been added to it, which can control whether a blob is skipped
during GC.
In this patch, we use a time-based policy to skip blobs.
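A sketch of such a time-based skip decision; the integration with umoci's GC() policy hook is omitted here, and the names are illustrative:

```go
package storage

import (
	"os"
	"time"
)

// skipForGC reports whether a blob should be skipped (retained) during
// garbage collection. Blobs newer than gcDelay are kept even when no manifest
// refers to them yet, since per the dist-spec blobs are pushed before the
// manifest that will reference them.
func skipForGC(blobPath string, gcDelay time.Duration) (bool, error) {
	info, err := os.Stat(blobPath)
	if err != nil {
		return false, err
	}

	return time.Since(info.ModTime()) < gcDelay, nil
}
```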