Some LDAP servers are not MT-safe: when searches are performed while binds are still in flight, we see errors such as:
"comment: No other operations may be performed on the connection while a
bind is outstanding"
Add the goroutine-id to logs to help debug MT bugs.
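Below is a minimal sketch, assuming a plain fmt-based log line for illustration, of one common way to derive the current goroutine id from the runtime stack (Go does not expose it directly); zot's actual logger integration differs.
```go
package main

import (
	"bytes"
	"fmt"
	"runtime"
	"strconv"
)

// goroutineID parses the current goroutine's id out of the runtime stack
// trace, which starts with "goroutine <id> [running]:".
func goroutineID() uint64 {
	buf := make([]byte, 64)
	buf = buf[:runtime.Stack(buf, false)]
	buf = bytes.TrimPrefix(buf, []byte("goroutine "))
	buf = buf[:bytes.IndexByte(buf, ' ')]
	id, _ := strconv.ParseUint(string(buf), 10, 64)
	return id
}

func main() {
	// Tag a log line with the goroutine id so interleaved LDAP operations
	// can be told apart when debugging MT issues.
	fmt.Printf("goroutine=%d performing LDAP search\n", goroutineID())
}
```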
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
With recent docker client-side changes, on 'docker pull' we see:
"Error response from daemon: missing or empty Content-Type header"
Hence, set the Content-Type header.
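A minimal sketch of the fix with a plain net/http handler (the route and media type below are illustrative, not zot's actual routing):
```go
package main

import "net/http"

// manifestHandler sets the Content-Type header before writing the manifest
// body; without it, newer docker clients fail with
// "missing or empty Content-Type header".
func manifestHandler(body []byte, mediaType string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", mediaType)
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write(body)
	}
}

func main() {
	manifest := []byte(`{"schemaVersion": 2}`)
	http.Handle("/v2/test/manifests/latest",
		manifestHandler(manifest, "application/vnd.oci.image.manifest.v1+json"))
	_ = http.ListenAndServe(":8080", nil)
}
```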
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Previously, mount-blob would look for the blob path provided in the HTTP request and try to hard link that path.
Ideally, we should look up the path in our cache and hard link that particular path instead.
This commit does exactly that.
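A rough sketch of the idea under assumed names (blobCache and mountBlob are placeholders, not zot's actual API): look up the source path in our own cache instead of trusting the path in the request, then hard link it into place.
```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// blobCache stands in for zot's cache: it maps a digest to the canonical
// blob path recorded when the blob was first stored.
type blobCache map[string]string

func (c blobCache) getBlob(digest string) (string, error) {
	if p, ok := c[digest]; ok {
		return p, nil
	}
	return "", errors.New("digest not found in cache")
}

// mountBlob resolves the source path from the cache and hard links it to
// the destination, rather than linking whatever path the request supplied.
func mountBlob(cache blobCache, digest, dstPath string) error {
	srcPath, err := cache.getBlob(digest)
	if err != nil {
		return err
	}
	return os.Link(srcPath, dstPath)
}

func main() {
	cache := blobCache{"sha256:abc": "/var/lib/registry/repo1/blobs/sha256:abc"}
	if err := mountBlob(cache, "sha256:abc", "/var/lib/registry/repo2/blobs/sha256:abc"); err != nil {
		fmt.Println("mount failed:", err)
	}
}
```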
- Show individual layers with size and digest under each image
- Include config digest for each image
See the example below:
```
IMAGE NAME    TAG      DIGEST    CONFIG    LAYERS    SIZE
test/godev    0.4.7    7d38d8ca  05b9f86e            519MB
                                           f824a027  65MB
                                           a98af0f5  52MB
                                           ba5b2bc4  163MB
                                           58b1ca8d  228MB
                                           67d798ee  12MB
test/cdev     test     2292b4ae  cf6f6c77            280MB
                                           f824a027  65MB
                                           a98af0f5  52MB
                                           ba5b2bc4  163MB
test/cdev     0.4.7    2292b4ae  cf6f6c77            280MB
                                           f824a027  65MB
                                           a98af0f5  52MB
                                           ba5b2bc4  163MB
```
Note: the new layers and config fields are visible in the json/yaml output regardless of the value of the verbose flag.
Added support for pointing zot at multiple storage locations by running multiple zot instances in the background.
See examples/config-multiple.json for more info about the config.
Closes #181
In use cases where there are large images with shared layers across
repositories, clients may benefit from not re-uploading the same blobs
over and over again.
We ensure this by hard linking when the check-blob API is called.
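A hypothetical sketch of the approach (checkBlob, blobCache and the directory layout are assumptions for illustration): if the digest is already recorded in the cache but missing from the target repository, hard link it in and report that it exists, so the client skips the upload.
```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// blobCache maps a digest to the canonical blob path recorded on first push.
type blobCache map[string]string

func (c blobCache) getBlob(digest string) (string, error) {
	if p, ok := c[digest]; ok {
		return p, nil
	}
	return "", errors.New("digest not found in cache")
}

// checkBlob reports whether the blob exists in repoDir, hard linking it from
// the cached location when another repository already holds the same blob.
func checkBlob(cache blobCache, repoDir, digest string) (bool, error) {
	dstPath := filepath.Join(repoDir, "blobs", digest)
	if _, err := os.Stat(dstPath); err == nil {
		return true, nil // already present in this repository
	}
	srcPath, err := cache.getBlob(digest)
	if err != nil {
		return false, nil // unknown blob; the client must upload it
	}
	if err := os.Link(srcPath, dstPath); err != nil {
		return false, err
	}
	return true, nil // report "exists" so the client skips re-uploading
}

func main() {
	cache := blobCache{"sha256:abc": "/var/lib/registry/repo1/blobs/sha256:abc"}
	ok, err := checkBlob(cache, "/var/lib/registry/repo2", "sha256:abc")
	fmt.Println("blob exists:", ok, "err:", err)
}
```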
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
The idea initially was to use Bazel for our builds; however, the Go build
system is now good enough and our code base is entirely Go.
Bazel was also slowing down our Travis CI/CD pipeline.
The storage layer is protected with read-write locks.
However, we may be holding the locks over unnecessarily large critical
sections.
The typical workflow is that a blob is first uploaded via a per-client
private session-id meaning the blob is not publicly visible yet. When
the blob being uploaded is very large, the transfer takes a long time
while holding the lock.
Private session-id based uploads don't really need locks; we now hold locks
only when blobs are published after the upload is complete.
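A minimal sketch of the narrowed critical section (type and method names are assumptions): chunk writes to the private session file take no lock, and only the short publish step takes the write lock.
```go
package main

import (
	"os"
	"sync"
)

type blobStore struct {
	lock sync.RWMutex
}

// uploadChunk appends to the per-client session file; no lock is needed
// because the blob is not publicly visible until it is published.
func (s *blobStore) uploadChunk(sessionPath string, chunk []byte) error {
	f, err := os.OpenFile(sessionPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.Write(chunk)
	return err
}

// publish holds the write lock only for the short rename that makes the
// uploaded blob publicly visible.
func (s *blobStore) publish(sessionPath, blobPath string) error {
	s.lock.Lock()
	defer s.lock.Unlock()
	return os.Rename(sessionPath, blobPath)
}

func main() {
	s := &blobStore{}
	_ = s.uploadChunk("/tmp/session-1234", []byte("chunk"))
	_ = s.publish("/tmp/session-1234", "/tmp/blob-sha256-abc")
}
```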
If the image vulnerability scanner does not support an image's media type, we consider that image infected, and such images will no longer be shown in the fixed-images list.
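A hypothetical sketch of the rule (the Image and Scanner types are stand-ins, not the actual scanner integration): images whose media type the scanner cannot handle are treated as infected and therefore excluded from the fixed-images list.
```go
package main

import "fmt"

// Stand-in types for this sketch.
type Image struct {
	Name      string
	MediaType string
}

type Scanner struct {
	supported map[string]bool
}

func (s Scanner) supportsMediaType(mt string) bool { return s.supported[mt] }

// eligibleForFixedList applies the rule described above: an unsupported
// media type means we cannot prove the image is clean, so treat it as
// infected and keep it out of the fixed-images list.
func eligibleForFixedList(img Image, s Scanner) bool {
	if !s.supportsMediaType(img.MediaType) {
		return false // unsupported media type => assume infected
	}
	// A real implementation would scan the image and check its CVEs here.
	return true
}

func main() {
	s := Scanner{supported: map[string]bool{"application/vnd.oci.image.manifest.v1+json": true}}
	img := Image{Name: "test/helm-chart", MediaType: "application/vnd.cncf.helm.config.v1+json"}
	fmt.Println("eligible for fixed list:", eligibleForFixedList(img, s))
}
```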
Fixes issue #130
Uses zot's GraphQL API to fetch CVE info (a rough client sketch follows this list):
- Get all images affected by a CVE (input: CVEID)
- Get all CVEs of a layer (input: image:tag)
- Get all layers of an image which have resolved a CVE (input: image,
CVEID)
- Get all layers of an image affected by a CVE (input: image, CVEID)
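A rough client-side sketch of posting such a query to zot's GraphQL endpoint; the query name, fields, and endpoint below are illustrative assumptions, so consult zot's actual GraphQL schema for the real names.
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// imagesForCVE posts a GraphQL query asking which images are affected by the
// given CVE id and returns the decoded response.
func imagesForCVE(endpoint, cveID string) (map[string]interface{}, error) {
	query := fmt.Sprintf(`{"query":"{ ImageListForCVE(id: \"%s\") { Name Tags } }"}`, cveID)
	resp, err := http.Post(endpoint, "application/json", bytes.NewBufferString(query))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	result, err := imagesForCVE("http://localhost:8080/query", "CVE-2019-18276")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	pretty, _ := json.MarshalIndent(result, "", "  ")
	fmt.Println(string(pretty))
}
```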
dstRecord: blob path stored in the cache.
dst: blob path that is being uploaded.
Currently, if the actual blob on disk has been removed by GC/delete, then while syncing the cache, dst is passed to the DeleteBlob function and the retry section loops forever, because DeleteBlob never deletes the dst path (it doesn't exist in the db). dstRecord should be passed to DeleteBlob instead, because dstRecord is the actual blob path stored in the db.
If dst and dstRecord are the same path, this issue does not occur and DeleteBlob deletes the blob info from the cache; but if they differ, DeleteBlob tries to delete the dst path, which is not in the cache.
Note: boltdb's delete method returns nil even when the value doesn't exist (https://godoc.org/github.com/boltdb/bolt#Bucket.Delete).
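To make that note concrete, here is a small self-contained demo against boltdb (the db path and bucket name are arbitrary): deleting a key that was never stored still returns nil, which is why passing a path that is not in the db never surfaces an error.
```go
package main

import (
	"fmt"
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	db, err := bolt.Open("cache.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("blobs"))
		if err != nil {
			return err
		}
		// Delete a key that was never stored: Bucket.Delete still returns nil.
		return b.Delete([]byte("/no/such/blob/path"))
	})
	fmt.Println("delete of missing key returned:", err) // prints: <nil>
}
```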