Some LDAP servers are not MT-safe: performing searches while binds are still
in flight leads to errors such as:
"comment: No other operations may be performed on the connection while a
bind is outstanding"
Add a goroutine-id to the logs to help debug such MT bugs.
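For reference, Go does not expose goroutine IDs directly; below is a minimal
sketch of one common way to pull one out of runtime.Stack() output for log
correlation (the goid() helper is hypothetical, not part of zot):

    package main

    import (
        "bytes"
        "log"
        "runtime"
        "strconv"
    )

    // goid parses the current goroutine id out of runtime.Stack() output,
    // e.g. "goroutine 18 [running]:". Intended for debug logging only.
    func goid() uint64 {
        buf := make([]byte, 64)
        n := runtime.Stack(buf, false)
        fields := bytes.Fields(buf[:n])
        if len(fields) < 2 {
            return 0
        }
        id, err := strconv.ParseUint(string(fields[1]), 10, 64)
        if err != nil {
            return 0
        }
        return id
    }

    func main() {
        log.Printf("goroutine-id=%d issuing LDAP bind", goid())
    }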
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
With recent docker client-side changes, on 'docker pull' we see:
"Error response from daemon: missing or empty Content-Type header"
Hence, set the Content-Type header.
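A minimal sketch of the fix's shape (the handler and media-type constant here
are hypothetical, not zot's actual code), setting Content-Type before the body
is written:

    package main

    import (
        "net/http"
        "strconv"
    )

    // ociManifestMediaType is the standard OCI manifest media type.
    const ociManifestMediaType = "application/vnd.oci.image.manifest.v1+json"

    // getManifest is an illustrative handler: the key point is setting the
    // Content-Type header before writing the body, since newer docker clients
    // reject responses with a missing or empty Content-Type header.
    func getManifest(w http.ResponseWriter, r *http.Request) {
        body := []byte(`{"schemaVersion": 2}`) // placeholder manifest body

        w.Header().Set("Content-Type", ociManifestMediaType)
        w.Header().Set("Content-Length", strconv.Itoa(len(body)))
        w.WriteHeader(http.StatusOK)
        _, _ = w.Write(body)
    }

    func main() {
        http.HandleFunc("/v2/", getManifest)
        _ = http.ListenAndServe(":8080", nil)
    }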
Signed-off-by: Ramkumar Chinchani <rchincha@cisco.com>
Previously, blob mount would look up the blob path provided in the HTTP request
and try to hard link that path. Instead, we should look up the path in our own
cache and hard link that particular path.
This commit does exactly that.
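A minimal sketch of the intended behaviour, assuming a hypothetical cache type
(not the actual zot code): resolve the digest through our own cache, then hard
link the cached path rather than the path suggested by the request:

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // blobCache maps a blob digest to the canonical on-disk path we own.
    // It is a stand-in for the real cache layer.
    type blobCache map[string]string

    var errBlobNotFound = errors.New("blob not found in cache")

    // mountBlob hard links the cached blob for digest into dstPath.
    // The source path comes from our cache, not from the request.
    func mountBlob(cache blobCache, digest, dstPath string) error {
        srcPath, ok := cache[digest]
        if !ok {
            return errBlobNotFound
        }
        return os.Link(srcPath, dstPath)
    }

    func main() {
        cache := blobCache{"sha256:abc": "/var/lib/registry/blobs/sha256/abc"}
        if err := mountBlob(cache, "sha256:abc", "/tmp/mounted-blob"); err != nil {
            fmt.Println("mount failed:", err)
        }
    }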
Added support for multiple storage locations in zot by running multiple zot
instances in the background.
See examples/config-multiple.json for more info about the config.
Closes #181
The initial idea was to use Bazel for our builds; however, the Go build
system is now good enough and our code base is entirely Go.
Bazel was also slowing down our Travis CI/CD pipeline.
The storage layer is protected with read-write locks.
However, we may be holding the locks over unnecessarily large critical
sections.
The typical workflow is that a blob is first uploaded via a per-client
private session-id, meaning the blob is not publicly visible yet. When
the blob being uploaded is very large, the transfer takes a long time
while holding the lock.
Private session-id based uploads don't really need locks; hold the lock
only when the blob is published after the upload is complete.
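A rough sketch of the idea, with made-up names rather than the actual zot
storage code: session writes happen without the store lock, and the lock is
taken only for the short publish step:

    package storage

    import (
        "os"
        "path/filepath"
        "sync"
    )

    // blobStore sketch: uploads go to a private per-session file that no
    // reader can see, so only publication needs the write lock.
    type blobStore struct {
        lock sync.RWMutex
        root string
    }

    // writeSessionChunk appends data to a private session file. No lock is
    // held here because the session path is not publicly visible yet.
    func (s *blobStore) writeSessionChunk(sessionID string, data []byte) error {
        path := filepath.Join(s.root, ".uploads", sessionID)
        f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = f.Write(data)
        return err
    }

    // publish makes the uploaded blob visible under its digest. Only this
    // short rename is done under the write lock.
    func (s *blobStore) publish(sessionID, digest string) error {
        s.lock.Lock()
        defer s.lock.Unlock()
        src := filepath.Join(s.root, ".uploads", sessionID)
        dst := filepath.Join(s.root, "blobs", digest)
        return os.Rename(src, dst)
    }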
This is useful if we want to roll out experimental versions of zot
pointing to some storage shared with another zot instance.
Also, under storage-full conditions, it will be useful to turn on this
flag to prevent further writes.
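A minimal sketch of how such a read-only mode could be enforced (hypothetical
middleware, not the actual zot implementation):

    package server

    import "net/http"

    // readOnlyMiddleware rejects mutating requests before they reach the
    // storage layer whenever readOnly is set (e.g. an experimental instance
    // sharing storage, or a storage-full condition).
    func readOnlyMiddleware(readOnly bool, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            switch r.Method {
            case http.MethodPost, http.MethodPut, http.MethodPatch, http.MethodDelete:
                if readOnly {
                    http.Error(w, "registry is read-only", http.StatusMethodNotAllowed)
                    return
                }
            }
            next.ServeHTTP(w, r)
        })
    }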
We perform inline garbage collection of orphan blobs. However, the
dist-spec poses a problem because blobs begin their life as orphan blobs
and only later is a manifest added which refers to these blobs.
We use umoci's GC() to perform garbage collection, and policy support has
recently been added there to control whether a blob should be skipped
during GC.
In this patch, we use a time-based policy to skip blobs.
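A minimal sketch of what a time-based skip policy can look like; the callback
shape is illustrative, not umoci's actual policy API:

    package gc

    import (
        "os"
        "time"
    )

    // skipIfYoungerThan returns a predicate reporting whether a blob should be
    // skipped by GC because it was modified too recently: it may belong to an
    // in-flight upload whose manifest has not been pushed yet.
    func skipIfYoungerThan(delay time.Duration) func(blobPath string) (bool, error) {
        return func(blobPath string) (bool, error) {
            info, err := os.Stat(blobPath)
            if err != nil {
                return false, err
            }
            return time.Since(info.ModTime()) < delay, nil
        }
    }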
Go version changed to 1.14.4
Golangci-lint changed to 1.26.0
Bazel version changed to 3.0.0
Bazel rules_go version changed to 0.23.3
Bazel gazelle version changed to v0.21.0
Bazel build tools version changed to 0.25.1
Bazel skylib version changed to 1.0.2
containers/image is the dominant client library to interact with
registries.
It detects which authentication to use based on the WWW-Authenticate
header returned when pinging the "/v2/" end-point. If we don't return this
header, then credentials are not used for other write-protected end-points.
Hence, the compatibility fix.
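A minimal sketch of the compatibility fix's shape (the realm value and the auth
check are illustrative, not zot's actual handler):

    package server

    import (
        "fmt"
        "net/http"
    )

    // pingHandler serves the "/v2/" end-point: when the client is not yet
    // authenticated, a WWW-Authenticate challenge is returned so clients such
    // as containers/image know which auth scheme (here Basic) to use.
    func pingHandler(realm string, isAuthenticated func(*http.Request) bool) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            if !isAuthenticated(r) {
                w.Header().Set("WWW-Authenticate", fmt.Sprintf("Basic realm=%q", realm))
                w.WriteHeader(http.StatusUnauthorized)
                return
            }
            w.WriteHeader(http.StatusOK)
        }
    }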
Since we want to conform to the dist-spec, and the GC and dedupe
optimizations sometimes conflict with the conformance tests being run,
allow them to be turned off via configuration params.
Upstream conformance tests are being updated, so we need to stay aligned
while keeping our internal GC and dedupe features.
Add a new example config file which plays nice with conformance tests.
DeleteImageManifest() is updated to deal with the case where the same
manifest can be created under multiple tags and then deleted by its
digest - in that case, all matching entries must be deleted.
DeleteBlob() deletes the digest key (bucket) when the last reference is
dropped.
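A rough sketch of the two behaviours, using simplified stand-in types rather
than the actual zot code:

    package storage

    // manifestEntry is a simplified stand-in for an OCI index entry
    // (a digest plus the tag stored in its annotations).
    type manifestEntry struct {
        Digest string
        Tag    string
    }

    // removeManifestByDigest drops every index entry whose digest matches,
    // covering the case where the same manifest was pushed under several tags
    // and is then deleted by digest.
    func removeManifestByDigest(entries []manifestEntry, digest string) []manifestEntry {
        kept := entries[:0]
        for _, e := range entries {
            if e.Digest != digest {
                kept = append(kept, e)
            }
        }
        return kept
    }

    // dropBlobReference decrements the reference count for a digest and
    // deletes the key entirely once the last reference is gone.
    func dropBlobReference(refs map[string]int, digest string) {
        refs[digest]--
        if refs[digest] <= 0 {
            delete(refs, digest)
        }
    }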
As the number of repos and layers increases, so does the probability
that layers are duplicated. We dedupe using hard links when the content
is the same. This is intended to be purely a storage-layer optimization.
Access control, when available, is orthogonal to this optimization.
Add a durable cache to help speed up layer lookups.
Update README.
Add more unit tests.
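A rough sketch of the dedupe path described above, assuming a hypothetical
durable cache interface (not zot's actual storage code):

    package storage

    import (
        "io/ioutil"
        "os"
    )

    // dedupeCache is a durable digest -> canonical-path lookup.
    type dedupeCache interface {
        Get(digest string) (path string, ok bool)
        Put(digest, path string) error
    }

    // putBlobDeduped stores a blob at dstPath: if a blob with the same digest
    // already exists, a hard link to the original content is created instead
    // of writing a second copy.
    func putBlobDeduped(cache dedupeCache, digest, dstPath string, content []byte) error {
        if origPath, ok := cache.Get(digest); ok {
            return os.Link(origPath, dstPath) // same content, just another name
        }
        if err := ioutil.WriteFile(dstPath, content, 0o600); err != nil {
            return err
        }
        return cache.Put(digest, dstPath)
    }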
Now that we're GCing blobs on delete/update manifest, we should lock the
blob queries so that they don't race with each other.
This is a pretty coarse-grained lock; there's probably a better way to do
this.
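A minimal sketch of the coarse-grained locking, with hypothetical types: blob
reads take the read lock, while manifest delete/update (which may GC blobs)
takes the write lock:

    package storage

    import "sync"

    // imageStore sketch: one RWMutex guards the blob store so GC triggered by
    // manifest delete/update cannot race with concurrent blob reads.
    type imageStore struct {
        lock sync.RWMutex
    }

    func (is *imageStore) GetBlob(digest string) ([]byte, error) {
        is.lock.RLock()
        defer is.lock.RUnlock()
        // ... read the blob from disk ...
        return nil, nil
    }

    func (is *imageStore) DeleteImageManifest(repo, reference string) error {
        is.lock.Lock()
        defer is.lock.Unlock()
        // ... remove the manifest, then GC any now-orphaned blobs ...
        return nil
    }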
Signed-off-by: Tycho Andersen <tycho@tycho.ws>
Previously, CheckManifest() was not checking for the repo-not-found
condition and would default to a 500 status code.
Add that check now and return 404 instead.
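A minimal sketch of the status mapping, with an illustrative sentinel error
rather than the actual zot error types:

    package server

    import (
        "errors"
        "net/http"
    )

    var errRepoNotFound = errors.New("repository not found")

    // checkManifestStatus maps the storage-layer error to an HTTP status:
    // a missing repo is a client-visible 404, not an internal 500.
    func checkManifestStatus(err error) int {
        switch {
        case err == nil:
            return http.StatusOK
        case errors.Is(err, errRepoNotFound):
            return http.StatusNotFound
        default:
            return http.StatusInternalServerError
        }
    }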
Fixes issue #74
- that errors be returned in a specific format, using the new NewErrorList()
method and the string enum constants
- that a full blob upload without a session be allowed, given the repo name
and digest (sketched below)
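A rough sketch of the second item above, assuming a hypothetical storeFullBlob
helper (not zot's actual route handler):

    package server

    import (
        "io/ioutil"
        "net/http"
        "strings"
    )

    // createBlobUpload sketches the dist-spec monolithic upload: a single
    // POST /v2/<name>/blobs/uploads/?digest=<digest> carrying the full blob
    // can be stored directly, without creating an upload session first.
    func createBlobUpload(w http.ResponseWriter, r *http.Request) {
        digest := r.URL.Query().Get("digest")
        if digest == "" {
            // no digest given: start a regular (session-based) upload instead
            w.WriteHeader(http.StatusAccepted)
            return
        }

        body, err := ioutil.ReadAll(r.Body)
        if err != nil {
            w.WriteHeader(http.StatusInternalServerError)
            return
        }

        if err := storeFullBlob(digest, body); err != nil {
            w.WriteHeader(http.StatusBadRequest)
            return
        }

        // <name> is everything between /v2/ and /blobs/uploads/ in the path
        name := strings.TrimSuffix(strings.TrimPrefix(r.URL.Path, "/v2/"), "/blobs/uploads/")
        w.Header().Set("Location", "/v2/"+name+"/blobs/"+digest)
        w.WriteHeader(http.StatusCreated)
    }

    func storeFullBlob(digest string, content []byte) error { return nil } // stub for the sketch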