mirror of https://github.com/willnorris/imageproxy.git
synced 2024-12-30 22:34:18 -05:00
update all downstream dependencies
No specific features I'm looking to add, just keeping things up to date. Unit tests and my manual testing suggest everything is still working as expected.
This commit is contained in:
parent 17f19d612f
commit b5984d2822
25 changed files with 1661 additions and 486 deletions
vendor/github.com/disintegration/imaging/LICENSE (generated, vendored): 2 lines changed

@@ -1,6 +1,6 @@
 The MIT License (MIT)

-Copyright (c) 2012-2014 Grigory Dryapak
+Copyright (c) 2012-2017 Grigory Dryapak

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
vendor/github.com/disintegration/imaging/README.md (generated, vendored): 185 lines changed

@@ -1,5 +1,9 @@
 # Imaging

+[![GoDoc](https://godoc.org/github.com/disintegration/imaging?status.svg)](https://godoc.org/github.com/disintegration/imaging)
+[![Build Status](https://travis-ci.org/disintegration/imaging.svg?branch=master)](https://travis-ci.org/disintegration/imaging)
+[![Coverage Status](https://coveralls.io/repos/github/disintegration/imaging/badge.svg?branch=master)](https://coveralls.io/github/disintegration/imaging?branch=master)
+
 Package imaging provides basic image manipulation functions (resize, rotate, flip, crop, etc.).
 This package is based on the standard Go image package and works best along with it.

@@ -22,17 +26,18 @@ http://godoc.org/github.com/disintegration/imaging
 A few usage examples can be found below. See the documentation for the full list of supported functions.

 ### Image resizing

 ```go
-// resize srcImage to size = 128x128px using the Lanczos filter
+// Resize srcImage to size = 128x128px using the Lanczos filter.
 dstImage128 := imaging.Resize(srcImage, 128, 128, imaging.Lanczos)

-// resize srcImage to width = 800px preserving the aspect ratio
+// Resize srcImage to width = 800px preserving the aspect ratio.
 dstImage800 := imaging.Resize(srcImage, 800, 0, imaging.Lanczos)

-// scale down srcImage to fit the 800x600px bounding box
+// Scale down srcImage to fit the 800x600px bounding box.
 dstImageFit := imaging.Fit(srcImage, 800, 600, imaging.Lanczos)

-// resize and crop the srcImage to fill the 100x100px area
+// Resize and crop the srcImage to fill the 100x100px area.
 dstImageFill := imaging.Fill(srcImage, 100, 100, imaging.Center, imaging.Lanczos)
 ```

@@ -49,146 +54,138 @@ The full list of supported filters: NearestNeighbor, Box, Linear, Hermite, Mitc

 **Resampling filters comparison**

-Original image. Will be resized from 512x512px to 128x128px.
+The original image.

-![srcImage](http://disintegration.github.io/imaging/in_lena_bw_512.png)
+![srcImage](testdata/lena_512.png)

-Filter | Resize result
----|---
-`imaging.NearestNeighbor` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_nearest.png)
-`imaging.Box` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_box.png)
-`imaging.Linear` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_linear.png)
-`imaging.MitchellNetravali` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_mitchell.png)
-`imaging.CatmullRom` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_catrom.png)
-`imaging.Gaussian` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_gaussian.png)
-`imaging.Lanczos` | ![dstImage](http://disintegration.github.io/imaging/out_resize_down_lanczos.png)
+The same image resized from 512x512px to 128x128px using different resampling filters.
+From faster (lower quality) to slower (higher quality):

-**Resize functions comparison**
+Filter | Resize result
+--------------------------|---------------------------------------------
+`imaging.NearestNeighbor` | ![dstImage](testdata/out_resize_nearest.png)
+`imaging.Linear` | ![dstImage](testdata/out_resize_linear.png)
+`imaging.CatmullRom` | ![dstImage](testdata/out_resize_catrom.png)
+`imaging.Lanczos` | ![dstImage](testdata/out_resize_lanczos.png)

-Original image:
-
-![srcImage](http://disintegration.github.io/imaging/in.jpg)
-
-Resize the image to width=100px and height=100px:
-
-```go
-dstImage := imaging.Resize(srcImage, 100, 100, imaging.Lanczos)
-```
-![dstImage](http://disintegration.github.io/imaging/out-comp-resize.jpg)
-
-Resize the image to width=100px preserving the aspect ratio:
-
-```go
-dstImage := imaging.Resize(srcImage, 100, 0, imaging.Lanczos)
-```
-![dstImage](http://disintegration.github.io/imaging/out-comp-fit.jpg)
-
-Resize the image to fit the 100x100px boundng box preserving the aspect ratio:
-
-```go
-dstImage := imaging.Fit(srcImage, 100, 100, imaging.Lanczos)
-```
-![dstImage](http://disintegration.github.io/imaging/out-comp-fit.jpg)
-
-Resize and crop the image with a center anchor point to fill the 100x100px area:
-
-```go
-dstImage := imaging.Fill(srcImage, 100, 100, imaging.Center, imaging.Lanczos)
-```
-![dstImage](http://disintegration.github.io/imaging/out-comp-fill.jpg)
-
 ### Gaussian Blur

 ```go
 dstImage := imaging.Blur(srcImage, 0.5)
 ```

 Sigma parameter allows to control the strength of the blurring effect.

 Original image | Sigma = 0.5 | Sigma = 1.5
----|---|---
-![srcImage](http://disintegration.github.io/imaging/in_lena_bw_128.png) | ![dstImage](http://disintegration.github.io/imaging/out_blur_0.5.png) | ![dstImage](http://disintegration.github.io/imaging/out_blur_1.5.png)
+-----------------------------------|----------------------------------------|---------------------------------------
+![srcImage](testdata/lena_128.png) | ![dstImage](testdata/out_blur_0.5.png) | ![dstImage](testdata/out_blur_1.5.png)

 ### Sharpening

 ```go
 dstImage := imaging.Sharpen(srcImage, 0.5)
 ```

-Uses gaussian function internally. Sigma parameter allows to control the strength of the sharpening effect.
+`Sharpen` uses gaussian function internally. Sigma parameter allows to control the strength of the sharpening effect.

 Original image | Sigma = 0.5 | Sigma = 1.5
----|---|---
-![srcImage](http://disintegration.github.io/imaging/in_lena_bw_128.png) | ![dstImage](http://disintegration.github.io/imaging/out_sharpen_0.5.png) | ![dstImage](http://disintegration.github.io/imaging/out_sharpen_1.5.png)
+-----------------------------------|-------------------------------------------|------------------------------------------
+![srcImage](testdata/lena_128.png) | ![dstImage](testdata/out_sharpen_0.5.png) | ![dstImage](testdata/out_sharpen_1.5.png)

 ### Gamma correction

 ```go
 dstImage := imaging.AdjustGamma(srcImage, 0.75)
 ```

 Original image | Gamma = 0.75 | Gamma = 1.25
----|---|---
-![srcImage](http://disintegration.github.io/imaging/in_lena_bw_128.png) | ![dstImage](http://disintegration.github.io/imaging/out_gamma_0.75.png) | ![dstImage](http://disintegration.github.io/imaging/out_gamma_1.25.png)
+-----------------------------------|------------------------------------------|-----------------------------------------
+![srcImage](testdata/lena_128.png) | ![dstImage](testdata/out_gamma_0.75.png) | ![dstImage](testdata/out_gamma_1.25.png)

 ### Contrast adjustment

 ```go
 dstImage := imaging.AdjustContrast(srcImage, 20)
 ```

-Original image | Contrast = 20 | Contrast = -20
----|---|---
-![srcImage](http://disintegration.github.io/imaging/in_lena_bw_128.png) | ![dstImage](http://disintegration.github.io/imaging/out_contrast_p20.png) | ![dstImage](http://disintegration.github.io/imaging/out_contrast_m20.png)
+Original image | Contrast = 10 | Contrast = -10
+-----------------------------------|--------------------------------------------|-------------------------------------------
+![srcImage](testdata/lena_128.png) | ![dstImage](testdata/out_contrast_p10.png) | ![dstImage](testdata/out_contrast_m10.png)

 ### Brightness adjustment

 ```go
 dstImage := imaging.AdjustBrightness(srcImage, 20)
 ```

-Original image | Brightness = 20 | Brightness = -20
----|---|---
-![srcImage](http://disintegration.github.io/imaging/in_lena_bw_128.png) | ![dstImage](http://disintegration.github.io/imaging/out_brightness_p20.png) | ![dstImage](http://disintegration.github.io/imaging/out_brightness_m20.png)
+Original image | Brightness = 10 | Brightness = -10
+-----------------------------------|----------------------------------------------|---------------------------------------------
+![srcImage](testdata/lena_128.png) | ![dstImage](testdata/out_brightness_p10.png) | ![dstImage](testdata/out_brightness_m10.png)

-### Complete code example
-
-Here is the code example that loads several images, makes thumbnails of them
-and combines them together side-by-side.
+## Example code

 ```go
 package main

 import (
 	"image"
 	"image/color"
+	"log"
+
 	"github.com/disintegration/imaging"
 )

 func main() {
+	// Open the test image.
+	src, err := imaging.Open("testdata/lena_512.png")
+	if err != nil {
+		log.Fatalf("Open failed: %v", err)
+	}

-	// input files
-	files := []string{"01.jpg", "02.jpg", "03.jpg"}
+	// Crop the original image to 350x350px size using the center anchor.
+	src = imaging.CropAnchor(src, 350, 350, imaging.Center)

-	// load images and make 100x100 thumbnails of them
-	var thumbnails []image.Image
-	for _, file := range files {
-		img, err := imaging.Open(file)
-		if err != nil {
-			panic(err)
-		}
-		thumb := imaging.Thumbnail(img, 100, 100, imaging.CatmullRom)
-		thumbnails = append(thumbnails, thumb)
-	}
+	// Resize the cropped image to width = 256px preserving the aspect ratio.
+	src = imaging.Resize(src, 256, 0, imaging.Lanczos)

-	// create a new blank image
-	dst := imaging.New(100*len(thumbnails), 100, color.NRGBA{0, 0, 0, 0})
+	// Create a blurred version of the image.
+	img1 := imaging.Blur(src, 2)

-	// paste thumbnails into the new image side by side
-	for i, thumb := range thumbnails {
-		dst = imaging.Paste(dst, thumb, image.Pt(i*100, 0))
-	}
+	// Create a grayscale version of the image with higher contrast and sharpness.
+	img2 := imaging.Grayscale(src)
+	img2 = imaging.AdjustContrast(img2, 20)
+	img2 = imaging.Sharpen(img2, 2)

-	// save the combined image to file
-	err := imaging.Save(dst, "dst.jpg")
-	if err != nil {
-		panic(err)
-	}
+	// Create an inverted version of the image.
+	img3 := imaging.Invert(src)
+
+	// Create an embossed version of the image using a convolution filter.
+	img4 := imaging.Convolve3x3(
+		src,
+		[9]float64{
+			-1, -1, 0,
+			-1, 1, 1,
+			0, 1, 1,
+		},
+		nil,
+	)
+
+	// Create a new image and paste the four produced images into it.
+	dst := imaging.New(512, 512, color.NRGBA{0, 0, 0, 0})
+	dst = imaging.Paste(dst, img1, image.Pt(0, 0))
+	dst = imaging.Paste(dst, img2, image.Pt(0, 256))
+	dst = imaging.Paste(dst, img3, image.Pt(256, 0))
+	dst = imaging.Paste(dst, img4, image.Pt(256, 256))
+
+	// Save the resulting image using JPEG format.
+	err = imaging.Save(dst, "testdata/out_example.jpg")
+	if err != nil {
+		log.Fatalf("Save failed: %v", err)
+	}
 }
 ```

+Output:
+
+![dstImage](testdata/out_example.jpg)
vendor/github.com/disintegration/imaging/convolution.go (generated, vendored, new file): 148 lines added

@@ -0,0 +1,148 @@
+package imaging
+
+import (
+	"image"
+)
+
+// ConvolveOptions are convolution parameters.
+type ConvolveOptions struct {
+	// If Normalize is true the kernel is normalized before convolution.
+	Normalize bool
+
+	// If Abs is true the absolute value of each color channel is taken after convolution.
+	Abs bool
+
+	// Bias is added to each color channel value after convolution.
+	Bias int
+}
+
+// Convolve3x3 convolves the image with the specified 3x3 convolution kernel.
+// Default parameters are used if a nil *ConvolveOptions is passed.
+func Convolve3x3(img image.Image, kernel [9]float64, options *ConvolveOptions) *image.NRGBA {
+	return convolve(img, kernel[:], options)
+}
+
+// Convolve5x5 convolves the image with the specified 5x5 convolution kernel.
+// Default parameters are used if a nil *ConvolveOptions is passed.
+func Convolve5x5(img image.Image, kernel [25]float64, options *ConvolveOptions) *image.NRGBA {
+	return convolve(img, kernel[:], options)
+}
+
+func convolve(img image.Image, kernel []float64, options *ConvolveOptions) *image.NRGBA {
+	src := toNRGBA(img)
+	w := src.Bounds().Max.X
+	h := src.Bounds().Max.Y
+	dst := image.NewNRGBA(image.Rect(0, 0, w, h))
+
+	if w < 1 || h < 1 {
+		return dst
+	}
+
+	if options == nil {
+		options = &ConvolveOptions{}
+	}
+
+	if options.Normalize {
+		normalizeKernel(kernel)
+	}
+
+	type coef struct {
+		x, y int
+		k    float64
+	}
+	var coefs []coef
+	var m int
+
+	switch len(kernel) {
+	case 9:
+		m = 1
+	case 25:
+		m = 2
+	default:
+		return dst
+	}
+
+	i := 0
+	for y := -m; y <= m; y++ {
+		for x := -m; x <= m; x++ {
+			if kernel[i] != 0 {
+				coefs = append(coefs, coef{x: x, y: y, k: kernel[i]})
+			}
+			i++
+		}
+	}
+
+	parallel(h, func(partStart, partEnd int) {
+		for y := partStart; y < partEnd; y++ {
+			for x := 0; x < w; x++ {
+				var r, g, b float64
+				for _, c := range coefs {
+					ix := x + c.x
+					if ix < 0 {
+						ix = 0
+					} else if ix >= w {
+						ix = w - 1
+					}
+
+					iy := y + c.y
+					if iy < 0 {
+						iy = 0
+					} else if iy >= h {
+						iy = h - 1
+					}
+
+					off := iy*src.Stride + ix*4
+					r += float64(src.Pix[off+0]) * c.k
+					g += float64(src.Pix[off+1]) * c.k
+					b += float64(src.Pix[off+2]) * c.k
+				}
+
+				if options.Abs {
+					if r < 0 {
+						r = -r
+					}
+					if g < 0 {
+						g = -g
+					}
+					if b < 0 {
+						b = -b
+					}
+				}
+
+				if options.Bias != 0 {
+					r += float64(options.Bias)
+					g += float64(options.Bias)
+					b += float64(options.Bias)
+				}
+
+				srcOff := y*src.Stride + x*4
+				dstOff := y*dst.Stride + x*4
+				dst.Pix[dstOff+0] = clamp(r)
+				dst.Pix[dstOff+1] = clamp(g)
+				dst.Pix[dstOff+2] = clamp(b)
+				dst.Pix[dstOff+3] = src.Pix[srcOff+3]
+			}
+		}
+	})
+
+	return dst
+}
+
+func normalizeKernel(kernel []float64) {
+	var sum, sumpos float64
+	for i := range kernel {
+		sum += kernel[i]
+		if kernel[i] > 0 {
+			sumpos += kernel[i]
+		}
+	}
+	if sum != 0 {
+		for i := range kernel {
+			kernel[i] /= sum
+		}
+	} else if sumpos != 0 {
+		for i := range kernel {
+			kernel[i] /= sumpos
+		}
+	}
+}
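The new `Convolve3x3`/`Convolve5x5` API above takes a flat row-major kernel plus optional `ConvolveOptions`. A minimal usage sketch, assuming only what the diff shows; the file names and the edge-enhance kernel are illustrative, not part of this commit:

```go
package main

import (
	"log"

	"github.com/disintegration/imaging"
)

func main() {
	src, err := imaging.Open("in.png") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}

	// 3x3 edge-enhance kernel. With Normalize set, convolve divides the
	// kernel by its sum (here 1), keeping overall brightness stable.
	dst := imaging.Convolve3x3(
		src,
		[9]float64{
			0, -1, 0,
			-1, 5, -1,
			0, -1, 0,
		},
		&imaging.ConvolveOptions{Normalize: true},
	)

	if err := imaging.Save(dst, "out.png"); err != nil {
		log.Fatal(err)
	}
}
```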
vendor/github.com/disintegration/imaging/effects.go (generated, vendored): 54 lines changed

@@ -62,28 +62,22 @@ func blurHorizontal(src *image.NRGBA, kernel []float64) *image.NRGBA {
 		}

 		for y := 0; y < height; y++ {
-			r, g, b, a := 0.0, 0.0, 0.0, 0.0
+			var r, g, b, a float64
 			for ix := start; ix <= end; ix++ {
 				weight := kernel[absint(x-ix)]
 				i := y*src.Stride + ix*4
-				r += float64(src.Pix[i+0]) * weight
-				g += float64(src.Pix[i+1]) * weight
-				b += float64(src.Pix[i+2]) * weight
-				a += float64(src.Pix[i+3]) * weight
+				wa := float64(src.Pix[i+3]) * weight
+				r += float64(src.Pix[i+0]) * wa
+				g += float64(src.Pix[i+1]) * wa
+				b += float64(src.Pix[i+2]) * wa
+				a += wa
 			}

-			r = math.Min(math.Max(r/weightSum, 0.0), 255.0)
-			g = math.Min(math.Max(g/weightSum, 0.0), 255.0)
-			b = math.Min(math.Max(b/weightSum, 0.0), 255.0)
-			a = math.Min(math.Max(a/weightSum, 0.0), 255.0)
-
 			j := y*dst.Stride + x*4
-			dst.Pix[j+0] = uint8(r + 0.5)
-			dst.Pix[j+1] = uint8(g + 0.5)
-			dst.Pix[j+2] = uint8(b + 0.5)
-			dst.Pix[j+3] = uint8(a + 0.5)
+			dst.Pix[j+0] = clamp(r / a)
+			dst.Pix[j+1] = clamp(g / a)
+			dst.Pix[j+2] = clamp(b / a)
+			dst.Pix[j+3] = clamp(a / weightSum)
 		}
 	}
 })

@@ -116,28 +110,22 @@ func blurVertical(src *image.NRGBA, kernel []float64) *image.NRGBA {
 		}

 		for x := 0; x < width; x++ {
-			r, g, b, a := 0.0, 0.0, 0.0, 0.0
+			var r, g, b, a float64
 			for iy := start; iy <= end; iy++ {
 				weight := kernel[absint(y-iy)]
 				i := iy*src.Stride + x*4
-				r += float64(src.Pix[i+0]) * weight
-				g += float64(src.Pix[i+1]) * weight
-				b += float64(src.Pix[i+2]) * weight
-				a += float64(src.Pix[i+3]) * weight
+				wa := float64(src.Pix[i+3]) * weight
+				r += float64(src.Pix[i+0]) * wa
+				g += float64(src.Pix[i+1]) * wa
+				b += float64(src.Pix[i+2]) * wa
+				a += wa
 			}

-			r = math.Min(math.Max(r/weightSum, 0.0), 255.0)
-			g = math.Min(math.Max(g/weightSum, 0.0), 255.0)
-			b = math.Min(math.Max(b/weightSum, 0.0), 255.0)
-			a = math.Min(math.Max(a/weightSum, 0.0), 255.0)
-
 			j := y*dst.Stride + x*4
-			dst.Pix[j+0] = uint8(r + 0.5)
-			dst.Pix[j+1] = uint8(g + 0.5)
-			dst.Pix[j+2] = uint8(b + 0.5)
-			dst.Pix[j+3] = uint8(a + 0.5)
+			dst.Pix[j+0] = clamp(r / a)
+			dst.Pix[j+1] = clamp(g / a)
+			dst.Pix[j+2] = clamp(b / a)
+			dst.Pix[j+3] = clamp(a / weightSum)
 		}
 	}
 })

@@ -171,7 +159,7 @@ func Sharpen(img image.Image, sigma float64) *image.NRGBA {
 			i := y*src.Stride + x*4
 			for j := 0; j < 4; j++ {
 				k := i + j
-				val := int(src.Pix[k]) + (int(src.Pix[k]) - int(blurred.Pix[k]))
+				val := int(src.Pix[k])<<1 - int(blurred.Pix[k])
 				if val < 0 {
 					val = 0
 				} else if val > 255 {
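The two blur hunks change more than style: each channel used to be accumulated with the raw kernel weight, and is now accumulated with the weight scaled by the source pixel's alpha, then un-premultiplied by the accumulated alpha. That stops fully transparent pixels from bleeding whatever RGB values they happen to carry into their opaque neighbors. A standalone sketch of the pattern, with hypothetical names, not code from this commit:

```go
package sketch

// blurPixel applies the accumulation pattern from the blur hunks above to
// one NRGBA neighborhood (4 bytes per pixel): RGB contributions are scaled
// by weight*alpha, and the sums are un-premultiplied at the end.
func blurPixel(pix []uint8, kernel []float64, weightSum float64) [4]uint8 {
	clamp := func(x float64) uint8 {
		v := int64(x + 0.5)
		if v > 255 {
			return 255
		}
		if v > 0 {
			return uint8(v)
		}
		return 0
	}

	var r, g, b, a float64
	for p := 0; p < len(kernel) && p*4+3 < len(pix); p++ {
		wa := float64(pix[p*4+3]) * kernel[p] // kernel weight scaled by source alpha
		r += float64(pix[p*4+0]) * wa
		g += float64(pix[p*4+1]) * wa
		b += float64(pix[p*4+2]) * wa
		a += wa
	}
	if a == 0 {
		// A fully transparent neighborhood contributes no color at all,
		// which is exactly what the alpha weighting is for.
		return [4]uint8{}
	}
	// Dividing by the accumulated alpha un-premultiplies the color channels.
	return [4]uint8{clamp(r / a), clamp(g / a), clamp(b / a), clamp(a / weightSum)}
}
```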
vendor/github.com/disintegration/imaging/helpers.go (generated, vendored): 44 lines changed

@@ -1,11 +1,9 @@
-/*
-Package imaging provides basic image manipulation functions (resize, rotate, flip, crop, etc.).
-This package is based on the standard Go image package and works best along with it.
-
-Image manipulation functions provided by the package take any image type
-that implements `image.Image` interface as an input, and return a new image of
-`*image.NRGBA` type (32bit RGBA colors, not premultiplied by alpha).
-*/
+// Package imaging provides basic image manipulation functions (resize, rotate, flip, crop, etc.).
+// This package is based on the standard Go image package and works best along with it.
+//
+// Image manipulation functions provided by the package take any image type
+// that implements `image.Image` interface as an input, and return a new image of
+// `*image.NRGBA` type (32bit RGBA colors, not premultiplied by alpha).
 package imaging

 import (

@@ -24,8 +22,10 @@ import (
 	"golang.org/x/image/tiff"
 )

+// Format is an image file format.
 type Format int

+// Image file formats.
 const (
 	JPEG Format = iota
 	PNG

@@ -52,6 +52,7 @@ func (f Format) String() string {
 }

 var (
+	// ErrUnsupportedFormat means the given image format (or file extension) is unsupported.
 	ErrUnsupportedFormat = errors.New("imaging: unsupported image format")
 )

@@ -194,15 +195,12 @@ func Clone(img image.Image) *image.NRGBA {
 			di := dst.PixOffset(0, dstY)
 			si := src.PixOffset(srcMinX, srcMinY+dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				dst.Pix[di+0] = src.Pix[si+0]
 				dst.Pix[di+1] = src.Pix[si+2]
 				dst.Pix[di+2] = src.Pix[si+4]
 				dst.Pix[di+3] = src.Pix[si+6]
-
 				di += 4
 				si += 8
-
 			}
 		}
 	})

@@ -213,9 +211,9 @@ func Clone(img image.Image) *image.NRGBA {
 			di := dst.PixOffset(0, dstY)
 			si := src.PixOffset(srcMinX, srcMinY+dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				a := src.Pix[si+3]
 				dst.Pix[di+3] = a
+
 				switch a {
 				case 0:
 					dst.Pix[di+0] = 0

@@ -237,7 +235,6 @@ func Clone(img image.Image) *image.NRGBA {

 				di += 4
 				si += 4
-
 			}
 		}
 	})

@@ -248,9 +245,9 @@ func Clone(img image.Image) *image.NRGBA {
 			di := dst.PixOffset(0, dstY)
 			si := src.PixOffset(srcMinX, srcMinY+dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				a := src.Pix[si+6]
 				dst.Pix[di+3] = a
+
 				switch a {
 				case 0:
 					dst.Pix[di+0] = 0

@@ -272,7 +269,6 @@ func Clone(img image.Image) *image.NRGBA {

 				di += 4
 				si += 8
-
 			}
 		}
 	})

@@ -283,16 +279,13 @@ func Clone(img image.Image) *image.NRGBA {
 			di := dst.PixOffset(0, dstY)
 			si := src.PixOffset(srcMinX, srcMinY+dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				c := src.Pix[si]
 				dst.Pix[di+0] = c
 				dst.Pix[di+1] = c
 				dst.Pix[di+2] = c
 				dst.Pix[di+3] = 0xff
-
 				di += 4
 				si += 1
-
 			}
 		}
 	})

@@ -303,16 +296,13 @@ func Clone(img image.Image) *image.NRGBA {
 			di := dst.PixOffset(0, dstY)
 			si := src.PixOffset(srcMinX, srcMinY+dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				c := src.Pix[si]
 				dst.Pix[di+0] = c
 				dst.Pix[di+1] = c
 				dst.Pix[di+2] = c
 				dst.Pix[di+3] = 0xff
-
 				di += 4
 				si += 2
-
 			}
 		}
 	})

@@ -322,7 +312,6 @@ func Clone(img image.Image) *image.NRGBA {
 		for dstY := partStart; dstY < partEnd; dstY++ {
 			di := dst.PixOffset(0, dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				srcX := srcMinX + dstX
 				srcY := srcMinY + dstY
 				siy := src.YOffset(srcX, srcY)

@@ -332,9 +321,7 @@ func Clone(img image.Image) *image.NRGBA {
 				dst.Pix[di+1] = g
 				dst.Pix[di+2] = b
 				dst.Pix[di+3] = 0xff
-
 				di += 4
-
 			}
 		}
 	})

@@ -345,22 +332,18 @@ func Clone(img image.Image) *image.NRGBA {
 		for i := 0; i < plen; i++ {
 			pnew[i] = color.NRGBAModel.Convert(src.Palette[i]).(color.NRGBA)
 		}
-
 		parallel(dstH, func(partStart, partEnd int) {
 			for dstY := partStart; dstY < partEnd; dstY++ {
 				di := dst.PixOffset(0, dstY)
 				si := src.PixOffset(srcMinX, srcMinY+dstY)
 				for dstX := 0; dstX < dstW; dstX++ {
-
 					c := pnew[src.Pix[si]]
 					dst.Pix[di+0] = c.R
 					dst.Pix[di+1] = c.G
 					dst.Pix[di+2] = c.B
 					dst.Pix[di+3] = c.A
-
 					di += 4
 					si += 1
-
 				}
 			}
 		})

@@ -370,15 +353,12 @@ func Clone(img image.Image) *image.NRGBA {
 		for dstY := partStart; dstY < partEnd; dstY++ {
 			di := dst.PixOffset(0, dstY)
 			for dstX := 0; dstX < dstW; dstX++ {
-
 				c := color.NRGBAModel.Convert(img.At(srcMinX+dstX, srcMinY+dstY)).(color.NRGBA)
 				dst.Pix[di+0] = c.R
 				dst.Pix[di+1] = c.G
 				dst.Pix[di+2] = c.B
 				dst.Pix[di+3] = c.A
-
 				di += 4
-
 			}
 		}
 	})

@@ -388,7 +368,7 @@ func Clone(img image.Image) *image.NRGBA {
 	return dst
 }

-// This function used internally to convert any image type to NRGBA if needed.
+// toNRGBA converts any image type to *image.NRGBA with min-point at (0, 0).
 func toNRGBA(img image.Image) *image.NRGBA {
 	srcBounds := img.Bounds()
 	if srcBounds.Min.X == 0 && srcBounds.Min.Y == 0 {
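The added doc comments make `Format` and `ErrUnsupportedFormat` part of the documented surface. A small sketch of how the error is presumably observed, assuming `Save` picks its encoder from the file extension as the error's doc comment suggests (the `.xyz` extension is deliberately bogus):

```go
package main

import (
	"fmt"
	"image/color"

	"github.com/disintegration/imaging"
)

func main() {
	img := imaging.New(64, 64, color.NRGBA{255, 0, 0, 255})

	// An unrecognized extension is assumed to surface as ErrUnsupportedFormat.
	if err := imaging.Save(img, "out.xyz"); err == imaging.ErrUnsupportedFormat {
		fmt.Println("unsupported image format, pick one of the supported extensions")
	}
}
```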
vendor/github.com/disintegration/imaging/histogram.go (generated, vendored, new file): 43 lines added

@@ -0,0 +1,43 @@
+package imaging
+
+import (
+	"image"
+)
+
+// Histogram returns a normalized histogram of an image.
+//
+// Resulting histogram is represented as an array of 256 floats, where
+// histogram[i] is a probability of a pixel being of a particular luminance i.
+func Histogram(img image.Image) [256]float64 {
+	src := toNRGBA(img)
+	width := src.Bounds().Max.X
+	height := src.Bounds().Max.Y
+
+	var histogram [256]float64
+	var total float64
+
+	if width == 0 || height == 0 {
+		return histogram
+	}
+
+	for y := 0; y < height; y++ {
+		for x := 0; x < width; x++ {
+			i := y*src.Stride + x*4
+
+			r := src.Pix[i+0]
+			g := src.Pix[i+1]
+			b := src.Pix[i+2]
+
+			var y float32 = 0.299*float32(r) + 0.587*float32(g) + 0.114*float32(b)
+
+			histogram[int(y+0.5)]++
+			total++
+		}
+	}
+
+	for i := 0; i < 256; i++ {
+		histogram[i] = histogram[i] / total
+	}
+
+	return histogram
+}
vendor/github.com/disintegration/imaging/resize.go (generated, vendored): 130 lines changed

@@ -5,17 +5,12 @@ import (
 	"math"
 )

-type iwpair struct {
-	i int
-	w int32
+type indexWeight struct {
+	index  int
+	weight float64
 }

-type pweights struct {
-	iwpairs []iwpair
-	wsum    int32
-}
-
-func precomputeWeights(dstSize, srcSize int, filter ResampleFilter) []pweights {
+func precomputeWeights(dstSize, srcSize int, filter ResampleFilter) [][]indexWeight {
 	du := float64(srcSize) / float64(dstSize)
 	scale := du
 	if scale < 1.0 {

@@ -23,7 +18,7 @@ func precomputeWeights(dstSize, srcSize int, filter ResampleFilter) []pweights {
 	}
 	ru := math.Ceil(scale * filter.Support)

-	out := make([]pweights, dstSize)
+	out := make([][]indexWeight, dstSize)

 	for v := 0; v < dstSize; v++ {
 		fu := (float64(v)+0.5)*du - 0.5

@@ -37,15 +32,19 @@ func precomputeWeights(dstSize, srcSize int, filter ResampleFilter) []pweights {
 			endu = srcSize - 1
 		}

-		wsum := int32(0)
+		var sum float64
 		for u := startu; u <= endu; u++ {
-			w := int32(0xff * filter.Kernel((float64(u)-fu)/scale))
+			w := filter.Kernel((float64(u) - fu) / scale)
 			if w != 0 {
-				wsum += w
-				out[v].iwpairs = append(out[v].iwpairs, iwpair{u, w})
+				sum += w
+				out[v] = append(out[v], indexWeight{index: u, weight: w})
+			}
+		}
+		if sum != 0 {
+			for i := range out[v] {
+				out[v][i].weight /= sum
 			}
 		}
-		out[v].wsum = wsum
 	}

 	return out

@@ -127,21 +126,26 @@ func resizeHorizontal(src *image.NRGBA, width int, filter ResampleFilter) *image

 	parallel(dstH, func(partStart, partEnd int) {
 		for dstY := partStart; dstY < partEnd; dstY++ {
+			i0 := dstY * src.Stride
+			j0 := dstY * dst.Stride
 			for dstX := 0; dstX < dstW; dstX++ {
-				var c [4]int32
-				for _, iw := range weights[dstX].iwpairs {
-					i := dstY*src.Stride + iw.i*4
-					c[0] += int32(src.Pix[i+0]) * iw.w
-					c[1] += int32(src.Pix[i+1]) * iw.w
-					c[2] += int32(src.Pix[i+2]) * iw.w
-					c[3] += int32(src.Pix[i+3]) * iw.w
+				var r, g, b, a float64
+				for _, w := range weights[dstX] {
+					i := i0 + w.index*4
+					aw := float64(src.Pix[i+3]) * w.weight
+					r += float64(src.Pix[i+0]) * aw
+					g += float64(src.Pix[i+1]) * aw
+					b += float64(src.Pix[i+2]) * aw
+					a += aw
+				}
+				if a != 0 {
+					aInv := 1 / a
+					j := j0 + dstX*4
+					dst.Pix[j+0] = clamp(r * aInv)
+					dst.Pix[j+1] = clamp(g * aInv)
+					dst.Pix[j+2] = clamp(b * aInv)
+					dst.Pix[j+3] = clamp(a)
 				}
-				j := dstY*dst.Stride + dstX*4
-				sum := weights[dstX].wsum
-				dst.Pix[j+0] = clampint32(int32(float32(c[0])/float32(sum) + 0.5))
-				dst.Pix[j+1] = clampint32(int32(float32(c[1])/float32(sum) + 0.5))
-				dst.Pix[j+2] = clampint32(int32(float32(c[2])/float32(sum) + 0.5))
-				dst.Pix[j+3] = clampint32(int32(float32(c[3])/float32(sum) + 0.5))
 			}
 		}
 	})

@@ -162,32 +166,33 @@ func resizeVertical(src *image.NRGBA, height int, filter ResampleFilter) *image.
 	weights := precomputeWeights(dstH, srcH, filter)

 	parallel(dstW, func(partStart, partEnd int) {

 		for dstX := partStart; dstX < partEnd; dstX++ {
 			for dstY := 0; dstY < dstH; dstY++ {
-				var c [4]int32
-				for _, iw := range weights[dstY].iwpairs {
-					i := iw.i*src.Stride + dstX*4
-					c[0] += int32(src.Pix[i+0]) * iw.w
-					c[1] += int32(src.Pix[i+1]) * iw.w
-					c[2] += int32(src.Pix[i+2]) * iw.w
-					c[3] += int32(src.Pix[i+3]) * iw.w
+				var r, g, b, a float64
+				for _, w := range weights[dstY] {
+					i := w.index*src.Stride + dstX*4
+					aw := float64(src.Pix[i+3]) * w.weight
+					r += float64(src.Pix[i+0]) * aw
+					g += float64(src.Pix[i+1]) * aw
+					b += float64(src.Pix[i+2]) * aw
+					a += aw
+				}
+				if a != 0 {
+					aInv := 1 / a
+					j := dstY*dst.Stride + dstX*4
+					dst.Pix[j+0] = clamp(r * aInv)
+					dst.Pix[j+1] = clamp(g * aInv)
+					dst.Pix[j+2] = clamp(b * aInv)
+					dst.Pix[j+3] = clamp(a)
 				}
-				j := dstY*dst.Stride + dstX*4
-				sum := weights[dstY].wsum
-				dst.Pix[j+0] = clampint32(int32(float32(c[0])/float32(sum) + 0.5))
-				dst.Pix[j+1] = clampint32(int32(float32(c[1])/float32(sum) + 0.5))
-				dst.Pix[j+2] = clampint32(int32(float32(c[2])/float32(sum) + 0.5))
-				dst.Pix[j+3] = clampint32(int32(float32(c[3])/float32(sum) + 0.5))
 			}
 		}

 	})

 	return dst
 }

-// fast nearest-neighbor resize, no filtering
+// resizeNearest is a fast nearest-neighbor resize, no filtering.
 func resizeNearest(src *image.NRGBA, width, height int) *image.NRGBA {
 	dstW, dstH := width, height

@@ -203,13 +208,16 @@ func resizeNearest(src *image.NRGBA, width, height int) *image.NRGBA {
 	parallel(dstH, func(partStart, partEnd int) {

 		for dstY := partStart; dstY < partEnd; dstY++ {
-			fy := (float64(dstY)+0.5)*dy - 0.5
+			srcY := int((float64(dstY) + 0.5) * dy)
+			if srcY > srcH-1 {
+				srcY = srcH - 1
+			}

 			for dstX := 0; dstX < dstW; dstX++ {
-				fx := (float64(dstX)+0.5)*dx - 0.5
-
-				srcX := int(math.Min(math.Max(math.Floor(fx+0.5), 0.0), float64(srcW)))
-				srcY := int(math.Min(math.Max(math.Floor(fy+0.5), 0.0), float64(srcH)))
+				srcX := int((float64(dstX) + 0.5) * dx)
+				if srcX > srcW-1 {
+					srcX = srcW - 1
+				}

 				srcOff := srcY*src.Stride + srcX*4
 				dstOff := dstY*dst.Stride + dstX*4

@@ -324,7 +332,7 @@ func Thumbnail(img image.Image, width, height int, filter ResampleFilter) *image
 	return Fill(img, width, height, Center, filter)
 }

-// Resample filter struct. It can be used to make custom filters.
+// ResampleFilter is a resampling filter struct. It can be used to define custom filters.
 //
 // Supported resample filters: NearestNeighbor, Box, Linear, Hermite, MitchellNetravali,
 // CatmullRom, BSpline, Gaussian, Lanczos, Hann, Hamming, Blackman, Bartlett, Welch, Cosine.

@@ -359,7 +367,7 @@ type ResampleFilter struct {
 	Kernel func(float64) float64
 }

-// Nearest-neighbor filter, no anti-aliasing.
+// NearestNeighbor is a nearest-neighbor filter (no anti-aliasing).
 var NearestNeighbor ResampleFilter

 // Box filter (averaging pixels).

@@ -371,37 +379,37 @@ var Linear ResampleFilter
 // Hermite cubic spline filter (BC-spline; B=0; C=0).
 var Hermite ResampleFilter

-// Mitchell-Netravali cubic filter (BC-spline; B=1/3; C=1/3).
+// MitchellNetravali is Mitchell-Netravali cubic filter (BC-spline; B=1/3; C=1/3).
 var MitchellNetravali ResampleFilter

-// Catmull-Rom - sharp cubic filter (BC-spline; B=0; C=0.5).
+// CatmullRom is a Catmull-Rom - sharp cubic filter (BC-spline; B=0; C=0.5).
 var CatmullRom ResampleFilter

-// Cubic B-spline - smooth cubic filter (BC-spline; B=1; C=0).
+// BSpline is a smooth cubic filter (BC-spline; B=1; C=0).
 var BSpline ResampleFilter

-// Gaussian Blurring Filter.
+// Gaussian is a Gaussian blurring Filter.
 var Gaussian ResampleFilter

-// Bartlett-windowed sinc filter (3 lobes).
+// Bartlett is a Bartlett-windowed sinc filter (3 lobes).
 var Bartlett ResampleFilter

 // Lanczos filter (3 lobes).
 var Lanczos ResampleFilter

-// Hann-windowed sinc filter (3 lobes).
+// Hann is a Hann-windowed sinc filter (3 lobes).
 var Hann ResampleFilter

-// Hamming-windowed sinc filter (3 lobes).
+// Hamming is a Hamming-windowed sinc filter (3 lobes).
 var Hamming ResampleFilter

-// Blackman-windowed sinc filter (3 lobes).
+// Blackman is a Blackman-windowed sinc filter (3 lobes).
 var Blackman ResampleFilter

-// Welch-windowed sinc filter (parabolic window, 3 lobes).
+// Welch is a Welch-windowed sinc filter (parabolic window, 3 lobes).
 var Welch ResampleFilter

-// Cosine-windowed sinc filter (3 lobes).
+// Cosine is a Cosine-windowed sinc filter (3 lobes).
 var Cosine ResampleFilter

 func bcspline(x, b, c float64) float64 {
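The reworked `precomputeWeights` now stores normalized `float64` weights per destination pixel, so a custom `ResampleFilter` kernel does not need to integrate to one. A sketch of plugging a hand-rolled filter into the exported struct shown above; the file names and the tent kernel are illustrative:

```go
package main

import (
	"log"

	"github.com/disintegration/imaging"
)

// triangle is a hand-rolled linear (tent) filter built on the exported
// ResampleFilter struct; Support is the kernel radius in source pixels.
var triangle = imaging.ResampleFilter{
	Support: 1.0,
	Kernel: func(x float64) float64 {
		if x < 0 {
			x = -x
		}
		if x < 1 {
			return 1 - x
		}
		return 0
	},
}

func main() {
	src, err := imaging.Open("in.png")
	if err != nil {
		log.Fatal(err)
	}
	// Similar in spirit to imaging.Linear; weights are normalized per
	// destination pixel by precomputeWeights, so the kernel's scale is free.
	dst := imaging.Resize(src, 800, 0, triangle)
	if err := imaging.Save(dst, "out.png"); err != nil {
		log.Fatal(err)
	}
}
```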
vendor/github.com/disintegration/imaging/tools.go (generated, vendored): 20 lines changed

@@ -8,6 +8,7 @@ import (
 // Anchor is the anchor point for image alignment.
 type Anchor int

+// Anchor point positions.
 const (
 	Center Anchor = iota
 	TopLeft

@@ -180,3 +181,22 @@ func Overlay(background, img image.Image, pos image.Point, opacity float64) *ima

 	return dst
 }
+
+// OverlayCenter overlays the img image to the center of the background image and
+// returns the combined image. Opacity parameter is the opacity of the img
+// image layer, used to compose the images, it must be from 0.0 to 1.0.
+func OverlayCenter(background, img image.Image, opacity float64) *image.NRGBA {
+	bgBounds := background.Bounds()
+	bgW := bgBounds.Dx()
+	bgH := bgBounds.Dy()
+	bgMinX := bgBounds.Min.X
+	bgMinY := bgBounds.Min.Y
+
+	centerX := bgMinX + bgW/2
+	centerY := bgMinY + bgH/2
+
+	x0 := centerX - img.Bounds().Dx()/2
+	y0 := centerY - img.Bounds().Dy()/2
+
+	return Overlay(background, img, image.Point{x0, y0}, opacity)
+}
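`OverlayCenter` is new in this update. A usage sketch, with hypothetical file names, composing a watermark over a photo at half opacity:

```go
package main

import (
	"log"

	"github.com/disintegration/imaging"
)

func main() {
	bg, err := imaging.Open("photo.jpg")
	if err != nil {
		log.Fatal(err)
	}
	mark, err := imaging.Open("watermark.png")
	if err != nil {
		log.Fatal(err)
	}

	// Composite the watermark at the center of the background; the
	// opacity argument must be in [0.0, 1.0].
	dst := imaging.OverlayCenter(bg, mark, 0.5)

	if err := imaging.Save(dst, "out.jpg"); err != nil {
		log.Fatal(err)
	}
}
```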
vendor/github.com/disintegration/imaging/utils.go (generated, vendored): 42 lines changed

@@ -1,28 +1,24 @@
 package imaging

 import (
-	"math"
 	"runtime"
 	"sync"
 	"sync/atomic"
 )

-var parallelizationEnabled = true
-
-// if GOMAXPROCS = 1: no goroutines used
-// if GOMAXPROCS > 1: spawn N=GOMAXPROCS workers in separate goroutines
+// parallel starts parallel image processing based on the current GOMAXPROCS value.
+// If GOMAXPROCS = 1 it uses no parallelization.
+// If GOMAXPROCS > 1 it spawns N=GOMAXPROCS workers in separate goroutines.
 func parallel(dataSize int, fn func(partStart, partEnd int)) {
 	numGoroutines := 1
 	partSize := dataSize

-	if parallelizationEnabled {
-		numProcs := runtime.GOMAXPROCS(0)
-		if numProcs > 1 {
-			numGoroutines = numProcs
-			partSize = dataSize / (numGoroutines * 10)
-			if partSize < 1 {
-				partSize = 1
-			}
+	numProcs := runtime.GOMAXPROCS(0)
+	if numProcs > 1 {
+		numGoroutines = numProcs
+		partSize = dataSize / (numGoroutines * 10)
+		if partSize < 1 {
+			partSize = 1
 		}
 	}

@@ -54,6 +50,7 @@ func parallel(dataSize int, fn func(partStart, partEnd int)) {
 	}
 }

+// absint returns the absolute value of i.
 func absint(i int) int {
 	if i < 0 {
 		return -i

@@ -61,17 +58,14 @@ func absint(i int) int {
 	return i
 }

-// clamp & round float64 to uint8 (0..255)
-func clamp(v float64) uint8 {
-	return uint8(math.Min(math.Max(v, 0.0), 255.0) + 0.5)
-}
-
-// clamp int32 to uint8 (0..255)
-func clampint32(v int32) uint8 {
-	if v < 0 {
-		return 0
-	} else if v > 255 {
+// clamp rounds and clamps float64 value to fit into uint8.
+func clamp(x float64) uint8 {
+	v := int64(x + 0.5)
+	if v > 255 {
 		return 255
 	}
-	return uint8(v)
+	if v > 0 {
+		return uint8(v)
+	}
+	return 0
 }
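The rewritten `clamp` replaces the `math.Min`/`math.Max` round-trip with one integer round-and-saturate, which is what lets the `math` import disappear. Its contract restated as a standalone sketch (the library's own `clamp` is unexported):

```go
package sketch

// clampEquiv rounds half up, then saturates into [0, 255]. For negative
// inputs, int64 truncation toward zero plus the v > 0 check yields 0,
// matching the old math.Min/math.Max behavior.
func clampEquiv(x float64) uint8 {
	v := int64(x + 0.5)
	switch {
	case v > 255:
		return 255
	case v > 0:
		return uint8(v)
	default:
		return 0
	}
}
```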
vendor/github.com/golang/glog/README (generated, vendored): 2 lines changed

@@ -5,7 +5,7 @@ Leveled execution logs for Go.

 This is an efficient pure Go implementation of leveled logs in the
 manner of the open source C++ package
-	http://code.google.com/p/google-glog
+	https://github.com/google/glog

 By binding methods to booleans it is possible to use the log package
 without paying the expense of evaluating the arguments to the log.
vendor/github.com/google/btree/LICENSE (generated, vendored, new file): 202 lines added

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
|
||||||
|
|
||||||
|
To apply the Apache License to your work, attach the following
|
||||||
|
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||||
|
replaced with your own identifying information. (Don't include
|
||||||
|
the brackets!) The text should be enclosed in the appropriate
|
||||||
|
comment syntax for the file format. We also recommend that a
|
||||||
|
file or class name and description of purpose be included on the
|
||||||
|
same "printed page" as the copyright notice for easier
|
||||||
|
identification within third-party archives.
|
||||||
|
|
||||||
|
Copyright [yyyy] [name of copyright owner]
|
||||||
|
|
||||||
|
Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
you may not use this file except in compliance with the License.
|
||||||
|
You may obtain a copy of the License at
|
||||||
|
|
||||||
|
http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
|
||||||
|
Unless required by applicable law or agreed to in writing, software
|
||||||
|
distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
See the License for the specific language governing permissions and
|
||||||
|
limitations under the License.
|
12
vendor/github.com/google/btree/README.md
generated
vendored
Normal file
@ -0,0 +1,12 @@
# BTree implementation for Go

![Travis CI Build Status](https://api.travis-ci.org/google/btree.svg?branch=master)

This package provides an in-memory B-Tree implementation for Go, useful as
an ordered, mutable data structure.

The API is based off of the wonderful
http://godoc.org/github.com/petar/GoLLRB/llrb, and is meant to allow btree to
act as a drop-in replacement for gollrb trees.

See http://godoc.org/github.com/google/btree for documentation.
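As a quick orientation to the package being vendored here, a minimal usage sketch of the ordered-iteration API the README describes, using the package's own `Int` item type; the degree and values are arbitrary choices for illustration:

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

func main() {
	// Degree 32 means each node holds up to 63 items; any degree > 1 works.
	tr := btree.New(32)
	for i := 0; i < 10; i++ {
		tr.ReplaceOrInsert(btree.Int(i))
	}
	fmt.Println(tr.Len())             // 10
	fmt.Println(tr.Get(btree.Int(3))) // 3
	// In-order iteration over the half-open range [3, 7).
	tr.AscendRange(btree.Int(3), btree.Int(7), func(i btree.Item) bool {
		fmt.Println(i)
		return true // returning false would stop iteration early
	})
}
```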
821
vendor/github.com/google/btree/btree.go
generated
vendored
Normal file
@ -0,0 +1,821 @@
// Copyright 2014 Google Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package btree implements in-memory B-Trees of arbitrary degree.
//
// btree implements an in-memory B-Tree for use as an ordered data structure.
// It is not meant for persistent storage solutions.
//
// It has a flatter structure than an equivalent red-black or other binary tree,
// which in some cases yields better memory usage and/or performance.
// See some discussion on the matter here:
//   http://google-opensource.blogspot.com/2013/01/c-containers-that-save-memory-and-time.html
// Note, though, that this project is in no way related to the C++ B-Tree
// implementation written about there.
//
// Within this tree, each node contains a slice of items and a (possibly nil)
// slice of children. For basic numeric values or raw structs, this can cause
// efficiency differences when compared to equivalent C++ template code that
// stores values in arrays within the node:
//   * Due to the overhead of storing values as interfaces (each
//     value needs to be stored as the value itself, then 2 words for the
//     interface pointing to that value and its type), resulting in higher
//     memory use.
//   * Since interfaces can point to values anywhere in memory, values are
//     most likely not stored in contiguous blocks, resulting in a higher
//     number of cache misses.
// These issues don't tend to matter, though, when working with strings or other
// heap-allocated structures, since C++-equivalent structures also must store
// pointers and also distribute their values across the heap.
//
// This implementation is designed to be a drop-in replacement to gollrb.LLRB
// trees, (http://github.com/petar/gollrb), an excellent and probably the most
// widely used ordered tree implementation in the Go ecosystem currently.
// Its functions, therefore, exactly mirror those of
// llrb.LLRB where possible. Unlike gollrb, though, we currently don't
// support storing multiple equivalent values.
package btree

import (
	"fmt"
	"io"
	"sort"
	"strings"
	"sync"
)

// Item represents a single object in the tree.
type Item interface {
	// Less tests whether the current item is less than the given argument.
	//
	// This must provide a strict weak ordering.
	// If !a.Less(b) && !b.Less(a), we treat this to mean a == b (i.e. we can only
	// hold one of either a or b in the tree).
	Less(than Item) bool
}

const (
	DefaultFreeListSize = 32
)

var (
	nilItems    = make(items, 16)
	nilChildren = make(children, 16)
)

// FreeList represents a free list of btree nodes. By default each
// BTree has its own FreeList, but multiple BTrees can share the same
// FreeList.
// Two Btrees using the same freelist are safe for concurrent write access.
type FreeList struct {
	mu       sync.Mutex
	freelist []*node
}

// NewFreeList creates a new free list.
// size is the maximum size of the returned free list.
func NewFreeList(size int) *FreeList {
	return &FreeList{freelist: make([]*node, 0, size)}
}

func (f *FreeList) newNode() (n *node) {
	f.mu.Lock()
	index := len(f.freelist) - 1
	if index < 0 {
		f.mu.Unlock()
		return new(node)
	}
	n = f.freelist[index]
	f.freelist[index] = nil
	f.freelist = f.freelist[:index]
	f.mu.Unlock()
	return
}

func (f *FreeList) freeNode(n *node) {
	f.mu.Lock()
	if len(f.freelist) < cap(f.freelist) {
		f.freelist = append(f.freelist, n)
	}
	f.mu.Unlock()
}

// ItemIterator allows callers of Ascend* to iterate in-order over portions of
// the tree. When this function returns false, iteration will stop and the
// associated Ascend* function will immediately return.
type ItemIterator func(i Item) bool

// New creates a new B-Tree with the given degree.
//
// New(2), for example, will create a 2-3-4 tree (each node contains 1-3 items
// and 2-4 children).
func New(degree int) *BTree {
	return NewWithFreeList(degree, NewFreeList(DefaultFreeListSize))
}

// NewWithFreeList creates a new B-Tree that uses the given node free list.
func NewWithFreeList(degree int, f *FreeList) *BTree {
	if degree <= 1 {
		panic("bad degree")
	}
	return &BTree{
		degree: degree,
		cow:    &copyOnWriteContext{freelist: f},
	}
}

// items stores items in a node.
type items []Item

// insertAt inserts a value into the given index, pushing all subsequent values
// forward.
func (s *items) insertAt(index int, item Item) {
	*s = append(*s, nil)
	if index < len(*s) {
		copy((*s)[index+1:], (*s)[index:])
	}
	(*s)[index] = item
}

// removeAt removes a value at a given index, pulling all subsequent values
// back.
func (s *items) removeAt(index int) Item {
	item := (*s)[index]
	copy((*s)[index:], (*s)[index+1:])
	(*s)[len(*s)-1] = nil
	*s = (*s)[:len(*s)-1]
	return item
}

// pop removes and returns the last element in the list.
func (s *items) pop() (out Item) {
	index := len(*s) - 1
	out = (*s)[index]
	(*s)[index] = nil
	*s = (*s)[:index]
	return
}

// truncate truncates this instance at index so that it contains only the
// first index items. index must be less than or equal to length.
func (s *items) truncate(index int) {
	var toClear items
	*s, toClear = (*s)[:index], (*s)[index:]
	for len(toClear) > 0 {
		toClear = toClear[copy(toClear, nilItems):]
	}
}

// find returns the index where the given item should be inserted into this
// list. 'found' is true if the item already exists in the list at the given
// index.
func (s items) find(item Item) (index int, found bool) {
	i := sort.Search(len(s), func(i int) bool {
		return item.Less(s[i])
	})
	if i > 0 && !s[i-1].Less(item) {
		return i - 1, true
	}
	return i, false
}

// children stores child nodes in a node.
type children []*node

// insertAt inserts a value into the given index, pushing all subsequent values
// forward.
func (s *children) insertAt(index int, n *node) {
	*s = append(*s, nil)
	if index < len(*s) {
		copy((*s)[index+1:], (*s)[index:])
	}
	(*s)[index] = n
}

// removeAt removes a value at a given index, pulling all subsequent values
// back.
func (s *children) removeAt(index int) *node {
	n := (*s)[index]
	copy((*s)[index:], (*s)[index+1:])
	(*s)[len(*s)-1] = nil
	*s = (*s)[:len(*s)-1]
	return n
}

// pop removes and returns the last element in the list.
func (s *children) pop() (out *node) {
	index := len(*s) - 1
	out = (*s)[index]
	(*s)[index] = nil
	*s = (*s)[:index]
	return
}

// truncate truncates this instance at index so that it contains only the
// first index children. index must be less than or equal to length.
func (s *children) truncate(index int) {
	var toClear children
	*s, toClear = (*s)[:index], (*s)[index:]
	for len(toClear) > 0 {
		toClear = toClear[copy(toClear, nilChildren):]
	}
}

// node is an internal node in a tree.
//
// It must at all times maintain the invariant that either
//   * len(children) == 0, len(items) unconstrained
//   * len(children) == len(items) + 1
type node struct {
	items    items
	children children
	cow      *copyOnWriteContext
}

func (n *node) mutableFor(cow *copyOnWriteContext) *node {
	if n.cow == cow {
		return n
	}
	out := cow.newNode()
	if cap(out.items) >= len(n.items) {
		out.items = out.items[:len(n.items)]
	} else {
		out.items = make(items, len(n.items), cap(n.items))
	}
	copy(out.items, n.items)
	// Copy children
	if cap(out.children) >= len(n.children) {
		out.children = out.children[:len(n.children)]
	} else {
		out.children = make(children, len(n.children), cap(n.children))
	}
	copy(out.children, n.children)
	return out
}

func (n *node) mutableChild(i int) *node {
	c := n.children[i].mutableFor(n.cow)
	n.children[i] = c
	return c
}

// split splits the given node at the given index. The current node shrinks,
// and this function returns the item that existed at that index and a new node
// containing all items/children after it.
func (n *node) split(i int) (Item, *node) {
	item := n.items[i]
	next := n.cow.newNode()
	next.items = append(next.items, n.items[i+1:]...)
	n.items.truncate(i)
	if len(n.children) > 0 {
		next.children = append(next.children, n.children[i+1:]...)
		n.children.truncate(i + 1)
	}
	return item, next
}

// maybeSplitChild checks if a child should be split, and if so splits it.
// Returns whether or not a split occurred.
func (n *node) maybeSplitChild(i, maxItems int) bool {
	if len(n.children[i].items) < maxItems {
		return false
	}
	first := n.mutableChild(i)
	item, second := first.split(maxItems / 2)
	n.items.insertAt(i, item)
	n.children.insertAt(i+1, second)
	return true
}

// insert inserts an item into the subtree rooted at this node, making sure
// no nodes in the subtree exceed maxItems items. Should an equivalent item
// be found/replaced by insert, it will be returned.
func (n *node) insert(item Item, maxItems int) Item {
	i, found := n.items.find(item)
	if found {
		out := n.items[i]
		n.items[i] = item
		return out
	}
	if len(n.children) == 0 {
		n.items.insertAt(i, item)
		return nil
	}
	if n.maybeSplitChild(i, maxItems) {
		inTree := n.items[i]
		switch {
		case item.Less(inTree):
			// no change, we want first split node
		case inTree.Less(item):
			i++ // we want second split node
		default:
			out := n.items[i]
			n.items[i] = item
			return out
		}
	}
	return n.mutableChild(i).insert(item, maxItems)
}

// get finds the given key in the subtree and returns it.
func (n *node) get(key Item) Item {
	i, found := n.items.find(key)
	if found {
		return n.items[i]
	} else if len(n.children) > 0 {
		return n.children[i].get(key)
	}
	return nil
}

// min returns the first item in the subtree.
func min(n *node) Item {
	if n == nil {
		return nil
	}
	for len(n.children) > 0 {
		n = n.children[0]
	}
	if len(n.items) == 0 {
		return nil
	}
	return n.items[0]
}

// max returns the last item in the subtree.
func max(n *node) Item {
	if n == nil {
		return nil
	}
	for len(n.children) > 0 {
		n = n.children[len(n.children)-1]
	}
	if len(n.items) == 0 {
		return nil
	}
	return n.items[len(n.items)-1]
}

// toRemove details what item to remove in a node.remove call.
type toRemove int

const (
	removeItem toRemove = iota // removes the given item
	removeMin                  // removes smallest item in the subtree
	removeMax                  // removes largest item in the subtree
)

// remove removes an item from the subtree rooted at this node.
func (n *node) remove(item Item, minItems int, typ toRemove) Item {
	var i int
	var found bool
	switch typ {
	case removeMax:
		if len(n.children) == 0 {
			return n.items.pop()
		}
		i = len(n.items)
	case removeMin:
		if len(n.children) == 0 {
			return n.items.removeAt(0)
		}
		i = 0
	case removeItem:
		i, found = n.items.find(item)
		if len(n.children) == 0 {
			if found {
				return n.items.removeAt(i)
			}
			return nil
		}
	default:
		panic("invalid type")
	}
	// If we get to here, we have children.
	if len(n.children[i].items) <= minItems {
		return n.growChildAndRemove(i, item, minItems, typ)
	}
	child := n.mutableChild(i)
	// Either we had enough items to begin with, or we've done some
	// merging/stealing, because we've got enough now and we're ready to return
	// stuff.
	if found {
		// The item exists at index 'i', and the child we've selected can give us a
		// predecessor, since if we've gotten here it's got > minItems items in it.
		out := n.items[i]
		// We use our special-case 'remove' call with typ=maxItem to pull the
		// predecessor of item i (the rightmost leaf of our immediate left child)
		// and set it into where we pulled the item from.
		n.items[i] = child.remove(nil, minItems, removeMax)
		return out
	}
	// Final recursive call. Once we're here, we know that the item isn't in this
	// node and that the child is big enough to remove from.
	return child.remove(item, minItems, typ)
}

// growChildAndRemove grows child 'i' to make sure it's possible to remove an
// item from it while keeping it at minItems, then calls remove to actually
// remove it.
//
// Most documentation says we have to do two sets of special casing:
//   1) item is in this node
//   2) item is in child
// In both cases, we need to handle the two subcases:
//   A) node has enough values that it can spare one
//   B) node doesn't have enough values
// For the latter, we have to check:
//   a) left sibling has node to spare
//   b) right sibling has node to spare
//   c) we must merge
// To simplify our code here, we handle cases #1 and #2 the same:
// If a node doesn't have enough items, we make sure it does (using a,b,c).
// We then simply redo our remove call, and the second time (regardless of
// whether we're in case 1 or 2), we'll have enough items and can guarantee
// that we hit case A.
func (n *node) growChildAndRemove(i int, item Item, minItems int, typ toRemove) Item {
	if i > 0 && len(n.children[i-1].items) > minItems {
		// Steal from left child
		child := n.mutableChild(i)
		stealFrom := n.mutableChild(i - 1)
		stolenItem := stealFrom.items.pop()
		child.items.insertAt(0, n.items[i-1])
		n.items[i-1] = stolenItem
		if len(stealFrom.children) > 0 {
			child.children.insertAt(0, stealFrom.children.pop())
		}
	} else if i < len(n.items) && len(n.children[i+1].items) > minItems {
		// steal from right child
		child := n.mutableChild(i)
		stealFrom := n.mutableChild(i + 1)
		stolenItem := stealFrom.items.removeAt(0)
		child.items = append(child.items, n.items[i])
		n.items[i] = stolenItem
		if len(stealFrom.children) > 0 {
			child.children = append(child.children, stealFrom.children.removeAt(0))
		}
	} else {
		if i >= len(n.items) {
			i--
		}
		child := n.mutableChild(i)
		// merge with right child
		mergeItem := n.items.removeAt(i)
		mergeChild := n.children.removeAt(i + 1)
		child.items = append(child.items, mergeItem)
		child.items = append(child.items, mergeChild.items...)
		child.children = append(child.children, mergeChild.children...)
		n.cow.freeNode(mergeChild)
	}
	return n.remove(item, minItems, typ)
}

type direction int

const (
	descend = direction(-1)
	ascend  = direction(+1)
)

// iterate provides a simple method for iterating over elements in the tree.
//
// When ascending, the 'start' should be less than 'stop' and when descending,
// the 'start' should be greater than 'stop'. Setting 'includeStart' to true
// will force the iterator to include the first item when it equals 'start',
// thus creating a "greaterOrEqual" or "lessThanEqual" rather than just a
// "greaterThan" or "lessThan" query.
func (n *node) iterate(dir direction, start, stop Item, includeStart bool, hit bool, iter ItemIterator) (bool, bool) {
	var ok bool
	switch dir {
	case ascend:
		for i := 0; i < len(n.items); i++ {
			if start != nil && n.items[i].Less(start) {
				continue
			}
			if len(n.children) > 0 {
				if hit, ok = n.children[i].iterate(dir, start, stop, includeStart, hit, iter); !ok {
					return hit, false
				}
			}
			if !includeStart && !hit && start != nil && !start.Less(n.items[i]) {
				hit = true
				continue
			}
			hit = true
			if stop != nil && !n.items[i].Less(stop) {
				return hit, false
			}
			if !iter(n.items[i]) {
				return hit, false
			}
		}
		if len(n.children) > 0 {
			if hit, ok = n.children[len(n.children)-1].iterate(dir, start, stop, includeStart, hit, iter); !ok {
				return hit, false
			}
		}
	case descend:
		for i := len(n.items) - 1; i >= 0; i-- {
			if start != nil && !n.items[i].Less(start) {
				if !includeStart || hit || start.Less(n.items[i]) {
					continue
				}
			}
			if len(n.children) > 0 {
				if hit, ok = n.children[i+1].iterate(dir, start, stop, includeStart, hit, iter); !ok {
					return hit, false
				}
			}
			if stop != nil && !stop.Less(n.items[i]) {
				return hit, false // continue
			}
			hit = true
			if !iter(n.items[i]) {
				return hit, false
			}
		}
		if len(n.children) > 0 {
			if hit, ok = n.children[0].iterate(dir, start, stop, includeStart, hit, iter); !ok {
				return hit, false
			}
		}
	}
	return hit, true
}

// Used for testing/debugging purposes.
func (n *node) print(w io.Writer, level int) {
	fmt.Fprintf(w, "%sNODE:%v\n", strings.Repeat("  ", level), n.items)
	for _, c := range n.children {
		c.print(w, level+1)
	}
}

// BTree is an implementation of a B-Tree.
//
// BTree stores Item instances in an ordered structure, allowing easy insertion,
// removal, and iteration.
//
// Write operations are not safe for concurrent mutation by multiple
// goroutines, but Read operations are.
type BTree struct {
	degree int
	length int
	root   *node
	cow    *copyOnWriteContext
}

// copyOnWriteContext pointers determine node ownership... a tree with a write
// context equivalent to a node's write context is allowed to modify that node.
// A tree whose write context does not match a node's is not allowed to modify
// it, and must create a new, writable copy (IE: it's a Clone).
//
// When doing any write operation, we maintain the invariant that the current
// node's context is equal to the context of the tree that requested the write.
// We do this by, before we descend into any node, creating a copy with the
// correct context if the contexts don't match.
//
// Since the node we're currently visiting on any write has the requesting
// tree's context, that node is modifiable in place. Children of that node may
// not share context, but before we descend into them, we'll make a mutable
// copy.
type copyOnWriteContext struct {
	freelist *FreeList
}

// Clone clones the btree, lazily. Clone should not be called concurrently,
// but the original tree (t) and the new tree (t2) can be used concurrently
// once the Clone call completes.
//
// The internal tree structure of b is marked read-only and shared between t and
// t2. Writes to both t and t2 use copy-on-write logic, creating new nodes
// whenever one of b's original nodes would have been modified. Read operations
// should have no performance degradation. Write operations for both t and t2
// will initially experience minor slow-downs caused by additional allocs and
// copies due to the aforementioned copy-on-write logic, but should converge to
// the original performance characteristics of the original tree.
func (t *BTree) Clone() (t2 *BTree) {
	// Create two entirely new copy-on-write contexts.
	// This operation effectively creates three trees:
	//   the original, shared nodes (old b.cow)
	//   the new b.cow nodes
	//   the new out.cow nodes
	cow1, cow2 := *t.cow, *t.cow
	out := *t
	t.cow = &cow1
	out.cow = &cow2
	return &out
}

// maxItems returns the max number of items to allow per node.
func (t *BTree) maxItems() int {
	return t.degree*2 - 1
}

// minItems returns the min number of items to allow per node (ignored for the
// root node).
func (t *BTree) minItems() int {
	return t.degree - 1
}

func (c *copyOnWriteContext) newNode() (n *node) {
	n = c.freelist.newNode()
	n.cow = c
	return
}

func (c *copyOnWriteContext) freeNode(n *node) {
	if n.cow == c {
		// clear to allow GC
		n.items.truncate(0)
		n.children.truncate(0)
		n.cow = nil
		c.freelist.freeNode(n)
	}
}

// ReplaceOrInsert adds the given item to the tree. If an item in the tree
// already equals the given one, it is removed from the tree and returned.
// Otherwise, nil is returned.
//
// nil cannot be added to the tree (will panic).
func (t *BTree) ReplaceOrInsert(item Item) Item {
	if item == nil {
		panic("nil item being added to BTree")
	}
	if t.root == nil {
		t.root = t.cow.newNode()
		t.root.items = append(t.root.items, item)
		t.length++
		return nil
	} else {
		t.root = t.root.mutableFor(t.cow)
		if len(t.root.items) >= t.maxItems() {
			item2, second := t.root.split(t.maxItems() / 2)
			oldroot := t.root
			t.root = t.cow.newNode()
			t.root.items = append(t.root.items, item2)
			t.root.children = append(t.root.children, oldroot, second)
		}
	}
	out := t.root.insert(item, t.maxItems())
	if out == nil {
		t.length++
	}
	return out
}

// Delete removes an item equal to the passed in item from the tree, returning
// it. If no such item exists, returns nil.
func (t *BTree) Delete(item Item) Item {
	return t.deleteItem(item, removeItem)
}

// DeleteMin removes the smallest item in the tree and returns it.
// If no such item exists, returns nil.
func (t *BTree) DeleteMin() Item {
	return t.deleteItem(nil, removeMin)
}

// DeleteMax removes the largest item in the tree and returns it.
// If no such item exists, returns nil.
func (t *BTree) DeleteMax() Item {
	return t.deleteItem(nil, removeMax)
}

func (t *BTree) deleteItem(item Item, typ toRemove) Item {
	if t.root == nil || len(t.root.items) == 0 {
		return nil
	}
	t.root = t.root.mutableFor(t.cow)
	out := t.root.remove(item, t.minItems(), typ)
	if len(t.root.items) == 0 && len(t.root.children) > 0 {
		oldroot := t.root
		t.root = t.root.children[0]
		t.cow.freeNode(oldroot)
	}
	if out != nil {
		t.length--
	}
	return out
}

// AscendRange calls the iterator for every value in the tree within the range
// [greaterOrEqual, lessThan), until iterator returns false.
func (t *BTree) AscendRange(greaterOrEqual, lessThan Item, iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(ascend, greaterOrEqual, lessThan, true, false, iterator)
}

// AscendLessThan calls the iterator for every value in the tree within the range
// [first, pivot), until iterator returns false.
func (t *BTree) AscendLessThan(pivot Item, iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(ascend, nil, pivot, false, false, iterator)
}

// AscendGreaterOrEqual calls the iterator for every value in the tree within
// the range [pivot, last], until iterator returns false.
func (t *BTree) AscendGreaterOrEqual(pivot Item, iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(ascend, pivot, nil, true, false, iterator)
}

// Ascend calls the iterator for every value in the tree within the range
// [first, last], until iterator returns false.
func (t *BTree) Ascend(iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(ascend, nil, nil, false, false, iterator)
}

// DescendRange calls the iterator for every value in the tree within the range
// [lessOrEqual, greaterThan), until iterator returns false.
func (t *BTree) DescendRange(lessOrEqual, greaterThan Item, iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(descend, lessOrEqual, greaterThan, true, false, iterator)
}

// DescendLessOrEqual calls the iterator for every value in the tree within the range
// [pivot, first], until iterator returns false.
func (t *BTree) DescendLessOrEqual(pivot Item, iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(descend, pivot, nil, true, false, iterator)
}

// DescendGreaterThan calls the iterator for every value in the tree within
// the range (pivot, last], until iterator returns false.
func (t *BTree) DescendGreaterThan(pivot Item, iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(descend, nil, pivot, false, false, iterator)
}

// Descend calls the iterator for every value in the tree within the range
// [last, first], until iterator returns false.
func (t *BTree) Descend(iterator ItemIterator) {
	if t.root == nil {
		return
	}
	t.root.iterate(descend, nil, nil, false, false, iterator)
}

// Get looks for the key item in the tree, returning it. It returns nil if
// unable to find that item.
func (t *BTree) Get(key Item) Item {
	if t.root == nil {
		return nil
	}
	return t.root.get(key)
}

// Min returns the smallest item in the tree, or nil if the tree is empty.
func (t *BTree) Min() Item {
	return min(t.root)
}

// Max returns the largest item in the tree, or nil if the tree is empty.
func (t *BTree) Max() Item {
	return max(t.root)
}

// Has returns true if the given key is in the tree.
func (t *BTree) Has(key Item) bool {
	return t.Get(key) != nil
}

// Len returns the number of items currently in the tree.
func (t *BTree) Len() int {
	return t.length
}

// Int implements the Item interface for integers.
type Int int

// Less returns true if int(a) < int(b).
func (a Int) Less(b Item) bool {
	return a < b.(Int)
}
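The copy-on-write design documented in `copyOnWriteContext` above is easiest to see through `Clone`; a small sketch (values arbitrary):

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

func main() {
	orig := btree.New(2)
	for i := 0; i < 5; i++ {
		orig.ReplaceOrInsert(btree.Int(i))
	}
	// Clone is O(1): both trees now share every node read-only.
	snap := orig.Clone()
	// The first write through either tree copies only the nodes on the
	// write path (they get a fresh copyOnWriteContext), so the other
	// tree's view is untouched.
	snap.ReplaceOrInsert(btree.Int(100))
	fmt.Println(orig.Len(), snap.Len()) // 5 6
}
```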
23
vendor/github.com/gregjones/httpcache/README.md
generated
vendored
@ -1,21 +1,24 @@
 httpcache
 =========
 
-A Transport for Go's http.Client that will cache responses according to the HTTP RFC
+[![Build Status](https://travis-ci.org/gregjones/httpcache.svg?branch=master)](https://travis-ci.org/gregjones/httpcache) [![GoDoc](https://godoc.org/github.com/gregjones/httpcache?status.svg)](https://godoc.org/github.com/gregjones/httpcache)
 
 Package httpcache provides a http.RoundTripper implementation that works as a mostly RFC-compliant cache for http responses.
 
 It is only suitable for use as a 'private' cache (i.e. for a web-browser or an API-client and not for a shared proxy).
 
-**Documentation:** http://godoc.org/github.com/gregjones/httpcache
-
-**License:** MIT (see LICENSE.txt)
-
-Cache backends
+Cache Backends
 --------------
 
 - The built-in 'memory' cache stores responses in an in-memory map.
-- https://github.com/gregjones/httpcache/diskcache provides a filesystem-backed cache using the [diskv](https://github.com/peterbourgon/diskv) library.
+- [`github.com/gregjones/httpcache/diskcache`](https://github.com/gregjones/httpcache/tree/master/diskcache) provides a filesystem-backed cache using the [diskv](https://github.com/peterbourgon/diskv) library.
-- https://github.com/gregjones/httpcache/memcache provides memcache implementations, for both App Engine and 'normal' memcache servers
+- [`github.com/gregjones/httpcache/memcache`](https://github.com/gregjones/httpcache/tree/master/memcache) provides memcache implementations, for both App Engine and 'normal' memcache servers.
-- https://github.com/sourcegraph/s3cache uses Amazon S3 for storage.
+- [`sourcegraph.com/sourcegraph/s3cache`](https://sourcegraph.com/github.com/sourcegraph/s3cache) uses Amazon S3 for storage.
-- https://github.com/gregjones/httpcache/leveldbcache provides a filesystem-backed cache using [leveldb](https://github.com/syndtr/goleveldb/leveldb)
+- [`github.com/gregjones/httpcache/leveldbcache`](https://github.com/gregjones/httpcache/tree/master/leveldbcache) provides a filesystem-backed cache using [leveldb](https://github.com/syndtr/goleveldb/leveldb).
+- [`github.com/die-net/lrucache`](https://github.com/die-net/lrucache) provides an in-memory cache that will evict least-recently used entries.
+- [`github.com/die-net/lrucache/twotier`](https://github.com/die-net/lrucache/tree/master/twotier) allows caches to be combined, for example to use lrucache above with a persistent disk-cache.
+
+License
+-------
+
+- [MIT License](LICENSE.txt)
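As a quick orientation for how these backends plug in, a minimal wiring sketch using the disk-backed cache; the cache directory is illustrative:

```go
package main

import (
	"net/http"

	"github.com/gregjones/httpcache"
	"github.com/gregjones/httpcache/diskcache"
)

func main() {
	// diskcache satisfies the httpcache.Cache interface on top of diskv.
	cache := diskcache.New("/tmp/httpcache-example")
	transport := httpcache.NewTransport(cache)
	client := &http.Client{Transport: transport}
	_ = client // issue GET/HEAD requests as usual; cacheable responses are stored
}
```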
105
vendor/github.com/gregjones/httpcache/httpcache.go
generated
vendored
@ -11,8 +11,6 @@ import (
"bytes"
|
"bytes"
|
||||||
"errors"
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"io"
|
|
||||||
"log"
|
|
||||||
"net/http"
|
"net/http"
|
||||||
"net/http/httputil"
|
"net/http/httputil"
|
||||||
"strings"
|
"strings"
|
||||||
|
@ -65,23 +63,23 @@ type MemoryCache struct {
|
||||||
// Get returns the []byte representation of the response and true if present, false if not
|
// Get returns the []byte representation of the response and true if present, false if not
|
||||||
func (c *MemoryCache) Get(key string) (resp []byte, ok bool) {
|
func (c *MemoryCache) Get(key string) (resp []byte, ok bool) {
|
||||||
c.mu.RLock()
|
c.mu.RLock()
|
||||||
defer c.mu.RUnlock()
|
|
||||||
resp, ok = c.items[key]
|
resp, ok = c.items[key]
|
||||||
|
c.mu.RUnlock()
|
||||||
return resp, ok
|
return resp, ok
|
||||||
}
|
}
|
||||||
|
|
||||||
// Set saves response resp to the cache with key
|
// Set saves response resp to the cache with key
|
||||||
func (c *MemoryCache) Set(key string, resp []byte) {
|
func (c *MemoryCache) Set(key string, resp []byte) {
|
||||||
c.mu.Lock()
|
c.mu.Lock()
|
||||||
defer c.mu.Unlock()
|
|
||||||
c.items[key] = resp
|
c.items[key] = resp
|
||||||
|
c.mu.Unlock()
|
||||||
}
|
}
|
||||||
|
|
||||||
// Delete removes key from the cache
|
// Delete removes key from the cache
|
||||||
func (c *MemoryCache) Delete(key string) {
|
func (c *MemoryCache) Delete(key string) {
|
||||||
c.mu.Lock()
|
c.mu.Lock()
|
||||||
defer c.mu.Unlock()
|
|
||||||
delete(c.items, key)
|
delete(c.items, key)
|
||||||
|
c.mu.Unlock()
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewMemoryCache returns a new Cache that will store items in an in-memory map
|
// NewMemoryCache returns a new Cache that will store items in an in-memory map
|
||||||
|
@ -90,33 +88,6 @@ func NewMemoryCache() *MemoryCache {
|
||||||
return c
|
return c
|
||||||
}
|
}
|
||||||
|
|
||||||
// onEOFReader executes a function on reader EOF or close
|
|
||||||
type onEOFReader struct {
|
|
||||||
rc io.ReadCloser
|
|
||||||
fn func()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *onEOFReader) Read(p []byte) (n int, err error) {
|
|
||||||
n, err = r.rc.Read(p)
|
|
||||||
if err == io.EOF {
|
|
||||||
r.runFunc()
|
|
||||||
}
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *onEOFReader) Close() error {
|
|
||||||
err := r.rc.Close()
|
|
||||||
r.runFunc()
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *onEOFReader) runFunc() {
|
|
||||||
if fn := r.fn; fn != nil {
|
|
||||||
fn()
|
|
||||||
r.fn = nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Transport is an implementation of http.RoundTripper that will return values from a cache
|
// Transport is an implementation of http.RoundTripper that will return values from a cache
|
||||||
// where possible (avoiding a network request) and will additionally add validators (etag/if-modified-since)
|
// where possible (avoiding a network request) and will additionally add validators (etag/if-modified-since)
|
||||||
// to repeated requests allowing servers to return 304 / Not Modified
|
// to repeated requests allowing servers to return 304 / Not Modified
|
||||||
|
@ -127,10 +98,6 @@ type Transport struct {
|
||||||
Cache Cache
|
Cache Cache
|
||||||
// If true, responses returned from the cache will be given an extra header, X-From-Cache
|
// If true, responses returned from the cache will be given an extra header, X-From-Cache
|
||||||
MarkCachedResponses bool
|
MarkCachedResponses bool
|
||||||
// guards modReq
|
|
||||||
mu sync.RWMutex
|
|
||||||
// Mapping of original request => cloned
|
|
||||||
modReq map[*http.Request]*http.Request
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewTransport returns a new Transport with the
|
// NewTransport returns a new Transport with the
|
||||||
|
@ -156,20 +123,6 @@ func varyMatches(cachedResp *http.Response, req *http.Request) bool {
|
||||||
return true
|
return true
|
||||||
}
|
}
|
||||||
|
|
||||||
// setModReq maintains a mapping between original requests and their associated cloned requests
|
|
||||||
func (t *Transport) setModReq(orig, mod *http.Request) {
|
|
||||||
t.mu.Lock()
|
|
||||||
defer t.mu.Unlock()
|
|
||||||
if t.modReq == nil {
|
|
||||||
t.modReq = make(map[*http.Request]*http.Request)
|
|
||||||
}
|
|
||||||
if mod == nil {
|
|
||||||
delete(t.modReq, orig)
|
|
||||||
} else {
|
|
||||||
t.modReq[orig] = mod
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// RoundTrip takes a Request and returns a Response
|
// RoundTrip takes a Request and returns a Response
|
||||||
//
|
//
|
||||||
// If there is a fresh Response already in cache, then it will be returned without connecting to
|
// If there is a fresh Response already in cache, then it will be returned without connecting to
|
||||||
|
@ -180,9 +133,9 @@ func (t *Transport) setModReq(orig, mod *http.Request) {
|
||||||
// will be returned.
|
// will be returned.
|
||||||
func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
|
func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
|
||||||
cacheKey := cacheKey(req)
|
cacheKey := cacheKey(req)
|
||||||
cacheableMethod := req.Method == "GET" || req.Method == "HEAD"
|
cacheable := (req.Method == "GET" || req.Method == "HEAD") && req.Header.Get("range") == ""
|
||||||
var cachedResp *http.Response
|
var cachedResp *http.Response
|
||||||
if cacheableMethod {
|
if cacheable {
|
||||||
cachedResp, err = CachedResponse(t.Cache, req)
|
cachedResp, err = CachedResponse(t.Cache, req)
|
||||||
} else {
|
} else {
|
||||||
// Need to invalidate an existing value
|
// Need to invalidate an existing value
|
||||||
|
@ -194,7 +147,7 @@ func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error
|
||||||
transport = http.DefaultTransport
|
transport = http.DefaultTransport
|
||||||
}
|
}
|
||||||
|
|
||||||
if cachedResp != nil && err == nil && cacheableMethod && req.Header.Get("range") == "" {
|
if cacheable && cachedResp != nil && err == nil {
|
||||||
if t.MarkCachedResponses {
|
if t.MarkCachedResponses {
|
||||||
cachedResp.Header.Set(XFromCache, "1")
|
cachedResp.Header.Set(XFromCache, "1")
|
||||||
}
|
}
|
||||||
|
@ -222,23 +175,7 @@ func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error
|
||||||
req2.Header.Set("if-modified-since", lastModified)
|
req2.Header.Set("if-modified-since", lastModified)
|
||||||
}
|
}
|
||||||
if req2 != nil {
|
if req2 != nil {
|
||||||
// Associate original request with cloned request so we can refer to
|
|
||||||
// it in CancelRequest()
|
|
||||||
t.setModReq(req, req2)
|
|
||||||
req = req2
|
req = req2
|
||||||
defer func() {
|
|
||||||
// Release req/clone mapping on error
|
|
||||||
if err != nil {
|
|
||||||
t.setModReq(req, nil)
|
|
||||||
}
|
|
||||||
if resp != nil {
|
|
||||||
// Release req/clone mapping on body close/EOF
|
|
||||||
resp.Body = &onEOFReader{
|
|
||||||
rc: resp.Body,
|
|
||||||
fn: func() { t.setModReq(req, nil) },
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -281,10 +218,7 @@ func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
reqCacheControl := parseCacheControl(req.Header)
|
if cacheable && canStore(parseCacheControl(req.Header), parseCacheControl(resp.Header)) {
|
||||||
respCacheControl := parseCacheControl(resp.Header)
|
|
||||||
|
|
||||||
if canStore(reqCacheControl, respCacheControl) {
|
|
||||||
for _, varyKey := range headerAllCommaSepValues(resp.Header, "vary") {
|
for _, varyKey := range headerAllCommaSepValues(resp.Header, "vary") {
|
||||||
varyKey = http.CanonicalHeaderKey(varyKey)
|
varyKey = http.CanonicalHeaderKey(varyKey)
|
||||||
fakeHeader := "X-Varied-" + varyKey
|
fakeHeader := "X-Varied-" + varyKey
|
||||||
|
@ -303,31 +237,6 @@ func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error
|
||||||
return resp, nil
|
return resp, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// CancelRequest calls CancelRequest on the underlaying transport if implemented or
|
|
||||||
// throw a warning otherwise.
|
|
||||||
func (t *Transport) CancelRequest(req *http.Request) {
|
|
||||||
type canceler interface {
|
|
||||||
CancelRequest(*http.Request)
|
|
||||||
}
|
|
||||||
tr, ok := t.Transport.(canceler)
|
|
||||||
if !ok {
|
|
||||||
log.Printf("httpcache: Client Transport of type %T doesn't support CancelRequest; Timeout not supported", t.Transport)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
t.mu.RLock()
|
|
||||||
if modReq, ok := t.modReq[req]; ok {
|
|
||||||
t.mu.RUnlock()
|
|
||||||
t.mu.Lock()
|
|
||||||
delete(t.modReq, req)
|
|
||||||
t.mu.Unlock()
|
|
||||||
tr.CancelRequest(modReq)
|
|
||||||
} else {
|
|
||||||
t.mu.RUnlock()
|
|
||||||
tr.CancelRequest(req)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// ErrNoDateHeader indicates that the HTTP headers contained no Date header.
|
// ErrNoDateHeader indicates that the HTTP headers contained no Date header.
|
||||||
var ErrNoDateHeader = errors.New("no Date header")
|
var ErrNoDateHeader = errors.New("no Date header")
|
||||||
|
|
||||||
|
|
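Note that the `cacheable` rewrite in the hunks above stops caching requests that carry a Range header. A sketch of the ordinary GET path that remains cacheable, using the in-memory transport and the `X-From-Cache` marker; the test server and its headers are illustrative:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"

	"github.com/gregjones/httpcache"
)

func main() {
	// A server whose responses are cacheable for 60 seconds.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "max-age=60")
		fmt.Fprint(w, "hello")
	}))
	defer srv.Close()

	tr := httpcache.NewMemoryCacheTransport()
	tr.MarkCachedResponses = true // stamp cache hits with "X-From-Cache: 1"
	client := &http.Client{Transport: tr}

	for i := 0; i < 2; i++ {
		resp, err := client.Get(srv.URL)
		if err != nil {
			panic(err)
		}
		ioutil.ReadAll(resp.Body) // drain before inspecting cache behavior
		resp.Body.Close()
		// Expected: empty on the first request, "1" on the second.
		fmt.Println("X-From-Cache:", resp.Header.Get(httpcache.XFromCache))
	}
}
```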
6
vendor/github.com/peterbourgon/diskv/README.md
generated
vendored
@ -114,11 +114,11 @@ with a RWMutex to provide safe concurrent access.
 
 diskv is a key-value store and therefore inherently unordered. An ordering
 system can be injected into the store by passing something which satisfies the
-diskv.Index interface. (A default implementation, using Petar Maymounkov's
-[LLRB tree][7], is provided.) Basically, diskv keeps an ordered (by a
+diskv.Index interface. (A default implementation, using Google's
+[btree][7] package, is provided.) Basically, diskv keeps an ordered (by a
 user-provided Less function) index of the keys, which can be queried.
 
-[7]: https://github.com/petar/GoLLRB
+[7]: https://github.com/google/btree
 
 ## Adding compression
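For context on the index being swapped in this diff, a sketch of injecting the ordering from the caller's side; the `Index`/`IndexLess` option fields and the `Keys(from, n)` signature are taken from the diskv godoc of this era and should be treated as assumptions, and the directory and sizes are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/peterbourgon/diskv"
)

func main() {
	d := diskv.New(diskv.Options{
		BasePath:     "my-data-dir",                           // illustrative path
		Transform:    func(s string) []string { return []string{} }, // flat layout
		CacheSizeMax: 1024 * 1024,
		Index:        &diskv.BTreeIndex{},
		IndexLess:    func(a, b string) bool { return a < b },
	})
	d.Write("beta", []byte("2"))
	d.Write("alpha", []byte("1"))
	// Keys walks the injected index in Less order: alpha, then beta.
	fmt.Println(d.Index.Keys("", 10))
}
```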
41
vendor/github.com/peterbourgon/diskv/diskv.go
generated
vendored
@ -56,8 +56,8 @@ type Options struct {
// Diskv implements the Diskv interface. You shouldn't construct Diskv
|
// Diskv implements the Diskv interface. You shouldn't construct Diskv
|
||||||
// structures directly; instead, use the New constructor.
|
// structures directly; instead, use the New constructor.
|
||||||
type Diskv struct {
|
type Diskv struct {
|
||||||
sync.RWMutex
|
|
||||||
Options
|
Options
|
||||||
|
mu sync.RWMutex
|
||||||
cache map[string][]byte
|
cache map[string][]byte
|
||||||
cacheSize uint64
|
cacheSize uint64
|
||||||
}
|
}
|
||||||
|
@ -109,8 +109,8 @@ func (d *Diskv) WriteStream(key string, r io.Reader, sync bool) error {
|
||||||
return errEmptyKey
|
return errEmptyKey
|
||||||
}
|
}
|
||||||
|
|
||||||
d.Lock()
|
d.mu.Lock()
|
||||||
defer d.Unlock()
|
defer d.mu.Unlock()
|
||||||
|
|
||||||
return d.writeStreamWithLock(key, r, sync)
|
return d.writeStreamWithLock(key, r, sync)
|
||||||
}
|
}
|
||||||
|
@@ -181,8 +181,8 @@ func (d *Diskv) Import(srcFilename, dstKey string, move bool) (err error) {
 		return errImportDirectory
 	}
 
-	d.Lock()
-	defer d.Unlock()
+	d.mu.Lock()
+	defer d.mu.Unlock()
 
 	if err := d.ensurePathWithLock(dstKey); err != nil {
 		return fmt.Errorf("ensure path: %s", err)
@@ -234,8 +234,8 @@ func (d *Diskv) Read(key string) ([]byte, error) {
 // If compression is enabled, ReadStream taps into the io.Reader stream prior
 // to decompression, and caches the compressed data.
 func (d *Diskv) ReadStream(key string, direct bool) (io.ReadCloser, error) {
-	d.RLock()
-	defer d.RUnlock()
+	d.mu.RLock()
+	defer d.mu.RUnlock()
 
 	if val, ok := d.cache[key]; ok {
 		if !direct {
@@ -247,8 +247,8 @@ func (d *Diskv) ReadStream(key string, direct bool) (io.ReadCloser, error) {
 		}
 
 		go func() {
-			d.Lock()
-			defer d.Unlock()
+			d.mu.Lock()
+			defer d.mu.Unlock()
 			d.uncacheWithLock(key, uint64(len(val)))
 		}()
 	}
@@ -352,8 +352,8 @@ func (s *siphon) Read(p []byte) (int, error) {
 
 // Erase synchronously erases the given key from the disk and the cache.
 func (d *Diskv) Erase(key string) error {
-	d.Lock()
-	defer d.Unlock()
+	d.mu.Lock()
+	defer d.mu.Unlock()
 
 	d.bustCacheWithLock(key)
 
@@ -365,14 +365,15 @@ func (d *Diskv) Erase(key string) error {
 	// erase from disk
 	filename := d.completeFilename(key)
 	if s, err := os.Stat(filename); err == nil {
 		if s.IsDir() {
 			return errBadKey
 		}
 		if err = os.Remove(filename); err != nil {
-			return fmt.Errorf("remove: %s", err)
+			return err
 		}
 	} else {
-		return fmt.Errorf("stat: %s", err)
+		// Return err as-is so caller can do os.IsNotExist(err).
+		return err
 	}
 
 	// clean up and return
@@ -385,8 +386,8 @@ func (d *Diskv) Erase(key string) error {
 // diskv-related data. Care should be taken to always specify a diskv base
 // directory that is exclusively for diskv data.
 func (d *Diskv) EraseAll() error {
-	d.Lock()
-	defer d.Unlock()
+	d.mu.Lock()
+	defer d.mu.Unlock()
 	d.cache = make(map[string][]byte)
 	d.cacheSize = 0
 	return os.RemoveAll(d.BasePath)
@@ -394,8 +395,8 @@ func (d *Diskv) EraseAll() error {
 
 // Has returns true if the given key exists.
 func (d *Diskv) Has(key string) bool {
-	d.Lock()
-	defer d.Unlock()
+	d.mu.Lock()
+	defer d.mu.Unlock()
 
 	if _, ok := d.cache[key]; ok {
 		return true
@@ -498,8 +499,8 @@ func (d *Diskv) cacheWithLock(key string, val []byte) error {
 
 // cacheWithoutLock acquires the store's (write) mutex and calls cacheWithLock.
 func (d *Diskv) cacheWithoutLock(key string, val []byte) error {
-	d.Lock()
-	defer d.Unlock()
+	d.mu.Lock()
+	defer d.mu.Unlock()
 	return d.cacheWithLock(key, val)
 }
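The diskv.go hunks above are a mechanical rename: upstream diskv moved its embedded sync.RWMutex into an unexported mu field, so Lock, RLock, and friends are no longer promoted into Diskv's exported method set and callers can no longer interfere with the internal locking. A minimal sketch of the difference, using a hypothetical Store pair rather than diskv's real types:

```go
package main

import "sync"

// Embedding the mutex promotes Lock/Unlock into the type's public
// API, so any caller can take (or release) the internal lock.
type exportedLocking struct {
	sync.RWMutex
	data map[string][]byte
}

// Keeping the mutex in an unexported field hides it; this is the
// shape the diff above moves to (d.mu.Lock() instead of d.Lock()).
type hiddenLocking struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func (s *hiddenLocking) get(key string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	s := &hiddenLocking{data: map[string][]byte{"k": []byte("v")}}
	s.get("k")
}
```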
71
vendor/github.com/peterbourgon/diskv/index.go
generated
vendored
@@ -3,7 +3,7 @@ package diskv
 import (
 	"sync"
 
-	"github.com/petar/GoLLRB/llrb"
+	"github.com/google/btree"
 )
 
 // Index is a generic interface for things that can
@@ -18,85 +18,84 @@ type Index interface {
 // LessFunction is used to initialize an Index of keys in a specific order.
 type LessFunction func(string, string) bool
 
-// llrbString is a custom data type that satisfies the LLRB Less interface,
-// making the strings it wraps sortable by the LLRB package.
-type llrbString struct {
+// btreeString is a custom data type that satisfies the BTree Less interface,
+// making the strings it wraps sortable by the BTree package.
+type btreeString struct {
 	s string
 	l LessFunction
 }
 
-// Less satisfies the llrb.Less interface using the llrbString's LessFunction.
-func (s llrbString) Less(i llrb.Item) bool {
-	return s.l(s.s, i.(llrbString).s)
+// Less satisfies the BTree.Less interface using the btreeString's LessFunction.
+func (s btreeString) Less(i btree.Item) bool {
+	return s.l(s.s, i.(btreeString).s)
 }
 
-// LLRBIndex is an implementation of the Index interface
-// using Petar Maymounkov's LLRB tree.
-type LLRBIndex struct {
+// BTreeIndex is an implementation of the Index interface using google/btree.
+type BTreeIndex struct {
 	sync.RWMutex
 	LessFunction
-	*llrb.LLRB
+	*btree.BTree
 }
 
-// Initialize populates the LLRB tree with data from the keys channel,
-// according to the passed less function. It's destructive to the LLRBIndex.
-func (i *LLRBIndex) Initialize(less LessFunction, keys <-chan string) {
+// Initialize populates the BTree tree with data from the keys channel,
+// according to the passed less function. It's destructive to the BTreeIndex.
+func (i *BTreeIndex) Initialize(less LessFunction, keys <-chan string) {
 	i.Lock()
 	defer i.Unlock()
 	i.LessFunction = less
-	i.LLRB = rebuild(less, keys)
+	i.BTree = rebuild(less, keys)
 }
 
-// Insert inserts the given key (only) into the LLRB tree.
-func (i *LLRBIndex) Insert(key string) {
+// Insert inserts the given key (only) into the BTree tree.
+func (i *BTreeIndex) Insert(key string) {
 	i.Lock()
 	defer i.Unlock()
-	if i.LLRB == nil || i.LessFunction == nil {
+	if i.BTree == nil || i.LessFunction == nil {
 		panic("uninitialized index")
 	}
-	i.LLRB.ReplaceOrInsert(llrbString{s: key, l: i.LessFunction})
+	i.BTree.ReplaceOrInsert(btreeString{s: key, l: i.LessFunction})
 }
 
-// Delete removes the given key (only) from the LLRB tree.
-func (i *LLRBIndex) Delete(key string) {
+// Delete removes the given key (only) from the BTree tree.
+func (i *BTreeIndex) Delete(key string) {
 	i.Lock()
 	defer i.Unlock()
-	if i.LLRB == nil || i.LessFunction == nil {
+	if i.BTree == nil || i.LessFunction == nil {
 		panic("uninitialized index")
 	}
-	i.LLRB.Delete(llrbString{s: key, l: i.LessFunction})
+	i.BTree.Delete(btreeString{s: key, l: i.LessFunction})
}
 
 // Keys yields a maximum of n keys in order. If the passed 'from' key is empty,
 // Keys will return the first n keys. If the passed 'from' key is non-empty, the
 // first key in the returned slice will be the key that immediately follows the
 // passed key, in key order.
-func (i *LLRBIndex) Keys(from string, n int) []string {
+func (i *BTreeIndex) Keys(from string, n int) []string {
 	i.RLock()
 	defer i.RUnlock()
 
-	if i.LLRB == nil || i.LessFunction == nil {
+	if i.BTree == nil || i.LessFunction == nil {
 		panic("uninitialized index")
 	}
 
-	if i.LLRB.Len() <= 0 {
+	if i.BTree.Len() <= 0 {
 		return []string{}
 	}
 
-	llrbFrom := llrbString{s: from, l: i.LessFunction}
+	btreeFrom := btreeString{s: from, l: i.LessFunction}
 	skipFirst := true
-	if len(from) <= 0 || !i.LLRB.Has(llrbFrom) {
-		// no such key, so start at the top
-		llrbFrom = i.LLRB.Min().(llrbString)
+	if len(from) <= 0 || !i.BTree.Has(btreeFrom) {
+		// no such key, so fabricate an always-smallest item
+		btreeFrom = btreeString{s: "", l: func(string, string) bool { return true }}
 		skipFirst = false
 	}
 
 	keys := []string{}
-	iterator := func(i llrb.Item) bool {
-		keys = append(keys, i.(llrbString).s)
+	iterator := func(i btree.Item) bool {
+		keys = append(keys, i.(btreeString).s)
 		return len(keys) < n
 	}
-	i.LLRB.AscendGreaterOrEqual(llrbFrom, iterator)
+	i.BTree.AscendGreaterOrEqual(btreeFrom, iterator)
 
 	if skipFirst && len(keys) > 0 {
 		keys = keys[1:]
@@ -107,10 +106,10 @@ func (i *LLRBIndex) Keys(from string, n int) []string {
 
 // rebuildIndex does the work of regenerating the index
 // with the given keys.
-func rebuild(less LessFunction, keys <-chan string) *llrb.LLRB {
-	tree := llrb.New()
+func rebuild(less LessFunction, keys <-chan string) *btree.BTree {
+	tree := btree.New(2)
 	for key := range keys {
-		tree.ReplaceOrInsert(llrbString{s: key, l: less})
+		tree.ReplaceOrInsert(btreeString{s: key, l: less})
 	}
 	return tree
 }
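The index.go rewrite swaps Petar Maymounkov's LLRB tree for google/btree, and the shape of the code barely changes: the stored value only has to satisfy btree.Item, a one-method interface. A sketch of the same wrapper pattern against the pre-generics google/btree API vendored here (the item type and key set are illustrative, mirroring btreeString above):

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

// item wraps a string plus an ordering function, mirroring the
// btreeString type introduced in the diff above.
type item struct {
	s    string
	less func(a, b string) bool
}

// Less satisfies btree.Item; the tree calls it to order items.
func (it item) Less(other btree.Item) bool {
	return it.less(it.s, other.(item).s)
}

func main() {
	lt := func(a, b string) bool { return a < b }
	tree := btree.New(2) // degree 2, as in the rebuild() change above
	for _, k := range []string{"b", "a", "c"} {
		tree.ReplaceOrInsert(item{s: k, less: lt})
	}
	// Walk keys >= "b" in order, like BTreeIndex.Keys does.
	tree.AscendGreaterOrEqual(item{s: "b", less: lt}, func(i btree.Item) bool {
		fmt.Println(i.(item).s) // prints b, then c
		return true
	})
}
```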
20
vendor/golang.org/x/image/riff/riff.go
generated
vendored
@@ -23,6 +23,7 @@ import (
 var (
 	errMissingPaddingByte     = errors.New("riff: missing padding byte")
 	errMissingRIFFChunkHeader = errors.New("riff: missing RIFF chunk header")
+	errListSubchunkTooLong    = errors.New("riff: list subchunk too long")
 	errShortChunkData         = errors.New("riff: short chunk data")
 	errShortChunkHeader       = errors.New("riff: short chunk header")
 	errStaleReader            = errors.New("riff: stale reader")
@@ -100,13 +101,23 @@ func (z *Reader) Next() (chunkID FourCC, chunkLen uint32, chunkData io.Reader, e
 
 	// Drain the rest of the previous chunk.
 	if z.chunkLen != 0 {
-		_, z.err = io.Copy(ioutil.Discard, z.chunkReader)
+		want := z.chunkLen
+		var got int64
+		got, z.err = io.Copy(ioutil.Discard, z.chunkReader)
+		if z.err == nil && uint32(got) != want {
+			z.err = errShortChunkData
+		}
 		if z.err != nil {
 			return FourCC{}, 0, nil, z.err
 		}
 	}
 	z.chunkReader = nil
 	if z.padded {
+		if z.totalLen == 0 {
+			z.err = errListSubchunkTooLong
+			return FourCC{}, 0, nil, z.err
+		}
+		z.totalLen--
 		_, z.err = io.ReadFull(z.r, z.buf[:1])
 		if z.err != nil {
 			if z.err == io.EOF {
@@ -114,7 +125,6 @@ func (z *Reader) Next() (chunkID FourCC, chunkLen uint32, chunkData io.Reader, e
 			}
 			return FourCC{}, 0, nil, z.err
 		}
-		z.totalLen--
 	}
 
 	// We are done if we have no more data.
@@ -129,7 +139,7 @@ func (z *Reader) Next() (chunkID FourCC, chunkLen uint32, chunkData io.Reader, e
 		return FourCC{}, 0, nil, z.err
 	}
 	z.totalLen -= chunkHeaderSize
-	if _, err = io.ReadFull(z.r, z.buf[:chunkHeaderSize]); err != nil {
+	if _, z.err = io.ReadFull(z.r, z.buf[:chunkHeaderSize]); z.err != nil {
 		if z.err == io.EOF || z.err == io.ErrUnexpectedEOF {
 			z.err = errShortChunkHeader
 		}
@@ -137,6 +147,10 @@ func (z *Reader) Next() (chunkID FourCC, chunkLen uint32, chunkData io.Reader, e
 	}
 	chunkID = FourCC{z.buf[0], z.buf[1], z.buf[2], z.buf[3]}
 	z.chunkLen = u32(z.buf[4:])
+	if z.chunkLen > z.totalLen {
+		z.err = errListSubchunkTooLong
+		return FourCC{}, 0, nil, z.err
+	}
 	z.padded = z.chunkLen&1 == 1
 	z.chunkReader = &chunkReader{z}
 	return chunkID, z.chunkLen, z.chunkReader, nil
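The riff.go changes harden Reader.Next against malformed containers on three fronts: draining a previous chunk now verifies that the expected number of bytes actually arrived (otherwise errShortChunkData), a padding byte is only consumed when the enclosing list still has bytes left, and a chunk header may not declare a length larger than what remains (errListSubchunkTooLong). The last guard reduces to a bounds comparison; a standalone sketch with a hypothetical checkChunkLen helper, not the riff package's internals:

```go
package main

import (
	"errors"
	"fmt"
)

var errListSubchunkTooLong = errors.New("riff: list subchunk too long")

// checkChunkLen mirrors the new guard in Reader.Next: a subchunk may
// not claim more bytes than remain in its enclosing list.
func checkChunkLen(chunkLen, totalLen uint32) error {
	if chunkLen > totalLen {
		return errListSubchunkTooLong
	}
	return nil
}

func main() {
	// A chunk header claiming 100 bytes with only 10 left is rejected.
	fmt.Println(checkChunkLen(100, 10)) // riff: list subchunk too long
	fmt.Println(checkChunkLen(8, 10))   // <nil>
}
```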
17
vendor/golang.org/x/image/tiff/lzw/reader.go
generated
vendored
@@ -147,6 +147,7 @@ func (d *decoder) Read(b []byte) (int, error) {
 // litWidth is the width in bits of literal codes.
 func (d *decoder) decode() {
 	// Loop over the code stream, converting codes into decompressed bytes.
+loop:
 	for {
 		code, err := d.read(d)
 		if err != nil {
@@ -154,8 +155,7 @@ func (d *decoder) decode() {
 				err = io.ErrUnexpectedEOF
 			}
 			d.err = err
-			d.flush()
-			return
+			break
 		}
 		switch {
 		case code < d.clear:
@@ -174,9 +174,8 @@ func (d *decoder) decode() {
 			d.last = decoderInvalidCode
 			continue
 		case code == d.eof:
-			d.flush()
 			d.err = io.EOF
-			return
+			break loop
 		case code <= d.hi:
 			c, i := code, len(d.output)-1
 			if code == d.hi {
@@ -206,8 +205,7 @@ func (d *decoder) decode() {
 			}
 		default:
 			d.err = errors.New("lzw: invalid code")
-			d.flush()
-			return
+			break loop
 		}
 		d.last, d.hi = code, d.hi+1
 		if d.hi+1 >= d.overflow { // NOTE: the "+1" is where TIFF's LZW differs from the standard algorithm.
@@ -219,13 +217,10 @@ func (d *decoder) decode() {
 			}
 		}
 		if d.o >= flushBuffer {
-			d.flush()
-			return
+			break
 		}
 	}
-}
-
-func (d *decoder) flush() {
+	// Flush pending output.
 	d.toRead = d.output[:d.o]
 	d.o = 0
 }
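In tiff/lzw, every d.flush(); return exit is replaced by a labeled break, so the flush happens at exactly one point after the loop instead of being repeated at each early exit. A minimal sketch of the labeled-break control flow the rewrite relies on (the codes slice and its -1 sentinel are illustrative):

```go
package main

import "fmt"

func main() {
	codes := []int{1, 2, -1, 3} // -1 stands in for an EOF code

loop:
	for _, code := range codes {
		switch {
		case code < 0:
			// A plain break here would only exit the switch;
			// the label breaks out of the enclosing for loop.
			break loop
		default:
			fmt.Println("decoded", code)
		}
	}
	// Single flush point, as in the updated decode().
	fmt.Println("flush pending output")
}
```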
32
vendor/golang.org/x/image/tiff/reader.go
generated
vendored
@@ -35,13 +35,6 @@ func (e UnsupportedError) Error() string {
 	return "tiff: unsupported feature: " + string(e)
 }
 
-// An InternalError reports that an internal error was encountered.
-type InternalError string
-
-func (e InternalError) Error() string {
-	return "tiff: internal error: " + string(e)
-}
-
 var errNoPixels = FormatError("not enough pixel data")
 
 type decoder struct {
@@ -118,8 +111,9 @@ func (d *decoder) ifdUint(p []byte) (u []uint, err error) {
 }
 
 // parseIFD decides whether the IFD entry in p is "interesting" and
-// stows away the data in the decoder.
-func (d *decoder) parseIFD(p []byte) error {
+// stows away the data in the decoder. It returns the tag number of the
+// entry and an error, if any.
+func (d *decoder) parseIFD(p []byte) (int, error) {
 	tag := d.byteOrder.Uint16(p[0:2])
 	switch tag {
 	case tBitsPerSample,
@@ -138,17 +132,17 @@ func (d *decoder) parseIFD(p []byte) error {
 		tImageWidth:
 		val, err := d.ifdUint(p)
 		if err != nil {
-			return err
+			return 0, err
 		}
 		d.features[int(tag)] = val
 	case tColorMap:
 		val, err := d.ifdUint(p)
 		if err != nil {
-			return err
+			return 0, err
 		}
 		numcolors := len(val) / 3
 		if len(val)%3 != 0 || numcolors <= 0 || numcolors > 256 {
-			return FormatError("bad ColorMap length")
+			return 0, FormatError("bad ColorMap length")
 		}
 		d.palette = make([]color.Color, numcolors)
 		for i := 0; i < numcolors; i++ {
@@ -166,15 +160,15 @@ func (d *decoder) parseIFD(p []byte) error {
 		// must terminate the import process gracefully.
 		val, err := d.ifdUint(p)
 		if err != nil {
-			return err
+			return 0, err
 		}
 		for _, v := range val {
 			if v != 1 {
-				return UnsupportedError("sample format")
+				return 0, UnsupportedError("sample format")
 			}
 		}
 	}
-	return nil
+	return int(tag), nil
 }
 
 // readBits reads n bits from the internal buffer starting at the current offset.
@@ -428,10 +422,16 @@ func newDecoder(r io.Reader) (*decoder, error) {
 		return nil, err
 	}
 
+	prevTag := -1
 	for i := 0; i < len(p); i += ifdLen {
-		if err := d.parseIFD(p[i : i+ifdLen]); err != nil {
+		tag, err := d.parseIFD(p[i : i+ifdLen])
+		if err != nil {
 			return nil, err
 		}
+		if tag <= prevTag {
+			return nil, FormatError("tags are not sorted in ascending order")
+		}
+		prevTag = tag
 	}
 
 	d.config.Width = int(d.firstVal(tImageWidth))
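tiff/reader.go changes parseIFD to return the tag it processed so that newDecoder can enforce TIFF's requirement that IFD entries appear in ascending tag order, rejecting malformed (or deliberately hostile) files that repeat or shuffle tags. The enforcement reduces to a running maximum over tag numbers; a standalone sketch with a hypothetical checkSorted helper:

```go
package main

import (
	"errors"
	"fmt"
)

// checkSorted rejects a tag sequence unless it is strictly ascending,
// matching the prevTag logic added to newDecoder above.
func checkSorted(tags []int) error {
	prevTag := -1
	for _, tag := range tags {
		if tag <= prevTag {
			return errors.New("tiff: tags are not sorted in ascending order")
		}
		prevTag = tag
	}
	return nil
}

func main() {
	fmt.Println(checkSorted([]int{256, 257, 258})) // <nil>
	fmt.Println(checkSorted([]int{256, 256}))      // error: duplicate tag
}
```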
13
vendor/golang.org/x/image/webp/decode.go
generated
vendored
@@ -2,11 +2,9 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
-// Package webp implements a decoder for WEBP images.
-//
-// WEBP is defined at:
-// https://developers.google.com/speed/webp/docs/riff_container
-package webp // import "golang.org/x/image/webp"
+// +build go1.6
+
+package webp
 
 import (
 	"bytes"
@@ -18,7 +16,6 @@ import (
 	"golang.org/x/image/riff"
 	"golang.org/x/image/vp8"
 	"golang.org/x/image/vp8l"
-	"golang.org/x/image/webp/nycbcra"
 )
 
 var errInvalidFormat = errors.New("webp: invalid format")
@@ -98,7 +95,7 @@ func decode(r io.Reader, configOnly bool) (image.Image, image.Config, error) {
 			return nil, image.Config{}, err
 		}
 		if alpha != nil {
-			return &nycbcra.Image{
+			return &image.NYCbCrA{
 				YCbCr:   *m,
 				A:       alpha,
 				AStride: alphaStride,
@@ -138,7 +135,7 @@ func decode(r io.Reader, configOnly bool) (image.Image, image.Config, error) {
 		heightMinusOne = uint32(buf[7]) | uint32(buf[8])<<8 | uint32(buf[9])<<16
 		if configOnly {
 			return nil, image.Config{
-				ColorModel: nycbcra.ColorModel,
+				ColorModel: color.NYCbCrAModel,
 				Width:      int(widthMinusOne) + 1,
 				Height:     int(heightMinusOne) + 1,
 			}, nil
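webp/decode.go drops the package-local nycbcra types in favor of image.NYCbCrA and color.NYCbCrAModel, which landed in the standard library in Go 1.6; that is why the file gains a +build go1.6 constraint. A sketch of constructing the standard type roughly the way the decoder now does (the dimensions and subsample ratio here are arbitrary):

```go
package main

import (
	"fmt"
	"image"
)

func main() {
	r := image.Rect(0, 0, 2, 2)
	ycbcr := image.NewYCbCr(r, image.YCbCrSubsampleRatio420)

	// The decoder wraps its Y'CbCr planes plus a separate alpha plane,
	// much like the &image.NYCbCrA{...} literal in the diff above.
	m := &image.NYCbCrA{
		YCbCr:   *ycbcr,
		A:       make([]uint8, 2*2),
		AStride: 2,
	}
	fmt.Println(m.Bounds(), m.ColorModel()) // model is color.NYCbCrAModel
}
```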
8
vendor/golang.org/x/image/webp/nycbcra/nycbcra.go
generated
vendored
@@ -4,6 +4,9 @@
 
 // Package nycbcra provides non-alpha-premultiplied Y'CbCr-with-alpha image and
 // color types.
+//
+// Deprecated: as of Go 1.6. Use the standard image and image/color packages
+// instead.
 package nycbcra // import "golang.org/x/image/webp/nycbcra"
 
 import (
@@ -11,6 +14,11 @@ import (
 	"image/color"
 )
 
+func init() {
+	println("The golang.org/x/image/webp/nycbcra package is deprecated, as of Go 1.6. " +
+		"Use the standard image and image/color packages instead.")
+}
+
 // TODO: move this to the standard image and image/color packages, so that the
 // image/draw package can have fast-path code. Moving would rename:
 //	nycbcra.Color to color.NYCbCrA
30
vendor/golang.org/x/image/webp/webp.go
generated
vendored
Normal file
@@ -0,0 +1,30 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package webp implements a decoder for WEBP images.
+//
+// WEBP is defined at:
+// https://developers.google.com/speed/webp/docs/riff_container
+//
+// It requires Go 1.6 or later.
+package webp // import "golang.org/x/image/webp"
+
+// This blank Go file, other than the package clause, exists so that this
+// package can be built for Go 1.5 and earlier. (The other files in this
+// package are all marked "+build go1.6" for the NYCbCrA types introduced in Go
+// 1.6). There is no functionality in a blank package, but some image
+// manipulation programs might still underscore import this package for the
+// side effect of registering the WEBP format with the standard library's
+// image.RegisterFormat and image.Decode functions. For example, that program
+// might contain:
+//
+//	// Underscore imports to register some formats for image.Decode.
+//	import _ "image/gif"
+//	import _ "image/jpeg"
+//	import _ "image/png"
+//	import _ "golang.org/x/image/webp"
+//
+// Such a program will still compile for Go 1.5 (due to this placeholder Go
+// file). It will simply not be able to recognize and decode WEBP (but still
+// handle GIF, JPEG and PNG).
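The new webp.go exists only so the package still compiles under Go 1.5, preserving the underscore-import idiom its comment describes. A sketch of that idiom in a client program (input.webp is a placeholder path):

```go
package main

import (
	"fmt"
	"image"
	"log"
	"os"

	_ "golang.org/x/image/webp" // registers the WEBP format with image.Decode
)

func main() {
	f, err := os.Open("input.webp") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// image.Decode dispatches on the magic bytes of registered formats.
	_, format, err := image.Decode(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("decoded a", format) // "webp" on Go 1.6+
}
```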
76
vendor/vendor.json
vendored
@@ -3,28 +3,34 @@
 	"ignore": "test",
 	"package": [
 		{
-			"checksumSHA1": "kbGwaqCVchx6lunLtdDh3mfiWuo=",
+			"checksumSHA1": "YmarTNe8NlR3nkpX6MoCvhisfoA=",
 			"path": "github.com/disintegration/imaging",
-			"revision": "546cb3c5137b3f1232e123a26aa033aade6b3066",
-			"revisionTime": "2015-10-03T01:44:24Z"
+			"revision": "ac27d1805a555e1754fa177216ee07f4e63c30b5",
+			"revisionTime": "2017-03-19T14:47:19Z"
 		},
 		{
-			"checksumSHA1": "tKMSOCOue78dHOP9PY0aSI79+vw=",
+			"checksumSHA1": "HmbftipkadrLlCfzzVQ+iFHbl6g=",
 			"path": "github.com/golang/glog",
-			"revision": "fca8c8854093a154ff1eb580aae10276ad6b1b5f",
-			"revisionTime": "2015-07-31T22:52:21Z"
+			"revision": "23def4e6c14b4da8ac2ed8007337bc5eb5007998",
+			"revisionTime": "2016-01-25T20:49:56Z"
 		},
 		{
-			"checksumSHA1": "ZZQiLjIW+WEKXuEgyyKo5PUSBBo=",
+			"checksumSHA1": "kHrNY4ktruLxWd+qxbMw90KfO1Y=",
+			"path": "github.com/google/btree",
+			"revision": "316fb6d3f031ae8f4d457c6c5186b9e3ded70435",
+			"revisionTime": "2016-12-17T18:35:37Z"
+		},
+		{
+			"checksumSHA1": "89AgVeQ6dU0XqFSgYG+fQ5rqp/8=",
 			"path": "github.com/gregjones/httpcache",
-			"revision": "ae1d6feaf2d3354cece07d7dcf420de6745ad7b6",
-			"revisionTime": "2015-10-25T15:48:47Z"
+			"revision": "0d2297f241a3503b4d464cd434e6d1490ec76e9a",
+			"revisionTime": "2017-04-24T21:01:39Z"
 		},
 		{
 			"checksumSHA1": "A+TX1jxqy7iWvcb9ZldoG1b5SsY=",
 			"path": "github.com/gregjones/httpcache/diskcache",
-			"revision": "ae1d6feaf2d3354cece07d7dcf420de6745ad7b6",
-			"revisionTime": "2015-10-25T15:48:47Z"
+			"revision": "0d2297f241a3503b4d464cd434e6d1490ec76e9a",
+			"revisionTime": "2017-04-24T21:01:39Z"
 		},
 		{
 			"checksumSHA1": "Th+zE6hHI4jpczGj+JNsqDJrJgI=",
@@ -39,10 +45,10 @@
 			"revisionTime": "2013-04-27T21:51:48Z"
 		},
 		{
-			"checksumSHA1": "d7waKzi8uLYpcrJaRjK1TikDleI=",
+			"checksumSHA1": "GfnXm54E98jxQJMXPZz0LbPVaRc=",
 			"path": "github.com/peterbourgon/diskv",
-			"revision": "72aa5da9f7d1125b480b83c6dc5ad09a1f04508c",
-			"revisionTime": "2014-12-31T14:08:51Z"
+			"revision": "5dfcb07a075adbaaa4094cddfd160b1e1c77a043",
+			"revisionTime": "2016-04-04T09:36:48Z"
 		},
 		{
 			"checksumSHA1": "keGfp7Lfr4cPUiZjLpHzZWrHEzM=",
@@ -59,50 +65,50 @@
 		{
 			"checksumSHA1": "UD/pejajPyS7WaWVXq2NU1eK4Ic=",
 			"path": "golang.org/x/image/bmp",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
-			"checksumSHA1": "yPobk1ttfzvwK1b1PxdA4BdP+Mg=",
+			"checksumSHA1": "zdekzNuFGSoxAZ8cURGsrhBObZs=",
 			"path": "golang.org/x/image/riff",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
-			"checksumSHA1": "ctyyddkXAz6xBcMGo2Brocz1iuY=",
+			"checksumSHA1": "SmD/LkP3vgBGPKT6I38wH7Jb6QI=",
 			"path": "golang.org/x/image/tiff",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
-			"checksumSHA1": "Xnm81x3lWS6xRxSthJuLt+fpfi0=",
+			"checksumSHA1": "PF6VjvpNpOdR8epWH1Liyy7x1Qg=",
 			"path": "golang.org/x/image/tiff/lzw",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
 			"checksumSHA1": "ebUbLKyTEaupuKj5KsceDfkn+UA=",
 			"path": "golang.org/x/image/vp8",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
 			"checksumSHA1": "MF/A3WDD30iVwlktK0itZ0PTJho=",
 			"path": "golang.org/x/image/vp8l",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
-			"checksumSHA1": "n8RwT4S91OC9FNx0terB4fX/ZdU=",
+			"checksumSHA1": "wwirbKM4d69iWA4s9JwpXTsda3A=",
 			"path": "golang.org/x/image/webp",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
-			"checksumSHA1": "3to5NHHqrRXdbrGGuPLH5Qvx0/Q=",
+			"checksumSHA1": "Q+QuePyosoyKLP7tHNe3iREV+mc=",
 			"path": "golang.org/x/image/webp/nycbcra",
-			"revision": "baddd3465a05d84a6d8d3507547a91cb188c81ea",
-			"revisionTime": "2015-09-11T03:43:18Z"
+			"revision": "426cfd8eeb6e08ab1932954e09e3c2cb2bc6e36d",
+			"revisionTime": "2017-05-14T06:33:48Z"
 		},
 		{
 			"checksumSHA1": "Oz2aQiusOZOpefTB6nCKW+vzrWA=",