Compare commits
6 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 2af40b75b7 | 3 years ago |
| | 83f3691c15 | 3 years ago |
| | 4e93e87991 | 3 years ago |
| | 3f1516d3fe | 3 years ago |
| | 09d1e1ee99 | 3 years ago |
| | 2e9906ba20 | 3 years ago |
@@ -1,25 +0,0 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/go
{
  "name": "Go",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/go:1-1.21-bullseye",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  }

  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},

  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],

  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "go version",

  // Configure tool-specific properties.
  // "customizations": {},

  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
@@ -1 +1,3 @@
bin/
cross-out/
release-out/
@@ -1,124 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Bug Report
description: Report a bug
labels:
  - status/triage

body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).

  - type: checkboxes
    attributes:
      label: Contributing guidelines
      description: |
        Please read the contributing guidelines before proceeding.
      options:
        - label: I've read the [contributing guidelines](https://github.com/docker/buildx/blob/master/.github/CONTRIBUTING.md) and wholeheartedly agree
          required: true

  - type: checkboxes
    attributes:
      label: I've found a bug and checked that ...
      description: |
        Make sure that your request fulfills all of the following requirements.
        If one requirement cannot be satisfied, explain in detail why.
      options:
        - label: ... the documentation does not mention anything about my problem
        - label: ... there are no open or closed issues that are related to my problem

  - type: textarea
    attributes:
      label: Description
      description: |
        Please provide a brief description of the bug in 1-2 sentences.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Expected behaviour
      description: |
        Please describe precisely what you'd expect to happen.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Actual behaviour
      description: |
        Please describe precisely what is actually happening.
    validations:
      required: true

  - type: input
    attributes:
      label: Buildx version
      description: |
        Output of `docker buildx version` command.
        Example: `github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554`
    validations:
      required: true

  - type: textarea
    attributes:
      label: Docker info
      description: |
        Output of `docker info` command.
      render: text

  - type: textarea
    attributes:
      label: Builders list
      description: |
        Output of `docker buildx ls` command.
      render: text
    validations:
      required: true

  - type: textarea
    attributes:
      label: Configuration
      description: >
        Please provide a minimal Dockerfile, bake definition (if applicable) and
        invoked commands to help reproducing your issue.
      placeholder: |
        ```dockerfile
        FROM alpine
        echo hello
        ```

        ```hcl
        group "default" {
          targets = ["app"]
        }
        target "app" {
          dockerfile = "Dockerfile"
          target = "build"
        }
        ```

        ```console
        $ docker buildx build .
        $ docker buildx bake
        ```
    validations:
      required: true

  - type: textarea
    attributes:
      label: Build logs
      description: |
        Please provide logs output (and/or BuildKit logs if applicable).
      render: text
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional info
      description: |
        Please provide any additional information that could be useful.
@@ -1,12 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
blank_issues_enabled: true
contact_links:
  - name: Questions and Discussions
    url: https://github.com/docker/buildx/discussions/new
    about: Use Github Discussions to ask questions and/or open discussion topics.
  - name: Command line reference
    url: https://docs.docker.com/engine/reference/commandline/buildx/
    about: Read the command line reference.
  - name: Documentation
    url: https://docs.docker.com/build/
    about: Read the documentation.
@@ -1,15 +0,0 @@
# https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/syntax-for-githubs-form-schema
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/enhancement
  - status/triage

body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true
@@ -1,12 +0,0 @@
# Reporting security issues

The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!

**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).

Security reports are greatly appreciated, and we will publicly thank you for it.
We also like to send gifts—if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
@@ -1,735 +0,0 @@
{
  "latest": {
    "id": 90741208,
    "tag_name": "v0.10.2",
    "html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
    "assets": [
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
      "https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
    ]
  },
"v0.10.2": {
|
|
||||||
"id": 90741208,
|
|
||||||
"tag_name": "v0.10.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.darwin-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v6.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm-v7.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-ppc64le.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-riscv64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.linux-s390x.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/buildx-v0.10.2.windows-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.10.1": {
|
|
||||||
"id": 90346950,
|
|
||||||
"tag_name": "v0.10.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.darwin-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v6.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm-v7.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-ppc64le.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-riscv64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.linux-s390x.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/buildx-v0.10.1.windows-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.10.0": {
|
|
||||||
"id": 88388110,
|
|
||||||
"tag_name": "v0.10.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.darwin-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v6.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm-v7.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-ppc64le.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-riscv64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.linux-s390x.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/buildx-v0.10.0.windows-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.10.0-rc3": {
|
|
||||||
"id": 88191592,
|
|
||||||
"tag_name": "v0.10.0-rc3",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc3",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.darwin-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v6.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm-v7.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-ppc64le.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-riscv64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.linux-s390x.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/buildx-v0.10.0-rc3.windows-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc3/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.10.0-rc2": {
|
|
||||||
"id": 86248476,
|
|
||||||
"tag_name": "v0.10.0-rc2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.darwin-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v6.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm-v7.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-ppc64le.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-riscv64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.linux-s390x.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-amd64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.provenance.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/buildx-v0.10.0-rc2.windows-arm64.sbom.json",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.10.0-rc1": {
|
|
||||||
"id": 85963900,
|
|
||||||
"tag_name": "v0.10.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.10.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/buildx-v0.10.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.10.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.1": {
|
|
||||||
"id": 74760068,
|
|
||||||
"tag_name": "v0.9.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.0": {
|
|
||||||
"id": 74546589,
|
|
||||||
"tag_name": "v0.9.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/buildx-v0.9.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.0-rc2": {
|
|
||||||
"id": 74052235,
|
|
||||||
"tag_name": "v0.9.0-rc2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/buildx-v0.9.0-rc2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.9.0-rc1": {
|
|
||||||
"id": 73389692,
|
|
||||||
"tag_name": "v0.9.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.9.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/buildx-v0.9.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.9.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.2": {
|
|
||||||
"id": 63479740,
|
|
||||||
"tag_name": "v0.8.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.2/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.1": {
|
|
||||||
"id": 62289050,
|
|
||||||
"tag_name": "v0.8.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/buildx-v0.8.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.0": {
|
|
||||||
"id": 61423774,
|
|
||||||
"tag_name": "v0.8.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/buildx-v0.8.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.8.0-rc1": {
|
|
||||||
"id": 60513568,
|
|
||||||
"tag_name": "v0.8.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.8.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/buildx-v0.8.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.8.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.7.1": {
|
|
||||||
"id": 54098347,
|
|
||||||
"tag_name": "v0.7.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/buildx-v0.7.1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.7.0": {
|
|
||||||
"id": 53109422,
|
|
||||||
"tag_name": "v0.7.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/buildx-v0.7.0.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.7.0-rc1": {
|
|
||||||
"id": 52726324,
|
|
||||||
"tag_name": "v0.7.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.7.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/buildx-v0.7.0-rc1.windows-arm64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.7.0-rc1/checksums.txt"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.3": {
|
|
||||||
"id": 48691641,
|
|
||||||
"tag_name": "v0.6.3",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.3",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.3/buildx-v0.6.3.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.2": {
|
|
||||||
"id": 48207405,
|
|
||||||
"tag_name": "v0.6.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.2/buildx-v0.6.2.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.1": {
|
|
||||||
"id": 47064772,
|
|
||||||
"tag_name": "v0.6.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.1/buildx-v0.6.1.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.0": {
|
|
||||||
"id": 46343260,
|
|
||||||
"tag_name": "v0.6.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0/buildx-v0.6.0.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.6.0-rc1": {
|
|
||||||
"id": 46230351,
|
|
||||||
"tag_name": "v0.6.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.6.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-riscv64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-amd64.exe",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.6.0-rc1/buildx-v0.6.0-rc1.windows-arm64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.5.1": {
|
|
||||||
"id": 35276550,
|
|
||||||
"tag_name": "v0.5.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.darwin-universal",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.5.0": {
|
|
||||||
"id": 35268960,
|
|
||||||
"tag_name": "v0.5.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.darwin-universal",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0/buildx-v0.5.0.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.5.0-rc1": {
|
|
||||||
"id": 35015334,
|
|
||||||
"tag_name": "v0.5.0-rc1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.5.0-rc1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.5.0-rc1/buildx-v0.5.0-rc1.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.4.2": {
|
|
||||||
"id": 30007794,
|
|
||||||
"tag_name": "v0.4.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.4.1": {
|
|
||||||
"id": 26067509,
|
|
||||||
"tag_name": "v0.4.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.4.0": {
|
|
||||||
"id": 26028174,
|
|
||||||
"tag_name": "v0.4.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.4.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.4.0/buildx-v0.4.0.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.3.1": {
|
|
||||||
"id": 20316235,
|
|
||||||
"tag_name": "v0.3.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.1/buildx-v0.3.1.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.3.0": {
|
|
||||||
"id": 19029664,
|
|
||||||
"tag_name": "v0.3.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.3.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.2.2": {
|
|
||||||
"id": 17671545,
|
|
||||||
"tag_name": "v0.2.2",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.2",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.2.1": {
|
|
||||||
"id": 17582885,
|
|
||||||
"tag_name": "v0.2.1",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.1",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.1/buildx-v0.2.1.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"v0.2.0": {
|
|
||||||
"id": 16965310,
|
|
||||||
"tag_name": "v0.2.0",
|
|
||||||
"html_url": "https://github.com/docker/buildx/releases/tag/v0.2.0",
|
|
||||||
"assets": [
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.darwin-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-amd64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v6",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm-v7",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-arm64",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-ppc64le",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.linux-s390x",
|
|
||||||
"https://github.com/docker/buildx/releases/download/v0.2.0/buildx-v0.2.0.windows-amd64.exe"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
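For reference, a minimal sketch of a Go shape that a version-keyed release index like the fragment above could decode into; the struct, the `releases.json` filename, and the `main` wrapper are illustrative assumptions, not part of this changeset.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// release mirrors one entry of the version-keyed index shown above.
type release struct {
    ID      int      `json:"id"`
    TagName string   `json:"tag_name"`
    HTMLURL string   `json:"html_url"`
    Assets  []string `json:"assets"`
}

func main() {
    data, err := os.ReadFile("releases.json") // assumed local copy of the fixture
    if err != nil {
        panic(err)
    }
    var index map[string]release
    if err := json.Unmarshal(data, &index); err != nil {
        panic(err)
    }
    fmt.Println(index["v0.6.0"].TagName, len(index["v0.6.0"].Assets))
}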
@ -1 +1,4 @@
/bin
bin
coverage
cross-out
release-out
@ -1,103 +0,0 @@
package hclparser

import (
    "github.com/hashicorp/hcl/v2"
)

type filterBody struct {
    body    hcl.Body
    schema  *hcl.BodySchema
    exclude bool
}

func FilterIncludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
    return &filterBody{
        body:   body,
        schema: schema,
    }
}

func FilterExcludeBody(body hcl.Body, schema *hcl.BodySchema) hcl.Body {
    return &filterBody{
        body:    body,
        schema:  schema,
        exclude: true,
    }
}

func (b *filterBody) Content(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Diagnostics) {
    if b.exclude {
        schema = subtractSchemas(schema, b.schema)
    } else {
        schema = intersectSchemas(schema, b.schema)
    }
    content, _, diag := b.body.PartialContent(schema)
    return content, diag
}

func (b *filterBody) PartialContent(schema *hcl.BodySchema) (*hcl.BodyContent, hcl.Body, hcl.Diagnostics) {
    if b.exclude {
        schema = subtractSchemas(schema, b.schema)
    } else {
        schema = intersectSchemas(schema, b.schema)
    }
    return b.body.PartialContent(schema)
}

func (b *filterBody) JustAttributes() (hcl.Attributes, hcl.Diagnostics) {
    return b.body.JustAttributes()
}

func (b *filterBody) MissingItemRange() hcl.Range {
    return b.body.MissingItemRange()
}

func intersectSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
    result := &hcl.BodySchema{}
    for _, blockA := range a.Blocks {
        for _, blockB := range b.Blocks {
            if blockA.Type == blockB.Type {
                result.Blocks = append(result.Blocks, blockA)
                break
            }
        }
    }
    for _, attrA := range a.Attributes {
        for _, attrB := range b.Attributes {
            if attrA.Name == attrB.Name {
                result.Attributes = append(result.Attributes, attrA)
                break
            }
        }
    }
    return result
}

func subtractSchemas(a, b *hcl.BodySchema) *hcl.BodySchema {
    result := &hcl.BodySchema{}
    for _, blockA := range a.Blocks {
        found := false
        for _, blockB := range b.Blocks {
            if blockA.Type == blockB.Type {
                found = true
                break
            }
        }
        if !found {
            result.Blocks = append(result.Blocks, blockA)
        }
    }
    for _, attrA := range a.Attributes {
        found := false
        for _, attrB := range b.Attributes {
            if attrA.Name == attrB.Name {
                found = true
                break
            }
        }
        if !found {
            result.Attributes = append(result.Attributes, attrA)
        }
    }
    return result
}
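For context, a minimal usage sketch of the removed filter helpers; the HCL source, the schema contents, and the `bake/hclparser` import path are assumptions made for illustration, not taken from this changeset.

package main

import (
    "fmt"

    "github.com/docker/buildx/bake/hclparser" // assumed import path for the package above
    "github.com/hashicorp/hcl/v2"
    "github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
    src := "group = \"default\"\ntarget \"app\" {\n}\n"
    f, diags := hclsyntax.ParseConfig([]byte(src), "example.hcl", hcl.InitialPos)
    if diags.HasErrors() {
        panic(diags)
    }
    schema := &hcl.BodySchema{
        Blocks:     []hcl.BlockHeaderSchema{{Type: "target", LabelNames: []string{"name"}}},
        Attributes: []hcl.AttributeSchema{{Name: "group"}},
    }
    // FilterIncludeBody keeps only what the schema names; FilterExcludeBody hides it.
    included := hclparser.FilterIncludeBody(f.Body, schema)
    content, _ := included.Content(schema)
    fmt.Println(len(content.Blocks), len(content.Attributes)) // 1 1
}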
File diff suppressed because it is too large
@ -1,115 +0,0 @@
package build

import (
    "context"
    "os"
    "path"
    "path/filepath"
    "strconv"
    "strings"

    "github.com/docker/buildx/util/gitutil"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/pkg/errors"
)

const DockerfileLabel = "com.docker.image.source.entrypoint"

func getGitAttributes(ctx context.Context, contextPath string, dockerfilePath string) (res map[string]string, _ error) {
    res = make(map[string]string)
    if contextPath == "" {
        return
    }

    setGitLabels := false
    if v, ok := os.LookupEnv("BUILDX_GIT_LABELS"); ok {
        if v == "full" { // backward compatibility with old "full" mode
            setGitLabels = true
        } else if v, err := strconv.ParseBool(v); err == nil {
            setGitLabels = v
        }
    }
    setGitInfo := true
    if v, ok := os.LookupEnv("BUILDX_GIT_INFO"); ok {
        if v, err := strconv.ParseBool(v); err == nil {
            setGitInfo = v
        }
    }

    if !setGitLabels && !setGitInfo {
        return
    }

    // figure out in which directory the git command needs to run in
    var wd string
    if filepath.IsAbs(contextPath) {
        wd = contextPath
    } else {
        cwd, _ := os.Getwd()
        wd, _ = filepath.Abs(filepath.Join(cwd, contextPath))
    }

    gitc, err := gitutil.New(gitutil.WithContext(ctx), gitutil.WithWorkingDir(wd))
    if err != nil {
        if st, err1 := os.Stat(path.Join(wd, ".git")); err1 == nil && st.IsDir() {
            return res, errors.Wrap(err, "git was not found in the system")
        }
        return
    }

    if !gitc.IsInsideWorkTree() {
        if st, err := os.Stat(path.Join(wd, ".git")); err == nil && st.IsDir() {
            return res, errors.New("failed to read current commit information with git rev-parse --is-inside-work-tree")
        }
        return res, nil
    }

    if sha, err := gitc.FullCommit(); err != nil && !gitutil.IsUnknownRevision(err) {
        return res, errors.Wrap(err, "failed to get git commit")
    } else if sha != "" {
        checkDirty := false
        if v, ok := os.LookupEnv("BUILDX_GIT_CHECK_DIRTY"); ok {
            if v, err := strconv.ParseBool(v); err == nil {
                checkDirty = v
            }
        }
        if checkDirty && gitc.IsDirty() {
            sha += "-dirty"
        }
        if setGitLabels {
            res["label:"+specs.AnnotationRevision] = sha
        }
        if setGitInfo {
            res["vcs:revision"] = sha
        }
    }

    if rurl, err := gitc.RemoteURL(); err == nil && rurl != "" {
        if setGitLabels {
            res["label:"+specs.AnnotationSource] = rurl
        }
        if setGitInfo {
            res["vcs:source"] = rurl
        }
    }

    if setGitLabels {
        if root, err := gitc.RootDir(); err != nil {
            return res, errors.Wrap(err, "failed to get git root dir")
        } else if root != "" {
            if dockerfilePath == "" {
                dockerfilePath = filepath.Join(wd, "Dockerfile")
            }
            if !filepath.IsAbs(dockerfilePath) {
                cwd, _ := os.Getwd()
                dockerfilePath = filepath.Join(cwd, dockerfilePath)
            }
            dockerfilePath, _ = filepath.Rel(root, dockerfilePath)
            if !strings.HasPrefix(dockerfilePath, "..") {
                res["label:"+DockerfileLabel] = dockerfilePath
            }
        }
    }

    return
}
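A standalone sketch of how the BUILDX_GIT_LABELS value is interpreted by the function above: the legacy value "full" still enables labels, otherwise any value accepted by strconv.ParseBool is used. The helper name and the main wrapper are illustrative, not part of this changeset.

package main

import (
    "fmt"
    "os"
    "strconv"
)

// gitLabelsEnabled mirrors the BUILDX_GIT_LABELS handling in getGitAttributes.
func gitLabelsEnabled() bool {
    v, ok := os.LookupEnv("BUILDX_GIT_LABELS")
    if !ok {
        return false // labels are disabled unless explicitly requested
    }
    if v == "full" { // backward compatibility with the old "full" mode
        return true
    }
    b, err := strconv.ParseBool(v)
    return err == nil && b
}

func main() {
    for _, v := range []string{"full", "true", "0", "not-a-bool"} {
        os.Setenv("BUILDX_GIT_LABELS", v)
        fmt.Printf("%-10s -> %v\n", v, gitLabelsEnabled())
    }
}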
@ -1,156 +0,0 @@
package build

import (
    "context"
    "os"
    "path"
    "path/filepath"
    "strings"
    "testing"

    "github.com/docker/buildx/util/gitutil"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

func setupTest(tb testing.TB) {
    gitutil.Mktmp(tb)

    c, err := gitutil.New()
    require.NoError(tb, err)
    gitutil.GitInit(c, tb)

    df := []byte("FROM alpine:latest\n")
    assert.NoError(tb, os.WriteFile("Dockerfile", df, 0644))

    gitutil.GitAdd(c, tb, "Dockerfile")
    gitutil.GitCommit(c, tb, "initial commit")
    gitutil.GitSetRemote(c, tb, "origin", "git@github.com:docker/buildx.git")
}

func TestGetGitAttributesNotGitRepo(t *testing.T) {
    _, err := getGitAttributes(context.Background(), t.TempDir(), "Dockerfile")
    assert.NoError(t, err)
}

func TestGetGitAttributesBadGitRepo(t *testing.T) {
    tmp := t.TempDir()
    require.NoError(t, os.MkdirAll(path.Join(tmp, ".git"), 0755))

    _, err := getGitAttributes(context.Background(), tmp, "Dockerfile")
    assert.Error(t, err)
}

func TestGetGitAttributesNoContext(t *testing.T) {
    setupTest(t)

    gitattrs, err := getGitAttributes(context.Background(), "", "Dockerfile")
    assert.NoError(t, err)
    assert.Empty(t, gitattrs)
}

func TestGetGitAttributes(t *testing.T) {
    cases := []struct {
        name         string
        envGitLabels string
        envGitInfo   string
        expected     []string
    }{
        {
            name:         "default",
            envGitLabels: "",
            envGitInfo:   "",
            expected: []string{
                "vcs:revision",
                "vcs:source",
            },
        },
        {
            name:         "none",
            envGitLabels: "false",
            envGitInfo:   "false",
            expected:     []string{},
        },
        {
            name:         "gitinfo",
            envGitLabels: "false",
            envGitInfo:   "true",
            expected: []string{
                "vcs:revision",
                "vcs:source",
            },
        },
        {
            name:         "gitlabels",
            envGitLabels: "true",
            envGitInfo:   "false",
            expected: []string{
                "label:" + DockerfileLabel,
                "label:" + specs.AnnotationRevision,
                "label:" + specs.AnnotationSource,
            },
        },
        {
            name:         "both",
            envGitLabels: "true",
            envGitInfo:   "",
            expected: []string{
                "label:" + DockerfileLabel,
                "label:" + specs.AnnotationRevision,
                "label:" + specs.AnnotationSource,
                "vcs:revision",
                "vcs:source",
            },
        },
    }
    for _, tt := range cases {
        tt := tt
        t.Run(tt.name, func(t *testing.T) {
            setupTest(t)
            if tt.envGitLabels != "" {
                t.Setenv("BUILDX_GIT_LABELS", tt.envGitLabels)
            }
            if tt.envGitInfo != "" {
                t.Setenv("BUILDX_GIT_INFO", tt.envGitInfo)
            }
            gitattrs, err := getGitAttributes(context.Background(), ".", "Dockerfile")
            require.NoError(t, err)
            for _, e := range tt.expected {
                assert.Contains(t, gitattrs, e)
                assert.NotEmpty(t, gitattrs[e])
                if e == "label:"+DockerfileLabel {
                    assert.Equal(t, "Dockerfile", gitattrs[e])
                } else if e == "label:"+specs.AnnotationSource || e == "vcs:source" {
                    assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs[e])
                }
            }
        })
    }
}

func TestGetGitAttributesDirty(t *testing.T) {
    setupTest(t)
    t.Setenv("BUILDX_GIT_CHECK_DIRTY", "true")

    // make a change to test dirty flag
    df := []byte("FROM alpine:edge\n")
    require.NoError(t, os.Mkdir("dir", 0755))
    require.NoError(t, os.WriteFile(filepath.Join("dir", "Dockerfile"), df, 0644))

    t.Setenv("BUILDX_GIT_LABELS", "true")
    gitattrs, _ := getGitAttributes(context.Background(), ".", "Dockerfile")
    assert.Equal(t, 5, len(gitattrs))

    assert.Contains(t, gitattrs, "label:"+DockerfileLabel)
    assert.Equal(t, "Dockerfile", gitattrs["label:"+DockerfileLabel])
    assert.Contains(t, gitattrs, "label:"+specs.AnnotationSource)
    assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["label:"+specs.AnnotationSource])
    assert.Contains(t, gitattrs, "label:"+specs.AnnotationRevision)
    assert.True(t, strings.HasSuffix(gitattrs["label:"+specs.AnnotationRevision], "-dirty"))

    assert.Contains(t, gitattrs, "vcs:source")
    assert.Equal(t, "git@github.com:docker/buildx.git", gitattrs["vcs:source"])
    assert.Contains(t, gitattrs, "vcs:revision")
    assert.True(t, strings.HasSuffix(gitattrs["vcs:revision"], "-dirty"))
}
@ -1,138 +0,0 @@
package build

import (
    "context"
    _ "crypto/sha256" // ensure digests can be computed
    "io"
    "sync"
    "sync/atomic"
    "syscall"

    controllerapi "github.com/docker/buildx/controller/pb"
    gateway "github.com/moby/buildkit/frontend/gateway/client"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
)

type Container struct {
    cancelOnce      sync.Once
    containerCancel func()
    isUnavailable   atomic.Bool
    initStarted     atomic.Bool
    container       gateway.Container
    releaseCh       chan struct{}
    resultCtx       *ResultHandle
}

func NewContainer(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig) (*Container, error) {
    mainCtx := ctx

    ctrCh := make(chan *Container)
    errCh := make(chan error)
    go func() {
        err := resultCtx.build(func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
            ctx, cancel := context.WithCancel(ctx)
            go func() {
                <-mainCtx.Done()
                cancel()
            }()

            containerCfg, err := resultCtx.getContainerConfig(ctx, c, cfg)
            if err != nil {
                return nil, err
            }
            containerCtx, containerCancel := context.WithCancel(ctx)
            defer containerCancel()
            bkContainer, err := c.NewContainer(containerCtx, containerCfg)
            if err != nil {
                return nil, err
            }
            releaseCh := make(chan struct{})
            container := &Container{
                containerCancel: containerCancel,
                container:       bkContainer,
                releaseCh:       releaseCh,
                resultCtx:       resultCtx,
            }
            doneCh := make(chan struct{})
            defer close(doneCh)
            resultCtx.registerCleanup(func() {
                container.Cancel()
                <-doneCh
            })
            ctrCh <- container
            <-container.releaseCh

            return nil, bkContainer.Release(ctx)
        })
        if err != nil {
            errCh <- err
        }
    }()
    select {
    case ctr := <-ctrCh:
        return ctr, nil
    case err := <-errCh:
        return nil, err
    case <-mainCtx.Done():
        return nil, mainCtx.Err()
    }
}

func (c *Container) Cancel() {
    c.markUnavailable()
    c.cancelOnce.Do(func() {
        if c.containerCancel != nil {
            c.containerCancel()
        }
        close(c.releaseCh)
    })
}

func (c *Container) IsUnavailable() bool {
    return c.isUnavailable.Load()
}

func (c *Container) markUnavailable() {
    c.isUnavailable.Store(true)
}

func (c *Container) Exec(ctx context.Context, cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
    if isInit := c.initStarted.CompareAndSwap(false, true); isInit {
        defer func() {
            // container can't be used after init exits
            c.markUnavailable()
        }()
    }
    err := exec(ctx, c.resultCtx, cfg, c.container, stdin, stdout, stderr)
    if err != nil {
        // Container becomes unavailable if one of the processes fails in it.
        c.markUnavailable()
    }
    return err
}

func exec(ctx context.Context, resultCtx *ResultHandle, cfg *controllerapi.InvokeConfig, ctr gateway.Container, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
    processCfg, err := resultCtx.getProcessConfig(cfg, stdin, stdout, stderr)
    if err != nil {
        return err
    }
    proc, err := ctr.Start(ctx, processCfg)
    if err != nil {
        return errors.Errorf("failed to start container: %v", err)
    }

    doneCh := make(chan struct{})
    defer close(doneCh)
    go func() {
        select {
        case <-ctx.Done():
            if err := proc.Signal(ctx, syscall.SIGKILL); err != nil {
                logrus.Warnf("failed to kill process: %v", err)
            }
        case <-doneCh:
        }
    }()

    return proc.Wait()
}
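A hedged sketch of the call pattern the Container type above is designed for, assuming a *ResultHandle obtained from NewResultHandle (defined in the next file); the package name, surrounding function, and error handling are illustrative, not part of this changeset.

package example

import (
    "context"
    "os"

    "github.com/docker/buildx/build"
    controllerapi "github.com/docker/buildx/controller/pb"
)

// invokeShell shows the intended lifecycle: create the container from a build
// result, run one process in it, then release it.
func invokeShell(ctx context.Context, rh *build.ResultHandle) error {
    cfg := &controllerapi.InvokeConfig{Tty: true, Cmd: []string{"/bin/sh"}}
    ctr, err := build.NewContainer(ctx, rh, cfg)
    if err != nil {
        return err
    }
    defer ctr.Cancel() // releases the underlying BuildKit container

    // The first Exec is treated as the init process; once it exits, the
    // container is marked unavailable and cannot be reused.
    return ctr.Exec(ctx, cfg, os.Stdin, os.Stdout, os.Stderr)
}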
@ -1,495 +0,0 @@
package build

import (
    "context"
    _ "crypto/sha256" // ensure digests can be computed
    "encoding/json"
    "io"
    "sync"

    controllerapi "github.com/docker/buildx/controller/pb"
    "github.com/moby/buildkit/client"
    "github.com/moby/buildkit/exporter/containerimage/exptypes"
    gateway "github.com/moby/buildkit/frontend/gateway/client"
    "github.com/moby/buildkit/solver/errdefs"
    "github.com/moby/buildkit/solver/pb"
    "github.com/moby/buildkit/solver/result"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
    "golang.org/x/sync/errgroup"
)

// NewResultHandle makes a call to client.Build, additionally returning a
// opaque ResultHandle alongside the standard response and error.
//
// This ResultHandle can be used to execute additional build steps in the same
// context as the build occurred, which can allow easy debugging of build
// failures and successes.
//
// If the returned ResultHandle is not nil, the caller must call Done() on it.
func NewResultHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, product string, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) (*ResultHandle, *client.SolveResponse, error) {
    // Create a new context to wrap the original, and cancel it when the
    // caller-provided context is cancelled.
    //
    // We derive the context from the background context so that we can forbid
    // cancellation of the build request after <-done is closed (which we do
    // before returning the ResultHandle).
    baseCtx := ctx
    ctx, cancel := context.WithCancelCause(context.Background())
    done := make(chan struct{})
    go func() {
        select {
        case <-baseCtx.Done():
            cancel(baseCtx.Err())
        case <-done:
            // Once done is closed, we've recorded a ResultHandle, so we
            // shouldn't allow cancelling the underlying build request anymore.
        }
    }()

    // Create a new channel to forward status messages to the original.
    //
    // We do this so that we can discard status messages after the main portion
    // of the build is complete. This is necessary for the solve error case,
    // where the original gateway is kept open until the ResultHandle is
    // closed - we don't want progress messages from operations in that
    // ResultHandle to display after this function exits.
    //
    // Additionally, callers should wait for the progress channel to be closed.
    // If we keep the session open and never close the progress channel, the
    // caller will likely hang.
    baseCh := ch
    ch = make(chan *client.SolveStatus)
    go func() {
        for {
            s, ok := <-ch
            if !ok {
                return
            }
            select {
            case <-baseCh:
                // base channel is closed, discard status messages
            default:
                baseCh <- s
            }
        }
    }()
    defer close(baseCh)

    var resp *client.SolveResponse
    var respErr error
    var respHandle *ResultHandle

    go func() {
        defer cancel(context.Canceled) // ensure no dangling processes

        var res *gateway.Result
        var err error
        resp, err = cc.Build(ctx, opt, product, func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
            var err error
            res, err = buildFunc(ctx, c)

            if res != nil && err == nil {
                // Force evaluation of the build result (otherwise, we likely
                // won't get a solve error)
                def, err2 := getDefinition(ctx, res)
                if err2 != nil {
                    return nil, err2
                }
                res, err = evalDefinition(ctx, c, def)
            }

            if err != nil {
                // Scenario 1: we failed to evaluate a node somewhere in the
                // build graph.
                //
                // In this case, we construct a ResultHandle from this
                // original Build session, and return it alongside the original
                // build error. We then need to keep the gateway session open
                // until the caller explicitly closes the ResultHandle.

                var se *errdefs.SolveError
                if errors.As(err, &se) {
                    respHandle = &ResultHandle{
                        done:     make(chan struct{}),
                        solveErr: se,
                        gwClient: c,
                        gwCtx:    ctx,
                    }
                    respErr = se
                    close(done)

                    // Block until the caller closes the ResultHandle.
                    select {
                    case <-respHandle.done:
                    case <-ctx.Done():
                    }
                }
            }
            return res, err
        }, ch)
        if respHandle != nil {
            return
        }
        if err != nil {
            // Something unexpected failed during the build, we didn't succeed,
            // but we also didn't make it far enough to create a ResultHandle.
            respErr = err
            close(done)
            return
        }

        // Scenario 2: we successfully built the image with no errors.
        //
        // In this case, the original gateway session has now been closed
        // since the Build has been completed. So, we need to create a new
        // gateway session to populate the ResultHandle. To do this, we
        // need to re-evaluate the target result, in this new session. This
        // should be instantaneous since the result should be cached.

        def, err := getDefinition(ctx, res)
        if err != nil {
            respErr = err
            close(done)
            return
        }

        // NOTE: ideally this second connection should be lazily opened
        opt := opt
        opt.Ref = ""
        opt.Exports = nil
        opt.CacheExports = nil
        opt.Internal = true
        _, respErr = cc.Build(ctx, opt, "buildx", func(ctx context.Context, c gateway.Client) (*gateway.Result, error) {
            res, err := evalDefinition(ctx, c, def)
            if err != nil {
                // This should probably not happen, since we've previously
                // successfully evaluated the same result with no issues.
                return nil, errors.Wrap(err, "inconsistent solve result")
            }
            respHandle = &ResultHandle{
                done:     make(chan struct{}),
                res:      res,
                gwClient: c,
                gwCtx:    ctx,
            }
            close(done)

            // Block until the caller closes the ResultHandle.
            select {
            case <-respHandle.done:
            case <-ctx.Done():
            }
            return nil, ctx.Err()
        }, nil)
        if respHandle != nil {
            return
        }
        close(done)
    }()

    // Block until the other thread signals that it's completed the build.
    select {
    case <-done:
    case <-baseCtx.Done():
        if respErr == nil {
            respErr = baseCtx.Err()
        }
    }
    return respHandle, resp, respErr
}

// getDefinition converts a gateway result into a collection of definitions for
// each ref in the result.
func getDefinition(ctx context.Context, res *gateway.Result) (*result.Result[*pb.Definition], error) {
    return result.ConvertResult(res, func(ref gateway.Reference) (*pb.Definition, error) {
        st, err := ref.ToState()
        if err != nil {
            return nil, err
        }
        def, err := st.Marshal(ctx)
        if err != nil {
            return nil, err
        }
        return def.ToPB(), nil
    })
}

// evalDefinition performs the reverse of getDefinition, converting a
// collection of definitions into a gateway result.
func evalDefinition(ctx context.Context, c gateway.Client, defs *result.Result[*pb.Definition]) (*gateway.Result, error) {
    // force evaluation of all targets in parallel
    results := make(map[*pb.Definition]*gateway.Result)
    resultsMu := sync.Mutex{}
    eg, egCtx := errgroup.WithContext(ctx)
    defs.EachRef(func(def *pb.Definition) error {
        eg.Go(func() error {
            res, err := c.Solve(egCtx, gateway.SolveRequest{
                Evaluate:   true,
                Definition: def,
            })
            if err != nil {
                return err
            }
            resultsMu.Lock()
            results[def] = res
            resultsMu.Unlock()
            return nil
        })
        return nil
    })
    if err := eg.Wait(); err != nil {
        return nil, err
    }
    res, _ := result.ConvertResult(defs, func(def *pb.Definition) (gateway.Reference, error) {
        if res, ok := results[def]; ok {
            return res.Ref, nil
        }
        return nil, nil
    })
    return res, nil
}

// ResultHandle is a build result with the client that built it.
type ResultHandle struct {
    res      *gateway.Result
    solveErr *errdefs.SolveError

    done     chan struct{}
    doneOnce sync.Once

    gwClient gateway.Client
    gwCtx    context.Context

    cleanups   []func()
    cleanupsMu sync.Mutex
}

func (r *ResultHandle) Done() {
    r.doneOnce.Do(func() {
        r.cleanupsMu.Lock()
        cleanups := r.cleanups
        r.cleanups = nil
        r.cleanupsMu.Unlock()
        for _, f := range cleanups {
            f()
        }

        close(r.done)
        <-r.gwCtx.Done()
    })
}

func (r *ResultHandle) registerCleanup(f func()) {
    r.cleanupsMu.Lock()
    r.cleanups = append(r.cleanups, f)
    r.cleanupsMu.Unlock()
}

func (r *ResultHandle) build(buildFunc gateway.BuildFunc) (err error) {
    _, err = buildFunc(r.gwCtx, r.gwClient)
    return err
}

func (r *ResultHandle) getContainerConfig(ctx context.Context, c gateway.Client, cfg *controllerapi.InvokeConfig) (containerCfg gateway.NewContainerRequest, _ error) {
    if r.res != nil && r.solveErr == nil {
        logrus.Debugf("creating container from successful build")
        ccfg, err := containerConfigFromResult(ctx, r.res, c, *cfg)
        if err != nil {
            return containerCfg, err
        }
        containerCfg = *ccfg
    } else {
        logrus.Debugf("creating container from failed build %+v", cfg)
        ccfg, err := containerConfigFromError(r.solveErr, *cfg)
        if err != nil {
            return containerCfg, errors.Wrapf(err, "no result nor error is available")
        }
        containerCfg = *ccfg
    }
    return containerCfg, nil
}

func (r *ResultHandle) getProcessConfig(cfg *controllerapi.InvokeConfig, stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) (_ gateway.StartRequest, err error) {
    processCfg := newStartRequest(stdin, stdout, stderr)
    if r.res != nil && r.solveErr == nil {
        logrus.Debugf("creating container from successful build")
        if err := populateProcessConfigFromResult(&processCfg, r.res, *cfg); err != nil {
            return processCfg, err
        }
    } else {
        logrus.Debugf("creating container from failed build %+v", cfg)
        if err := populateProcessConfigFromError(&processCfg, r.solveErr, *cfg); err != nil {
            return processCfg, err
        }
    }
    return processCfg, nil
}

func containerConfigFromResult(ctx context.Context, res *gateway.Result, c gateway.Client, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
    if cfg.Initial {
        return nil, errors.Errorf("starting from the container from the initial state of the step is supported only on the failed steps")
    }

    ps, err := exptypes.ParsePlatforms(res.Metadata)
    if err != nil {
        return nil, err
    }
    ref, ok := res.FindRef(ps.Platforms[0].ID)
    if !ok {
        return nil, errors.Errorf("no reference found")
    }

    return &gateway.NewContainerRequest{
        Mounts: []gateway.Mount{
            {
                Dest:      "/",
                MountType: pb.MountType_BIND,
                Ref:       ref,
            },
        },
    }, nil
}

func populateProcessConfigFromResult(req *gateway.StartRequest, res *gateway.Result, cfg controllerapi.InvokeConfig) error {
    imgData := res.Metadata[exptypes.ExporterImageConfigKey]
    var img *specs.Image
    if len(imgData) > 0 {
        img = &specs.Image{}
        if err := json.Unmarshal(imgData, img); err != nil {
            return err
        }
    }

    user := ""
    if !cfg.NoUser {
        user = cfg.User
    } else if img != nil {
        user = img.Config.User
    }

    cwd := ""
    if !cfg.NoCwd {
        cwd = cfg.Cwd
    } else if img != nil {
        cwd = img.Config.WorkingDir
    }

    env := []string{}
    if img != nil {
        env = append(env, img.Config.Env...)
    }
    env = append(env, cfg.Env...)

    args := []string{}
    if cfg.Entrypoint != nil {
        args = append(args, cfg.Entrypoint...)
    } else if img != nil {
        args = append(args, img.Config.Entrypoint...)
    }
    if !cfg.NoCmd {
        args = append(args, cfg.Cmd...)
    } else if img != nil {
        args = append(args, img.Config.Cmd...)
    }

    req.Args = args
    req.Env = env
    req.User = user
    req.Cwd = cwd
    req.Tty = cfg.Tty

    return nil
}

func containerConfigFromError(solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) (*gateway.NewContainerRequest, error) {
    exec, err := execOpFromError(solveErr)
    if err != nil {
        return nil, err
    }
    var mounts []gateway.Mount
    for i, mnt := range exec.Mounts {
        rid := solveErr.Solve.MountIDs[i]
        if cfg.Initial {
            rid = solveErr.Solve.InputIDs[i]
        }
        mounts = append(mounts, gateway.Mount{
            Selector:  mnt.Selector,
            Dest:      mnt.Dest,
            ResultID:  rid,
            Readonly:  mnt.Readonly,
            MountType: mnt.MountType,
            CacheOpt:  mnt.CacheOpt,
            SecretOpt: mnt.SecretOpt,
            SSHOpt:    mnt.SSHOpt,
        })
    }
    return &gateway.NewContainerRequest{
        Mounts:  mounts,
        NetMode: exec.Network,
    }, nil
}

func populateProcessConfigFromError(req *gateway.StartRequest, solveErr *errdefs.SolveError, cfg controllerapi.InvokeConfig) error {
    exec, err := execOpFromError(solveErr)
    if err != nil {
        return err
    }
    meta := exec.Meta
    user := ""
    if !cfg.NoUser {
        user = cfg.User
    } else {
        user = meta.User
    }

    cwd := ""
    if !cfg.NoCwd {
        cwd = cfg.Cwd
    } else {
        cwd = meta.Cwd
    }

    env := append(meta.Env, cfg.Env...)

    args := []string{}
    if cfg.Entrypoint != nil {
        args = append(args, cfg.Entrypoint...)
    }
    if cfg.Cmd != nil {
        args = append(args, cfg.Cmd...)
    }
    if len(args) == 0 {
        args = meta.Args
    }

    req.Args = args
    req.Env = env
    req.User = user
    req.Cwd = cwd
    req.Tty = cfg.Tty

    return nil
}

func execOpFromError(solveErr *errdefs.SolveError) (*pb.ExecOp, error) {
    if solveErr == nil {
        return nil, errors.Errorf("no error is available")
    }
    switch op := solveErr.Solve.Op.GetOp().(type) {
    case *pb.Op_Exec:
        return op.Exec, nil
    default:
        return nil, errors.Errorf("invoke: unsupported error type")
    }
    // TODO: support other ops
}

func newStartRequest(stdin io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) gateway.StartRequest {
    return gateway.StartRequest{
        Stdin:  stdin,
        Stdout: stdout,
        Stderr: stderr,
    }
}
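A hedged sketch of the caller-side contract documented on NewResultHandle above: a non-nil handle must always get Done(), whether the build failed or succeeded. The package name, wrapper function, and the idea that the caller already has a BuildKit client, solve options, build function, and status channel are assumptions for illustration.

package example

import (
    "context"

    "github.com/docker/buildx/build"
    "github.com/moby/buildkit/client"
    gateway "github.com/moby/buildkit/frontend/gateway/client"
)

func buildWithHandle(ctx context.Context, cc *client.Client, opt client.SolveOpt, buildFunc gateway.BuildFunc, ch chan *client.SolveStatus) error {
    rh, _, err := build.NewResultHandle(ctx, cc, opt, "buildx", buildFunc, ch)
    if rh != nil {
        // Done releases the gateway session that is held open for debugging.
        defer rh.Done()
    }
    if err != nil {
        // On failure, rh (when non-nil) still points at the failed step and
        // can be passed to NewContainer to inspect it interactively.
        return err
    }
    return nil
}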
@ -1,292 +0,0 @@
package builder

import (
    "context"
    "os"
    "sort"
    "sync"

    "github.com/docker/buildx/driver"
    "github.com/docker/buildx/store"
    "github.com/docker/buildx/store/storeutil"
    "github.com/docker/buildx/util/dockerutil"
    "github.com/docker/buildx/util/imagetools"
    "github.com/docker/buildx/util/progress"
    "github.com/docker/cli/cli/command"
    "github.com/pkg/errors"
    "golang.org/x/sync/errgroup"
)

// Builder represents an active builder object
type Builder struct {
    *store.NodeGroup
    driverFactory driverFactory
    nodes         []Node
    opts          builderOpts
    err           error
}

type builderOpts struct {
    dockerCli       command.Cli
    name            string
    txn             *store.Txn
    contextPathHash string
    validate        bool
}

// Option provides a variadic option for configuring the builder.
type Option func(b *Builder)

// WithName sets builder name.
func WithName(name string) Option {
    return func(b *Builder) {
        b.opts.name = name
    }
}

// WithStore sets a store instance used at init.
func WithStore(txn *store.Txn) Option {
    return func(b *Builder) {
        b.opts.txn = txn
    }
}

// WithContextPathHash is used for determining pods in k8s driver instance.
func WithContextPathHash(contextPathHash string) Option {
    return func(b *Builder) {
        b.opts.contextPathHash = contextPathHash
    }
}

// WithSkippedValidation skips builder context validation.
func WithSkippedValidation() Option {
    return func(b *Builder) {
        b.opts.validate = false
    }
}

// New initializes a new builder client
func New(dockerCli command.Cli, opts ...Option) (_ *Builder, err error) {
    b := &Builder{
        opts: builderOpts{
            dockerCli: dockerCli,
            validate:  true,
        },
    }
    for _, opt := range opts {
        opt(b)
    }

    if b.opts.txn == nil {
        // if store instance is nil we create a short-lived one using the
        // default store and ensure we release it on completion
        var release func()
        b.opts.txn, release, err = storeutil.GetStore(dockerCli)
        if err != nil {
            return nil, err
        }
        defer release()
    }

    if b.opts.name != "" {
        if b.NodeGroup, err = storeutil.GetNodeGroup(b.opts.txn, dockerCli, b.opts.name); err != nil {
            return nil, err
        }
    } else {
        if b.NodeGroup, err = storeutil.GetCurrentInstance(b.opts.txn, dockerCli); err != nil {
            return nil, err
        }
    }
    if b.opts.validate {
        if err = b.Validate(); err != nil {
            return nil, err
        }
    }

    return b, nil
}

// Validate validates builder context
func (b *Builder) Validate() error {
    if b.NodeGroup != nil && b.NodeGroup.DockerContext {
        list, err := b.opts.dockerCli.ContextStore().List()
        if err != nil {
            return err
        }
        currentContext := b.opts.dockerCli.CurrentContext()
        for _, l := range list {
            if l.Name == b.Name && l.Name != currentContext {
                return errors.Errorf("use `docker --context=%s buildx` to switch to context %q", l.Name, l.Name)
            }
        }
    }
    return nil
}

// ContextName returns builder context name if available.
func (b *Builder) ContextName() string {
    ctxbuilders, err := b.opts.dockerCli.ContextStore().List()
    if err != nil {
        return ""
    }
    for _, cb := range ctxbuilders {
        if b.NodeGroup.Driver == "docker" && len(b.NodeGroup.Nodes) == 1 && b.NodeGroup.Nodes[0].Endpoint == cb.Name {
            return cb.Name
        }
    }
    return ""
}

// ImageOpt returns registry auth configuration
func (b *Builder) ImageOpt() (imagetools.Opt, error) {
    return storeutil.GetImageConfig(b.opts.dockerCli, b.NodeGroup)
}

// Boot bootstrap a builder
func (b *Builder) Boot(ctx context.Context) (bool, error) {
    toBoot := make([]int, 0, len(b.nodes))
    for idx, d := range b.nodes {
        if d.Err != nil || d.Driver == nil || d.DriverInfo == nil {
            continue
        }
        if d.DriverInfo.Status != driver.Running {
            toBoot = append(toBoot, idx)
        }
    }
    if len(toBoot) == 0 {
        return false, nil
    }

    printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progress.PrinterModeAuto)
    if err != nil {
        return false, err
    }

    baseCtx := ctx
    eg, _ := errgroup.WithContext(ctx)
    for _, idx := range toBoot {
        func(idx int) {
            eg.Go(func() error {
                pw := progress.WithPrefix(printer, b.NodeGroup.Nodes[idx].Name, len(toBoot) > 1)
                _, err := driver.Boot(ctx, baseCtx, b.nodes[idx].Driver, pw)
                if err != nil {
                    b.nodes[idx].Err = err
                }
                return nil
            })
        }(idx)
    }

    err = eg.Wait()
    err1 := printer.Wait()
    if err == nil {
        err = err1
    }

    return true, err
}

// Inactive checks if all nodes are inactive for this builder.
func (b *Builder) Inactive() bool {
    for _, d := range b.nodes {
        if d.DriverInfo != nil && d.DriverInfo.Status == driver.Running {
            return false
        }
    }
    return true
}

// Err returns error if any.
func (b *Builder) Err() error {
    return b.err
}

type driverFactory struct {
    driver.Factory
    once sync.Once
}

// Factory returns the driver factory.
func (b *Builder) Factory(ctx context.Context) (_ driver.Factory, err error) {
    b.driverFactory.once.Do(func() {
        if b.Driver != "" {
            b.driverFactory.Factory, err = driver.GetFactory(b.Driver, true)
            if err != nil {
                return
            }
        } else {
            // empty driver means nodegroup was implicitly created as a default
            // driver for a docker context and allows falling back to a
            // docker-container driver for older daemon that doesn't support
            // buildkit (< 18.06).
            ep := b.NodeGroup.Nodes[0].Endpoint
            var dockerapi *dockerutil.ClientAPI
            dockerapi, err = dockerutil.NewClientAPI(b.opts.dockerCli, b.NodeGroup.Nodes[0].Endpoint)
            if err != nil {
                return
            }
            // check if endpoint is healthy is needed to determine the driver type.
            // if this fails then can't continue with driver selection.
            if _, err = dockerapi.Ping(ctx); err != nil {
                return
            }
            b.driverFactory.Factory, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
            if err != nil {
                return
            }
            b.Driver = b.driverFactory.Factory.Name()
        }
    })
    return b.driverFactory.Factory, err
}

// GetBuilders returns all builders
func GetBuilders(dockerCli command.Cli, txn *store.Txn) ([]*Builder, error) {
    storeng, err := txn.List()
    if err != nil {
        return nil, err
    }

    builders := make([]*Builder, len(storeng))
    seen := make(map[string]struct{})
    for i, ng := range storeng {
        b, err := New(dockerCli,
            WithName(ng.Name),
            WithStore(txn),
            WithSkippedValidation(),
        )
        if err != nil {
            return nil, err
        }
        builders[i] = b
        seen[b.NodeGroup.Name] = struct{}{}
    }

    contexts, err := dockerCli.ContextStore().List()
    if err != nil {
        return nil, err
    }
    sort.Slice(contexts, func(i, j int) bool {
        return contexts[i].Name < contexts[j].Name
    })

    for _, c := range contexts {
        // if a context has the same name as an instance from the store, do not
        // add it to the builders list. An instance from the store takes
        // precedence over context builders.
        if _, ok := seen[c.Name]; ok {
            continue
        }
        b, err := New(dockerCli,
            WithName(c.Name),
            WithStore(txn),
            WithSkippedValidation(),
        )
        if err != nil {
            return nil, err
        }
        builders = append(builders, b)
    }

    return builders, nil
}
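A hedged usage sketch for the builder package above, assuming a command.Cli instance from the docker CLI plugin framework; the package name, wrapper function, and builder name are illustrative. LoadNodes is defined in the next file of this changeset.

package example

import (
    "context"
    "fmt"

    "github.com/docker/buildx/builder"
    "github.com/docker/cli/cli/command"
)

func bootBuilder(ctx context.Context, dockerCli command.Cli, name string) error {
    b, err := builder.New(dockerCli, builder.WithName(name))
    if err != nil {
        return err
    }
    nodes, err := b.LoadNodes(ctx, false)
    if err != nil {
        return err
    }
    booted, err := b.Boot(ctx) // boots any node that is not currently running
    fmt.Println(b.Name, len(nodes), booted)
    return err
}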
@ -1,211 +0,0 @@
|
|||||||
package builder
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/driver"
|
|
||||||
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
|
|
||||||
"github.com/docker/buildx/store"
|
|
||||||
"github.com/docker/buildx/store/storeutil"
|
|
||||||
"github.com/docker/buildx/util/dockerutil"
|
|
||||||
"github.com/docker/buildx/util/imagetools"
|
|
||||||
"github.com/docker/buildx/util/platformutil"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/moby/buildkit/util/grpcerrors"
|
|
||||||
ocispecs "github.com/opencontainers/image-spec/specs-go/v1"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"github.com/sirupsen/logrus"
|
|
||||||
"golang.org/x/sync/errgroup"
|
|
||||||
"google.golang.org/grpc/codes"
|
|
||||||
)
|
|
||||||
|
|
||||||
type Node struct {
|
|
||||||
store.Node
|
|
||||||
Builder string
|
|
||||||
Driver *driver.DriverHandle
|
|
||||||
DriverInfo *driver.Info
|
|
||||||
Platforms []ocispecs.Platform
|
|
||||||
GCPolicy []client.PruneInfo
|
|
||||||
Labels map[string]string
|
|
||||||
ImageOpt imagetools.Opt
|
|
||||||
ProxyConfig map[string]string
|
|
||||||
Version string
|
|
||||||
Err error
|
|
||||||
}
|
|
||||||
|
|
||||||
// Nodes returns nodes for this builder.
|
|
||||||
func (b *Builder) Nodes() []Node {
|
|
||||||
return b.nodes
|
|
||||||
}
|
|
||||||
|
|
||||||
// LoadNodes loads and returns nodes for this builder.
|
|
||||||
// TODO: this should be a method on a Node object and lazy load data for each driver.
|
|
||||||
func (b *Builder) LoadNodes(ctx context.Context, withData bool) (_ []Node, err error) {
|
|
||||||
eg, _ := errgroup.WithContext(ctx)
|
|
||||||
b.nodes = make([]Node, len(b.NodeGroup.Nodes))
|
|
||||||
|
|
||||||
defer func() {
|
|
||||||
if b.err == nil && err != nil {
|
|
||||||
b.err = err
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
|
|
||||||
factory, err := b.Factory(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
imageopt, err := b.ImageOpt()
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
for i, n := range b.NodeGroup.Nodes {
|
|
||||||
func(i int, n store.Node) {
|
|
||||||
eg.Go(func() error {
|
|
||||||
node := Node{
|
|
||||||
Node: n,
|
|
||||||
ProxyConfig: storeutil.GetProxyConfig(b.opts.dockerCli),
|
|
||||||
Platforms: n.Platforms,
|
|
||||||
Builder: b.Name,
|
|
||||||
}
|
|
||||||
defer func() {
|
|
||||||
b.nodes[i] = node
|
|
||||||
}()
|
|
||||||
|
|
||||||
dockerapi, err := dockerutil.NewClientAPI(b.opts.dockerCli, n.Endpoint)
|
|
||||||
if err != nil {
|
|
||||||
node.Err = err
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
contextStore := b.opts.dockerCli.ContextStore()
|
|
||||||
|
|
||||||
var kcc driver.KubeClientConfig
|
|
||||||
kcc, err = ctxkube.ConfigFromEndpoint(n.Endpoint, contextStore)
|
|
||||||
if err != nil {
|
|
||||||
// err is returned if n.Endpoint is a non-context name like "unix:///var/run/docker.sock".
|
|
||||||
// try again with name="default".
|
|
||||||
// FIXME(@AkihiroSuda): n should retain real context name.
|
|
||||||
kcc, err = ctxkube.ConfigFromEndpoint("default", contextStore)
|
|
||||||
if err != nil {
|
|
||||||
logrus.Error(err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
tryToUseKubeConfigInCluster := false
|
|
||||||
if kcc == nil {
|
|
||||||
tryToUseKubeConfigInCluster = true
|
|
||||||
} else {
|
|
||||||
if _, err := kcc.ClientConfig(); err != nil {
|
|
||||||
tryToUseKubeConfigInCluster = true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if tryToUseKubeConfigInCluster {
|
|
||||||
kccInCluster := driver.KubeClientConfigInCluster{}
|
|
||||||
if _, err := kccInCluster.ClientConfig(); err == nil {
|
|
||||||
logrus.Debug("using kube config in cluster")
|
|
||||||
kcc = kccInCluster
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, factory, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.SecurityOpts, n.Platforms, b.opts.contextPathHash)
|
|
||||||
if err != nil {
|
|
||||||
node.Err = err
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
node.Driver = d
|
|
||||||
node.ImageOpt = imageopt
|
|
||||||
|
|
||||||
if withData {
|
|
||||||
if err := node.loadData(ctx); err != nil {
|
|
||||||
node.Err = err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
}(i, n)
|
|
||||||
}
|
|
||||||
|
|
||||||
if err := eg.Wait(); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
// TODO: This should be done in the routine loading driver data
|
|
||||||
if withData {
|
|
||||||
kubernetesDriverCount := 0
|
|
||||||
for _, d := range b.nodes {
|
|
||||||
if d.DriverInfo != nil && len(d.DriverInfo.DynamicNodes) > 0 {
|
|
||||||
kubernetesDriverCount++
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
isAllKubernetesDrivers := len(b.nodes) == kubernetesDriverCount
|
|
||||||
if isAllKubernetesDrivers {
|
|
||||||
var nodes []Node
|
|
||||||
var dynamicNodes []store.Node
|
|
||||||
for _, di := range b.nodes {
|
|
||||||
// dynamic nodes are used in Kubernetes driver.
|
|
||||||
// Kubernetes' pods are dynamically mapped to BuildKit Nodes.
|
|
||||||
if di.DriverInfo != nil && len(di.DriverInfo.DynamicNodes) > 0 {
|
|
||||||
for i := 0; i < len(di.DriverInfo.DynamicNodes); i++ {
|
|
||||||
diClone := di
|
|
||||||
if pl := di.DriverInfo.DynamicNodes[i].Platforms; len(pl) > 0 {
|
|
||||||
diClone.Platforms = pl
|
|
||||||
}
|
|
||||||
nodes = append(nodes, di)
|
|
||||||
}
|
|
||||||
dynamicNodes = append(dynamicNodes, di.DriverInfo.DynamicNodes...)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// not append (remove the static nodes in the store)
|
|
||||||
b.NodeGroup.Nodes = dynamicNodes
|
|
||||||
b.nodes = nodes
|
|
||||||
b.NodeGroup.Dynamic = true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return b.nodes, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (n *Node) loadData(ctx context.Context) error {
|
|
||||||
if n.Driver == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
info, err := n.Driver.Info(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
n.DriverInfo = info
|
|
||||||
if n.DriverInfo.Status == driver.Running {
|
|
||||||
driverClient, err := n.Driver.Client(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
workers, err := driverClient.ListWorkers(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return errors.Wrap(err, "listing workers")
|
|
||||||
}
|
|
||||||
for idx, w := range workers {
|
|
||||||
n.Platforms = append(n.Platforms, w.Platforms...)
|
|
||||||
if idx == 0 {
|
|
||||||
n.GCPolicy = w.GCPolicy
|
|
||||||
n.Labels = w.Labels
|
|
||||||
}
|
|
||||||
}
|
|
||||||
n.Platforms = platformutil.Dedupe(n.Platforms)
|
|
||||||
inf, err := driverClient.Info(ctx)
|
|
||||||
if err != nil {
|
|
||||||
if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
|
|
||||||
n.Version, err = n.Driver.Version(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return errors.Wrap(err, "getting version")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
n.Version = inf.BuildkitVersion.Version
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
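The loadData helper above prefers BuildKit's Info API and only falls back to the driver-reported version when the daemon answers with gRPC Unimplemented. A hedged sketch of that fallback in isolation, assuming the same client and grpcerrors packages this file already imports:

// Sketch: resolve a node's BuildKit version, tolerating older daemons.
func buildkitVersion(ctx context.Context, n *Node, c *client.Client) (string, error) {
	inf, err := c.Info(ctx)
	if err == nil {
		return inf.BuildkitVersion.Version, nil
	}
	if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
		return n.Driver.Version(ctx)
	}
	return "", err
}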
File diff suppressed because it is too large
@ -1,79 +0,0 @@
|
|||||||
package commands
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"os"
|
|
||||||
"runtime"
|
|
||||||
|
|
||||||
"github.com/containerd/console"
|
|
||||||
"github.com/docker/buildx/controller"
|
|
||||||
"github.com/docker/buildx/controller/control"
|
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/monitor"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/cli/cli/command"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"github.com/sirupsen/logrus"
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
)
|
|
||||||
|
|
||||||
func debugShellCmd(dockerCli command.Cli) *cobra.Command {
|
|
||||||
var options control.ControlOptions
|
|
||||||
var progressMode string
|
|
||||||
|
|
||||||
cmd := &cobra.Command{
|
|
||||||
Use: "debug-shell",
|
|
||||||
Short: "Start a monitor",
|
|
||||||
Annotations: map[string]string{
|
|
||||||
"experimentalCLI": "",
|
|
||||||
},
|
|
||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
|
||||||
printer, err := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, progressMode)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
ctx := context.TODO()
|
|
||||||
c, err := controller.NewController(ctx, options, dockerCli, printer)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer func() {
|
|
||||||
if err := c.Close(); err != nil {
|
|
||||||
logrus.Warnf("failed to close server connection %v", err)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
con := console.Current()
|
|
||||||
if err := con.SetRaw(); err != nil {
|
|
||||||
return errors.Errorf("failed to configure terminal: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
err = monitor.RunMonitor(ctx, "", nil, controllerapi.InvokeConfig{
|
|
||||||
Tty: true,
|
|
||||||
}, c, dockerCli.In(), os.Stdout, os.Stderr, printer)
|
|
||||||
con.Reset()
|
|
||||||
return err
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
flags := cmd.Flags()
|
|
||||||
|
|
||||||
flags.StringVar(&options.Root, "root", "", "Specify root directory of server to connect")
|
|
||||||
flags.SetAnnotation("root", "experimentalCLI", nil)
|
|
||||||
|
|
||||||
flags.BoolVar(&options.Detach, "detach", runtime.GOOS == "linux", "Detach buildx server (supported only on linux)")
|
|
||||||
flags.SetAnnotation("detach", "experimentalCLI", nil)
|
|
||||||
|
|
||||||
flags.StringVar(&options.ServerConfig, "server-config", "", "Specify buildx server config file (used only when launching new server)")
|
|
||||||
flags.SetAnnotation("server-config", "experimentalCLI", nil)
|
|
||||||
|
|
||||||
flags.StringVar(&progressMode, "progress", "auto", `Set type of progress output ("auto", "plain", "tty"). Use plain to show container output`)
|
|
||||||
|
|
||||||
return cmd
|
|
||||||
}
|
|
||||||
|
|
||||||
func addDebugShellCommand(cmd *cobra.Command, dockerCli command.Cli) {
|
|
||||||
cmd.AddCommand(
|
|
||||||
debugShellCmd(dockerCli),
|
|
||||||
)
|
|
||||||
}
|
|
||||||
@ -0,0 +1,48 @@
package commands

import (
	"fmt"
	"io"
	"log"
	"os"

	"github.com/docker/buildx/build"
	"github.com/docker/docker/api/types/versions"
	"github.com/moby/buildkit/frontend/subrequests"
	"github.com/moby/buildkit/frontend/subrequests/outline"
	"github.com/moby/buildkit/frontend/subrequests/targets"
)

func printResult(f *build.PrintFunc, res map[string]string) error {
	switch f.Name {
	case "outline":
		return printValue(outline.PrintOutline, outline.SubrequestsOutlineDefinition.Version, f.Format, res)
	case "targets":
		return printValue(targets.PrintTargets, targets.SubrequestsTargetsDefinition.Version, f.Format, res)
	case "subrequests.describe":
		return printValue(subrequests.PrintDescribe, subrequests.SubrequestsDescribeDefinition.Version, f.Format, res)
	default:
		if dt, ok := res["result.txt"]; ok {
			fmt.Print(dt)
		} else {
			log.Printf("%s %+v", f, res)
		}
	}
	return nil
}

type printFunc func([]byte, io.Writer) error

func printValue(printer printFunc, version string, format string, res map[string]string) error {
	if format == "json" {
		fmt.Fprintln(os.Stdout, res["result.json"])
		return nil
	}

	if res["version"] != "" && versions.LessThan(version, res["version"]) && res["result.txt"] != "" {
		// structure is too new and we don't know how to print it
		fmt.Fprint(os.Stdout, res["result.txt"])
		return nil
	}
	return printer([]byte(res["result.json"]), os.Stdout)
}
|
||||||
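A hedged usage fragment for the helpers above (the literal JSON is a placeholder, not a real outline payload); with Format set to "json", printValue simply echoes result.json, so the payload shape does not matter here:

// Fragment: print an "outline" subrequest result as raw JSON.
res := map[string]string{
	"result.json": `{"name": "default"}`,
	"version":     outline.SubrequestsOutlineDefinition.Version,
}
if err := printResult(&build.PrintFunc{Name: "outline", Format: "json"}, res); err != nil {
	log.Fatal(err)
}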
@ -0,0 +1,487 @@
|
|||||||
|
package commands
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"net/url"
|
||||||
|
"os"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/docker/buildx/build"
|
||||||
|
"github.com/docker/buildx/driver"
|
||||||
|
ctxkube "github.com/docker/buildx/driver/kubernetes/context"
|
||||||
|
remoteutil "github.com/docker/buildx/driver/remote/util"
|
||||||
|
"github.com/docker/buildx/store"
|
||||||
|
"github.com/docker/buildx/store/storeutil"
|
||||||
|
"github.com/docker/buildx/util/platformutil"
|
||||||
|
"github.com/docker/buildx/util/progress"
|
||||||
|
"github.com/docker/cli/cli/command"
|
||||||
|
"github.com/docker/cli/cli/context/docker"
|
||||||
|
ctxstore "github.com/docker/cli/cli/context/store"
|
||||||
|
dopts "github.com/docker/cli/opts"
|
||||||
|
dockerclient "github.com/docker/docker/client"
|
||||||
|
"github.com/moby/buildkit/util/grpcerrors"
|
||||||
|
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
"github.com/sirupsen/logrus"
|
||||||
|
"golang.org/x/sync/errgroup"
|
||||||
|
"google.golang.org/grpc/codes"
|
||||||
|
"k8s.io/client-go/tools/clientcmd"
|
||||||
|
)
|
||||||
|
|
||||||
|
// validateEndpoint validates that endpoint is either a context or a docker host
|
||||||
|
func validateEndpoint(dockerCli command.Cli, ep string) (string, error) {
|
||||||
|
de, err := storeutil.GetDockerEndpoint(dockerCli, ep)
|
||||||
|
if err == nil && de != "" {
|
||||||
|
if ep == "default" {
|
||||||
|
return de, nil
|
||||||
|
}
|
||||||
|
return ep, nil
|
||||||
|
}
|
||||||
|
h, err := dopts.ParseHost(true, ep)
|
||||||
|
if err != nil {
|
||||||
|
return "", errors.Wrapf(err, "failed to parse endpoint %s", ep)
|
||||||
|
}
|
||||||
|
return h, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// validateBuildkitEndpoint validates that endpoint is a valid buildkit host
|
||||||
|
func validateBuildkitEndpoint(ep string) (string, error) {
|
||||||
|
if err := remoteutil.IsValidEndpoint(ep); err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
return ep, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// driversForNodeGroup returns drivers for a nodegroup instance
|
||||||
|
func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, contextPathHash string) ([]build.DriverInfo, error) {
|
||||||
|
eg, _ := errgroup.WithContext(ctx)
|
||||||
|
|
||||||
|
dis := make([]build.DriverInfo, len(ng.Nodes))
|
||||||
|
|
||||||
|
var f driver.Factory
|
||||||
|
if ng.Driver != "" {
|
||||||
|
var err error
|
||||||
|
f, err = driver.GetFactory(ng.Driver, true)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// empty driver means nodegroup was implicitly created as a default
|
||||||
|
// driver for a docker context and allows falling back to a
|
||||||
|
// docker-container driver for older daemon that doesn't support
|
||||||
|
// buildkit (< 18.06).
|
||||||
|
ep := ng.Nodes[0].Endpoint
|
||||||
|
dockerapi, err := clientForEndpoint(dockerCli, ep)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
// checking if the endpoint is healthy is needed to determine the driver type.
// if this fails then we can't continue with driver selection.
|
||||||
|
if _, err = dockerapi.Ping(ctx); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
f, err = driver.GetDefaultFactory(ctx, ep, dockerapi, false)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
ng.Driver = f.Name()
|
||||||
|
}
|
||||||
|
imageopt, err := storeutil.GetImageConfig(dockerCli, ng)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
for i, n := range ng.Nodes {
|
||||||
|
func(i int, n store.Node) {
|
||||||
|
eg.Go(func() error {
|
||||||
|
di := build.DriverInfo{
|
||||||
|
Name: n.Name,
|
||||||
|
Platform: n.Platforms,
|
||||||
|
ProxyConfig: storeutil.GetProxyConfig(dockerCli),
|
||||||
|
}
|
||||||
|
defer func() {
|
||||||
|
dis[i] = di
|
||||||
|
}()
|
||||||
|
|
||||||
|
dockerapi, err := clientForEndpoint(dockerCli, n.Endpoint)
|
||||||
|
if err != nil {
|
||||||
|
di.Err = err
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
// TODO: replace the following line with dockerclient.WithAPIVersionNegotiation option in clientForEndpoint
|
||||||
|
dockerapi.NegotiateAPIVersion(ctx)
|
||||||
|
|
||||||
|
contextStore := dockerCli.ContextStore()
|
||||||
|
|
||||||
|
var kcc driver.KubeClientConfig
|
||||||
|
kcc, err = configFromContext(n.Endpoint, contextStore)
|
||||||
|
if err != nil {
|
||||||
|
// err is returned if n.Endpoint is a non-context name like "unix:///var/run/docker.sock".
|
||||||
|
// try again with name="default".
|
||||||
|
// FIXME: n should retain real context name.
|
||||||
|
kcc, err = configFromContext("default", contextStore)
|
||||||
|
if err != nil {
|
||||||
|
logrus.Error(err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
tryToUseKubeConfigInCluster := false
|
||||||
|
if kcc == nil {
|
||||||
|
tryToUseKubeConfigInCluster = true
|
||||||
|
} else {
|
||||||
|
if _, err := kcc.ClientConfig(); err != nil {
|
||||||
|
tryToUseKubeConfigInCluster = true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if tryToUseKubeConfigInCluster {
|
||||||
|
kccInCluster := driver.KubeClientConfigInCluster{}
|
||||||
|
if _, err := kccInCluster.ClientConfig(); err == nil {
|
||||||
|
logrus.Debug("using kube config in cluster")
|
||||||
|
kcc = kccInCluster
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, n.Endpoint, dockerapi, imageopt.Auth, kcc, n.Flags, n.Files, n.DriverOpts, n.Platforms, contextPathHash)
|
||||||
|
if err != nil {
|
||||||
|
di.Err = err
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
di.Driver = d
|
||||||
|
di.ImageOpt = imageopt
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
}(i, n)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := eg.Wait(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return dis, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func configFromContext(endpointName string, s ctxstore.Reader) (clientcmd.ClientConfig, error) {
|
||||||
|
if strings.HasPrefix(endpointName, "kubernetes://") {
|
||||||
|
u, _ := url.Parse(endpointName)
|
||||||
|
if kubeconfig := u.Query().Get("kubeconfig"); kubeconfig != "" {
|
||||||
|
_ = os.Setenv(clientcmd.RecommendedConfigPathEnvVar, kubeconfig)
|
||||||
|
}
|
||||||
|
rules := clientcmd.NewDefaultClientConfigLoadingRules()
|
||||||
|
apiConfig, err := rules.Load()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return clientcmd.NewDefaultClientConfig(*apiConfig, &clientcmd.ConfigOverrides{}), nil
|
||||||
|
}
|
||||||
|
return ctxkube.ConfigFromContext(endpointName, s)
|
||||||
|
}
|
||||||
|
|
||||||
|
// clientForEndpoint returns a docker client for an endpoint
|
||||||
|
func clientForEndpoint(dockerCli command.Cli, name string) (dockerclient.APIClient, error) {
|
||||||
|
list, err := dockerCli.ContextStore().List()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
for _, l := range list {
|
||||||
|
if l.Name == name {
|
||||||
|
dep, ok := l.Endpoints["docker"]
|
||||||
|
if !ok {
|
||||||
|
return nil, errors.Errorf("context %q does not have a Docker endpoint", name)
|
||||||
|
}
|
||||||
|
epm, ok := dep.(docker.EndpointMeta)
|
||||||
|
if !ok {
|
||||||
|
return nil, errors.Errorf("endpoint %q is not of type EndpointMeta, %T", dep, dep)
|
||||||
|
}
|
||||||
|
ep, err := docker.WithTLSData(dockerCli.ContextStore(), name, epm)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
clientOpts, err := ep.ClientOpts()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return dockerclient.NewClientWithOpts(clientOpts...)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
ep := docker.Endpoint{
|
||||||
|
EndpointMeta: docker.EndpointMeta{
|
||||||
|
Host: name,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
clientOpts, err := ep.ClientOpts()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return dockerclient.NewClientWithOpts(clientOpts...)
|
||||||
|
}
|
||||||
|
|
||||||
|
func getInstanceOrDefault(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
|
||||||
|
var defaultOnly bool
|
||||||
|
|
||||||
|
if instance == "default" && instance != dockerCli.CurrentContext() {
|
||||||
|
return nil, errors.Errorf("use `docker --context=default buildx` to switch to default context")
|
||||||
|
}
|
||||||
|
if instance == "default" || instance == dockerCli.CurrentContext() {
|
||||||
|
instance = ""
|
||||||
|
defaultOnly = true
|
||||||
|
}
|
||||||
|
list, err := dockerCli.ContextStore().List()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
for _, l := range list {
|
||||||
|
if l.Name == instance {
|
||||||
|
return nil, errors.Errorf("use `docker --context=%s buildx` to switch to context %s", instance, instance)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if instance != "" {
|
||||||
|
return getInstanceByName(ctx, dockerCli, instance, contextPathHash)
|
||||||
|
}
|
||||||
|
return getDefaultDrivers(ctx, dockerCli, defaultOnly, contextPathHash)
|
||||||
|
}
|
||||||
|
|
||||||
|
func getInstanceByName(ctx context.Context, dockerCli command.Cli, instance, contextPathHash string) ([]build.DriverInfo, error) {
|
||||||
|
txn, release, err := storeutil.GetStore(dockerCli)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
defer release()
|
||||||
|
|
||||||
|
ng, err := txn.NodeGroupByName(instance)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
|
||||||
|
}
|
||||||
|
|
||||||
|
// getDefaultDrivers returns drivers based on current cli config
|
||||||
|
func getDefaultDrivers(ctx context.Context, dockerCli command.Cli, defaultOnly bool, contextPathHash string) ([]build.DriverInfo, error) {
|
||||||
|
txn, release, err := storeutil.GetStore(dockerCli)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
defer release()
|
||||||
|
|
||||||
|
if !defaultOnly {
|
||||||
|
ng, err := storeutil.GetCurrentInstance(txn, dockerCli)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
if ng != nil {
|
||||||
|
return driversForNodeGroup(ctx, dockerCli, ng, contextPathHash)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
imageopt, err := storeutil.GetImageConfig(dockerCli, nil)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, "", dockerCli.Client(), imageopt.Auth, nil, nil, nil, nil, nil, contextPathHash)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return []build.DriverInfo{
|
||||||
|
{
|
||||||
|
Name: "default",
|
||||||
|
Driver: d,
|
||||||
|
ImageOpt: imageopt,
|
||||||
|
ProxyConfig: storeutil.GetProxyConfig(dockerCli),
|
||||||
|
},
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func loadInfoData(ctx context.Context, d *dinfo) error {
|
||||||
|
if d.di.Driver == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
info, err := d.di.Driver.Info(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
d.info = info
|
||||||
|
if info.Status == driver.Running {
|
||||||
|
c, err := d.di.Driver.Client(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
workers, err := c.ListWorkers(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return errors.Wrap(err, "listing workers")
|
||||||
|
}
|
||||||
|
for _, w := range workers {
|
||||||
|
d.platforms = append(d.platforms, w.Platforms...)
|
||||||
|
}
|
||||||
|
d.platforms = platformutil.Dedupe(d.platforms)
|
||||||
|
inf, err := c.Info(ctx)
|
||||||
|
if err != nil {
|
||||||
|
if st, ok := grpcerrors.AsGRPCStatus(err); ok && st.Code() == codes.Unimplemented {
|
||||||
|
d.version, err = d.di.Driver.Version(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return errors.Wrap(err, "getting version")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
d.version = inf.BuildkitVersion.Version
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func loadNodeGroupData(ctx context.Context, dockerCli command.Cli, ngi *nginfo) error {
|
||||||
|
eg, _ := errgroup.WithContext(ctx)
|
||||||
|
|
||||||
|
dis, err := driversForNodeGroup(ctx, dockerCli, ngi.ng, "")
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
ngi.drivers = make([]dinfo, len(dis))
|
||||||
|
for i, di := range dis {
|
||||||
|
d := di
|
||||||
|
ngi.drivers[i].di = &d
|
||||||
|
func(d *dinfo) {
|
||||||
|
eg.Go(func() error {
|
||||||
|
if err := loadInfoData(ctx, d); err != nil {
|
||||||
|
d.err = err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
}(&ngi.drivers[i])
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := eg.Wait(); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
kubernetesDriverCount := 0
|
||||||
|
|
||||||
|
for _, di := range ngi.drivers {
|
||||||
|
if di.info != nil && len(di.info.DynamicNodes) > 0 {
|
||||||
|
kubernetesDriverCount++
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
isAllKubernetesDrivers := len(ngi.drivers) == kubernetesDriverCount
|
||||||
|
|
||||||
|
if isAllKubernetesDrivers {
|
||||||
|
var drivers []dinfo
|
||||||
|
var dynamicNodes []store.Node
|
||||||
|
|
||||||
|
for _, di := range ngi.drivers {
|
||||||
|
// dynamic nodes are used in Kubernetes driver.
|
||||||
|
// Kubernetes pods are dynamically mapped to BuildKit Nodes.
|
||||||
|
if di.info != nil && len(di.info.DynamicNodes) > 0 {
|
||||||
|
for i := 0; i < len(di.info.DynamicNodes); i++ {
|
||||||
|
// all []dinfo share *build.DriverInfo and *driver.Info
|
||||||
|
diClone := di
|
||||||
|
if pl := di.info.DynamicNodes[i].Platforms; len(pl) > 0 {
|
||||||
|
diClone.platforms = pl
|
||||||
|
}
|
||||||
|
drivers = append(drivers, di)
|
||||||
|
}
|
||||||
|
dynamicNodes = append(dynamicNodes, di.info.DynamicNodes...)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// not append (remove the static nodes in the store)
|
||||||
|
ngi.ng.Nodes = dynamicNodes
|
||||||
|
ngi.drivers = drivers
|
||||||
|
ngi.ng.Dynamic = true
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func hasNodeGroup(list []*nginfo, ngi *nginfo) bool {
|
||||||
|
for _, l := range list {
|
||||||
|
if ngi.ng.Name == l.ng.Name {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
func dockerAPI(dockerCli command.Cli) *api {
|
||||||
|
return &api{dockerCli: dockerCli}
|
||||||
|
}
|
||||||
|
|
||||||
|
type api struct {
|
||||||
|
dockerCli command.Cli
|
||||||
|
}
|
||||||
|
|
||||||
|
func (a *api) DockerAPI(name string) (dockerclient.APIClient, error) {
|
||||||
|
if name == "" {
|
||||||
|
name = a.dockerCli.CurrentContext()
|
||||||
|
}
|
||||||
|
return clientForEndpoint(a.dockerCli, name)
|
||||||
|
}
|
||||||
|
|
||||||
|
type dinfo struct {
|
||||||
|
di *build.DriverInfo
|
||||||
|
info *driver.Info
|
||||||
|
platforms []specs.Platform
|
||||||
|
version string
|
||||||
|
err error
|
||||||
|
}
|
||||||
|
|
||||||
|
type nginfo struct {
|
||||||
|
ng *store.NodeGroup
|
||||||
|
drivers []dinfo
|
||||||
|
err error
|
||||||
|
}
|
||||||
|
|
||||||
|
// inactive checks if all nodes are inactive for this builder
|
||||||
|
func (n *nginfo) inactive() bool {
|
||||||
|
for idx := range n.ng.Nodes {
|
||||||
|
d := n.drivers[idx]
|
||||||
|
if d.info != nil && d.info.Status == driver.Running {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
func boot(ctx context.Context, ngi *nginfo) (bool, error) {
|
||||||
|
toBoot := make([]int, 0, len(ngi.drivers))
|
||||||
|
for i, d := range ngi.drivers {
|
||||||
|
if d.err != nil || d.di.Err != nil || d.di.Driver == nil || d.info == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if d.info.Status != driver.Running {
|
||||||
|
toBoot = append(toBoot, i)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(toBoot) == 0 {
|
||||||
|
return false, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
printer := progress.NewPrinter(context.TODO(), os.Stderr, os.Stderr, "auto")
|
||||||
|
|
||||||
|
baseCtx := ctx
|
||||||
|
eg, _ := errgroup.WithContext(ctx)
|
||||||
|
for _, idx := range toBoot {
|
||||||
|
func(idx int) {
|
||||||
|
eg.Go(func() error {
|
||||||
|
pw := progress.WithPrefix(printer, ngi.ng.Nodes[idx].Name, len(toBoot) > 1)
|
||||||
|
_, err := driver.Boot(ctx, baseCtx, ngi.drivers[idx].di.Driver, pw)
|
||||||
|
if err != nil {
|
||||||
|
ngi.drivers[idx].err = err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
}(idx)
|
||||||
|
}
|
||||||
|
|
||||||
|
err := eg.Wait()
|
||||||
|
err1 := printer.Wait()
|
||||||
|
if err == nil {
|
||||||
|
err = err1
|
||||||
|
}
|
||||||
|
|
||||||
|
return true, err
|
||||||
|
}
|
||||||
@ -1,267 +0,0 @@
|
|||||||
package build
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"io"
|
|
||||||
"os"
|
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
|
||||||
"sync"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/build"
|
|
||||||
"github.com/docker/buildx/builder"
|
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/store"
|
|
||||||
"github.com/docker/buildx/store/storeutil"
|
|
||||||
"github.com/docker/buildx/util/buildflags"
|
|
||||||
"github.com/docker/buildx/util/confutil"
|
|
||||||
"github.com/docker/buildx/util/dockerutil"
|
|
||||||
"github.com/docker/buildx/util/platformutil"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/cli/cli/command"
|
|
||||||
"github.com/docker/cli/cli/config"
|
|
||||||
dockeropts "github.com/docker/cli/opts"
|
|
||||||
"github.com/docker/go-units"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/moby/buildkit/session/auth/authprovider"
|
|
||||||
"github.com/moby/buildkit/util/grpcerrors"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"google.golang.org/grpc/codes"
|
|
||||||
)
|
|
||||||
|
|
||||||
const defaultTargetName = "default"
|
|
||||||
|
|
||||||
// RunBuild runs the specified build and returns the result.
|
|
||||||
//
|
|
||||||
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
|
|
||||||
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
|
|
||||||
// inspect the result and debug the cause of that error.
|
|
||||||
func RunBuild(ctx context.Context, dockerCli command.Cli, in controllerapi.BuildOptions, inStream io.Reader, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
|
|
||||||
if in.NoCache && len(in.NoCacheFilter) > 0 {
|
|
||||||
return nil, nil, errors.Errorf("--no-cache and --no-cache-filter cannot currently be used together")
|
|
||||||
}
|
|
||||||
|
|
||||||
contexts := map[string]build.NamedContext{}
|
|
||||||
for name, path := range in.NamedContexts {
|
|
||||||
contexts[name] = build.NamedContext{Path: path}
|
|
||||||
}
|
|
||||||
|
|
||||||
opts := build.Options{
|
|
||||||
Inputs: build.Inputs{
|
|
||||||
ContextPath: in.ContextPath,
|
|
||||||
DockerfilePath: in.DockerfileName,
|
|
||||||
InStream: inStream,
|
|
||||||
NamedContexts: contexts,
|
|
||||||
},
|
|
||||||
BuildArgs: in.BuildArgs,
|
|
||||||
CgroupParent: in.CgroupParent,
|
|
||||||
ExtraHosts: in.ExtraHosts,
|
|
||||||
Labels: in.Labels,
|
|
||||||
NetworkMode: in.NetworkMode,
|
|
||||||
NoCache: in.NoCache,
|
|
||||||
NoCacheFilter: in.NoCacheFilter,
|
|
||||||
Pull: in.Pull,
|
|
||||||
ShmSize: dockeropts.MemBytes(in.ShmSize),
|
|
||||||
Tags: in.Tags,
|
|
||||||
Target: in.Target,
|
|
||||||
Ulimits: controllerUlimitOpt2DockerUlimit(in.Ulimits),
|
|
||||||
}
|
|
||||||
|
|
||||||
platforms, err := platformutil.Parse(in.Platforms)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
opts.Platforms = platforms
|
|
||||||
|
|
||||||
dockerConfig := config.LoadDefaultConfigFile(os.Stderr)
|
|
||||||
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(dockerConfig))
|
|
||||||
|
|
||||||
secrets, err := controllerapi.CreateSecrets(in.Secrets)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
opts.Session = append(opts.Session, secrets)
|
|
||||||
|
|
||||||
sshSpecs := in.SSH
|
|
||||||
if len(sshSpecs) == 0 && buildflags.IsGitSSH(in.ContextPath) {
|
|
||||||
sshSpecs = append(sshSpecs, &controllerapi.SSH{ID: "default"})
|
|
||||||
}
|
|
||||||
ssh, err := controllerapi.CreateSSH(sshSpecs)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
opts.Session = append(opts.Session, ssh)
|
|
||||||
|
|
||||||
outputs, err := controllerapi.CreateExports(in.Exports)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
if in.ExportPush {
|
|
||||||
if in.ExportLoad {
|
|
||||||
return nil, nil, errors.Errorf("push and load may not be set together at the moment")
|
|
||||||
}
|
|
||||||
if len(outputs) == 0 {
|
|
||||||
outputs = []client.ExportEntry{{
|
|
||||||
Type: "image",
|
|
||||||
Attrs: map[string]string{
|
|
||||||
"push": "true",
|
|
||||||
},
|
|
||||||
}}
|
|
||||||
} else {
|
|
||||||
switch outputs[0].Type {
|
|
||||||
case "image":
|
|
||||||
outputs[0].Attrs["push"] = "true"
|
|
||||||
default:
|
|
||||||
return nil, nil, errors.Errorf("push and %q output can't be used together", outputs[0].Type)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if in.ExportLoad {
|
|
||||||
if len(outputs) == 0 {
|
|
||||||
outputs = []client.ExportEntry{{
|
|
||||||
Type: "docker",
|
|
||||||
Attrs: map[string]string{},
|
|
||||||
}}
|
|
||||||
} else {
|
|
||||||
switch outputs[0].Type {
|
|
||||||
case "docker":
|
|
||||||
default:
|
|
||||||
return nil, nil, errors.Errorf("load and %q output can't be used together", outputs[0].Type)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
opts.Exports = outputs
|
|
||||||
|
|
||||||
opts.CacheFrom = controllerapi.CreateCaches(in.CacheFrom)
|
|
||||||
opts.CacheTo = controllerapi.CreateCaches(in.CacheTo)
|
|
||||||
|
|
||||||
opts.Attests = controllerapi.CreateAttestations(in.Attests)
|
|
||||||
|
|
||||||
opts.SourcePolicy = in.SourcePolicy
|
|
||||||
|
|
||||||
allow, err := buildflags.ParseEntitlements(in.Allow)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
opts.Allow = allow
|
|
||||||
|
|
||||||
if in.PrintFunc != nil {
|
|
||||||
opts.PrintFunc = &build.PrintFunc{
|
|
||||||
Name: in.PrintFunc.Name,
|
|
||||||
Format: in.PrintFunc.Format,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// key string used for kubernetes "sticky" mode
|
|
||||||
contextPathHash, err := filepath.Abs(in.ContextPath)
|
|
||||||
if err != nil {
|
|
||||||
contextPathHash = in.ContextPath
|
|
||||||
}
|
|
||||||
|
|
||||||
// TODO: this should not be loaded this side of the controller api
|
|
||||||
b, err := builder.New(dockerCli,
|
|
||||||
builder.WithName(in.Builder),
|
|
||||||
builder.WithContextPathHash(contextPathHash),
|
|
||||||
)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
if err = updateLastActivity(dockerCli, b.NodeGroup); err != nil {
|
|
||||||
return nil, nil, errors.Wrapf(err, "failed to update builder last activity time")
|
|
||||||
}
|
|
||||||
nodes, err := b.LoadNodes(ctx, false)
|
|
||||||
if err != nil {
|
|
||||||
return nil, nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
resp, res, err := buildTargets(ctx, dockerCli, b.NodeGroup, nodes, map[string]build.Options{defaultTargetName: opts}, progress, generateResult)
|
|
||||||
err = wrapBuildError(err, false)
|
|
||||||
if err != nil {
|
|
||||||
// NOTE: buildTargets can return *build.ResultHandle even on error.
|
|
||||||
return nil, res, err
|
|
||||||
}
|
|
||||||
return resp, res, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// buildTargets runs the specified build and returns the result.
|
|
||||||
//
|
|
||||||
// NOTE: When an error happens during the build and this function acquires the debuggable *build.ResultHandle,
|
|
||||||
// this function returns it in addition to the error (i.e. it does "return nil, res, err"). The caller can
|
|
||||||
// inspect the result and debug the cause of that error.
|
|
||||||
func buildTargets(ctx context.Context, dockerCli command.Cli, ng *store.NodeGroup, nodes []builder.Node, opts map[string]build.Options, progress progress.Writer, generateResult bool) (*client.SolveResponse, *build.ResultHandle, error) {
|
|
||||||
var res *build.ResultHandle
|
|
||||||
var resp map[string]*client.SolveResponse
|
|
||||||
var err error
|
|
||||||
if generateResult {
|
|
||||||
var mu sync.Mutex
|
|
||||||
var idx int
|
|
||||||
resp, err = build.BuildWithResultHandler(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress, func(driverIndex int, gotRes *build.ResultHandle) {
|
|
||||||
mu.Lock()
|
|
||||||
defer mu.Unlock()
|
|
||||||
if res == nil || driverIndex < idx {
|
|
||||||
idx, res = driverIndex, gotRes
|
|
||||||
}
|
|
||||||
})
|
|
||||||
} else {
|
|
||||||
resp, err = build.Build(ctx, nodes, opts, dockerutil.NewClient(dockerCli), confutil.ConfigDir(dockerCli), progress)
|
|
||||||
}
|
|
||||||
if err != nil {
|
|
||||||
return nil, res, err
|
|
||||||
}
|
|
||||||
return resp[defaultTargetName], res, err
|
|
||||||
}
|
|
||||||
|
|
||||||
func wrapBuildError(err error, bake bool) error {
|
|
||||||
if err == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
st, ok := grpcerrors.AsGRPCStatus(err)
|
|
||||||
if ok {
|
|
||||||
if st.Code() == codes.Unimplemented && strings.Contains(st.Message(), "unsupported frontend capability moby.buildkit.frontend.contexts") {
|
|
||||||
msg := "current frontend does not support --build-context."
|
|
||||||
if bake {
|
|
||||||
msg = "current frontend does not support defining additional contexts for targets."
|
|
||||||
}
|
|
||||||
msg += " Named contexts are supported since Dockerfile v1.4. Use #syntax directive in Dockerfile or update to latest BuildKit."
|
|
||||||
return &wrapped{err, msg}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
type wrapped struct {
|
|
||||||
err error
|
|
||||||
msg string
|
|
||||||
}
|
|
||||||
|
|
||||||
func (w *wrapped) Error() string {
|
|
||||||
return w.msg
|
|
||||||
}
|
|
||||||
|
|
||||||
func (w *wrapped) Unwrap() error {
|
|
||||||
return w.err
|
|
||||||
}
|
|
||||||
|
|
||||||
func updateLastActivity(dockerCli command.Cli, ng *store.NodeGroup) error {
|
|
||||||
txn, release, err := storeutil.GetStore(dockerCli)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer release()
|
|
||||||
return txn.UpdateLastActivity(ng)
|
|
||||||
}
|
|
||||||
|
|
||||||
func controllerUlimitOpt2DockerUlimit(u *controllerapi.UlimitOpt) *dockeropts.UlimitOpt {
|
|
||||||
if u == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
values := make(map[string]*units.Ulimit)
|
|
||||||
for k, v := range u.Values {
|
|
||||||
values[k] = &units.Ulimit{
|
|
||||||
Name: v.Name,
|
|
||||||
Hard: v.Hard,
|
|
||||||
Soft: v.Soft,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return dockeropts.NewUlimitOpt(&values)
|
|
||||||
}
|
|
||||||
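As the NOTE above spells out, RunBuild can return a usable *build.ResultHandle alongside an error. A hedged fragment of the calling pattern (it mirrors localController.Build later in this changeset; the variable names and the cbuild import alias are taken from that file, everything else is illustrative):

// Fragment: keep the result handle for debugging even when the build fails.
resp, res, err := cbuild.RunBuild(ctx, dockerCli, options, os.Stdin, printer, true)
if res != nil {
	defer res.Done() // release the handle once debugging is finished
}
if err != nil {
	return err
}
_ = resp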
@ -1,32 +0,0 @@
|
|||||||
package control
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"io"
|
|
||||||
|
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
)
|
|
||||||
|
|
||||||
type BuildxController interface {
|
|
||||||
Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (ref string, resp *client.SolveResponse, err error)
|
|
||||||
// Invoke starts an IO session into the specified process.
|
|
||||||
// If pid doesn't match any running process, it starts a new process with the specified config.
// If there is no container running or InvokeConfig.Rollback is specified, the process will start in a newly created container.
|
|
||||||
// NOTE: If needed, in the future, we can split this API into three APIs (NewContainer, NewProcess and Attach).
|
|
||||||
Invoke(ctx context.Context, ref, pid string, options controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error
|
|
||||||
Kill(ctx context.Context) error
|
|
||||||
Close() error
|
|
||||||
List(ctx context.Context) (refs []string, _ error)
|
|
||||||
Disconnect(ctx context.Context, ref string) error
|
|
||||||
ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error)
|
|
||||||
DisconnectProcess(ctx context.Context, ref, pid string) error
|
|
||||||
Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error)
|
|
||||||
}
|
|
||||||
|
|
||||||
type ControlOptions struct {
|
|
||||||
ServerConfig string
|
|
||||||
Root string
|
|
||||||
Detach bool
|
|
||||||
}
|
|
||||||
@ -1,36 +0,0 @@
|
|||||||
package controller
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"fmt"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/controller/control"
|
|
||||||
"github.com/docker/buildx/controller/local"
|
|
||||||
"github.com/docker/buildx/controller/remote"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/cli/cli/command"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
)
|
|
||||||
|
|
||||||
func NewController(ctx context.Context, opts control.ControlOptions, dockerCli command.Cli, pw progress.Writer) (control.BuildxController, error) {
|
|
||||||
var name string
|
|
||||||
if opts.Detach {
|
|
||||||
name = "remote"
|
|
||||||
} else {
|
|
||||||
name = "local"
|
|
||||||
}
|
|
||||||
|
|
||||||
var c control.BuildxController
|
|
||||||
err := progress.Wrap(fmt.Sprintf("[internal] connecting to %s controller", name), pw.Write, func(l progress.SubLogger) (err error) {
|
|
||||||
if opts.Detach {
|
|
||||||
c, err = remote.NewRemoteBuildxController(ctx, dockerCli, opts, l)
|
|
||||||
} else {
|
|
||||||
c = local.NewLocalBuildxController(ctx, dockerCli, l)
|
|
||||||
}
|
|
||||||
return err
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
return nil, errors.Wrap(err, "failed to start buildx controller")
|
|
||||||
}
|
|
||||||
return c, nil
|
|
||||||
}
|
|
||||||
@ -1,34 +0,0 @@
|
|||||||
package errdefs
|
|
||||||
|
|
||||||
import (
|
|
||||||
"github.com/containerd/typeurl/v2"
|
|
||||||
"github.com/moby/buildkit/util/grpcerrors"
|
|
||||||
)
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
typeurl.Register((*Build)(nil), "github.com/docker/buildx", "errdefs.Build+json")
|
|
||||||
}
|
|
||||||
|
|
||||||
type BuildError struct {
|
|
||||||
Build
|
|
||||||
error
|
|
||||||
}
|
|
||||||
|
|
||||||
func (e *BuildError) Unwrap() error {
|
|
||||||
return e.error
|
|
||||||
}
|
|
||||||
|
|
||||||
func (e *BuildError) ToProto() grpcerrors.TypedErrorProto {
|
|
||||||
return &e.Build
|
|
||||||
}
|
|
||||||
|
|
||||||
func WrapBuild(err error, ref string) error {
|
|
||||||
if err == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
return &BuildError{Build: Build{Ref: ref}, error: err}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *Build) WrapError(err error) error {
|
|
||||||
return &BuildError{error: err, Build: *b}
|
|
||||||
}
|
|
||||||
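A hedged fragment showing how a consumer might recover the build ref carried by WrapBuild, using the standard library errors.As on the *BuildError defined above:

// Fragment: Ref is promoted from the embedded Build message.
var be *BuildError
if errors.As(err, &be) {
	fmt.Println("failed build ref:", be.Ref)
}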
@ -1,77 +0,0 @@
|
|||||||
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
|
||||||
// source: errdefs.proto
|
|
||||||
|
|
||||||
package errdefs
|
|
||||||
|
|
||||||
import (
|
|
||||||
fmt "fmt"
|
|
||||||
proto "github.com/gogo/protobuf/proto"
|
|
||||||
_ "github.com/moby/buildkit/solver/pb"
|
|
||||||
math "math"
|
|
||||||
)
|
|
||||||
|
|
||||||
// Reference imports to suppress errors if they are not otherwise used.
|
|
||||||
var _ = proto.Marshal
|
|
||||||
var _ = fmt.Errorf
|
|
||||||
var _ = math.Inf
|
|
||||||
|
|
||||||
// This is a compile-time assertion to ensure that this generated file
|
|
||||||
// is compatible with the proto package it is being compiled against.
|
|
||||||
// A compilation error at this line likely means your copy of the
|
|
||||||
// proto package needs to be updated.
|
|
||||||
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
|
|
||||||
|
|
||||||
type Build struct {
|
|
||||||
Ref string `protobuf:"bytes,1,opt,name=Ref,proto3" json:"Ref,omitempty"`
|
|
||||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
|
||||||
XXX_unrecognized []byte `json:"-"`
|
|
||||||
XXX_sizecache int32 `json:"-"`
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Build) Reset() { *m = Build{} }
|
|
||||||
func (m *Build) String() string { return proto.CompactTextString(m) }
|
|
||||||
func (*Build) ProtoMessage() {}
|
|
||||||
func (*Build) Descriptor() ([]byte, []int) {
|
|
||||||
return fileDescriptor_689dc58a5060aff5, []int{0}
|
|
||||||
}
|
|
||||||
func (m *Build) XXX_Unmarshal(b []byte) error {
|
|
||||||
return xxx_messageInfo_Build.Unmarshal(m, b)
|
|
||||||
}
|
|
||||||
func (m *Build) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
|
||||||
return xxx_messageInfo_Build.Marshal(b, m, deterministic)
|
|
||||||
}
|
|
||||||
func (m *Build) XXX_Merge(src proto.Message) {
|
|
||||||
xxx_messageInfo_Build.Merge(m, src)
|
|
||||||
}
|
|
||||||
func (m *Build) XXX_Size() int {
|
|
||||||
return xxx_messageInfo_Build.Size(m)
|
|
||||||
}
|
|
||||||
func (m *Build) XXX_DiscardUnknown() {
|
|
||||||
xxx_messageInfo_Build.DiscardUnknown(m)
|
|
||||||
}
|
|
||||||
|
|
||||||
var xxx_messageInfo_Build proto.InternalMessageInfo
|
|
||||||
|
|
||||||
func (m *Build) GetRef() string {
|
|
||||||
if m != nil {
|
|
||||||
return m.Ref
|
|
||||||
}
|
|
||||||
return ""
|
|
||||||
}
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
proto.RegisterType((*Build)(nil), "errdefs.Build")
|
|
||||||
}
|
|
||||||
|
|
||||||
func init() { proto.RegisterFile("errdefs.proto", fileDescriptor_689dc58a5060aff5) }
|
|
||||||
|
|
||||||
var fileDescriptor_689dc58a5060aff5 = []byte{
|
|
||||||
// 111 bytes of a gzipped FileDescriptorProto
|
|
||||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4d, 0x2d, 0x2a, 0x4a,
|
|
||||||
0x49, 0x4d, 0x2b, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x74, 0xd2,
|
|
||||||
0x33, 0x4b, 0x32, 0x4a, 0x93, 0xf4, 0x92, 0xf3, 0x73, 0xf5, 0x73, 0xf3, 0x93, 0x2a, 0xf5, 0x93,
|
|
||||||
0x4a, 0x33, 0x73, 0x52, 0xb2, 0x33, 0x4b, 0xf4, 0x8b, 0xf3, 0x73, 0xca, 0x52, 0x8b, 0xf4, 0x0b,
|
|
||||||
0x92, 0xf4, 0xf3, 0x0b, 0xa0, 0xda, 0x94, 0x24, 0xb9, 0x58, 0x9d, 0x40, 0xf2, 0x42, 0x02, 0x5c,
|
|
||||||
0xcc, 0x41, 0xa9, 0x69, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x20, 0x66, 0x12, 0x1b, 0x58,
|
|
||||||
0x85, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0x56, 0x52, 0x41, 0x91, 0x69, 0x00, 0x00, 0x00,
|
|
||||||
}
|
|
||||||
@ -1,9 +0,0 @@
|
|||||||
syntax = "proto3";
|
|
||||||
|
|
||||||
package errdefs;
|
|
||||||
|
|
||||||
import "github.com/moby/buildkit/solver/pb/ops.proto";
|
|
||||||
|
|
||||||
message Build {
|
|
||||||
string Ref = 1;
|
|
||||||
}
|
|
||||||
@ -1,3 +0,0 @@
|
|||||||
package errdefs
|
|
||||||
|
|
||||||
//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. errdefs.proto
|
|
||||||
@ -1,146 +0,0 @@
|
|||||||
package local
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"io"
|
|
||||||
"sync/atomic"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/build"
|
|
||||||
cbuild "github.com/docker/buildx/controller/build"
|
|
||||||
"github.com/docker/buildx/controller/control"
|
|
||||||
controllererrors "github.com/docker/buildx/controller/errdefs"
|
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/controller/processes"
|
|
||||||
"github.com/docker/buildx/util/ioset"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/cli/cli/command"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
)
|
|
||||||
|
|
||||||
func NewLocalBuildxController(ctx context.Context, dockerCli command.Cli, logger progress.SubLogger) control.BuildxController {
|
|
||||||
return &localController{
|
|
||||||
dockerCli: dockerCli,
|
|
||||||
ref: "local",
|
|
||||||
processes: processes.NewManager(),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
type buildConfig struct {
|
|
||||||
// TODO: these two structs should be merged
|
|
||||||
// Discussion: https://github.com/docker/buildx/pull/1640#discussion_r1113279719
|
|
||||||
resultCtx *build.ResultHandle
|
|
||||||
buildOptions *controllerapi.BuildOptions
|
|
||||||
}
|
|
||||||
|
|
||||||
type localController struct {
|
|
||||||
dockerCli command.Cli
|
|
||||||
ref string
|
|
||||||
buildConfig buildConfig
|
|
||||||
processes *processes.Manager
|
|
||||||
|
|
||||||
buildOnGoing atomic.Bool
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) Build(ctx context.Context, options controllerapi.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
|
|
||||||
if !b.buildOnGoing.CompareAndSwap(false, true) {
|
|
||||||
return "", nil, errors.New("build ongoing")
|
|
||||||
}
|
|
||||||
defer b.buildOnGoing.Store(false)
|
|
||||||
|
|
||||||
resp, res, buildErr := cbuild.RunBuild(ctx, b.dockerCli, options, in, progress, true)
|
|
||||||
// NOTE: RunBuild can return *build.ResultHandle even on error.
|
|
||||||
if res != nil {
|
|
||||||
b.buildConfig = buildConfig{
|
|
||||||
resultCtx: res,
|
|
||||||
buildOptions: &options,
|
|
||||||
}
|
|
||||||
if buildErr != nil {
|
|
||||||
buildErr = controllererrors.WrapBuild(buildErr, b.ref)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if buildErr != nil {
|
|
||||||
return "", nil, buildErr
|
|
||||||
}
|
|
||||||
return b.ref, resp, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) ListProcesses(ctx context.Context, ref string) (infos []*controllerapi.ProcessInfo, retErr error) {
|
|
||||||
if ref != b.ref {
|
|
||||||
return nil, errors.Errorf("unknown ref %q", ref)
|
|
||||||
}
|
|
||||||
return b.processes.ListProcesses(), nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) DisconnectProcess(ctx context.Context, ref, pid string) error {
|
|
||||||
if ref != b.ref {
|
|
||||||
return errors.Errorf("unknown ref %q", ref)
|
|
||||||
}
|
|
||||||
return b.processes.DeleteProcess(pid)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) cancelRunningProcesses() {
|
|
||||||
b.processes.CancelRunningProcesses()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) Invoke(ctx context.Context, ref string, pid string, cfg controllerapi.InvokeConfig, ioIn io.ReadCloser, ioOut io.WriteCloser, ioErr io.WriteCloser) error {
|
|
||||||
if ref != b.ref {
|
|
||||||
return errors.Errorf("unknown ref %q", ref)
|
|
||||||
}
|
|
||||||
|
|
||||||
proc, ok := b.processes.Get(pid)
|
|
||||||
if !ok {
|
|
||||||
// Start a new process.
|
|
||||||
if b.buildConfig.resultCtx == nil {
|
|
||||||
return errors.New("no build result is registered")
|
|
||||||
}
|
|
||||||
var err error
|
|
||||||
proc, err = b.processes.StartProcess(pid, b.buildConfig.resultCtx, &cfg)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Attach containerIn to this process
|
|
||||||
ioCancelledCh := make(chan struct{})
|
|
||||||
proc.ForwardIO(&ioset.In{Stdin: ioIn, Stdout: ioOut, Stderr: ioErr}, func() { close(ioCancelledCh) })
|
|
||||||
|
|
||||||
select {
|
|
||||||
case <-ioCancelledCh:
|
|
||||||
return errors.Errorf("io cancelled")
|
|
||||||
case err := <-proc.Done():
|
|
||||||
return err
|
|
||||||
case <-ctx.Done():
|
|
||||||
return ctx.Err()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) Kill(context.Context) error {
|
|
||||||
b.Close()
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) Close() error {
|
|
||||||
b.cancelRunningProcesses()
|
|
||||||
if b.buildConfig.resultCtx != nil {
|
|
||||||
b.buildConfig.resultCtx.Done()
|
|
||||||
}
|
|
||||||
// TODO: cancel ongoing builds?
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) List(ctx context.Context) (res []string, _ error) {
|
|
||||||
return []string{b.ref}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) Disconnect(ctx context.Context, key string) error {
|
|
||||||
b.Close()
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (b *localController) Inspect(ctx context.Context, ref string) (*controllerapi.InspectResponse, error) {
|
|
||||||
if ref != b.ref {
|
|
||||||
return nil, errors.Errorf("unknown ref %q", ref)
|
|
||||||
}
|
|
||||||
return &controllerapi.InspectResponse{Options: b.buildConfig.buildOptions}, nil
|
|
||||||
}
|
|
||||||
@ -1,20 +0,0 @@
|
|||||||
package pb
|
|
||||||
|
|
||||||
func CreateAttestations(attests []*Attest) map[string]*string {
|
|
||||||
result := map[string]*string{}
|
|
||||||
for _, attest := range attests {
|
|
||||||
// ignore duplicates
|
|
||||||
if _, ok := result[attest.Type]; ok {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
if attest.Disabled {
|
|
||||||
result[attest.Type] = nil
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
attrs := attest.Attrs
|
|
||||||
result[attest.Type] = &attrs
|
|
||||||
}
|
|
||||||
return result
|
|
||||||
}
|
|
||||||
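A hedged usage fragment for CreateAttestations: a disabled attestation maps to a nil value so callers can distinguish "explicitly off" from "unset", while enabled ones carry their attribute string (the pb qualifier assumes the caller imports this package as pb):

// Fragment: disabled entries become nil, duplicates of a Type are ignored.
attests := []*pb.Attest{
	{Type: "provenance", Attrs: "mode=max"},
	{Type: "sbom", Disabled: true},
}
m := pb.CreateAttestations(attests)
// m["provenance"] points at "mode=max"; m["sbom"] is nil.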
@ -1,21 +0,0 @@
|
|||||||
package pb
|
|
||||||
|
|
||||||
import "github.com/moby/buildkit/client"
|
|
||||||
|
|
||||||
func CreateCaches(entries []*CacheOptionsEntry) []client.CacheOptionsEntry {
|
|
||||||
var outs []client.CacheOptionsEntry
|
|
||||||
if len(entries) == 0 {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
for _, entry := range entries {
|
|
||||||
out := client.CacheOptionsEntry{
|
|
||||||
Type: entry.Type,
|
|
||||||
Attrs: map[string]string{},
|
|
||||||
}
|
|
||||||
for k, v := range entry.Attrs {
|
|
||||||
out.Attrs[k] = v
|
|
||||||
}
|
|
||||||
outs = append(outs, out)
|
|
||||||
}
|
|
||||||
return outs
|
|
||||||
}
|
|
||||||
File diff suppressed because it is too large
@@ -1,245 +0,0 @@
syntax = "proto3";

package buildx.controller.v1;

import "github.com/moby/buildkit/api/services/control/control.proto";
import "github.com/moby/buildkit/sourcepolicy/pb/policy.proto";

option go_package = "pb";

service Controller {
  rpc Build(BuildRequest) returns (BuildResponse);
  rpc Inspect(InspectRequest) returns (InspectResponse);
  rpc Status(StatusRequest) returns (stream StatusResponse);
  rpc Input(stream InputMessage) returns (InputResponse);
  rpc Invoke(stream Message) returns (stream Message);
  rpc List(ListRequest) returns (ListResponse);
  rpc Disconnect(DisconnectRequest) returns (DisconnectResponse);
  rpc Info(InfoRequest) returns (InfoResponse);
  rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
  rpc DisconnectProcess(DisconnectProcessRequest) returns (DisconnectProcessResponse);
}

message ListProcessesRequest {
  string Ref = 1;
}

message ListProcessesResponse {
  repeated ProcessInfo Infos = 1;
}

message ProcessInfo {
  string ProcessID = 1;
  InvokeConfig InvokeConfig = 2;
}

message DisconnectProcessRequest {
  string Ref = 1;
  string ProcessID = 2;
}

message DisconnectProcessResponse {
}

message BuildRequest {
  string Ref = 1;
  BuildOptions Options = 2;
}

message BuildOptions {
  string ContextPath = 1;
  string DockerfileName = 2;
  PrintFunc PrintFunc = 3;
  map<string, string> NamedContexts = 4;

  repeated string Allow = 5;
  repeated Attest Attests = 6;
  map<string, string> BuildArgs = 7;
  repeated CacheOptionsEntry CacheFrom = 8;
  repeated CacheOptionsEntry CacheTo = 9;
  string CgroupParent = 10;
  repeated ExportEntry Exports = 11;
  repeated string ExtraHosts = 12;
  map<string, string> Labels = 13;
  string NetworkMode = 14;
  repeated string NoCacheFilter = 15;
  repeated string Platforms = 16;
  repeated Secret Secrets = 17;
  int64 ShmSize = 18;
  repeated SSH SSH = 19;
  repeated string Tags = 20;
  string Target = 21;
  UlimitOpt Ulimits = 22;

  string Builder = 23;
  bool NoCache = 24;
  bool Pull = 25;
  bool ExportPush = 26;
  bool ExportLoad = 27;
  moby.buildkit.v1.sourcepolicy.Policy SourcePolicy = 28;
}

message ExportEntry {
  string Type = 1;
  map<string, string> Attrs = 2;
  string Destination = 3;
}

message CacheOptionsEntry {
  string Type = 1;
  map<string, string> Attrs = 2;
}

message Attest {
  string Type = 1;
  bool Disabled = 2;
  string Attrs = 3;
}

message SSH {
  string ID = 1;
  repeated string Paths = 2;
}

message Secret {
  string ID = 1;
  string FilePath = 2;
  string Env = 3;
}

message PrintFunc {
  string Name = 1;
  string Format = 2;
}

message InspectRequest {
  string Ref = 1;
}

message InspectResponse {
  BuildOptions Options = 1;
}

message UlimitOpt {
  map<string, Ulimit> values = 1;
}

message Ulimit {
  string Name = 1;
  int64 Hard = 2;
  int64 Soft = 3;
}

message BuildResponse {
  map<string, string> ExporterResponse = 1;
}

message DisconnectRequest {
  string Ref = 1;
}

message DisconnectResponse {}

message ListRequest {
  string Ref = 1;
}

message ListResponse {
  repeated string keys = 1;
}

message InputMessage {
  oneof Input {
    InputInitMessage Init = 1;
    DataMessage Data = 2;
  }
}

message InputInitMessage {
  string Ref = 1;
}

message DataMessage {
  bool EOF = 1;   // true if eof was reached
  bytes Data = 2; // should be chunked smaller than 4MB:
                  // https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}

message InputResponse {}

message Message {
  oneof Input {
    InitMessage Init = 1;
    // FdMessage used from client to server for input (stdin) and
    // from server to client for output (stdout, stderr)
    FdMessage File = 2;
    // ResizeMessage used from client to server for terminal resize events
    ResizeMessage Resize = 3;
    // SignalMessage is used from client to server to send signal events
    SignalMessage Signal = 4;
  }
}

message InitMessage {
  string Ref = 1;

  // If ProcessID already exists in the server, it tries to connect to it
  // instead of invoking the new one. In this case, InvokeConfig will be ignored.
  string ProcessID = 2;
  InvokeConfig InvokeConfig = 3;
}

message InvokeConfig {
  repeated string Entrypoint = 1;
  repeated string Cmd = 2;
  bool NoCmd = 11; // Do not set cmd but use the image's default
  repeated string Env = 3;
  string User = 4;
  bool NoUser = 5; // Do not set user but use the image's default
  string Cwd = 6;
  bool NoCwd = 7; // Do not set cwd but use the image's default
  bool Tty = 8;
  bool Rollback = 9; // Kill all process in the container and recreate it.
  bool Initial = 10; // Run container from the initial state of that stage (supported only on the failed step)
}

message FdMessage {
  uint32 Fd = 1;  // what fd the data was from
  bool EOF = 2;   // true if eof was reached
  bytes Data = 3; // should be chunked smaller than 4MB:
                  // https://pkg.go.dev/google.golang.org/grpc#MaxRecvMsgSize
}

message ResizeMessage {
  uint32 Rows = 1;
  uint32 Cols = 2;
}

message SignalMessage {
  // we only send name (ie HUP, INT) because the int values
  // are platform dependent.
  string Name = 1;
}

message StatusRequest {
  string Ref = 1;
}

message StatusResponse {
  repeated moby.buildkit.v1.Vertex vertexes = 1;
  repeated moby.buildkit.v1.VertexStatus statuses = 2;
  repeated moby.buildkit.v1.VertexLog logs = 3;
  repeated moby.buildkit.v1.VertexWarning warnings = 4;
}

message InfoRequest {}

message InfoResponse {
  BuildxVersion buildxVersion = 1;
}

message BuildxVersion {
  string package = 1;
  string version = 2;
  string revision = 3;
}
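For orientation, the following is a minimal sketch (not part of the diff) of how a caller could talk to this Controller service through the generated stubs in `github.com/docker/buildx/controller/pb`. The socket path and the use of a hand-rolled unix dialer are assumptions; the real client in this removal uses containerd's dialer helpers and matching interceptors.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"

	pb "github.com/docker/buildx/controller/pb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// printServerVersion dials the controller's unix socket and issues an Info RPC.
func printServerVersion(ctx context.Context, sockPath string) error {
	ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, sockPath,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", addr) // connect to the controller's unix socket
		}),
	)
	if err != nil {
		return err
	}
	defer conn.Close()

	res, err := pb.NewControllerClient(conn).Info(ctx, &pb.InfoRequest{})
	if err != nil {
		return err
	}
	v := res.BuildxVersion
	fmt.Printf("server: %s %s %s\n", v.Package, v.Version, v.Revision)
	return nil
}

func main() {
	// The socket path below is a placeholder; buildx derives it from its config dir.
	if err := printServerVersion(context.Background(), "/tmp/buildx.sock"); err != nil {
		log.Fatal(err)
	}
}
```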
@@ -1,100 +0,0 @@
package pb

import (
	"io"
	"os"
	"strconv"

	"github.com/containerd/console"
	"github.com/moby/buildkit/client"
	"github.com/pkg/errors"
)

func CreateExports(entries []*ExportEntry) ([]client.ExportEntry, error) {
	var outs []client.ExportEntry
	if len(entries) == 0 {
		return nil, nil
	}
	for _, entry := range entries {
		if entry.Type == "" {
			return nil, errors.Errorf("type is required for output")
		}

		out := client.ExportEntry{
			Type:  entry.Type,
			Attrs: map[string]string{},
		}
		for k, v := range entry.Attrs {
			out.Attrs[k] = v
		}

		supportFile := false
		supportDir := false
		switch out.Type {
		case client.ExporterLocal:
			supportDir = true
		case client.ExporterTar:
			supportFile = true
		case client.ExporterOCI, client.ExporterDocker:
			tar, err := strconv.ParseBool(out.Attrs["tar"])
			if err != nil {
				tar = true
			}
			supportFile = tar
			supportDir = !tar
		case "registry":
			out.Type = client.ExporterImage
		}

		if supportDir {
			if entry.Destination == "" {
				return nil, errors.Errorf("dest is required for %s exporter", out.Type)
			}
			if entry.Destination == "-" {
				return nil, errors.Errorf("dest cannot be stdout for %s exporter", out.Type)
			}

			fi, err := os.Stat(entry.Destination)
			if err != nil && !os.IsNotExist(err) {
				return nil, errors.Wrapf(err, "invalid destination directory: %s", entry.Destination)
			}
			if err == nil && !fi.IsDir() {
				return nil, errors.Errorf("destination directory %s is a file", entry.Destination)
			}
			out.OutputDir = entry.Destination
		}
		if supportFile {
			if entry.Destination == "" && out.Type != client.ExporterDocker {
				entry.Destination = "-"
			}
			if entry.Destination == "-" {
				if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
					return nil, errors.Errorf("dest file is required for %s exporter. refusing to write to console", out.Type)
				}
				out.Output = wrapWriteCloser(os.Stdout)
			} else if entry.Destination != "" {
				fi, err := os.Stat(entry.Destination)
				if err != nil && !os.IsNotExist(err) {
					return nil, errors.Wrapf(err, "invalid destination file: %s", entry.Destination)
				}
				if err == nil && fi.IsDir() {
					return nil, errors.Errorf("destination file %s is a directory", entry.Destination)
				}
				f, err := os.Create(entry.Destination)
				if err != nil {
					return nil, errors.Errorf("failed to open %s", err)
				}
				out.Output = wrapWriteCloser(f)
			}
		}

		outs = append(outs, out)
	}
	return outs, nil
}

func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
	return func(map[string]string) (io.WriteCloser, error) {
		return wc, nil
	}
}
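A small usage sketch (assumption, not part of the diff) of how CreateExports maps the wire-level ExportEntry messages onto BuildKit client exports; the "registry" entry illustrates the rewrite to the image exporter, and the tag name is a placeholder.

```go
package example

import (
	"fmt"

	pb "github.com/docker/buildx/controller/pb"
)

// exportsForPush builds export entries for pushing an image and prints the
// resolved BuildKit exporter configuration.
func exportsForPush(tag string) error {
	entries := []*pb.ExportEntry{
		{Type: "registry", Attrs: map[string]string{"name": tag}},
	}
	outs, err := pb.CreateExports(entries)
	if err != nil {
		return err
	}
	for _, o := range outs {
		fmt.Println(o.Type, o.Attrs) // "image" plus the copied attributes
	}
	return nil
}
```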
@@ -1,3 +0,0 @@
package pb

//go:generate protoc -I=. -I=../../vendor/ --gogo_out=plugins=grpc:. controller.proto
@@ -1,175 +0,0 @@
package pb

import (
	"path/filepath"
	"strings"

	"github.com/docker/docker/builder/remotecontext/urlutil"
	"github.com/moby/buildkit/util/gitutil"
)

// ResolveOptionPaths resolves all paths contained in BuildOptions
// and replaces them to absolute paths.
func ResolveOptionPaths(options *BuildOptions) (_ *BuildOptions, err error) {
	localContext := false
	if options.ContextPath != "" && options.ContextPath != "-" {
		if !isRemoteURL(options.ContextPath) {
			localContext = true
			options.ContextPath, err = filepath.Abs(options.ContextPath)
			if err != nil {
				return nil, err
			}
		}
	}
	if options.DockerfileName != "" && options.DockerfileName != "-" {
		if localContext && !urlutil.IsURL(options.DockerfileName) {
			options.DockerfileName, err = filepath.Abs(options.DockerfileName)
			if err != nil {
				return nil, err
			}
		}
	}

	var contexts map[string]string
	for k, v := range options.NamedContexts {
		if isRemoteURL(v) || strings.HasPrefix(v, "docker-image://") {
			// url prefix, this is a remote path
		} else if strings.HasPrefix(v, "oci-layout://") {
			// oci layout prefix, this is a local path
			p := strings.TrimPrefix(v, "oci-layout://")
			p, err = filepath.Abs(p)
			if err != nil {
				return nil, err
			}
			v = "oci-layout://" + p
		} else {
			// no prefix, assume local path
			v, err = filepath.Abs(v)
			if err != nil {
				return nil, err
			}
		}

		if contexts == nil {
			contexts = make(map[string]string)
		}
		contexts[k] = v
	}
	options.NamedContexts = contexts

	var cacheFrom []*CacheOptionsEntry
	for _, co := range options.CacheFrom {
		switch co.Type {
		case "local":
			var attrs map[string]string
			for k, v := range co.Attrs {
				if attrs == nil {
					attrs = make(map[string]string)
				}
				switch k {
				case "src":
					p := v
					if p != "" {
						p, err = filepath.Abs(p)
						if err != nil {
							return nil, err
						}
					}
					attrs[k] = p
				default:
					attrs[k] = v
				}
			}
			co.Attrs = attrs
			cacheFrom = append(cacheFrom, co)
		default:
			cacheFrom = append(cacheFrom, co)
		}
	}
	options.CacheFrom = cacheFrom

	var cacheTo []*CacheOptionsEntry
	for _, co := range options.CacheTo {
		switch co.Type {
		case "local":
			var attrs map[string]string
			for k, v := range co.Attrs {
				if attrs == nil {
					attrs = make(map[string]string)
				}
				switch k {
				case "dest":
					p := v
					if p != "" {
						p, err = filepath.Abs(p)
						if err != nil {
							return nil, err
						}
					}
					attrs[k] = p
				default:
					attrs[k] = v
				}
			}
			co.Attrs = attrs
			cacheTo = append(cacheTo, co)
		default:
			cacheTo = append(cacheTo, co)
		}
	}
	options.CacheTo = cacheTo
	var exports []*ExportEntry
	for _, e := range options.Exports {
		if e.Destination != "" && e.Destination != "-" {
			e.Destination, err = filepath.Abs(e.Destination)
			if err != nil {
				return nil, err
			}
		}
		exports = append(exports, e)
	}
	options.Exports = exports

	var secrets []*Secret
	for _, s := range options.Secrets {
		if s.FilePath != "" {
			s.FilePath, err = filepath.Abs(s.FilePath)
			if err != nil {
				return nil, err
			}
		}
		secrets = append(secrets, s)
	}
	options.Secrets = secrets

	var ssh []*SSH
	for _, s := range options.SSH {
		var ps []string
		for _, pt := range s.Paths {
			p := pt
			if p != "" {
				p, err = filepath.Abs(p)
				if err != nil {
					return nil, err
				}
			}
			ps = append(ps, p)
		}
		s.Paths = ps
		ssh = append(ssh, s)
	}
	options.SSH = ssh

	return options, nil
}

func isRemoteURL(c string) bool {
	if urlutil.IsURL(c) {
		return true
	}
	if _, err := gitutil.ParseGitRef(c); err == nil {
		return true
	}
	return false
}
@ -1,247 +0,0 @@
|
|||||||
package pb
|
|
||||||
|
|
||||||
import (
|
|
||||||
"os"
|
|
||||||
"path/filepath"
|
|
||||||
"reflect"
|
|
||||||
"testing"
|
|
||||||
|
|
||||||
"github.com/stretchr/testify/require"
|
|
||||||
)
|
|
||||||
|
|
||||||
func TestResolvePaths(t *testing.T) {
|
|
||||||
tmpwd, err := os.MkdirTemp("", "testresolvepaths")
|
|
||||||
require.NoError(t, err)
|
|
||||||
defer os.Remove(tmpwd)
|
|
||||||
require.NoError(t, os.Chdir(tmpwd))
|
|
||||||
tests := []struct {
|
|
||||||
name string
|
|
||||||
options BuildOptions
|
|
||||||
want BuildOptions
|
|
||||||
}{
|
|
||||||
{
|
|
||||||
name: "contextpath",
|
|
||||||
options: BuildOptions{ContextPath: "test"},
|
|
||||||
want: BuildOptions{ContextPath: filepath.Join(tmpwd, "test")},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "contextpath-cwd",
|
|
||||||
options: BuildOptions{ContextPath: "."},
|
|
||||||
want: BuildOptions{ContextPath: tmpwd},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "contextpath-dash",
|
|
||||||
options: BuildOptions{ContextPath: "-"},
|
|
||||||
want: BuildOptions{ContextPath: "-"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "contextpath-ssh",
|
|
||||||
options: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
|
|
||||||
want: BuildOptions{ContextPath: "git@github.com:docker/buildx.git"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "dockerfilename",
|
|
||||||
options: BuildOptions{DockerfileName: "test", ContextPath: "."},
|
|
||||||
want: BuildOptions{DockerfileName: filepath.Join(tmpwd, "test"), ContextPath: tmpwd},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "dockerfilename-dash",
|
|
||||||
options: BuildOptions{DockerfileName: "-", ContextPath: "."},
|
|
||||||
want: BuildOptions{DockerfileName: "-", ContextPath: tmpwd},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "dockerfilename-remote",
|
|
||||||
options: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
|
|
||||||
want: BuildOptions{DockerfileName: "test", ContextPath: "git@github.com:docker/buildx.git"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "contexts",
|
|
||||||
options: BuildOptions{NamedContexts: map[string]string{"a": "test1", "b": "test2",
|
|
||||||
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
|
|
||||||
want: BuildOptions{NamedContexts: map[string]string{"a": filepath.Join(tmpwd, "test1"), "b": filepath.Join(tmpwd, "test2"),
|
|
||||||
"alpine": "docker-image://alpine@sha256:0123456789", "project": "https://github.com/myuser/project.git"}},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "cache-from",
|
|
||||||
options: BuildOptions{
|
|
||||||
CacheFrom: []*CacheOptionsEntry{
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Attrs: map[string]string{"src": "test"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "registry",
|
|
||||||
Attrs: map[string]string{"ref": "user/app"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
want: BuildOptions{
|
|
||||||
CacheFrom: []*CacheOptionsEntry{
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Attrs: map[string]string{"src": filepath.Join(tmpwd, "test")},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "registry",
|
|
||||||
Attrs: map[string]string{"ref": "user/app"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "cache-to",
|
|
||||||
options: BuildOptions{
|
|
||||||
CacheTo: []*CacheOptionsEntry{
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Attrs: map[string]string{"dest": "test"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "registry",
|
|
||||||
Attrs: map[string]string{"ref": "user/app"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
want: BuildOptions{
|
|
||||||
CacheTo: []*CacheOptionsEntry{
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Attrs: map[string]string{"dest": filepath.Join(tmpwd, "test")},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "registry",
|
|
||||||
Attrs: map[string]string{"ref": "user/app"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "exports",
|
|
||||||
options: BuildOptions{
|
|
||||||
Exports: []*ExportEntry{
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Destination: "-",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Destination: "test1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "tar",
|
|
||||||
Destination: "test3",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "oci",
|
|
||||||
Destination: "-",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "docker",
|
|
||||||
Destination: "test4",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "image",
|
|
||||||
Attrs: map[string]string{"push": "true"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
want: BuildOptions{
|
|
||||||
Exports: []*ExportEntry{
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Destination: "-",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "local",
|
|
||||||
Destination: filepath.Join(tmpwd, "test1"),
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "tar",
|
|
||||||
Destination: filepath.Join(tmpwd, "test3"),
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "oci",
|
|
||||||
Destination: "-",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "docker",
|
|
||||||
Destination: filepath.Join(tmpwd, "test4"),
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Type: "image",
|
|
||||||
Attrs: map[string]string{"push": "true"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "secrets",
|
|
||||||
options: BuildOptions{
|
|
||||||
Secrets: []*Secret{
|
|
||||||
{
|
|
||||||
FilePath: "test1",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
ID: "val",
|
|
||||||
Env: "a",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
ID: "test",
|
|
||||||
FilePath: "test3",
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
want: BuildOptions{
|
|
||||||
Secrets: []*Secret{
|
|
||||||
{
|
|
||||||
FilePath: filepath.Join(tmpwd, "test1"),
|
|
||||||
},
|
|
||||||
{
|
|
||||||
ID: "val",
|
|
||||||
Env: "a",
|
|
||||||
},
|
|
||||||
{
|
|
||||||
ID: "test",
|
|
||||||
FilePath: filepath.Join(tmpwd, "test3"),
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
name: "ssh",
|
|
||||||
options: BuildOptions{
|
|
||||||
SSH: []*SSH{
|
|
||||||
{
|
|
||||||
ID: "default",
|
|
||||||
Paths: []string{"test1", "test2"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
ID: "a",
|
|
||||||
Paths: []string{"test3"},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
want: BuildOptions{
|
|
||||||
SSH: []*SSH{
|
|
||||||
{
|
|
||||||
ID: "default",
|
|
||||||
Paths: []string{filepath.Join(tmpwd, "test1"), filepath.Join(tmpwd, "test2")},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
ID: "a",
|
|
||||||
Paths: []string{filepath.Join(tmpwd, "test3")},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}
|
|
||||||
for _, tt := range tests {
|
|
||||||
t.Run(tt.name, func(t *testing.T) {
|
|
||||||
got, err := ResolveOptionPaths(&tt.options)
|
|
||||||
require.NoError(t, err)
|
|
||||||
if !reflect.DeepEqual(tt.want, *got) {
|
|
||||||
t.Fatalf("expected %#v, got %#v", tt.want, *got)
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
@@ -1,126 +0,0 @@
package pb

import (
	"github.com/docker/buildx/util/progress"
	control "github.com/moby/buildkit/api/services/control"
	"github.com/moby/buildkit/client"
	"github.com/opencontainers/go-digest"
)

type writer struct {
	ch chan<- *StatusResponse
}

func NewProgressWriter(ch chan<- *StatusResponse) progress.Writer {
	return &writer{ch: ch}
}

func (w *writer) Write(status *client.SolveStatus) {
	w.ch <- ToControlStatus(status)
}

func (w *writer) WriteBuildRef(target string, ref string) {
	return
}

func (w *writer) ValidateLogSource(digest.Digest, interface{}) bool {
	return true
}

func (w *writer) ClearLogSource(interface{}) {}

func ToControlStatus(s *client.SolveStatus) *StatusResponse {
	resp := StatusResponse{}
	for _, v := range s.Vertexes {
		resp.Vertexes = append(resp.Vertexes, &control.Vertex{
			Digest:        v.Digest,
			Inputs:        v.Inputs,
			Name:          v.Name,
			Started:       v.Started,
			Completed:     v.Completed,
			Error:         v.Error,
			Cached:        v.Cached,
			ProgressGroup: v.ProgressGroup,
		})
	}
	for _, v := range s.Statuses {
		resp.Statuses = append(resp.Statuses, &control.VertexStatus{
			ID:        v.ID,
			Vertex:    v.Vertex,
			Name:      v.Name,
			Total:     v.Total,
			Current:   v.Current,
			Timestamp: v.Timestamp,
			Started:   v.Started,
			Completed: v.Completed,
		})
	}
	for _, v := range s.Logs {
		resp.Logs = append(resp.Logs, &control.VertexLog{
			Vertex:    v.Vertex,
			Stream:    int64(v.Stream),
			Msg:       v.Data,
			Timestamp: v.Timestamp,
		})
	}
	for _, v := range s.Warnings {
		resp.Warnings = append(resp.Warnings, &control.VertexWarning{
			Vertex: v.Vertex,
			Level:  int64(v.Level),
			Short:  v.Short,
			Detail: v.Detail,
			Url:    v.URL,
			Info:   v.SourceInfo,
			Ranges: v.Range,
		})
	}
	return &resp
}

func FromControlStatus(resp *StatusResponse) *client.SolveStatus {
	s := client.SolveStatus{}
	for _, v := range resp.Vertexes {
		s.Vertexes = append(s.Vertexes, &client.Vertex{
			Digest:        v.Digest,
			Inputs:        v.Inputs,
			Name:          v.Name,
			Started:       v.Started,
			Completed:     v.Completed,
			Error:         v.Error,
			Cached:        v.Cached,
			ProgressGroup: v.ProgressGroup,
		})
	}
	for _, v := range resp.Statuses {
		s.Statuses = append(s.Statuses, &client.VertexStatus{
			ID:        v.ID,
			Vertex:    v.Vertex,
			Name:      v.Name,
			Total:     v.Total,
			Current:   v.Current,
			Timestamp: v.Timestamp,
			Started:   v.Started,
			Completed: v.Completed,
		})
	}
	for _, v := range resp.Logs {
		s.Logs = append(s.Logs, &client.VertexLog{
			Vertex:    v.Vertex,
			Stream:    int(v.Stream),
			Data:      v.Msg,
			Timestamp: v.Timestamp,
		})
	}
	for _, v := range resp.Warnings {
		s.Warnings = append(s.Warnings, &client.VertexWarning{
			Vertex:     v.Vertex,
			Level:      int(v.Level),
			Short:      v.Short,
			Detail:     v.Detail,
			URL:        v.Url,
			SourceInfo: v.Info,
			Range:      v.Ranges,
		})
	}
	return &s
}
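A rough sketch (assumption, not part of the diff) of how this channel-backed writer can bridge a build's progress onto the gRPC status types: the build side writes client.SolveStatus values, while a consumer drains the channel, for example to forward each *StatusResponse over the Status stream or convert it back locally with FromControlStatus. The function and variable names are illustrative.

```go
package example

import (
	pb "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/progress"
	"github.com/moby/buildkit/client"
)

// bridgeProgress pushes solve statuses through the pb writer and hands the
// converted *pb.StatusResponse values to the supplied forward callback.
func bridgeProgress(statuses []*client.SolveStatus, forward func(*pb.StatusResponse)) {
	ch := make(chan *pb.StatusResponse)
	done := make(chan struct{})
	go func() {
		defer close(done)
		for resp := range ch {
			forward(resp) // e.g. send on the Status gRPC stream
		}
	}()

	var w progress.Writer = pb.NewProgressWriter(ch)
	for _, s := range statuses {
		w.Write(s) // converted to *pb.StatusResponse via ToControlStatus
	}
	close(ch)
	<-done
}
```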
@@ -1,22 +0,0 @@
package pb

import (
	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/secrets/secretsprovider"
)

func CreateSecrets(secrets []*Secret) (session.Attachable, error) {
	fs := make([]secretsprovider.Source, 0, len(secrets))
	for _, secret := range secrets {
		fs = append(fs, secretsprovider.Source{
			ID:       secret.ID,
			FilePath: secret.FilePath,
			Env:      secret.Env,
		})
	}
	store, err := secretsprovider.NewStore(fs)
	if err != nil {
		return nil, err
	}
	return secretsprovider.NewSecretProvider(store), nil
}
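A minimal sketch (assumption) of turning Secret messages into a session attachable that BuildKit can use during a solve; the secret ID and file path are placeholders.

```go
package example

import (
	pb "github.com/docker/buildx/controller/pb"
	"github.com/moby/buildkit/session"
)

// secretAttachables exposes one file-backed secret to the build session.
func secretAttachables() ([]session.Attachable, error) {
	sp, err := pb.CreateSecrets([]*pb.Secret{
		{ID: "npmrc", FilePath: "/home/user/.npmrc"}, // hypothetical secret file
	})
	if err != nil {
		return nil, err
	}
	return []session.Attachable{sp}, nil
}
```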
@@ -1,18 +0,0 @@
package pb

import (
	"github.com/moby/buildkit/session"
	"github.com/moby/buildkit/session/sshforward/sshprovider"
)

func CreateSSH(ssh []*SSH) (session.Attachable, error) {
	configs := make([]sshprovider.AgentConfig, 0, len(ssh))
	for _, ssh := range ssh {
		cfg := sshprovider.AgentConfig{
			ID:    ssh.ID,
			Paths: append([]string{}, ssh.Paths...),
		}
		configs = append(configs, cfg)
	}
	return sshprovider.NewSSHAgentProvider(configs)
}
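Similarly, a sketch (assumption) of exposing the default SSH agent plus one extra key to the build session via CreateSSH; the "deploy" ID and key path are placeholders.

```go
package example

import (
	pb "github.com/docker/buildx/controller/pb"
	"github.com/moby/buildkit/session"
)

// sshAttachable builds a single attachable that forwards the running ssh-agent
// and an additional key file into the build.
func sshAttachable() (session.Attachable, error) {
	return pb.CreateSSH([]*pb.SSH{
		{ID: "default"},                                      // use the running ssh-agent
		{ID: "deploy", Paths: []string{"/keys/deploy_key"}},  // hypothetical key path
	})
}
```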
@@ -1,149 +0,0 @@
package processes

import (
	"context"
	"sync"
	"sync/atomic"

	"github.com/docker/buildx/build"
	"github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/util/ioset"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// Process provides methods to control a process.
type Process struct {
	inEnd         *ioset.Forwarder
	invokeConfig  *pb.InvokeConfig
	errCh         chan error
	processCancel func()
	serveIOCancel func()
}

// ForwardIO forwards process's io to the specified reader/writer.
// Optionally specify ioCancelCallback which will be called when
// the process closes the specified IO. This will be useful for additional cleanup.
func (p *Process) ForwardIO(in *ioset.In, ioCancelCallback func()) {
	p.inEnd.SetIn(in)
	if f := p.serveIOCancel; f != nil {
		f()
	}
	p.serveIOCancel = ioCancelCallback
}

// Done returns a channel where error or nil will be sent
// when the process exits.
// TODO: change this to Wait()
func (p *Process) Done() <-chan error {
	return p.errCh
}

// Manager manages a set of processes.
type Manager struct {
	container atomic.Value
	processes sync.Map
}

// NewManager creates and returns a Manager.
func NewManager() *Manager {
	return &Manager{}
}

// Get returns the specified process.
func (m *Manager) Get(id string) (*Process, bool) {
	v, ok := m.processes.Load(id)
	if !ok {
		return nil, false
	}
	return v.(*Process), true
}

// CancelRunningProcesses cancels execution of all running processes.
func (m *Manager) CancelRunningProcesses() {
	var funcs []func()
	m.processes.Range(func(key, value any) bool {
		funcs = append(funcs, value.(*Process).processCancel)
		m.processes.Delete(key)
		return true
	})
	for _, f := range funcs {
		f()
	}
}

// ListProcesses lists all running processes.
func (m *Manager) ListProcesses() (res []*pb.ProcessInfo) {
	m.processes.Range(func(key, value any) bool {
		res = append(res, &pb.ProcessInfo{
			ProcessID:    key.(string),
			InvokeConfig: value.(*Process).invokeConfig,
		})
		return true
	})
	return res
}

// DeleteProcess deletes the specified process.
func (m *Manager) DeleteProcess(id string) error {
	p, ok := m.processes.LoadAndDelete(id)
	if !ok {
		return errors.Errorf("unknown process %q", id)
	}
	p.(*Process).processCancel()
	return nil
}

// StartProcess starts a process in the container.
// When a container isn't available (i.e. first time invoking or the container has exited) or cfg.Rollback is set,
// this method will start a new container and run the process in it. Otherwise, this method starts a new process in the
// existing container.
func (m *Manager) StartProcess(pid string, resultCtx *build.ResultHandle, cfg *pb.InvokeConfig) (*Process, error) {
	// Get the target result to invoke a container from
	var ctr *build.Container
	if a := m.container.Load(); a != nil {
		ctr = a.(*build.Container)
	}
	if cfg.Rollback || ctr == nil || ctr.IsUnavailable() {
		go m.CancelRunningProcesses()
		// (Re)create a new container if this is rollback or first time to invoke a process.
		if ctr != nil {
			go ctr.Cancel() // Finish the existing container
		}
		var err error
		ctr, err = build.NewContainer(context.TODO(), resultCtx, cfg)
		if err != nil {
			return nil, errors.Errorf("failed to create container %v", err)
		}
		m.container.Store(ctr)
	}
	// [client(ForwardIO)] <-forwarder(switchable)-> [out] <-pipe-> [in] <- [process]
	in, out := ioset.Pipe()
	f := ioset.NewForwarder()
	f.PropagateStdinClose = false
	f.SetOut(&out)

	// Register process
	ctx, cancel := context.WithCancel(context.TODO())
	var cancelOnce sync.Once
	processCancelFunc := func() { cancelOnce.Do(func() { cancel(); f.Close(); in.Close(); out.Close() }) }
	p := &Process{
		inEnd:         f,
		invokeConfig:  cfg,
		processCancel: processCancelFunc,
		errCh:         make(chan error),
	}
	m.processes.Store(pid, p)
	go func() {
		var err error
		if err = ctr.Exec(ctx, cfg, in.Stdin, in.Stdout, in.Stderr); err != nil {
			logrus.Errorf("failed to exec process: %v", err)
		}
		logrus.Debugf("finished process %s %v", pid, cfg.Entrypoint)
		m.processes.Delete(pid)
		processCancelFunc()
		p.errCh <- err
	}()

	return p, nil
}
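A sketch (assumption) of driving the process manager above: start a shell in the container backed by a build result and wait for it to exit. The import path `github.com/docker/buildx/controller/processes`, the process ID, and the entrypoint are illustrative, and resultCtx is assumed to come from a previously finished or failed build.

```go
package example

import (
	"github.com/docker/buildx/build"
	pb "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/controller/processes"
)

// runShell invokes /bin/sh inside the container for resultCtx and blocks
// until the process exits.
func runShell(resultCtx *build.ResultHandle) error {
	m := processes.NewManager()
	defer m.CancelRunningProcesses()

	p, err := m.StartProcess("debug-shell", resultCtx, &pb.InvokeConfig{
		Entrypoint: []string{"/bin/sh"},
		Tty:        true,
	})
	if err != nil {
		return err
	}
	// Done() yields the error (or nil) once the process exits.
	return <-p.Done()
}
```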
@ -1,240 +0,0 @@
|
|||||||
package remote
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"io"
|
|
||||||
"sync"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"github.com/containerd/containerd/defaults"
|
|
||||||
"github.com/containerd/containerd/pkg/dialer"
|
|
||||||
"github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/moby/buildkit/identity"
|
|
||||||
"github.com/moby/buildkit/util/grpcerrors"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"golang.org/x/sync/errgroup"
|
|
||||||
"google.golang.org/grpc"
|
|
||||||
"google.golang.org/grpc/backoff"
|
|
||||||
"google.golang.org/grpc/credentials/insecure"
|
|
||||||
)
|
|
||||||
|
|
||||||
func NewClient(ctx context.Context, addr string) (*Client, error) {
|
|
||||||
backoffConfig := backoff.DefaultConfig
|
|
||||||
backoffConfig.MaxDelay = 3 * time.Second
|
|
||||||
connParams := grpc.ConnectParams{
|
|
||||||
Backoff: backoffConfig,
|
|
||||||
}
|
|
||||||
gopts := []grpc.DialOption{
|
|
||||||
grpc.WithBlock(),
|
|
||||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
|
||||||
grpc.WithConnectParams(connParams),
|
|
||||||
grpc.WithContextDialer(dialer.ContextDialer),
|
|
||||||
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
|
|
||||||
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
|
|
||||||
grpc.WithUnaryInterceptor(grpcerrors.UnaryClientInterceptor),
|
|
||||||
grpc.WithStreamInterceptor(grpcerrors.StreamClientInterceptor),
|
|
||||||
}
|
|
||||||
conn, err := grpc.DialContext(ctx, dialer.DialAddress(addr), gopts...)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return &Client{conn: conn}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
type Client struct {
|
|
||||||
conn *grpc.ClientConn
|
|
||||||
closeOnce sync.Once
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) Close() (err error) {
|
|
||||||
c.closeOnce.Do(func() {
|
|
||||||
err = c.conn.Close()
|
|
||||||
})
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) Version(ctx context.Context) (string, string, string, error) {
|
|
||||||
res, err := c.client().Info(ctx, &pb.InfoRequest{})
|
|
||||||
if err != nil {
|
|
||||||
return "", "", "", err
|
|
||||||
}
|
|
||||||
v := res.BuildxVersion
|
|
||||||
return v.Package, v.Version, v.Revision, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) List(ctx context.Context) (keys []string, retErr error) {
|
|
||||||
res, err := c.client().List(ctx, &pb.ListRequest{})
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return res.Keys, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) Disconnect(ctx context.Context, key string) error {
|
|
||||||
if key == "" {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
_, err := c.client().Disconnect(ctx, &pb.DisconnectRequest{Ref: key})
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) ListProcesses(ctx context.Context, ref string) (infos []*pb.ProcessInfo, retErr error) {
|
|
||||||
res, err := c.client().ListProcesses(ctx, &pb.ListProcessesRequest{Ref: ref})
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return res.Infos, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) DisconnectProcess(ctx context.Context, ref, pid string) error {
|
|
||||||
_, err := c.client().DisconnectProcess(ctx, &pb.DisconnectProcessRequest{Ref: ref, ProcessID: pid})
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) Invoke(ctx context.Context, ref string, pid string, invokeConfig pb.InvokeConfig, in io.ReadCloser, stdout io.WriteCloser, stderr io.WriteCloser) error {
|
|
||||||
if ref == "" || pid == "" {
|
|
||||||
return errors.New("build reference must be specified")
|
|
||||||
}
|
|
||||||
stream, err := c.client().Invoke(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
return attachIO(ctx, stream, &pb.InitMessage{Ref: ref, ProcessID: pid, InvokeConfig: &invokeConfig}, ioAttachConfig{
|
|
||||||
stdin: in,
|
|
||||||
stdout: stdout,
|
|
||||||
stderr: stderr,
|
|
||||||
// TODO: Signal, Resize
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) Inspect(ctx context.Context, ref string) (*pb.InspectResponse, error) {
|
|
||||||
return c.client().Inspect(ctx, &pb.InspectRequest{Ref: ref})
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) Build(ctx context.Context, options pb.BuildOptions, in io.ReadCloser, progress progress.Writer) (string, *client.SolveResponse, error) {
|
|
||||||
ref := identity.NewID()
|
|
||||||
statusChan := make(chan *client.SolveStatus)
|
|
||||||
eg, egCtx := errgroup.WithContext(ctx)
|
|
||||||
var resp *client.SolveResponse
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer close(statusChan)
|
|
||||||
var err error
|
|
||||||
resp, err = c.build(egCtx, ref, options, in, statusChan)
|
|
||||||
return err
|
|
||||||
})
|
|
||||||
eg.Go(func() error {
|
|
||||||
for s := range statusChan {
|
|
||||||
st := s
|
|
||||||
progress.Write(st)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
return ref, resp, eg.Wait()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) build(ctx context.Context, ref string, options pb.BuildOptions, in io.ReadCloser, statusChan chan *client.SolveStatus) (*client.SolveResponse, error) {
|
|
||||||
eg, egCtx := errgroup.WithContext(ctx)
|
|
||||||
done := make(chan struct{})
|
|
||||||
|
|
||||||
var resp *client.SolveResponse
|
|
||||||
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer close(done)
|
|
||||||
pbResp, err := c.client().Build(egCtx, &pb.BuildRequest{
|
|
||||||
Ref: ref,
|
|
||||||
Options: &options,
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
resp = &client.SolveResponse{
|
|
||||||
ExporterResponse: pbResp.ExporterResponse,
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
eg.Go(func() error {
|
|
||||||
stream, err := c.client().Status(egCtx, &pb.StatusRequest{
|
|
||||||
Ref: ref,
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
for {
|
|
||||||
resp, err := stream.Recv()
|
|
||||||
if err != nil {
|
|
||||||
if err == io.EOF {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
return errors.Wrap(err, "failed to receive status")
|
|
||||||
}
|
|
||||||
statusChan <- pb.FromControlStatus(resp)
|
|
||||||
}
|
|
||||||
})
|
|
||||||
if in != nil {
|
|
||||||
eg.Go(func() error {
|
|
||||||
stream, err := c.client().Input(egCtx)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if err := stream.Send(&pb.InputMessage{
|
|
||||||
Input: &pb.InputMessage_Init{
|
|
||||||
Init: &pb.InputInitMessage{
|
|
||||||
Ref: ref,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}); err != nil {
|
|
||||||
return errors.Wrap(err, "failed to init input")
|
|
||||||
}
|
|
||||||
|
|
||||||
inReader, inWriter := io.Pipe()
|
|
||||||
eg2, _ := errgroup.WithContext(ctx)
|
|
||||||
eg2.Go(func() error {
|
|
||||||
<-done
|
|
||||||
return inWriter.Close()
|
|
||||||
})
|
|
||||||
go func() {
|
|
||||||
// do not wait for read completion but return here and let the caller send EOF
|
|
||||||
// this allows us to return on ctx.Done() without being blocked by this reader.
|
|
||||||
io.Copy(inWriter, in)
|
|
||||||
inWriter.Close()
|
|
||||||
}()
|
|
||||||
eg2.Go(func() error {
|
|
||||||
for {
|
|
||||||
buf := make([]byte, 32*1024)
|
|
||||||
n, err := inReader.Read(buf)
|
|
||||||
if err != nil {
|
|
||||||
if err == io.EOF {
|
|
||||||
break // break loop and send EOF
|
|
||||||
}
|
|
||||||
return err
|
|
||||||
} else if n > 0 {
|
|
||||||
if stream.Send(&pb.InputMessage{
|
|
||||||
Input: &pb.InputMessage_Data{
|
|
||||||
Data: &pb.DataMessage{
|
|
||||||
Data: buf[:n],
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return stream.Send(&pb.InputMessage{
|
|
||||||
Input: &pb.InputMessage_Data{
|
|
||||||
Data: &pb.DataMessage{
|
|
||||||
EOF: true,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
})
|
|
||||||
})
|
|
||||||
return eg2.Wait()
|
|
||||||
})
|
|
||||||
}
|
|
||||||
return resp, eg.Wait()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *Client) client() pb.ControllerClient {
|
|
||||||
return pb.NewControllerClient(c.conn)
|
|
||||||
}
|
|
||||||
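A sketch (assumption, not part of the diff) of calling the remote controller client defined above from another process: connect to its socket, run a build, and drain progress through the channel-backed writer from controller/pb. The import path `github.com/docker/buildx/controller/remote`, the socket path, Dockerfile name, and tag are placeholders.

```go
package example

import (
	"context"

	pb "github.com/docker/buildx/controller/pb"
	"github.com/docker/buildx/controller/remote"
)

// remoteBuild runs a build on an already-running buildx server and discards
// its progress stream.
func remoteBuild(ctx context.Context, sockPath, contextDir string) error {
	c, err := remote.NewClient(ctx, sockPath)
	if err != nil {
		return err
	}
	defer c.Close()

	statusCh := make(chan *pb.StatusResponse)
	go func() {
		for range statusCh {
			// discard progress in this sketch; a real caller would render it
		}
	}()
	defer close(statusCh)

	_, _, err = c.Build(ctx, pb.BuildOptions{
		ContextPath:    contextDir,
		DockerfileName: "Dockerfile",
		Tags:           []string{"example:latest"},
	}, nil, pb.NewProgressWriter(statusCh))
	return err
}
```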
@ -1,333 +0,0 @@
|
|||||||
//go:build linux
|
|
||||||
|
|
||||||
package remote
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"net"
|
|
||||||
"os"
|
|
||||||
"os/exec"
|
|
||||||
"os/signal"
|
|
||||||
"path/filepath"
|
|
||||||
"strconv"
|
|
||||||
"syscall"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"github.com/containerd/containerd/log"
|
|
||||||
"github.com/docker/buildx/build"
|
|
||||||
cbuild "github.com/docker/buildx/controller/build"
|
|
||||||
"github.com/docker/buildx/controller/control"
|
|
||||||
controllerapi "github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/util/confutil"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/buildx/version"
|
|
||||||
"github.com/docker/cli/cli/command"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/moby/buildkit/util/grpcerrors"
|
|
||||||
"github.com/pelletier/go-toml"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"github.com/sirupsen/logrus"
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
"google.golang.org/grpc"
|
|
||||||
)
|
|
||||||
|
|
||||||
const (
|
|
||||||
serveCommandName = "_INTERNAL_SERVE"
|
|
||||||
)
|
|
||||||
|
|
||||||
var (
|
|
||||||
defaultLogFilename = fmt.Sprintf("buildx.%s.log", version.Revision)
|
|
||||||
defaultSocketFilename = fmt.Sprintf("buildx.%s.sock", version.Revision)
|
|
||||||
defaultPIDFilename = fmt.Sprintf("buildx.%s.pid", version.Revision)
|
|
||||||
)
|
|
||||||
|
|
||||||
type serverConfig struct {
|
|
||||||
// Specify buildx server root
|
|
||||||
Root string `toml:"root"`
|
|
||||||
|
|
||||||
// LogLevel sets the logging level [trace, debug, info, warn, error, fatal, panic]
|
|
||||||
LogLevel string `toml:"log_level"`
|
|
||||||
|
|
||||||
// Specify file to output buildx server log
|
|
||||||
LogFile string `toml:"log_file"`
|
|
||||||
}
|
|
||||||
|
|
||||||
func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
|
|
||||||
rootDir := opts.Root
|
|
||||||
if rootDir == "" {
|
|
||||||
rootDir = rootDataDir(dockerCli)
|
|
||||||
}
|
|
||||||
serverRoot := filepath.Join(rootDir, "shared")
|
|
||||||
|
|
||||||
// connect to buildx server if it is already running
|
|
||||||
ctx2, cancel := context.WithTimeout(ctx, 1*time.Second)
|
|
||||||
c, err := newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
|
|
||||||
cancel()
|
|
||||||
if err != nil {
|
|
||||||
if !errors.Is(err, context.DeadlineExceeded) {
|
|
||||||
return nil, errors.Wrap(err, "cannot connect to the buildx server")
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
return &buildxController{c, serverRoot}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// start buildx server via subcommand
|
|
||||||
err = logger.Wrap("no buildx server found; launching...", func() error {
|
|
||||||
launchFlags := []string{}
|
|
||||||
if opts.ServerConfig != "" {
|
|
||||||
launchFlags = append(launchFlags, "--config", opts.ServerConfig)
|
|
||||||
}
|
|
||||||
logFile, err := getLogFilePath(dockerCli, opts.ServerConfig)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
wait, err := launch(ctx, logFile, append([]string{serveCommandName}, launchFlags...)...)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
go wait()
|
|
||||||
|
|
||||||
// wait for buildx server to be ready
|
|
||||||
ctx2, cancel = context.WithTimeout(ctx, 10*time.Second)
|
|
||||||
c, err = newBuildxClientAndCheck(ctx2, filepath.Join(serverRoot, defaultSocketFilename))
|
|
||||||
cancel()
|
|
||||||
if err != nil {
|
|
||||||
return errors.Wrap(err, "cannot connect to the buildx server")
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return &buildxController{c, serverRoot}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {
|
|
||||||
cmd.AddCommand(
|
|
||||||
serveCmd(dockerCli),
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
func serveCmd(dockerCli command.Cli) *cobra.Command {
|
|
||||||
var serverConfigPath string
|
|
||||||
cmd := &cobra.Command{
|
|
||||||
Use: fmt.Sprintf("%s [OPTIONS]", serveCommandName),
|
|
||||||
Hidden: true,
|
|
||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
|
||||||
// Parse config
|
|
||||||
config, err := getConfig(dockerCli, serverConfigPath)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if config.LogLevel == "" {
|
|
||||||
logrus.SetLevel(logrus.InfoLevel)
|
|
||||||
} else {
|
|
||||||
lvl, err := logrus.ParseLevel(config.LogLevel)
|
|
||||||
if err != nil {
|
|
||||||
return errors.Wrap(err, "failed to prepare logger")
|
|
||||||
}
|
|
||||||
logrus.SetLevel(lvl)
|
|
||||||
}
|
|
||||||
logrus.SetFormatter(&logrus.JSONFormatter{
|
|
||||||
TimestampFormat: log.RFC3339NanoFixed,
|
|
||||||
})
|
|
||||||
root, err := prepareRootDir(dockerCli, config)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
pidF := filepath.Join(root, defaultPIDFilename)
|
|
||||||
if err := os.WriteFile(pidF, []byte(fmt.Sprintf("%d", os.Getpid())), 0600); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer func() {
|
|
||||||
if err := os.Remove(pidF); err != nil {
|
|
||||||
logrus.Errorf("failed to clean up info file %q: %v", pidF, err)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
|
|
||||||
// prepare server
|
|
||||||
b := NewServer(func(ctx context.Context, options *controllerapi.BuildOptions, stdin io.Reader, progress progress.Writer) (*client.SolveResponse, *build.ResultHandle, error) {
|
|
||||||
return cbuild.RunBuild(ctx, dockerCli, *options, stdin, progress, true)
|
|
||||||
})
|
|
||||||
defer b.Close()
|
|
||||||
|
|
||||||
// serve server
|
|
||||||
addr := filepath.Join(root, defaultSocketFilename)
|
|
||||||
if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { // avoid EADDRINUSE
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer func() {
|
|
||||||
if err := os.Remove(addr); err != nil {
|
|
||||||
logrus.Errorf("failed to clean up socket %q: %v", addr, err)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
logrus.Infof("starting server at %q", addr)
|
|
||||||
l, err := net.Listen("unix", addr)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
rpc := grpc.NewServer(
|
|
||||||
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
|
|
||||||
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
|
|
||||||
)
|
|
||||||
controllerapi.RegisterControllerServer(rpc, b)
|
|
||||||
doneCh := make(chan struct{})
|
|
||||||
errCh := make(chan error, 1)
|
|
||||||
go func() {
|
|
||||||
defer close(doneCh)
|
|
||||||
if err := rpc.Serve(l); err != nil {
|
|
||||||
errCh <- errors.Wrapf(err, "error on serving via socket %q", addr)
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
|
|
||||||
var s os.Signal
|
|
||||||
sigCh := make(chan os.Signal, 1)
|
|
||||||
signal.Notify(sigCh, syscall.SIGINT)
|
|
||||||
signal.Notify(sigCh, syscall.SIGTERM)
|
|
||||||
select {
|
|
||||||
case err := <-errCh:
|
|
||||||
logrus.Errorf("got error %s, exiting", err)
|
|
||||||
return err
|
|
||||||
case s = <-sigCh:
|
|
||||||
logrus.Infof("got signal %s, exiting", s)
|
|
||||||
return nil
|
|
||||||
case <-doneCh:
|
|
||||||
logrus.Infof("rpc server done, exiting")
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
flags := cmd.Flags()
|
|
||||||
flags.StringVar(&serverConfigPath, "config", "", "Specify buildx server config file")
|
|
||||||
return cmd
|
|
||||||
}
|
|
||||||
|
|
||||||
func getLogFilePath(dockerCli command.Cli, configPath string) (string, error) {
|
|
||||||
config, err := getConfig(dockerCli, configPath)
|
|
||||||
if err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
if config.LogFile == "" {
|
|
||||||
root, err := prepareRootDir(dockerCli, config)
|
|
||||||
if err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
return filepath.Join(root, defaultLogFilename), nil
|
|
||||||
}
|
|
||||||
return config.LogFile, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func getConfig(dockerCli command.Cli, configPath string) (*serverConfig, error) {
|
|
||||||
var defaultConfigPath bool
|
|
||||||
if configPath == "" {
|
|
||||||
defaultRoot := rootDataDir(dockerCli)
|
|
||||||
configPath = filepath.Join(defaultRoot, "config.toml")
|
|
||||||
defaultConfigPath = true
|
|
||||||
}
|
|
||||||
var config serverConfig
|
|
||||||
tree, err := toml.LoadFile(configPath)
|
|
||||||
if err != nil && !(os.IsNotExist(err) && defaultConfigPath) {
|
|
||||||
return nil, errors.Wrapf(err, "failed to read config %q", configPath)
|
|
||||||
} else if err == nil {
|
|
||||||
if err := tree.Unmarshal(&config); err != nil {
|
|
||||||
return nil, errors.Wrapf(err, "failed to unmarshal config %q", configPath)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return &config, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func prepareRootDir(dockerCli command.Cli, config *serverConfig) (string, error) {
|
|
||||||
rootDir := config.Root
|
|
||||||
if rootDir == "" {
|
|
||||||
rootDir = rootDataDir(dockerCli)
|
|
||||||
}
|
|
||||||
if rootDir == "" {
|
|
||||||
return "", errors.New("buildx root dir must be determined")
|
|
||||||
}
|
|
||||||
if err := os.MkdirAll(rootDir, 0700); err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
serverRoot := filepath.Join(rootDir, "shared")
|
|
||||||
if err := os.MkdirAll(serverRoot, 0700); err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
return serverRoot, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func rootDataDir(dockerCli command.Cli) string {
|
|
||||||
return filepath.Join(confutil.ConfigDir(dockerCli), "controller")
|
|
||||||
}
|
|
||||||
|
|
||||||
func newBuildxClientAndCheck(ctx context.Context, addr string) (*Client, error) {
|
|
||||||
c, err := NewClient(ctx, addr)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
p, v, r, err := c.Version(ctx)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
logrus.Debugf("connected to server (\"%v %v %v\")", p, v, r)
|
|
||||||
if !(p == version.Package && v == version.Version && r == version.Revision) {
|
|
||||||
return nil, errors.Errorf("version mismatch (client: \"%v %v %v\", server: \"%v %v %v\")", version.Package, version.Version, version.Revision, p, v, r)
|
|
||||||
}
|
|
||||||
return c, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
type buildxController struct {
|
|
||||||
*Client
|
|
||||||
serverRoot string
|
|
||||||
}
|
|
||||||
|
|
||||||
func (c *buildxController) Kill(ctx context.Context) error {
|
|
||||||
pidB, err := os.ReadFile(filepath.Join(c.serverRoot, defaultPIDFilename))
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
pid, err := strconv.ParseInt(string(pidB), 10, 64)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if pid <= 0 {
|
|
||||||
return errors.New("no PID is recorded for buildx server")
|
|
||||||
}
|
|
||||||
p, err := os.FindProcess(int(pid))
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if err := p.Signal(syscall.SIGINT); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
// TODO: Should we send SIGKILL if process doesn't finish?
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func launch(ctx context.Context, logFile string, args ...string) (func() error, error) {
|
|
||||||
// set absolute path of binary, since we set the working directory to the root
|
|
||||||
pathname, err := os.Executable()
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
bCmd := exec.CommandContext(ctx, pathname, args...)
|
|
||||||
if logFile != "" {
|
|
||||||
f, err := os.OpenFile(logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
defer f.Close()
|
|
||||||
bCmd.Stdout = f
|
|
||||||
bCmd.Stderr = f
|
|
||||||
}
|
|
||||||
bCmd.Stdin = nil
|
|
||||||
bCmd.Dir = "/"
|
|
||||||
bCmd.SysProcAttr = &syscall.SysProcAttr{
|
|
||||||
Setsid: true,
|
|
||||||
}
|
|
||||||
if err := bCmd.Start(); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
return bCmd.Wait, nil
|
|
||||||
}
|
|
||||||
@@ -1,19 +0,0 @@
//go:build !linux

package remote

import (
	"context"

	"github.com/docker/buildx/controller/control"
	"github.com/docker/buildx/util/progress"
	"github.com/docker/cli/cli/command"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"
)

func NewRemoteBuildxController(ctx context.Context, dockerCli command.Cli, opts control.ControlOptions, logger progress.SubLogger) (control.BuildxController, error) {
	return nil, errors.New("remote buildx unsupported")
}

func AddControllerCommands(cmd *cobra.Command, dockerCli command.Cli) {}
@ -1,430 +0,0 @@
|
|||||||
package remote
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"io"
|
|
||||||
"syscall"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/moby/sys/signal"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"github.com/sirupsen/logrus"
|
|
||||||
"golang.org/x/sync/errgroup"
|
|
||||||
)
|
|
||||||
|
|
||||||
type msgStream interface {
|
|
||||||
Send(*pb.Message) error
|
|
||||||
Recv() (*pb.Message, error)
|
|
||||||
}
|
|
||||||
|
|
||||||
type ioServerConfig struct {
|
|
||||||
stdin io.WriteCloser
|
|
||||||
stdout, stderr io.ReadCloser
|
|
||||||
|
|
||||||
// signalFn is a callback function called when a signal is reached to the client.
|
|
||||||
signalFn func(context.Context, syscall.Signal) error
|
|
||||||
|
|
||||||
// resizeFn is a callback function called when a resize event is reached to the client.
|
|
||||||
resizeFn func(context.Context, winSize) error
|
|
||||||
}
|
|
||||||
|
|
||||||
func serveIO(attachCtx context.Context, srv msgStream, initFn func(*pb.InitMessage) error, ioConfig *ioServerConfig) (err error) {
|
|
||||||
stdin, stdout, stderr := ioConfig.stdin, ioConfig.stdout, ioConfig.stderr
|
|
||||||
stream := &debugStream{srv, "server=" + time.Now().String()}
|
|
||||||
eg, ctx := errgroup.WithContext(attachCtx)
|
|
||||||
done := make(chan struct{})
|
|
||||||
|
|
||||||
msg, err := receive(ctx, stream)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
init := msg.GetInit()
|
|
||||||
if init == nil {
|
|
||||||
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInput())
|
|
||||||
}
|
|
||||||
ref := init.Ref
|
|
||||||
if ref == "" {
|
|
||||||
return errors.New("no ref is provided")
|
|
||||||
}
|
|
||||||
if err := initFn(init); err != nil {
|
|
||||||
return errors.Wrap(err, "failed to initialize IO server")
|
|
||||||
}
|
|
||||||
|
|
||||||
if stdout != nil {
|
|
||||||
stdoutReader, stdoutWriter := io.Pipe()
|
|
||||||
eg.Go(func() error {
|
|
||||||
<-done
|
|
||||||
return stdoutWriter.Close()
|
|
||||||
})
|
|
||||||
|
|
||||||
go func() {
|
|
||||||
// do not wait for read completion but return here and let the caller send EOF
|
|
||||||
// this allows us to return on ctx.Done() without being blocked by this reader.
|
|
||||||
io.Copy(stdoutWriter, stdout)
|
|
||||||
stdoutWriter.Close()
|
|
||||||
}()
|
|
||||||
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer stdoutReader.Close()
|
|
||||||
return copyToStream(1, stream, stdoutReader)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
if stderr != nil {
|
|
||||||
stderrReader, stderrWriter := io.Pipe()
|
|
||||||
eg.Go(func() error {
|
|
||||||
<-done
|
|
||||||
return stderrWriter.Close()
|
|
||||||
})
|
|
||||||
|
|
||||||
go func() {
|
|
||||||
// do not wait for read completion but return here and let the caller send EOF
|
|
||||||
// this allows us to return on ctx.Done() without being blocked by this reader.
|
|
||||||
io.Copy(stderrWriter, stderr)
|
|
||||||
stderrWriter.Close()
|
|
||||||
}()
|
|
||||||
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer stderrReader.Close()
|
|
||||||
return copyToStream(2, stream, stderrReader)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
msgCh := make(chan *pb.Message)
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer close(msgCh)
|
|
||||||
for {
|
|
||||||
msg, err := receive(ctx, stream)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
select {
|
|
||||||
case msgCh <- msg:
|
|
||||||
case <-done:
|
|
||||||
return nil
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer close(done)
|
|
||||||
for {
|
|
||||||
var msg *pb.Message
|
|
||||||
select {
|
|
||||||
case msg = <-msgCh:
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if msg == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if file := msg.GetFile(); file != nil {
|
|
||||||
if file.Fd != 0 {
|
|
||||||
return errors.Errorf("unexpected fd: %v", file.Fd)
|
|
||||||
}
|
|
||||||
if stdin == nil {
|
|
||||||
continue // no stdin destination is specified so ignore the data
|
|
||||||
}
|
|
||||||
if len(file.Data) > 0 {
|
|
||||||
_, err := stdin.Write(file.Data)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if file.EOF {
|
|
||||||
stdin.Close()
|
|
||||||
}
|
|
||||||
} else if resize := msg.GetResize(); resize != nil {
|
|
||||||
if ioConfig.resizeFn != nil {
|
|
||||||
ioConfig.resizeFn(ctx, winSize{
|
|
||||||
cols: resize.Cols,
|
|
||||||
rows: resize.Rows,
|
|
||||||
})
|
|
||||||
}
|
|
||||||
} else if sig := msg.GetSignal(); sig != nil {
|
|
||||||
if ioConfig.signalFn != nil {
|
|
||||||
syscallSignal, ok := signal.SignalMap[sig.Name]
|
|
||||||
if !ok {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
ioConfig.signalFn(ctx, syscallSignal)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
return errors.Errorf("unexpected message: %T", msg.GetInput())
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
return eg.Wait()
|
|
||||||
}
|
|
||||||
|
|
||||||
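// ioAttachConfig carries the local streams and event channels that attachIO forwards over the stream.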
type ioAttachConfig struct {
|
|
||||||
stdin io.ReadCloser
|
|
||||||
stdout, stderr io.WriteCloser
|
|
||||||
signal <-chan syscall.Signal
|
|
||||||
resize <-chan winSize
|
|
||||||
}
|
|
||||||
|
|
||||||
type winSize struct {
|
|
||||||
rows uint32
|
|
||||||
cols uint32
|
|
||||||
}
|
|
||||||
|
|
||||||
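// attachIO runs the client side of an attach session: it sends the init message, then forwards
// stdin, signal and resize events to the stream and writes received output to stdout/stderr.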
func attachIO(ctx context.Context, stream msgStream, initMessage *pb.InitMessage, cfg ioAttachConfig) (retErr error) {
|
|
||||||
eg, ctx := errgroup.WithContext(ctx)
|
|
||||||
done := make(chan struct{})
|
|
||||||
|
|
||||||
if err := stream.Send(&pb.Message{
|
|
||||||
Input: &pb.Message_Init{
|
|
||||||
Init: initMessage,
|
|
||||||
},
|
|
||||||
}); err != nil {
|
|
||||||
return errors.Wrap(err, "failed to init")
|
|
||||||
}
|
|
||||||
|
|
||||||
if cfg.stdin != nil {
|
|
||||||
stdinReader, stdinWriter := io.Pipe()
|
|
||||||
eg.Go(func() error {
|
|
||||||
<-done
|
|
||||||
return stdinWriter.Close()
|
|
||||||
})
|
|
||||||
|
|
||||||
go func() {
|
|
||||||
// do not wait for read completion but return here and let the caller send EOF
|
|
||||||
// this allows us to return on ctx.Done() without being blocked by this reader.
|
|
||||||
io.Copy(stdinWriter, cfg.stdin)
|
|
||||||
stdinWriter.Close()
|
|
||||||
}()
|
|
||||||
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer stdinReader.Close()
|
|
||||||
return copyToStream(0, stream, stdinReader)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
if cfg.signal != nil {
|
|
||||||
eg.Go(func() error {
|
|
||||||
for {
|
|
||||||
var sig syscall.Signal
|
|
||||||
select {
|
|
||||||
case sig = <-cfg.signal:
|
|
||||||
case <-done:
|
|
||||||
return nil
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
name := sigToName[sig]
|
|
||||||
if name == "" {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if err := stream.Send(&pb.Message{
|
|
||||||
Input: &pb.Message_Signal{
|
|
||||||
Signal: &pb.SignalMessage{
|
|
||||||
Name: name,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}); err != nil {
|
|
||||||
return errors.Wrap(err, "failed to send signal")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
if cfg.resize != nil {
|
|
||||||
eg.Go(func() error {
|
|
||||||
for {
|
|
||||||
var win winSize
|
|
||||||
select {
|
|
||||||
case win = <-cfg.resize:
|
|
||||||
case <-done:
|
|
||||||
return nil
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if err := stream.Send(&pb.Message{
|
|
||||||
Input: &pb.Message_Resize{
|
|
||||||
Resize: &pb.ResizeMessage{
|
|
||||||
Rows: win.rows,
|
|
||||||
Cols: win.cols,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}); err != nil {
|
|
||||||
return errors.Wrap(err, "failed to send resize")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
msgCh := make(chan *pb.Message)
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer close(msgCh)
|
|
||||||
for {
|
|
||||||
msg, err := receive(ctx, stream)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
select {
|
|
||||||
case msgCh <- msg:
|
|
||||||
case <-done:
|
|
||||||
return nil
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
eg.Go(func() error {
|
|
||||||
eofs := make(map[uint32]struct{})
|
|
||||||
defer close(done)
|
|
||||||
for {
|
|
||||||
var msg *pb.Message
|
|
||||||
select {
|
|
||||||
case msg = <-msgCh:
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if msg == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if file := msg.GetFile(); file != nil {
|
|
||||||
if _, ok := eofs[file.Fd]; ok {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
var out io.WriteCloser
|
|
||||||
switch file.Fd {
|
|
||||||
case 1:
|
|
||||||
out = cfg.stdout
|
|
||||||
case 2:
|
|
||||||
out = cfg.stderr
|
|
||||||
default:
|
|
||||||
return errors.Errorf("unsupported fd %d", file.Fd)
|
|
||||||
|
|
||||||
}
|
|
||||||
if out == nil {
|
|
||||||
logrus.Warnf("attachIO: no writer for fd %d", file.Fd)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
if len(file.Data) > 0 {
|
|
||||||
if _, err := out.Write(file.Data); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if file.EOF {
|
|
||||||
eofs[file.Fd] = struct{}{}
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
return errors.Errorf("unexpected message: %T", msg.GetInput())
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
return eg.Wait()
|
|
||||||
}
|
|
||||||
|
|
||||||
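// receive reads a single message from stream, returning early if ctx is cancelled.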
func receive(ctx context.Context, stream msgStream) (*pb.Message, error) {
|
|
||||||
msgCh := make(chan *pb.Message)
|
|
||||||
errCh := make(chan error)
|
|
||||||
go func() {
|
|
||||||
msg, err := stream.Recv()
|
|
||||||
if err != nil {
|
|
||||||
if errors.Is(err, io.EOF) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
errCh <- err
|
|
||||||
return
|
|
||||||
}
|
|
||||||
msgCh <- msg
|
|
||||||
}()
|
|
||||||
select {
|
|
||||||
case msg := <-msgCh:
|
|
||||||
return msg, nil
|
|
||||||
case err := <-errCh:
|
|
||||||
return nil, err
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil, ctx.Err()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
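// copyToStream reads from r and forwards the data as FdMessages tagged with fd, finishing with an EOF message.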
func copyToStream(fd uint32, snd msgStream, r io.Reader) error {
|
|
||||||
for {
|
|
||||||
buf := make([]byte, 32*1024)
|
|
||||||
n, err := r.Read(buf)
|
|
||||||
if err != nil {
|
|
||||||
if err == io.EOF {
|
|
||||||
break // break loop and send EOF
|
|
||||||
}
|
|
||||||
return err
|
|
||||||
} else if n > 0 {
|
|
||||||
if snd.Send(&pb.Message{
|
|
||||||
Input: &pb.Message_File{
|
|
||||||
File: &pb.FdMessage{
|
|
||||||
Fd: fd,
|
|
||||||
Data: buf[:n],
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return snd.Send(&pb.Message{
|
|
||||||
Input: &pb.Message_File{
|
|
||||||
File: &pb.FdMessage{
|
|
||||||
Fd: fd,
|
|
||||||
EOF: true,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
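// sigToName maps syscall signals back to their names (the inverse of signal.SignalMap).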
var sigToName = map[syscall.Signal]string{}
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
for name, value := range signal.SignalMap {
|
|
||||||
sigToName[value] = name
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
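// debugStream wraps a msgStream and logs every sent and received message at debug level.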
type debugStream struct {
|
|
||||||
msgStream
|
|
||||||
prefix string
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *debugStream) Send(msg *pb.Message) error {
|
|
||||||
switch m := msg.GetInput().(type) {
|
|
||||||
case *pb.Message_File:
|
|
||||||
if m.File.EOF {
|
|
||||||
logrus.Debugf("|---> File Message (sender:%v) fd=%d, EOF", s.prefix, m.File.Fd)
|
|
||||||
} else {
|
|
||||||
logrus.Debugf("|---> File Message (sender:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
|
|
||||||
}
|
|
||||||
case *pb.Message_Resize:
|
|
||||||
logrus.Debugf("|---> Resize Message (sender:%v): %+v", s.prefix, m.Resize)
|
|
||||||
case *pb.Message_Signal:
|
|
||||||
logrus.Debugf("|---> Signal Message (sender:%v): %s", s.prefix, m.Signal.Name)
|
|
||||||
}
|
|
||||||
return s.msgStream.Send(msg)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *debugStream) Recv() (*pb.Message, error) {
|
|
||||||
msg, err := s.msgStream.Recv()
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
switch m := msg.GetInput().(type) {
|
|
||||||
case *pb.Message_File:
|
|
||||||
if m.File.EOF {
|
|
||||||
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, EOF", s.prefix, m.File.Fd)
|
|
||||||
} else {
|
|
||||||
logrus.Debugf("|<--- File Message (receiver:%v) fd=%d, %d bytes", s.prefix, m.File.Fd, len(m.File.Data))
|
|
||||||
}
|
|
||||||
case *pb.Message_Resize:
|
|
||||||
logrus.Debugf("|<--- Resize Message (receiver:%v): %+v", s.prefix, m.Resize)
|
|
||||||
case *pb.Message_Signal:
|
|
||||||
logrus.Debugf("|<--- Signal Message (receiver:%v): %s", s.prefix, m.Signal.Name)
|
|
||||||
}
|
|
||||||
return msg, nil
|
|
||||||
}
|
|
||||||
@ -1,439 +0,0 @@
|
|||||||
package remote
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"io"
|
|
||||||
"sync"
|
|
||||||
"sync/atomic"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"github.com/docker/buildx/build"
|
|
||||||
controllererrors "github.com/docker/buildx/controller/errdefs"
|
|
||||||
"github.com/docker/buildx/controller/pb"
|
|
||||||
"github.com/docker/buildx/controller/processes"
|
|
||||||
"github.com/docker/buildx/util/ioset"
|
|
||||||
"github.com/docker/buildx/util/progress"
|
|
||||||
"github.com/docker/buildx/version"
|
|
||||||
"github.com/moby/buildkit/client"
|
|
||||||
"github.com/pkg/errors"
|
|
||||||
"golang.org/x/sync/errgroup"
|
|
||||||
)
|
|
||||||
|
|
||||||
type BuildFunc func(ctx context.Context, options *pb.BuildOptions, stdin io.Reader, progress progress.Writer) (resp *client.SolveResponse, res *build.ResultHandle, err error)
|
|
||||||
|
|
||||||
func NewServer(buildFunc BuildFunc) *Server {
|
|
||||||
return &Server{
|
|
||||||
buildFunc: buildFunc,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
type Server struct {
|
|
||||||
buildFunc BuildFunc
|
|
||||||
session map[string]*session
|
|
||||||
sessionMu sync.Mutex
|
|
||||||
}
|
|
||||||
|
|
||||||
type session struct {
|
|
||||||
buildOnGoing atomic.Bool
|
|
||||||
statusChan chan *pb.StatusResponse
|
|
||||||
cancelBuild func()
|
|
||||||
buildOptions *pb.BuildOptions
|
|
||||||
inputPipe *io.PipeWriter
|
|
||||||
|
|
||||||
result *build.ResultHandle
|
|
||||||
|
|
||||||
processes *processes.Manager
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *session) cancelRunningProcesses() {
|
|
||||||
s.processes.CancelRunningProcesses()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) ListProcesses(ctx context.Context, req *pb.ListProcessesRequest) (res *pb.ListProcessesResponse, err error) {
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
defer m.sessionMu.Unlock()
|
|
||||||
s, ok := m.session[req.Ref]
|
|
||||||
if !ok {
|
|
||||||
return nil, errors.Errorf("unknown ref %q", req.Ref)
|
|
||||||
}
|
|
||||||
res = new(pb.ListProcessesResponse)
|
|
||||||
res.Infos = append(res.Infos, s.processes.ListProcesses()...)
|
|
||||||
return res, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) DisconnectProcess(ctx context.Context, req *pb.DisconnectProcessRequest) (res *pb.DisconnectProcessResponse, err error) {
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
defer m.sessionMu.Unlock()
|
|
||||||
s, ok := m.session[req.Ref]
|
|
||||||
if !ok {
|
|
||||||
return nil, errors.Errorf("unknown ref %q", req.Ref)
|
|
||||||
}
|
|
||||||
return res, s.processes.DeleteProcess(req.ProcessID)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Info(ctx context.Context, req *pb.InfoRequest) (res *pb.InfoResponse, err error) {
|
|
||||||
return &pb.InfoResponse{
|
|
||||||
BuildxVersion: &pb.BuildxVersion{
|
|
||||||
Package: version.Package,
|
|
||||||
Version: version.Version,
|
|
||||||
Revision: version.Revision,
|
|
||||||
},
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) List(ctx context.Context, req *pb.ListRequest) (res *pb.ListResponse, err error) {
|
|
||||||
keys := make(map[string]struct{})
|
|
||||||
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
for k := range m.session {
|
|
||||||
keys[k] = struct{}{}
|
|
||||||
}
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
|
|
||||||
var keysL []string
|
|
||||||
for k := range keys {
|
|
||||||
keysL = append(keysL, k)
|
|
||||||
}
|
|
||||||
return &pb.ListResponse{
|
|
||||||
Keys: keysL,
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Disconnect(ctx context.Context, req *pb.DisconnectRequest) (res *pb.DisconnectResponse, err error) {
|
|
||||||
key := req.Ref
|
|
||||||
if key == "" {
|
|
||||||
return nil, errors.New("disconnect: empty key")
|
|
||||||
}
|
|
||||||
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
if s, ok := m.session[key]; ok {
|
|
||||||
if s.cancelBuild != nil {
|
|
||||||
s.cancelBuild()
|
|
||||||
}
|
|
||||||
s.cancelRunningProcesses()
|
|
||||||
if s.result != nil {
|
|
||||||
s.result.Done()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
delete(m.session, key)
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
|
|
||||||
return &pb.DisconnectResponse{}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Close() error {
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
for k := range m.session {
|
|
||||||
if s, ok := m.session[k]; ok {
|
|
||||||
if s.cancelBuild != nil {
|
|
||||||
s.cancelBuild()
|
|
||||||
}
|
|
||||||
s.cancelRunningProcesses()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Inspect(ctx context.Context, req *pb.InspectRequest) (*pb.InspectResponse, error) {
|
|
||||||
ref := req.Ref
|
|
||||||
if ref == "" {
|
|
||||||
return nil, errors.New("inspect: empty key")
|
|
||||||
}
|
|
||||||
var bo *pb.BuildOptions
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
if s, ok := m.session[ref]; ok {
|
|
||||||
bo = s.buildOptions
|
|
||||||
} else {
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
return nil, errors.Errorf("inspect: unknown key %v", ref)
|
|
||||||
}
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
return &pb.InspectResponse{Options: bo}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Build(ctx context.Context, req *pb.BuildRequest) (*pb.BuildResponse, error) {
|
|
||||||
ref := req.Ref
|
|
||||||
if ref == "" {
|
|
||||||
return nil, errors.New("build: empty key")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Prepare status channel and session
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
if m.session == nil {
|
|
||||||
m.session = make(map[string]*session)
|
|
||||||
}
|
|
||||||
s, ok := m.session[ref]
|
|
||||||
if ok {
|
|
||||||
if !s.buildOnGoing.CompareAndSwap(false, true) {
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
return &pb.BuildResponse{}, errors.New("build ongoing")
|
|
||||||
}
|
|
||||||
s.cancelRunningProcesses()
|
|
||||||
s.result = nil
|
|
||||||
} else {
|
|
||||||
s = &session{}
|
|
||||||
s.buildOnGoing.Store(true)
|
|
||||||
}
|
|
||||||
|
|
||||||
s.processes = processes.NewManager()
|
|
||||||
statusChan := make(chan *pb.StatusResponse)
|
|
||||||
s.statusChan = statusChan
|
|
||||||
inR, inW := io.Pipe()
|
|
||||||
defer inR.Close()
|
|
||||||
s.inputPipe = inW
|
|
||||||
m.session[ref] = s
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
defer func() {
|
|
||||||
close(statusChan)
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
s, ok := m.session[ref]
|
|
||||||
if ok {
|
|
||||||
s.statusChan = nil
|
|
||||||
s.buildOnGoing.Store(false)
|
|
||||||
}
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
}()
|
|
||||||
|
|
||||||
pw := pb.NewProgressWriter(statusChan)
|
|
||||||
|
|
||||||
// Build the specified request
|
|
||||||
ctx, cancel := context.WithCancel(ctx)
|
|
||||||
defer cancel()
|
|
||||||
resp, res, buildErr := m.buildFunc(ctx, req.Options, inR, pw)
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
if s, ok := m.session[ref]; ok {
|
|
||||||
// NOTE: buildFunc can return *build.ResultHandle even on error (e.g. when it's implemented using (github.com/docker/buildx/controller/build).RunBuild).
|
|
||||||
if res != nil {
|
|
||||||
s.result = res
|
|
||||||
s.cancelBuild = cancel
|
|
||||||
s.buildOptions = req.Options
|
|
||||||
m.session[ref] = s
|
|
||||||
if buildErr != nil {
|
|
||||||
buildErr = controllererrors.WrapBuild(buildErr, ref)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
return nil, errors.Errorf("build: unknown key %v", ref)
|
|
||||||
}
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
|
|
||||||
if buildErr != nil {
|
|
||||||
return nil, buildErr
|
|
||||||
}
|
|
||||||
|
|
||||||
if resp == nil {
|
|
||||||
resp = &client.SolveResponse{}
|
|
||||||
}
|
|
||||||
return &pb.BuildResponse{
|
|
||||||
ExporterResponse: resp.ExporterResponse,
|
|
||||||
}, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Status(req *pb.StatusRequest, stream pb.Controller_StatusServer) error {
|
|
||||||
ref := req.Ref
|
|
||||||
if ref == "" {
|
|
||||||
return errors.New("status: empty key")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Wait and get status channel prepared by Build()
|
|
||||||
var statusChan <-chan *pb.StatusResponse
|
|
||||||
for {
|
|
||||||
// TODO: timeout?
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
if _, ok := m.session[ref]; !ok || m.session[ref].statusChan == nil {
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
time.Sleep(time.Millisecond) // TODO: wait for Build without a busy loop and make it cancellable
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
statusChan = m.session[ref].statusChan
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
break
|
|
||||||
}
|
|
||||||
|
|
||||||
// forward status
|
|
||||||
for ss := range statusChan {
|
|
||||||
if ss == nil {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
if err := stream.Send(ss); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Input(stream pb.Controller_InputServer) (err error) {
|
|
||||||
// Get the target ref from init message
|
|
||||||
msg, err := stream.Recv()
|
|
||||||
if err != nil {
|
|
||||||
if !errors.Is(err, io.EOF) {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
init := msg.GetInit()
|
|
||||||
if init == nil {
|
|
||||||
return errors.Errorf("unexpected message: %T; wanted init", msg.GetInit())
|
|
||||||
}
|
|
||||||
ref := init.Ref
|
|
||||||
if ref == "" {
|
|
||||||
return errors.New("input: no ref is provided")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Wait and get input stream pipe prepared by Build()
|
|
||||||
var inputPipeW *io.PipeWriter
|
|
||||||
for {
|
|
||||||
// TODO: timeout?
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
if _, ok := m.session[ref]; !ok || m.session[ref].inputPipe == nil {
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
time.Sleep(time.Millisecond) // TODO: wait for Build without a busy loop and make it cancellable
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
inputPipeW = m.session[ref].inputPipe
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
break
|
|
||||||
}
|
|
||||||
|
|
||||||
// Forward input stream
|
|
||||||
eg, ctx := errgroup.WithContext(context.TODO())
|
|
||||||
done := make(chan struct{})
|
|
||||||
msgCh := make(chan *pb.InputMessage)
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer close(msgCh)
|
|
||||||
for {
|
|
||||||
msg, err := stream.Recv()
|
|
||||||
if err != nil {
|
|
||||||
if !errors.Is(err, io.EOF) {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
select {
|
|
||||||
case msgCh <- msg:
|
|
||||||
case <-done:
|
|
||||||
return nil
|
|
||||||
case <-ctx.Done():
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
eg.Go(func() (retErr error) {
|
|
||||||
defer close(done)
|
|
||||||
defer func() {
|
|
||||||
if retErr != nil {
|
|
||||||
inputPipeW.CloseWithError(retErr)
|
|
||||||
return
|
|
||||||
}
|
|
||||||
inputPipeW.Close()
|
|
||||||
}()
|
|
||||||
for {
|
|
||||||
var msg *pb.InputMessage
|
|
||||||
select {
|
|
||||||
case msg = <-msgCh:
|
|
||||||
case <-ctx.Done():
|
|
||||||
return errors.Wrap(ctx.Err(), "canceled")
|
|
||||||
}
|
|
||||||
if msg == nil {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if data := msg.GetData(); data != nil {
|
|
||||||
if len(data.Data) > 0 {
|
|
||||||
_, err := inputPipeW.Write(data.Data)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if data.EOF {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
return eg.Wait()
|
|
||||||
}
|
|
||||||
|
|
||||||
func (m *Server) Invoke(srv pb.Controller_InvokeServer) error {
|
|
||||||
containerIn, containerOut := ioset.Pipe()
|
|
||||||
defer func() { containerOut.Close(); containerIn.Close() }()
|
|
||||||
|
|
||||||
initDoneCh := make(chan *processes.Process)
|
|
||||||
initErrCh := make(chan error)
|
|
||||||
eg, egCtx := errgroup.WithContext(context.TODO())
|
|
||||||
srvIOCtx, srvIOCancel := context.WithCancel(egCtx)
|
|
||||||
eg.Go(func() error {
|
|
||||||
defer srvIOCancel()
|
|
||||||
return serveIO(srvIOCtx, srv, func(initMessage *pb.InitMessage) (retErr error) {
|
|
||||||
defer func() {
|
|
||||||
if retErr != nil {
|
|
||||||
initErrCh <- retErr
|
|
||||||
}
|
|
||||||
}()
|
|
||||||
ref := initMessage.Ref
|
|
||||||
cfg := initMessage.InvokeConfig
|
|
||||||
|
|
||||||
m.sessionMu.Lock()
|
|
||||||
s, ok := m.session[ref]
|
|
||||||
if !ok {
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
return errors.Errorf("invoke: unknown key %v", ref)
|
|
||||||
}
|
|
||||||
m.sessionMu.Unlock()
|
|
||||||
|
|
||||||
pid := initMessage.ProcessID
|
|
||||||
if pid == "" {
|
|
||||||
return errors.Errorf("invoke: specify process ID")
|
|
||||||
}
|
|
||||||
proc, ok := s.processes.Get(pid)
|
|
||||||
if !ok {
|
|
||||||
// Start a new process.
|
|
||||||
if cfg == nil {
|
|
||||||
return errors.New("no container config is provided")
|
|
||||||
}
|
|
||||||
var err error
|
|
||||||
proc, err = s.processes.StartProcess(pid, s.result, cfg)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Attach containerIn to this process
|
|
||||||
proc.ForwardIO(&containerIn, srvIOCancel)
|
|
||||||
initDoneCh <- proc
|
|
||||||
return nil
|
|
||||||
}, &ioServerConfig{
|
|
||||||
stdin: containerOut.Stdin,
|
|
||||||
stdout: containerOut.Stdout,
|
|
||||||
stderr: containerOut.Stderr,
|
|
||||||
// TODO: signal, resize
|
|
||||||
})
|
|
||||||
})
|
|
||||||
eg.Go(func() (rErr error) {
|
|
||||||
defer srvIOCancel()
|
|
||||||
// Wait for init done
|
|
||||||
var proc *processes.Process
|
|
||||||
select {
|
|
||||||
case p := <-initDoneCh:
|
|
||||||
proc = p
|
|
||||||
case err := <-initErrCh:
|
|
||||||
return err
|
|
||||||
case <-egCtx.Done():
|
|
||||||
return egCtx.Err()
|
|
||||||
}
|
|
||||||
|
|
||||||
// Wait for IO done
|
|
||||||
select {
|
|
||||||
case <-srvIOCtx.Done():
|
|
||||||
return srvIOCtx.Err()
|
|
||||||
case err := <-proc.Done():
|
|
||||||
return err
|
|
||||||
case <-egCtx.Done():
|
|
||||||
return egCtx.Err()
|
|
||||||
}
|
|
||||||
})
|
|
||||||
|
|
||||||
return eg.Wait()
|
|
||||||
}
|
|
||||||
@ -0,0 +1,74 @@
|
|||||||
|
# Defining additional build contexts and linking targets
|
||||||
|
|
||||||
|
In addition to the main `context` key that defines the build context, each target
|
||||||
|
can also define additional named contexts with a map defined with the key `contexts`.
|
||||||
|
These values map to the `--build-context` flag in the [build command](https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context).
|
||||||
|
|
||||||
|
Inside the Dockerfile these contexts can be used with the `FROM` instruction or `--from` flag.
|
||||||
|
|
||||||
|
The value can be a local source directory, a container image (with the `docker-image://` prefix),
|
||||||
|
a Git URL, an HTTP URL, or the name of another target in the Bake file (with the `target:` prefix).
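
For reference, here is a sketch (the target and context names are purely illustrative) that combines the supported value types in a single `contexts` map; each form is shown individually in the sections below.

```hcl
# docker-bake.hcl (illustrative sketch)
target "app" {
  contexts = {
    # local source directory
    src = "../path/to/source"
    # pinned container image
    alpine = "docker-image://alpine:3.13"
    # result of another target defined in the same file
    base = "target:base"
  }
}
```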
|
||||||
|
|
||||||
|
## Pinning the alpine image
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
# syntax=docker/dockerfile:1
|
||||||
|
FROM alpine
|
||||||
|
RUN echo "Hello world"
|
||||||
|
```
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "app" {
|
||||||
|
contexts = {
|
||||||
|
alpine = "docker-image://alpine:3.13"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Using a secondary source directory
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
# syntax=docker/dockerfile:1
|
||||||
|
FROM scratch AS src
|
||||||
|
|
||||||
|
FROM golang
|
||||||
|
COPY --from=src . .
|
||||||
|
```
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "app" {
|
||||||
|
contexts = {
|
||||||
|
src = "../path/to/source"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Using a result of one target as a base image in another target
|
||||||
|
|
||||||
|
To use the result of one target as the build context of another, specify the target
|
||||||
|
name with the `target:` prefix.
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
# syntax=docker/dockerfile:1
|
||||||
|
FROM baseapp
|
||||||
|
RUN echo "Hello world"
|
||||||
|
```
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "base" {
|
||||||
|
dockerfile = "baseapp.Dockerfile"
|
||||||
|
}
|
||||||
|
|
||||||
|
target "app" {
|
||||||
|
contexts = {
|
||||||
|
baseapp = "target:base"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Please note that in most cases you should just use a single multi-stage
|
||||||
|
Dockerfile with multiple targets for similar behavior. This approach is recommended
|
||||||
|
when you have multiple Dockerfiles that can't be easily merged into one.
|
||||||
@ -0,0 +1,270 @@
|
|||||||
|
# Building from Compose file
|
||||||
|
|
||||||
|
## Specification
|
||||||
|
|
||||||
|
Bake uses the [compose-spec](https://docs.docker.com/compose/compose-file/) to
|
||||||
|
parse a compose file and translate each service to a [target](file-definition.md#target).
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml
|
||||||
|
services:
|
||||||
|
webapp-dev:
|
||||||
|
build: &build-dev
|
||||||
|
dockerfile: Dockerfile.webapp
|
||||||
|
tags:
|
||||||
|
- docker.io/username/webapp:latest
|
||||||
|
cache_from:
|
||||||
|
- docker.io/username/webapp:cache
|
||||||
|
cache_to:
|
||||||
|
- docker.io/username/webapp:cache
|
||||||
|
|
||||||
|
webapp-release:
|
||||||
|
build:
|
||||||
|
<<: *build-dev
|
||||||
|
x-bake:
|
||||||
|
platforms:
|
||||||
|
- linux/amd64
|
||||||
|
- linux/arm64
|
||||||
|
|
||||||
|
db:
|
||||||
|
image: docker.io/username/db
|
||||||
|
build:
|
||||||
|
dockerfile: Dockerfile.db
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"db",
|
||||||
|
"webapp-dev",
|
||||||
|
"webapp-release"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"db": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile.db",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/db"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"webapp-dev": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile.webapp",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/webapp:latest"
|
||||||
|
],
|
||||||
|
"cache-from": [
|
||||||
|
"docker.io/username/webapp:cache"
|
||||||
|
],
|
||||||
|
"cache-to": [
|
||||||
|
"docker.io/username/webapp:cache"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"webapp-release": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile.webapp",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/webapp:latest"
|
||||||
|
],
|
||||||
|
"cache-from": [
|
||||||
|
"docker.io/username/webapp:cache"
|
||||||
|
],
|
||||||
|
"cache-to": [
|
||||||
|
"docker.io/username/webapp:cache"
|
||||||
|
],
|
||||||
|
"platforms": [
|
||||||
|
"linux/amd64",
|
||||||
|
"linux/arm64"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Unlike the [HCL format](file-definition.md#hcl-definition), there are some
|
||||||
|
limitations with the compose format:
|
||||||
|
|
||||||
|
* Specifying variables or global scope attributes is not yet supported
|
||||||
|
* The `inherits` service field is not supported, but you can use [YAML anchors](https://docs.docker.com/compose/compose-file/#fragments) to reference other services, as in the example above
|
||||||
|
|
||||||
|
## `.env` file
|
||||||
|
|
||||||
|
You can declare default environment variables in an environment file named
|
||||||
|
`.env`. This file is loaded from the current working directory
|
||||||
|
where the command is executed, and applied to compose definitions passed
|
||||||
|
with `-f`.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml
|
||||||
|
services:
|
||||||
|
webapp:
|
||||||
|
image: docker.io/username/webapp:${TAG:-v1.0.0}
|
||||||
|
build:
|
||||||
|
dockerfile: Dockerfile
|
||||||
|
```
|
||||||
|
|
||||||
|
```
|
||||||
|
# .env
|
||||||
|
TAG=v1.1.0
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/webapp:v1.1.0"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
> **Note**
|
||||||
|
>
|
||||||
|
> System environment variables take precedence over environment variables
|
||||||
|
> in the `.env` file.
|
||||||
|
|
||||||
|
## Extension field with `x-bake`
|
||||||
|
|
||||||
|
Even if some fields are not (yet) available in the compose specification, you
|
||||||
|
can use the [special extension](https://docs.docker.com/compose/compose-file/#extension)
|
||||||
|
field `x-bake` in your compose file to evaluate extra fields:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml
|
||||||
|
services:
|
||||||
|
addon:
|
||||||
|
image: ct-addon:bar
|
||||||
|
build:
|
||||||
|
context: .
|
||||||
|
dockerfile: ./Dockerfile
|
||||||
|
args:
|
||||||
|
CT_ECR: foo
|
||||||
|
CT_TAG: bar
|
||||||
|
x-bake:
|
||||||
|
tags:
|
||||||
|
- ct-addon:foo
|
||||||
|
- ct-addon:alp
|
||||||
|
platforms:
|
||||||
|
- linux/amd64
|
||||||
|
- linux/arm64
|
||||||
|
cache-from:
|
||||||
|
- user/app:cache
|
||||||
|
- type=local,src=path/to/cache
|
||||||
|
cache-to:
|
||||||
|
- type=local,dest=path/to/cache
|
||||||
|
pull: true
|
||||||
|
|
||||||
|
aws:
|
||||||
|
image: ct-fake-aws:bar
|
||||||
|
build:
|
||||||
|
dockerfile: ./aws.Dockerfile
|
||||||
|
args:
|
||||||
|
CT_ECR: foo
|
||||||
|
CT_TAG: bar
|
||||||
|
x-bake:
|
||||||
|
secret:
|
||||||
|
- id=mysecret,src=./secret
|
||||||
|
- id=mysecret2,src=./secret2
|
||||||
|
platforms: linux/arm64
|
||||||
|
output: type=docker
|
||||||
|
no-cache: true
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"aws",
|
||||||
|
"addon"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"addon": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "./Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"CT_ECR": "foo",
|
||||||
|
"CT_TAG": "bar"
|
||||||
|
},
|
||||||
|
"tags": [
|
||||||
|
"ct-addon:foo",
|
||||||
|
"ct-addon:alp"
|
||||||
|
],
|
||||||
|
"cache-from": [
|
||||||
|
"user/app:cache",
|
||||||
|
"type=local,src=path/to/cache"
|
||||||
|
],
|
||||||
|
"cache-to": [
|
||||||
|
"type=local,dest=path/to/cache"
|
||||||
|
],
|
||||||
|
"platforms": [
|
||||||
|
"linux/amd64",
|
||||||
|
"linux/arm64"
|
||||||
|
],
|
||||||
|
"pull": true
|
||||||
|
},
|
||||||
|
"aws": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "./aws.Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"CT_ECR": "foo",
|
||||||
|
"CT_TAG": "bar"
|
||||||
|
},
|
||||||
|
"tags": [
|
||||||
|
"ct-fake-aws:bar"
|
||||||
|
],
|
||||||
|
"secret": [
|
||||||
|
"id=mysecret,src=./secret",
|
||||||
|
"id=mysecret2,src=./secret2"
|
||||||
|
],
|
||||||
|
"platforms": [
|
||||||
|
"linux/arm64"
|
||||||
|
],
|
||||||
|
"output": [
|
||||||
|
"type=docker"
|
||||||
|
],
|
||||||
|
"no-cache": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Complete list of valid fields for `x-bake`:
|
||||||
|
|
||||||
|
* `cache-from`
|
||||||
|
* `cache-to`
|
||||||
|
* `contexts`
|
||||||
|
* `no-cache`
|
||||||
|
* `no-cache-filter`
|
||||||
|
* `output`
|
||||||
|
* `platforms`
|
||||||
|
* `pull`
|
||||||
|
* `secret`
|
||||||
|
* `ssh`
|
||||||
|
* `tags`
|
||||||
@ -0,0 +1,216 @@
|
|||||||
|
# Configuring builds
|
||||||
|
|
||||||
|
Bake supports loading a build definition from files, but sometimes you need even
|
||||||
|
more flexibility to configure this definition.
|
||||||
|
|
||||||
|
For this use case, you can define variables inside the bake files that can be
|
||||||
|
set by the user with environment variables or by [attribute definitions](#global-scope-attributes)
|
||||||
|
in other bake files. If you wish to change a specific value for a single
|
||||||
|
invocation you can use the `--set` flag [from the command line](#from-command-line).
|
||||||
|
|
||||||
|
## Global scope attributes
|
||||||
|
|
||||||
|
You can define global scope attributes in HCL/JSON and use them for code reuse
|
||||||
|
and setting values for variables. This means you can do a "data-only" HCL file
|
||||||
|
with the values you want to set/override and use it in the list of regular
|
||||||
|
output files.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "FOO" {
|
||||||
|
default = "abc"
|
||||||
|
}
|
||||||
|
|
||||||
|
target "app" {
|
||||||
|
args = {
|
||||||
|
v1 = "pre-${FOO}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
You can use this file directly:
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print app
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"app"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"app": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"v1": "pre-abc"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Or create an override configuration file:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# env.hcl
|
||||||
|
WHOAMI="myuser"
|
||||||
|
FOO="def-${WHOAMI}"
|
||||||
|
```
|
||||||
|
|
||||||
|
And invoke bake together with both of the files:
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake -f docker-bake.hcl -f env.hcl --print app
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"app"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"app": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"v1": "pre-def-myuser"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## From command line
|
||||||
|
|
||||||
|
You can also override target configurations from the command line with the
|
||||||
|
[`--set` flag](https://docs.docker.com/engine/reference/commandline/buildx_bake/#set):
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "app" {
|
||||||
|
args = {
|
||||||
|
mybuildarg = "foo"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --set app.args.mybuildarg=bar --set app.platform=linux/arm64 app --print
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"app"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"app": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"mybuildarg": "bar"
|
||||||
|
},
|
||||||
|
"platforms": [
|
||||||
|
"linux/arm64"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Pattern matching syntax defined in [https://golang.org/pkg/path/#Match](https://golang.org/pkg/path/#Match)
|
||||||
|
is also supported:
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --set foo*.args.mybuildarg=value # overrides build arg for all targets starting with "foo"
|
||||||
|
$ docker buildx bake --set *.platform=linux/arm64 # overrides platform for all targets
|
||||||
|
$ docker buildx bake --set foo*.no-cache # bypass caching only for targets starting with "foo"
|
||||||
|
```
|
||||||
|
|
||||||
|
Complete list of overridable fields:
|
||||||
|
|
||||||
|
* `args`
|
||||||
|
* `cache-from`
|
||||||
|
* `cache-to`
|
||||||
|
* `context`
|
||||||
|
* `dockerfile`
|
||||||
|
* `labels`
|
||||||
|
* `no-cache`
|
||||||
|
* `output`
|
||||||
|
* `platform`
|
||||||
|
* `pull`
|
||||||
|
* `secrets`
|
||||||
|
* `ssh`
|
||||||
|
* `tags`
|
||||||
|
* `target`
|
||||||
|
|
||||||
|
## Using variables in variables across files
|
||||||
|
|
||||||
|
When multiple files are specified, one file can use variables defined in
|
||||||
|
another file.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake1.hcl
|
||||||
|
variable "FOO" {
|
||||||
|
default = upper("${BASE}def")
|
||||||
|
}
|
||||||
|
|
||||||
|
variable "BAR" {
|
||||||
|
default = "-${FOO}-"
|
||||||
|
}
|
||||||
|
|
||||||
|
target "app" {
|
||||||
|
args = {
|
||||||
|
v1 = "pre-${BAR}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake2.hcl
|
||||||
|
variable "BASE" {
|
||||||
|
default = "abc"
|
||||||
|
}
|
||||||
|
|
||||||
|
target "app" {
|
||||||
|
args = {
|
||||||
|
v2 = "${FOO}-post"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake -f docker-bake1.hcl -f docker-bake2.hcl --print app
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"app"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"app": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"v1": "pre--ABCDEF-",
|
||||||
|
"v2": "ABCDEF-post"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
@ -0,0 +1,440 @@
|
|||||||
|
# Bake file definition
|
||||||
|
|
||||||
|
`buildx bake` supports HCL, JSON and Compose file format for defining build
|
||||||
|
[groups](#group), [targets](#target) as well as [variables](#variable) and
|
||||||
|
[functions](#functions). It looks for build definition files in the current
|
||||||
|
directory in the following order:
|
||||||
|
|
||||||
|
* `docker-compose.yml`
|
||||||
|
* `docker-compose.yaml`
|
||||||
|
* `docker-bake.json`
|
||||||
|
* `docker-bake.override.json`
|
||||||
|
* `docker-bake.hcl`
|
||||||
|
* `docker-bake.override.hcl`
|
||||||
|
|
||||||
|
## Specification
|
||||||
|
|
||||||
|
Inside a bake file you can declare group, target and variable blocks to define
|
||||||
|
project specific reusable build flows.
|
||||||
|
|
||||||
|
### Target
|
||||||
|
|
||||||
|
A target reflects a single docker build invocation with the same options that
|
||||||
|
you would specify for `docker build`:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:latest"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
```console
|
||||||
|
$ docker buildx bake webapp-dev
|
||||||
|
```
|
||||||
|
|
||||||
|
> **Note**
|
||||||
|
>
|
||||||
|
> In the case of compose files, each service corresponds to a target.
|
||||||
|
> If the compose service name contains a dot, it will be replaced with an underscore.
|
||||||
|
|
||||||
|
Complete list of valid target fields available for [HCL](#hcl-definition) and
|
||||||
|
[JSON](#json-definition) definitions:
|
||||||
|
|
||||||
|
| Name | Type | Description |
|
||||||
|
|---------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||||
|
| `inherits` | List | [Inherit build options](#merging-and-inheritance) from other targets |
|
||||||
|
| `args` | Map | Set build-time variables (same as [`--build-arg` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `cache-from` | List | External cache sources (same as [`--cache-from` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `cache-to` | List | Cache export destinations (same as [`--cache-to` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `context` | String | Set of files located in the specified path or URL |
|
||||||
|
| `contexts` | Map | Additional build contexts (same as [`--build-context` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `dockerfile` | String | Name of the Dockerfile (same as [`--file` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `dockerfile-inline` | String | Inline Dockerfile content |
|
||||||
|
| `labels` | Map | Set metadata for an image (same as [`--label` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `no-cache` | Bool | Do not use cache when building the image (same as [`--no-cache` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `no-cache-filter` | List | Do not cache specified stages (same as [`--no-cache-filter` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `output` | List | Output destination (same as [`--output` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `platforms` | List | Set target platforms for build (same as [`--platform` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `pull` | Bool | Always attempt to pull all referenced images (same as [`--pull` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `secret` | List | Secret to expose to the build (same as [`--secret` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `ssh` | List | SSH agent socket or keys to expose to the build (same as [`--ssh` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `tags` | List | Name and optionally a tag in the format `name:tag` (same as [`--tag` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
| `target` | String | Set the target build stage to build (same as [`--target` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
|
||||||
|
|
||||||
|
### Group
|
||||||
|
|
||||||
|
A group is a grouping of targets:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
group "build" {
|
||||||
|
targets = ["db", "webapp-dev"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:latest"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "db" {
|
||||||
|
dockerfile = "Dockerfile.db"
|
||||||
|
tags = ["docker.io/username/db"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
```console
|
||||||
|
$ docker buildx bake build
|
||||||
|
```
|
||||||
|
|
||||||
|
### Variable
|
||||||
|
|
||||||
|
Similar to how Terraform provides a way to [define variables](https://www.terraform.io/docs/configuration/variables.html#declaring-an-input-variable),
|
||||||
|
the HCL file format also supports variable block definitions. These can be used
|
||||||
|
to define variables with values provided by the current environment, or a
|
||||||
|
default value when unset:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "TAG" {
|
||||||
|
default = "latest"
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:${TAG}"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
```console
|
||||||
|
$ docker buildx bake webapp-dev # will use the default value "latest"
|
||||||
|
$ TAG=dev docker buildx bake webapp-dev # will use the TAG environment variable value
|
||||||
|
```
|
||||||
|
|
||||||
|
> **Tip**
|
||||||
|
>
|
||||||
|
> See also the [Configuring builds](configuring-build.md) page for advanced usage.
|
||||||
|
|
||||||
|
### Functions
|
||||||
|
|
||||||
|
A [set of generally useful functions](https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go)
|
||||||
|
provided by [go-cty](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib)
|
||||||
|
are available for use in HCL files:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:latest"]
|
||||||
|
args = {
|
||||||
|
buildno = "${add(123, 1)}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
In addition, [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc)
|
||||||
|
are also supported:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
function "increment" {
|
||||||
|
params = [number]
|
||||||
|
result = number + 1
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:latest"]
|
||||||
|
args = {
|
||||||
|
buildno = "${increment(123)}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
> **Note**
|
||||||
|
>
|
||||||
|
> See [User defined HCL functions](hcl-funcs.md) page for more details.
|
||||||
|
|
||||||
|
## Built-in variables
|
||||||
|
|
||||||
|
* `BAKE_CMD_CONTEXT` can be used to access the main `context` for bake command
|
||||||
|
from a bake file that has been [imported remotely](file-definition.md#remote-definition).
|
||||||
|
* `BAKE_LOCAL_PLATFORM` returns the current platform's default platform
|
||||||
|
specification (e.g. `linux/amd64`).
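
For example, a minimal sketch (the `app` target name is illustrative) that defaults to building only for the host platform:

```hcl
# docker-bake.hcl (illustrative sketch)
target "app" {
  platforms = [BAKE_LOCAL_PLATFORM]
}
```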
|
||||||
|
|
||||||
|
## Merging and inheritance
|
||||||
|
|
||||||
|
Multiple files can include the same target and final build options will be
|
||||||
|
determined by merging them together:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:latest"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
```hcl
|
||||||
|
# docker-bake2.hcl
|
||||||
|
target "webapp-dev" {
|
||||||
|
tags = ["docker.io/username/webapp:dev"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
```console
|
||||||
|
$ docker buildx bake -f docker-bake.hcl -f docker-bake2.hcl webapp-dev
|
||||||
|
```
|
||||||
|
|
||||||
|
A group can specify its list of targets with the `targets` option. A target can
|
||||||
|
inherit build options by setting the `inherits` option to the list of targets or
|
||||||
|
groups to inherit from:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:${TAG}"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp-release" {
|
||||||
|
inherits = ["webapp-dev"]
|
||||||
|
platforms = ["linux/amd64", "linux/arm64"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## `default` target/group
|
||||||
|
|
||||||
|
When you invoke `bake` you specify what targets/groups you want to build. If no
|
||||||
|
arguments are specified, the group or target named `default` will be built:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
target "default" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:latest"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
```console
|
||||||
|
$ docker buildx bake
|
||||||
|
```
|
||||||
|
|
||||||
|
## Definitions
|
||||||
|
|
||||||
|
### HCL definition
|
||||||
|
|
||||||
|
The HCL definition format is recommended, as it is more aligned with the buildx UX
|
||||||
|
and also allows better code reuse, different target groups and extended features.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "TAG" {
|
||||||
|
default = "latest"
|
||||||
|
}
|
||||||
|
|
||||||
|
group "default" {
|
||||||
|
targets = ["db", "webapp-dev"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp-dev" {
|
||||||
|
dockerfile = "Dockerfile.webapp"
|
||||||
|
tags = ["docker.io/username/webapp:${TAG}"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp-release" {
|
||||||
|
inherits = ["webapp-dev"]
|
||||||
|
platforms = ["linux/amd64", "linux/arm64"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "db" {
|
||||||
|
dockerfile = "Dockerfile.db"
|
||||||
|
tags = ["docker.io/username/db"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### JSON definition
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"variable": {
|
||||||
|
"TAG": {
|
||||||
|
"default": "latest"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"db",
|
||||||
|
"webapp-dev"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp-dev": {
|
||||||
|
"dockerfile": "Dockerfile.webapp",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/webapp:${TAG}"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"webapp-release": {
|
||||||
|
"inherits": [
|
||||||
|
"webapp-dev"
|
||||||
|
],
|
||||||
|
"platforms": [
|
||||||
|
"linux/amd64",
|
||||||
|
"linux/arm64"
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"db": {
|
||||||
|
"dockerfile": "Dockerfile.db",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/db"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Compose file
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# docker-compose.yml
|
||||||
|
services:
|
||||||
|
webapp:
|
||||||
|
image: docker.io/username/webapp:latest
|
||||||
|
build:
|
||||||
|
dockerfile: Dockerfile.webapp
|
||||||
|
|
||||||
|
db:
|
||||||
|
image: docker.io/username/db
|
||||||
|
build:
|
||||||
|
dockerfile: Dockerfile.db
|
||||||
|
```
|
||||||
|
|
||||||
|
> **Note**
|
||||||
|
>
|
||||||
|
> See [Building from Compose file](compose-file.md) page for more details.
|
||||||
|
|
||||||
|
## Remote definition
|
||||||
|
|
||||||
|
You can also build bake files directly from a remote Git repository or HTTPS URL:
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake "https://github.com/docker/cli.git#v20.10.11" --print
|
||||||
|
#1 [internal] load git source https://github.com/docker/cli.git#v20.10.11
|
||||||
|
#1 0.745 e8f1871b077b64bcb4a13334b7146492773769f7 refs/tags/v20.10.11
|
||||||
|
#1 2.022 From https://github.com/docker/cli
|
||||||
|
#1 2.022 * [new tag] v20.10.11 -> v20.10.11
|
||||||
|
#1 DONE 2.9s
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"binary"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"binary": {
|
||||||
|
"context": "https://github.com/docker/cli.git#v20.10.11",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"BASE_VARIANT": "alpine",
|
||||||
|
"GO_STRIP": "",
|
||||||
|
"VERSION": ""
|
||||||
|
},
|
||||||
|
"target": "binary",
|
||||||
|
"platforms": [
|
||||||
|
"local"
|
||||||
|
],
|
||||||
|
"output": [
|
||||||
|
"build"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
As you can see, the context is fixed to `https://github.com/docker/cli.git` even if
|
||||||
|
[no context is actually defined](https://github.com/docker/cli/blob/2776a6d694f988c0c1df61cad4bfac0f54e481c8/docker-bake.hcl#L17-L26)
|
||||||
|
in the definition.
|
||||||
|
|
||||||
|
If you want to access the main context for bake command from a bake file
|
||||||
|
that has been imported remotely, you can use the [`BAKE_CMD_CONTEXT` built-in var](#built-in-variables).
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ cat https://raw.githubusercontent.com/tonistiigi/buildx/remote-test/docker-bake.hcl
|
||||||
|
```
|
||||||
|
```hcl
|
||||||
|
target "default" {
|
||||||
|
context = BAKE_CMD_CONTEXT
|
||||||
|
dockerfile-inline = <<EOT
|
||||||
|
FROM alpine
|
||||||
|
WORKDIR /src
|
||||||
|
COPY . .
|
||||||
|
RUN ls -l && stop
|
||||||
|
EOT
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" --print
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"target": {
|
||||||
|
"default": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ touch foo bar
|
||||||
|
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test"
|
||||||
|
```
|
||||||
|
```text
|
||||||
|
...
|
||||||
|
> [4/4] RUN ls -l && stop:
|
||||||
|
#8 0.101 total 0
|
||||||
|
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 bar
|
||||||
|
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 foo
|
||||||
|
#8 0.102 /bin/sh: stop: not found
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11" --print
|
||||||
|
#1 [internal] load git source https://github.com/tonistiigi/buildx.git#remote-test
|
||||||
|
#1 0.429 577303add004dd7efeb13434d69ea030d35f7888 refs/heads/remote-test
|
||||||
|
#1 CACHED
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"target": {
|
||||||
|
"default": {
|
||||||
|
"context": "https://github.com/docker/cli.git#v20.10.11",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11"
|
||||||
|
```
|
||||||
|
```text
|
||||||
|
...
|
||||||
|
> [4/4] RUN ls -l && stop:
|
||||||
|
#8 0.136 drwxrwxrwx 5 root root 4096 Jul 27 18:31 kubernetes
|
||||||
|
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 man
|
||||||
|
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 opts
|
||||||
|
#8 0.136 -rw-rw-rw- 1 root root 1893 Jul 27 18:31 poule.yml
|
||||||
|
#8 0.136 drwxrwxrwx 7 root root 4096 Jul 27 18:31 scripts
|
||||||
|
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 service
|
||||||
|
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 templates
|
||||||
|
#8 0.136 drwxrwxrwx 10 root root 4096 Jul 27 18:31 vendor
|
||||||
|
#8 0.136 -rwxrwxrwx 1 root root 9620 Jul 27 18:31 vendor.conf
|
||||||
|
#8 0.136 /bin/sh: stop: not found
|
||||||
|
```
|
||||||
@ -0,0 +1,327 @@
|
|||||||
|
# User defined HCL functions
|
||||||
|
|
||||||
|
## Using interpolation to tag an image with the git sha
|
||||||
|
|
||||||
|
As shown in the [File definition](file-definition.md#variable) page, `bake`
|
||||||
|
supports variable blocks which are assigned to matching environment variables
|
||||||
|
or default values:
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "TAG" {
|
||||||
|
default = "latest"
|
||||||
|
}
|
||||||
|
|
||||||
|
group "default" {
|
||||||
|
targets = ["webapp"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
tags = ["docker.io/username/webapp:${TAG}"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Alternatively, in JSON format:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"variable": {
|
||||||
|
"TAG": {
|
||||||
|
"default": "latest"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": ["webapp"]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"tags": ["docker.io/username/webapp:${TAG}"]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print webapp
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/webapp:latest"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ TAG=$(git rev-parse --short HEAD) docker buildx bake --print webapp
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"tags": [
|
||||||
|
"docker.io/username/webapp:985e9e9"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Using the `add` function
|
||||||
|
|
||||||
|
You can use [`go-cty` stdlib functions](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib).
|
||||||
|
Here we are using the `add` function.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "TAG" {
|
||||||
|
default = "latest"
|
||||||
|
}
|
||||||
|
|
||||||
|
group "default" {
|
||||||
|
targets = ["webapp"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
args = {
|
||||||
|
buildno = "${add(123, 1)}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print webapp
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"buildno": "124"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Defining an `increment` function
|
||||||
|
|
||||||
|
It also supports [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc).
|
||||||
|
The following example defines a simple an `increment` function.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
function "increment" {
|
||||||
|
params = [number]
|
||||||
|
result = number + 1
|
||||||
|
}
|
||||||
|
|
||||||
|
group "default" {
|
||||||
|
targets = ["webapp"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
args = {
|
||||||
|
buildno = "${increment(123)}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print webapp
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"args": {
|
||||||
|
"buildno": "124"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Only adding tags if a variable is not empty using an `notequal`
|
||||||
|
|
||||||
|
Here we are using the conditional `notequal` function which is just for
|
||||||
|
symmetry with the `equal` one.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "TAG" {default="" }
|
||||||
|
|
||||||
|
group "default" {
|
||||||
|
targets = [
|
||||||
|
"webapp",
|
||||||
|
]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
context="."
|
||||||
|
dockerfile="Dockerfile"
|
||||||
|
tags = [
|
||||||
|
"my-image:latest",
|
||||||
|
notequal("",TAG) ? "my-image:${TAG}": "",
|
||||||
|
]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print webapp
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"tags": [
|
||||||
|
"my-image:latest"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Using variables in functions
|
||||||
|
|
||||||
|
You can refer variables to other variables like the target blocks can. Stdlib
|
||||||
|
functions can also be called but user functions can't at the moment.
|
||||||
|
|
||||||
|
```hcl
|
||||||
|
# docker-bake.hcl
|
||||||
|
variable "REPO" {
|
||||||
|
default = "user/repo"
|
||||||
|
}
|
||||||
|
|
||||||
|
function "tag" {
|
||||||
|
params = [tag]
|
||||||
|
result = ["${REPO}:${tag}"]
|
||||||
|
}
|
||||||
|
|
||||||
|
target "webapp" {
|
||||||
|
tags = tag("v1")
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
```console
|
||||||
|
$ docker buildx bake --print webapp
|
||||||
|
```
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"group": {
|
||||||
|
"default": {
|
||||||
|
"targets": [
|
||||||
|
"webapp"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"target": {
|
||||||
|
"webapp": {
|
||||||
|
"context": ".",
|
||||||
|
"dockerfile": "Dockerfile",
|
||||||
|
"tags": [
|
||||||
|
"user/repo:v1"
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
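
Variables can also build on other variables. A minimal sketch, assuming only
what the examples above already show (the variable names are illustrative, and
`add` is the same stdlib function used earlier):

```hcl
# docker-bake.hcl
variable "BUILD_BASE" {
  default = 100
}

variable "BUILD_NUMBER" {
  # references another variable and calls the stdlib `add` function;
  # user-defined functions cannot be called here at the moment
  default = add(BUILD_BASE, 1)
}

target "webapp" {
  args = {
    buildno = "${BUILD_NUMBER}"
  }
}
```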

## Using typed variables

Non-string variables are also accepted. The value passed with the environment
variable is first parsed into a suitable type.

```hcl
# docker-bake.hcl
variable "FOO" {
  default = 3
}

variable "IS_FOO" {
  default = true
}

target "app" {
  args = {
    v1 = FOO > 5 ? "higher" : "lower"
    v2 = IS_FOO ? "yes" : "no"
  }
}
```

```console
$ docker buildx bake --print app
```
```json
{
  "group": {
    "default": {
      "targets": [
        "app"
      ]
    }
  },
  "target": {
    "app": {
      "context": ".",
      "dockerfile": "Dockerfile",
      "args": {
        "v1": "lower",
        "v2": "yes"
      }
    }
  }
}
```
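
Overriding those variables from the environment shows the parsing in action;
for instance, the following should flip both expressions (output trimmed here
to the `args` section):

```console
$ FOO=10 IS_FOO=false docker buildx bake --print app
```
```json
{
  "target": {
    "app": {
      "args": {
        "v1": "higher",
        "v2": "no"
      }
    }
  }
}
```
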
@ -0,0 +1,36 @@

# High-level build options with Bake

> This command is experimental.
>
> The design of bake is in early stages, and we are looking for [feedback from users](https://github.com/docker/buildx/issues).
{: .experimental }

Buildx also aims to provide support for high-level build concepts that go beyond
invoking a single build command. We want to support building all the images in
your application together and let users define project-specific, reusable
build flows that can then be easily invoked by anyone.

[BuildKit](https://github.com/moby/buildkit) efficiently handles multiple
concurrent build requests and de-duplicates work. Build commands can be
combined with general-purpose command runners (for example, `make`). However,
these tools generally invoke builds in sequence and therefore cannot leverage
the full potential of BuildKit parallelization, or combine BuildKit's output
for the user. For this use case, we have added a command called
[`docker buildx bake`](https://docs.docker.com/engine/reference/commandline/buildx_bake/).

The `bake` command supports building images from HCL, JSON and Compose files.
This is similar to [`docker compose build`](https://docs.docker.com/compose/reference/build/),
but allows all the services to be built concurrently as part of a single
request. If multiple files are specified, they are all read and their
configurations are combined.
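
For example, a minimal invocation combining two files might look like this
(the file names are only placeholders):

```console
$ docker buildx bake -f docker-bake.hcl -f docker-bake.override.hcl --print
```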

We recommend using HCL files, as their experience is more aligned with the
buildx UX and they also allow better code reuse, different target groups and
extended features.

## Next steps

* [File definition](file-definition.md)
* [Configuring builds](configuring-build.md)
* [User defined HCL functions](hcl-funcs.md)
* [Defining additional build contexts and linking targets](build-contexts.md)
* [Building from Compose file](compose-file.md)
@ -1,3 +1,48 @@

# CI/CD

This page has moved to [Docker Docs website](https://docs.docker.com/build/ci/)

## GitHub Actions

Docker provides a [GitHub Action that will build and push your image](https://github.com/docker/build-push-action/#about)
using Buildx. Here is a simple workflow:

```yaml
name: ci

on:
  push:
    branches:
      - 'main'

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
```

In this example we are also using 3 other actions:

* [`setup-buildx`](https://github.com/docker/setup-buildx-action) action will create and boot a builder,
  by default using the `docker-container` [builder driver](../reference/buildx_create.md#driver).
  This is **not required but recommended** to be able to build multi-platform images, export cache, etc.
* [`setup-qemu`](https://github.com/docker/setup-qemu-action) action can be useful if you want
  to add emulation support with QEMU to be able to build against more platforms.
* [`login`](https://github.com/docker/login-action) action will take care of logging
  in against a Docker registry.
@ -1,3 +1,23 @@

# CNI networking

This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#cni-networking)

It can be useful to use a bridge network for your builder if, for example, you
encounter network port contention during multiple builds. If you're using
the BuildKit image, CNI is not yet available in it, but you can create
[a custom BuildKit image with CNI support](https://github.com/moby/buildkit/blob/master/docs/cni-networking.md).
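
A minimal sketch of such an image is shown below. The plugin version, the
target architecture, and the presence of a `cni.json` network configuration in
the build context are all assumptions; the BuildKit guide linked above has the
canonical example.

```dockerfile
# Start from the stock BuildKit image and add the CNI plugin binaries.
FROM moby/buildkit:buildx-stable-1
ARG CNI_PLUGINS_VERSION=v1.3.0
ADD https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-amd64-${CNI_PLUGINS_VERSION}.tgz /tmp/cni-plugins.tgz
RUN mkdir -p /opt/cni/bin \
 && tar -xzf /tmp/cni-plugins.tgz -C /opt/cni/bin \
 && rm /tmp/cni-plugins.tgz
# buildkitd looks for its CNI configuration at /etc/buildkit/cni.json by default.
COPY cni.json /etc/buildkit/cni.json
```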

Now build this image:

```console
$ docker buildx build --tag buildkit-cni:local --load .
```

Then [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) that
will use this image:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "image=buildkit-cni:local" \
  --buildkitd-flags "--oci-worker-net=cni"
```
@ -1,3 +1,20 @@

# Color output controls

This page has moved to [Docker Docs website](https://docs.docker.com/build/building/env-vars/#buildkit_colors)

Buildx has support for modifying the colors that are used to output information
to the terminal. You can set the environment variable `BUILDKIT_COLORS` to
something like `run=123,20,245:error=yellow:cancel=blue:warning=white` to set
the colors that you would like to use:

![Progress output custom colors](https://user-images.githubusercontent.com/1951866/180584033-24522385-cafd-4a54-a4a2-18f5ce74eb27.png?raw=true)

Setting `NO_COLOR` to anything will disable any colorized output as recommended
by [no-color.org](https://no-color.org/):

![Progress output no color](https://user-images.githubusercontent.com/1951866/180584037-e28f9997-dd4f-49eb-bb50-bd1bd4b45409.png?raw=true)

> **Note**
>
> Parsing errors will be reported but ignored. This will result in default
> color values being used where needed.

See also [the list of pre-defined colors](https://github.com/moby/buildkit/blob/master/util/progress/progressui/colors.go).
@ -1,3 +1,34 @@

# Using a custom network

This page has moved to [Docker Docs website](https://docs.docker.com/build/drivers/docker-container/#custom-network)

[Create a network](https://docs.docker.com/engine/reference/commandline/network_create/)
named `foonet`:

```console
$ docker network create foonet
```

[Create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
named `mybuilder` that will use this network:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "network=foonet"
```

Boot and [inspect `mybuilder`](https://docs.docker.com/engine/reference/commandline/buildx_inspect/):

```console
$ docker buildx inspect --bootstrap
```

[Inspect the builder container](https://docs.docker.com/engine/reference/commandline/inspect/)
and see what network is being used:

{% raw %}
```console
$ docker inspect buildx_buildkit_mybuilder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
```
{% endraw %}
@ -1,3 +1,63 @@

# Using a custom registry configuration

This page has moved to [Docker Docs website](https://docs.docker.com/build/buildkit/configure/#setting-registry-certificates)

If you [create a `docker-container` or `kubernetes` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/) and
have specified certificates for registries in the [BuildKit daemon configuration](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md),
the files will be copied into the container under `/etc/buildkit/certs` and
the configuration will be updated to reflect that.

Take the following `buildkitd.toml` configuration that will be used for
pushing an image to this registry using self-signed certificates:

```toml
# /etc/buildkitd.toml
debug = true
[registry."myregistry.com"]
  ca=["/etc/certs/myregistry.pem"]
  [[registry."myregistry.com".keypair]]
    key="/etc/certs/myregistry_key.pem"
    cert="/etc/certs/myregistry_cert.pem"
```

Here we have configured a self-signed certificate for the `myregistry.com` registry.

Now [create a `docker-container` builder](https://docs.docker.com/engine/reference/commandline/buildx_create/)
that will use this BuildKit configuration:

```console
$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml
```

Inspecting the builder container, you can see that the buildkitd configuration
has changed:

```console
$ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
```
```toml
debug = true

[registry]

  [registry."myregistry.com"]
    ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]

    [[registry."myregistry.com".keypair]]
      cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
      key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
```

And the certificates have been copied inside the container:

```console
$ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
myregistry.pem  myregistry_cert.pem  myregistry_key.pem
```

Now you should be able to push to the registry with this builder:

```console
$ docker buildx build --push --tag myregistry.com/myimage:latest .
```
@ -1,164 +0,0 @@

# Debug monitor

To assist with creating and debugging complex builds, Buildx provides a
debugger to help you step through the build process and easily inspect the
state of the build environment at any point.

> **Note**
>
> The debug monitor is a new experimental feature in recent versions of Buildx.
> There are rough edges, known bugs, and missing features. Please try it out
> and let us know what you think!

## Starting the debugger

To start the debugger, first, ensure that `BUILDX_EXPERIMENTAL=1` is set in
your environment.

```console
$ export BUILDX_EXPERIMENTAL=1
```

To start a debug session for a build, you can use the `--invoke` flag with the
build command to specify a command to launch in the resulting image.

```console
$ docker buildx build --invoke /bin/sh .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                 0.0s
 => [internal] load build definition from Dockerfile          0.0s
 => => transferring dockerfile: 32B                           0.0s
 => [internal] load .dockerignore                             0.0s
 => => transferring context: 34B                              0.0s
...
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "dzz7pjb4pk1mj29xqrx0ac3oj". Press Ctrl-a-c to switch to the new container
Switched IO
/ #
```

This launches a `/bin/sh` process in the final stage of the image, and allows
you to explore the contents of the image, without needing to export or load the
image outside of the builder.

For example, you can use `ls` to see the contents of the image:

```console
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    work
```

An optional long form allows you to specify a detailed configuration for the
process. It must be given as CSV-style comma-separated key-value pairs.
Supported keys are `args` (can be JSON array format), `entrypoint` (can be JSON array format), `env` (can be JSON array format), `user`, `cwd` and `tty` (bool).

Example:

```console
$ docker buildx build --invoke 'entrypoint=["sh"],"args=[""-c"", ""env | grep -e FOO -e AAA""]","env=[""FOO=bar"", ""AAA=bbb""]"' .
```

#### `on-error`

If you want to start a debug session when a build fails, you can use
`--invoke=on-error` to start a debug session when the build fails.

```console
$ docker buildx build --invoke on-error .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                 0.0s
 => [internal] load build definition from Dockerfile          0.0s
 => => transferring dockerfile: 32B                           0.0s
 => [internal] load .dockerignore                             0.0s
 => => transferring context: 34B                              0.0s
...
 => ERROR [shell 10/10] RUN bad-command
------
 > [shell 10/10] RUN bad-command:
#0 0.049 /bin/sh: bad-command: not found
------
Launching interactive container. Press Ctrl-a-c to switch to monitor console
Interactive container was restarted with process "edmzor60nrag7rh1mbi4o9lm8". Press Ctrl-a-c to switch to the new container
/ #
```

This allows you to explore the state of the image when the build failed.

#### `debug-shell`

If you want to drop into a debug session without first starting the build, you
can use `--invoke=debug-shell` to start a debug session.

```console
$ docker buildx build --invoke debug-shell .
[+] Building 4.2s (19/19) FINISHED
 => [internal] connecting to local controller                 0.0s
(buildx)
```

You can then use the commands available in [monitor mode](#monitor-mode) to
start and observe the build.

## Monitor mode

By default, when debugging, you'll be dropped into a shell in the final stage.

When you're in a debug shell, you can use the `Ctrl-a-c` key combination (press
`Ctrl`+`a` together, lift, then press `c`) to toggle between the debug shell
and the monitor mode. In monitor mode, you can run commands that control the
debug environment.

```console
(buildx) help
Available commands are:
  attach      attach to a buildx server or a process in the container
  disconnect  disconnect a client from a buildx server. Specific session ID can be specified an arg
  exec        execute a process in the interactive container
  exit        exits monitor
  help        shows this message. Optionally pass a command name as an argument to print the detailed usage.
  kill        kill buildx server
  list        list buildx sessions
  ps          list processes invoked by "exec". Use "attach" to attach IO to that process
  reload      reloads the context and build it
  rollback    re-runs the interactive container with the step's rootfs contents
```

## Build controllers

Debugging is performed using a buildx "controller", which provides a high-level
abstraction to perform builds. By default, the local controller is used for a
more stable experience which runs all builds in-process. However, you can also
use the remote controller to detach the build process from the CLI.

To detach the build process from the CLI, you can use the `--detach=true` flag with
the build command.

```console
$ docker buildx build --detach=true --invoke /bin/sh .
```

If you start a debugging session using the `--invoke` flag with a detached
build, then you can attach to it using the `buildx debug-shell` subcommand to
immediately enter the monitor mode.

```console
$ docker buildx debug-shell
[+] Building 0.0s (1/1) FINISHED
 => [internal] connecting to remote controller
(buildx) list
ID                         CURRENT_SESSION
xfe1162ovd9def8yapb4ys66t  false
(buildx) attach xfe1162ovd9def8yapb4ys66t
Attached to process "". Press Ctrl-a-c to switch to the new container
(buildx) ps
PID                        CURRENT_SESSION  COMMAND
3ug8iqaufiwwnukimhqqt06jz  false            [sh]
(buildx) attach 3ug8iqaufiwwnukimhqqt06jz
Attached to process "3ug8iqaufiwwnukimhqqt06jz". Press Ctrl-a-c to switch to the new container
(buildx) Switched IO
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr    work
/ #
```
@ -0,0 +1,75 @@

# Docker container driver

The buildx docker-container driver allows creation of a managed and
customizable BuildKit environment inside a dedicated Docker container.

Using the docker-container driver has a couple of advantages over the basic
docker driver. Firstly, we can manually override the version of BuildKit to
use, meaning that we can access the latest and greatest features as soon as
they're released, instead of waiting to upgrade to a newer version of Docker.
Additionally, we can access more complex features like multi-architecture
builds and the more advanced cache exporters, which are currently unsupported
in the default docker driver.

We can easily create a new builder that uses the docker-container driver:

```console
$ docker buildx create --name container --driver docker-container
container
```
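
Because the driver is not tied to the Docker Engine's bundled BuildKit, you
can also pin a specific `moby/buildkit` image for the builder via the `image`
driver option; a sketch, where the builder name and image tag are only
examples:

```console
$ docker buildx create --name container-pinned \
  --driver docker-container \
  --driver-opt "image=moby/buildkit:v0.11.6"
```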

We should then be able to see it on our list of available builders:

```console
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT   STATUS     BUILDKIT  PLATFORMS
container       docker-container
  container0    desktop-linux     inactive
default         docker
  default       default           running    20.10.17  linux/amd64, linux/386
```

If we trigger a build, the appropriate `moby/buildkit` image will be pulled
from [Docker Hub](https://hub.docker.com/u/moby/buildkit), the image started,
and our build submitted to our containerized build server.

```console
$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...
```

Note the warning "Build result will only remain in the build cache" - unlike
the `docker` driver, the built image must be explicitly loaded into the local
image store. We can use the `--load` flag for this:

```console
$ docker buildx build --load -t <image> --builder=container .
...
 => exporting to oci image format                                                                     7.7s
 => => exporting layers                                                                               4.9s
 => => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3     0.0s
 => => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f       0.0s
 => => sending tarball                                                                                2.8s
 => importing to docker
```

The image should then be available in the image store:

```console
$ docker image ls
REPOSITORY    TAG       IMAGE ID       CREATED          SIZE
<image>       latest    adf3eec768a1   2 minutes ago    197MB
```

## Further reading

For more information on the docker-container driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

<!--- FIXME: for 0.9, make reference link relative --->
@ -0,0 +1,50 @@

# Docker driver

The buildx docker driver is the default built-in driver, which uses the BuildKit
server components built directly into the Docker engine.

No setup should be required for the docker driver - it should already be
configured for you:

```console
$ docker buildx ls
NAME/NODE       DRIVER/ENDPOINT   STATUS     BUILDKIT  PLATFORMS
default         docker
  default       default           running    20.10.17  linux/amd64, linux/386
```

This builder is ready to build out-of-the-box, requiring no extra setup,
so you can get going with a `docker buildx build` as soon as you like.
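
For example, a basic build with the default builder (the image name is just a
placeholder):

```console
$ docker buildx build -t myimage:latest .
```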

Depending on your personal setup, you may find multiple builders in your list
that use the docker driver. For example, on a system that runs both a
package-managed version of dockerd, as well as Docker Desktop, you might have
the following:

```console
NAME/NODE       DRIVER/ENDPOINT   STATUS     BUILDKIT  PLATFORMS
default         docker
  default       default           running    20.10.17  linux/amd64, linux/386
desktop-linux * docker
  desktop-linux desktop-linux     running    20.10.17  linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```

This is because the docker driver builders are automatically pulled from
the available [Docker Contexts](https://docs.docker.com/engine/context/working-with-contexts/).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.
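
As a sketch, creating a context that points at a remote Docker engine over SSH
(the context name and host address are only placeholders) and listing builders
again might look like:

```console
$ docker context create my-remote --docker "host=ssh://user@remote-host"
$ docker buildx ls
```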

Unlike the [other drivers](../index.md), builders using the docker driver
cannot be manually created, and can only be automatically created from the
docker context. Additionally, they cannot be configured to a specific BuildKit
version, and cannot take any extra parameters, as these are both preset by the
Docker engine internally.

If you want the extra configuration and flexibility without too much more
overhead, then see the help page for the [docker-container driver](./docker-container.md).

## Further reading

For more information on the docker driver, see the [buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

<!--- FIXME: for 0.9, make reference link relative --->
@ -0,0 +1,41 @@

# Buildx drivers overview

The buildx client connects out to the BuildKit backend to execute builds -
Buildx drivers allow fine-grained control over management of the backend, and
support several different options for where and how BuildKit should run.

Currently, we support the following drivers:

- The `docker` driver, that uses the BuildKit library bundled into the Docker
  daemon.
  ([guide](./docker.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `docker-container` driver, that launches a dedicated BuildKit container
  using Docker, for access to advanced features.
  ([guide](./docker-container.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `kubernetes` driver, that launches dedicated BuildKit pods in a
  remote Kubernetes cluster, for scalable builds.
  ([guide](./kubernetes.md), [reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver))
- The `remote` driver, that allows directly connecting to a manually managed
  BuildKit daemon, for more custom setups.
  ([guide](./remote.md))

<!--- FIXME: for 0.9, make links relative, and add reference link for remote --->

To create a new builder that uses one of the above drivers, you can use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/) command:

```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```

The build experience is very similar across drivers; however, there are some
features that are not evenly supported across the board, notably, the `docker`
driver does not include support for certain output/caching types.

| Feature                       |     `docker`     | `docker-container` | `kubernetes` |        `remote`        |
| :---------------------------- | :--------------: | :----------------: | :----------: | :--------------------: |
| **Automatic `--load`**        |        ✅        |         ❌         |      ❌      |           ❌           |
| **Cache export**              | ❔ (inline only)  |         ✅         |      ✅      |           ✅           |
| **Docker/OCI tarball output** |        ❌        |         ✅         |      ✅      |           ✅           |
| **Multi-arch images**         |        ❌        |         ✅         |      ✅      |           ✅           |
| **BuildKit configuration**    |        ❌        |         ✅         |      ✅      | ❔ (managed externally) |