Compare commits


28 Commits
v1 ... 1.27.1

Author SHA1 Message Date
aiordache
509cfb9979 "Bump 1.27.1"
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-10 15:22:57 +02:00
aiordache
4d8d0769a4 Merge branch 'master' into 1.27.x 2020-09-10 15:08:45 +02:00
aiordache
980ec85bf4 "Bump 1.27.0"
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-07 19:09:24 +02:00
aiordache
ad87891ef8 "Bump 1.27.0-rc4"
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-07 16:52:17 +02:00
aiordache
7d2a308b44 Merge branch 'master' into 1.27.x 2020-09-07 16:37:28 +02:00
aiordache
b920010afe "Bump 1.27.0-rc3"
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-03 16:16:51 +02:00
aiordache
687ba365cd Merge branch 'master' into 1.27.x 2020-09-03 15:39:08 +02:00
Ulysses Souza
45c6730e64 "Bump 1.27.0-rc2"
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-08-21 19:27:48 +02:00
Ulysses Souza
dcb1d3b781 Fix flake8
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-08-21 19:27:48 +02:00
Ulysses Souza
0f651d71c7 Update API version for docker client
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-08-21 18:32:29 +02:00
Ulysses Souza
b6e84b0f1c Update tests to use the version on docker client
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-08-21 18:32:29 +02:00
aiordache
0dad2367e6 Use docker-py's default api version for engine queries
Bump docker-py version to 4.3.1

Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
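As an aside on what "default api version" means here, a minimal docker-py sketch (illustrative, not compose code; requires docker-py and a running engine):

```python
# Sketch: constructing APIClient without an explicit `version` makes docker-py
# use its built-in default API version for engine queries.
import docker
from docker.constants import DEFAULT_DOCKER_API_VERSION

client = docker.APIClient()               # no explicit version argument
print(DEFAULT_DOCKER_API_VERSION)         # docker-py's pinned default
print(client.version()['ApiVersion'])     # API version the engine reports
```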
Ulysses Souza
dfd5ff396a Parse network-mode on CLI build
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-08-21 18:32:29 +02:00
aiordache
3f18d599b4 Update schema and fix memory limit parsing
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
aiordache
ae5e505de0 set scale default to 1 on deploy
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
Ryosuke TOKUAMI
89cf753299 Use docker cli on run when the envvar is passed.
Make docker-compose run pass the cli option
to project.up so that images are built using the
docker cli when the COMPOSE_DOCKER_CLI_BUILD
environment variable is set.

Signed-off-by: Ryosuke TOKUAMI <mail@pokutuna.com>
2020-08-21 18:32:29 +02:00
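A hedged usage example for the behaviour described above (the `web` service name is illustrative):

```console
$ COMPOSE_DOCKER_CLI_BUILD=1 docker-compose run --rm web echo hello
```

With the variable set, images needed by `run` are built through the docker CLI, matching the behaviour of `up`.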
aiordache
ea7772d599 "Bump 1.27.0-rc1"
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
aiordache
2963363240 Update jenkins node filter
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
aiordache
8041319bfd Set agent in Release.Jenkinsfile
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
aiordache
824b4943ed Fix tox failures
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-08-21 18:32:29 +02:00
ulyssessouza
35d71511b3 Recover ./script/release/release.py
Signed-off-by: ulyssessouza <ulysses.souza@docker.com>
2020-08-21 18:32:29 +02:00
dependabot-preview[bot]
7d73cb76b3 Bump cffi from 1.14.0 to 1.14.1
Bumps [cffi](https://github.com/python-cffi/release-doc) from 1.14.0 to 1.14.1.
- [Release notes](https://github.com/python-cffi/release-doc/releases)
- [Commits](https://github.com/python-cffi/release-doc/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-08-21 18:32:29 +02:00
dependabot-preview[bot]
796588ec35 Bump urllib3 from 1.25.9 to 1.25.10
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.25.9 to 1.25.10.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/master/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.25.9...1.25.10)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-08-21 18:32:29 +02:00
dependabot-preview[bot]
d120a6f07b Bump pytest from 5.4.3 to 6.0.1
Bumps [pytest](https://github.com/pytest-dev/pytest) from 5.4.3 to 6.0.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/5.4.3...6.0.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-08-21 18:32:29 +02:00
dependabot-preview[bot]
5ddd881dbd Bump cryptography from 2.9.2 to 3.0
Bumps [cryptography](https://github.com/pyca/cryptography) from 2.9.2 to 3.0.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/2.9.2...3.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-08-21 18:32:29 +02:00
dependabot-preview[bot]
22b0f5d20c Bump virtualenv from 20.0.29 to 20.0.30 (#7657)
* Bump virtualenv from 20.0.29 to 20.0.30

Bumps [virtualenv](https://github.com/pypa/virtualenv) from 20.0.29 to 20.0.30.
- [Release notes](https://github.com/pypa/virtualenv/releases)
- [Changelog](https://github.com/pypa/virtualenv/blob/master/docs/changelog.rst)
- [Commits](https://github.com/pypa/virtualenv/compare/20.0.29...20.0.30)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

* Bump virtualenv version in all files

Signed-off-by: aiordache <anca.iordache@docker.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
Co-authored-by: aiordache <anca.iordache@docker.com>
Co-authored-by: Anca Iordache <aiordache@users.noreply.github.com>
2020-08-21 18:32:29 +02:00
alexrecuenco
f74ff28728 Suggestions by @ulyssessouza
Removed unused versions (we only support Python 3.4 onwards)

Signed-off-by: alexrecuenco <alejandrogonzalezrecuenco@gmail.com>
2020-08-21 18:32:29 +02:00
alexrecuenco
1285960d3c Removed Python2 support
Closes: #6890

Some remarks:

- `# coding ... utf-8` statements are no longer needed
- Use `isdigit()` on strings instead of a try/except
- The default opening mode is read, so `open()` works without an explicit `'r'`
- Removed inheriting from the `object` class; it isn't necessary in Python 3
- `super(ClassName, self)` can now be replaced with `super()`
- Use of itertools and `chain` in a couple of places dealing with sets
- Used the operator module instead of lambdas where warranted:
    `itemgetter(0)` instead of `lambda x: x[0]`
    `attrgetter('name')` instead of `lambda x: x.name`
- `sorted` returns a list, so there is no need for `list(sorted(...))`
- Removed `dict()` in favor of dictionary comprehensions wherever possible
- Attempted to remove Python 3.2 support

Signed-off-by: alexrecuenco <alejandrogonzalezrecuenco@gmail.com>
2020-08-21 18:32:29 +02:00
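A self-contained sketch of the idioms listed in the commit message above (illustrative code, not taken from the compose tree):

```python
# Python 2 -> 3 modernizations named in the commit message.
from collections import namedtuple
from operator import attrgetter, itemgetter

class Base:                          # no need to inherit from `object`
    def greet(self):
        return "hi"

class Child(Base):
    def greet(self):
        return super().greet()       # instead of super(Child, self).greet()

pairs = sorted([(2, "b"), (1, "a")], key=itemgetter(0))  # not lambda x: x[0]
Point = namedtuple("Point", "name")
points = sorted([Point("b"), Point("a")], key=attrgetter("name"))

squares = {n: n * n for n in range(5)}   # comprehension instead of dict(...)
names = sorted(["b", "a"])               # already a list; no list(sorted(...))
print(Child().greet(), pairs, points, squares, names)
```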
88 changed files with 562 additions and 8358 deletions


@@ -7,16 +7,6 @@ assignees: ''
---
<!--
**DEPRECATION NOTICE:**
Compose V1 is end-of-life, and as such only issues relating to security vulnerabilities will be considered.
Please do not submit issues regarding bugs or improvements.
For a more up-to-date compose, check v2: https://github.com/docker/compose/tree/v2/
-->
<!--
Welcome to the docker-compose issue tracker! Before creating an issue, please heed the following:
@@ -35,10 +25,7 @@ Welcome to the docker-compose issue tracker! Before creating an issue, please he
## Context information (for bug reports)
- [ ] Using Compose V2 `docker compose ...`
- [ ] Using Compose V1 `docker-compose ...`
**Output of `docker(-)compose version`**
**Output of `docker-compose version`**
```
(paste here)
```


@@ -7,16 +7,6 @@ assignees: ''
---
<!--
**DEPRECATION NOTICE:**
Compose V1 is end-of-life, and as such only issues relating to security vulnerabilities will be considered.
Please do not submit issues regarding bugs or improvements.
For a more up-to-date compose, check v2: https://github.com/docker/compose/tree/v2/
-->
<!--
Welcome to the docker-compose issue tracker! Before creating an issue, please heed the following:
@@ -29,9 +19,6 @@ Welcome to the docker-compose issue tracker! Before creating an issue, please he
the original discussion.
-->
/!\ If your request is about evolving the compose file format, please report on the [Compose Specification](https://github.com/compose-spec/compose-spec)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]


@@ -7,16 +7,6 @@ assignees: ''
---
<!--
**DEPRECATION NOTICE:**
Compose V1 is end-of-life, and as such only issues relating to security vulnerabilities will be considered.
Please do not submit issues regarding bugs or improvements.
For a more up-to-date compose, check v2: https://github.com/docker/compose/tree/v2/
-->
Please post on our forums: https://forums.docker.com for questions about using `docker-compose`.
Posts that are not a bug report or a feature/enhancement request will not be addressed on this issue tracker.


@@ -1,28 +0,0 @@
version: 2
updates:
- package-ecosystem: pip
directory: "/"
schedule:
interval: weekly
time: "14:00"
timezone: America/Los_Angeles
open-pull-requests-limit: 10
ignore:
- dependency-name: python-dotenv
versions:
- 0.15.0
- 0.16.0
- dependency-name: urllib3
versions:
- 1.26.2
- 1.26.3
- dependency-name: coverage
versions:
- 5.3.1
- "5.4"
- dependency-name: packaging
versions:
- "20.8"
- dependency-name: cached-property
versions:
- 1.5.2


@@ -1,58 +0,0 @@
name: Publish Artifacts
on:
issue_comment:
types: [created]
jobs:
publish-artifacts:
if: github.event.issue.pull_request != '' && contains(github.event.comment.body, '/generate-artifacts')
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: 1.16
id: go
- name: Checkout code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: go-${{ hashFiles('**/go.sum') }}
- name: Build cross platform compose-plugin binaries
run: make -f builder.Makefile cross
- name: Upload macos-amd64 binary
uses: actions/upload-artifact@v2
with:
name: docker-compose-darwin-amd64
path: ${{ github.workspace }}/bin/docker-compose-darwin-amd64
- name: Upload macos-arm64 binary
uses: actions/upload-artifact@v2
with:
name: docker-compose-darwin-arm64
path: ${{ github.workspace }}/bin/docker-compose-darwin-arm64
- name: Upload linux-amd64 binary
uses: actions/upload-artifact@v2
with:
name: docker-compose-linux-amd64
path: ${{ github.workspace }}/bin/docker-compose-linux-amd64
- name: Upload windows-amd64 binary
uses: actions/upload-artifact@v2
with:
name: docker-compose-windows-amd64.exe
path: ${{ github.workspace }}/bin/docker-compose-windows-amd64.exe
- name: Update comment
uses: peter-evans/create-or-update-comment@v1
with:
comment-id: ${{ github.event.comment.id }}
body: |
This PR can be tested using [binaries](https://github.com/docker/compose-cli/actions/runs/${{ github.run_id }}).
reactions: eyes


@@ -1,104 +0,0 @@
name: Continuous integration
on:
push:
branches:
- v2
pull_request:
branches:
- v2
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
env:
GO111MODULE: "on"
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: 1.16
id: go
- name: Checkout code into the Go module directory
uses: actions/checkout@v2
- name: Validate go-mod is up-to-date and license headers
run: make validate
- name: Run golangci-lint
env:
BUILD_TAGS: e2e
run: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sudo sh -s -- -b /usr/bin/ v1.39.0
make -f builder.Makefile lint
# only on main branch, costs too much for the gain on every PR
validate-cross-build:
name: Validate cross build
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
env:
GO111MODULE: "on"
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: 1.16
id: go
- name: Checkout code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: go-${{ hashFiles('**/go.sum') }}
# Ensure we don't discover cross platform build issues at release time.
# Time used to build linux here is gained back in the build for local E2E step
- name: Build packages
run: make -f builder.Makefile cross
build:
name: Build
runs-on: ubuntu-latest
env:
GO111MODULE: "on"
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: 1.16
id: go
- name: Set up gosum
run: |
go get -u gotest.tools/gotestsum
- name: Setup docker CLI
run: |
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.3.tgz | tar xz
sudo cp ./docker/docker /usr/bin/ && rm -rf docker && docker version
- name: Checkout code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: go-${{ hashFiles('**/go.sum') }}
- name: Test
env:
BUILD_TAGS: kube
run: make -f builder.Makefile test
- name: Build for local E2E
env:
BUILD_TAGS: e2e
run: make -f builder.Makefile compose-plugin
- name: E2E Test
run: make e2e-compose


@@ -1,11 +0,0 @@
name: PR cleanup
on:
pull_request:
types: [closed]
jobs:
delete_pr_artifacts:
runs-on: ubuntu-latest
steps:
- uses: stefanluptak/delete-old-pr-artifacts@v1
with:
workflow_filename: ci.yaml


@@ -1,19 +0,0 @@
name: Automatic Rebase
on:
issue_comment:
types: [created]
jobs:
rebase:
name: Rebase
if: github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v2
with:
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 0 # otherwise, you will fail to push refs to dest repo
- name: Automatic Rebase
uses: cirrus-actions/rebase@1.4
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,49 +0,0 @@
name: Releaser
on:
workflow_dispatch:
inputs:
tag:
description: 'Release Tag'
required: true
dry-run:
description: 'Dry run'
required: false
default: 'true'
jobs:
upload-release:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: 1.16
id: go
- name: Setup docker CLI
run: |
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.3.tgz | tar xz
sudo cp ./docker/docker /usr/bin/ && rm -rf docker && docker version
- name: Checkout code into the Go module directory
uses: actions/checkout@v2
- uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build
run: make -f builder.Makefile cross-compose-plugin
- name: License
run: cp packaging/* bin/
- uses: ncipollo/release-action@v1
with:
artifacts: "bin/*"
prerelease: true
token: ${{ secrets.GITHUB_TOKEN }}


@@ -1,28 +1,26 @@
exclude: .github/
repos:
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: 'v0.9.1'
hooks:
- id: check-added-large-files
- id: check-docstring-first
- id: check-merge-conflict
- id: check-yaml
- id: check-json
- id: debug-statements
- id: end-of-file-fixer
- id: flake8
- id: name-tests-test
exclude: 'tests/(integration/testcases\.py|helpers\.py)'
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: git://github.com/asottile/reorder_python_imports
sha: v1.3.4
hooks:
- id: reorder-python-imports
language_version: 'python3.7'
args:
- --py3-plus
- repo: https://github.com/asottile/pyupgrade
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: 'v0.9.1'
hooks:
- id: check-added-large-files
- id: check-docstring-first
- id: check-merge-conflict
- id: check-yaml
- id: check-json
- id: debug-statements
- id: end-of-file-fixer
- id: flake8
- id: name-tests-test
exclude: 'tests/(integration/testcases\.py|helpers\.py)'
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: git://github.com/asottile/reorder_python_imports
sha: v1.3.4
hooks:
- id: reorder-python-imports
language_version: 'python3.7'
args:
- --py3-plus
- repo: https://github.com/asottile/pyupgrade
rev: v2.1.0
hooks:
- id: pyupgrade


@@ -1,242 +1,12 @@
Change log
==========
1.29.2 (2021-05-10)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/59?closed=1)
### Miscellaneous
- Remove prompt to use `docker compose` in the `up` command
- Bump `py` to `1.10.0` in `requirements-indirect.txt`
1.29.1 (2021-04-13)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/58?closed=1)
### Bugs
- Fix for invalid handler warning on Windows builds
- Fix config hash to trigger container recreation on IPC mode updates
- Fix conversion map for `placement.max_replicas_per_node`
- Remove extra scan suggestion on build
1.29.0 (2021-04-06)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/56?closed=1)
### Features
- Add profile filter to `docker-compose config`
- Add a `depends_on` condition to wait for successful service completion
### Miscellaneous
- Add image scan message on build
- Update warning message for `--no-ansi` to mention `--ansi never` as alternative
- Bump docker-py to 5.0.0
- Bump PyYAML to 5.4.1
- Bump python-dotenv to 0.17.0
1.28.6 (2021-03-23)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/57?closed=1)
### Bugs
- Make `--env-file` relative to the current working directory and error out for invalid paths. Environment file paths set with `--env-file` are relative to the current working directory while the default `.env` file is located in the project directory which by default is the base directory of the Compose file.
- Fix missing service property `storage_opt` by updating the compose schema
- Fix build `extra_hosts` list format
- Remove extra error message on `exec`
### Miscellaneous
- Add `compose.yml` and `compose.yaml` to default filename list
1.28.5 (2021-02-25)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/55?closed=1)
### Bugs
- Fix OpenSSL version mismatch error when shelling out to the ssh client (via bump to docker-py 4.4.4 which contains the fix)
- Add missing build flags to the native builder: `platform`, `isolation` and `extra_hosts`
- Remove info message on native build
- Avoid fetching logs when service logging driver is set to 'none'
1.28.4 (2021-02-18)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/54?closed=1)
### Bugs
- Fix SSH port parsing by bumping docker-py to 4.4.3
### Miscellaneous
- Bump Python to 3.7.10
1.28.3 (2021-02-17)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/53?closed=1)
### Bugs
- Fix SSH hostname parsing when it contains leading s/h, and remove the quiet option that was hiding the error (via docker-py bump to 4.4.2)
- Fix key error for '--no-log-prefix' option
- Fix incorrect CLI environment variable name for service profiles: `COMPOSE_PROFILES` instead of `COMPOSE_PROFILE`
- Fix fish completion
### Miscellaneous
- Bump cryptography to 3.3.2
- Remove log driver filter
1.28.2 (2021-01-26)
-------------------
### Miscellaneous
- CI setup update
1.28.1 (2021-01-25)
-------------------
### Bugs
- Revert to Python 3.7 bump for Linux static builds
- Add bash completion for `docker-compose logs|up --no-log-prefix`
1.28.0 (2021-01-20)
-------------------
### Features
- Support for Nvidia GPUs via device requests
- Support for service profiles
- Change the SSH connection approach to the Docker CLI's via shellout to the local SSH client (old behaviour enabled by setting `COMPOSE_PARAMIKO_SSH` environment variable)
- Add flag to disable log prefix
- Add flag for ansi output control
### Bugs
- Make `parallel_pull=True` by default
- Bring back warning for configs in non-swarm mode
- Take `--file` in account when defining `project_dir`
- On `compose up`, attach only to services we read logs from
### Miscellaneous
- Make COMPOSE_DOCKER_CLI_BUILD=1 the default
- Add usage metrics
- Sync schema with COMPOSE specification
- Improve failure report for missing mandatory environment variables
- Bump attrs to 20.3.0
- Bump more_itertools to 8.6.0
- Bump cryptography to 3.2.1
- Bump cffi to 1.14.4
- Bump virtualenv to 20.2.2
- Bump bcrypt to 3.2.0
- Bump gitpython to 3.1.11
- Bump docker-py to 4.4.1
- Bump Python to 3.9
- Linux: bump Debian base image from stretch to buster (required for Python 3.9)
- macOS: OpenSSL 1.1.1g to 1.1.1h, Python 3.7.7 to 3.9.0
- Bump pyinstaller 4.1
- Loosen restriction on base images to latest minor
- Updates of READMEs
1.27.4 (2020-09-24)
-------------------
### Bugs
- Remove path checks for bind mounts
- Fix port rendering to output long form syntax for non-v1
- Add protocol to the docker socket address
1.27.3 (2020-09-16)
-------------------
### Bugs
- Merge `max_replicas_per_node` on `docker-compose config`
- Fix `depends_on` serialization on `docker-compose config`
- Fix scaling when some containers are not running on `docker-compose up`
- Enable relative paths for `driver_opts.device` for `local` driver
- Allow strings for `cpus` fields
1.27.2 (2020-09-10)
-------------------
### Bugs
- Fix bug on `docker-compose run` container attach
1.27.1 (2020-09-10)
-------------------
### Bugs
- Fix `docker-compose run` when `service.scale` is specified
- Fix `compose run` when `service.scale` is specified
- Allow `driver` property for external networks as temporary workaround for swarm network propagation issue
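Two of the flags mentioned in the changelog above, in console form (a sketch; the flags are as documented, everything else is illustrative):

```console
$ docker-compose --ansi never up -d    # documented replacement for --no-ansi
$ docker-compose --no-ansi up -d       # deprecated; 1.29.x warns and suggests --ansi never
```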


@@ -1,15 +1,11 @@
ARG DOCKER_VERSION=19.03
ARG PYTHON_VERSION=3.7.10
ARG BUILD_ALPINE_VERSION=3.12
ARG BUILD_CENTOS_VERSION=7
ARG DOCKER_VERSION=19.03.8
ARG PYTHON_VERSION=3.7.7
ARG BUILD_ALPINE_VERSION=3.11
ARG BUILD_DEBIAN_VERSION=slim-stretch
ARG RUNTIME_ALPINE_VERSION=3.11.5
ARG RUNTIME_DEBIAN_VERSION=stretch-20200414-slim
ARG RUNTIME_ALPINE_VERSION=3.12
ARG RUNTIME_CENTOS_VERSION=7
ARG RUNTIME_DEBIAN_VERSION=stretch-slim
ARG DISTRO=alpine
ARG BUILD_PLATFORM=alpine
FROM docker:${DOCKER_VERSION} AS docker-cli
@@ -44,56 +40,32 @@ RUN apt-get update && apt-get install --no-install-recommends -y \
openssl \
zlib1g-dev
FROM centos:${BUILD_CENTOS_VERSION} AS build-centos
RUN yum install -y \
gcc \
git \
libffi-devel \
make \
openssl \
openssl-devel
WORKDIR /tmp/python3/
ARG PYTHON_VERSION
RUN curl -L https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz | tar xzf - \
&& cd Python-${PYTHON_VERSION} \
&& ./configure --enable-optimizations --enable-shared --prefix=/usr LDFLAGS="-Wl,-rpath /usr/lib" \
&& make altinstall
RUN alternatives --install /usr/bin/python python /usr/bin/python2.7 50
RUN alternatives --install /usr/bin/python python /usr/bin/python$(echo "${PYTHON_VERSION%.*}") 60
RUN curl https://bootstrap.pypa.io/get-pip.py | python -
FROM build-${DISTRO} AS build
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
WORKDIR /code/
FROM build-${BUILD_PLATFORM} AS build
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
RUN pip install \
virtualenv==20.4.0 \
tox==3.21.2
COPY requirements-dev.txt .
WORKDIR /code/
# FIXME(chris-crone): virtualenv 16.3.0 breaks build, force 16.2.0 until fixed
RUN pip install virtualenv==20.0.30
RUN pip install tox==3.19.0
COPY requirements-indirect.txt .
COPY requirements.txt .
RUN pip install -r requirements.txt -r requirements-indirect.txt -r requirements-dev.txt
COPY requirements-dev.txt .
COPY .pre-commit-config.yaml .
COPY tox.ini .
COPY setup.py .
COPY README.md .
COPY compose compose/
RUN tox -e py37 --notest
RUN tox --notest
COPY . .
ARG GIT_COMMIT=unknown
ENV DOCKER_COMPOSE_GITSHA=$GIT_COMMIT
RUN script/build/linux-entrypoint
FROM scratch AS bin
ARG TARGETARCH
ARG TARGETOS
COPY --from=build /usr/local/bin/docker-compose /docker-compose-${TARGETOS}-${TARGETARCH}
FROM alpine:${RUNTIME_ALPINE_VERSION} AS runtime-alpine
FROM debian:${RUNTIME_DEBIAN_VERSION} AS runtime-debian
FROM centos:${RUNTIME_CENTOS_VERSION} AS runtime-centos
FROM runtime-${DISTRO} AS runtime
FROM runtime-${BUILD_PLATFORM} AS runtime
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
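A usage sketch for the multi-stage Dockerfile above, reflecting the build-arg rename from `BUILD_PLATFORM` to `DISTRO` (the tag name is illustrative; the arguments mirror the Jenkinsfile build step shown later):

```console
$ docker build -t docker-compose:debian-dev \
    --build-arg DISTRO=debian \
    --build-arg GIT_COMMIT=$(git rev-parse --short HEAD) \
    --target build .
```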


@@ -1,136 +0,0 @@
# Install Docker Compose
This page contains information on how to install Docker Compose. You can run Compose on macOS, Windows, and 64-bit Linux.
> ⚠️ The installation instructions on this page will help you to install Compose v1 which is a deprecated version. We recommend that you use the [latest version of Docker Compose](https://docs.docker.com/compose/install/).
## Prerequisites
Docker Compose relies on Docker Engine for any meaningful work, so make sure you
have Docker Engine installed, either locally or remotely, depending on your setup.
- Install
[Docker Engine](https://docs.docker.com/engine/install/#server)
for your OS and then come back here for
instructions on installing the Python version of Compose.
- To run Compose as a non-root user, see [Manage Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/).
## Install Compose
Follow the instructions below to install Compose using the `pip`
Python package manager or to install Compose as a container.
> Install a different version
>
> The instructions below outline installation of the current stable release
> (**v1.29.2**) of Compose. To install a different version of
> Compose, replace the given release number with the one that you want. For instructions to install Compose 2.x.x on Linux, see [Install Compose 2.x.x on Linux](https://docs.docker.com/compose/install/#install-compose-on-linux-systems).
>
> Compose releases are also listed and available for direct download on the
> [Compose repository release page on GitHub](https://github.com/docker/compose/releases).
> To install a **pre-release** of Compose, refer to the [install pre-release builds](#install-pre-release-builds)
> section.
- [Install using pip](#install-using-pip)
- [Install as a container](#install-as-a-container)
#### Install using pip
> For `alpine`, the following dependency packages are needed:
> `py-pip`, `python3-dev`, `libffi-dev`, `openssl-dev`, `gcc`, `libc-dev`, `rust`, `cargo`, and `make`.
{: .important}
You can install Compose from
[pypi](https://pypi.python.org/pypi/docker-compose) using `pip`. If you install
using `pip`, we recommend that you use a
[virtualenv](https://virtualenv.pypa.io/en/latest/) because many operating
systems have python system packages that conflict with docker-compose
dependencies. See the [virtualenv
tutorial](https://docs.python-guide.org/dev/virtualenvs/) to get
started.
```console
$ pip3 install docker-compose
```
If you are not using virtualenv,
```console
$ sudo pip install docker-compose
```
> pip version 6.0 or greater is required.
#### Install as a container
You can also run Compose inside a container, from a small bash script wrapper. To
install Compose as a container, run this command:
```console
$ sudo curl -L --fail https://github.com/docker/compose/releases/download/1.29.2/run.sh -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
```
### Install pre-release builds
If you're interested in trying out a pre-release build, you can download release
candidates from the [Compose repository release page on GitHub](https://github.com/docker/compose/releases).
Follow the instructions from the link, which involves running the `curl` command
in your terminal to download the binaries.
Pre-releases built from the "master" branch are also available for download at
[https://dl.bintray.com/docker-compose/master/](https://dl.bintray.com/docker-compose/master/).
> Pre-release builds allow you to try out new features before they are released,
> but may be less stable.
----
## Upgrading
If you're upgrading from Compose 1.2 or earlier, remove or
migrate your existing containers after upgrading Compose. This is because, as of
version 1.3, Compose uses Docker labels to keep track of containers, and your
containers need to be recreated to add the labels.
If Compose detects containers that were created without labels, it refuses
to run, so that you don't end up with two sets of them. If you want to keep using
your existing containers (for example, because they have data volumes you want
to preserve), you can use Compose 1.5.x to migrate them with the following
command:
```console
$ docker-compose migrate-to-labels
```
Alternatively, if you're not worried about keeping them, you can remove them.
Compose just creates new ones.
```console
$ docker container rm -f -v myapp_web_1 myapp_db_1 ...
```
## Uninstall
To uninstall Docker Compose if you installed using `curl`:
```console
$ sudo rm /usr/local/bin/docker-compose
```
To uninstall Docker Compose if you installed using `pip`:
```console
$ pip uninstall docker-compose
```
> Got a "Permission denied" error?
>
> If you get a "Permission denied" error using either of the above
> methods, you probably do not have the proper permissions to remove
> `docker-compose`. To force the removal, prepend `sudo` to either of the above
> commands and run again.

Jenkinsfile

@@ -1,6 +1,6 @@
#!groovy
def dockerVersions = ['19.03.13']
def dockerVersions = ['19.03.8']
def baseImages = ['alpine', 'debian']
def pythonVersions = ['py37']
@@ -13,9 +13,6 @@ pipeline {
timeout(time: 2, unit: 'HOURS')
timestamps()
}
environment {
DOCKER_BUILDKIT="1"
}
stages {
stage('Build test images') {
@@ -23,7 +20,7 @@ pipeline {
parallel {
stage('alpine') {
agent {
label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
label 'ubuntu && amd64 && !zfs'
}
steps {
buildImage('alpine')
@@ -31,7 +28,7 @@ pipeline {
}
stage('debian') {
agent {
label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
label 'ubuntu && amd64 && !zfs'
}
steps {
buildImage('debian')
@@ -62,7 +59,7 @@ pipeline {
def buildImage(baseImage) {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
image = docker.image(imageName)
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -72,7 +69,7 @@ def buildImage(baseImage) {
ansiColor('xterm') {
sh """docker build -t ${imageName} \\
--target build \\
--build-arg DISTRO="${baseImage}" \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg GIT_COMMIT="${scmvar.GIT_COMMIT}" \\
.\\
"""
@@ -87,9 +84,9 @@ def buildImage(baseImage) {
def runTests(dockerVersion, pythonVersion, baseImage) {
return {
stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
node("ubuntu-2004 && amd64 && !zfs && cgroup1") {
node("ubuntu && amd64 && !zfs") {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -99,8 +96,6 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
--volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-e "DOCKER_TLS_CERTDIR=" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersion}" \\


@@ -1,57 +0,0 @@
TAG = "docker-compose:alpine-$(shell git rev-parse --short HEAD)"
GIT_VOLUME = "--volume=$(shell pwd)/.git:/code/.git"
DOCKERFILE ?="Dockerfile"
DOCKER_BUILD_TARGET ?="build"
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
BUILD_SCRIPT = linux
endif
ifeq ($(UNAME_S),Darwin)
BUILD_SCRIPT = osx
endif
COMPOSE_SPEC_SCHEMA_PATH = "compose/config/compose_spec.json"
COMPOSE_SPEC_RAW_URL = "https://raw.githubusercontent.com/compose-spec/compose-spec/master/schema/compose-spec.json"
all: cli
cli: download-compose-spec ## Compile the cli
./script/build/$(BUILD_SCRIPT)
download-compose-spec: ## Download the compose-spec schema from its repo
curl -so $(COMPOSE_SPEC_SCHEMA_PATH) $(COMPOSE_SPEC_RAW_URL)
cache-clear: ## Clear the builder cache
@docker builder prune --force --filter type=exec.cachemount --filter=unused-for=24h
base-image: ## Builds base image
docker build -f $(DOCKERFILE) -t $(TAG) --target $(DOCKER_BUILD_TARGET) .
lint: base-image ## Run linter
docker run --rm \
--tty \
$(GIT_VOLUME) \
$(TAG) \
tox -e pre-commit
test-unit: base-image ## Run tests
docker run --rm \
--tty \
$(GIT_VOLUME) \
$(TAG) \
pytest -v tests/unit/
test: ## Run all tests
./script/test/default
pre-commit: lint test-unit cli
help: ## Show help
@echo Please specify a build target. The choices are:
@grep -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
FORCE:
.PHONY: all cli download-compose-spec cache-clear base-image lint test-unit test pre-commit help

README.md

@@ -1,104 +1,62 @@
Docker Compose
==============
[![Build Status](https://ci-next.docker.com/public/buildStatus/icon?job=compose/master)](https://ci-next.docker.com/public/job/compose/job/master/)
![Docker Compose](logo.png?raw=true "Docker Compose Logo")
# :warning: *Compose V1 is DEPRECATED* :warning:
Since [Compose V2 is now GA](https://www.docker.com/blog/announcing-compose-v2-general-availability/), Compose V1 is officially **End of Life**. This means that:
- Active development and new features will only be added to the V2 codebase
- Only security-related issues will be considered for V1
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
Check out the [V2 branch here](https://github.com/docker/compose/tree/v2/)!!
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/index.md#common-use-cases).
---------------------------------------------
Using Compose is basically a three-step process.
**Compose V2 is Generally Available! :star_struck:**
---------------------------------------------
Check it out [here](https://github.com/docker/compose/tree/v2/)!
Read more on the [GA announcement here](https://www.docker.com/blog/announcing-compose-v2-general-availability/)
---------------------------------------------
V1 vs V2 transition :hourglass_flowing_sand:
--------------------------------------------
"Generally Available" will mean:
- New features and bug fixes will only be considered in the V2 codebase
- Users on Mac/Windows will be defaulted into Docker Compose V2, but can still opt out through the UI and the CLI. This means when running `docker-compose` you will actually be running `docker compose`
- Our current goal is for users on Linux to receive Compose v2 with the latest version of the docker CLI, but this is pending some technical discussion. Users will be able to use [compose switch](https://github.com/docker/compose-switch) to enable redirection of `docker-compose` to `docker compose`
- Docker Compose V1 will continue to be maintained regarding security issues
- [v2 branch](https://github.com/docker/compose/tree/v2) will become the default one at that time
:lock_with_ink_pen: Depending on the feedback we receive from the community on GA and the adoption on Linux, we will come up with a plan to deprecate v1, but as of right now there is no concrete timeline, as we want the transition to be as smooth as possible for all users. It is important to note that we have no plans of removing any aliasing of `docker-compose` to `docker compose`. We want to make it as easy as possible to switch and not break anyone's scripts. We will follow up with a blog post in the next few months with more information on an exact timeline of V1 being marked as deprecated and end of support for security issues. We'd love to hear your feedback! You can provide it [here](https://github.com/docker/roadmap/issues/257).
About
-----
Docker Compose is a tool for running multi-container applications on Docker
defined using the [Compose file format](https://compose-spec.io).
A Compose file is used to define how one or more containers that make up
your application are configured.
Once you have a Compose file, you can create and start your application with a
single command: `docker-compose up`.
Compose files can be used to deploy applications locally, or to the cloud on
[Amazon ECS](https://aws.amazon.com/ecs) or
[Microsoft ACI](https://azure.microsoft.com/services/container-instances/) using
the Docker CLI. You can read more about how to do this:
- [Compose for Amazon ECS](https://docs.docker.com/engine/context/ecs-integration/)
- [Compose for Microsoft ACI](https://docs.docker.com/engine/context/aci-integration/)
Where to get Docker Compose
----------------------------
All the instructions to install the Python version of Docker Compose, aka `v1`,
are described in the [installation guide](./INSTALL.md).
> ⚠️ This version is a deprecated version of Compose. We recommend that you use the [latest version of Docker Compose](https://docs.docker.com/compose/install/).
Quick Start
-----------
Using Docker Compose is basically a three-step process:
1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire
app.
they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
A Compose file looks like this:
A `docker-compose.yml` looks like this:
```yaml
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
redis:
image: redis
```
version: '2'
You can find examples of Compose applications in our
[Awesome Compose repository](https://github.com/docker/awesome-compose).
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
redis:
image: redis
For more information about the Compose format, see the
[Compose file reference](https://docs.docker.com/compose/compose-file/).
For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md).
Compose has commands for managing the whole lifecycle of your application:
* Start, stop and rebuild services
* View the status of running services
* Stream the log output of running services
* Run a one-off command on a service
Installation and documentation
------------------------------
- Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
- Code repository for Compose is on [GitHub](https://github.com/docker/compose).
- If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new/choose). Thank you!
Contributing
------------
Want to help develop Docker Compose? Check out our
[contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
[![Build Status](https://ci-next.docker.com/public/buildStatus/icon?job=compose/master)](https://ci-next.docker.com/public/job/compose/job/master/)
If you find an issue, please report it on the
[issue tracker](https://github.com/docker/compose/issues/new/choose).
Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
Releasing
---------


@@ -1,6 +1,6 @@
#!groovy
def dockerVersions = ['19.03.13', '18.09.9']
def dockerVersions = ['19.03.8', '18.09.9']
def baseImages = ['alpine', 'debian']
def pythonVersions = ['py37']
@@ -13,9 +13,6 @@ pipeline {
timeout(time: 2, unit: 'HOURS')
timestamps()
}
environment {
DOCKER_BUILDKIT="1"
}
stages {
stage('Build test images') {
@@ -23,7 +20,7 @@ pipeline {
parallel {
stage('alpine') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
buildImage('alpine')
@@ -31,7 +28,7 @@ pipeline {
}
stage('debian') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
buildImage('debian')
@@ -41,7 +38,7 @@ pipeline {
}
stage('Test') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
// TODO use declarative 1.5.0 `matrix` once available on CI
@@ -61,7 +58,7 @@ pipeline {
}
stage('Generate Changelog') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
checkout scm
@@ -84,7 +81,7 @@ pipeline {
steps {
checkout scm
sh './script/setup/osx'
sh 'tox -e py39 -- tests/unit'
sh 'tox -e py37 -- tests/unit'
sh './script/build/osx'
dir ('dist') {
checksum('docker-compose-Darwin-x86_64')
@@ -98,7 +95,7 @@ pipeline {
}
stage('linux binary') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
checkout scm
@@ -117,11 +114,11 @@ pipeline {
label 'windows-python'
}
environment {
PATH = "C:\\Python39;C:\\Python39\\Scripts;$PATH"
PATH = "$PATH;C:\\Python37;C:\\Python37\\Scripts"
}
steps {
checkout scm
bat 'tox.exe -e py39 -- tests/unit'
bat 'tox.exe -e py37 -- tests/unit'
powershell '.\\script\\build\\windows.ps1'
dir ('dist') {
checksum('docker-compose-Windows-x86_64.exe')
@@ -134,7 +131,7 @@ pipeline {
}
stage('alpine image') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
buildRuntimeImage('alpine')
@@ -142,7 +139,7 @@ pipeline {
}
stage('debian image') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
buildRuntimeImage('debian')
@@ -157,7 +154,7 @@ pipeline {
parallel {
stage('Pushing images') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
steps {
pushRuntimeImage('alpine')
@@ -166,7 +163,7 @@ pipeline {
}
stage('Creating Github Release') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
environment {
GITHUB_TOKEN = credentials('github-release-token')
@@ -198,7 +195,7 @@ pipeline {
}
stage('Publishing Python packages') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004'
}
environment {
PYPIRC = credentials('pypirc-docker-dsg-cibot')
@@ -222,7 +219,7 @@ pipeline {
def buildImage(baseImage) {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
image = docker.image(imageName)
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -232,7 +229,7 @@ def buildImage(baseImage) {
ansiColor('xterm') {
sh """docker build -t ${imageName} \\
--target build \\
--build-arg DISTRO="${baseImage}" \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg GIT_COMMIT="${scmvar.GIT_COMMIT}" \\
.\\
"""
@@ -247,9 +244,9 @@ def buildImage(baseImage) {
def runTests(dockerVersion, pythonVersion, baseImage) {
return {
stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
node("linux && docker && ubuntu-2004 && amd64 && cgroup1") {
node("linux && docker && ubuntu-2004") {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -259,8 +256,6 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
--volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-e "DOCKER_TLS_CERTDIR=" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersion}" \\
@@ -281,7 +276,7 @@ def buildRuntimeImage(baseImage) {
def imageName = "docker/compose:${baseImage}-${env.BRANCH_NAME}"
ansiColor('xterm') {
sh """docker build -t ${imageName} \\
--build-arg DISTRO="${baseImage}" \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg GIT_COMMIT="${scmvar.GIT_COMMIT.take(7)}" \\
.
"""


@@ -1 +1 @@
__version__ = '1.30.0dev'
__version__ = '1.27.1'


@@ -1,6 +1,3 @@
import enum
import os
from ..const import IS_WINDOWS_PLATFORM
NAMES = [
@@ -15,21 +12,6 @@ NAMES = [
]
@enum.unique
class AnsiMode(enum.Enum):
"""Enumeration for when to output ANSI colors."""
NEVER = "never"
ALWAYS = "always"
AUTO = "auto"
def use_ansi_codes(self, stream):
if self is AnsiMode.ALWAYS:
return True
if self is AnsiMode.NEVER or os.environ.get('CLICOLOR') == '0':
return False
return stream.isatty()
def get_pairs():
for i, name in enumerate(NAMES):
yield (name, str(30 + i))
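A short usage sketch for the `AnsiMode` enum shown above (assuming it is importable as `compose.cli.colors.AnsiMode`, per this diff):

```python
import sys

from compose.cli.colors import AnsiMode  # import path assumed from the diff

mode = AnsiMode("auto")                  # accepted values: never, always, auto
# "auto" emits colors only when the stream is a TTY and CLICOLOR is not '0'
print(mode.use_ansi_codes(sys.stdout))
```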


@@ -35,7 +35,7 @@ SILENT_COMMANDS = {
def project_from_options(project_dir, options, additional_options=None):
additional_options = additional_options or {}
override_dir = get_project_dir(options)
override_dir = options.get('--project-directory')
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or project_dir, environment_file)
environment.silent = options.get('COMMAND', None) in SILENT_COMMANDS
@@ -59,15 +59,14 @@ def project_from_options(project_dir, options, additional_options=None):
return get_project(
project_dir,
get_config_path_from_options(options, environment),
get_config_path_from_options(project_dir, options, environment),
project_name=options.get('--project-name'),
verbose=options.get('--verbose'),
context=context,
environment=environment,
override_dir=override_dir,
interpolate=(not additional_options.get('--no-interpolate')),
environment_file=environment_file,
enabled_profiles=get_profiles_from_options(options, environment)
environment_file=environment_file
)
@@ -87,29 +86,21 @@ def set_parallel_limit(environment):
parallel.GlobalLimit.set_global_limit(parallel_limit)
def get_project_dir(options):
override_dir = None
files = get_config_path_from_options(options, os.environ)
if files:
if files[0] == '-':
return '.'
override_dir = os.path.dirname(files[0])
return options.get('--project-directory') or override_dir
def get_config_from_options(base_dir, options, additional_options=None):
additional_options = additional_options or {}
override_dir = get_project_dir(options)
override_dir = options.get('--project-directory')
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or base_dir, environment_file)
config_path = get_config_path_from_options(options, environment)
config_path = get_config_path_from_options(
base_dir, options, environment
)
return config.load(
config.find(base_dir, config_path, environment, override_dir),
not additional_options.get('--no-interpolate')
)
def get_config_path_from_options(options, environment):
def get_config_path_from_options(base_dir, options, environment):
def unicode_paths(paths):
return [p.decode('utf-8') if isinstance(p, bytes) else p for p in paths]
@@ -124,21 +115,9 @@ def get_config_path_from_options(options, environment):
return None
def get_profiles_from_options(options, environment):
profile_option = options.get('--profile')
if profile_option:
return profile_option
profiles = environment.get('COMPOSE_PROFILES')
if profiles:
return profiles.split(',')
return []
def get_project(project_dir, config_path=None, project_name=None, verbose=False,
context=None, environment=None, override_dir=None,
interpolate=True, environment_file=None, enabled_profiles=None):
interpolate=True, environment_file=None):
if not environment:
environment = Environment.from_env_file(project_dir)
config_details = config.find(project_dir, config_path, environment, override_dir)
@@ -160,7 +139,6 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
client,
environment.get('DOCKER_DEFAULT_PLATFORM'),
execution_context_labels(config_details, environment_file),
enabled_profiles,
)
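Per `get_profiles_from_options` above, profiles come either from the `--profile` flag or from the `COMPOSE_PROFILES` environment variable; a hedged example (the profile names are illustrative):

```console
$ docker-compose --profile debug up -d
$ COMPOSE_PROFILES=debug,metrics docker-compose up -d   # comma-separated list
```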


@@ -166,8 +166,8 @@ def docker_client(environment, version=None, context=None, tls_version=None):
kwargs['credstore_env'] = {
'LD_LIBRARY_PATH': environment.get('LD_LIBRARY_PATH_ORIG'),
}
use_paramiko_ssh = int(environment.get('COMPOSE_PARAMIKO_SSH', 0))
client = APIClient(use_ssh_client=not use_paramiko_ssh, **kwargs)
client = APIClient(**kwargs)
client._original_base_url = kwargs.get('base_url')
return client
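The change above makes shelling out to the system `ssh` client the default, with paramiko available as an opt-out (the host below is illustrative):

```console
$ docker-compose -H ssh://user@host ps                         # local ssh client
$ COMPOSE_PARAMIKO_SSH=1 docker-compose -H ssh://user@host ps  # legacy paramiko path
```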


@@ -17,16 +17,10 @@ class DocoptDispatcher:
self.command_class = command_class
self.options = options
@classmethod
def get_command_and_options(cls, doc_entity, argv, options):
command_help = getdoc(doc_entity)
opt = docopt_full_help(command_help, argv, **options)
command = opt['COMMAND']
return command_help, opt, command
def parse(self, argv):
command_help, options, command = DocoptDispatcher.get_command_and_options(
self.command_class, argv, self.options)
command_help = getdoc(self.command_class)
options = docopt_full_help(command_help, argv, **self.options)
command = options['COMMAND']
if command is None:
raise SystemExit(command_help)


@@ -16,22 +16,18 @@ from compose.utils import split_buffer
class LogPresenter:
def __init__(self, prefix_width, color_func, keep_prefix=True):
def __init__(self, prefix_width, color_func):
self.prefix_width = prefix_width
self.color_func = color_func
self.keep_prefix = keep_prefix
def present(self, container, line):
to_log = '{line}'.format(line=line)
if self.keep_prefix:
prefix = container.name_without_project.ljust(self.prefix_width)
to_log = '{prefix} '.format(prefix=self.color_func(prefix + ' |')) + to_log
return to_log
prefix = container.name_without_project.ljust(self.prefix_width)
return '{prefix} {line}'.format(
prefix=self.color_func(prefix + ' |'),
line=line)
def build_log_presenters(service_names, monochrome, keep_prefix=True):
def build_log_presenters(service_names, monochrome):
"""Return an iterable of functions.
Each function can be used to format the logs output of a container.
@@ -42,7 +38,7 @@ def build_log_presenters(service_names, monochrome, keep_prefix=True):
return text
for color_func in cycle([no_color] if monochrome else colors.rainbow()):
yield LogPresenter(prefix_width, color_func, keep_prefix)
yield LogPresenter(prefix_width, color_func)
def max_name_width(service_names, max_index_width=3):
@@ -158,8 +154,10 @@ class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')):
def tail_container_logs(container, presenter, queue, log_args):
generator = get_log_generator(container)
try:
for item in build_log_generator(container, log_args):
for item in generator(container, log_args):
queue.put(QueueItem.new(presenter.present(container, item)))
except Exception as e:
queue.put(QueueItem.exception(e))
@@ -169,6 +167,20 @@ def tail_container_logs(container, presenter, queue, log_args):
queue.put(QueueItem.stop(container.name))
def get_log_generator(container):
if container.has_api_logs:
return build_log_generator
return build_no_log_generator
def build_no_log_generator(container, log_args):
"""Return a generator that prints a warning about logs and waits for
container to exit.
"""
yield "WARNING: no logs are available with the '{}' log driver\n".format(
container.log_driver)
def build_log_generator(container, log_args):
# if the container doesn't have a log_stream we need to attach to container
# before log printer starts running
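The `keep_prefix` parameter above is what backs the `--no-log-prefix` flag; a usage sketch (the service name is illustrative):

```console
$ docker-compose up --no-log-prefix         # suppress the "name |" prefixes
$ docker-compose logs --no-log-prefix web
```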


@@ -2,6 +2,7 @@ import contextlib
import functools
import json
import logging
import os
import pipes
import re
import subprocess
@@ -23,11 +24,8 @@ from ..config import resolve_build_args
from ..config.environment import Environment
from ..config.serialize import serialize_config
from ..config.types import VolumeSpec
from ..const import IS_LINUX_PLATFORM
from ..const import IS_WINDOWS_PLATFORM
from ..errors import StreamParseError
from ..metrics.decorator import metrics
from ..parallel import ParallelStreamWriter
from ..progress_stream import StreamOutputError
from ..project import get_image_digests
from ..project import MissingDigests
@@ -40,10 +38,7 @@ from ..service import ConvergenceStrategy
from ..service import ImageType
from ..service import NeedsBuildError
from ..service import OperationFailedError
from ..utils import filter_attached_for_up
from .colors import AnsiMode
from .command import get_config_from_options
from .command import get_project_dir
from .command import project_from_options
from .docopt_command import DocoptDispatcher
from .docopt_command import get_handler
@@ -56,132 +51,60 @@ from .log_printer import LogPrinter
from .utils import get_version_info
from .utils import human_readable_file_size
from .utils import yesno
from compose.metrics.client import MetricsCommand
from compose.metrics.client import Status
if not IS_WINDOWS_PLATFORM:
from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation
log = logging.getLogger(__name__)
console_handler = logging.StreamHandler(sys.stderr)
def main(): # noqa: C901
def main():
signals.ignore_sigpipe()
command = None
try:
_, opts, command = DocoptDispatcher.get_command_and_options(
TopLevelCommand,
get_filtered_args(sys.argv[1:]),
{'options_first': True, 'version': get_version_info('compose')})
except Exception:
pass
try:
command_func = dispatch()
command_func()
if not IS_LINUX_PLATFORM and command == 'help':
print("\nDocker Compose is now in the Docker CLI, try `docker compose` help")
command = dispatch()
command()
except (KeyboardInterrupt, signals.ShutdownException):
exit_with_metrics(command, "Aborting.", status=Status.CANCELED)
log.error("Aborting.")
sys.exit(1)
except (UserError, NoSuchService, ConfigurationError,
ProjectError, OperationFailedError) as e:
exit_with_metrics(command, e.msg, status=Status.FAILURE)
log.error(e.msg)
sys.exit(1)
except BuildError as e:
reason = ""
if e.reason:
reason = " : " + e.reason
exit_with_metrics(command,
"Service '{}' failed to build{}".format(e.service.name, reason),
status=Status.FAILURE)
log.error("Service '{}' failed to build{}".format(e.service.name, reason))
sys.exit(1)
except StreamOutputError as e:
exit_with_metrics(command, e, status=Status.FAILURE)
log.error(e)
sys.exit(1)
except NeedsBuildError as e:
exit_with_metrics(command,
"Service '{}' needs to be built, but --no-build was passed.".format(
e.service.name), status=Status.FAILURE)
log.error("Service '{}' needs to be built, but --no-build was passed.".format(e.service.name))
sys.exit(1)
except NoSuchCommand as e:
commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
if not IS_LINUX_PLATFORM:
commands += "\n\nDocker Compose is now in the Docker CLI, try `docker compose`"
exit_with_metrics("", log_msg="No such command: {}\n\n{}".format(
e.command, commands), status=Status.FAILURE)
log.error("No such command: %s\n\n%s", e.command, commands)
sys.exit(1)
except (errors.ConnectionError, StreamParseError):
exit_with_metrics(command, status=Status.FAILURE)
except SystemExit as e:
status = Status.SUCCESS
if len(sys.argv) > 1 and '--help' not in sys.argv:
status = Status.FAILURE
if command and len(sys.argv) >= 3 and sys.argv[2] == '--help':
command = '--help ' + command
if not command and len(sys.argv) >= 2 and sys.argv[1] == '--help':
command = '--help'
msg = e.args[0] if len(e.args) else ""
code = 0
if isinstance(e.code, int):
code = e.code
if not IS_LINUX_PLATFORM and not command:
msg += "\n\nDocker Compose is now in the Docker CLI, try `docker compose`"
exit_with_metrics(command, log_msg=msg, status=status,
exit_code=code)
def get_filtered_args(args):
if args[0] in ('-h', '--help'):
return []
if args[0] == '--version':
return ['version']
def exit_with_metrics(command, log_msg=None, status=Status.SUCCESS, exit_code=1):
if log_msg and command != 'exec':
if not exit_code:
log.info(log_msg)
else:
log.error(log_msg)
MetricsCommand(command, status=status).send_metrics()
sys.exit(exit_code)
sys.exit(1)
def dispatch():
console_stream = sys.stderr
console_handler = logging.StreamHandler(console_stream)
setup_logging(console_handler)
setup_logging()
dispatcher = DocoptDispatcher(
TopLevelCommand,
{'options_first': True, 'version': get_version_info('compose')})
options, handler, command_options = dispatcher.parse(sys.argv[1:])
ansi_mode = AnsiMode.AUTO
try:
if options.get("--ansi"):
ansi_mode = AnsiMode(options.get("--ansi"))
except ValueError:
raise UserError(
'Invalid value for --ansi: {}. Expected one of {}.'.format(
options.get("--ansi"),
', '.join(m.value for m in AnsiMode)
)
)
if options.get("--no-ansi"):
if options.get("--ansi"):
raise UserError("--no-ansi and --ansi cannot be combined.")
log.warning('--no-ansi option is deprecated and will be removed in future versions. '
'Use `--ansi never` instead.')
ansi_mode = AnsiMode.NEVER
setup_console_handler(console_handler,
options.get('--verbose'),
ansi_mode.use_ansi_codes(console_handler.stream),
set_no_color_if_clicolor(options.get('--no-ansi')),
options.get("--log-level"))
setup_parallel_logger(ansi_mode)
if ansi_mode is AnsiMode.NEVER:
setup_parallel_logger(set_no_color_if_clicolor(options.get('--no-ansi')))
if options.get('--no-ansi'):
command_options['--no-color'] = True
return functools.partial(perform_command, options, handler, command_options)
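For context, the --ansi validation in dispatch() only accepts the three AnsiMode values; a minimal stand-in sketch (this local enum mirrors compose.cli.colors.AnsiMode and is not the real import):

    from enum import Enum

    class AnsiMode(Enum):  # stand-in for compose.cli.colors.AnsiMode
        NEVER = 'never'
        ALWAYS = 'always'
        AUTO = 'auto'

    assert AnsiMode('auto') is AnsiMode.AUTO
    try:
        AnsiMode('sometimes')
    except ValueError:
        pass  # invalid values surface as the UserError raised above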
@@ -203,23 +126,23 @@ def perform_command(options, handler, command_options):
handler(command, command_options)
def setup_logging(console_handler):
def setup_logging():
root_logger = logging.getLogger()
root_logger.addHandler(console_handler)
root_logger.setLevel(logging.DEBUG)
# Disable requests and docker-py logging
logging.getLogger("urllib3").propagate = False
# Disable requests logging
logging.getLogger("requests").propagate = False
logging.getLogger("docker").propagate = False
def setup_parallel_logger(ansi_mode):
ParallelStreamWriter.set_default_ansi_mode(ansi_mode)
def setup_parallel_logger(noansi):
if noansi:
import compose.parallel
compose.parallel.ParallelStreamWriter.set_noansi()
def setup_console_handler(handler, verbose, use_console_formatter=True, level=None):
if use_console_formatter:
def setup_console_handler(handler, verbose, noansi=False, level=None):
if handler.stream.isatty() and noansi is False:
format_class = ConsoleWarningFormatter
else:
format_class = logging.Formatter
@@ -259,7 +182,7 @@ class TopLevelCommand:
"""Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
docker-compose [-f <arg>...] [options] [--] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
@@ -267,12 +190,10 @@ class TopLevelCommand:
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name
(default: directory name)
--profile NAME Specify a profile to enable
-c, --context NAME Specify a context name
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--ansi (never|always|auto) Control when to print ANSI control characters
--no-ansi Do not print ANSI control characters (DEPRECATED)
--no-ansi Do not print ANSI control characters
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
@@ -293,7 +214,7 @@ class TopLevelCommand:
build Build or rebuild services
config Validate and view the Compose file
create Create services
down Stop and remove resources
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
@@ -323,14 +244,13 @@ class TopLevelCommand:
@property
def project_dir(self):
return get_project_dir(self.toplevel_options)
return self.toplevel_options.get('--project-directory') or '.'
@property
def toplevel_environment(self):
environment_file = self.toplevel_options.get('--env-file')
return Environment.from_env_file(self.project_dir, environment_file)
@metrics()
def build(self, options):
"""
Build or rebuild services.
@@ -350,6 +270,8 @@ class TopLevelCommand:
--no-rm Do not remove intermediate containers after a successful build.
--parallel Build images in parallel.
--progress string Set type of progress output (auto, plain, tty).
EXPERIMENTAL flag for native builder.
To enable, run with COMPOSE_DOCKER_CLI_BUILD=1.
--pull Always attempt to pull a newer version of the image.
-q, --quiet Don't print anything to STDOUT
"""
@@ -363,7 +285,7 @@ class TopLevelCommand:
)
build_args = resolve_build_args(build_args, self.toplevel_environment)
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
self.project.build(
service_names=options['SERVICE'],
@@ -380,7 +302,6 @@ class TopLevelCommand:
progress=options.get('--progress'),
)
@metrics()
def config(self, options):
"""
Validate and view the Compose file.
@@ -392,7 +313,6 @@ class TopLevelCommand:
--no-interpolate Don't interpolate environment variables.
-q, --quiet Only validate the configuration, don't print
anything.
--profiles Print the profile names, one per line.
--services Print the service names, one per line.
--volumes Print the volume names, one per line.
--hash="*" Print the service config hash, one per line.
@@ -412,15 +332,6 @@ class TopLevelCommand:
if options['--quiet']:
return
if options['--profiles']:
profiles = set()
for service in compose_config.services:
if 'profiles' in service:
for profile in service['profiles']:
profiles.add(profile)
print('\n'.join(sorted(profiles)))
return
if options['--services']:
print('\n'.join(service['name'] for service in compose_config.services))
return
@@ -440,7 +351,6 @@ class TopLevelCommand:
print(serialize_config(compose_config, image_digests, not options['--no-interpolate']))
@metrics()
def create(self, options):
"""
Creates containers for a service.
@@ -469,7 +379,6 @@ class TopLevelCommand:
do_build=build_action_from_opts(options),
)
@metrics()
def down(self, options):
"""
Stops containers and removes containers, networks, volumes, and images
@@ -521,7 +430,6 @@ class TopLevelCommand:
Options:
--json Output events as a stream of json objects
"""
def format_event(event):
attributes = ["%s=%s" % item for item in event['attributes'].items()]
return ("{time} {type} {action} {id} ({attrs})").format(
@@ -538,7 +446,6 @@ class TopLevelCommand:
print(formatter(event))
sys.stdout.flush()
@metrics("exec")
def exec_command(self, options):
"""
Execute a command in a running container
@@ -615,7 +522,6 @@ class TopLevelCommand:
sys.exit(exit_code)
@classmethod
@metrics()
def help(cls, options):
"""
Get help on a command.
@@ -629,7 +535,6 @@ class TopLevelCommand:
print(getdoc(subject))
@metrics()
def images(self, options):
"""
List images used by the created containers.
@@ -684,7 +589,6 @@ class TopLevelCommand:
])
print(Formatter.table(headers, rows))
@metrics()
def kill(self, options):
"""
Force stop service containers.
@@ -699,7 +603,6 @@ class TopLevelCommand:
self.project.kill(service_names=options['SERVICE'], signal=signal)
@metrics()
def logs(self, options):
"""
View output from containers.
@@ -707,12 +610,11 @@ class TopLevelCommand:
Usage: logs [options] [--] [SERVICE...]
Options:
--no-color Produce monochrome output.
-f, --follow Follow log output.
-t, --timestamps Show timestamps.
--tail="all" Number of lines to show from the end of the logs
for each container.
--no-log-prefix Don't print prefix in logs.
--no-color Produce monochrome output.
-f, --follow Follow log output.
-t, --timestamps Show timestamps.
--tail="all" Number of lines to show from the end of the logs
for each container.
"""
containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
@@ -731,12 +633,10 @@ class TopLevelCommand:
log_printer_from_project(
self.project,
containers,
options['--no-color'],
set_no_color_if_clicolor(options['--no-color']),
log_args,
event_stream=self.project.events(service_names=options['SERVICE']),
keep_prefix=not options['--no-log-prefix']).run()
event_stream=self.project.events(service_names=options['SERVICE'])).run()
@metrics()
def pause(self, options):
"""
Pause services.
@@ -746,7 +646,6 @@ class TopLevelCommand:
containers = self.project.pause(service_names=options['SERVICE'])
exit_if(not containers, 'No containers to pause', 1)
@metrics()
def port(self, options):
"""
Print the public port for a port binding.
@@ -768,7 +667,6 @@ class TopLevelCommand:
options['PRIVATE_PORT'],
protocol=options.get('--protocol') or 'tcp') or '')
@metrics()
def ps(self, options):
"""
List containers.
@@ -778,9 +676,7 @@ class TopLevelCommand:
Options:
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property. KEY is either:
1. `source` with values `image`, or `build`;
2. `status` with values `running`, `stopped`, `paused`, or `restarted`.
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the run command)
"""
if options['--quiet'] and options['--services']:
@@ -827,7 +723,6 @@ class TopLevelCommand:
])
print(Formatter.table(headers, rows))
@metrics()
def pull(self, options):
"""
Pulls images for services defined in a Compose file, but does not start the containers.
@@ -851,7 +746,6 @@ class TopLevelCommand:
include_deps=options.get('--include-deps'),
)
@metrics()
def push(self, options):
"""
Pushes images for services.
@@ -866,7 +760,6 @@ class TopLevelCommand:
ignore_push_failures=options.get('--ignore-push-failures')
)
@metrics()
def rm(self, options):
"""
Removes stopped service containers.
@@ -911,7 +804,6 @@ class TopLevelCommand:
else:
print("No stopped containers")
@metrics()
def run(self, options):
"""
Run a one-off command on a service.
@@ -972,7 +864,6 @@ class TopLevelCommand:
self.toplevel_options, self.toplevel_environment
)
@metrics()
def scale(self, options):
"""
Set number of containers to run for a service.
@@ -1001,7 +892,6 @@ class TopLevelCommand:
for service_name, num in parse_scale_args(options['SERVICE=NUM']).items():
self.project.get_service(service_name).scale(num, timeout=timeout)
@metrics()
def start(self, options):
"""
Start existing containers.
@@ -1011,7 +901,6 @@ class TopLevelCommand:
containers = self.project.start(service_names=options['SERVICE'])
exit_if(not containers, 'No containers to start', 1)
@metrics()
def stop(self, options):
"""
Stop running containers without removing them.
@@ -1027,7 +916,6 @@ class TopLevelCommand:
timeout = timeout_from_opts(options)
self.project.stop(service_names=options['SERVICE'], timeout=timeout)
@metrics()
def restart(self, options):
"""
Restart running containers.
@@ -1042,7 +930,6 @@ class TopLevelCommand:
containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
exit_if(not containers, 'No containers to restart', 1)
@metrics()
def top(self, options):
"""
Display the running processes
@@ -1070,7 +957,6 @@ class TopLevelCommand:
print(container.name)
print(Formatter.table(headers, rows))
@metrics()
def unpause(self, options):
"""
Unpause services.
@@ -1080,7 +966,6 @@ class TopLevelCommand:
containers = self.project.unpause(service_names=options['SERVICE'])
exit_if(not containers, 'No containers to unpause', 1)
@metrics()
def up(self, options):
"""
Builds, (re)creates, starts, and attaches to containers for a service.
@@ -1132,7 +1017,6 @@ class TopLevelCommand:
container. Implies --abort-on-container-exit.
--scale SERVICE=NUM Scale SERVICE to NUM instances. Overrides the
`scale` setting in the Compose file if present.
--no-log-prefix Don't print prefix in logs.
"""
start_deps = not options['--no-deps']
always_recreate_deps = options['--always-recreate-deps']
@@ -1144,7 +1028,6 @@ class TopLevelCommand:
detached = options.get('--detach')
no_start = options.get('--no-start')
attach_dependencies = options.get('--attach-dependencies')
keep_prefix = not options.get('--no-log-prefix')
if detached and (cascade_stop or exit_value_from or attach_dependencies):
raise UserError(
@@ -1159,7 +1042,7 @@ class TopLevelCommand:
for excluded in [x for x in opts if options.get(x) and no_start]:
raise UserError('--no-start and {} cannot be combined.'.format(excluded))
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
with up_shutdown_context(self.project, service_names, timeout, detached):
warn_for_swarm_mode(self.project.client)
@@ -1181,7 +1064,6 @@ class TopLevelCommand:
renew_anonymous_volumes=options.get('--renew-anon-volumes'),
silent=options.get('--quiet-pull'),
cli=native_builder,
attach_dependencies=attach_dependencies,
)
try:
@@ -1209,11 +1091,10 @@ class TopLevelCommand:
log_printer = log_printer_from_project(
self.project,
attached_containers,
options['--no-color'],
set_no_color_if_clicolor(options['--no-color']),
{'follow': True},
cascade_stop,
event_stream=self.project.events(service_names=service_names),
keep_prefix=keep_prefix)
event_stream=self.project.events(service_names=service_names))
print("Attaching to", list_containers(log_printer.containers))
cascade_starter = log_printer.run()
@@ -1231,7 +1112,6 @@ class TopLevelCommand:
sys.exit(exit_code)
@classmethod
@metrics()
def version(cls, options):
"""
Show version information and quit.
@@ -1429,7 +1309,7 @@ def run_one_off_container(container_options, project, service, options, toplevel
service_names=[service.name],
start_deps=not options['--no-deps'],
strategy=ConvergenceStrategy.never,
detached=True,
detached=detach,
rescale=False,
cli=native_builder,
one_off=True,
@@ -1496,28 +1376,29 @@ def get_docker_start_call(container_options, container_id):
def log_printer_from_project(
project,
containers,
monochrome,
log_args,
cascade_stop=False,
event_stream=None,
keep_prefix=True,
project,
containers,
monochrome,
log_args,
cascade_stop=False,
event_stream=None,
):
return LogPrinter(
[c for c in containers if c.log_driver not in (None, 'none')],
build_log_presenters(project.service_names, monochrome, keep_prefix),
containers,
build_log_presenters(project.service_names, monochrome),
event_stream or project.events(),
cascade_stop=cascade_stop,
log_args=log_args)
def filter_attached_containers(containers, service_names, attach_dependencies=False):
return filter_attached_for_up(
containers,
service_names,
attach_dependencies,
lambda container: container.service)
if attach_dependencies or not service_names:
return containers
return [
container
for container in containers if container.service in service_names
]
@contextlib.contextmanager
@@ -1693,3 +1574,7 @@ def warn_for_swarm_mode(client):
"To deploy your application across the swarm, "
"use `docker stack deploy`.\n"
)
def set_no_color_if_clicolor(no_color_flag):
return no_color_flag or os.environ.get('CLICOLOR') == "0"
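A small check of the CLICOLOR convention implemented by set_no_color_if_clicolor above (CLICOLOR=0 disables color even when --no-color is absent):

    import os

    os.environ['CLICOLOR'] = '0'
    assert set_no_color_if_clicolor(False) is True   # env alone forces monochrome
    del os.environ['CLICOLOR']
    assert set_no_color_if_clicolor(False) is False
    assert set_no_color_if_clicolor(True) is True    # the explicit flag always wins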


@@ -10,11 +10,7 @@ from operator import attrgetter
from operator import itemgetter
import yaml
try:
from functools import cached_property
except ImportError:
from cached_property import cached_property
from cached_property import cached_property
from . import types
from ..const import COMPOSE_SPEC as VERSION
@@ -24,7 +20,6 @@ from ..utils import json_hash
from ..utils import parse_bytes
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from ..version import ComposeVersion
from .environment import env_vars_from_file
from .environment import Environment
from .environment import split_env
@@ -137,7 +132,6 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'logging',
'network_mode',
'platform',
'profiles',
'scale',
'stop_grace_period',
]
@@ -153,14 +147,9 @@ DOCKER_VALID_URL_PREFIXES = (
SUPPORTED_FILENAMES = [
'docker-compose.yml',
'docker-compose.yaml',
'compose.yml',
'compose.yaml',
]
DEFAULT_OVERRIDE_FILENAMES = ('docker-compose.override.yml',
'docker-compose.override.yaml',
'compose.override.yml',
'compose.override.yaml')
DEFAULT_OVERRIDE_FILENAMES = ('docker-compose.override.yml', 'docker-compose.override.yaml')
log = logging.getLogger(__name__)
@@ -195,13 +184,6 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
def from_filename(cls, filename):
return cls(filename, load_yaml(filename))
@cached_property
def config_version(self):
version = self.config.get('version', None)
if isinstance(version, dict):
return V1
return ComposeVersion(version) if version else self.version
@cached_property
def version(self):
version = self.config.get('version', None)
@@ -240,13 +222,15 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
'Version "{}" in "{}" is invalid.'
.format(version, self.filename))
if version.startswith("1"):
if version.startswith("1"):
version = V1
if version == V1:
raise ConfigurationError(
'Version in "{}" is invalid. {}'
.format(self.filename, VERSION_EXPLANATION)
)
return VERSION
return version
def get_service(self, name):
return self.get_service_dicts()[name]
@@ -269,10 +253,8 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
return {} if self.version == V1 else self.config.get('configs', {})
class Config(namedtuple('_Config', 'config_version version services volumes networks secrets configs')):
class Config(namedtuple('_Config', 'version services volumes networks secrets configs')):
"""
:param config_version: configuration file version
:type config_version: int
:param version: configuration version
:type version: int
:param services: List of service description dictionaries
@@ -313,16 +295,7 @@ def find(base_dir, filenames, environment, override_dir=None):
if filenames:
filenames = [os.path.join(base_dir, f) for f in filenames]
else:
# search for compose files in the base dir and its parents
filenames = get_default_config_files(base_dir)
if not filenames and not override_dir:
# none found in base_dir and no override_dir defined
raise ComposeFileNotFound(SUPPORTED_FILENAMES)
if not filenames:
# search for compose files in the project directory and its parents
filenames = get_default_config_files(override_dir)
if not filenames:
raise ComposeFileNotFound(SUPPORTED_FILENAMES)
log.debug("Using configuration files: {}".format(",".join(filenames)))
return ConfigDetails(
@@ -353,7 +326,7 @@ def get_default_config_files(base_dir):
(candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)
if not candidates:
return None
raise ComposeFileNotFound(SUPPORTED_FILENAMES)
winner = candidates[0]
@@ -392,23 +365,6 @@ def find_candidates_in_parent_dirs(filenames, path):
return (candidates, path)
def check_swarm_only_config(service_dicts):
warning_template = (
"Some services ({services}) use the '{key}' key, which will be ignored. "
"Compose does not support '{key}' configuration - use "
"`docker stack deploy` to deploy to a swarm."
)
key = 'configs'
services = [s for s in service_dicts if s.get(key)]
if services:
log.warning(
warning_template.format(
services=", ".join(sorted(s['name'] for s in services)),
key=key
)
)
def load(config_details, interpolate=True):
"""Load the configuration from a working directory and a list of
configuration files. Files are loaded in order, and merged on top
@@ -445,10 +401,9 @@ def load(config_details, interpolate=True):
for service_dict in service_dicts:
match_named_volumes(service_dict, volumes)
check_swarm_only_config(service_dicts)
version = main_file.version
return Config(main_file.config_version, main_file.version,
service_dicts, volumes, networks, secrets, configs)
return Config(version, service_dicts, volumes, networks, secrets, configs)
def load_mapping(config_files, get_func, entity_type, working_dir=None):
@@ -468,36 +423,20 @@ def load_mapping(config_files, get_func, entity_type, working_dir=None):
elif not config.get('name'):
config['name'] = name
if 'driver_opts' in config:
config['driver_opts'] = build_string_dict(
config['driver_opts']
)
if 'labels' in config:
config['labels'] = parse_labels(config['labels'])
if 'file' in config:
config['file'] = expand_path(working_dir, config['file'])
if 'driver_opts' in config:
config['driver_opts'] = build_string_dict(
config['driver_opts']
)
device = format_device_option(entity_type, config)
if device:
config['driver_opts']['device'] = device
return mapping
def format_device_option(entity_type, config):
if entity_type != 'Volume':
return
# default driver is 'local'
driver = config.get('driver', 'local')
if driver != 'local':
return
o = config['driver_opts'].get('o')
device = config['driver_opts'].get('device')
if o and o == 'bind' and device:
fullpath = os.path.abspath(os.path.expanduser(device))
return fullpath
def validate_external(entity_type, name, config, version):
for k in config.keys():
if entity_type == 'Network' and k == 'driver':
@@ -574,7 +513,8 @@ def process_config_section(config_file, config, section, environment, interpolat
config_file.version,
config,
section,
environment)
environment
)
else:
return config
@@ -1084,7 +1024,7 @@ def merge_service_dicts(base, override, version):
for field in [
'cap_add', 'cap_drop', 'expose', 'external_links',
'volumes_from', 'device_cgroup_rules', 'profiles',
'volumes_from', 'device_cgroup_rules',
]:
md.merge_field(field, merge_unique_items_lists, default=[])
@@ -1174,7 +1114,6 @@ def merge_deploy(base, override):
md['resources'] = dict(resources_md)
if md.needs_merge('placement'):
placement_md = MergeDict(md.base.get('placement') or {}, md.override.get('placement') or {})
placement_md.merge_scalar('max_replicas_per_node')
placement_md.merge_field('constraints', merge_unique_items_lists, default=[])
placement_md.merge_field('preferences', merge_unique_objects_lists, default=[])
md['placement'] = dict(placement_md)
@@ -1203,7 +1142,6 @@ def merge_reservations(base, override):
md.merge_scalar('cpus')
md.merge_scalar('memory')
md.merge_sequence('generic_resources', types.GenericResource.parse)
md.merge_field('devices', merge_unique_objects_lists, default=[])
return dict(md)


@@ -1,16 +1,14 @@
{
"$schema": "http://json-schema.org/draft/2019-09/schema#",
"id": "compose_spec.json",
"id": "config_schema_compose_spec.json",
"type": "object",
"title": "Compose Specification",
"description": "The Compose file is a YAML file defining a multi-containers based application.",
"properties": {
"version": {
"type": "string",
"description": "Version of the Compose specification used. Tools not implementing required version MUST reject the configuration file."
},
"services": {
"id": "#/properties/services",
"type": "object",
@@ -21,7 +19,6 @@
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
@@ -31,7 +28,6 @@
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
@@ -42,7 +38,6 @@
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
@@ -53,7 +48,6 @@
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
@@ -65,16 +59,12 @@
"additionalProperties": false
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
@@ -87,7 +77,7 @@
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"type": "array", "items": {"type": "string"}},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"},
"target": {"type": "string"},
"shm_size": {"type": ["integer", "string"]},
@@ -163,7 +153,7 @@
"cpu_period": {"type": ["number", "string"]},
"cpu_rt_period": {"type": ["number", "string"]},
"cpu_rt_runtime": {"type": ["number", "string"]},
"cpus": {"type": ["number", "string"]},
"cpus": {"type": "number", "minimum": 0},
"cpuset": {"type": "string"},
"credential_spec": {
"type": "object",
@@ -188,7 +178,7 @@
"properties": {
"condition": {
"type": "string",
"enum": ["service_started", "service_healthy", "service_completed_successfully"]
"enum": ["service_started", "service_healthy"]
}
},
"required": ["condition"]
@@ -200,6 +190,7 @@
"device_cgroup_rules": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_opt": {"type": "array","items": {"type": "string"}, "uniqueItems": true},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
@@ -220,12 +211,12 @@
},
"uniqueItems": true
},
"extends": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"service": {"type": "string"},
"file": {"type": "string"}
@@ -254,7 +245,6 @@
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
@@ -268,7 +258,7 @@
"patternProperties": {"^x-": {}}
},
"mac_address": {"type": "string"},
"mem_limit": {"type": ["number", "string"]},
"mem_limit": {"type": "string"},
"mem_reservation": {"type": ["string", "integer"]},
"mem_swappiness": {"type": "integer"},
"memswap_limit": {"type": ["number", "string"]},
@@ -328,13 +318,13 @@
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"profiles": {"$ref": "#/definitions/list_of_strings"},
"pull_policy": {"type": "string", "enum": [
"always", "never", "if_not_present", "build"
"always", "never", "if_not_present"
]},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"runtime": {
"deprecated": true,
"type": "string"
},
"scale": {
@@ -366,7 +356,6 @@
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
@@ -436,9 +425,9 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
}
]
},
"uniqueItems": true
],
"uniqueItems": true
}
},
"volumes_from": {
"type": "array",
@@ -514,7 +503,7 @@
"limits": {
"type": "object",
"properties": {
"cpus": {"type": ["number", "string"]},
"cpus": {"type": "number", "minimum": 0},
"memory": {"type": "string"}
},
"additionalProperties": false,
@@ -523,10 +512,9 @@
"reservations": {
"type": "object",
"properties": {
"cpus": {"type": ["number", "string"]},
"cpus": {"type": "number", "minimum": 0},
"memory": {"type": "string"},
"generic_resources": {"$ref": "#/definitions/generic_resources"},
"devices": {"$ref": "#/definitions/devices"}
"generic_resources": {"$ref": "#/definitions/generic_resources"}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
@@ -570,7 +558,6 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"generic_resources": {
"id": "#/definitions/generic_resources",
"type": "array",
@@ -591,24 +578,6 @@
"patternProperties": {"^x-": {}}
}
},
"devices": {
"id": "#/definitions/devices",
"type": "array",
"items": {
"type": "object",
"properties": {
"capabilities": {"$ref": "#/definitions/list_of_strings"},
"count": {"type": ["string", "integer"]},
"device_ids": {"$ref": "#/definitions/list_of_strings"},
"driver":{"type": "string"},
"options":{"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
}
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
@@ -638,10 +607,10 @@
"additionalProperties": false,
"patternProperties": {"^.+$": {"type": "string"}}
}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
}
}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"options": {
"type": "object",
@@ -671,7 +640,6 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
@@ -700,7 +668,6 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
@@ -726,7 +693,6 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"config": {
"id": "#/definitions/config",
"type": "object",
@@ -748,20 +714,17 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
@@ -776,7 +739,6 @@
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"blkio_limit": {
"type": "object",
"properties": {
@@ -793,7 +755,6 @@
},
"additionalProperties": false
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",


@@ -54,10 +54,9 @@ class Environment(dict):
if base_dir is None:
return result
if env_file:
env_file_path = os.path.join(os.getcwd(), env_file)
return cls(env_vars_from_file(env_file_path))
env_file_path = os.path.join(base_dir, '.env')
env_file_path = os.path.join(base_dir, env_file)
else:
env_file_path = os.path.join(base_dir, '.env')
try:
return cls(env_vars_from_file(env_file_path))
except EnvFileNotFound:
@@ -114,13 +113,13 @@ class Environment(dict):
)
return super().get(key, *args, **kwargs)
def get_boolean(self, key, default=False):
def get_boolean(self, key):
# Convert a value to a boolean using "common sense" rules.
# Unset, empty, "0" and "false" (i-case) yield False.
# All other values yield True.
value = self.get(key)
if not value:
return default
return False
if value.lower() in ['0', 'false']:
return False
return True
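The two signatures above differ only in the fallback for unset or empty values; a self-contained sketch of the "common sense" rules, with a plain dict standing in for Environment:

    def get_boolean(env, key, default=False):
        value = env.get(key)
        if not value:                    # unset or empty -> default
            return default
        return value.lower() not in ('0', 'false')

    assert get_boolean({}, 'COMPOSE_DOCKER_CLI_BUILD') is False
    assert get_boolean({}, 'COMPOSE_DOCKER_CLI_BUILD', default=True) is True  # 1.27's native-build default
    assert get_boolean({'X': 'False'}, 'X') is False
    assert get_boolean({'X': 'yes'}, 'X') is True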


@@ -111,14 +111,12 @@ class TemplateWithDefaults(Template):
var, _, err = braced.partition(':?')
result = mapping.get(var)
if not result:
err = err or var
raise UnsetRequiredSubstitution(err)
return result
elif '?' == sep:
var, _, err = braced.partition('?')
if var in mapping:
return mapping.get(var)
err = err or var
raise UnsetRequiredSubstitution(err)
# Modified from python2.7/string.py
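The two branches above implement the ${VAR:?err} and ${VAR?err} required-substitution forms; a hedged stand-alone sketch, with KeyError standing in for UnsetRequiredSubstitution:

    def resolve_required(mapping, braced):
        if ':?' in braced:                   # ${VAR:?err}: fails when unset OR empty
            var, _, err = braced.partition(':?')
            result = mapping.get(var)
            if not result:
                raise KeyError(err or var)
            return result
        if '?' in braced:                    # ${VAR?err}: fails only when unset
            var, _, err = braced.partition('?')
            if var in mapping:
                return mapping.get(var)
            raise KeyError(err or var)
        return mapping.get(braced)

    assert resolve_required({'TAG': 'v1'}, 'TAG:?TAG must be set') == 'v1'
    assert resolve_required({'TAG': ''}, 'TAG?msg') == ''  # set-but-empty passes the '?' form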
@@ -243,7 +241,6 @@ class ConversionMap:
service_path('healthcheck', 'disable'): to_boolean,
service_path('deploy', 'labels', PATH_JOKER): to_str,
service_path('deploy', 'replicas'): to_int,
service_path('deploy', 'placement', 'max_replicas_per_node'): to_int,
service_path('deploy', 'resources', 'limits', "cpus"): to_float,
service_path('deploy', 'update_config', 'parallelism'): to_int,
service_path('deploy', 'update_config', 'max_failure_ratio'): to_float,


@@ -44,7 +44,7 @@ yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)
def denormalize_config(config, image_digests=None):
result = {'version': str(config.config_version)}
result = {'version': str(config.version)}
denormalized_services = [
denormalize_service_dict(
service_dict,
@@ -121,6 +121,11 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
if version == V1 and 'network_mode' not in service_dict:
service_dict['network_mode'] = 'bridge'
if 'depends_on' in service_dict:
service_dict['depends_on'] = sorted([
svc for svc in service_dict['depends_on'].keys()
])
if 'healthcheck' in service_dict:
if 'interval' in service_dict['healthcheck']:
service_dict['healthcheck']['interval'] = serialize_ns_time_value(
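Worked example (hypothetical service dict) of the depends_on handling above, which turns the compose-spec mapping form back into a sorted list for serialisation:

    service_dict = {'depends_on': {'db': {'condition': 'service_started'},
                                   'cache': {'condition': 'service_started'}}}
    service_dict['depends_on'] = sorted(service_dict['depends_on'].keys())
    assert service_dict['depends_on'] == ['cache', 'db']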


@@ -502,13 +502,13 @@ def get_schema_path():
def load_jsonschema(version):
name = "compose_spec"
suffix = "compose_spec"
if version == V1:
name = "config_schema_v1"
suffix = "v1"
filename = os.path.join(
get_schema_path(),
"{}.json".format(name))
"config_schema_{}.json".format(suffix))
if not os.path.exists(filename):
raise ConfigurationError(


@@ -5,7 +5,6 @@ from .version import ComposeVersion
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
IS_LINUX_PLATFORM = (sys.platform == "linux")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'


@@ -186,6 +186,11 @@ class Container:
def log_driver(self):
return self.get('HostConfig.LogConfig.Type')
@property
def has_api_logs(self):
log_type = self.log_driver
return not log_type or log_type in ('json-file', 'journald', 'local')
@property
def human_readable_health_status(self):
""" Generate UP status string with up time and health
@@ -199,7 +204,11 @@ class Container:
return status_string
def attach_log_stream(self):
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
"""A log stream can only be attached if the container uses a
json-file, journald or local log driver.
"""
if self.has_api_logs:
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
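A sketch of the driver gate behind has_api_logs above; only these drivers can stream logs back through the API, so attach_log_stream silently skips the rest:

    def supports_api_logs(log_type):
        # None or '' means the daemon default (json-file), which is streamable.
        return not log_type or log_type in ('json-file', 'journald', 'local')

    assert supports_api_logs(None)
    assert supports_api_logs('local')
    assert not supports_api_logs('syslog')  # e.g. syslog-driven containers are skipped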
def get(self, key):
"""Return a value from the container or None if the value is not set.


@@ -27,8 +27,3 @@ class NoHealthCheckConfigured(HealthCheckException):
service_name
)
)
class CompletedUnsuccessfully(Exception):
def __init__(self, container_id, exit_code):
self.msg = 'Container "{}" exited with code {}.'.format(container_id, exit_code)


@@ -1,64 +0,0 @@
import os
from enum import Enum
import requests
from docker import ContextAPI
from docker.transport import UnixHTTPAdapter
from compose.const import IS_WINDOWS_PLATFORM
if IS_WINDOWS_PLATFORM:
from docker.transport import NpipeHTTPAdapter
class Status(Enum):
SUCCESS = "success"
FAILURE = "failure"
CANCELED = "canceled"
class MetricsSource:
CLI = "docker-compose"
if IS_WINDOWS_PLATFORM:
METRICS_SOCKET_FILE = 'npipe://\\\\.\\pipe\\docker_cli'
else:
METRICS_SOCKET_FILE = 'http+unix:///var/run/docker-cli.sock'
class MetricsCommand(requests.Session):
"""
Representation of a command in the metrics.
"""
def __init__(self, command,
context_type=None, status=Status.SUCCESS,
source=MetricsSource.CLI, uri=None):
super().__init__()
self.command = ("compose " + command).strip() if command else "compose --help"
self.context = context_type or ContextAPI.get_current_context().context_type or 'moby'
self.source = source
self.status = status.value
self.uri = uri or os.environ.get("METRICS_SOCKET_FILE", METRICS_SOCKET_FILE)
if IS_WINDOWS_PLATFORM:
self.mount("http+unix://", NpipeHTTPAdapter(self.uri))
else:
self.mount("http+unix://", UnixHTTPAdapter(self.uri))
def send_metrics(self):
try:
return self.post("http+unix://localhost/usage",
json=self.to_map(),
timeout=.05,
headers={'Content-Type': 'application/json'})
except Exception as e:
return e
def to_map(self):
return {
'command': self.command,
'context': self.context,
'source': self.source,
'status': self.status,
}
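Hypothetical usage of the client above (assuming this package is importable); send_metrics() deliberately swallows failures so telemetry can never break a command:

    from compose.metrics.client import MetricsCommand
    from compose.metrics.client import Status

    MetricsCommand('up -d', status=Status.SUCCESS).send_metrics()  # reported as "compose up -d"
    MetricsCommand('', status=Status.FAILURE).send_metrics()       # empty command falls back to "compose --help"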


@@ -1,21 +0,0 @@
import functools
from compose.metrics.client import MetricsCommand
from compose.metrics.client import Status
class metrics:
def __init__(self, command_name=None):
self.command_name = command_name
def __call__(self, fn):
@functools.wraps(fn,
assigned=functools.WRAPPER_ASSIGNMENTS,
updated=functools.WRAPPER_UPDATES)
def wrapper(*args, **kwargs):
if not self.command_name:
self.command_name = fn.__name__
result = fn(*args, **kwargs)
MetricsCommand(self.command_name, status=Status.SUCCESS).send_metrics()
return result
return wrapper
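Usage sketch for the decorator above, matching how main.py applies it (the command name defaults to the wrapped function's name):

    @metrics()         # reported as "compose images"
    def images(options):
        ...

    @metrics('exec')   # explicit name, as on TopLevelCommand.exec_command
    def exec_command(options):
        ...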


@@ -11,12 +11,10 @@ from threading import Thread
from docker.errors import APIError
from docker.errors import ImageNotFound
from compose.cli.colors import AnsiMode
from compose.cli.colors import green
from compose.cli.colors import red
from compose.cli.signals import ShutdownException
from compose.const import PARALLEL_LIMIT
from compose.errors import CompletedUnsuccessfully
from compose.errors import HealthCheckFailed
from compose.errors import NoHealthCheckConfigured
from compose.errors import OperationFailedError
@@ -62,8 +60,7 @@ def parallel_execute_watch(events, writer, errors, results, msg, get_name, fail_
elif isinstance(exception, APIError):
errors[get_name(obj)] = exception.explanation
writer.write(msg, get_name(obj), 'error', red)
elif isinstance(exception, (OperationFailedError, HealthCheckFailed, NoHealthCheckConfigured,
CompletedUnsuccessfully)):
elif isinstance(exception, (OperationFailedError, HealthCheckFailed, NoHealthCheckConfigured)):
errors[get_name(obj)] = exception.msg
writer.write(msg, get_name(obj), 'error', red)
elif isinstance(exception, UpstreamError):
@@ -86,7 +83,10 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None, fa
objects = list(objects)
stream = sys.stderr
writer = ParallelStreamWriter.get_or_assign_instance(ParallelStreamWriter(stream))
if ParallelStreamWriter.instance:
writer = ParallelStreamWriter.instance
else:
writer = ParallelStreamWriter(stream)
for obj in objects:
writer.add_object(msg, get_name(obj))
@@ -243,12 +243,6 @@ def feed_queue(objects, func, get_deps, results, state, limiter):
'not processing'.format(obj)
)
results.put((obj, None, e))
except CompletedUnsuccessfully as e:
log.debug(
'Service(s) upstream of {} did not complete successfully - '
'not processing'.format(obj)
)
results.put((obj, None, e))
if state.is_done():
results.put(STOP)
@@ -265,37 +259,19 @@ class ParallelStreamWriter:
to jump to the correct line, and write over the line.
"""
default_ansi_mode = AnsiMode.AUTO
write_lock = Lock()
noansi = False
lock = Lock()
instance = None
instance_lock = Lock()
@classmethod
def get_instance(cls):
return cls.instance
def set_noansi(cls, value=True):
cls.noansi = value
@classmethod
def get_or_assign_instance(cls, writer):
cls.instance_lock.acquire()
try:
if cls.instance is None:
cls.instance = writer
return cls.instance
finally:
cls.instance_lock.release()
@classmethod
def set_default_ansi_mode(cls, ansi_mode):
cls.default_ansi_mode = ansi_mode
def __init__(self, stream, ansi_mode=None):
if ansi_mode is None:
ansi_mode = self.default_ansi_mode
def __init__(self, stream):
self.stream = stream
self.use_ansi_codes = ansi_mode.use_ansi_codes(stream)
self.lines = []
self.width = 0
ParallelStreamWriter.instance = self
def add_object(self, msg, obj_index):
if msg is None:
@@ -309,7 +285,7 @@ class ParallelStreamWriter:
return self._write_noansi(msg, obj_index, '')
def _write_ansi(self, msg, obj_index, status):
self.write_lock.acquire()
self.lock.acquire()
position = self.lines.index(msg + obj_index)
diff = len(self.lines) - position
# move up
@@ -321,7 +297,7 @@ class ParallelStreamWriter:
# move back down
self.stream.write("%c[%dB" % (27, diff))
self.stream.flush()
self.write_lock.release()
self.lock.release()
def _write_noansi(self, msg, obj_index, status):
self.stream.write(
@@ -334,10 +310,17 @@ class ParallelStreamWriter:
def write(self, msg, obj_index, status, color_func):
if msg is None:
return
if self.use_ansi_codes:
self._write_ansi(msg, obj_index, color_func(status))
else:
if self.noansi:
self._write_noansi(msg, obj_index, status)
else:
self._write_ansi(msg, obj_index, color_func(status))
def get_stream_writer():
instance = ParallelStreamWriter.instance
if instance is None:
raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
return instance
def parallel_operation(containers, operation, options, message):


@@ -39,7 +39,6 @@ from .service import Service
from .service import ServiceIpcMode
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import filter_attached_for_up
from .utils import microseconds_from_time_nano
from .utils import truncate_string
from .volume import ProjectVolumes
@@ -69,15 +68,13 @@ class Project:
"""
A collection of services.
"""
def __init__(self, name, services, client, networks=None, volumes=None, config_version=None,
enabled_profiles=None):
def __init__(self, name, services, client, networks=None, volumes=None, config_version=None):
self.name = name
self.services = services
self.client = client
self.volumes = volumes or ProjectVolumes({})
self.networks = networks or ProjectNetworks({}, False)
self.config_version = config_version
self.enabled_profiles = enabled_profiles or []
def labels(self, one_off=OneOffFilter.exclude, legacy=False):
name = self.name
@@ -89,8 +86,7 @@ class Project:
return labels
@classmethod
def from_config(cls, name, config_data, client, default_platform=None, extra_labels=None,
enabled_profiles=None):
def from_config(cls, name, config_data, client, default_platform=None, extra_labels=None):
"""
Construct a Project from a config.Config object.
"""
@@ -102,7 +98,7 @@ class Project:
networks,
use_networking)
volumes = ProjectVolumes.from_config(name, config_data, client)
project = cls(name, [], client, project_networks, volumes, config_data.version, enabled_profiles)
project = cls(name, [], client, project_networks, volumes, config_data.version)
for service_dict in config_data.services:
service_dict = dict(service_dict)
@@ -132,7 +128,7 @@ class Project:
config_data.secrets)
service_dict['scale'] = project.get_service_scale(service_dict)
service_dict['device_requests'] = project.get_device_requests(service_dict)
service_dict = translate_credential_spec_to_security_opt(service_dict)
service_dict, ignored_keys = translate_deploy_keys_to_container_config(
service_dict
@@ -189,7 +185,7 @@ class Project:
if name not in valid_names:
raise NoSuchService(name)
def get_services(self, service_names=None, include_deps=False, auto_enable_profiles=True):
def get_services(self, service_names=None, include_deps=False):
"""
Returns a list of this project's services filtered
by the provided list of names, or all services if service_names is None
@@ -202,36 +198,15 @@ class Project:
reordering as needed to resolve dependencies.
Raises NoSuchService if any of the named services do not exist.
Raises ConfigurationError if any service depended on is not enabled by active profiles
"""
# create a copy so we can *locally* add auto-enabled profiles later
enabled_profiles = self.enabled_profiles.copy()
if service_names is None or len(service_names) == 0:
auto_enable_profiles = False
service_names = [
service.name
for service in self.services
if service.enabled_for_profiles(enabled_profiles)
]
service_names = self.service_names
unsorted = [self.get_service(name) for name in service_names]
services = [s for s in self.services if s in unsorted]
if auto_enable_profiles:
# enable profiles of explicitly targeted services
for service in services:
for profile in service.get_profiles():
if profile not in enabled_profiles:
enabled_profiles.append(profile)
if include_deps:
services = reduce(
lambda acc, s: self._inject_deps(acc, s, enabled_profiles),
services,
[]
)
services = reduce(self._inject_deps, services, [])
uniques = []
[uniques.append(s) for s in services if s not in uniques]
@@ -356,31 +331,6 @@ class Project:
max_replicas))
return scale
def get_device_requests(self, service_dict):
deploy_dict = service_dict.get('deploy', None)
if not deploy_dict:
return
resources = deploy_dict.get('resources', None)
if not resources or not resources.get('reservations', None):
return
devices = resources['reservations'].get('devices')
if not devices:
return
for dev in devices:
count = dev.get("count", -1)
if not isinstance(count, int):
if count != "all":
raise ConfigurationError(
'Invalid value "{}" for devices count'.format(dev["count"]),
'(expected integer or "all")')
dev["count"] = -1
if 'capabilities' in dev:
dev['capabilities'] = [dev['capabilities']]
return devices
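Worked example (hypothetical deploy section) of the normalisation above: a count of "all" becomes -1 and capabilities gain one level of nesting, matching the shape docker-py expects:

    service_dict = {'deploy': {'resources': {'reservations': {'devices': [
        {'capabilities': ['gpu'], 'count': 'all'},
    ]}}}}
    # get_device_requests(service_dict) would yield:
    #   [{'capabilities': [['gpu']], 'count': -1}]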
def start(self, service_names=None, **options):
containers = []
@@ -462,12 +412,10 @@ class Project:
self.remove_images(remove_image_type)
def remove_images(self, remove_image_type):
for service in self.services:
for service in self.get_services():
service.remove_image(remove_image_type)
def restart(self, service_names=None, **options):
# filter service_names by enabled profiles
service_names = [s.name for s in self.get_services(service_names)]
containers = self.containers(service_names, stopped=True)
parallel.parallel_execute(
@@ -490,6 +438,7 @@ class Project:
log.info('%s uses an image, skipping' % service.name)
if cli:
log.warning("Native build is an experimental feature and could change at any time")
if parallel_build:
log.warning("Flag '--parallel' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
@@ -645,10 +594,12 @@ class Project:
silent=False,
cli=False,
one_off=False,
attach_dependencies=False,
override_options=None,
):
if cli:
log.warning("Native build is an experimental feature and could change at any time")
self.initialize()
if not ignore_orphans:
self.find_orphan_containers(remove_orphans)
@@ -669,17 +620,12 @@ class Project:
one_off=service_names if one_off else [],
)
services_to_attach = filter_attached_for_up(
services,
service_names,
attach_dependencies,
lambda service: service.name)
def do(service):
return service.execute_convergence_plan(
plans[service.name],
timeout=timeout,
detached=detached or (service not in services_to_attach),
detached=detached,
scale_override=scale_override.get(service.name),
rescale=rescale,
start=start,
@@ -749,7 +695,7 @@ class Project:
return plans
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=True, silent=False,
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
include_deps=False):
services = self.get_services(service_names, include_deps)
@@ -783,9 +729,7 @@ class Project:
return
try:
writer = parallel.ParallelStreamWriter.get_instance()
if writer is None:
raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
writer = parallel.get_stream_writer()
for event in strm:
if 'status' not in event:
continue
@@ -886,26 +830,14 @@ class Project:
)
)
def _inject_deps(self, acc, service, enabled_profiles):
def _inject_deps(self, acc, service):
dep_names = service.get_dependency_names()
if len(dep_names) > 0:
dep_services = self.get_services(
service_names=list(set(dep_names)),
include_deps=True,
auto_enable_profiles=False
include_deps=True
)
for dep in dep_services:
if not dep.enabled_for_profiles(enabled_profiles):
raise ConfigurationError(
'Service "{dep_name}" was pulled in as a dependency of '
'service "{service_name}" but is not enabled by the '
'active profiles. '
'You may fix this by adding a common profile to '
'"{dep_name}" and "{service_name}".'
.format(dep_name=dep.name, service_name=service.name)
)
else:
dep_services = []


@@ -1,5 +1,6 @@
import enum
import itertools
import json
import logging
import os
import re
@@ -44,7 +45,6 @@ from .const import LABEL_VERSION
from .const import NANOCPUS_SCALE
from .const import WINDOWS_LONGPATH_PREFIX
from .container import Container
from .errors import CompletedUnsuccessfully
from .errors import HealthCheckFailed
from .errors import NoHealthCheckConfigured
from .errors import OperationFailedError
@@ -77,7 +77,6 @@ HOST_CONFIG_KEYS = [
'cpuset',
'device_cgroup_rules',
'devices',
'device_requests',
'dns',
'dns_search',
'dns_opt',
@@ -112,7 +111,6 @@ HOST_CONFIG_KEYS = [
CONDITION_STARTED = 'service_started'
CONDITION_HEALTHY = 'service_healthy'
CONDITION_COMPLETED_SUCCESSFULLY = 'service_completed_successfully'
class BuildError(Exception):
@@ -413,7 +411,7 @@ class Service:
stopped = [c for c in containers if not c.is_running]
if stopped:
return ConvergencePlan('start', containers)
return ConvergencePlan('start', stopped)
return ConvergencePlan('noop', containers)
@@ -516,9 +514,8 @@ class Service:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
stopped = [c for c in containers if not c.is_running]
_, errors = parallel_execute(
stopped,
containers,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
lambda c: c.name,
"Starting",
@@ -713,13 +710,12 @@ class Service:
'image_id': image_id(),
'links': self.get_link_names(),
'net': self.network_mode.id,
'ipc_mode': self.ipc_mode.mode,
'networks': self.networks,
'secrets': self.secrets,
'volumes_from': [
(v.source.name, v.mode)
for v in self.volumes_from if isinstance(v.source, Service)
]
],
}
def get_dependency_names(self):
@@ -755,8 +751,6 @@ class Service:
configs[svc] = lambda s: True
elif config['condition'] == CONDITION_HEALTHY:
configs[svc] = lambda s: s.is_healthy()
elif config['condition'] == CONDITION_COMPLETED_SUCCESSFULLY:
configs[svc] = lambda s: s.is_completed_successfully()
else:
# The config schema already prevents this, but it might be
# bypassed if Compose is called programmatically.
@@ -1021,7 +1015,6 @@ class Service:
privileged=options.get('privileged', False),
network_mode=self.network_mode.mode,
devices=options.get('devices'),
device_requests=options.get('device_requests'),
dns=options.get('dns'),
dns_opt=options.get('dns_opt'),
dns_search=options.get('dns_search'),
@@ -1107,9 +1100,8 @@ class Service:
'Impossible to perform platform-targeted builds for API version < 1.35'
)
builder = _ClientBuilder(self.client) if not cli else _CLIBuilder(progress)
return builder.build(
service=self,
builder = self.client if not cli else _CLIBuilder(progress)
build_output = builder.build(
path=path,
tag=self.image_name,
rm=rm,
@@ -1130,7 +1122,30 @@ class Service:
gzip=gzip,
isolation=build_opts.get('isolation', self.options.get('isolation', None)),
platform=self.platform,
output_stream=output_stream)
)
try:
all_events = list(stream_output(build_output, output_stream))
except StreamOutputError as e:
raise BuildError(self, str(e))
# Ensure the HTTP connection is not reused for another
# streaming command, as the Docker daemon can sometimes
# complain about it
self.client.close()
image_id = None
for event in all_events:
if 'stream' in event:
match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
if match:
image_id = match.group(1)
if image_id is None:
raise BuildError(self, event if all_events else 'Unknown')
return image_id
def get_cache_from(self, build_opts):
cache_from = build_opts.get('cache_from', None)
@@ -1286,21 +1301,6 @@ class Service:
raise HealthCheckFailed(ctnr.short_id)
return result
def is_completed_successfully(self):
""" Check that all containers for this service has completed successfully
Returns false if at least one container does not exited and
raises CompletedUnsuccessfully exception if at least one container
exited with non-zero exit code.
"""
result = True
for ctnr in self.containers(stopped=True):
ctnr.inspect()
if ctnr.get('State.Status') != 'exited':
result = False
elif ctnr.exit_code != 0:
raise CompletedUnsuccessfully(ctnr.short_id, ctnr.exit_code)
return result
def _parse_proxy_config(self):
client = self.client
if 'proxies' not in client._general_configs:
@@ -1326,24 +1326,6 @@ class Service:
return result
def get_profiles(self):
if 'profiles' not in self.options:
return []
return self.options.get('profiles')
def enabled_for_profiles(self, enabled_profiles):
# if service has no profiles specified it is always enabled
if 'profiles' not in self.options:
return True
service_profiles = self.options.get('profiles')
for profile in enabled_profiles:
if profile in service_profiles:
return True
return False
def short_id_alias_exists(container, network):
aliases = container.get(
@@ -1787,77 +1769,20 @@ def rewrite_build_path(path):
return path
class _ClientBuilder:
def __init__(self, client):
self.client = client
def build(self, service, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True, output_stream=sys.stdout):
build_output = self.client.build(
path=path,
tag=tag,
nocache=nocache,
rm=rm,
pull=pull,
forcerm=forcerm,
dockerfile=dockerfile,
labels=labels,
cache_from=cache_from,
buildargs=buildargs,
network_mode=network_mode,
target=target,
shmsize=shmsize,
extra_hosts=extra_hosts,
container_limits=container_limits,
gzip=gzip,
isolation=isolation,
platform=platform)
try:
all_events = list(stream_output(build_output, output_stream))
except StreamOutputError as e:
raise BuildError(service, str(e))
# Ensure the HTTP connection is not reused for another
# streaming command, as the Docker daemon can sometimes
# complain about it
self.client.close()
image_id = None
for event in all_events:
if 'stream' in event:
match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
if match:
image_id = match.group(1)
if image_id is None:
raise BuildError(service, event if all_events else 'Unknown')
return image_id
class _CLIBuilder:
def __init__(self, progress):
self._progress = progress
def build(self, service, path, tag=None, quiet=False, fileobj=None,
def build(self, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True, output_stream=sys.stdout):
use_config_proxy=True):
"""
Args:
service (str): Service to be built
path (str): Path to the directory containing the Dockerfile
buildargs (dict): A dictionary of build arguments
cache_from (:py:class:`list`): A list of images used for build
@@ -1906,11 +1831,10 @@ class _CLIBuilder:
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
output_stream (writer): stream to use for build logs
Returns:
A generator for the build output.
"""
if dockerfile and os.path.isdir(path):
if dockerfile:
dockerfile = os.path.join(path, dockerfile)
iidfile = tempfile.mktemp()
@@ -1928,29 +1852,35 @@ class _CLIBuilder:
command_builder.add_arg("--tag", tag)
command_builder.add_arg("--target", target)
command_builder.add_arg("--iidfile", iidfile)
command_builder.add_arg("--platform", platform)
command_builder.add_arg("--isolation", isolation)
if extra_hosts:
if isinstance(extra_hosts, dict):
extra_hosts = ["{}:{}".format(host, ip) for host, ip in extra_hosts.items()]
for host in extra_hosts:
command_builder.add_arg("--add-host", "{}".format(host))
args = command_builder.build([path])
with subprocess.Popen(args, stdout=output_stream, stderr=sys.stderr,
magic_word = "Successfully built "
appear = False
with subprocess.Popen(args, stdout=subprocess.PIPE,
universal_newlines=True) as p:
while True:
line = p.stdout.readline()
if not line:
break
if line.startswith(magic_word):
appear = True
yield json.dumps({"stream": line})
p.communicate()
if p.returncode != 0:
raise BuildError(service, "Build failed")
raise StreamOutputError()
with open(iidfile) as f:
line = f.readline()
image_id = line.split(":")[1].strip()
os.remove(iidfile)
return image_id
# In case of `DOCKER_BUILDKIT=1` there is no success message
# already present in the output. Since that's the way
# `Service::build` gets the `image_id`, it has to be added manually.
if not appear:
yield json.dumps({"stream": "{}{}\n".format(magic_word, image_id)})
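To illustrate: Service.build recovers the image id by scanning the stream for that magic word, so the CLI builder fabricates the line when BuildKit omits it. A self-contained check with a hypothetical id:

    import json
    import re

    magic_word = "Successfully built "
    image_id = "1a2b3c4d5e6f"  # hypothetical value read from the iidfile
    event = json.loads(json.dumps({"stream": magic_word + image_id + "\n"}))
    match = re.search(r'Successfully built ([0-9a-f]+)', event['stream'])
    assert match and match.group(1) == image_id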
class _CommandBuilder:


@@ -174,18 +174,3 @@ def truncate_string(s, max_chars=35):
if len(s) > max_chars:
return s[:max_chars - 2] + '...'
return s
def filter_attached_for_up(items, service_names, attach_dependencies=False,
item_to_service_name=lambda x: x):
"""This function contains the logic of choosing which services to
attach when doing docker-compose up. It may be used with containers,
with services, and with any other entities that map to service names -
this mapping is provided by item_to_service_name."""
if attach_dependencies or not service_names:
return items
return [
item
for item in items if item_to_service_name(item) in service_names
]
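Worked example of the selection rule above, with bare strings standing in for containers (item_to_service_name defaults to identity):

    assert filter_attached_for_up(['web', 'db'], []) == ['web', 'db']         # no names given: attach everything
    assert filter_attached_for_up(['web', 'db'], ['web']) == ['web']          # only the requested service
    assert filter_attached_for_up(['web', 'db'], ['web'],
                                  attach_dependencies=True) == ['web', 'db']  # --attach-dependencies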


@@ -138,7 +138,7 @@ _docker_compose_config() {
;;
esac
COMPREPLY=( $( compgen -W "--hash --help --no-interpolate --profiles --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--hash --help --no-interpolate --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
}
@@ -164,18 +164,10 @@ _docker_compose_docker_compose() {
_filedir "y?(a)ml"
return
;;
--ansi)
COMPREPLY=( $( compgen -W "never always auto" -- "$cur" ) )
return
;;
--log-level)
COMPREPLY=( $( compgen -W "debug info warning error critical" -- "$cur" ) )
return
;;
--profile)
COMPREPLY=( $( compgen -W "$(__docker_compose_q config --profiles)" -- "$cur" ) )
return
;;
--project-directory)
_filedir -d
return
@@ -298,7 +290,7 @@ _docker_compose_logs() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--follow -f --help --no-color --no-log-prefix --tail --timestamps -t" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--follow -f --help --no-color --tail --timestamps -t" -- "$cur" ) )
;;
*)
__docker_compose_complete_services
@@ -553,7 +545,7 @@ _docker_compose_up() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --attach-dependencies --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-log-prefix --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --attach-dependencies --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
;;
*)
__docker_compose_complete_services
@@ -622,11 +614,9 @@ _docker_compose() {
--tlskey
"
# These options require special treatment when searching the command.
# These options are require special treatment when searching the command.
local top_level_options_with_args="
--ansi
--log-level
--profile
"
COMPREPLY=()

View File

@@ -21,7 +21,5 @@ complete -c docker-compose -l tlscert -r -d 'Path to TLS certif
complete -c docker-compose -l tlskey -r -d 'Path to TLS key file'
complete -c docker-compose -l tlsverify -d 'Use TLS and verify the remote'
complete -c docker-compose -l skip-hostname-check -d "Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)"
complete -c docker-compose -l no-ansi -d 'Do not print ANSI control characters'
complete -c docker-compose -l ansi -a 'never always auto' -d 'Control when to print ANSI control characters'
complete -c docker-compose -s h -l help -d 'Print usage'
complete -c docker-compose -s v -l version -d 'Print version and exit'

View File

@@ -342,7 +342,6 @@ _docker-compose() {
'--verbose[Show more output]' \
'--log-level=[Set log level]:level:(DEBUG INFO WARNING ERROR CRITICAL)' \
'--no-ansi[Do not print ANSI control characters]' \
'--ansi=[Control when to print ANSI control characters]:when:(never always auto)' \
'(-H --host)'{-H,--host}'[Daemon socket to connect to]:host:' \
'--tls[Use TLS; implied by --tlsverify]' \
'--tlscacert=[Trust certs signed only by this CA]:ca path:' \

View File

@@ -23,8 +23,8 @@ exe = EXE(pyz,
'DATA'
),
(
'compose/config/compose_spec.json',
'compose/config/compose_spec.json',
'compose/config/config_schema_compose_spec.json',
'compose/config/config_schema_compose_spec.json',
'DATA'
),
(

View File

@@ -32,8 +32,8 @@ coll = COLLECT(exe,
'DATA'
),
(
'compose/config/compose_spec.json',
'compose/config/compose_spec.json',
'compose/config/config_schema_compose_spec.json',
'compose/config/config_schema_compose_spec.json',
'DATA'
),
(

View File

@@ -1,474 +0,0 @@
There are three legacy versions of the Compose file format:
- Version 1. This is specified by omitting a `version` key at the root of the YAML.
- Version 2.x. This is specified with a `version: '2'` or `version: '2.1'`, etc., entry at the root of the YAML.
- Version 3.x, designed to be cross-compatible between Compose and the Docker Engine's
[swarm mode](https://docs.docker.com/engine/swarm/). This is specified with a `version: '3'` or `version: '3.1'`, etc., entry at the root of the YAML.
The latest and recommended version of the Compose file format is defined by the [Compose Specification](https://docs.docker.com/compose/compose-file/). This format merges the 2.x and 3.x versions and is implemented by **Compose 1.27.0+**.
> **Note**
>
> If you're using [multiple Compose files](https://docs.docker.com/compose/multiple-compose-files/) or
> [extending services](https://docs.docker.com/compose/multiple-compose-files/extends/),
> each file must be of the same version - you cannot, for example,
> mix version 1 and 2 in a single project.
Several things differ depending on which version you use:
- The structure and permitted configuration keys
- The minimum Docker Engine version you must be running
- Compose's behaviour with regard to networking
These differences are explained below.
### Version 2
Compose files using the version 2 syntax must indicate the version number at
the root of the document. All [services](compose-file-v2.md#service-configuration-reference)
must be declared under the `services` key.
Version 2 files are supported by **Compose 1.6.0+** and require a Docker Engine
of version **1.10.0+**.
Named [volumes](compose-file-v2.md#volume-configuration-reference) can be declared under the
`volumes` key, and [networks](compose-file-v2.md#network-configuration-reference) can be declared
under the `networks` key.
By default, every container joins an application-wide default network, and is
discoverable at a hostname that's the same as the service name. This means
[links](compose-file-v2.md#links) are largely unnecessary. For more details, see
[Networking in Compose](https://docs.docker.com/compose/networking/).
> **Note**
>
> With Compose version 2, when specifying the Compose file version to use, make sure to
> specify both the _major_ and _minor_ numbers. If no minor version is given,
> `0` is used by default and not the latest minor version. As a result, features added in later versions will not be supported. For example:
>
> ```yaml
> version: "2"
> ```
>
> is equivalent to:
>
> ```yaml
> version: "2.0"
> ```
Simple example:
version: "{{% param "compose_file_v2" %}}"
services:
web:
build: .
ports:
- "8000:5000"
volumes:
- .:/code
redis:
image: redis
A more extended example, defining volumes and networks:
version: "{{% param "compose_file_v2" %}}"
services:
web:
build: .
ports:
- "8000:5000"
volumes:
- .:/code
networks:
- front-tier
- back-tier
redis:
image: redis
volumes:
- redis-data:/var/lib/redis
networks:
- back-tier
volumes:
redis-data:
driver: local
networks:
front-tier:
driver: bridge
back-tier:
driver: bridge
Several other options were added to support networking, such as:
* [`aliases`](compose-file-v2.md#aliases)
* The [`depends_on`](compose-file-v2.md#depends_on) option can be used in place of links to indicate dependencies
between services and startup order.
version: "{{% param "compose_file_v2" %}}"
services:
web:
build: .
depends_on:
- db
- redis
redis:
image: redis
db:
image: postgres
* [`ipv4_address`, `ipv6_address`](compose-file-v2.md#ipv4_address-ipv6_address)
[Variable substitution](compose-file-v2.md#variable-substitution) was also added in Version 2.
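A minimal sketch (assuming a `POSTGRES_VERSION` variable is set in the shell environment):

```yaml
version: "2"
services:
  db:
    image: "postgres:${POSTGRES_VERSION}"
```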
### Version 2.1
An upgrade of [version 2](#version-2) that introduces new parameters only
available with Docker Engine version **1.12.0+**. Version 2.1 files are
supported by **Compose 1.9.0+**.
Introduces the following additional parameters:
- [`link_local_ips`](compose-file-v2.md#link_local_ips)
- [`isolation`](compose-file-v2.md#isolation-1) in build configurations and
service definitions
- `labels` for [volumes](compose-file-v2.md#volume-configuration-reference),
[networks](compose-file-v2.md#network-configuration-reference), and
[build](compose-file-v3.md#build)
- `name` for [volumes](compose-file-v2.md#volume-configuration-reference)
- [`userns_mode`](compose-file-v2.md#userns_mode)
- [`healthcheck`](compose-file-v2.md#healthcheck) (see the sketch after this list)
- [`sysctls`](compose-file-v2.md#sysctls)
- [`pids_limit`](compose-file-v2.md#pids_limit)
- [`oom_kill_disable`](compose-file-v2.md#cpu-and-other-resources)
- [`cpu_period`](compose-file-v2.md#cpu-and-other-resources)
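Of these, `healthcheck` might be declared as follows (a minimal sketch; the probe command and timings are illustrative):

```yaml
version: "2.1"
services:
  web:
    image: nginx
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
```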
### Version 2.2
An upgrade of [version 2.1](#version-21) that introduces new parameters only
available with Docker Engine version **1.13.0+**. Version 2.2 files are
supported by **Compose 1.13.0+**. This version also allows you to specify
default scale numbers inside the service's configuration, as shown in the
sketch after the list below.
Introduces the following additional parameters:
- [`init`](compose-file-v2.md#init)
- [`scale`](compose-file-v2.md#scale)
- [`cpu_rt_runtime` and `cpu_rt_period`](compose-file-v2.md#cpu_rt_runtime-cpu_rt_period)
- [`network`](compose-file-v2.md#network) for [build configurations](compose-file-v2.md#build)
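A minimal sketch of a default scale declared directly on a service:

```yaml
version: "2.2"
services:
  worker:
    image: busybox
    command: top
    scale: 3
```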
### Version 2.3
An upgrade of [version 2.2](#version-22) that introduces new parameters only
available with Docker Engine version **17.06.0+**. Version 2.3 files are
supported by **Compose 1.16.0+**.
Introduces the following additional parameters:
- [`target`](compose-file-v2.md#target), [`extra_hosts`](compose-file-v2.md#extra_hosts-1) and
[`shm_size`](compose-file-v2.md#shm_size) for [build configurations](compose-file-v2.md#build)
- `start_period` for [`healthchecks`](compose-file-v2.md#healthcheck)
- ["Long syntax" for volumes](compose-file-v2.md#long-syntax)
- [`runtime`](compose-file-v2.md#runtime) for service definitions
- [`device_cgroup_rules`](compose-file-v2.md#device_cgroup_rules)
### Version 2.4
An upgrade of [version 2.3](#version-23) that introduces new parameters only
available with Docker Engine version **17.12.0+**. Version 2.4 files are
supported by **Compose 1.21.0+**.
Introduces the following additional parameters:
- [`platform`](compose-file-v2.md#platform) for service definitions
- Support for extension fields at the root of service, network, and volume
definitions
### Version 3
Designed to be cross-compatible between Compose and the Docker Engine's
[swarm mode](/engine/swarm/), version 3 removes several options and adds
several more.
- Removed: `volume_driver`, `volumes_from`, `cpu_shares`, `cpu_quota`,
`cpuset`, `mem_limit`, `memswap_limit`, `extends`, `group_add`. See
the [upgrading](#upgrading) guide for how to migrate away from these.
- Added: [deploy](compose-file-v3.md#deploy)
If only the major version is given (`version: '3'`),
the latest minor version is used by default.
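A minimal sketch of the new `deploy` key (its contents are only honored when deploying with `docker stack deploy`):

```yaml
version: "3"
services:
  web:
    image: nginx
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
```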
### Version 3.1
An upgrade of [version 3](#version-3) that introduces new parameters only
available with Docker Engine version **1.13.1** and higher.
Introduces the following additional parameters:
- [`secrets`](compose-file-v3.md#secrets)
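A minimal sketch, assuming a local `./my_secret.txt` file holding the secret value:

```yaml
version: "3.1"
services:
  web:
    image: nginx
    secrets:
      - my_secret

secrets:
  my_secret:
    file: ./my_secret.txt
```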
### Version 3.2
An upgrade of [version 3](#version-3) that introduces new parameters only
available with Docker Engine version **17.04.0** and higher.
Introduces the following additional parameters:
- [`cache_from`](compose-file-v3.md#cache_from) in [build configurations](compose-file-v3.md#build)
- Long syntax for [ports](compose-file-v3.md#ports) and [volume mounts](compose-file-v3.md#volumes) (see the sketch after this list)
- [`attachable`](compose-file-v3.md#attachable) network driver option
- [deploy `endpoint_mode`](compose-file-v3.md#endpoint_mode)
- [deploy placement `preference`](compose-file-v3.md#placement)
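For example, the long syntax for ports spells each field out (a minimal sketch):

```yaml
version: "3.2"
services:
  web:
    image: nginx
    ports:
      - target: 80
        published: 8080
        protocol: tcp
        mode: host
```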
### Version 3.3
An upgrade of [version 3](#version-3) that introduces new parameters only
available with Docker Engine version **17.06.0** and higher.
Introduces the following additional parameters:
- [build `labels`](compose-file-v3.md#build)
- [`credential_spec`](compose-file-v3.md#credential_spec)
- [`configs`](compose-file-v3.md#configs)
### Version 3.4
An upgrade of [version 3](#version-3) that introduces new parameters. It is
only available with Docker Engine version **17.09.0** and higher.
Introduces the following additional parameters:
- [`target`](compose-file-v3.md#target) and [`network`](compose-file-v3.md#network) in
[build configurations](compose-file-v3.md#build)
- `start_period` for [`healthchecks`](compose-file-v3.md#healthcheck)
- `order` for [update configurations](compose-file-v3.md#update_config)
- `name` for [volumes](compose-file-v3.md#volume-configuration-reference)
### Version 3.5
An upgrade of [version 3](#version-3) that introduces new parameters. It is
only available with Docker Engine version **17.12.0** and higher.
Introduces the following additional parameters:
- [`isolation`](compose-file-v3.md#isolation) in service definitions
- `name` for networks, secrets and configs
- `shm_size` in [build configurations](compose-file-v3.md#build)
### Version 3.6
An upgrade of [version 3](#version-3) that introduces new parameters. It is
only available with Docker Engine version **18.02.0** and higher.
Introduces the following additional parameters:
- [`tmpfs` size](compose-file-v3.md#long-syntax-3) for `tmpfs`-type mounts
### Version 3.7
An upgrade of [version 3](#version-3) that introduces new parameters. It is
only available with Docker Engine version **18.06.0** and higher.
Introduces the following additional parameters:
- [`init`](compose-file-v3.md#init) in service definitions
- [`rollback_config`](compose-file-v3.md#rollback_config) in deploy configurations
- Support for extension fields at the root of service, network, volume, secret
and config definitions
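As a minimal sketch, an extension field can now live inside a service definition itself (the `x-owner` key is purely illustrative):

```yaml
version: "3.7"
services:
  web:
    image: nginx
    x-owner: platform-team
```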
### Version 3.8
An upgrade of [version 3](#version-3) that introduces new parameters. It is
only available with Docker Engine version **19.03.0** and higher.
Introduces the following additional parameters:
- [`max_replicas_per_node`](compose-file-v3.md#max_replicas_per_node) in placement
configurations
- `template_driver` option for [config](compose-file-v3.md#configs-configuration-reference)
and [secret](compose-file-v3.md#secrets-configuration-reference) configurations. This
option is only supported when deploying swarm services using
`docker stack deploy`.
- `driver` and `driver_opts` option for [secret](compose-file-v3.md#secrets-configuration-reference)
configurations. This option is only supported when deploying swarm services
using `docker stack deploy`.
### Version 1 (Deprecated)
Compose files that do not declare a version are considered "version 1". In those
files, all the [services](compose-file-v3.md#service-configuration-reference) are
declared at the root of the document.
Version 1 is supported by Compose up to **1.6.x** and has been deprecated.
Version 1 files cannot declare named
[volumes](compose-file-v3.md#volume-configuration-reference), [networks](compose-file-v3.md#network-configuration-reference) or
[build arguments](compose-file-v3.md#args).
Compose does not take advantage of [networking](https://docs.docker.com/compose/networking/) when you
use version 1: every container is placed on the default `bridge` network and is
reachable from every other container at its IP address. You need to use
`links` to enable discovery between containers.
Example:
```yaml
web:
  build: .
  ports:
    - "8000:5000"
  volumes:
    - .:/code
  links:
    - redis

redis:
  image: redis
```
## Upgrading
### Version 2.x to 3.x
Between versions 2.x and 3.x, the structure of the Compose file is the same, but
several options have been removed:
- `volume_driver`: Instead of setting the volume driver on the service, define
a volume using the
[top-level `volumes` option](compose-file-v3.md#volume-configuration-reference)
and specify the driver there.
version: "3.8"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
driver: mydriver
- `volumes_from`: To share a volume between services, define it using the
[top-level `volumes` option](compose-file-v3.md#volume-configuration-reference)
and reference it from each service that shares it using the
[service-level `volumes` option](compose-file-v3.md#driver).
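A sketch of that replacement pattern, using a hypothetical `shared` volume mounted by two services:

```yaml
version: "3.8"
services:
  web:
    image: nginx
    volumes:
      - shared:/data
  worker:
    image: busybox
    volumes:
      - shared:/data

volumes:
  shared:
```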
- `cpu_shares`, `cpu_quota`, `cpuset`, `mem_limit`, `memswap_limit`: These
have been replaced by the [resources](compose-file-v3.md#resources) key under
`deploy`. `deploy` configuration only takes effect when using
`docker stack deploy`, and is ignored by `docker-compose`.
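For example, a version 2 `mem_limit` roughly corresponds to the following sketch:

```yaml
version: "3.8"
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          memory: 512M
```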
- `extends`: This option has been removed for `version: "3.x"` Compose files.
For more information on `extends`, see
[Extending services](https://docs.docker.com/compose/multiple-compose-files/extends/).
- `group_add`: This option has been removed for `version: "3.x"` Compose files.
- `pids_limit`: This option has not been introduced in `version: "3.x"` Compose files.
- `link_local_ips` in `networks`: This option has not been introduced in
`version: "3.x"` Compose files.
#### Compatibility mode
`docker-compose` 1.20.0 introduces a new `--compatibility` flag designed to
help developers transition to version 3 more easily. When enabled,
`docker-compose` reads the `deploy` section of each service's definition and
attempts to translate it into the equivalent version 2 parameter. Currently,
the following deploy keys are translated:
- [resources](compose-file-v3.md#resources) limits and memory reservations
- [replicas](compose-file-v3.md#replicas)
- [restart_policy](compose-file-v3.md#restart_policy) `condition` and `max_attempts`
All other keys are ignored and produce a warning if present. You can review
the configuration that will be used to deploy by using the `--compatibility`
flag with the `config` command.
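As a rough sketch of the translation, a service carrying the `deploy` block below would be run by `docker-compose --compatibility` approximately as if it had declared the version 2 equivalents `scale: 2` and `mem_limit: 512M`:

```yaml
version: "3.8"
services:
  web:
    image: nginx
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 512M
```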
> **Do not use this in production**
>
> We recommend against using `--compatibility` mode in production. The
> resulting configuration is only an approximation built from non-Swarm mode
> properties, and it may produce unexpected results.
### Version 1 to 2.x
In the majority of cases, moving from version 1 to 2 is a very simple process:
1. Indent the whole file by one level and put a `services:` key at the top.
2. Add a `version: '2'` line at the top of the file.
It's more complicated if you're using particular configuration features:
- `dockerfile`: This now lives under the `build` key:
```yaml
build:
  context: .
  dockerfile: Dockerfile-alternate
```
- `log_driver`, `log_opt`: These now live under the `logging` key:
```yaml
logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"
```
- `links` with environment variables: environment variables created by
  links, such as `CONTAINERNAME_PORT`, have been deprecated for some time. In
  the new Docker network system, they have been removed. You should either
  connect directly to the appropriate hostname or set the relevant environment
  variable yourself, using the link hostname:
```yaml
web:
  links:
    - db
  environment:
    - DB_PORT=tcp://db:5432
```
- `external_links`: Compose uses Docker networks when running version 2
projects, so links behave slightly differently. In particular, two
containers must be connected to at least one network in common in order to
communicate, even if explicitly linked together.
Either connect the external container to your app's
[default network](https://docs.docker.com/compose/networking/), or connect both the external container and
your service's containers to an
[external network](https://docs.docker.com/compose/networking/).
- `net`: This is now replaced by [network_mode](compose-file-v3.md#network_mode):

```
net: host    ->  network_mode: host
net: bridge  ->  network_mode: bridge
net: none    ->  network_mode: none
```

  If you're using `net: "container:[service name]"`, you must now use
  `network_mode: "service:[service name]"` instead.

```
net: "container:web"  ->  network_mode: "service:web"
```

  If you're using `net: "container:[container name/id]"`, the value does not
  need to change.

```
net: "container:cont-name"  ->  network_mode: "container:cont-name"
net: "container:abc12345"   ->  network_mode: "container:abc12345"
```
- `volumes` with named volumes: these must now be explicitly declared in a
top-level `volumes` section of your Compose file. If a service mounts a
named volume called `data`, you must declare a `data` volume in your
top-level `volumes` section. The whole file might look like this:
version: "{{% param "compose_file_v2" %}}"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data: {}
By default, Compose creates a volume whose name is prefixed with your
project name. If you want it to just be called `data`, declare it as
external:
```yaml
volumes:
  data:
    external: true
```

File diff suppressed because it is too large

File diff suppressed because it is too large

docs/README.md Normal file
View File

@@ -0,0 +1,14 @@
# The docs have been moved!
The documentation for Compose has been merged into
[the general documentation repo](https://github.com/docker/docker.github.io).
The docs for Compose are now here:
https://github.com/docker/docker.github.io/tree/master/compose
Please submit pull requests for unreleased features/changes on the `master` branch (https://github.com/docker/docker.github.io/tree/master), and prefix the PR title with `[WIP]` to indicate that it relates to an unreleased change.
If you submit a PR to this codebase that has a docs impact, create a second docs PR on `docker.github.io`. Use the docs PR template provided.
As always, the docs remain open-source and we appreciate your feedback and
pull requests!

logo.svg

File diff suppressed because one or more lines are too long


View File

@@ -1 +1 @@
pyinstaller==4.1
pyinstaller==3.6

View File

@@ -1,9 +1,9 @@
Click==7.1.2
coverage==5.5
ddt==1.4.2
coverage==5.2.1
ddt==1.4.1
flake8==3.8.3
gitpython==3.1.11
gitpython==3.1.7
mock==3.0.5
pytest==6.2.4; python_version >= '3.5'
pytest==6.0.1; python_version >= '3.5'
pytest==4.6.5; python_version < '3.5'
pytest-cov==2.10.1

View File

@@ -1,19 +1,19 @@
altgraph==0.17
appdirs==1.4.4
attrs==20.3.0
bcrypt==3.2.0
cffi==1.14.4
cryptography==3.3.2
attrs==20.1.0
bcrypt==3.1.7
cffi==1.14.1
cryptography==3.0
distlib==0.3.1
entrypoints==0.3
filelock==3.0.12
gitdb2==4.0.2
mccabe==0.6.1
more-itertools==8.6.0; python_version >= '3.5'
more-itertools==8.4.0; python_version >= '3.5'
more-itertools==5.0.0; python_version < '3.5'
packaging==20.9
packaging==20.4
pluggy==0.13.1
py==1.10.0
py==1.9.0
pycodestyle==2.6.0
pycparser==2.20
pyflakes==2.2.0
@@ -23,6 +23,6 @@ pyrsistent==0.16.0
smmap==3.0.4
smmap2==3.0.1
toml==0.10.1
tox==3.21.2
virtualenv==20.4.0
tox==3.19.0
virtualenv==20.0.30
wcwidth==0.2.5

View File

@@ -1,22 +1,23 @@
backports.shutil_get_terminal_size==1.0.0
cached-property==1.5.1; python_version < '3.8'
certifi==2021.5.30
cached-property==1.5.1
certifi==2020.6.20
chardet==3.0.4
colorama==0.4.4; sys_platform == 'win32'
colorama==0.4.3; sys_platform == 'win32'
distro==1.5.0
docker==5.0.0
docker==4.3.1
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
idna==2.10
ipaddress==1.0.23
jsonschema==3.2.0
paramiko==2.7.2
paramiko==2.7.1
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.7.1
python-dotenv==0.17.0
pywin32==301; sys_platform == 'win32'
PyYAML==5.4.1
requests==2.25.1
texttable==1.6.3
urllib3==1.26.5; python_version == '3.3'
websocket-client==1.1.0
python-dotenv==0.14.0
PyYAML==5.3.1
requests==2.24.0
texttable==1.6.2
urllib3==1.25.10; python_version == '3.3'
websocket-client==0.57.0

View File

@@ -5,12 +5,14 @@ set -ex
./script/clean
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
TAG="docker/compose:tmp-glibc-linux-binary-${DOCKER_COMPOSE_GITSHA}"
docker build . \
--target bin \
--build-arg DISTRO=debian \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}" \
--output dist/
docker build -t "${TAG}" . \
--build-arg BUILD_PLATFORM=debian \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
TMP_CONTAINER=$(docker create "${TAG}")
mkdir -p dist
ARCH=$(uname -m)
# Ensure that we output the binary with the same name as we did before
mv dist/docker-compose-linux-amd64 "dist/docker-compose-Linux-${ARCH}"
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
docker container rm -f "${TMP_CONTAINER}"
docker image rm -f "${TAG}"

View File

@@ -24,7 +24,7 @@ if [ ! -z "${BUILD_BOOTLOADER}" ]; then
git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
cd /tmp/pyinstaller/bootloader
# Checkout commit corresponding to version in requirements-build
git checkout v4.1
git checkout v3.6
"${VENV}"/bin/python3 ./waf configure --no-lsb all
"${VENV}"/bin/pip3 install ..
cd "${CODE_PATH}"

View File

@@ -13,6 +13,6 @@ IMAGE="docker/compose-tests"
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
docker build -t "${IMAGE}:${TAG}" . \
--target build \
--build-arg DISTRO="debian" \
--build-arg BUILD_PLATFORM="debian" \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
docker tag "${IMAGE}":"${TAG}" "${IMAGE}":latest

View File

@@ -6,17 +6,17 @@
#
# http://git-scm.com/download/win
#
# 2. Install Python 3.9.x:
# 2. Install Python 3.7.x:
#
# https://www.python.org/downloads/
#
# 3. Append ";C:\Python39;C:\Python39\Scripts" to the "Path" environment variable:
# 3. Append ";C:\Python37;C:\Python37\Scripts" to the "Path" environment variable:
#
# https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
#
# 4. In Powershell, run the following commands:
#
# $ pip install 'virtualenv==20.2.2'
# $ pip install 'virtualenv==20.0.30'
# $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
#
# 5. Clone the repository:
@@ -39,7 +39,7 @@ if (Test-Path venv) {
Get-ChildItem -Recurse -Include *.pyc | foreach ($_) { Remove-Item $_.FullName }
# Create virtualenv
virtualenv -p C:\Python39\python.exe .\venv
virtualenv -p C:\Python37\python.exe .\venv
# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"

View File

@@ -20,3 +20,4 @@ This should trigger a new CI build on the new tag. When the CI finishes with the
4. In case of a GA version, please update `docker-compose`'s release notes and version on the [github documentation repository](https://github.com/docker/docker.github.io):
- [Release Notes](https://github.com/docker/docker.github.io/blob/master/compose/release-notes.md)
- [Config version](https://github.com/docker/docker.github.io/blob/master/_config.yml)
- [Config authoring version](https://github.com/docker/docker.github.io/blob/master/_config_authoring.yml)

script/release/release.py Normal file → Executable file
View File

View File

@@ -15,16 +15,16 @@
set -e
VERSION="1.26.1"
VERSION="1.27.1"
IMAGE="docker/compose:$VERSION"
# Setup options for connecting to docker host
if [ -z "$DOCKER_HOST" ]; then
DOCKER_HOST='unix:///var/run/docker.sock'
DOCKER_HOST="/var/run/docker.sock"
fi
if [ -S "${DOCKER_HOST#unix://}" ]; then
DOCKER_ADDR="-v ${DOCKER_HOST#unix://}:${DOCKER_HOST#unix://} -e DOCKER_HOST"
if [ -S "$DOCKER_HOST" ]; then
DOCKER_ADDR="-v $DOCKER_HOST:$DOCKER_HOST -e DOCKER_HOST"
else
DOCKER_ADDR="-e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH"
fi
@@ -44,34 +44,13 @@ fi
if [ -n "$COMPOSE_PROJECT_NAME" ]; then
COMPOSE_OPTIONS="-e COMPOSE_PROJECT_NAME $COMPOSE_OPTIONS"
fi
# TODO: also check --file argument
if [ -n "$compose_dir" ]; then
VOLUMES="$VOLUMES -v $compose_dir:$compose_dir"
fi
if [ -n "$HOME" ]; then
VOLUMES="$VOLUMES -v $HOME:$HOME -e HOME" # Pass in HOME to share docker.config and allow ~/-relative paths to work.
fi
i=$#
while [ $i -gt 0 ]; do
arg=$1
i=$((i - 1))
shift
case "$arg" in
-f|--file)
value=$1
i=$((i - 1))
shift
set -- "$@" "$arg" "$value"
file_dir=$(realpath "$(dirname "$value")")
VOLUMES="$VOLUMES -v $file_dir:$file_dir"
;;
*) set -- "$@" "$arg" ;;
esac
done
# Setup environment variables for compose config and context
ENV_OPTIONS=$(printenv | sed -E "/^PATH=.*/d; s/^/-e /g; s/=.*//g; s/\n/ /g")
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
@@ -88,4 +67,4 @@ if docker info --format '{{json .SecurityOptions}}' 2>/dev/null | grep -q 'name=
fi
# shellcheck disable=SC2086
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $ENV_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"

View File

@@ -13,13 +13,13 @@ if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
fi
OPENSSL_VERSION=1.1.1h
OPENSSL_VERSION=1.1.1g
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
OPENSSL_SHA1=8d0d099e8973ec851368c8c775e05e1eadca1794
OPENSSL_SHA1=b213a293f2127ec3e323fb3cfc0c9807664fd997
PYTHON_VERSION=3.9.0
PYTHON_VERSION=3.7.7
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
PYTHON_SHA1=5744a10ba989d2badacbab3c00cdcb83c83106c7
PYTHON_SHA1=8e9968663a214aea29659ba9dfa959e8a7d82b39
#
# Install prerequisites.
@@ -36,7 +36,7 @@ if ! [ -x "$(command -v python3)" ]; then
brew install python3
fi
if ! [ -x "$(command -v virtualenv)" ]; then
pip3 install virtualenv==20.2.2
pip3 install virtualenv==20.0.30
fi
#

View File

@@ -21,6 +21,7 @@ elif [ "$DOCKER_VERSIONS" == "all" ]; then
DOCKER_VERSIONS=$($get_versions -n 2 recent)
fi
BUILD_NUMBER=${BUILD_NUMBER-$USER}
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py37}
@@ -38,23 +39,17 @@ for version in $DOCKER_VERSIONS; do
trap "on_exit" EXIT
repo="dockerswarm/dind"
docker run \
-d \
--name "$daemon_container" \
--privileged \
--volume="/var/lib/docker" \
-e "DOCKER_TLS_CERTDIR=" \
"docker:$version-dind" \
"$repo:$version" \
dockerd -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
2>&1 | tail -n 10
docker exec "$daemon_container" sh -c "apk add --no-cache git"
# copy docker config from host for authentication with Docker Hub
docker exec "$daemon_container" sh -c "mkdir /root/.docker"
docker cp /root/.docker/config.json $daemon_container:/root/.docker/config.json
docker exec "$daemon_container" sh -c "chmod 644 /root/.docker/config.json"
docker run \
--rm \
--tty \

View File

@@ -25,13 +25,14 @@ def find_version(*file_paths):
install_requires = [
'cached-property >= 1.2.0, < 2',
'docopt >= 0.6.1, < 1',
'PyYAML >= 3.10, < 6',
'requests >= 2.20.0, < 3',
'texttable >= 0.9.0, < 2',
'websocket-client >= 0.32.0, < 1',
'distro >= 1.5.0, < 2',
'docker[ssh] >= 5',
'docker[ssh] >= 4.3.1, < 5',
'dockerpty >= 0.4.1, < 1',
'jsonschema >= 2.5.1, < 4',
'python-dotenv >= 0.13.0, < 1',
@@ -49,7 +50,6 @@ if sys.version_info[:2] < (3, 4):
extras_require = {
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5, < 4'],
':python_version < "3.8"': ['cached-property >= 1.2.0, < 2'],
':sys_platform == "win32"': ['colorama >= 0.4, < 1'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
'tests': tests_require,
@@ -102,7 +102,5 @@ setup(
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
)

View File

@@ -58,16 +58,13 @@ COMPOSE_COMPATIBILITY_DICT = {
}
def start_process(base_dir, options, executable=None, env=None):
executable = executable or DOCKER_COMPOSE_EXECUTABLE
def start_process(base_dir, options):
proc = subprocess.Popen(
[executable] + options,
[DOCKER_COMPOSE_EXECUTABLE] + options,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=base_dir,
env=env,
)
cwd=base_dir)
print("Running process: %s" % proc.pid)
return proc
@@ -81,10 +78,9 @@ def wait_on_process(proc, returncode=0, stdin=None):
return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))
def dispatch(base_dir, options,
project_options=None, returncode=0, stdin=None, executable=None, env=None):
def dispatch(base_dir, options, project_options=None, returncode=0, stdin=None):
project_options = project_options or []
proc = start_process(base_dir, project_options + options, executable=executable, env=env)
proc = start_process(base_dir, project_options + options)
return wait_on_process(proc, returncode=returncode, stdin=stdin)
@@ -237,11 +233,6 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['-H=tcp://doesnotexist:8000', 'ps'], returncode=1)
assert "Couldn't connect to Docker daemon" in result.stderr
def test_config_list_profiles(self):
self.base_dir = 'tests/fixtures/config-profiles'
result = self.dispatch(['config', '--profiles'])
assert set(result.stdout.rstrip().split('\n')) == {'debug', 'frontend', 'gui'}
def test_config_list_services(self):
self.base_dir = 'tests/fixtures/v2-full'
result = self.dispatch(['config', '--services'])
@@ -368,7 +359,7 @@ services:
'web': {
'command': 'true',
'image': 'alpine:latest',
'ports': [{'target': 5643}, {'target': 9999}]
'ports': ['5643/tcp', '9999/tcp']
}
}
}
@@ -383,7 +374,7 @@ services:
'web': {
'command': 'false',
'image': 'alpine:latest',
'ports': [{'target': 5644}, {'target': 9998}]
'ports': ['5644/tcp', '9998/tcp']
}
}
}
@@ -398,7 +389,7 @@ services:
'web': {
'command': 'echo uwu',
'image': 'alpine:3.10.1',
'ports': [{'target': 3341}, {'target': 4449}]
'ports': ['3341/tcp', '4449/tcp']
}
}
}
@@ -792,11 +783,7 @@ services:
assert BUILD_CACHE_TEXT not in result.stdout
assert BUILD_PULL_TEXT in result.stdout
@mock.patch.dict(os.environ)
def test_build_log_level(self):
os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
os.environ['DOCKER_BUILDKIT'] = '0'
self.test_env_file_relative_to_compose_file()
self.base_dir = 'tests/fixtures/simple-dockerfile'
result = self.dispatch(['--log-level', 'warning', 'build', 'simple'])
assert result.stderr == ''
@@ -858,17 +845,13 @@ services:
for c in self.project.client.containers(all=True):
self.addCleanup(self.project.client.remove_container, c, force=True)
@mock.patch.dict(os.environ)
def test_build_shm_size_build_option(self):
os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-shm-size'
result = self.dispatch(['build', '--no-cache'], None)
assert 'shm_size: 96' in result.stdout
@mock.patch.dict(os.environ)
def test_build_memory_build_option(self):
os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-memory'
result = self.dispatch(['build', '--no-cache', '--memory', '96m', 'service'], None)
@@ -1736,98 +1719,6 @@ services:
shareable_mode_container = self.project.get_service('shareable').containers()[0]
assert shareable_mode_container.get('HostConfig.IpcMode') == 'shareable'
def test_profiles_up_with_no_profile(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['up'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'foo' in service_names
assert len(containers) == 1
def test_profiles_up_with_profile(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['--profile', 'test', 'up'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'foo' in service_names
assert 'bar' in service_names
assert 'baz' in service_names
assert len(containers) == 3
def test_profiles_up_invalid_dependency(self):
self.base_dir = 'tests/fixtures/profiles'
result = self.dispatch(['--profile', 'debug', 'up'], returncode=1)
assert ('Service "bar" was pulled in as a dependency of service "zot" '
'but is not enabled by the active profiles.') in result.stderr
def test_profiles_up_with_multiple_profiles(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['--profile', 'debug', '--profile', 'test', 'up'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'foo' in service_names
assert 'bar' in service_names
assert 'baz' in service_names
assert 'zot' in service_names
assert len(containers) == 4
def test_profiles_up_with_profile_enabled_by_service(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['up', 'bar'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert len(containers) == 1
def test_profiles_up_with_dependency_and_profile_enabled_by_service(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['up', 'baz'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert 'baz' in service_names
assert len(containers) == 2
def test_profiles_up_with_invalid_dependency_for_target_service(self):
self.base_dir = 'tests/fixtures/profiles'
result = self.dispatch(['up', 'zot'], returncode=1)
assert ('Service "bar" was pulled in as a dependency of service "zot" '
'but is not enabled by the active profiles.') in result.stderr
def test_profiles_up_with_profile_for_dependency(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['--profile', 'test', 'up', 'zot'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert 'zot' in service_names
assert len(containers) == 2
def test_profiles_up_with_merged_profiles(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['-f', 'docker-compose.yml', '-f', 'merge-profiles.yml', 'up', 'zot'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert 'zot' in service_names
assert len(containers) == 2
def test_exec_without_tty(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', 'console'])
@@ -3143,12 +3034,3 @@ services:
another = self.project.get_service('--log-service')
assert len(service.containers()) == 1
assert len(another.containers()) == 1
def test_up_no_log_prefix(self):
self.base_dir = 'tests/fixtures/echo-services'
result = self.dispatch(['up', '--no-log-prefix'])
assert 'simple' in result.stdout
assert 'another' in result.stdout
assert 'exited with code 0' in result.stdout
assert 'exited with code 0' in result.stdout

View File

@@ -1,15 +0,0 @@
version: '3.8'
services:
  frontend:
    image: frontend
    profiles: ["frontend", "gui"]
  phpmyadmin:
    image: phpmyadmin
    depends_on:
      - db
    profiles:
      - debug
  backend:
    image: backend
  db:
    image: mysql

View File

@@ -1 +0,0 @@
WHEREAMI=default

View File

@@ -1,20 +0,0 @@
version: "3"
services:
foo:
image: busybox:1.31.0-uclibc
bar:
image: busybox:1.31.0-uclibc
profiles:
- test
baz:
image: busybox:1.31.0-uclibc
depends_on:
- bar
profiles:
- test
zot:
image: busybox:1.31.0-uclibc
depends_on:
- bar
profiles:
- debug

View File

@@ -1,5 +0,0 @@
version: "3"
services:
bar:
profiles:
- debug

View File

@@ -1,6 +1,5 @@
import tempfile
import pytest
from ddt import data
from ddt import ddt
@@ -9,7 +8,6 @@ from ..acceptance.cli_test import dispatch
from compose.cli.command import get_project
from compose.cli.command import project_from_options
from compose.config.environment import Environment
from compose.config.errors import EnvFileNotFound
from tests.integration.testcases import DockerClientTestCase
@@ -57,36 +55,13 @@ services:
class EnvironmentOverrideFileTest(DockerClientTestCase):
def test_env_file_override(self):
base_dir = 'tests/fixtures/env-file-override'
# '--env-file' are relative to the current working dir
env = Environment.from_env_file(base_dir, base_dir+'/.env.override')
dispatch(base_dir, ['--env-file', '.env.override', 'up'])
project = get_project(project_dir=base_dir,
config_path=['docker-compose.yml'],
environment=env,
environment=Environment.from_env_file(base_dir, '.env.override'),
override_dir=base_dir)
containers = project.containers(stopped=True)
assert len(containers) == 1
assert "WHEREAMI=override" in containers[0].get('Config.Env')
assert "DEFAULT_CONF_LOADED=true" in containers[0].get('Config.Env')
dispatch(base_dir, ['--env-file', '.env.override', 'down'], None)
def test_env_file_not_found_error(self):
base_dir = 'tests/fixtures/env-file-override'
with pytest.raises(EnvFileNotFound) as excinfo:
Environment.from_env_file(base_dir, '.env.override')
assert "Couldn't find env file" in excinfo.exconly()
def test_dot_env_file(self):
base_dir = 'tests/fixtures/env-file-override'
# '.env' is relative to the project_dir (base_dir)
env = Environment.from_env_file(base_dir, None)
dispatch(base_dir, ['up'])
project = get_project(project_dir=base_dir,
config_path=['docker-compose.yml'],
environment=env,
override_dir=base_dir)
containers = project.containers(stopped=True)
assert len(containers) == 1
assert "WHEREAMI=default" in containers[0].get('Config.Env')
dispatch(base_dir, ['down'], None)

View File

@@ -1,125 +0,0 @@
import logging
import os
import socket
from http.server import BaseHTTPRequestHandler
from http.server import HTTPServer
from threading import Thread

import requests
from docker.transport import UnixHTTPAdapter

from tests.acceptance.cli_test import dispatch
from tests.integration.testcases import DockerClientTestCase

TEST_SOCKET_FILE = '/tmp/test-metrics-docker-cli.sock'


class MetricsTest(DockerClientTestCase):
    test_session = requests.sessions.Session()
    test_env = None
    base_dir = 'tests/fixtures/v3-full'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        MetricsTest.test_session.mount("http+unix://", UnixHTTPAdapter(TEST_SOCKET_FILE))
        MetricsTest.test_env = os.environ.copy()
        MetricsTest.test_env['METRICS_SOCKET_FILE'] = TEST_SOCKET_FILE
        MetricsServer().start()

    @classmethod
    def test_metrics_help(cls):
        # root `docker-compose` command is considered as a `--help`
        dispatch(cls.base_dir, [], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['help', 'run'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose help", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['--help'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['run', '--help'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help run", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['up', '--help', 'extra_args'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help up", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'

    @classmethod
    def test_metrics_simple_commands(cls):
        dispatch(cls.base_dir, ['ps'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose ps", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['version'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose version", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['version', '--yyy'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose version", "context": "moby", ' \
            b'"source": "docker-compose", "status": "failure"}'

    @staticmethod
    def get_content():
        resp = MetricsTest.test_session.get("http+unix://localhost")
        print(resp.content)
        return resp.content


def start_server(uri=TEST_SOCKET_FILE):
    try:
        os.remove(uri)
    except OSError:
        pass
    httpd = HTTPServer(uri, MetricsHTTPRequestHandler, False)
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(TEST_SOCKET_FILE)
    sock.listen(0)
    httpd.socket = sock
    print('Serving on ', uri)
    httpd.serve_forever()
    sock.shutdown(socket.SHUT_RDWR)
    sock.close()
    os.remove(uri)


class MetricsServer:
    @classmethod
    def start(cls):
        t = Thread(target=start_server, daemon=True)
        t.start()


class MetricsHTTPRequestHandler(BaseHTTPRequestHandler):
    usages = []

    def do_GET(self):
        self.client_address = ('',)  # avoid exception in BaseHTTPServer.py log_message()
        self.send_response(200)
        self.end_headers()
        for u in MetricsHTTPRequestHandler.usages:
            self.wfile.write(u)
        MetricsHTTPRequestHandler.usages = []

    def do_POST(self):
        self.client_address = ('',)  # avoid exception in BaseHTTPServer.py log_message()
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)
        print(body)
        MetricsHTTPRequestHandler.usages.append(body)
        self.send_response(200)
        self.end_headers()


if __name__ == '__main__':
    logging.getLogger("urllib3").propagate = False
    logging.getLogger("requests").propagate = False
    start_server()

View File

@@ -25,7 +25,6 @@ from compose.const import COMPOSE_SPEC as VERSION
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.errors import CompletedUnsuccessfully
from compose.errors import HealthCheckFailed
from compose.errors import NoHealthCheckConfigured
from compose.project import Project
@@ -38,7 +37,6 @@ from tests.integration.testcases import no_cluster
def build_config(**kwargs):
return config.Config(
config_version=kwargs.get('version', VERSION),
version=kwargs.get('version', VERSION),
services=kwargs.get('services'),
volumes=kwargs.get('volumes'),
@@ -1349,36 +1347,6 @@ class ProjectTest(DockerClientTestCase):
project.up()
assert len(project.containers()) == 3
def test_project_up_scale_with_stopped_containers(self):
config_data = build_config(
services=[{
'name': 'web',
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'scale': 2
}]
)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
project.up()
containers = project.containers()
assert len(containers) == 2
self.client.stop(containers[0].id)
project.up(scale_override={'web': 2})
containers = project.containers()
assert len(containers) == 2
self.client.stop(containers[0].id)
project.up(scale_override={'web': 3})
assert len(project.containers()) == 3
self.client.stop(containers[0].id)
project.up(scale_override={'web': 1})
assert len(project.containers()) == 1
def test_initialize_volumes(self):
vol_name = '{:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{}'.format(vol_name)
@@ -1900,106 +1868,6 @@ class ProjectTest(DockerClientTestCase):
with pytest.raises(NoHealthCheckConfigured):
svc1.is_healthy()
def test_project_up_completed_successfully_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'true'
},
'svc2': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_completed_successfully'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
project.up()
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
assert svc2.containers()[0].is_running
assert len(svc1.containers()) == 0
assert svc1.is_completed_successfully()
def test_project_up_completed_unsuccessfully_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'false'
},
'svc2': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_completed_successfully'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
with pytest.raises(ProjectError):
project.up()
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
assert len(svc2.containers()) == 0
with pytest.raises(CompletedUnsuccessfully):
svc1.is_completed_successfully()
def test_project_up_completed_differently_dependencies(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'true'
},
'svc2': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'false'
},
'svc3': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_completed_successfully'},
'svc2': {'condition': 'service_completed_successfully'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
with pytest.raises(ProjectError):
project.up()
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
svc3 = project.get_service('svc3')
assert ['svc1', 'svc2'] == svc3.get_dependency_names()
assert svc1.is_completed_successfully()
assert len(svc3.containers()) == 0
with pytest.raises(CompletedUnsuccessfully):
svc2.is_completed_successfully()
def test_project_up_seccomp_profile(self):
seccomp_data = {
'defaultAction': 'SCMP_ACT_ALLOW',

View File

@@ -948,12 +948,7 @@ class ServiceTest(DockerClientTestCase):
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
service = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '0',
'DOCKER_BUILDKIT': '0',
})
service = self.create_service('web', build={'context': base_dir})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
@@ -969,6 +964,7 @@ class ServiceTest(DockerClientTestCase):
service = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '1',
'DOCKER_BUILDKIT': '1',
})
service.build(cli=True)
@@ -1019,6 +1015,7 @@ class ServiceTest(DockerClientTestCase):
web = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '1',
'DOCKER_BUILDKIT': '1',
})
project = Project('composetest', [web], self.client)

View File

@@ -375,7 +375,7 @@ class ServiceStateTest(DockerClientTestCase):
assert [c.is_running for c in containers] == [False, True]
assert ('start', containers) == web.convergence_plan()
assert ('start', containers[0:1]) == web.convergence_plan()
def test_trigger_recreate_with_config_change(self):
web = self.create_service('web', command=["top"])

View File

@@ -61,7 +61,6 @@ class DockerClientTestCase(unittest.TestCase):
@classmethod
def tearDownClass(cls):
cls.client.close()
del cls.client
def tearDown(self):

View File

@@ -1,56 +0,0 @@
import os

import pytest

from compose.cli.colors import AnsiMode
from tests import mock


@pytest.fixture
def tty_stream():
    stream = mock.Mock()
    stream.isatty.return_value = True
    return stream


@pytest.fixture
def non_tty_stream():
    stream = mock.Mock()
    stream.isatty.return_value = False
    return stream


class TestAnsiModeTestCase:
    @mock.patch.dict(os.environ)
    def test_ansi_mode_never(self, tty_stream, non_tty_stream):
        if "CLICOLOR" in os.environ:
            del os.environ["CLICOLOR"]
        assert not AnsiMode.NEVER.use_ansi_codes(tty_stream)
        assert not AnsiMode.NEVER.use_ansi_codes(non_tty_stream)

        os.environ["CLICOLOR"] = "0"
        assert not AnsiMode.NEVER.use_ansi_codes(tty_stream)
        assert not AnsiMode.NEVER.use_ansi_codes(non_tty_stream)

    @mock.patch.dict(os.environ)
    def test_ansi_mode_always(self, tty_stream, non_tty_stream):
        if "CLICOLOR" in os.environ:
            del os.environ["CLICOLOR"]
        assert AnsiMode.ALWAYS.use_ansi_codes(tty_stream)
        assert AnsiMode.ALWAYS.use_ansi_codes(non_tty_stream)

        os.environ["CLICOLOR"] = "0"
        assert AnsiMode.ALWAYS.use_ansi_codes(tty_stream)
        assert AnsiMode.ALWAYS.use_ansi_codes(non_tty_stream)

    @mock.patch.dict(os.environ)
    def test_ansi_mode_auto(self, tty_stream, non_tty_stream):
        if "CLICOLOR" in os.environ:
            del os.environ["CLICOLOR"]
        assert AnsiMode.AUTO.use_ansi_codes(tty_stream)
        assert not AnsiMode.AUTO.use_ansi_codes(non_tty_stream)

        os.environ["CLICOLOR"] = "0"
        assert not AnsiMode.AUTO.use_ansi_codes(tty_stream)
        assert not AnsiMode.AUTO.use_ansi_codes(non_tty_stream)

View File

@@ -14,41 +14,49 @@ class TestGetConfigPathFromOptions:
paths = ['one.yml', 'two.yml']
opts = {'--file': paths}
environment = Environment.from_env_file('.')
assert get_config_path_from_options(opts, environment) == paths
assert get_config_path_from_options('.', opts, environment) == paths
def test_single_path_from_env(self):
with mock.patch.dict(os.environ):
os.environ['COMPOSE_FILE'] = 'one.yml'
environment = Environment.from_env_file('.')
assert get_config_path_from_options({}, environment) == ['one.yml']
assert get_config_path_from_options('.', {}, environment) == ['one.yml']
@pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix separator')
def test_multiple_path_from_env(self):
with mock.patch.dict(os.environ):
os.environ['COMPOSE_FILE'] = 'one.yml:two.yml'
environment = Environment.from_env_file('.')
assert get_config_path_from_options({}, environment) == ['one.yml', 'two.yml']
assert get_config_path_from_options(
'.', {}, environment
) == ['one.yml', 'two.yml']
@pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='windows separator')
def test_multiple_path_from_env_windows(self):
with mock.patch.dict(os.environ):
os.environ['COMPOSE_FILE'] = 'one.yml;two.yml'
environment = Environment.from_env_file('.')
assert get_config_path_from_options({}, environment) == ['one.yml', 'two.yml']
assert get_config_path_from_options(
'.', {}, environment
) == ['one.yml', 'two.yml']
def test_multiple_path_from_env_custom_separator(self):
with mock.patch.dict(os.environ):
os.environ['COMPOSE_PATH_SEPARATOR'] = '^'
os.environ['COMPOSE_FILE'] = 'c:\\one.yml^.\\semi;colon.yml'
environment = Environment.from_env_file('.')
assert get_config_path_from_options({}, environment) == ['c:\\one.yml', '.\\semi;colon.yml']
assert get_config_path_from_options(
'.', {}, environment
) == ['c:\\one.yml', '.\\semi;colon.yml']
def test_no_path(self):
environment = Environment.from_env_file('.')
assert not get_config_path_from_options({}, environment)
assert not get_config_path_from_options('.', {}, environment)
def test_unicode_path_from_options(self):
paths = [b'\xe5\xb0\xb1\xe5\x90\x83\xe9\xa5\xad/docker-compose.yml']
opts = {'--file': paths}
environment = Environment.from_env_file('.')
assert get_config_path_from_options(opts, environment) == ['就吃饭/docker-compose.yml']
assert get_config_path_from_options(
'.', opts, environment
) == ['就吃饭/docker-compose.yml']

View File

@@ -8,6 +8,7 @@ from docker.errors import APIError
from compose.cli.log_printer import build_log_generator
from compose.cli.log_printer import build_log_presenters
from compose.cli.log_printer import build_no_log_generator
from compose.cli.log_printer import consume_queue
from compose.cli.log_printer import QueueItem
from compose.cli.log_printer import wait_on_exit
@@ -74,6 +75,14 @@ def test_wait_on_exit_raises():
assert expected in wait_on_exit(mock_container)
def test_build_no_log_generator(mock_container):
mock_container.has_api_logs = False
mock_container.log_driver = 'none'
output, = build_no_log_generator(mock_container, None)
assert "WARNING: no logs are available with the 'none' log driver\n" in output
assert "exited with code" not in output
class TestBuildLogGenerator:
def test_no_log_stream(self, mock_container):

View File

@@ -137,20 +137,21 @@ class TestCLIMainTestCase:
class TestSetupConsoleHandlerTestCase:
def test_with_console_formatter_verbose(self, logging_handler):
def test_with_tty_verbose(self, logging_handler):
setup_console_handler(logging_handler, True)
assert type(logging_handler.formatter) == ConsoleWarningFormatter
assert '%(name)s' in logging_handler.formatter._fmt
assert '%(funcName)s' in logging_handler.formatter._fmt
def test_with_console_formatter_not_verbose(self, logging_handler):
def test_with_tty_not_verbose(self, logging_handler):
setup_console_handler(logging_handler, False)
assert type(logging_handler.formatter) == ConsoleWarningFormatter
assert '%(name)s' not in logging_handler.formatter._fmt
assert '%(funcName)s' not in logging_handler.formatter._fmt
def test_without_console_formatter(self, logging_handler):
setup_console_handler(logging_handler, False, use_console_formatter=False)
def test_with_not_a_tty(self, logging_handler):
logging_handler.stream.isatty.return_value = False
setup_console_handler(logging_handler, False)
assert type(logging_handler.formatter) == logging.Formatter

View File

@@ -168,14 +168,12 @@ class ConfigTest(unittest.TestCase):
}
})
)
assert cfg.config_version == VERSION
assert cfg.version == VERSION
for version in ['2', '2.0', '2.1', '2.2', '2.3',
'3', '3.0', '3.1', '3.2', '3.3', '3.4', '3.5', '3.6', '3.7', '3.8']:
cfg = config.load(build_config_details({'version': version}))
assert cfg.config_version == version
assert cfg.version == VERSION
assert cfg.version == version
def test_v1_file_version(self):
cfg = config.load(build_config_details({'web': {'image': 'busybox'}}))
@@ -238,9 +236,7 @@ class ConfigTest(unittest.TestCase):
)
)
assert "compose.config.errors.ConfigurationError: " \
"The Compose file 'filename.yml' is invalid because:\n" \
"'web' does not match any of the regexes: '^x-'" in excinfo.exconly()
assert 'Invalid top-level property "web"' in excinfo.exconly()
assert VERSION_EXPLANATION in excinfo.exconly()
def test_named_volume_config_empty(self):
@@ -669,7 +665,7 @@ class ConfigTest(unittest.TestCase):
assert 'Invalid service name \'mong\\o\'' in excinfo.exconly()
def test_config_duplicate_cache_from_values_no_validation_error(self):
def test_config_duplicate_cache_from_values_validation_error(self):
with pytest.raises(ConfigurationError) as exc:
config.load(
build_config_details({
@@ -681,7 +677,7 @@ class ConfigTest(unittest.TestCase):
})
)
assert 'build.cache_from contains non-unique items' not in exc.exconly()
assert 'build.cache_from contains non-unique items' in exc.exconly()
def test_load_with_multiple_files_v1(self):
base_file = config.ConfigFile(
@@ -2397,8 +2393,7 @@ web:
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_completed_successfully'}
'app2': {'condition': 'service_healthy'}
}
}
override = {}
@@ -2410,12 +2405,11 @@ web:
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_completed_successfully'}
'app2': {'condition': 'service_healthy'}
}
}
override = {
'depends_on': ['app4']
'depends_on': ['app3']
}
actual = config.merge_service_dicts(base, override, VERSION)
@@ -2424,8 +2418,7 @@ web:
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_completed_successfully'},
'app4': {'condition': 'service_started'},
'app3': {'condition': 'service_started'}
}
}
@@ -2550,7 +2543,6 @@ web:
'labels': ['com.docker.compose.a=1', 'com.docker.compose.b=2'],
'mode': 'replicated',
'placement': {
'max_replicas_per_node': 1,
'constraints': [
'node.role == manager', 'engine.labels.aws == true'
],
@@ -2607,7 +2599,6 @@ web:
                     'com.docker.compose.c': '3'
                 },
                 'placement': {
-                    'max_replicas_per_node': 1,
                     'constraints': [
                         'engine.labels.aws == true', 'engine.labels.dev == true',
                         'node.role == manager', 'node.role == worker'
@@ -3570,11 +3561,9 @@ class InterpolationTest(unittest.TestCase):
     @mock.patch.dict(os.environ)
     def test_config_file_with_options_environment_file(self):
         project_dir = 'tests/fixtures/default-env-file'
-        # env-file is relative to current working dir
-        env = Environment.from_env_file(project_dir, project_dir + '/.env2')
         service_dicts = config.load(
             config.find(
-                project_dir, None, env
+                project_dir, None, Environment.from_env_file(project_dir, '.env2')
             )
         ).services
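Note: the hunk above changes how the test builds its Environment. The removed comment spells out the base-side behaviour (the env file is resolved relative to the current working directory, hence the explicit project_dir prefix), while the 1.27.1 side passes a project-relative path straight to Environment.from_env_file. A sketch of the 1.27.1-style call, using only names that appear in the hunk:

    from compose.config import config
    from compose.config.environment import Environment

    project_dir = 'tests/fixtures/default-env-file'
    # '.env2' is resolved against project_dir by from_env_file here.
    env = Environment.from_env_file(project_dir, '.env2')
    services = config.load(config.find(project_dir, None, env)).services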
@@ -5238,8 +5227,6 @@ class GetDefaultConfigFilesTestCase(unittest.TestCase):
     files = [
         'docker-compose.yml',
         'docker-compose.yaml',
-        'compose.yml',
-        'compose.yaml',
     ]

     def test_get_config_path_default_file_in_basedir(self):
@@ -5273,16 +5260,14 @@ def get_config_filename_for_files(filenames, subdir=None):
             base_dir = tempfile.mkdtemp(dir=project_dir)
         else:
             base_dir = project_dir
-        filenames = config.get_default_config_files(base_dir)
-        if not filenames:
-            raise config.ComposeFileNotFound(config.SUPPORTED_FILENAMES)
-        return os.path.basename(filenames[0])
+        filename, = config.get_default_config_files(base_dir)
+        return os.path.basename(filename)
     finally:
         shutil.rmtree(project_dir)


 class SerializeTest(unittest.TestCase):

-    def test_denormalize_depends(self):
+    def test_denormalize_depends_on_v3(self):
         service_dict = {
             'image': 'busybox',
             'command': 'true',
@@ -5292,7 +5277,27 @@ class SerializeTest(unittest.TestCase):
             }
         }
-        assert denormalize_service_dict(service_dict, VERSION) == service_dict
+        assert denormalize_service_dict(service_dict, VERSION) == {
+            'image': 'busybox',
+            'command': 'true',
+            'depends_on': ['service2', 'service3']
+        }
+
+    def test_denormalize_depends_on_v2_1(self):
+        service_dict = {
+            'image': 'busybox',
+            'command': 'true',
+            'depends_on': {
+                'service2': {'condition': 'service_started'},
+                'service3': {'condition': 'service_started'},
+            }
+        }
+
+        assert denormalize_service_dict(service_dict, VERSION) == {
+            'image': 'busybox',
+            'command': 'true',
+            'depends_on': ['service2', 'service3']
+        }

     def test_serialize_time(self):
         data = {
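Note: the two tests added on the 1.27.1 side pin down that denormalize_service_dict collapses a depends_on mapping back to the short list form on serialization, for the v3 and v2.1 cases alike. A condensed sketch of the asserted behaviour; VERSION is again a stand-in for the constant the test module imports:

    from compose.config.serialize import denormalize_service_dict

    VERSION = '3.8'  # stand-in; the real tests import a version constant

    service = {
        'image': 'busybox',
        'command': 'true',
        'depends_on': {
            'service2': {'condition': 'service_started'},
            'service3': {'condition': 'service_started'},
        },
    }
    # The mapping form comes back out as a plain list of service names.
    assert denormalize_service_dict(service, VERSION)['depends_on'] == [
        'service2', 'service3',
    ]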
@@ -5382,7 +5387,7 @@ class SerializeTest(unittest.TestCase):
         assert serialized_config['secrets']['two'] == {'external': True, 'name': 'two'}

     def test_serialize_ports(self):
-        config_dict = config.Config(config_version=VERSION, version=VERSION, services=[
+        config_dict = config.Config(version=VERSION, services=[
             {
                 'ports': [types.ServicePort('80', '8080', None, None, None)],
                 'image': 'alpine',
@@ -5393,20 +5398,8 @@ class SerializeTest(unittest.TestCase):
         serialized_config = yaml.safe_load(serialize_config(config_dict))
         assert [{'published': 8080, 'target': 80}] == serialized_config['services']['web']['ports']

-    def test_serialize_ports_v1(self):
-        config_dict = config.Config(config_version=V1, version=V1, services=[
-            {
-                'ports': [types.ServicePort('80', '8080', None, None, None)],
-                'image': 'alpine',
-                'name': 'web'
-            }
-        ], volumes={}, networks={}, secrets={}, configs={})
-
-        serialized_config = yaml.safe_load(serialize_config(config_dict))
-        assert ['8080:80/tcp'] == serialized_config['services']['web']['ports']
-
     def test_serialize_ports_with_ext_ip(self):
-        config_dict = config.Config(config_version=VERSION, version=VERSION, services=[
+        config_dict = config.Config(version=VERSION, services=[
             {
                 'ports': [types.ServicePort('80', '8080', None, None, '127.0.0.1')],
                 'image': 'alpine',

View File

@@ -416,7 +416,7 @@ def test_interpolate_mandatory_no_err_msg(defaults_interpolator):
     with pytest.raises(UnsetRequiredSubstitution) as e:
         defaults_interpolator("not ok ${BAZ?}")
-    assert e.value.err == 'BAZ'
+    assert e.value.err == ''


 def test_interpolate_mixed_separators(defaults_interpolator):

View File
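Note: the one-line change above concerns the ${VAR?msg} mandatory-substitution syntax with an empty message: on the base side of this compare UnsetRequiredSubstitution.err carries the variable name ('BAZ'), while in 1.27.1 it is the empty string after the '?'. A sketch of the same check outside the test fixture, assuming defaults_interpolator wraps compose.config.interpolation.Interpolator the way the test module sets it up:

    from compose.config.interpolation import (
        Interpolator, TemplateWithDefaults, UnsetRequiredSubstitution,
    )

    interpolate = Interpolator(TemplateWithDefaults, {'FOO': 'ok'}).interpolate
    try:
        interpolate('not ok ${BAZ?}')
    except UnsetRequiredSubstitution as e:
        print(repr(e.err))  # '' on the 1.27.1 side; 'BAZ' on the base side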

@@ -221,6 +221,34 @@ class ContainerTest(unittest.TestCase):
         container = Container(None, self.container_dict, has_been_inspected=True)
         assert container.short_id == self.container_id[:12]

+    def test_has_api_logs(self):
+        container_dict = {
+            'HostConfig': {
+                'LogConfig': {
+                    'Type': 'json-file'
+                }
+            }
+        }
+
+        container = Container(None, container_dict, has_been_inspected=True)
+        assert container.has_api_logs is True
+
+        container_dict['HostConfig']['LogConfig']['Type'] = 'none'
+        container = Container(None, container_dict, has_been_inspected=True)
+        assert container.has_api_logs is False
+
+        container_dict['HostConfig']['LogConfig']['Type'] = 'syslog'
+        container = Container(None, container_dict, has_been_inspected=True)
+        assert container.has_api_logs is False
+
+        container_dict['HostConfig']['LogConfig']['Type'] = 'journald'
+        container = Container(None, container_dict, has_been_inspected=True)
+        assert container.has_api_logs is True
+
+        container_dict['HostConfig']['LogConfig']['Type'] = 'foobar'
+        container = Container(None, container_dict, has_been_inspected=True)
+        assert container.has_api_logs is False


 class GetContainerNameTestCase(unittest.TestCase):

View File
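Note: the test added above encodes which Docker log drivers leave container output readable back through the Engine API: json-file and journald do, while none, syslog, and unrecognized drivers do not, so `docker-compose logs` would have nothing to fetch. An illustrative property with the same mapping; this is a sketch for reading the test, not the actual compose.container.Container implementation:

    # Illustration only: reproduces the mapping asserted in the test above.
    API_LOG_DRIVERS = {'json-file', 'journald'}

    class ContainerSketch:
        def __init__(self, dictionary):
            self.dictionary = dictionary

        @property
        def has_api_logs(self):
            log_config = self.dictionary.get('HostConfig', {}).get('LogConfig', {})
            # Only drivers that store logs locally can be read via the API.
            return log_config.get('Type') in API_LOG_DRIVERS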

@@ -1,36 +0,0 @@
-import unittest
-
-from compose.metrics.client import MetricsCommand
-from compose.metrics.client import Status
-
-
-class MetricsTest(unittest.TestCase):
-    @classmethod
-    def test_metrics(cls):
-        assert MetricsCommand('up', 'moby').to_map() == {
-            'command': 'compose up',
-            'context': 'moby',
-            'status': 'success',
-            'source': 'docker-compose',
-        }
-
-        assert MetricsCommand('down', 'local').to_map() == {
-            'command': 'compose down',
-            'context': 'local',
-            'status': 'success',
-            'source': 'docker-compose',
-        }
-
-        assert MetricsCommand('help', 'aci', Status.FAILURE).to_map() == {
-            'command': 'compose help',
-            'context': 'aci',
-            'status': 'failure',
-            'source': 'docker-compose',
-        }
-
-        assert MetricsCommand('run', 'ecs').to_map() == {
-            'command': 'compose run',
-            'context': 'ecs',
-            'status': 'success',
-            'source': 'docker-compose',
-        }

View File
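Note: the file deleted above exists only on the base side of this compare (the metrics client is absent from 1.27.1). Its assertions fix the payload shape: MetricsCommand prefixes the command with 'compose' and defaults the status to success, so only failures pass a status explicitly. A usage sketch against that base-side API; 'build' is an arbitrary example command, not one from the test:

    from compose.metrics.client import MetricsCommand, Status

    # 'build' is a hypothetical command name for illustration.
    payload = MetricsCommand('build', 'moby', Status.FAILURE).to_map()
    assert payload == {
        'command': 'compose build',
        'context': 'moby',
        'status': 'failure',
        'source': 'docker-compose',
    }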

@@ -3,7 +3,6 @@ from threading import Lock
 from docker.errors import APIError

-from compose.cli.colors import AnsiMode
 from compose.parallel import GlobalLimit
 from compose.parallel import parallel_execute
 from compose.parallel import parallel_execute_iter
@@ -157,7 +156,7 @@ def test_parallel_execute_alignment(capsys):
 def test_parallel_execute_ansi(capsys):
     ParallelStreamWriter.instance = None
-    ParallelStreamWriter.set_default_ansi_mode(AnsiMode.ALWAYS)
+    ParallelStreamWriter.set_noansi(value=False)
     results, errors = parallel_execute(
         objects=["something", "something more"],
         func=lambda x: x,
@@ -173,7 +172,7 @@ def test_parallel_execute_ansi(capsys):
 def test_parallel_execute_noansi(capsys):
     ParallelStreamWriter.instance = None
-    ParallelStreamWriter.set_default_ansi_mode(AnsiMode.NEVER)
+    ParallelStreamWriter.set_noansi()
     results, errors = parallel_execute(
         objects=["something", "something more"],
         func=lambda x: x,

View File

@@ -28,7 +28,6 @@ from compose.service import Service
 def build_config(**kwargs):
     return Config(
-        config_version=kwargs.get('config_version', VERSION),
         version=kwargs.get('version', VERSION),
         services=kwargs.get('services'),
         volumes=kwargs.get('volumes'),

View File

@@ -330,7 +330,7 @@ class ServiceTest(unittest.TestCase):
         assert service.options['environment'] == environment

         assert opts['labels'][LABEL_CONFIG_HASH] == \
-            '6da0f3ec0d5adf901de304bdc7e0ee44ec5dd7adb08aebc20fe0dd791d4ee5a8'
+            '689149e6041a85f6fb4945a2146a497ed43c8a5cbd8991753d875b165f1b4de4'
         assert opts['environment'] == ['also=real']

     def test_get_container_create_options_sets_affinity_with_binds(self):
@@ -700,7 +700,6 @@ class ServiceTest(unittest.TestCase):
         config_dict = service.config_dict()
         expected = {
             'image_id': 'abcd',
-            'ipc_mode': None,
             'options': {'image': 'example.com/foo'},
             'links': [('one', 'one')],
             'net': 'other',
@@ -724,7 +723,6 @@ class ServiceTest(unittest.TestCase):
         config_dict = service.config_dict()
         expected = {
             'image_id': 'abcd',
-            'ipc_mode': None,
             'options': {'image': 'example.com/foo'},
             'links': [],
             'networks': {},

View File

@@ -1,5 +1,5 @@
 [tox]
-envlist = py37,py39,pre-commit
+envlist = py37,pre-commit

 [testenv]
 usedevelop=True
@@ -50,7 +50,7 @@ directory = coverage-html
 [flake8]
 max-line-length = 105
 # Set this high for now
-max-complexity = 12
+max-complexity = 11
 exclude = compose/packages

 [pytest]