Compare commits


159 Commits

Author SHA1 Message Date
aiordache
cabd5cfb4f "Bump 1.28.4"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 20:49:11 +01:00
aiordache
b97275024a Merge branch 'master' into 1.28.x 2021-02-18 20:46:27 +01:00
aiordache
324d73d434 fix config path for authentication
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 20:42:47 +01:00
aiordache
b83d685a26 Bump docker-py to 4.4.3 in setup.py
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 20:42:47 +01:00
aiordache
f3ef2df3fb Fix SSH port parsing via docker-py bump
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 20:42:47 +01:00
Ulysses Souza
1415471b2a Add cgroup1 as filter label
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-18 20:42:47 +01:00
Ulysses Souza
24a873954b Add label amd64 to filter the agents
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-18 20:42:47 +01:00
aiordache
5db68315fa Update changelog for release 1.28.3
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 20:42:44 +01:00
Ulysses Souza
0672fcbcad Bump python to 3.7.10
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-18 20:41:59 +01:00
Anca Iordache
66375c2871 Merge pull request #8135 from aiordache/test_auth
Fix docker/config path in the test containers
2021-02-18 19:36:21 +01:00
aiordache
c760600a65 fix config path for authentication
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 19:22:13 +01:00
Ulysses Souza
74c09cac66 Merge pull request #8134 from aiordache/bump_docker_setup
Bump docker-py to 4.4.3 in setup.py
2021-02-18 13:27:42 -03:00
aiordache
36e470d640 Bump docker-py to 4.4.3 in setup.py
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 17:19:46 +01:00
aiordache
d28d717884 Fix SSH port parsing via docker-py bump
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 12:52:29 -03:00
Anca Iordache
42c2cfd7a6 Merge pull request #8133 from docker/fix-release-jenkins
Add cgroup1 as filter label
2021-02-18 16:51:55 +01:00
Ulysses Souza
5b983ac653 Add cgroup1 as filter label
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-18 12:24:04 -03:00
Anca Iordache
93425218eb Merge pull request #8130 from docker/fix-release-jenkins
Add label amd64 to filter the agents
2021-02-18 15:58:19 +01:00
aiordache
49d0ee2de5 Update changelog for release 1.28.3
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-18 11:48:29 -03:00
Ulysses Souza
a92c6d7e17 Add label amd64 to filter the agents
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-18 11:34:03 -03:00
Anca Iordache
b8800db52e Merge pull request #8129 from ulyssessouza/bump-python
Bump python to 3.7.10
2021-02-18 15:09:42 +01:00
Ulysses Souza
ccabfde353 Bump python to 3.7.10
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-18 11:04:25 -03:00
aiordache
1473615283 "Bump 1.28.3"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-17 20:06:32 +01:00
aiordache
31b95cfc12 Merge branch 'master' into 1.28.x 2021-02-17 18:30:52 +01:00
aiordache
3297bb50bb Update dind setup for tests
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-17 12:52:14 -03:00
Anca Iordache
e688006444 Merge pull request #8123 from ulyssessouza/fix-dict-access
Fix dict access on keep-prefix option for up
2021-02-16 20:16:50 +01:00
Ulysses Souza
e4a83c15ff Fix dict access on keep-prefix option for up
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-16 15:55:39 -03:00
Anca Iordache
824b9f138e Merge pull request #8094 from Agrendalath/agrendalath/fix_fish_completion
Fix fish completion
2021-02-16 19:48:09 +01:00
Anca Iordache
8654eb2ea3 Merge pull request #8120 from ulyssessouza/bump-docker-py
Bump docker-py
2021-02-15 18:54:30 +01:00
Ulysses Souza
9407ee65e5 Bump docker-py
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-02-15 14:19:37 -03:00
Anca Iordache
66c6d2757a Merge pull request #8082 from thaJeztah/remove_log_driver_check
Remove local check for log-driver read support
2021-02-15 17:07:34 +01:00
Anca Iordache
17daa93edf Merge pull request #8109 from docker/dependabot/pip/cryptography-3.3.2
[Security] Bump cryptography from 3.2.1 to 3.3.2
2021-02-15 16:41:42 +01:00
dependabot-preview[bot]
9795e39d0c [Security] Bump cryptography from 3.2.1 to 3.3.2
Bumps [cryptography](https://github.com/pyca/cryptography) from 3.2.1 to 3.3.2. **This update includes a security fix.**
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/3.2.1...3.3.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2021-02-12 10:45:09 +00:00
Anca Iordache
393abc5b33 Merge pull request #8112 from aiordache/update_Jenkinsfile
Update test base image in Jenkinsfile
2021-02-12 11:33:06 +01:00
aiordache
d0866c8c18 Update test base image in Jenkinsfile
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-02-10 15:06:06 +01:00
Anca Iordache
546133c977 Merge pull request #8093 from JimCronqvist/master
Fix incorrect CLI env variable name for service profiles
2021-02-10 12:24:02 +01:00
Ulysses Souza
9a2f94713e Merge pull request #8080 from aiordache/update_changelog_1.28.2
Post-release 1.28.2 changelog updates
2021-02-09 16:36:27 -03:00
Agrendalath
b88f635514 Fix fish completion
Signed-off-by: Agrendalath <piotr@surowiec.it>
2021-02-03 00:29:16 +01:00
Jim Cronqvist
31002aeacd Fix incorrect CLI variable name for service profiles
Changed from singular to plural as defined in the docs, i.e. "COMPOSE_PROFILES"

Signed-off-by: Jim Cronqvist <jim.cronqvist@gmail.com>
2021-02-02 21:41:57 +01:00
Sebastiaan van Stijn
28f8b8549d Remove local check for log-driver read support
Starting with Docker 20.10, the docker daemon has support for
"dual logging", which allows reading back logs, regardless of
the logging-driver that is configured (except for "none" as logging
driver).

This patch removes the local check, which used a hard-coded list of
logging drivers that are expected to support reading logs.

When using an older version of Docker, the API should return an
error that reading logs is not supported, so no local check should
be needed.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-01-28 16:55:36 +01:00
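The removed client-side check described above can be pictured with a small sketch (names and the driver list are hypothetical reconstructions, not compose's actual code):

```python
# Hypothetical reconstruction of the removed client-side check: a
# hard-coded allow-list of log drivers assumed to support reading logs.
READABLE_LOG_DRIVERS = {"json-file", "journald", "local"}

def can_read_logs_locally(log_driver):
    # With Docker 20.10 "dual logging", this pre-check is unnecessary:
    # the daemon can return logs for any driver except "none", and an
    # older daemon simply answers with an API error instead.
    return log_driver in READABLE_LOG_DRIVERS
```

Dropping the list means new drivers work without client updates, at the cost of relying on the daemon's error message for old versions.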
aiordache
76a19ec8c5 Post-release changelog update
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-28 10:28:47 +01:00
Anca Iordache
bba8cd0322 Merge pull request #8045 from aiordache/changelog_1.28.0
Post-release 1.28.0: Update changelog and version
2021-01-26 21:07:24 +01:00
aiordache
67630359cf "Bump 1.28.2"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:23:53 +01:00
aiordache
c99c1556aa Add cgroup1 label to Release.Jenkinsfile
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
aiordache
0e529bf29b "Bump 1.28.1"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
Harald Albers
27d039d39a Fix formatting of help output for up|logs --no-log-prefix
Signed-off-by: Harald Albers <github@albersweb.de>
2021-01-26 20:15:15 +01:00
Harald Albers
ad1baff1b3 Add bash completion for logs|up --no-log-prefix
This adds bash completion for https://github.com/docker/compose/pull/7435

Signed-off-by: Harald Albers <github@albersweb.de>
2021-01-26 20:15:15 +01:00
Chris Crone
59e9ebe428 build.linux: Revert to Python 3.7
This allows us to revert from Debian Buster to Stretch which allows
us to relax the glibc version requirements.

Signed-off-by: Chris Crone <christopher.crone@docker.com>
2021-01-26 20:15:15 +01:00
aiordache
90373e9e63 "Bump 1.28.0"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
Ulysses Souza
786822e921 Update compose-spec
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:15 +01:00
Ulysses Souza
95c6adeecf Remove restriction on docker version
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:15 +01:00
Mike Seplowitz
b6ddddc31a Improve control over ANSI output (#6858)
* Move global console_handler into function scope

Signed-off-by: Mike Seplowitz <mseplowitz@bloomberg.net>

* Improve control over ANSI output

- Disabled parallel logger ANSI output if not attached to a tty.
  The console handler and progress stream already checked whether the
  output stream is a tty, but ParallelStreamWriter did not.

- Added --ansi=(never|always|auto) option to allow clearer control over
  ANSI output. Since --no-ansi is the same as --ansi=never, --no-ansi is
  now deprecated.

Signed-off-by: Mike Seplowitz <mseplowitz@bloomberg.net>
2021-01-26 20:15:15 +01:00
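The `--ansi` resolution rule this commit describes can be sketched as follows (the helper name is illustrative, not compose's internal API):

```python
import sys

def ansi_enabled(mode, stream=sys.stdout):
    # Sketch of the --ansi=(never|always|auto) decision: "always" and
    # "never" are explicit, while "auto" falls back to the same tty
    # check the parallel logger now applies.
    if mode == "always":
        return True
    if mode == "never":
        return False
    return stream.isatty()
```

Under this rule `--no-ansi` behaves exactly like `--ansi=never`, which is why the old flag could be deprecated.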
aiordache
e1fb1e9a3a "Bump 1.28.0-rc3"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
Mark Gallagher
c27c73efae Remove duplicate values check for build.cache_from
The `docker` command accepts duplicate values, so there is no benefit to
performing this check.

Fixes #7342.

Signed-off-by: Mark Gallagher <mark@fts.scot>
2021-01-26 20:15:15 +01:00
Sebastiaan van Stijn
a5863de31a Make COMPOSE_DOCKER_CLI_BUILD=1 the default
This changes compose to use "native" build through the CLI
by default. With this, docker-compose can take advantage of
BuildKit (which is now enabled by default on Docker Desktop
2.5 and up).

Users that want to use the python client for building can
opt-out of this feature by setting COMPOSE_DOCKER_CLI_BUILD=0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-01-26 20:15:14 +01:00
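The opt-out described in the commit message amounts to a one-line default flip; a minimal sketch (helper name is illustrative):

```python
import os

def use_cli_build(environ=os.environ):
    # COMPOSE_DOCKER_CLI_BUILD now defaults on ("1"), enabling native
    # CLI/BuildKit builds; setting it to "0" opts back into the
    # python-client build path.
    return environ.get("COMPOSE_DOCKER_CLI_BUILD", "1") != "0"
```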
guillaume.tardif
97056552dc Support windows npipe, set content type & correct URL /usage. Also fixed socket name for desktop mac
Signed-off-by: guillaume.tardif <guillaume.tardif@gmail.com>
2021-01-26 20:15:14 +01:00
Ulysses Souza
318741ca5e Add metrics
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:14 +01:00
aiordache
aa8b7bb392 "Bump 1.28.0-rc2"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
Daniil Sigalov
a8ffcfaefb Only attach services we'll read logs from in up
When 'up' is run with an explicit list of services, compose will
start them together with their dependencies. It will attach to all
started services, but won't read output from dependencies (their
logs are not printed by 'up') - so the receive buffer of
dependencies will fill and at some point will start blocking those
services. Fix that by only attaching to services given in the
list.
To do that, move logic of choosing which services to attach from
cli/main.py to utils.py and use it from project.py to decide if
service should be attached.

Fixes #6018

Signed-off-by: Daniil Sigalov <asterite@seclab.cs.msu.ru>
2021-01-26 20:15:14 +01:00
Ulysses Souza
97e009a8cb Avoid setting unsupported parameter for subprocess.Popen on Windows
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:14 +01:00
aiordache
186e3913f0 "Bump 1.28.0-rc1"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
dependabot-preview[bot]
7bc945654f Bump virtualenv from 20.0.30 to 20.2.2
Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
(cherry picked from commit 8785279ffd)
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:14 +01:00
dependabot-preview[bot]
cc299f5cd5 Bump bcrypt from 3.1.7 to 3.2.0
Bumps [bcrypt](https://github.com/pyca/bcrypt) from 3.1.7 to 3.2.0.
- [Release notes](https://github.com/pyca/bcrypt/releases)
- [Changelog](https://github.com/pyca/bcrypt/blob/master/release.py)
- [Commits](https://github.com/pyca/bcrypt/compare/3.1.7...3.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2021-01-26 20:15:14 +01:00
Anca Iordache
536bea0859 Revert "Bump virtualenv from 20.0.30 to 20.2.1" (#7975)
This reverts commit 8785279ffd.

Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
Anca Iordache
db7b666e40 Revert "Bump gitpython from 3.1.7 to 3.1.11" (#7974)
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
aiordache
945123145f Bump docker-py in setup.py
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
Anca Iordache
f2ec6a2176 Merge pull request #8070 from aiordache/Jenkins_cgroup1_label
Add `cgroup1` label to Release.Jenkinsfile
2021-01-25 19:11:50 +01:00
aiordache
7f7f1607de Add cgroup1 label to Release.Jenkinsfile
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-25 19:00:39 +01:00
Anca Iordache
4990a7f935 Merge pull request #8067 from albers/completion-no-log-prefix
Add bash completion for `docker-compose logs|up --no-log-prefix`, fix formatting of help message
2021-01-25 15:06:34 +01:00
Anca Iordache
72f8551466 Merge pull request #8058 from docker/py-37-revert
Revert to Python 3.7 bump for Linux static builds
2021-01-25 14:34:42 +01:00
Harald Albers
487779960c Fix formatting of help output for up|logs --no-log-prefix
Signed-off-by: Harald Albers <github@albersweb.de>
2021-01-24 22:19:37 +00:00
Harald Albers
99b6776fd2 Add bash completion for logs|up --no-log-prefix
This adds bash completion for https://github.com/docker/compose/pull/7435

Signed-off-by: Harald Albers <github@albersweb.de>
2021-01-24 22:18:36 +00:00
Chris Crone
6a3af5b707 build.linux: Revert to Python 3.7
This allows us to revert from Debian Buster to Stretch which allows
us to relax the glibc version requirements.

Signed-off-by: Chris Crone <christopher.crone@docker.com>
2021-01-22 11:35:37 +01:00
aiordache
205d520805 Post-release 1.28.0: update changelog and version
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-20 11:30:25 +01:00
Anca Iordache
8f2bb66e73 Merge pull request #8043 from docker/update-compose-spec
Update compose-spec
2021-01-19 19:00:43 +01:00
Ulysses Souza
af4eaae006 Update compose-spec
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-19 14:57:59 -03:00
Anca Iordache
1c547b270e Merge pull request #8042 from docker/fix-docker-version
Remove restriction on docker version
2021-01-19 18:37:22 +01:00
Ulysses Souza
1c499bb2eb Remove restriction on docker version
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-19 14:33:16 -03:00
Mike Seplowitz
4fa72a066a Improve control over ANSI output (#6858)
* Move global console_handler into function scope

Signed-off-by: Mike Seplowitz <mseplowitz@bloomberg.net>

* Improve control over ANSI output

- Disabled parallel logger ANSI output if not attached to a tty.
  The console handler and progress stream already checked whether the
  output stream is a tty, but ParallelStreamWriter did not.

- Added --ansi=(never|always|auto) option to allow clearer control over
  ANSI output. Since --no-ansi is the same as --ansi=never, --no-ansi is
  now deprecated.

Signed-off-by: Mike Seplowitz <mseplowitz@bloomberg.net>
2021-01-19 18:17:55 +01:00
Anca Iordache
b9249168bd Merge pull request #7926 from maaarghk/no_build_cache_from_duplicate_check
Remove duplicate values check for build.cache_from
2021-01-11 18:33:16 +01:00
Anca Iordache
e36ac32120 Merge pull request #7978 from thaJeztah/default_to_cli_build
Make COMPOSE_DOCKER_CLI_BUILD=1 the default
2021-01-11 18:29:42 +01:00
Guillaume Tardif
5be6bde76c Merge pull request #7989 from docker/add-metrics
Add metrics capturing
2021-01-06 09:38:52 +01:00
guillaume.tardif
c380604a9e Support windows npipe, set content type & correct URL /usage. Also fixed socket name for desktop mac
Signed-off-by: guillaume.tardif <guillaume.tardif@gmail.com>
2021-01-05 15:45:10 +01:00
Ulysses Souza
369eb3220a Add metrics
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-04 15:16:51 -03:00
Ulysses Souza
2e273c5029 Merge pull request #8005 from asterite3/up-only-attach-foreground-services
Only attach services we're going to read logs from in "up"
2021-01-04 13:44:12 +00:00
Anca Iordache
21e196f20a Merge pull request #8009 from ulyssessouza/fix-windows-popen
Avoid setting unsupported parameter for subprocess.Popen on Windows
2021-01-04 09:55:53 +01:00
Ulysses Souza
b9d86f4b51 Avoid setting unsupported parameter for subprocess.Popen on Windows
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-12-22 17:52:24 -03:00
Daniil Sigalov
1b5278f977 Only attach services we'll read logs from in up
When 'up' is run with an explicit list of services, compose will
start them together with their dependencies. It will attach to all
started services, but won't read output from dependencies (their
logs are not printed by 'up') - so the receive buffer of
dependencies will fill and at some point will start blocking those
services. Fix that by only attaching to services given in the
list.
To do that, move logic of choosing which services to attach from
cli/main.py to utils.py and use it from project.py to decide if
service should be attached.

Fixes #6018

Signed-off-by: Daniil Sigalov <asterite@seclab.cs.msu.ru>
2020-12-20 15:58:58 +03:00
Sebastiaan van Stijn
affb0d504d Make COMPOSE_DOCKER_CLI_BUILD=1 the default
This changes compose to use "native" build through the CLI
by default. With this, docker-compose can take advantage of
BuildKit (which is now enabled by default on Docker Desktop
2.5 and up).

Users that want to use the python client for building can
opt-out of this feature by setting COMPOSE_DOCKER_CLI_BUILD=0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-12-08 12:26:41 +01:00
Anca Iordache
8034bc3bd6 Merge pull request #7977 from docker/bumps-virtenv-gitpython
Bump virtualenv from 20.0.30 to 20.2.2 and gitpython to 3.1.11
2020-12-07 19:59:16 +01:00
dependabot-preview[bot]
89fcfc5499 Bump virtualenv from 20.0.30 to 20.2.2
Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
(cherry picked from commit 8785279ffd)
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-12-07 15:57:24 -03:00
Anca Iordache
40a4ec1624 Merge pull request #7679 from docker/dependabot/pip/bcrypt-3.2.0
Bump bcrypt from 3.1.7 to 3.2.0
2020-12-07 19:18:30 +01:00
Anca Iordache
6c55ef6a5d Revert "Bump virtualenv from 20.0.30 to 20.2.1" (#7975)
This reverts commit 8785279ffd.

Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-04 17:32:14 +01:00
Anca Iordache
3f46dc1d76 Revert "Bump gitpython from 3.1.7 to 3.1.11" (#7974)
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-04 17:22:31 +01:00
Anca Iordache
f2bc89a876 Merge pull request #7971 from aiordache/update_docker_setup
Bump docker-py in setup.py
2020-12-03 19:23:30 +01:00
aiordache
fee4756e33 Bump docker-py in setup.py
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-03 19:00:39 +01:00
Anca Iordache
030b347673 Merge pull request #7965 from docker/fix-project-dir
Fix project_dir to take the first file into account
2020-12-03 14:21:01 +01:00
Ulysses Souza
e0edc908b5 Fix project_dir to take the first file into account
The order of precedence is:
- '--project-directory' option
- first file directory in '--file' option
- current directory

Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-12-02 16:59:47 -03:00
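The precedence listed in the commit message can be sketched directly (function name is illustrative, not compose's internal API):

```python
import os

def resolve_project_dir(project_directory=None, files=()):
    # Precedence per the commit message:
    # 1. the explicit --project-directory option,
    # 2. the directory of the first --file option,
    # 3. the current directory.
    if project_directory:
        return project_directory
    if files:
        return os.path.dirname(os.path.abspath(files[0]))
    return os.getcwd()
```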
Paco Xu
6f3f696bd1 parallel_pull is default behavior in new versions (#7395)
Signed-off-by: pacoxu <paco.xu@daocloud.io>
2020-12-02 20:58:22 +01:00
lcsdtw
3f4d1ea97e docker-compose-help: improve help about down (#7411)
One could think just using `docker-compose down` already removes volumes

Signed-off-by: Dara Keon <lc.sales.duarte@gmail.com>
2020-12-02 20:56:34 +01:00
Ofek Lev
7b5be97c45 Upgrade Windows dependency (#7537)
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2020-12-02 20:51:39 +01:00
EricsonMacedo
3e31f80977 Setup environment variables for compose. (#7490)
* Setup environment variables for compose.

- Setup environment variables to work as expected for compose
  config and context in container mode.
- Setup volume mounts based on -f, --file argument for compose
  config and context.

Signed-off-by: Ericson Macedo <macedoericson@gmail.com>

* Improve parsing of specified compose file

- Update parsing of multiple -f, --file parameters.
- Remove usage of eval command.

Signed-off-by: Ericson Macedo <macedoericson@gmail.com>
2020-12-02 20:49:54 +01:00
dependabot-preview[bot]
059fd29ec3 Bump tox from 3.19.0 to 3.20.1 (#7863)
* Bump tox from 3.19.0 to 3.20.1
* Bump tox version in Dockerfile

Bumps [tox](https://github.com/tox-dev/tox) from 3.19.0 to 3.20.1.
- [Release notes](https://github.com/tox-dev/tox/releases)
- [Changelog](https://github.com/tox-dev/tox/blob/master/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/tox/compare/3.19.0...3.20.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-02 20:14:04 +01:00
Anca Iordache
f1059d75ed Merge pull request #7866 from luca-nardelli/improve-mandatory-variables-issues
Report which variable fails interpolation when they are mandatory
2020-12-02 20:06:25 +01:00
Anca Iordache
c45e93971f Merge pull request #7946 from aiordache/config_warning
Bring back warning for configs in non-swarm mode
2020-12-02 20:05:28 +01:00
Anca Iordache
ff42a783de Merge pull request #7889 from docker/dependabot/pip/gitpython-3.1.11
Bump gitpython from 3.1.7 to 3.1.11
2020-12-02 20:03:38 +01:00
Anca Iordache
6ec45cf2d2 Merge pull request #7902 from docker/dependabot/pip/more-itertools-8.6.0
Bump more-itertools from 8.4.0 to 8.6.0
2020-12-02 20:02:23 +01:00
dependabot-preview[bot]
4139d701f3 Bump bcrypt from 3.1.7 to 3.2.0
Bumps [bcrypt](https://github.com/pyca/bcrypt) from 3.1.7 to 3.2.0.
- [Release notes](https://github.com/pyca/bcrypt/releases)
- [Changelog](https://github.com/pyca/bcrypt/blob/master/release.py)
- [Commits](https://github.com/pyca/bcrypt/compare/3.1.7...3.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-12-02 19:01:24 +00:00
Anca Iordache
c56f57da12 Merge pull request #7939 from docker/dependabot/pip/virtualenv-20.2.1
Bump virtualenv from 20.0.30 to 20.2.1
2020-12-02 20:00:53 +01:00
Anca Iordache
687fa65557 Merge pull request #7917 from docker/dependabot/pip/attrs-20.3.0
Bump attrs from 20.1.0 to 20.3.0
2020-12-02 20:00:03 +01:00
Anca Iordache
5e3708e605 Merge pull request #7930 from acran/profiles
Implement service profiles
2020-12-02 19:57:19 +01:00
aiordache
d6e3af36dd Bump version in build scripts
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-02 19:54:36 +01:00
dependabot-preview[bot]
ac06e35c00 Bump attrs from 20.1.0 to 20.3.0
Bumps [attrs](https://github.com/python-attrs/attrs) from 20.1.0 to 20.3.0.
- [Release notes](https://github.com/python-attrs/attrs/releases)
- [Changelog](https://github.com/python-attrs/attrs/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/python-attrs/attrs/compare/20.1.0...20.3.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-12-02 18:47:11 +00:00
Anca Iordache
9a913b110c Merge pull request #7952 from docker/dependabot/pip/cffi-1.14.4
Bump cffi from 1.14.1 to 1.14.4
2020-12-02 19:45:46 +01:00
dependabot-preview[bot]
929ca84db1 Bump cffi from 1.14.1 to 1.14.4
Bumps [cffi](https://github.com/python-cffi/release-doc) from 1.14.1 to 1.14.4.
- [Release notes](https://github.com/python-cffi/release-doc/releases)
- [Commits](https://github.com/python-cffi/release-doc/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-12-02 18:19:18 +00:00
Anca Iordache
57f8a0b039 Merge pull request #7953 from docker/dependabot/pip/cryptography-3.2.1
Bump cryptography from 3.2 to 3.2.1
2020-12-02 19:17:57 +01:00
Chris Crone
21f1d7c5e6 centos: Simplify short version variable
Signed-off-by: Chris Crone <christopher.crone@docker.com>
2020-12-02 18:17:27 +00:00
Chris Crone
c87844c504 win: Bump Python version for release
Signed-off-by: Chris Crone <christopher.crone@docker.com>
2020-12-02 18:16:32 +00:00
aiordache
21c07bd76c Move device requests to service_dict to avoid adding another field to config hash
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-02 18:12:39 +00:00
aiordache
8f2dbd9b12 Add devices to config hash to trigger container recreate on change
* add unit test
 * update path to compose spec schema in Makefile

Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-02 18:12:39 +00:00
aiordache
7ca88de76b Bring back warning for swarm configs
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-12-02 15:08:01 +01:00
Roman Anasal
2d2a8a0469 Implement service profiles
Implement profiles as introduced in compose-spec/compose-spec#110
fixes #7919
closes #1896
closes #6742
closes #7539

Signed-off-by: Roman Anasal <roman.anasal@bdsu.de>
2020-12-02 01:08:11 +01:00
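The selection rule that profiles introduce (a service without profiles always runs; a profiled service runs only when one of its profiles is activated, e.g. via `COMPOSE_PROFILES` or `--profile`) can be sketched as follows; this is an illustrative reduction, not compose's actual implementation:

```python
def enabled_services(services, active_profiles):
    # services: mapping of service name -> config dict, where a config
    # may carry a "profiles" list per the compose spec.
    active = set(active_profiles)
    return [
        name for name, cfg in services.items()
        if not cfg.get("profiles") or active & set(cfg["profiles"])
    ]
```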
dependabot-preview[bot]
b187f19f94 Bump cryptography from 3.2 to 3.2.1
Bumps [cryptography](https://github.com/pyca/cryptography) from 3.2 to 3.2.1.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/3.2...3.2.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-26 18:48:15 +00:00
Anca Iordache
5c6c300ba5 Merge pull request #7944 from docker/bump-deps
Bump dependencies
2020-11-26 19:46:39 +01:00
Chris Crone
a3e6e28eeb deps: Bump Python, Docker, base images
Signed-off-by: Chris Crone <christopher.crone@docker.com>
2020-11-26 15:25:09 +01:00
Anca Iordache
be8523708e Merge pull request #7872 from aiordache/use_ssh_client
Use ssh client by default
2020-11-24 13:54:48 +01:00
dependabot-preview[bot]
8785279ffd Bump virtualenv from 20.0.30 to 20.2.1
Bumps [virtualenv](https://github.com/pypa/virtualenv) from 20.0.30 to 20.2.1.
- [Release notes](https://github.com/pypa/virtualenv/releases)
- [Changelog](https://github.com/pypa/virtualenv/blob/main/docs/changelog.rst)
- [Commits](https://github.com/pypa/virtualenv/compare/20.0.30...20.2.1)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-23 22:16:04 +00:00
aiordache
e28c948f34 Shell out to ssh client for ssh connections
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-11-23 14:41:34 +01:00
aiordache
854c003359 Implement device requests for GPU support
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-11-17 13:34:58 +01:00
Mark Gallagher
3ebfa4b089 Remove duplicate values check for build.cache_from
The `docker` command accepts duplicate values, so there is no benefit to
performing this check.

Fixes #7342.

Signed-off-by: Mark Gallagher <mark@fts.scot>
2020-11-13 01:56:37 +00:00
Chris Crone
843621dfb8 Merge pull request #7906 from chris-crone/cloud-readme
Simplify README and add cloud deployment
2020-11-12 15:04:27 +01:00
Chris Crone
ea28d8edac readme: Simplify and add cloud deployment
Signed-off-by: Chris Crone <christopher.crone@docker.com>
2020-11-12 13:15:23 +01:00
Anca Iordache
8633939080 Merge pull request #7796 from aiordache/update_changelog_1.27.4
Update changelog for 1.27.4 release
2020-11-12 09:35:02 +01:00
Anca Iordache
f965401569 Merge pull request #7435 from rohitkg98/7416-add-disable-log-prefix-flag
Added option to disable log prefix via cli
2020-11-12 09:34:20 +01:00
Chris Crone
bf61244f37 build: Add build for CentOS
Signed-off-by: Chris Crone <christopher.crone@docker.com>
2020-11-09 18:19:02 +01:00
Chris Crone
f825cec2fc build: Refactor to use BuildKit
Signed-off-by: Chris Crone <christopher.crone@docker.com>
2020-11-09 18:19:02 +01:00
dependabot-preview[bot]
3cfccc1d64 Bump more-itertools from 8.4.0 to 8.6.0
Bumps [more-itertools](https://github.com/more-itertools/more-itertools) from 8.4.0 to 8.6.0.
- [Release notes](https://github.com/more-itertools/more-itertools/releases)
- [Commits](https://github.com/more-itertools/more-itertools/compare/v8.4.0...v8.6.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-02 22:17:02 +00:00
Ulysses Souza
675c9674e1 Add Makefile including spec download target
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-10-27 13:50:17 +01:00
dependabot-preview[bot]
df99124d72 Bump gitpython from 3.1.7 to 3.1.11
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.7 to 3.1.11.
- [Release notes](https://github.com/gitpython-developers/GitPython/releases)
- [Changelog](https://github.com/gitpython-developers/GitPython/blob/master/CHANGES)
- [Commits](https://github.com/gitpython-developers/GitPython/compare/3.1.7...3.1.11)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-10-26 21:23:52 +00:00
Kaushal Rohit
cddaa77fea Added option to disable log prefix via cli
Signed-off-by: Kaushal Rohit <rohit.kg98@gmail.com>
2020-10-18 19:52:16 +05:30
Luca Nardelli
d51249acf4 Report which variable fails interpolation when they are mandatory
Add default value before raising UnsetRequiredSubstitution

Signed-off-by: Luca Nardelli <luca.nardelli@protonmail.com>
2020-10-15 09:55:04 +02:00
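The improvement above is about naming the offending variable when a mandatory substitution fails; a minimal sketch of that idea (simplified `${VAR}` syntax only, helper name hypothetical):

```python
import re

def interpolate(template, mapping):
    # Substitute ${VAR} occurrences; when a required variable has no
    # value, raise an error that names it, as the commit above adds.
    def repl(match):
        name = match.group(1)
        if name not in mapping:
            raise ValueError(f"Required variable {name!r} is missing a value")
        return mapping[name]
    return re.sub(r"\$\{(\w+)\}", repl, template)
```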
aiordache
062deb19c0 Update changelog for 1.27.4
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-25 09:49:48 +02:00
Anca Iordache
a24843e1e4 Merge pull request #7769 from aiordache/changelog_1.27.3
Update changelog for 1.27.3 release
2020-09-24 16:34:42 +02:00
aiordache
df05472bcc Remove path check for bind mounts
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-24 16:32:52 +02:00
Ulysses Souza
ce59a4c223 Fix port rendering
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2020-09-24 14:49:26 +02:00
Christian Höltje
1ff05ac060 run.sh: handle unix:// prefix in DOCKER_HOST
docker currently requires the `unix://` prefix when pointing `DOCKER_HOST` at a socket.

fixes #7281

Signed-off-by: Christian Höltje <docwhat@gerf.org>
2020-09-18 13:38:03 +02:00
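The fix amounts to prefixing bare socket paths with the scheme docker requires; sketched here in Python rather than the script's shell (helper name is illustrative):

```python
def normalize_docker_host(host):
    # docker requires the unix:// scheme when DOCKER_HOST points at a
    # socket path, so add it when no scheme is present.
    return host if "://" in host else "unix://" + host
```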
aiordache
1192a4e817 Update changelog for 1.27.3 release
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-17 09:47:07 +02:00
aiordache
60514c1adb Allow strings for cpus fields
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-16 16:16:06 +02:00
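Accepting strings for `cpus` reduces to coercing the value before use; a minimal sketch under that assumption:

```python
def normalize_cpus(value):
    # The compose file "cpus" field may now be a string such as "1.5"
    # as well as a number; coerce either form to a float.
    return float(value)
```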
aiordache
c960b028b9 fix flake8 complexity
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-16 16:14:55 +02:00
aiordache
8c81a9da7a Enable relative paths for driver_opts.device
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-16 16:14:55 +02:00
aiordache
5340a6d760 Add test for scale with stopped containers
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-16 15:59:43 +02:00
aiordache
a85d2bc64c update test for start trigger
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-16 15:59:43 +02:00
aiordache
50a4afaf17 Fix scaling when some containers are not running
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-16 15:59:43 +02:00
Anca Iordache
ddec1f61a6 Merge pull request #7761 from aiordache/depends_on
Fix depends_on serialisation on `docker-compose config`
2020-09-15 14:30:03 +02:00
aiordache
fa720787d6 update depends_on tests
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-14 18:01:35 +02:00
aiordache
a75b6249f8 Fix depends_on serialisation on docker-compose config
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-14 11:41:30 +02:00
Anca Iordache
86dad9247d Merge pull request #7751 from aiordache/1.27.1_changelog
Update changelog for release 1.27.1
2020-09-11 14:40:21 +02:00
Anca Iordache
c365ac0c11 Merge pull request #7754 from ticalcster/master
Added merge for max_replicas_per_node
2020-09-11 14:38:39 +02:00
Kevin Clark
d811500fa0 Added merge for max_replicas_per_node
Signed-off-by: Kevin Clark <kclark@edustaff.org>
2020-09-11 08:01:24 -04:00
aiordache
204655be13 Update changelog for 1.27.2
Signed-off-by: aiordache <anca.iordache@docker.com>
2020-09-10 18:10:37 +02:00
65 changed files with 1489 additions and 431 deletions


@@ -1,6 +1,169 @@
Change log
==========
1.28.4 (2021-02-18)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/54?closed=1)
### Bugs
- Fix SSH port parsing by bumping docker-py to 4.4.3
### Miscellaneous
- Bump Python to 3.7.10
1.28.3 (2021-02-17)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/53?closed=1)
### Bugs
- Fix SSH hostname parsing when it contains a leading 's'/'h', and remove the quiet option that was hiding the error (via docker-py bump to 4.4.2)
- Fix key error for '--no-log-prefix' option
- Fix incorrect CLI environment variable name for service profiles: `COMPOSE_PROFILES` instead of `COMPOSE_PROFILE`
- Fix fish completion
### Miscellaneous
- Bump cryptography to 3.3.2
- Remove log driver filter
1.28.2 (2021-01-26)
-------------------
### Miscellaneous
- CI setup update
1.28.1 (2021-01-25)
-------------------
### Bugs
- Revert to Python 3.7 bump for Linux static builds
- Add bash completion for `docker-compose logs|up --no-log-prefix`
1.28.0 (2021-01-20)
-------------------
### Features
- Support for Nvidia GPUs via device requests
- Support for service profiles
- Change the SSH connection approach to the Docker CLI's via shellout to the local SSH client (old behaviour enabled by setting `COMPOSE_PARAMIKO_SSH` environment variable)
- Add flag to disable log prefix
- Add flag for ansi output control
### Bugs
- Make `parallel_pull=True` by default
- Bring back warning for configs in non-swarm mode
- Take `--file` in account when defining `project_dir`
- On `compose up`, attach only to services we read logs from
### Miscellaneous
- Make COMPOSE_DOCKER_CLI_BUILD=1 the default
- Add usage metrics
- Sync schema with COMPOSE specification
- Improve failure report for missing mandatory environment variables
- Bump attrs to 20.3.0
- Bump more_itertools to 8.6.0
- Bump cryptography to 3.2.1
- Bump cffi to 1.14.4
- Bump virtualenv to 20.2.2
- Bump bcrypt to 3.2.0
- Bump gitpython to 3.1.11
- Bump docker-py to 4.4.1
- Bump Python to 3.9
- Linux: bump Debian base image from stretch to buster (required for Python 3.9)
- macOS: OpenSSL 1.1.1g to 1.1.1h, Python 3.7.7 to 3.9.0
- Bump pyinstaller to 4.1
- Loosen restriction on base images to latest minor
- Updates of READMEs
1.27.4 (2020-09-24)
-------------------
### Bugs
- Remove path checks for bind mounts
- Fix port rendering to output long form syntax for non-v1
- Add protocol to the docker socket address
1.27.3 (2020-09-16)
-------------------
### Bugs
- Merge `max_replicas_per_node` on `docker-compose config`
- Fix `depends_on` serialization on `docker-compose config`
- Fix scaling when some containers are not running on `docker-compose up`
- Enable relative paths for `driver_opts.device` for `local` driver
- Allow strings for `cpus` fields
1.27.2 (2020-09-10)
-------------------
### Bugs
- Fix bug on `docker-compose run` container attach
1.27.1 (2020-09-10)
-------------------
### Bugs
- Fix `docker-compose run` when `service.scale` is specified
- Allow `driver` property for external networks as temporary workaround for swarm network propagation issue
- Pin new internal schema version to `3.9` as the default
- Preserve the version when configured in the compose file
1.27.0 (2020-09-07)
-------------------


@@ -1,11 +1,15 @@
ARG DOCKER_VERSION=19.03.8
ARG PYTHON_VERSION=3.7.7
ARG BUILD_ALPINE_VERSION=3.11
ARG BUILD_DEBIAN_VERSION=slim-stretch
ARG RUNTIME_ALPINE_VERSION=3.11.5
ARG RUNTIME_DEBIAN_VERSION=stretch-20200414-slim
ARG DOCKER_VERSION=19.03
ARG PYTHON_VERSION=3.7.10
ARG BUILD_PLATFORM=alpine
ARG BUILD_ALPINE_VERSION=3.12
ARG BUILD_CENTOS_VERSION=7
ARG BUILD_DEBIAN_VERSION=slim-stretch
ARG RUNTIME_ALPINE_VERSION=3.12
ARG RUNTIME_CENTOS_VERSION=7
ARG RUNTIME_DEBIAN_VERSION=stretch-slim
ARG DISTRO=alpine
FROM docker:${DOCKER_VERSION} AS docker-cli
@@ -40,32 +44,56 @@ RUN apt-get update && apt-get install --no-install-recommends -y \
openssl \
zlib1g-dev
FROM build-${BUILD_PLATFORM} AS build
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
WORKDIR /code/
# FIXME(chris-crone): virtualenv 16.3.0 breaks build, force 16.2.0 until fixed
RUN pip install virtualenv==20.0.30
RUN pip install tox==3.19.0
FROM centos:${BUILD_CENTOS_VERSION} AS build-centos
RUN yum install -y \
gcc \
git \
libffi-devel \
make \
openssl \
openssl-devel
WORKDIR /tmp/python3/
ARG PYTHON_VERSION
RUN curl -L https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz | tar xzf - \
&& cd Python-${PYTHON_VERSION} \
&& ./configure --enable-optimizations --enable-shared --prefix=/usr LDFLAGS="-Wl,-rpath /usr/lib" \
&& make altinstall
RUN alternatives --install /usr/bin/python python /usr/bin/python2.7 50
RUN alternatives --install /usr/bin/python python /usr/bin/python$(echo "${PYTHON_VERSION%.*}") 60
RUN curl https://bootstrap.pypa.io/get-pip.py | python -
FROM build-${DISTRO} AS build
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
WORKDIR /code/
COPY docker-compose-entrypoint.sh /usr/local/bin/
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
RUN pip install \
virtualenv==20.4.0 \
tox==3.21.2
COPY requirements-dev.txt .
COPY requirements-indirect.txt .
COPY requirements.txt .
COPY requirements-dev.txt .
RUN pip install -r requirements.txt -r requirements-indirect.txt -r requirements-dev.txt
COPY .pre-commit-config.yaml .
COPY tox.ini .
COPY setup.py .
COPY README.md .
COPY compose compose/
RUN tox --notest
RUN tox -e py37 --notest
COPY . .
ARG GIT_COMMIT=unknown
ENV DOCKER_COMPOSE_GITSHA=$GIT_COMMIT
RUN script/build/linux-entrypoint
FROM scratch AS bin
ARG TARGETARCH
ARG TARGETOS
COPY --from=build /usr/local/bin/docker-compose /docker-compose-${TARGETOS}-${TARGETARCH}
FROM alpine:${RUNTIME_ALPINE_VERSION} AS runtime-alpine
FROM debian:${RUNTIME_DEBIAN_VERSION} AS runtime-debian
FROM runtime-${BUILD_PLATFORM} AS runtime
FROM centos:${RUNTIME_CENTOS_VERSION} AS runtime-centos
FROM runtime-${DISTRO} AS runtime
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker

Jenkinsfile

@@ -1,6 +1,6 @@
#!groovy
def dockerVersions = ['19.03.8']
def dockerVersions = ['19.03.13']
def baseImages = ['alpine', 'debian']
def pythonVersions = ['py37']
@@ -13,6 +13,9 @@ pipeline {
timeout(time: 2, unit: 'HOURS')
timestamps()
}
environment {
DOCKER_BUILDKIT="1"
}
stages {
stage('Build test images') {
@@ -20,7 +23,7 @@ pipeline {
parallel {
stage('alpine') {
agent {
label 'ubuntu && amd64 && !zfs'
label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
}
steps {
buildImage('alpine')
@@ -28,7 +31,7 @@ pipeline {
}
stage('debian') {
agent {
label 'ubuntu && amd64 && !zfs'
label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
}
steps {
buildImage('debian')
@@ -59,7 +62,7 @@ pipeline {
def buildImage(baseImage) {
def scmvar = checkout(scm)
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
image = docker.image(imageName)
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -69,7 +72,7 @@ def buildImage(baseImage) {
ansiColor('xterm') {
sh """docker build -t ${imageName} \\
--target build \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg DISTRO="${baseImage}" \\
--build-arg GIT_COMMIT="${scmvar.GIT_COMMIT}" \\
.\\
"""
@@ -86,7 +89,7 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
node("ubuntu && amd64 && !zfs") {
def scmvar = checkout(scm)
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -96,6 +99,8 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
--volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-e "DOCKER_TLS_CERTDIR=" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersion}" \\

Makefile

@@ -0,0 +1,57 @@
TAG = "docker-compose:alpine-$(shell git rev-parse --short HEAD)"
GIT_VOLUME = "--volume=$(shell pwd)/.git:/code/.git"
DOCKERFILE ?="Dockerfile"
DOCKER_BUILD_TARGET ?="build"
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
BUILD_SCRIPT = linux
endif
ifeq ($(UNAME_S),Darwin)
BUILD_SCRIPT = osx
endif
COMPOSE_SPEC_SCHEMA_PATH = "compose/config/compose_spec.json"
COMPOSE_SPEC_RAW_URL = "https://raw.githubusercontent.com/compose-spec/compose-spec/master/schema/compose-spec.json"
all: cli
cli: download-compose-spec ## Compile the cli
./script/build/$(BUILD_SCRIPT)
download-compose-spec: ## Download the compose-spec schema from its repo
curl -so $(COMPOSE_SPEC_SCHEMA_PATH) $(COMPOSE_SPEC_RAW_URL)
cache-clear: ## Clear the builder cache
@docker builder prune --force --filter type=exec.cachemount --filter=unused-for=24h
base-image: ## Builds base image
docker build -f $(DOCKERFILE) -t $(TAG) --target $(DOCKER_BUILD_TARGET) .
lint: base-image ## Run linter
docker run --rm \
--tty \
$(GIT_VOLUME) \
$(TAG) \
tox -e pre-commit
test-unit: base-image ## Run tests
docker run --rm \
--tty \
$(GIT_VOLUME) \
$(TAG) \
pytest -v tests/unit/
test: ## Run all tests
./script/test/default
pre-commit: lint test-unit cli
help: ## Show help
@echo Please specify a build target. The choices are:
@grep -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
FORCE:
.PHONY: all cli download-compose-spec cache-clear base-image lint test-unit test pre-commit help

README.md

@@ -1,62 +1,86 @@
Docker Compose
==============
[![Build Status](https://ci-next.docker.com/public/buildStatus/icon?job=compose/master)](https://ci-next.docker.com/public/job/compose/job/master/)
![Docker Compose](logo.png?raw=true "Docker Compose Logo")
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
Docker Compose is a tool for running multi-container applications on Docker
defined using the [Compose file format](https://compose-spec.io).
A Compose file is used to define how one or more containers that make up
your application are configured.
Once you have a Compose file, you can create and start your application with a
single command: `docker-compose up`.
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/index.md#common-use-cases).
Compose files can be used to deploy applications locally, or to the cloud on
[Amazon ECS](https://aws.amazon.com/ecs) or
[Microsoft ACI](https://azure.microsoft.com/services/container-instances/) using
the Docker CLI. You can read more about how to do this:
- [Compose for Amazon ECS](https://docs.docker.com/engine/context/ecs-integration/)
- [Compose for Microsoft ACI](https://docs.docker.com/engine/context/aci-integration/)
Using Compose is basically a three-step process.
Where to get Docker Compose
----------------------------
### Windows and macOS
Docker Compose is included in
[Docker Desktop](https://www.docker.com/products/docker-desktop)
for Windows and macOS.
### Linux
You can download Docker Compose binaries from the
[release page](https://github.com/docker/compose/releases) on this repository.
### Using pip
If your platform is not supported, you can download Docker Compose using `pip`:
```console
pip install docker-compose
```
> **Note:** Docker Compose requires Python 3.6 or later.
Quick Start
-----------
Using Docker Compose is basically a three-step process:
1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire
app.
A `docker-compose.yml` looks like this:
A Compose file looks like this:
version: '2'
```yaml
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
redis:
image: redis
```
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
redis:
image: redis
You can find examples of Compose applications in our
[Awesome Compose repository](https://github.com/docker/awesome-compose).
For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md).
Compose has commands for managing the whole lifecycle of your application:
* Start, stop and rebuild services
* View the status of running services
* Stream the log output of running services
* Run a one-off command on a service
Installation and documentation
------------------------------
- Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
- Code repository for Compose is on [GitHub](https://github.com/docker/compose).
- If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new/choose). Thank you!
For more information about the Compose format, see the
[Compose file reference](https://docs.docker.com/compose/compose-file/).
Contributing
------------
[![Build Status](https://ci-next.docker.com/public/buildStatus/icon?job=compose/master)](https://ci-next.docker.com/public/job/compose/job/master/)
Want to help develop Docker Compose? Check out our
[contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
If you find an issue, please report it on the
[issue tracker](https://github.com/docker/compose/issues/new/choose).
Releasing
---------


@@ -1,6 +1,6 @@
#!groovy
def dockerVersions = ['19.03.8', '18.09.9']
def dockerVersions = ['19.03.13', '18.09.9']
def baseImages = ['alpine', 'debian']
def pythonVersions = ['py37']
@@ -13,6 +13,9 @@ pipeline {
timeout(time: 2, unit: 'HOURS')
timestamps()
}
environment {
DOCKER_BUILDKIT="1"
}
stages {
stage('Build test images') {
@@ -20,7 +23,7 @@ pipeline {
parallel {
stage('alpine') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
buildImage('alpine')
@@ -28,7 +31,7 @@ pipeline {
}
stage('debian') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
buildImage('debian')
@@ -38,7 +41,7 @@ pipeline {
}
stage('Test') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
// TODO use declarative 1.5.0 `matrix` once available on CI
@@ -58,7 +61,7 @@ pipeline {
}
stage('Generate Changelog') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
checkout scm
@@ -81,7 +84,7 @@ pipeline {
steps {
checkout scm
sh './script/setup/osx'
sh 'tox -e py37 -- tests/unit'
sh 'tox -e py39 -- tests/unit'
sh './script/build/osx'
dir ('dist') {
checksum('docker-compose-Darwin-x86_64')
@@ -95,7 +98,7 @@ pipeline {
}
stage('linux binary') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
checkout scm
@@ -114,11 +117,11 @@ pipeline {
label 'windows-python'
}
environment {
PATH = "$PATH;C:\\Python37;C:\\Python37\\Scripts"
PATH = "C:\\Python39;C:\\Python39\\Scripts;$PATH"
}
steps {
checkout scm
bat 'tox.exe -e py37 -- tests/unit'
bat 'tox.exe -e py39 -- tests/unit'
powershell '.\\script\\build\\windows.ps1'
dir ('dist') {
checksum('docker-compose-Windows-x86_64.exe')
@@ -131,7 +134,7 @@ pipeline {
}
stage('alpine image') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
buildRuntimeImage('alpine')
@@ -139,7 +142,7 @@ pipeline {
}
stage('debian image') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
buildRuntimeImage('debian')
@@ -154,7 +157,7 @@ pipeline {
parallel {
stage('Pushing images') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
steps {
pushRuntimeImage('alpine')
@@ -163,7 +166,7 @@ pipeline {
}
stage('Creating Github Release') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
environment {
GITHUB_TOKEN = credentials('github-release-token')
@@ -195,7 +198,7 @@ pipeline {
}
stage('Publishing Python packages') {
agent {
label 'linux && docker && ubuntu-2004'
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
}
environment {
PYPIRC = credentials('pypirc-docker-dsg-cibot')
@@ -219,7 +222,7 @@ pipeline {
def buildImage(baseImage) {
def scmvar = checkout(scm)
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
image = docker.image(imageName)
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -229,7 +232,7 @@ def buildImage(baseImage) {
ansiColor('xterm') {
sh """docker build -t ${imageName} \\
--target build \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg DISTRO="${baseImage}" \\
--build-arg GIT_COMMIT="${scmvar.GIT_COMMIT}" \\
.\\
"""
@@ -244,9 +247,9 @@ def buildImage(baseImage) {
def runTests(dockerVersion, pythonVersion, baseImage) {
return {
stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
node("linux && docker && ubuntu-2004") {
node("linux && docker && ubuntu-2004 && amd64 && cgroup1") {
def scmvar = checkout(scm)
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -256,6 +259,8 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
--volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-e "DOCKER_TLS_CERTDIR=" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersion}" \\
@@ -276,7 +281,7 @@ def buildRuntimeImage(baseImage) {
def imageName = "docker/compose:${baseImage}-${env.BRANCH_NAME}"
ansiColor('xterm') {
sh """docker build -t ${imageName} \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg DISTRO="${baseImage}" \\
--build-arg GIT_COMMIT="${scmvar.GIT_COMMIT.take(7)}" \\
.
"""


@@ -1 +1 @@
__version__ = '1.28.0dev'
__version__ = '1.28.4'


@@ -1,3 +1,6 @@
import enum
import os
from ..const import IS_WINDOWS_PLATFORM
NAMES = [
@@ -12,6 +15,21 @@ NAMES = [
]
@enum.unique
class AnsiMode(enum.Enum):
"""Enumeration for when to output ANSI colors."""
NEVER = "never"
ALWAYS = "always"
AUTO = "auto"
def use_ansi_codes(self, stream):
if self is AnsiMode.ALWAYS:
return True
if self is AnsiMode.NEVER or os.environ.get('CLICOLOR') == '0':
return False
return stream.isatty()
def get_pairs():
for i, name in enumerate(NAMES):
yield (name, str(30 + i))
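The `AnsiMode` logic in this hunk can be exercised on its own. This is a reconstructed standalone sketch (not the module itself); `stream` is anything with an `isatty()` method:

```python
import enum
import io
import os


@enum.unique
class AnsiMode(enum.Enum):
    """When to emit ANSI color codes (mirrors the enum added above)."""
    NEVER = "never"
    ALWAYS = "always"
    AUTO = "auto"

    def use_ansi_codes(self, stream):
        if self is AnsiMode.ALWAYS:
            return True
        # CLICOLOR=0 disables colors even in AUTO mode
        if self is AnsiMode.NEVER or os.environ.get('CLICOLOR') == '0':
            return False
        # AUTO: color only when writing to a terminal
        return stream.isatty()


pipe = io.StringIO()  # not a TTY, so AUTO falls back to no color
assert AnsiMode.ALWAYS.use_ansi_codes(pipe) is True
assert AnsiMode.AUTO.use_ansi_codes(pipe) is False
```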


@@ -35,7 +35,7 @@ SILENT_COMMANDS = {
def project_from_options(project_dir, options, additional_options=None):
additional_options = additional_options or {}
override_dir = options.get('--project-directory')
override_dir = get_project_dir(options)
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or project_dir, environment_file)
environment.silent = options.get('COMMAND', None) in SILENT_COMMANDS
@@ -59,14 +59,15 @@ def project_from_options(project_dir, options, additional_options=None):
return get_project(
project_dir,
get_config_path_from_options(project_dir, options, environment),
get_config_path_from_options(options, environment),
project_name=options.get('--project-name'),
verbose=options.get('--verbose'),
context=context,
environment=environment,
override_dir=override_dir,
interpolate=(not additional_options.get('--no-interpolate')),
environment_file=environment_file
environment_file=environment_file,
enabled_profiles=get_profiles_from_options(options, environment)
)
@@ -86,21 +87,29 @@ def set_parallel_limit(environment):
parallel.GlobalLimit.set_global_limit(parallel_limit)
def get_project_dir(options):
override_dir = None
files = get_config_path_from_options(options, os.environ)
if files:
if files[0] == '-':
return '.'
override_dir = os.path.dirname(files[0])
return options.get('--project-directory') or override_dir
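The precedence in the new `get_project_dir` helper can be sketched standalone. This is a reconstruction, with `config_files` standing in for the result of `get_config_path_from_options(options, os.environ)`:

```python
import os


def get_project_dir(options, config_files):
    """--project-directory wins; else fall back to the first config file's directory."""
    override_dir = None
    if config_files:
        if config_files[0] == '-':  # compose file read from stdin
            return '.'
        override_dir = os.path.dirname(config_files[0])
    return options.get('--project-directory') or override_dir


assert get_project_dir({}, ['-']) == '.'
assert get_project_dir({}, ['deploy/docker-compose.yml']) == 'deploy'
assert get_project_dir({'--project-directory': '/srv/app'}, ['deploy/a.yml']) == '/srv/app'
```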
def get_config_from_options(base_dir, options, additional_options=None):
additional_options = additional_options or {}
override_dir = options.get('--project-directory')
override_dir = get_project_dir(options)
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or base_dir, environment_file)
config_path = get_config_path_from_options(
base_dir, options, environment
)
config_path = get_config_path_from_options(options, environment)
return config.load(
config.find(base_dir, config_path, environment, override_dir),
not additional_options.get('--no-interpolate')
)
def get_config_path_from_options(base_dir, options, environment):
def get_config_path_from_options(options, environment):
def unicode_paths(paths):
return [p.decode('utf-8') if isinstance(p, bytes) else p for p in paths]
@@ -115,9 +124,21 @@ def get_config_path_from_options(base_dir, options, environment):
return None
def get_profiles_from_options(options, environment):
profile_option = options.get('--profile')
if profile_option:
return profile_option
profiles = environment.get('COMPOSE_PROFILES')
if profiles:
return profiles.split(',')
return []
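The profile-resolution order added here is small enough to verify in isolation. A reconstructed sketch, approximating `environment` with a plain dict:

```python
def get_profiles_from_options(options, environment):
    # --profile flags win; the COMPOSE_PROFILES env var is the fallback
    profile_option = options.get('--profile')
    if profile_option:
        return profile_option
    profiles = environment.get('COMPOSE_PROFILES')
    if profiles:
        return profiles.split(',')
    return []


assert get_profiles_from_options({'--profile': ['debug']}, {}) == ['debug']
assert get_profiles_from_options({}, {'COMPOSE_PROFILES': 'web,db'}) == ['web', 'db']
assert get_profiles_from_options({}, {}) == []
```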
def get_project(project_dir, config_path=None, project_name=None, verbose=False,
context=None, environment=None, override_dir=None,
interpolate=True, environment_file=None):
interpolate=True, environment_file=None, enabled_profiles=None):
if not environment:
environment = Environment.from_env_file(project_dir)
config_details = config.find(project_dir, config_path, environment, override_dir)
@@ -139,6 +160,7 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
client,
environment.get('DOCKER_DEFAULT_PLATFORM'),
execution_context_labels(config_details, environment_file),
enabled_profiles,
)


@@ -166,8 +166,8 @@ def docker_client(environment, version=None, context=None, tls_version=None):
kwargs['credstore_env'] = {
'LD_LIBRARY_PATH': environment.get('LD_LIBRARY_PATH_ORIG'),
}
client = APIClient(**kwargs)
use_paramiko_ssh = int(environment.get('COMPOSE_PARAMIKO_SSH', 0))
client = APIClient(use_ssh_client=not use_paramiko_ssh, **kwargs)
client._original_base_url = kwargs.get('base_url')
return client
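The `COMPOSE_PARAMIKO_SSH` toggle introduced in this hunk reduces to a small truthiness check. A sketch with `environment` as a plain dict:

```python
def use_ssh_client_flag(environment):
    # COMPOSE_PARAMIKO_SSH=1 restores the old paramiko transport;
    # by default the Docker CLI's local SSH client is shelled out to
    use_paramiko_ssh = int(environment.get('COMPOSE_PARAMIKO_SSH', 0))
    return not use_paramiko_ssh


assert use_ssh_client_flag({}) is True
assert use_ssh_client_flag({'COMPOSE_PARAMIKO_SSH': '1'}) is False
```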


@@ -17,10 +17,16 @@ class DocoptDispatcher:
self.command_class = command_class
self.options = options
@classmethod
def get_command_and_options(cls, doc_entity, argv, options):
command_help = getdoc(doc_entity)
opt = docopt_full_help(command_help, argv, **options)
command = opt['COMMAND']
return command_help, opt, command
def parse(self, argv):
command_help = getdoc(self.command_class)
options = docopt_full_help(command_help, argv, **self.options)
command = options['COMMAND']
command_help, options, command = DocoptDispatcher.get_command_and_options(
self.command_class, argv, self.options)
if command is None:
raise SystemExit(command_help)


@@ -16,18 +16,22 @@ from compose.utils import split_buffer
class LogPresenter:
def __init__(self, prefix_width, color_func):
def __init__(self, prefix_width, color_func, keep_prefix=True):
self.prefix_width = prefix_width
self.color_func = color_func
self.keep_prefix = keep_prefix
def present(self, container, line):
prefix = container.name_without_project.ljust(self.prefix_width)
return '{prefix} {line}'.format(
prefix=self.color_func(prefix + ' |'),
line=line)
to_log = '{line}'.format(line=line)
if self.keep_prefix:
prefix = container.name_without_project.ljust(self.prefix_width)
to_log = '{prefix} '.format(prefix=self.color_func(prefix + ' |')) + to_log
return to_log
def build_log_presenters(service_names, monochrome):
def build_log_presenters(service_names, monochrome, keep_prefix=True):
"""Return an iterable of functions.
Each function can be used to format the logs output of a container.
@@ -38,7 +42,7 @@ def build_log_presenters(service_names, monochrome):
return text
for color_func in cycle([no_color] if monochrome else colors.rainbow()):
yield LogPresenter(prefix_width, color_func)
yield LogPresenter(prefix_width, color_func, keep_prefix)
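The `keep_prefix` behaviour backing `--no-log-prefix` can be sketched standalone; `FakeContainer` is a hypothetical stand-in for the real container object:

```python
class LogPresenter:
    def __init__(self, prefix_width, color_func, keep_prefix=True):
        self.prefix_width = prefix_width
        self.color_func = color_func
        self.keep_prefix = keep_prefix

    def present(self, container, line):
        to_log = line
        if self.keep_prefix:
            # pad the service name so all prefixes line up in the output
            prefix = container.name_without_project.ljust(self.prefix_width)
            to_log = self.color_func(prefix + ' |') + ' ' + to_log
        return to_log


class FakeContainer:
    name_without_project = 'web_1'


def no_color(text):
    return text


assert LogPresenter(8, no_color).present(FakeContainer(), 'ready\n') == \
    'web_1'.ljust(8) + ' | ' + 'ready\n'
assert LogPresenter(8, no_color, keep_prefix=False).present(FakeContainer(), 'ready\n') == 'ready\n'
```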
def max_name_width(service_names, max_index_width=3):
@@ -154,10 +158,8 @@ class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')):
def tail_container_logs(container, presenter, queue, log_args):
generator = get_log_generator(container)
try:
for item in generator(container, log_args):
for item in build_log_generator(container, log_args):
queue.put(QueueItem.new(presenter.present(container, item)))
except Exception as e:
queue.put(QueueItem.exception(e))
@@ -167,20 +169,6 @@ def tail_container_logs(container, presenter, queue, log_args):
queue.put(QueueItem.stop(container.name))
def get_log_generator(container):
if container.has_api_logs:
return build_log_generator
return build_no_log_generator
def build_no_log_generator(container, log_args):
"""Return a generator that prints a warning about logs and waits for
container to exit.
"""
yield "WARNING: no logs are available with the '{}' log driver\n".format(
container.log_driver)
def build_log_generator(container, log_args):
# if the container doesn't have a log_stream we need to attach to container
# before log printer starts running


@@ -2,7 +2,6 @@ import contextlib
import functools
import json
import logging
import os
import pipes
import re
import subprocess
@@ -26,6 +25,8 @@ from ..config.serialize import serialize_config
from ..config.types import VolumeSpec
from ..const import IS_WINDOWS_PLATFORM
from ..errors import StreamParseError
from ..metrics.decorator import metrics
from ..parallel import ParallelStreamWriter
from ..progress_stream import StreamOutputError
from ..project import get_image_digests
from ..project import MissingDigests
@@ -38,7 +39,10 @@ from ..service import ConvergenceStrategy
from ..service import ImageType
from ..service import NeedsBuildError
from ..service import OperationFailedError
from ..utils import filter_attached_for_up
from .colors import AnsiMode
from .command import get_config_from_options
from .command import get_project_dir
from .command import project_from_options
from .docopt_command import DocoptDispatcher
from .docopt_command import get_handler
@@ -51,60 +55,122 @@ from .log_printer import LogPrinter
from .utils import get_version_info
from .utils import human_readable_file_size
from .utils import yesno
from compose.metrics.client import MetricsCommand
from compose.metrics.client import Status
if not IS_WINDOWS_PLATFORM:
from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation
log = logging.getLogger(__name__)
console_handler = logging.StreamHandler(sys.stderr)
def main():
def main(): # noqa: C901
signals.ignore_sigpipe()
command = None
try:
command = dispatch()
command()
_, opts, command = DocoptDispatcher.get_command_and_options(
TopLevelCommand,
get_filtered_args(sys.argv[1:]),
{'options_first': True, 'version': get_version_info('compose')})
except Exception:
pass
try:
command_func = dispatch()
command_func()
except (KeyboardInterrupt, signals.ShutdownException):
log.error("Aborting.")
sys.exit(1)
exit_with_metrics(command, "Aborting.", status=Status.FAILURE)
except (UserError, NoSuchService, ConfigurationError,
ProjectError, OperationFailedError) as e:
log.error(e.msg)
sys.exit(1)
exit_with_metrics(command, e.msg, status=Status.FAILURE)
except BuildError as e:
reason = ""
if e.reason:
reason = " : " + e.reason
log.error("Service '{}' failed to build{}".format(e.service.name, reason))
sys.exit(1)
exit_with_metrics(command,
"Service '{}' failed to build{}".format(e.service.name, reason),
status=Status.FAILURE)
except StreamOutputError as e:
log.error(e)
sys.exit(1)
exit_with_metrics(command, e, status=Status.FAILURE)
except NeedsBuildError as e:
log.error("Service '{}' needs to be built, but --no-build was passed.".format(e.service.name))
sys.exit(1)
exit_with_metrics(command,
"Service '{}' needs to be built, but --no-build was passed.".format(
e.service.name), status=Status.FAILURE)
except NoSuchCommand as e:
commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
log.error("No such command: %s\n\n%s", e.command, commands)
sys.exit(1)
exit_with_metrics(e.command, "No such command: {}\n\n{}".format(e.command, commands))
except (errors.ConnectionError, StreamParseError):
sys.exit(1)
exit_with_metrics(command, status=Status.FAILURE)
except SystemExit as e:
status = Status.SUCCESS
if len(sys.argv) > 1 and '--help' not in sys.argv:
status = Status.FAILURE
if command and len(sys.argv) >= 3 and sys.argv[2] == '--help':
command = '--help ' + command
if not command and len(sys.argv) >= 2 and sys.argv[1] == '--help':
command = '--help'
msg = e.args[0] if len(e.args) else ""
code = 0
if isinstance(e.code, int):
code = e.code
exit_with_metrics(command, log_msg=msg, status=status,
exit_code=code)
def get_filtered_args(args):
if args[0] in ('-h', '--help'):
return []
if args[0] == '--version':
return ['version']
def exit_with_metrics(command, log_msg=None, status=Status.SUCCESS, exit_code=1):
if log_msg:
if not exit_code:
log.info(log_msg)
else:
log.error(log_msg)
MetricsCommand(command, status=status).send_metrics()
sys.exit(exit_code)
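`exit_with_metrics` routes the message to the appropriate log level before exiting. A minimal sketch with logging and the metrics call stubbed out (`send_metrics` is a hypothetical stand-in for `MetricsCommand(command, status=status).send_metrics()`):

```python
import sys


def exit_with_metrics(command, log_msg=None, status='success', exit_code=1,
                      send_metrics=lambda command, status: None):
    if log_msg:
        if not exit_code:
            print(log_msg)                    # stands in for log.info
        else:
            print(log_msg, file=sys.stderr)   # stands in for log.error
    send_metrics(command, status)             # metrics are sent before exiting
    sys.exit(exit_code)


try:
    exit_with_metrics('up', 'Aborting.', status='failure')
except SystemExit as e:
    assert e.code == 1
```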
def dispatch():
setup_logging()
console_stream = sys.stderr
console_handler = logging.StreamHandler(console_stream)
setup_logging(console_handler)
dispatcher = DocoptDispatcher(
TopLevelCommand,
{'options_first': True, 'version': get_version_info('compose')})
options, handler, command_options = dispatcher.parse(sys.argv[1:])
ansi_mode = AnsiMode.AUTO
try:
if options.get("--ansi"):
ansi_mode = AnsiMode(options.get("--ansi"))
except ValueError:
raise UserError(
'Invalid value for --ansi: {}. Expected one of {}.'.format(
options.get("--ansi"),
', '.join(m.value for m in AnsiMode)
)
)
if options.get("--no-ansi"):
if options.get("--ansi"):
raise UserError("--no-ansi and --ansi cannot be combined.")
log.warning('--no-ansi option is deprecated and will be removed in future versions.')
ansi_mode = AnsiMode.NEVER
setup_console_handler(console_handler,
options.get('--verbose'),
set_no_color_if_clicolor(options.get('--no-ansi')),
ansi_mode.use_ansi_codes(console_handler.stream),
options.get("--log-level"))
setup_parallel_logger(set_no_color_if_clicolor(options.get('--no-ansi')))
if options.get('--no-ansi'):
setup_parallel_logger(ansi_mode)
if ansi_mode is AnsiMode.NEVER:
command_options['--no-color'] = True
return functools.partial(perform_command, options, handler, command_options)
@@ -126,23 +192,23 @@ def perform_command(options, handler, command_options):
handler(command, command_options)
def setup_logging():
def setup_logging(console_handler):
root_logger = logging.getLogger()
root_logger.addHandler(console_handler)
root_logger.setLevel(logging.DEBUG)
# Disable requests logging
# Disable requests and docker-py logging
logging.getLogger("urllib3").propagate = False
logging.getLogger("requests").propagate = False
logging.getLogger("docker").propagate = False
def setup_parallel_logger(noansi):
if noansi:
import compose.parallel
compose.parallel.ParallelStreamWriter.set_noansi()
def setup_parallel_logger(ansi_mode):
ParallelStreamWriter.set_default_ansi_mode(ansi_mode)
def setup_console_handler(handler, verbose, noansi=False, level=None):
if handler.stream.isatty() and noansi is False:
def setup_console_handler(handler, verbose, use_console_formatter=True, level=None):
if use_console_formatter:
format_class = ConsoleWarningFormatter
else:
format_class = logging.Formatter
@@ -182,7 +248,7 @@ class TopLevelCommand:
"""Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [options] [--] [COMMAND] [ARGS...]
docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
@@ -190,10 +256,12 @@ class TopLevelCommand:
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name
(default: directory name)
--profile NAME Specify a profile to enable
-c, --context NAME Specify a context name
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi Do not print ANSI control characters
--ansi (never|always|auto) Control when to print ANSI control characters
--no-ansi Do not print ANSI control characters (DEPRECATED)
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
@@ -214,7 +282,7 @@ class TopLevelCommand:
build Build or rebuild services
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
down Stop and remove resources
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
@@ -244,13 +312,14 @@ class TopLevelCommand:
@property
def project_dir(self):
return self.toplevel_options.get('--project-directory') or '.'
return get_project_dir(self.toplevel_options)
@property
def toplevel_environment(self):
environment_file = self.toplevel_options.get('--env-file')
return Environment.from_env_file(self.project_dir, environment_file)
@metrics()
def build(self, options):
"""
Build or rebuild services.
@@ -270,8 +339,6 @@ class TopLevelCommand:
--no-rm Do not remove intermediate containers after a successful build.
--parallel Build images in parallel.
--progress string Set type of progress output (auto, plain, tty).
EXPERIMENTAL flag for native builder.
To enable, run with COMPOSE_DOCKER_CLI_BUILD=1)
--pull Always attempt to pull a newer version of the image.
-q, --quiet Don't print anything to STDOUT
"""
@@ -285,7 +352,7 @@ class TopLevelCommand:
)
build_args = resolve_build_args(build_args, self.toplevel_environment)
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)
self.project.build(
service_names=options['SERVICE'],
@@ -302,6 +369,7 @@ class TopLevelCommand:
progress=options.get('--progress'),
)
@metrics()
def config(self, options):
"""
Validate and view the Compose file.
@@ -351,6 +419,7 @@ class TopLevelCommand:
print(serialize_config(compose_config, image_digests, not options['--no-interpolate']))
@metrics()
def create(self, options):
"""
Creates containers for a service.
@@ -379,6 +448,7 @@ class TopLevelCommand:
do_build=build_action_from_opts(options),
)
@metrics()
def down(self, options):
"""
Stops containers and removes containers, networks, volumes, and images
@@ -430,6 +500,7 @@ class TopLevelCommand:
Options:
--json Output events as a stream of json objects
"""
def format_event(event):
attributes = ["%s=%s" % item for item in event['attributes'].items()]
return ("{time} {type} {action} {id} ({attrs})").format(
@@ -446,6 +517,7 @@ class TopLevelCommand:
print(formatter(event))
sys.stdout.flush()
@metrics("exec")
def exec_command(self, options):
"""
Execute a command in a running container
@@ -522,6 +594,7 @@ class TopLevelCommand:
sys.exit(exit_code)
@classmethod
@metrics()
def help(cls, options):
"""
Get help on a command.
@@ -535,6 +608,7 @@ class TopLevelCommand:
print(getdoc(subject))
@metrics()
def images(self, options):
"""
List images used by the created containers.
@@ -589,6 +663,7 @@ class TopLevelCommand:
])
print(Formatter.table(headers, rows))
@metrics()
def kill(self, options):
"""
Force stop service containers.
@@ -603,6 +678,7 @@ class TopLevelCommand:
self.project.kill(service_names=options['SERVICE'], signal=signal)
@metrics()
def logs(self, options):
"""
View output from containers.
@@ -610,11 +686,12 @@ class TopLevelCommand:
Usage: logs [options] [--] [SERVICE...]
Options:
--no-color Produce monochrome output.
-f, --follow Follow log output.
-t, --timestamps Show timestamps.
--tail="all" Number of lines to show from the end of the logs
for each container.
--no-color Produce monochrome output.
-f, --follow Follow log output.
-t, --timestamps Show timestamps.
--tail="all" Number of lines to show from the end of the logs
for each container.
--no-log-prefix Don't print prefix in logs.
"""
containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
@@ -633,10 +710,12 @@ class TopLevelCommand:
log_printer_from_project(
self.project,
containers,
set_no_color_if_clicolor(options['--no-color']),
options['--no-color'],
log_args,
event_stream=self.project.events(service_names=options['SERVICE'])).run()
event_stream=self.project.events(service_names=options['SERVICE']),
keep_prefix=not options['--no-log-prefix']).run()
@metrics()
def pause(self, options):
"""
Pause services.
@@ -646,6 +725,7 @@ class TopLevelCommand:
containers = self.project.pause(service_names=options['SERVICE'])
exit_if(not containers, 'No containers to pause', 1)
@metrics()
def port(self, options):
"""
Print the public port for a port binding.
@@ -667,6 +747,7 @@ class TopLevelCommand:
options['PRIVATE_PORT'],
protocol=options.get('--protocol') or 'tcp') or '')
@metrics()
def ps(self, options):
"""
List containers.
@@ -723,6 +804,7 @@ class TopLevelCommand:
])
print(Formatter.table(headers, rows))
@metrics()
def pull(self, options):
"""
Pulls images for services defined in a Compose file, but does not start the containers.
@@ -746,6 +828,7 @@ class TopLevelCommand:
include_deps=options.get('--include-deps'),
)
@metrics()
def push(self, options):
"""
Pushes images for services.
@@ -760,6 +843,7 @@ class TopLevelCommand:
ignore_push_failures=options.get('--ignore-push-failures')
)
@metrics()
def rm(self, options):
"""
Removes stopped service containers.
@@ -804,6 +888,7 @@ class TopLevelCommand:
else:
print("No stopped containers")
@metrics()
def run(self, options):
"""
Run a one-off command on a service.
@@ -864,6 +949,7 @@ class TopLevelCommand:
self.toplevel_options, self.toplevel_environment
)
@metrics()
def scale(self, options):
"""
Set number of containers to run for a service.
@@ -892,6 +978,7 @@ class TopLevelCommand:
for service_name, num in parse_scale_args(options['SERVICE=NUM']).items():
self.project.get_service(service_name).scale(num, timeout=timeout)
@metrics()
def start(self, options):
"""
Start existing containers.
@@ -901,6 +988,7 @@ class TopLevelCommand:
containers = self.project.start(service_names=options['SERVICE'])
exit_if(not containers, 'No containers to start', 1)
@metrics()
def stop(self, options):
"""
Stop running containers without removing them.
@@ -916,6 +1004,7 @@ class TopLevelCommand:
timeout = timeout_from_opts(options)
self.project.stop(service_names=options['SERVICE'], timeout=timeout)
@metrics()
def restart(self, options):
"""
Restart running containers.
@@ -930,6 +1019,7 @@ class TopLevelCommand:
containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
exit_if(not containers, 'No containers to restart', 1)
@metrics()
def top(self, options):
"""
Display the running processes
@@ -957,6 +1047,7 @@ class TopLevelCommand:
print(container.name)
print(Formatter.table(headers, rows))
@metrics()
def unpause(self, options):
"""
Unpause services.
@@ -966,6 +1057,7 @@ class TopLevelCommand:
containers = self.project.unpause(service_names=options['SERVICE'])
exit_if(not containers, 'No containers to unpause', 1)
@metrics()
def up(self, options):
"""
Builds, (re)creates, starts, and attaches to containers for a service.
@@ -1017,6 +1109,7 @@ class TopLevelCommand:
container. Implies --abort-on-container-exit.
--scale SERVICE=NUM Scale SERVICE to NUM instances. Overrides the
`scale` setting in the Compose file if present.
--no-log-prefix Don't print prefix in logs.
"""
start_deps = not options['--no-deps']
always_recreate_deps = options['--always-recreate-deps']
@@ -1028,6 +1121,7 @@ class TopLevelCommand:
detached = options.get('--detach')
no_start = options.get('--no-start')
attach_dependencies = options.get('--attach-dependencies')
keep_prefix = not options.get('--no-log-prefix')
if detached and (cascade_stop or exit_value_from or attach_dependencies):
raise UserError(
@@ -1042,7 +1136,7 @@ class TopLevelCommand:
for excluded in [x for x in opts if options.get(x) and no_start]:
raise UserError('--no-start and {} cannot be combined.'.format(excluded))
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)
with up_shutdown_context(self.project, service_names, timeout, detached):
warn_for_swarm_mode(self.project.client)
@@ -1064,6 +1158,7 @@ class TopLevelCommand:
renew_anonymous_volumes=options.get('--renew-anon-volumes'),
silent=options.get('--quiet-pull'),
cli=native_builder,
attach_dependencies=attach_dependencies,
)
try:
@@ -1091,10 +1186,11 @@ class TopLevelCommand:
log_printer = log_printer_from_project(
self.project,
attached_containers,
set_no_color_if_clicolor(options['--no-color']),
options['--no-color'],
{'follow': True},
cascade_stop,
event_stream=self.project.events(service_names=service_names))
event_stream=self.project.events(service_names=service_names),
keep_prefix=keep_prefix)
print("Attaching to", list_containers(log_printer.containers))
cascade_starter = log_printer.run()
@@ -1112,6 +1208,7 @@ class TopLevelCommand:
sys.exit(exit_code)
@classmethod
@metrics()
def version(cls, options):
"""
Show version information and quit.
@@ -1376,29 +1473,28 @@ def get_docker_start_call(container_options, container_id):
def log_printer_from_project(
project,
containers,
monochrome,
log_args,
cascade_stop=False,
event_stream=None,
project,
containers,
monochrome,
log_args,
cascade_stop=False,
event_stream=None,
keep_prefix=True,
):
return LogPrinter(
containers,
build_log_presenters(project.service_names, monochrome),
build_log_presenters(project.service_names, monochrome, keep_prefix),
event_stream or project.events(),
cascade_stop=cascade_stop,
log_args=log_args)
def filter_attached_containers(containers, service_names, attach_dependencies=False):
if attach_dependencies or not service_names:
return containers
return [
container
for container in containers if container.service in service_names
]
return filter_attached_for_up(
containers,
service_names,
attach_dependencies,
lambda container: container.service)
@contextlib.contextmanager
@@ -1574,7 +1670,3 @@ def warn_for_swarm_mode(client):
"To deploy your application across the swarm, "
"use `docker stack deploy`.\n"
)
def set_no_color_if_clicolor(no_color_flag):
return no_color_flag or os.environ.get('CLICOLOR') == "0"


@@ -1,14 +1,16 @@
{
"$schema": "http://json-schema.org/draft/2019-09/schema#",
"id": "config_schema_compose_spec.json",
"id": "compose_spec.json",
"type": "object",
"title": "Compose Specification",
"description": "The Compose file is a YAML file defining a multi-containers based application.",
"properties": {
"version": {
"type": "string",
"description": "Version of the Compose specification used. Tools not implementing required version MUST reject the configuration file."
},
"services": {
"id": "#/properties/services",
"type": "object",
@@ -19,6 +21,7 @@
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
@@ -28,6 +31,7 @@
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
@@ -38,6 +42,7 @@
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
@@ -48,6 +53,7 @@
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
@@ -59,12 +65,16 @@
"additionalProperties": false
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
@@ -77,7 +87,7 @@
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"cache_from": {"type": "array", "items": {"type": "string"}},
"network": {"type": "string"},
"target": {"type": "string"},
"shm_size": {"type": ["integer", "string"]},
@@ -153,7 +163,7 @@
"cpu_period": {"type": ["number", "string"]},
"cpu_rt_period": {"type": ["number", "string"]},
"cpu_rt_runtime": {"type": ["number", "string"]},
"cpus": {"type": "number", "minimum": 0},
"cpus": {"type": ["number", "string"]},
"cpuset": {"type": "string"},
"credential_spec": {
"type": "object",
@@ -190,7 +200,6 @@
"device_cgroup_rules": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_opt": {"type": "array","items": {"type": "string"}, "uniqueItems": true},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
@@ -211,12 +220,12 @@
},
"uniqueItems": true
},
"extends": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"service": {"type": "string"},
"file": {"type": "string"}
@@ -245,6 +254,7 @@
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
@@ -258,7 +268,7 @@
"patternProperties": {"^x-": {}}
},
"mac_address": {"type": "string"},
"mem_limit": {"type": "string"},
"mem_limit": {"type": ["number", "string"]},
"mem_reservation": {"type": ["string", "integer"]},
"mem_swappiness": {"type": "integer"},
"memswap_limit": {"type": ["number", "string"]},
@@ -318,8 +328,9 @@
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"profiles": {"$ref": "#/definitions/list_of_strings"},
"pull_policy": {"type": "string", "enum": [
"always", "never", "if_not_present"
"always", "never", "if_not_present", "build"
]},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
@@ -425,9 +436,9 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
}
],
"uniqueItems": true
}
]
},
"uniqueItems": true
},
"volumes_from": {
"type": "array",
@@ -503,7 +514,7 @@
"limits": {
"type": "object",
"properties": {
"cpus": {"type": "number", "minimum": 0},
"cpus": {"type": ["number", "string"]},
"memory": {"type": "string"}
},
"additionalProperties": false,
@@ -512,9 +523,10 @@
"reservations": {
"type": "object",
"properties": {
"cpus": {"type": "number", "minimum": 0},
"cpus": {"type": ["number", "string"]},
"memory": {"type": "string"},
"generic_resources": {"$ref": "#/definitions/generic_resources"}
"generic_resources": {"$ref": "#/definitions/generic_resources"},
"devices": {"$ref": "#/definitions/devices"}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
@@ -558,6 +570,7 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"generic_resources": {
"id": "#/definitions/generic_resources",
"type": "array",
@@ -578,6 +591,24 @@
"patternProperties": {"^x-": {}}
}
},
"devices": {
"id": "#/definitions/devices",
"type": "array",
"items": {
"type": "object",
"properties": {
"capabilities": {"$ref": "#/definitions/list_of_strings"},
"count": {"type": ["string", "integer"]},
"device_ids": {"$ref": "#/definitions/list_of_strings"},
"driver":{"type": "string"},
"options":{"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
}
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
@@ -607,10 +638,10 @@
"additionalProperties": false,
"patternProperties": {"^.+$": {"type": "string"}}
}
}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"additionalProperties": false,
"patternProperties": {"^x-": {}}
}
},
"options": {
"type": "object",
@@ -640,6 +671,7 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
@@ -668,6 +700,7 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
@@ -693,6 +726,7 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"config": {
"id": "#/definitions/config",
"type": "object",
@@ -714,17 +748,20 @@
"additionalProperties": false,
"patternProperties": {"^x-": {}}
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
@@ -739,6 +776,7 @@
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"blkio_limit": {
"type": "object",
"properties": {
@@ -755,6 +793,7 @@
},
"additionalProperties": false
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",


@@ -20,6 +20,7 @@ from ..utils import json_hash
from ..utils import parse_bytes
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from ..version import ComposeVersion
from .environment import env_vars_from_file
from .environment import Environment
from .environment import split_env
@@ -132,6 +133,7 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'logging',
'network_mode',
'platform',
'profiles',
'scale',
'stop_grace_period',
]
@@ -184,6 +186,13 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
def from_filename(cls, filename):
return cls(filename, load_yaml(filename))
@cached_property
def config_version(self):
version = self.config.get('version', None)
if isinstance(version, dict):
return V1
return ComposeVersion(version) if version else self.version
@cached_property
def version(self):
version = self.config.get('version', None)
@@ -222,15 +231,13 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
'Version "{}" in "{}" is invalid.'
.format(version, self.filename))
if version.startswith("1"):
version = V1
if version == V1:
if version.startswith("1"):
raise ConfigurationError(
'Version in "{}" is invalid. {}'
.format(self.filename, VERSION_EXPLANATION)
)
return version
return VERSION
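The version-check hunk above now rejects v1 files outright and normalizes everything else to the single supported schema, while the new `config_version` property preserves what the file declared. A minimal sketch of that normalization (the `VERSION` value and error text here are assumptions, not the real constants):

```python
# Hypothetical stand-in for the compose.const VERSION constant.
VERSION = "3.9"

class ConfigurationError(Exception):
    pass

def normalize_version(declared, filename):
    # v1 files raise; any other declared version collapses to the one
    # supported schema. config_version (not shown) keeps the declared
    # value so `docker-compose config` can round-trip it.
    if declared is None:
        return VERSION
    if declared.startswith("1"):
        raise ConfigurationError(
            'Version in "{}" is invalid.'.format(filename))
    return VERSION
```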
def get_service(self, name):
return self.get_service_dicts()[name]
@@ -253,8 +260,10 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
return {} if self.version == V1 else self.config.get('configs', {})
class Config(namedtuple('_Config', 'version services volumes networks secrets configs')):
class Config(namedtuple('_Config', 'config_version version services volumes networks secrets configs')):
"""
:param config_version: configuration file version
:type config_version: int
:param version: configuration version
:type version: int
:param services: List of service description dictionaries
@@ -365,6 +374,23 @@ def find_candidates_in_parent_dirs(filenames, path):
return (candidates, path)
def check_swarm_only_config(service_dicts):
warning_template = (
"Some services ({services}) use the '{key}' key, which will be ignored. "
"Compose does not support '{key}' configuration - use "
"`docker stack deploy` to deploy to a swarm."
)
key = 'configs'
services = [s for s in service_dicts if s.get(key)]
if services:
log.warning(
warning_template.format(
services=", ".join(sorted(s['name'] for s in services)),
key=key
)
)
def load(config_details, interpolate=True):
"""Load the configuration from a working directory and a list of
configuration files. Files are loaded in order, and merged on top
@@ -401,9 +427,10 @@ def load(config_details, interpolate=True):
for service_dict in service_dicts:
match_named_volumes(service_dict, volumes)
version = main_file.version
check_swarm_only_config(service_dicts)
return Config(version, service_dicts, volumes, networks, secrets, configs)
return Config(main_file.config_version, main_file.version,
service_dicts, volumes, networks, secrets, configs)
def load_mapping(config_files, get_func, entity_type, working_dir=None):
@@ -423,20 +450,36 @@ def load_mapping(config_files, get_func, entity_type, working_dir=None):
elif not config.get('name'):
config['name'] = name
if 'driver_opts' in config:
config['driver_opts'] = build_string_dict(
config['driver_opts']
)
if 'labels' in config:
config['labels'] = parse_labels(config['labels'])
if 'file' in config:
config['file'] = expand_path(working_dir, config['file'])
if 'driver_opts' in config:
config['driver_opts'] = build_string_dict(
config['driver_opts']
)
device = format_device_option(entity_type, config)
if device:
config['driver_opts']['device'] = device
return mapping
def format_device_option(entity_type, config):
if entity_type != 'Volume':
return
# default driver is 'local'
driver = config.get('driver', 'local')
if driver != 'local':
return
o = config['driver_opts'].get('o')
device = config['driver_opts'].get('device')
if o and o == 'bind' and device:
fullpath = os.path.abspath(os.path.expanduser(device))
return fullpath
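A self-contained version of the `format_device_option` helper just added, for illustration: only local-driver volumes with a bind mount get their `device` path expanded, so that relative and `~`-prefixed paths resolve before being written back into `driver_opts`.

```python
import os

# Illustrative copy of the new helper; dicts stand in for parsed config.
def format_device_option(entity_type, config):
    if entity_type != 'Volume':
        return None
    if config.get('driver', 'local') != 'local':  # default driver is 'local'
        return None
    opts = config.get('driver_opts', {})
    if opts.get('o') == 'bind' and opts.get('device'):
        return os.path.abspath(os.path.expanduser(opts['device']))
    return None
```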
def validate_external(entity_type, name, config, version):
for k in config.keys():
if entity_type == 'Network' and k == 'driver':
@@ -1024,7 +1067,7 @@ def merge_service_dicts(base, override, version):
for field in [
'cap_add', 'cap_drop', 'expose', 'external_links',
'volumes_from', 'device_cgroup_rules',
'volumes_from', 'device_cgroup_rules', 'profiles',
]:
md.merge_field(field, merge_unique_items_lists, default=[])
@@ -1114,6 +1157,7 @@ def merge_deploy(base, override):
md['resources'] = dict(resources_md)
if md.needs_merge('placement'):
placement_md = MergeDict(md.base.get('placement') or {}, md.override.get('placement') or {})
placement_md.merge_scalar('max_replicas_per_node')
placement_md.merge_field('constraints', merge_unique_items_lists, default=[])
placement_md.merge_field('preferences', merge_unique_objects_lists, default=[])
md['placement'] = dict(placement_md)
@@ -1142,6 +1186,7 @@ def merge_reservations(base, override):
md.merge_scalar('cpus')
md.merge_scalar('memory')
md.merge_sequence('generic_resources', types.GenericResource.parse)
md.merge_field('devices', merge_unique_objects_lists, default=[])
return dict(md)


@@ -113,13 +113,13 @@ class Environment(dict):
)
return super().get(key, *args, **kwargs)
def get_boolean(self, key):
def get_boolean(self, key, default=False):
# Convert a value to a boolean using "common sense" rules.
# Unset, empty, "0" and "false" (i-case) yield False.
# All other values yield True.
value = self.get(key)
if not value:
return False
return default
if value.lower() in ['0', 'false']:
return False
return True
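The hunk above gives `Environment.get_boolean` a caller-supplied default for unset or empty keys. A sketch of the new semantics, with a plain dict standing in for the `Environment` class:

```python
# Sketch of the updated get_boolean: unset/empty now return `default`
# instead of always False; '0'/'false' (case-insensitive) still opt out.
def get_boolean(env, key, default=False):
    value = env.get(key)
    if not value:
        return default
    if value.lower() in ('0', 'false'):
        return False
    return True
```

This is what lets the build/up hunks above flip `COMPOSE_DOCKER_CLI_BUILD` to default to the native builder (`get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)`) while an explicit `0` or `false` still disables it.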


@@ -111,12 +111,14 @@ class TemplateWithDefaults(Template):
var, _, err = braced.partition(':?')
result = mapping.get(var)
if not result:
err = err or var
raise UnsetRequiredSubstitution(err)
return result
elif '?' == sep:
var, _, err = braced.partition('?')
if var in mapping:
return mapping.get(var)
err = err or var
raise UnsetRequiredSubstitution(err)
# Modified from python2.7/string.py
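The `TemplateWithDefaults` hunk above adds `err = err or var`, so a bare `${VAR:?}` or `${VAR?}` reports the variable name instead of an empty message. A standalone sketch of the two required-substitution forms (`:?` fails on unset *or* empty, `?` fails only on unset):

```python
# Illustrative sketch of the ':?'/'?' rules patched above; `braced` is the
# text inside ${...} and `mapping` the available environment values.
class UnsetRequiredSubstitution(Exception):
    pass

def substitute_required(braced, mapping):
    if ':?' in braced:
        var, _, err = braced.partition(':?')
        result = mapping.get(var)
        if not result:                      # unset OR empty fails
            raise UnsetRequiredSubstitution(err or var)
        return result
    var, _, err = braced.partition('?')
    if var in mapping:                      # empty values are allowed
        return mapping.get(var)
    raise UnsetRequiredSubstitution(err or var)
```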


@@ -44,7 +44,7 @@ yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)
def denormalize_config(config, image_digests=None):
result = {'version': str(config.version)}
result = {'version': str(config.config_version)}
denormalized_services = [
denormalize_service_dict(
service_dict,
@@ -121,11 +121,6 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
if version == V1 and 'network_mode' not in service_dict:
service_dict['network_mode'] = 'bridge'
if 'depends_on' in service_dict:
service_dict['depends_on'] = sorted([
svc for svc in service_dict['depends_on'].keys()
])
if 'healthcheck' in service_dict:
if 'interval' in service_dict['healthcheck']:
service_dict['healthcheck']['interval'] = serialize_ns_time_value(


@@ -502,13 +502,13 @@ def get_schema_path():
def load_jsonschema(version):
suffix = "compose_spec"
name = "compose_spec"
if version == V1:
suffix = "v1"
name = "config_schema_v1"
filename = os.path.join(
get_schema_path(),
"config_schema_{}.json".format(suffix))
"{}.json".format(name))
if not os.path.exists(filename):
raise ConfigurationError(


@@ -186,11 +186,6 @@ class Container:
def log_driver(self):
return self.get('HostConfig.LogConfig.Type')
@property
def has_api_logs(self):
log_type = self.log_driver
return not log_type or log_type in ('json-file', 'journald', 'local')
@property
def human_readable_health_status(self):
""" Generate UP status string with up time and health
@@ -204,11 +199,7 @@ class Container:
return status_string
def attach_log_stream(self):
"""A log stream can only be attached if the container uses a
json-file, journald or local log driver.
"""
if self.has_api_logs:
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
def get(self, key):
"""Return a value from the container or None if the value is not set.


compose/metrics/client.py (new file, 64 lines)

@@ -0,0 +1,64 @@
import os
from enum import Enum
import requests
from docker import ContextAPI
from docker.transport import UnixHTTPAdapter
from compose.const import IS_WINDOWS_PLATFORM
if IS_WINDOWS_PLATFORM:
from docker.transport import NpipeHTTPAdapter
class Status(Enum):
SUCCESS = "success"
FAILURE = "failure"
CANCELED = "canceled"
class MetricsSource:
CLI = "docker-compose"
if IS_WINDOWS_PLATFORM:
METRICS_SOCKET_FILE = 'npipe://\\\\.\\pipe\\docker_cli'
else:
METRICS_SOCKET_FILE = 'http+unix:///var/run/docker-cli.sock'
class MetricsCommand(requests.Session):
"""
Representation of a command in the metrics.
"""
def __init__(self, command,
context_type=None, status=Status.SUCCESS,
source=MetricsSource.CLI, uri=None):
super().__init__()
self.command = "compose " + command if command else "compose --help"
self.context = context_type or ContextAPI.get_current_context().context_type or 'moby'
self.source = source
self.status = status.value
self.uri = uri or os.environ.get("METRICS_SOCKET_FILE", METRICS_SOCKET_FILE)
if IS_WINDOWS_PLATFORM:
self.mount("http+unix://", NpipeHTTPAdapter(self.uri))
else:
self.mount("http+unix://", UnixHTTPAdapter(self.uri))
def send_metrics(self):
try:
return self.post("http+unix://localhost/usage",
json=self.to_map(),
timeout=.05,
headers={'Content-Type': 'application/json'})
except Exception as e:
return e
def to_map(self):
return {
'command': self.command,
'context': self.context,
'source': self.source,
'status': self.status,
}


@@ -0,0 +1,21 @@
import functools
from compose.metrics.client import MetricsCommand
from compose.metrics.client import Status
class metrics:
def __init__(self, command_name=None):
self.command_name = command_name
def __call__(self, fn):
@functools.wraps(fn,
assigned=functools.WRAPPER_ASSIGNMENTS,
updated=functools.WRAPPER_UPDATES)
def wrapper(*args, **kwargs):
if not self.command_name:
self.command_name = fn.__name__
result = fn(*args, **kwargs)
MetricsCommand(self.command_name, status=Status.SUCCESS).send_metrics()
return result
return wrapper
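The new `@metrics` decorator above wraps each CLI command and reports its name after a successful run. A sketch of the pattern with the HTTP client replaced by a list append so the control flow is visible (the real code calls `MetricsCommand(name, status=Status.SUCCESS).send_metrics()`):

```python
import functools

sent = []  # stand-in for MetricsCommand(...).send_metrics()

# Sketch of the decorator pattern: the command name defaults to the
# wrapped function's name unless one is passed explicitly.
class metrics:
    def __init__(self, command_name=None):
        self.command_name = command_name

    def __call__(self, fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            name = self.command_name or fn.__name__
            result = fn(*args, **kwargs)
            sent.append(name)  # real code posts to the metrics socket here
            return result
        return wrapper

@metrics()
def up():
    return 'ok'

@metrics('exec')   # explicit name, as on exec_command above
def exec_command():
    return 'ran'
```

`functools.wraps` keeps `__name__`/`__doc__` intact, which matters because docopt reads the command docstrings for help output.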


@@ -11,6 +11,7 @@ from threading import Thread
from docker.errors import APIError
from docker.errors import ImageNotFound
from compose.cli.colors import AnsiMode
from compose.cli.colors import green
from compose.cli.colors import red
from compose.cli.signals import ShutdownException
@@ -83,10 +84,7 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None, fa
objects = list(objects)
stream = sys.stderr
if ParallelStreamWriter.instance:
writer = ParallelStreamWriter.instance
else:
writer = ParallelStreamWriter(stream)
writer = ParallelStreamWriter.get_or_assign_instance(ParallelStreamWriter(stream))
for obj in objects:
writer.add_object(msg, get_name(obj))
@@ -259,19 +257,37 @@ class ParallelStreamWriter:
to jump to the correct line, and write over the line.
"""
noansi = False
lock = Lock()
default_ansi_mode = AnsiMode.AUTO
write_lock = Lock()
instance = None
instance_lock = Lock()
@classmethod
def set_noansi(cls, value=True):
cls.noansi = value
def get_instance(cls):
return cls.instance
def __init__(self, stream):
@classmethod
def get_or_assign_instance(cls, writer):
cls.instance_lock.acquire()
try:
if cls.instance is None:
cls.instance = writer
return cls.instance
finally:
cls.instance_lock.release()
@classmethod
def set_default_ansi_mode(cls, ansi_mode):
cls.default_ansi_mode = ansi_mode
def __init__(self, stream, ansi_mode=None):
if ansi_mode is None:
ansi_mode = self.default_ansi_mode
self.stream = stream
self.use_ansi_codes = ansi_mode.use_ansi_codes(stream)
self.lines = []
self.width = 0
ParallelStreamWriter.instance = self
def add_object(self, msg, obj_index):
if msg is None:
@@ -285,7 +301,7 @@ class ParallelStreamWriter:
return self._write_noansi(msg, obj_index, '')
def _write_ansi(self, msg, obj_index, status):
self.lock.acquire()
self.write_lock.acquire()
position = self.lines.index(msg + obj_index)
diff = len(self.lines) - position
# move up
@@ -297,7 +313,7 @@ class ParallelStreamWriter:
# move back down
self.stream.write("%c[%dB" % (27, diff))
self.stream.flush()
self.lock.release()
self.write_lock.release()
def _write_noansi(self, msg, obj_index, status):
self.stream.write(
@@ -310,17 +326,10 @@ class ParallelStreamWriter:
def write(self, msg, obj_index, status, color_func):
if msg is None:
return
if self.noansi:
self._write_noansi(msg, obj_index, status)
else:
if self.use_ansi_codes:
self._write_ansi(msg, obj_index, color_func(status))
def get_stream_writer():
instance = ParallelStreamWriter.instance
if instance is None:
raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
return instance
else:
self._write_noansi(msg, obj_index, status)
def parallel_operation(containers, operation, options, message):
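The `ParallelStreamWriter` hunks above replace ad-hoc `instance` checks with a locked `get_or_assign_instance` classmethod. A minimal sketch of that singleton-assignment pattern (class name is illustrative):

```python
from threading import Lock

# Sketch of get_or_assign_instance: the first writer registered wins,
# and later callers get the shared instance back under a lock.
class Writer:
    instance = None
    instance_lock = Lock()

    @classmethod
    def get_or_assign_instance(cls, writer):
        with cls.instance_lock:   # same effect as the acquire/try/finally above
            if cls.instance is None:
                cls.instance = writer
            return cls.instance
```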


@@ -39,6 +39,7 @@ from .service import Service
from .service import ServiceIpcMode
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import filter_attached_for_up
from .utils import microseconds_from_time_nano
from .utils import truncate_string
from .volume import ProjectVolumes
@@ -68,13 +69,15 @@ class Project:
"""
A collection of services.
"""
def __init__(self, name, services, client, networks=None, volumes=None, config_version=None):
def __init__(self, name, services, client, networks=None, volumes=None, config_version=None,
enabled_profiles=None):
self.name = name
self.services = services
self.client = client
self.volumes = volumes or ProjectVolumes({})
self.networks = networks or ProjectNetworks({}, False)
self.config_version = config_version
self.enabled_profiles = enabled_profiles or []
def labels(self, one_off=OneOffFilter.exclude, legacy=False):
name = self.name
@@ -86,7 +89,8 @@ class Project:
return labels
@classmethod
def from_config(cls, name, config_data, client, default_platform=None, extra_labels=None):
def from_config(cls, name, config_data, client, default_platform=None, extra_labels=None,
enabled_profiles=None):
"""
Construct a Project from a config.Config object.
"""
@@ -98,7 +102,7 @@ class Project:
networks,
use_networking)
volumes = ProjectVolumes.from_config(name, config_data, client)
project = cls(name, [], client, project_networks, volumes, config_data.version)
project = cls(name, [], client, project_networks, volumes, config_data.version, enabled_profiles)
for service_dict in config_data.services:
service_dict = dict(service_dict)
@@ -128,7 +132,7 @@ class Project:
config_data.secrets)
service_dict['scale'] = project.get_service_scale(service_dict)
service_dict['device_requests'] = project.get_device_requests(service_dict)
service_dict = translate_credential_spec_to_security_opt(service_dict)
service_dict, ignored_keys = translate_deploy_keys_to_container_config(
service_dict
@@ -185,7 +189,7 @@ class Project:
if name not in valid_names:
raise NoSuchService(name)
def get_services(self, service_names=None, include_deps=False):
def get_services(self, service_names=None, include_deps=False, auto_enable_profiles=True):
"""
Returns a list of this project's services filtered
by the provided list of names, or all services if service_names is None
@@ -198,15 +202,36 @@ class Project:
reordering as needed to resolve dependencies.
Raises NoSuchService if any of the named services do not exist.
Raises ConfigurationError if any service depended on is not enabled by active profiles
"""
# create a copy so we can *locally* add auto-enabled profiles later
enabled_profiles = self.enabled_profiles.copy()
if service_names is None or len(service_names) == 0:
service_names = self.service_names
auto_enable_profiles = False
service_names = [
service.name
for service in self.services
if service.enabled_for_profiles(enabled_profiles)
]
unsorted = [self.get_service(name) for name in service_names]
services = [s for s in self.services if s in unsorted]
if auto_enable_profiles:
# enable profiles of explicitly targeted services
for service in services:
for profile in service.get_profiles():
if profile not in enabled_profiles:
enabled_profiles.append(profile)
if include_deps:
services = reduce(self._inject_deps, services, [])
services = reduce(
lambda acc, s: self._inject_deps(acc, s, enabled_profiles),
services,
[]
)
uniques = []
[uniques.append(s) for s in services if s not in uniques]
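The order-preserving de-duplication at the end of `get_services` can also be written without a side-effecting list comprehension; a sketch of the equivalent idiom (helper name is illustrative):

```python
def dedupe_preserving_order(items):
    # dict preserves insertion order (Python 3.7+), so this keeps the
    # first occurrence of each item and drops later repeats
    return list(dict.fromkeys(items))
```

This relies on the items being hashable, which holds for the service objects deduplicated here.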
@@ -331,6 +356,31 @@ class Project:
max_replicas))
return scale
def get_device_requests(self, service_dict):
deploy_dict = service_dict.get('deploy', None)
if not deploy_dict:
return
resources = deploy_dict.get('resources', None)
if not resources or not resources.get('reservations', None):
return
devices = resources['reservations'].get('devices')
if not devices:
return
for dev in devices:
count = dev.get("count", -1)
if not isinstance(count, int):
if count != "all":
raise ConfigurationError(
'Invalid value "{}" for devices count'.format(dev["count"]),
'(expected integer or "all")')
dev["count"] = -1
if 'capabilities' in dev:
dev['capabilities'] = [dev['capabilities']]
return devices
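The count handling above accepts an integer or the literal string `"all"` (mapped to `-1`, i.e. "all available devices") and rejects anything else. That normalization can be sketched in isolation; `normalize_device_count` is a hypothetical helper, not part of the compose codebase:

```python
def normalize_device_count(dev):
    """Return a device count as an int, mapping "all" to -1.

    Mirrors the validation in Project.get_device_requests: anything
    other than an int or the literal string "all" is rejected.
    """
    count = dev.get("count", -1)
    if isinstance(count, int):
        return count
    if count == "all":
        return -1
    raise ValueError(
        'Invalid value "{}" for devices count (expected integer or "all")'.format(count))
```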
def start(self, service_names=None, **options):
containers = []
@@ -412,10 +462,12 @@ class Project:
self.remove_images(remove_image_type)
def remove_images(self, remove_image_type):
for service in self.get_services():
for service in self.services:
service.remove_image(remove_image_type)
def restart(self, service_names=None, **options):
# filter service_names by enabled profiles
service_names = [s.name for s in self.get_services(service_names)]
containers = self.containers(service_names, stopped=True)
parallel.parallel_execute(
@@ -438,7 +490,8 @@ class Project:
log.info('%s uses an image, skipping' % service.name)
if cli:
log.warning("Native build is an experimental feature and could change at any time")
log.info("Building with native build. Learn about native build in Compose here: "
"https://docs.docker.com/go/compose-native-build/")
if parallel_build:
log.warning("Flag '--parallel' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
@@ -594,11 +647,13 @@ class Project:
silent=False,
cli=False,
one_off=False,
attach_dependencies=False,
override_options=None,
):
if cli:
log.warning("Native build is an experimental feature and could change at any time")
log.info("Building with native build. Learn about native build in Compose here: "
"https://docs.docker.com/go/compose-native-build/")
self.initialize()
if not ignore_orphans:
@@ -620,12 +675,17 @@ class Project:
one_off=service_names if one_off else [],
)
def do(service):
services_to_attach = filter_attached_for_up(
services,
service_names,
attach_dependencies,
lambda service: service.name)
def do(service):
return service.execute_convergence_plan(
plans[service.name],
timeout=timeout,
detached=detached,
detached=detached or (service not in services_to_attach),
scale_override=scale_override.get(service.name),
rescale=rescale,
start=start,
@@ -695,7 +755,7 @@ class Project:
return plans
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=True, silent=False,
include_deps=False):
services = self.get_services(service_names, include_deps)
@@ -729,7 +789,9 @@ class Project:
return
try:
writer = parallel.get_stream_writer()
writer = parallel.ParallelStreamWriter.get_instance()
if writer is None:
raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
for event in strm:
if 'status' not in event:
continue
@@ -830,14 +892,26 @@ class Project:
)
)
def _inject_deps(self, acc, service):
def _inject_deps(self, acc, service, enabled_profiles):
dep_names = service.get_dependency_names()
if len(dep_names) > 0:
dep_services = self.get_services(
service_names=list(set(dep_names)),
include_deps=True
include_deps=True,
auto_enable_profiles=False
)
for dep in dep_services:
if not dep.enabled_for_profiles(enabled_profiles):
raise ConfigurationError(
'Service "{dep_name}" was pulled in as a dependency of '
'service "{service_name}" but is not enabled by the '
'active profiles. '
'You may fix this by adding a common profile to '
'"{dep_name}" and "{service_name}".'
.format(dep_name=dep.name, service_name=service.name)
)
else:
dep_services = []


@@ -77,6 +77,7 @@ HOST_CONFIG_KEYS = [
'cpuset',
'device_cgroup_rules',
'devices',
'device_requests',
'dns',
'dns_search',
'dns_opt',
@@ -411,7 +412,7 @@ class Service:
stopped = [c for c in containers if not c.is_running]
if stopped:
return ConvergencePlan('start', stopped)
return ConvergencePlan('start', containers)
return ConvergencePlan('noop', containers)
@@ -514,8 +515,9 @@ class Service:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
stopped = [c for c in containers if not c.is_running]
_, errors = parallel_execute(
containers,
stopped,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
lambda c: c.name,
"Starting",
@@ -715,7 +717,7 @@ class Service:
'volumes_from': [
(v.source.name, v.mode)
for v in self.volumes_from if isinstance(v.source, Service)
],
]
}
def get_dependency_names(self):
@@ -1015,6 +1017,7 @@ class Service:
privileged=options.get('privileged', False),
network_mode=self.network_mode.mode,
devices=options.get('devices'),
device_requests=options.get('device_requests'),
dns=options.get('dns'),
dns_opt=options.get('dns_opt'),
dns_search=options.get('dns_search'),
@@ -1326,6 +1329,24 @@ class Service:
return result
def get_profiles(self):
if 'profiles' not in self.options:
return []
return self.options.get('profiles')
def enabled_for_profiles(self, enabled_profiles):
# if service has no profiles specified it is always enabled
if 'profiles' not in self.options:
return True
service_profiles = self.options.get('profiles')
for profile in enabled_profiles:
if profile in service_profiles:
return True
return False
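The profile check above reduces to a small predicate over the service's options dict; a standalone sketch (not the `Service` class itself):

```python
def enabled_for_profiles(service_options, enabled_profiles):
    """A service with no 'profiles' key is always enabled; otherwise it
    is enabled only if at least one of its profiles is active."""
    if 'profiles' not in service_options:
        return True
    return any(p in service_options['profiles'] for p in enabled_profiles)
```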
def short_id_alias_exists(container, network):
aliases = container.get(


@@ -174,3 +174,18 @@ def truncate_string(s, max_chars=35):
if len(s) > max_chars:
return s[:max_chars - 2] + '...'
return s
def filter_attached_for_up(items, service_names, attach_dependencies=False,
item_to_service_name=lambda x: x):
"""This function contains the logic of choosing which services to
attach when doing docker-compose up. It may be used both with containers
and services, and any other entities that map to service names -
this mapping is provided by item_to_service_name."""
if attach_dependencies or not service_names:
return items
return [
item
for item in items if item_to_service_name(item) in service_names
]
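A quick usage sketch of the helper above, using plain strings as both the items and the service names (the identity default for `item_to_service_name`):

```python
def filter_attached_for_up(items, service_names, attach_dependencies=False,
                           item_to_service_name=lambda x: x):
    # attach everything when dependencies are requested or no services
    # were explicitly targeted; otherwise attach only the targets
    if attach_dependencies or not service_names:
        return items
    return [item for item in items if item_to_service_name(item) in service_names]
```

With explicit targets and no `--attach-dependencies`, only the targeted services' output is attached; dependency containers still run, just detached.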


@@ -164,6 +164,10 @@ _docker_compose_docker_compose() {
_filedir "y?(a)ml"
return
;;
--ansi)
COMPREPLY=( $( compgen -W "never always auto" -- "$cur" ) )
return
;;
--log-level)
COMPREPLY=( $( compgen -W "debug info warning error critical" -- "$cur" ) )
return
@@ -290,7 +294,7 @@ _docker_compose_logs() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--follow -f --help --no-color --tail --timestamps -t" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--follow -f --help --no-color --no-log-prefix --tail --timestamps -t" -- "$cur" ) )
;;
*)
__docker_compose_complete_services
@@ -545,7 +549,7 @@ _docker_compose_up() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --attach-dependencies --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --attach-dependencies --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-log-prefix --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
;;
*)
__docker_compose_complete_services
@@ -616,6 +620,7 @@ _docker_compose() {
# These options are require special treatment when searching the command.
local top_level_options_with_args="
--ansi
--log-level
"


@@ -21,5 +21,7 @@ complete -c docker-compose -l tlscert -r -d 'Path to TLS certif
complete -c docker-compose -l tlskey -r -d 'Path to TLS key file'
complete -c docker-compose -l tlsverify -d 'Use TLS and verify the remote'
complete -c docker-compose -l skip-hostname-check -d "Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)"
complete -c docker-compose -l no-ansi -d 'Do not print ANSI control characters'
complete -c docker-compose -l ansi -a 'never always auto' -d 'Control when to print ANSI control characters'
complete -c docker-compose -s h -l help -d 'Print usage'
complete -c docker-compose -s v -l version -d 'Print version and exit'


@@ -342,6 +342,7 @@ _docker-compose() {
'--verbose[Show more output]' \
'--log-level=[Set log level]:level:(DEBUG INFO WARNING ERROR CRITICAL)' \
'--no-ansi[Do not print ANSI control characters]' \
'--ansi=[Control when to print ANSI control characters]:when:(never always auto)' \
'(-H --host)'{-H,--host}'[Daemon socket to connect to]:host:' \
'--tls[Use TLS; implied by --tlsverify]' \
'--tlscacert=[Trust certs signed only by this CA]:ca path:' \


@@ -23,8 +23,8 @@ exe = EXE(pyz,
'DATA'
),
(
'compose/config/config_schema_compose_spec.json',
'compose/config/config_schema_compose_spec.json',
'compose/config/compose_spec.json',
'compose/config/compose_spec.json',
'DATA'
),
(


@@ -32,8 +32,8 @@ coll = COLLECT(exe,
'DATA'
),
(
'compose/config/config_schema_compose_spec.json',
'compose/config/config_schema_compose_spec.json',
'compose/config/compose_spec.json',
'compose/config/compose_spec.json',
'DATA'
),
(


@@ -1 +1 @@
pyinstaller==3.6
pyinstaller==4.1


@@ -2,8 +2,9 @@ Click==7.1.2
coverage==5.2.1
ddt==1.4.1
flake8==3.8.3
gitpython==3.1.7
gitpython==3.1.11
mock==3.0.5
pytest==6.0.1; python_version >= '3.5'
pytest==4.6.5; python_version < '3.5'
pytest-cov==2.10.1
PyYAML==5.3.1


@@ -1,15 +1,15 @@
altgraph==0.17
appdirs==1.4.4
attrs==20.1.0
bcrypt==3.1.7
cffi==1.14.1
cryptography==3.0
attrs==20.3.0
bcrypt==3.2.0
cffi==1.14.4
cryptography==3.3.2
distlib==0.3.1
entrypoints==0.3
filelock==3.0.12
gitdb2==4.0.2
mccabe==0.6.1
more-itertools==8.4.0; python_version >= '3.5'
more-itertools==8.6.0; python_version >= '3.5'
more-itertools==5.0.0; python_version < '3.5'
packaging==20.4
pluggy==0.13.1
@@ -23,6 +23,6 @@ pyrsistent==0.16.0
smmap==3.0.4
smmap2==3.0.1
toml==0.10.1
tox==3.19.0
virtualenv==20.0.30
tox==3.21.2
virtualenv==20.4.0
wcwidth==0.2.5


@@ -4,7 +4,7 @@ certifi==2020.6.20
chardet==3.0.4
colorama==0.4.3; sys_platform == 'win32'
distro==1.5.0
docker==4.3.1
docker==4.4.3
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
@@ -12,10 +12,9 @@ idna==2.10
ipaddress==1.0.23
jsonschema==3.2.0
paramiko==2.7.1
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.7.1
python-dotenv==0.14.0
pywin32==227; sys_platform == 'win32'
PyYAML==5.3.1
requests==2.24.0
texttable==1.6.2


@@ -5,14 +5,12 @@ set -ex
./script/clean
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
TAG="docker/compose:tmp-glibc-linux-binary-${DOCKER_COMPOSE_GITSHA}"
docker build -t "${TAG}" . \
--build-arg BUILD_PLATFORM=debian \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
TMP_CONTAINER=$(docker create "${TAG}")
mkdir -p dist
docker build . \
--target bin \
--build-arg DISTRO=debian \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}" \
--output dist/
ARCH=$(uname -m)
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
docker container rm -f "${TMP_CONTAINER}"
docker image rm -f "${TAG}"
# Ensure that we output the binary with the same name as we did before
mv dist/docker-compose-linux-amd64 "dist/docker-compose-Linux-${ARCH}"


@@ -24,7 +24,7 @@ if [ ! -z "${BUILD_BOOTLOADER}" ]; then
git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
cd /tmp/pyinstaller/bootloader
# Checkout commit corresponding to version in requirements-build
git checkout v3.6
git checkout v4.1
"${VENV}"/bin/python3 ./waf configure --no-lsb all
"${VENV}"/bin/pip3 install ..
cd "${CODE_PATH}"


@@ -13,6 +13,6 @@ IMAGE="docker/compose-tests"
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
docker build -t "${IMAGE}:${TAG}" . \
--target build \
--build-arg BUILD_PLATFORM="debian" \
--build-arg DISTRO="debian" \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
docker tag "${IMAGE}":"${TAG}" "${IMAGE}":latest


@@ -6,17 +6,17 @@
#
# http://git-scm.com/download/win
#
# 2. Install Python 3.7.x:
# 2. Install Python 3.9.x:
#
# https://www.python.org/downloads/
#
# 3. Append ";C:\Python37;C:\Python37\Scripts" to the "Path" environment variable:
# 3. Append ";C:\Python39;C:\Python39\Scripts" to the "Path" environment variable:
#
# https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
#
# 4. In Powershell, run the following commands:
#
# $ pip install 'virtualenv==20.0.30'
# $ pip install 'virtualenv==20.2.2'
# $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
#
# 5. Clone the repository:
@@ -39,7 +39,7 @@ if (Test-Path venv) {
Get-ChildItem -Recurse -Include *.pyc | foreach ($_) { Remove-Item $_.FullName }
# Create virtualenv
virtualenv -p C:\Python37\python.exe .\venv
virtualenv -p C:\Python39\python.exe .\venv
# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"

script/release/release.py Normal file → Executable file

@@ -15,16 +15,16 @@
set -e
VERSION="1.26.1"
VERSION="1.28.4"
IMAGE="docker/compose:$VERSION"
# Setup options for connecting to docker host
if [ -z "$DOCKER_HOST" ]; then
DOCKER_HOST="/var/run/docker.sock"
DOCKER_HOST='unix:///var/run/docker.sock'
fi
if [ -S "$DOCKER_HOST" ]; then
DOCKER_ADDR="-v $DOCKER_HOST:$DOCKER_HOST -e DOCKER_HOST"
if [ -S "${DOCKER_HOST#unix://}" ]; then
DOCKER_ADDR="-v ${DOCKER_HOST#unix://}:${DOCKER_HOST#unix://} -e DOCKER_HOST"
else
DOCKER_ADDR="-e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH"
fi
@@ -44,13 +44,34 @@ fi
if [ -n "$COMPOSE_PROJECT_NAME" ]; then
COMPOSE_OPTIONS="-e COMPOSE_PROJECT_NAME $COMPOSE_OPTIONS"
fi
# TODO: also check --file argument
if [ -n "$compose_dir" ]; then
VOLUMES="$VOLUMES -v $compose_dir:$compose_dir"
fi
if [ -n "$HOME" ]; then
VOLUMES="$VOLUMES -v $HOME:$HOME -e HOME" # Pass in HOME to share docker.config and allow ~/-relative paths to work.
fi
i=$#
while [ $i -gt 0 ]; do
arg=$1
i=$((i - 1))
shift
case "$arg" in
-f|--file)
value=$1
i=$((i - 1))
shift
set -- "$@" "$arg" "$value"
file_dir=$(realpath "$(dirname "$value")")
VOLUMES="$VOLUMES -v $file_dir:$file_dir"
;;
*) set -- "$@" "$arg" ;;
esac
done
# Setup environment variables for compose config and context
ENV_OPTIONS=$(printenv | sed -E "/^PATH=.*/d; s/^/-e /g; s/=.*//g; s/\n/ /g")
# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
@@ -67,4 +88,4 @@ if docker info --format '{{json .SecurityOptions}}' 2>/dev/null | grep -q 'name=
fi
# shellcheck disable=SC2086
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $ENV_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"


@@ -13,13 +13,13 @@ if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
fi
OPENSSL_VERSION=1.1.1g
OPENSSL_VERSION=1.1.1h
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
OPENSSL_SHA1=b213a293f2127ec3e323fb3cfc0c9807664fd997
OPENSSL_SHA1=8d0d099e8973ec851368c8c775e05e1eadca1794
PYTHON_VERSION=3.7.7
PYTHON_VERSION=3.9.0
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
PYTHON_SHA1=8e9968663a214aea29659ba9dfa959e8a7d82b39
PYTHON_SHA1=5744a10ba989d2badacbab3c00cdcb83c83106c7
#
# Install prerequisites.
@@ -36,7 +36,7 @@ if ! [ -x "$(command -v python3)" ]; then
brew install python3
fi
if ! [ -x "$(command -v virtualenv)" ]; then
pip3 install virtualenv==20.0.30
pip3 install virtualenv==20.2.2
fi
#


@@ -21,7 +21,6 @@ elif [ "$DOCKER_VERSIONS" == "all" ]; then
DOCKER_VERSIONS=$($get_versions -n 2 recent)
fi
BUILD_NUMBER=${BUILD_NUMBER-$USER}
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py37}
@@ -39,17 +38,19 @@ for version in $DOCKER_VERSIONS; do
trap "on_exit" EXIT
repo="dockerswarm/dind"
docker run \
-d \
--name "$daemon_container" \
--privileged \
--volume="/var/lib/docker" \
"$repo:$version" \
-v $DOCKER_CONFIG/config.json:/root/.docker/config.json \
-e "DOCKER_TLS_CERTDIR=" \
"docker:$version-dind" \
dockerd -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
2>&1 | tail -n 10
docker exec "$daemon_container" sh -c "apk add --no-cache git"
docker run \
--rm \
--tty \


@@ -32,7 +32,7 @@ install_requires = [
'texttable >= 0.9.0, < 2',
'websocket-client >= 0.32.0, < 1',
'distro >= 1.5.0, < 2',
'docker[ssh] >= 4.3.1, < 5',
'docker[ssh] >= 4.4.3, < 5',
'dockerpty >= 0.4.1, < 1',
'jsonschema >= 2.5.1, < 4',
'python-dotenv >= 0.13.0, < 1',
@@ -102,5 +102,7 @@ setup(
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
)


@@ -58,13 +58,16 @@ COMPOSE_COMPATIBILITY_DICT = {
}
def start_process(base_dir, options):
def start_process(base_dir, options, executable=None, env=None):
executable = executable or DOCKER_COMPOSE_EXECUTABLE
proc = subprocess.Popen(
[DOCKER_COMPOSE_EXECUTABLE] + options,
[executable] + options,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=base_dir)
cwd=base_dir,
env=env,
)
print("Running process: %s" % proc.pid)
return proc
@@ -78,9 +81,10 @@ def wait_on_process(proc, returncode=0, stdin=None):
return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))
def dispatch(base_dir, options, project_options=None, returncode=0, stdin=None):
def dispatch(base_dir, options,
project_options=None, returncode=0, stdin=None, executable=None, env=None):
project_options = project_options or []
proc = start_process(base_dir, project_options + options)
proc = start_process(base_dir, project_options + options, executable=executable, env=env)
return wait_on_process(proc, returncode=returncode, stdin=stdin)
@@ -359,7 +363,7 @@ services:
'web': {
'command': 'true',
'image': 'alpine:latest',
'ports': ['5643/tcp', '9999/tcp']
'ports': [{'target': 5643}, {'target': 9999}]
}
}
}
@@ -374,7 +378,7 @@ services:
'web': {
'command': 'false',
'image': 'alpine:latest',
'ports': ['5644/tcp', '9998/tcp']
'ports': [{'target': 5644}, {'target': 9998}]
}
}
}
@@ -389,7 +393,7 @@ services:
'web': {
'command': 'echo uwu',
'image': 'alpine:3.10.1',
'ports': ['3341/tcp', '4449/tcp']
'ports': [{'target': 3341}, {'target': 4449}]
}
}
}
@@ -783,7 +787,11 @@ services:
assert BUILD_CACHE_TEXT not in result.stdout
assert BUILD_PULL_TEXT in result.stdout
@mock.patch.dict(os.environ)
def test_build_log_level(self):
os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
os.environ['DOCKER_BUILDKIT'] = '0'
self.test_env_file_relative_to_compose_file()
self.base_dir = 'tests/fixtures/simple-dockerfile'
result = self.dispatch(['--log-level', 'warning', 'build', 'simple'])
assert result.stderr == ''
@@ -845,13 +853,17 @@ services:
for c in self.project.client.containers(all=True):
self.addCleanup(self.project.client.remove_container, c, force=True)
@mock.patch.dict(os.environ)
def test_build_shm_size_build_option(self):
os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-shm-size'
result = self.dispatch(['build', '--no-cache'], None)
assert 'shm_size: 96' in result.stdout
@mock.patch.dict(os.environ)
def test_build_memory_build_option(self):
os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-memory'
result = self.dispatch(['build', '--no-cache', '--memory', '96m', 'service'], None)
@@ -1719,6 +1731,98 @@ services:
shareable_mode_container = self.project.get_service('shareable').containers()[0]
assert shareable_mode_container.get('HostConfig.IpcMode') == 'shareable'
def test_profiles_up_with_no_profile(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['up'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'foo' in service_names
assert len(containers) == 1
def test_profiles_up_with_profile(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['--profile', 'test', 'up'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'foo' in service_names
assert 'bar' in service_names
assert 'baz' in service_names
assert len(containers) == 3
def test_profiles_up_invalid_dependency(self):
self.base_dir = 'tests/fixtures/profiles'
result = self.dispatch(['--profile', 'debug', 'up'], returncode=1)
assert ('Service "bar" was pulled in as a dependency of service "zot" '
'but is not enabled by the active profiles.') in result.stderr
def test_profiles_up_with_multiple_profiles(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['--profile', 'debug', '--profile', 'test', 'up'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'foo' in service_names
assert 'bar' in service_names
assert 'baz' in service_names
assert 'zot' in service_names
assert len(containers) == 4
def test_profiles_up_with_profile_enabled_by_service(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['up', 'bar'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert len(containers) == 1
def test_profiles_up_with_dependency_and_profile_enabled_by_service(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['up', 'baz'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert 'baz' in service_names
assert len(containers) == 2
def test_profiles_up_with_invalid_dependency_for_target_service(self):
self.base_dir = 'tests/fixtures/profiles'
result = self.dispatch(['up', 'zot'], returncode=1)
assert ('Service "bar" was pulled in as a dependency of service "zot" '
'but is not enabled by the active profiles.') in result.stderr
def test_profiles_up_with_profile_for_dependency(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['--profile', 'test', 'up', 'zot'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert 'zot' in service_names
assert len(containers) == 2
def test_profiles_up_with_merged_profiles(self):
self.base_dir = 'tests/fixtures/profiles'
self.dispatch(['-f', 'docker-compose.yml', '-f', 'merge-profiles.yml', 'up', 'zot'])
containers = self.project.containers(stopped=True)
service_names = [c.service for c in containers]
assert 'bar' in service_names
assert 'zot' in service_names
assert len(containers) == 2
def test_exec_without_tty(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', 'console'])
@@ -3034,3 +3138,12 @@ services:
another = self.project.get_service('--log-service')
assert len(service.containers()) == 1
assert len(another.containers()) == 1
def test_up_no_log_prefix(self):
self.base_dir = 'tests/fixtures/echo-services'
result = self.dispatch(['up', '--no-log-prefix'])
assert 'simple' in result.stdout
assert 'another' in result.stdout
assert 'exited with code 0' in result.stdout
assert 'exited with code 0' in result.stdout


@@ -0,0 +1,20 @@
version: "3"
services:
foo:
image: busybox:1.31.0-uclibc
bar:
image: busybox:1.31.0-uclibc
profiles:
- test
baz:
image: busybox:1.31.0-uclibc
depends_on:
- bar
profiles:
- test
zot:
image: busybox:1.31.0-uclibc
depends_on:
- bar
profiles:
- debug
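The selection behaviour this fixture exercises can be modelled in a few lines: `foo` has no profiles and always runs; `bar`/`baz` need `test`; `zot` needs `debug` but depends on `bar`, which is why `--profile debug` alone fails. A minimal sketch of the enablement rule (dependency resolution omitted):

```python
SERVICES = {
    'foo': [],        # no profiles: always enabled
    'bar': ['test'],
    'baz': ['test'],
    'zot': ['debug'],
}

def enabled_services(active_profiles):
    # a service is enabled if it has no profiles, or shares one with the active set
    return sorted(
        name for name, profiles in SERVICES.items()
        if not profiles or any(p in active_profiles for p in profiles)
    )
```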


@@ -0,0 +1,5 @@
version: "3"
services:
bar:
profiles:
- debug


@@ -0,0 +1,125 @@
import logging
import os
import socket
from http.server import BaseHTTPRequestHandler
from http.server import HTTPServer
from threading import Thread
import requests
from docker.transport import UnixHTTPAdapter
from tests.acceptance.cli_test import dispatch
from tests.integration.testcases import DockerClientTestCase
TEST_SOCKET_FILE = '/tmp/test-metrics-docker-cli.sock'
class MetricsTest(DockerClientTestCase):
test_session = requests.sessions.Session()
test_env = None
base_dir = 'tests/fixtures/v3-full'
@classmethod
def setUpClass(cls):
super().setUpClass()
MetricsTest.test_session.mount("http+unix://", UnixHTTPAdapter(TEST_SOCKET_FILE))
MetricsTest.test_env = os.environ.copy()
MetricsTest.test_env['METRICS_SOCKET_FILE'] = TEST_SOCKET_FILE
MetricsServer().start()
@classmethod
def test_metrics_help(cls):
# root `docker-compose` command is considered as a `--help`
dispatch(cls.base_dir, [], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose --help", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
dispatch(cls.base_dir, ['help', 'run'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose help", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
dispatch(cls.base_dir, ['--help'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose --help", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
dispatch(cls.base_dir, ['run', '--help'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose --help run", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
dispatch(cls.base_dir, ['up', '--help', 'extra_args'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose --help up", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
@classmethod
def test_metrics_simple_commands(cls):
dispatch(cls.base_dir, ['ps'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose ps", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
dispatch(cls.base_dir, ['version'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose version", "context": "moby", ' \
b'"source": "docker-compose", "status": "success"}'
dispatch(cls.base_dir, ['version', '--yyy'], env=MetricsTest.test_env)
assert cls.get_content() == \
b'{"command": "compose version", "context": "moby", ' \
b'"source": "docker-compose", "status": "failure"}'
@staticmethod
def get_content():
resp = MetricsTest.test_session.get("http+unix://localhost")
print(resp.content)
return resp.content
def start_server(uri=TEST_SOCKET_FILE):
try:
os.remove(uri)
except OSError:
pass
httpd = HTTPServer(uri, MetricsHTTPRequestHandler, False)
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(TEST_SOCKET_FILE)
sock.listen(0)
httpd.socket = sock
print('Serving on ', uri)
httpd.serve_forever()
sock.shutdown(socket.SHUT_RDWR)
sock.close()
os.remove(uri)
class MetricsServer:
@classmethod
def start(cls):
t = Thread(target=start_server, daemon=True)
t.start()
class MetricsHTTPRequestHandler(BaseHTTPRequestHandler):
usages = []
def do_GET(self):
self.client_address = ('',) # avoid exception in BaseHTTPServer.py log_message()
self.send_response(200)
self.end_headers()
for u in MetricsHTTPRequestHandler.usages:
self.wfile.write(u)
MetricsHTTPRequestHandler.usages = []
def do_POST(self):
self.client_address = ('',) # avoid exception in BaseHTTPServer.py log_message()
content_length = int(self.headers['Content-Length'])
body = self.rfile.read(content_length)
print(body)
MetricsHTTPRequestHandler.usages.append(body)
self.send_response(200)
self.end_headers()
if __name__ == '__main__':
logging.getLogger("urllib3").propagate = False
logging.getLogger("requests").propagate = False
start_server()


@@ -37,6 +37,7 @@ from tests.integration.testcases import no_cluster
def build_config(**kwargs):
return config.Config(
config_version=kwargs.get('version', VERSION),
version=kwargs.get('version', VERSION),
services=kwargs.get('services'),
volumes=kwargs.get('volumes'),
@@ -1347,6 +1348,36 @@ class ProjectTest(DockerClientTestCase):
project.up()
assert len(project.containers()) == 3
def test_project_up_scale_with_stopped_containers(self):
config_data = build_config(
services=[{
'name': 'web',
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'scale': 2
}]
)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
project.up()
containers = project.containers()
assert len(containers) == 2
self.client.stop(containers[0].id)
project.up(scale_override={'web': 2})
containers = project.containers()
assert len(containers) == 2
self.client.stop(containers[0].id)
project.up(scale_override={'web': 3})
assert len(project.containers()) == 3
self.client.stop(containers[0].id)
project.up(scale_override={'web': 1})
assert len(project.containers()) == 1
def test_initialize_volumes(self):
vol_name = '{:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{}'.format(vol_name)


@@ -948,7 +948,12 @@ class ServiceTest(DockerClientTestCase):
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
service = self.create_service('web', build={'context': base_dir})
service = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '0',
'DOCKER_BUILDKIT': '0',
})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
@@ -964,7 +969,6 @@ class ServiceTest(DockerClientTestCase):
service = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '1',
'DOCKER_BUILDKIT': '1',
})
service.build(cli=True)
@@ -1015,7 +1019,6 @@ class ServiceTest(DockerClientTestCase):
         web = self.create_service('web',
                                   build={'context': base_dir},
                                   environment={
-                                      'COMPOSE_DOCKER_CLI_BUILD': '1',
                                       'DOCKER_BUILDKIT': '1',
                                   })
         project = Project('composetest', [web], self.client)

View File

@@ -375,7 +375,7 @@ class ServiceStateTest(DockerClientTestCase):
         assert [c.is_running for c in containers] == [False, True]
 
-        assert ('start', containers[0:1]) == web.convergence_plan()
+        assert ('start', containers) == web.convergence_plan()
 
     def test_trigger_recreate_with_config_change(self):
         web = self.create_service('web', command=["top"])

View File

@@ -61,6 +61,7 @@ class DockerClientTestCase(unittest.TestCase):
     @classmethod
     def tearDownClass(cls):
         cls.client.close()
+        del cls.client
 
     def tearDown(self):

View File

@@ -0,0 +1,56 @@
+import os
+
+import pytest
+
+from compose.cli.colors import AnsiMode
+from tests import mock
+
+
+@pytest.fixture
+def tty_stream():
+    stream = mock.Mock()
+    stream.isatty.return_value = True
+    return stream
+
+
+@pytest.fixture
+def non_tty_stream():
+    stream = mock.Mock()
+    stream.isatty.return_value = False
+    return stream
+
+
+class TestAnsiModeTestCase:
+    @mock.patch.dict(os.environ)
+    def test_ansi_mode_never(self, tty_stream, non_tty_stream):
+        if "CLICOLOR" in os.environ:
+            del os.environ["CLICOLOR"]
+        assert not AnsiMode.NEVER.use_ansi_codes(tty_stream)
+        assert not AnsiMode.NEVER.use_ansi_codes(non_tty_stream)
+
+        os.environ["CLICOLOR"] = "0"
+        assert not AnsiMode.NEVER.use_ansi_codes(tty_stream)
+        assert not AnsiMode.NEVER.use_ansi_codes(non_tty_stream)
+
+    @mock.patch.dict(os.environ)
+    def test_ansi_mode_always(self, tty_stream, non_tty_stream):
+        if "CLICOLOR" in os.environ:
+            del os.environ["CLICOLOR"]
+        assert AnsiMode.ALWAYS.use_ansi_codes(tty_stream)
+        assert AnsiMode.ALWAYS.use_ansi_codes(non_tty_stream)
+
+        os.environ["CLICOLOR"] = "0"
+        assert AnsiMode.ALWAYS.use_ansi_codes(tty_stream)
+        assert AnsiMode.ALWAYS.use_ansi_codes(non_tty_stream)
+
+    @mock.patch.dict(os.environ)
+    def test_ansi_mode_auto(self, tty_stream, non_tty_stream):
+        if "CLICOLOR" in os.environ:
+            del os.environ["CLICOLOR"]
+        assert AnsiMode.AUTO.use_ansi_codes(tty_stream)
+        assert not AnsiMode.AUTO.use_ansi_codes(non_tty_stream)
+
+        os.environ["CLICOLOR"] = "0"
+        assert not AnsiMode.AUTO.use_ansi_codes(tty_stream)
+        assert not AnsiMode.AUTO.use_ansi_codes(non_tty_stream)

View File
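The new test file above pins down three-state ANSI handling. As a reading aid, here is a minimal sketch of an `AnsiMode` enum that would satisfy those tests; the enum values and the `CLICOLOR` handling are assumptions inferred from the assertions, not the actual `compose.cli.colors` implementation:

```python
import enum
import os


class AnsiMode(enum.Enum):
    """When to emit ANSI control codes (sketch inferred from the tests above)."""
    NEVER = "never"
    ALWAYS = "always"
    AUTO = "auto"

    def use_ansi_codes(self, stream):
        if self is AnsiMode.NEVER:
            return False
        if self is AnsiMode.ALWAYS:
            return True
        # AUTO: honor the CLICOLOR=0 convention, then fall back to TTY detection
        if os.environ.get("CLICOLOR") == "0":
            return False
        return stream.isatty()
```

Note how `NEVER` and `ALWAYS` ignore both the environment and the stream; only `AUTO` consults them, which is exactly the split the three test methods exercise.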

@@ -14,49 +14,41 @@ class TestGetConfigPathFromOptions:
         paths = ['one.yml', 'two.yml']
         opts = {'--file': paths}
         environment = Environment.from_env_file('.')
-        assert get_config_path_from_options('.', opts, environment) == paths
+        assert get_config_path_from_options(opts, environment) == paths
 
     def test_single_path_from_env(self):
         with mock.patch.dict(os.environ):
             os.environ['COMPOSE_FILE'] = 'one.yml'
             environment = Environment.from_env_file('.')
-            assert get_config_path_from_options('.', {}, environment) == ['one.yml']
+            assert get_config_path_from_options({}, environment) == ['one.yml']
 
     @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix separator')
     def test_multiple_path_from_env(self):
         with mock.patch.dict(os.environ):
             os.environ['COMPOSE_FILE'] = 'one.yml:two.yml'
             environment = Environment.from_env_file('.')
-            assert get_config_path_from_options(
-                '.', {}, environment
-            ) == ['one.yml', 'two.yml']
+            assert get_config_path_from_options({}, environment) == ['one.yml', 'two.yml']
 
     @pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='windows separator')
     def test_multiple_path_from_env_windows(self):
         with mock.patch.dict(os.environ):
             os.environ['COMPOSE_FILE'] = 'one.yml;two.yml'
             environment = Environment.from_env_file('.')
-            assert get_config_path_from_options(
-                '.', {}, environment
-            ) == ['one.yml', 'two.yml']
+            assert get_config_path_from_options({}, environment) == ['one.yml', 'two.yml']
 
     def test_multiple_path_from_env_custom_separator(self):
         with mock.patch.dict(os.environ):
             os.environ['COMPOSE_PATH_SEPARATOR'] = '^'
             os.environ['COMPOSE_FILE'] = 'c:\\one.yml^.\\semi;colon.yml'
             environment = Environment.from_env_file('.')
-            assert get_config_path_from_options(
-                '.', {}, environment
-            ) == ['c:\\one.yml', '.\\semi;colon.yml']
+            assert get_config_path_from_options({}, environment) == ['c:\\one.yml', '.\\semi;colon.yml']
 
     def test_no_path(self):
         environment = Environment.from_env_file('.')
-        assert not get_config_path_from_options('.', {}, environment)
+        assert not get_config_path_from_options({}, environment)
 
     def test_unicode_path_from_options(self):
         paths = [b'\xe5\xb0\xb1\xe5\x90\x83\xe9\xa5\xad/docker-compose.yml']
         opts = {'--file': paths}
         environment = Environment.from_env_file('.')
-        assert get_config_path_from_options(
-            '.', opts, environment
-        ) == ['就吃饭/docker-compose.yml']
+        assert get_config_path_from_options(opts, environment) == ['就吃饭/docker-compose.yml']

View File
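The tests above drop the working-directory argument from `get_config_path_from_options` but keep the `COMPOSE_FILE` semantics. A rough sketch of that environment parsing, for orientation only (the function name and structure here are assumptions, not the real `compose.cli.command` code):

```python
import os


def config_paths_from_environment(environment):
    """Split COMPOSE_FILE into a list of paths, honoring an optional
    COMPOSE_PATH_SEPARATOR override; falls back to the platform path
    separator (':' on POSIX, ';' on Windows). Sketch only."""
    compose_file = environment.get("COMPOSE_FILE")
    if not compose_file:
        return None
    separator = environment.get("COMPOSE_PATH_SEPARATOR") or os.pathsep
    return compose_file.split(separator)
```

The custom-separator test exists precisely because Windows paths like `c:\one.yml` contain the POSIX `:` separator, so a user must be able to override it (with `^` in the test).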

@@ -8,7 +8,6 @@ from docker.errors import APIError
from compose.cli.log_printer import build_log_generator
from compose.cli.log_printer import build_log_presenters
from compose.cli.log_printer import build_no_log_generator
from compose.cli.log_printer import consume_queue
from compose.cli.log_printer import QueueItem
from compose.cli.log_printer import wait_on_exit
@@ -75,14 +74,6 @@ def test_wait_on_exit_raises():
assert expected in wait_on_exit(mock_container)
def test_build_no_log_generator(mock_container):
mock_container.has_api_logs = False
mock_container.log_driver = 'none'
output, = build_no_log_generator(mock_container, None)
assert "WARNING: no logs are available with the 'none' log driver\n" in output
assert "exited with code" not in output
class TestBuildLogGenerator:
def test_no_log_stream(self, mock_container):

View File

@@ -137,21 +137,20 @@ class TestCLIMainTestCase:
 
 class TestSetupConsoleHandlerTestCase:
-    def test_with_tty_verbose(self, logging_handler):
+    def test_with_console_formatter_verbose(self, logging_handler):
         setup_console_handler(logging_handler, True)
         assert type(logging_handler.formatter) == ConsoleWarningFormatter
         assert '%(name)s' in logging_handler.formatter._fmt
         assert '%(funcName)s' in logging_handler.formatter._fmt
 
-    def test_with_tty_not_verbose(self, logging_handler):
+    def test_with_console_formatter_not_verbose(self, logging_handler):
         setup_console_handler(logging_handler, False)
         assert type(logging_handler.formatter) == ConsoleWarningFormatter
         assert '%(name)s' not in logging_handler.formatter._fmt
         assert '%(funcName)s' not in logging_handler.formatter._fmt
 
-    def test_with_not_a_tty(self, logging_handler):
-        logging_handler.stream.isatty.return_value = False
-        setup_console_handler(logging_handler, False)
+    def test_without_console_formatter(self, logging_handler):
+        setup_console_handler(logging_handler, False, use_console_formatter=False)
         assert type(logging_handler.formatter) == logging.Formatter

View File

@@ -168,12 +168,14 @@ class ConfigTest(unittest.TestCase):
                 }
             })
         )
+        assert cfg.config_version == VERSION
         assert cfg.version == VERSION
 
         for version in ['2', '2.0', '2.1', '2.2', '2.3',
                         '3', '3.0', '3.1', '3.2', '3.3', '3.4', '3.5', '3.6', '3.7', '3.8']:
             cfg = config.load(build_config_details({'version': version}))
-            assert cfg.version == version
+            assert cfg.config_version == version
+            assert cfg.version == VERSION
 
     def test_v1_file_version(self):
         cfg = config.load(build_config_details({'web': {'image': 'busybox'}}))
@@ -236,7 +238,9 @@ class ConfigTest(unittest.TestCase):
             )
         )
 
-        assert 'Invalid top-level property "web"' in excinfo.exconly()
+        assert "compose.config.errors.ConfigurationError: " \
+            "The Compose file 'filename.yml' is invalid because:\n" \
+            "'web' does not match any of the regexes: '^x-'" in excinfo.exconly()
         assert VERSION_EXPLANATION in excinfo.exconly()
 
     def test_named_volume_config_empty(self):
@@ -665,7 +669,7 @@ class ConfigTest(unittest.TestCase):
 
         assert 'Invalid service name \'mong\\o\'' in excinfo.exconly()
 
-    def test_config_duplicate_cache_from_values_validation_error(self):
+    def test_config_duplicate_cache_from_values_no_validation_error(self):
         with pytest.raises(ConfigurationError) as exc:
             config.load(
                 build_config_details({
@@ -677,7 +681,7 @@ class ConfigTest(unittest.TestCase):
                 })
             )
 
-        assert 'build.cache_from contains non-unique items' in exc.exconly()
+        assert 'build.cache_from contains non-unique items' not in exc.exconly()
 
     def test_load_with_multiple_files_v1(self):
         base_file = config.ConfigFile(
@@ -2543,6 +2547,7 @@ web:
             'labels': ['com.docker.compose.a=1', 'com.docker.compose.b=2'],
             'mode': 'replicated',
             'placement': {
+                'max_replicas_per_node': 1,
                 'constraints': [
                     'node.role == manager', 'engine.labels.aws == true'
                 ],
@@ -2599,6 +2604,7 @@ web:
                 'com.docker.compose.c': '3'
             },
             'placement': {
+                'max_replicas_per_node': 1,
                 'constraints': [
                     'engine.labels.aws == true', 'engine.labels.dev == true',
                     'node.role == manager', 'node.role == worker'
@@ -5267,7 +5273,7 @@ def get_config_filename_for_files(filenames, subdir=None):
 
 class SerializeTest(unittest.TestCase):
-    def test_denormalize_depends_on_v3(self):
+    def test_denormalize_depends(self):
         service_dict = {
             'image': 'busybox',
             'command': 'true',
@@ -5277,27 +5283,7 @@ class SerializeTest(unittest.TestCase):
             }
         }
 
-        assert denormalize_service_dict(service_dict, VERSION) == {
-            'image': 'busybox',
-            'command': 'true',
-            'depends_on': ['service2', 'service3']
-        }
-
-    def test_denormalize_depends_on_v2_1(self):
-        service_dict = {
-            'image': 'busybox',
-            'command': 'true',
-            'depends_on': {
-                'service2': {'condition': 'service_started'},
-                'service3': {'condition': 'service_started'},
-            }
-        }
-
-        assert denormalize_service_dict(service_dict, VERSION) == {
-            'image': 'busybox',
-            'command': 'true',
-            'depends_on': ['service2', 'service3']
-        }
+        assert denormalize_service_dict(service_dict, VERSION) == service_dict
 
     def test_serialize_time(self):
         data = {
data = {
@@ -5387,7 +5373,7 @@ class SerializeTest(unittest.TestCase):
         assert serialized_config['secrets']['two'] == {'external': True, 'name': 'two'}
 
     def test_serialize_ports(self):
-        config_dict = config.Config(version=VERSION, services=[
+        config_dict = config.Config(config_version=VERSION, version=VERSION, services=[
             {
                 'ports': [types.ServicePort('80', '8080', None, None, None)],
                 'image': 'alpine',
@@ -5398,8 +5384,20 @@ class SerializeTest(unittest.TestCase):
         serialized_config = yaml.safe_load(serialize_config(config_dict))
         assert [{'published': 8080, 'target': 80}] == serialized_config['services']['web']['ports']
 
+    def test_serialize_ports_v1(self):
+        config_dict = config.Config(config_version=V1, version=V1, services=[
+            {
+                'ports': [types.ServicePort('80', '8080', None, None, None)],
+                'image': 'alpine',
+                'name': 'web'
+            }
+        ], volumes={}, networks={}, secrets={}, configs={})
+
+        serialized_config = yaml.safe_load(serialize_config(config_dict))
+        assert ['8080:80/tcp'] == serialized_config['services']['web']['ports']
+
     def test_serialize_ports_with_ext_ip(self):
-        config_dict = config.Config(version=VERSION, services=[
+        config_dict = config.Config(config_version=VERSION, version=VERSION, services=[
             {
                 'ports': [types.ServicePort('80', '8080', None, None, '127.0.0.1')],
                 'image': 'alpine',

View File

@@ -416,7 +416,7 @@ def test_interpolate_mandatory_no_err_msg(defaults_interpolator):
     with pytest.raises(UnsetRequiredSubstitution) as e:
         defaults_interpolator("not ok ${BAZ?}")
 
-    assert e.value.err == ''
+    assert e.value.err == 'BAZ'
 
 def test_interpolate_mixed_separators(defaults_interpolator):

View File
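The corrected assertion above expects the unset variable's name in `e.value.err`. To illustrate what POSIX-style `${VAR?}` mandatory substitution means, here is a toy version; the names and regex are hypothetical, not the `compose.config.interpolation` API:

```python
import re


class UnsetRequiredSubstitution(Exception):
    """Raised when a ${VAR?} placeholder refers to an unset variable."""
    def __init__(self, var_name):
        super().__init__(var_name)
        self.err = var_name


def interpolate_mandatory(template, mapping):
    """Replace each ${VAR?} with mapping[VAR]; raise, carrying the
    variable name, when VAR is unset. Toy sketch only."""
    def _replace(match):
        name = match.group(1)
        if name not in mapping:
            raise UnsetRequiredSubstitution(name)
        return mapping[name]
    return re.sub(r"\$\{(\w+)\?\}", _replace, template)
```

Carrying the variable name (rather than an empty string) in the exception is what lets the CLI report *which* required variable was missing, which is the behavior the fixed test now checks.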

@@ -221,34 +221,6 @@ class ContainerTest(unittest.TestCase):
         container = Container(None, self.container_dict, has_been_inspected=True)
         assert container.short_id == self.container_id[:12]
 
-    def test_has_api_logs(self):
-        container_dict = {
-            'HostConfig': {
-                'LogConfig': {
-                    'Type': 'json-file'
-                }
-            }
-        }
-
-        container = Container(None, container_dict, has_been_inspected=True)
-        assert container.has_api_logs is True
-
-        container_dict['HostConfig']['LogConfig']['Type'] = 'none'
-        container = Container(None, container_dict, has_been_inspected=True)
-        assert container.has_api_logs is False
-
-        container_dict['HostConfig']['LogConfig']['Type'] = 'syslog'
-        container = Container(None, container_dict, has_been_inspected=True)
-        assert container.has_api_logs is False
-
-        container_dict['HostConfig']['LogConfig']['Type'] = 'journald'
-        container = Container(None, container_dict, has_been_inspected=True)
-        assert container.has_api_logs is True
-
-        container_dict['HostConfig']['LogConfig']['Type'] = 'foobar'
-        container = Container(None, container_dict, has_been_inspected=True)
-        assert container.has_api_logs is False
-
 class GetContainerNameTestCase(unittest.TestCase):

View File
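The deleted test above documented which log drivers allow log retrieval through the Docker API. The behavior it checked can be paraphrased as follows; the driver whitelist is inferred purely from the test's assertions (json-file and journald pass, none/syslog/foobar fail), not from the `Container` source:

```python
def has_api_logs(container_dict):
    """True when `docker logs` can read this container's output.
    Only the json-file and journald drivers expose logs via the API
    (inferred from the removed test; the real property lives on Container)."""
    log_type = (container_dict.get("HostConfig", {})
                              .get("LogConfig", {})
                              .get("Type"))
    return log_type in ("json-file", "journald")
```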

@@ -0,0 +1,36 @@
+import unittest
+
+from compose.metrics.client import MetricsCommand
+from compose.metrics.client import Status
+
+
+class MetricsTest(unittest.TestCase):
+    @classmethod
+    def test_metrics(cls):
+        assert MetricsCommand('up', 'moby').to_map() == {
+            'command': 'compose up',
+            'context': 'moby',
+            'status': 'success',
+            'source': 'docker-compose',
+        }
+
+        assert MetricsCommand('down', 'local').to_map() == {
+            'command': 'compose down',
+            'context': 'local',
+            'status': 'success',
+            'source': 'docker-compose',
+        }
+
+        assert MetricsCommand('help', 'aci', Status.FAILURE).to_map() == {
+            'command': 'compose help',
+            'context': 'aci',
+            'status': 'failure',
+            'source': 'docker-compose',
+        }
+
+        assert MetricsCommand('run', 'ecs').to_map() == {
+            'command': 'compose run',
+            'context': 'ecs',
+            'status': 'success',
+            'source': 'docker-compose',
+        }

View File
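The new metrics test pins down the event payload shape. A minimal `MetricsCommand`/`Status` pair that would satisfy those assertions could look like this; it is a guess at the interface implied by the test, not the actual `compose.metrics.client` code:

```python
import enum


class Status(enum.Enum):
    SUCCESS = "success"
    FAILURE = "failure"


class MetricsCommand:
    """Sketch of the metrics payload builder the test above exercises."""
    def __init__(self, command, context, status=Status.SUCCESS):
        self.command = command
        self.context = context
        self.status = status

    def to_map(self):
        return {
            "command": "compose " + self.command,
            "context": self.context,
            "status": self.status.value,
            "source": "docker-compose",
        }
```

The fixed `source` and `compose ` prefix mean every subcommand reports a uniform event, with only the command name, context, and status varying.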

@@ -3,6 +3,7 @@ from threading import Lock
 
 from docker.errors import APIError
 
+from compose.cli.colors import AnsiMode
 from compose.parallel import GlobalLimit
 from compose.parallel import parallel_execute
 from compose.parallel import parallel_execute_iter
@@ -156,7 +157,7 @@ def test_parallel_execute_alignment(capsys):
 
 def test_parallel_execute_ansi(capsys):
     ParallelStreamWriter.instance = None
-    ParallelStreamWriter.set_noansi(value=False)
+    ParallelStreamWriter.set_default_ansi_mode(AnsiMode.ALWAYS)
     results, errors = parallel_execute(
         objects=["something", "something more"],
         func=lambda x: x,
@@ -172,7 +173,7 @@ def test_parallel_execute_ansi(capsys):
 
 def test_parallel_execute_noansi(capsys):
     ParallelStreamWriter.instance = None
-    ParallelStreamWriter.set_noansi()
+    ParallelStreamWriter.set_default_ansi_mode(AnsiMode.NEVER)
     results, errors = parallel_execute(
         objects=["something", "something more"],
         func=lambda x: x,

View File

@@ -28,6 +28,7 @@ from compose.service import Service
 def build_config(**kwargs):
     return Config(
+        config_version=kwargs.get('config_version', VERSION),
         version=kwargs.get('version', VERSION),
         services=kwargs.get('services'),
         volumes=kwargs.get('volumes'),

View File

@@ -1,5 +1,5 @@
 [tox]
-envlist = py37,pre-commit
+envlist = py37,py39,pre-commit
 
 [testenv]
 usedevelop=True