Compare commits


476 Commits

Author SHA1 Message Date
Nicolas De Loof
9ad10575d1 Prepare drop of python 2.x support
see https://github.com/docker/compose/issues/6890

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-11-20 16:00:53 +01:00
Ulysses Souza
2887d82d16 Merge pull request #6982 from smamessier/fix_non_ascii_error
Fixed non-ascii error when using COMPOSE_DOCKER_CLI_BUILD=1 for Buildkit
2019-11-18 16:45:04 +01:00
Ulysses Souza
2919bebea4 Fix non-ASCII chars error (Python 2 only)
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-11-18 15:43:50 +01:00
Djordje Lukic
5478c966f1 Merge pull request #7008 from zelahi/fix-readme-link
Fixed broken README link for common use cases
2019-11-07 09:59:49 +01:00
Zuhayr Elahi
e546533cfe Fixed broken README link for common use cases
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-11-06 17:10:48 -08:00
Jean-Christophe Sirot
abef11b2a6 Merge pull request #6996 from ajlai/fix-color-order-and-remove-red
Make container service color deterministic, remove red from chosen colors
2019-11-06 16:11:57 +01:00
Anthony Lai
802fa20228 Make container service color deterministic, remove red from chosen colors
Signed-off-by: Anthony Lai <anthonyjlai@gmail.com>
2019-11-03 23:44:31 +00:00
Djordje Lukic
fa34ee7362 Merge pull request #6973 from glours/set_no_color_if_clicolor_defined_to_0
Set no-colors to true if CLICOLOR env variable is set to 0
2019-10-31 16:45:10 +01:00
Sebastien Mamessier
a3a23bf949 Fixed error when using startswith on non-ascii string
Signed-off-by: Sebastien Mamessier <smamessier@uber.com>
2019-10-30 13:57:08 +01:00
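An illustrative sketch of the Python 2 pitfall behind the non-ASCII fixes above (sample strings are made up): calling startswith with a unicode prefix on bytes containing non-ASCII characters triggers an implicit ASCII decode and raises UnicodeDecodeError; decoding explicitly up front sidesteps it.

```python
# Illustrative only: under Python 2, bytes + unicode comparisons force an
# implicit ASCII decode. Decode once, explicitly, before comparing.
line = b'\xc3\xa9tape: build'        # UTF-8 bytes for "étape: build"
text = line.decode('utf-8')          # decode once, up front
print(text.startswith(u'\xe9tape'))  # True, with no implicit decode
```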
Jean-Christophe Sirot
cfc48f2c13 Merge pull request #6986 from rumpl/fix-unit-test-close-fd
Cleanup all open files
2019-10-28 16:07:18 +01:00
Djordje Lukic
f8142a899c Cleanup all open files
If the fd is not closed the cleanup will fail on windows.

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-10-28 15:36:05 +01:00
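A minimal illustration of the Windows constraint this commit mentions: an open file descriptor keeps the file locked on Windows, so removing it during test cleanup fails. A context manager guarantees the descriptor is closed before deletion (the temp-file usage below is illustrative, not Compose's test code).

```python
import os
import tempfile

# Create a scratch file, write to it, and ensure the descriptor is
# closed before removal; on Windows, removing a still-open file fails.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("scratch data")
os.remove(path)  # safe: the fd was closed when the with-block exited
```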
Guillaume Lours
2e7493a889 Set no-colors to true if CLICOLOR env variable is set to 0
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-21 11:37:46 +02:00
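A minimal sketch of the CLICOLOR convention this commit implements; the helper name is illustrative, not Compose's actual code.

```python
import os

def no_color():
    # Per the CLICOLOR convention, CLICOLOR=0 asks CLI tools to
    # disable colored output.
    return os.environ.get("CLICOLOR") == "0"

print(no_color())
```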
Jean-Christophe Sirot
4be2fa010a Merge pull request #6972 from glours/align_image_size_display_to_docker_cli
Format image size as decimal to align with the Docker CLI
2019-10-18 15:26:15 +02:00
Guillaume Lours
386bdda246 Format image size as decimal to align with the Docker CLI
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-18 12:50:38 +02:00
okor
17bbbba7d6 update docker-py
Signed-off-by: Jason Ormand <jason.ormand1@gmail.com>
2019-10-18 09:37:24 +02:00
Nicolas De Loof
1ca10f90fb Fix acceptance tests
tty is now (correctly) reported to have 80 columns, which splits the service
ID across two lines

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-16 14:31:27 +02:00
Nicolas De Loof
452880af7c Use python Posix support to get tty size
stty is not portable outside *nix
Note: shutil.get_terminal_size requires Python 3.3

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-16 14:31:27 +02:00
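For reference, the portable call the commit message points to; shutil.get_terminal_size has shipped with the standard library since Python 3.3.

```python
import shutil

# Returns an os.terminal_size named tuple; the fallback applies when the
# size cannot be determined (for example, when output is not a tty).
columns, lines = shutil.get_terminal_size(fallback=(80, 24))
print(columns, lines)
```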
Guillaume LOURS
944660048d Merge pull request #6964 from guillaumerose/addmorelabels
Add working dir, config files and env file in service labels
2019-10-15 10:06:42 +02:00
Guillaume Rose
dbe4d7323e Add working dir, config files and env file in service labels
Signed-off-by: Guillaume Rose <guillaume.rose@docker.com>
2019-10-15 09:18:09 +02:00
Guillaume Rose
1678a4fbe4 Run CI on amd64
Signed-off-by: Guillaume Rose <guillaume.rose@docker.com>
2019-10-14 22:01:04 +02:00
Guillaume LOURS
4e83bafec6 Merge pull request #6955 from ndeloof/paramiko
Bump paramiko to 2.6.0
2019-10-10 10:59:44 +02:00
Nicolas De Loof
8973a940e6 Bump paramiko to 2.6.0
close #6953

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-10 08:55:15 +02:00
Zuhayr Elahi
8835056ce4 UPDATED log message
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-10-10 07:08:42 +02:00
Zuhayr Elahi
3135a0a839 Added log message to check compose file
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-10-10 07:08:42 +02:00
Guillaume Lours
cdae06a89c exclude issues flagged with kind/feature from the stale process
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 21:51:34 +02:00
Guillaume Lours
79bf9ed652 correct invalid yaml indentation
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 21:10:18 +02:00
Nicolas De loof
29af1a84ca Merge pull request #6952 from glours/stale_configuration
Add config file for @probot/stale
2019-10-09 16:41:58 +02:00
Guillaume Lours
9375c15bad Add config file for @probot/stale
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 16:16:57 +02:00
Chris Crone
8ebb1a6f19 Merge pull request #6949 from jcsirot/fix-pushbin-script-verbosity
Remove set -x to make this script less verbose
2019-10-09 12:04:24 +02:00
Jean-Christophe Sirot
37be2ad9cd Remove set -x to make this script less verbose
Signed-off-by: Jean-Christophe Sirot <jean-christophe.sirot@docker.com>
2019-10-09 10:51:17 +02:00
Nicolas De loof
6fe35498a5 Add dependencies for ARM build (#6908)
Add dependencies for ARM build
2019-10-09 09:38:58 +02:00
Stefan Scherer
ce52f597a0 Enhance build script for different CPU architectures
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2019-10-09 09:11:29 +02:00
Stefan Scherer
79f29dda23 Add dependencies for ARM build
Signed-off-by: Stefan Scherer <scherer_stefan@icloud.com>
2019-10-09 09:11:29 +02:00
Nicolas De loof
7172849913 Fix "extends" same file optimization (#6425)
Fix "extends" same file optimization
2019-10-09 08:50:54 +02:00
Aleksandr Mezin
c24b7b6464 Fix same file 'extends' optimization
Signed-off-by: Aleksandr Mezin <mezin.alexander@gmail.com>
2019-10-09 11:36:17 +06:00
Aleksandr Mezin
74f892de95 Add test to verify same file 'extends' optimization
Signed-off-by: Aleksandr Mezin <mezin.alexander@gmail.com>
2019-10-09 11:36:17 +06:00
Nicolas De loof
09acc5febf [TAR-995] ADDED a stage for executing License Scans (#6875)
[TAR-995] ADDED a stage for executing License Scans
2019-10-08 16:25:28 +02:00
Nicolas De loof
1f16a7929d Merge pull request #6864 from samueljsb/formatter_class
Change Formatter.table method to staticmethod
2019-10-08 16:24:40 +02:00
Nicolas De loof
f9113202e8 Add automatic labeling of bug, feature & question issues (#6944)
Add automatic labeling of bug, feature & question issues
2019-10-08 16:23:15 +02:00
Nicolas De loof
5f2161cad9 Merge pull request #6912 from cranzy/fixing_broken_link
Fixing features broken link
2019-10-08 16:19:31 +02:00
Guillaume LOURS
70f8e38b1d Add automatic labeling of bug, feature & question issues
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-08 11:07:04 +02:00
Ulysses Souza
186aa6e5c3 Merge pull request #6914 from lukas9393/6913-progress-arg
Fix --progress arg when running docker-compose build
2019-10-07 12:28:49 +02:00
Guillaume LOURS
bc57a1bd54 Merge pull request #6925 from ulyssessouza/fix-secrets-warning-message
Fix secret missing warning
2019-09-27 10:51:37 +02:00
ulyssessouza
eca358e2f0 Fix secret missing warning
Signed-off-by: ulyssessouza <ulyssessouza@gmail.com>
2019-09-27 09:10:49 +02:00
Lukas Hettwer
32ac6edb86 Fix --progress arg when running docker-compose build
--progress is no longer processed as a flag but as an argument with a value.

Signed-off-by: Lukas Hettwer <lukas.hettwer@aboutyou.de>

Resolve: [#6913]
2019-09-24 16:02:12 +02:00
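An illustration of the distinction the commit describes, using argparse rather than Compose's actual parser: a boolean flag versus an option that consumes a value.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--compress", action="store_true")  # flag: present or absent
parser.add_argument("--progress")                       # argument with a value

args = parser.parse_args(["--progress", "plain", "--compress"])
print(args.progress, args.compress)  # -> plain True
```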
Dimitar Dimitrov
475f8199f7 Fixing features broken link
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@docker.com>
2019-09-24 13:31:30 +03:00
Zuhayr Elahi
98d7cc8d0c ADDED a stage for executing License Scans
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-09-13 14:25:06 -07:00
Ulysses Souza
d7c7e21921 Merge pull request #6131 from sagarafr/fix-5920-missing-secret-message
Add a warning message to secret file
2019-09-09 17:45:08 +02:00
Ulysses Souza
70ead597d2 Add tests to 'get_secret' warnings
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-09-09 10:04:05 +02:00
Marian Gappa
b9092cacdb Fix missing secret error message
Add a warning message when the secret file doesn't exist

Fixes #5920

Signed-off-by: Marian Gappa <marian.gappa@gmail.com>
2019-09-09 10:04:05 +02:00
Silvin Lubecki
1566930a70 Merge pull request #6862 from deathtracktor/master
Fix KeyError when remote network labels are None.
2019-09-06 11:13:48 +02:00
Danil Kister
a5fbf91b72 Prevent KeyError when remote network labels are None.
Signed-off-by: Danil Kister <danil.kister@gmail.com>
2019-09-05 21:36:10 +02:00
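A hypothetical sketch of the defensive pattern a fix like this typically uses: when the remote API reports Labels as None, fall back to an empty dict before indexing.

```python
# Sample data is made up; the Engine API can report Labels as null.
remote_network = {"Name": "backend", "Labels": None}

labels = remote_network.get("Labels") or {}
project = labels.get("com.docker.compose.project")  # None, not a KeyError
print(project)
```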
Ulysses Souza
ecf03fe280 Merge pull request #6882 from ulyssessouza/fix_attach_restarting_container
Fix race condition on watch_events
2019-09-05 16:46:14 +02:00
Ulysses Souza
47d170b06a Fix race condition on watch_events
Avoid attaching to restarting containers and ignore
race conditions when trying to attach to already-dead
containers

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-09-04 17:55:05 +02:00
Chris Crone
9973f051ba Merge pull request #6878 from ulyssessouza/bump-debian
Bump runtime debian
2019-08-30 16:42:56 +02:00
Ulysses Souza
2199278b44 Merge pull request #6865 from ulyssessouza/support-cli-build
Add support to CLI build
2019-08-30 13:46:21 +02:00
Ulysses Souza
5add9192ac Rename envvar switch to COMPOSE_DOCKER_CLI_BUILD
From `COMPOSE_NATIVE_BUILDER` to `COMPOSE_DOCKER_CLI_BUILD`

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-30 12:11:09 +02:00
Ulysses Souza
0c6fce271e Bump runtime debian
From `stretch-20190708-slim` to `stretch-20190812-slim`

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 17:45:21 +02:00
Ulysses Souza
9d7ad3bac1 Add comment on native build and fix typo
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 16:30:50 +02:00
Nao YONASHIRO
719a1b0581 fix: use subprocess32 for python2
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-29 14:21:19 +02:00
Ulysses Souza
bbdb3cab88 Add integration tests to native builder
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 09:31:16 +02:00
Ulysses Souza
ee8ca5d6f8 Rephrase warnings when building with the cli
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-28 17:24:15 +02:00
Nao YONASHIRO
15e8edca3c feat: add a warning if someone uses the --compress or --parallel flag
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:15 +02:00
Nao YONASHIRO
81e223d499 feat: add --progress flag
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:14 +02:00
Nao YONASHIRO
862a13b8f3 fix: add build flags
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:14 +02:00
Nao YONASHIRO
cacbcccc0c Add support to CLI build
This can be enabled by setting the env var
`COMPOSE_NATIVE_BUILDER=1`.

Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-28 17:24:14 +02:00
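A minimal sketch of the toggle, assuming it behaves like other Compose environment switches; the helper is illustrative. (A later commit in this range renames the variable to COMPOSE_DOCKER_CLI_BUILD.)

```python
import os

def cli_build_enabled():
    # Opt in by setting the env var to "1"; unset or any other
    # value leaves the default (library) builder in place.
    return os.environ.get("COMPOSE_NATIVE_BUILDER", "0") == "1"

print(cli_build_enabled())
```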
Samuel Searles-Bryant
672ced8742 Change Formatter.table method to staticmethod
Make this a staticmethod so it's easier to use without needing to init a
Formatter object first.

Signed-off-by: Samuel Searles-Bryant <samuel.searles-bryant@unipart.io>
2019-08-22 14:25:15 +01:00
Djordje Lukic
4cfa622de8 Merge pull request #6631 from chibby0ne/update_jsonschema_dependency
requirements: update jsonschema dependency
2019-08-22 12:54:48 +02:00
Ulysses Souza
525bc9ef7a Merge pull request #6856 from aiordache/bump-alpine
update alpine version to 3.10.1
2019-08-21 15:49:14 +02:00
aiordache
60dcf87cc0 update alpine version to 3.10.1
Signed-off-by: aiordache <anca.iordache@docker.com>
2019-08-20 12:10:26 +02:00
Jean-Christophe Sirot
cf3c07d6ee Merge pull request #6826 from ulyssessouza/env_override_integration_test
Add integration tests regarding environment
2019-07-31 14:15:53 +02:00
Ulysses Souza
b03889ac2a Add integration tests regarding environment
This covers what was included in #6800

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-31 02:09:41 +02:00
Antonio Gutierrez
66856e884c requirements: update jsonschema dependency
Fixes: https://github.com/docker/compose/issues/6347

Signed-off-by: Antonio Gutierrez <chibby0ne@gmail.com>
2019-07-27 21:43:40 +02:00
Djordje Lukic
7a7c9ff67a Merge pull request #6800 from KlaasH/revise-env-file-option
Make '--env-file' option top-level only and fix failure with subcommands
2019-07-25 12:15:11 +02:00
Klaas Hoekema
413e5db7b3 Add shell completions for --env-file option
Adds completions for the --env-file toplevel option to the bash, fish,
and zsh completions files.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:10 -04:00
Klaas Hoekema
69c0683bfe Pass toplevel_environment to run_one_off_container
Instead of passing `project_dir` from `TopLevelCommand.run` to
`run_one_off_container` then using it there to load the toplevel
environment (duplicating the logic that `TopLevelCommand.toplevel_environment`
encapsulates), pass the Environment object.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:10 -04:00
Klaas Hoekema
088a798e7a Fix typo in 'split_env' error message
Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:10 -04:00
Klaas Hoekema
35eb40424c Call TopLevelCommand's environment 'toplevel_environment'
To help prevent confusion between the different meanings and sources
of "environment", rename the method that loads the environment from
the .env or --env-file (i.e. the one that applies at a project level)
to 'toplevel_environment'.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:05 -04:00
Klaas Hoekema
99464d9c2b Handle environment file override within TopLevelCommand
Several (but not all) of the subcommands accept and process the
`--env-file` option, but only because they need to look for a specific
value in the environment. The work of applying the override makes more
sense as the domain of TopLevelCommand, and moving it there and removing
the option from the subcommands makes things simpler.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:24:06 -04:00
Silvin Lubecki
cd8e2f870f Merge pull request #6813 from ulyssessouza/fix_stdin_open
Fix stdin_open when running docker-compose run
2019-07-24 11:20:20 +02:00
Ulysses Souza
c641ea08ae Fix stdin_open when running docker-compose run
This fix makes sure that stdin_open specified in the service
is considered when shelling out to the CLI

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-22 17:27:10 +02:00
Jean-Christophe Sirot
d285ba6aee Merge pull request #6803 from ulyssessouza/pin-image-tags
Pin test images on a non rolling tag
2019-07-19 16:35:22 +02:00
Ulysses Souza
cd098e0cad Pin test images on a non rolling tag
Mainly busybox:latest, pinned to the current latest, which is 1.31.0-uclibc

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-18 11:10:37 +02:00
Djordje Lukic
d212fe68a6 Merge pull request #6728 from albers/completion-config--no-interpolate
Add bash completion for `config --no-interpolate`
2019-07-16 11:59:18 +02:00
Djordje Lukic
c8279bc4db Merge pull request #6738 from Inconnu08/set-optimization
Replace sets with set literal syntax for efficiency
2019-07-15 15:23:24 +02:00
Djordje Lukic
61aa2e346e Merge pull request #6797 from chris-crone/macos-bump-python-3.7.4
Bump macOS build dependency
2019-07-15 10:33:45 +02:00
Djordje Lukic
98932e9cb4 Merge pull request #6754 from Goryudyuma/6740-fix-display
fix: The correct number is displayed
2019-07-15 10:30:59 +02:00
Goryudyuma
59491c7d77 add: test for units
Signed-off-by: Kei Matsumoto <umaretekyoumade@gmail.com>
2019-07-14 04:31:16 +09:00
Kei Matsumoto
75d41edb94 fix: Add test
Signed-off-by: Kei Matsumoto <umaretekyoumade@gmail.com>
2019-07-14 03:38:29 +09:00
Goryudyuma
f9099c91ae fix: The correct number is displayed
Signed-off-by: Kei Matsumoto <umaretekyoumade@gmail.com>
2019-07-14 03:38:12 +09:00
Ulysses Souza
1b326fce57 Merge pull request #6720 from ijc/pass-env-to-docker-cli
Pass environment when calling through to docker cli.
2019-07-12 10:11:47 +02:00
Ulysses Souza
ca721728f6 Merge pull request #6588 from javabrett/6587-default-mand-interp-err
Default ?err to (missing) required VAR name. Fixed #6587.
2019-07-11 17:17:46 +02:00
Ulysses Souza
2e31ebba6a Merge pull request #6798 from chris-crone/linux-bump-deps
Bump Linux build dependencies
2019-07-11 16:09:58 +02:00
Christopher Crone
993bada521 Bump Linux build dependencies
* Python 3.7.2 to 3.7.4
* Docker 18.09.5 to 18.09.7
* Alpine 3.9.3 to 3.10.0
* Debian stretch-20190326 to stretch-20190708

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-07-10 17:32:13 +02:00
Christopher Crone
b0e7d801a3 Bump macOS build dependency
* Python 3.7.3 to 3.7.4

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-07-10 17:03:45 +02:00
Chris Crone
7258edb75d Merge pull request #6793 from chris-crone/bump-openssl-1.1.1c-python-3.7.3
Bump macOS build dependencies
2019-07-08 18:42:35 +02:00
Ulysses Souza
f9d1075a5d Merge pull request #6792 from ulyssessouza/bump-texttable
Bump texttable from 0.9.1 to 1.6.2
2019-07-08 15:23:31 +02:00
Ulysses Souza
a1c9d4925a Merge pull request #6791 from ulyssessouza/bump-mock
Bump mock from 2.0.0 to 3.0.5
2019-07-08 15:23:22 +02:00
Christopher Crone
3d80c8e86d Bump macOS build dependencies
* OpenSSL 1.1.1a to 1.1.1c
* Python 3.7.2 to 3.7.3

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-07-08 15:05:10 +02:00
Ulysses Souza
0bfa1c34f0 Bump texttable from 0.9.1 to 1.6.2
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-08 14:52:30 +02:00
Ulysses Souza
57a2bb0c50 Bump mock from 2.0.0 to 3.0.5
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-08 13:47:19 +02:00
Ulysses Souza
3d693f3733 Merge pull request #6778 from ulyssessouza/cleanup-setup_py-versioning
Strip out generic versions and bump requests
2019-07-03 17:31:15 +02:00
Ulysses Souza
ce5451c5b4 Strip out generic versions and bump requests
Replaces generic version limits with a cap at the next major version
Bump the minimal `requests` to 2.20.0

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-02 15:49:07 +02:00
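An illustrative install_requires entry of the shape described: a known-good floor plus a cap at the next major version, instead of an open-ended range.

```python
# Illustrative setup.py fragment, not the actual Compose requirement list.
install_requires = [
    'requests >= 2.20.0, < 3',
]
```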
Ulysses Souza
df2e833cf0 Merge pull request #6777 from ulyssessouza/pin-busybox-image-version
Pin busybox image version in tests
2019-07-02 14:33:56 +02:00
Ulysses Souza
cacc9752a3 Pin busybox image version in tests
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-02 13:42:41 +02:00
Eli Uriegas
cf419dce4c Add .fossa.yml file (#6750)
Add .fossa.yml file
2019-06-17 10:23:44 -07:00
Dave Tucker
8c387c6013 Add .fossa.yml file
This commit adds a .fossa.yml file used by fossa.io
It allows Fossa to scan the dependencies and figure out which OSS
licenses are in use. This can be added to CI at some point in the near
future.

Signed-off-by: Dave Tucker <dave@dtucker.co.uk>
2019-06-14 14:58:17 +01:00
Inconnu08
57055e0e66 Replace sets with set literal syntax for efficiency
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-06-02 20:21:21 +06:00
Inconnu08
c37fb783fe replace sets with set literal syntax for efficiency
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-06-01 01:31:35 +06:00
Inconnu08
b29b6a1538 replace sets with set literal syntax for efficiency
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-05-31 20:29:09 +06:00
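What the three "set literal" commits refer to: constructing a set from a list literal builds a throwaway list first, while a set literal does not.

```python
slow = set(['running', 'paused'])  # builds a list, then a set
fast = {'running', 'paused'}       # set literal, built in one step

assert slow == fast
```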
Harald Albers
d68113f5c0 Add bash completion for config --no-interpolate
Signed-off-by: Harald Albers <github@albersweb.de>
2019-05-24 21:59:14 +02:00
Ulysses Souza
26e1a2dd31 Merge pull request #6725 from ulyssessouza/fix-release-script
Fix release script get_full_version()
2019-05-23 23:37:06 +02:00
Ulysses Souza
1f55b533c4 Fix release script get_full_version()
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-05-23 23:35:57 +02:00
Brett Randall
fb4d5aa7e6 Include required but missing VAR name and assignment in interpolation error message.
Error message format is now e.g.:

ERROR: Missing mandatory value for "environment" option interpolating ['MYENV=${MYVAR:?}'] in service "myservice":

Fixed #6587.

Signed-off-by: Brett Randall <javabrett@gmail.com>
2019-05-24 07:01:39 +10:00
Ulysses Souza
e806520dc3 Merge pull request #6723 from ulyssessouza/fix-reorder-imports
Fix imports ordering
2019-05-23 22:04:50 +02:00
Ulysses Souza
a2516c48d9 Fix imports ordering
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-05-23 22:04:02 +02:00
Ulysses Souza
79639af394 Merge pull request #6721 from ulyssessouza/fix-release-finalize
Fix 'finalize' command on release script
2019-05-23 21:39:40 +02:00
Ulysses Souza
2d2b0bd9a8 Fix 'finalize' command on release script
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-05-23 21:38:20 +02:00
Ian Campbell
9d2508cf58 Pass environment when calling through to docker cli.
This ensures that settings from any `.env` file (such as `DOCKER_HOST`) are
passed on to the cli.

Unit tests are adjusted for the new parameter and a new case is added to ensure
it is propagated as expected.

Fixes: 6661

Signed-off-by: Ian Campbell <ijc@docker.com>
2019-05-23 16:29:46 +01:00
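A minimal sketch (names are illustrative) of the behaviour this commit describes: merging the loaded `.env` values over the process environment before spawning the docker CLI, so settings such as `DOCKER_HOST` take effect.

```python
import os
import subprocess

def run_docker_cli(args, environment):
    # Merge the values loaded from .env over the current environment.
    env = dict(os.environ, **environment)
    return subprocess.call(["docker"] + list(args), env=env)

# Example (requires a docker binary on PATH):
# run_docker_cli(["ps"], {"DOCKER_HOST": "ssh://user@host"})
```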
Ulysses Souza
e1baa90f6b Merge pull request #6719 from ulyssessouza/fix-release-script
Fix release script for null user
2019-05-22 19:01:01 +02:00
Ulysses Souza
c15e8af7f8 Fix release script for null user
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-05-22 19:00:16 +02:00
Ulysses Souza
4218b46c78 Merge pull request #6666 from joakimr-axis/armpurge
Purge Dockerfile.armhf which is no longer needed
2019-05-22 16:27:09 +02:00
Ulysses Souza
81258f59db Merge pull request #6715 from ulyssessouza/bump-dockerpy-401
Bump docker-py 4.0.1
2019-05-22 14:11:07 +02:00
Ulysses Souza
e4b4babc24 Bump docker-py 4.0.1
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-05-22 11:55:37 +02:00
Ian Campbell
f764faa841 Merge pull request #6649 from GeyseR/requests_upgrade
support requests up to 2.22.0 version
2019-05-21 13:36:31 +01:00
Sergey Fursov
a857be3f7e support requests up to 2.22.x version
Signed-off-by: Sergey Fursov <geyser85@gmail.com>
2019-05-21 14:50:17 +03:00
Ian Campbell
a89128118b Merge pull request #6342 from collin5/b5547
--remove-orphans is ignored when using up --no-start
2019-05-20 15:45:24 +01:00
Ian Campbell
263d18ce93 Merge pull request #6624 from orisano/feat-empty-cache-from
feat: drop empty tag on cache_from
2019-05-20 15:35:48 +01:00
Nao YONASHIRO
51ee6093df feat: drop empty tag on cache_from
Signed-off-by: Nao YONASHIRO <owan.orisano@gmail.com>
2019-05-20 23:32:15 +09:00
Ulysses Souza
9de6ec3700 Merge pull request #6695 from Inconnu08/fix_depreciation
fixes warn method is deprecated
2019-05-20 12:34:51 +02:00
Inconnu08
99e67d0c06 fix warning method is deprecated with tests
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-05-15 23:46:12 +06:00
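For context, the stdlib detail behind "warn method is deprecated" (assuming the commit targets stdlib logging): logging.Logger.warn is a deprecated alias of warning(), so the fix is the rename shown below.

```python
import logging

log = logging.getLogger("compose")
log.warning("use warning()")  # log.warn(...) would emit a DeprecationWarning
```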
Ulysses Souza
75d5eb0108 Merge pull request #6707 from ulyssessouza/clean-containers-before-rm
Remove remaining containers on test_build_run
2019-05-15 11:44:37 +02:00
Ulysses Souza
8a9575bd0d Remove remaining containers on test_build_run
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-05-14 19:13:21 +02:00
Ulysses Souza
b612361541 Merge pull request #6654 from jkldgoefgkljefogeg/master
fix #6579 cli ps --all
2019-05-14 11:53:04 +02:00
noname
c2783d6f88 fix #6579 cli ps --all
Signed-off-by: Seedf <github_commit@nohdmi.com>
2019-05-13 18:56:43 -07:00
Ulysses Souza
a5b13f369d Merge pull request #6700 from ulyssessouza/bump_urllib3
Bump urllib3
2019-05-13 14:07:21 +02:00
ulyssessouza
3a47000e71 Bump urllib3
Signed-off-by: ulyssessouza <ulysses.souza@docker.com>
2019-05-12 13:33:38 +02:00
Joakim Roubert
482bca9519 Purge Dockerfile.armhf which is no longer needed
Current Dockerfile builds fine for armhf and thus the outdated
Dockerfile.armhf is unnecessary.

Change-Id: Idafdb9fbddedd622c2c0aaddb1d5331d81cfe57d
Signed-off-by: Joakim Roubert <joakimr@axis.com>
2019-04-23 09:53:42 +02:00
Djordje Lukic
79557e3d3a Merge pull request #6657 from ulyssessouza/skip_race_condition_test
Avoid race condition on test
2019-04-19 16:37:14 +02:00
Ulysses Souza
f2dc923084 Avoid race condition on test
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-04-19 15:53:02 +02:00
Ulysses Souza
8a89d94e15 Merge pull request #6641 from ulyssessouza/dockerfiles_refactor
Refactor Dockerfiles for generating musl binaries
2019-04-19 11:46:41 +02:00
Ulysses Souza
e047169315 Workaround race conditions on tests
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-04-17 17:59:12 +02:00
Ulysses Souza
2b24eb693c Refactor release and build scripts
- Make use of the same Dockerfile when producing
an image for testing and for deploying to
DockerHub

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-04-17 16:08:33 +02:00
Ulysses Souza
c217bab7f6 Refactor Dockerfiles for generating musl binaries
- Refactor Dockerfiles so the same ones are used for tests and for
distribution on Docker Hub, on both Debian and Alpine
- Adapt test scripts to the new Dockerfiles' structure
- Adapt Jenkinsfile to add alpine to the test matrix

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-04-17 16:08:33 +02:00
Ulysses Souza
5265f63c34 Merge pull request #6632 from ulyssessouza/update-releasedocs
Update release process on updating docs
2019-04-04 15:05:49 +02:00
Ulysses Souza
9e3d9f6681 Update release process on updating docs
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-04-04 11:51:12 +02:00
Ulysses Souza
41c8df39fe Merge pull request #6618 from ulyssessouza/bump-1.24.0-changelog
Bump 1.24.0 changelog on master
2019-03-29 14:38:20 +01:00
Djordje Lukic
ef10c1803f "Bump 1.24.0"
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-28 19:38:25 +01:00
Djordje Lukic
ada945c5cd Merge pull request #6615 from ulyssessouza/bump-docker-py-3.7.2
Bump docker-py 3.7.2
2019-03-28 18:15:30 +01:00
Ulysses Souza
ac148bc1ca Bump docker-py 3.7.2
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-28 17:45:03 +01:00
Ian Campbell
e84ffb6aeb Merge pull request #6592 from treatwell/6589-depends_on-recreation-fix
Fixed depends_on recreation behaviour for issue #6589
2019-03-26 14:06:14 +00:00
joeweoj
8a339946fa Fixed depends_on recreation behaviour for issue #6589
Previously any containers which did *not* have any links were always recreated.
In order to fix depends_on and preserve expected links recreation behaviour, we now only use the ConvergenceStrategy.always recreation strategy for a service if any of the following conditions are true:
* --always-recreate-deps flag provided
* service container is stopped
* service defines links but the container does not have any
* container has links but the service definition does not

Signed-off-by: joeweoj <joewardell@gmail.com>
2019-03-26 11:48:20 +00:00
Ulysses Souza
b2723d6b3d Merge pull request #6610 from ulyssessouza/fix-httperror-no-attribute-message
Fix script for the case of release file already present on pypi
2019-03-25 17:18:59 +01:00
Ian Campbell
6ccbb56fec Merge pull request #6494 from collin5/b6464
Only pull images that can't be built in `docker-compose pull`
2019-03-25 13:25:32 +00:00
Collins Abitekaniza
c6dd7da15e only pull images that can't be built
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2019-03-24 01:05:30 +03:00
Ulysses Souza
154d7c1722 Fix script for release file already present case
This avoids a:
"AttributeError: 'HTTPError' object has no attribute 'message'"

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 17:14:18 +01:00
Ulysses Souza
fc757fb4f5 Merge pull request #6604 from ulyssessouza/fix-release-resources
Fix release resources
2019-03-22 11:52:36 +01:00
Ulysses Souza
2948c396a6 Fix bintray docker-compose link
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 11:23:44 +01:00
Ulysses Souza
15f8c30a51 Fix typo on finalize
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 11:23:44 +01:00
Ulysses Souza
cd1fcd3ea5 Use os.system() instead of run_setup()
Use `os.system()` instead of `run_setup()` because the latter
was not taking effect

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 11:23:36 +01:00
Ulysses Souza
1e4fde8aa7 Bump docker-py version to 3.7.1
This docker-py version includes ssh fixes

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 11:22:11 +01:00
Ulysses Souza
a27448bdab Merge pull request #6594 from ulyssessouza/bump-docker-py-to-3.7.1
Bump docker-py version to 3.7.1
2019-03-21 10:30:54 +01:00
Ulysses Souza
dc712bfa23 Bump docker-py version to 3.7.1
This docker-py version includes ssh fixes

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-20 18:03:41 +01:00
Chris Crone
3b846ac8de Merge pull request #6578 from bfirsh/bootloader-ignore-signals
Enable bootloader_ignore_signals in pyinstaller
2019-03-15 11:48:33 +01:00
Ben Firshman
0863785e96 Enable bootloader_ignore_signals in pyinstaller
Fixes #3347

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2019-03-12 12:25:35 -04:00
Ian Campbell
c6c57fcf49 Merge pull request #6518 from ulyssessouza/bump-python-37
Bump python 3.6.8 -> 3.7.2
2019-03-08 12:05:55 +00:00
Ian Campbell
733b827f85 Merge pull request #6544 from CatEars/secrets-added-after-container
Add test and implementation for secret added after container has been…
2019-03-08 11:21:00 +00:00
Henke Adolfsson
853215acf6 Remove project.stop() in test
Signed-off-by: Henke Adolfsson <catears13@gmail.com>
2019-03-08 07:43:53 +01:00
Henke Adolfsson
87935893fc Update data for unit tests
Signed-off-by: Henke Adolfsson <catears13@gmail.com>
2019-03-08 07:43:53 +01:00
Henke Adolfsson
aa79fb2473 Ensure test passes
Signed-off-by: Henke Adolfsson <catears13@gmail.com>
2019-03-08 07:43:53 +01:00
Henke Adolfsson
76d0406fab Add test and implementation for secret added after container has been created
The issue is that if a secret is added to the compose file, Compose will
not notice that containers have diverged since the last run, because secrets
are not part of the config_hash, which determines whether the configuration
of a service has changed.

Signed-off-by: Henke Adolfsson <catears13@gmail.com>
2019-03-08 07:43:53 +01:00
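A hedged sketch of the divergence check the commit message describes: the service configuration is hashed, and a container whose stored hash differs gets recreated. If secrets are left out of the hashed payload, adding one changes nothing. The hashing details below are illustrative, not Compose's exact implementation.

```python
import hashlib
import json

def config_hash(service_config):
    # Serialize deterministically, then hash; any change to the hashed
    # payload changes the hash and triggers recreation.
    payload = json.dumps(service_config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

before = config_hash({"image": "busybox", "secrets": []})
after = config_hash({"image": "busybox", "secrets": ["db_password"]})
print(before != after)  # True once secrets are part of the hashed payload
```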
Chris Crone
a1f3cb6d89 Merge pull request #6546 from ijc/update-docs-branch
docs/README.md: update since `vnext-compose` branch is no longer used.
2019-03-07 20:31:43 +01:00
Ian Campbell
7bf9963cd6 Merge pull request #6547 from kudos/bugfix/scale-zero-default
Fix scale attribute to accept 0 as a value
2019-03-07 15:40:14 +00:00
Michael Irwin
d8e390eb9f Added test case to verify fix for #6525
Signed-off-by: Michael Irwin <mikesir87@gmail.com>
2019-03-07 15:30:11 +01:00
Michael Irwin
3f1d41a97e Fix merging of compose files when network has None config
Signed-off-by: Michael Irwin <mikesir87@gmail.com>

Resolves #6525
2019-03-07 15:30:11 +01:00
Jonathan Cremin
087bef4f95 Add tests for compose file 'scale: 0'
Signed-off-by: Jonathan Cremin <jonathan@crem.in>
2019-03-06 12:57:14 +00:00
Ian Campbell
0b039202ac docs/README.md: update since vnext-compose branch is no longer used.
All PRs should be made to `master` now. Also:

- Template seems to exist now[0] so remove the "coming soon".
- The labels used seem different now, but labelling seems more like a docs
  maintainer thing than a contributor thing, so just drop that paragraph.

[0] https://raw.githubusercontent.com/docker/docker.github.io/master/.github/PULL_REQUEST_TEMPLATE.md

Signed-off-by: Ian Campbell <ijc@docker.com>
2019-03-06 10:37:32 +00:00
Ian Campbell
40b0ce3e5d Merge pull request #6542 from akshitgrover/6028-Add_Quiet_Builds
Add --quiet build flag
2019-03-05 14:55:32 +00:00
Jonathan Cremin
42c965935f Fix scale attribute to accept 0 as a value
Signed-off-by: Jonathan Cremin <jonathan@crem.in>
2019-03-05 11:34:48 +00:00
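The classic falsy-zero pitfall a fix like this usually addresses (the helpers are illustrative): `scale: 0` is a valid value, but a plain truthiness check treats it the same as unset.

```python
def effective_scale_buggy(scale):
    return scale if scale else 1  # silently drops an explicit 0

def effective_scale_fixed(scale):
    return scale if scale is not None else 1

print(effective_scale_buggy(0), effective_scale_fixed(0))  # -> 1 0
```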
Ian Campbell
615c01c50a Merge pull request #6368 from xificurC/master
adds --no-interpolate to docker-compose config
2019-03-05 09:38:36 +00:00
Peter Nagy (NPE)
e34d329227 adds --no-interpolate to docker-compose config
Signed-off-by: Peter Nagy <pnagy@gratex.com>
2019-03-04 13:03:35 +01:00
Akshit Grover
1f97a572fe Add --quiet build flag
Signed-off-by: Akshit Grover <akshit.grover2016@gmail.com>
2019-03-02 13:07:23 +05:30
slowr
b09d8802ed Added additional argument (--env-file) for docker-compose to import environment variables from a given PATH.
Signed-off-by: Dimitrios Mavrommatis <jim.mavrommatis@gmail.com>
2019-02-26 16:38:54 +01:00
tuttieee
572032fc0b Fix Project#build_container_operation_with_timeout_func not to mutate an 'option' dict across multiple containers
Signed-off-by: Yuichiro Tsuchiya <t.yic.yt@gmail.com>
2019-02-25 13:07:41 +01:00
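An illustrative reduction of the bug class this commit names: mutating one shared options dict inside a per-container loop leaks state between containers, whereas copying per call keeps each container independent.

```python
base_options = {"timeout": 10}

def container_opts_buggy(options, name):
    options["name"] = name           # mutates the shared dict
    return options

def container_opts_fixed(options, name):
    return dict(options, name=name)  # fresh dict per container

container_opts_buggy(base_options, "web_1")
print(base_options)  # {'timeout': 10, 'name': 'web_1'} -- state leaked
```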
Christopher Crone
133df63108 Add built Python smoke test to macOS setup script
Prior to this smoke test, the macOS setup step wouldn't fail if the
Python that it built wasn't functional. This will make debugging Python
build issues easier in the future.

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-21 16:21:09 +01:00
Christopher Crone
dbc229dc37 Fix macOS build for Python 3.7
- Specify --with-openssl directory for Python build
- Better checks for downloaded SDK, OpenSSL, and Python
- Fix missing slash for Python build CPPFLAGS

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-21 13:57:25 +01:00
Ulysses Souza
bb0bd3b26b Harmonize tox and virtualenv versions
- Set all tox versions to 2.9.1
- Set all virtualenv versions to 16.2.0

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-02-21 13:56:45 +01:00
Ulysses Souza
a734371e7f Bump python version from 3.6.8 to 3.7.2
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-02-21 13:56:45 +01:00
Chris Crone
768c788da9 Merge pull request #6504 from docker/mac-bump-openssl
Bump OpenSSL for macOS build
2019-02-21 11:50:24 +01:00
Ulysses Souza
aee88e21bf Merge pull request #6529 from ulyssessouza/rm-option
Add --no-rm to command build
2019-02-20 18:33:29 +01:00
Ulysses Souza
a35aef4953 Add --no-rm to command build
- When present, build does not remove
intermediate containers after a successful build.

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-02-20 18:09:09 +01:00
Christopher Crone
fbbf78d3da macOS: Bump OpenSSL to 1.1.1a
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-20 15:59:41 +01:00
Chris Crone
a65b3cd758 Merge pull request #6503 from docker/mac-virtualenv-fix
Force virtualenv version for macOS CI
2019-02-20 15:59:13 +01:00
Chris Crone
4813689c9e Merge pull request #6514 from albers/completion-fix-build--memory
Fix bash completion for `build --memory`
2019-02-20 15:48:15 +01:00
Harald Albers
436a343a18 Fix bash completion for build --memory
- the option requires an argument
- adds missing short form `-m`

Signed-off-by: Harald Albers <github@albersweb.de>
2019-02-11 13:50:41 +01:00
Christopher Crone
d9ffec4002 circleci: Fix virtualenv version to 16.2.0
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-05 12:13:19 +01:00
Chris Crone
3cddd1b670 Merge pull request #6501 from chris-crone/build-fixes
Various build fixes
2019-02-05 11:41:29 +01:00
Ulysses Souza
c8a621b637 Fix Flake8 lint
This removes extra indentation and replaces the use of `is` with `==` when
comparing strings

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-02-05 11:11:52 +01:00
Christopher Crone
f472fd545b Dockerfile: Force version of virtualenv to 16.2.0
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-05 10:51:33 +01:00
Christopher Crone
f1f0894c1b script.build.linux: Do not tail image build logs
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-05 10:50:55 +01:00
Christopher Crone
b572b32999 requirements-dev: Fix version of mock to 2.0.0
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-05 10:50:25 +01:00
Christopher Crone
8ad4c08109 macOS: Bump Python and OpenSSL
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-02-05 10:40:03 +01:00
Collins Abitekaniza
c27132afad remove stopped containers on --remove-orphans
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>

kill orphan containers, catch APIError Exception

Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>

test remove orphans with --no-start

Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2019-01-25 14:28:56 +03:00
Ulysses Souza
9de1f569f3 Merge pull request #6479 from ulyssessouza/shell-completion-parallel
Add `--parallel` to `docker build`'s options in `bash` and `zsh` completion
2019-01-24 15:10:06 +01:00
Ulysses Souza
698ea33b15 Add --parallel to docker build's options in bash and zsh completion
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-01-21 19:13:45 +01:00
Ulysses Domiciano Souza
f158fb03e7 Merge pull request #6364 from ulyssessouza/6350-avoid-warning-on-exec
Avoids misleading warning concerning env vars when performing an `exec` command
2019-01-21 12:01:31 +01:00
Chris Crone
8f5f7e72be Merge pull request #6466 from rumpl/credential-spec
Support for credential_spec
2019-01-21 11:03:34 +01:00
Chris Crone
718346f103 Merge pull request #6454 from rumpl/digest-distribution
Resolve digests without pulling image
2019-01-21 11:02:42 +01:00
Djordje Lukic
ae0f3c74a0 Support for credential_spec
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-01-17 16:00:22 +01:00
Chris Crone
e40eaa5df6 Merge pull request #6461 from albers/completion-ps--all
Add bash completion for `ps --all|-a`
2019-01-16 22:08:35 +01:00
Djordje Lukic
0c20fc5d91 Resolve digests without pulling image
If there is no image locally, `docker-compose --resolve-image-digests`
will try to get the digest from the repository.

Fixes https://github.com/docker/compose/issues/5818

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-01-15 14:24:26 +01:00
Chris Crone
d5d49a8e29 Merge pull request #6460 from shin-/maintainers_update
Update maintainers file
2019-01-15 12:56:21 +01:00
Harald Albers
14a1a0c020 Add bash completion for ps --all|-a
Signed-off-by: Harald Albers <github@albersweb.de>
2019-01-15 09:01:49 +01:00
Joffrey F
6933435004 Update maintainers file
Signed-off-by: Joffrey F <joffrey@docker.com>
2019-01-14 15:22:12 -08:00
Joffrey F
cf96fcb4af Merge pull request #6452 from docker/collin5-b6446
Fix failure check in parallel_execute_watch
2019-01-10 17:55:33 -08:00
Joffrey F
bcccac69fa Merge pull request #6444 from qboot/master
Upgrade pyyaml to 4.2b1
2019-01-10 16:31:29 -08:00
Joffrey F
2ec7615ed6 Merge pull request #6448 from smueller18/race-condition-pull
fix race condition after pulling image
2019-01-10 15:53:07 -08:00
Joffrey F
2ed171cae9 Bring zero container check up in the call stack
Signed-off-by: Joffrey F <joffrey@docker.com>
2019-01-10 15:48:37 -08:00
Collins Abitekaniza
325637d9d5 test image pull done
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2019-01-10 15:48:37 -08:00
Collins Abitekaniza
bab8b3985e check for started containers only on service_start
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2019-01-10 15:48:37 -08:00
Joffrey F
532d00fede Merge pull request #6451 from docker/bump_sdk
Bump SDK version -> 3.7.0
2019-01-10 14:52:48 -08:00
Joffrey F
ab0a0d69d9 Bump SDK version -> 3.7.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2019-01-10 14:25:20 -08:00
Stephan Müller
56fbd22825 fix race condition after pulling image
Signed-off-by: Stephan Müller <mail@stephanmueller.eu>
2019-01-09 23:14:12 +01:00
Quentin Brunet
8419a670ae Upgrade pyyaml to 4.2b1
Signed-off-by: Quentin Brunet <hello@quentinbrunet.com>
2019-01-08 14:19:57 +01:00
Joffrey F
4bd93b95a9 Merge pull request #6406 from collin5/b5948
Error on duplicate mount points.
2019-01-02 10:11:50 -08:00
Collins Abitekaniza
47ff8d710c test create from config with duplicate mount points
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2019-01-02 17:51:22 +03:00
Collins Abitekaniza
d980d170a6 error on duplicate mount points
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2019-01-02 17:34:26 +03:00
Joffrey F
f9061720b5 Merge pull request #6434 from docker/and800-master
Lower severity to "warning" if `down` tries to remove nonexisting image
2018-12-28 14:25:33 -08:00
Andriy Maletsky
01eb4b6250 Lower severity to "warning" if down tries to remove nonexisting image
Signed-off-by: Andriy Maletsky <andriy.maletsky@gmail.com>
2018-12-28 13:21:23 -08:00
Ulysses Souza
f4ed9b2ef5 Detects the execution of an `exec` command and sets the environment to silent mode.
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2018-12-28 02:43:41 +01:00
Joffrey F
b7374b6271 Merge pull request #6390 from ulyssessouza/6369-override-networks-settings
Fix merge on networks section
2018-12-28 07:25:57 +09:00
Joffrey F
6b3855335e Merge pull request #6410 from docker/2618-new-events-api
Use improved API fields for project events when possible
2018-12-28 07:22:24 +09:00
Joffrey F
6e697c3b97 Merge pull request #6419 from docker/6416-runsh-no-input
Always connect Compose container to stdin
2018-12-20 08:47:24 +09:00
Joffrey F
fee5261014 Always connect Compose container to stdin
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-12-19 00:24:33 -08:00
Joffrey F
0612d973c7 Merge branch 'hirochachacha-feature/reject_environment_variable_that_contains_white_spaces' 2018-12-14 14:37:36 -08:00
Joffrey F
0323920957 Style and language fixes
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-12-14 14:36:40 -08:00
Joffrey F
5232100331 Merge branch 'feature/reject_environment_variable_that_contains_white_spaces' of https://github.com/hirochachacha/compose into hirochachacha-feature/reject_environment_variable_that_contains_white_spaces 2018-12-14 13:42:05 -08:00
Joffrey F
8b293d486e Use improved API fields for project events when possible
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-12-13 14:13:20 -08:00
Ulysses Souza
a2bcf52665 Fix merge on networks section
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2018-12-11 10:36:49 +01:00
Hiroshi Ioka
afc161a0b1 reject environment variable that contains white spaces
Signed-off-by: Hiroshi Ioka <hirochachacha@gmail.com>
2018-12-11 12:52:29 +09:00
Joffrey F
14e7a11b3c Merge pull request #6346 from collin5/b5469
Show failed services on 'docker-compose start' when containers are not available
2018-12-10 15:39:16 -08:00
Joffrey F
c139455fce Merge branch 'ulyssessouza-6245-docker-compose-multiple-push' 2018-12-04 17:14:12 -08:00
Joffrey F
d3933cd34a Move multi-push test to unit tests
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-12-04 17:13:40 -08:00
Joffrey F
5b2092688a Merge branch '6245-docker-compose-multiple-push' of https://github.com/ulyssessouza/compose into ulyssessouza-6245-docker-compose-multiple-push 2018-12-04 17:10:31 -08:00
Joffrey F
64633a81cc Merge pull request #6389 from docker/6386-update-setup-py
Update setup.py for modern PyPI / setuptools
2018-12-04 13:22:41 -08:00
Joffrey F
fc3df83d39 Update setup.py for modern PyPI / setuptools
Remove pandoc dependencies

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-30 17:59:55 -08:00
Joffrey F
0fc3b51b50 Merge pull request #6388 from docker/6336-enable-ssh-support
Add SSH-enabled docker SDK to requirements
2018-11-30 17:31:32 -08:00
Joffrey F
7b82b2e8c7 Add SSH-enabled docker SDK to requirements
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-30 16:24:38 -08:00
Joffrey F
cfa5d02b52 Merge branch 'release' 2018-11-30 16:17:36 -08:00
Joffrey F
dd240787c2 Merge pull request #6387 from ulyssessouza/reorder-imports-update
Update `reorder_python_imports` version to fix Unicode problems
2018-11-30 16:15:54 -08:00
Ulysses Souza
d563a66405 Update reorder_python_imports version to fix Unicode problems
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2018-12-01 00:35:34 +01:00
Joffrey F
b0c10cb876 Merge pull request #6382 from docker/bump-1.23.2
Bump 1.23.2
2018-11-28 15:14:02 -08:00
Joffrey F
dd927e0fdd Merge pull request #6381 from docker/incorrect_precreate_identifier
Fix incorrect pre-create container name in up logs
2018-11-28 14:52:12 -08:00
Joffrey F
1110ad0108 "Bump 1.23.2"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:26:26 -08:00
Joffrey F
f266e3459d Fix incorrect pre-create container name in up logs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:54 -08:00
Joffrey F
bffb6094da Bump SDK version
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:54 -08:00
Joffrey F
66ed9b492e Don't append slugs to containers created by "up"
This change reverts the new naming convention introduced in 1.23 for service containers.
One-off containers will now use a slug instead of a sequential number as they do not
present addressability concerns and benefit from being capable of running in parallel.

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:30 -08:00
Joffrey F
07e2717bee Don't add long path prefix to build context URLs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:03 -08:00
Joffrey F
dce70a5566 Fix parse_key_from_error_msg to not error out on non-string keys
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:02 -08:00
Joffrey F
4682e766a3 Fix config merging for isolation and storage_opt keys
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:23:17 -08:00
Joffrey F
8a0090c18c Only use supported protocols when starting engine CLI subprocess
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:22:41 -08:00
Joffrey F
a7894ddfea Fix incorrect pre-create container name in up logs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:19:21 -08:00
Joffrey F
516eae0f5a Merge pull request #6379 from docker/bump_sdk
Bump SDK version - 3.6.0
2018-11-28 13:20:09 -08:00
Joffrey F
4bc1cbc32a Merge pull request #6377 from docker/6316-noslug
Don't append slugs to containers created by "up"
2018-11-28 12:15:23 -08:00
Ulysses Souza
d9e05f262f Avoids pushing the same image more than once.
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2018-11-28 20:55:54 +01:00
Joffrey F
d1bf27e73a Bump SDK version
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 11:53:26 -08:00
Collins Abitekaniza
b8b6199958 refactor cli tests
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2018-11-28 15:44:58 +03:00
Collins Abitekaniza
dbe3a6e9a9 stdout failed for failing services
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2018-11-28 15:44:54 +03:00
Joffrey F
61bb1ea484 Don't append slugs to containers created by "up"
This change reverts the new naming convention introduced in 1.23 for service containers.
One-off containers will now use a slug instead of a sequential number as they do not
present addressability concerns and benefit from being capable of running in parallel.

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-27 18:58:55 -08:00
Joffrey F
eedbb28d5e Merge pull request #6371 from docker/6354-url-builds
Don't add long path prefix to build context URLs
2018-11-26 17:47:36 -08:00
Joffrey F
2e20097f56 Merge pull request #6351 from hartwork/readme-improvements
Small improvements to README.md
2018-11-26 15:41:29 -08:00
Sebastian Pipping
10864ba687 README.md: Update bug report link
Signed-off-by: Sebastian Pipping <sebastian@pipping.org>
2018-11-27 00:26:56 +01:00
Sebastian Pipping
6421ae5ea3 README.md: Add a few missing full stops
One full stop is moved out of a link
and a "Thank you!" is added as well.

Signed-off-by: Sebastian Pipping <sebastian@pipping.org>
2018-11-27 00:26:56 +01:00
Sebastian Pipping
6ea20e43f6 README.md: Drop reference to IRC channel
Signed-off-by: Sebastian Pipping <sebastian@pipping.org>
2018-11-27 00:26:56 +01:00
Joffrey F
ccc777831c Don't add long path prefix to build context URLs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-26 15:26:27 -08:00
Joffrey F
2975b5a279 Merge pull request #6352 from hartwork/issue-6302-stop-run-from-leaving-restarting-containers-behind
Fix one-off commands for "restart: unless-stopped" (fixes #6302)
2018-11-26 15:25:32 -08:00
Sebastian Pipping
e7f82d2989 Rename build_container_options to build_one_off_container_options
.. to better reflect that its scope is limited to one-off execution
(i.e. the "run" command)

Signed-off-by: Sebastian Pipping <sebastian@pipping.org>
2018-11-26 23:23:56 +01:00
Sebastian Pipping
6559af7660 Fix one-off commands for "restart: unless-stopped" (fixes #6302)
Signed-off-by: Sebastian Pipping <sebastian@pipping.org>
2018-11-26 23:23:56 +01:00
Joffrey F
c32bc095f3 Merge pull request #6363 from ulyssessouza/6157-build-from-source
Adopts 'unknown' as build revision in case git cannot retrieve it.
2018-11-26 12:57:37 -08:00
Ulysses Souza
1affc55b17 Adopts 'unknown' as build revision in case git cannot retrieve it.
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2018-11-22 15:58:41 +01:00
Joffrey F
e86e10fb6b Merge pull request #6327 from collin5/b6271
Add option for `--all` flag to `ps`
2018-11-15 14:32:58 -08:00
Collins Abitekaniza
e0e06a4b56 add detail to description for --all flag
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2018-11-15 15:24:50 +03:00
Collins Abitekaniza
05efe52ccd test --all flag
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2018-11-06 14:59:48 +03:00
Collins Abitekaniza
ba1e0311a7 add option to list all processes
Signed-off-by: Collins Abitekaniza <abtcolns@gmail.com>
2018-11-06 14:52:24 +03:00
Joffrey F
8edb0d872d Merge pull request #6326 from docker/6325-fix-validation-error-parsing
Fix validation error parsing to not raise on non-string keys
2018-11-05 14:27:03 -08:00
Joffrey F
d5eb209be0 Fix parse_key_from_error_msg to not error out on non-string keys
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-05 14:02:13 -08:00
Joffrey F
f009de025c Merge pull request #6322 from alexpusch/fix/zsh-autocomplete
Fix ZSH autocomplete for multiple -f flags
2018-11-05 11:25:59 -08:00
Alex Puschinsky
5b02922455 Fix ZSH autocomplete for multiple -f flags
Signed-off-by: Alex Puschinsky <alexpoo@gmail.com>
2018-11-03 18:37:34 +02:00
Joffrey F
2b604c1e8b Merge pull request #6320 from docker/6319-isolation-storageopt-merge
Fix config merging for isolation and storage_opt keys
2018-11-02 14:08:07 -07:00
Joffrey F
db819bf0b2 Fix config merging for isolation and storage_opt keys
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-02 11:35:34 -07:00
Joffrey F
3727fd3fb9 Merge pull request #6314 from docker/bump-1.23.1
Bump 1.23.1
2018-11-01 11:20:34 -07:00
Joffrey F
afa5d93c90 Merge pull request #6313 from docker/6310-fix-project-directory
Impose consistent behavior across command for --project-directory flag
2018-10-31 15:14:22 -07:00
Joffrey F
fb8cd7d813 Merge pull request #6244 from Cyral/windows-conn-err-msg
Show more helpful error message when Docker is not running. Fixes #6175
2018-10-31 14:56:08 -07:00
Joffrey F
b02f130684 "Bump 1.23.1"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:49:00 -07:00
Joffrey F
176a4efaf2 Impose consistent behavior across command for --project-directory flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:46:19 -07:00
Joffrey F
187f48e338 Don't attempt to truncate a None value in Container.slug
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:46:18 -07:00
Joffrey F
8f4d56a648 Impose consistent behavior across command for --project-directory flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:35:09 -07:00
Joffrey F
9b12f489aa Merge pull request #6312 from docker/6311-container_slug_none
Don't attempt to truncate a None value in Container.slug
2018-10-31 14:25:53 -07:00
Joffrey F
03bdd67eb5 Don't attempt to truncate a None value in Container.slug
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 13:55:20 -07:00
Joffrey F
69fe42027a Merge pull request #6297 from docker/6293-cli-protocols
Only use supported protocols when starting engine CLI subprocess
2018-10-30 16:31:05 -05:00
Joffrey F
7925f8cfa8 Fix version
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-30 14:30:11 -07:00
Joffrey F
147a8e9ab8 Bump next dev version
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-30 14:29:07 -07:00
Joffrey F
91182ccb34 Merge branch 'release'
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-30 14:26:40 -07:00
Joffrey F
a7ca78d854 Merge pull request #6306 from docker/bump-1.23.0
Bump 1.23.0
2018-10-30 16:06:05 -05:00
Joffrey F
9194b8783e Merge pull request #6307 from docker/bump_requests
Bump requests version in requirements.txt
2018-10-29 18:33:25 -05:00
Joffrey F
c8524dc1aa Bump requests version in requirements.txt
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 14:40:32 -07:00
Joffrey F
fd83791d55 Bump requests version in requirements.txt
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 14:38:50 -07:00
Joffrey F
140431d3b9 "Bump 1.23.0"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 12:22:22 -07:00
Joffrey F
3104597e7d "Bump 1.23.0"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 11:43:45 -07:00
Joffrey F
1c002b5844 Fix new flake8 errors/warnings
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 11:38:22 -07:00
Ofek Lev
8f9ead34d3 Allow requests 2.20.x
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-29 11:38:20 -07:00
Joffrey F
f0264e1991 Merge pull request #6289 from ofek/ofek/requests
Allow requests 2.20.x for CVE fix
2018-10-24 19:26:27 -05:00
Ofek Lev
e008db5c97 Allow requests 2.20.x
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-24 19:48:39 -04:00
Joffrey F
4368b8ac05 Only use supported protocols when starting engine CLI subprocess
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-24 16:08:56 -07:00
Joffrey F
2f5d5fc93f Merge pull request #6296 from docker/flake8-update
Fix new flake8 errors/warnings
2018-10-24 17:32:47 -05:00
Joffrey F
98bb68e404 Fix new flake8 errors/warnings
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-24 15:06:04 -07:00
Joffrey F
de8717cd07 Merge pull request #6288 from docker/update_dockerignore
Some additional exclusions in .gitignore / .dockerignore
2018-10-17 14:59:12 -07:00
Joffrey F
7bd4291f90 Merge pull request #6286 from docker/bump-1.23.0-rc3
Bump 1.23.0-rc3
2018-10-17 14:48:37 -07:00
Joffrey F
ea3d406eed Some additional exclusions in .gitignore / .dockerignore
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 13:40:42 -07:00
Joffrey F
ca8ab06571 Some additional exclusions in .gitignore / .dockerignore
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 13:39:11 -07:00
Joffrey F
45189c134d "Bump 1.23.0-rc3"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:16:34 -07:00
Joffrey F
5ab3e47b42 Add workaround for Debian/Ubuntu venv setup failure
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:52 -07:00
Joffrey F
0fa1462b0f Don't use dot as a path separator as it is a valid character in resource identifiers
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:51 -07:00
Joffrey F
5e4098d228 Avoid creating duplicate mount points when recreating a service
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:50 -07:00
Joffrey F
12f7e0d2fb Remove obsolete curl dependency
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:50 -07:00
Joffrey F
23beeb353c Update versions in Dockerfiles
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:50 -07:00
Joffrey F
da25be8f99 Fix ImageManager inconsistencies
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:49 -07:00
Joffrey F
c9107cff39 Fix arg checks in release.sh
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:49 -07:00
Joffrey F
51d44c7ebc Add pypirc check
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:49 -07:00
Ofek Lev
e722190d50 Update requirements.txt
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-17 12:12:48 -07:00
Ofek Lev
fe347321c9 Upgrade Windows-specific dependency colorama
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-17 12:12:48 -07:00
Andrew Rabert
9bccfa8dd0 Use Docker binary from official Docker image
Signed-off-by: Andrew Rabert <ar@nullsum.net>
2018-10-17 12:12:47 -07:00
Joffrey F
5cf25f519e Decontainerize release script
Credentials management inside containers is a mess. Let's work on the host instead.
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:46 -07:00
Joffrey F
956434504c Merge pull request #6285 from docker/fix_venv_script
[Release script] Add workaround for Debian/Ubuntu venv setup failure
2018-10-17 12:11:13 -07:00
Joffrey F
7712d19b32 Add workaround for Debian/Ubuntu venv setup failure
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:10:08 -07:00
Joffrey F
b1adcfb7e3 Merge pull request #6284 from docker/new-issue-templates
Update issue templates
2018-10-17 10:54:04 -07:00
Joffrey F
5017b25f14 Update issue templates
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-16 18:18:21 -07:00
Joffrey F
12ed765af8 Merge pull request #6282 from docker/6279-abolish-dot-separator
Don't use dot as a path separator as it is a valid character in resource identifiers
2018-10-16 17:43:37 -07:00
Joffrey F
62057d098f Don't use dot as a path separator as it is a valid character in resource identifiers
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-16 17:21:57 -07:00
Joffrey F
fdb7a16212 Merge pull request #6281 from docker/6280-mount-overrides-volume
Avoid creating duplicate mount points when recreating a service
2018-10-16 16:22:47 -07:00
Joffrey F
5b869b1ad5 Merge pull request #6277 from docker/bump_dockerfile_versions
Update versions in Dockerfiles
2018-10-16 16:22:35 -07:00
Joffrey F
4cb92294a3 Avoid creating duplicate mount points when recreating a service
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-16 13:57:01 -07:00
Joffrey F
9df0a4f3a9 Remove obsolete curl dependency
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-15 19:22:25 -07:00
Joffrey F
3844ff2fde Update versions in Dockerfiles
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-15 19:14:58 -07:00
Joffrey F
d82190025a Merge pull request #6265 from ofek/ofek/colorama
Upgrade Windows-specific dependency colorama
2018-10-15 15:50:02 -07:00
Joffrey F
013cb51582 Merge pull request #6270 from docker/release_script_upgrade
Release script upgrade
2018-10-15 15:47:32 -07:00
Ofek Lev
402060e419 Update requirements.txt
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-12 11:36:38 -04:00
Joffrey F
bd67b90869 Fix ImageManager inconsistencies
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-12 06:39:56 -07:00
Joffrey F
297bee897b Fix arg checks in release.sh
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-12 06:14:35 -07:00
Joffrey F
be324d57a2 Add pypirc check
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-12 06:13:55 -07:00
Ofek Lev
c7c5b5e8c4 Upgrade Windows-specific dependency colorama
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-10 22:16:25 -04:00
Joffrey F
9018511750 Merge pull request #6234 from zasca/tilda_src_paths
Improved expanding source paths of volumes
2018-10-08 18:51:40 +02:00
Joffrey F
7107431ae0 Merge pull request #6253 from nvllsvm/dockerbase
Use docker binary from official Docker image
2018-10-08 18:49:51 +02:00
Silvin Lubecki
82e265b806 Merge pull request #6255 from docker/bump-1.23.0-rc2
Bump 1.23.0-rc2
2018-10-08 18:06:46 +02:00
Andrew Rabert
21a51bcd60 Use Docker binary from official Docker image
Signed-off-by: Andrew Rabert <ar@nullsum.net>
2018-10-08 11:56:42 -04:00
Silvin Lubecki
350a555e04 "Bump 1.23.0-rc2"
Signed-off-by: Silvin Lubecki <silvin.lubecki@docker.com>
2018-10-08 17:10:25 +02:00
Joffrey F
099c887b59 Re-enable testing of TP and beta releases
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-08 17:06:23 +02:00
Joffrey F
90625cf31b Don't attempt iterating on None during parallel pull
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-08 17:06:21 +02:00
Joffrey F
970f8317c5 Fix twine upload for RC versions
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-08 17:06:19 +02:00
Harald Albers
30c91388f3 Fix bash completion for config --hash
Signed-off-by: Harald Albers <github@albersweb.de>
2018-10-08 17:06:17 +02:00
Antony MECHIN
eb86881af1 utils: Fix typo in unique_everseen.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
b64184e388 service: Use OrderedDict to preserve volumes order on versions prior 3.6.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
d5c314b382 tests.unit.service: Make sure volumes order is preserved.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
18c2d08011 utils: Add unique_everseen (from itertools recipes).
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
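For context, `unique_everseen` is the classic recipe from the Python `itertools` documentation; a minimal sketch of that documented recipe (Python 3 form, not necessarily the exact code added in this commit) looks like:

```python
from itertools import filterfalse


def unique_everseen(iterable, key=None):
    """Yield unique elements, preserving order; remembers all elements seen."""
    seen = set()
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen.add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen.add(k)
                yield element
```

Applied to the volumes use case, something like `unique_everseen(volumes, key=...)` drops duplicate entries while keeping their declared order.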
Antony MECHIN
bb87a3d040 tests.unit.config: Make sure volume order is preserved.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
62aeb767d3 tests.unit.config: Make make_service_dict working dir argument optional.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Joffrey F
5629f62644 Merge pull request #6236 from ceh-forks/mutable-default-values
Avoid modifying mutable default value
2018-10-08 10:22:02 +02:00
Joffrey F
756eae0f01 Merge pull request #6251 from docker/decontainerize-release
Decontainerize release script
2018-10-05 19:03:23 +02:00
Joffrey F
6a35663781 Decontainerize release script
Credentials management inside containers is a mess. Let's work on the host instead.

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-05 08:21:39 -07:00
Alexander
9d7202d122 Squashed commit of the following:
commit d3fbd3d630099dc0d34cb1a93b0a664f633a1c25
Author: zasca <gorstav@gmail.com>
Date:   Wed Oct 3 11:27:43 2018 +0600

    Fix typo in function name, path separator updated

commit bc3f03cd9a7702b3f2d96b18380d75e10f18def0
Author: zasca <gorstav@gmail.com>
Date:   Tue Oct 2 11:12:28 2018 +0600

    Fix endswith arg in the test

commit 602d2977b4e881850c99c7555bc284690a802815
Author: zasca <gorstav@gmail.com>
Date:   Mon Oct 1 12:24:17 2018 +0600

    Update test

commit 6cd7a4a2c411ddf9b8e7d91194c60fb2238db8d7
Author: zasca <gorstav@gmail.com>
Date:   Fri Sep 28 11:13:36 2018 +0600

    Fix last test

commit 0d37343433caceec18ea15babf924b5975b83c80
Author: zasca <gorstav@gmail.com>
Date:   Fri Sep 28 10:58:57 2018 +0600

    Unit test added

commit fc086e544677dd33bad798c773cb92600aaefc51
Author: zasca <gorstav@gmail.com>
Date:   Thu Sep 27 20:28:03 2018 +0600

    Improved expanding source paths of volumes

    defined with long syntax when paths start with '~'

    Signed-off-by: Alexander <a.gorst.vinia@gmail.com>
2018-10-05 14:52:56 +06:00
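A minimal sketch of the behavior this change describes, expanding a leading `~` in a long-syntax volume source path; the helper name is illustrative, not Compose's actual function:

```python
import os.path


def expand_volume_source(source):
    # Illustrative: a long-syntax bind source such as '~/data' should
    # resolve to the current user's home directory before being mounted.
    if source.startswith('~'):
        return os.path.expanduser(source)
    return source


# {'type': 'bind', 'source': '~/data', 'target': '/data'}
# -> source becomes /home/<user>/data (or the platform equivalent)
```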
Joffrey F
e3e93d40a8 Merge pull request #6248 from docker/test-betas-and-tps
Re-enable testing of TP and beta releases
2018-10-04 11:14:41 +02:00
Joffrey F
feccc03e4a Merge pull request #6247 from docker/fix_parallel_pull_noimg
Don't attempt iterating on None during parallel pull
2018-10-04 11:04:11 +02:00
Joffrey F
b21a06cd6f Re-enable testing of TP and beta releases
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-04 01:40:39 -07:00
Joffrey F
7b02f4c3a7 Merge pull request #6246 from docker/fix_twine_upload
Fix twine upload for RC versions
2018-10-04 10:39:31 +02:00
Joffrey F
cc595a65f0 Don't attempt iterating on None during parallel pull
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-04 01:09:48 -07:00
Joffrey F
25e419c763 Fix twine upload for RC versions
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-04 00:48:53 -07:00
Heath Milligan
abf67565f6 Show more helpful error message when Docker is not running. Fixes #6175
Signed-off-by: Heath Milligan <heath@pyratron.com>
2018-10-02 14:25:33 -04:00
Joffrey F
7208a50bdc Merge pull request #6237 from gmsantos/patch-1
Refer to Docker for Mac and Windows as Docker Desktop
2018-10-02 11:14:58 +02:00
Gabriel Machado
8493540a1c Refer to Docker for Mac and Windows as Docker Desktop
Signed-off-by: Gabriel Machado <gabriel.ms1@hotmail.com>
2018-09-29 21:17:26 -03:00
Emil Hessman
15089886c2 Avoid modifying mutable default value
Rationale: http://effbot.org/zone/default-values.htm

Signed-off-by: Emil Hessman <emil@hessman.se>
2018-09-29 18:32:47 +02:00
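The linked rationale is the standard Python pitfall: a mutable default argument is evaluated once, at function definition time, and then shared across calls. A small illustration of the bug and the usual `None`-sentinel fix (hypothetical function names):

```python
def append_bad(item, bucket=[]):     # one list object shared by every call
    bucket.append(item)
    return bucket


def append_good(item, bucket=None):  # fresh list per call unless provided
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket


assert append_bad(1) == [1]
assert append_bad(2) == [1, 2]       # surprise: state leaked between calls
assert append_good(1) == [1]
assert append_good(2) == [2]         # calls stay independent
```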
Joffrey F
48a6f2132b Merge pull request #6230 from albers/completion-services--hash
Fix bash completion for `config --hash`
2018-09-27 14:36:56 -07:00
Joffrey F
467d910959 Merge pull request #6221 from Dimrok/feature/volumes-order
Preserve volumes order as declared in the compose file.
2018-09-27 14:15:26 -07:00
Antony MECHIN
5b9b519e8a utils: Fix typo in unique_everseen.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-09-27 14:17:32 +02:00
Harald Albers
b29ffb49e9 Fix bash completion for config --hash
Signed-off-by: Harald Albers <github@albersweb.de>
2018-09-27 09:20:44 +02:00
Joffrey F
c5d5d42158 Merge pull request #6222 from docker/bump-1.23.0-rc1
Bump 1.23.0-rc1
2018-09-26 15:18:00 -07:00
Joffrey F
c17274d014 Merge pull request #6227 from docker/release-credentials
Avoid cred helpers errors in release script
2018-09-26 15:09:06 -07:00
Joffrey F
320e4819d8 Avoid cred helpers errors in release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-26 21:10:56 +00:00
Joffrey F
772a307192 Avoid cred helpers errors in release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-26 13:58:13 -07:00
Antony MECHIN
bf46a6cc60 service: Use OrderedDict to preserve volumes order on versions prior 3.6.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-09-26 15:57:27 +02:00
Antony MECHIN
39b0518850 tests.unit.service: Make sure volumes order is preserved.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-09-26 15:57:27 +02:00
Antony MECHIN
de1958c5ff utils: Add unique_everseen (from itertools recipes).
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-09-26 15:57:27 +02:00
Antony MECHIN
bbcfce4029 tests.unit.config: Make sure volume order is preserved.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-09-26 15:57:27 +02:00
Antony MECHIN
879f7cb1ed tests.unit.config: Make make_service_dict working dir argument optional.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-09-26 10:26:55 +02:00
Joffrey F
c327a498b0 Don't rely on container names containing the db string to identify them
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 23:45:10 +00:00
Joffrey F
47d740b800 Fix some release script issues
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 23:45:08 +00:00
Joffrey F
54c3136e34 Merge pull request #6225 from docker/fix_recreate_tests
Don't rely on container names containing the db string to identify them
2018-09-25 10:49:03 -07:00
Joffrey F
cc2462e6f4 Don't rely on container names containing the db string to identify them
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 09:13:12 -07:00
Joffrey F
6194d78813 Merge pull request #6223 from docker/release_script_upgrade
Fix some release script issues
2018-09-25 09:03:36 -07:00
Joffrey F
4b4c250638 Fix some release script issues
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-24 20:05:40 -07:00
Joffrey F
ec4ea8d2f1 "Bump 1.23.0-rc1"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 00:46:52 +00:00
Joffrey F
936e6971f9 "Bump 1.23.0-rc1"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-24 23:46:38 +00:00
Joffrey F
de2be2bf37 Merge pull request #6168 from jrbenito/update_armhf
[armhf] Make Dockerfile.armhf compatible with main
2018-09-21 16:05:32 -07:00
Joffrey F
2a7beb6350 Merge pull request #6204 from docker/5716-unix-paths-from-winhost
Don't convert slashes for UNIX paths on Windows hosts
2018-09-21 10:47:22 -07:00
Joffrey F
30afcc4994 Merge pull request #6209 from docker/images-use-service-tag
Images use service tag
2018-09-20 16:51:26 -07:00
Joffrey F
834acca497 Update acceptance test for image matching
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-20 15:48:08 -07:00
Joffrey F
7d0fb7d3f3 Rewrite images command method to decrease complexity
Also ensure we properly detect matching image names when tag is omitted

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-20 15:48:08 -07:00
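A sketch of the tag-defaulting idea from this commit: when the configured image omits a tag, Docker implies `:latest`, so the `images` output should still match it. The helper below is illustrative, not Compose's exact code:

```python
def matches_service_image(repo_tag, service_image):
    # rpartition('/') isolates the final path segment so that a registry
    # port (e.g. localhost:5000/busybox) isn't mistaken for a tag.
    if ':' not in service_image.rpartition('/')[2]:
        service_image += ':latest'
    return repo_tag == service_image


assert matches_service_image('busybox:latest', 'busybox')
assert not matches_service_image('busybox:1.31', 'busybox')
```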
Boris HUISGEN
1b668973a2 Add acceptance test
Signed-off-by: Boris HUISGEN <bhuisgen@hbis.fr>
2018-09-20 15:48:08 -07:00
Boris HUISGEN
a2ec572fdf Use same tag as service definition
Signed-off-by: Boris HUISGEN <bhuisgen@hbis.fr>
2018-09-20 15:48:08 -07:00
Joffrey F
0fb6cd1139 Merge pull request #6205 from docker/2473-windows-long-paths
Force consistent behavior around long paths on Windows builds
2018-09-19 18:11:29 -07:00
Joffrey F
96a49a0253 Force consistent behavior around long paths on Windows builds
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-19 16:09:21 -07:00
Joffrey F
f80630ffcf Merge pull request #6140 from docker/4688_no_sequential_ids
Add randomly generated slug to container names to prevent collisions
2018-09-19 15:12:41 -07:00
Joffrey F
9f9122cd95 Don't convert slashes for UNIX paths on Windows hosts
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-19 11:36:51 -07:00
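A sketch of the rule: a path that already starts with `/` is a UNIX path even when Compose runs on a Windows host, so its separators must be left alone; only Windows-style paths get their backslashes rewritten. The function name is illustrative:

```python
def convert_path_for_engine(path):
    if path.startswith('/'):
        return path  # e.g. /var/run/docker.sock: already a UNIX path
    return path.replace('\\', '/')  # e.g. C:\Users\me -> C:/Users/me
```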
Joffrey F
a5f42ae9e4 Merge pull request #6184 from docker/fix-zsh-completion
Update zsh completion with new options, and ensure service names are properly retrieved
2018-09-13 17:01:03 -07:00
Joffrey F
17d4845dbb Merge pull request #6186 from maxwellb/patch-1
Handle userns security
2018-09-12 17:04:37 -07:00
Maxwell Bloch
a7c05f41f1 Handle userns security
- Adds `--userns=host` when `userns-remap` is set

Signed-off-by: Maxwell Bloch <maxwellbloch@live.com>
2018-09-12 19:29:03 -04:00
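A hedged sketch of the described behavior: when the daemon reports user-namespace remapping, run the container with the host user namespace. `create_host_config(userns_mode=...)` is a real docker-py parameter; the detection via `SecurityOptions` is an assumption based on what `docker info` advertises:

```python
import docker

client = docker.APIClient()  # assumes a reachable local daemon


def userns_mode_if_remapped(api_client):
    # The daemon lists 'name=userns' in SecurityOptions when userns-remap
    # is enabled (assumption; verify against your engine version).
    security_options = api_client.info().get('SecurityOptions', [])
    return 'host' if any('userns' in opt for opt in security_options) else None


host_config = client.create_host_config(
    userns_mode=userns_mode_if_remapped(client)  # None is simply ignored
)
```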
Joffrey F
265d9dae4b Update zsh completion with new options, and ensure service names are properly retrieved
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-12 16:17:30 -07:00
Joffrey F
5916639383 Preserve container numbers, add slug to prevent name collisions
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-12 12:07:52 -07:00
Joffrey F
4e2de3c1ff Replace sequential container indexes with randomly generated IDs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-11 15:26:58 -07:00
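The naming scheme that eventually shipped (see the 1.23.0 changelog later in this diff) is `<project>_<service>_<index>_<slug>` with a randomly generated hexadecimal slug. A minimal sketch; the slug length and generation method here are assumptions:

```python
import uuid


def build_container_name(project, service, index):
    slug = uuid.uuid4().hex[:6]  # short random hex suffix (length assumed)
    return '{0}_{1}_{2}_{3}'.format(project, service, index, slug)


print(build_container_name('myapp', 'web', 1))  # e.g. myapp_web_1_a1b2c3
```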
Joffrey F
bd8b2dfbbc Merge pull request #6178 from docker/update-versions-script
Skip testing TPs/betas for now
2018-09-10 15:44:42 -07:00
Joffrey F
d491a81cec Skip testing TPs/betas for now
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-10 15:08:10 -07:00
Joffrey F
bdd2c80d98 Merge pull request #6172 from riverzhang/typo
Fix some typos
2018-09-07 11:41:52 -07:00
Joffrey F
58c5b92f09 Merge pull request #6173 from mirake/fix-typo
Typo fix: overriden -> overridden
2018-09-07 11:41:00 -07:00
Joffrey F
7e6275219b Merge pull request #6171 from tossmilestone/fix-typos
Fix typos in CHANGELOG
2018-09-07 11:27:30 -07:00
rongzhang
373c83ccd7 Fix some typos
Signed-off-by: rongzhang <rongzhang@alauda.io>
2018-09-07 16:57:43 +08:00
Xiaoxi He
b66782b412 Fix typos in CHANGELOG
Signed-off-by: Xiaoxi He <xxhe@alauda.io>
2018-09-07 16:42:38 +08:00
ruicao
5713215e84 Typo fix: overriden -> overridden
Signed-off-by: ruicao <ruicao@alauda.io>
2018-09-07 16:08:19 +08:00
Josenivaldo Benito Jr
a541d88d57 [armhf] Make Dockerfile.armhf compatible with main
Dockerfile now uses the python:3.6 image while Dockerfile.armhf uses
Debian. The Python image is officially supported on the ARM architecture,
so the two Dockerfiles now differ only in the dockerbins.tgz version.

May we use environment variables to select dockerbins.tgz?

Signed-off-by: Josenivaldo Benito Jr <jrbenito@benito.qsl.br>
2018-09-05 11:52:50 -03:00
Joffrey F
db391c03ad Merge pull request #6100 from docker/5960-parallel-pull-progress
Add progress messages to parallel pull
2018-08-24 11:02:24 -07:00
Joffrey F
2038bb5cf7 Merge pull request #6145 from deivid-rodriguez/bug/broken_url
Fix broken url
2018-08-20 14:34:00 -07:00
David Rodríguez
3a93e85762 Fix broken url
As per https://github.com/sgerrand/alpine-pkg-glibc#please-note.

Signed-off-by: David Rodríguez <deivid.rodriguez@riseup.net>
2018-08-17 14:08:41 -03:00
Joffrey F
901ee4e77b Merge pull request #6134 from docker/4841-fix-project-dir
Fix --project-directory handling to apply to .env files as well
2018-08-13 16:03:47 -07:00
Joffrey F
eb63e9f3c7 Fix --project-directory handling to apply to .env files as well
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-10 17:02:56 -07:00
Joffrey F
ed245474c2 Merge pull request #6130 from docker/bump_sdk
Bump Python SDK -> 3.5.0
2018-08-10 14:08:36 -07:00
Joffrey F
5ad50dc0b3 Bump Python SDK -> 3.5.0
Add support for Python 3.7

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-09 18:31:08 -07:00
Joffrey F
f207d94b3c Merge pull request #6126 from docker/wfender-2013-expose-config-hash
Add --hash opt for config command
2018-08-07 21:12:48 -07:00
Joffrey F
ee878aee4c Handle missing (not built) service image in config --hash
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-07 20:23:21 -07:00
Joffrey F
861031b9b7 Reduce config --hash code complexity and add test
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-07 17:25:35 -07:00
Joffrey F
707e21183f Fix config hash consistency with unprioritized networks
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-07 16:51:01 -07:00
Fender William
541fb65259 Add --hash opt for config command
Signed-off-by: Fender William <fender.william@gmail.com>
2018-08-07 16:51:01 -07:00
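A hedged sketch of what a per-service config hash can look like: serialize the resolved service configuration deterministically, then hash it, so two logically identical configs produce the same string. This mirrors the idea, not Compose's exact implementation:

```python
import hashlib
import json


def config_hash(service_config):
    # sort_keys makes the serialization canonical, so dict ordering
    # cannot change the resulting hash.
    canonical = json.dumps(service_config, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(canonical.encode('utf8')).hexdigest()


print(config_hash({'image': 'redis:5', 'ports': ['6379:6379']}))
```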
Joffrey F
473703d0d9 Merge pull request #6115 from graphaelli/parallel-build
add --parallel option to build
2018-08-02 15:23:06 -07:00
Joffrey F
6e95eb7437 Merge pull request #6104 from glorpen/fix-pipes
Fixes pipe handling in container mode.
2018-07-31 15:39:07 -07:00
Gil Raphaelli
89f2bfe4f3 add --parallel option to build
Signed-off-by: Gil Raphaelli <g@raphaelli.com>
2018-07-31 12:06:59 -04:00
Joffrey F
635c77db6c Merge pull request #6071 from nickhiggs/6060-reattach-logger-on-restart
Attach logger to containers after crashing.
2018-07-25 15:20:43 -07:00
Joffrey F
c956785cdc Add progress messages to parallel pull
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-25 14:39:18 -07:00
Arkadiusz Dzięgiel
7f9c042300 Fixes pipe handling in container mode.
Closes #4599, #4460

- adds a way to provide options from env in both cases (tty & non tty)
- allocates TTY only if both stdin & stdout are TTYs
- enables interactive mode if stdin is not TTY

Signed-off-by: Arkadiusz Dzięgiel <arkadiusz.dziegiel@glorpen.pl>
2018-07-24 12:23:31 +02:00
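A minimal sketch of the TTY rules this commit describes, using standard-library checks; the returned option names are illustrative:

```python
import sys


def interactive_options():
    stdin_tty = sys.stdin.isatty()
    stdout_tty = sys.stdout.isatty()
    return {
        # Allocate a TTY only when both ends are real terminals.
        'tty': stdin_tty and stdout_tty,
        # Stay interactive even when stdin is a pipe, so input still flows.
        'interactive': True,
    }
```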
Joffrey F
ebad981bcc Merge pull request #6092 from ofek/support-newer-requests
support newer minor version of requests
2018-07-23 13:30:07 -07:00
Joffrey F
5d0fe7bcd3 Merge pull request #6080 from chris-crone/macos-rework-build
Rework build on macOS
2018-07-23 13:26:21 -07:00
Christopher Crone
450efd557a macOS: Rework build scripts
Allows us to build for older versions of macOS by downloading an
older SDK and building OpenSSL and Python against it.

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2018-07-23 11:41:32 +02:00
Ofek Lev
88d88d1998 support newer minor version of requests
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-07-18 22:25:01 -04:00
Joffrey F
6cb17b90ef 1.23.0dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-18 11:11:34 -07:00
Joffrey F
bb00352c34 Fix up_with_networks test
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-18 11:09:25 -07:00
Joffrey F
1396cdb4be Merge pull request #6088 from docker/release
Resync master with release
2018-07-18 11:00:56 -07:00
Joffrey F
e20d808ed2 Merge pull request #6087 from docker/bump-1.22.0
Bump 1.22.0
2018-07-17 16:01:56 -07:00
Joffrey F
f46880fe9a "Bump 1.22.0"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:48:24 +00:00
Joffrey F
cda827cbfc Improve finalize robustness and allow resume using special --finalize-resume flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
8c0411910d Avoid unrelated file uploads with twine
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
d9545a5909 Add distclean to remove old build files
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
cb1b88c4f8 s/release.py/release.sh/
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
9271f9f46f Merge pull request #6077 from docker/5966-exitcode-from-sigkill
Fix --exit-code-from to reflect exit code after termination by Compose
2018-07-16 20:04:35 -04:00
Joffrey F
e6d18b1881 Fix --exit-code-from to reflect exit code after termination by Compose
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-10 15:28:55 -04:00
Joffrey F
8c4fc4bc2e Merge pull request #6073 from docker/release-tool-improve
Misc improvements to release script
2018-07-10 15:04:31 -04:00
Joffrey F
64918235d2 Merge pull request #6072 from docker/6037-external-false
Avoid overriding external = False in serializer
2018-07-09 13:55:59 -07:00
Joffrey F
d7f5220292 Improve finalize robustness and allow resume using special --finalize-resume flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 16:51:01 -04:00
Joffrey F
0b5f68098c Avoid unrelated file uploads with twine
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 16:25:06 -04:00
Joffrey F
8a7ee5a7d5 Add distclean to remove old build files
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 16:19:17 -04:00
Joffrey F
e9aaece40d s/release.py/release.sh/
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 15:46:56 -04:00
Joffrey F
9c2ffe6384 Avoid overriding external = False in serializer
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 15:28:32 -04:00
Nicholas Higgins
28085ebee2 Attach logger to containers after crashing.
Fixes #6060

Signed-off-by: Nicholas Higgins <nickhiggins42@gmail.com>
2018-07-09 08:47:20 +10:00
Joffrey F
40631f9a01 Merge pull request #6051 from docker/bump_sdk
Docker SDK -> 3.4.1
2018-06-29 13:32:51 -07:00
Joffrey F
e8713d7cef Docker SDK -> 3.4.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-29 13:05:20 -07:00
Joffrey F
7ae632a9ee Merge pull request #6041 from docker/5929-underscore-projname-2
Don't create image names starting with - or _
2018-06-22 16:25:08 -07:00
Joffrey F
b00db08aa9 Prevent attempts to create image names starting with - or _
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-22 15:56:53 -07:00
Joffrey F
6e30c130d5 Merge pull request #6035 from docker/fix-api-version-typo
Fix API version typo
2018-06-21 14:40:10 -07:00
Joffrey F
a82986943b Fix release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 13:48:32 -07:00
Joffrey F
73663e46b9 3.7 --> API v1.38
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 13:47:44 -07:00
Joffrey F
47584a37c9 Fix bintray API client
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 11:49:35 -07:00
179 changed files with 4384 additions and 1482 deletions

View File

@@ -2,7 +2,7 @@ version: 2
jobs:
test:
macos:
xcode: "8.3.3"
xcode: "9.4.1"
steps:
- checkout
- run:
@@ -10,33 +10,32 @@ jobs:
command: ./script/setup/osx
- run:
name: install tox
command: sudo pip install --upgrade tox==2.1.1
command: sudo pip install --upgrade tox==2.1.1 virtualenv==16.2.0
- run:
name: unit tests
command: tox -e py27,py36 -- tests/unit
command: tox -e py27,py37 -- tests/unit
build-osx-binary:
macos:
xcode: "8.3.3"
xcode: "9.4.1"
steps:
- checkout
- run:
name: upgrade python tools
command: sudo pip install --upgrade pip virtualenv
command: sudo pip install --upgrade pip virtualenv==16.2.0
- run:
name: setup script
command: ./script/setup/osx
command: DEPLOYMENT_TARGET=10.11 ./script/setup/osx
- run:
name: build script
command: ./script/build/osx
- store_artifacts:
path: dist/docker-compose-Darwin-x86_64
destination: docker-compose-Darwin-x86_64
# - deploy:
# name: Deploy binary to bintray
# command: |
# OS_NAME=Darwin PKG_NAME=osx ./script/circle/bintray-deploy.sh
- deploy:
name: Deploy binary to bintray
command: |
OS_NAME=Darwin PKG_NAME=osx ./script/circle/bintray-deploy.sh
build-linux-binary:
machine:
@@ -54,28 +53,6 @@ jobs:
command: |
OS_NAME=Linux PKG_NAME=linux ./script/circle/bintray-deploy.sh
trigger-osx-binary-deploy:
# We use a separate repo to build OSX binaries meant for distribution
# with support for OSX 10.11 (xcode 7). This job triggers a build on
# that repo.
docker:
- image: alpine:3.6
steps:
- run:
name: install curl
command: apk update && apk add curl
- run:
name: API trigger
command: |
curl -X POST -H "Content-Type: application/json" -d "{\
\"build_parameters\": {\
\"COMPOSE_BRANCH\": \"${CIRCLE_BRANCH}\"\
}\
}" https://circleci.com/api/v1.1/project/github/docker/compose-osx-release?circle-token=${OSX_RELEASE_TOKEN} \
> /dev/null
workflows:
version: 2
@@ -84,9 +61,3 @@ workflows:
- test
- build-linux-binary
- build-osx-binary
- trigger-osx-binary-deploy:
filters:
branches:
only:
- master
- /bump-.*/

View File

@@ -1,11 +1,13 @@
*.egg-info
.coverage
.git
.github
.tox
build
binaries
coverage-html
docs/_site
venv
*venv
.tox
**/__pycache__
*.pyc

14
.fossa.yml Normal file
View File

@@ -0,0 +1,14 @@
# Generated by FOSSA CLI (https://github.com/fossas/fossa-cli)
# Visit https://fossa.io to learn more
version: 2
cli:
server: https://app.fossa.io
fetcher: custom
project: git@github.com:docker/compose
analyze:
modules:
- name: .
type: pip
target: .
path: .

63
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,63 @@
---
name: Bug report
about: Report a bug encountered while using docker-compose
title: ''
labels: kind/bug
assignees: ''
---
<!--
Welcome to the docker-compose issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to docker-compose
- For questions and general support, use https://forums.docker.com
- For documentation issues, use https://github.com/docker/docker.github.io
- For issues with the `docker stack` commands and the version 3 of the Compose file, use
https://github.com/docker/cli
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
-->
## Description of the issue
## Context information (for bug reports)
**Output of `docker-compose version`**
```
(paste here)
```
**Output of `docker version`**
```
(paste here)
```
**Output of `docker-compose config`**
(Make sure to add the relevant `-f` and other flags)
```
(paste here)
```
## Steps to reproduce the issue
1.
2.
3.
### Observed result
### Expected result
### Stacktrace / full error message
```
(paste here)
```
## Additional information
OS version / distribution, `docker-compose` install method, etc.

View File

@@ -0,0 +1,32 @@
---
name: Feature request
about: Suggest an idea to improve Compose
title: ''
labels: kind/feature
assignees: ''
---
<!--
Welcome to the docker-compose issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to docker-compose
- For questions and general support, use https://forums.docker.com
- For documentation issues, use https://github.com/docker/docker.github.io
- For issues with the `docker stack` commands and the version 3 of the Compose file, use
https://github.com/docker/cli
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
-->
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@@ -0,0 +1,12 @@
---
name: Question about using Compose
about: This is not the appropriate channel
title: ''
labels: kind/question
assignees: ''
---
Please post on our forums: https://forums.docker.com for questions about using `docker-compose`.
Posts that are not a bug report or a feature/enhancement request will not be addressed on this issue tracker.

59
.github/stale.yml vendored Normal file
View File

@@ -0,0 +1,59 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 180
# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
daysUntilClose: 7
# Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
exemptLabels:
- kind/feature
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: false
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: false
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: true
# Label to use when marking as stale
staleLabel: stale
# Comment to post when marking as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when removing the stale label.
unmarkComment: >
This issue has been automatically unmarked as stale due to recent activity.
# Comment to post when closing a stale Issue or Pull Request.
closeComment: >
This issue has been automatically closed because it has had no recent activity during the stale period.
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30
# Limit to only `issues` or `pulls`
only: issues
# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
# pulls:
# daysUntilStale: 30
# markComment: >
# This pull request has been automatically marked as stale because it has not had
# recent activity. It will be closed if no further activity occurs. Thank you
# for your contributions.
# issues:
# exemptLabels:
# - confirmed

17
.gitignore vendored
View File

@@ -1,15 +1,18 @@
*.egg-info
*.pyc
*.swo
*.swp
.cache
.coverage*
.DS_Store
.idea
/.tox
/binaries
/build
/compose/GITSHA
/coverage-html
/dist
/docs/_site
/venv
README.rst
compose/GITSHA
*.swo
*.swp
.DS_Store
.cache
/README.rst
/*venv

View File

@@ -14,7 +14,7 @@
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: git://github.com/asottile/reorder_python_imports
sha: v0.3.5
sha: v1.3.4
hooks:
- id: reorder-python-imports
language_version: 'python2.7'

View File

@@ -1,7 +1,170 @@
Change log
==========
1.22.0 (2018-06-30)
1.24.0 (2019-03-28)
-------------------
### Features
- Added support for connecting to the Docker Engine using the `ssh` protocol.
- Added a `--all` flag to `docker-compose ps` to include stopped one-off containers
in the command's output.
- Add bash completion for `ps --all|-a`
- Support for credential_spec
- Add `--parallel` to `docker build`'s options in `bash` and `zsh` completion
### Bugfixes
- Fixed a bug where some valid credential helpers weren't properly handled by Compose
when attempting to pull images from private registries.
- Fixed an issue where the output of `docker-compose start` before containers were created
was misleading
- To match the Docker CLI behavior and to avoid confusing issues, Compose will no longer
accept whitespace in variable names sourced from environment files.
- Compose will now report a configuration error if a service attempts to declare
duplicate mount points in the volumes section.
- Fixed an issue with the containerized version of Compose that prevented users from
writing to stdin during interactive sessions started by `run` or `exec`.
- One-off containers started by `run` no longer adopt the restart policy of the service,
and are instead set to never restart.
- Fixed an issue that caused some container events to not appear in the output of
the `docker-compose events` command.
- Missing images will no longer stop the execution of `docker-compose down` commands
(a warning will be displayed instead).
- Force `virtualenv` version for macOS CI
- Fix merging of compose files when network has `None` config
- Fix `CTRL+C` issues by enabling `bootloader_ignore_signals` in `pyinstaller`
- Bump `docker-py` version to `3.7.2` to fix SSH and proxy config issues
- Fix release script and some typos on release documentation
1.23.2 (2018-11-28)
-------------------
### Bugfixes
- Reverted a 1.23.0 change that appended random strings to container names
created by `docker-compose up`, causing addressability issues.
Note: Containers created by `docker-compose run` will continue to use
randomly generated names to avoid collisions during parallel runs.
- Fixed an issue where some `dockerfile` paths would fail unexpectedly when
attempting to build on Windows.
- Fixed a bug where build context URLs would fail to build on Windows.
- Fixed a bug that caused `run` and `exec` commands to fail for some otherwise
accepted values of the `--host` parameter.
- Fixed an issue where overrides for the `storage_opt` and `isolation` keys in
service definitions weren't properly applied.
- Fixed a bug where some invalid Compose files would raise an uncaught
exception during validation.
1.23.1 (2018-11-01)
-------------------
### Bugfixes
- Fixed a bug where working with containers created with a previous (< 1.23.0)
version of Compose would cause unexpected crashes
- Fixed an issue where the behavior of the `--project-directory` flag would
vary depending on which subcommand was being used.
1.23.0 (2018-10-30)
-------------------
### Important note
The default naming scheme for containers created by Compose in this version
has changed from `<project>_<service>_<index>` to
`<project>_<service>_<index>_<slug>`, where `<slug>` is a randomly-generated
hexadecimal string. Please make sure to update scripts relying on the old
naming scheme accordingly before upgrading.
### Features
- Logs for containers restarting after a crash will now appear in the output
of the `up` and `logs` commands.
- Added `--hash` option to the `docker-compose config` command, allowing users
to print a hash string for each service's configuration to facilitate rolling
updates.
- Added `--parallel` flag to the `docker-compose build` command, allowing
Compose to build up to 5 images simultaneously.
- Output for the `pull` command now reports status / progress even when pulling
multiple images in parallel.
- For images with multiple names, Compose will now attempt to match the one
present in the service configuration in the output of the `images` command.
### Bugfixes
- Parallel `run` commands for the same service will no longer fail due to name
collisions.
- Fixed an issue where paths longer than 260 characters on Windows clients would
cause `docker-compose build` to fail.
- Fixed a bug where attempting to mount `/var/run/docker.sock` with
Docker Desktop for Windows would result in failure.
- The `--project-directory` option is now used by Compose to determine where to
look for the `.env` file.
- `docker-compose build` no longer fails when attempting to pull an image with
credentials provided by the gcloud credential helper.
- Fixed the `--exit-code-from` option in `docker-compose up` to always report
the actual exit code even when the watched container isn't the cause of the
exit.
- Fixed an issue that would prevent recreating a service in some cases where
a volume would be mapped to the same mountpoint as a volume declared inside
the image's Dockerfile.
- Fixed a bug that caused hash configuration with multiple networks to be
inconsistent, causing some services to be unnecessarily restarted.
- Fixed a bug that would cause failures with variable substitution for services
with a name containing one or more dot characters
- Fixed a pipe handling issue when using the containerized version of Compose.
- Fixed a bug causing `external: false` entries in the Compose file to be
printed as `external: true` in the output of `docker-compose config`
- Fixed a bug where issuing a `docker-compose pull` command on services
without a defined image key would cause Compose to crash
- Volumes and binds are now mounted in the order they're declared in the
service definition
### Miscellaneous
- The `zsh` completion script has been updated with new options, and no
longer suggests container names where service names are expected.
1.22.0 (2018-07-17)
-------------------
### Features
@@ -60,7 +223,7 @@ Change log
### Bugfixes
- Fixed a bug where the ip_range attirbute in IPAM configs was prevented
- Fixed a bug where the ip_range attribute in IPAM configs was prevented
from passing validation
1.21.1 (2018-04-27)
@@ -285,7 +448,7 @@ Change log
preventing Compose from recovering volume data from previous containers for
anonymous volumes
- Added limit for number of simulatenous parallel operations, which should
- Added limit for number of simultaneous parallel operations, which should
prevent accidental resource exhaustion of the server. Default is 64 and
can be configured using the `COMPOSE_PARALLEL_LIMIT` environment variable
@@ -583,7 +746,7 @@ Change log
### Bugfixes
- Volumes specified through the `--volume` flag of `docker-compose run` now
complement volumes declared in the service's defintion instead of replacing
complement volumes declared in the service's definition instead of replacing
them
- Fixed a bug where using multiple Compose files would unset the scale value

View File

@@ -1,39 +1,74 @@
FROM python:3.6
ARG DOCKER_VERSION=18.09.7
ARG PYTHON_VERSION=3.7.4
ARG BUILD_ALPINE_VERSION=3.10
ARG BUILD_DEBIAN_VERSION=slim-stretch
ARG RUNTIME_ALPINE_VERSION=3.10.1
ARG RUNTIME_DEBIAN_VERSION=stretch-20190812-slim
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
locales \
curl \
python-dev \
git
ARG BUILD_PLATFORM=alpine
RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/x86_64/docker-17.12.0-ce.tgz" && \
SHA256=692e1c72937f6214b1038def84463018d8e320c8eaf8530546c84c2f8f9c767d; \
echo "${SHA256} dockerbins.tgz" | sha256sum -c - && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz
FROM docker:${DOCKER_VERSION} AS docker-cli
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8
FROM python:${PYTHON_VERSION}-alpine${BUILD_ALPINE_VERSION} AS build-alpine
RUN apk add --no-cache \
bash \
build-base \
ca-certificates \
curl \
gcc \
git \
libc-dev \
libffi-dev \
libgcc \
make \
musl-dev \
openssl \
openssl-dev \
python2 \
python2-dev \
zlib-dev
ENV BUILD_BOOTLOADER=1
RUN useradd -d /home/user -m -s /bin/bash user
FROM python:${PYTHON_VERSION}-${BUILD_DEBIAN_VERSION} AS build-debian
RUN apt-get update && apt-get install --no-install-recommends -y \
curl \
gcc \
git \
libc-dev \
libffi-dev \
libgcc-6-dev \
libssl-dev \
make \
openssl \
python2.7-dev \
zlib1g-dev
FROM build-${BUILD_PLATFORM} AS build
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
WORKDIR /code/
# FIXME(chris-crone): virtualenv 16.3.0 breaks build, force 16.2.0 until fixed
RUN pip install virtualenv==16.2.0
RUN pip install tox==2.9.1
RUN pip install tox==2.1.1
ADD requirements.txt /code/
ADD requirements-dev.txt /code/
ADD .pre-commit-config.yaml /code/
ADD setup.py /code/
ADD tox.ini /code/
ADD compose /code/compose/
COPY requirements.txt .
COPY requirements-dev.txt .
COPY .pre-commit-config.yaml .
COPY tox.ini .
COPY setup.py .
COPY README.md .
COPY compose compose/
RUN tox --notest
COPY . .
ARG GIT_COMMIT=unknown
ENV DOCKER_COMPOSE_GITSHA=$GIT_COMMIT
RUN script/build/linux-entrypoint
ADD . /code/
RUN chown -R user /code/
ENTRYPOINT ["/code/.tox/py36/bin/docker-compose"]
FROM alpine:${RUNTIME_ALPINE_VERSION} AS runtime-alpine
FROM debian:${RUNTIME_DEBIAN_VERSION} AS runtime-debian
FROM runtime-${BUILD_PLATFORM} AS runtime
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
COPY --from=build /usr/local/bin/docker-compose /usr/local/bin/docker-compose

View File

@@ -1,73 +0,0 @@
FROM armhf/debian:wheezy
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
locales \
gcc \
make \
zlib1g \
zlib1g-dev \
libssl-dev \
git \
ca-certificates \
curl \
libsqlite3-dev \
libbz2-dev \
; \
rm -rf /var/lib/apt/lists/*
RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/armhf/docker-17.12.0-ce.tgz" && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz
# Build Python 2.7.13 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz | tar -xz; \
cd Python-2.7.13; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-2.7.13
# Build python 3.6 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tgz | tar -xz; \
cd Python-3.6.4; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-3.6.4
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib
# Install pip
RUN set -ex; \
curl -L https://bootstrap.pypa.io/get-pip.py | python
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8
RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/
RUN pip install tox==2.1.1
ADD requirements.txt /code/
ADD requirements-dev.txt /code/
ADD .pre-commit-config.yaml /code/
ADD setup.py /code/
ADD tox.ini /code/
ADD compose /code/compose/
RUN tox --notest
ADD . /code/
RUN chown -R user /code/
ENTRYPOINT ["/code/.tox/py27/bin/docker-compose"]

View File

@@ -1,23 +0,0 @@
FROM alpine:3.6
ENV GLIBC 2.27-r0
ENV DOCKERBINS_SHA 1270dce1bd7e1838d62ae21d2505d87f16efc1d9074645571daaefdfd0c14054
RUN apk update && apk add --no-cache openssl ca-certificates curl libgcc && \
curl -fsSL -o /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-glibc/master/sgerrand.rsa.pub && \
curl -fsSL -o glibc-$GLIBC.apk https://github.com/sgerrand/alpine-pkg-glibc/releases/download/$GLIBC/glibc-$GLIBC.apk && \
apk add --no-cache glibc-$GLIBC.apk && \
ln -s /lib/libz.so.1 /usr/glibc-compat/lib/ && \
ln -s /lib/libc.musl-x86_64.so.1 /usr/glibc-compat/lib && \
ln -s /usr/lib/libgcc_s.so.1 /usr/glibc-compat/lib && \
curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/x86_64/docker-17.12.1-ce.tgz" && \
echo "${DOCKERBINS_SHA} dockerbins.tgz" | sha256sum -c - && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz /etc/apk/keys/sgerrand.rsa.pub glibc-$GLIBC.apk && \
apk del curl
COPY dist/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
ENTRYPOINT ["docker-compose"]

View File

@@ -1,4 +1,4 @@
FROM s390x/alpine:3.6
FROM s390x/alpine:3.10.1
ARG COMPOSE_VERSION=1.16.1

59
Jenkinsfile vendored
View File

@@ -1,29 +1,38 @@
#!groovy
def image
def buildImage = { ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("build image") {
def buildImage = { String baseImage ->
def image
wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
stage("build image for \"${baseImage}\"") {
checkout(scm)
def imageName = "dockerbuildbot/compose:${gitCommit()}"
def imageName = "dockerbuildbot/compose:${baseImage}-${gitCommit()}"
image = docker.image(imageName)
try {
image.pull()
} catch (Exception exc) {
image = docker.build(imageName, ".")
image.push()
sh """GIT_COMMIT=\$(script/build/write-git-sha) && \\
docker build -t ${imageName} \\
--target build \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg GIT_COMMIT="${GIT_COMMIT}" \\
.\\
"""
sh "docker push ${imageName}"
echo "${imageName}"
return imageName
}
}
}
echo "image.id: ${image.id}"
return image.id
}
def get_versions = { int number ->
def get_versions = { String imageId, int number ->
def docker_versions
wrappedNode(label: "ubuntu && !zfs") {
wrappedNode(label: "ubuntu && amd64 && !zfs") {
def result = sh(script: """docker run --rm \\
--entrypoint=/code/.tox/py27/bin/python \\
${image.id} \\
--entrypoint=/code/.tox/py37/bin/python \\
${imageId} \\
/code/script/test/versions.py -n ${number} docker/docker-ce recent
""", returnStdout: true
)
@@ -35,17 +44,19 @@ def get_versions = { int number ->
def runTests = { Map settings ->
def dockerVersions = settings.get("dockerVersions", null)
def pythonVersions = settings.get("pythonVersions", null)
def baseImage = settings.get("baseImage", null)
def imageName = settings.get("image", null)
if (!pythonVersions) {
throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py27,py36')`")
throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py37')`")
}
if (!dockerVersions) {
throw new Exception("Need Docker versions to test. e.g.: `runTests(dockerVersions: 'all')`")
}
{ ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("test python=${pythonVersions} / docker=${dockerVersions}") {
wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
stage("test python=${pythonVersions} / docker=${dockerVersions} / baseImage=${baseImage}") {
checkout(scm)
def storageDriver = sh(script: 'docker info | awk -F \': \' \'$1 == "Storage Driver" { print $2; exit }\'', returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
@@ -55,13 +66,13 @@ def runTests = { Map settings ->
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
-e "TAG=${image.id}" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersions}" \\
-e "BUILD_NUMBER=\$BUILD_TAG" \\
-e "PY_TEST_VERSIONS=${pythonVersions}" \\
--entrypoint="script/test/ci" \\
${image.id} \\
${imageName} \\
--verbose
"""
}
@@ -69,15 +80,13 @@ def runTests = { Map settings ->
}
}
buildImage()
def testMatrix = [failFast: true]
def docker_versions = get_versions(2)
for (int i = 0 ;i < docker_versions.length ; i++) {
def dockerVersion = docker_versions[i]
testMatrix["${dockerVersion}_py27"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py27"])
testMatrix["${dockerVersion}_py36"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py36"])
def baseImages = ['alpine', 'debian']
baseImages.each { baseImage ->
def imageName = buildImage(baseImage)
get_versions(imageName, 2).each { dockerVersion ->
testMatrix["${baseImage}_${dockerVersion}"] = runTests([baseImage: baseImage, image: imageName, dockerVersions: dockerVersion, pythonVersions: 'py37'])
}
}
parallel(testMatrix)

View File

@@ -11,9 +11,8 @@
[Org]
[Org."Core maintainers"]
people = [
"mefyl",
"mnottale",
"shin-",
"rumpl",
"ulyssessouza",
]
[Org.Alumni]
people = [
@@ -34,6 +33,10 @@
# including multi-file support, variable interpolation, secrets
# emulation and many more
"dnephin",
"shin-",
"mefyl",
"mnottale",
]
[people]
@@ -74,7 +77,17 @@
Email = "mazz@houseofmnowster.com"
GitHub = "mnowster"
[People.shin-]
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.shin-]
Name = "Joffrey F"
Email = "joffrey@docker.com"
Email = "f.joffrey@gmail.com"
GitHub = "shin-"
[people.ulyssessouza]
Name = "Ulysses Domiciano Souza"
Email = "ulysses.souza@docker.com"
GitHub = "ulyssessouza"

View File

@@ -4,8 +4,7 @@ include requirements.txt
include requirements-dev.txt
include tox.ini
include *.md
exclude README.md
include README.rst
include README.md
include compose/config/*.json
include compose/GITSHA
recursive-include contrib/completion *

View File

@@ -6,11 +6,11 @@ Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/index.md#common-use-cases).
Using Compose is basically a three-step process.
@@ -35,7 +35,7 @@ A `docker-compose.yml` looks like this:
image: redis
For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md)
[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md).
Compose has commands for managing the whole lifecycle of your application:
@@ -48,9 +48,8 @@ Installation and documentation
------------------------------
- Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
- If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
- Code repository for Compose is on [GitHub](https://github.com/docker/compose)
- If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new)
- Code repository for Compose is on [GitHub](https://github.com/docker/compose).
- If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new/choose). Thank you!
Contributing
------------

View File

@@ -2,15 +2,15 @@
version: '{branch}-{build}'
install:
- "SET PATH=C:\\Python36-x64;C:\\Python36-x64\\Scripts;%PATH%"
- "SET PATH=C:\\Python37-x64;C:\\Python37-x64\\Scripts;%PATH%"
- "python --version"
- "pip install tox==2.9.1 virtualenv==15.1.0"
- "pip install tox==2.9.1 virtualenv==16.2.0"
# Build the binary after tests
build: false
test_script:
- "tox -e py27,py36 -- tests/unit"
- "tox -e py27,py37 -- tests/unit"
- ps: ".\\script\\build\\windows.ps1"
artifacts:

View File

@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.22.0-rc2'
__version__ = '1.25.0dev'

View File

@@ -95,19 +95,10 @@ def get_image_digest(service, allow_push=False):
if separator == '@':
return service.options['image']
try:
image = service.image()
except NoSuchImageError:
action = 'build' if 'build' in service.options else 'pull'
raise UserError(
"Image not found for service '{service}'. "
"You might need to run `docker-compose {action} {service}`."
.format(service=service.name, action=action))
digest = get_digest(service)
if image['RepoDigests']:
# TODO: pick a digest based on the image tag if there are multiple
# digests
return image['RepoDigests'][0]
if digest:
return digest
if 'build' not in service.options:
raise NeedsPull(service.image_name, service.name)
@@ -118,6 +109,32 @@ def get_image_digest(service, allow_push=False):
return push_image(service)
def get_digest(service):
digest = None
try:
image = service.image()
# TODO: pick a digest based on the image tag if there are multiple
# digests
if image['RepoDigests']:
digest = image['RepoDigests'][0]
except NoSuchImageError:
try:
# Fetch the image digest from the registry
distribution = service.get_image_registry_data()
if distribution['Descriptor']['digest']:
digest = '{image_name}@{digest}'.format(
image_name=service.image_name,
digest=distribution['Descriptor']['digest']
)
except NoSuchImageError:
raise UserError(
"Digest not found for service '{service}'. "
"Repository does not exist or may require 'docker login'"
.format(service=service.name))
return digest
def push_image(service):
try:
digest = service.push()
@@ -147,10 +164,10 @@ def push_image(service):
def to_bundle(config, image_digests):
if config.networks:
log.warn("Unsupported top level key 'networks' - ignoring")
log.warning("Unsupported top level key 'networks' - ignoring")
if config.volumes:
log.warn("Unsupported top level key 'volumes' - ignoring")
log.warning("Unsupported top level key 'volumes' - ignoring")
config = denormalize_config(config)
@@ -175,7 +192,7 @@ def convert_service_to_bundle(name, service_dict, image_digest):
continue
if key not in SUPPORTED_KEYS:
log.warn("Unsupported key '{}' in services.{} - ignoring".format(key, name))
log.warning("Unsupported key '{}' in services.{} - ignoring".format(key, name))
continue
if key == 'environment':
@@ -222,7 +239,7 @@ def make_service_networks(name, service_dict):
for network_name, network_def in get_network_defs_for_service(service_dict).items():
for key in network_def.keys():
log.warn(
log.warning(
"Unsupported key '{}' in services.{}.networks.{} - ignoring"
.format(key, name, network_name))

View File

@@ -41,9 +41,9 @@ for (name, code) in get_pairs():
def rainbow():
cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue',
cs = ['cyan', 'yellow', 'green', 'magenta', 'blue',
'intense_cyan', 'intense_yellow', 'intense_green',
'intense_magenta', 'intense_red', 'intense_blue']
'intense_magenta', 'intense_blue']
for c in cs:
yield globals()[c]

View File

@@ -13,6 +13,9 @@ from .. import config
from .. import parallel
from ..config.environment import Environment
from ..const import API_VERSIONS
from ..const import LABEL_CONFIG_FILES
from ..const import LABEL_ENVIRONMENT_FILE
from ..const import LABEL_WORKING_DIR
from ..project import Project
from .docker_client import docker_client
from .docker_client import get_tls_version
@@ -21,9 +24,27 @@ from .utils import get_version_info
log = logging.getLogger(__name__)
SILENT_COMMANDS = {
'events',
'exec',
'kill',
'logs',
'pause',
'ps',
'restart',
'rm',
'start',
'stop',
'top',
'unpause',
}
def project_from_options(project_dir, options):
environment = Environment.from_env_file(project_dir)
def project_from_options(project_dir, options, additional_options={}):
override_dir = options.get('--project-directory')
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or project_dir, environment_file)
environment.silent = options.get('COMMAND', None) in SILENT_COMMANDS
set_parallel_limit(environment)
host = options.get('--host')
@@ -37,8 +58,10 @@ def project_from_options(project_dir, options):
host=host,
tls_config=tls_config_from_options(options, environment),
environment=environment,
override_dir=options.get('--project-directory'),
override_dir=override_dir,
compatibility=options.get('--compatibility'),
interpolate=(not additional_options.get('--no-interpolate')),
environment_file=environment_file
)
@@ -58,14 +81,17 @@ def set_parallel_limit(environment):
parallel.GlobalLimit.set_global_limit(parallel_limit)
def get_config_from_options(base_dir, options):
environment = Environment.from_env_file(base_dir)
def get_config_from_options(base_dir, options, additional_options={}):
override_dir = options.get('--project-directory')
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or base_dir, environment_file)
config_path = get_config_path_from_options(
base_dir, options, environment
)
return config.load(
config.find(base_dir, config_path, environment),
options.get('--compatibility')
config.find(base_dir, config_path, environment, override_dir),
options.get('--compatibility'),
not additional_options.get('--no-interpolate')
)
@@ -103,14 +129,14 @@ def get_client(environment, verbose=False, version=None, tls_config=None, host=N
def get_project(project_dir, config_path=None, project_name=None, verbose=False,
host=None, tls_config=None, environment=None, override_dir=None,
compatibility=False):
compatibility=False, interpolate=True, environment_file=None):
if not environment:
environment = Environment.from_env_file(project_dir)
config_details = config.find(project_dir, config_path, environment, override_dir)
project_name = get_project_name(
config_details.working_dir, project_name, environment
)
config_data = config.load(config_details, compatibility)
config_data = config.load(config_details, compatibility, interpolate)
api_version = environment.get(
'COMPOSE_API_VERSION',
@@ -123,10 +149,30 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
with errors.handle_connection_errors(client):
return Project.from_config(
project_name, config_data, client, environment.get('DOCKER_DEFAULT_PLATFORM')
project_name,
config_data,
client,
environment.get('DOCKER_DEFAULT_PLATFORM'),
execution_context_labels(config_details, environment_file),
)
def execution_context_labels(config_details, environment_file):
extra_labels = [
'{0}={1}'.format(LABEL_WORKING_DIR, os.path.abspath(config_details.working_dir)),
'{0}={1}'.format(LABEL_CONFIG_FILES, config_files_label(config_details)),
]
if environment_file is not None:
extra_labels.append('{0}={1}'.format(LABEL_ENVIRONMENT_FILE,
os.path.normpath(environment_file)))
return extra_labels
def config_files_label(config_details):
return ",".join(
map(str, (os.path.normpath(c.filename) for c in config_details.config_files)))
def get_project_name(working_dir, project_name=None, environment=None):
def normalize_name(name):
return re.sub(r'[^-_a-z0-9]', '', name.lower())

View File

@@ -31,7 +31,7 @@ def get_tls_version(environment):
tls_attr_name = "PROTOCOL_{}".format(compose_tls_version)
if not hasattr(ssl, tls_attr_name):
log.warn(
log.warning(
'The "{}" protocol is unavailable. You may need to update your '
'version of Python or OpenSSL. Falling back to TLSv1 (default).'
.format(compose_tls_version)

View File

@@ -54,7 +54,7 @@ def handle_connection_errors(client):
except APIError as e:
log_api_error(e, client.api_version)
raise ConnectionError()
except (ReadTimeout, socket.timeout) as e:
except (ReadTimeout, socket.timeout):
log_timeout_error(client.timeout)
raise ConnectionError()
except Exception as e:
@@ -67,7 +67,9 @@ def handle_connection_errors(client):
def log_windows_pipe_error(exc):
if exc.winerror == 232: # https://github.com/docker/compose/issues/5005
if exc.winerror == 2:
log.error("Couldn't connect to Docker daemon. You might need to start Docker for Windows.")
elif exc.winerror == 232: # https://github.com/docker/compose/issues/5005
log.error(
"The current Compose file version is not compatible with your engine version. "
"Please upgrade your Compose file to a more recent version, or set "

View File

@@ -2,25 +2,32 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import logging
import os
import shutil
import six
import texttable
from compose.cli import colors
if hasattr(shutil, "get_terminal_size"):
from shutil import get_terminal_size
else:
from backports.shutil_get_terminal_size import get_terminal_size
def get_tty_width():
tty_size = os.popen('stty size 2> /dev/null', 'r').read().split()
if len(tty_size) != 2:
try:
width, _ = get_terminal_size()
return int(width)
except OSError:
return 0
_, width = tty_size
return int(width)
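A standalone sketch of the portable lookup this hunk switches to (shutil.get_terminal_size exists from Python 3.3; on Python 2 the backports.shutil_get_terminal_size package supplies it):
import shutil
def tty_width_or_zero():
    try:
        width, _height = shutil.get_terminal_size()
        return int(width)
    except OSError:
        # Mirrors the fallback above: report 0 when no size can be determined.
        return 0
print(tty_width_or_zero())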
class Formatter(object):
class Formatter:
"""Format tabular data for printing."""
def table(self, headers, rows):
@staticmethod
def table(headers, rows):
table = texttable.Texttable(max_width=get_tty_width())
table.set_cols_dtype(['t' for h in headers])
table.add_rows([headers] + rows)

View File

@@ -134,7 +134,10 @@ def build_thread(container, presenter, queue, log_args):
def build_thread_map(initial_containers, presenters, thread_args):
return {
container.id: build_thread(container, next(presenters), *thread_args)
for container in initial_containers
# Container order is unspecified, so containers are sorted by name to make the
# container:presenter (log color) assignment deterministic across runs for a
# given set of container names.
for container in sorted(initial_containers, key=lambda c: c.name)
}
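A small sketch of why the sort matters (names and palette are illustrative; red has been dropped from the rotation per the commit above): sorting before pairing with the color cycle makes the name-to-color mapping reproducible.
from itertools import cycle
containers = ['web_1', 'worker_1', 'db_1']
palette = cycle(['green', 'yellow', 'blue', 'magenta', 'cyan'])
print({name: color for name, color in zip(sorted(containers), palette)})
# {'db_1': 'green', 'web_1': 'yellow', 'worker_1': 'blue'}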
@@ -210,10 +213,15 @@ def start_producer_thread(thread_args):
def watch_events(thread_map, event_stream, presenters, thread_args):
crashed_containers = set()
for event in event_stream:
if event['action'] == 'stop':
thread_map.pop(event['id'], None)
if event['action'] == 'die':
thread_map.pop(event['id'], None)
crashed_containers.add(event['id'])
if event['action'] != 'start':
continue
@@ -223,10 +231,22 @@ def watch_events(thread_map, event_stream, presenters, thread_args):
# Container was stopped and started, we need a new thread
thread_map.pop(event['id'], None)
# Container crashed so we should reattach to it
if event['id'] in crashed_containers:
container = event['container']
if not container.is_restarting:
try:
container.attach_log_stream()
except APIError:
# Just ignore errors when reattaching to already crashed containers
pass
crashed_containers.remove(event['id'])
thread_map[event['id']] = build_thread(
event['container'],
next(presenters),
*thread_args)
*thread_args
)
def consume_queue(queue, cascade_stop):

View File

@@ -6,6 +6,7 @@ import contextlib
import functools
import json
import logging
import os
import pipes
import re
import subprocess
@@ -102,9 +103,9 @@ def dispatch():
options, handler, command_options = dispatcher.parse(sys.argv[1:])
setup_console_handler(console_handler,
options.get('--verbose'),
options.get('--no-ansi'),
set_no_color_if_clicolor(options.get('--no-ansi')),
options.get("--log-level"))
setup_parallel_logger(options.get('--no-ansi'))
setup_parallel_logger(set_no_color_if_clicolor(options.get('--no-ansi')))
if options.get('--no-ansi'):
command_options['--no-color'] = True
return functools.partial(perform_command, options, handler, command_options)
@@ -206,8 +207,9 @@ class TopLevelCommand(object):
name specified in the client certificate
--project-directory PATH Specify an alternate working directory
(default: the path of the Compose file)
--compatibility If set, Compose will attempt to convert deploy
keys in v3 files to their non-Swarm equivalent
--compatibility If set, Compose will attempt to convert keys
in v3 files to their non-Swarm equivalent
--env-file PATH Specify an alternate environment file
Commands:
build Build or rebuild services
@@ -238,11 +240,19 @@ class TopLevelCommand(object):
version Show the Docker-Compose version information
"""
def __init__(self, project, project_dir='.', options=None):
def __init__(self, project, options=None):
self.project = project
self.project_dir = '.'
self.toplevel_options = options or {}
@property
def project_dir(self):
return self.toplevel_options.get('--project-directory') or '.'
@property
def toplevel_environment(self):
environment_file = self.toplevel_options.get('--env-file')
return Environment.from_env_file(self.project_dir, environment_file)
def build(self, options):
"""
Build or rebuild services.
@@ -254,12 +264,18 @@ class TopLevelCommand(object):
Usage: build [options] [--build-arg key=val...] [SERVICE...]
Options:
--build-arg key=val Set build-time variables for services.
--compress Compress the build context using gzip.
--force-rm Always remove intermediate containers.
-m, --memory MEM Set memory limit for the build container.
--no-cache Do not use cache when building the image.
--no-rm Do not remove intermediate containers after a successful build.
--parallel Build images in parallel.
--progress string Set type of progress output (auto, plain, tty).
EXPERIMENTAL flag for native builder.
To enable, run with COMPOSE_DOCKER_CLI_BUILD=1
--pull Always attempt to pull a newer version of the image.
-m, --memory MEM Sets memory limit for the build container.
--build-arg key=val Set build-time variables for services.
-q, --quiet Don't print anything to STDOUT
"""
service_names = options['SERVICE']
build_args = options.get('--build-arg', None)
@@ -269,8 +285,9 @@ class TopLevelCommand(object):
'--build-arg is only supported when services are specified for API version < 1.25.'
' Please use a Compose file version > 2.2 or specify which services to build.'
)
environment = Environment.from_env_file(self.project_dir)
build_args = resolve_build_args(build_args, environment)
build_args = resolve_build_args(build_args, self.toplevel_environment)
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
self.project.build(
service_names=options['SERVICE'],
@@ -278,8 +295,13 @@ class TopLevelCommand(object):
pull=bool(options.get('--pull', False)),
force_rm=bool(options.get('--force-rm', False)),
memory=options.get('--memory'),
rm=not bool(options.get('--no-rm', False)),
build_args=build_args,
gzip=options.get('--compress', False),
parallel_build=options.get('--parallel', False),
silent=options.get('--quiet', False),
cli=native_builder,
progress=options.get('--progress'),
)
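Usage sketch for the experimental native-builder path wired up here (invocation hypothetical):
# COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build --progress plain web
import os
native_builder = bool(os.environ.get('COMPOSE_DOCKER_CLI_BUILD'))  # rough stand-in for Environment.get_boolean
print(native_builder)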
def bundle(self, options):
@@ -301,7 +323,7 @@ class TopLevelCommand(object):
-o, --output PATH Path to write the bundle file to.
Defaults to "<project name>.dab".
"""
compose_config = get_config_from_options(self.project_dir, self.toplevel_options)
compose_config = get_config_from_options('.', self.toplevel_options)
output = options["--output"]
if not output:
@@ -322,18 +344,22 @@ class TopLevelCommand(object):
Options:
--resolve-image-digests Pin image tags to digests.
--no-interpolate Don't interpolate environment variables
-q, --quiet Only validate the configuration, don't print
anything.
--services Print the service names, one per line.
--volumes Print the volume names, one per line.
--hash="*" Print the service config hash, one per line.
Set "service1,service2" for a list of specified services
or use the wildcard symbol to display all services
"""
compose_config = get_config_from_options(self.project_dir, self.toplevel_options)
additional_options = {'--no-interpolate': options.get('--no-interpolate')}
compose_config = get_config_from_options('.', self.toplevel_options, additional_options)
image_digests = None
if options['--resolve-image-digests']:
self.project = project_from_options('.', self.toplevel_options)
self.project = project_from_options('.', self.toplevel_options, additional_options)
with errors.handle_connection_errors(self.project.client):
image_digests = image_digests_for_project(self.project)
@@ -348,7 +374,16 @@ class TopLevelCommand(object):
print('\n'.join(volume for volume in compose_config.volumes))
return
print(serialize_config(compose_config, image_digests))
if options['--hash'] is not None:
h = options['--hash']
self.project = project_from_options('.', self.toplevel_options, additional_options)
services = h.split(',') if h != '*' else None
with errors.handle_connection_errors(self.project.client):
for service in self.project.get_services(services):
print('{} {}'.format(service.name, service.config_hash))
return
print(serialize_config(compose_config, image_digests, not options['--no-interpolate']))
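A sketch of the new --hash parsing (service names illustrative): the wildcard selects every service, otherwise the comma-separated list is split.
h = 'web,db'
services = h.split(',') if h != '*' else None
print(services)  # ['web', 'db']; None makes project.get_services() return all services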
def create(self, options):
"""
@@ -367,7 +402,7 @@ class TopLevelCommand(object):
"""
service_names = options['SERVICE']
log.warn(
log.warning(
'The create command is deprecated. '
'Use the up command with the --no-start flag instead.'
)
@@ -406,8 +441,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
environment = Environment.from_env_file(self.project_dir)
ignore_orphans = environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
ignore_orphans = self.toplevel_environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
if ignore_orphans and options['--remove-orphans']:
raise UserError("COMPOSE_IGNORE_ORPHANS and --remove-orphans cannot be combined.")
@@ -464,8 +498,7 @@ class TopLevelCommand(object):
not supported in API < 1.25)
-w, --workdir DIR Path to workdir directory for this command.
"""
environment = Environment.from_env_file(self.project_dir)
use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
use_cli = not self.toplevel_environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
index = int(options.get('--index'))
service = self.project.get_service(options['SERVICE'])
detach = options.get('--detach')
@@ -488,7 +521,7 @@ class TopLevelCommand(object):
if IS_WINDOWS_PLATFORM or use_cli and not detach:
sys.exit(call_docker(
build_exec_command(options, container.id, command),
self.toplevel_options)
self.toplevel_options, self.toplevel_environment)
)
create_exec_options = {
@@ -552,31 +585,43 @@ class TopLevelCommand(object):
if options['--quiet']:
for image in set(c.image for c in containers):
print(image.split(':')[1])
else:
headers = [
'Container',
'Repository',
'Tag',
'Image Id',
'Size'
]
rows = []
for container in containers:
image_config = container.image_config
repo_tags = (
image_config['RepoTags'][0].rsplit(':', 1) if image_config['RepoTags']
else ('<none>', '<none>')
)
image_id = image_config['Id'].split(':')[1][:12]
size = human_readable_file_size(image_config['Size'])
rows.append([
container.name,
repo_tags[0],
repo_tags[1],
image_id,
size
])
print(Formatter().table(headers, rows))
return
def add_default_tag(img_name):
if ':' not in img_name.split('/')[-1]:
return '{}:latest'.format(img_name)
return img_name
headers = [
'Container',
'Repository',
'Tag',
'Image Id',
'Size'
]
rows = []
for container in containers:
image_config = container.image_config
service = self.project.get_service(container.service)
index = 0
img_name = add_default_tag(service.image_name)
if img_name in image_config['RepoTags']:
index = image_config['RepoTags'].index(img_name)
repo_tags = (
image_config['RepoTags'][index].rsplit(':', 1) if image_config['RepoTags']
else ('<none>', '<none>')
)
image_id = image_config['Id'].split(':')[1][:12]
size = human_readable_file_size(image_config['Size'])
rows.append([
container.name,
repo_tags[0],
repo_tags[1],
image_id,
size
])
print(Formatter.table(headers, rows))
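A sketch of the tag-selection change (values hypothetical): each row now shows the RepoTag matching the service's configured image instead of whichever tag happens to be first.
def add_default_tag(img_name):
    if ':' not in img_name.split('/')[-1]:
        return '{}:latest'.format(img_name)
    return img_name
repo_tags = ['myapp:v2', 'myapp:latest']  # stand-in for image_config['RepoTags']
img_name = add_default_tag('myapp')       # -> 'myapp:latest'
index = repo_tags.index(img_name) if img_name in repo_tags else 0
print(repo_tags[index].rsplit(':', 1))    # ['myapp', 'latest']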
def kill(self, options):
"""
@@ -622,7 +667,7 @@ class TopLevelCommand(object):
log_printer_from_project(
self.project,
containers,
options['--no-color'],
set_no_color_if_clicolor(options['--no-color']),
log_args,
event_stream=self.project.events(service_names=options['SERVICE'])).run()
@@ -666,6 +711,7 @@ class TopLevelCommand(object):
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the run command)
"""
if options['--quiet'] and options['--services']:
raise UserError('--quiet and --services cannot be combined')
@@ -678,10 +724,15 @@ class TopLevelCommand(object):
print('\n'.join(service.name for service in services))
return
containers = sorted(
self.project.containers(service_names=options['SERVICE'], stopped=True) +
self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
key=attrgetter('name'))
if options['--all']:
containers = sorted(self.project.containers(service_names=options['SERVICE'],
one_off=OneOffFilter.include, stopped=True),
key=attrgetter('name'))
else:
containers = sorted(
self.project.containers(service_names=options['SERVICE'], stopped=True) +
self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
key=attrgetter('name'))
if options['--quiet']:
for container in containers:
@@ -704,7 +755,7 @@ class TopLevelCommand(object):
container.human_readable_state,
container.human_readable_ports,
])
print(Formatter().table(headers, rows))
print(Formatter.table(headers, rows))
def pull(self, options):
"""
@@ -720,7 +771,7 @@ class TopLevelCommand(object):
--include-deps Also pull services declared as dependencies
"""
if options.get('--parallel'):
log.warn('--parallel option is deprecated and will be removed in future versions.')
log.warning('--parallel option is deprecated and will be removed in future versions.')
self.project.pull(
service_names=options['SERVICE'],
ignore_pull_failures=options.get('--ignore-pull-failures'),
@@ -761,7 +812,7 @@ class TopLevelCommand(object):
-a, --all Deprecated - no effect.
"""
if options.get('--all'):
log.warn(
log.warning(
'--all flag is obsolete. This is now the default behavior '
'of `docker-compose rm`'
)
@@ -839,10 +890,12 @@ class TopLevelCommand(object):
else:
command = service.options.get('command')
container_options = build_container_options(options, detach, command)
options['stdin_open'] = service.options.get('stdin_open', True)
container_options = build_one_off_container_options(options, detach, command)
run_one_off_container(
container_options, self.project, service, options,
self.toplevel_options, self.project_dir
self.toplevel_options, self.toplevel_environment
)
def scale(self, options):
@@ -871,7 +924,7 @@ class TopLevelCommand(object):
'Use the up command with the --scale flag instead.'
)
else:
log.warn(
log.warning(
'The scale command is deprecated. '
'Use the up command with the --scale flag instead.'
)
@@ -942,7 +995,7 @@ class TopLevelCommand(object):
rows.append(process)
print(container.name)
print(Formatter().table(headers, rows))
print(Formatter.table(headers, rows))
def unpause(self, options):
"""
@@ -1017,8 +1070,7 @@ class TopLevelCommand(object):
if detached and (cascade_stop or exit_value_from):
raise UserError("--abort-on-container-exit and -d cannot be combined.")
environment = Environment.from_env_file(self.project_dir)
ignore_orphans = environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
ignore_orphans = self.toplevel_environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
if ignore_orphans and remove_orphans:
raise UserError("COMPOSE_IGNORE_ORPHANS and --remove-orphans cannot be combined.")
@@ -1027,6 +1079,8 @@ class TopLevelCommand(object):
for excluded in [x for x in opts if options.get(x) and no_start]:
raise UserError('--no-start and {} cannot be combined.'.format(excluded))
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
with up_shutdown_context(self.project, service_names, timeout, detached):
warn_for_swarm_mode(self.project.client)
@@ -1046,6 +1100,7 @@ class TopLevelCommand(object):
reset_container_image=rebuild,
renew_anonymous_volumes=options.get('--renew-anon-volumes'),
silent=options.get('--quiet-pull'),
cli=native_builder,
)
try:
@@ -1070,7 +1125,7 @@ class TopLevelCommand(object):
log_printer = log_printer_from_project(
self.project,
attached_containers,
options['--no-color'],
set_no_color_if_clicolor(options['--no-color']),
{'follow': True},
cascade_stop,
event_stream=self.project.events(service_names=service_names))
@@ -1085,12 +1140,15 @@ class TopLevelCommand(object):
)
self.project.stop(service_names=service_names, timeout=timeout)
if exit_value_from:
exit_code = compute_service_exit_code(exit_value_from, attached_containers)
sys.exit(exit_code)
@classmethod
def version(cls, options):
"""
Show version informations
Show version information
Usage: version [--short]
@@ -1103,33 +1161,33 @@ class TopLevelCommand(object):
print(get_version_info('full'))
def compute_service_exit_code(exit_value_from, attached_containers):
candidates = list(filter(
lambda c: c.service == exit_value_from,
attached_containers))
if not candidates:
log.error(
'No containers matching the spec "{0}" '
'were run.'.format(exit_value_from)
)
return 2
if len(candidates) > 1:
exit_values = list(filter(  # list() so the indexing below also works on Python 3
lambda e: e != 0,
[c.inspect()['State']['ExitCode'] for c in candidates]
))
return exit_values[0]
return candidates[0].inspect()['State']['ExitCode']
def compute_exit_code(exit_value_from, attached_containers, cascade_starter, all_containers):
exit_code = 0
if exit_value_from:
candidates = list(filter(
lambda c: c.service == exit_value_from,
attached_containers))
if not candidates:
log.error(
'No containers matching the spec "{0}" '
'were run.'.format(exit_value_from)
)
exit_code = 2
elif len(candidates) > 1:
exit_values = filter(
lambda e: e != 0,
[c.inspect()['State']['ExitCode'] for c in candidates]
)
exit_code = exit_values[0]
else:
exit_code = candidates[0].inspect()['State']['ExitCode']
else:
for e in all_containers:
if (not e.is_running and cascade_starter == e.name):
if not e.exit_code == 0:
exit_code = e.exit_code
break
for e in all_containers:
if (not e.is_running and cascade_starter == e.name):
if not e.exit_code == 0:
exit_code = e.exit_code
break
return exit_code
@@ -1200,7 +1258,7 @@ def exitval_from_opts(options, project):
exit_value_from = options.get('--exit-code-from')
if exit_value_from:
if not options.get('--abort-on-container-exit'):
log.warn('using --exit-code-from implies --abort-on-container-exit')
log.warning('using --exit-code-from implies --abort-on-container-exit')
options['--abort-on-container-exit'] = True
if exit_value_from not in [s.name for s in project.get_services()]:
log.error('No service named "%s" was found in your compose file.',
@@ -1231,11 +1289,11 @@ def build_action_from_opts(options):
return BuildAction.none
def build_container_options(options, detach, command):
def build_one_off_container_options(options, detach, command):
container_options = {
'command': command,
'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
'stdin_open': not detach,
'stdin_open': options.get('stdin_open'),
'detach': detach,
}
@@ -1252,8 +1310,8 @@ def build_container_options(options, detach, command):
[""] if options['--entrypoint'] == '' else options['--entrypoint']
)
if options['--rm']:
container_options['restart'] = None
# Ensure that run command remains one-off (issue #6302)
container_options['restart'] = None
if options['--user']:
container_options['user'] = options.get('--user')
@@ -1278,7 +1336,7 @@ def build_container_options(options, detach, command):
def run_one_off_container(container_options, project, service, options, toplevel_options,
project_dir='.'):
toplevel_environment):
if not options['--no-deps']:
deps = service.get_dependency_names()
if deps:
@@ -1307,8 +1365,7 @@ def run_one_off_container(container_options, project, service, options, toplevel
if options['--rm']:
project.client.remove_container(container.id, force=True, v=True)
environment = Environment.from_env_file(project_dir)
use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
use_cli = not toplevel_environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
signals.set_signal_handler_to_shutdown()
signals.set_signal_handler_to_hang_up()
@@ -1317,8 +1374,8 @@ def run_one_off_container(container_options, project, service, options, toplevel
if IS_WINDOWS_PLATFORM or use_cli:
service.connect_container_to_networks(container, use_network_aliases)
exit_code = call_docker(
["start", "--attach", "--interactive", container.id],
toplevel_options
get_docker_start_call(container_options, container.id),
toplevel_options, toplevel_environment
)
else:
operation = RunOperation(
@@ -1344,6 +1401,16 @@ def run_one_off_container(container_options, project, service, options, toplevel
sys.exit(exit_code)
def get_docker_start_call(container_options, container_id):
docker_call = ["start"]
if not container_options.get('detach'):
docker_call.append("--attach")
if container_options.get('stdin_open'):
docker_call.append("--interactive")
docker_call.append(container_id)
return docker_call
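Usage sketch for the helper above (container id hypothetical):
print(get_docker_start_call({'detach': False, 'stdin_open': True}, 'abc123'))
# ['start', '--attach', '--interactive', 'abc123']
print(get_docker_start_call({'detach': True, 'stdin_open': False}, 'abc123'))
# ['start', 'abc123']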
def log_printer_from_project(
project,
containers,
@@ -1398,7 +1465,7 @@ def exit_if(condition, message, exit_code):
raise SystemExit(exit_code)
def call_docker(args, dockeropts):
def call_docker(args, dockeropts, environment):
executable_path = find_executable('docker')
if not executable_path:
raise UserError(errors.docker_not_found_msg("Couldn't find `docker` binary."))
@@ -1421,12 +1488,14 @@ def call_docker(args, dockeropts):
if verify:
tls_options.append('--tlsverify')
if host:
tls_options.extend(['--host', host.lstrip('=')])
tls_options.extend(
['--host', re.sub(r'^https?://', 'tcp://', host.lstrip('='))]
)
args = [executable_path] + tls_options + args
log.debug(" ".join(map(pipes.quote, args)))
return subprocess.call(args)
return subprocess.call(args, env=environment)
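A sketch of the host rewriting added above (value hypothetical): an http(s) scheme in --host/DOCKER_HOST is translated to tcp:// before being handed to the docker CLI.
import re
host = '=https://myhost:2376'
print(re.sub(r'^https?://', 'tcp://', host.lstrip('=')))  # tcp://myhost:2376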
def parse_scale_args(options):
@@ -1527,10 +1596,14 @@ def warn_for_swarm_mode(client):
# UCP does multi-node scheduling with traditional Compose files.
return
log.warn(
log.warning(
"The Docker Engine you're using is running in swarm mode.\n\n"
"Compose does not use swarm mode to deploy services to multiple nodes in a swarm. "
"All containers will be scheduled on the current node.\n\n"
"To deploy your application across the swarm, "
"use `docker stack deploy`.\n"
)
def set_no_color_if_clicolor(no_color_flag):
return no_color_flag or os.environ.get('CLICOLOR') == "0"
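Usage sketch for the new CLICOLOR handling (environment value illustrative):
import os
os.environ['CLICOLOR'] = '0'
print(set_no_color_if_clicolor(False))  # True: colors off even without --no-ansi
print(set_no_color_if_clicolor(True))   # True regardless of CLICOLOR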

View File

@@ -133,12 +133,12 @@ def generate_user_agent():
def human_readable_file_size(size):
suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ]
order = int(math.log(size, 2) / 10) if size else 0
order = int(math.log(size, 1000)) if size else 0
if order >= len(suffixes):
order = len(suffixes) - 1
return '{0:.3g} {1}'.format(
size / float(1 << (order * 10)),
return '{0:.4g} {1}'.format(
size / pow(10, order * 3),
suffixes[order]
)
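A self-contained restatement of the new decimal-unit helper, for comparison (the 1234567-byte example is illustrative; Python 3 division assumed):
import math
def human_readable_file_size(size):
    suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB']
    order = min(int(math.log(size, 1000)) if size else 0, len(suffixes) - 1)
    return '{0:.4g} {1}'.format(size / pow(10, order * 3), suffixes[order])
print(human_readable_file_size(1234567))  # '1.235 MB'; the old 1024-based code printed '1.18 MB'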

View File

@@ -6,6 +6,7 @@ from . import environment
from .config import ConfigurationError
from .config import DOCKER_CONFIG_KEYS
from .config import find
from .config import is_url
from .config import load
from .config import merge_environment
from .config import merge_labels

View File

@@ -8,6 +8,7 @@ import os
import string
import sys
from collections import namedtuple
from operator import attrgetter
import six
import yaml
@@ -50,6 +51,7 @@ from .validation import match_named_volumes
from .validation import validate_against_config_schema
from .validation import validate_config_section
from .validation import validate_cpu
from .validation import validate_credential_spec
from .validation import validate_depends_on
from .validation import validate_extends_file_path
from .validation import validate_healthcheck
@@ -91,6 +93,7 @@ DOCKER_CONFIG_KEYS = [
'healthcheck',
'image',
'ipc',
'isolation',
'labels',
'links',
'mac_address',
@@ -195,9 +198,9 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
version = self.config['version']
if isinstance(version, dict):
log.warn('Unexpected type for "version" key in "{}". Assuming '
'"version" is the name of a service, and defaulting to '
'Compose file version 1.'.format(self.filename))
log.warning('Unexpected type for "version" key in "{}". Assuming '
'"version" is the name of a service, and defaulting to '
'Compose file version 1.'.format(self.filename))
return V1
if not isinstance(version, six.string_types):
@@ -315,8 +318,8 @@ def get_default_config_files(base_dir):
winner = candidates[0]
if len(candidates) > 1:
log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
log.warn("Using %s\n", winner)
log.warning("Found multiple config files with supported names: %s", ", ".join(candidates))
log.warning("Using %s\n", winner)
return [os.path.join(path, winner)] + get_default_override_file(path)
@@ -359,7 +362,7 @@ def check_swarm_only_config(service_dicts, compatibility=False):
def check_swarm_only_key(service_dicts, key):
services = [s for s in service_dicts if s.get(key)]
if services:
log.warn(
log.warning(
warning_template.format(
services=", ".join(sorted(s['name'] for s in services)),
key=key
@@ -367,11 +370,10 @@ def check_swarm_only_config(service_dicts, compatibility=False):
)
if not compatibility:
check_swarm_only_key(service_dicts, 'deploy')
check_swarm_only_key(service_dicts, 'credential_spec')
check_swarm_only_key(service_dicts, 'configs')
def load(config_details, compatibility=False):
def load(config_details, compatibility=False, interpolate=True):
"""Load the configuration from a working directory and a list of
configuration files. Files are loaded in order, and merged on top
of each other to create the final configuration.
@@ -381,7 +383,7 @@ def load(config_details, compatibility=False):
validate_config_version(config_details.config_files)
processed_files = [
process_config_file(config_file, config_details.environment)
process_config_file(config_file, config_details.environment, interpolate=interpolate)
for config_file in config_details.config_files
]
config_details = config_details._replace(config_files=processed_files)
@@ -503,7 +505,6 @@ def load_services(config_details, config_file, compatibility=False):
def interpolate_config_section(config_file, config, section, environment):
validate_config_section(config_file.filename, config, section)
return interpolate_environment_variables(
config_file.version,
config,
@@ -512,38 +513,60 @@ def interpolate_config_section(config_file, config, section, environment):
)
def process_config_file(config_file, environment, service_name=None):
services = interpolate_config_section(
def process_config_section(config_file, config, section, environment, interpolate):
validate_config_section(config_file.filename, config, section)
if interpolate:
return interpolate_environment_variables(
config_file.version,
config,
section,
environment
)
else:
return config
def process_config_file(config_file, environment, service_name=None, interpolate=True):
services = process_config_section(
config_file,
config_file.get_service_dicts(),
'service',
environment)
environment,
interpolate,
)
if config_file.version > V1:
processed_config = dict(config_file.config)
processed_config['services'] = services
processed_config['volumes'] = interpolate_config_section(
processed_config['volumes'] = process_config_section(
config_file,
config_file.get_volumes(),
'volume',
environment)
processed_config['networks'] = interpolate_config_section(
environment,
interpolate,
)
processed_config['networks'] = process_config_section(
config_file,
config_file.get_networks(),
'network',
environment)
environment,
interpolate,
)
if config_file.version >= const.COMPOSEFILE_V3_1:
processed_config['secrets'] = interpolate_config_section(
processed_config['secrets'] = process_config_section(
config_file,
config_file.get_secrets(),
'secret',
environment)
environment,
interpolate,
)
if config_file.version >= const.COMPOSEFILE_V3_3:
processed_config['configs'] = interpolate_config_section(
processed_config['configs'] = process_config_section(
config_file,
config_file.get_configs(),
'config',
environment
environment,
interpolate,
)
else:
processed_config = services
@@ -592,7 +615,7 @@ class ServiceExtendsResolver(object):
config_path = self.get_extended_config_path(extends)
service_name = extends['service']
if config_path == self.config_file.filename:
if config_path == os.path.abspath(self.config_file.filename):
try:
service_config = self.config_file.get_service(service_name)
except KeyError:
@@ -704,6 +727,7 @@ def validate_service(service_config, service_names, config_file):
validate_depends_on(service_config, service_names)
validate_links(service_config, service_names)
validate_healthcheck(service_config)
validate_credential_spec(service_config)
if not service_dict.get('image') and has_uppercase(service_name):
raise ConfigurationError(
@@ -834,6 +858,17 @@ def finalize_service_volumes(service_dict, environment):
finalized_volumes.append(MountSpec.parse(v, normalize, win_host))
else:
finalized_volumes.append(VolumeSpec.parse(v, normalize, win_host))
duplicate_mounts = []
mounts = [v.as_volume_spec() if isinstance(v, MountSpec) else v for v in finalized_volumes]
for mount in mounts:
if list(map(attrgetter('internal'), mounts)).count(mount.internal) > 1:
duplicate_mounts.append(mount.repr())
if duplicate_mounts:
raise ConfigurationError("Duplicate mount points: [%s]" % (
', '.join(duplicate_mounts)))
service_dict['volumes'] = finalized_volumes
return service_dict
@@ -881,11 +916,12 @@ def finalize_service(service_config, service_names, version, environment, compat
normalize_build(service_dict, service_config.working_dir, environment)
if compatibility:
service_dict = translate_credential_spec_to_security_opt(service_dict)
service_dict, ignored_keys = translate_deploy_keys_to_container_config(
service_dict
)
if ignored_keys:
log.warn(
log.warning(
'The following deploy sub-keys are not supported in compatibility mode and have'
' been ignored: {}'.format(', '.join(ignored_keys))
)
@@ -917,6 +953,25 @@ def convert_restart_policy(name):
raise ConfigurationError('Invalid restart policy "{}"'.format(name))
def convert_credential_spec_to_security_opt(credential_spec):
if 'file' in credential_spec:
return 'file://{file}'.format(file=credential_spec['file'])
return 'registry://{registry}'.format(registry=credential_spec['registry'])
def translate_credential_spec_to_security_opt(service_dict):
result = []
if 'credential_spec' in service_dict:
spec = convert_credential_spec_to_security_opt(service_dict['credential_spec'])
result.append('credentialspec={spec}'.format(spec=spec))
if result:
service_dict['security_opt'] = result
return service_dict
def translate_deploy_keys_to_container_config(service_dict):
if 'credential_spec' in service_dict:
del service_dict['credential_spec']
@@ -1039,15 +1094,16 @@ def merge_service_dicts(base, override, version):
md.merge_mapping('environment', parse_environment)
md.merge_mapping('labels', parse_labels)
md.merge_mapping('ulimits', parse_flat_dict)
md.merge_mapping('networks', parse_networks)
md.merge_mapping('sysctls', parse_sysctls)
md.merge_mapping('depends_on', parse_depends_on)
md.merge_mapping('storage_opt', parse_flat_dict)
md.merge_sequence('links', ServiceLink.parse)
md.merge_sequence('secrets', types.ServiceSecret.parse)
md.merge_sequence('configs', types.ServiceConfig.parse)
md.merge_sequence('security_opt', types.SecurityOpt.parse)
md.merge_mapping('extra_hosts', parse_extra_hosts)
md.merge_field('networks', merge_networks, default={})
for field in ['volumes', 'devices']:
md.merge_field(field, merge_path_mappings)
@@ -1152,6 +1208,22 @@ def merge_deploy(base, override):
return dict(md)
def merge_networks(base, override):
merged_networks = {}
all_network_names = set(base) | set(override)
base = {k: {} for k in base} if isinstance(base, list) else base
override = {k: {} for k in override} if isinstance(override, list) else override
for network_name in all_network_names:
md = MergeDict(base.get(network_name) or {}, override.get(network_name) or {})
md.merge_field('aliases', merge_unique_items_lists, [])
md.merge_field('link_local_ips', merge_unique_items_lists, [])
md.merge_scalar('priority')
md.merge_scalar('ipv4_address')
md.merge_scalar('ipv6_address')
merged_networks[network_name] = dict(md)
return merged_networks
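A simplified sketch of the normalization this merge relies on (network names illustrative; the real code funnels each network through MergeDict):
base = ['backend']                                   # short list syntax
override = {'backend': {'aliases': ['db'], 'priority': 1}}
base = {k: {} for k in base} if isinstance(base, list) else base
merged = {n: dict(base.get(n) or {}, **(override.get(n) or {})) for n in set(base) | set(override)}
print(merged)  # {'backend': {'aliases': ['db'], 'priority': 1}}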
def merge_reservations(base, override):
md = MergeDict(base, override)
md.merge_scalar('cpus')
@@ -1281,7 +1353,7 @@ def resolve_volume_paths(working_dir, service_dict):
def resolve_volume_path(working_dir, volume):
if isinstance(volume, dict):
if volume.get('source', '').startswith('.') and volume['type'] == 'bind':
if volume.get('source', '').startswith(('.', '~')) and volume['type'] == 'bind':
volume['source'] = expand_path(working_dir, volume['source'])
return volume

View File

@@ -5,11 +5,13 @@ import codecs
import contextlib
import logging
import os
import re
import six
from ..const import IS_WINDOWS_PLATFORM
from .errors import ConfigurationError
from .errors import EnvFileNotFound
log = logging.getLogger(__name__)
@@ -17,10 +19,16 @@ log = logging.getLogger(__name__)
def split_env(env):
if isinstance(env, six.binary_type):
env = env.decode('utf-8', 'replace')
key = value = None
if '=' in env:
return env.split('=', 1)
key, value = env.split('=', 1)
else:
return env, None
key = env
if re.search(r'\s', key):
raise ConfigurationError(
"environment variable name '{}' may not contain whitespace.".format(key)
)
return key, value
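Usage sketch for the stricter parser above:
print(split_env('FOO=bar baz'))  # ('FOO', 'bar baz') - whitespace in values is still allowed
print(split_env('FOO'))          # ('FOO', None)
# split_env('FO O=bar') now raises ConfigurationError: names may not contain whitespace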
def env_vars_from_file(filename):
@@ -28,16 +36,19 @@ def env_vars_from_file(filename):
Read in a line delimited file of environment variables.
"""
if not os.path.exists(filename):
raise ConfigurationError("Couldn't find env file: %s" % filename)
raise EnvFileNotFound("Couldn't find env file: {}".format(filename))
elif not os.path.isfile(filename):
raise ConfigurationError("%s is not a file." % (filename))
raise EnvFileNotFound("{} is not a file.".format(filename))
env = {}
with contextlib.closing(codecs.open(filename, 'r', 'utf-8-sig')) as fileobj:
for line in fileobj:
line = line.strip()
if line and not line.startswith('#'):
k, v = split_env(line)
env[k] = v
try:
k, v = split_env(line)
env[k] = v
except ConfigurationError as e:
raise ConfigurationError('In file {}: {}'.format(filename, e.msg))
return env
@@ -45,19 +56,24 @@ class Environment(dict):
def __init__(self, *args, **kwargs):
super(Environment, self).__init__(*args, **kwargs)
self.missing_keys = []
self.silent = False
@classmethod
def from_env_file(cls, base_dir):
def from_env_file(cls, base_dir, env_file=None):
def _initialize():
result = cls()
if base_dir is None:
return result
env_file_path = os.path.join(base_dir, '.env')
if env_file:
env_file_path = os.path.join(base_dir, env_file)
else:
env_file_path = os.path.join(base_dir, '.env')
try:
return cls(env_vars_from_file(env_file_path))
except ConfigurationError:
except EnvFileNotFound:
pass
return result
instance = _initialize()
instance.update(os.environ)
return instance
@@ -83,8 +99,8 @@ class Environment(dict):
return super(Environment, self).__getitem__(key.upper())
except KeyError:
pass
if key not in self.missing_keys:
log.warn(
if not self.silent and key not in self.missing_keys:
log.warning(
"The {} variable is not set. Defaulting to a blank string."
.format(key)
)

View File

@@ -19,6 +19,10 @@ class ConfigurationError(Exception):
return self.msg
class EnvFileNotFound(ConfigurationError):
pass
class DependencyError(ConfigurationError):
pass

View File

@@ -48,7 +48,7 @@ def interpolate_environment_variables(version, config, section, environment):
def get_config_path(config_key, section, name):
return '{}.{}.{}'.format(section, name, config_key)
return '{}/{}/{}'.format(section, name, config_key)
def interpolate_value(name, config_key, value, section, interpolator):
@@ -64,18 +64,18 @@ def interpolate_value(name, config_key, value, section, interpolator):
string=e.string))
except UnsetRequiredSubstitution as e:
raise ConfigurationError(
'Missing mandatory value for "{config_key}" option in {section} "{name}": {err}'.format(
config_key=config_key,
name=name,
section=section,
err=e.err
)
'Missing mandatory value for "{config_key}" option interpolating {value} '
'in {section} "{name}": {err}'.format(config_key=config_key,
value=value,
name=name,
section=section,
err=e.err)
)
def recursive_interpolate(obj, interpolator, config_path):
def append(config_path, key):
return '{}.{}'.format(config_path, key)
return '{}/{}'.format(config_path, key)
if isinstance(obj, six.string_types):
return converter.convert(config_path, interpolator.interpolate(obj))
@@ -160,12 +160,12 @@ class UnsetRequiredSubstitution(Exception):
self.err = custom_err_msg
PATH_JOKER = '[^.]+'
PATH_JOKER = '[^/]+'
FULL_JOKER = '.+'
def re_path(*args):
return re.compile('^{}$'.format('\.'.join(args)))
return re.compile('^{}$'.format('/'.join(args)))
def re_path_basic(section, name):
@@ -288,7 +288,7 @@ class ConversionMap(object):
except ValueError as e:
raise ConfigurationError(
'Error while attempting to convert {} to appropriate type: {}'.format(
path, e
path.replace('/', '.'), e
)
)
return value
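A sketch of the separator change (service name illustrative): '/' keeps paths unambiguous for names that contain dots, which is presumably why PATH_JOKER became '[^/]+' above.
def get_config_path(config_key, section, name):
    return '{}/{}/{}'.format(section, name, config_key)
print(get_config_path('ports', 'service', 'web.v1'))  # service/web.v1/ports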

View File

@@ -24,14 +24,12 @@ def serialize_dict_type(dumper, data):
def serialize_string(dumper, data):
""" Ensure boolean-like strings are quoted in the output and escape $ characters """
""" Ensure boolean-like strings are quoted in the output """
representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
if isinstance(data, six.binary_type):
data = data.decode('utf-8')
data = data.replace('$', '$$')
if data.lower() in ('y', 'n', 'yes', 'no', 'on', 'off', 'true', 'false'):
# Empirically only y/n appears to be an issue, but this might change
# depending on which PyYaml version is being used. Err on safe side.
@@ -39,6 +37,12 @@ def serialize_string(dumper, data):
return representer(data)
def serialize_string_escape_dollar(dumper, data):
""" Ensure boolean-like strings are quoted in the output and escape $ characters """
data = data.replace('$', '$$')
return serialize_string(dumper, data)
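A sketch of what the escaping representer does to a value before dumping (string illustrative):
data = 'echo $HOME'
print(data.replace('$', '$$'))  # echo $$HOME - survives re-interpolation when the output is reloaded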
yaml.SafeDumper.add_representer(types.MountSpec, serialize_dict_type)
yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)
@@ -46,8 +50,6 @@ yaml.SafeDumper.add_representer(types.SecurityOpt, serialize_config_type)
yaml.SafeDumper.add_representer(types.ServiceSecret, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServiceConfig, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)
yaml.SafeDumper.add_representer(str, serialize_string)
yaml.SafeDumper.add_representer(six.text_type, serialize_string)
def denormalize_config(config, image_digests=None):
@@ -78,7 +80,7 @@ def denormalize_config(config, image_digests=None):
config.version >= V3_0 and config.version < v3_introduced_name_key(key)):
del conf['name']
elif 'external' in conf:
conf['external'] = True
conf['external'] = bool(conf['external'])
if 'attachable' in conf and config.version < V3_2:
# For compatibility mode, this option is invalid in v2
@@ -93,7 +95,13 @@ def v3_introduced_name_key(key):
return V3_5
def serialize_config(config, image_digests=None):
def serialize_config(config, image_digests=None, escape_dollar=True):
if escape_dollar:
yaml.SafeDumper.add_representer(str, serialize_string_escape_dollar)
yaml.SafeDumper.add_representer(six.text_type, serialize_string_escape_dollar)
else:
yaml.SafeDumper.add_representer(str, serialize_string)
yaml.SafeDumper.add_representer(six.text_type, serialize_string)
return yaml.safe_dump(
denormalize_config(config, image_digests),
default_flow_style=False,

View File

@@ -125,7 +125,7 @@ def parse_extra_hosts(extra_hosts_config):
def normalize_path_for_engine(path):
"""Windows paths, c:\my\path\shiny, need to be changed to be compatible with
"""Windows paths, c:\\my\\path\\shiny, need to be changed to be compatible with
the Engine. Volume paths are expected to be linux style /c/my/path/shiny/
"""
drive, tail = splitdrive(path)
@@ -136,6 +136,20 @@ def normalize_path_for_engine(path):
return path.replace('\\', '/')
def normpath(path, win_host=False):
""" Custom path normalizer that handles Compose-specific edge cases like
UNIX paths on Windows hosts and vice-versa. """
sysnorm = ntpath.normpath if win_host else os.path.normpath
# If a path looks like a UNIX absolute path on Windows, it probably is;
# we'll need to revert the backslashes to forward slashes after normalization
flip_slashes = path.startswith('/') and IS_WINDOWS_PLATFORM
path = sysnorm(path)
if flip_slashes:
path = path.replace('\\', '/')
return path
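A sketch of the edge case this helper covers (path illustrative): a UNIX-style absolute path normalized under Windows rules gets its separators flipped back.
import ntpath
p = ntpath.normpath('/c/my/path/../shiny')  # '\\c\\my\\shiny' under ntpath rules
print(p.replace('\\', '/'))                 # /c/my/shiny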
class MountSpec(object):
options_map = {
'volume': {
@@ -152,12 +166,11 @@ class MountSpec(object):
@classmethod
def parse(cls, mount_dict, normalize=False, win_host=False):
normpath = ntpath.normpath if win_host else os.path.normpath
if mount_dict.get('source'):
if mount_dict['type'] == 'tmpfs':
raise ConfigurationError('tmpfs mounts can not specify a source')
mount_dict['source'] = normpath(mount_dict['source'])
mount_dict['source'] = normpath(mount_dict['source'], win_host)
if normalize:
mount_dict['source'] = normalize_path_for_engine(mount_dict['source'])
@@ -247,7 +260,7 @@ class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')):
else:
external = parts[0]
parts = separate_next_section(parts[1])
external = ntpath.normpath(external)
external = normpath(external, True)
internal = parts[0]
if len(parts) > 1:
if ':' in parts[1]:

View File

@@ -41,15 +41,15 @@ DOCKER_CONFIG_HINTS = {
}
VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]'
VALID_NAME_CHARS = r'[a-zA-Z0-9\._\-]'
VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'
VALID_IPV4_SEG = r'(\d{1,2}|1\d{2}|2[0-4]\d|25[0-5])'
VALID_IPV4_ADDR = "({IPV4_SEG}\.){{3}}{IPV4_SEG}".format(IPV4_SEG=VALID_IPV4_SEG)
VALID_REGEX_IPV4_CIDR = "^{IPV4_ADDR}/(\d|[1-2]\d|3[0-2])$".format(IPV4_ADDR=VALID_IPV4_ADDR)
VALID_IPV4_ADDR = r"({IPV4_SEG}\.){{3}}{IPV4_SEG}".format(IPV4_SEG=VALID_IPV4_SEG)
VALID_REGEX_IPV4_CIDR = r"^{IPV4_ADDR}/(\d|[1-2]\d|3[0-2])$".format(IPV4_ADDR=VALID_IPV4_ADDR)
VALID_IPV6_SEG = r'[0-9a-fA-F]{1,4}'
VALID_REGEX_IPV6_CIDR = "".join("""
VALID_REGEX_IPV6_CIDR = "".join(r"""
^
(
(({IPV6_SEG}:){{7}}{IPV6_SEG})|
@@ -240,6 +240,18 @@ def validate_depends_on(service_config, service_names):
)
def validate_credential_spec(service_config):
credential_spec = service_config.config.get('credential_spec')
if not credential_spec:
return
if 'registry' not in credential_spec and 'file' not in credential_spec:
raise ConfigurationError(
"Service '{s.name}' is missing 'credential_spec.file' or "
"credential_spec.registry'".format(s=service_config)
)
def get_unsupported_config_msg(path, error_key):
msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key)
if error_key in DOCKER_CONFIG_HINTS:
@@ -330,7 +342,10 @@ def handle_generic_error(error, path):
def parse_key_from_error_msg(error):
return error.message.split("'")[1]
try:
return error.message.split("'")[1]
except IndexError:
return error.message.split('(')[1].split(' ')[0].strip("'")
def path_string(path):

View File

@@ -7,20 +7,24 @@ from .version import ComposeVersion
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IMAGE_EVENTS = ['delete', 'import', 'load', 'pull', 'push', 'save', 'tag', 'untag']
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_WORKING_DIR = 'com.docker.compose.project.working_dir'
LABEL_CONFIG_FILES = 'com.docker.compose.project.config_files'
LABEL_ENVIRONMENT_FILE = 'com.docker.compose.project.environment_file'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_NETWORK = 'com.docker.compose.network'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_SLUG = 'com.docker.compose.slug'
LABEL_VOLUME = 'com.docker.compose.volume'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
NANOCPUS_SCALE = 1000000000
PARALLEL_LIMIT = 64
SECRETS_PATH = '/run/secrets'
WINDOWS_LONGPATH_PREFIX = '\\\\?\\'
COMPOSEFILE_V1 = ComposeVersion('1')
COMPOSEFILE_V2_0 = ComposeVersion('2.0')

View File

@@ -7,9 +7,12 @@ import six
from docker.errors import ImageNotFound
from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_SLUG
from .const import LABEL_VERSION
from .utils import truncate_id
from .version import ComposeVersion
@@ -80,18 +83,36 @@ class Container(object):
@property
def name_without_project(self):
if self.name.startswith('{0}_{1}'.format(self.project, self.service)):
return '{0}_{1}'.format(self.service, self.number)
return '{0}_{1}'.format(self.service, self.number if self.number is not None else self.slug)
else:
return self.name
@property
def number(self):
if self.one_off:
# One-off containers are no longer assigned numbers and use slugs instead.
return None
number = self.labels.get(LABEL_CONTAINER_NUMBER)
if not number:
raise ValueError("Container {0} does not have a {1} label".format(
self.short_id, LABEL_CONTAINER_NUMBER))
return int(number)
@property
def slug(self):
if not self.full_slug:
return None
return truncate_id(self.full_slug)
@property
def full_slug(self):
return self.labels.get(LABEL_SLUG)
@property
def one_off(self):
return self.labels.get(LABEL_ONE_OFF) == 'True'
@property
def ports(self):
self.inspect_if_not_inspected()

View File

@@ -226,12 +226,12 @@ def check_remote_network_config(remote, local):
raise NetworkConfigChangedError(local.true_name, 'enable_ipv6')
local_labels = local.labels or {}
remote_labels = remote.get('Labels', {})
remote_labels = remote.get('Labels') or {}
for k in set.union(set(remote_labels.keys()), set(local_labels.keys())):
if k.startswith('com.docker.'): # We are only interested in user-specified labels
continue
if remote_labels.get(k) != local_labels.get(k):
log.warn(
log.warning(
'Network {}: label "{}" has changed. It may need to be'
' recreated.'.format(local.true_name, k)
)
@@ -276,7 +276,7 @@ class ProjectNetworks(object):
}
unused = set(networks) - set(service_networks) - {'default'}
if unused:
log.warn(
log.warning(
"Some networks were defined but are not used by any service: "
"{}".format(", ".join(unused)))
return cls(service_networks, use_networking)
@@ -288,7 +288,7 @@ class ProjectNetworks(object):
try:
network.remove()
except NotFound:
log.warn("Network %s not found.", network.true_name)
log.warning("Network %s not found.", network.true_name)
def initialize(self):
if not self.use_networking:
@@ -323,7 +323,12 @@ def get_networks(service_dict, network_definitions):
'Service "{}" uses an undefined network "{}"'
.format(service_dict['name'], name))
return OrderedDict(sorted(
networks.items(),
key=lambda t: t[1].get('priority') or 0, reverse=True
))
if any([v.get('priority') for v in networks.values()]):
return OrderedDict(sorted(
networks.items(),
key=lambda t: t[1].get('priority') or 0, reverse=True
))
else:
# Ensure Compose will pick a consistent primary network if no
# priority is set
return OrderedDict(sorted(networks.items(), key=lambda t: t[0]))
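A sketch of the fallback ordering (network names illustrative): without priorities, alphabetical order makes the primary network deterministic.
from collections import OrderedDict
networks = {'frontend': {}, 'backend': {}}
print(list(OrderedDict(sorted(networks.items(), key=lambda t: t[0]))))  # ['backend', 'frontend']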

View File

@@ -43,14 +43,17 @@ class GlobalLimit(object):
cls.global_limiter = Semaphore(value)
def parallel_execute_watch(events, writer, errors, results, msg, get_name):
def parallel_execute_watch(events, writer, errors, results, msg, get_name, fail_check):
""" Watch events from a parallel execution, update status and fill errors and results.
Returns exception to re-raise.
"""
error_to_reraise = None
for obj, result, exception in events:
if exception is None:
writer.write(msg, get_name(obj), 'done', green)
if fail_check is not None and fail_check(obj):
writer.write(msg, get_name(obj), 'failed', red)
else:
writer.write(msg, get_name(obj), 'done', green)
results.append(result)
elif isinstance(exception, ImageNotFound):
# This is to bubble up ImageNotFound exceptions to the client so we
@@ -72,12 +75,14 @@ def parallel_execute_watch(events, writer, errors, results, msg, get_name):
return error_to_reraise
def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None):
def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None, fail_check=None):
"""Runs func on objects in parallel while ensuring that func is
ran on object only after it is ran on all its dependencies.
get_deps called on object must return a collection with its dependencies.
get_name called on object must return its name.
fail_check is an additional failure check for cases that should display as a failure
in the CLI logs, but don't raise an exception (such as attempting to start 0 containers)
"""
objects = list(objects)
stream = get_output_stream(sys.stderr)
@@ -96,7 +101,9 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None):
errors = {}
results = []
error_to_reraise = parallel_execute_watch(events, writer, errors, results, msg, get_name)
error_to_reraise = parallel_execute_watch(
events, writer, errors, results, msg, get_name, fail_check
)
for obj_name, error in errors.items():
stream.write("\nERROR: for {} {}\n".format(obj_name, error))
@@ -313,6 +320,13 @@ class ParallelStreamWriter(object):
self._write_ansi(msg, obj_index, color_func(status))
def get_stream_writer():
instance = ParallelStreamWriter.instance
if instance is None:
raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
return instance
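A sketch of the fail_check hook in action (the stand-in service class is hypothetical): an operation that raises nothing but produces zero containers is reported as failed rather than done.
class FakeService(object):
    def containers(self):
        return []
fail_check = lambda obj: not obj.containers()
print('failed' if fail_check(FakeService()) else 'done')  # failed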
def parallel_operation(containers, operation, options, message):
parallel_execute(
containers,

View File

@@ -19,12 +19,11 @@ def write_to_stream(s, stream):
def stream_output(output, stream):
is_terminal = hasattr(stream, 'isatty') and stream.isatty()
stream = utils.get_output_stream(stream)
all_events = []
lines = {}
diff = 0
for event in utils.json_stream(output):
all_events.append(event)
yield event
is_progress_event = 'progress' in event or 'progressDetail' in event
if not is_progress_event:
@@ -57,8 +56,6 @@ def stream_output(output, stream):
stream.flush()
return all_events
def print_output_event(event, stream, is_terminal):
if 'errorDetail' in event:
@@ -101,14 +98,14 @@ def print_output_event(event, stream, is_terminal):
def get_digest_from_pull(events):
digest = None
for event in events:
status = event.get('status')
if not status or 'Digest' not in status:
continue
_, digest = status.split(':', 1)
return digest.strip()
return None
else:
digest = status.split(':', 1)[1].strip()
return digest
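Usage sketch (events illustrative): the rewrite records every digest it sees and returns the last one instead of returning on the first match.
events = [
    {'status': 'Digest: sha256:aaa'},
    {'status': 'Status: Image is up to date'},
    {'status': 'Digest: sha256:bbb'},
]
print(get_digest_from_pull(events))  # sha256:bbb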
def get_digest_from_push(events):

View File

@@ -6,17 +6,18 @@ import logging
import operator
import re
from functools import reduce
from os import path
import enum
import six
from docker.errors import APIError
from docker.utils import version_lt
from . import parallel
from .config import ConfigurationError
from .config.config import V1
from .config.sort_services import get_container_name_from_network_mode
from .config.sort_services import get_service_name_from_network_mode
from .const import IMAGE_EVENTS
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
@@ -29,12 +30,13 @@ from .service import ContainerNetworkMode
from .service import ContainerPidMode
from .service import ConvergenceStrategy
from .service import NetworkMode
from .service import parse_repository_tag
from .service import PidMode
from .service import Service
from .service import ServiceName
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import microseconds_from_time_nano
from .utils import truncate_string
from .volume import ProjectVolumes
@@ -81,7 +83,7 @@ class Project(object):
return labels
@classmethod
def from_config(cls, name, config_data, client, default_platform=None):
def from_config(cls, name, config_data, client, default_platform=None, extra_labels=[]):
"""
Construct a Project from a config.Config object.
"""
@@ -134,6 +136,7 @@ class Project(object):
pid_mode=pid_mode,
platform=service_dict.pop('platform', None),
default_platform=default_platform,
extra_labels=extra_labels,
**service_dict)
)
@@ -198,25 +201,6 @@ class Project(object):
service.remove_duplicate_containers()
return services
def get_scaled_services(self, services, scale_override):
"""
Returns a list of this project's services as scaled ServiceName objects.
services: a list of Service objects
scale_override: a dict with the scale to apply to each service (k: service_name, v: scale)
"""
service_names = []
for service in services:
if service.name in scale_override:
scale = scale_override[service.name]
else:
scale = service.scale_num
for i in range(1, scale + 1):
service_names.append(ServiceName(self.name, service.name, i))
return service_names
def get_links(self, service_dict):
links = []
if 'links' in service_dict:
@@ -298,6 +282,7 @@ class Project(object):
operator.attrgetter('name'),
'Starting',
get_deps,
fail_check=lambda obj: not obj.containers(),
)
return containers
@@ -372,13 +357,45 @@ class Project(object):
return containers
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, memory=None,
build_args=None, gzip=False):
build_args=None, gzip=False, parallel_build=False, rm=True, silent=False, cli=False,
progress=None):
services = []
for service in self.get_services(service_names):
if service.can_be_built():
service.build(no_cache, pull, force_rm, memory, build_args, gzip)
else:
services.append(service)
elif not silent:
log.info('%s uses an image, skipping' % service.name)
if cli:
log.warning("Native build is an experimental feature and could change at any time")
if parallel_build:
log.warning("Flag '--parallel' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
if gzip:
log.warning("Flag '--compress' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
def build_service(service):
service.build(no_cache, pull, force_rm, memory, build_args, gzip, rm, silent, cli, progress)
if parallel_build:
_, errors = parallel.parallel_execute(
services,
build_service,
operator.attrgetter('name'),
'Building',
limit=5,
)
if len(errors):
combined_errors = '\n'.join([
e.decode('utf-8') if isinstance(e, six.binary_type) else e for e in errors.values()
])
raise ProjectError(combined_errors)
else:
for service in services:
build_service(service)
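A sketch of how the collected build errors are merged for the final ProjectError (values illustrative):
errors = {'web': b'failed to solve', 'db': 'Dockerfile not found'}
print('\n'.join(e.decode('utf-8') if isinstance(e, bytes) else e for e in errors.values()))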
def create(
self,
service_names=None,
@@ -397,11 +414,13 @@ class Project(object):
detached=True,
start=False)
def events(self, service_names=None):
def _legacy_event_processor(self, service_names):
# Only for v1 files or when Compose is forced to use an older API version
def build_container_event(event, container):
time = datetime.datetime.fromtimestamp(event['time'])
time = time.replace(
microsecond=microseconds_from_time_nano(event['timeNano']))
microsecond=microseconds_from_time_nano(event['timeNano'])
)
return {
'time': time,
'type': 'container',
@@ -420,17 +439,15 @@ class Project(object):
filters={'label': self.labels()},
decode=True
):
# The first part of this condition is a guard against some events
# broadcasted by swarm that don't have a status field.
# This is a guard against some events broadcasted by swarm that
# don't have a status field.
# See https://github.com/docker/compose/issues/3316
if 'status' not in event or event['status'] in IMAGE_EVENTS:
# We don't receive any image events because labels aren't applied
# to images
if 'status' not in event:
continue
# TODO: get labels from the API v1.22 , see github issue 2618
try:
# this can fail if the container has been removed
# this can fail if the container has been removed or if the event
# refers to an image
container = Container.from_id(self.client, event['id'])
except APIError:
continue
@@ -438,6 +455,56 @@ class Project(object):
continue
yield build_container_event(event, container)
def events(self, service_names=None):
if version_lt(self.client.api_version, '1.22'):
# New, better event API was introduced in 1.22.
return self._legacy_event_processor(service_names)
def build_container_event(event):
container_attrs = event['Actor']['Attributes']
time = datetime.datetime.fromtimestamp(event['time'])
time = time.replace(
microsecond=microseconds_from_time_nano(event['timeNano'])
)
container = None
try:
container = Container.from_id(self.client, event['id'])
except APIError:
# Container may have been removed (e.g. if this is a destroy event)
pass
return {
'time': time,
'type': 'container',
'action': event['status'],
'id': event['Actor']['ID'],
'service': container_attrs.get(LABEL_SERVICE),
'attributes': dict([
(k, v) for k, v in container_attrs.items()
if not k.startswith('com.docker.compose.')
]),
'container': container,
}
def yield_loop(service_names):
for event in self.client.events(
filters={'label': self.labels()},
decode=True
):
# TODO: support other event types
if event.get('Type') != 'container':
continue
try:
if event['Actor']['Attributes'][LABEL_SERVICE] not in service_names:
continue
except KeyError:
continue
yield build_container_event(event)
return yield_loop(set(service_names) if service_names else self.service_names)
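# Illustrative sketch (editor's addition, not part of the patch): one way a
# caller could drain the generator returned above; `project` is an assumed,
# already-configured Project instance:
#
#     for event in project.events(service_names=['web']):
#         print(event['time'], event['service'], event['action'])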
def up(self,
service_names=None,
start_deps=True,
@@ -454,8 +521,12 @@ class Project(object):
reset_container_image=False,
renew_anonymous_volumes=False,
silent=False,
cli=False,
):
if cli:
log.warning("Native build is an experimental feature and could change at any time")
self.initialize()
if not ignore_orphans:
self.find_orphan_containers(remove_orphans)
@@ -468,10 +539,9 @@ class Project(object):
include_deps=start_deps)
for svc in services:
svc.ensure_image_exists(do_build=do_build, silent=silent)
svc.ensure_image_exists(do_build=do_build, silent=silent, cli=cli)
plans = self._get_convergence_plans(
services, strategy, always_recreate_deps=always_recreate_deps)
scaled_services = self.get_scaled_services(services, scale_override)
def do(service):
@@ -482,7 +552,6 @@ class Project(object):
scale_override=scale_override.get(service.name),
rescale=rescale,
start=start,
project_services=scaled_services,
reset_container_image=reset_container_image,
renew_anonymous_volumes=renew_anonymous_volumes,
)
@@ -533,8 +602,10 @@ class Project(object):
", ".join(updated_dependencies))
containers_stopped = any(
service.containers(stopped=True, filters={'status': ['created', 'exited']}))
has_links = any(c.get('HostConfig.Links') for c in service.containers())
if always_recreate_deps or containers_stopped or not has_links:
service_has_links = any(service.get_link_names())
container_has_links = any(c.get('HostConfig.Links') for c in service.containers())
should_recreate_for_links = service_has_links ^ container_has_links
if always_recreate_deps or containers_stopped or should_recreate_for_links:
plan = service.convergence_plan(ConvergenceStrategy.always)
else:
plan = service.convergence_plan(strategy)
@@ -548,16 +619,38 @@ class Project(object):
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
include_deps=False):
services = self.get_services(service_names, include_deps)
images_to_build = {service.image_name for service in services if service.can_be_built()}
services_to_pull = [service for service in services if service.image_name not in images_to_build]
msg = not silent and 'Pulling' or None
if parallel_pull:
def pull_service(service):
service.pull(ignore_pull_failures, True)
strm = service.pull(ignore_pull_failures, True, stream=True)
if strm is None: # Attempting to pull service with no `image` key is a no-op
return
writer = parallel.get_stream_writer()
for event in strm:
if 'status' not in event:
continue
status = event['status'].lower()
if 'progressDetail' in event:
detail = event['progressDetail']
if 'current' in detail and 'total' in detail:
percentage = float(detail['current']) / float(detail['total'])
status = '{} ({:.1%})'.format(status, percentage)
writer.write(
msg, service.name, truncate_string(status), lambda s: s
)
_, errors = parallel.parallel_execute(
services,
services_to_pull,
pull_service,
operator.attrgetter('name'),
not silent and 'Pulling' or None,
msg,
limit=5,
)
if len(errors):
@@ -567,12 +660,19 @@ class Project(object):
raise ProjectError(combined_errors)
else:
for service in services:
for service in services_to_pull:
service.pull(ignore_pull_failures, silent=silent)
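# Illustrative example (editor's addition): the '{:.1%}' format used in the
# parallel path above renders a 0..1 ratio as a percentage with one decimal:
#
#     >>> '{} ({:.1%})'.format('downloading', 512.0 / 2048.0)
#     'downloading (25.0%)'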
def push(self, service_names=None, ignore_push_failures=False):
unique_images = set()
for service in self.get_services(service_names, include_deps=False):
service.push(ignore_push_failures)
# Considering <image> and <image:latest> as the same
repo, tag, sep = parse_repository_tag(service.image_name)
service_image_name = sep.join((repo, tag)) if tag else sep.join((repo, 'latest'))
if service_image_name not in unique_images:
service.push(ignore_push_failures)
unique_images.add(service_image_name)
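# Editor's sketch of the normalization intent above; `_normalize` is a
# hypothetical stand-in for illustration, not the parse_repository_tag helper:
#
#     def _normalize(name, default_tag='latest'):
#         repo, _, tag = name.partition(':')
#         return '{}:{}'.format(repo, tag or default_tag)
#
#     assert _normalize('redis') == _normalize('redis:latest')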
def _labeled_containers(self, stopped=False, one_off=OneOffFilter.exclude):
ctnrs = list(filter(None, [
@@ -606,7 +706,7 @@ class Project(object):
def find_orphan_containers(self, remove_orphans):
def _find():
containers = self._labeled_containers()
containers = set(self._labeled_containers() + self._labeled_containers(stopped=True))
for ctnr in containers:
service_name = ctnr.labels.get(LABEL_SERVICE)
if service_name not in self.service_names:
@@ -617,7 +717,10 @@ class Project(object):
if remove_orphans:
for ctnr in orphans:
log.info('Removing orphan container "{0}"'.format(ctnr.name))
ctnr.kill()
try:
ctnr.kill()
except APIError:
pass
ctnr.remove(force=True)
else:
log.warning(
@@ -645,10 +748,11 @@ class Project(object):
def build_container_operation_with_timeout_func(self, operation, options):
def container_operation_with_timeout(container):
if options.get('timeout') is None:
_options = options.copy()
if _options.get('timeout') is None:
service = self.get_service(container.service)
options['timeout'] = service.stop_timeout(None)
return getattr(container, operation)(**options)
_options['timeout'] = service.stop_timeout(None)
return getattr(container, operation)(**_options)
return container_operation_with_timeout
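# Editor's note: copying `options` before resolving the per-service stop
# timeout keeps one container's timeout from leaking into the shared dict
# used for the next container.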
@@ -691,13 +795,13 @@ def get_secrets(service, service_secrets, secret_defs):
.format(service=service, secret=secret.source))
if secret_def.get('external'):
log.warn("Service \"{service}\" uses secret \"{secret}\" which is external. "
"External secrets are not available to containers created by "
"docker-compose.".format(service=service, secret=secret.source))
log.warning("Service \"{service}\" uses secret \"{secret}\" which is external. "
"External secrets are not available to containers created by "
"docker-compose.".format(service=service, secret=secret.source))
continue
if secret.uid or secret.gid or secret.mode:
log.warn(
log.warning(
"Service \"{service}\" uses secret \"{secret}\" with uid, "
"gid, or mode. These fields are not supported by this "
"implementation of the Compose file".format(
@@ -705,7 +809,15 @@ def get_secrets(service, service_secrets, secret_defs):
)
)
secrets.append({'secret': secret, 'file': secret_def.get('file')})
secret_file = secret_def.get('file')
if not path.isfile(str(secret_file)):
log.warning(
"Service \"{service}\" uses an undefined secret file \"{secret_file}\", "
"the following file should be created \"{secret_file}\"".format(
service=service, secret_file=secret_file
)
)
secrets.append({'secret': secret, 'file': secret_file})
return secrets


@@ -2,10 +2,12 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import itertools
import json
import logging
import os
import re
import sys
import tempfile
from collections import namedtuple
from collections import OrderedDict
from operator import attrgetter
@@ -27,6 +29,7 @@ from . import __version__
from . import const
from . import progress_stream
from .config import DOCKER_CONFIG_KEYS
from .config import is_url
from .config import merge_environment
from .config import merge_labels
from .config.errors import DependencyError
@@ -40,8 +43,10 @@ from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_SLUG
from .const import LABEL_VERSION
from .const import NANOCPUS_SCALE
from .const import WINDOWS_LONGPATH_PREFIX
from .container import Container
from .errors import HealthCheckFailed
from .errors import NoHealthCheckConfigured
@@ -49,14 +54,20 @@ from .errors import OperationFailedError
from .parallel import parallel_execute
from .progress_stream import stream_output
from .progress_stream import StreamOutputError
from .utils import generate_random_id
from .utils import json_hash
from .utils import parse_bytes
from .utils import parse_seconds_float
from .utils import truncate_id
from .utils import unique_everseen
if six.PY2:
import subprocess32 as subprocess
else:
import subprocess
log = logging.getLogger(__name__)
HOST_CONFIG_KEYS = [
'cap_add',
'cap_drop',
@@ -80,6 +91,7 @@ HOST_CONFIG_KEYS = [
'group_add',
'init',
'ipc',
'isolation',
'read_only',
'log_driver',
'log_opt',
@@ -124,7 +136,6 @@ class NoSuchImageError(Exception):
ServiceName = namedtuple('ServiceName', 'project service number')
ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
@@ -160,20 +171,21 @@ class BuildAction(enum.Enum):
class Service(object):
def __init__(
self,
name,
client=None,
project='default',
use_networking=False,
links=None,
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
scale=None,
pid_mode=None,
default_platform=None,
**options
self,
name,
client=None,
project='default',
use_networking=False,
links=None,
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
scale=1,
pid_mode=None,
default_platform=None,
extra_labels=[],
**options
):
self.name = name
self.client = client
@@ -185,14 +197,17 @@ class Service(object):
self.pid_mode = pid_mode or PidMode(None)
self.networks = networks or {}
self.secrets = secrets or []
self.scale_num = scale or 1
self.scale_num = scale
self.default_platform = default_platform
self.options = options
self.extra_labels = extra_labels
def __repr__(self):
return '<Service: {}>'.format(self.name)
def containers(self, stopped=False, one_off=False, filters={}, labels=None):
def containers(self, stopped=False, one_off=False, filters=None, labels=None):
if filters is None:
filters = {}
filters.update({'label': self.labels(one_off=one_off) + (labels or [])})
result = list(filter(None, [
@@ -200,7 +215,7 @@ class Service(object):
for container in self.client.containers(
all=stopped,
filters=filters)])
)
)
if result:
return result
@@ -219,7 +234,6 @@ class Service(object):
"""Return a :class:`compose.container.Container` for this service. The
container must be active, and match `number`.
"""
for container in self.containers(labels=['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]):
return container
@@ -233,15 +247,15 @@ class Service(object):
def show_scale_warnings(self, desired_num):
if self.custom_container_name and desired_num > 1:
log.warn('The "%s" service is using the custom container name "%s". '
'Docker requires each container to have a unique name. '
'Remove the custom name to scale the service.'
% (self.name, self.custom_container_name))
log.warning('The "%s" service is using the custom container name "%s". '
'Docker requires each container to have a unique name. '
'Remove the custom name to scale the service.'
% (self.name, self.custom_container_name))
if self.specifies_host_port() and desired_num > 1:
log.warn('The "%s" service specifies a port on the host. If multiple containers '
'for this service are created on a single host, the port will clash.'
% self.name)
log.warning('The "%s" service specifies a port on the host. If multiple containers '
'for this service are created on a single host, the port will clash.'
% self.name)
def scale(self, desired_num, timeout=None):
"""
@@ -283,7 +297,7 @@ class Service(object):
c for c in stopped_containers if self._containers_have_diverged([c])
]
for c in divergent_containers:
c.remove()
c.remove()
all_containers = list(set(all_containers) - set(divergent_containers))
@@ -331,9 +345,9 @@ class Service(object):
raise OperationFailedError("Cannot create container for service %s: %s" %
(self.name, ex.explanation))
def ensure_image_exists(self, do_build=BuildAction.none, silent=False):
def ensure_image_exists(self, do_build=BuildAction.none, silent=False, cli=False):
if self.can_be_built() and do_build == BuildAction.force:
self.build()
self.build(cli=cli)
return
try:
@@ -349,12 +363,18 @@ class Service(object):
if do_build == BuildAction.skip:
raise NeedsBuildError(self)
self.build()
log.warn(
self.build(cli=cli)
log.warning(
"Image for service {} was built because it did not already exist. To "
"rebuild this image you must use `docker-compose build` or "
"`docker-compose up --build`.".format(self.name))
def get_image_registry_data(self):
try:
return self.client.inspect_distribution(self.image_name)
except APIError:
raise NoSuchImageError("Image '{}' not found".format(self.image_name))
def image(self):
try:
return self.client.inspect_image(self.image_name)
@@ -384,8 +404,8 @@ class Service(object):
return ConvergencePlan('start', containers)
if (
strategy is ConvergenceStrategy.always or
self._containers_have_diverged(containers)
strategy is ConvergenceStrategy.always or
self._containers_have_diverged(containers)
):
return ConvergencePlan('recreate', containers)
@@ -425,74 +445,79 @@ class Service(object):
return has_diverged
def _execute_convergence_create(self, scale, detached, start, project_services=None):
i = self._next_container_number()
def _execute_convergence_create(self, scale, detached, start):
def create_and_start(service, n):
container = service.create_container(number=n, quiet=True)
if not detached:
container.attach_log_stream()
if start:
self.start_container(container)
return container
i = self._next_container_number()
containers, errors = parallel_execute(
[ServiceName(self.project, self.name, index) for index in range(i, i + scale)],
lambda service_name: create_and_start(self, service_name.number),
lambda service_name: self.get_container_name(service_name.service, service_name.number),
"Creating"
)
for error in errors.values():
raise OperationFailedError(error)
def create_and_start(service, n):
container = service.create_container(number=n, quiet=True)
if not detached:
container.attach_log_stream()
if start:
self.start_container(container)
return container
return containers
containers, errors = parallel_execute(
[
ServiceName(self.project, self.name, index)
for index in range(i, i + scale)
],
lambda service_name: create_and_start(self, service_name.number),
lambda service_name: self.get_container_name(service_name.service, service_name.number),
"Creating"
)
for error in errors.values():
raise OperationFailedError(error)
return containers
def _execute_convergence_recreate(self, containers, scale, timeout, detached, start,
renew_anonymous_volumes):
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
def recreate(container):
return self.recreate_container(
container, timeout=timeout, attach_logs=not detached,
start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
)
containers, errors = parallel_execute(
containers,
recreate,
lambda c: c.name,
"Recreating",
def recreate(container):
return self.recreate_container(
container, timeout=timeout, attach_logs=not detached,
start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
)
containers, errors = parallel_execute(
containers,
recreate,
lambda c: c.name,
"Recreating",
)
for error in errors.values():
raise OperationFailedError(error)
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _execute_convergence_start(self, containers, scale, timeout, detached, start):
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
_, errors = parallel_execute(
containers,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
lambda c: c.name,
"Starting",
)
for error in errors.values():
raise OperationFailedError(error)
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _execute_convergence_start(self, containers, scale, timeout, detached, start):
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
_, errors = parallel_execute(
containers,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
lambda c: c.name,
"Starting",
)
for error in errors.values():
raise OperationFailedError(error)
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _downscale(self, containers, timeout=None):
def stop_and_remove(container):
@@ -508,8 +533,8 @@ class Service(object):
def execute_convergence_plan(self, plan, timeout=None, detached=False,
start=True, scale_override=None,
rescale=True, project_services=None,
reset_container_image=False, renew_anonymous_volumes=False):
rescale=True, reset_container_image=False,
renew_anonymous_volumes=False):
(action, containers) = plan
scale = scale_override if scale_override is not None else self.scale_num
containers = sorted(containers, key=attrgetter('number'))
@@ -518,7 +543,7 @@ class Service(object):
if action == 'create':
return self._execute_convergence_create(
scale, detached, start, project_services
scale, detached, start
)
# The create action always needs an initial scale, but otherwise,
@@ -568,7 +593,7 @@ class Service(object):
container.rename_to_tmp_name()
new_container = self.create_container(
previous_container=container if not renew_anonymous_volumes else None,
number=container.labels.get(LABEL_CONTAINER_NUMBER),
number=container.number,
quiet=True,
)
if attach_logs:
@@ -599,6 +624,8 @@ class Service(object):
try:
container.start()
except APIError as ex:
if "driver failed programming external connectivity" in ex.explanation:
log.warn("Host is already in use by another container")
raise OperationFailedError("Cannot start service %s: %s" % (self.name, ex.explanation))
return container
@@ -656,12 +683,19 @@ class Service(object):
return json_hash(self.config_dict())
def config_dict(self):
def image_id():
try:
return self.image()['Id']
except NoSuchImageError:
return None
return {
'options': self.options,
'image_id': self.image()['Id'],
'image_id': image_id(),
'links': self.get_link_names(),
'net': self.network_mode.id,
'networks': self.networks,
'secrets': self.secrets,
'volumes_from': [
(v.source.name, v.mode)
for v in self.volumes_from if isinstance(v.source, Service)
@@ -672,11 +706,11 @@ class Service(object):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
return (
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
)
def get_dependency_configs(self):
@@ -717,19 +751,19 @@ class Service(object):
def get_volumes_from_names(self):
return [s.source.name for s in self.volumes_from if isinstance(s.source, Service)]
# TODO: this would benefit from github.com/docker/docker/pull/14699
# to remove the need to inspect every container
def _next_container_number(self, one_off=False):
if one_off:
return None
containers = itertools.chain(
self._fetch_containers(
all=True,
filters={'label': self.labels(one_off=one_off)}
filters={'label': self.labels(one_off=False)}
), self._fetch_containers(
all=True,
filters={'label': self.labels(one_off=one_off, legacy=True)}
filters={'label': self.labels(one_off=False, legacy=True)}
)
)
numbers = [c.number for c in containers]
numbers = [c.number for c in containers if c.number is not None]
return 1 if not numbers else max(numbers) + 1
def _fetch_containers(self, **fetch_options):
@@ -807,6 +841,7 @@ class Service(object):
one_off=False,
previous_container=None):
add_config_hash = (not one_off and not override_options)
slug = generate_random_id() if one_off else None
container_options = dict(
(k, self.options[k])
@@ -815,7 +850,7 @@ class Service(object):
container_options.update(override_options)
if not container_options.get('name'):
container_options['name'] = self.get_container_name(self.name, number, one_off)
container_options['name'] = self.get_container_name(self.name, number, slug)
container_options.setdefault('detach', True)
@@ -865,9 +900,11 @@ class Service(object):
container_options['labels'] = build_container_labels(
container_options.get('labels', {}),
self.labels(one_off=one_off),
self.labels(one_off=one_off) + self.extra_labels,
number,
self.config_hash if add_config_hash else None)
self.config_hash if add_config_hash else None,
slug
)
# Delete options which are only used in HostConfig
for key in HOST_CONFIG_KEYS:
@@ -924,8 +961,9 @@ class Service(object):
override_options['mounts'] = override_options.get('mounts') or []
override_options['mounts'].extend([build_mount(v) for v in secret_volumes])
# Remove possible duplicates (see e.g. https://github.com/docker/compose/issues/5885)
override_options['binds'] = list(set(binds))
# Remove possible duplicates (see e.g. https://github.com/docker/compose/issues/5885).
# unique_everseen preserves order. (see https://github.com/docker/compose/issues/6091).
override_options['binds'] = list(unique_everseen(binds))
return container_options, override_options
def _get_container_host_config(self, override_options, one_off=False):
@@ -1021,8 +1059,11 @@ class Service(object):
return [build_spec(secret) for secret in self.secrets]
def build(self, no_cache=False, pull=False, force_rm=False, memory=None, build_args_override=None,
gzip=False):
log.info('Building %s' % self.name)
gzip=False, rm=True, silent=False, cli=False, progress=None):
output_stream = open(os.devnull, 'w')
if not silent:
output_stream = sys.stdout
log.info('Building %s' % self.name)
build_opts = self.options.get('build', {})
@@ -1033,26 +1074,22 @@ class Service(object):
for k, v in self._parse_proxy_config().items():
build_args.setdefault(k, v)
# python2 os.stat() doesn't support unicode on some UNIX, so we
# encode it to a bytestring to be safe
path = build_opts.get('context')
if not six.PY3 and not IS_WINDOWS_PLATFORM:
path = path.encode('utf8')
path = rewrite_build_path(build_opts.get('context'))
if self.platform and version_lt(self.client.api_version, '1.35'):
raise OperationFailedError(
'Impossible to perform platform-targeted builds for API version < 1.35'
)
build_output = self.client.build(
builder = self.client if not cli else _CLIBuilder(progress)
build_output = builder.build(
path=path,
tag=self.image_name,
rm=True,
rm=rm,
forcerm=force_rm,
pull=pull,
nocache=no_cache,
dockerfile=build_opts.get('dockerfile', None),
cache_from=build_opts.get('cache_from', None),
cache_from=self.get_cache_from(build_opts),
labels=build_opts.get('labels', None),
buildargs=build_args,
network_mode=build_opts.get('network', None),
@@ -1068,7 +1105,7 @@ class Service(object):
)
try:
all_events = stream_output(build_output, sys.stdout)
all_events = list(stream_output(build_output, output_stream))
except StreamOutputError as e:
raise BuildError(self, six.text_type(e))
@@ -1090,6 +1127,12 @@ class Service(object):
return image_id
def get_cache_from(self, build_opts):
cache_from = build_opts.get('cache_from', None)
if cache_from is not None:
cache_from = [tag for tag in cache_from if tag]
return cache_from
def can_be_built(self):
return 'build' in self.options
@@ -1105,12 +1148,12 @@ class Service(object):
def custom_container_name(self):
return self.options.get('container_name')
def get_container_name(self, service_name, number, one_off=False):
if self.custom_container_name and not one_off:
def get_container_name(self, service_name, number, slug=None):
if self.custom_container_name and slug is None:
return self.custom_container_name
container_name = build_container_name(
self.project, service_name, number, one_off,
self.project, service_name, number, slug,
)
ext_links_origins = [l.split(':')[0] for l in self.options.get('external_links', [])]
if container_name in ext_links_origins:
@@ -1131,6 +1174,9 @@ class Service(object):
try:
self.client.remove_image(self.image_name)
return True
except ImageNotFound:
log.warning("Image %s not found.", self.image_name)
return False
except APIError as e:
log.error("Failed to remove image for service %s: %s", self.name, e)
return False
@@ -1162,7 +1208,23 @@ class Service(object):
return any(has_host_port(binding) for binding in self.options.get('ports', []))
def pull(self, ignore_pull_failures=False, silent=False):
def _do_pull(self, repo, pull_kwargs, silent, ignore_pull_failures):
try:
output = self.client.pull(repo, **pull_kwargs)
if silent:
with open(os.devnull, 'w') as devnull:
for event in stream_output(output, devnull):
yield event
else:
for event in stream_output(output, sys.stdout):
yield event
except (StreamOutputError, NotFound) as e:
if not ignore_pull_failures:
raise
else:
log.error(six.text_type(e))
def pull(self, ignore_pull_failures=False, silent=False, stream=False):
if 'image' not in self.options:
return
@@ -1179,20 +1241,11 @@ class Service(object):
raise OperationFailedError(
'Impossible to perform platform-targeted pulls for API version < 1.35'
)
try:
output = self.client.pull(repo, **kwargs)
if silent:
with open(os.devnull, 'w') as devnull:
return progress_stream.get_digest_from_pull(
stream_output(output, devnull))
else:
return progress_stream.get_digest_from_pull(
stream_output(output, sys.stdout))
except (StreamOutputError, NotFound) as e:
if not ignore_pull_failures:
raise
else:
log.error(six.text_type(e))
event_stream = self._do_pull(repo, kwargs, silent, ignore_pull_failures)
if stream:
return event_stream
return progress_stream.get_digest_from_pull(event_stream)
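# Illustrative sketch (editor's addition): with stream=True the caller drains
# the progress events itself, as Project.pull's parallel path does above;
# `service` is an assumed Service instance:
#
#     for event in service.pull(ignore_pull_failures=True, stream=True) or []:
#         pass  # each event is a decoded pull-progress dict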
def push(self, ignore_push_failures=False):
if 'image' not in self.options or 'build' not in self.options:
@@ -1289,7 +1342,7 @@ class ServicePidMode(PidMode):
if containers:
return 'container:' + containers[0].id
log.warn(
log.warning(
"Service %s is trying to use reuse the PID namespace "
"of another service that is not running." % (self.service_name)
)
@@ -1352,19 +1405,21 @@ class ServiceNetworkMode(object):
if containers:
return 'container:' + containers[0].id
log.warn("Service %s is trying to use reuse the network stack "
"of another service that is not running." % (self.id))
log.warning("Service %s is trying to use reuse the network stack "
"of another service that is not running." % (self.id))
return None
# Names
def build_container_name(project, service, number, one_off=False):
def build_container_name(project, service, number, slug=None):
bits = [project.lstrip('-_'), service]
if one_off:
bits.append('run')
return '_'.join(bits + [str(number)])
if slug:
bits.extend(['run', truncate_id(slug)])
else:
bits.append(str(number))
return '_'.join(bits)
# Images
@@ -1407,7 +1462,7 @@ def merge_volume_bindings(volumes, tmpfs, previous_container, mounts):
"""
affinity = {}
volume_bindings = dict(
volume_bindings = OrderedDict(
build_volume_binding(volume)
for volume in volumes
if volume.external
@@ -1467,6 +1522,11 @@ def get_container_data_volumes(container, volumes_option, tmpfs_option, mounts_o
if not mount.get('Name'):
continue
# Volume (probably an image volume) is overridden by a mount in the service's config
# and would cause a duplicate mountpoint error
if volume.internal in [m.target for m in mounts_option]:
continue
# Copy existing volume from old container
volume = volume._replace(external=mount['Name'])
volumes.append(volume)
@@ -1493,11 +1553,11 @@ def warn_on_masked_volume(volumes_option, container_volumes, service):
for volume in volumes_option:
if (
volume.external and
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
volume.external and
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
):
log.warn((
log.warning((
"Service \"{service}\" is using volume \"{volume}\" from the "
"previous container. Host mapping \"{host_path}\" has no effect. "
"Remove the existing containers (with `docker-compose rm {service}`) "
@@ -1542,13 +1602,17 @@ def build_mount(mount_spec):
read_only=mount_spec.read_only, consistency=mount_spec.consistency, **kwargs
)
# Labels
def build_container_labels(label_options, service_labels, number, config_hash):
def build_container_labels(label_options, service_labels, number, config_hash, slug):
labels = dict(label_options or {})
labels.update(label.split('=', 1) for label in service_labels)
labels[LABEL_CONTAINER_NUMBER] = str(number)
if number is not None:
labels[LABEL_CONTAINER_NUMBER] = str(number)
if slug is not None:
labels[LABEL_SLUG] = slug
labels[LABEL_VERSION] = __version__
if config_hash:
@@ -1593,6 +1657,7 @@ def format_environment(environment):
if isinstance(value, six.binary_type):
value = value.decode('utf-8')
return '{key}={value}'.format(key=key, value=value)
return [format_env(*item) for item in environment.items()]
@@ -1637,3 +1702,151 @@ def convert_blkio_config(blkio_config):
arr.append(dict([(k.capitalize(), v) for k, v in item.items()]))
result[field] = arr
return result
def rewrite_build_path(path):
# python2 os.stat() doesn't support unicode on some UNIX, so we
# encode it to a bytestring to be safe
if not six.PY3 and not IS_WINDOWS_PLATFORM:
path = path.encode('utf8')
if IS_WINDOWS_PLATFORM and not is_url(path) and not path.startswith(WINDOWS_LONGPATH_PREFIX):
path = WINDOWS_LONGPATH_PREFIX + os.path.normpath(path)
return path
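# Editor's note: on Windows the prefix lifts the legacy 260-character MAX_PATH
# limit for the build context; assuming WINDOWS_LONGPATH_PREFIX is the
# extended-length path marker, the rewrite looks like:
#
#     C:\work\app  ->  \\?\C:\work\app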
class _CLIBuilder(object):
def __init__(self, progress):
self._progress = progress
def build(self, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True):
"""
Args:
path (str): Path to the directory containing the Dockerfile
buildargs (dict): A dictionary of build arguments
cache_from (:py:class:`list`): A list of images used for build
cache resolution
container_limits (dict): A dictionary of limits applied to each
container created by the build process. Valid keys:
- memory (int): set memory limit for build
- memswap (int): Total memory (memory + swap), -1 to disable
swap
- cpushares (int): CPU shares (relative weight)
- cpusetcpus (str): CPUs in which to allow execution, e.g.,
``"0-3"``, ``"0,1"``
custom_context (bool): Optional if using ``fileobj``
decode (bool): If set to ``True``, the returned stream will be
decoded into dicts on the fly. Default ``False``
dockerfile (str): path within the build context to the Dockerfile
encoding (str): The encoding for a stream. Set to ``gzip`` for
compressing
extra_hosts (dict): Extra hosts to add to /etc/hosts in building
containers, as a mapping of hostname to IP address.
fileobj: A file object to use as the Dockerfile. (Or a file-like
object)
forcerm (bool): Always remove intermediate containers, even after
unsuccessful builds
isolation (str): Isolation technology used during build.
Default: `None`.
labels (dict): A dictionary of labels to set on the image
network_mode (str): networking mode for the run commands during
build
nocache (bool): Don't use the cache when set to ``True``
platform (str): Platform in the format ``os[/arch[/variant]]``
pull (bool): Downloads any updates to the FROM image in Dockerfiles
quiet (bool): Whether to return the status
rm (bool): Remove intermediate containers. The ``docker build``
command now defaults to ``--rm=true``, but we have kept the old
default of `False` to preserve backward compatibility
shmsize (int): Size of `/dev/shm` in bytes. The size must be
greater than 0. If omitted the system uses 64MB
squash (bool): Squash the resulting images layers into a
single layer.
tag (str): A tag to add to the final image
target (str): Name of the build-stage to build in a multi-stage
Dockerfile
timeout (int): HTTP timeout
use_config_proxy (bool): If ``True``, and if the docker client
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
Returns:
A generator for the build output.
"""
if dockerfile:
dockerfile = os.path.join(path, dockerfile)
iidfile = tempfile.mktemp()
command_builder = _CommandBuilder()
command_builder.add_params("--build-arg", buildargs)
command_builder.add_list("--cache-from", cache_from)
command_builder.add_arg("--file", dockerfile)
command_builder.add_flag("--force-rm", forcerm)
command_builder.add_arg("--memory", container_limits.get("memory"))
command_builder.add_flag("--no-cache", nocache)
command_builder.add_arg("--progress", self._progress)
command_builder.add_flag("--pull", pull)
command_builder.add_arg("--tag", tag)
command_builder.add_arg("--target", target)
command_builder.add_arg("--iidfile", iidfile)
args = command_builder.build([path])
magic_word = "Successfully built "
appear = False
with subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True) as p:
while True:
line = p.stdout.readline()
if not line:
break
# Fix non ascii chars on Python2. To remove when #6890 is complete.
if six.PY2:
magic_word = str(magic_word)
if line.startswith(magic_word):
appear = True
yield json.dumps({"stream": line})
with open(iidfile) as f:
line = f.readline()
image_id = line.split(":")[1].strip()
os.remove(iidfile)
# In case of `DOCKER_BUILDKIT=1`
# there is no success message already present in the output.
# Since that's the way `Service::build` gets the `image_id`
# it has to be added `manually`
if not appear:
yield json.dumps({"stream": "{}{}\n".format(magic_word, image_id)})
class _CommandBuilder(object):
def __init__(self):
self._args = ["docker", "build"]
def add_arg(self, name, value):
if value:
self._args.extend([name, str(value)])
def add_flag(self, name, flag):
if flag:
self._args.extend([name])
def add_params(self, name, params):
if params:
for key, val in params.items():
self._args.extend([name, "{}={}".format(key, val)])
def add_list(self, name, values):
if values:
for val in values:
self._args.extend([name, val])
def build(self, args):
return self._args + args
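# Illustrative usage (editor's addition), exercising the builder above:
#
#     builder = _CommandBuilder()
#     builder.add_arg('--file', 'Dockerfile')
#     builder.add_flag('--no-cache', True)
#     builder.add_params('--build-arg', {'HTTP_PROXY': 'http://proxy:3128'})
#     builder.build(['.'])
#     # ['docker', 'build', '--file', 'Dockerfile', '--no-cache',
#     #  '--build-arg', 'HTTP_PROXY=http://proxy:3128', '.']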


@@ -3,10 +3,10 @@ from __future__ import unicode_literals
import codecs
import hashlib
import json
import json.decoder
import logging
import ntpath
import random
import six
from docker.errors import DockerException
@@ -151,3 +151,37 @@ def unquote_path(s):
if s[0] == '"' and s[-1] == '"':
return s[1:-1]
return s
def generate_random_id():
while True:
val = hex(random.getrandbits(32 * 8))[2:-1]
try:
int(truncate_id(val))
continue
except ValueError:
return val
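# Editor's note: the retry loop above regenerates whenever the 12-character
# prefix is all decimal digits (int() succeeds), so a slug can never be
# mistaken for the container number in names like project_service_1.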
def truncate_id(value):
if ':' in value:
value = value[value.index(':') + 1:]
if len(value) > 12:
return value[:12]
return value
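# Illustrative example (editor's addition):
#
#     >>> truncate_id('sha256:abcdef0123456789')
#     'abcdef012345'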
def unique_everseen(iterable, key=lambda x: x):
"List unique elements, preserving order. Remember all elements ever seen."
seen = set()
for element in iterable:
unique_key = key(element)
if unique_key not in seen:
seen.add(unique_key)
yield element
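# Illustrative example (editor's addition): order-preserving dedup, unlike set():
#
#     >>> list(unique_everseen(['b:/data', 'a:/cfg', 'b:/data']))
#     ['b:/data', 'a:/cfg']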
def truncate_string(s, max_chars=35):
if len(s) > max_chars:
return s[:max_chars - 2] + '...'
return s
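# Illustrative example (editor's addition): overflow is replaced by '...',
# capping the result at max_chars + 1 characters:
#
#     >>> truncate_string('x' * 40) == 'x' * 33 + '...'
#     True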


@@ -127,7 +127,7 @@ class ProjectVolumes(object):
try:
volume.remove()
except NotFound:
log.warn("Volume %s not found.", volume.true_name)
log.warning("Volume %s not found.", volume.true_name)
def initialize(self):
try:
@@ -209,7 +209,7 @@ def check_remote_volume_config(remote, local):
if k.startswith('com.docker.'): # We are only interested in user-specified labels
continue
if remote_labels.get(k) != local_labels.get(k):
log.warn(
log.warning(
'Volume {}: label "{}" has changed. It may need to be'
' recreated.'.format(local.name, k)
)


@@ -110,11 +110,14 @@ _docker_compose_build() {
__docker_compose_nospace
return
;;
--memory|-m)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--build-arg --compress --force-rm --help --memory --no-cache --pull" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--build-arg --compress --force-rm --help --memory -m --no-cache --no-rm --pull --parallel -q --quiet" -- "$cur" ) )
;;
*)
__docker_compose_complete_services --filter source=build
@@ -136,7 +139,18 @@ _docker_compose_bundle() {
_docker_compose_config() {
COMPREPLY=( $( compgen -W "--help --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
case "$prev" in
--hash)
if [[ $cur == \\* ]] ; then
COMPREPLY=( '\*' )
else
COMPREPLY=( $(compgen -W "$(__docker_compose_services) \\\* " -- "$cur") )
fi
return
;;
esac
COMPREPLY=( $( compgen -W "--hash --help --no-interpolate --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
}
@@ -170,6 +184,10 @@ _docker_compose_docker_compose() {
_filedir -d
return
;;
--env-file)
_filedir
return
;;
$(__docker_compose_to_extglob "$daemon_options_with_args") )
return
;;
@@ -350,7 +368,7 @@ _docker_compose_ps() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --quiet -q --services --filter" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--all -a --filter --help --quiet -q --services" -- "$cur" ) )
;;
*)
__docker_compose_complete_services
@@ -598,6 +616,7 @@ _docker_compose() {
--tlsverify
"
local daemon_options_with_args="
--env-file
--file -f
--host -H
--project-directory


@@ -12,6 +12,7 @@ end
complete -c docker-compose -s f -l file -r -d 'Specify an alternate compose file'
complete -c docker-compose -s p -l project-name -x -d 'Specify an alternate project name'
complete -c docker-compose -l env-file -r -d 'Specify an alternate environment file (default: .env)'
complete -c docker-compose -l verbose -d 'Show more output'
complete -c docker-compose -s H -l host -x -d 'Daemon socket to connect to'
complete -c docker-compose -l tls -d 'Use TLS; implied by --tlsverify'

contrib/completion/zsh/_docker-compose Normal file → Executable file

@@ -23,7 +23,7 @@ __docker-compose_all_services_in_compose_file() {
local already_selected
local -a services
already_selected=$(echo $words | tr " " "|")
__docker-compose_q config --services \
__docker-compose_q ps --services "$@" \
| grep -Ev "^(${already_selected})$"
}
@@ -31,125 +31,42 @@ __docker-compose_all_services_in_compose_file() {
__docker-compose_services_all() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
services=$(__docker-compose_all_services_in_compose_file)
services=$(__docker-compose_all_services_in_compose_file "$@")
_alternative "args:services:($services)" && ret=0
return ret
}
# All services that have an entry with the given key in their docker-compose.yml section
__docker-compose_services_with_key() {
local already_selected
local -a buildable
already_selected=$(echo $words | tr " " "|")
# flatten sections to one line, then filter lines containing the key and return section name.
__docker-compose_q config \
| sed -n -e '/^services:/,/^[^ ]/p' \
| sed -n 's/^ //p' \
| awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' \
| grep " \+$1:" \
| cut -d: -f1 \
| grep -Ev "^(${already_selected})$"
}
# All services that are defined by a Dockerfile reference
__docker-compose_services_from_build() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
buildable=$(__docker-compose_services_with_key build)
_alternative "args:buildable services:($buildable)" && ret=0
return ret
__docker-compose_services_all --filter source=build
}
# All services that are defined by an image
__docker-compose_services_from_image() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
pullable=$(__docker-compose_services_with_key image)
_alternative "args:pullable services:($pullable)" && ret=0
return ret
}
__docker-compose_get_services() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
local kind
declare -a running paused stopped lines args services
docker_status=$(docker ps > /dev/null 2>&1)
if [ $? -ne 0 ]; then
_message "Error! Docker is not running."
return 1
fi
kind=$1
shift
[[ $kind =~ (stopped|all) ]] && args=($args -a)
lines=(${(f)"$(_call_program commands docker $docker_options ps --format 'table' $args)"})
services=(${(f)"$(_call_program commands docker-compose 2>/dev/null $compose_options ps -q)"})
# Parse header line to find columns
local i=1 j=1 k header=${lines[1]}
declare -A begin end
while (( j < ${#header} - 1 )); do
i=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 1 ))
j=$(( i + ${${header[$i,-1]}[(i) ]} - 1 ))
k=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 2 ))
begin[${header[$i,$((j-1))]}]=$i
end[${header[$i,$((j-1))]}]=$k
done
lines=(${lines[2,-1]})
# Container ID
local line s name
local -a names
for line in $lines; do
if [[ ${services[@]} == *"${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"* ]]; then
names=(${(ps:,:)${${line[${begin[NAMES]},-1]}%% *}})
for name in $names; do
s="${${name%_*}#*_}:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
s="$s, ${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"
s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}"
if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
stopped=($stopped $s)
else
if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = *\(Paused\)* ]]; then
paused=($paused $s)
fi
running=($running $s)
fi
done
fi
done
[[ $kind =~ (running|all) ]] && _describe -t services-running "running services" running "$@" && ret=0
[[ $kind =~ (paused|all) ]] && _describe -t services-paused "paused services" paused "$@" && ret=0
[[ $kind =~ (stopped|all) ]] && _describe -t services-stopped "stopped services" stopped "$@" && ret=0
return ret
__docker-compose_services_all --filter source=image
}
__docker-compose_pausedservices() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services paused "$@"
__docker-compose_services_all --filter status=paused
}
__docker-compose_stoppedservices() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services stopped "$@"
__docker-compose_services_all --filter status=stopped
}
__docker-compose_runningservices() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services running "$@"
__docker-compose_services_all --filter status=running
}
__docker-compose_services() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services all "$@"
__docker-compose_services_all
}
__docker-compose_caching_policy() {
@@ -196,9 +113,12 @@ __docker-compose_subcommand() {
$opts_help \
"*--build-arg=[Set build-time variables for one service.]:<varname>=<value>: " \
'--force-rm[Always remove intermediate containers.]' \
'--memory[Memory limit for the build container.]' \
'(--quiet -q)'{--quiet,-q}'[Curb build output]' \
'(--memory -m)'{--memory,-m}'[Memory limit for the build container.]' \
'--no-cache[Do not use cache when building the image.]' \
'--pull[Always attempt to pull a newer version of the image.]' \
'--compress[Compress the build context using gzip.]' \
'--parallel[Build images in parallel.]' \
'*:services:__docker-compose_services_from_build' && ret=0
;;
(bundle)
@@ -213,7 +133,8 @@ __docker-compose_subcommand() {
'(--quiet -q)'{--quiet,-q}"[Only validate the configuration, don't print anything.]" \
'--resolve-image-digests[Pin image tags to digests.]' \
'--services[Print the service names, one per line.]' \
'--volumes[Print the volume names, one per line.]' && ret=0
'--volumes[Print the volume names, one per line.]' \
'--hash[Print the service config hash, one per line. Set "service1,service2" for a list of specified services.]' && ret=0
;;
(create)
_arguments \
@@ -222,11 +143,12 @@ __docker-compose_subcommand() {
$opts_no_recreate \
$opts_no_build \
"(--no-build)--build[Build images before creating containers.]" \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(down)
_arguments \
$opts_help \
$opts_timeout \
"--rmi[Remove images. Type must be one of: 'all': Remove all images used by any service. 'local': Remove only images that don't have a custom tag set by the \`image\` field.]:type:(all local)" \
'(-v --volumes)'{-v,--volumes}"[Remove named volumes declared in the \`volumes\` section of the Compose file and anonymous volumes attached to containers.]" \
$opts_remove_orphans && ret=0
@@ -235,16 +157,18 @@ __docker-compose_subcommand() {
_arguments \
$opts_help \
'--json[Output events as a stream of json objects]' \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(exec)
_arguments \
$opts_help \
'-d[Detached mode: Run command in the background.]' \
'--privileged[Give extended privileges to the process.]' \
'(-u --user)'{-u,--user=}'[Run the command as this user.]:username:_users' \
'(-u --user)'{-u,--user=}'[Run the command as this user.]:username:_users' \
'-T[Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY.]' \
'--index=[Index of the container if there are multiple instances of a service \[default: 1\]]:index: ' \
'*'{-e,--env}'[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \
'(-w --workdir)'{-w,--workdir=}'[Working directory inside the container]:workdir: ' \
'(-):running services:__docker-compose_runningservices' \
'(-):command: _command_names -e' \
'*::arguments: _normal' && ret=0
@@ -252,12 +176,12 @@ __docker-compose_subcommand() {
(help)
_arguments ':subcommand:__docker-compose_commands' && ret=0
;;
(images)
_arguments \
$opts_help \
'-q[Only display IDs]' \
'*:services:__docker-compose_services_all' && ret=0
;;
(images)
_arguments \
$opts_help \
'-q[Only display IDs]' \
'*:services:__docker-compose_services' && ret=0
;;
(kill)
_arguments \
$opts_help \
@@ -271,7 +195,7 @@ __docker-compose_subcommand() {
$opts_no_color \
'--tail=[Number of lines to show from the end of the logs for each container.]:number of lines: ' \
'(-t --timestamps)'{-t,--timestamps}'[Show timestamps]' \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(pause)
_arguments \
@@ -290,12 +214,16 @@ __docker-compose_subcommand() {
_arguments \
$opts_help \
'-q[Only display IDs]' \
'*:services:__docker-compose_services_all' && ret=0
'--filter KEY=VAL[Filter services by a property]:<filtername>=<value>:' \
'*:services:__docker-compose_services' && ret=0
;;
(pull)
_arguments \
$opts_help \
'--ignore-pull-failures[Pull what it can and ignore images with pull failures.]' \
'--no-parallel[Disable parallel pulling]' \
'(-q --quiet)'{-q,--quiet}'[Pull without printing progress information]' \
'--include-deps[Also pull services declared as dependencies]' \
'*:services:__docker-compose_services_from_image' && ret=0
;;
(push)
@@ -317,6 +245,7 @@ __docker-compose_subcommand() {
$opts_no_deps \
'-d[Detached mode: Run container in the background, print new container name.]' \
'*-e[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \
'*'{-l,--label}'[KEY=VAL Add or override a label (can be used multiple times)]:label KEY=VAL: ' \
'--entrypoint[Overwrite the entrypoint of the image.]:entry point: ' \
'--name=[Assign a name to the container]:name: ' \
'(-p --publish)'{-p,--publish=}"[Publish a container's port(s) to the host]" \
@@ -326,6 +255,7 @@ __docker-compose_subcommand() {
'(-u --user)'{-u,--user=}'[Run as specified username or uid]:username or uid:_users' \
'(-v --volume)*'{-v,--volume=}'[Bind mount a volume]:volume: ' \
'(-w --workdir)'{-w,--workdir=}'[Working directory inside the container]:workdir: ' \
"--use-aliases[Use the services network aliases in the network(s) the container connects to]" \
'(-):services:__docker-compose_services' \
'(-):command: _command_names -e' \
'*::arguments: _normal' && ret=0
@@ -369,8 +299,10 @@ __docker-compose_subcommand() {
"(--no-build)--build[Build images before starting containers.]" \
"(-d)--abort-on-container-exit[Stops all containers if any container was stopped. Incompatible with -d.]" \
'(-t --timeout)'{-t,--timeout}"[Use this timeout in seconds for container shutdown when attached or when containers are already running. (default: 10)]:seconds: " \
'--scale[SERVICE=NUM Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present.]:service scale SERVICE=NUM: ' \
'--exit-code-from=[Return the exit code of the selected service container. Implies --abort-on-container-exit]:service:__docker-compose_services' \
$opts_remove_orphans \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(version)
_arguments \
@@ -409,8 +341,12 @@ _docker-compose() {
'(- :)'{-h,--help}'[Get help]' \
'*'{-f,--file}"[${file_description}]:file:_files -g '*.yml'" \
'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
'--verbose[Show more output]' \
'--env-file[Specify an alternate environment file (default: .env)]:env-file:_files' \
"--compatibility[If set, Compose will attempt to convert keys in v3 files to their non-Swarm equivalent]" \
'(- :)'{-v,--version}'[Print version and exit]' \
'--verbose[Show more output]' \
'--log-level=[Set log level]:level:(DEBUG INFO WARNING ERROR CRITICAL)' \
'--no-ansi[Do not print ANSI control characters]' \
'(-H --host)'{-H,--host}'[Daemon socket to connect to]:host:' \
'--tls[Use TLS; implied by --tlsverify]' \
'--tlscacert=[Trust certs signed only by this CA]:ca path:' \
@@ -421,9 +357,10 @@ _docker-compose() {
'(-): :->command' \
'(-)*:: :->option-or-argument' && ret=0
local -a relevant_compose_flags relevant_docker_flags compose_options docker_options
local -a relevant_compose_flags relevant_compose_repeatable_flags relevant_docker_flags compose_options docker_options
relevant_compose_flags=(
"--env-file"
"--file" "-f"
"--host" "-H"
"--project-name" "-p"
@@ -435,6 +372,10 @@ _docker-compose() {
"--skip-hostname-check"
)
relevant_compose_repeatable_flags=(
"--file" "-f"
)
relevant_docker_flags=(
"--host" "-H"
"--tls"
@@ -452,9 +393,18 @@ _docker-compose() {
fi
fi
if [[ -n "${relevant_compose_flags[(r)$k]}" ]]; then
compose_options+=$k
if [[ -n "$opt_args[$k]" ]]; then
compose_options+=$opt_args[$k]
if [[ -n "${relevant_compose_repeatable_flags[(r)$k]}" ]]; then
values=("${(@s/:/)opt_args[$k]}")
for value in $values
do
compose_options+=$k
compose_options+=$value
done
else
compose_options+=$k
if [[ -n "$opt_args[$k]" ]]; then
compose_options+=$opt_args[$k]
fi
fi
fi
done


@@ -44,7 +44,7 @@ def warn_for_links(name, service):
links = service.get('links')
if links:
example_service = links[0].partition(':')[0]
log.warn(
log.warning(
"Service {name} has links, which no longer create environment "
"variables such as {example_service_upper}_PORT. "
"If you are using those in your application code, you should "
@@ -57,7 +57,7 @@ def warn_for_links(name, service):
def warn_for_external_links(name, service):
external_links = service.get('external_links')
if external_links:
log.warn(
log.warning(
"Service {name} has external_links: {ext}, which now work "
"slightly differently. In particular, two containers must be "
"connected to at least one network in common in order to "
@@ -107,7 +107,7 @@ def rewrite_volumes_from(service, service_names):
def create_volumes_section(data):
named_volumes = get_named_volumes(data['services'])
if named_volumes:
log.warn(
log.warning(
"Named volumes ({names}) must be explicitly declared. Creating a "
"'volumes' section with declarations.\n\n"
"For backwards-compatibility, they've been declared as external. "

docker-compose-entrypoint.sh Executable file

@@ -0,0 +1,20 @@
#!/bin/sh
set -e
# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
set -- docker-compose "$@"
fi
# if our command is a valid Docker subcommand, let's invoke it through Docker instead
# (this allows for "docker run docker ps", etc)
if docker-compose help "$1" > /dev/null 2>&1; then
set -- docker-compose "$@"
fi
# if we have "--link some-docker:docker" and not DOCKER_HOST, let's set DOCKER_HOST automatically
if [ -z "$DOCKER_HOST" -a "$DOCKER_PORT_2375_TCP" ]; then
export DOCKER_HOST='tcp://docker:2375'
fi
exec "$@"


@@ -98,4 +98,5 @@ exe = EXE(pyz,
debug=False,
strip=None,
upx=True,
console=True)
console=True,
bootloader_ignore_signals=True)


@@ -6,11 +6,9 @@ The documentation for Compose has been merged into
The docs for Compose are now here:
https://github.com/docker/docker.github.io/tree/master/compose
Please submit pull requests for unpublished features on the `vnext-compose` branch (https://github.com/docker/docker.github.io/tree/vnext-compose).
Please submit pull requests for unreleased features/changes on the `master` branch (https://github.com/docker/docker.github.io/tree/master); prefix the PR title with `[WIP]` to indicate that it relates to an unreleased change.
If you submit a PR to this codebase that has a docs impact, create a second docs PR on `docker.github.io`. Use the docs PR template provided (coming soon - watch this space).
PRs for typos, additional information, etc. for already-published features should be labeled as `okay-to-publish` (we are still settling on a naming convention and will provide a label soon). You can submit these PRs either to `vnext-compose` or directly to `master` on `docker.github.io`.
If you submit a PR to this codebase that has a docs impact, create a second docs PR on `docker.github.io`. Use the docs PR template provided.
As always, the docs remain open-source and we appreciate your feedback and
pull requests!

pyinstaller/ldd Executable file

@@ -0,0 +1,13 @@
#!/bin/sh
# From http://wiki.musl-libc.org/wiki/FAQ#Q:_where_is_ldd_.3F
#
# Musl's dynlinker comes with ldd functionality built in. just create a
# symlink from ld-musl-$ARCH.so to /bin/ldd. If the dynlinker was started
# as "ldd", it will detect that and print the appropriate DSO information.
#
# Instead, this string replaced "ldd" with the package so that pyinstaller
# can find the actual lib.
exec /usr/bin/ldd "$@" | \
sed -r 's/([^[:space:]]+) => ldd/\1 => \/lib\/\1/g' | \
sed -r 's/ldd \(.*\)//g'


@@ -1 +1 @@
pyinstaller==3.3.1
pyinstaller==3.5


@@ -1,5 +1,6 @@
coverage==4.4.2
ddt==1.2.0
flake8==3.5.0
mock>=1.0.1
pytest==2.9.2
mock==3.0.5
pytest==3.6.3
pytest-cov==2.5.1


@@ -1,23 +1,25 @@
backports.shutil_get_terminal_size==1.0.0
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
docker==3.4.1
docker-pycreds==0.3.0
colorama==0.4.0; sys_platform == 'win32'
docker==4.1.0
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
enum34==1.1.6; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
git+git://github.com/tartley/colorama.git@bd378c725b45eba0b8e5cc091c3ca76a954c92ff; sys_platform == 'win32'
idna==2.5
ipaddress==1.0.18
jsonschema==2.6.0
jsonschema==3.0.1
paramiko==2.6.0
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==220; sys_platform == 'win32' and python_version >= '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.6.7
PyYAML==3.12
requests==2.18.4
six==1.10.0
texttable==0.9.1
urllib3==1.21.1
PyYAML==4.2b1
requests==2.22.0
six==1.12.0
texttable==1.6.2
urllib3==1.24.2; python_version == '3.3'
websocket-client==0.32.0

script/Jenkinsfile.fossa Normal file

@@ -0,0 +1,20 @@
pipeline {
agent any
stages {
stage("License Scan") {
agent {
label 'ubuntu-1604-aufs-edge'
}
steps {
withCredentials([
string(credentialsId: 'fossa-api-key', variable: 'FOSSA_API_KEY')
]) {
checkout scm
sh "FOSSA_API_KEY='${FOSSA_API_KEY}' BRANCH_NAME='${env.BRANCH_NAME}' make -f script/fossa.mk fossa-analyze"
sh "FOSSA_API_KEY='${FOSSA_API_KEY}' make -f script/fossa.mk fossa-test"
}
}
}
}
}


@@ -7,11 +7,14 @@ if [ -z "$1" ]; then
exit 1
fi
TAG=$1
TAG="$1"
VERSION="$(python setup.py --version)"
./script/build/write-git-sha
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
python setup.py sdist bdist_wheel
./script/build/linux
docker build -t docker/compose:$TAG -f Dockerfile.run .
docker build \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}" \
-t "${TAG}" .


@@ -4,10 +4,15 @@ set -ex
./script/clean
TAG="docker-compose"
docker build -t "$TAG" . | tail -n 200
docker run \
--rm --entrypoint="script/build/linux-entrypoint" \
-v $(pwd)/dist:/code/dist \
-v $(pwd)/.git:/code/.git \
"$TAG"
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
TAG="docker/compose:tmp-glibc-linux-binary-${DOCKER_COMPOSE_GITSHA}"
docker build -t "${TAG}" . \
--build-arg BUILD_PLATFORM=debian \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
TMP_CONTAINER=$(docker create "${TAG}")
mkdir -p dist
ARCH=$(uname -m)
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
docker container rm -f "${TMP_CONTAINER}"
docker image rm -f "${TAG}"

View File

@@ -2,14 +2,39 @@
set -ex
TARGET=dist/docker-compose-$(uname -s)-$(uname -m)
VENV=/code/.tox/py36
CODE_PATH=/code
VENV="${CODE_PATH}"/.tox/py37
mkdir -p `pwd`/dist
chmod 777 `pwd`/dist
cd "${CODE_PATH}"
mkdir -p dist
chmod 777 dist
$VENV/bin/pip install -q -r requirements-build.txt
./script/build/write-git-sha
su -c "$VENV/bin/pyinstaller docker-compose.spec" user
mv dist/docker-compose $TARGET
$TARGET version
"${VENV}"/bin/pip3 install -q -r requirements-build.txt
# TODO(ulyssessouza): check whether this is really needed
if [ -z "${DOCKER_COMPOSE_GITSHA}" ]; then
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
fi
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
export PATH="${CODE_PATH}/pyinstaller:${PATH}"
if [ ! -z "${BUILD_BOOTLOADER}" ]; then
# Build bootloader for alpine; develop is the main branch
git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
cd /tmp/pyinstaller/bootloader
# Check out the commit corresponding to the pinned version in requirements-build.txt
git checkout v3.5
"${VENV}"/bin/python3 ./waf configure --no-lsb all
"${VENV}"/bin/pip3 install ..
cd "${CODE_PATH}"
rm -Rf /tmp/pyinstaller
else
echo "NOT compiling bootloader!!!"
fi
"${VENV}"/bin/pyinstaller --exclude-module pycrypto --exclude-module PyInstaller docker-compose.spec
ls -la dist/
ldd dist/docker-compose
mv dist/docker-compose /usr/local/bin
docker-compose version

View File

@@ -1,15 +1,16 @@
#!/bin/bash
set -ex
PATH="/usr/local/bin:$PATH"
TOOLCHAIN_PATH="$(realpath $(dirname $0)/../../build/toolchain)"
rm -rf venv
virtualenv -p /usr/local/bin/python3 venv
virtualenv -p "${TOOLCHAIN_PATH}"/bin/python3 venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-build.txt
venv/bin/pip install --no-deps .
./script/build/write-git-sha
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
venv/bin/pyinstaller docker-compose.spec
mv dist/docker-compose dist/docker-compose-Darwin-x86_64
dist/docker-compose-Darwin-x86_64 version

View File

@@ -7,11 +7,12 @@ if [ -z "$1" ]; then
exit 1
fi
TAG=$1
TAG="$1"
IMAGE="docker/compose-tests"
docker build -t docker-compose-tests:tmp .
ctnr_id=$(docker create --entrypoint=tox docker-compose-tests:tmp)
docker commit $ctnr_id docker/compose-tests:latest
docker tag docker/compose-tests:latest docker/compose-tests:$TAG
docker rm -f $ctnr_id
docker rmi -f docker-compose-tests:tmp
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
docker build -t "${IMAGE}:${TAG}" . \
--target build \
--build-arg BUILD_PLATFORM="debian" \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
docker tag "${IMAGE}":"${TAG}" "${IMAGE}":latest

View File

@@ -6,17 +6,17 @@
#
# http://git-scm.com/download/win
#
# 2. Install Python 3.6.4:
# 2. Install Python 3.7.2:
#
# https://www.python.org/downloads/
#
# 3. Append ";C:\Python36;C:\Python36\Scripts" to the "Path" environment variable:
# 3. Append ";C:\Python37;C:\Python37\Scripts" to the "Path" environment variable:
#
# https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
#
# 4. In Powershell, run the following commands:
#
# $ pip install 'virtualenv>=15.1.0'
# $ pip install 'virtualenv==16.2.0'
# $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
#
# 5. Clone the repository:
@@ -44,7 +44,7 @@ virtualenv .\venv
# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"
.\venv\Scripts\pip install pypiwin32==220
.\venv\Scripts\pip install pypiwin32==223
.\venv\Scripts\pip install -r requirements.txt
.\venv\Scripts\pip install --no-deps .
.\venv\Scripts\pip install -r requirements-build.txt

View File

@@ -2,6 +2,11 @@
#
# Write the current commit sha to the file GITSHA. This file is included in
# packaging so that `docker-compose version` can include the git sha.
#
set -e
git rev-parse --short HEAD > compose/GITSHA
# Fall back to 'unknown' (and warn on stderr) if the revision cannot be determined.
# Note: with `set -e`, a bare `$?` check after the substitution would never run,
# and warning on stdout would leak into the captured sha.
if ! DOCKER_COMPOSE_GITSHA="$(git rev-parse --short HEAD)"; then
echo "Couldn't get revision of the git repository. Setting to 'unknown' instead" >&2
DOCKER_COMPOSE_GITSHA="unknown"
fi
echo "${DOCKER_COMPOSE_GITSHA}"

View File

@@ -1,7 +1,5 @@
#!/bin/bash
set -x
curl -f -u$BINTRAY_USERNAME:$BINTRAY_API_KEY -X GET \
https://api.bintray.com/repos/docker-compose/${CIRCLE_BRANCH}

16
script/fossa.mk Normal file
View File

@@ -0,0 +1,16 @@
# Variables for Fossa
BUILD_ANALYZER?=docker/fossa-analyzer
FOSSA_OPTS?=--option all-tags:true --option allow-unresolved:true
fossa-analyze:
docker run --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
-w /go/src/github.com/docker/compose \
$(BUILD_ANALYZER) analyze ${FOSSA_OPTS} --branch ${BRANCH_NAME}
# Run the fossa test command
fossa-test:
docker run -i -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
-w /go/src/github.com/docker/compose \
$(BUILD_ANALYZER) test

View File

@@ -1,15 +0,0 @@
FROM python:3.6
RUN mkdir -p /src && pip install -U Jinja2==2.10 \
PyGithub==1.39 \
pypandoc==1.4 \
GitPython==2.1.9 \
requests==2.18.4 \
twine==1.11.0 && \
apt-get update && apt-get install -y pandoc
VOLUME /src/script/release
WORKDIR /src
COPY . /src
RUN python setup.py develop
ENTRYPOINT ["python", "script/release/release.py"]
CMD ["--help"]

View File

@@ -9,8 +9,7 @@ The following things are required to bring a release to a successful conclusion
### Local Docker engine (Linux Containers)
The release script runs inside a container and builds images that will be part
of the release.
The release script builds images that will be part of the release.
### Docker Hub account
@@ -20,6 +19,10 @@ following repositories:
- docker/compose
- docker/compose-tests
### Python
The release script is written in Python and requires Python 3.3 at minimum.
### A Github account and Github API token
Your Github account needs to have write access on the `docker/compose` repo.
@@ -37,7 +40,7 @@ This API token should be exposed to the release script through the
### A Bintray account and Bintray API key
Your Bintray account will need to be an admin member of the
[docker-compose organization](https://github.com/settings/tokens).
[docker-compose organization](https://bintray.com/docker-compose).
Additionally, you should generate a personal API key. To do so, click your
username in the top-right corner and select "Edit profile"; on the new
page, select "API key" in the left-side menu.
@@ -53,6 +56,18 @@ Said account needs to be a member of the maintainers group for the
Moreover, the `~/.pypirc` file should exist on your host and contain the
relevant pypi credentials.
The following is a sample `.pypirc` provided as a guideline:
```
[distutils]
index-servers =
pypi
[pypi]
username = user
password = pass
```
## Start a feature release
A feature release is a release that includes all changes present in the
@@ -114,7 +129,7 @@ assets public), proceed to the "Finalize a release" section of this guide.
Once you're ready to make your release public, you may execute the following
command from the root of the Compose repository:
```
./script/release/release.sh -b <BINTRAY_USERNAME> finalize RELEAE_VERSION
./script/release/release.sh -b <BINTRAY_USERNAME> finalize RELEASE_VERSION
```
Note that this command will create and publish versioned assets to the public.
@@ -177,6 +192,8 @@ be handled manually by the operator:
- Bump the version in `compose/__init__.py` to the *next* minor version
number with `dev` appended. For example, if you just released `1.4.0`,
update it to `1.5.0dev`
- Update compose_version in [github.com/docker/docker.github.io/blob/master/_config.yml](https://github.com/docker/docker.github.io/blob/master/_config.yml) and [github.com/docker/docker.github.io/blob/master/_config_authoring.yml](https://github.com/docker/docker.github.io/blob/master/_config_authoring.yml)
- Update the release note in [github.com/docker/docker.github.io](https://github.com/docker/docker.github.io/blob/master/release-notes/docker-compose.md)
## Advanced options

View File

@@ -26,12 +26,6 @@ if [ -z "$(command -v jq 2> /dev/null)" ]; then
fi
if [ -z "$(command -v pandoc 2> /dev/null)" ]; then
>&2 echo "$0 requires http://pandoc.org/"
>&2 echo "Please install it and make sure it is available on your \$PATH."
exit 2
fi
API=https://api.github.com/repos
REPO=docker/compose
GITHUB_REPO=git@github.com:$REPO
@@ -59,8 +53,6 @@ docker push docker/compose-tests:latest
docker push docker/compose-tests:$VERSION
echo "Uploading package to PyPI"
pandoc -f markdown -t rst README.md -o README.rst
sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst
./script/build/write-git-sha
python setup.py sdist bdist_wheel
if [ "$(command -v twine 2> /dev/null)" ]; then

View File

@@ -1,6 +1,6 @@
If you're a Mac or Windows user, the best way to install Compose and keep it up-to-date is **[Docker for Mac and Windows](https://www.docker.com/products/docker)**.
If you're a Mac or Windows user, the best way to install Compose and keep it up-to-date is **[Docker Desktop for Mac and Windows](https://www.docker.com/products/docker-desktop)**.
Docker for Mac and Windows will automatically install the latest version of Docker Engine for you.
Docker Desktop will automatically install the latest version of Docker Engine for you.
Alternatively, you can use the usual commands to install or upgrade Compose:

View File

@@ -4,11 +4,10 @@ from __future__ import unicode_literals
import argparse
import os
import shutil
import sys
import time
from distutils.core import run_setup
import pypandoc
from jinja2 import Template
from release.bintray import BintrayAPI
from release.const import BINTRAY_ORG
@@ -16,6 +15,9 @@ from release.const import NAME
from release.const import REPO_ROOT
from release.downloader import BinaryDownloader
from release.images import ImageManager
from release.images import is_tag_latest
from release.pypi import check_pypirc
from release.pypi import pypi_upload
from release.repository import delete_assets
from release.repository import get_contributors
from release.repository import Repository
@@ -27,7 +29,6 @@ from release.utils import ScriptError
from release.utils import update_init_py_version
from release.utils import update_run_sh_version
from release.utils import yesno
from twine.commands.upload import main as twine_upload
def create_initial_branch(repository, args):
@@ -58,8 +59,11 @@ def create_bump_commit(repository, release_branch, bintray_user, bintray_org):
repository.push_branch_to_remote(release_branch)
bintray_api = BintrayAPI(os.environ['BINTRAY_TOKEN'], bintray_user)
print('Creating data repository {} on bintray'.format(release_branch.name))
bintray_api.create_repository(bintray_org, release_branch.name, 'generic')
if not bintray_api.repository_exists(bintray_org, release_branch.name):
print('Creating data repository {} on bintray'.format(release_branch.name))
bintray_api.create_repository(bintray_org, release_branch.name, 'generic')
else:
print('Bintray repository {} already exists. Skipping'.format(release_branch.name))
def monitor_pr_status(pr_data):
@@ -72,19 +76,24 @@ def monitor_pr_status(pr_data):
'pending': 0,
'success': 0,
'failure': 0,
'error': 0,
}
for detail in status.statuses:
if detail.context == 'dco-signed':
# the dco-signed check breaks on merge remote-tracking branches; ignore it
continue
summary[detail.state] += 1
print('{pending} pending, {success} successes, {failure} failures'.format(**summary))
if summary['pending'] == 0 and summary['failure'] == 0 and summary['success'] > 0:
if detail.state in summary:
summary[detail.state] += 1
print(
'{pending} pending, {success} successes, {failure} failures, '
'{error} errors'.format(**summary)
)
if summary['failure'] > 0 or summary['error'] > 0:
raise ScriptError('CI failures detected!')
elif summary['pending'] == 0 and summary['success'] > 0:
# This check assumes at least 1 non-DCO CI check to avoid race conditions.
# If testing on a repo without CI, use --skip-ci-checks to avoid looping forever
return True
elif summary['failure'] > 0:
raise ScriptError('CI failures detected!')
time.sleep(30)
elif status.state == 'success':
print('{} successes: all clear!'.format(status.total_count))
@@ -92,12 +101,14 @@ def monitor_pr_status(pr_data):
def check_pr_mergeable(pr_data):
if not pr_data.mergeable:
if pr_data.mergeable is False:
# mergeable can also be null, in which case the warning would be a false positive.
print(
'WARNING!! PR #{} can not currently be merged. You will need to '
'resolve the conflicts manually before finalizing the release.'.format(pr_data.number)
)
return pr_data.mergeable
return pr_data.mergeable is True
def create_release_draft(repository, version, pr_data, files):
@@ -125,13 +136,42 @@ def print_final_instructions(args):
"You're almost done! Please verify that everything is in order and "
"you are ready to make the release public, then run the following "
"command:\n{exe} -b {user} finalize {version}".format(
exe=sys.argv[0], user=args.bintray_user, version=args.release
exe='./script/release/release.sh', user=args.bintray_user, version=args.release
)
)
def distclean():
print('Running distclean...')
dirs = [
os.path.join(REPO_ROOT, 'build'), os.path.join(REPO_ROOT, 'dist'),
os.path.join(REPO_ROOT, 'docker-compose.egg-info')
]
files = []
for base, dirnames, fnames in os.walk(REPO_ROOT):
for fname in fnames:
path = os.path.normpath(os.path.join(base, fname))
if fname.endswith('.pyc'):
files.append(path)
elif fname.startswith('.coverage.'):
files.append(path)
for dirname in dirnames:
path = os.path.normpath(os.path.join(base, dirname))
if dirname == '__pycache__':
dirs.append(path)
elif dirname == '.coverage-binfiles':
dirs.append(path)
for file in files:
os.unlink(file)
for folder in dirs:
shutil.rmtree(folder, ignore_errors=True)
def resume(args):
try:
distclean()
repository = Repository(REPO_ROOT, args.repo)
br_name = branch_name(args.release)
if not repository.branch_exists(br_name):
@@ -165,7 +205,7 @@ def resume(args):
delete_assets(gh_release)
upload_assets(gh_release, files)
img_manager = ImageManager(args.release)
img_manager.build_images(repository, files)
img_manager.build_images(repository)
except ScriptError as e:
print(e)
return 1
@@ -183,6 +223,7 @@ def cancel(args):
bintray_api = BintrayAPI(os.environ['BINTRAY_TOKEN'], args.bintray_user)
print('Removing Bintray data repository for {}'.format(args.release))
bintray_api.delete_repository(args.bintray_org, branch_name(args.release))
distclean()
except ScriptError as e:
print(e)
return 1
@@ -191,6 +232,7 @@ def cancel(args):
def start(args):
distclean()
try:
repository = Repository(REPO_ROOT, args.repo)
create_initial_branch(repository, args)
@@ -203,7 +245,7 @@ def start(args):
gh_release = create_release_draft(repository, args.release, pr_data, files)
upload_assets(gh_release, files)
img_manager = ImageManager(args.release)
img_manager.build_images(repository, files)
img_manager.build_images(repository)
except ScriptError as e:
print(e)
return 1
@@ -213,15 +255,18 @@ def start(args):
def finalize(args):
distclean()
try:
check_pypirc()
repository = Repository(REPO_ROOT, args.repo)
img_manager = ImageManager(args.release)
tag_as_latest = is_tag_latest(args.release)
img_manager = ImageManager(args.release, tag_as_latest)
pr_data = repository.find_release_pr(args.release)
if not pr_data:
raise ScriptError('No PR found for {}'.format(args.release))
if not check_pr_mergeable(pr_data):
raise ScriptError('Can not finalize release with an unmergeable PR')
if not img_manager.check_images(args.release):
if not img_manager.check_images():
raise ScriptError('Missing release image')
br_name = branch_name(args.release)
if not repository.branch_exists(br_name):
@@ -232,16 +277,17 @@ def finalize(args):
repository.checkout_branch(br_name)
pypandoc.convert_file(
os.path.join(REPO_ROOT, 'README.md'), 'rst', outputfile=os.path.join(REPO_ROOT, 'README.rst')
)
run_setup(os.path.join(REPO_ROOT, 'setup.py'), script_args=['sdist', 'bdist_wheel'])
os.system('python {setup_script} sdist bdist_wheel'.format(
setup_script=os.path.join(REPO_ROOT, 'setup.py')))
merge_status = pr_data.merge()
if not merge_status.merged:
raise ScriptError('Unable to merge PR #{}: {}'.format(pr_data.number, merge_status.message))
print('Uploading to PyPi')
twine_upload(['dist/*'])
if not merge_status.merged and not args.finalize_resume:
raise ScriptError(
'Unable to merge PR #{}: {}'.format(pr_data.number, merge_status.message)
)
pypi_upload(args)
img_manager.push_images()
repository.publish_release(gh_release)
except ScriptError as e:
@@ -260,13 +306,13 @@ ACTIONS = [
EPILOG = '''Example uses:
* Start a new feature release (includes all changes currently in master)
release.py -b user start 1.23.0
release.sh -b user start 1.23.0
* Start a new patch release
release.py -b user --patch 1.21.0 start 1.21.1
release.sh -b user --patch 1.21.0 start 1.21.1
* Cancel / rollback an existing release draft
release.py -b user cancel 1.23.0
release.sh -b user cancel 1.23.0
* Restart a previously aborted patch release
release.py -b user -p 1.21.0 resume 1.21.1
release.sh -b user -p 1.21.0 resume 1.21.1
'''
@@ -316,6 +362,10 @@ def main():
'--skip-ci-checks', dest='skip_ci', action='store_true',
help='If set, the program will not wait for CI jobs to complete'
)
parser.add_argument(
'--finalize-resume', dest='finalize_resume', action='store_true',
help='If set, finalize will continue through steps that have already been completed.'
)
args = parser.parse_args()
if args.action == 'start':

View File

@@ -1,27 +1,13 @@
#!/bin/sh
docker image inspect compose/release-tool > /dev/null
if test $? -ne 0; then
docker build -t compose/release-tool -f $(pwd)/script/release/Dockerfile $(pwd)
if test -d ${VENV_DIR:-./.release-venv}; then
true
else
./script/release/setup-venv.sh
fi
if test -z $GITHUB_TOKEN; then
echo "GITHUB_TOKEN environment variable must be set"
exit 1
if test -z "$*"; then
args="--help"
fi
if test -z $BINTRAY_TOKEN; then
echo "BINTRAY_TOKEN environment variable must be set"
exit 1
fi
docker run -e GITHUB_TOKEN=$GITHUB_TOKEN -e BINTRAY_TOKEN=$BINTRAY_TOKEN -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK -it \
--mount type=bind,source=$(pwd),target=/src \
--mount type=bind,source=$(pwd)/.git,target=/src/.git \
--mount type=bind,source=$HOME/.docker,target=/root/.docker \
--mount type=bind,source=$HOME/.gitconfig,target=/root/.gitconfig \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--mount type=bind,source=$HOME/.ssh,target=/root/.ssh \
--mount type=bind,source=/tmp,target=/tmp \
-v $HOME/.pypirc:/root/.pypirc \
compose/release-tool $*
${VENV_DIR:-./.release-venv}/bin/python ./script/release/release.py "$@"

View File

@@ -15,7 +15,7 @@ class BintrayAPI(requests.Session):
self.base_url = 'https://api.bintray.com/'
def create_repository(self, subject, repo_name, repo_type='generic'):
url = '{base}/repos/{subject}/{repo_name}'.format(
url = '{base}repos/{subject}/{repo_name}'.format(
base=self.base_url, subject=subject, repo_name=repo_name,
)
data = {
@@ -27,10 +27,20 @@ class BintrayAPI(requests.Session):
}
return self.post_json(url, data)
def delete_repository(self, subject, repo_name):
def repository_exists(self, subject, repo_name):
url = '{base}/repos/{subject}/{repo_name}'.format(
base=self.base_url, subject=subject, repo_name=repo_name,
)
result = self.get(url)
if result.status_code == 404:
return False
result.raise_for_status()
return True
def delete_repository(self, subject, repo_name):
url = '{base}repos/{subject}/{repo_name}'.format(
base=self.base_url, subject=subject, repo_name=repo_name,
)
return self.delete(url)
def post_json(self, url, data, **kwargs):

View File

@@ -6,4 +6,5 @@ import os
REPO_ROOT = os.path.join(os.path.dirname(__file__), '..', '..', '..')
NAME = 'docker/compose'
COMPOSE_TESTS_IMAGE_BASE_NAME = NAME + '-tests'
BINTRAY_ORG = 'docker-compose'

View File

@@ -2,31 +2,76 @@ from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import base64
import json
import os
import shutil
import docker
from enum import Enum
from .const import NAME
from .const import REPO_ROOT
from .utils import ScriptError
from .utils import yesno
from script.release.release.const import COMPOSE_TESTS_IMAGE_BASE_NAME
class Platform(Enum):
ALPINE = 'alpine'
DEBIAN = 'debian'
def __str__(self):
return self.value
# Checks whether this version respects the GA version format ('x.y.z'), i.e. is not an RC
def is_tag_latest(version):
ga_version = all(n.isdigit() for n in version.split('.')) and version.count('.') == 2
return ga_version and yesno('Should this release be tagged as \"latest\"? [Y/n]: ', default=True)
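The non-interactive half of `is_tag_latest` can be exercised in isolation; a minimal sketch (`looks_like_ga` is a hypothetical stand-in):

```
def looks_like_ga(version):
    # GA format: exactly three dot-separated numeric components.
    return version.count('.') == 2 and all(n.isdigit() for n in version.split('.'))

assert looks_like_ga('1.25.0')
assert not looks_like_ga('1.25.0-rc1')  # RCs never become "latest"
assert not looks_like_ga('1.25')
```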
class ImageManager(object):
def __init__(self, version):
def __init__(self, version, latest=False):
self.docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
self.version = version
self.latest = latest
if 'HUB_CREDENTIALS' in os.environ:
print('HUB_CREDENTIALS found in environment, issuing login')
credentials = json.loads(base64.urlsafe_b64decode(os.environ['HUB_CREDENTIALS']))
self.docker_client.login(
username=credentials['Username'], password=credentials['Password']
)
def build_images(self, repository, files):
print("Building release images...")
repository.write_git_sha()
docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
distdir = os.path.join(REPO_ROOT, 'dist')
os.makedirs(distdir, exist_ok=True)
shutil.copy(files['docker-compose-Linux-x86_64'][0], distdir)
os.chmod(os.path.join(distdir, 'docker-compose-Linux-x86_64'), 0o755)
print('Building docker/compose image')
logstream = docker_client.build(
REPO_ROOT, tag='docker/compose:{}'.format(self.version), dockerfile='Dockerfile.run',
def _tag(self, image, existing_tag, new_tag):
existing_repo_tag = '{image}:{tag}'.format(image=image, tag=existing_tag)
new_repo_tag = '{image}:{tag}'.format(image=image, tag=new_tag)
self.docker_client.tag(existing_repo_tag, new_repo_tag)
def get_full_version(self, platform=None):
return '{}-{}'.format(self.version, platform) if platform else self.version
def get_runtime_image_tag(self, tag):
return '{image_base_image}:{tag}'.format(
image_base_image=NAME,
tag=self.get_full_version(tag)
)
def build_runtime_image(self, repository, platform):
git_sha = repository.write_git_sha()
compose_image_base_name = NAME
print('Building {image} image ({platform} based)'.format(
image=compose_image_base_name,
platform=platform
))
full_version = self.get_full_version(platform)
build_tag = self.get_runtime_image_tag(platform)
logstream = self.docker_client.build(
REPO_ROOT,
tag=build_tag,
buildargs={
'BUILD_PLATFORM': platform.value,
'GIT_COMMIT': git_sha,
},
decode=True
)
for chunk in logstream:
@@ -35,9 +80,33 @@ class ImageManager(object):
if 'stream' in chunk:
print(chunk['stream'], end='')
print('Building test image (for UCP e2e)')
logstream = docker_client.build(
REPO_ROOT, tag='docker-compose-tests:tmp', decode=True
if platform == Platform.ALPINE:
self._tag(compose_image_base_name, full_version, self.version)
if self.latest:
self._tag(compose_image_base_name, full_version, platform)
if platform == Platform.ALPINE:
self._tag(compose_image_base_name, full_version, 'latest')
def get_ucp_test_image_tag(self, tag=None):
return '{image}:{tag}'.format(
image=COMPOSE_TESTS_IMAGE_BASE_NAME,
tag=tag or self.version
)
# Used for producing a test image for UCP
def build_ucp_test_image(self, repository):
print('Building test image (debian based for UCP e2e)')
git_sha = repository.write_git_sha()
ucp_test_image_tag = self.get_ucp_test_image_tag()
logstream = self.docker_client.build(
REPO_ROOT,
tag=ucp_test_image_tag,
target='build',
buildargs={
'BUILD_PLATFORM': Platform.DEBIAN.value,
'GIT_COMMIT': git_sha,
},
decode=True
)
for chunk in logstream:
if 'error' in chunk:
@@ -45,39 +114,44 @@ class ImageManager(object):
if 'stream' in chunk:
print(chunk['stream'], end='')
container = docker_client.create_container(
'docker-compose-tests:tmp', entrypoint='tox'
)
docker_client.commit(container, 'docker/compose-tests', 'latest')
docker_client.tag('docker/compose-tests:latest', 'docker/compose-tests:{}'.format(self.version))
docker_client.remove_container(container, force=True)
docker_client.remove_image('docker-compose-tests:tmp', force=True)
self._tag(COMPOSE_TESTS_IMAGE_BASE_NAME, self.version, 'latest')
@property
def image_names(self):
return [
'docker/compose-tests:latest',
'docker/compose-tests:{}'.format(self.version),
'docker/compose:{}'.format(self.version)
]
def build_images(self, repository):
self.build_runtime_image(repository, Platform.ALPINE)
self.build_runtime_image(repository, Platform.DEBIAN)
self.build_ucp_test_image(repository)
def check_images(self, version):
docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
for name in self.image_names:
def check_images(self):
for name in self.get_images_to_push():
try:
docker_client.inspect_image(name)
self.docker_client.inspect_image(name)
except docker.errors.ImageNotFound:
print('Expected image {} was not found'.format(name))
return False
return True
def push_images(self):
docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
def get_images_to_push(self):
tags_to_push = {
"{}:{}".format(NAME, self.version),
self.get_runtime_image_tag(Platform.ALPINE),
self.get_runtime_image_tag(Platform.DEBIAN),
self.get_ucp_test_image_tag(),
self.get_ucp_test_image_tag('latest'),
}
if is_tag_latest(self.version):
tags_to_push.add("{}:latest".format(NAME))
return tags_to_push
for name in self.image_names:
def push_images(self):
tags_to_push = self.get_images_to_push()
print('Build tags to push {}'.format(tags_to_push))
for name in tags_to_push:
print('Pushing {} to Docker Hub'.format(name))
logstream = docker_client.push(name, stream=True, decode=True)
logstream = self.docker_client.push(name, stream=True, decode=True)
for chunk in logstream:
if 'status' in chunk:
print(chunk['status'])
if 'error' in chunk:
raise ScriptError(
'Error pushing {name}: {err}'.format(name=name, err=chunk['error'])
)
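For reference, the tag helpers above produce names along these lines, assuming a hypothetical GA release version of 1.25.0:

```
version = '1.25.0'
for platform in ('alpine', 'debian'):
    print('docker/compose:{}-{}'.format(version, platform))  # runtime images
print('docker/compose:{}'.format(version))         # alias for the alpine build
print('docker/compose-tests:{}'.format(version))   # UCP e2e test image
print('docker/compose-tests:latest')
```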

View File

@@ -0,0 +1,44 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from configparser import Error
from requests.exceptions import HTTPError
from twine.commands.upload import main as twine_upload
from twine.utils import get_config
from .utils import ScriptError
def pypi_upload(args):
print('Uploading to PyPi')
try:
rel = args.release.replace('-rc', 'rc')
twine_upload([
'dist/docker_compose-{}*.whl'.format(rel),
'dist/docker-compose-{}*.tar.gz'.format(rel)
])
except HTTPError as e:
if e.response.status_code == 400 and 'File already exists' in str(e):
if not args.finalize_resume:
raise ScriptError(
'Package already uploaded on PyPi.'
)
print('Skipping PyPi upload - package already uploaded')
else:
raise ScriptError('Unexpected HTTP error uploading package to PyPi: {}'.format(e))
def check_pypirc():
try:
config = get_config()
except Error as e:
raise ScriptError('Failed to parse .pypirc file: {}'.format(e))
if config is None:
raise ScriptError('Failed to parse .pypirc file')
if 'pypi' not in config:
raise ScriptError('Missing [pypi] section in .pypirc file')
if not (config['pypi'].get('username') and config['pypi'].get('password')):
raise ScriptError('Missing login/password pair for pypi repo')
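Note how `pypi_upload` rewrites a tag-style version into the PEP 440 form used in the built artifact names; for a hypothetical 1.25.0-rc1 release:

```
release = '1.25.0-rc1'
rel = release.replace('-rc', 'rc')
print('dist/docker_compose-{}*.whl'.format(rel))
# dist/docker_compose-1.25.0rc1*.whl
```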

View File

@@ -175,6 +175,7 @@ class Repository(object):
def write_git_sha(self):
with open(os.path.join(REPO_ROOT, 'compose', 'GITSHA'), 'w') as f:
f.write(self.git_repo.head.commit.hexsha[:7])
return self.git_repo.head.commit.hexsha[:7]
def cherry_pick_prs(self, release_branch, ids):
if not ids:
@@ -219,6 +220,8 @@ def get_contributors(pr_data):
commits = pr_data.get_commits()
authors = {}
for commit in commits:
if not commit or not commit.author or not commit.author.login:
continue
author = commit.author.login
authors[author] = authors.get(author, 0) + 1
return [x[0] for x in sorted(list(authors.items()), key=lambda x: x[1])]

47
script/release/setup-venv.sh Executable file
View File

@@ -0,0 +1,47 @@
#!/bin/bash
debian_based() { test -f /etc/debian_version; }
if test -z $VENV_DIR; then
VENV_DIR=./.release-venv
fi
if test -z $PYTHONBIN; then
PYTHONBIN=$(which python3)
if test -z $PYTHONBIN; then
PYTHONBIN=$(which python)
fi
fi
VERSION=$($PYTHONBIN -c "import sys; print('{}.{}'.format(*sys.version_info[0:2]))")
if test $(echo $VERSION | cut -d. -f1) -lt 3; then
echo "Python 3.3 or above is required"
exit 1
fi
if test $(echo $VERSION | cut -d. -f2) -lt 3; then
echo "Python 3.3 or above is required"
exit 1
fi
# Debian / Ubuntu workaround:
# https://askubuntu.com/questions/879437/ensurepip-is-disabled-in-debian-ubuntu-for-the-system-python
if debian_based; then
VENV_FLAGS="$VENV_FLAGS --without-pip"
fi
$PYTHONBIN -m venv $VENV_DIR $VENV_FLAGS
VENV_PYTHONBIN=$VENV_DIR/bin/python
if debian_based; then
curl https://bootstrap.pypa.io/get-pip.py -o $VENV_DIR/get-pip.py
$VENV_PYTHONBIN $VENV_DIR/get-pip.py
fi
$VENV_PYTHONBIN -m pip install -U Jinja2==2.10 \
PyGithub==1.39 \
GitPython==2.1.9 \
requests==2.18.4 \
setuptools==40.6.2 \
twine==1.11.0
$VENV_PYTHONBIN setup.py develop

View File

@@ -15,7 +15,7 @@
set -e
VERSION="1.22.0-rc2"
VERSION="1.24.0"
IMAGE="docker/compose:$VERSION"
@@ -47,11 +47,17 @@ if [ -n "$HOME" ]; then
fi
# Only allocate tty if we detect one
if [ -t 1 ]; then
DOCKER_RUN_OPTIONS="-t"
if [ -t 0 -a -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
if [ -t 0 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -i"
# Always set -i to support piped and terminal input in run/exec
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -i"
# Handle userns security
if [ ! -z "$(docker info 2>/dev/null | grep userns)" ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS --userns=host"
fi
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"
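The tty logic above, restated in Python for clarity (a sketch, not part of the script):

```
import os
import sys

# Pass -t only when both stdin and stdout are terminals; always pass -i.
docker_run_options = ['-i']
if os.isatty(sys.stdin.fileno()) and os.isatty(sys.stdout.fileno()):
    docker_run_options.append('-t')
print(docker_run_options)
```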

View File

@@ -1,43 +1,110 @@
#!/bin/bash
#!/usr/bin/env bash
set -ex
python_version() {
python -V 2>&1
}
. $(dirname $0)/osx_helpers.sh
python3_version() {
python3 -V 2>&1
}
DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET:-"$(macos_version)"}
SDK_FETCH=
if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
SDK_FETCH=1
# SDK URL from https://github.com/docker/golang-cross/blob/master/osx-cross.sh
SDK_URL=https://s3.dockerproject.org/darwin/v2/MacOSX${DEPLOYMENT_TARGET}.sdk.tar.xz
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
fi
openssl_version() {
python -c "import ssl; print ssl.OPENSSL_VERSION"
}
OPENSSL_VERSION=1.1.1c
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
OPENSSL_SHA1=71b830a077276cbeccc994369538617a21bee808
desired_python3_version="3.6.4"
desired_python3_brew_version="3.6.4_2"
python3_formula="https://raw.githubusercontent.com/Homebrew/homebrew-core/b4e69a9a592232fa5a82741f6acecffc2f1d198d/Formula/python3.rb"
PYTHON_VERSION=3.7.4
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
PYTHON_SHA1=fb1d764be8a9dcd40f2f152a610a0ab04e0d0ed3
PATH="/usr/local/bin:$PATH"
if !(which brew); then
#
# Install prerequisites.
#
if ! [ -x "$(command -v brew)" ]; then
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
fi
brew update > /dev/null
if !(python3_version | grep "$desired_python3_version"); then
if brew list | grep python3; then
brew unlink python3
fi
brew install "$python3_formula"
brew switch python3 "$desired_python3_brew_version"
if ! [ -x "$(command -v grealpath)" ]; then
brew update > /dev/null
brew install coreutils
fi
if ! [ -x "$(command -v python3)" ]; then
brew update > /dev/null
brew install python3
fi
if ! [ -x "$(command -v virtualenv)" ]; then
pip install virtualenv==16.2.0
fi
echo "*** Using $(python3_version) ; $(python_version)"
echo "*** Using $(openssl_version)"
#
# Create toolchain directory.
#
BUILD_PATH="$(grealpath $(dirname $0)/../../build)"
mkdir -p ${BUILD_PATH}
TOOLCHAIN_PATH="${BUILD_PATH}/toolchain"
mkdir -p ${TOOLCHAIN_PATH}
if !(which virtualenv); then
pip install virtualenv
#
# Set macOS SDK.
#
if [[ ${SDK_FETCH} && ! -f ${TOOLCHAIN_PATH}/MacOSX${DEPLOYMENT_TARGET}.sdk/SDKSettings.plist ]]; then
SDK_PATH=${TOOLCHAIN_PATH}/MacOSX${DEPLOYMENT_TARGET}.sdk
fetch_tarball ${SDK_URL} ${SDK_PATH} ${SDK_SHA1}
else
SDK_PATH="$(xcode-select --print-path)/Platforms/MacOSX.platform/Developer/SDKs/MacOSX${DEPLOYMENT_TARGET}.sdk"
fi
#
# Build OpenSSL.
#
OPENSSL_SRC_PATH=${TOOLCHAIN_PATH}/openssl-${OPENSSL_VERSION}
if ! [[ $(${TOOLCHAIN_PATH}/bin/openssl version) == *"${OPENSSL_VERSION}"* ]]; then
rm -rf ${OPENSSL_SRC_PATH}
fetch_tarball ${OPENSSL_URL} ${OPENSSL_SRC_PATH} ${OPENSSL_SHA1}
(
cd ${OPENSSL_SRC_PATH}
export MACOSX_DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET}
export SDKROOT=${SDK_PATH}
./Configure darwin64-x86_64-cc --prefix=${TOOLCHAIN_PATH}
make install_sw install_dev
)
fi
#
# Build Python.
#
PYTHON_SRC_PATH=${TOOLCHAIN_PATH}/Python-${PYTHON_VERSION}
if ! [[ $(${TOOLCHAIN_PATH}/bin/python3 --version) == *"${PYTHON_VERSION}"* ]]; then
rm -rf ${PYTHON_SRC_PATH}
fetch_tarball ${PYTHON_URL} ${PYTHON_SRC_PATH} ${PYTHON_SHA1}
(
cd ${PYTHON_SRC_PATH}
./configure --prefix=${TOOLCHAIN_PATH} \
--enable-ipv6 --without-ensurepip --with-dtrace --without-gcc \
--datarootdir=${TOOLCHAIN_PATH}/share \
--datadir=${TOOLCHAIN_PATH}/share \
--enable-framework=${TOOLCHAIN_PATH}/Frameworks \
--with-openssl=${TOOLCHAIN_PATH} \
MACOSX_DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET} \
CFLAGS="-isysroot ${SDK_PATH} -I${TOOLCHAIN_PATH}/include" \
CPPFLAGS="-I${SDK_PATH}/usr/include -I${TOOLCHAIN_PATH}/include" \
LDFLAGS="-isysroot ${SDK_PATH} -L ${TOOLCHAIN_PATH}/lib"
make -j 4
make install PYTHONAPPSDIR=${TOOLCHAIN_PATH}
make frameworkinstallextras PYTHONAPPSDIR=${TOOLCHAIN_PATH}/share
)
fi
#
# Smoke test built Python.
#
openssl_version ${TOOLCHAIN_PATH}
echo ""
echo "*** Targeting macOS: ${DEPLOYMENT_TARGET}"
echo "*** Using SDK ${SDK_PATH}"
echo "*** Using $(python3_version ${TOOLCHAIN_PATH})"
echo "*** Using $(openssl_version ${TOOLCHAIN_PATH})"

View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# Check file's ($1) SHA1 ($2).
check_sha1() {
echo -n "$2 *$1" | shasum -c -
}
# Download URL ($1) to path ($2).
download() {
curl -L $1 -o $2
}
# Extract tarball ($1) in folder ($2).
extract() {
tar xf $1 -C $2
}
# Download URL ($1), check SHA1 ($3), and extract utility ($2).
fetch_tarball() {
url=$1
tarball=$2.tarball
sha1=$3
download $url $tarball
check_sha1 $tarball $sha1
extract $tarball $(dirname $tarball)
}
# Version of Python at toolchain path ($1).
python3_version() {
$1/bin/python3 -V 2>&1
}
# Version of OpenSSL used by toolchain ($1) Python.
openssl_version() {
$1/bin/python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
}
# System macOS version.
macos_version() {
sw_vers -productVersion | cut -f1,2 -d'.'
}
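A hypothetical Python analogue of `fetch_tarball`, under the same download/verify/extract contract:

```
import hashlib
import os
import tarfile
import urllib.request

def fetch_tarball(url, dest, sha1):
    # Download to <dest>.tarball, verify the SHA1, extract alongside it.
    tarball = dest + '.tarball'
    urllib.request.urlretrieve(url, tarball)
    with open(tarball, 'rb') as f:
        if hashlib.sha1(f.read()).hexdigest() != sha1:
            raise RuntimeError('SHA1 mismatch for {}'.format(url))
    with tarfile.open(tarball) as tar:
        tar.extractall(os.path.dirname(tarball) or '.')
```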

View File

@@ -8,8 +8,7 @@ set -e
docker run --rm \
--tty \
${GIT_VOLUME} \
--entrypoint="tox" \
"$TAG" -e pre-commit
"$TAG" tox -e pre-commit
get_versions="docker run --rm
--entrypoint=/code/.tox/py27/bin/python
@@ -24,7 +23,7 @@ fi
BUILD_NUMBER=${BUILD_NUMBER-$USER}
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py27,py36}
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py27,py37}
for version in $DOCKER_VERSIONS; do
>&2 echo "Running tests against Docker $version"

View File

@@ -20,6 +20,3 @@ export DOCKER_DAEMON_ARGS="--storage-driver=$STORAGE_DRIVER"
GIT_VOLUME="--volumes-from=$(hostname)"
. script/test/all
>&2 echo "Building Linux binary"
. script/build/linux-entrypoint

View File

@@ -3,17 +3,18 @@
set -ex
TAG="docker-compose:$(git rev-parse --short HEAD)"
TAG="docker-compose:alpine-$(git rev-parse --short HEAD)"
# By default use the Dockerfile, but can be overriden to use an alternative file
# e.g DOCKERFILE=Dockerfile.armhf script/test/default
# By default use the Dockerfile, but can be overridden to use an alternative file
# e.g DOCKERFILE=Dockerfile.s390x script/test/default
DOCKERFILE="${DOCKERFILE:-Dockerfile}"
DOCKER_BUILD_TARGET="${DOCKER_BUILD_TARGET:-build}"
rm -rf coverage-html
# Create the host directory so it's owned by $USER
mkdir -p coverage-html
docker build -f ${DOCKERFILE} -t "$TAG" .
docker build -f "${DOCKERFILE}" -t "${TAG}" --target "${DOCKER_BUILD_TARGET}" .
GIT_VOLUME="--volume=$(pwd)/.git:/code/.git"
. script/test/all

View File

@@ -36,23 +36,24 @@ import requests
GITHUB_API = 'https://api.github.com/repos'
STAGES = ['tp', 'beta', 'rc']
class Version(namedtuple('_Version', 'major minor patch rc edition')):
class Version(namedtuple('_Version', 'major minor patch stage edition')):
@classmethod
def parse(cls, version):
edition = None
version = version.lstrip('v')
version, _, rc = version.partition('-')
if rc:
if 'rc' not in rc:
edition = rc
rc = None
elif '-' in rc:
edition, rc = rc.split('-')
version, _, stage = version.partition('-')
if stage:
if not any(marker in stage for marker in STAGES):
edition = stage
stage = None
elif '-' in stage:
edition, stage = stage.split('-')
major, minor, patch = version.split('.', 3)
return cls(major, minor, patch, rc, edition)
return cls(major, minor, patch, stage, edition)
@property
def major_minor(self):
@@ -63,14 +64,22 @@ class Version(namedtuple('_Version', 'major minor patch rc edition')):
"""Return a representation that allows this object to be sorted
correctly with the default comparator.
"""
# rc releases should appear before official releases
rc = (0, self.rc) if self.rc else (1, )
return (int(self.major), int(self.minor), int(self.patch)) + rc
# non-GA releases should appear before GA releases
# Order: tp -> beta -> rc -> GA
if self.stage:
for st in STAGES:
if st in self.stage:
stage = (STAGES.index(st), self.stage)
break
else:
stage = (len(STAGES),)
return (int(self.major), int(self.minor), int(self.patch)) + stage
def __str__(self):
rc = '-{}'.format(self.rc) if self.rc else ''
stage = '-{}'.format(self.stage) if self.stage else ''
edition = '-{}'.format(self.edition) if self.edition else ''
return '.'.join(map(str, self[:3])) + edition + rc
return '.'.join(map(str, self[:3])) + edition + stage
BLACKLIST = [ # List of versions known to be broken and should not be used
@@ -113,9 +122,9 @@ def get_latest_versions(versions, num=1):
def get_default(versions):
"""Return a :class:`Version` for the latest non-rc version."""
"""Return a :class:`Version` for the latest GA version."""
for version in versions:
if not version.rc:
if not version.stage:
return version
@@ -123,8 +132,9 @@ def get_versions(tags):
for tag in tags:
try:
v = Version.parse(tag['name'])
if v not in BLACKLIST:
yield v
if v in BLACKLIST:
continue
yield v
except ValueError:
print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)

View File

@@ -31,31 +31,33 @@ def find_version(*file_paths):
install_requires = [
'cached-property >= 1.2.0, < 2',
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4',
'requests >= 2.6.1, != 2.11.0, != 2.12.2, != 2.18.0, < 2.19',
'texttable >= 0.9.0, < 0.10',
'websocket-client >= 0.32.0, < 1.0',
'docker >= 3.4.1, < 4.0',
'dockerpty >= 0.4.1, < 0.5',
'docopt >= 0.6.1, < 1',
'PyYAML >= 3.10, < 5',
'requests >= 2.20.0, < 3',
'texttable >= 0.9.0, < 2',
'websocket-client >= 0.32.0, < 1',
'docker[ssh] >= 3.7.0, < 5',
'dockerpty >= 0.4.1, < 1',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
'jsonschema >= 2.5.1, < 4',
]
tests_require = [
'pytest',
'pytest < 6',
]
if sys.version_info[:2] < (3, 4):
tests_require.append('mock >= 1.0.1')
tests_require.append('mock >= 1.0.1, < 4')
extras_require = {
':python_version < "3.2"': ['subprocess32 >= 3.5.4, < 4'],
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5'],
':python_version < "3.3"': ['ipaddress >= 1.0.16'],
':sys_platform == "win32"': ['colorama >= 0.3.9, < 0.4'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5, < 4'],
':python_version < "3.3"': ['backports.shutil_get_terminal_size == 1.0.0',
'ipaddress >= 1.0.16, < 2'],
':sys_platform == "win32"': ['colorama >= 0.4, < 1'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
}
@@ -77,19 +79,26 @@ setup(
name='docker-compose',
version=find_version("compose", "__init__.py"),
description='Multi-container orchestration for Docker',
long_description=read('README.md'),
long_description_content_type='text/markdown',
url='https://www.docker.com/',
project_urls={
'Documentation': 'https://docs.docker.com/compose/overview',
'Changelog': 'https://github.com/docker/compose/blob/release/CHANGELOG.md',
'Source': 'https://github.com/docker/compose',
'Tracker': 'https://github.com/docker/compose/issues',
},
author='Docker, Inc.',
license='Apache License 2.0',
packages=find_packages(exclude=['tests.*', 'tests']),
include_package_data=True,
test_suite='nose.collector',
install_requires=install_requires,
extras_require=extras_require,
tests_require=tests_require,
entry_points="""
[console_scripts]
docker-compose=compose.cli.main:main
""",
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
entry_points={
'console_scripts': ['docker-compose=compose.cli.main:main'],
},
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
@@ -100,5 +109,6 @@ setup(
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
)

View File

@@ -4,7 +4,6 @@ from __future__ import unicode_literals
import datetime
import json
import os
import os.path
import re
import signal
@@ -12,6 +11,7 @@ import subprocess
import time
from collections import Counter
from collections import namedtuple
from functools import reduce
from operator import attrgetter
import pytest
@@ -20,6 +20,7 @@ import yaml
from docker import errors
from .. import mock
from ..helpers import BUSYBOX_IMAGE_WITH_TAG
from ..helpers import create_host_file
from compose.cli.command import get_project
from compose.config.errors import DuplicateOverrideFileFound
@@ -41,7 +42,7 @@ ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
BUILD_CACHE_TEXT = 'Using cache'
BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:latest'
BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:1.27.2'
def start_process(base_dir, options):
@@ -63,6 +64,12 @@ def wait_on_process(proc, returncode=0):
return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))
def dispatch(base_dir, options, project_options=None, returncode=0):
project_options = project_options or []
proc = start_process(base_dir, project_options + options)
return wait_on_process(proc, returncode=returncode)
def wait_on_condition(condition, delay=0.1, timeout=40):
start_time = time.time()
while not condition():
@@ -99,7 +106,14 @@ class ContainerStateCondition(object):
def __call__(self):
try:
container = self.client.inspect_container(self.name)
if self.name.endswith('*'):
ctnrs = self.client.containers(all=True, filters={'name': self.name[:-1]})
if len(ctnrs) > 0:
container = self.client.inspect_container(ctnrs[0]['Id'])
else:
return False
else:
container = self.client.inspect_container(self.name)
return container['State']['Status'] == self.status
except errors.APIError:
return False
@@ -143,9 +157,7 @@ class CLITestCase(DockerClientTestCase):
return self._project
def dispatch(self, options, project_options=None, returncode=0):
project_options = project_options or []
proc = start_process(self.base_dir, project_options + options)
return wait_on_process(proc, returncode=returncode)
return dispatch(self.base_dir, options, project_options, returncode)
def execute(self, container, cmd):
# Remove once Hijack and CloseNotifier sign a peace treaty
@@ -164,6 +176,13 @@ class CLITestCase(DockerClientTestCase):
# Prevent tearDown from trying to create a project
self.base_dir = None
def test_quiet_build(self):
self.base_dir = 'tests/fixtures/build-args'
result = self.dispatch(['build'], None)
quietResult = self.dispatch(['build', '-q'], None)
assert result.stdout != ""
assert quietResult.stdout == ""
def test_help_nonexistent(self):
self.base_dir = 'tests/fixtures/no-composefile'
result = self.dispatch(['help', 'foobar'], returncode=1)
@@ -222,6 +241,16 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = 'tests/fixtures/v2-full'
assert self.dispatch(['config', '--quiet']).stdout == ''
def test_config_with_hash_option(self):
self.base_dir = 'tests/fixtures/v2-full'
result = self.dispatch(['config', '--hash=*'])
for service in self.project.get_services():
assert '{} {}\n'.format(service.name, service.config_hash) in result.stdout
svc = self.project.get_service('other')
result = self.dispatch(['config', '--hash=other'])
assert result.stdout == '{} {}\n'.format(svc.name, svc.config_hash)
def test_config_default(self):
self.base_dir = 'tests/fixtures/v2-full'
result = self.dispatch(['config'])
@@ -242,7 +271,7 @@ class CLITestCase(DockerClientTestCase):
'volumes_from': ['service:other:rw'],
},
'other': {
'image': 'busybox:latest',
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'volumes': ['/data'],
},
@@ -293,6 +322,51 @@ class CLITestCase(DockerClientTestCase):
}
}
def test_config_with_dot_env(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['config'])
json_result = yaml.load(result.stdout)
assert json_result == {
'services': {
'web': {
'command': 'true',
'image': 'alpine:latest',
'ports': ['5643/tcp', '9999/tcp']
}
},
'version': '2.4'
}
def test_config_with_env_file(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['--env-file', '.env2', 'config'])
json_result = yaml.load(result.stdout)
assert json_result == {
'services': {
'web': {
'command': 'false',
'image': 'alpine:latest',
'ports': ['5644/tcp', '9998/tcp']
}
},
'version': '2.4'
}
def test_config_with_dot_env_and_override_dir(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['--project-directory', 'alt/', 'config'])
json_result = yaml.load(result.stdout)
assert json_result == {
'services': {
'web': {
'command': 'echo uwu',
'image': 'alpine:3.10.1',
'ports': ['3341/tcp', '4449/tcp']
}
},
'version': '2.4'
}
def test_config_external_volume_v2(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes-v2.yml', 'config'])
@@ -485,7 +559,7 @@ class CLITestCase(DockerClientTestCase):
'services': {
'foo': {
'command': '/bin/true',
'image': 'alpine:3.7',
'image': 'alpine:3.10.1',
'scale': 3,
'restart': 'always:7',
'mem_limit': '300M',
@@ -552,15 +626,25 @@ class CLITestCase(DockerClientTestCase):
assert 'with_build' in running.stdout
assert 'with_image' in running.stdout
def test_ps_all(self):
self.project.get_service('simple').create_container(one_off='blahblah')
result = self.dispatch(['ps'])
assert 'simple-composefile_simple_run_' not in result.stdout
result2 = self.dispatch(['ps', '--all'])
assert 'simple-composefile_simple_run_' in result2.stdout
def test_pull(self):
result = self.dispatch(['pull'])
assert 'Pulling simple' in result.stderr
assert 'Pulling another' in result.stderr
assert 'done' in result.stderr
assert 'failed' not in result.stderr
def test_pull_with_digest(self):
result = self.dispatch(['-f', 'digest.yml', 'pull', '--no-parallel'])
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert 'Pulling simple ({})...'.format(BUSYBOX_IMAGE_WITH_TAG) in result.stderr
assert ('Pulling digest (busybox@'
'sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b520'
'04ee8502d)...') in result.stderr
@@ -571,12 +655,19 @@ class CLITestCase(DockerClientTestCase):
'pull', '--ignore-pull-failures', '--no-parallel']
)
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert 'Pulling simple ({})...'.format(BUSYBOX_IMAGE_WITH_TAG) in result.stderr
assert 'Pulling another (nonexisting-image:latest)...' in result.stderr
assert ('repository nonexisting-image not found' in result.stderr or
'image library/nonexisting-image:latest not found' in result.stderr or
'pull access denied for nonexisting-image' in result.stderr)
def test_pull_with_build(self):
result = self.dispatch(['-f', 'pull-with-build.yml', 'pull'])
assert 'Pulling simple' not in result.stderr
assert 'Pulling from_simple' not in result.stderr
assert 'Pulling another ...' in result.stderr
def test_pull_with_quiet(self):
assert self.dispatch(['pull', '--quiet']).stderr == ''
assert self.dispatch(['pull', '--quiet']).stdout == ''
@@ -602,15 +693,15 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = 'tests/fixtures/links-composefile'
result = self.dispatch(['pull', '--no-parallel', 'web'])
assert sorted(result.stderr.split('\n'))[1:] == [
'Pulling web (busybox:latest)...',
'Pulling web (busybox:1.27.2)...',
]
def test_pull_with_include_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
result = self.dispatch(['pull', '--no-parallel', '--include-deps', 'web'])
assert sorted(result.stderr.split('\n'))[1:] == [
'Pulling db (busybox:latest)...',
'Pulling web (busybox:latest)...',
'Pulling db (busybox:1.27.2)...',
'Pulling web (busybox:1.27.2)...',
]
def test_build_plain(self):
@@ -691,6 +782,27 @@ class CLITestCase(DockerClientTestCase):
]
assert not containers
@pytest.mark.xfail(True, reason='Flaky on local')
def test_build_rm(self):
containers = [
Container.from_ps(self.project.client, c)
for c in self.project.client.containers(all=True)
]
assert not containers
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', '--no-rm', 'simple'], returncode=0)
containers = [
Container.from_ps(self.project.client, c)
for c in self.project.client.containers(all=True)
]
assert containers
for c in self.project.client.containers(all=True):
self.addCleanup(self.project.client.remove_container, c, force=True)
def test_build_shm_size_build_option(self):
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-shm-size'
@@ -773,6 +885,13 @@ class CLITestCase(DockerClientTestCase):
assert 'does not exist, is not accessible, or is not a valid URL' in result.stderr
def test_build_parallel(self):
self.base_dir = 'tests/fixtures/build-multiple-composefile'
result = self.dispatch(['build', '--parallel'])
assert 'Successfully tagged build-multiple-composefile_a:latest' in result.stdout
assert 'Successfully tagged build-multiple-composefile_b:latest' in result.stdout
assert 'Successfully built' in result.stdout
def test_create(self):
self.dispatch(['create'])
service = self.project.get_service('simple')
@@ -911,11 +1030,11 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['down', '--rmi=local', '--volumes'])
assert 'Stopping v2-full_web_1' in result.stderr
assert 'Stopping v2-full_other_1' in result.stderr
assert 'Stopping v2-full_web_run_2' in result.stderr
assert 'Stopping v2-full_web_run_' in result.stderr
assert 'Removing v2-full_web_1' in result.stderr
assert 'Removing v2-full_other_1' in result.stderr
assert 'Removing v2-full_web_run_1' in result.stderr
assert 'Removing v2-full_web_run_2' in result.stderr
assert 'Removing v2-full_web_run_' in result.stderr
assert 'Removing v2-full_web_run_' in result.stderr
assert 'Removing volume v2-full_data' in result.stderr
assert 'Removing image v2-full_web' in result.stderr
assert 'Removing image busybox' not in result.stderr
@@ -972,11 +1091,15 @@ class CLITestCase(DockerClientTestCase):
def test_up_attached(self):
self.base_dir = 'tests/fixtures/echo-services'
result = self.dispatch(['up', '--no-color'])
simple_name = self.project.get_service('simple').containers(stopped=True)[0].name_without_project
another_name = self.project.get_service('another').containers(
stopped=True
)[0].name_without_project
assert 'simple_1 | simple' in result.stdout
assert 'another_1 | another' in result.stdout
assert 'simple_1 exited with code 0' in result.stdout
assert 'another_1 exited with code 0' in result.stdout
assert '{} | simple'.format(simple_name) in result.stdout
assert '{} | another'.format(another_name) in result.stdout
assert '{} exited with code 0'.format(simple_name) in result.stdout
assert '{} exited with code 0'.format(another_name) in result.stdout
@v2_only()
def test_up(self):
@@ -1041,6 +1164,22 @@ class CLITestCase(DockerClientTestCase):
]
assert len(remote_volumes) > 0
@v2_only()
def test_up_no_start_remove_orphans(self):
self.base_dir = 'tests/fixtures/v2-simple'
self.dispatch(['up', '--no-start'], None)
services = self.project.get_services()
stopped = reduce((lambda prev, next: prev.containers(
stopped=True) + next.containers(stopped=True)), services)
assert len(stopped) == 2
self.dispatch(['-f', 'one-container.yml', 'up', '--no-start', '--remove-orphans'], None)
stopped2 = reduce((lambda prev, next: prev.containers(
stopped=True) + next.containers(stopped=True)), services)
assert len(stopped2) == 1
@v2_only()
def test_up_no_ansi(self):
self.base_dir = 'tests/fixtures/v2-simple'
@@ -1313,7 +1452,7 @@ class CLITestCase(DockerClientTestCase):
if v['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert set([v['Name'].split('/')[-1] for v in volumes]) == set([volume_with_label])
assert set([v['Name'].split('/')[-1] for v in volumes]) == {volume_with_label}
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@@ -1680,11 +1819,12 @@ class CLITestCase(DockerClientTestCase):
def test_run_rm(self):
self.base_dir = 'tests/fixtures/volume'
proc = start_process(self.base_dir, ['run', '--rm', 'test'])
service = self.project.get_service('test')
wait_on_condition(ContainerStateCondition(
self.project.client,
'volume_test_run_1',
'running'))
service = self.project.get_service('test')
'volume_test_run_*',
'running')
)
containers = service.containers(one_off=OneOffFilter.only)
assert len(containers) == 1
mounts = containers[0].get('Mounts')
@@ -1977,7 +2117,7 @@ class CLITestCase(DockerClientTestCase):
for _, config in networks.items():
# TODO: once we drop support for API <1.24, this can be changed to:
# assert config['Aliases'] == [container.short_id]
aliases = set(config['Aliases'] or []) - set([container.short_id])
aliases = set(config['Aliases'] or []) - {container.short_id}
assert not aliases
@v2_only()
@@ -1997,7 +2137,7 @@ class CLITestCase(DockerClientTestCase):
for _, config in networks.items():
# TODO: once we drop support for API <1.24, this can be changed to:
# assert config['Aliases'] == [container.short_id]
aliases = set(config['Aliases'] or []) - set([container.short_id])
aliases = set(config['Aliases'] or []) - {container.short_id}
assert not aliases
assert self.lookup(container, 'app')
@@ -2007,39 +2147,39 @@ class CLITestCase(DockerClientTestCase):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'running'))
os.kill(proc.pid, signal.SIGINT)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'exited'))
def test_run_handles_sigterm(self):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'running'))
os.kill(proc.pid, signal.SIGTERM)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'exited'))
def test_run_handles_sighup(self):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'running'))
os.kill(proc.pid, signal.SIGHUP)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'exited'))
@mock.patch.dict(os.environ)
@@ -2162,6 +2302,7 @@ class CLITestCase(DockerClientTestCase):
def test_start_no_containers(self):
result = self.dispatch(['start'], returncode=1)
assert 'failed' in result.stderr
assert 'No containers to start' in result.stderr
@v2_only()
@@ -2232,6 +2373,7 @@ class CLITestCase(DockerClientTestCase):
assert 'another' in result.stdout
assert 'exited with code 0' in result.stdout
@pytest.mark.skip(reason="race condition between up and logs")
def test_logs_follow_logs_from_new_containers(self):
self.base_dir = 'tests/fixtures/logs-composefile'
self.dispatch(['up', '-d', 'simple'])
@@ -2239,20 +2381,47 @@ class CLITestCase(DockerClientTestCase):
        proc = start_process(self.base_dir, ['logs', '-f'])

        self.dispatch(['up', '-d', 'another'])
-        wait_on_condition(ContainerStateCondition(
-            self.project.client,
-            'logs-composefile_another_1',
-            'exited'))
+        another_name = self.project.get_service('another').get_container().name_without_project
+        wait_on_condition(
+            ContainerStateCondition(
+                self.project.client,
+                'logs-composefile_another_*',
+                'exited'
+            )
+        )

+        simple_name = self.project.get_service('simple').get_container().name_without_project
        self.dispatch(['kill', 'simple'])

        result = wait_on_process(proc)

        assert 'hello' in result.stdout
        assert 'test' in result.stdout
-        assert 'logs-composefile_another_1 exited with code 0' in result.stdout
-        assert 'logs-composefile_simple_1 exited with code 137' in result.stdout
+        assert '{} exited with code 0'.format(another_name) in result.stdout
+        assert '{} exited with code 137'.format(simple_name) in result.stdout

+    @pytest.mark.skip(reason="race condition between up and logs")
    def test_logs_follow_logs_from_restarted_containers(self):
        self.base_dir = 'tests/fixtures/logs-restart-composefile'
        proc = start_process(self.base_dir, ['up'])

-        wait_on_condition(ContainerStateCondition(
-            self.project.client,
-            'logs-restart-composefile_another_1',
-            'exited'))
+        wait_on_condition(
+            ContainerStateCondition(
+                self.project.client,
+                'logs-restart-composefile_another_*',
+                'exited'
+            )
+        )
        self.dispatch(['kill', 'simple'])

        result = wait_on_process(proc)

        assert result.stdout.count(
            r'logs-restart-composefile_another_1 exited with code 1'
        ) == 3
        assert result.stdout.count('world') == 3

+    @pytest.mark.skip(reason="race condition between up and logs")
    def test_logs_default(self):
        self.base_dir = 'tests/fixtures/logs-composefile'
        self.dispatch(['up', '-d'])
@@ -2276,17 +2445,17 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['up', '-d'])
        result = self.dispatch(['logs', '-f', '-t'])
-        assert re.search('(\d{4})-(\d{2})-(\d{2})T(\d{2})\:(\d{2})\:(\d{2})', result.stdout)
+        assert re.search(r'(\d{4})-(\d{2})-(\d{2})T(\d{2})\:(\d{2})\:(\d{2})', result.stdout)

    def test_logs_tail(self):
        self.base_dir = 'tests/fixtures/logs-tail-composefile'
        self.dispatch(['up'])

        result = self.dispatch(['logs', '--tail', '2'])
-        assert 'c\n' in result.stdout
-        assert 'd\n' in result.stdout
-        assert 'a\n' not in result.stdout
-        assert 'b\n' not in result.stdout
+        assert 'y\n' in result.stdout
+        assert 'z\n' in result.stdout
+        assert 'w\n' not in result.stdout
+        assert 'x\n' not in result.stdout

    def test_kill(self):
        self.dispatch(['up', '-d'], None)
@@ -2379,10 +2548,12 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['up', '-d'])
        assert len(project.get_service('web').containers()) == 2
        assert len(project.get_service('db').containers()) == 1
+        assert len(project.get_service('worker').containers()) == 0

-        self.dispatch(['up', '-d', '--scale', 'web=3'])
+        self.dispatch(['up', '-d', '--scale', 'web=3', '--scale', 'worker=1'])
        assert len(project.get_service('web').containers()) == 3
        assert len(project.get_service('db').containers()) == 1
+        assert len(project.get_service('worker').containers()) == 1

    def test_up_scale_scale_down(self):
        self.base_dir = 'tests/fixtures/scale'
@@ -2391,22 +2562,26 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['up', '-d'])
        assert len(project.get_service('web').containers()) == 2
        assert len(project.get_service('db').containers()) == 1
+        assert len(project.get_service('worker').containers()) == 0

        self.dispatch(['up', '-d', '--scale', 'web=1'])
        assert len(project.get_service('web').containers()) == 1
        assert len(project.get_service('db').containers()) == 1
+        assert len(project.get_service('worker').containers()) == 0

    def test_up_scale_reset(self):
        self.base_dir = 'tests/fixtures/scale'
        project = self.project

-        self.dispatch(['up', '-d', '--scale', 'web=3', '--scale', 'db=3'])
+        self.dispatch(['up', '-d', '--scale', 'web=3', '--scale', 'db=3', '--scale', 'worker=3'])
        assert len(project.get_service('web').containers()) == 3
        assert len(project.get_service('db').containers()) == 3
+        assert len(project.get_service('worker').containers()) == 3

        self.dispatch(['up', '-d'])
        assert len(project.get_service('web').containers()) == 2
        assert len(project.get_service('db').containers()) == 1
+        assert len(project.get_service('worker').containers()) == 0

    def test_up_scale_to_zero(self):
        self.base_dir = 'tests/fixtures/scale'
@@ -2415,10 +2590,12 @@ class CLITestCase(DockerClientTestCase):
        self.dispatch(['up', '-d'])
        assert len(project.get_service('web').containers()) == 2
        assert len(project.get_service('db').containers()) == 1
+        assert len(project.get_service('worker').containers()) == 0

-        self.dispatch(['up', '-d', '--scale', 'web=0', '--scale', 'db=0'])
+        self.dispatch(['up', '-d', '--scale', 'web=0', '--scale', 'db=0', '--scale', 'worker=0'])
        assert len(project.get_service('web').containers()) == 0
        assert len(project.get_service('db').containers()) == 0
+        assert len(project.get_service('worker').containers()) == 0

    def test_port(self):
        self.base_dir = 'tests/fixtures/ports-composefile'
@@ -2460,9 +2637,9 @@ class CLITestCase(DockerClientTestCase):
            result = self.dispatch(['port', '--index=' + str(index), 'simple', str(number)])
            return result.stdout.rstrip()

-        assert get_port(3000) == containers[0].get_local_port(3000)
-        assert get_port(3000, index=1) == containers[0].get_local_port(3000)
-        assert get_port(3000, index=2) == containers[1].get_local_port(3000)
+        assert get_port(3000) in (containers[0].get_local_port(3000), containers[1].get_local_port(3000))
+        assert get_port(3000, index=containers[0].number) == containers[0].get_local_port(3000)
+        assert get_port(3000, index=containers[1].number) == containers[1].get_local_port(3000)
        assert get_port(3002) == ""

    def test_events_json(self):
@@ -2498,7 +2675,7 @@ class CLITestCase(DockerClientTestCase):
        container, = self.project.containers()
        expected_template = ' container {} {}'
-        expected_meta_info = ['image=busybox:latest', 'name=simple-composefile_simple_1']
+        expected_meta_info = ['image=busybox:1.27.2', 'name=simple-composefile_simple_']

        assert expected_template.format('create', container.id) in lines[0]
        assert expected_template.format('start', container.id) in lines[1]
@@ -2570,7 +2747,7 @@ class CLITestCase(DockerClientTestCase):
        self.base_dir = 'tests/fixtures/extends'
        self.dispatch(['up', '-d'], None)

-        assert set([s.name for s in self.project.services]) == set(['mydb', 'myweb'])
+        assert set([s.name for s in self.project.services]) == {'mydb', 'myweb'}

        # Sort by name so we get [db, web]
        containers = sorted(
@@ -2580,14 +2757,11 @@ class CLITestCase(DockerClientTestCase):
        assert len(containers) == 2
        web = containers[1]
+        db_name = containers[0].name_without_project

-        assert set(get_links(web)) == set(['db', 'mydb_1', 'extends_mydb_1'])
+        assert set(get_links(web)) == {'db', db_name, 'extends_{}'.format(db_name)}

-        expected_env = set([
-            "FOO=1",
-            "BAR=2",
-            "BAZ=2",
-        ])
+        expected_env = {"FOO=1", "BAR=2", "BAZ=2"}
        assert expected_env <= set(web.get('Config.Env'))

    def test_top_services_not_running(self):
@@ -2614,17 +2788,27 @@ class CLITestCase(DockerClientTestCase):
        self.base_dir = 'tests/fixtures/exit-code-from'
        proc = start_process(
            self.base_dir,
-            ['up', '--abort-on-container-exit', '--exit-code-from', 'another'])
+            ['up', '--abort-on-container-exit', '--exit-code-from', 'another']
+        )

        result = wait_on_process(proc, returncode=1)

        assert 'exit-code-from_another_1 exited with code 1' in result.stdout

+    def test_exit_code_from_signal_stop(self):
+        self.base_dir = 'tests/fixtures/exit-code-from'
+        proc = start_process(
+            self.base_dir,
+            ['up', '--abort-on-container-exit', '--exit-code-from', 'simple']
+        )
+        result = wait_on_process(proc, returncode=137)  # SIGKILL
+        name = self.project.get_service('another').containers(stopped=True)[0].name_without_project
+        assert '{} exited with code 1'.format(name) in result.stdout

    def test_images(self):
        self.project.get_service('simple').create_container()
        result = self.dispatch(['images'])
        assert 'busybox' in result.stdout
-        assert 'simple-composefile_simple_1' in result.stdout
+        assert 'simple-composefile_simple_' in result.stdout

    def test_images_default_composefile(self):
        self.base_dir = 'tests/fixtures/multiple-composefiles'
@@ -2632,8 +2816,8 @@ class CLITestCase(DockerClientTestCase):
        result = self.dispatch(['images'])

        assert 'busybox' in result.stdout
-        assert 'multiple-composefiles_another_1' in result.stdout
-        assert 'multiple-composefiles_simple_1' in result.stdout
+        assert '_another_1' in result.stdout
+        assert '_simple_1' in result.stdout

    @mock.patch.dict(os.environ)
    def test_images_tagless_image(self):
@@ -2672,3 +2856,13 @@ class CLITestCase(DockerClientTestCase):
        with pytest.raises(DuplicateOverrideFileFound):
            get_project(self.base_dir, [])
        self.base_dir = None

+    def test_images_use_service_tag(self):
+        pull_busybox(self.client)
+        self.base_dir = 'tests/fixtures/images-service-tag'
+        self.dispatch(['up', '-d', '--build'])
+        result = self.dispatch(['images'])
+
+        assert re.search(r'foo1.+test[ \t]+dev', result.stdout) is not None
+        assert re.search(r'foo2.+test[ \t]+prod', result.stdout) is not None
+        assert re.search(r'foo3.+test[ \t]+latest', result.stdout) is not None


@@ -1,6 +1,6 @@
 simple:
-  image: busybox:latest
+  image: busybox:1.31.0-uclibc
   command: top
 another:
-  image: busybox:latest
+  image: busybox:1.31.0-uclibc
   command: top


@@ -1,6 +1,6 @@
 simple:
-  image: busybox:latest
+  image: busybox:1.31.0-uclibc
   command: top
 another:
-  image: busybox:latest
+  image: busybox:1.31.0-uclibc
   command: ls .


@@ -1,6 +1,6 @@
 simple:
-  image: busybox:latest
+  image: busybox:1.31.0-uclibc
   command: top
 another:
-  image: busybox:latest
+  image: busybox:1.31.0-uclibc
   command: ls /thecakeisalie


@@ -1,4 +1,4 @@
-FROM busybox:latest
+FROM busybox:1.31.0-uclibc
 LABEL com.docker.compose.test_image=true
 ARG favorite_th_character
 RUN echo "Favorite Touhou Character: ${favorite_th_character}"


@@ -1,3 +1,3 @@
-FROM busybox:latest
+FROM busybox:1.31.0-uclibc
 LABEL com.docker.compose.test_image=true
 CMD echo "success"


@@ -1,4 +1,4 @@
-FROM busybox
+FROM busybox:1.31.0-uclibc
 # Report the memory (through the size of the group memory)
 RUN echo "memory:" $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)


@@ -0,0 +1,4 @@
+FROM busybox:1.31.0-uclibc
+RUN echo a
+CMD top


@@ -0,0 +1,4 @@
FROM busybox:1.31.0-uclibc
RUN echo b
CMD top


@@ -0,0 +1,8 @@
version: "2"
services:
a:
build: ./a
b:
build: ./b


@@ -1,7 +1,7 @@
 version: '3.5'
 services:
   foo:
-    image: alpine:3.7
+    image: alpine:3.10.1
     command: /bin/true
     deploy:
       replicas: 3

tests/fixtures/default-env-file/.env2

@@ -0,0 +1,4 @@
+IMAGE=alpine:latest
+COMMAND=false
+PORT1=5644
+PORT2=9998


@@ -0,0 +1,4 @@
+IMAGE=alpine:3.10.1
+COMMAND=echo uwu
+PORT1=3341
+PORT2=4449
