Compare commits


113 Commits
1.24.1 ... py2

Author SHA1 Message Date
Nicolas De Loof
9ad10575d1 Prepare drop of python 2.x support
see https://github.com/docker/compose/issues/6890

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-11-20 16:00:53 +01:00
Ulysses Souza
2887d82d16 Merge pull request #6982 from smamessier/fix_non_ascii_error
Fixed non-ASCII error when using COMPOSE_DOCKER_CLI_BUILD=1 for BuildKit
2019-11-18 16:45:04 +01:00
Ulysses Souza
2919bebea4 Fix non-ASCII chars error (Python 2 only)
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-11-18 15:43:50 +01:00
Djordje Lukic
5478c966f1 Merge pull request #7008 from zelahi/fix-readme-link
Fixed broken README link for common use cases
2019-11-07 09:59:49 +01:00
Zuhayr Elahi
e546533cfe Fixed broken README link for common use cases
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-11-06 17:10:48 -08:00
Jean-Christophe Sirot
abef11b2a6 Merge pull request #6996 from ajlai/fix-color-order-and-remove-red
Make container service color deterministic, remove red from chosen colors
2019-11-06 16:11:57 +01:00
Anthony Lai
802fa20228 Make container service color deterministic, remove red from chosen colors
Signed-off-by: Anthony Lai <anthonyjlai@gmail.com>
2019-11-03 23:44:31 +00:00
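A minimal sketch of the idea behind this change (`assign_colors` is a hypothetical helper; the palette matches the colors.py hunk further down, with red removed):

```python
import itertools

# With the palette fixed and the containers sorted by name before colors
# are handed out, the name -> color mapping is stable across runs instead
# of depending on container arrival order.
COLORS = ['cyan', 'yellow', 'green', 'magenta', 'blue',
          'intense_cyan', 'intense_yellow', 'intense_green',
          'intense_magenta', 'intense_blue']

def assign_colors(container_names):
    return dict(zip(sorted(container_names), itertools.cycle(COLORS)))
```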
Djordje Lukic
fa34ee7362 Merge pull request #6973 from glours/set_no_color_if_clicolor_defined_to_0
Set no-colors to true if CLICOLOR env variable is set to 0
2019-10-31 16:45:10 +01:00
Sebastien Mamessier
a3a23bf949 Fixed error when using startswith on a non-ASCII string
Signed-off-by: Sebastien Mamessier <smamessier@uber.com>
2019-10-30 13:57:08 +01:00
Jean-Christophe Sirot
cfc48f2c13 Merge pull request #6986 from rumpl/fix-unit-test-close-fd
Cleanup all open files
2019-10-28 16:07:18 +01:00
Djordje Lukic
f8142a899c Cleanup all open files
If the fd is not closed, the cleanup will fail on Windows.

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-10-28 15:36:05 +01:00
Guillaume Lours
2e7493a889 Set no-colors to true if CLICOLOR env variable is set to 0
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-21 11:37:46 +02:00
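The helper this change introduces is tiny; a self-contained version matching the main.py hunk further down:

```python
import os

def set_no_color_if_clicolor(no_color_flag):
    # CLICOLOR=0 is a common CLI convention for disabling colored output;
    # honor it even when --no-ansi/--no-color was not passed explicitly.
    return no_color_flag or os.environ.get('CLICOLOR') == "0"
```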
Jean-Christophe Sirot
4be2fa010a Merge pull request #6972 from glours/align_image_size_display_to_docker_cli
Format image size as decimal to align with the Docker CLI
2019-10-18 15:26:15 +02:00
Guillaume Lours
386bdda246 Format image size as decimal to align with the Docker CLI
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-18 12:50:38 +02:00
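A runnable sketch of the decimal formatting, mirroring the utils.py hunk below (1 kB = 1000 B, as the Docker CLI reports sizes, instead of the binary 1 KiB = 1024 B):

```python
import math

def human_readable_file_size(size):
    suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB']
    # log base 1000 picks the decimal order of magnitude, e.g. 2065 -> kB
    order = int(math.log(size, 1000)) if size else 0
    order = min(order, len(suffixes) - 1)
    return '{0:.4g} {1}'.format(size / pow(10, order * 3), suffixes[order])
```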
okor
17bbbba7d6 update docker-py
Signed-off-by: Jason Ormand <jason.ormand1@gmail.com>
2019-10-18 09:37:24 +02:00
Nicolas De Loof
1ca10f90fb Fix acceptance tests
tty is now (correctly) reported to have 80 columns, which splits the
service ID across two lines

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-16 14:31:27 +02:00
Nicolas De Loof
452880af7c Use python Posix support to get tty size
stty is not portable outside *nix.
Note: shutil.get_terminal_size requires Python 3.3+

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-16 14:31:27 +02:00
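A condensed sketch of the portable lookup (the actual formatter.py hunk below gates on `hasattr(shutil, "get_terminal_size")` rather than a try/except import):

```python
try:
    from shutil import get_terminal_size  # Python 3.3+
except ImportError:
    # Python 2: same API via the backport package
    from backports.shutil_get_terminal_size import get_terminal_size

def get_tty_width():
    try:
        width, _ = get_terminal_size()  # returns (columns, lines)
        return int(width)
    except OSError:
        return 0
```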
Guillaume LOURS
944660048d Merge pull request #6964 from guillaumerose/addmorelabels
Add working dir, config files and env file in service labels
2019-10-15 10:06:42 +02:00
Guillaume Rose
dbe4d7323e Add working dir, config files and env file in service labels
Signed-off-by: Guillaume Rose <guillaume.rose@docker.com>
2019-10-15 09:18:09 +02:00
Guillaume Rose
1678a4fbe4 Run CI on amd64
Signed-off-by: Guillaume Rose <guillaume.rose@docker.com>
2019-10-14 22:01:04 +02:00
Guillaume LOURS
4e83bafec6 Merge pull request #6955 from ndeloof/paramiko
Bump paramiko to 2.6.0
2019-10-10 10:59:44 +02:00
Nicolas De Loof
8973a940e6 Bump paramiko to 2.6.0
close #6953

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-10 08:55:15 +02:00
Zuhayr Elahi
8835056ce4 UPDATED log message
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-10-10 07:08:42 +02:00
Zuhayr Elahi
3135a0a839 Added log message to check compose file
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-10-10 07:08:42 +02:00
Guillaume Lours
cdae06a89c exclude issues flagged with kind/feature from the stale process
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 21:51:34 +02:00
Guillaume Lours
79bf9ed652 correct invalid yaml indentation
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 21:10:18 +02:00
Nicolas De loof
29af1a84ca Merge pull request #6952 from glours/stale_configuration
Add config file for @probot/stale
2019-10-09 16:41:58 +02:00
Guillaume Lours
9375c15bad Add config file for @probot/stale
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 16:16:57 +02:00
Chris Crone
8ebb1a6f19 Merge pull request #6949 from jcsirot/fix-pushbin-script-verbosity
Remove set -x to make this script less verbose
2019-10-09 12:04:24 +02:00
Jean-Christophe Sirot
37be2ad9cd Remove set -x to make this script less verbose
Signed-off-by: Jean-Christophe Sirot <jean-christophe.sirot@docker.com>
2019-10-09 10:51:17 +02:00
Nicolas De loof
6fe35498a5 Add dependencies for ARM build (#6908)
Add dependencies for ARM build
2019-10-09 09:38:58 +02:00
Stefan Scherer
ce52f597a0 Enhance build script for different CPU architectures
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2019-10-09 09:11:29 +02:00
Stefan Scherer
79f29dda23 Add dependencies for ARM build
Signed-off-by: Stefan Scherer <scherer_stefan@icloud.com>
2019-10-09 09:11:29 +02:00
Nicolas De loof
7172849913 Fix "extends" same file optimization (#6425)
Fix "extends" same file optimization
2019-10-09 08:50:54 +02:00
Aleksandr Mezin
c24b7b6464 Fix same file 'extends' optimization
Signed-off-by: Aleksandr Mezin <mezin.alexander@gmail.com>
2019-10-09 11:36:17 +06:00
Aleksandr Mezin
74f892de95 Add test to verify same file 'extends' optimization
Signed-off-by: Aleksandr Mezin <mezin.alexander@gmail.com>
2019-10-09 11:36:17 +06:00
Nicolas De loof
09acc5febf [TAR-995] ADDED a stage for executing License Scans (#6875)
[TAR-995] ADDED a stage for executing License Scans
2019-10-08 16:25:28 +02:00
Nicolas De loof
1f16a7929d Merge pull request #6864 from samueljsb/formatter_class
Change Formatter.table method to staticmethod
2019-10-08 16:24:40 +02:00
Nicolas De loof
f9113202e8 Add automatic labeling of bug, feature & question issues (#6944)
Add automatic labeling of bug, feature & question issues
2019-10-08 16:23:15 +02:00
Nicolas De loof
5f2161cad9 Merge pull request #6912 from cranzy/fixing_broken_link
Fixing features broken link
2019-10-08 16:19:31 +02:00
Guillaume LOURS
70f8e38b1d Add automatic labeling of bug, feature & question issues
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-08 11:07:04 +02:00
Ulysses Souza
186aa6e5c3 Merge pull request #6914 from lukas9393/6913-progress-arg
Fix --progress arg when running docker-compose build
2019-10-07 12:28:49 +02:00
Guillaume LOURS
bc57a1bd54 Merge pull request #6925 from ulyssessouza/fix-secrets-warning-message
Fix secret missing warning
2019-09-27 10:51:37 +02:00
ulyssessouza
eca358e2f0 Fix secret missing warning
Signed-off-by: ulyssessouza <ulyssessouza@gmail.com>
2019-09-27 09:10:49 +02:00
Lukas Hettwer
32ac6edb86 Fix --progress arg when running docker-compose build
--progress is no longer processed as a flag but as an argument with a value.

Signed-off-by: Lukas Hettwer <lukas.hettwer@aboutyou.de>

Resolve: [#6913]
2019-09-24 16:02:12 +02:00
Dimitar Dimitrov
475f8199f7 Fixing features broken link
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@docker.com>
2019-09-24 13:31:30 +03:00
Zuhayr Elahi
98d7cc8d0c ADDED a stage for executing License Scans
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-09-13 14:25:06 -07:00
Ulysses Souza
d7c7e21921 Merge pull request #6131 from sagarafr/fix-5920-missing-secret-message
Add a warning message to secret file
2019-09-09 17:45:08 +02:00
Ulysses Souza
70ead597d2 Add tests to 'get_secret' warnings
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-09-09 10:04:05 +02:00
Marian Gappa
b9092cacdb Fix missing secret error message
Add a warning message when the secret file doesn't exist

Fixes #5920

Signed-off-by: Marian Gappa <marian.gappa@gmail.com>
2019-09-09 10:04:05 +02:00
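A minimal sketch of the warning, assuming a file-backed secret; `check_secret_file` is a hypothetical name, not the helper used in the commit:

```python
import logging
import os

log = logging.getLogger(__name__)

def check_secret_file(path):
    # Warn when a file-based secret points at a missing path, instead of
    # failing later with no hint about which file was expected.
    if not os.path.exists(path):
        log.warning("Secret file %s does not exist", path)
```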
Silvin Lubecki
1566930a70 Merge pull request #6862 from deathtracktor/master
Fix KeyError when remote network labels are None.
2019-09-06 11:13:48 +02:00
Danil Kister
a5fbf91b72 Prevent KeyError when remote network labels are None.
Signed-off-by: Danil Kister <danil.kister@gmail.com>
2019-09-05 21:36:10 +02:00
Ulysses Souza
ecf03fe280 Merge pull request #6882 from ulyssessouza/fix_attach_restarting_container
Fix race condition on watch_events
2019-09-05 16:46:14 +02:00
Ulysses Souza
47d170b06a Fix race condition on watch_events
Avoid attaching to restarting containers and ignore
race conditions when trying to attach to
already-dead containers

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-09-04 17:55:05 +02:00
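A condensed sketch of the reattach logic from the log_printer hunk below (`reattach_log_stream` is a hypothetical wrapper around the calls the diff makes inline):

```python
from docker.errors import APIError

def reattach_log_stream(container):
    # Skip containers that are mid-restart, and tolerate the race where a
    # container dies between the crash event and the attach call.
    if container.is_restarting:
        return
    try:
        container.attach_log_stream()
    except APIError:
        pass  # already gone; nothing to attach to
```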
Chris Crone
9973f051ba Merge pull request #6878 from ulyssessouza/bump-debian
Bump runtime debian
2019-08-30 16:42:56 +02:00
Ulysses Souza
2199278b44 Merge pull request #6865 from ulyssessouza/support-cli-build
Add support to CLI build
2019-08-30 13:46:21 +02:00
Ulysses Souza
5add9192ac Rename envvar switch to COMPOSE_DOCKER_CLI_BUILD
From `COMPOSE_NATIVE_BUILDER` to `COMPOSE_DOCKER_CLI_BUILD`

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-30 12:11:09 +02:00
Ulysses Souza
0c6fce271e Bump runtime debian
From `stretch-20190708-slim` to `stretch-20190812-slim`

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 17:45:21 +02:00
Ulysses Souza
9d7ad3bac1 Add comment on native build and fix typo
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 16:30:50 +02:00
Nao YONASHIRO
719a1b0581 fix: use subprocess32 for python2
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-29 14:21:19 +02:00
Ulysses Souza
bbdb3cab88 Add integration tests to native builder
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 09:31:16 +02:00
Ulysses Souza
ee8ca5d6f8 Rephrase warnings when building with the cli
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-28 17:24:15 +02:00
Nao YONASHIRO
15e8edca3c feat: add a warning if someone uses the --compress or --parallel flag
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:15 +02:00
Nao YONASHIRO
81e223d499 feat: add --progress flag
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:14 +02:00
Nao YONASHIRO
862a13b8f3 fix: add build flags
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:14 +02:00
Nao YONASHIRO
cacbcccc0c Add support to CLI build
This can be enabled by setting the env var
`COMPOSE_NATIVE_BUILDER=1`.

Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-28 17:24:14 +02:00
Samuel Searles-Bryant
672ced8742 Change Formatter.table method to staticmethod
Make this a staticmethod so it's easier to use without needing to init a
Formatter object first.

Signed-off-by: Samuel Searles-Bryant <samuel.searles-bryant@unipart.io>
2019-08-22 14:25:15 +01:00
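A self-contained sketch of the reworked class (the real method sizes the table to the tty width; the fixed 80 columns here is a simplification):

```python
import texttable

class Formatter:
    """Format tabular data for printing."""

    @staticmethod
    def table(headers, rows):
        # No instance state is involved, so callers can write
        # Formatter.table(...) instead of Formatter().table(...).
        tbl = texttable.Texttable(max_width=80)
        tbl.set_cols_dtype(['t' for _ in headers])
        tbl.add_rows([headers] + rows)
        return tbl.draw()
```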
Djordje Lukic
4cfa622de8 Merge pull request #6631 from chibby0ne/update_jsonschema_dependency
requirements: update jsonschema dependency
2019-08-22 12:54:48 +02:00
Ulysses Souza
525bc9ef7a Merge pull request #6856 from aiordache/bump-alpine
update alpine version to 3.10.1
2019-08-21 15:49:14 +02:00
aiordache
60dcf87cc0 update alpine version to 3.10.1
Signed-off-by: aiordache <anca.iordache@docker.com>
2019-08-20 12:10:26 +02:00
Jean-Christophe Sirot
cf3c07d6ee Merge pull request #6826 from ulyssessouza/env_override_integration_test
Add integration tests regarding environment
2019-07-31 14:15:53 +02:00
Ulysses Souza
b03889ac2a Add integration tests regarding environment
This covers what was included in #6800

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-31 02:09:41 +02:00
Antonio Gutierrez
66856e884c requirements: update jsonschema dependency
Fixes: https://github.com/docker/compose/issues/6347

Signed-off-by: Antonio Gutierrez <chibby0ne@gmail.com>
2019-07-27 21:43:40 +02:00
Djordje Lukic
7a7c9ff67a Merge pull request #6800 from KlaasH/revise-env-file-option
Make '--env-file' option top-level only and fix failure with subcommands
2019-07-25 12:15:11 +02:00
Klaas Hoekema
413e5db7b3 Add shell completions for --env-file option
Adds completions for the --env-file toplevel option to the bash, fish,
and zsh completions files.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:10 -04:00
Klaas Hoekema
69c0683bfe Pass toplevel_environment to run_one_off_container
Instead of passing `project_dir` from `TopLevelCommand.run` to
`run_one_off_container` then using it there to load the toplevel
environment (duplicating the logic that `TopLevelCommand.toplevel_environment`
encapsulates), pass the Environment object.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:10 -04:00
Klaas Hoekema
088a798e7a Fix typo in 'split_env' error message
Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:10 -04:00
Klaas Hoekema
35eb40424c Call TopLevelCommand's environment 'toplevel_environment'
To help prevent confusion between the different meanings and sources
of "environment", rename the method that loads the environment from
the .env or --env-file (i.e. the one that applies at a project level)
to 'toplevel_environment'.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:25:05 -04:00
Klaas Hoekema
99464d9c2b Handle environment file override within TopLevelCommand
Several (but not all) of the subcommands accept and process the
`--env-file` option, but only because they need to look for a specific
value in the environment. The work of applying the override makes more
sense as the domain of TopLevelCommand, and moving it there and removing
the option from the subcommands makes things simpler.

Signed-off-by: Klaas Hoekema <khoekema@azavea.com>
2019-07-24 09:24:06 -04:00
Silvin Lubecki
cd8e2f870f Merge pull request #6813 from ulyssessouza/fix_stdin_open
Fix stdin_open when running docker-compose run
2019-07-24 11:20:20 +02:00
Ulysses Souza
c641ea08ae Fix stdin_open when running docker-compose run
This fix makes sure that stdin_open specified in the service
is considered when shelling out to the CLI

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-22 17:27:10 +02:00
Jean-Christophe Sirot
d285ba6aee Merge pull request #6803 from ulyssessouza/pin-image-tags
Pin test images on a non rolling tag
2019-07-19 16:35:22 +02:00
Ulysses Souza
cd098e0cad Pin test images on a non rolling tag
Mainly busybox:latest to the current latest, which is 1.31.0-uclibc

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-18 11:10:37 +02:00
Djordje Lukic
d212fe68a6 Merge pull request #6728 from albers/completion-config--no-interpolate
Add bash completion for `config --no-interpolate`
2019-07-16 11:59:18 +02:00
Djordje Lukic
c8279bc4db Merge pull request #6738 from Inconnu08/set-optimization
Replace sets with set literal syntax for efficiency
2019-07-15 15:23:24 +02:00
Djordje Lukic
61aa2e346e Merge pull request #6797 from chris-crone/macos-bump-python-3.7.4
Bump macOS build dependency
2019-07-15 10:33:45 +02:00
Djordje Lukic
98932e9cb4 Merge pull request #6754 from Goryudyuma/6740-fix-display
fix: The correct number is displayed
2019-07-15 10:30:59 +02:00
Goryudyuma
59491c7d77 add: test for units
Signed-off-by: Kei Matsumoto <umaretekyoumade@gmail.com>
2019-07-14 04:31:16 +09:00
Kei Matsumoto
75d41edb94 fix: Add test
Signed-off-by: Kei Matsumoto <umaretekyoumade@gmail.com>
2019-07-14 03:38:29 +09:00
Goryudyuma
f9099c91ae fix: The correct number is displayed
Signed-off-by: Kei Matsumoto <umaretekyoumade@gmail.com>
2019-07-14 03:38:12 +09:00
Ulysses Souza
1b326fce57 Merge pull request #6720 from ijc/pass-env-to-docker-cli
Pass environment when calling through to docker cli.
2019-07-12 10:11:47 +02:00
Ulysses Souza
ca721728f6 Merge pull request #6588 from javabrett/6587-default-mand-interp-err
Default ?err to (missing) required VAR name. Fixed #6587.
2019-07-11 17:17:46 +02:00
Ulysses Souza
2e31ebba6a Merge pull request #6798 from chris-crone/linux-bump-deps
Bump Linux build dependencies
2019-07-11 16:09:58 +02:00
Christopher Crone
993bada521 Bump Linux build dependencies
* Python 3.7.2 to 3.7.4
* Docker 18.09.5 to 18.09.7
* Alpine 3.9.3 to 3.10.0
* Debian stretch-20190326 to stretch-20190708

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-07-10 17:32:13 +02:00
Christopher Crone
b0e7d801a3 Bump macOS build dependency
* Python 3.7.3 to 3.7.4

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-07-10 17:03:45 +02:00
Chris Crone
7258edb75d Merge pull request #6793 from chris-crone/bump-openssl-1.1.1c-python-3.7.3
Bump macOS build dependencies
2019-07-08 18:42:35 +02:00
Ulysses Souza
f9d1075a5d Merge pull request #6792 from ulyssessouza/bump-texttable
Bump texttable from 0.9.1 to 1.6.2
2019-07-08 15:23:31 +02:00
Ulysses Souza
a1c9d4925a Merge pull request #6791 from ulyssessouza/bump-mock
Bump mock from 2.0.0 to 3.0.5
2019-07-08 15:23:22 +02:00
Christopher Crone
3d80c8e86d Bump macOS build dependencies
* OpenSSL 1.1.1a to 1.1.1c
* Python 3.7.2 to 3.7.3

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-07-08 15:05:10 +02:00
Ulysses Souza
0bfa1c34f0 Bump texttable from 0.9.1 to 1.6.2
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-08 14:52:30 +02:00
Ulysses Souza
57a2bb0c50 Bump mock from 2.0.0 to 3.0.5
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-08 13:47:19 +02:00
Ulysses Souza
3d693f3733 Merge pull request #6778 from ulyssessouza/cleanup-setup_py-versioning
Strip up generic versions and bump requests
2019-07-03 17:31:15 +02:00
Ulysses Souza
ce5451c5b4 Strip up generic versions and bump requests
Replaces open-ended version constraints with a next-major upper bound
Bumps the minimum `requests` to 2.20.0

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-02 15:49:07 +02:00
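A hypothetical setup.py excerpt illustrating the pinning style described above:

```python
# An explicit minimum plus a "next major" ceiling, rather than an
# open-ended range (illustrative only; not the actual setup.py).
install_requires = [
    'requests >= 2.20.0, < 3',
]
```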
Ulysses Souza
df2e833cf0 Merge pull request #6777 from ulyssessouza/pin-busybox-image-version
Pin busybox image version in tests
2019-07-02 14:33:56 +02:00
Ulysses Souza
cacc9752a3 Pin busybox image version in tests
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-07-02 13:42:41 +02:00
Eli Uriegas
cf419dce4c Add .fossa.yml file (#6750)
Add .fossa.yml file
2019-06-17 10:23:44 -07:00
Dave Tucker
8c387c6013 Add .fossa.yml file
This commit adds a .fossa.yml file used by fossa.io
It allows fossa to scan the dependencies and figure out which OSS
licenses are in use. This can be added to CI at some point in the near
future.

Signed-off-by: Dave Tucker <dave@dtucker.co.uk>
2019-06-14 14:58:17 +01:00
Inconnu08
57055e0e66 Replace sets with set literal syntax for efficiency
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-06-02 20:21:21 +06:00
Inconnu08
c37fb783fe replace sets with set literal syntax for efficiency
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-06-01 01:31:35 +06:00
Inconnu08
b29b6a1538 replace sets with set literal syntax for efficiency
Signed-off-by: Taufiq Rahman <taufiqrx8@gmail.com>
2019-05-31 20:29:09 +06:00
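For reference, the two spellings these commits swap between:

```python
# Both build the same set, but the literal is compiled directly instead of
# constructing a throwaway list and passing it to the set() constructor.
from_call = set(['cyan', 'yellow', 'green'])
literal = {'cyan', 'yellow', 'green'}
assert from_call == literal
```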
Harald Albers
d68113f5c0 Add bash completion for config --no-interpolate
Signed-off-by: Harald Albers <github@albersweb.de>
2019-05-24 21:59:14 +02:00
Brett Randall
fb4d5aa7e6 Include required but missing VAR name and assignment in interpolation error message.
Error message format is now e.g.:

ERROR: Missing mandatory value for "environment" option interpolating ['MYENV=${MYVAR:?}'] in service "myservice":

Fixed #6587.

Signed-off-by: Brett Randall <javabrett@gmail.com>
2019-05-24 07:01:39 +10:00
Ian Campbell
9d2508cf58 Pass environment when calling through to docker cli.
This ensures that settings from any `.env` file (such as `DOCKER_HOST`) are
passed on to the cli.

Unit tests are adjusted for the new parameter and a new case is added to ensure
it is propagated as expected.

Fixes: 6661

Signed-off-by: Ian Campbell <ijc@docker.com>
2019-05-23 16:29:46 +01:00
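A simplified sketch of the change (the real `call_docker` also assembles TLS options from the top-level flags; see the main.py hunk below):

```python
import subprocess

def call_docker(args, environment):
    # Forward the project environment (including values loaded from .env,
    # such as DOCKER_HOST) to the docker CLI rather than only the parent
    # process environment.
    return subprocess.call(['docker'] + args, env=environment)
```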
144 changed files with 2065 additions and 747 deletions


@@ -13,7 +13,7 @@ jobs:
command: sudo pip install --upgrade tox==2.1.1 virtualenv==16.2.0
- run:
name: unit tests
command: tox -e py27,py36,py37 -- tests/unit
command: tox -e py27,py37 -- tests/unit
build-osx-binary:
macos:


@@ -1,6 +1,9 @@
---
name: Bug report
about: Report a bug encountered while using docker-compose
title: ''
labels: kind/bug
assignees: ''
---


@@ -1,6 +1,9 @@
---
name: Feature request
about: Suggest an idea to improve Compose
title: ''
labels: kind/feature
assignees: ''
---


@@ -1,6 +1,9 @@
---
name: Question about using Compose
about: This is not the appropriate channel
title: ''
labels: kind/question
assignees: ''
---

.github/stale.yml

@@ -0,0 +1,59 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 180
# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
daysUntilClose: 7
# Only issues or pull requests with all of these labels are checked if stale. Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
exemptLabels:
- kind/feature
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: false
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: false
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: true
# Label to use when marking as stale
staleLabel: stale
# Comment to post when marking as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when removing the stale label.
unmarkComment: >
This issue has been automatically unmarked as stale due to recent activity.
# Comment to post when closing a stale Issue or Pull Request.
closeComment: >
This issue has been automatically closed because it has had no recent activity during the stale period.
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30
# Limit to only `issues` or `pulls`
only: issues
# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
# pulls:
# daysUntilStale: 30
# markComment: >
# This pull request has been automatically marked as stale because it has not had
# recent activity. It will be closed if no further activity occurs. Thank you
# for your contributions.
# issues:
# exemptLabels:
# - confirmed


@@ -1,14 +1,7 @@
Change log
==========
1.24.1 (2019-06-24)
-------------------
### Bugfixes
- Fixed acceptance tests
1.24.0 (2019-03-22)
1.24.0 (2019-03-28)
-------------------
### Features


@@ -1,36 +1,74 @@
FROM docker:18.06.1 as docker
FROM python:3.6
ARG DOCKER_VERSION=18.09.7
ARG PYTHON_VERSION=3.7.4
ARG BUILD_ALPINE_VERSION=3.10
ARG BUILD_DEBIAN_VERSION=slim-stretch
ARG RUNTIME_ALPINE_VERSION=3.10.1
ARG RUNTIME_DEBIAN_VERSION=stretch-20190812-slim
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
locales \
python-dev \
git
ARG BUILD_PLATFORM=alpine
COPY --from=docker /usr/local/bin/docker /usr/local/bin/docker
FROM docker:${DOCKER_VERSION} AS docker-cli
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8
FROM python:${PYTHON_VERSION}-alpine${BUILD_ALPINE_VERSION} AS build-alpine
RUN apk add --no-cache \
bash \
build-base \
ca-certificates \
curl \
gcc \
git \
libc-dev \
libffi-dev \
libgcc \
make \
musl-dev \
openssl \
openssl-dev \
python2 \
python2-dev \
zlib-dev
ENV BUILD_BOOTLOADER=1
RUN useradd -d /home/user -m -s /bin/bash user
FROM python:${PYTHON_VERSION}-${BUILD_DEBIAN_VERSION} AS build-debian
RUN apt-get update && apt-get install --no-install-recommends -y \
curl \
gcc \
git \
libc-dev \
libffi-dev \
libgcc-6-dev \
libssl-dev \
make \
openssl \
python2.7-dev \
zlib1g-dev
FROM build-${BUILD_PLATFORM} AS build
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
WORKDIR /code/
# FIXME(chris-crone): virtualenv 16.3.0 breaks build, force 16.2.0 until fixed
RUN pip install virtualenv==16.2.0
RUN pip install tox==2.1.1
RUN pip install tox==2.9.1
ADD requirements.txt /code/
ADD requirements-dev.txt /code/
ADD .pre-commit-config.yaml /code/
ADD setup.py /code/
ADD tox.ini /code/
ADD compose /code/compose/
ADD README.md /code/
COPY requirements.txt .
COPY requirements-dev.txt .
COPY .pre-commit-config.yaml .
COPY tox.ini .
COPY setup.py .
COPY README.md .
COPY compose compose/
RUN tox --notest
COPY . .
ARG GIT_COMMIT=unknown
ENV DOCKER_COMPOSE_GITSHA=$GIT_COMMIT
RUN script/build/linux-entrypoint
ADD . /code/
RUN chown -R user /code/
ENTRYPOINT ["/code/.tox/py36/bin/docker-compose"]
FROM alpine:${RUNTIME_ALPINE_VERSION} AS runtime-alpine
FROM debian:${RUNTIME_DEBIAN_VERSION} AS runtime-debian
FROM runtime-${BUILD_PLATFORM} AS runtime
COPY docker-compose-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
COPY --from=build /usr/local/bin/docker-compose /usr/local/bin/docker-compose


@@ -1,39 +0,0 @@
FROM python:3.6
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
locales \
curl \
python-dev \
git
RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/armhf/docker-17.12.0-ce.tgz" && \
SHA256=f8de6378dad825b9fd5c3c2f949e791d22f918623c27a72c84fd6975a0e5d0a2; \
echo "${SHA256} dockerbins.tgz" | sha256sum -c - && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8
RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/
RUN pip install tox==2.1.1
ADD requirements.txt /code/
ADD requirements-dev.txt /code/
ADD .pre-commit-config.yaml /code/
ADD setup.py /code/
ADD tox.ini /code/
ADD compose /code/compose/
RUN tox --notest
ADD . /code/
RUN chown -R user /code/
ENTRYPOINT ["/code/.tox/py36/bin/docker-compose"]


@@ -1,19 +0,0 @@
FROM docker:18.06.1 as docker
FROM alpine:3.8
ENV GLIBC 2.28-r0
RUN apk update && apk add --no-cache openssl ca-certificates curl libgcc && \
curl -fsSL -o /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
curl -fsSL -o glibc-$GLIBC.apk https://github.com/sgerrand/alpine-pkg-glibc/releases/download/$GLIBC/glibc-$GLIBC.apk && \
apk add --no-cache glibc-$GLIBC.apk && \
ln -s /lib/libz.so.1 /usr/glibc-compat/lib/ && \
ln -s /lib/libc.musl-x86_64.so.1 /usr/glibc-compat/lib && \
ln -s /usr/lib/libgcc_s.so.1 /usr/glibc-compat/lib && \
rm /etc/apk/keys/sgerrand.rsa.pub glibc-$GLIBC.apk && \
apk del curl
COPY --from=docker /usr/local/bin/docker /usr/local/bin/docker
COPY dist/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
ENTRYPOINT ["docker-compose"]


@@ -1,4 +1,4 @@
FROM s390x/alpine:3.6
FROM s390x/alpine:3.10.1
ARG COMPOSE_VERSION=1.16.1

Jenkinsfile

@@ -1,29 +1,38 @@
#!groovy
def image
def buildImage = { ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("build image") {
def buildImage = { String baseImage ->
def image
wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
stage("build image for \"${baseImage}\"") {
checkout(scm)
def imageName = "dockerbuildbot/compose:${gitCommit()}"
def imageName = "dockerbuildbot/compose:${baseImage}-${gitCommit()}"
image = docker.image(imageName)
try {
image.pull()
} catch (Exception exc) {
image = docker.build(imageName, ".")
image.push()
sh """GIT_COMMIT=\$(script/build/write-git-sha) && \\
docker build -t ${imageName} \\
--target build \\
--build-arg BUILD_PLATFORM="${baseImage}" \\
--build-arg GIT_COMMIT="${GIT_COMMIT}" \\
.\\
"""
sh "docker push ${imageName}"
echo "${imageName}"
return imageName
}
}
}
echo "image.id: ${image.id}"
return image.id
}
def get_versions = { int number ->
def get_versions = { String imageId, int number ->
def docker_versions
wrappedNode(label: "ubuntu && !zfs") {
wrappedNode(label: "ubuntu && amd64 && !zfs") {
def result = sh(script: """docker run --rm \\
--entrypoint=/code/.tox/py27/bin/python \\
${image.id} \\
--entrypoint=/code/.tox/py37/bin/python \\
${imageId} \\
/code/script/test/versions.py -n ${number} docker/docker-ce recent
""", returnStdout: true
)
@@ -35,17 +44,19 @@ def get_versions = { int number ->
def runTests = { Map settings ->
def dockerVersions = settings.get("dockerVersions", null)
def pythonVersions = settings.get("pythonVersions", null)
def baseImage = settings.get("baseImage", null)
def imageName = settings.get("image", null)
if (!pythonVersions) {
throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py27,py36')`")
throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py37')`")
}
if (!dockerVersions) {
throw new Exception("Need Docker versions to test. e.g.: `runTests(dockerVersions: 'all')`")
}
{ ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("test python=${pythonVersions} / docker=${dockerVersions}") {
wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
stage("test python=${pythonVersions} / docker=${dockerVersions} / baseImage=${baseImage}") {
checkout(scm)
def storageDriver = sh(script: 'docker info | awk -F \': \' \'$1 == "Storage Driver" { print $2; exit }\'', returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
@@ -55,13 +66,13 @@ def runTests = { Map settings ->
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
-e "TAG=${image.id}" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersions}" \\
-e "BUILD_NUMBER=\$BUILD_TAG" \\
-e "PY_TEST_VERSIONS=${pythonVersions}" \\
--entrypoint="script/test/ci" \\
${image.id} \\
${imageName} \\
--verbose
"""
}
@@ -69,16 +80,13 @@ def runTests = { Map settings ->
}
}
buildImage()
def testMatrix = [failFast: true]
def docker_versions = get_versions(2)
for (int i = 0; i < docker_versions.length; i++) {
def dockerVersion = docker_versions[i]
testMatrix["${dockerVersion}_py27"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py27"])
testMatrix["${dockerVersion}_py36"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py36"])
testMatrix["${dockerVersion}_py37"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py37"])
def baseImages = ['alpine', 'debian']
baseImages.each { baseImage ->
def imageName = buildImage(baseImage)
get_versions(imageName, 2).each { dockerVersion ->
testMatrix["${baseImage}_${dockerVersion}"] = runTests([baseImage: baseImage, image: imageName, dockerVersions: dockerVersion, pythonVersions: 'py37'])
}
}
parallel(testMatrix)


@@ -11,9 +11,8 @@
[Org]
[Org."Core maintainers"]
people = [
"mefyl",
"mnottale",
"shin-",
"rumpl",
"ulyssessouza",
]
[Org.Alumni]
people = [
@@ -34,6 +33,10 @@
# including multi-file support, variable interpolation, secrets
# emulation and many more
"dnephin",
"shin-",
"mefyl",
"mnottale",
]
[people]
@@ -74,7 +77,17 @@
Email = "mazz@houseofmnowster.com"
GitHub = "mnowster"
[People.shin-]
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.shin-]
Name = "Joffrey F"
Email = "joffrey@docker.com"
Email = "f.joffrey@gmail.com"
GitHub = "shin-"
[people.ulyssessouza]
Name = "Ulysses Domiciano Souza"
Email = "ulysses.souza@docker.com"
GitHub = "ulyssessouza"


@@ -6,11 +6,11 @@ Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/index.md#common-use-cases).
Using Compose is basically a three-step process.


@@ -2,15 +2,15 @@
version: '{branch}-{build}'
install:
- "SET PATH=C:\\Python36-x64;C:\\Python36-x64\\Scripts;%PATH%"
- "SET PATH=C:\\Python37-x64;C:\\Python37-x64\\Scripts;%PATH%"
- "python --version"
- "pip install tox==2.9.1 virtualenv==15.1.0"
- "pip install tox==2.9.1 virtualenv==16.2.0"
# Build the binary after tests
build: false
test_script:
- "tox -e py27,py36,py37 -- tests/unit"
- "tox -e py27,py37 -- tests/unit"
- ps: ".\\script\\build\\windows.ps1"
artifacts:


@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.24.1'
__version__ = '1.25.0dev'


@@ -95,19 +95,10 @@ def get_image_digest(service, allow_push=False):
if separator == '@':
return service.options['image']
try:
image = service.image()
except NoSuchImageError:
action = 'build' if 'build' in service.options else 'pull'
raise UserError(
"Image not found for service '{service}'. "
"You might need to run `docker-compose {action} {service}`."
.format(service=service.name, action=action))
digest = get_digest(service)
if image['RepoDigests']:
# TODO: pick a digest based on the image tag if there are multiple
# digests
return image['RepoDigests'][0]
if digest:
return digest
if 'build' not in service.options:
raise NeedsPull(service.image_name, service.name)
@@ -118,6 +109,32 @@ def get_image_digest(service, allow_push=False):
return push_image(service)
def get_digest(service):
digest = None
try:
image = service.image()
# TODO: pick a digest based on the image tag if there are multiple
# digests
if image['RepoDigests']:
digest = image['RepoDigests'][0]
except NoSuchImageError:
try:
# Fetch the image digest from the registry
distribution = service.get_image_registry_data()
if distribution['Descriptor']['digest']:
digest = '{image_name}@{digest}'.format(
image_name=service.image_name,
digest=distribution['Descriptor']['digest']
)
except NoSuchImageError:
raise UserError(
"Digest not found for service '{service}'. "
"Repository does not exist or may require 'docker login'"
.format(service=service.name))
return digest
def push_image(service):
try:
digest = service.push()
@@ -147,10 +164,10 @@ def push_image(service):
def to_bundle(config, image_digests):
if config.networks:
log.warn("Unsupported top level key 'networks' - ignoring")
log.warning("Unsupported top level key 'networks' - ignoring")
if config.volumes:
log.warn("Unsupported top level key 'volumes' - ignoring")
log.warning("Unsupported top level key 'volumes' - ignoring")
config = denormalize_config(config)
@@ -175,7 +192,7 @@ def convert_service_to_bundle(name, service_dict, image_digest):
continue
if key not in SUPPORTED_KEYS:
log.warn("Unsupported key '{}' in services.{} - ignoring".format(key, name))
log.warning("Unsupported key '{}' in services.{} - ignoring".format(key, name))
continue
if key == 'environment':
@@ -222,7 +239,7 @@ def make_service_networks(name, service_dict):
for network_name, network_def in get_network_defs_for_service(service_dict).items():
for key in network_def.keys():
log.warn(
log.warning(
"Unsupported key '{}' in services.{}.networks.{} - ignoring"
.format(key, name, network_name))


@@ -41,9 +41,9 @@ for (name, code) in get_pairs():
def rainbow():
cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue',
cs = ['cyan', 'yellow', 'green', 'magenta', 'blue',
'intense_cyan', 'intense_yellow', 'intense_green',
'intense_magenta', 'intense_red', 'intense_blue']
'intense_magenta', 'intense_blue']
for c in cs:
yield globals()[c]


@@ -13,6 +13,9 @@ from .. import config
from .. import parallel
from ..config.environment import Environment
from ..const import API_VERSIONS
from ..const import LABEL_CONFIG_FILES
from ..const import LABEL_ENVIRONMENT_FILE
from ..const import LABEL_WORKING_DIR
from ..project import Project
from .docker_client import docker_client
from .docker_client import get_tls_version
@@ -21,10 +24,27 @@ from .utils import get_version_info
log = logging.getLogger(__name__)
SILENT_COMMANDS = {
'events',
'exec',
'kill',
'logs',
'pause',
'ps',
'restart',
'rm',
'start',
'stop',
'top',
'unpause',
}
def project_from_options(project_dir, options):
def project_from_options(project_dir, options, additional_options={}):
override_dir = options.get('--project-directory')
environment = Environment.from_env_file(override_dir or project_dir)
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or project_dir, environment_file)
environment.silent = options.get('COMMAND', None) in SILENT_COMMANDS
set_parallel_limit(environment)
host = options.get('--host')
@@ -40,6 +60,8 @@ def project_from_options(project_dir, options):
environment=environment,
override_dir=override_dir,
compatibility=options.get('--compatibility'),
interpolate=(not additional_options.get('--no-interpolate')),
environment_file=environment_file
)
@@ -59,15 +81,17 @@ def set_parallel_limit(environment):
parallel.GlobalLimit.set_global_limit(parallel_limit)
def get_config_from_options(base_dir, options):
def get_config_from_options(base_dir, options, additional_options={}):
override_dir = options.get('--project-directory')
environment = Environment.from_env_file(override_dir or base_dir)
environment_file = options.get('--env-file')
environment = Environment.from_env_file(override_dir or base_dir, environment_file)
config_path = get_config_path_from_options(
base_dir, options, environment
)
return config.load(
config.find(base_dir, config_path, environment, override_dir),
options.get('--compatibility')
options.get('--compatibility'),
not additional_options.get('--no-interpolate')
)
@@ -105,14 +129,14 @@ def get_client(environment, verbose=False, version=None, tls_config=None, host=N
def get_project(project_dir, config_path=None, project_name=None, verbose=False,
host=None, tls_config=None, environment=None, override_dir=None,
compatibility=False):
compatibility=False, interpolate=True, environment_file=None):
if not environment:
environment = Environment.from_env_file(project_dir)
config_details = config.find(project_dir, config_path, environment, override_dir)
project_name = get_project_name(
config_details.working_dir, project_name, environment
)
config_data = config.load(config_details, compatibility)
config_data = config.load(config_details, compatibility, interpolate)
api_version = environment.get(
'COMPOSE_API_VERSION',
@@ -125,10 +149,30 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
with errors.handle_connection_errors(client):
return Project.from_config(
project_name, config_data, client, environment.get('DOCKER_DEFAULT_PLATFORM')
project_name,
config_data,
client,
environment.get('DOCKER_DEFAULT_PLATFORM'),
execution_context_labels(config_details, environment_file),
)
def execution_context_labels(config_details, environment_file):
extra_labels = [
'{0}={1}'.format(LABEL_WORKING_DIR, os.path.abspath(config_details.working_dir)),
'{0}={1}'.format(LABEL_CONFIG_FILES, config_files_label(config_details)),
]
if environment_file is not None:
extra_labels.append('{0}={1}'.format(LABEL_ENVIRONMENT_FILE,
os.path.normpath(environment_file)))
return extra_labels
def config_files_label(config_details):
return ",".join(
map(str, (os.path.normpath(c.filename) for c in config_details.config_files)))
def get_project_name(working_dir, project_name=None, environment=None):
def normalize_name(name):
return re.sub(r'[^-_a-z0-9]', '', name.lower())


@@ -31,7 +31,7 @@ def get_tls_version(environment):
tls_attr_name = "PROTOCOL_{}".format(compose_tls_version)
if not hasattr(ssl, tls_attr_name):
log.warn(
log.warning(
'The "{}" protocol is unavailable. You may need to update your '
'version of Python or OpenSSL. Falling back to TLSv1 (default).'
.format(compose_tls_version)


@@ -2,25 +2,32 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import logging
import os
import shutil
import six
import texttable
from compose.cli import colors
if hasattr(shutil, "get_terminal_size"):
from shutil import get_terminal_size
else:
from backports.shutil_get_terminal_size import get_terminal_size
def get_tty_width():
tty_size = os.popen('stty size 2> /dev/null', 'r').read().split()
if len(tty_size) != 2:
try:
width, _ = get_terminal_size()
return int(width)
except OSError:
return 0
_, width = tty_size
return int(width)
class Formatter(object):
class Formatter:
"""Format tabular data for printing."""
def table(self, headers, rows):
@staticmethod
def table(headers, rows):
table = texttable.Texttable(max_width=get_tty_width())
table.set_cols_dtype(['t' for h in headers])
table.add_rows([headers] + rows)


@@ -134,7 +134,10 @@ def build_thread(container, presenter, queue, log_args):
def build_thread_map(initial_containers, presenters, thread_args):
return {
container.id: build_thread(container, next(presenters), *thread_args)
for container in initial_containers
# Container order is unspecified, so they are sorted by name in order to make
# container:presenter (log color) assignment deterministic when given a list of containers
# with the same names.
for container in sorted(initial_containers, key=lambda c: c.name)
}
@@ -230,7 +233,13 @@ def watch_events(thread_map, event_stream, presenters, thread_args):
# Container crashed so we should reattach to it
if event['id'] in crashed_containers:
event['container'].attach_log_stream()
container = event['container']
if not container.is_restarting:
try:
container.attach_log_stream()
except APIError:
# Just ignore errors when reattaching to already crashed containers
pass
crashed_containers.remove(event['id'])
thread_map[event['id']] = build_thread(


@@ -6,6 +6,7 @@ import contextlib
import functools
import json
import logging
import os
import pipes
import re
import subprocess
@@ -102,9 +103,9 @@ def dispatch():
options, handler, command_options = dispatcher.parse(sys.argv[1:])
setup_console_handler(console_handler,
options.get('--verbose'),
options.get('--no-ansi'),
set_no_color_if_clicolor(options.get('--no-ansi')),
options.get("--log-level"))
setup_parallel_logger(options.get('--no-ansi'))
setup_parallel_logger(set_no_color_if_clicolor(options.get('--no-ansi')))
if options.get('--no-ansi'):
command_options['--no-color'] = True
return functools.partial(perform_command, options, handler, command_options)
@@ -208,6 +209,7 @@ class TopLevelCommand(object):
(default: the path of the Compose file)
--compatibility If set, Compose will attempt to convert keys
in v3 files to their non-Swarm equivalent
--env-file PATH Specify an alternate environment file
Commands:
build Build or rebuild services
@@ -246,6 +248,11 @@ class TopLevelCommand(object):
def project_dir(self):
return self.toplevel_options.get('--project-directory') or '.'
@property
def toplevel_environment(self):
environment_file = self.toplevel_options.get('--env-file')
return Environment.from_env_file(self.project_dir, environment_file)
def build(self, options):
"""
Build or rebuild services.
@@ -257,13 +264,18 @@ class TopLevelCommand(object):
Usage: build [options] [--build-arg key=val...] [SERVICE...]
Options:
--build-arg key=val Set build-time variables for services.
--compress Compress the build context using gzip.
--force-rm Always remove intermediate containers.
-m, --memory MEM Set memory limit for the build container.
--no-cache Do not use cache when building the image.
--pull Always attempt to pull a newer version of the image.
-m, --memory MEM Sets memory limit for the build container.
--build-arg key=val Set build-time variables for services.
--no-rm Do not remove intermediate containers after a successful build.
--parallel Build images in parallel.
--progress string Set type of progress output (auto, plain, tty).
EXPERIMENTAL flag for native builder.
To enable, run with COMPOSE_DOCKER_CLI_BUILD=1)
--pull Always attempt to pull a newer version of the image.
-q, --quiet Don't print anything to STDOUT
"""
service_names = options['SERVICE']
build_args = options.get('--build-arg', None)
@@ -273,8 +285,9 @@ class TopLevelCommand(object):
'--build-arg is only supported when services are specified for API version < 1.25.'
' Please use a Compose file version > 2.2 or specify which services to build.'
)
environment = Environment.from_env_file(self.project_dir)
build_args = resolve_build_args(build_args, environment)
build_args = resolve_build_args(build_args, self.toplevel_environment)
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
self.project.build(
service_names=options['SERVICE'],
@@ -282,9 +295,13 @@ class TopLevelCommand(object):
pull=bool(options.get('--pull', False)),
force_rm=bool(options.get('--force-rm', False)),
memory=options.get('--memory'),
rm=not bool(options.get('--no-rm', False)),
build_args=build_args,
gzip=options.get('--compress', False),
parallel_build=options.get('--parallel', False),
silent=options.get('--quiet', False),
cli=native_builder,
progress=options.get('--progress'),
)
def bundle(self, options):
@@ -327,6 +344,7 @@ class TopLevelCommand(object):
Options:
--resolve-image-digests Pin image tags to digests.
--no-interpolate Don't interpolate environment variables
-q, --quiet Only validate the configuration, don't print
anything.
--services Print the service names, one per line.
@@ -336,11 +354,12 @@ class TopLevelCommand(object):
or use the wildcard symbol to display all services
"""
compose_config = get_config_from_options('.', self.toplevel_options)
additional_options = {'--no-interpolate': options.get('--no-interpolate')}
compose_config = get_config_from_options('.', self.toplevel_options, additional_options)
image_digests = None
if options['--resolve-image-digests']:
self.project = project_from_options('.', self.toplevel_options)
self.project = project_from_options('.', self.toplevel_options, additional_options)
with errors.handle_connection_errors(self.project.client):
image_digests = image_digests_for_project(self.project)
@@ -357,14 +376,14 @@ class TopLevelCommand(object):
if options['--hash'] is not None:
h = options['--hash']
self.project = project_from_options('.', self.toplevel_options)
self.project = project_from_options('.', self.toplevel_options, additional_options)
services = [svc for svc in options['--hash'].split(',')] if h != '*' else None
with errors.handle_connection_errors(self.project.client):
for service in self.project.get_services(services):
print('{} {}'.format(service.name, service.config_hash))
return
print(serialize_config(compose_config, image_digests))
print(serialize_config(compose_config, image_digests, not options['--no-interpolate']))
def create(self, options):
"""
@@ -383,7 +402,7 @@ class TopLevelCommand(object):
"""
service_names = options['SERVICE']
log.warn(
log.warning(
'The create command is deprecated. '
'Use the up command with the --no-start flag instead.'
)
@@ -422,8 +441,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
environment = Environment.from_env_file(self.project_dir)
ignore_orphans = environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
ignore_orphans = self.toplevel_environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
if ignore_orphans and options['--remove-orphans']:
raise UserError("COMPOSE_IGNORE_ORPHANS and --remove-orphans cannot be combined.")
@@ -480,8 +498,7 @@ class TopLevelCommand(object):
not supported in API < 1.25)
-w, --workdir DIR Path to workdir directory for this command.
"""
environment = Environment.from_env_file(self.project_dir)
use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
use_cli = not self.toplevel_environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
index = int(options.get('--index'))
service = self.project.get_service(options['SERVICE'])
detach = options.get('--detach')
@@ -504,7 +521,7 @@ class TopLevelCommand(object):
if IS_WINDOWS_PLATFORM or use_cli and not detach:
sys.exit(call_docker(
build_exec_command(options, container.id, command),
self.toplevel_options)
self.toplevel_options, self.toplevel_environment)
)
create_exec_options = {
@@ -604,7 +621,7 @@ class TopLevelCommand(object):
image_id,
size
])
print(Formatter().table(headers, rows))
print(Formatter.table(headers, rows))
def kill(self, options):
"""
@@ -650,7 +667,7 @@ class TopLevelCommand(object):
log_printer_from_project(
self.project,
containers,
options['--no-color'],
set_no_color_if_clicolor(options['--no-color']),
log_args,
event_stream=self.project.events(service_names=options['SERVICE'])).run()
@@ -709,7 +726,8 @@ class TopLevelCommand(object):
if options['--all']:
containers = sorted(self.project.containers(service_names=options['SERVICE'],
one_off=OneOffFilter.include, stopped=True))
one_off=OneOffFilter.include, stopped=True),
key=attrgetter('name'))
else:
containers = sorted(
self.project.containers(service_names=options['SERVICE'], stopped=True) +
@@ -737,7 +755,7 @@ class TopLevelCommand(object):
container.human_readable_state,
container.human_readable_ports,
])
print(Formatter().table(headers, rows))
print(Formatter.table(headers, rows))
def pull(self, options):
"""
@@ -753,7 +771,7 @@ class TopLevelCommand(object):
--include-deps Also pull services declared as dependencies
"""
if options.get('--parallel'):
log.warn('--parallel option is deprecated and will be removed in future versions.')
log.warning('--parallel option is deprecated and will be removed in future versions.')
self.project.pull(
service_names=options['SERVICE'],
ignore_pull_failures=options.get('--ignore-pull-failures'),
@@ -794,7 +812,7 @@ class TopLevelCommand(object):
-a, --all Deprecated - no effect.
"""
if options.get('--all'):
log.warn(
log.warning(
'--all flag is obsolete. This is now the default behavior '
'of `docker-compose rm`'
)
@@ -872,10 +890,12 @@ class TopLevelCommand(object):
else:
command = service.options.get('command')
options['stdin_open'] = service.options.get('stdin_open', True)
container_options = build_one_off_container_options(options, detach, command)
run_one_off_container(
container_options, self.project, service, options,
self.toplevel_options, self.project_dir
self.toplevel_options, self.toplevel_environment
)
def scale(self, options):
@@ -904,7 +924,7 @@ class TopLevelCommand(object):
'Use the up command with the --scale flag instead.'
)
else:
log.warn(
log.warning(
'The scale command is deprecated. '
'Use the up command with the --scale flag instead.'
)
@@ -975,7 +995,7 @@ class TopLevelCommand(object):
rows.append(process)
print(container.name)
print(Formatter().table(headers, rows))
print(Formatter.table(headers, rows))
def unpause(self, options):
"""
@@ -1050,8 +1070,7 @@ class TopLevelCommand(object):
if detached and (cascade_stop or exit_value_from):
raise UserError("--abort-on-container-exit and -d cannot be combined.")
environment = Environment.from_env_file(self.project_dir)
ignore_orphans = environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
ignore_orphans = self.toplevel_environment.get_boolean('COMPOSE_IGNORE_ORPHANS')
if ignore_orphans and remove_orphans:
raise UserError("COMPOSE_IGNORE_ORPHANS and --remove-orphans cannot be combined.")
@@ -1060,6 +1079,8 @@ class TopLevelCommand(object):
for excluded in [x for x in opts if options.get(x) and no_start]:
raise UserError('--no-start and {} cannot be combined.'.format(excluded))
native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
with up_shutdown_context(self.project, service_names, timeout, detached):
warn_for_swarm_mode(self.project.client)
@@ -1079,6 +1100,7 @@ class TopLevelCommand(object):
reset_container_image=rebuild,
renew_anonymous_volumes=options.get('--renew-anon-volumes'),
silent=options.get('--quiet-pull'),
cli=native_builder,
)
try:
@@ -1103,7 +1125,7 @@ class TopLevelCommand(object):
log_printer = log_printer_from_project(
self.project,
attached_containers,
options['--no-color'],
set_no_color_if_clicolor(options['--no-color']),
{'follow': True},
cascade_stop,
event_stream=self.project.events(service_names=service_names))
@@ -1236,7 +1258,7 @@ def exitval_from_opts(options, project):
exit_value_from = options.get('--exit-code-from')
if exit_value_from:
if not options.get('--abort-on-container-exit'):
log.warn('using --exit-code-from implies --abort-on-container-exit')
log.warning('using --exit-code-from implies --abort-on-container-exit')
options['--abort-on-container-exit'] = True
if exit_value_from not in [s.name for s in project.get_services()]:
log.error('No service named "%s" was found in your compose file.',
@@ -1271,7 +1293,7 @@ def build_one_off_container_options(options, detach, command):
container_options = {
'command': command,
'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
'stdin_open': not detach,
'stdin_open': options.get('stdin_open'),
'detach': detach,
}
@@ -1314,7 +1336,7 @@ def build_one_off_container_options(options, detach, command):
def run_one_off_container(container_options, project, service, options, toplevel_options,
project_dir='.'):
toplevel_environment):
if not options['--no-deps']:
deps = service.get_dependency_names()
if deps:
@@ -1343,8 +1365,7 @@ def run_one_off_container(container_options, project, service, options, toplevel
if options['--rm']:
project.client.remove_container(container.id, force=True, v=True)
environment = Environment.from_env_file(project_dir)
use_cli = not environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
use_cli = not toplevel_environment.get_boolean('COMPOSE_INTERACTIVE_NO_CLI')
signals.set_signal_handler_to_shutdown()
signals.set_signal_handler_to_hang_up()
@@ -1353,8 +1374,8 @@ def run_one_off_container(container_options, project, service, options, toplevel
if IS_WINDOWS_PLATFORM or use_cli:
service.connect_container_to_networks(container, use_network_aliases)
exit_code = call_docker(
["start", "--attach", "--interactive", container.id],
toplevel_options
get_docker_start_call(container_options, container.id),
toplevel_options, toplevel_environment
)
else:
operation = RunOperation(
@@ -1380,6 +1401,16 @@ def run_one_off_container(container_options, project, service, options, toplevel
sys.exit(exit_code)
def get_docker_start_call(container_options, container_id):
docker_call = ["start"]
if not container_options.get('detach'):
docker_call.append("--attach")
if container_options.get('stdin_open'):
docker_call.append("--interactive")
docker_call.append(container_id)
return docker_call
def log_printer_from_project(
project,
containers,
@@ -1434,7 +1465,7 @@ def exit_if(condition, message, exit_code):
raise SystemExit(exit_code)
def call_docker(args, dockeropts):
def call_docker(args, dockeropts, environment):
executable_path = find_executable('docker')
if not executable_path:
raise UserError(errors.docker_not_found_msg("Couldn't find `docker` binary."))
@@ -1464,7 +1495,7 @@ def call_docker(args, dockeropts):
args = [executable_path] + tls_options + args
log.debug(" ".join(map(pipes.quote, args)))
return subprocess.call(args)
return subprocess.call(args, env=environment)
def parse_scale_args(options):
@@ -1565,10 +1596,14 @@ def warn_for_swarm_mode(client):
# UCP does multi-node scheduling with traditional Compose files.
return
log.warn(
log.warning(
"The Docker Engine you're using is running in swarm mode.\n\n"
"Compose does not use swarm mode to deploy services to multiple nodes in a swarm. "
"All containers will be scheduled on the current node.\n\n"
"To deploy your application across the swarm, "
"use `docker stack deploy`.\n"
)
def set_no_color_if_clicolor(no_color_flag):
return no_color_flag or os.environ.get('CLICOLOR') == "0"


@@ -133,12 +133,12 @@ def generate_user_agent():
def human_readable_file_size(size):
suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ]
order = int(math.log(size, 2) / 10) if size else 0
order = int(math.log(size, 1000)) if size else 0
if order >= len(suffixes):
order = len(suffixes) - 1
return '{0:.3g} {1}'.format(
size / float(1 << (order * 10)),
return '{0:.4g} {1}'.format(
size / pow(10, order * 3),
suffixes[order]
)
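The switch from base-2 to base-10 units changes the reported sizes; a sketch of both formulas on a hypothetical 1234567-byte image (order = 2 in both cases):

size = 1234567
'{0:.4g} {1}'.format(size / pow(10, 2 * 3), 'MB')        # new: '1.235 MB', matches `docker images`
'{0:.3g} {1}'.format(size / float(1 << (2 * 10)), 'MB')  # old: '1.18 MB' (binary mebibytes)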


@@ -198,9 +198,9 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
version = self.config['version']
if isinstance(version, dict):
log.warn('Unexpected type for "version" key in "{}". Assuming '
'"version" is the name of a service, and defaulting to '
'Compose file version 1.'.format(self.filename))
log.warning('Unexpected type for "version" key in "{}". Assuming '
'"version" is the name of a service, and defaulting to '
'Compose file version 1.'.format(self.filename))
return V1
if not isinstance(version, six.string_types):
@@ -318,8 +318,8 @@ def get_default_config_files(base_dir):
winner = candidates[0]
if len(candidates) > 1:
log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
log.warn("Using %s\n", winner)
log.warning("Found multiple config files with supported names: %s", ", ".join(candidates))
log.warning("Using %s\n", winner)
return [os.path.join(path, winner)] + get_default_override_file(path)
@@ -362,7 +362,7 @@ def check_swarm_only_config(service_dicts, compatibility=False):
def check_swarm_only_key(service_dicts, key):
services = [s for s in service_dicts if s.get(key)]
if services:
log.warn(
log.warning(
warning_template.format(
services=", ".join(sorted(s['name'] for s in services)),
key=key
@@ -373,7 +373,7 @@ def check_swarm_only_config(service_dicts, compatibility=False):
check_swarm_only_key(service_dicts, 'configs')
def load(config_details, compatibility=False):
def load(config_details, compatibility=False, interpolate=True):
"""Load the configuration from a working directory and a list of
configuration files. Files are loaded in order, and merged on top
of each other to create the final configuration.
@@ -383,7 +383,7 @@ def load(config_details, compatibility=False):
validate_config_version(config_details.config_files)
processed_files = [
process_config_file(config_file, config_details.environment)
process_config_file(config_file, config_details.environment, interpolate=interpolate)
for config_file in config_details.config_files
]
config_details = config_details._replace(config_files=processed_files)
@@ -505,7 +505,6 @@ def load_services(config_details, config_file, compatibility=False):
def interpolate_config_section(config_file, config, section, environment):
validate_config_section(config_file.filename, config, section)
return interpolate_environment_variables(
config_file.version,
config,
@@ -514,38 +513,60 @@ def interpolate_config_section(config_file, config, section, environment):
)
def process_config_file(config_file, environment, service_name=None):
services = interpolate_config_section(
def process_config_section(config_file, config, section, environment, interpolate):
validate_config_section(config_file.filename, config, section)
if interpolate:
return interpolate_environment_variables(
config_file.version,
config,
section,
environment
)
else:
return config
def process_config_file(config_file, environment, service_name=None, interpolate=True):
services = process_config_section(
config_file,
config_file.get_service_dicts(),
'service',
environment)
environment,
interpolate,
)
if config_file.version > V1:
processed_config = dict(config_file.config)
processed_config['services'] = services
processed_config['volumes'] = interpolate_config_section(
processed_config['volumes'] = process_config_section(
config_file,
config_file.get_volumes(),
'volume',
environment)
processed_config['networks'] = interpolate_config_section(
environment,
interpolate,
)
processed_config['networks'] = process_config_section(
config_file,
config_file.get_networks(),
'network',
environment)
environment,
interpolate,
)
if config_file.version >= const.COMPOSEFILE_V3_1:
processed_config['secrets'] = interpolate_config_section(
processed_config['secrets'] = process_config_section(
config_file,
config_file.get_secrets(),
'secret',
environment)
environment,
interpolate,
)
if config_file.version >= const.COMPOSEFILE_V3_3:
processed_config['configs'] = interpolate_config_section(
processed_config['configs'] = process_config_section(
config_file,
config_file.get_configs(),
'config',
environment
environment,
interpolate,
)
else:
processed_config = services
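A hedged sketch of what the new `interpolate` switch enables (compose file contents and variable name are hypothetical); this is the hook behind `docker-compose config --no-interpolate`:

# docker-compose.yml contains:  image: "repo/app:${TAG}"
config = load(config_details, interpolate=False)
# service dicts keep the literal '${TAG}' instead of the value from the environment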
@@ -594,7 +615,7 @@ class ServiceExtendsResolver(object):
config_path = self.get_extended_config_path(extends)
service_name = extends['service']
if config_path == self.config_file.filename:
if config_path == os.path.abspath(self.config_file.filename):
try:
service_config = self.config_file.get_service(service_name)
except KeyError:
@@ -900,7 +921,7 @@ def finalize_service(service_config, service_names, version, environment, compat
service_dict
)
if ignored_keys:
log.warn(
log.warning(
'The following deploy sub-keys are not supported in compatibility mode and have'
' been ignored: {}'.format(', '.join(ignored_keys))
)


@@ -26,7 +26,7 @@ def split_env(env):
key = env
if re.search(r'\s', key):
raise ConfigurationError(
"environment variable name '{}' may not contains whitespace.".format(key)
"environment variable name '{}' may not contain whitespace.".format(key)
)
return key, value
@@ -56,14 +56,18 @@ class Environment(dict):
def __init__(self, *args, **kwargs):
super(Environment, self).__init__(*args, **kwargs)
self.missing_keys = []
self.silent = False
@classmethod
def from_env_file(cls, base_dir):
def from_env_file(cls, base_dir, env_file=None):
def _initialize():
result = cls()
if base_dir is None:
return result
env_file_path = os.path.join(base_dir, '.env')
if env_file:
env_file_path = os.path.join(base_dir, env_file)
else:
env_file_path = os.path.join(base_dir, '.env')
try:
return cls(env_vars_from_file(env_file_path))
except EnvFileNotFound:
@@ -95,8 +99,8 @@ class Environment(dict):
return super(Environment, self).__getitem__(key.upper())
except KeyError:
pass
if key not in self.missing_keys:
log.warn(
if not self.silent and key not in self.missing_keys:
log.warning(
"The {} variable is not set. Defaulting to a blank string."
.format(key)
)
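With the new `env_file` parameter, an alternate environment file can be loaded explicitly (paths hypothetical):

env = Environment.from_env_file('/path/to/project', env_file='.env.production')
# env_file=None preserves the old behaviour and reads '/path/to/project/.env'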


@@ -64,12 +64,12 @@ def interpolate_value(name, config_key, value, section, interpolator):
string=e.string))
except UnsetRequiredSubstitution as e:
raise ConfigurationError(
'Missing mandatory value for "{config_key}" option in {section} "{name}": {err}'.format(
config_key=config_key,
name=name,
section=section,
err=e.err
)
'Missing mandatory value for "{config_key}" option interpolating {value} '
'in {section} "{name}": {err}'.format(config_key=config_key,
value=value,
name=name,
section=section,
err=e.err)
)


@@ -24,14 +24,12 @@ def serialize_dict_type(dumper, data):
def serialize_string(dumper, data):
""" Ensure boolean-like strings are quoted in the output and escape $ characters """
""" Ensure boolean-like strings are quoted in the output """
representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
if isinstance(data, six.binary_type):
data = data.decode('utf-8')
data = data.replace('$', '$$')
if data.lower() in ('y', 'n', 'yes', 'no', 'on', 'off', 'true', 'false'):
# Empirically only y/n appears to be an issue, but this might change
# depending on which PyYaml version is being used. Err on safe side.
@@ -39,6 +37,12 @@ def serialize_string(dumper, data):
return representer(data)
def serialize_string_escape_dollar(dumper, data):
""" Ensure boolean-like strings are quoted in the output and escape $ characters """
data = data.replace('$', '$$')
return serialize_string(dumper, data)
yaml.SafeDumper.add_representer(types.MountSpec, serialize_dict_type)
yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)
@@ -46,8 +50,6 @@ yaml.SafeDumper.add_representer(types.SecurityOpt, serialize_config_type)
yaml.SafeDumper.add_representer(types.ServiceSecret, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServiceConfig, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)
yaml.SafeDumper.add_representer(str, serialize_string)
yaml.SafeDumper.add_representer(six.text_type, serialize_string)
def denormalize_config(config, image_digests=None):
@@ -93,7 +95,13 @@ def v3_introduced_name_key(key):
return V3_5
def serialize_config(config, image_digests=None):
def serialize_config(config, image_digests=None, escape_dollar=True):
if escape_dollar:
yaml.SafeDumper.add_representer(str, serialize_string_escape_dollar)
yaml.SafeDumper.add_representer(six.text_type, serialize_string_escape_dollar)
else:
yaml.SafeDumper.add_representer(str, serialize_string)
yaml.SafeDumper.add_representer(six.text_type, serialize_string)
return yaml.safe_dump(
denormalize_config(config, image_digests),
default_flow_style=False,
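A short sketch of the two representers on a hypothetical service whose command contains a '$':

serialize_config(config)                       # escapes: 'command: echo $$HOME'
serialize_config(config, escape_dollar=False)  # keeps the raw '$', so the output is not re-interpolated on a round trip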


@@ -11,6 +11,9 @@ IS_WINDOWS_PLATFORM = (sys.platform == "win32")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_WORKING_DIR = 'com.docker.compose.project.working_dir'
LABEL_CONFIG_FILES = 'com.docker.compose.project.config_files'
LABEL_ENVIRONMENT_FILE = 'com.docker.compose.project.environment_file'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_NETWORK = 'com.docker.compose.network'
LABEL_VERSION = 'com.docker.compose.version'


@@ -226,12 +226,12 @@ def check_remote_network_config(remote, local):
raise NetworkConfigChangedError(local.true_name, 'enable_ipv6')
local_labels = local.labels or {}
remote_labels = remote.get('Labels', {})
remote_labels = remote.get('Labels') or {}
for k in set.union(set(remote_labels.keys()), set(local_labels.keys())):
if k.startswith('com.docker.'): # We are only interested in user-specified labels
continue
if remote_labels.get(k) != local_labels.get(k):
log.warn(
log.warning(
'Network {}: label "{}" has changed. It may need to be'
' recreated.'.format(local.true_name, k)
)
@@ -276,7 +276,7 @@ class ProjectNetworks(object):
}
unused = set(networks) - set(service_networks) - {'default'}
if unused:
log.warn(
log.warning(
"Some networks were defined but are not used by any service: "
"{}".format(", ".join(unused)))
return cls(service_networks, use_networking)
@@ -288,7 +288,7 @@ class ProjectNetworks(object):
try:
network.remove()
except NotFound:
log.warn("Network %s not found.", network.true_name)
log.warning("Network %s not found.", network.true_name)
def initialize(self):
if not self.use_networking:


@@ -6,6 +6,7 @@ import logging
import operator
import re
from functools import reduce
from os import path
import enum
import six
@@ -82,7 +83,7 @@ class Project(object):
return labels
@classmethod
def from_config(cls, name, config_data, client, default_platform=None):
def from_config(cls, name, config_data, client, default_platform=None, extra_labels=[]):
"""
Construct a Project from a config.Config object.
"""
@@ -135,6 +136,7 @@ class Project(object):
pid_mode=pid_mode,
platform=service_dict.pop('platform', None),
default_platform=default_platform,
extra_labels=extra_labels,
**service_dict)
)
@@ -355,18 +357,27 @@ class Project(object):
return containers
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, memory=None,
build_args=None, gzip=False, parallel_build=False):
build_args=None, gzip=False, parallel_build=False, rm=True, silent=False, cli=False,
progress=None):
services = []
for service in self.get_services(service_names):
if service.can_be_built():
services.append(service)
else:
elif not silent:
log.info('%s uses an image, skipping' % service.name)
def build_service(service):
service.build(no_cache, pull, force_rm, memory, build_args, gzip)
if cli:
log.warning("Native build is an experimental feature and could change at any time")
if parallel_build:
log.warning("Flag '--parallel' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
if gzip:
log.warning("Flag '--compress' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
def build_service(service):
service.build(no_cache, pull, force_rm, memory, build_args, gzip, rm, silent, cli, progress)
if parallel_build:
_, errors = parallel.parallel_execute(
services,
@@ -510,8 +521,12 @@ class Project(object):
reset_container_image=False,
renew_anonymous_volumes=False,
silent=False,
cli=False,
):
if cli:
log.warning("Native build is an experimental feature and could change at any time")
self.initialize()
if not ignore_orphans:
self.find_orphan_containers(remove_orphans)
@@ -524,7 +539,7 @@ class Project(object):
include_deps=start_deps)
for svc in services:
svc.ensure_image_exists(do_build=do_build, silent=silent)
svc.ensure_image_exists(do_build=do_build, silent=silent, cli=cli)
plans = self._get_convergence_plans(
services, strategy, always_recreate_deps=always_recreate_deps)
@@ -587,8 +602,10 @@ class Project(object):
", ".join(updated_dependencies))
containers_stopped = any(
service.containers(stopped=True, filters={'status': ['created', 'exited']}))
has_links = any(c.get('HostConfig.Links') for c in service.containers())
if always_recreate_deps or containers_stopped or not has_links:
service_has_links = any(service.get_link_names())
container_has_links = any(c.get('HostConfig.Links') for c in service.containers())
should_recreate_for_links = service_has_links ^ container_has_links
if always_recreate_deps or containers_stopped or should_recreate_for_links:
plan = service.convergence_plan(ConvergenceStrategy.always)
else:
plan = service.convergence_plan(strategy)
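The XOR forces a recreate only when the configured links and the running container's links disagree; a tiny truth table over the two booleans above:

for service_has_links, container_has_links in [(False, False), (True, True), (True, False), (False, True)]:
    print(service_has_links ^ container_has_links)
# False, False, True, True -- only the mismatched cases escalate to ConvergenceStrategy.always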
@@ -602,6 +619,9 @@ class Project(object):
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
include_deps=False):
services = self.get_services(service_names, include_deps)
images_to_build = {service.image_name for service in services if service.can_be_built()}
services_to_pull = [service for service in services if service.image_name not in images_to_build]
msg = not silent and 'Pulling' or None
if parallel_pull:
@@ -627,7 +647,7 @@ class Project(object):
)
_, errors = parallel.parallel_execute(
services,
services_to_pull,
pull_service,
operator.attrgetter('name'),
msg,
@@ -640,7 +660,7 @@ class Project(object):
raise ProjectError(combined_errors)
else:
for service in services:
for service in services_to_pull:
service.pull(ignore_pull_failures, silent=silent)
def push(self, service_names=None, ignore_push_failures=False):
@@ -686,7 +706,7 @@ class Project(object):
def find_orphan_containers(self, remove_orphans):
def _find():
containers = self._labeled_containers()
containers = set(self._labeled_containers() + self._labeled_containers(stopped=True))
for ctnr in containers:
service_name = ctnr.labels.get(LABEL_SERVICE)
if service_name not in self.service_names:
@@ -697,7 +717,10 @@ class Project(object):
if remove_orphans:
for ctnr in orphans:
log.info('Removing orphan container "{0}"'.format(ctnr.name))
ctnr.kill()
try:
ctnr.kill()
except APIError:
pass
ctnr.remove(force=True)
else:
log.warning(
@@ -725,10 +748,11 @@ class Project(object):
def build_container_operation_with_timeout_func(self, operation, options):
def container_operation_with_timeout(container):
if options.get('timeout') is None:
_options = options.copy()
if _options.get('timeout') is None:
service = self.get_service(container.service)
options['timeout'] = service.stop_timeout(None)
return getattr(container, operation)(**options)
_options['timeout'] = service.stop_timeout(None)
return getattr(container, operation)(**_options)
return container_operation_with_timeout
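The copy matters because the old closure wrote the resolved default back into the shared `options` dict, so the first container's timeout leaked into every later call; a minimal demonstration of the idea behind the fix:

shared = {'timeout': None}
_options = shared.copy()   # what the fixed closure now does
_options['timeout'] = 10   # per-container default no longer leaks
assert shared['timeout'] is None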
@@ -771,13 +795,13 @@ def get_secrets(service, service_secrets, secret_defs):
.format(service=service, secret=secret.source))
if secret_def.get('external'):
log.warn("Service \"{service}\" uses secret \"{secret}\" which is external. "
"External secrets are not available to containers created by "
"docker-compose.".format(service=service, secret=secret.source))
log.warning("Service \"{service}\" uses secret \"{secret}\" which is external. "
"External secrets are not available to containers created by "
"docker-compose.".format(service=service, secret=secret.source))
continue
if secret.uid or secret.gid or secret.mode:
log.warn(
log.warning(
"Service \"{service}\" uses secret \"{secret}\" with uid, "
"gid, or mode. These fields are not supported by this "
"implementation of the Compose file".format(
@@ -785,7 +809,15 @@ def get_secrets(service, service_secrets, secret_defs):
)
)
secrets.append({'secret': secret, 'file': secret_def.get('file')})
secret_file = secret_def.get('file')
if not path.isfile(str(secret_file)):
log.warning(
"Service \"{service}\" uses an undefined secret file \"{secret_file}\", "
"the following file should be created \"{secret_file}\"".format(
service=service, secret_file=secret_file
)
)
secrets.append({'secret': secret, 'file': secret_file})
return secrets


@@ -2,10 +2,12 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import itertools
import json
import logging
import os
import re
import sys
import tempfile
from collections import namedtuple
from collections import OrderedDict
from operator import attrgetter
@@ -59,10 +61,13 @@ from .utils import parse_seconds_float
from .utils import truncate_id
from .utils import unique_everseen
if six.PY2:
import subprocess32 as subprocess
else:
import subprocess
log = logging.getLogger(__name__)
HOST_CONFIG_KEYS = [
'cap_add',
'cap_drop',
@@ -131,7 +136,6 @@ class NoSuchImageError(Exception):
ServiceName = namedtuple('ServiceName', 'project service number')
ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
@@ -167,20 +171,21 @@ class BuildAction(enum.Enum):
class Service(object):
def __init__(
self,
name,
client=None,
project='default',
use_networking=False,
links=None,
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
scale=None,
pid_mode=None,
default_platform=None,
**options
self,
name,
client=None,
project='default',
use_networking=False,
links=None,
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
scale=1,
pid_mode=None,
default_platform=None,
extra_labels=[],
**options
):
self.name = name
self.client = client
@@ -192,9 +197,10 @@ class Service(object):
self.pid_mode = pid_mode or PidMode(None)
self.networks = networks or {}
self.secrets = secrets or []
self.scale_num = scale or 1
self.scale_num = scale
self.default_platform = default_platform
self.options = options
self.extra_labels = extra_labels
def __repr__(self):
return '<Service: {}>'.format(self.name)
@@ -209,7 +215,7 @@ class Service(object):
for container in self.client.containers(
all=stopped,
filters=filters)])
)
)
if result:
return result
@@ -241,15 +247,15 @@ class Service(object):
def show_scale_warnings(self, desired_num):
if self.custom_container_name and desired_num > 1:
log.warn('The "%s" service is using the custom container name "%s". '
'Docker requires each container to have a unique name. '
'Remove the custom name to scale the service.'
% (self.name, self.custom_container_name))
log.warning('The "%s" service is using the custom container name "%s". '
'Docker requires each container to have a unique name. '
'Remove the custom name to scale the service.'
% (self.name, self.custom_container_name))
if self.specifies_host_port() and desired_num > 1:
log.warn('The "%s" service specifies a port on the host. If multiple containers '
'for this service are created on a single host, the port will clash.'
% self.name)
log.warning('The "%s" service specifies a port on the host. If multiple containers '
'for this service are created on a single host, the port will clash.'
% self.name)
def scale(self, desired_num, timeout=None):
"""
@@ -339,9 +345,9 @@ class Service(object):
raise OperationFailedError("Cannot create container for service %s: %s" %
(self.name, ex.explanation))
def ensure_image_exists(self, do_build=BuildAction.none, silent=False):
def ensure_image_exists(self, do_build=BuildAction.none, silent=False, cli=False):
if self.can_be_built() and do_build == BuildAction.force:
self.build()
self.build(cli=cli)
return
try:
@@ -357,12 +363,18 @@ class Service(object):
if do_build == BuildAction.skip:
raise NeedsBuildError(self)
self.build()
log.warn(
self.build(cli=cli)
log.warning(
"Image for service {} was built because it did not already exist. To "
"rebuild this image you must use `docker-compose build` or "
"`docker-compose up --build`.".format(self.name))
def get_image_registry_data(self):
try:
return self.client.inspect_distribution(self.image_name)
except APIError:
raise NoSuchImageError("Image '{}' not found".format(self.image_name))
def image(self):
try:
return self.client.inspect_image(self.image_name)
@@ -392,8 +404,8 @@ class Service(object):
return ConvergencePlan('start', containers)
if (
strategy is ConvergenceStrategy.always or
self._containers_have_diverged(containers)
strategy is ConvergenceStrategy.always or
self._containers_have_diverged(containers)
):
return ConvergencePlan('recreate', containers)
@@ -470,6 +482,7 @@ class Service(object):
container, timeout=timeout, attach_logs=not detached,
start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
)
containers, errors = parallel_execute(
containers,
recreate,
@@ -611,6 +624,8 @@ class Service(object):
try:
container.start()
except APIError as ex:
if "driver failed programming external connectivity" in ex.explanation:
log.warn("Host is already in use by another container")
raise OperationFailedError("Cannot start service %s: %s" % (self.name, ex.explanation))
return container
@@ -680,6 +695,7 @@ class Service(object):
'links': self.get_link_names(),
'net': self.network_mode.id,
'networks': self.networks,
'secrets': self.secrets,
'volumes_from': [
(v.source.name, v.mode)
for v in self.volumes_from if isinstance(v.source, Service)
@@ -690,11 +706,11 @@ class Service(object):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
return (
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
)
def get_dependency_configs(self):
@@ -884,7 +900,7 @@ class Service(object):
container_options['labels'] = build_container_labels(
container_options.get('labels', {}),
self.labels(one_off=one_off),
self.labels(one_off=one_off) + self.extra_labels,
number,
self.config_hash if add_config_hash else None,
slug
@@ -1043,8 +1059,11 @@ class Service(object):
return [build_spec(secret) for secret in self.secrets]
def build(self, no_cache=False, pull=False, force_rm=False, memory=None, build_args_override=None,
gzip=False):
log.info('Building %s' % self.name)
gzip=False, rm=True, silent=False, cli=False, progress=None):
output_stream = open(os.devnull, 'w')
if not silent:
output_stream = sys.stdout
log.info('Building %s' % self.name)
build_opts = self.options.get('build', {})
@@ -1061,15 +1080,16 @@ class Service(object):
'Impossible to perform platform-targeted builds for API version < 1.35'
)
build_output = self.client.build(
builder = self.client if not cli else _CLIBuilder(progress)
build_output = builder.build(
path=path,
tag=self.image_name,
rm=True,
rm=rm,
forcerm=force_rm,
pull=pull,
nocache=no_cache,
dockerfile=build_opts.get('dockerfile', None),
cache_from=build_opts.get('cache_from', None),
cache_from=self.get_cache_from(build_opts),
labels=build_opts.get('labels', None),
buildargs=build_args,
network_mode=build_opts.get('network', None),
@@ -1085,7 +1105,7 @@ class Service(object):
)
try:
all_events = list(stream_output(build_output, sys.stdout))
all_events = list(stream_output(build_output, output_stream))
except StreamOutputError as e:
raise BuildError(self, six.text_type(e))
@@ -1107,6 +1127,12 @@ class Service(object):
return image_id
def get_cache_from(self, build_opts):
cache_from = build_opts.get('cache_from', None)
if cache_from is not None:
cache_from = [tag for tag in cache_from if tag]
return cache_from
def can_be_built(self):
return 'build' in self.options
@@ -1316,7 +1342,7 @@ class ServicePidMode(PidMode):
if containers:
return 'container:' + containers[0].id
log.warn(
log.warning(
"Service %s is trying to use reuse the PID namespace "
"of another service that is not running." % (self.service_name)
)
@@ -1379,8 +1405,8 @@ class ServiceNetworkMode(object):
if containers:
return 'container:' + containers[0].id
log.warn("Service %s is trying to use reuse the network stack "
"of another service that is not running." % (self.id))
log.warning("Service %s is trying to use reuse the network stack "
"of another service that is not running." % (self.id))
return None
@@ -1527,11 +1553,11 @@ def warn_on_masked_volume(volumes_option, container_volumes, service):
for volume in volumes_option:
if (
volume.external and
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
volume.external and
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
):
log.warn((
log.warning((
"Service \"{service}\" is using volume \"{volume}\" from the "
"previous container. Host mapping \"{host_path}\" has no effect. "
"Remove the existing containers (with `docker-compose rm {service}`) "
@@ -1576,6 +1602,7 @@ def build_mount(mount_spec):
read_only=mount_spec.read_only, consistency=mount_spec.consistency, **kwargs
)
# Labels
@@ -1630,6 +1657,7 @@ def format_environment(environment):
if isinstance(value, six.binary_type):
value = value.decode('utf-8')
return '{key}={value}'.format(key=key, value=value)
return [format_env(*item) for item in environment.items()]
@@ -1686,3 +1714,139 @@ def rewrite_build_path(path):
path = WINDOWS_LONGPATH_PREFIX + os.path.normpath(path)
return path
class _CLIBuilder(object):
def __init__(self, progress):
self._progress = progress
def build(self, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True):
"""
Args:
path (str): Path to the directory containing the Dockerfile
buildargs (dict): A dictionary of build arguments
cache_from (:py:class:`list`): A list of images used for build
cache resolution
container_limits (dict): A dictionary of limits applied to each
container created by the build process. Valid keys:
- memory (int): set memory limit for build
- memswap (int): Total memory (memory + swap), -1 to disable
swap
- cpushares (int): CPU shares (relative weight)
- cpusetcpus (str): CPUs in which to allow execution, e.g.,
``"0-3"``, ``"0,1"``
custom_context (bool): Optional if using ``fileobj``
decode (bool): If set to ``True``, the returned stream will be
decoded into dicts on the fly. Default ``False``
dockerfile (str): path within the build context to the Dockerfile
encoding (str): The encoding for a stream. Set to ``gzip`` for
compressing
extra_hosts (dict): Extra hosts to add to /etc/hosts in building
containers, as a mapping of hostname to IP address.
fileobj: A file object to use as the Dockerfile. (Or a file-like
object)
forcerm (bool): Always remove intermediate containers, even after
unsuccessful builds
isolation (str): Isolation technology used during build.
Default: `None`.
labels (dict): A dictionary of labels to set on the image
network_mode (str): networking mode for the run commands during
build
nocache (bool): Don't use the cache when set to ``True``
platform (str): Platform in the format ``os[/arch[/variant]]``
pull (bool): Downloads any updates to the FROM image in Dockerfiles
quiet (bool): Whether to return the status
rm (bool): Remove intermediate containers. The ``docker build``
command now defaults to ``--rm=true``, but we have kept the old
default of `False` to preserve backward compatibility
shmsize (int): Size of `/dev/shm` in bytes. The size must be
greater than 0. If omitted the system uses 64MB
squash (bool): Squash the resulting images layers into a
single layer.
tag (str): A tag to add to the final image
target (str): Name of the build-stage to build in a multi-stage
Dockerfile
timeout (int): HTTP timeout
use_config_proxy (bool): If ``True``, and if the docker client
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
Returns:
A generator for the build output.
"""
if dockerfile:
dockerfile = os.path.join(path, dockerfile)
iidfile = tempfile.mktemp()
command_builder = _CommandBuilder()
command_builder.add_params("--build-arg", buildargs)
command_builder.add_list("--cache-from", cache_from)
command_builder.add_arg("--file", dockerfile)
command_builder.add_flag("--force-rm", forcerm)
command_builder.add_arg("--memory", container_limits.get("memory"))
command_builder.add_flag("--no-cache", nocache)
command_builder.add_arg("--progress", self._progress)
command_builder.add_flag("--pull", pull)
command_builder.add_arg("--tag", tag)
command_builder.add_arg("--target", target)
command_builder.add_arg("--iidfile", iidfile)
args = command_builder.build([path])
magic_word = "Successfully built "
appear = False
with subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True) as p:
while True:
line = p.stdout.readline()
if not line:
break
# Fix non ascii chars on Python2. To remove when #6890 is complete.
if six.PY2:
magic_word = str(magic_word)
if line.startswith(magic_word):
appear = True
yield json.dumps({"stream": line})
with open(iidfile) as f:
line = f.readline()
image_id = line.split(":")[1].strip()
os.remove(iidfile)
# When `DOCKER_BUILDKIT=1` is set, there is no success message
# in the output. Since that message is how `Service::build` gets
# the `image_id`, it has to be added manually.
if not appear:
yield json.dumps({"stream": "{}{}\n".format(magic_word, image_id)})
class _CommandBuilder(object):
def __init__(self):
self._args = ["docker", "build"]
def add_arg(self, name, value):
if value:
self._args.extend([name, str(value)])
def add_flag(self, name, flag):
if flag:
self._args.extend([name])
def add_params(self, name, params):
if params:
for key, val in params.items():
self._args.extend([name, "{}={}".format(key, val)])
def add_list(self, name, values):
if values:
for val in values:
self._args.extend([name, val])
def build(self, args):
return self._args + args
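A usage sketch of the builder (tag and build arg are hypothetical), showing the argv handed to `subprocess.Popen` above:

builder = _CommandBuilder()
builder.add_arg("--tag", "myapp:latest")
builder.add_flag("--no-cache", True)
builder.add_params("--build-arg", {"HTTP_PROXY": "http://proxy:3128"})
builder.build(["."])
# -> ['docker', 'build', '--tag', 'myapp:latest', '--no-cache',
#     '--build-arg', 'HTTP_PROXY=http://proxy:3128', '.']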


@@ -127,7 +127,7 @@ class ProjectVolumes(object):
try:
volume.remove()
except NotFound:
log.warn("Volume %s not found.", volume.true_name)
log.warning("Volume %s not found.", volume.true_name)
def initialize(self):
try:
@@ -209,7 +209,7 @@ def check_remote_volume_config(remote, local):
if k.startswith('com.docker.'): # We are only interested in user-specified labels
continue
if remote_labels.get(k) != local_labels.get(k):
log.warn(
log.warning(
'Volume {}: label "{}" has changed. It may need to be'
' recreated.'.format(local.name, k)
)


@@ -110,11 +110,14 @@ _docker_compose_build() {
__docker_compose_nospace
return
;;
--memory|-m)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--build-arg --compress --force-rm --help --memory --no-cache --pull --parallel" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--build-arg --compress --force-rm --help --memory -m --no-cache --no-rm --pull --parallel -q --quiet" -- "$cur" ) )
;;
*)
__docker_compose_complete_services --filter source=build
@@ -147,7 +150,7 @@ _docker_compose_config() {
;;
esac
COMPREPLY=( $( compgen -W "--hash --help --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--hash --help --no-interpolate --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
}
@@ -181,6 +184,10 @@ _docker_compose_docker_compose() {
_filedir -d
return
;;
--env-file)
_filedir
return
;;
$(__docker_compose_to_extglob "$daemon_options_with_args") )
return
;;
@@ -609,6 +616,7 @@ _docker_compose() {
--tlsverify
"
local daemon_options_with_args="
--env-file
--file -f
--host -H
--project-directory


@@ -12,6 +12,7 @@ end
complete -c docker-compose -s f -l file -r -d 'Specify an alternate compose file'
complete -c docker-compose -s p -l project-name -x -d 'Specify an alternate project name'
complete -c docker-compose -l env-file -r -d 'Specify an alternate environment file (default: .env)'
complete -c docker-compose -l verbose -d 'Show more output'
complete -c docker-compose -s H -l host -x -d 'Daemon socket to connect to'
complete -c docker-compose -l tls -d 'Use TLS; implied by --tlsverify'


@@ -113,6 +113,7 @@ __docker-compose_subcommand() {
$opts_help \
"*--build-arg=[Set build-time variables for one service.]:<varname>=<value>: " \
'--force-rm[Always remove intermediate containers.]' \
'(--quiet -q)'{--quiet,-q}'[Curb build output]' \
'(--memory -m)'{--memory,-m}'[Memory limit for the build container.]' \
'--no-cache[Do not use cache when building the image.]' \
'--pull[Always attempt to pull a newer version of the image.]' \
@@ -340,6 +341,7 @@ _docker-compose() {
'(- :)'{-h,--help}'[Get help]' \
'*'{-f,--file}"[${file_description}]:file:_files -g '*.yml'" \
'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
'--env-file[Specify an alternate environment file (default: .env)]:env-file:_files' \
"--compatibility[If set, Compose will attempt to convert keys in v3 files to their non-Swarm equivalent]" \
'(- :)'{-v,--version}'[Print version and exit]' \
'--verbose[Show more output]' \
@@ -358,6 +360,7 @@ _docker-compose() {
local -a relevant_compose_flags relevant_compose_repeatable_flags relevant_docker_flags compose_options docker_options
relevant_compose_flags=(
"--env-file"
"--file" "-f"
"--host" "-H"
"--project-name" "-p"


@@ -44,7 +44,7 @@ def warn_for_links(name, service):
links = service.get('links')
if links:
example_service = links[0].partition(':')[0]
log.warn(
log.warning(
"Service {name} has links, which no longer create environment "
"variables such as {example_service_upper}_PORT. "
"If you are using those in your application code, you should "
@@ -57,7 +57,7 @@ def warn_for_links(name, service):
def warn_for_external_links(name, service):
external_links = service.get('external_links')
if external_links:
log.warn(
log.warning(
"Service {name} has external_links: {ext}, which now work "
"slightly differently. In particular, two containers must be "
"connected to at least one network in common in order to "
@@ -107,7 +107,7 @@ def rewrite_volumes_from(service, service_names):
def create_volumes_section(data):
named_volumes = get_named_volumes(data['services'])
if named_volumes:
log.warn(
log.warning(
"Named volumes ({names}) must be explicitly declared. Creating a "
"'volumes' section with declarations.\n\n"
"For backwards-compatibility, they've been declared as external. "

docker-compose-entrypoint.sh (new executable file, 20 lines)

@@ -0,0 +1,20 @@
#!/bin/sh
set -e
# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
set -- docker-compose "$@"
fi
# if our command is a valid Docker subcommand, let's invoke it through Docker instead
# (this allows for "docker run docker ps", etc)
if docker-compose help "$1" > /dev/null 2>&1; then
set -- docker-compose "$@"
fi
# if we have "--link some-docker:docker" and not DOCKER_HOST, let's set DOCKER_HOST automatically
if [ -z "$DOCKER_HOST" -a "$DOCKER_PORT_2375_TCP" ]; then
export DOCKER_HOST='tcp://docker:2375'
fi
exec "$@"


@@ -6,11 +6,9 @@ The documentation for Compose has been merged into
The docs for Compose are now here:
https://github.com/docker/docker.github.io/tree/master/compose
Please submit pull requests for unpublished features on the `vnext-compose` branch (https://github.com/docker/docker.github.io/tree/vnext-compose).
Please submit pull requests for unreleased features/changes on the `master` branch (https://github.com/docker/docker.github.io/tree/master), and prefix the PR title with `[WIP]` to indicate that it relates to an unreleased change.
If you submit a PR to this codebase that has a docs impact, create a second docs PR on `docker.github.io`. Use the docs PR template provided (coming soon - watch this space).
PRs for typos, additional information, etc. for already-published features should be labeled as `okay-to-publish` (we are still settling on a naming convention, will provide a label soon). You can submit these PRs either to `vnext-compose` or directly to `master` on `docker.github.io`
If you submit a PR to this codebase that has a docs impact, create a second docs PR on `docker.github.io`. Use the docs PR template provided.
As always, the docs remain open-source and we appreciate your feedback and
pull requests!

pyinstaller/ldd (new executable file, 13 lines)

@@ -0,0 +1,13 @@
#!/bin/sh
# From http://wiki.musl-libc.org/wiki/FAQ#Q:_where_is_ldd_.3F
#
# Musl's dynlinker comes with ldd functionality built in. just create a
# symlink from ld-musl-$ARCH.so to /bin/ldd. If the dynlinker was started
# as "ldd", it will detect that and print the appropriate DSO information.
#
# Instead, this script replaces the "ldd" marker in the output with the
# library path so that pyinstaller can find the actual lib.
exec /usr/bin/ldd "$@" | \
sed -r 's/([^[:space:]]+) => ldd/\1 => \/lib\/\1/g' | \
sed -r 's/ldd \(.*\)//g'


@@ -1 +1 @@
pyinstaller==3.3.1
pyinstaller==3.5


@@ -1,5 +1,6 @@
coverage==4.4.2
ddt==1.2.0
flake8==3.5.0
mock==2.0.0
mock==3.0.5
pytest==3.6.3
pytest-cov==2.5.1


@@ -1,9 +1,10 @@
backports.shutil_get_terminal_size==1.0.0
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
colorama==0.4.0; sys_platform == 'win32'
docker==3.7.3
docker==4.1.0
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
@@ -11,14 +12,14 @@ enum34==1.1.6; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
idna==2.5
ipaddress==1.0.18
jsonschema==2.6.0
paramiko==2.4.2
jsonschema==3.0.1
paramiko==2.6.0
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.6.7
PyYAML==4.2b1
requests==2.20.0
six==1.10.0
texttable==0.9.1
urllib3==1.21.1; python_version == '3.3'
websocket-client==0.56.0
requests==2.22.0
six==1.12.0
texttable==1.6.2
urllib3==1.24.2; python_version == '3.3'
websocket-client==0.32.0

script/Jenkinsfile.fossa (new file, 20 lines)

@@ -0,0 +1,20 @@
pipeline {
agent any
stages {
stage("License Scan") {
agent {
label 'ubuntu-1604-aufs-edge'
}
steps {
withCredentials([
string(credentialsId: 'fossa-api-key', variable: 'FOSSA_API_KEY')
]) {
checkout scm
sh "FOSSA_API_KEY='${FOSSA_API_KEY}' BRANCH_NAME='${env.BRANCH_NAME}' make -f script/fossa.mk fossa-analyze"
sh "FOSSA_API_KEY='${FOSSA_API_KEY}' make -f script/fossa.mk fossa-test"
}
}
}
}
}


@@ -7,11 +7,14 @@ if [ -z "$1" ]; then
exit 1
fi
TAG=$1
TAG="$1"
VERSION="$(python setup.py --version)"
./script/build/write-git-sha
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
python setup.py sdist bdist_wheel
./script/build/linux
docker build -t docker/compose:$TAG -f Dockerfile.run .
docker build \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}" \
-t "${TAG}" .


@@ -4,10 +4,15 @@ set -ex
./script/clean
TAG="docker-compose"
docker build -t "$TAG" .
docker run \
--rm --entrypoint="script/build/linux-entrypoint" \
-v $(pwd)/dist:/code/dist \
-v $(pwd)/.git:/code/.git \
"$TAG"
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
TAG="docker/compose:tmp-glibc-linux-binary-${DOCKER_COMPOSE_GITSHA}"
docker build -t "${TAG}" . \
--build-arg BUILD_PLATFORM=debian \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
TMP_CONTAINER=$(docker create "${TAG}")
mkdir -p dist
ARCH=$(uname -m)
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
docker container rm -f "${TMP_CONTAINER}"
docker image rm -f "${TAG}"


@@ -2,14 +2,39 @@
set -ex
TARGET=dist/docker-compose-$(uname -s)-$(uname -m)
VENV=/code/.tox/py36
CODE_PATH=/code
VENV="${CODE_PATH}"/.tox/py37
mkdir -p `pwd`/dist
chmod 777 `pwd`/dist
cd "${CODE_PATH}"
mkdir -p dist
chmod 777 dist
$VENV/bin/pip install -q -r requirements-build.txt
./script/build/write-git-sha
su -c "$VENV/bin/pyinstaller docker-compose.spec" user
mv dist/docker-compose $TARGET
$TARGET version
"${VENV}"/bin/pip3 install -q -r requirements-build.txt
# TODO(ulyssessouza): check whether this is really needed
if [ -z "${DOCKER_COMPOSE_GITSHA}" ]; then
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
fi
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
export PATH="${CODE_PATH}/pyinstaller:${PATH}"
if [ ! -z "${BUILD_BOOTLOADER}" ]; then
# Build bootloader for alpine; develop is the main branch
git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
cd /tmp/pyinstaller/bootloader
# Checkout commit corresponding to version in requirements-build
git checkout v3.5
"${VENV}"/bin/python3 ./waf configure --no-lsb all
"${VENV}"/bin/pip3 install ..
cd "${CODE_PATH}"
rm -Rf /tmp/pyinstaller
else
echo "NOT compiling bootloader!!!"
fi
"${VENV}"/bin/pyinstaller --exclude-module pycrypto --exclude-module PyInstaller docker-compose.spec
ls -la dist/
ldd dist/docker-compose
mv dist/docker-compose /usr/local/bin
docker-compose version


@@ -5,11 +5,12 @@ TOOLCHAIN_PATH="$(realpath $(dirname $0)/../../build/toolchain)"
rm -rf venv
virtualenv -p ${TOOLCHAIN_PATH}/bin/python3 venv
virtualenv -p "${TOOLCHAIN_PATH}"/bin/python3 venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-build.txt
venv/bin/pip install --no-deps .
./script/build/write-git-sha
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
venv/bin/pyinstaller docker-compose.spec
mv dist/docker-compose dist/docker-compose-Darwin-x86_64
dist/docker-compose-Darwin-x86_64 version


@@ -7,11 +7,12 @@ if [ -z "$1" ]; then
exit 1
fi
TAG=$1
TAG="$1"
IMAGE="docker/compose-tests"
docker build -t docker-compose-tests:tmp .
ctnr_id=$(docker create --entrypoint=tox docker-compose-tests:tmp)
docker commit $ctnr_id docker/compose-tests:latest
docker tag docker/compose-tests:latest docker/compose-tests:$TAG
docker rm -f $ctnr_id
docker rmi -f docker-compose-tests:tmp
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
docker build -t "${IMAGE}:${TAG}" . \
--target build \
--build-arg BUILD_PLATFORM="debian" \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
docker tag "${IMAGE}":"${TAG}" "${IMAGE}":latest


@@ -6,17 +6,17 @@
#
# http://git-scm.com/download/win
#
# 2. Install Python 3.6.4:
# 2. Install Python 3.7.2:
#
# https://www.python.org/downloads/
#
# 3. Append ";C:\Python36;C:\Python36\Scripts" to the "Path" environment variable:
# 3. Append ";C:\Python37;C:\Python37\Scripts" to the "Path" environment variable:
#
# https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
#
# 4. In Powershell, run the following commands:
#
# $ pip install 'virtualenv>=15.1.0'
# $ pip install 'virtualenv==16.2.0'
# $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
#
# 5. Clone the repository:


@@ -9,4 +9,4 @@ if [[ "${?}" != "0" ]]; then
echo "Couldn't get revision of the git repository. Setting to 'unknown' instead"
DOCKER_COMPOSE_GITSHA="unknown"
fi
echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
echo "${DOCKER_COMPOSE_GITSHA}"


@@ -1,7 +1,5 @@
#!/bin/bash
set -x
curl -f -u$BINTRAY_USERNAME:$BINTRAY_API_KEY -X GET \
https://api.bintray.com/repos/docker-compose/${CIRCLE_BRANCH}

script/fossa.mk (new file, 16 lines)

@@ -0,0 +1,16 @@
# Variables for Fossa
BUILD_ANALYZER?=docker/fossa-analyzer
FOSSA_OPTS?=--option all-tags:true --option allow-unresolved:true
fossa-analyze:
docker run --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
-w /go/src/github.com/docker/compose \
$(BUILD_ANALYZER) analyze ${FOSSA_OPTS} --branch ${BRANCH_NAME}
# This command is used to run the fossa test command
fossa-test:
docker run -i -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
-w /go/src/github.com/docker/compose \
$(BUILD_ANALYZER) test


@@ -192,6 +192,8 @@ be handled manually by the operator:
- Bump the version in `compose/__init__.py` to the *next* minor version
number with `dev` appended. For example, if you just released `1.4.0`,
update it to `1.5.0dev`
- Update compose_version in [github.com/docker/docker.github.io/blob/master/_config.yml](https://github.com/docker/docker.github.io/blob/master/_config.yml) and [github.com/docker/docker.github.io/blob/master/_config_authoring.yml](https://github.com/docker/docker.github.io/blob/master/_config_authoring.yml)
- Update the release note in [github.com/docker/docker.github.io](https://github.com/docker/docker.github.io/blob/master/release-notes/docker-compose.md)
## Advanced options


@@ -15,6 +15,7 @@ from release.const import NAME
from release.const import REPO_ROOT
from release.downloader import BinaryDownloader
from release.images import ImageManager
from release.images import is_tag_latest
from release.pypi import check_pypirc
from release.pypi import pypi_upload
from release.repository import delete_assets
@@ -204,7 +205,7 @@ def resume(args):
delete_assets(gh_release)
upload_assets(gh_release, files)
img_manager = ImageManager(args.release)
img_manager.build_images(repository, files)
img_manager.build_images(repository)
except ScriptError as e:
print(e)
return 1
@@ -244,7 +245,7 @@ def start(args):
gh_release = create_release_draft(repository, args.release, pr_data, files)
upload_assets(gh_release, files)
img_manager = ImageManager(args.release)
img_manager.build_images(repository, files)
img_manager.build_images(repository)
except ScriptError as e:
print(e)
return 1
@@ -258,7 +259,8 @@ def finalize(args):
try:
check_pypirc()
repository = Repository(REPO_ROOT, args.repo)
img_manager = ImageManager(args.release)
tag_as_latest = is_tag_latest(args.release)
img_manager = ImageManager(args.release, tag_as_latest)
pr_data = repository.find_release_pr(args.release)
if not pr_data:
raise ScriptError('No PR found for {}'.format(args.release))


@@ -6,4 +6,5 @@ import os
REPO_ROOT = os.path.join(os.path.dirname(__file__), '..', '..', '..')
NAME = 'docker/compose'
COMPOSE_TESTS_IMAGE_BASE_NAME = NAME + '-tests'
BINTRAY_ORG = 'docker-compose'


@@ -5,18 +5,36 @@ from __future__ import unicode_literals
import base64
import json
import os
import shutil
import docker
from enum import Enum
from .const import NAME
from .const import REPO_ROOT
from .utils import ScriptError
from .utils import yesno
from script.release.release.const import COMPOSE_TESTS_IMAGE_BASE_NAME
class Platform(Enum):
ALPINE = 'alpine'
DEBIAN = 'debian'
def __str__(self):
return self.value
# Check whether this version follows the GA version format ('x.y.z'), i.e. is not an RC
def is_tag_latest(version):
ga_version = all(n.isdigit() for n in version.split('.')) and version.count('.') == 2
return ga_version and yesno('Should this release be tagged as \"latest\"? [Y/n]: ', default=True)
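Hedged examples of the GA check (version strings hypothetical):

is_tag_latest('1.25.0-rc1')  # -> False without prompting: not plain 'x.y.z'
is_tag_latest('1.25.0')      # GA format, so the operator is asked via yesno()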
class ImageManager(object):
def __init__(self, version):
def __init__(self, version, latest=False):
self.docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
self.version = version
self.latest = latest
if 'HUB_CREDENTIALS' in os.environ:
print('HUB_CREDENTIALS found in environment, issuing login')
credentials = json.loads(base64.urlsafe_b64decode(os.environ['HUB_CREDENTIALS']))
@@ -24,16 +42,36 @@ class ImageManager(object):
username=credentials['Username'], password=credentials['Password']
)
def build_images(self, repository, files):
print("Building release images...")
repository.write_git_sha()
distdir = os.path.join(REPO_ROOT, 'dist')
os.makedirs(distdir, exist_ok=True)
shutil.copy(files['docker-compose-Linux-x86_64'][0], distdir)
os.chmod(os.path.join(distdir, 'docker-compose-Linux-x86_64'), 0o755)
print('Building docker/compose image')
def _tag(self, image, existing_tag, new_tag):
existing_repo_tag = '{image}:{tag}'.format(image=image, tag=existing_tag)
new_repo_tag = '{image}:{tag}'.format(image=image, tag=new_tag)
self.docker_client.tag(existing_repo_tag, new_repo_tag)
def get_full_version(self, platform=None):
return self.version + '-' + platform.__str__() if platform else self.version
def get_runtime_image_tag(self, tag):
return '{image_base_image}:{tag}'.format(
image_base_image=NAME,
tag=self.get_full_version(tag)
)
def build_runtime_image(self, repository, platform):
git_sha = repository.write_git_sha()
compose_image_base_name = NAME
print('Building {image} image ({platform} based)'.format(
image=compose_image_base_name,
platform=platform
))
full_version = self.get_full_version(platform)
build_tag = self.get_runtime_image_tag(platform)
logstream = self.docker_client.build(
REPO_ROOT, tag='docker/compose:{}'.format(self.version), dockerfile='Dockerfile.run',
REPO_ROOT,
tag=build_tag,
buildargs={
'BUILD_PLATFORM': platform.value,
'GIT_COMMIT': git_sha,
},
decode=True
)
for chunk in logstream:
@@ -42,9 +80,33 @@ class ImageManager(object):
if 'stream' in chunk:
print(chunk['stream'], end='')
print('Building test image (for UCP e2e)')
if platform == Platform.ALPINE:
self._tag(compose_image_base_name, full_version, self.version)
if self.latest:
self._tag(compose_image_base_name, full_version, platform)
if platform == Platform.ALPINE:
self._tag(compose_image_base_name, full_version, 'latest')
def get_ucp_test_image_tag(self, tag=None):
return '{image}:{tag}'.format(
image=COMPOSE_TESTS_IMAGE_BASE_NAME,
tag=tag or self.version
)
# Used for producing a test image for UCP
def build_ucp_test_image(self, repository):
print('Building test image (debian based for UCP e2e)')
git_sha = repository.write_git_sha()
ucp_test_image_tag = self.get_ucp_test_image_tag()
logstream = self.docker_client.build(
REPO_ROOT, tag='docker-compose-tests:tmp', decode=True
REPO_ROOT,
tag=ucp_test_image_tag,
target='build',
buildargs={
'BUILD_PLATFORM': Platform.DEBIAN.value,
'GIT_COMMIT': git_sha,
},
decode=True
)
for chunk in logstream:
if 'error' in chunk:
@@ -52,26 +114,15 @@ class ImageManager(object):
if 'stream' in chunk:
print(chunk['stream'], end='')
container = self.docker_client.create_container(
'docker-compose-tests:tmp', entrypoint='tox'
)
self.docker_client.commit(container, 'docker/compose-tests', 'latest')
self.docker_client.tag(
'docker/compose-tests:latest', 'docker/compose-tests:{}'.format(self.version)
)
self.docker_client.remove_container(container, force=True)
self.docker_client.remove_image('docker-compose-tests:tmp', force=True)
self._tag(COMPOSE_TESTS_IMAGE_BASE_NAME, self.version, 'latest')
@property
def image_names(self):
return [
'docker/compose-tests:latest',
'docker/compose-tests:{}'.format(self.version),
'docker/compose:{}'.format(self.version)
]
def build_images(self, repository):
self.build_runtime_image(repository, Platform.ALPINE)
self.build_runtime_image(repository, Platform.DEBIAN)
self.build_ucp_test_image(repository)
def check_images(self):
for name in self.image_names:
for name in self.get_images_to_push():
try:
self.docker_client.inspect_image(name)
except docker.errors.ImageNotFound:
@@ -79,8 +130,22 @@ class ImageManager(object):
return False
return True
def get_images_to_push(self):
tags_to_push = {
"{}:{}".format(NAME, self.version),
self.get_runtime_image_tag(Platform.ALPINE),
self.get_runtime_image_tag(Platform.DEBIAN),
self.get_ucp_test_image_tag(),
self.get_ucp_test_image_tag('latest'),
}
if is_tag_latest(self.version):
tags_to_push.add("{}:latest".format(NAME))
return tags_to_push
def push_images(self):
for name in self.image_names:
tags_to_push = self.get_images_to_push()
print('Build tags to push {}'.format(tags_to_push))
for name in tags_to_push:
print('Pushing {} to Docker Hub'.format(name))
logstream = self.docker_client.push(name, stream=True, decode=True)
for chunk in logstream:


@@ -175,6 +175,7 @@ class Repository(object):
def write_git_sha(self):
with open(os.path.join(REPO_ROOT, 'compose', 'GITSHA'), 'w') as f:
f.write(self.git_repo.head.commit.hexsha[:7])
return self.git_repo.head.commit.hexsha[:7]
def cherry_pick_prs(self, release_branch, ids):
if not ids:
@@ -219,7 +220,7 @@ def get_contributors(pr_data):
commits = pr_data.get_commits()
authors = {}
for commit in commits:
if not commit.author:
if not commit or not commit.author or not commit.author.login:
continue
author = commit.author.login
authors[author] = authors.get(author, 0) + 1


@@ -15,7 +15,7 @@
set -e
VERSION="1.24.1"
VERSION="1.24.0"
IMAGE="docker/compose:$VERSION"
@@ -48,7 +48,7 @@ fi
# Only allocate tty if we detect one
if [ -t 0 -a -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
# Always set -i to support piped and terminal input in run/exec


@@ -13,13 +13,13 @@ if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
fi
OPENSSL_VERSION=1.1.0j
OPENSSL_VERSION=1.1.1c
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
OPENSSL_SHA1=dcad1efbacd9a4ed67d4514470af12bbe2a1d60a
OPENSSL_SHA1=71b830a077276cbeccc994369538617a21bee808
PYTHON_VERSION=3.6.8
PYTHON_VERSION=3.7.4
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
PYTHON_SHA1=09fcc4edaef0915b4dedbfb462f1cd15f82d3a6f
PYTHON_SHA1=fb1d764be8a9dcd40f2f152a610a0ab04e0d0ed3
#
# Install prerequisites.
@@ -36,7 +36,7 @@ if ! [ -x "$(command -v python3)" ]; then
brew install python3
fi
if ! [ -x "$(command -v virtualenv)" ]; then
pip install virtualenv
pip install virtualenv==16.2.0
fi
#
@@ -50,7 +50,7 @@ mkdir -p ${TOOLCHAIN_PATH}
#
# Set macOS SDK.
#
if [ ${SDK_FETCH} ]; then
if [[ ${SDK_FETCH} && ! -f ${TOOLCHAIN_PATH}/MacOSX${DEPLOYMENT_TARGET}.sdk/SDKSettings.plist ]]; then
SDK_PATH=${TOOLCHAIN_PATH}/MacOSX${DEPLOYMENT_TARGET}.sdk
fetch_tarball ${SDK_URL} ${SDK_PATH} ${SDK_SHA1}
else
@@ -61,7 +61,7 @@ fi
# Build OpenSSL.
#
OPENSSL_SRC_PATH=${TOOLCHAIN_PATH}/openssl-${OPENSSL_VERSION}
if ! [ -f ${TOOLCHAIN_PATH}/bin/openssl ]; then
if ! [[ $(${TOOLCHAIN_PATH}/bin/openssl version) == *"${OPENSSL_VERSION}"* ]]; then
rm -rf ${OPENSSL_SRC_PATH}
fetch_tarball ${OPENSSL_URL} ${OPENSSL_SRC_PATH} ${OPENSSL_SHA1}
(
@@ -77,7 +77,7 @@ fi
# Build Python.
#
PYTHON_SRC_PATH=${TOOLCHAIN_PATH}/Python-${PYTHON_VERSION}
if ! [ -f ${TOOLCHAIN_PATH}/bin/python3 ]; then
if ! [[ $(${TOOLCHAIN_PATH}/bin/python3 --version) == *"${PYTHON_VERSION}"* ]]; then
rm -rf ${PYTHON_SRC_PATH}
fetch_tarball ${PYTHON_URL} ${PYTHON_SRC_PATH} ${PYTHON_SHA1}
(
@@ -87,9 +87,10 @@ if ! [ -f ${TOOLCHAIN_PATH}/bin/python3 ]; then
--datarootdir=${TOOLCHAIN_PATH}/share \
--datadir=${TOOLCHAIN_PATH}/share \
--enable-framework=${TOOLCHAIN_PATH}/Frameworks \
--with-openssl=${TOOLCHAIN_PATH} \
MACOSX_DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET} \
CFLAGS="-isysroot ${SDK_PATH} -I${TOOLCHAIN_PATH}/include" \
CPPFLAGS="-I${SDK_PATH}/usr/include -I${TOOLCHAIN_PATH}include" \
CPPFLAGS="-I${SDK_PATH}/usr/include -I${TOOLCHAIN_PATH}/include" \
LDFLAGS="-isysroot ${SDK_PATH} -L ${TOOLCHAIN_PATH}/lib"
make -j 4
make install PYTHONAPPSDIR=${TOOLCHAIN_PATH}
@@ -97,6 +98,11 @@ if ! [ -f ${TOOLCHAIN_PATH}/bin/python3 ]; then
)
fi
#
# Smoke test built Python.
#
openssl_version ${TOOLCHAIN_PATH}
echo ""
echo "*** Targeting macOS: ${DEPLOYMENT_TARGET}"
echo "*** Using SDK ${SDK_PATH}"


@@ -8,8 +8,7 @@ set -e
docker run --rm \
--tty \
${GIT_VOLUME} \
--entrypoint="tox" \
"$TAG" -e pre-commit
"$TAG" tox -e pre-commit
get_versions="docker run --rm
--entrypoint=/code/.tox/py27/bin/python
@@ -24,7 +23,7 @@ fi
BUILD_NUMBER=${BUILD_NUMBER-$USER}
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py27,py36}
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py27,py37}
for version in $DOCKER_VERSIONS; do
>&2 echo "Running tests against Docker $version"


@@ -20,6 +20,3 @@ export DOCKER_DAEMON_ARGS="--storage-driver=$STORAGE_DRIVER"
GIT_VOLUME="--volumes-from=$(hostname)"
. script/test/all
>&2 echo "Building Linux binary"
. script/build/linux-entrypoint

View File

@@ -3,17 +3,18 @@
set -ex
TAG="docker-compose:$(git rev-parse --short HEAD)"
TAG="docker-compose:alpine-$(git rev-parse --short HEAD)"
# By default use the Dockerfile, but can be overridden to use an alternative file
# e.g DOCKERFILE=Dockerfile.armhf script/test/default
# e.g DOCKERFILE=Dockerfile.s390x script/test/default
DOCKERFILE="${DOCKERFILE:-Dockerfile}"
DOCKER_BUILD_TARGET="${DOCKER_BUILD_TARGET:-build}"
rm -rf coverage-html
# Create the host directory so it's owned by $USER
mkdir -p coverage-html
docker build -f ${DOCKERFILE} -t "$TAG" .
docker build -f "${DOCKERFILE}" -t "${TAG}" --target "${DOCKER_BUILD_TARGET}" .
GIT_VOLUME="--volume=$(pwd)/.git:/code/.git"
. script/test/all

View File

@@ -31,31 +31,33 @@ def find_version(*file_paths):
install_requires = [
'cached-property >= 1.2.0, < 2',
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4.3',
'requests >= 2.6.1, != 2.11.0, != 2.12.2, != 2.18.0, < 2.21',
'texttable >= 0.9.0, < 0.10',
'websocket-client >= 0.32.0, < 1.0',
'docker[ssh] >= 3.7.0, < 4.0',
'dockerpty >= 0.4.1, < 0.5',
'docopt >= 0.6.1, < 1',
'PyYAML >= 3.10, < 5',
'requests >= 2.20.0, < 3',
'texttable >= 0.9.0, < 2',
'websocket-client >= 0.32.0, < 1',
'docker[ssh] >= 3.7.0, < 5',
'dockerpty >= 0.4.1, < 1',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
'jsonschema >= 2.5.1, < 4',
]
tests_require = [
'pytest',
'pytest < 6',
]
if sys.version_info[:2] < (3, 4):
tests_require.append('mock >= 1.0.1')
tests_require.append('mock >= 1.0.1, < 4')
extras_require = {
':python_version < "3.2"': ['subprocess32 >= 3.5.4, < 4'],
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5'],
':python_version < "3.3"': ['ipaddress >= 1.0.16'],
':sys_platform == "win32"': ['colorama >= 0.4, < 0.5'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5, < 4'],
':python_version < "3.3"': ['backports.shutil_get_terminal_size == 1.0.0',
'ipaddress >= 1.0.16, < 2'],
':sys_platform == "win32"': ['colorama >= 0.4, < 1'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
}
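
The loosened pins drop most hard upper bounds, and the extras_require keys that start with ':' are conditional dependencies: the part after the colon is a PEP 508 environment marker evaluated against the running interpreter. How such a marker evaluates can be checked with the packaging library (illustrative snippet, not part of setup.py):

from packaging.markers import Marker

for marker in ('python_version < "3.5"', 'sys_platform == "win32"'):
    print(marker, "->", Marker(marker).evaluate())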

View File

@@ -11,6 +11,7 @@ import subprocess
import time
from collections import Counter
from collections import namedtuple
from functools import reduce
from operator import attrgetter
import pytest
@@ -19,6 +20,7 @@ import yaml
from docker import errors
from .. import mock
from ..helpers import BUSYBOX_IMAGE_WITH_TAG
from ..helpers import create_host_file
from compose.cli.command import get_project
from compose.config.errors import DuplicateOverrideFileFound
@@ -62,6 +64,12 @@ def wait_on_process(proc, returncode=0):
return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))
def dispatch(base_dir, options, project_options=None, returncode=0):
project_options = project_options or []
proc = start_process(base_dir, project_options + options)
return wait_on_process(proc, returncode=returncode)
def wait_on_condition(condition, delay=0.1, timeout=40):
start_time = time.time()
while not condition():
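
dispatch is now a module-level helper, so tests outside CLITestCase can drive the CLI as well; the class method in the next hunk simply delegates to it. A hypothetical direct call:

# Hypothetical usage of the module-level dispatch() defined above:
# run `docker-compose up -d` in a fixture directory, requiring exit code 0.
result = dispatch('tests/fixtures/simple-composefile', ['up', '-d'], returncode=0)
print(result.stdout)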
@@ -149,9 +157,7 @@ class CLITestCase(DockerClientTestCase):
return self._project
def dispatch(self, options, project_options=None, returncode=0):
project_options = project_options or []
proc = start_process(self.base_dir, project_options + options)
return wait_on_process(proc, returncode=returncode)
return dispatch(self.base_dir, options, project_options, returncode)
def execute(self, container, cmd):
# Remove once Hijack and CloseNotifier sign a peace treaty
@@ -170,6 +176,13 @@ class CLITestCase(DockerClientTestCase):
# Prevent tearDown from trying to create a project
self.base_dir = None
def test_quiet_build(self):
self.base_dir = 'tests/fixtures/build-args'
result = self.dispatch(['build'], None)
quietResult = self.dispatch(['build', '-q'], None)
assert result.stdout != ""
assert quietResult.stdout == ""
def test_help_nonexistent(self):
self.base_dir = 'tests/fixtures/no-composefile'
result = self.dispatch(['help', 'foobar'], returncode=1)
@@ -258,7 +271,7 @@ class CLITestCase(DockerClientTestCase):
'volumes_from': ['service:other:rw'],
},
'other': {
'image': 'busybox:latest',
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'volumes': ['/data'],
},
@@ -324,6 +337,21 @@ class CLITestCase(DockerClientTestCase):
'version': '2.4'
}
def test_config_with_env_file(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['--env-file', '.env2', 'config'])
json_result = yaml.load(result.stdout)
assert json_result == {
'services': {
'web': {
'command': 'false',
'image': 'alpine:latest',
'ports': ['5644/tcp', '9998/tcp']
}
},
'version': '2.4'
}
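
The new test exercises the --env-file flag: values from .env2 are interpolated into the compose file before the config is rendered. As a loose illustration of that kind of ${VAR} interpolation (string.Template here stands in for Compose's real interpolation code):

from string import Template

env = {"IMAGE": "alpine:latest", "COMMAND": "false"}
service = {"image": "${IMAGE}", "command": "${COMMAND}"}
rendered = {key: Template(value).substitute(env) for key, value in service.items()}
print(rendered)  # {'image': 'alpine:latest', 'command': 'false'}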
def test_config_with_dot_env_and_override_dir(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['--project-directory', 'alt/', 'config'])
@@ -332,7 +360,7 @@ class CLITestCase(DockerClientTestCase):
'services': {
'web': {
'command': 'echo uwu',
'image': 'alpine:3.4',
'image': 'alpine:3.10.1',
'ports': ['3341/tcp', '4449/tcp']
}
},
@@ -531,7 +559,7 @@ class CLITestCase(DockerClientTestCase):
'services': {
'foo': {
'command': '/bin/true',
'image': 'alpine:3.7',
'image': 'alpine:3.10.1',
'scale': 3,
'restart': 'always:7',
'mem_limit': '300M',
@@ -616,7 +644,7 @@ class CLITestCase(DockerClientTestCase):
def test_pull_with_digest(self):
result = self.dispatch(['-f', 'digest.yml', 'pull', '--no-parallel'])
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert 'Pulling simple ({})...'.format(BUSYBOX_IMAGE_WITH_TAG) in result.stderr
assert ('Pulling digest (busybox@'
'sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b520'
'04ee8502d)...') in result.stderr
@@ -627,12 +655,19 @@ class CLITestCase(DockerClientTestCase):
'pull', '--ignore-pull-failures', '--no-parallel']
)
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert 'Pulling simple ({})...'.format(BUSYBOX_IMAGE_WITH_TAG) in result.stderr
assert 'Pulling another (nonexisting-image:latest)...' in result.stderr
assert ('repository nonexisting-image not found' in result.stderr or
'image library/nonexisting-image:latest not found' in result.stderr or
'pull access denied for nonexisting-image' in result.stderr)
def test_pull_with_build(self):
result = self.dispatch(['-f', 'pull-with-build.yml', 'pull'])
assert 'Pulling simple' not in result.stderr
assert 'Pulling from_simple' not in result.stderr
assert 'Pulling another ...' in result.stderr
def test_pull_with_quiet(self):
assert self.dispatch(['pull', '--quiet']).stderr == ''
assert self.dispatch(['pull', '--quiet']).stdout == ''
@@ -747,6 +782,27 @@ class CLITestCase(DockerClientTestCase):
]
assert not containers
@pytest.mark.xfail(True, reason='Flaky on local')
def test_build_rm(self):
containers = [
Container.from_ps(self.project.client, c)
for c in self.project.client.containers(all=True)
]
assert not containers
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', '--no-rm', 'simple'], returncode=0)
containers = [
Container.from_ps(self.project.client, c)
for c in self.project.client.containers(all=True)
]
assert containers
for c in self.project.client.containers(all=True):
self.addCleanup(self.project.client.remove_container, c, force=True)
def test_build_shm_size_build_option(self):
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-shm-size'
@@ -1108,6 +1164,22 @@ class CLITestCase(DockerClientTestCase):
]
assert len(remote_volumes) > 0
@v2_only()
def test_up_no_start_remove_orphans(self):
self.base_dir = 'tests/fixtures/v2-simple'
self.dispatch(['up', '--no-start'], None)
services = self.project.get_services()
stopped = reduce((lambda prev, next: prev.containers(
stopped=True) + next.containers(stopped=True)), services)
assert len(stopped) == 2
self.dispatch(['-f', 'one-container.yml', 'up', '--no-start', '--remove-orphans'], None)
stopped2 = reduce((lambda prev, next: prev.containers(
stopped=True) + next.containers(stopped=True)), services)
assert len(stopped2) == 1
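
Note the reduce only works because this fixture has exactly two services: after the first step the accumulator is already a list, and a list has no .containers attribute. A flatter equivalent that scales to any number of services (hypothetical alternative, not the committed code):

# Hypothetical equivalent of the reduce above.
stopped = [c for service in services for c in service.containers(stopped=True)]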
@v2_only()
def test_up_no_ansi(self):
self.base_dir = 'tests/fixtures/v2-simple'
@@ -1380,7 +1452,7 @@ class CLITestCase(DockerClientTestCase):
if v['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert set([v['Name'].split('/')[-1] for v in volumes]) == set([volume_with_label])
assert set([v['Name'].split('/')[-1] for v in volumes]) == {volume_with_label}
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@@ -2045,7 +2117,7 @@ class CLITestCase(DockerClientTestCase):
for _, config in networks.items():
# TODO: once we drop support for API <1.24, this can be changed to:
# assert config['Aliases'] == [container.short_id]
aliases = set(config['Aliases'] or []) - set([container.short_id])
aliases = set(config['Aliases'] or []) - {container.short_id}
assert not aliases
@v2_only()
@@ -2065,7 +2137,7 @@ class CLITestCase(DockerClientTestCase):
for _, config in networks.items():
# TODO: once we drop support for API <1.24, this can be changed to:
# assert config['Aliases'] == [container.short_id]
aliases = set(config['Aliases'] or []) - set([container.short_id])
aliases = set(config['Aliases'] or []) - {container.short_id}
assert not aliases
assert self.lookup(container, 'app')
@@ -2301,6 +2373,7 @@ class CLITestCase(DockerClientTestCase):
assert 'another' in result.stdout
assert 'exited with code 0' in result.stdout
@pytest.mark.skip(reason="race condition between up and logs")
def test_logs_follow_logs_from_new_containers(self):
self.base_dir = 'tests/fixtures/logs-composefile'
self.dispatch(['up', '-d', 'simple'])
@@ -2327,6 +2400,7 @@ class CLITestCase(DockerClientTestCase):
assert '{} exited with code 0'.format(another_name) in result.stdout
assert '{} exited with code 137'.format(simple_name) in result.stdout
@pytest.mark.skip(reason="race condition between up and logs")
def test_logs_follow_logs_from_restarted_containers(self):
self.base_dir = 'tests/fixtures/logs-restart-composefile'
proc = start_process(self.base_dir, ['up'])
@@ -2347,6 +2421,7 @@ class CLITestCase(DockerClientTestCase):
) == 3
assert result.stdout.count('world') == 3
@pytest.mark.skip(reason="race condition between up and logs")
def test_logs_default(self):
self.base_dir = 'tests/fixtures/logs-composefile'
self.dispatch(['up', '-d'])
@@ -2473,10 +2548,12 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['up', '-d'])
assert len(project.get_service('web').containers()) == 2
assert len(project.get_service('db').containers()) == 1
assert len(project.get_service('worker').containers()) == 0
self.dispatch(['up', '-d', '--scale', 'web=3'])
self.dispatch(['up', '-d', '--scale', 'web=3', '--scale', 'worker=1'])
assert len(project.get_service('web').containers()) == 3
assert len(project.get_service('db').containers()) == 1
assert len(project.get_service('worker').containers()) == 1
def test_up_scale_scale_down(self):
self.base_dir = 'tests/fixtures/scale'
@@ -2485,22 +2562,26 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['up', '-d'])
assert len(project.get_service('web').containers()) == 2
assert len(project.get_service('db').containers()) == 1
assert len(project.get_service('worker').containers()) == 0
self.dispatch(['up', '-d', '--scale', 'web=1'])
assert len(project.get_service('web').containers()) == 1
assert len(project.get_service('db').containers()) == 1
assert len(project.get_service('worker').containers()) == 0
def test_up_scale_reset(self):
self.base_dir = 'tests/fixtures/scale'
project = self.project
self.dispatch(['up', '-d', '--scale', 'web=3', '--scale', 'db=3'])
self.dispatch(['up', '-d', '--scale', 'web=3', '--scale', 'db=3', '--scale', 'worker=3'])
assert len(project.get_service('web').containers()) == 3
assert len(project.get_service('db').containers()) == 3
assert len(project.get_service('worker').containers()) == 3
self.dispatch(['up', '-d'])
assert len(project.get_service('web').containers()) == 2
assert len(project.get_service('db').containers()) == 1
assert len(project.get_service('worker').containers()) == 0
def test_up_scale_to_zero(self):
self.base_dir = 'tests/fixtures/scale'
@@ -2509,10 +2590,12 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['up', '-d'])
assert len(project.get_service('web').containers()) == 2
assert len(project.get_service('db').containers()) == 1
assert len(project.get_service('worker').containers()) == 0
self.dispatch(['up', '-d', '--scale', 'web=0', '--scale', 'db=0'])
self.dispatch(['up', '-d', '--scale', 'web=0', '--scale', 'db=0', '--scale', 'worker=0'])
assert len(project.get_service('web').containers()) == 0
assert len(project.get_service('db').containers()) == 0
assert len(project.get_service('worker').containers()) == 0
def test_port(self):
self.base_dir = 'tests/fixtures/ports-composefile'
@@ -2664,7 +2747,7 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = 'tests/fixtures/extends'
self.dispatch(['up', '-d'], None)
assert set([s.name for s in self.project.services]) == set(['mydb', 'myweb'])
assert set([s.name for s in self.project.services]) == {'mydb', 'myweb'}
# Sort by name so we get [db, web]
containers = sorted(
@@ -2676,15 +2759,9 @@ class CLITestCase(DockerClientTestCase):
web = containers[1]
db_name = containers[0].name_without_project
assert set(get_links(web)) == set(
['db', db_name, 'extends_{}'.format(db_name)]
)
assert set(get_links(web)) == {'db', db_name, 'extends_{}'.format(db_name)}
expected_env = set([
"FOO=1",
"BAR=2",
"BAZ=2",
])
expected_env = {"FOO=1", "BAR=2", "BAZ=2"}
assert expected_env <= set(web.get('Config.Env'))
def test_top_services_not_running(self):
@@ -2739,8 +2816,8 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['images'])
assert 'busybox' in result.stdout
assert 'multiple-composefiles_another_1' in result.stdout
assert 'multiple-composefiles_simple_1' in result.stdout
assert '_another_1' in result.stdout
assert '_simple_1' in result.stdout
@mock.patch.dict(os.environ)
def test_images_tagless_image(self):
@@ -2788,4 +2865,4 @@ class CLITestCase(DockerClientTestCase):
assert re.search(r'foo1.+test[ \t]+dev', result.stdout) is not None
assert re.search(r'foo2.+test[ \t]+prod', result.stdout) is not None
assert re.search(r'foo3.+_foo3[ \t]+latest', result.stdout) is not None
assert re.search(r'foo3.+test[ \t]+latest', result.stdout) is not None

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: ls .

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: ls /thecakeisalie

View File

@@ -1,4 +1,4 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
LABEL com.docker.compose.test_image=true
ARG favorite_th_character
RUN echo "Favorite Touhou Character: ${favorite_th_character}"

View File

@@ -1,3 +1,3 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
LABEL com.docker.compose.test_image=true
CMD echo "success"

View File

@@ -1,4 +1,4 @@
FROM busybox
FROM busybox:1.31.0-uclibc
# Report the memory (through the size of the group memory)
RUN echo "memory:" $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)

View File

@@ -1,4 +1,4 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
RUN echo a
CMD top

View File

@@ -1,4 +1,4 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
RUN echo b
CMD top

View File

@@ -1,7 +1,7 @@
version: '3.5'
services:
foo:
image: alpine:3.7
image: alpine:3.10.1
command: /bin/true
deploy:
replicas: 3

tests/fixtures/default-env-file/.env2 (new file, 4 lines)
View File

@@ -0,0 +1,4 @@
IMAGE=alpine:latest
COMMAND=false
PORT1=5644
PORT2=9998

View File

@@ -1,4 +1,4 @@
IMAGE=alpine:3.4
IMAGE=alpine:3.10.1
COMMAND=echo uwu
PORT1=3341
PORT2=4449

View File

@@ -1,4 +1,4 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
LABEL com.docker.compose.test_image=true
VOLUME /data
CMD top

View File

@@ -1,10 +1,10 @@
web:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: "sleep 100"
links:
- db
db:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: "sleep 200"

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: echo simple
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: echo another

View File

@@ -1,4 +1,4 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
LABEL com.docker.compose.test_image=true
ENTRYPOINT ["printf"]
CMD ["default", "args"]

View File

@@ -0,0 +1,2 @@
WHEREAMI
DEFAULT_CONF_LOADED=true

View File

@@ -0,0 +1 @@
WHEREAMI=override

View File

@@ -0,0 +1,6 @@
version: '3.7'
services:
test:
image: busybox
env_file: .env.conf
entrypoint: env

View File

@@ -1,5 +1,5 @@
service:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
environment:

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: sh -c "echo hello && tail -f /dev/null"
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: /bin/false

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
expose:
- '3000'

View File

@@ -1,2 +1,2 @@
FROM busybox:latest
FROM busybox:1.31.0-uclibc
RUN touch /foo

View File

@@ -8,3 +8,4 @@ services:
image: test:prod
foo3:
build: .
image: test:latest

View File

@@ -1,9 +1,9 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
log_driver: "none"
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
log_driver: "json-file"
log_opt:

View File

@@ -1,12 +1,12 @@
version: "2"
services:
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
logging:
driver: "none"
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
logging:
driver: "json-file"

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
command: sh -c "echo hello && tail -f /dev/null"
image: busybox:1.31.0-uclibc
command: sh -c "sleep 1 && echo hello && tail -f /dev/null"
another:
image: busybox:latest
command: sh -c "echo test"
image: busybox:1.31.0-uclibc
command: sh -c "sleep 1 && echo test"

View File

@@ -1,7 +1,7 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: sh -c "echo hello && tail -f /dev/null"
another:
image: busybox:latest
command: sh -c "sleep 0.5 && echo world && /bin/false"
image: busybox:1.31.0-uclibc
command: sh -c "sleep 2 && echo world && /bin/false"
restart: "on-failure:2"

View File

@@ -1,3 +1,3 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: sh -c "echo w && echo x && echo y && echo z"

View File

@@ -1,3 +1,3 @@
definedinyamlnotyml:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top

View File

@@ -1,3 +1,3 @@
yetanother:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top

View File

@@ -1,10 +1,10 @@
version: "2"
services:
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
networks:
default:

View File

@@ -2,17 +2,17 @@ version: "2"
services:
web:
image: alpine:3.7
image: alpine:3.10.1
command: top
networks: ["front"]
app:
image: alpine:3.7
image: alpine:3.10.1
command: top
networks: ["front", "back"]
links:
- "db:database"
db:
image: alpine:3.7
image: alpine:3.10.1
command: top
networks: ["back"]

View File

@@ -1,10 +1,10 @@
version: "2"
services:
simple:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
another:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
networks:
default:

View File

@@ -1,9 +1,9 @@
db:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
web:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top
console:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: top

View File

@@ -1,10 +1,10 @@
version: '2.2'
services:
web:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: "sleep 200"
depends_on:
- db
db:
image: busybox:latest
image: busybox:1.31.0-uclibc
command: "sleep 200"

Some files were not shown because too many files have changed in this diff.