Compare commits


201 Commits

Author SHA1 Message Date
Joffrey F
dfed245b57 Bump 1.11.2
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-16 17:16:07 -08:00
Joffrey F
b9e9177ba9 Fix config command output with service.secrets section
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-16 17:16:07 -08:00
Daniel Nephin
1d88989ff5 Fix secrets config.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-02-16 17:16:07 -08:00
Joffrey F
0d668aa446 Don't import pip inside Compose
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-16 17:16:07 -08:00
Joffrey F
5198a5d33e Update docker SDK dependency
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-16 17:16:07 -08:00
Joffrey F
afe22ed15d Merge remote-tracking branch 'source/release' into bump-1.11.2
2017-02-16 17:07:04 -08:00
Joffrey F
fa90e4e555 Merge pull request #4458 from docker/bump-1.11.1
Bump 1.11.1
2017-02-09 12:38:41 -08:00
Joffrey F
7c5d5e4031 Bump 1.11.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-09 10:18:13 -08:00
Daniel Nephin
c79a1c7288 Fix version 3.1
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-02-09 10:18:13 -08:00
Joffrey F
4e2e383282 Merge remote-tracking branch 'source/release' into bump-1.11.1
2017-02-09 10:12:21 -08:00
Joffrey F
6238502087 Merge pull request #4447 from docker/bump-1.11.0
Bump 1.11.0
2017-02-08 13:42:55 -08:00
Joffrey F
6de1806658 Bump 1.11.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-08 11:30:29 -08:00
Joffrey F
fbe688b05a Merge remote-tracking branch 'source/release' into bump-1.11.0
2017-02-08 11:28:53 -08:00
Joffrey F
daed6dbb91 Merge pull request #4427 from docker/bump-1.11.0-rc1
Bump 1.11.0 RC1
2017-02-06 15:02:54 -08:00
Joffrey F
0ea24e7a80 Bump 1.11.0-rc1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-03 15:27:42 -08:00
Joffrey F
6e9a894ccf Upgrade python and pip versions in Dockerfile
Add libbz2 dependency
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-03 15:19:33 -08:00
Joffrey F
0519afd5d3 Use newer version of PyInstaller to fix prelinking issues
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-03 15:19:33 -08:00
Kevin Jing Qiu
8f72dadd75 Close the open file handle using context manager
Signed-off-by: Kevin Jing Qiu <kevin.qiu@points.com>
2017-02-03 15:19:21 -08:00
Joffrey F
9a59a9c3ff Bump 1.10.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-03 15:18:44 -08:00
Joffrey F
37a987edaf Merge remote-tracking branch 'source/release' into bump-1.11.0-rc1
2017-02-03 15:02:44 -08:00
Joffrey F
951497c0f2 Merge pull request #4419 from shin-/4418-healthcheck-extends
Don't re-parse healthcheck values coming from extended services
2017-02-03 12:24:53 -08:00
Joffrey F
e22164ec9f Merge pull request #4035 from urda/urda/compose-top
Added `top` to `docker-compose` to display running processes
2017-02-02 15:41:14 -08:00
Joffrey F
f106d23776 Merge pull request #4414 from shin-/4184-merge-pids
Add missing comma in DOCKER_CONFIG_KEYS list
2017-02-02 14:51:42 -08:00
Joffrey F
cf43e6edf7 Don't re-parse healthcheck values coming from extended services
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-02 14:50:53 -08:00
Joffrey F
7e8958e6ca Add missing comma in DOCKER_CONFIG_KEYS list
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-02-01 16:40:43 -08:00
Joffrey F
11038c455b Merge pull request #4334 from muicoder/master
add IMAGE_EVENTS: load/save
2017-02-01 15:49:03 -08:00
Peter Urda
a67500ee57 Added top to docker-compose to display running processes.
This commit allows `docker-compose` to access `top` for containers
much like running `docker top` directly on a given container.

This commit includes:

* `docker-compose` CLI changes to expose `top`
* Completions for `bash` and `zsh`
* Required testing for the new `top` command

Signed-off-by: Peter Urda <peter.urda@gmail.com>
2017-02-01 15:42:30 -08:00
Joffrey F
1f39b33357 Merge pull request #3989 from mattjbray/patch-1
Zsh completion: permit multiple --file arguments
2017-02-01 15:08:39 -08:00
Joffrey F
67e1111806 Merge pull request #4410 from shin-/4408-colors-no-tty
Don't strip ANSI color codes when output is not a TTY
2017-02-01 14:20:10 -08:00
Joffrey F
c9eb9380ed Merge pull request #4368 from dnephin/secrets-using-bind-mounts
Secrets using bind mounts
2017-02-01 14:11:20 -08:00
Joffrey F
5a0ef19ee0 Merge pull request #4407 from docker/bump-1.10.1
Bump 1.10.1
2017-02-01 14:09:38 -08:00
Joffrey F
b25273892d Bump 1.10.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-31 15:39:34 -08:00
Joffrey F
c16cd77737 Merge pull request #4406 from shin-/bump_docker_py
Bump docker SDK version
2017-01-31 15:36:30 -08:00
Joffrey F
8efb7e6e8b Don't strip ANSI color codes when output is not a TTY
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-31 12:51:46 -08:00
Daniel Nephin
59d1847d9b Fix some test failures.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:40 -05:00
Daniel Nephin
3a2735abb9 Rebase compose v3.1 on the latest v3
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:16 -05:00
Daniel Nephin
0d609b68ac Add a warning for unsupported secret fields.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:16 -05:00
Daniel Nephin
4053adc7d3 Add an integration test for secrets using bind mounts.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:16 -05:00
Daniel Nephin
e0c6397999 Implement secrets using bind mounts
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:16 -05:00
Daniel Nephin
add56ce818 Read service secrets as a type.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:16 -05:00
Daniel Nephin
a82de8863e Add v3.1 with secrets.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-31 09:53:16 -05:00
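The secrets series above (v3.1 schema plus a bind-mount implementation) can be illustrated with a minimal Compose file; the service and secret names here are hypothetical, not taken from the repository:

```yaml
version: "3.1"
services:
  web:
    image: nginx
    secrets:
      - my_secret            # hypothetical secret name
secrets:
  my_secret:
    file: ./my_secret.txt    # file-based secret; outside Swarm, Compose surfaces it via a bind mount
```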
Joffrey F
586637b6a8 Bump docker SDK version
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 17:18:47 -08:00
Joffrey F
2593366a3e Bump docker SDK version
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 16:49:05 -08:00
Joffrey F
507e0d7a64 Convert time data back to string values when serializing config
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 16:24:40 -08:00
Joffrey F
d454a1d3fb Detect conflicting version of the docker python SDK and prevent execution until issue is fixed
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 16:24:40 -08:00
Joffrey F
10365278cc Don't encode build context path on Windows
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 16:24:40 -08:00
Daniel Nephin
ce2219ec37 Add missing network.internal to v3 schema.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-30 16:24:40 -08:00
Joffrey F
e035931f2e Fix volume definition in v3 schema
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 16:24:40 -08:00
Joffrey F
5b912082e1 depends_on merge now retains condition information when present
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-30 16:24:40 -08:00
Joffrey F
4d94594d37 Merge remote-tracking branch 'source/release' into bump-1.10.1
2017-01-30 16:11:06 -08:00
Joffrey F
76d4f5bea6 Merge pull request #4383 from shin-/4344-detect-docker-py
Detect conflicting version of the docker python SDK
2017-01-30 12:14:27 -08:00
Joffrey F
5895d8bbc9 Detect conflicting version of the docker python SDK and prevent execution
until issue is fixed

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-26 17:12:41 -08:00
Joffrey F
e05a9f4e62 Merge pull request #4389 from shin-/4372-normalize-time-values
Convert time data back to string values when serializing config
2017-01-26 13:47:34 -08:00
Joffrey F
e10d1140b9 Convert time data back to string values when serializing config
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-26 11:59:22 -08:00
Joffrey F
56e01f25ea Merge pull request #4367 from dnephin/add-missing-network-internal-to-v3
Add missing network.internal to v3 schema.
2017-01-25 13:41:41 -08:00
Joffrey F
c86faab4ec Merge pull request #4370 from shin-/4357-win32-unicode-paths
Don't encode build context path on Windows
2017-01-23 12:04:42 -08:00
Joffrey F
20d6f450b5 Don't encode build context path on Windows
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-20 15:05:53 -08:00
Daniel Nephin
644e1716c3 Add missing network.internal to v3 schema.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-20 12:55:59 -05:00
Joffrey F
9a0962dacb Merge pull request #4361 from shin-/4348-serialize-ext-volumes
Remove external_name from volume def in config output
2017-01-19 17:41:08 -08:00
Joffrey F
263b9e9317 Merge pull request #4360 from shin-/4359-volume-labels
Fix volume definition in v3 schema
2017-01-19 17:40:29 -08:00
Joffrey F
d83d31889e Remove external_name from volume def in config output
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-19 16:05:13 -08:00
Joffrey F
5c2165eaaf Fix volume definition in v3 schema
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-19 15:41:31 -08:00
Joffrey F
8a27a0f059 Merge pull request #4350 from shin-/fix-invalid-depends-on-merge
depends_on merge now retains condition information when present
2017-01-19 14:50:29 -08:00
Joffrey F
b47c97e94e Merge pull request #4358 from shin-/1.11-dev
1.11.0dev
2017-01-19 14:50:10 -08:00
Joffrey F
a482c138d8 Merge pull request #4353 from xulike666/fight-for-readability
Fix typo in script/test/versions.py
2017-01-19 14:49:05 -08:00
Joffrey F
1c46525c2b 1.11.0dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-19 14:47:31 -08:00
Aaron.L.Xu
169289c8b6 find a fishbone
Signed-off-by: Aaron.L.Xu <likexu@harmonycloud.cn>
2017-01-20 00:52:19 +08:00
Joffrey F
1a02121ab5 depends_on merge now retains condition information when present
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-18 17:52:03 -08:00
Joffrey F
adcd1901e9 Merge pull request #4340 from docker/bump-1.10.0
Bump 1.10.0
2017-01-17 15:25:06 -08:00
Joffrey F
708c4f9534 Merge pull request #4339 from shin-/4328-catch-healthcheck-exception
Catch healthcheck exceptions in parallel_execute
2017-01-17 14:16:37 -08:00
Joffrey F
4bd6f1a0d8 Bump 1.10.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-17 13:27:49 -08:00
Joffrey F
4302861b21 Catch healthcheck exceptions in parallel_execute
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-17 13:27:49 -08:00
Joffrey F
76678747c7 Provide valid serialization of depends_on when format is not 2.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-17 13:27:49 -08:00
Joffrey F
ab97716a95 Use correct wheel file name in twine upload command
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-17 13:27:49 -08:00
Joffrey F
260995fd56 Merge remote-tracking branch 'source/release' into bump-1.10.0
2017-01-17 13:24:01 -08:00
Joffrey F
56a1b02aac Catch healthcheck exceptions in parallel_execute
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-17 13:22:16 -08:00
muicoder
931027c598 add IMAGE_EVENTS: load/save
Signed-off-by: muicoder <muicoder@gmail.com>
2017-01-16 10:53:40 +08:00
Joffrey F
5ade097d74 Merge pull request #4324 from shin-/4321-v3-depends-on
Provide valid serialization of depends_on when format is not 2.1
2017-01-12 11:52:15 -08:00
Joffrey F
62cdd25b7d Merge pull request #4323 from shin-/fix-push-release-script
Use correct wheel file name in twine upload command
2017-01-12 11:52:05 -08:00
Joffrey F
2df31bb13c Provide valid serialization of depends_on when format is not 2.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-11 16:25:40 -08:00
Joffrey F
29b46d5b26 Use correct wheel file name in twine upload command
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-11 15:39:48 -08:00
Joffrey F
0f8a1aa7a3 Merge pull request #4312 from docker/bump-1.10.0-rc2
Bump 1.10.0 rc2
2017-01-11 11:36:56 -08:00
Joffrey F
2091149fee Merge pull request #4317 from shin-/release-script-fixes
Fix docker image build script when using universal wheels
2017-01-10 17:21:27 -08:00
Joffrey F
fb241d0906 Bump 1.10.0-rc2
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-10 16:47:40 -08:00
Joffrey F
71dd874600 Fix docker image build script when using universal wheels
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-10 16:47:40 -08:00
Joffrey F
19190ea0df Fix docker image build script when using universal wheels
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-10 16:43:26 -08:00
Joffrey F
1c1fe89e43 Merge pull request #4316 from shin-/update-setup-py
Update setup.py extra_requires
2017-01-10 15:50:11 -08:00
Joffrey F
740a6842e8 Update setup.py extra_requires
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-10 15:01:13 -08:00
Thomas Grainger
340a3fc09c enable universal wheels
Signed-off-by: Thomas Grainger <tom.grainger@procensus.com>
2017-01-10 15:01:13 -08:00
Joffrey F
52792b7a96 Update setup.py extra_requires
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-10 14:59:11 -08:00
Joffrey F
9f6778aa73 Merge pull request #4172 from graingert/enable-universal-wheels
enable universal wheels
2017-01-10 14:57:22 -08:00
Jun Guo
cbb730172f Fix 404 issue, change APIError to more accureate ImageNotFound
Signed-off-by: Jun Guo <blackhumour.gj@gmail.com>
2017-01-09 16:44:32 -08:00
Joffrey F
545153f117 Merge pull request #4288 from tntC4stl3/fix_404_issue
Fix 404 issue, change APIError to more accureate ImageNotFound
2017-01-09 16:43:57 -08:00
Daniel Nephin
91851cd5ae Fix schema typo.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2017-01-09 16:38:42 -08:00
Joffrey F
22b837975d Add support for stop_grace_period in v2
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-09 16:38:25 -08:00
Joffrey F
de38c023ce Falsy values in COMPOSE_CONVERT_WINDOWS_PATHS are properly recognized
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-09 16:38:23 -08:00
Joffrey F
344f015a22 Use docker SDK 2.0.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-09 16:38:14 -08:00
Joffrey F
5dabe4ae62 Merge remote-tracking branch 'source/release' into bump-1.10.0-rc2
2017-01-09 16:27:38 -08:00
Joffrey F
3f7b3fbf0a Merge pull request #4304 from shin-/4302-dockerignore-windows
Use docker SDK patch
2017-01-09 16:14:57 -08:00
Joffrey F
88294b46dd Merge pull request #4294 from shin-/4240-compose-convert-false
Ensure falsy values in COMPOSE_CONVERT_WINDOWS_PATHS are properly recognized
2017-01-09 15:49:44 -08:00
Joffrey F
2c157e8fa9 Use docker SDK 2.0.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-09 15:27:20 -08:00
Joffrey F
b570cba965 Merge pull request #3961 from ankon/patch-1
Fix typo
2017-01-09 12:15:04 -08:00
Joffrey F
3bb8a7d178 Merge pull request #4297 from shin-/4271-fix-schemas
Fix config schemas (misplaced "additionalProperties")
2017-01-06 12:10:22 -08:00
Joffrey F
45c7ee4466 Merge pull request #4279 from dnephin/fix-schema-typo
Fix schema typo
2017-01-05 17:15:39 -08:00
Joffrey F
e063c5739f Fix config schemas (misplaced "additionalProperties")
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-05 11:18:21 -08:00
Joffrey F
534b4ed820 Falsy values in COMPOSE_CONVERT_WINDOWS_PATHS are properly recognized
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 15:45:11 -08:00
Joffrey F
27d91bba01 Merge pull request #4293 from shin-/stop_timeout_v2.1
Add support for stop_grace_period in v2
2017-01-04 15:14:18 -08:00
Joffrey F
3dc5f91942 Merge pull request #4292 from docker/bump-1.10.0-rc1
Bump 1.10.0 rc1
2017-01-04 14:53:19 -08:00
Joffrey F
1be41f59c9 Add support for stop_grace_period in v2
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 14:30:20 -08:00
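Per the commit above, `stop_grace_period` (originally a v3 option) also becomes usable in the v2 formats. A minimal sketch, with a hypothetical service:

```yaml
version: "2.1"
services:
  web:
    image: nginx
    stop_grace_period: 1m30s   # how long to wait after SIGTERM before killing the container
```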
Joffrey F
ecff6f1a9a Bump 1.10.0-rc1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 13:54:11 -08:00
Joffrey F
f90618fc43 Unify healthcheck spec definition in v2 and v3
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 13:54:11 -08:00
Joffrey F
838bdd71f3 Merge pull request #4291 from shin-/unify-healthcheck-def-v2v3
Unify healthcheck spec definition in v2 and v3
2017-01-04 13:53:39 -08:00
Joffrey F
a45bb184f1 Merge remote-tracking branch 'source/release' into bump-1.10.0-rc1
2017-01-04 13:16:46 -08:00
Joffrey F
47b672e393 Merge pull request #4051 from shin-/update-release-process
Update release process document to account for recent changes.
2017-01-04 13:15:49 -08:00
Joffrey F
8145429399 Unify healthcheck spec definition in v2 and v3
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 13:14:23 -08:00
Joffrey F
be88bb0e6c Merge pull request #4267 from shin-/3754-depends-on-healthcheck
Allow service dependencies to wait on healthy containers
2017-01-04 13:00:19 -08:00
Joffrey F
bef2308530 Fix condition name in config tests
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 11:28:30 -08:00
Joffrey F
04394b1d0a Expand depends_on to allow different conditions (service_start, service_healthy)
Rework "up" and "start" to wait on conditional state of dependent services
Add integration tests

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 11:28:30 -08:00
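The expanded `depends_on` described above can be sketched as a 2.1-format file (service names and healthcheck command are hypothetical):

```yaml
version: "2.1"
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 10s
      retries: 5
  web:
    image: nginx
    depends_on:
      db:
        condition: service_healthy   # "up"/"start" wait for db's healthcheck to pass
```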
Joffrey F
f6edd610f3 Add 3.0 schema to docker-compose.spec
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 11:26:45 -08:00
Joffrey F
0edfe08bf0 Add healthchecks to 2.1 schema. Update depends_on
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-01-04 11:26:45 -08:00
Thomas Grainger
2648af6807 enable universal wheels
Signed-off-by: Thomas Grainger <tom.grainger@procensus.com>
2017-01-04 18:33:58 +00:00
Jun Guo
c73fc26824 Fix 404 issue, change APIError to more accureate ImageNotFound
Signed-off-by: Jun Guo <blackhumour.gj@gmail.com>
2017-01-04 15:42:31 +08:00
Daniel Nephin
a74b2f2f70 Fix schema typo.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2016-12-28 15:16:53 -05:00
Joffrey F
09690e1758 Merge pull request #4220 from shin-/4211-default_network_labels
Add default labels to networks and volumes created by Compose
2016-12-20 02:34:57 -08:00
Joffrey F
3058e39407 Merge pull request #4268 from shin-/jtakkala-3765-sysctl-support
Add sysctl option support when creating service
2016-12-20 02:34:02 -08:00
Joffrey F
e1ebb0df22 Merge pull request #4070 from shin-/use-colorama
Use colorama to enable colored output on Windows
2016-12-20 02:32:51 -08:00
Joffrey F
ba47fb99ba Add default labels to networks and volumes created by Compose
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-19 20:43:35 -08:00
Joffrey F
346802715d Use colorama to enable colored output on Windows
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-19 20:40:47 -08:00
Joffrey F
eb6441c8e3 Add sysctls option to 3.0 schema
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-19 20:35:09 -08:00
Joffrey F
82230071d5 Merge branch '3765-sysctl-support' of https://github.com/jtakkala/compose into jtakkala-3765-sysctl-support
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-19 20:33:54 -08:00
Joffrey F
e6b2949edc Merge pull request #4216 from lawliet89/userns_mode
Implement `userns_mode` HostConfig for services
2016-12-19 18:12:15 -08:00
Joffrey F
ea7b565009 Merge pull request #4232 from shin-/attachable_networks
Make created networks attachable for file format >=2.1
2016-12-16 13:06:57 -08:00
Joffrey F
e736151ee4 Make created networks attachable for file format >=2.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-16 12:23:24 -08:00
Joffrey F
fb165d9c15 Add v3_only marker to healthcheck test
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-16 12:21:59 -08:00
Joffrey F
9cb6a70b6f Merge pull request #4219 from shin-/dockerpy_2.0
Use docker SDK 2.0
2016-12-14 16:19:34 -08:00
Joffrey F
04e5925a23 Use docker SDK 2.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-14 15:36:08 -08:00
Joffrey F
635a281777 Merge pull request #4163 from aanand/add-healthcheck
Implement 'healthcheck' option
2016-12-14 15:35:05 -08:00
Aanand Prasad
599b29e405 Merge pull request #4213 from shin-/4212-interactive-connect
Win32 interactive run - Connect container to networks before starting
2016-12-08 13:41:33 +00:00
Aanand Prasad
30eac9f1ae Merge pull request #4226 from dnephin/update-api-version-for-v3
Increase minimum version for compose format v3
2016-12-07 16:59:28 +00:00
Daniel Nephin
e04a12b5ca Increase minimum version for v3.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2016-12-07 11:16:40 -05:00
Joffrey F
b453ee46e8 Merge pull request #4228 from shin-/bump_version
Bump master version to 1.10.0dev
2016-12-06 17:50:44 -08:00
Joffrey F
4d0575355c Bump master version to 1.10.0dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-06 17:14:05 -08:00
Yong Wen Chua
62f8b1402e Implement userns_mode HostConfig for services
Fixes #3349

This allows the key `userns_mode` to be used in service definitions.
Since `userns_mode` requires API version > 1.23, this is only available
in 2.1 and 3.0 versions of compose file

Signed-off-by: Yong Wen Chua <me@yongwen.xyz>
2016-12-05 14:25:56 +08:00
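As described in the commit above, `userns_mode` becomes a service-level key in the 2.1 and 3.0 formats. A minimal sketch with a hypothetical service:

```yaml
version: "2.1"
services:
  app:
    image: alpine
    userns_mode: "host"   # opt this service out of user namespace remapping
```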
Joffrey F
6aacf51427 Win32 interactive run - Connect container to networks before starting
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-12-02 16:10:15 -08:00
Joffrey F
9bfb855b89 Merge pull request #4187 from ijc25/pull-urxvt-corruption
progress_stream: Avoid undefined ANSI escape codes
2016-12-02 13:54:08 -08:00
Ian Campbell
dc9184a90f progress_stream: Avoid undefined ANSI escape codes
The ANSI escape codes \e[0A (cursor up 0 lines) and \e[0B (cursor down 0 lines)
are not well defined and are treated differently by different terminals. In
particular xterm treats 0 as a missing parameter and therefore defaults to 1,
whereas rxvt-unicode treats these escapes as a request to move 0 lines.

However the use of these codes is unnecessary and were really just hiding the
fact that we were not correctly computing diff when adding a new line. Having
added the new line to the ids map and output the corresponding \n the correct
diff would be 1 and not 0 (which xterm interprets as 1) as currently.

Rather than changing the hardcoded 0 to a 1 pull the diff calculation out and
always do it since it produces the correct answer in both cases.

This fixes similar corruption when compose is pulling an image to that seen
with `docker pull` and rxvt-unicode (and likely other terminals in that family)
seen in docker/docker#28111.

This is the same as the fix made to Docker's pkg/jsonmessage in
https://github.com/docker/docker/pull/28238 (and I have shamelessly ripped off
most of this commit message from there).

Signed-off-by: Ian Campbell <ian.campbell@docker.com>
2016-11-25 10:16:37 +00:00
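The diff calculation described in the commit message can be sketched in Python. This is a simplified illustration (names are hypothetical; the real logic lives in Compose's `progress_stream` handling): registering a new line and emitting its newline *before* computing the diff means the diff is at least 1, so the ill-defined `ESC[0A`/`ESC[0B` sequences are never produced.

```python
def print_progress(line_positions, lines, image_id, status):
    """Reprint the status line for image_id, appending a new line first if unseen."""
    if image_id not in line_positions:
        line_positions[image_id] = len(line_positions)
        lines.append("")  # the newly emitted line (a trailing "\n" on a real terminal)
    # Always compute the diff: for a just-added line it is 1, never 0,
    # so cursor-up/down escapes are well defined on every terminal.
    diff = len(line_positions) - line_positions[image_id]
    cursor_up = "\x1b[%dA" % diff    # move up `diff` lines to the id's line
    cursor_down = "\x1b[%dB" % diff  # move back down to the bottom
    lines[line_positions[image_id]] = status
    return diff, cursor_up, cursor_down
```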
Joffrey F
def150a129 Merge pull request #4173 from graingert/case-pypi-correctly
case PyPI correctly
2016-11-22 18:15:10 -08:00
Aanand Prasad
af894b4dff Merge pull request #4170 from dnephin/warn-on-deploy
Update messages about docker stack deploy
2016-11-22 17:07:13 +00:00
Thomas Grainger
024b5dd6da case PyPI correctly
Signed-off-by: Thomas Grainger <tom.grainger@procensus.com>
2016-11-22 11:15:21 +00:00
Daniel Nephin
c26a2afaf3 Update messages about docker stack deploy.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2016-11-21 16:34:49 -05:00
Aanand Prasad
716a6baa59 Implement 'healthcheck' option
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2016-11-18 14:47:02 +00:00
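The `healthcheck` option implemented above can be sketched in a 2.1-format file (the service and probe command are hypothetical):

```yaml
version: "2.1"
services:
  web:
    image: nginx
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
```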
Joffrey F
f5ad3e7577 Merge pull request #4165 from shin-/changelog_update
Changelog update
2016-11-17 13:29:32 -08:00
Joffrey F
b93211881b Changelog update
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-16 13:35:29 -08:00
Joffrey F
ae26fdf916 Merge pull request #4157 from docker/bump-1.9.0
Bump 1.9.0
2016-11-16 11:20:18 -08:00
Aanand Prasad
466ebb6cc1 Merge pull request #4146 from dnephin/add-stop-grace-period
Add stop grace period to v3
2016-11-16 17:56:20 +00:00
Daniel Nephin
079c95c340 Use stop grace period for container stop.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2016-11-16 12:12:28 -05:00
Daniel Nephin
6cac48c056 Add a vendored and modified pytimeparse
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2016-11-16 12:10:32 -05:00
Aanand Prasad
586443ba5d Merge pull request #4147 from aanand/version-3.0
Support version 3.0 of the Compose file format
2016-11-16 16:34:50 +00:00
Aanand Prasad
f75ef6862f Warn if any services use 'deploy'
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2016-11-16 16:07:02 +00:00
Aanand Prasad
d717c88b6e Support version 3.0 of the Compose file format
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2016-11-16 16:07:02 +00:00
Joffrey F
721bf89447 Merge pull request #4134 from shin-/fix_none_opts_network_check
Avoid breaking when remote driver options are null
2016-11-15 15:01:24 -08:00
Joffrey F
09540339e0 Merge pull request #4131 from aanand/test-environment-overrides-env-file
Test that values in 'environment' override env files
2016-11-15 14:18:54 -08:00
Joffrey F
7f60ff5ae6 Avoid breaking when remote driver options are null.
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-15 12:00:52 -08:00
Joffrey F
3d0747017a Merge pull request #4151 from shin-/fix_ignore_pull_failure_behavior_1.13
Handle new pull failures behavior in Engine 1.13
2016-11-15 12:00:19 -08:00
Joffrey F
efb4ed1b9e Handle new pull failures behavior in Engine 1.13
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-15 11:30:52 -08:00
Joffrey F
d7e9748501 Merge pull request #4150 from shin-/test_only_ubuntu_host
Limit testing pool to Ubuntu hosts to avoid errors with dind
2016-11-14 18:04:30 -08:00
Joffrey F
0291d9ade5 Limit testing pool to Ubuntu hosts to avoid errors with dind not
starting properly.

Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-14 17:23:25 -08:00
Jari Takkala
91620ae97b Add sysctl option support when creating service
Closes #3765

Signed-off-by: Jari Takkala <jtakkala@gmail.com>
2016-11-10 16:41:16 -05:00
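The sysctl support added above can be sketched as a service-level `sysctls` mapping (service name and values are hypothetical):

```yaml
version: "2.1"
services:
  web:
    image: nginx
    sysctls:
      net.core.somaxconn: 1024
      net.ipv4.tcp_syncookies: 0
```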
Aanand Prasad
ba249e5179 Test that values in 'environment' override env files
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2016-11-09 15:10:27 +00:00
Joffrey F
0e28163ccb Merge pull request #4115 from shin-/fix_overlay_options_mismatch
Call check_remote_network_config from Network.ensure
2016-11-08 15:35:38 -08:00
Aanand Prasad
0902edef43 Merge pull request #4113 from shin-/4103-merge-empty-logging
Fix logging dict merging
2016-11-08 14:56:27 +00:00
Joffrey F
4aa7d15d97 Call check_remote_network_config from Network.ensure
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-04 10:51:14 -07:00
Joffrey F
10417eebd7 Fix logging dict merging
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-03 17:22:31 -07:00
Joffrey F
9046e33ab2 Merge pull request #4112 from mikedougherty/rm-docs-checker
Remove docs checker from Jenkinsfile and use cleanWorkspace option on wrappedNode
2016-11-03 17:22:05 -07:00
Mike Dougherty
da1508051d Remove docs checker from Jenkinsfile and use cleanWorkspace option on wrappedNode
Signed-off-by: Mike Dougherty <mike.dougherty@docker.com>
2016-11-03 14:00:17 -07:00
Joffrey F
969abca47f Merge pull request #4098 from shin-/fix_overlay_options_mismatch
Add whitelisted driver option added by the overlay driver
2016-11-03 11:43:11 -07:00
Joffrey F
d525dd1846 Merge pull request #4104 from shin-/bump_docker_py
Bump docker-py
2016-11-03 11:43:04 -07:00
Joffrey F
7a430dbe96 Updated docker-py dependency to latest version
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-02 16:56:45 -07:00
Joffrey F
3b46a62f36 Merge pull request #4073 from mikedougherty/jenkinsfile
Update Jenkinsfile to run "janky" tasks
2016-11-02 14:55:11 -07:00
Joffrey F
ba43d08fbd Add whitelisted driver option added by the overlay driver to avoid breakage
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-11-01 11:47:12 -07:00
Joffrey F
ed71391f66 Merge pull request #4062 from strayobject/patch-1
fix(docs): updated documentation links
2016-10-28 11:55:58 -07:00
Joffrey F
e563b58595 Merge pull request #4071 from NiR-/fix/run-as-container
Fix path of the parent dir of COMPOSE_FILE
2016-10-27 15:03:54 -07:00
Aanand Prasad
62c9ed93d0 Merge pull request #4084 from shin-/bump_docker_py
Bump docker-py version to include latest patch
2016-10-27 12:53:44 -07:00
Joffrey F
046144e8f4 Bump docker-py version to include latest patch
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-27 12:13:32 -07:00
Mike Dougherty
4871523d5e Update Jenkinsfile to perform existing jenkins tasks
Signed-off-by: Mike Dougherty <mike.dougherty@docker.com>
2016-10-25 11:51:02 -07:00
Aanand Prasad
505c785c5c Merge pull request #4066 from shin-/fix-schema-divergence
Fix schema divergence - add missing fields to compose 2.1 schema
2016-10-25 11:20:51 -07:00
Albin Kerouanton
99343fd76c Fix path of the parent dir of COMPOSE_FILE
Signed-off-by: Albin Kerouanton <albin.kerouanton@knplabs.com>
2016-10-25 11:09:45 +02:00
Aanand Prasad
8b5782ba9f Merge pull request #4067 from shin-/portable-find-exe
Replace "which" calls with the portable find_executable function
2016-10-24 16:15:21 -07:00
Aanand Prasad
d586328812 Merge pull request #4065 from shin-/refine-swarm-warning
Do not print Swarm mode warning when connecting to a UCP server
2016-10-24 16:15:14 -07:00
Aanand Prasad
021bb41fc6 Merge pull request #4069 from shin-/project_tests_robustness
Improve robustness of a couple integration tests with occasional failures
2016-10-24 15:35:52 -07:00
Joffrey F
60d005b055 Improve robustness of a couple integration tests with occasional failures
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-24 13:58:45 -07:00
Joffrey F
d2fb146913 Replace "which" calls with the portable find_executable function
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-24 13:22:52 -07:00
Joffrey F
43e29b41c0 Fix schema divergence - add missing fields to compose 2.1 schema
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-24 11:51:45 -07:00
Joffrey F
a652b6818c Merge pull request #4064 from shin-/missing-2.1-spec
Add missing config schema to docker-compose.spec
2016-10-24 11:43:17 -07:00
Joffrey F
ea68be3441 Do not print Swarm mode warning when connecting to a UCP server
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-24 11:36:44 -07:00
Joffrey F
2c24bc3a08 Add missing config schema to docker-compose.spec
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-24 10:55:04 -07:00
Michal Zdrojewski
3d8dc6f47a fix(docs): updated documentation links
Signed-off-by: Michal Zdrojewski <code@strayobject.co.uk>
2016-10-24 15:41:31 +01:00
Joffrey F
f039c8b43c Update release process document to account for recent changes.
Signed-off-by: Joffrey F <joffrey@docker.com>
2016-10-20 17:47:07 -07:00
Matt Bray
a37d99f201 Zsh completion: change --file description text
Signed-off-by: Matt Bray <mattjbray@gmail.com>
2016-09-30 00:45:46 +01:00
Matthew Bray
90356b7040 Zsh completion: permit multiple --file arguments
Before this change:

```
$ docker-compose --file docker-compose.yml -<TAB>
 -- option --
--help                 -h  -- Get help
--host                 -H  -- Daemon socket to connect to
--project-name         -p  -- Specify an alternate project name (default: directory name)
--skip-hostname-check      -- Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)
--tls                      -- Use TLS; implied by --tlsverify
--tlscacert                -- Trust certs signed only by this CA
--tlscert                  -- Path to TLS certificate file
--tlskey                   -- Path to TLS key file
--tlsverify                -- Use TLS and verify the remote
--verbose                  -- Show more output
--version              -v  -- Print version and exit
```

(Note the `--file` argument is no longer available to complete.)

After this change:

```
docker-compose --file docker-compose.yml -<TAB>
 -- option --
--file                 -f  -- Specify an alternate docker-compose file (default: docker-compose.yml)
--help                 -h  -- Get help
--host                 -H  -- Daemon socket to connect to
--project-name         -p  -- Specify an alternate project name (default: directory name)
--skip-hostname-check      -- Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)
--tls                      -- Use TLS; implied by --tlsverify
--tlscacert                -- Trust certs signed only by this CA
--tlscert                  -- Path to TLS certificate file
--tlskey                   -- Path to TLS key file
--tlsverify                -- Use TLS and verify the remote
--verbose                  -- Show more output
--version              -v  -- Print version and exit
```

Signed-off-by: Matt Bray <mattjbray@gmail.com>
2016-09-28 12:04:13 +01:00
Andreas Kohn
cb3bf869f4 Fix typo
Signed-off-by: Andreas Kohn <andreas.kohn@gmail.com>
2016-09-20 13:59:17 +02:00
67 changed files with 2808 additions and 311 deletions


@@ -1,6 +1,130 @@
Change log
==========
1.11.2 (2017-02-17)
-------------------
### Bugfixes
- Fixed a bug that was preventing secrets configuration from being
loaded properly
- Fixed a bug where the `docker-compose config` command would fail
if the config file contained secrets definitions
- Fixed an issue where Compose on some Linux distributions would
pick up and load an outdated version of the requests library
- Fixed an issue where socket-type files inside a build folder
would cause `docker-compose` to crash when trying to build that
service
- Fixed an issue where recursive wildcard patterns `**` were not being
recognized in `.dockerignore` files.
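For context on the `.dockerignore` fix: `**` is meant to span any number of path segments, while `*` stays within a single segment. Docker's real matcher is implemented in Go and has additional rules (for instance, `**/` patterns also match at the root of the build context); the following Python sketch is an illustrative approximation only:

```python
import re

def dockerignore_match(pattern, path):
    """Illustrative matcher for .dockerignore-style patterns.

    '**' matches any number of path segments, '*' stays within one
    segment. This is a simplified approximation of Docker's Go matcher.
    """
    parts = []
    # Split on wildcard tokens, keeping them via the capture group.
    for token in re.split(r'(\*\*|\*)', pattern):
        if token == '**':
            parts.append('.*')        # crosses '/' boundaries
        elif token == '*':
            parts.append('[^/]*')     # stops at '/' boundaries
        else:
            parts.append(re.escape(token))
    return re.fullmatch(''.join(parts), path) is not None
```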
1.11.1 (2017-02-09)
-------------------
### Bugfixes
- Fixed a bug where the 3.1 file format was not being recognized as valid
by the Compose parser
1.11.0 (2017-02-08)
-------------------
### New Features
#### Compose file version 3.1
- Introduced version 3.1 of the `docker-compose.yml` specification. This
version requires Docker Engine 1.13.0 or above. It introduces support
for secrets. See the documentation for more information.
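Secrets (like networks and volumes) can be declared `external`, in which case they may not carry any other attributes. A standalone sketch of the `validate_external` helper this release adds to `compose/config/config.py` (using `ValueError` in place of Compose's `ConfigurationError` to stay self-contained):

```python
def validate_external(entity_type, name, config):
    # An external secret/network/volume definition may contain nothing
    # besides the `external` key itself; reject anything extra.
    if len(config) <= 1:
        return
    raise ValueError(
        "{} {} declared as external but specifies additional attributes "
        "({}).".format(entity_type, name,
                       ', '.join(k for k in config if k != 'external')))
```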
#### Compose file version 2.0 and up
- Introduced the `docker-compose top` command that displays processes running
for the different services managed by Compose.
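Under the hood, `docker-compose top` queries the Docker API's container `top` endpoint and tabulates the returned `Titles`/`Processes` data. Compose formats the result with its own `Formatter().table` helper; a minimal self-contained equivalent of that rendering step, for illustration:

```python
def format_top(container_name, top_data):
    # top_data is the dict the Docker API returns for a container's
    # processes: {"Titles": [...column names...], "Processes": [[...], ...]}
    headers = top_data.get("Titles") or []
    rows = top_data.get("Processes") or []
    # Pad each column to the width of its widest cell.
    widths = [max(len(str(cell)) for cell in col)
              for col in zip(headers, *rows)]
    lines = [container_name]
    for row in [headers] + rows:
        lines.append('   '.join(str(cell).ljust(w)
                                for cell, w in zip(row, widths)))
    return '\n'.join(lines)
```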
### Bugfixes
- Fixed a bug where extending a service defining a healthcheck dictionary
would cause `docker-compose` to error out.
- Fixed an issue where the `pid` entry in a service definition was being
ignored when using multiple Compose files.
1.10.1 (2017-02-01)
-------------------
### Bugfixes
- Fixed an issue where presence of older versions of the docker-py
package would cause unexpected crashes while running Compose
- Fixed an issue where healthcheck dependencies would be lost when
using multiple compose files for a project
- Fixed a few issues that made the output of the `config` command
invalid
- Fixed an issue where adding volume labels to v3 Compose files would
result in an error
- Fixed an issue on Windows where build context paths containing unicode
characters were being improperly encoded
- Fixed a bug where Compose would occasionally crash while streaming logs
when containers would stop or restart
1.10.0 (2017-01-18)
-------------------
### New Features
#### Compose file version 3.0
- Introduced version 3.0 of the `docker-compose.yml` specification. This
version must be used with Docker Engine 1.13 or above and is
specifically designed to work with the `docker stack` commands.
#### Compose file version 2.1 and up
- Healthcheck configuration can now be done in the service definition using
the `healthcheck` parameter
- Container dependencies can now be set up to wait on positive healthchecks
when declared using `depends_on`. See the documentation for the updated
syntax.
**Note:** This feature will not be ported to version 3 Compose files.
- Added support for the `sysctls` parameter in service definitions
- Added support for the `userns_mode` parameter in service definitions
- Compose now adds identifying labels to networks and volumes it creates
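Internally, the short list form of `depends_on` is normalized into the new condition-based dict form, defaulting each entry to `service_started`. A sketch mirroring the `process_depends_on` helper this release adds to `compose/config/config.py`:

```python
def process_depends_on(service_dict):
    # Rewrite `depends_on: [db, redis]` into the dict form so both
    # syntaxes converge before startup ordering is computed.
    deps = service_dict.get('depends_on')
    if deps is not None and not isinstance(deps, dict):
        service_dict['depends_on'] = {
            svc: {'condition': 'service_started'} for svc in deps
        }
    return service_dict
```

A service that should wait for a healthy dependency instead declares `condition: service_healthy` explicitly in the dict form.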
#### Compose file version 2.0 and up
- Added support for the `stop_grace_period` option in service definitions.
### Bugfixes
- Colored output now works properly on Windows.
- Fixed a bug where docker-compose run would fail to set up link aliases
in interactive mode on Windows.
- Networks created by Compose are now always made attachable
(Compose files v2.1 and up).
- Fixed a bug where falsy values of `COMPOSE_CONVERT_WINDOWS_PATHS`
(`0`, `false`, empty value) were being interpreted as true.
- Fixed a bug where forward slashes in some .dockerignore patterns weren't
being parsed correctly on Windows
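The `COMPOSE_CONVERT_WINDOWS_PATHS` fix makes Compose interpret the variable's value as a boolean rather than testing mere presence. A minimal sketch of that interpretation, mirroring the `Environment.get_boolean` lookup the config code now calls:

```python
def get_boolean(value):
    # Unset/empty, '0' and 'false' (any case) are falsy; any other
    # non-empty value set in the environment counts as truthy.
    if not value:
        return False
    return value.lower() not in ('0', 'false')
```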
1.9.0 (2016-11-16)
------------------
@@ -143,7 +267,7 @@ Bug Fixes
- Fixed a bug in Windows environment where volume mappings of the
host's root directory would be parsed incorrectly.
- Fixed a bug where `docker-compose config` would ouput an invalid
- Fixed a bug where `docker-compose config` would output an invalid
Compose file if external networks were specified.
- Fixed an issue where unset buildargs would be assigned a string
@@ -814,7 +938,7 @@ Fig has been renamed to Docker Compose, or just Compose for short. This has seve
- The command you type is now `docker-compose`, not `fig`.
- You should rename your fig.yml to docker-compose.yml.
- If you're installing via PyPi, the package is now `docker-compose`, so install it with `pip install docker-compose`.
- If you're installing via PyPI, the package is now `docker-compose`, so install it with `pip install docker-compose`.
Besides that, there's a lot of new stuff in this release:


@@ -13,6 +13,7 @@ RUN set -ex; \
ca-certificates \
curl \
libsqlite3-dev \
libbz2-dev \
; \
rm -rf /var/lib/apt/lists/*
@@ -20,40 +21,32 @@ RUN curl https://get.docker.com/builds/Linux/x86_64/docker-1.8.3 \
-o /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker
# Build Python 2.7.9 from source
# Build Python 2.7.13 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz | tar -xz; \
cd Python-2.7.9; \
curl -L https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz | tar -xz; \
cd Python-2.7.13; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-2.7.9
rm -rf /Python-2.7.13
# Build python 3.4 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz | tar -xz; \
cd Python-3.4.3; \
curl -L https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz | tar -xz; \
cd Python-3.4.6; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-3.4.3
rm -rf /Python-3.4.6
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib
# Install setuptools
RUN set -ex; \
curl -L https://bootstrap.pypa.io/ez_setup.py | python
# Install pip
RUN set -ex; \
curl -L https://pypi.python.org/packages/source/p/pip/pip-8.1.1.tar.gz | tar -xz; \
cd pip-8.1.1; \
python setup.py install; \
cd ..; \
rm -rf pip-8.1.1
curl -L https://bootstrap.pypa.io/get-pip.py | python
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen


@@ -1,5 +1,6 @@
FROM alpine:3.4
ARG version
RUN apk -U add \
python \
py-pip
@@ -7,7 +8,7 @@ RUN apk -U add \
COPY requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
ADD dist/docker-compose-release.tar.gz /code/docker-compose
RUN pip install --no-deps /code/docker-compose/docker-compose-*
COPY dist/docker_compose-${version}-py2.py3-none-any.whl /code/
RUN pip install --no-deps /code/docker_compose-${version}-py2.py3-none-any.whl
ENTRYPOINT ["/usr/bin/docker-compose"]

Jenkinsfile

@@ -2,17 +2,10 @@
def image
def checkDocs = { ->
wrappedNode(label: 'linux') {
deleteDir(); checkout(scm)
documentationChecker("docs")
}
}
def buildImage = { ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("build image") {
deleteDir(); checkout(scm)
checkout(scm)
def imageName = "dockerbuildbot/compose:${gitCommit()}"
image = docker.image(imageName)
try {
@@ -39,7 +32,7 @@ def runTests = { Map settings ->
{ ->
wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
stage("test python=${pythonVersions} / docker=${dockerVersions}") {
deleteDir(); checkout(scm)
checkout(scm)
def storageDriver = sh(script: 'docker info | awk -F \': \' \'$1 == "Storage Driver" { print $2; exit }\'', returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
sh """docker run \\
@@ -62,19 +55,10 @@ def runTests = { Map settings ->
}
}
def buildAndTest = { ->
buildImage()
// TODO: break this out into meaningful "DOCKER_VERSIONS" values instead of all
parallel(
failFast: true,
all_py27: runTests(pythonVersions: "py27", dockerVersions: "all"),
all_py34: runTests(pythonVersions: "py34", dockerVersions: "all"),
)
}
buildImage()
// TODO: break this out into meaningful "DOCKER_VERSIONS" values instead of all
parallel(
failFast: false,
docs: checkDocs,
test: buildAndTest
failFast: true,
all_py27: runTests(pythonVersions: "py27", dockerVersions: "all"),
all_py34: runTests(pythonVersions: "py34", dockerVersions: "all"),
)


@@ -6,11 +6,11 @@ Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose
see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features).
see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases).
[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
Using Compose is basically a three-step process.
@@ -35,7 +35,7 @@ A `docker-compose.yml` looks like this:
image: redis
For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md)
[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file.md)
Compose has commands for managing the whole lifecycle of your application:


@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.9.0'
__version__ = '1.11.2'


@@ -0,0 +1,37 @@
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import subprocess
import sys
# Attempt to detect https://github.com/docker/compose/issues/4344
try:
# We don't try importing pip because it messes with package imports
# on some Linux distros (Ubuntu, Fedora)
# https://github.com/docker/compose/issues/4425
# https://github.com/docker/compose/issues/4481
# https://github.com/pypa/pip/blob/master/pip/_vendor/__init__.py
s_cmd = subprocess.Popen(
['pip', 'freeze'], stderr=subprocess.PIPE, stdout=subprocess.PIPE
)
packages = s_cmd.communicate()[0].splitlines()
dockerpy_installed = len(
list(filter(lambda p: p.startswith(b'docker-py=='), packages))
) > 0
if dockerpy_installed:
from .colors import red
print(
red('ERROR:'),
"Dependency conflict: an older version of the 'docker-py' package "
"is polluting the namespace. "
"Run the following command to remedy the issue:\n"
"pip uninstall docker docker-py; pip install docker",
file=sys.stderr
)
sys.exit(1)
except OSError:
# pip command is not available, which indicates it's probably the binary
# distribution of Compose which is not affected
pass


@@ -1,5 +1,8 @@
from __future__ import absolute_import
from __future__ import unicode_literals
import colorama
NAMES = [
'grey',
'red',
@@ -30,6 +33,7 @@ def make_color_fn(code):
return lambda s: ansi_color(code, s)
colorama.init(strip=False)
for (name, code) in get_pairs():
globals()[name] = make_color_fn(code)


@@ -3,7 +3,7 @@ from __future__ import unicode_literals
import logging
from docker import Client
from docker import APIClient
from docker.errors import TLSParameterError
from docker.tls import TLSConfig
from docker.utils import kwargs_from_env
@@ -71,4 +71,4 @@ def docker_client(environment, version=None, tls_config=None, host=None,
kwargs['user_agent'] = generate_user_agent()
return Client(**kwargs)
return APIClient(**kwargs)


@@ -24,7 +24,6 @@ from ..config import ConfigurationError
from ..config import parse_environment
from ..config.environment import Environment
from ..config.serialize import serialize_config
from ..const import DEFAULT_TIMEOUT
from ..const import IS_WINDOWS_PLATFORM
from ..errors import StreamParseError
from ..progress_stream import StreamOutputError
@@ -192,6 +191,7 @@ class TopLevelCommand(object):
scale Set number of containers for a service
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information
@@ -726,7 +726,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
timeout = timeout_from_opts(options)
for s in options['SERVICE=NUM']:
if '=' not in s:
@@ -760,7 +760,7 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
timeout = timeout_from_opts(options)
self.project.stop(service_names=options['SERVICE'], timeout=timeout)
def restart(self, options):
@@ -773,10 +773,37 @@ class TopLevelCommand(object):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
timeout = timeout_from_opts(options)
containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
exit_if(not containers, 'No containers to restart', 1)
def top(self, options):
"""
Display the running processes
Usage: top [SERVICE...]
"""
containers = sorted(
self.project.containers(service_names=options['SERVICE'], stopped=False) +
self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only),
key=attrgetter('name')
)
for idx, container in enumerate(containers):
if idx > 0:
print()
top_data = self.project.client.top(container.name)
headers = top_data.get("Titles")
rows = []
for process in top_data.get("Processes", []):
rows.append(process)
print(container.name)
print(Formatter().table(headers, rows))
def unpause(self, options):
"""
Unpause services.
@@ -831,7 +858,7 @@ class TopLevelCommand(object):
start_deps = not options['--no-deps']
cascade_stop = options['--abort-on-container-exit']
service_names = options['SERVICE']
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
timeout = timeout_from_opts(options)
remove_orphans = options['--remove-orphans']
detached = options.get('-d')
@@ -896,6 +923,11 @@ def convergence_strategy_from_opts(options):
return ConvergenceStrategy.changed
def timeout_from_opts(options):
timeout = options.get('--timeout')
return None if timeout is None else int(timeout)
def image_type_from_opt(flag, value):
if not value:
return ImageType.none
@@ -984,6 +1016,7 @@ def run_one_off_container(container_options, project, service, options):
try:
try:
if IS_WINDOWS_PLATFORM:
service.connect_container_to_networks(container)
exit_code = call_docker(["start", "--attach", "--interactive", container.id])
else:
operation = RunOperation(


@@ -12,10 +12,14 @@ import six
import yaml
from cached_property import cached_property
from . import types
from ..const import COMPOSEFILE_V1 as V1
from ..const import COMPOSEFILE_V2_0 as V2_0
from ..const import COMPOSEFILE_V2_1 as V2_1
from ..const import COMPOSEFILE_V3_0 as V3_0
from ..const import COMPOSEFILE_V3_1 as V3_1
from ..utils import build_string_dict
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from .environment import env_vars_from_file
from .environment import Environment
@@ -64,6 +68,7 @@ DOCKER_CONFIG_KEYS = [
'extra_hosts',
'group_add',
'hostname',
'healthcheck',
'image',
'ipc',
'labels',
@@ -73,18 +78,21 @@ DOCKER_CONFIG_KEYS = [
'memswap_limit',
'mem_swappiness',
'net',
'oom_score_adj'
'oom_score_adj',
'pid',
'ports',
'privileged',
'read_only',
'restart',
'secrets',
'security_opt',
'shm_size',
'stdin_open',
'stop_signal',
'sysctls',
'tty',
'user',
'userns_mode',
'volume_driver',
'volumes',
'volumes_from',
@@ -175,10 +183,8 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
if version == '2':
version = V2_0
if version not in (V2_0, V2_1):
raise ConfigurationError(
'Version in "{}" is unsupported. {}'
.format(self.filename, VERSION_EXPLANATION))
if version == '3':
version = V3_0
return version
@@ -194,8 +200,11 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
def get_networks(self):
return {} if self.version == V1 else self.config.get('networks', {})
def get_secrets(self):
return {} if self.version < V3_1 else self.config.get('secrets', {})
class Config(namedtuple('_Config', 'version services volumes networks')):
class Config(namedtuple('_Config', 'version services volumes networks secrets')):
"""
:param version: configuration version
:type version: int
@@ -320,13 +329,22 @@ def load(config_details):
networks = load_mapping(
config_details.config_files, 'get_networks', 'Network'
)
secrets = load_secrets(config_details.config_files, config_details.working_dir)
service_dicts = load_services(config_details, main_file)
if main_file.version != V1:
for service_dict in service_dicts:
match_named_volumes(service_dict, volumes)
return Config(main_file.version, service_dicts, volumes, networks)
services_using_deploy = [s for s in service_dicts if s.get('deploy')]
if services_using_deploy:
log.warn(
"Some services ({}) use the 'deploy' key, which will be ignored. "
"Compose does not support deploy configuration - use "
"`docker stack deploy` to deploy to a swarm."
.format(", ".join(sorted(s['name'] for s in services_using_deploy))))
return Config(main_file.version, service_dicts, volumes, networks, secrets)
def load_mapping(config_files, get_func, entity_type):
@@ -340,22 +358,12 @@ def load_mapping(config_files, get_func, entity_type):
external = config.get('external')
if external:
if len(config.keys()) > 1:
raise ConfigurationError(
'{} {} declared as external but specifies'
' additional attributes ({}). '.format(
entity_type,
name,
', '.join([k for k in config.keys() if k != 'external'])
)
)
validate_external(entity_type, name, config)
if isinstance(external, dict):
config['external_name'] = external.get('name')
else:
config['external_name'] = name
mapping[name] = config
if 'driver_opts' in config:
config['driver_opts'] = build_string_dict(
config['driver_opts']
@@ -367,6 +375,39 @@ def load_mapping(config_files, get_func, entity_type):
return mapping
def validate_external(entity_type, name, config):
if len(config.keys()) <= 1:
return
raise ConfigurationError(
"{} {} declared as external but specifies additional attributes "
"({}).".format(
entity_type, name, ', '.join(k for k in config if k != 'external')))
def load_secrets(config_files, working_dir):
mapping = {}
for config_file in config_files:
for name, config in config_file.get_secrets().items():
mapping[name] = config or {}
if not config:
continue
external = config.get('external')
if external:
validate_external('Secret', name, config)
if isinstance(external, dict):
config['external_name'] = external.get('name')
else:
config['external_name'] = name
if 'file' in config:
config['file'] = expand_path(working_dir, config['file'])
return mapping
def load_services(config_details, config_file):
def build_service(service_name, service_dict, service_names):
service_config = ServiceConfig.with_abs_paths(
@@ -433,7 +474,7 @@ def process_config_file(config_file, environment, service_name=None):
'service',
environment)
if config_file.version in (V2_0, V2_1):
if config_file.version in (V2_0, V2_1, V3_0, V3_1):
processed_config = dict(config_file.config)
processed_config['services'] = services
processed_config['volumes'] = interpolate_config_section(
@@ -446,9 +487,12 @@ def process_config_file(config_file, environment, service_name=None):
config_file.get_networks(),
'network',
environment)
if config_file.version == V1:
elif config_file.version == V1:
processed_config = services
else:
raise ConfigurationError(
'Version in "{}" is unsupported. {}'
.format(config_file.filename, VERSION_EXPLANATION))
config_file = config_file._replace(config=processed_config)
validate_against_config_schema(config_file)
@@ -629,10 +673,59 @@ def process_service(service_config):
if 'extra_hosts' in service_dict:
service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])
if 'sysctls' in service_dict:
service_dict['sysctls'] = build_string_dict(parse_sysctls(service_dict['sysctls']))
service_dict = process_depends_on(service_dict)
for field in ['dns', 'dns_search', 'tmpfs']:
if field in service_dict:
service_dict[field] = to_list(service_dict[field])
service_dict = process_healthcheck(service_dict, service_config.name)
return service_dict
def process_depends_on(service_dict):
if 'depends_on' in service_dict and not isinstance(service_dict['depends_on'], dict):
service_dict['depends_on'] = dict([
(svc, {'condition': 'service_started'}) for svc in service_dict['depends_on']
])
return service_dict
def process_healthcheck(service_dict, service_name):
if 'healthcheck' not in service_dict:
return service_dict
hc = {}
raw = service_dict['healthcheck']
if raw.get('disable'):
if len(raw) > 1:
raise ConfigurationError(
'Service "{}" defines an invalid healthcheck: '
'"disable: true" cannot be combined with other options'
.format(service_name))
hc['test'] = ['NONE']
elif 'test' in raw:
hc['test'] = raw['test']
if 'interval' in raw:
if not isinstance(raw['interval'], six.integer_types):
hc['interval'] = parse_nanoseconds_int(raw['interval'])
else: # Conversion has been done previously
hc['interval'] = raw['interval']
if 'timeout' in raw:
if not isinstance(raw['timeout'], six.integer_types):
hc['timeout'] = parse_nanoseconds_int(raw['timeout'])
else: # Conversion has been done previously
hc['timeout'] = raw['timeout']
if 'retries' in raw:
hc['retries'] = raw['retries']
service_dict['healthcheck'] = hc
return service_dict
@@ -652,7 +745,7 @@ def finalize_service(service_config, service_names, version, environment):
if 'volumes' in service_dict:
service_dict['volumes'] = [
VolumeSpec.parse(
v, environment.get('COMPOSE_CONVERT_WINDOWS_PATHS')
v, environment.get_boolean('COMPOSE_CONVERT_WINDOWS_PATHS')
) for v in service_dict['volumes']
]
@@ -670,6 +763,11 @@ def finalize_service(service_config, service_names, version, environment):
if 'restart' in service_dict:
service_dict['restart'] = parse_restart_spec(service_dict['restart'])
if 'secrets' in service_dict:
service_dict['secrets'] = [
types.ServiceSecret.parse(s) for s in service_dict['secrets']
]
normalize_build(service_dict, service_config.working_dir, environment)
service_dict['name'] = service_config.name
@@ -757,14 +855,17 @@ def merge_service_dicts(base, override, version):
md.merge_mapping('labels', parse_labels)
md.merge_mapping('ulimits', parse_ulimits)
md.merge_mapping('networks', parse_networks)
md.merge_mapping('sysctls', parse_sysctls)
md.merge_mapping('depends_on', parse_depends_on)
md.merge_sequence('links', ServiceLink.parse)
md.merge_sequence('secrets', types.ServiceSecret.parse)
for field in ['volumes', 'devices']:
md.merge_field(field, merge_path_mappings)
for field in [
'ports', 'cap_add', 'cap_drop', 'expose', 'external_links',
'security_opt', 'volumes_from', 'depends_on',
'security_opt', 'volumes_from',
]:
md.merge_field(field, merge_unique_items_lists, default=[])
@@ -831,11 +932,11 @@ def merge_environment(base, override):
return env
def split_label(label):
if '=' in label:
return label.split('=', 1)
def split_kv(kvpair):
if '=' in kvpair:
return kvpair.split('=', 1)
else:
return label, ''
return kvpair, ''
def parse_dict_or_list(split_func, type_name, arguments):
@@ -856,8 +957,12 @@ def parse_dict_or_list(split_func, type_name, arguments):
parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')
parse_labels = functools.partial(parse_dict_or_list, split_kv, 'labels')
parse_networks = functools.partial(parse_dict_or_list, lambda k: (k, None), 'networks')
parse_sysctls = functools.partial(parse_dict_or_list, split_kv, 'sysctls')
parse_depends_on = functools.partial(
parse_dict_or_list, lambda k: (k, {'condition': 'service_started'}), 'depends_on'
)
def parse_ulimits(ulimits):


@@ -192,6 +192,7 @@
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
@@ -275,9 +276,9 @@
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"additionalProperties": false
},
"additionalProperties": false
}
},
"additionalProperties": false
},


@@ -77,7 +77,28 @@
"cpu_shares": {"type": ["number", "string"]},
"cpu_quota": {"type": ["number", "string"]},
"cpuset": {"type": "string"},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"depends_on": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"additionalProperties": false,
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"type": "object",
"additionalProperties": false,
"properties": {
"condition": {
"type": "string",
"enum": ["service_started", "service_healthy"]
}
},
"required": ["condition"]
}
}
}
]
},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
@@ -120,6 +141,7 @@
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
@@ -193,7 +215,9 @@
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
@@ -217,6 +241,7 @@
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"volume_driver": {"type": "string"},
"volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
@@ -229,6 +254,24 @@
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"network": {
"id": "#/definitions/network",
"type": "object",
@@ -279,10 +322,10 @@
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"},
"additionalProperties": false
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},


@@ -0,0 +1,383 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.0.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
}
},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"container_name": {"type": "string"},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "ports"
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
}
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"resource": {
"id": "#/definitions/resource",
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}


@@ -0,0 +1,428 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.1.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/secret"
}
},
"additionalProperties": false
}
},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"container_name": {"type": "string"},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "ports"
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"secrets": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type": "object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
}
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"resource": {
"id": "#/definitions/resource",
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}


@@ -2,6 +2,7 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import codecs
import contextlib
import logging
import os
@@ -31,11 +32,12 @@ def env_vars_from_file(filename):
elif not os.path.isfile(filename):
raise ConfigurationError("%s is not a file." % (filename))
env = {}
for line in codecs.open(filename, 'r', 'utf-8'):
line = line.strip()
if line and not line.startswith('#'):
k, v = split_env(line)
env[k] = v
with contextlib.closing(codecs.open(filename, 'r', 'utf-8')) as fileobj:
for line in fileobj:
line = line.strip()
if line and not line.startswith('#'):
k, v = split_env(line)
env[k] = v
return env
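The env-file parsing rules above (skip blank lines and `#` comments, split on the first `=`) can be sketched stand-alone; `split_env` here is a simplified stand-in for the Compose helper of the same name, not the real implementation:

```python
def split_env(line):
    # Simplified stand-in for compose.config.environment.split_env:
    # split on the first '=', a value may be absent entirely.
    if '=' in line:
        k, v = line.split('=', 1)
        return k, v
    return line, None


def env_vars_from_lines(lines):
    # Same filtering rules as env_vars_from_file: blank lines and
    # '#' comments are ignored, everything else is a KEY=VALUE pair.
    env = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith('#'):
            k, v = split_env(line)
            env[k] = v
    return env
```

Note that only the first `=` splits, so values containing `=` survive intact.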
@@ -105,3 +107,14 @@ class Environment(dict):
super(Environment, self).get(key.upper(), *args, **kwargs)
)
return super(Environment, self).get(key, *args, **kwargs)
def get_boolean(self, key):
# Convert a value to a boolean using "common sense" rules.
# Unset, empty, "0" and "false" (case-insensitive) yield False.
# All other values yield True.
value = self.get(key)
if not value:
return False
if value.lower() in ['0', 'false']:
return False
return True
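The "common sense" truthiness rules of the new `get_boolean` method can be exercised in isolation; `Env` below is a hypothetical minimal stand-in for the `Environment` class, not the Compose class itself:

```python
class Env(dict):
    """Minimal stand-in for compose.config.environment.Environment."""

    def get_boolean(self, key):
        # Unset, empty, "0" and "false" (case-insensitive) yield False;
        # every other value yields True.
        value = self.get(key)
        if not value:
            return False
        if value.lower() in ['0', 'false']:
            return False
        return True


env = Env({'TLS': 'True', 'DEBUG': 'FALSE', 'VERBOSE': '0', 'EMPTY': ''})
```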


@@ -3,8 +3,8 @@ from __future__ import unicode_literals
VERSION_EXPLANATION = (
'You might be seeing this error because you\'re using the wrong Compose '
'file version. Either specify a version of "2" (or "2.0") and place your '
'You might be seeing this error because you\'re using the wrong Compose file version. '
'Either specify a supported version ("2.0", "2.1", "3.0") and place your '
'service definitions under the `services` key, or omit the `version` key '
'and place your service definitions at the root of the file to use '
'version 1.\nFor more on the Compose file format versions, see '


@@ -6,7 +6,6 @@ import yaml
from compose.config import types
from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.config import V2_1
@@ -33,15 +32,20 @@ def denormalize_config(config):
if 'external_name' in net_conf:
del net_conf['external_name']
volumes = config.volumes.copy()
for vol_name, vol_conf in volumes.items():
if 'external_name' in vol_conf:
del vol_conf['external_name']
version = config.version
if version not in (V2_0, V2_1):
if version == V1:
version = V2_1
return {
'version': version,
'services': services,
'networks': networks,
'volumes': config.volumes,
'volumes': volumes,
}
@@ -53,13 +57,52 @@ def serialize_config(config):
width=80)
def serialize_ns_time_value(value):
result = (value, 'ns')
table = [
(1000., 'us'),
(1000., 'ms'),
(1000., 's'),
(60., 'm'),
(60., 'h')
]
for stage in table:
tmp = value / stage[0]
if tmp == int(value / stage[0]):
value = tmp
result = (int(value), stage[1])
else:
break
return '{0}{1}'.format(*result)
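`serialize_ns_time_value` promotes a nanosecond count to the largest unit that divides it evenly and stops at the first unit that would leave a remainder. A self-contained sketch of the same logic (renamed here to avoid clashing with the Compose function):

```python
def serialize_ns(value):
    # Walk up the unit table while the value divides evenly; stop at
    # the first unit where a remainder would appear.
    result = (value, 'ns')
    for divisor, unit in [(1000., 'us'), (1000., 'ms'),
                          (1000., 's'), (60., 'm'), (60., 'h')]:
        tmp = value / divisor
        if tmp == int(tmp):
            value = tmp
            result = (int(value), unit)
        else:
            break
    return '{0}{1}'.format(*result)
```

So 1.5 seconds serializes as `1500ms` rather than a fractional `1.5s`.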
def denormalize_service_dict(service_dict, version):
service_dict = service_dict.copy()
if 'restart' in service_dict:
service_dict['restart'] = types.serialize_restart_spec(service_dict['restart'])
service_dict['restart'] = types.serialize_restart_spec(
service_dict['restart']
)
if version == V1 and 'network_mode' not in service_dict:
service_dict['network_mode'] = 'bridge'
if 'depends_on' in service_dict and version != V2_1:
service_dict['depends_on'] = sorted([
svc for svc in service_dict['depends_on'].keys()
])
if 'healthcheck' in service_dict:
if 'interval' in service_dict['healthcheck']:
service_dict['healthcheck']['interval'] = serialize_ns_time_value(
service_dict['healthcheck']['interval']
)
if 'timeout' in service_dict['healthcheck']:
service_dict['healthcheck']['timeout'] = serialize_ns_time_value(
service_dict['healthcheck']['timeout']
)
if 'secrets' in service_dict:
service_dict['secrets'] = map(lambda s: s.repr(), service_dict['secrets'])
return service_dict


@@ -10,8 +10,8 @@ from collections import namedtuple
import six
from compose.config.config import V1
from compose.config.errors import ConfigurationError
from ..const import COMPOSEFILE_V1 as V1
from .errors import ConfigurationError
from compose.const import IS_WINDOWS_PLATFORM
from compose.utils import splitdrive
@@ -234,3 +234,27 @@ class ServiceLink(namedtuple('_ServiceLink', 'target alias')):
@property
def merge_field(self):
return self.alias
class ServiceSecret(namedtuple('_ServiceSecret', 'source target uid gid mode')):
@classmethod
def parse(cls, spec):
if isinstance(spec, six.string_types):
return cls(spec, None, None, None, None)
return cls(
spec.get('source'),
spec.get('target'),
spec.get('uid'),
spec.get('gid'),
spec.get('mode'),
)
@property
def merge_field(self):
return self.source
def repr(self):
return dict(
[(k, v) for k, v in self._asdict().items() if v is not None]
)
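The new `ServiceSecret` type accepts both the short string form and the long dict form of a service-level secret, and `repr()` drops unset fields so round-tripped config stays terse. A stand-alone sketch (using `str` instead of `six.string_types` for brevity):

```python
from collections import namedtuple


class ServiceSecret(namedtuple('_ServiceSecret', 'source target uid gid mode')):
    @classmethod
    def parse(cls, spec):
        # A plain string is shorthand for a secret with only a source;
        # the dict form may also carry target/uid/gid/mode.
        if isinstance(spec, str):
            return cls(spec, None, None, None, None)
        return cls(spec.get('source'), spec.get('target'),
                   spec.get('uid'), spec.get('gid'), spec.get('mode'))

    def repr(self):
        # Drop unset fields from the serialized form.
        return {k: v for k, v in self._asdict().items() if v is not None}
```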


@@ -180,11 +180,13 @@ def validate_links(service_config, service_names):
def validate_depends_on(service_config, service_names):
for dependency in service_config.config.get('depends_on', []):
deps = service_config.config.get('depends_on', {})
for dependency in deps.keys():
if dependency not in service_names:
raise ConfigurationError(
"Service '{s.name}' depends on service '{dep}' which is "
"undefined.".format(s=service_config, dep=dependency))
"undefined.".format(s=service_config, dep=dependency)
)
def get_unsupported_config_msg(path, error_key):
@@ -201,7 +203,7 @@ def anglicize_json_type(json_type):
def is_service_dict_schema(schema_id):
return schema_id in ('config_schema_v1.json', '#/properties/services')
return schema_id in ('config_schema_v1.json', '#/properties/services')
def handle_error_for_schema_with_id(error, path):


@@ -5,27 +5,37 @@ import sys
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IMAGE_EVENTS = ['delete', 'import', 'pull', 'push', 'tag', 'untag']
IMAGE_EVENTS = ['delete', 'import', 'load', 'pull', 'push', 'save', 'tag', 'untag']
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_NETWORK = 'com.docker.compose.network'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_VOLUME = 'com.docker.compose.volume'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
SECRETS_PATH = '/run/secrets'
COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'
COMPOSEFILE_V2_1 = '2.1'
COMPOSEFILE_V3_0 = '3.0'
COMPOSEFILE_V3_1 = '3.1'
API_VERSIONS = {
COMPOSEFILE_V1: '1.21',
COMPOSEFILE_V2_0: '1.22',
COMPOSEFILE_V2_1: '1.24',
COMPOSEFILE_V3_0: '1.25',
COMPOSEFILE_V3_1: '1.25',
}
API_VERSION_TO_ENGINE_VERSION = {
API_VERSIONS[COMPOSEFILE_V1]: '1.9.0',
API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0',
API_VERSIONS[COMPOSEFILE_V2_1]: '1.12.0',
API_VERSIONS[COMPOSEFILE_V3_0]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_1]: '1.13.0',
}


@@ -10,3 +10,24 @@ class OperationFailedError(Exception):
class StreamParseError(RuntimeError):
def __init__(self, reason):
self.msg = reason
class HealthCheckException(Exception):
def __init__(self, reason):
self.msg = reason
class HealthCheckFailed(HealthCheckException):
def __init__(self, container_id):
super(HealthCheckFailed, self).__init__(
'Container "{}" is unhealthy.'.format(container_id)
)
class NoHealthCheckConfigured(HealthCheckException):
def __init__(self, service_name):
super(NoHealthCheckConfigured, self).__init__(
'Service "{}" is missing a healthcheck configuration'.format(
service_name
)
)


@@ -4,10 +4,14 @@ from __future__ import unicode_literals
import logging
from docker.errors import NotFound
from docker.utils import create_ipam_config
from docker.utils import create_ipam_pool
from docker.types import IPAMConfig
from docker.types import IPAMPool
from docker.utils import version_gte
from docker.utils import version_lt
from .config import ConfigurationError
from .const import LABEL_NETWORK
from .const import LABEL_PROJECT
log = logging.getLogger(__name__)
@@ -71,7 +75,8 @@ class Network(object):
ipam=self.ipam,
internal=self.internal,
enable_ipv6=self.enable_ipv6,
labels=self.labels,
labels=self._labels,
attachable=version_gte(self.client._version, '1.24') or None,
)
def remove(self):
@@ -91,15 +96,26 @@ class Network(object):
return self.external_name
return '{0}_{1}'.format(self.project, self.name)
@property
def _labels(self):
if version_lt(self.client._version, '1.23'):
return None
labels = self.labels.copy() if self.labels else {}
labels.update({
LABEL_PROJECT: self.project,
LABEL_NETWORK: self.name,
})
return labels
def create_ipam_config_from_dict(ipam_dict):
if not ipam_dict:
return None
return create_ipam_config(
return IPAMConfig(
driver=ipam_dict.get('driver'),
pool_configs=[
create_ipam_pool(
IPAMPool(
subnet=config.get('subnet'),
iprange=config.get('ip_range'),
gateway=config.get('gateway'),


@@ -12,6 +12,8 @@ from six.moves.queue import Empty
from six.moves.queue import Queue
from compose.cli.signals import ShutdownException
from compose.errors import HealthCheckFailed
from compose.errors import NoHealthCheckConfigured
from compose.errors import OperationFailedError
from compose.utils import get_output_stream
@@ -48,7 +50,7 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None):
elif isinstance(exception, APIError):
errors[get_name(obj)] = exception.explanation
writer.write(get_name(obj), 'error')
elif isinstance(exception, OperationFailedError):
elif isinstance(exception, (OperationFailedError, HealthCheckFailed, NoHealthCheckConfigured)):
errors[get_name(obj)] = exception.msg
writer.write(get_name(obj), 'error')
elif isinstance(exception, UpstreamError):
@@ -164,20 +166,27 @@ def feed_queue(objects, func, get_deps, results, state):
for obj in pending:
deps = get_deps(obj)
if any(dep in state.failed for dep in deps):
log.debug('{} has upstream errors - not processing'.format(obj))
results.put((obj, None, UpstreamError()))
state.failed.add(obj)
elif all(
dep not in objects or dep in state.finished
for dep in deps
):
log.debug('Starting producer thread for {}'.format(obj))
t = Thread(target=producer, args=(obj, func, results))
t.daemon = True
t.start()
state.started.add(obj)
try:
if any(dep[0] in state.failed for dep in deps):
log.debug('{} has upstream errors - not processing'.format(obj))
results.put((obj, None, UpstreamError()))
state.failed.add(obj)
elif all(
dep not in objects or (
dep in state.finished and (not ready_check or ready_check(dep))
) for dep, ready_check in deps
):
log.debug('Starting producer thread for {}'.format(obj))
t = Thread(target=producer, args=(obj, func, results))
t.daemon = True
t.start()
state.started.add(obj)
except (HealthCheckFailed, NoHealthCheckConfigured) as e:
log.debug(
'Healthcheck for service(s) upstream of {} failed - '
'not processing'.format(obj)
)
results.put((obj, None, e))
if state.is_done():
results.put(STOP)
@@ -248,7 +257,3 @@ def parallel_unpause(containers, options):
def parallel_kill(containers, options):
parallel_operation(containers, 'kill', options, 'Killing')
def parallel_restart(containers, options):
parallel_operation(containers, 'restart', options, 'Restarting')


@@ -32,12 +32,11 @@ def stream_output(output, stream):
if not image_id:
continue
if image_id in lines:
diff = len(lines) - lines[image_id]
else:
if image_id not in lines:
lines[image_id] = len(lines)
stream.write("\n")
diff = 0
diff = len(lines) - lines[image_id]
# move cursor up `diff` rows
stream.write("%c[%dA" % (27, diff))


@@ -14,7 +14,6 @@ from .config import ConfigurationError
from .config.config import V1
from .config.sort_services import get_container_name_from_network_mode
from .config.sort_services import get_service_name_from_network_mode
from .const import DEFAULT_TIMEOUT
from .const import IMAGE_EVENTS
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
@@ -105,6 +104,11 @@ class Project(object):
for volume_spec in service_dict.get('volumes', [])
]
secrets = get_secrets(
service_dict['name'],
service_dict.pop('secrets', None) or [],
config_data.secrets)
project.services.append(
Service(
service_dict.pop('name'),
@@ -115,6 +119,7 @@ class Project(object):
links=links,
network_mode=network_mode,
volumes_from=volumes_from,
secrets=secrets,
**service_dict)
)
@@ -228,7 +233,10 @@ class Project(object):
services = self.get_services(service_names)
def get_deps(service):
return {self.get_service(dep) for dep in service.get_dependency_names()}
return {
(self.get_service(dep), config)
for dep, config in service.get_dependency_configs().items()
}
parallel.parallel_execute(
services,
@@ -244,13 +252,13 @@ class Project(object):
def get_deps(container):
# actually returning inversed dependencies
return {other for other in containers
return {(other, None) for other in containers
if container.service in
self.get_service(other.service).get_dependency_names()}
parallel.parallel_execute(
containers,
operator.methodcaller('stop', **options),
self.build_container_operation_with_timeout_func('stop', options),
operator.attrgetter('name'),
'Stopping',
get_deps)
@@ -291,7 +299,12 @@ class Project(object):
def restart(self, service_names=None, **options):
containers = self.containers(service_names, stopped=True)
parallel.parallel_restart(containers, options)
parallel.parallel_execute(
containers,
self.build_container_operation_with_timeout_func('restart', options),
operator.attrgetter('name'),
'Restarting')
return containers
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False):
@@ -365,7 +378,7 @@ class Project(object):
start_deps=True,
strategy=ConvergenceStrategy.changed,
do_build=BuildAction.none,
timeout=DEFAULT_TIMEOUT,
timeout=None,
detached=False,
remove_orphans=False):
@@ -390,7 +403,10 @@ class Project(object):
)
def get_deps(service):
return {self.get_service(dep) for dep in service.get_dependency_names()}
return {
(self.get_service(dep), config)
for dep, config in service.get_dependency_configs().items()
}
results, errors = parallel.parallel_execute(
services,
@@ -506,6 +522,14 @@ class Project(object):
dep_services.append(service)
return acc + dep_services
def build_container_operation_with_timeout_func(self, operation, options):
def container_operation_with_timeout(container):
if options.get('timeout') is None:
service = self.get_service(container.service)
options['timeout'] = service.stop_timeout(None)
return getattr(container, operation)(**options)
return container_operation_with_timeout
def get_volumes_from(project, service_dict):
volumes_from = service_dict.pop('volumes_from', None)
@@ -535,6 +559,33 @@ def get_volumes_from(project, service_dict):
return [build_volume_from(vf) for vf in volumes_from]
def get_secrets(service, service_secrets, secret_defs):
secrets = []
for secret in service_secrets:
secret_def = secret_defs.get(secret.source)
if not secret_def:
raise ConfigurationError(
"Service \"{service}\" uses an undefined secret \"{secret}\" "
.format(service=service, secret=secret.source))
if secret_def.get('external_name'):
log.warn("Service \"{service}\" uses secret \"{secret}\" which is external. "
"External secrets are not available to containers created by "
"docker-compose.".format(service=service, secret=secret.source))
continue
if secret.uid or secret.gid or secret.mode:
log.warn("Service \"{service}\" uses secret \"{secret}\" with uid, "
"gid, or mode. These fields are not supported by this "
"implementation of the Compose file".format(
service=service, secret=secret.source))
secrets.append({'secret': secret, 'file': secret_def.get('file')})
return secrets
def warn_for_swarm_mode(client):
info = client.info()
if info.get('Swarm', {}).get('LocalNodeState') == 'active':
@@ -547,9 +598,7 @@ def warn_for_swarm_mode(client):
"Compose does not use swarm mode to deploy services to multiple nodes in a swarm. "
"All containers will be scheduled on the current node.\n\n"
"To deploy your application across the swarm, "
"use the bundle feature of the Docker experimental build.\n\n"
"More info:\n"
"https://docs.docker.com/compose/bundles\n"
"use `docker stack deploy`.\n"
)


@@ -10,17 +10,20 @@ from operator import attrgetter
import enum
import six
from docker.errors import APIError
from docker.errors import ImageNotFound
from docker.errors import NotFound
from docker.utils import LogConfig
from docker.types import LogConfig
from docker.utils.ports import build_port_bindings
from docker.utils.ports import split_port
from . import __version__
from . import const
from . import progress_stream
from .config import DOCKER_CONFIG_KEYS
from .config import merge_environment
from .config.types import VolumeSpec
from .const import DEFAULT_TIMEOUT
from .const import IS_WINDOWS_PLATFORM
from .const import LABEL_CONFIG_HASH
from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
@@ -28,12 +31,15 @@ from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_VERSION
from .container import Container
from .errors import HealthCheckFailed
from .errors import NoHealthCheckConfigured
from .errors import OperationFailedError
from .parallel import parallel_execute
from .parallel import parallel_start
from .progress_stream import stream_output
from .progress_stream import StreamOutputError
from .utils import json_hash
from .utils import parse_seconds_float
log = logging.getLogger(__name__)
@@ -63,9 +69,14 @@ DOCKER_START_KEYS = [
'restart',
'security_opt',
'shm_size',
'sysctls',
'userns_mode',
'volumes_from',
]
CONDITION_STARTED = 'service_started'
CONDITION_HEALTHY = 'service_healthy'
class BuildError(Exception):
def __init__(self, service, reason):
@@ -129,6 +140,7 @@ class Service(object):
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
**options
):
self.name = name
@@ -139,6 +151,7 @@ class Service(object):
self.volumes_from = volumes_from or []
self.network_mode = network_mode or NetworkMode(None)
self.networks = networks or {}
self.secrets = secrets or []
self.options = options
def __repr__(self):
@@ -169,7 +182,7 @@ class Service(object):
self.start_container_if_stopped(c, **options)
return containers
def scale(self, desired_num, timeout=DEFAULT_TIMEOUT):
def scale(self, desired_num, timeout=None):
"""
Adjusts the number of containers to the specified number and ensures
they are running.
@@ -196,7 +209,7 @@ class Service(object):
return container
def stop_and_remove(container):
container.stop(timeout=timeout)
container.stop(timeout=self.stop_timeout(timeout))
container.remove()
running_containers = self.containers(stopped=False)
@@ -315,11 +328,8 @@ class Service(object):
def image(self):
try:
return self.client.inspect_image(self.image_name)
except APIError as e:
if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation):
raise NoSuchImageError("Image '{}' not found".format(self.image_name))
else:
raise
except ImageNotFound:
raise NoSuchImageError("Image '{}' not found".format(self.image_name))
@property
def image_name(self):
@@ -374,7 +384,7 @@ class Service(object):
def execute_convergence_plan(self,
plan,
timeout=DEFAULT_TIMEOUT,
timeout=None,
detached=False,
start=True):
(action, containers) = plan
@@ -421,7 +431,7 @@ class Service(object):
def recreate_container(
self,
container,
timeout=DEFAULT_TIMEOUT,
timeout=None,
attach_logs=False,
start_new_container=True):
"""Recreate a container.
@@ -432,7 +442,7 @@ class Service(object):
"""
log.info("Recreating %s" % container.name)
container.stop(timeout=timeout)
container.stop(timeout=self.stop_timeout(timeout))
container.rename_to_tmp_name()
new_container = self.create_container(
previous_container=container,
@@ -446,6 +456,14 @@ class Service(object):
container.remove()
return new_container
def stop_timeout(self, timeout):
if timeout is not None:
return timeout
timeout = parse_seconds_float(self.options.get('stop_grace_period'))
if timeout is not None:
return timeout
return DEFAULT_TIMEOUT
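The new `stop_timeout` helper establishes a precedence order: an explicitly passed timeout wins, then the service's `stop_grace_period`, then the global default. A minimal sketch; `parse_seconds` here is a hypothetical stand-in for `compose.utils.parse_seconds_float` that only handles plain `"<n>s"` strings:

```python
DEFAULT_TIMEOUT = 10


def parse_seconds(value):
    # Hypothetical stand-in for compose.utils.parse_seconds_float.
    if value and value.endswith('s'):
        return float(value[:-1])
    return None


def stop_timeout(explicit, options):
    # Precedence: explicit timeout, then the service's
    # stop_grace_period, then the global default.
    if explicit is not None:
        return explicit
    timeout = parse_seconds(options.get('stop_grace_period'))
    if timeout is not None:
        return timeout
    return DEFAULT_TIMEOUT
```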
def start_container_if_stopped(self, container, attach_logs=False, quiet=False):
if not container.is_running:
if not quiet:
@@ -483,10 +501,10 @@ class Service(object):
link_local_ips=netdefs.get('link_local_ips', None),
)
def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT):
def remove_duplicate_containers(self, timeout=None):
for c in self.duplicate_containers():
log.info('Removing %s' % c.name)
c.stop(timeout=timeout)
c.stop(timeout=self.stop_timeout(timeout))
c.remove()
def duplicate_containers(self):
@@ -522,10 +540,38 @@ class Service(object):
def get_dependency_names(self):
net_name = self.network_mode.service_name
return (self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
self.options.get('depends_on', []))
return (
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
list(self.options.get('depends_on', {}).keys())
)
def get_dependency_configs(self):
net_name = self.network_mode.service_name
configs = dict(
[(name, None) for name in self.get_linked_service_names()]
)
configs.update(dict(
[(name, None) for name in self.get_volumes_from_names()]
))
configs.update({net_name: None} if net_name else {})
configs.update(self.options.get('depends_on', {}))
for svc, config in self.options.get('depends_on', {}).items():
if config['condition'] == CONDITION_STARTED:
configs[svc] = lambda s: True
elif config['condition'] == CONDITION_HEALTHY:
configs[svc] = lambda s: s.is_healthy()
else:
# The config schema already prevents this, but it might be
# bypassed if Compose is called programmatically.
raise ValueError(
'depends_on condition "{}" is invalid.'.format(
config['condition']
)
)
return configs
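The rewritten `get_dependency_configs` maps each `depends_on` entry to a readiness predicate: "started" dependencies are always considered ready, "healthy" ones defer to the service's healthcheck. A standalone sketch of that mapping (the condition string literals and the stub service class are assumptions for illustration):

```python
CONDITION_STARTED = 'service_started'   # assumed literal values
CONDITION_HEALTHY = 'service_healthy'


def build_dependency_checks(depends_on):
    """Mirror the condition handling in get_dependency_configs()."""
    checks = {}
    for svc, cfg in depends_on.items():
        condition = cfg['condition']
        if condition == CONDITION_STARTED:
            checks[svc] = lambda service: True
        elif condition == CONDITION_HEALTHY:
            checks[svc] = lambda service: service.is_healthy()
        else:
            raise ValueError(
                'depends_on condition "{}" is invalid.'.format(condition))
    return checks


class StubService(object):
    """Hypothetical stand-in for compose.service.Service."""
    def __init__(self, healthy):
        self.healthy = healthy

    def is_healthy(self):
        return self.healthy


checks = build_dependency_checks({
    'db': {'condition': CONDITION_HEALTHY},
    'cache': {'condition': CONDITION_STARTED},
})
print(checks['cache'](StubService(False)))  # True: started is enough
print(checks['db'](StubService(False)))     # False: must report healthy
```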
def get_linked_service_names(self):
return [service.name for (service, _) in self.links]
@@ -649,9 +695,14 @@ class Service(object):
override_options['binds'] = binds
container_options['environment'].update(affinity)
if 'volumes' in container_options:
container_options['volumes'] = dict(
(v.internal, {}) for v in container_options['volumes'])
container_options['volumes'] = dict(
(v.internal, {}) for v in container_options.get('volumes') or {})
secret_volumes = self.get_secret_volumes()
if secret_volumes:
override_options['binds'].extend(v.repr() for v in secret_volumes)
container_options['volumes'].update(
(v.internal, {}) for v in secret_volumes)
container_options['image'] = self.image_name
@@ -708,10 +759,12 @@ class Service(object):
cgroup_parent=options.get('cgroup_parent'),
cpu_quota=options.get('cpu_quota'),
shm_size=options.get('shm_size'),
sysctls=options.get('sysctls'),
tmpfs=options.get('tmpfs'),
oom_score_adj=options.get('oom_score_adj'),
mem_swappiness=options.get('mem_swappiness'),
group_add=options.get('group_add')
group_add=options.get('group_add'),
userns_mode=options.get('userns_mode')
)
# TODO: Add as an argument to create_host_config once it's supported
@@ -720,14 +773,23 @@ class Service(object):
return host_config
def get_secret_volumes(self):
def build_spec(secret):
target = '{}/{}'.format(
const.SECRETS_PATH,
secret['secret'].target or secret['secret'].source)
return VolumeSpec(secret['file'], target, 'ro')
return [build_spec(secret) for secret in self.secrets]
def build(self, no_cache=False, pull=False, force_rm=False):
log.info('Building %s' % self.name)
build_opts = self.options.get('build', {})
path = build_opts.get('context')
# python2 os.path() doesn't support unicode, so we need to encode it to
# a byte string
if not six.PY3:
# python2 os.stat() doesn't support unicode on some UNIX, so we
# encode it to a bytestring to be safe
if not six.PY3 and not IS_WINDOWS_PLATFORM:
path = path.encode('utf8')
build_output = self.client.build(
@@ -858,6 +920,24 @@ class Service(object):
else:
log.error(six.text_type(e))
def is_healthy(self):
""" Check that all containers for this service report healthy.
Returns false if at least one healthcheck is pending.
If an unhealthy container is detected, raise a HealthCheckFailed
exception.
"""
result = True
for ctnr in self.containers():
ctnr.inspect()
status = ctnr.get('State.Health.Status')
if status is None:
raise NoHealthCheckConfigured(self.name)
elif status == 'starting':
result = False
elif status == 'unhealthy':
raise HealthCheckFailed(ctnr.short_id)
return result
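The aggregation rule in `is_healthy` can be exercised in isolation: any pending check makes the service not-yet-healthy, an unhealthy container raises, and a container with no healthcheck configured also raises. A sketch over `(container_id, status)` pairs, with stand-in exceptions (the real ones live in `compose.errors`):

```python
class NoHealthCheckConfigured(Exception):
    pass


class HealthCheckFailed(Exception):
    pass


def aggregate_health(statuses):
    """statuses: iterable of (container_id, health_status) pairs, where
    status is None, 'starting', 'healthy' or 'unhealthy'."""
    result = True
    for container_id, status in statuses:
        if status is None:
            raise NoHealthCheckConfigured(container_id)
        elif status == 'starting':
            result = False  # at least one check is still pending
        elif status == 'unhealthy':
            raise HealthCheckFailed(container_id)
    return result


print(aggregate_health([('a', 'healthy'), ('b', 'healthy')]))   # True
print(aggregate_health([('a', 'healthy'), ('b', 'starting')]))  # False
```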
def short_id_alias_exists(container, network):
aliases = container.get(

compose/timeparse.py Normal file

@@ -0,0 +1,96 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
timeparse.py
(c) Will Roberts <wildwilhelm@gmail.com> 1 February, 2014
This is a vendored and modified copy of:
github.com/wroberts/pytimeparse @ cc0550d
It has been modified to mimic the behaviour of
https://golang.org/pkg/time/#ParseDuration
'''
# MIT LICENSE
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import absolute_import
from __future__ import unicode_literals
import re
HOURS = r'(?P<hours>[\d.]+)h'
MINS = r'(?P<mins>[\d.]+)m'
SECS = r'(?P<secs>[\d.]+)s'
MILLI = r'(?P<milli>[\d.]+)ms'
MICRO = r'(?P<micro>[\d.]+)(?:us|µs)'
NANO = r'(?P<nano>[\d.]+)ns'
def opt(x):
return r'(?:{x})?'.format(x=x)
TIMEFORMAT = r'{HOURS}{MINS}{SECS}{MILLI}{MICRO}{NANO}'.format(
HOURS=opt(HOURS),
MINS=opt(MINS),
SECS=opt(SECS),
MILLI=opt(MILLI),
MICRO=opt(MICRO),
NANO=opt(NANO),
)
MULTIPLIERS = dict([
('hours', 60 * 60),
('mins', 60),
('secs', 1),
('milli', 1.0 / 1000),
('micro', 1.0 / 1000.0 / 1000),
('nano', 1.0 / 1000.0 / 1000.0 / 1000.0),
])
def timeparse(sval):
"""Parse a time expression, returning it as a number of seconds. If
possible, the return value will be an `int`; if this is not
possible, the return will be a `float`. Returns `None` if a time
expression cannot be parsed from the given string.
Arguments:
- `sval`: the string value to parse
>>> timeparse('1m24s')
84
>>> timeparse('1.2 minutes')
72
>>> timeparse('1.2 seconds')
1.2
"""
match = re.match(r'\s*' + TIMEFORMAT + r'\s*$', sval, re.I)
if not match or not match.group(0).strip():
return
mdict = match.groupdict()
return sum(
MULTIPLIERS[k] * cast(v) for (k, v) in mdict.items() if v is not None)
def cast(value):
return int(value, 10) if value.isdigit() else float(value)
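The vendored parser accepts Go-style durations such as `1m30s` or `500ms`. A condensed, self-contained sketch of the same matching strategy (unit ordering mirrors the `HOURS`…`NANO` fragments above; the compacted regex construction is an assumption for brevity):

```python
import re

# Unit name, suffix, and multiplier in seconds, in the same order as
# the HOURS/MINS/SECS/MILLI/MICRO/NANO fragments.
UNITS = [('hours', 'h', 60 * 60), ('mins', 'm', 60), ('secs', 's', 1),
         ('milli', 'ms', 1.0 / 1000), ('micro', 'us', 1.0 / 1000000),
         ('nano', 'ns', 1.0 / 1000000000)]
PATTERN = ''.join(
    r'(?:(?P<{0}>[\d.]+){1})?'.format(name, suffix)
    for name, suffix, _ in UNITS)


def parse_duration(sval):
    match = re.match(r'\s*' + PATTERN + r'\s*$', sval, re.I)
    if not match or not match.group(0).strip():
        return None  # unparseable, or the empty string
    total = 0
    for name, _, mult in UNITS:
        value = match.group(name)
        if value is not None:
            total += mult * (int(value) if value.isdigit() else float(value))
    return total


print(parse_duration('1m30s'))  # 90
print(parse_duration('2.5s'))   # 2.5
```

Note how regex backtracking disambiguates suffixes: in `500ms`, the `mins` group first consumes `500m`, fails to finish the string, and the engine falls through to the `milli` group.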

compose/utils.py

@@ -11,6 +11,7 @@ import ntpath
import six
from .errors import StreamParseError
from .timeparse import timeparse
json_decoder = json.JSONDecoder()
@@ -107,6 +108,21 @@ def microseconds_from_time_nano(time_nano):
return int(time_nano % 1000000000 / 1000)
def nanoseconds_from_time_seconds(time_seconds):
return time_seconds * 1000000000
def parse_seconds_float(value):
return timeparse(value or '')
def parse_nanoseconds_int(value):
parsed = timeparse(value or '')
if parsed is None:
return None
return int(parsed * 1000000000)
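These wrappers add two conveniences on top of the parser: an empty or missing value degrades to `None` rather than raising, and nanoseconds are truncated to an `int` for the Engine API. A sketch with a hypothetical lookup table standing in for `timeparse`:

```python
def fake_timeparse(value):
    # Hypothetical stand-in for timeparse(); returns seconds or None.
    return {'10s': 10, '2.5s': 2.5}.get(value)


def parse_seconds_float(value):
    # `value or ''` lets None pass through the parser safely.
    return fake_timeparse(value or '')


def parse_nanoseconds_int(value):
    parsed = fake_timeparse(value or '')
    if parsed is None:
        return None  # unset or unparseable propagates as None
    return int(parsed * 1000000000)


print(parse_seconds_float('10s'))     # 10
print(parse_nanoseconds_int('2.5s'))  # 2500000000
print(parse_nanoseconds_int(None))    # None
```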
def build_string_dict(source_dict):
return dict((k, str(v if v is not None else '')) for k, v in source_dict.items())

compose/volume.py

@@ -4,8 +4,11 @@ from __future__ import unicode_literals
import logging
from docker.errors import NotFound
from docker.utils import version_lt
from .config import ConfigurationError
from .const import LABEL_PROJECT
from .const import LABEL_VOLUME
log = logging.getLogger(__name__)
@@ -23,7 +26,7 @@ class Volume(object):
def create(self):
return self.client.create_volume(
self.full_name, self.driver, self.driver_opts, labels=self.labels
self.full_name, self.driver, self.driver_opts, labels=self._labels
)
def remove(self):
@@ -53,6 +56,17 @@ class Volume(object):
return self.external_name
return '{0}_{1}'.format(self.project, self.name)
@property
def _labels(self):
if version_lt(self.client._version, '1.23'):
return None
labels = self.labels.copy() if self.labels else {}
labels.update({
LABEL_PROJECT: self.project,
LABEL_VOLUME: self.name,
})
return labels
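The new `_labels` property skips labels entirely on API versions that predate volume labels, and otherwise merges the user's labels with Compose's bookkeeping keys. A standalone sketch (the label key values and the simplified version comparison are assumptions; the real code uses `docker.utils.version_lt`):

```python
LABEL_PROJECT = 'com.docker.compose.project'  # assumed key values
LABEL_VOLUME = 'com.docker.compose.volume'


def version_lt(v1, v2):
    # Simplified numeric compare for illustration only.
    return (tuple(int(p) for p in v1.split('.')) <
            tuple(int(p) for p in v2.split('.')))


def volume_labels(api_version, user_labels, project, name):
    if version_lt(api_version, '1.23'):
        return None  # older daemons reject volume labels
    labels = user_labels.copy() if user_labels else {}
    labels.update({LABEL_PROJECT: project, LABEL_VOLUME: name})
    return labels


print(volume_labels('1.22', {'a': 'b'}, 'proj', 'vol'))  # None
```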
class ProjectVolumes(object):

contrib/completion/bash/docker-compose

@@ -434,6 +434,18 @@ _docker_compose_stop() {
}
_docker_compose_top() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
;;
*)
__docker_compose_services_running
;;
esac
}
_docker_compose_unpause() {
case "$cur" in
-*)
@@ -499,6 +511,7 @@ _docker_compose() {
scale
start
stop
top
unpause
up
version

contrib/completion/zsh/_docker-compose

@@ -341,6 +341,11 @@ __docker-compose_subcommand() {
$opts_timeout \
'*:running services:__docker-compose_runningservices' && ret=0
;;
(top)
_arguments \
$opts_help \
'*:running services:__docker-compose_runningservices' && ret=0
;;
(unpause)
_arguments \
$opts_help \
@@ -386,9 +391,17 @@ _docker-compose() {
integer ret=1
typeset -A opt_args
local file_description
if [[ -n ${words[(r)-f]} || -n ${words[(r)--file]} ]] ; then
file_description="Specify an override docker-compose file (default: docker-compose.override.yml)"
else
file_description="Specify an alternate docker-compose file (default: docker-compose.yml)"
fi
_arguments -C \
'(- :)'{-h,--help}'[Get help]' \
'(-f --file)'{-f,--file}'[Specify an alternate docker-compose file (default: docker-compose.yml)]:file:_files -g "*.yml"' \
'*'{-f,--file}"[${file_description}]:file:_files -g '*.yml'" \
'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
'--verbose[Show more output]' \
'(- :)'{-v,--version}'[Print version and exit]' \

docker-compose.spec

@@ -32,6 +32,16 @@ exe = EXE(pyz,
'compose/config/config_schema_v2.1.json',
'DATA'
),
(
'compose/config/config_schema_v3.0.json',
'compose/config/config_schema_v3.0.json',
'DATA'
),
(
'compose/config/config_schema_v3.1.json',
'compose/config/config_schema_v3.1.json',
'DATA'
),
(
'compose/GITSHA',
'compose/GITSHA',

project/RELEASE-PROCESS.md

@@ -20,18 +20,30 @@ release.
As part of this script you'll be asked to:
1. Update the version in `docs/install.md` and `compose/__init__.py`.
1. Update the version in `compose/__init__.py` and `script/run/run.sh`.
If the next release will be an RC, append `rcN`, e.g. `1.4.0rc1`.
If the next release will be an RC, append `-rcN`, e.g. `1.4.0-rc1`.
2. Write release notes in `CHANGES.md`.
Almost every feature enhancement should be mentioned, with the most visible/exciting ones first. Use descriptive sentences and give context where appropriate.
Almost every feature enhancement should be mentioned, with the most
visible/exciting ones first. Use descriptive sentences and give context
where appropriate.
Bug fixes are worth mentioning if it's likely that they've affected lots of people, or if they were regressions in the previous version.
Bug fixes are worth mentioning if it's likely that they've affected lots
of people, or if they were regressions in the previous version.
Improvements to the code are not worth mentioning.
3. Create a new repository on [bintray](https://bintray.com/docker-compose).
The name has to match the name of the branch (e.g. `bump-1.9.0`) and the
type should be "Generic". Other fields can be left blank.
4. Check that the `vnext-compose` branch on
[the docs repo](https://github.com/docker/docker.github.io/) has
documentation for all the new additions in the upcoming release, and create
a PR there for what needs to be amended.
## When a PR is merged into master that we want in the release
@@ -55,8 +67,8 @@ Check out the bump branch and run the `build-binaries` script
When prompted build the non-linux binaries and test them.
1. Download the osx binary from Bintray. Make sure that the latest build has
finished, otherwise you'll be downloading an old binary.
1. Download the osx binary from Bintray. Make sure that the latest Travis
build has finished, otherwise you'll be downloading an old binary.
https://dl.bintray.com/docker-compose/$BRANCH_NAME/
@@ -67,22 +79,24 @@ When prompted build the non-linux binaries and test them.
3. Draft a release from the tag on GitHub (the script will open the window for
you)
In the "Tag version" dropdown, select the tag you just pushed.
The tag will only be present on Github when you run the `push-release`
script in step 7, but you can pre-fill it at that point.
4. Paste in installation instructions and release notes. Here's an example - change the Compose version and Docker version as appropriate:
4. Paste in installation instructions and release notes. Here's an example -
change the Compose version and Docker version as appropriate:
Firstly, note that Compose 1.5.0 requires Docker 1.8.0 or later.
If you're a Mac or Windows user, the best way to install Compose and keep it up-to-date is **[Docker for Mac and Windows](https://www.docker.com/products/docker)**.
Secondly, if you're a Mac user, the **[Docker Toolbox](https://www.docker.com/toolbox)** will install Compose 1.5.0 for you, alongside the latest versions of the Docker Engine, Machine and Kitematic.
Note that Compose 1.9.0 requires Docker Engine 1.10.0 or later for version 2 of the Compose File format, and Docker Engine 1.9.1 or later for version 1. Docker for Mac and Windows will automatically install the latest version of Docker Engine for you.
Otherwise, you can use the usual commands to install/upgrade. Either download the binary:
Alternatively, you can use the usual commands to install or upgrade Compose:
curl -L https://github.com/docker/compose/releases/download/1.5.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
Or install the PyPi package:
pip install -U docker-compose==1.5.0
See the [install docs](https://docs.docker.com/compose/install/) for more install options and instructions.
Here's what's new:
@@ -99,6 +113,8 @@ When prompted build the non-linux binaries and test them.
./script/release/push-release
8. Merge the bump PR.
8. Publish the release on GitHub.
9. Check that all the binaries download (following the install instructions) and run.
@@ -107,19 +123,7 @@ When prompted build the non-linux binaries and test them.
## If it's a stable release (not an RC)
1. Merge the bump PR.
2. Make sure `origin/release` is updated locally:
git fetch origin
3. Update the `docs` branch on the upstream repo:
git push git@github.com:docker/compose.git origin/release:docs
4. Let the docs team know that its been updated so they can publish it.
5. Close the releases milestone.
1. Close the releases milestone.
## If it's a minor release (1.x.0), rather than a patch release (1.x.y)

requirements-build.txt

@@ -1 +1 @@
pyinstaller==3.1.1
pyinstaller==3.2.1

requirements.txt

@@ -1,7 +1,8 @@
PyYAML==3.11
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.2.0
docker-py==1.10.6
colorama==0.3.7
docker==2.1.0
dockerpty==0.4.1
docopt==0.6.1
enum34==1.0.4; python_version < '3.4'


@@ -11,6 +11,5 @@ TAG=$1
VERSION="$(python setup.py --version)"
./script/build/write-git-sha
python setup.py sdist
cp dist/docker-compose-$VERSION.tar.gz dist/docker-compose-release.tar.gz
docker build -t docker/compose:$TAG -f Dockerfile.run .
python setup.py sdist bdist_wheel
docker build --build-arg version=$VERSION -t docker/compose:$TAG -f Dockerfile.run .


@@ -65,8 +65,8 @@ git config "branch.${BRANCH}.release" $VERSION
editor=${EDITOR:-vim}
echo "Update versions in compose/__init__.py, script/run/run.sh"
# $editor docs/install.md
echo "Update versions in docs/install.md, compose/__init__.py, script/run/run.sh"
$editor docs/install.md
$editor compose/__init__.py
$editor script/run/run.sh

script/release/push-release

@@ -54,18 +54,19 @@ git push $GITHUB_REPO $VERSION
echo "Uploading the docker image"
docker push docker/compose:$VERSION
echo "Uploading sdist to pypi"
echo "Uploading package to PyPI"
pandoc -f markdown -t rst README.md -o README.rst
sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst
./script/build/write-git-sha
python setup.py sdist
python setup.py sdist bdist_wheel
if [ "$(command -v twine 2> /dev/null)" ]; then
twine upload ./dist/docker-compose-${VERSION/-/}.tar.gz
twine upload ./dist/docker-compose-${VERSION/-/}.tar.gz ./dist/docker_compose-${VERSION/-/}-py2.py3-none-any.whl
else
python setup.py upload
fi
echo "Testing pip package"
deactivate || true
virtualenv venv-test
source venv-test/bin/activate
pip install docker-compose==$VERSION

script/run/run.sh

@@ -15,7 +15,7 @@
set -e
VERSION="1.9.0"
VERSION="1.11.2"
IMAGE="docker/compose:$VERSION"
@@ -35,7 +35,7 @@ if [ "$(pwd)" != '/' ]; then
VOLUMES="-v $(pwd):$(pwd)"
fi
if [ -n "$COMPOSE_FILE" ]; then
compose_dir=$(dirname $COMPOSE_FILE)
compose_dir=$(realpath $(dirname $COMPOSE_FILE))
fi
# TODO: also check --file argument
if [ -n "$compose_dir" ]; then


@@ -5,7 +5,7 @@ version tags for recent releases, or the default release.
The default release is the most recent non-RC version.
Recent is a list of unqiue major.minor versions, where each is the most
Recent is a list of unique major.minor versions, where each is the most
recent version in the series.
For example, if the list of versions is:

setup.cfg Normal file

@@ -0,0 +1,2 @@
[bdist_wheel]
universal=1

setup.py

@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import codecs
@@ -8,6 +9,7 @@ import os
import re
import sys
import pkg_resources
from setuptools import find_packages
from setuptools import setup
@@ -29,12 +31,13 @@ def find_version(*file_paths):
install_requires = [
'cached-property >= 1.2.0, < 2',
'colorama >= 0.3.7, < 0.4',
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4',
'requests >= 2.6.1, != 2.11.0, < 2.12',
'texttable >= 0.8.1, < 0.9',
'websocket-client >= 0.32.0, < 1.0',
'docker-py >= 1.10.6, < 2.0',
'docker >= 2.1.0, < 3.0',
'dockerpty >= 0.4.1, < 0.5',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
@@ -48,7 +51,25 @@ tests_require = [
if sys.version_info[:2] < (3, 4):
tests_require.append('mock >= 1.0.1')
install_requires.append('enum34 >= 1.0.4, < 2')
extras_require = {
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5'],
':python_version < "3.3"': ['ipaddress >= 1.0.16'],
}
try:
if 'bdist_wheel' not in sys.argv:
for key, value in extras_require.items():
if key.startswith(':') and pkg_resources.evaluate_marker(key[1:]):
install_requires.extend(value)
except Exception as e:
print("Failed to compute platform dependencies: {}. ".format(e) +
"All dependencies will be installed as a result.", file=sys.stderr)
for key, value in extras_require.items():
if key.startswith(':'):
install_requires.extend(value)
setup(
@@ -62,6 +83,7 @@ setup(
include_package_data=True,
test_suite='nose.collector',
install_requires=install_requires,
extras_require=extras_require,
tests_require=tests_require,
entry_points="""
[console_scripts]

tests/acceptance/cli_test.py

@@ -21,11 +21,13 @@ from .. import mock
from compose.cli.command import get_project
from compose.container import Container
from compose.project import OneOffFilter
from compose.utils import nanoseconds_from_time_seconds
from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import get_links
from tests.integration.testcases import pull_busybox
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
@@ -260,6 +262,20 @@ class CLITestCase(DockerClientTestCase):
}
}
def test_config_external_volume(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes.yml', 'config'])
json_result = yaml.load(result.stdout)
assert 'volumes' in json_result
assert json_result['volumes'] == {
'foo': {
'external': True
},
'bar': {
'external': {'name': 'some_bar'}
}
}
def test_config_v1(self):
self.base_dir = 'tests/fixtures/v1-config'
result = self.dispatch(['config'])
@@ -285,6 +301,68 @@ class CLITestCase(DockerClientTestCase):
'volumes': {},
}
@v3_only()
def test_config_v3(self):
self.base_dir = 'tests/fixtures/v3-full'
result = self.dispatch(['config'])
assert yaml.load(result.stdout) == {
'version': '3.0',
'networks': {},
'volumes': {
'foobar': {
'labels': {
'com.docker.compose.test': 'true',
},
},
},
'services': {
'web': {
'image': 'busybox',
'deploy': {
'mode': 'replicated',
'replicas': 6,
'labels': ['FOO=BAR'],
'update_config': {
'parallelism': 3,
'delay': '10s',
'failure_action': 'continue',
'monitor': '60s',
'max_failure_ratio': 0.3,
},
'resources': {
'limits': {
'cpus': '0.001',
'memory': '50M',
},
'reservations': {
'cpus': '0.0001',
'memory': '20M',
},
},
'restart_policy': {
'condition': 'on_failure',
'delay': '5s',
'max_attempts': 3,
'window': '120s',
},
'placement': {
'constraints': ['node=foo'],
},
},
'healthcheck': {
'test': 'cat /etc/passwd',
'interval': '10s',
'timeout': '1s',
'retries': 5,
},
'stop_grace_period': '20s',
},
},
}
def test_ps(self):
self.project.get_service('simple').create_container()
result = self.dispatch(['ps'])
@@ -792,8 +870,8 @@ class CLITestCase(DockerClientTestCase):
]
assert [n['Name'] for n in networks] == [network_with_label]
assert networks[0]['Labels'] == {'label_key': 'label_val'}
assert 'label_key' in networks[0]['Labels']
assert networks[0]['Labels']['label_key'] == 'label_val'
@v2_1_only()
def test_up_with_volume_labels(self):
@@ -812,8 +890,8 @@ class CLITestCase(DockerClientTestCase):
]
assert [v['Name'] for v in volumes] == [volume_with_label]
assert volumes[0]['Labels'] == {'label_key': 'label_val'}
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@v2_only()
def test_up_no_services(self):
@@ -870,6 +948,50 @@ class CLITestCase(DockerClientTestCase):
assert foo_container.get('HostConfig.NetworkMode') == \
'container:{}'.format(bar_container.id)
@v3_only()
def test_up_with_healthcheck(self):
def wait_on_health_status(container, status):
def condition():
container.inspect()
return container.get('State.Health.Status') == status
return wait_on_condition(condition, delay=0.5)
self.base_dir = 'tests/fixtures/healthcheck'
self.dispatch(['up', '-d'], None)
passes = self.project.get_service('passes')
passes_container = passes.containers()[0]
assert passes_container.get('Config.Healthcheck') == {
"Test": ["CMD-SHELL", "/bin/true"],
"Interval": nanoseconds_from_time_seconds(1),
"Timeout": nanoseconds_from_time_seconds(30 * 60),
"Retries": 1,
}
wait_on_health_status(passes_container, 'healthy')
fails = self.project.get_service('fails')
fails_container = fails.containers()[0]
assert fails_container.get('Config.Healthcheck') == {
"Test": ["CMD", "/bin/false"],
"Interval": nanoseconds_from_time_seconds(2.5),
"Retries": 2,
}
wait_on_health_status(fails_container, 'unhealthy')
disabled = self.project.get_service('disabled')
disabled_container = disabled.containers()[0]
assert disabled_container.get('Config.Healthcheck') == {
"Test": ["NONE"],
}
assert 'Health' not in disabled_container.get('State')
def test_up_with_no_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', '--no-deps', 'web'], None)
@@ -1785,3 +1907,23 @@ class CLITestCase(DockerClientTestCase):
"BAZ=2",
])
self.assertTrue(expected_env <= set(web.get('Config.Env')))
def test_top_services_not_running(self):
self.base_dir = 'tests/fixtures/top'
result = self.dispatch(['top'])
assert len(result.stdout) == 0
def test_top_services_running(self):
self.base_dir = 'tests/fixtures/top'
self.dispatch(['up', '-d'])
result = self.dispatch(['top'])
self.assertIn('top_service_a', result.stdout)
self.assertIn('top_service_b', result.stdout)
self.assertNotIn('top_not_a_service', result.stdout)
def test_top_processes_running(self):
self.base_dir = 'tests/fixtures/top'
self.dispatch(['up', '-d'])
result = self.dispatch(['top'])
assert result.stdout.count("top") == 4


@@ -0,0 +1,9 @@
version: '2.1'
services:
demo:
image: foobar:latest
healthcheck:
test: ["CMD", "/health.sh"]
interval: 10s
timeout: 5s
retries: 36


@@ -0,0 +1,6 @@
version: '2.1'
services:
demo:
extends:
file: healthcheck-1.yml
service: demo

tests/fixtures/healthcheck/docker-compose.yml

@@ -0,0 +1,24 @@
version: "3"
services:
passes:
image: busybox
command: top
healthcheck:
test: "/bin/true"
interval: 1s
timeout: 30m
retries: 1
fails:
image: busybox
command: top
healthcheck:
test: ["CMD", "/bin/false"]
interval: 2.5s
retries: 2
disabled:
image: busybox
command: top
healthcheck:
disable: true

tests/fixtures/secrets/default vendored Normal file

@@ -0,0 +1 @@
This is the secret

tests/fixtures/top/docker-compose.yml vendored Normal file

@@ -0,0 +1,6 @@
service_a:
image: busybox:latest
command: top
service_b:
image: busybox:latest
command: top

tests/fixtures/v3-full/docker-compose.yml

@@ -0,0 +1,41 @@
version: "3"
services:
web:
image: busybox
deploy:
mode: replicated
replicas: 6
labels: [FOO=BAR]
update_config:
parallelism: 3
delay: 10s
failure_action: continue
monitor: 60s
max_failure_ratio: 0.3
resources:
limits:
cpus: '0.001'
memory: 50M
reservations:
cpus: '0.0001'
memory: 20M
restart_policy:
condition: on_failure
delay: 5s
max_attempts: 3
window: 120s
placement:
constraints: [node=foo]
healthcheck:
test: cat /etc/passwd
interval: 10s
timeout: 1s
retries: 5
stop_grace_period: 20s
volumes:
foobar:
labels:
com.docker.compose.test: 'true'


@@ -0,0 +1,2 @@
version: '2.1'
services: {}

tests/fixtures/volumes/external-volumes.yml

@@ -0,0 +1,16 @@
version: "2.1"
services:
web:
image: busybox
command: top
volumes:
- foo:/var/lib/
- bar:/etc/
volumes:
foo:
external: true
bar:
external:
name: some_bar

tests/integration/network_test.py

@@ -0,0 +1,17 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from .testcases import DockerClientTestCase
from compose.const import LABEL_NETWORK
from compose.const import LABEL_PROJECT
from compose.network import Network
class NetworkTest(DockerClientTestCase):
def test_network_default_labels(self):
net = Network(self.client, 'composetest', 'foonet')
net.ensure()
net_data = net.inspect()
labels = net_data['Labels']
assert labels[LABEL_NETWORK] == net.name
assert labels[LABEL_PROJECT] == net.project

tests/integration/project_test.py

@@ -1,6 +1,7 @@
from __future__ import absolute_import
from __future__ import unicode_literals
import os.path
import random
import py
@@ -8,22 +9,36 @@ import pytest
from docker.errors import NotFound
from .. import mock
from ..helpers import build_config
from ..helpers import build_config as load_config
from .testcases import DockerClientTestCase
from compose.config import config
from compose.config import ConfigurationError
from compose.config import types
from compose.config.config import V2_0
from compose.config.config import V2_1
from compose.config.config import V3_1
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.errors import HealthCheckFailed
from compose.errors import NoHealthCheckConfigured
from compose.project import Project
from compose.project import ProjectError
from compose.service import ConvergenceStrategy
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
def build_config(**kwargs):
return config.Config(
version=kwargs.get('version'),
services=kwargs.get('services'),
volumes=kwargs.get('volumes'),
networks=kwargs.get('networks'),
secrets=kwargs.get('secrets'))
class ProjectTest(DockerClientTestCase):
@@ -68,7 +83,7 @@ class ProjectTest(DockerClientTestCase):
def test_volumes_from_service(self):
project = Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'data': {
'image': 'busybox:latest',
'volumes': ['/var/data'],
@@ -94,7 +109,7 @@ class ProjectTest(DockerClientTestCase):
)
project = Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'db': {
'image': 'busybox:latest',
'volumes_from': ['composetest_data_container'],
@@ -110,7 +125,7 @@ class ProjectTest(DockerClientTestCase):
project = Project.from_config(
name='composetest',
client=self.client,
config_data=build_config({
config_data=load_config({
'version': V2_0,
'services': {
'net': {
@@ -137,7 +152,7 @@ class ProjectTest(DockerClientTestCase):
def get_project():
return Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'version': V2_0,
'services': {
'web': {
@@ -172,7 +187,7 @@ class ProjectTest(DockerClientTestCase):
def test_net_from_service_v1(self):
project = Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'net': {
'image': 'busybox:latest',
'command': ["top"]
@@ -196,7 +211,7 @@ class ProjectTest(DockerClientTestCase):
def get_project():
return Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'web': {
'image': 'busybox:latest',
'net': 'container:composetest_net_container'
@@ -467,7 +482,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up_starts_depends(self):
project = Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'console': {
'image': 'busybox:latest',
'command': ["top"],
@@ -502,7 +517,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up_with_no_deps(self):
project = Project.from_config(
name='composetest',
config_data=build_config({
config_data=load_config({
'console': {
'image': 'busybox:latest',
'command': ["top"],
@@ -562,7 +577,7 @@ class ProjectTest(DockerClientTestCase):
@v2_only()
def test_project_up_networks(self):
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -574,7 +589,6 @@ class ProjectTest(DockerClientTestCase):
'baz': {'aliases': ['extra']},
},
}],
volumes={},
networks={
'foo': {'driver': 'bridge'},
'bar': {'driver': None},
@@ -608,14 +622,13 @@ class ProjectTest(DockerClientTestCase):
@v2_only()
def test_up_with_ipam_config(self):
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
'networks': {'front': None},
}],
volumes={},
networks={
'front': {
'driver': 'bridge',
@@ -669,7 +682,7 @@ class ProjectTest(DockerClientTestCase):
@v2_only()
def test_up_with_network_static_addresses(self):
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -682,7 +695,6 @@ class ProjectTest(DockerClientTestCase):
}
},
}],
volumes={},
networks={
'static_test': {
'driver': 'bridge',
@@ -724,7 +736,7 @@ class ProjectTest(DockerClientTestCase):
@v2_1_only()
def test_up_with_enable_ipv6(self):
self.require_api_version('1.23')
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -736,7 +748,6 @@ class ProjectTest(DockerClientTestCase):
}
},
}],
volumes={},
networks={
'static_test': {
'driver': 'bridge',
@@ -768,7 +779,7 @@ class ProjectTest(DockerClientTestCase):
@v2_only()
def test_up_with_network_static_addresses_missing_subnet(self):
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -780,7 +791,6 @@ class ProjectTest(DockerClientTestCase):
}
},
}],
volumes={},
networks={
'static_test': {
'driver': 'bridge',
@@ -805,7 +815,7 @@ class ProjectTest(DockerClientTestCase):
@v2_1_only()
def test_up_with_network_link_local_ips(self):
config_data = config.Config(
config_data = build_config(
version=V2_1,
services=[{
'name': 'web',
@@ -816,7 +826,6 @@ class ProjectTest(DockerClientTestCase):
}
}
}],
volumes={},
networks={
'linklocaltest': {'driver': 'bridge'}
}
@@ -842,15 +851,13 @@ class ProjectTest(DockerClientTestCase):
@v2_1_only()
def test_up_with_isolation(self):
self.require_api_version('1.24')
config_data = config.Config(
config_data = build_config(
version=V2_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
'isolation': 'default'
}],
volumes={},
networks={}
)
project = Project.from_config(
client=self.client,
@@ -864,15 +871,13 @@ class ProjectTest(DockerClientTestCase):
@v2_1_only()
def test_up_with_invalid_isolation(self):
self.require_api_version('1.24')
config_data = config.Config(
config_data = build_config(
version=V2_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
'isolation': 'foobar'
}],
volumes={},
networks={}
)
project = Project.from_config(
client=self.client,
@@ -885,14 +890,13 @@ class ProjectTest(DockerClientTestCase):
@v2_only()
def test_project_up_with_network_internal(self):
self.require_api_version('1.23')
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
'networks': {'internal': None},
}],
volumes={},
networks={
'internal': {'driver': 'bridge', 'internal': True},
},
@@ -915,14 +919,13 @@ class ProjectTest(DockerClientTestCase):
network_name = 'network_with_label'
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
'networks': {network_name: None}
}],
volumes={},
networks={
network_name: {'labels': {'label_key': 'label_val'}}
}
@@ -942,14 +945,14 @@ class ProjectTest(DockerClientTestCase):
]
assert [n['Name'] for n in networks] == ['composetest_{}'.format(network_name)]
assert networks[0]['Labels'] == {'label_key': 'label_val'}
assert 'label_key' in networks[0]['Labels']
assert networks[0]['Labels']['label_key'] == 'label_val'
@v2_only()
def test_project_up_volumes(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -957,7 +960,6 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={vol_name: {'driver': 'local'}},
networks={},
)
project = Project.from_config(
@@ -977,7 +979,7 @@ class ProjectTest(DockerClientTestCase):
volume_name = 'volume_with_label'
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -991,7 +993,6 @@ class ProjectTest(DockerClientTestCase):
}
}
},
networks={},
)
project = Project.from_config(
@@ -1009,7 +1010,8 @@ class ProjectTest(DockerClientTestCase):
assert [v['Name'] for v in volumes] == ['composetest_{}'.format(volume_name)]
assert volumes[0]['Labels'] == {'label_key': 'label_val'}
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@v2_only()
def test_project_up_logging_with_multiple_files(self):
@@ -1103,7 +1105,7 @@ class ProjectTest(DockerClientTestCase):
def test_initialize_volumes(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1111,7 +1113,6 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={vol_name: {}},
networks={},
)
project = Project.from_config(
@@ -1121,14 +1122,14 @@ class ProjectTest(DockerClientTestCase):
project.volumes.initialize()
volume_data = self.client.inspect_volume(full_vol_name)
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
assert volume_data['Name'] == full_vol_name
assert volume_data['Driver'] == 'local'
@v2_only()
def test_project_up_implicit_volume_driver(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1136,7 +1137,6 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={vol_name: {}},
networks={},
)
project = Project.from_config(
@@ -1149,11 +1149,47 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
@v3_only()
def test_project_up_with_secrets(self):
create_host_file(self.client, os.path.abspath('tests/fixtures/secrets/default'))
config_data = build_config(
version=V3_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
'command': 'cat /run/secrets/special',
'secrets': [
types.ServiceSecret.parse({'source': 'super', 'target': 'special'}),
],
}],
secrets={
'super': {
'file': os.path.abspath('tests/fixtures/secrets/default'),
},
},
)
project = Project.from_config(
client=self.client,
name='composetest',
config_data=config_data,
)
project.up()
project.stop()
containers = project.containers(stopped=True)
assert len(containers) == 1
container, = containers
output = container.logs()
assert output == b"This is the secret\n"
@v2_only()
def test_initialize_volumes_invalid_volume_driver(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1161,7 +1197,6 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={vol_name: {'driver': 'foobar'}},
networks={},
)
project = Project.from_config(
@@ -1176,7 +1211,7 @@ class ProjectTest(DockerClientTestCase):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1184,7 +1219,6 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={vol_name: {'driver': 'local'}},
networks={},
)
project = Project.from_config(
name='composetest',
@@ -1215,7 +1249,7 @@ class ProjectTest(DockerClientTestCase):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1223,7 +1257,6 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={vol_name: {'driver': 'local'}},
networks={},
)
project = Project.from_config(
name='composetest',
@@ -1254,7 +1287,7 @@ class ProjectTest(DockerClientTestCase):
vol_name = 'composetest_{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
self.client.create_volume(vol_name)
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1264,7 +1297,6 @@ class ProjectTest(DockerClientTestCase):
volumes={
vol_name: {'external': True, 'external_name': vol_name}
},
networks=None,
)
project = Project.from_config(
name='composetest',
@@ -1279,7 +1311,7 @@ class ProjectTest(DockerClientTestCase):
def test_initialize_volumes_inexistent_external_volume(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
config_data = config.Config(
config_data = build_config(
version=V2_0,
services=[{
'name': 'web',
@@ -1289,7 +1321,6 @@ class ProjectTest(DockerClientTestCase):
volumes={
vol_name: {'external': True, 'external_name': vol_name}
},
networks=None,
)
project = Project.from_config(
name='composetest',
@@ -1346,7 +1377,7 @@ class ProjectTest(DockerClientTestCase):
}
}
config_data = build_config(config_dict)
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
@@ -1354,7 +1385,7 @@ class ProjectTest(DockerClientTestCase):
config_dict['service2'] = config_dict['service1']
del config_dict['service1']
config_data = build_config(config_dict)
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
@@ -1374,3 +1405,142 @@ class ProjectTest(DockerClientTestCase):
ctnr for ctnr in project._labeled_containers()
if ctnr.labels.get(LABEL_SERVICE) == 'service1'
]) == 0
@v2_1_only()
def test_project_up_healthy_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': 'busybox:latest',
'command': 'top',
'healthcheck': {
'test': 'exit 0',
'retries': 1,
'timeout': '10s',
'interval': '0.1s'
},
},
'svc2': {
'image': 'busybox:latest',
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_healthy'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
project.up()
containers = project.containers()
assert len(containers) == 2
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
assert svc1.is_healthy()
@v2_1_only()
def test_project_up_unhealthy_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': 'busybox:latest',
'command': 'top',
'healthcheck': {
'test': 'exit 1',
'retries': 1,
'timeout': '10s',
'interval': '0.1s'
},
},
'svc2': {
'image': 'busybox:latest',
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_healthy'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
with pytest.raises(ProjectError):
project.up()
containers = project.containers()
assert len(containers) == 1
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
with pytest.raises(HealthCheckFailed):
svc1.is_healthy()
@v2_1_only()
def test_project_up_no_healthcheck_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': 'busybox:latest',
'command': 'top',
'healthcheck': {
'disable': True
},
},
'svc2': {
'image': 'busybox:latest',
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_healthy'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
with pytest.raises(ProjectError):
project.up()
containers = project.containers()
assert len(containers) == 1
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
with pytest.raises(NoHealthCheckConfigured):
svc1.is_healthy()
def create_host_file(client, filename):
dirname = os.path.dirname(filename)
with open(filename, 'r') as fh:
content = fh.read()
container = client.create_container(
'busybox:latest',
['sh', '-c', 'echo -n "{}" > {}'.format(content, filename)],
volumes={dirname: {}},
host_config=client.create_host_config(
binds={dirname: {'bind': dirname, 'ro': False}},
network_mode='none',
),
)
try:
client.start(container)
exitcode = client.wait(container)
if exitcode != 0:
output = client.logs(container)
raise Exception(
"Container exited with code {}:\n{}".format(exitcode, output))
finally:
client.remove_container(container, force=True)


@@ -30,6 +30,7 @@ from compose.service import ConvergencePlan
from compose.service import ConvergenceStrategy
from compose.service import NetworkMode
from compose.service import Service
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_only
@@ -842,6 +843,18 @@ class ServiceTest(DockerClientTestCase):
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), 'host')
@v2_1_only()
def test_userns_mode_none_defined(self):
service = self.create_service('web', userns_mode=None)
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.UsernsMode'), '')
@v2_1_only()
def test_userns_mode_host(self):
service = self.create_service('web', userns_mode='host')
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.UsernsMode'), 'host')
def test_dns_no_value(self):
service = self.create_service('web')
container = create_and_start_container(service)


@@ -13,6 +13,7 @@ from compose.config.config import resolve_environment
from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.config import V2_1
from compose.config.config import V3_0
from compose.config.environment import Environment
from compose.const import API_VERSIONS
from compose.const import LABEL_PROJECT
@@ -36,39 +37,41 @@ def get_links(container):
def engine_max_version():
if 'DOCKER_VERSION' not in os.environ:
return V2_1
return V3_0
version = os.environ['DOCKER_VERSION'].partition('-')[0]
if version_lt(version, '1.10'):
return V1
elif version_lt(version, '1.12'):
if version_lt(version, '1.12'):
return V2_0
return V2_1
if version_lt(version, '1.13'):
return V2_1
return V3_0
def build_version_required_decorator(ignored_versions):
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
max_version = engine_max_version()
if max_version in ignored_versions:
skip("Engine version %s is too low" % max_version)
return
return f(self, *args, **kwargs)
return wrapper
return decorator
def v2_only():
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
if engine_max_version() == V1:
skip("Engine version is too low")
return
return f(self, *args, **kwargs)
return wrapper
return decorator
return build_version_required_decorator((V1,))
def v2_1_only():
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
if engine_max_version() in (V1, V2_0):
skip('Engine version is too low')
return
return f(self, *args, **kwargs)
return wrapper
return build_version_required_decorator((V1, V2_0))
return decorator
def v3_only():
return build_version_required_decorator((V1, V2_0, V2_1))
class DockerClientTestCase(unittest.TestCase):


@@ -4,6 +4,8 @@ from __future__ import unicode_literals
from docker.errors import DockerException
from .testcases import DockerClientTestCase
from compose.const import LABEL_PROJECT
from compose.const import LABEL_VOLUME
from compose.volume import Volume
@@ -94,3 +96,11 @@ class VolumeTest(DockerClientTestCase):
assert vol.exists() is False
vol.create()
assert vol.exists() is True
def test_volume_default_labels(self):
vol = self.create_volume('volume01')
vol.create()
vol_data = vol.inspect()
labels = vol_data['Labels']
assert labels[LABEL_VOLUME] == vol.name
assert labels[LABEL_PROJECT] == vol.project


@@ -15,7 +15,7 @@ from compose.config.config import Config
def mock_service():
return mock.create_autospec(
service.Service,
client=mock.create_autospec(docker.Client),
client=mock.create_autospec(docker.APIClient),
options={})
@@ -77,7 +77,8 @@ def test_to_bundle():
version=2,
services=services,
volumes={'special': {}},
networks={'extra': {}})
networks={'extra': {}},
secrets={})
with mock.patch('compose.bundle.log.warn', autospec=True) as mock_log:
output = bundle.to_bundle(config, image_digests)


@@ -97,7 +97,7 @@ class CLITestCase(unittest.TestCase):
@mock.patch('compose.cli.main.RunOperation', autospec=True)
@mock.patch('compose.cli.main.PseudoTerminal', autospec=True)
def test_run_interactive_passes_logs_false(self, mock_pseudo_terminal, mock_run_operation):
mock_client = mock.create_autospec(docker.Client)
mock_client = mock.create_autospec(docker.APIClient)
project = Project.from_config(
name='composetest',
client=mock_client,
@@ -128,7 +128,7 @@ class CLITestCase(unittest.TestCase):
assert call_kwargs['logs'] is False
def test_run_service_with_restart_always(self):
mock_client = mock.create_autospec(docker.Client)
mock_client = mock.create_autospec(docker.APIClient)
project = Project.from_config(
name='composetest',


@@ -13,16 +13,22 @@ import pytest
from ...helpers import build_config_details
from compose.config import config
from compose.config import types
from compose.config.config import resolve_build_args
from compose.config.config import resolve_environment
from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.config import V2_1
from compose.config.config import V3_0
from compose.config.config import V3_1
from compose.config.environment import Environment
from compose.config.errors import ConfigurationError
from compose.config.errors import VERSION_EXPLANATION
from compose.config.serialize import denormalize_service_dict
from compose.config.serialize import serialize_ns_time_value
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM
from compose.utils import nanoseconds_from_time_seconds
from tests import mock
from tests import unittest
@@ -48,6 +54,10 @@ def service_sort(services):
return sorted(services, key=itemgetter('name'))
def secret_sort(secrets):
return sorted(secrets, key=itemgetter('source'))
class ConfigTest(unittest.TestCase):
def test_load(self):
service_dicts = config.load(
@@ -156,9 +166,17 @@ class ConfigTest(unittest.TestCase):
for version in ['2', '2.0']:
cfg = config.load(build_config_details({'version': version}))
assert cfg.version == V2_0
cfg = config.load(build_config_details({'version': '2.1'}))
assert cfg.version == V2_1
for version in ['3', '3.0']:
cfg = config.load(build_config_details({'version': version}))
assert cfg.version == V3_0
cfg = config.load(build_config_details({'version': '3.1'}))
assert cfg.version == V3_1
def test_v1_file_version(self):
cfg = config.load(build_config_details({'web': {'image': 'busybox'}}))
assert cfg.version == V1
@@ -913,7 +931,10 @@ class ConfigTest(unittest.TestCase):
'build': {'context': os.path.abspath('/')},
'image': 'example/web',
'volumes': [VolumeSpec.parse('/home/user/project:/code')],
'depends_on': ['db', 'other'],
'depends_on': {
'db': {'condition': 'service_started'},
'other': {'condition': 'service_started'},
},
},
{
'name': 'db',
@@ -1702,6 +1723,90 @@ class ConfigTest(unittest.TestCase):
}
}
def test_merge_depends_on_no_override(self):
base = {
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'}
}
}
override = {}
actual = config.merge_service_dicts(base, override, V2_1)
assert actual == base
def test_merge_depends_on_mixed_syntax(self):
base = {
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'}
}
}
override = {
'depends_on': ['app3']
}
actual = config.merge_service_dicts(base, override, V2_1)
assert actual == {
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_started'}
}
}
def test_merge_pid(self):
# Regression: https://github.com/docker/compose/issues/4184
base = {
'image': 'busybox',
'pid': 'host'
}
override = {
'labels': {'com.docker.compose.test': 'yes'}
}
actual = config.merge_service_dicts(base, override, V2_0)
assert actual == {
'image': 'busybox',
'pid': 'host',
'labels': {'com.docker.compose.test': 'yes'}
}
def test_merge_different_secrets(self):
base = {
'image': 'busybox',
'secrets': [
{'source': 'src.txt'}
]
}
override = {'secrets': ['other-src.txt']}
actual = config.merge_service_dicts(base, override, V3_1)
assert secret_sort(actual['secrets']) == secret_sort([
{'source': 'src.txt'},
{'source': 'other-src.txt'}
])
def test_merge_secrets_override(self):
base = {
'image': 'busybox',
'secrets': ['src.txt'],
}
override = {
'secrets': [
{
'source': 'src.txt',
'target': 'data.txt',
'mode': 0o400
}
]
}
actual = config.merge_service_dicts(base, override, V3_1)
assert actual['secrets'] == override['secrets']
def test_external_volume_config(self):
config_details = build_config_details({
'version': '2',
@@ -1781,6 +1886,91 @@ class ConfigTest(unittest.TestCase):
config.load(config_details)
assert 'has neither an image nor a build context' in exc.exconly()
def test_load_secrets(self):
base_file = config.ConfigFile(
'base.yaml',
{
'version': '3.1',
'services': {
'web': {
'image': 'example/web',
'secrets': [
'one',
{
'source': 'source',
'target': 'target',
'uid': '100',
'gid': '200',
'mode': 0o777,
},
],
},
},
'secrets': {
'one': {'file': 'secret.txt'},
},
})
details = config.ConfigDetails('.', [base_file])
service_dicts = config.load(details).services
expected = [
{
'name': 'web',
'image': 'example/web',
'secrets': [
types.ServiceSecret('one', None, None, None, None),
types.ServiceSecret('source', 'target', '100', '200', 0o777),
],
},
]
assert service_sort(service_dicts) == service_sort(expected)
def test_load_secrets_multi_file(self):
base_file = config.ConfigFile(
'base.yaml',
{
'version': '3.1',
'services': {
'web': {
'image': 'example/web',
'secrets': ['one'],
},
},
'secrets': {
'one': {'file': 'secret.txt'},
},
})
override_file = config.ConfigFile(
'base.yaml',
{
'version': '3.1',
'services': {
'web': {
'secrets': [
{
'source': 'source',
'target': 'target',
'uid': '100',
'gid': '200',
'mode': 0o777,
},
],
},
},
})
details = config.ConfigDetails('.', [base_file, override_file])
service_dicts = config.load(details).services
expected = [
{
'name': 'web',
'image': 'example/web',
'secrets': [
types.ServiceSecret('one', None, None, None, None),
types.ServiceSecret('source', 'target', '100', '200', 0o777),
],
},
]
assert service_sort(service_dicts) == service_sort(expected)
class NetworkModeTest(unittest.TestCase):
def test_network_mode_standard(self):
@@ -3048,7 +3238,22 @@ class ExtendsTest(unittest.TestCase):
image: example
""")
services = load_from_filename(str(tmpdir.join('docker-compose.yml')))
assert service_sort(services)[2]['depends_on'] == ['other']
assert service_sort(services)[2]['depends_on'] == {
'other': {'condition': 'service_started'}
}
def test_extends_with_healthcheck(self):
service_dicts = load_from_filename('tests/fixtures/extends/healthcheck-2.yml')
assert service_sort(service_dicts) == [{
'name': 'demo',
'image': 'foobar:latest',
'healthcheck': {
'test': ['CMD', '/health.sh'],
'interval': 10000000000,
'timeout': 5000000000,
'retries': 36,
}
}]
@pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
@@ -3165,6 +3370,54 @@ class BuildPathTest(unittest.TestCase):
assert 'build path' in exc.exconly()
class HealthcheckTest(unittest.TestCase):
def test_healthcheck(self):
service_dict = make_service_dict(
'test',
{'healthcheck': {
'test': ['CMD', 'true'],
'interval': '1s',
'timeout': '1m',
'retries': 3,
}},
'.',
)
assert service_dict['healthcheck'] == {
'test': ['CMD', 'true'],
'interval': nanoseconds_from_time_seconds(1),
'timeout': nanoseconds_from_time_seconds(60),
'retries': 3,
}
def test_disable(self):
service_dict = make_service_dict(
'test',
{'healthcheck': {
'disable': True,
}},
'.',
)
assert service_dict['healthcheck'] == {
'test': ['NONE'],
}
def test_disable_with_other_config_is_invalid(self):
with pytest.raises(ConfigurationError) as excinfo:
make_service_dict(
'invalid-healthcheck',
{'healthcheck': {
'disable': True,
'interval': '1s',
}},
'.',
)
assert 'invalid-healthcheck' in excinfo.exconly()
assert 'disable' in excinfo.exconly()
class GetDefaultConfigFilesTestCase(unittest.TestCase):
files = [
@@ -3209,3 +3462,89 @@ def get_config_filename_for_files(filenames, subdir=None):
return os.path.basename(filename)
finally:
shutil.rmtree(project_dir)
class SerializeTest(unittest.TestCase):
def test_denormalize_depends_on_v3(self):
service_dict = {
'image': 'busybox',
'command': 'true',
'depends_on': {
'service2': {'condition': 'service_started'},
'service3': {'condition': 'service_started'},
}
}
assert denormalize_service_dict(service_dict, V3_0) == {
'image': 'busybox',
'command': 'true',
'depends_on': ['service2', 'service3']
}
def test_denormalize_depends_on_v2_1(self):
service_dict = {
'image': 'busybox',
'command': 'true',
'depends_on': {
'service2': {'condition': 'service_started'},
'service3': {'condition': 'service_started'},
}
}
assert denormalize_service_dict(service_dict, V2_1) == service_dict
def test_serialize_time(self):
data = {
9: '9ns',
9000: '9us',
9000000: '9ms',
90000000: '90ms',
900000000: '900ms',
999999999: '999999999ns',
1000000000: '1s',
60000000000: '1m',
60000000001: '60000000001ns',
9000000000000: '150m',
90000000000000: '25h',
}
for k, v in data.items():
assert serialize_ns_time_value(k) == v
def test_denormalize_healthcheck(self):
service_dict = {
'image': 'test',
'healthcheck': {
'test': 'exit 1',
'interval': '1m40s',
'timeout': '30s',
'retries': 5
}
}
processed_service = config.process_service(config.ServiceConfig(
'.', 'test', 'test', service_dict
))
denormalized_service = denormalize_service_dict(processed_service, V2_1)
assert denormalized_service['healthcheck']['interval'] == '100s'
assert denormalized_service['healthcheck']['timeout'] == '30s'
def test_denormalize_secrets(self):
service_dict = {
'name': 'web',
'image': 'example/web',
'secrets': [
types.ServiceSecret('one', None, None, None, None),
types.ServiceSecret('source', 'target', '100', '200', 0o777),
],
}
denormalized_service = denormalize_service_dict(service_dict, V3_1)
assert secret_sort(denormalized_service['secrets']) == secret_sort([
{'source': 'one'},
{
'source': 'source',
'target': 'target',
'uid': '100',
'gid': '200',
'mode': 0o777,
},
])


@@ -0,0 +1,40 @@
# encoding: utf-8
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
from compose.config.environment import Environment
from tests import unittest
class EnvironmentTest(unittest.TestCase):
def test_get_simple(self):
env = Environment({
'FOO': 'bar',
'BAR': '1',
'BAZ': ''
})
assert env.get('FOO') == 'bar'
assert env.get('BAR') == '1'
assert env.get('BAZ') == ''
def test_get_undefined(self):
env = Environment({
'FOO': 'bar'
})
assert env.get('FOOBAR') is None
def test_get_boolean(self):
env = Environment({
'FOO': '',
'BAR': '0',
'BAZ': 'FALSE',
'FOOBAR': 'true',
})
assert env.get_boolean('FOO') is False
assert env.get_boolean('BAR') is False
assert env.get_boolean('BAZ') is False
assert env.get_boolean('FOOBAR') is True
assert env.get_boolean('UNDEFINED') is False


@@ -98,7 +98,7 @@ class ContainerTest(unittest.TestCase):
self.assertEqual(container.name_without_project, "custom_name_of_container")
def test_inspect_if_not_inspected(self):
mock_client = mock.create_autospec(docker.Client)
mock_client = mock.create_autospec(docker.APIClient)
container = Container(mock_client, dict(Id="the_id"))
container.inspect_if_not_inspected()


@@ -25,7 +25,7 @@ deps = {
def get_deps(obj):
return deps[obj]
return [(dep, None) for dep in deps[obj]]
def test_parallel_execute():


@@ -19,7 +19,7 @@ from compose.service import Service
class ProjectTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.Client)
self.mock_client = mock.create_autospec(docker.APIClient)
def test_from_config(self):
config = Config(
@@ -36,6 +36,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
)
project = Project.from_config(
name='composetest',
@@ -64,6 +65,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
)
project = Project.from_config('composetest', config, None)
self.assertEqual(len(project.services), 2)
@@ -170,6 +172,7 @@ class ProjectTest(unittest.TestCase):
}],
networks=None,
volumes=None,
secrets=None,
),
)
assert project.get_service('test')._get_volumes_from() == [container_id + ":rw"]
@@ -202,6 +205,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
),
)
assert project.get_service('test')._get_volumes_from() == [container_name + ":rw"]
@@ -227,6 +231,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
),
)
with mock.patch.object(Service, 'containers') as mock_return:
@@ -360,6 +365,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
),
)
service = project.get_service('test')
@@ -384,6 +390,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
),
)
service = project.get_service('test')
@@ -417,6 +424,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
),
)
@@ -437,6 +445,7 @@ class ProjectTest(unittest.TestCase):
],
networks=None,
volumes=None,
secrets=None,
),
)
@@ -457,6 +466,7 @@ class ProjectTest(unittest.TestCase):
],
networks={'custom': {}},
volumes=None,
secrets=None,
),
)
@@ -487,6 +497,7 @@ class ProjectTest(unittest.TestCase):
}],
networks=None,
volumes=None,
secrets=None,
),
)
self.assertEqual([c.id for c in project.containers()], ['1'])
@@ -503,6 +514,7 @@ class ProjectTest(unittest.TestCase):
}],
networks={'default': {}},
volumes={'data': {}},
secrets=None,
),
)
self.mock_client.remove_network.side_effect = NotFound(None, None, 'oops')


@@ -34,7 +34,7 @@ from compose.service import warn_on_masked_volume
class ServiceTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.Client)
self.mock_client = mock.create_autospec(docker.APIClient)
def test_containers(self):
service = Service('db', self.mock_client, 'myproject', image='foo')
@@ -666,7 +666,7 @@ class ServiceTest(unittest.TestCase):
class TestServiceNetwork(object):
def test_connect_container_to_networks_short_aliase_exists(self):
mock_client = mock.create_autospec(docker.Client)
mock_client = mock.create_autospec(docker.APIClient)
service = Service(
'db',
mock_client,
@@ -751,7 +751,7 @@ class NetTestCase(unittest.TestCase):
def test_network_mode_service(self):
container_id = 'bbbb'
service_name = 'web'
mock_client = mock.create_autospec(docker.Client)
mock_client = mock.create_autospec(docker.APIClient)
mock_client.containers.return_value = [
{'Id': container_id, 'Name': container_id, 'Image': 'abcd'},
]
@@ -765,7 +765,7 @@ class NetTestCase(unittest.TestCase):
def test_network_mode_service_no_containers(self):
service_name = 'web'
mock_client = mock.create_autospec(docker.Client)
mock_client = mock.create_autospec(docker.APIClient)
mock_client.containers.return_value = []
service = Service(name=service_name, client=mock_client)
@@ -783,7 +783,7 @@ def build_mount(destination, source, mode='rw'):
class ServiceVolumesTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.Client)
self.mock_client = mock.create_autospec(docker.APIClient)
def test_build_volume_binding(self):
binding = build_volume_binding(VolumeSpec.parse('/outside:/inside', True))


@@ -0,0 +1,56 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from compose import timeparse
def test_milli():
assert timeparse.timeparse('5ms') == 0.005
def test_milli_float():
assert timeparse.timeparse('50.5ms') == 0.0505
def test_second_milli():
assert timeparse.timeparse('200s5ms') == 200.005
def test_second_milli_micro():
assert timeparse.timeparse('200s5ms10us') == 200.00501
def test_second():
assert timeparse.timeparse('200s') == 200
def test_second_as_float():
assert timeparse.timeparse('20.5s') == 20.5
def test_minute():
assert timeparse.timeparse('32m') == 1920
def test_hour_minute():
assert timeparse.timeparse('2h32m') == 9120
def test_minute_as_float():
assert timeparse.timeparse('1.5m') == 90
def test_hour_minute_second():
assert timeparse.timeparse('5h34m56s') == 20096
def test_invalid_with_space():
assert timeparse.timeparse('5h 34m 56s') is None
def test_invalid_with_comma():
assert timeparse.timeparse('5h,34m,56s') is None
def test_invalid_with_empty_string():
assert timeparse.timeparse('') is None


@@ -10,7 +10,7 @@ from tests import mock
@pytest.fixture
def mock_client():
return mock.create_autospec(docker.Client)
return mock.create_autospec(docker.APIClient)
class TestVolume(object):