Compare commits


223 Commits

Author SHA1 Message Date
Joffrey F
b0c10cb876 Merge pull request #6382 from docker/bump-1.23.2
Bump 1.23.2
2018-11-28 15:14:02 -08:00
Joffrey F
1110ad0108 "Bump 1.23.2"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:26:26 -08:00
Joffrey F
f266e3459d Fix incorrect pre-create container name in up logs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:54 -08:00
Joffrey F
bffb6094da Bump SDK version
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:54 -08:00
Joffrey F
66ed9b492e Don't append slugs to containers created by "up"
This change reverts the new naming convention introduced in 1.23 for service containers.
One-off containers will now use a slug instead of a sequential number as they do not
present addressability concerns and benefit from being capable of running in parallel.

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:30 -08:00
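For context, here is a minimal sketch of the naming behavior this commit settles on. The helper names and the exact one-off format are assumptions; only the `<project>_<service>_<index>` scheme and the use of a random slug for `run` containers come from the release notes further down this page.

```
import uuid

def random_slug(length=12):
    # Assumed: a truncated random hexadecimal string.
    return uuid.uuid4().hex[:length]

def container_name(project, service, number, one_off=False):
    if one_off:
        # One-off `run` containers keep a slug so parallel runs don't
        # collide (separator and slug length are assumptions).
        return '{0}_{1}_run_{2}'.format(project, service, random_slug())
    # Containers created by `up` revert to the sequential, addressable name.
    return '{0}_{1}_{2}'.format(project, service, number)
```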
Joffrey F
07e2717bee Don't add long path prefix to build context URLs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:03 -08:00
Joffrey F
dce70a5566 Fix parse_key_from_error_msg to not error out on non-string keys
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:24:02 -08:00
Joffrey F
4682e766a3 Fix config merging for isolation and storage_opt keys
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:23:17 -08:00
Joffrey F
8a0090c18c Only use supported protocols when starting engine CLI subprocess
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-11-28 14:22:41 -08:00
Joffrey F
3727fd3fb9 Merge pull request #6314 from docker/bump-1.23.1
Bump 1.23.1
2018-11-01 11:20:34 -07:00
Joffrey F
b02f130684 "Bump 1.23.1"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:49:00 -07:00
Joffrey F
176a4efaf2 Impose consistent behavior across command for --project-directory flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:46:19 -07:00
Joffrey F
187f48e338 Don't attempt to truncate a None value in Container.slug
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-31 14:46:18 -07:00
Joffrey F
a7ca78d854 Merge pull request #6306 from docker/bump-1.23.0
Bump 1.23.0
2018-10-30 16:06:05 -05:00
Joffrey F
c8524dc1aa Bump requests version in requirements.txt
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 14:40:32 -07:00
Joffrey F
140431d3b9 "Bump 1.23.0"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 12:22:22 -07:00
Joffrey F
3104597e7d "Bump 1.23.0"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 11:43:45 -07:00
Joffrey F
1c002b5844 Fix new flake8 errors/warnings
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-29 11:38:22 -07:00
Ofek Lev
8f9ead34d3 Allow requests 2.20.x
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-29 11:38:20 -07:00
Joffrey F
7bd4291f90 Merge pull request #6286 from docker/bump-1.23.0-rc3
Bump 1.23.0-rc3
2018-10-17 14:48:37 -07:00
Joffrey F
ea3d406eed Some additional exclusions in .gitignore / .dockerignore
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 13:40:42 -07:00
Joffrey F
45189c134d "Bump 1.23.0-rc3"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:16:34 -07:00
Joffrey F
5ab3e47b42 Add workaround for Debian/Ubuntu venv setup failure
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:52 -07:00
Joffrey F
0fa1462b0f Don't use dot as a path separator as it is a valid character in resource identifiers
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:51 -07:00
Joffrey F
5e4098d228 Avoid creating duplicate mount points when recreating a service
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:50 -07:00
Joffrey F
12f7e0d2fb Remove obsolete curl dependency
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:50 -07:00
Joffrey F
23beeb353c Update versions in Dockerfiles
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:50 -07:00
Joffrey F
da25be8f99 Fix ImageManager inconsistencies
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:49 -07:00
Joffrey F
c9107cff39 Fix arg checks in release.sh
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:49 -07:00
Joffrey F
51d44c7ebc Add pypirc check
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:49 -07:00
Ofek Lev
e722190d50 Update requirements.txt
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-17 12:12:48 -07:00
Ofek Lev
fe347321c9 Upgrade Windows-specific dependency colorama
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-10-17 12:12:48 -07:00
Andrew Rabert
9bccfa8dd0 Use Docker binary from official Docker image
Signed-off-by: Andrew Rabert <ar@nullsum.net>
2018-10-17 12:12:47 -07:00
Joffrey F
5cf25f519e Decontainerize release script
Credentials management inside containers is a mess. Let's work on the host instead.
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-17 12:12:46 -07:00
Silvin Lubecki
82e265b806 Merge pull request #6255 from docker/bump-1.23.0-rc2
Bump 1.23.0-rc2
2018-10-08 18:06:46 +02:00
Silvin Lubecki
350a555e04 "Bump 1.23.0-rc2"
Signed-off-by: Silvin Lubecki <silvin.lubecki@docker.com>
2018-10-08 17:10:25 +02:00
Joffrey F
099c887b59 Re-enable testing of TP and beta releases
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-08 17:06:23 +02:00
Joffrey F
90625cf31b Don't attempt iterating on None during parallel pull
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-08 17:06:21 +02:00
Joffrey F
970f8317c5 Fix twine upload for RC versions
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-10-08 17:06:19 +02:00
Harald Albers
30c91388f3 Fix bash completion for config --hash
Signed-off-by: Harald Albers <github@albersweb.de>
2018-10-08 17:06:17 +02:00
Antony MECHIN
eb86881af1 utils: Fix typo in unique_everseen.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
b64184e388 service: Use OrderedDict to preserve volumes order on versions prior 3.6.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
d5c314b382 tests.unity.service: Make sure volumes order is preserved.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
18c2d08011 utils: Add unique_everseen (from itertools recipies).
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
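The itertools recipe the commit cites is the one from the Python documentation; a self-contained version is below (what Compose actually vendored may differ, e.g. for Python 2 compatibility).

```
from itertools import filterfalse  # six.moves.filterfalse on Python 2

def unique_everseen(iterable, key=None):
    """Yield unique elements, preserving order; remembers all elements seen."""
    seen = set()
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen.add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen.add(k)
                yield element

# list(unique_everseen('AAAABBBCCDAABBB')) -> ['A', 'B', 'C', 'D']
```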
Antony MECHIN
bb87a3d040 tests.unit.config: Make sure volume order is preserved.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Antony MECHIN
62aeb767d3 tests.unit.config: Make make_service_dict working dir argument optional.
Signed-off-by: Antony MECHIN <antony.mechin@docker.com>
2018-10-08 17:06:16 +02:00
Joffrey F
c5d5d42158 Merge pull request #6222 from docker/bump-1.23.0-rc1
Bump 1.23.0-rc1
2018-09-26 15:18:00 -07:00
Joffrey F
320e4819d8 Avoid cred helpers errors in release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-26 21:10:56 +00:00
Joffrey F
c327a498b0 Don't rely on container names containing the db string to identify them
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 23:45:10 +00:00
Joffrey F
47d740b800 Fix some release script issues
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 23:45:08 +00:00
Joffrey F
ec4ea8d2f1 "Bump 1.23.0-rc1"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-25 00:46:52 +00:00
Joffrey F
936e6971f9 "Bump 1.23.0-rc1"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-24 23:46:38 +00:00
Joffrey F
de2be2bf37 Merge pull request #6168 from jrbenito/update_armhf
[armhf] Make Dockerfile.armhf compatible with main
2018-09-21 16:05:32 -07:00
Joffrey F
2a7beb6350 Merge pull request #6204 from docker/5716-unix-paths-from-winhost
Don't convert slashes for UNIX paths on Windows hosts
2018-09-21 10:47:22 -07:00
Joffrey F
30afcc4994 Merge pull request #6209 from docker/images-use-service-tag
Images use service tag
2018-09-20 16:51:26 -07:00
Joffrey F
834acca497 Update acceptance test for image matching
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-20 15:48:08 -07:00
Joffrey F
7d0fb7d3f3 Rewrite images command method to decrease complexity
Also ensure we properly detect matching image names when tag is omitted

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-20 15:48:08 -07:00
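The tag-matching fix boils down to normalizing the service's image name before comparing it against the container image's `RepoTags`; the helper below is quoted from the `compose/cli/main.py` hunk further down this page.

```
def add_default_tag(img_name):
    # "busybox" -> "busybox:latest"; names that already carry a tag are left
    # untouched, and a registry port in an earlier path segment doesn't
    # trigger the rewrite because only the last segment is inspected.
    if ':' not in img_name.split('/')[-1]:
        return '{}:latest'.format(img_name)
    return img_name
```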
Boris HUISGEN
1b668973a2 Add acceptance test
Signed-off-by: Boris HUISGEN <bhuisgen@hbis.fr>
2018-09-20 15:48:08 -07:00
Boris HUISGEN
a2ec572fdf Use same tag as service definition
Signed-off-by: Boris HUISGEN <bhuisgen@hbis.fr>
2018-09-20 15:48:08 -07:00
Joffrey F
0fb6cd1139 Merge pull request #6205 from docker/2473-windows-long-paths
Force consistent behavior around long paths on Windows builds
2018-09-19 18:11:29 -07:00
Joffrey F
96a49a0253 Force consistent behavior around long paths on Windows builds
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-19 16:09:21 -07:00
Joffrey F
f80630ffcf Merge pull request #6140 from docker/4688_no_sequential_ids
Add randomly generated slug to container names to prevent collisions
2018-09-19 15:12:41 -07:00
Joffrey F
9f9122cd95 Don't convert slashes for UNIX paths on Windows hosts
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-19 11:36:51 -07:00
Joffrey F
a5f42ae9e4 Merge pull request #6184 from docker/fix-zsh-completion
Update zsh completion with new options, and ensure service names are properly retrieved
2018-09-13 17:01:03 -07:00
Joffrey F
17d4845dbb Merge pull request #6186 from maxwellb/patch-1
Handle userns security
2018-09-12 17:04:37 -07:00
Maxwell Bloch
a7c05f41f1 Handle userns security
- Adds `--userns=host` when `userns-remap` is set

Signed-off-by: Maxwell Bloch <maxwellbloch@live.com>
2018-09-12 19:29:03 -04:00
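A rough illustration of that check (the real change is in the script that launches the containerized Compose; the helper below is hypothetical): query the daemon's security options and emit `--userns=host` when user-namespace remapping is active.

```
import subprocess

def userns_flags():
    # `docker info` lists "name=userns" among SecurityOptions when the
    # daemon runs with userns-remap enabled.
    out = subprocess.check_output(
        ['docker', 'info', '--format', '{{json .SecurityOptions}}']
    )
    return ['--userns=host'] if b'userns' in out else []
```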
Joffrey F
265d9dae4b Update zsh completion with new options, and ensure service names are properly retrieved
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-12 16:17:30 -07:00
Joffrey F
5916639383 Preserve container numbers, add slug to prevent name collisions
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-12 12:07:52 -07:00
Joffrey F
4e2de3c1ff Replace sequential container indexes with randomly generated IDs
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-11 15:26:58 -07:00
Joffrey F
bd8b2dfbbc Merge pull request #6178 from docker/update-versions-script
Skip testing TPs/betas for now
2018-09-10 15:44:42 -07:00
Joffrey F
d491a81cec Skip testing TPs/betas for now
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-09-10 15:08:10 -07:00
Joffrey F
bdd2c80d98 Merge pull request #6172 from riverzhang/typo
Fix some typos
2018-09-07 11:41:52 -07:00
Joffrey F
58c5b92f09 Merge pull request #6173 from mirake/fix-typo
Typo fix: overriden -> overridden
2018-09-07 11:41:00 -07:00
Joffrey F
7e6275219b Merge pull request #6171 from tossmilestone/fix-typos
Fix typos in CHANGELOG
2018-09-07 11:27:30 -07:00
rongzhang
373c83ccd7 Fix some typo
Signed-off-by: rongzhang <rongzhang@alauda.io>
2018-09-07 16:57:43 +08:00
Xiaoxi He
b66782b412 Fix typos in CHANGELOG
Signed-off-by: Xiaoxi He <xxhe@alauda.io>
2018-09-07 16:42:38 +08:00
ruicao
5713215e84 Typo fix: overriden -> overridden
Signed-off-by: ruicao <ruicao@alauda.io>
2018-09-07 16:08:19 +08:00
Josenivaldo Benito Jr
a541d88d57 [armhf] Make Dockerfile.armhf compatible with main
Dockerfile now uses the python:3.6 image while Dockerfile.armhf used Debian. The Python image is officially supported on the ARM architecture, hence the two Dockerfiles now differ only in the dockerbins.tgz file version.

Could we use environment variables to select dockerbins.tgz?

Signed-off-by: Josenivaldo Benito Jr <jrbenito@benito.qsl.br>
2018-09-05 11:52:50 -03:00
Joffrey F
db391c03ad Merge pull request #6100 from docker/5960-parallel-pull-progress
Add progress messages to parallel pull
2018-08-24 11:02:24 -07:00
Joffrey F
2038bb5cf7 Merge pull request #6145 from deivid-rodriguez/bug/broken_url
Fix broken url
2018-08-20 14:34:00 -07:00
David Rodríguez
3a93e85762 Fix broken url
As per https://github.com/sgerrand/alpine-pkg-glibc#please-note.

Signed-off-by: David Rodríguez <deivid.rodriguez@riseup.net>
2018-08-17 14:08:41 -03:00
Joffrey F
901ee4e77b Merge pull request #6134 from docker/4841-fix-project-dir
Fix --project-directory handling to apply to .env files as well
2018-08-13 16:03:47 -07:00
Joffrey F
eb63e9f3c7 Fix --project-directory handling to apply to .env files as well
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-10 17:02:56 -07:00
Joffrey F
ed245474c2 Merge pull request #6130 from docker/bump_sdk
Bump Python SDK -> 3.5.0
2018-08-10 14:08:36 -07:00
Joffrey F
5ad50dc0b3 Bump Python SDK -> 3.5.0
Add support for Python 3.7

Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-09 18:31:08 -07:00
Joffrey F
f207d94b3c Merge pull request #6126 from docker/wfender-2013-expose-config-hash
Add --hash opt for config command
2018-08-07 21:12:48 -07:00
Joffrey F
ee878aee4c Handle missing (not built) service image in config --hash
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-07 20:23:21 -07:00
Joffrey F
861031b9b7 Reduce config --hash code complexity and add test
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-07 17:25:35 -07:00
Joffrey F
707e21183f Fix config hash consistency with unprioritized networks
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-08-07 16:51:01 -07:00
Fender William
541fb65259 Add --hash opt for config command
Signed-off-by: Fender William <fender.william@gmail.com>
2018-08-07 16:51:01 -07:00
Joffrey F
473703d0d9 Merge pull request #6115 from graphaelli/parallel-build
add --parallel option to build
2018-08-02 15:23:06 -07:00
Joffrey F
6e95eb7437 Merge pull request #6104 from glorpen/fix-pipes
Fixes pipe handling in container mode.
2018-07-31 15:39:07 -07:00
Gil Raphaelli
89f2bfe4f3 add --parallel option to build
Signed-off-by: Gil Raphaelli <g@raphaelli.com>
2018-07-31 12:06:59 -04:00
Joffrey F
635c77db6c Merge pull request #6071 from nickhiggs/6060-reattach-logger-on-restart
Attach logger to containers after crashing.
2018-07-25 15:20:43 -07:00
Joffrey F
c956785cdc Add progress messages to parallel pull
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-25 14:39:18 -07:00
Arkadiusz Dzięgiel
7f9c042300 Fixes pipe handling in container mode.
Closes #4599, #4460

- adds a way to provide options from env in both cases (tty & non tty)
- allocates TTY only if both stdin & stdout are TTYs
- enables interactive mode if stdin is not TTY

Signed-off-by: Arkadiusz Dzięgiel <arkadiusz.dziegiel@glorpen.pl>
2018-07-24 12:23:31 +02:00
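A minimal sketch of the TTY rules in the bullet list above (the actual fix lives in the script that wraps the containerized Compose; this helper is illustrative only):

```
import sys

def stdio_flags():
    flags = []
    if sys.stdin.isatty() and sys.stdout.isatty():
        flags.append('--tty')          # allocate a TTY only when both ends are TTYs
    if not sys.stdin.isatty():
        flags.append('--interactive')  # keep stdin open when input is piped in
    return flags
```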
Joffrey F
ebad981bcc Merge pull request #6092 from ofek/support-newer-requests
support newer minor version of requests
2018-07-23 13:30:07 -07:00
Joffrey F
5d0fe7bcd3 Merge pull request #6080 from chris-crone/macos-rework-build
Rework build on macOS
2018-07-23 13:26:21 -07:00
Christopher Crone
450efd557a macOS: Rework build scripts
Allows us to build for older versions of macOS by downloading an
older SDK and building OpenSSL and Python against it.

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2018-07-23 11:41:32 +02:00
Ofek Lev
88d88d1998 support newer minor version of requests
Signed-off-by: Ofek Lev <ofekmeister@gmail.com>
2018-07-18 22:25:01 -04:00
Joffrey F
6cb17b90ef 1.23.0dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-18 11:11:34 -07:00
Joffrey F
bb00352c34 Fix up_with_networks test
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-18 11:09:25 -07:00
Joffrey F
1396cdb4be Merge pull request #6088 from docker/release
Resync master with release
2018-07-18 11:00:56 -07:00
Joffrey F
e20d808ed2 Merge pull request #6087 from docker/bump-1.22.0
Bump 1.22.0
2018-07-17 16:01:56 -07:00
Joffrey F
f46880fe9a "Bump 1.22.0"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:48:24 +00:00
Joffrey F
cda827cbfc Improve finalize robustness and allow resume using special --finalize-resume flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
8c0411910d Avoid unrelated file uploads with twine
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
d9545a5909 Add distclean to remove old build files
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
cb1b88c4f8 s/release.py/release.sh/
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-17 22:47:23 +00:00
Joffrey F
9271f9f46f Merge pull request #6077 from docker/5966-exitcode-from-sigkill
Fix --exit-code-from to reflect exit code after termination by Compose
2018-07-16 20:04:35 -04:00
Joffrey F
e6d18b1881 Fix --exit-code-from to reflect exit code after termination by Compose
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-10 15:28:55 -04:00
Joffrey F
8c4fc4bc2e Merge pull request #6073 from docker/release-tool-improve
Misc improvements to release script
2018-07-10 15:04:31 -04:00
Joffrey F
64918235d2 Merge pull request #6072 from docker/6037-external-false
Avoid overriding external = False in serializer
2018-07-09 13:55:59 -07:00
Joffrey F
d7f5220292 Improve finalize robustness and allow resume using special --finalize-resume flag
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 16:51:01 -04:00
Joffrey F
0b5f68098c Avoid unrelated file uploads with twine
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 16:25:06 -04:00
Joffrey F
8a7ee5a7d5 Add distclean to remove old build files
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 16:19:17 -04:00
Joffrey F
e9aaece40d s/release.py/release.sh/
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 15:46:56 -04:00
Joffrey F
9c2ffe6384 Avoid overriding external = False in serializer
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-09 15:28:32 -04:00
Nicholas Higgins
28085ebee2 Attach logger to containers after crashing.
Fixes #6060

Signed-off-by: Nicholas Higgins <nickhiggins42@gmail.com>
2018-07-09 08:47:20 +10:00
Matthieu Nottale
5985d046e3 Merge pull request #6065 from docker/bump-1.22.0-rc2
Bump 1.22.0-rc2
2018-07-05 17:24:31 +02:00
Matthieu Nottale
6817b533a8 "Bump 1.22.0-rc2"
Signed-off-by: Matthieu Nottale <matthieu.nottale@docker.com>
2018-07-05 15:10:31 +00:00
Joffrey F
15718810c0 Prevent attempts to create image names starting with - or _
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-05 15:07:35 +00:00
Joffrey F
969525c190 Docker SDK -> 3.4.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-07-05 15:07:34 +00:00
Joffrey F
40631f9a01 Merge pull request #6051 from docker/bump_sdk
Docker SDK -> 3.4.1
2018-06-29 13:32:51 -07:00
Joffrey F
e8713d7cef Docker SDK -> 3.4.1
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-29 13:05:20 -07:00
Joffrey F
7ae632a9ee Merge pull request #6041 from docker/5929-underscore-projname-2
Don't create image names starting with - or _
2018-06-22 16:25:08 -07:00
Joffrey F
b00db08aa9 Prevent attempts to create image names starting with - or _
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-22 15:56:53 -07:00
Joffrey F
6e30c130d5 Merge pull request #6035 from docker/fix-api-version-typo
Fix API version typo
2018-06-21 14:40:10 -07:00
Joffrey F
bdd7d47640 Merge pull request #6034 from docker/bump-1.22.0-rc1
Bump 1.22.0-rc1
2018-06-21 14:30:26 -07:00
Joffrey F
e7de1bc3c9 3.7 --> API v1.38
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 20:51:50 +00:00
Joffrey F
a82986943b Fix release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 13:48:32 -07:00
Joffrey F
73663e46b9 3.7 --> API v1.38
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 13:47:44 -07:00
Joffrey F
1fb5039585 Bump 1.22.0-rc1
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 13:28:13 -07:00
Joffrey F
e8af19daa3 Fix release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 13:28:13 -07:00
Joffrey F
47584a37c9 Fix bintray API client
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 11:49:35 -07:00
Joffrey F
156ce2bc1d Merge remote-tracking branch 'origin/release' into bump-1.22.0-rc1
2018-06-21 18:47:21 +00:00
Joffrey F
709ba0975d Release script fixes
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-21 11:40:32 -07:00
Joffrey F
429b1c8b3c Merge pull request #6017 from docker/6015-utf8-bom
Better support for UTF8+bom Compose files
2018-06-18 17:05:56 -07:00
Joffrey F
7f0734ca3c Merge pull request #6012 from docker/5930-credstore-ldpath
Use original LD_LIBRARY_PATH when shelling out to credential stores
2018-06-18 17:05:37 -07:00
Joffrey F
80322cfa5b Better support for UTF8+bom Compose files
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-18 16:46:37 -07:00
Joffrey F
c187d3c39f Use original LD_LIBRARY_PATH when shelling out to credential stores
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-18 16:07:55 -07:00
Joffrey F
f0674be578 Merge pull request #6027 from docker/bump_sdk
Bump Python SDK -> 3.4.0
2018-06-18 16:06:07 -07:00
Joffrey F
a728ff6a59 Bump Python SDK -> 3.4.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-06-18 15:31:25 -07:00
Joffrey F
9cb1a07c66 Merge pull request #6025 from vdemeester/init-in-3.7
Add `init` support in 3.7 schema
2018-06-18 11:14:54 -07:00
Vincent Demeester
c584ad67fc Add init support in 3.7 schema
> Run an init inside the container that forwards signals and reaps
> processes

This is already supported in 2.4 schema

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2018-06-18 10:52:57 +02:00
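In Engine API terms, `init: true` maps to the `Init` field of the container's HostConfig; with docker-py (the SDK Compose builds on) that looks roughly like this:

```
import docker

client = docker.APIClient()
# HostConfig.Init runs a small init as PID 1 inside the container so that
# signals are forwarded and zombie processes are reaped.
host_config = client.create_host_config(init=True)
```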
Joffrey F
13d8cf413e Merge pull request #5995 from vdemeester/x-objects
Allow `x-*` extension on 3rd level objects
2018-06-05 09:41:35 -07:00
Vincent Demeester
7a19b7548f Allow x-* extension on 3rd level objects
As for top-level key, any 3rd-level key which starts with `x-` will be
ignored by compose. This allows for users to:
* include additional metadata in their compose files
* create YAML anchor objects that can be re-used in other parts of the config

This matches a similar feature in the swagger spec definition:
https://swagger.io/specification/#specificationExtensions

This means a composefile like the following is valid

```
version: "3.7"
services:
  foo:
    image: foo/bar
    x-foo: bar
networks:
  bar:
    x-bar: baz
```

It concerns services, volumes, networks, configs and secrets.

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2018-05-31 14:15:10 +02:00
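Mechanically, ignoring these fields is a simple filter; a sketch (function name hypothetical):

```
def strip_extension_fields(mapping):
    # Drop any key beginning with "x-" before validation, mirroring the
    # existing treatment of top-level "x-" keys.
    return {k: v for k, v in mapping.items() if not k.startswith('x-')}

# strip_extension_fields({'image': 'foo/bar', 'x-foo': 'bar'})
# -> {'image': 'foo/bar'}
```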
Joffrey F
b9cccf2efc Merge pull request #5992 from vdemeester/3.7-rollback-config
Add 3.7 schema and add rollback_config to it
2018-05-30 12:41:36 -07:00
Vincent Demeester
70574efd5b Support for rollback config in compose 3.7
Ignoring it on docker-compose

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2018-05-30 13:36:59 +02:00
Vincent Demeester
025fb7f860 Add composefile v3.7
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2018-05-29 11:58:54 +02:00
Joffrey F
706164accd Merge pull request #5982 from docker/5933-retrieve-legacy-containers
Allow all Compose commands to retrieve and handle legacy-name containers
2018-05-24 11:33:43 -07:00
Joffrey F
e245fb04cf Allow all Compose commands to retrieve and handle legacy-name containers
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-24 11:11:23 -07:00
Joffrey F
cc62764c12 Merge pull request #5968 from albers/completion-fix-running-services
Fix bash completion for running services
2018-05-22 15:36:21 -07:00
Harald Albers
7846f6e2a0 Fix bash completion for running services
Signed-off-by: Harald Albers <github@albersweb.de>
2018-05-17 15:57:07 +02:00
Joffrey F
c15c79ed2f Merge pull request #5940 from docker/5929-underscore-projname
Don't attempt to create resources with name starting with illegal chars
2018-05-04 16:44:22 -07:00
Joffrey F
263e939125 Merge pull request #5939 from docker/5928-compatibility-attachable
Ignore attachable property on networks in compatibility mode
2018-05-04 16:44:00 -07:00
Joffrey F
d5ebc73482 Don't attempt to create resources with name starting with illegal characters
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-04 16:15:52 -07:00
Joffrey F
f368b4846f Ignore attachable property on networks in compatibility mode
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-04 15:16:43 -07:00
Joffrey F
1cf1217ecb Merge pull request #5938 from docker/5931-ignore-default-platform
Ignore default platform if API version doesn't support platform param
2018-05-04 15:13:40 -07:00
Joffrey F
c3bb958865 Ignore default platform if API version doesn't support platform param
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-04 14:51:53 -07:00
Joffrey F
ddcd5c9fe9 Merge pull request #5925 from docker/5923-iprange
iprange -> ip_range
2018-05-02 17:49:26 -07:00
Joffrey F
0898c783ad Merge pull request #5926 from docker/bump-1.21.2
Bump 1.21.2
2018-05-02 16:04:55 -07:00
Joffrey F
a133471152 Fix appveyor build
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 11:19:42 -07:00
Joffrey F
f336694912 "Bump 1.21.2"
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-01 17:37:27 -07:00
Joffrey F
7db742d3f2 Esnure docker-compose binary is executable (fixes #5917)
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-01 17:37:14 -07:00
Joffrey F
064471e640 iprange -> ip_range
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 00:28:45 +00:00
Joffrey F
f05f1699c4 Partial revert bc03441550
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 00:28:44 +00:00
Joffrey F
d8b4b94585 Typo fix
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 00:28:44 +00:00
Joffrey F
d3ca20074d Automatically detect pickable PRs for patch releases
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 00:28:44 +00:00
Joffrey F
7b4603dc22 Finalize fixes
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 00:28:43 +00:00
Joffrey F
507376549c Improve release automation
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-02 00:28:43 +00:00
Joffrey F
5aafa54667 iprange -> ip_range
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-05-01 17:11:14 -07:00
Joffrey F
05638ab5ea Esnure docker-compose binary is executable (fixes #5917)
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-28 13:42:24 -07:00
Joffrey F
31a4ceeab0 Merge pull request #5916 from docker/autotests_fixes
Auto release improvements
2018-04-27 19:16:25 -07:00
Joffrey F
e6aedb1ce0 Partial revert bc03441550
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-27 18:48:30 -07:00
Joffrey F
5eb3f4b32f Typo fix
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-27 18:43:05 -07:00
Joffrey F
bc03441550 Automatically detect pickable PRs for patch releases
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-27 18:36:48 -07:00
Joffrey F
bb44d06f07 Merge pull request #5915 from docker/autotests_fixes
Improve release automation
2018-04-27 15:26:28 -07:00
Joffrey F
90c89e34f1 Finalize fixes
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-27 15:06:36 -07:00
Joffrey F
948ce555da Merge branch 'release'
Conflicts:
	compose/__init__.py
2018-04-27 14:50:37 -07:00
Joffrey F
d469113b37 Improve release automation
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-27 12:24:43 -07:00
Joffrey F
c2355175ea Merge pull request #5910 from docker/5885-duplicate-binds
Prevent duplicate binds in generated container config
2018-04-26 16:01:53 -07:00
Joffrey F
aecc0de28f Prevent duplicate binds in generated container config
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-26 15:20:45 -07:00
Joffrey F
9f42fac2bb Merge pull request #5906 from docker/bump_sdk
Bump SDK version to latest
2018-04-25 18:35:49 -07:00
Joffrey F
6e09e37114 Bump SDK version to latest
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-25 18:08:34 -07:00
Joffrey F
faa532c315 Merge pull request #5896 from docker/5874-legacy-proj-name
Retrieve objects using legacy (< 1.21) project names
2018-04-24 16:41:42 -07:00
Joffrey F
3b2ce82fa1 Use true_name for remove operation
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-24 16:19:38 -07:00
Joffrey F
c1657dc46a Improve legacy network and volume detection
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-24 16:05:12 -07:00
Joffrey F
fa3acbeb8d Merge pull request #5897 from docker/5882-ipam_options_check
Incorrect key name for IPAM options check
2018-04-24 13:33:42 -07:00
Joffrey F
3cf58705b7 Merge pull request #5898 from docker/5884-ipam-config-schema
Clearly define IPAM config schema for validation
2018-04-24 12:42:39 -07:00
Joffrey F
fa6d837b49 Clearly define IPAM config schema for validation
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-23 19:08:55 -07:00
Joffrey F
299ce6ad00 Incorrect key name for IPAM options check
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-23 18:16:58 -07:00
Joffrey F
4dece7fcb2 Retrieve objects using legacy (< 1.21) project names
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-23 17:30:08 -07:00
Joffrey F
aa66338f39 Merge pull request #5891 from shin-/automated-releases
Automated releases
2018-04-23 15:03:04 -07:00
Joffrey F
0578a58471 Remove obsolete release scripts
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-23 15:01:30 -07:00
Joffrey F
7536c331e0 Document new release process
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-23 14:52:15 -07:00
Joffrey F
62fc24eb27 Uncomment deploy steps
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-20 17:15:45 -07:00
Joffrey F
eba67910f3 Containerize release tool
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-20 16:55:48 -07:00
Joffrey F
a752208621 Fix appveyor build
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-20 15:29:37 -07:00
Joffrey F
6b83a651f6 Improve monitor function
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-20 14:39:58 -07:00
Joffrey F
2b5ad06e00 Cleanup
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-20 14:15:15 -07:00
Joffrey F
b06bc3cdea Add support for PR cherry picks
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-20 13:14:50 -07:00
Joffrey F
8511570764 Default base is master
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-19 15:24:41 -07:00
Joffrey F
e7086091be Early check for non-draft release in resume
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-19 15:22:55 -07:00
Joffrey F
c49eca41a0 Avoid accidental prod push
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-19 15:08:49 -07:00
Joffrey F
a120759c9d Add finalize step
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-19 14:47:04 -07:00
Joffrey F
e9f6abf8f4 Add images build step and finalize placeholder
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-18 18:42:08 -07:00
Joffrey F
599456378b Added logging for asset removal
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-18 17:07:41 -07:00
Joffrey F
6a71040514 Temp test
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-18 17:02:15 -07:00
Joffrey F
ae6dd8a93c Implement resuming a release
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-18 16:58:24 -07:00
Joffrey F
b1c831c54a Inital pass on comprehensive automated release script
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-18 16:01:52 -07:00
Joffrey F
fc923c3580 Update .gitignore
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-17 12:13:51 -07:00
Joffrey F
12b68572ef Merge pull request #5868 from albers/completion-for-1.21.0
Add support for features added in 1.21.0 to bash completion
2018-04-12 11:47:00 -07:00
Joffrey F
3f85c4291b Merge pull request #5867 from albers/refactor-completion-services
Refactor bash completion for services
2018-04-12 11:46:21 -07:00
Joffrey F
7078c8740a Bump 1.22.0 dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-12 11:31:44 -07:00
Joffrey F
d898b0cee4 Merge branch 'release'
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-12 11:31:30 -07:00
Joffrey F
27447d9144 Merge branch 'release'
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-12 11:30:26 -07:00
Harald Albers
ca396bac6d Add support for features added in 1.21.0 to bash completion
- add support for `docker-compose exec --workdir|-w`
- add support for `docker-compose build --compress`
- add support for `docker-compose pull --no-parallel`, drop deprecated
  option `--parallel`

Signed-off-by: Harald Albers <github@albersweb.de>
2018-04-12 08:52:20 +02:00
Harald Albers
20a9ae50b0 Refactor bash completion for services
Signed-off-by: Harald Albers <github@albersweb.de>
2018-04-11 12:47:10 +02:00
Joffrey F
6234cc8343 Merge pull request #5858 from docker/5855-error-encoding
Make sure error messages are unicode strings before combining
2018-04-09 12:06:29 -07:00
Joffrey F
8356576a9a Make sure error messages are unicode strings before combining
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-04-09 11:46:33 -07:00
Joffrey F
2975f06ca2 Merge pull request #5845 from docker/5253-port-serialize
Fix port serialization with external IP
2018-03-30 18:34:18 -07:00
Joffrey F
7aa51a18ff Fix port serialization with external IP
Signed-off-by: Joffrey F <joffrey@docker.com>
2018-03-30 18:02:06 -07:00
83 changed files with 2464 additions and 712 deletions

.circleci/config.yml

@@ -2,7 +2,7 @@ version: 2
jobs:
test:
macos:
xcode: "8.3.3"
xcode: "9.4.1"
steps:
- checkout
- run:
@@ -13,11 +13,11 @@ jobs:
command: sudo pip install --upgrade tox==2.1.1
- run:
name: unit tests
command: tox -e py27,py36 -- tests/unit
command: tox -e py27,py36,py37 -- tests/unit
build-osx-binary:
macos:
xcode: "8.3.3"
xcode: "9.4.1"
steps:
- checkout
- run:
@@ -25,18 +25,17 @@ jobs:
command: sudo pip install --upgrade pip virtualenv
- run:
name: setup script
command: ./script/setup/osx
command: DEPLOYMENT_TARGET=10.11 ./script/setup/osx
- run:
name: build script
command: ./script/build/osx
- store_artifacts:
path: dist/docker-compose-Darwin-x86_64
destination: docker-compose-Darwin-x86_64
# - deploy:
# name: Deploy binary to bintray
# command: |
# OS_NAME=Darwin PKG_NAME=osx ./script/circle/bintray-deploy.sh
- deploy:
name: Deploy binary to bintray
command: |
OS_NAME=Darwin PKG_NAME=osx ./script/circle/bintray-deploy.sh
build-linux-binary:
machine:
@@ -54,28 +53,6 @@ jobs:
command: |
OS_NAME=Linux PKG_NAME=linux ./script/circle/bintray-deploy.sh
trigger-osx-binary-deploy:
# We use a separate repo to build OSX binaries meant for distribution
# with support for OSSX 10.11 (xcode 7). This job triggers a build on
# that repo.
docker:
- image: alpine:3.6
steps:
- run:
name: install curl
command: apk update && apk add curl
- run:
name: API trigger
command: |
curl -X POST -H "Content-Type: application/json" -d "{\
\"build_parameters\": {\
\"COMPOSE_BRANCH\": \"${CIRCLE_BRANCH}\"\
}\
}" https://circleci.com/api/v1.1/project/github/docker/compose-osx-release?circle-token=${OSX_RELEASE_TOKEN} \
> /dev/null
workflows:
version: 2
@@ -84,9 +61,3 @@ workflows:
- test
- build-linux-binary
- build-osx-binary
- trigger-osx-binary-deploy:
filters:
branches:
only:
- master
- /bump-.*/

.dockerignore

@@ -1,11 +1,13 @@
*.egg-info
.coverage
.git
.github
.tox
build
binaries
coverage-html
docs/_site
venv
*venv
.tox
**/__pycache__
*.pyc

.gitignore

@@ -1,14 +1,18 @@
*.egg-info
*.pyc
*.swo
*.swp
.cache
.coverage*
.DS_Store
.idea
/.tox
/binaries
/build
/compose/GITSHA
/coverage-html
/dist
/docs/_site
/venv
README.rst
compose/GITSHA
*.swo
*.swp
.DS_Store
/README.rst
/*venv

CHANGELOG.md

@@ -1,6 +1,179 @@
Change log
==========
1.23.2 (2018-11-28)
-------------------
### Bugfixes
- Reverted a 1.23.0 change that appended random strings to container names
created by `docker-compose up`, causing addressability issues.
Note: Containers created by `docker-compose run` will continue to use
randomly generated names to avoid collisions during parallel runs.
- Fixed an issue where some `dockerfile` paths would fail unexpectedly when
attempting to build on Windows.
- Fixed a bug where build context URLs would fail to build on Windows.
- Fixed a bug that caused `run` and `exec` commands to fail for some otherwise
accepted values of the `--host` parameter.
- Fixed an issue where overrides for the `storage_opt` and `isolation` keys in
service definitions weren't properly applied.
- Fixed a bug where some invalid Compose files would raise an uncaught
exception during validation.
1.23.1 (2018-11-01)
-------------------
### Bugfixes
- Fixed a bug where working with containers created with a previous (< 1.23.0)
version of Compose would cause unexpected crashes
- Fixed an issue where the behavior of the `--project-directory` flag would
vary depending on which subcommand was being used.
1.23.0 (2018-10-30)
-------------------
### Important note
The default naming scheme for containers created by Compose in this version
has changed from `<project>_<service>_<index>` to
`<project>_<service>_<index>_<slug>`, where `<slug>` is a randomly-generated
hexadecimal string. Please make sure to update scripts relying on the old
naming scheme accordingly before upgrading.
### Features
- Logs for containers restarting after a crash will now appear in the output
of the `up` and `logs` commands.
- Added `--hash` option to the `docker-compose config` command, allowing users
to print a hash string for each service's configuration to facilitate rolling
updates.
- Added `--parallel` flag to the `docker-compose build` command, allowing
Compose to build up to 5 images simultaneously.
- Output for the `pull` command now reports status / progress even when pulling
multiple images in parallel.
- For images with multiple names, Compose will now attempt to match the one
present in the service configuration in the output of the `images` command.
### Bugfixes
- Parallel `run` commands for the same service will no longer fail due to name
collisions.
- Fixed an issue where paths longer than 260 characters on Windows clients would
cause `docker-compose build` to fail.
- Fixed a bug where attempting to mount `/var/run/docker.sock` with
Docker Desktop for Windows would result in failure.
- The `--project-directory` option is now used by Compose to determine where to
look for the `.env` file.
- `docker-compose build` no longer fails when attempting to pull an image with
credentials provided by the gcloud credential helper.
- Fixed the `--exit-code-from` option in `docker-compose up` to always report
the actual exit code even when the watched container isn't the cause of the
exit.
- Fixed an issue that would prevent recreating a service in some cases where
a volume would be mapped to the same mountpoint as a volume declared inside
the image's Dockerfile.
- Fixed a bug that caused hash configuration with multiple networks to be
inconsistent, causing some services to be unnecessarily restarted.
- Fixed a bug that would cause failures with variable substitution for services
with a name containing one or more dot characters
- Fixed a pipe handling issue when using the containerized version of Compose.
- Fixed a bug causing `external: false` entries in the Compose file to be
printed as `external: true` in the output of `docker-compose config`
- Fixed a bug where issuing a `docker-compose pull` command on services
without a defined image key would cause Compose to crash
- Volumes and binds are now mounted in the order they're declared in the
service definition
### Miscellaneous
- The `zsh` completion script has been updated with new options, and no
longer suggests container names where service names are expected.
1.22.0 (2018-07-17)
-------------------
### Features
#### Compose format version 3.7
- Introduced version 3.7 of the `docker-compose.yml` specification.
This version requires Docker Engine 18.06.0 or above.
- Added support for `rollback_config` in the deploy configuration
- Added support for the `init` parameter in service configurations
- Added support for extension fields in service, network, volume, secret,
and config configurations
#### Compose format version 2.4
- Added support for extension fields in service, network,
and volume configurations
### Bugfixes
- Fixed a bug that prevented deployment with some Compose files when
`DOCKER_DEFAULT_PLATFORM` was set
- Compose will no longer try to create containers or volumes with
invalid starting characters
- Fixed several bugs that prevented Compose commands from working properly
with containers created with an older version of Compose
- Fixed an issue with the output of `docker-compose config` with the
`--compatibility-mode` flag enabled when the source file contains
attachable networks
- Fixed a bug that prevented the `gcloud` credential store from working
properly when used with the Compose binary on UNIX
- Fixed a bug that caused connection errors when trying to operate
over a non-HTTPS TCP connection on Windows
- Fixed a bug that caused builds to fail on Windows if the Dockerfile
was located in a subdirectory of the build context
- Fixed an issue that prevented proper parsing of UTF-8 BOM encoded
Compose files on Windows
- Fixed an issue with handling of the double-wildcard (`**`) pattern in `.dockerignore` files when using `docker-compose build`
- Fixed a bug that caused auth values in legacy `.dockercfg` files to be ignored
- `docker-compose build` will no longer attempt to create image names starting with an invalid character
1.21.2 (2018-05-03)
-------------------
### Bugfixes
- Fixed a bug where the ip_range attribute in IPAM configs was prevented
from passing validation
1.21.1 (2018-04-27)
-------------------
@@ -223,7 +396,7 @@ Change log
preventing Compose from recovering volume data from previous containers for
anonymous volumes
- Added limit for number of simulatenous parallel operations, which should
- Added limit for number of simultaneous parallel operations, which should
prevent accidental resource exhaustion of the server. Default is 64 and
can be configured using the `COMPOSE_PARALLEL_LIMIT` environment variable
@@ -521,7 +694,7 @@ Change log
### Bugfixes
- Volumes specified through the `--volume` flag of `docker-compose run` now
complement volumes declared in the service's defintion instead of replacing
complement volumes declared in the service's definition instead of replacing
them
- Fixed a bug where using multiple Compose files would unset the scale value

Dockerfile

@@ -1,20 +1,14 @@
FROM docker:18.06.1 as docker
FROM python:3.6
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
locales \
curl \
python-dev \
git
RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/x86_64/docker-17.12.0-ce.tgz" && \
SHA256=692e1c72937f6214b1038def84463018d8e320c8eaf8530546c84c2f8f9c767d; \
echo "${SHA256} dockerbins.tgz" | sha256sum -c - && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz
COPY --from=docker /usr/local/bin/docker /usr/local/bin/docker
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen

Dockerfile.armhf

@@ -1,55 +1,21 @@
FROM armhf/debian:wheezy
FROM python:3.6
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
locales \
gcc \
make \
zlib1g \
zlib1g-dev \
libssl-dev \
git \
ca-certificates \
curl \
libsqlite3-dev \
libbz2-dev \
; \
rm -rf /var/lib/apt/lists/*
python-dev \
git
RUN curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/armhf/docker-17.12.0-ce.tgz" && \
SHA256=f8de6378dad825b9fd5c3c2f949e791d22f918623c27a72c84fd6975a0e5d0a2; \
echo "${SHA256} dockerbins.tgz" | sha256sum -c - && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz
# Build Python 2.7.13 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz | tar -xz; \
cd Python-2.7.13; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-2.7.13
# Build python 3.6 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/3.6.4/Python-3.6.4.tgz | tar -xz; \
cd Python-3.6.4; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-3.6.4
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib
# Install pip
RUN set -ex; \
curl -L https://bootstrap.pypa.io/get-pip.py | python
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8
@@ -70,4 +36,4 @@ RUN tox --notest
ADD . /code/
RUN chown -R user /code/
ENTRYPOINT ["/code/.tox/py27/bin/docker-compose"]
ENTRYPOINT ["/code/.tox/py36/bin/docker-compose"]

Dockerfile.run

@@ -1,23 +1,19 @@
FROM alpine:3.6
FROM docker:18.06.1 as docker
FROM alpine:3.8
ENV GLIBC 2.27-r0
ENV DOCKERBINS_SHA 1270dce1bd7e1838d62ae21d2505d87f16efc1d9074645571daaefdfd0c14054
ENV GLIBC 2.28-r0
RUN apk update && apk add --no-cache openssl ca-certificates curl libgcc && \
curl -fsSL -o /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-glibc/master/sgerrand.rsa.pub && \
curl -fsSL -o /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
curl -fsSL -o glibc-$GLIBC.apk https://github.com/sgerrand/alpine-pkg-glibc/releases/download/$GLIBC/glibc-$GLIBC.apk && \
apk add --no-cache glibc-$GLIBC.apk && \
ln -s /lib/libz.so.1 /usr/glibc-compat/lib/ && \
ln -s /lib/libc.musl-x86_64.so.1 /usr/glibc-compat/lib && \
ln -s /usr/lib/libgcc_s.so.1 /usr/glibc-compat/lib && \
curl -fsSL -o dockerbins.tgz "https://download.docker.com/linux/static/stable/x86_64/docker-17.12.1-ce.tgz" && \
echo "${DOCKERBINS_SHA} dockerbins.tgz" | sha256sum -c - && \
tar xvf dockerbins.tgz docker/docker --strip-components 1 && \
mv docker /usr/local/bin/docker && \
chmod +x /usr/local/bin/docker && \
rm dockerbins.tgz /etc/apk/keys/sgerrand.rsa.pub glibc-$GLIBC.apk && \
rm /etc/apk/keys/sgerrand.rsa.pub glibc-$GLIBC.apk && \
apk del curl
COPY --from=docker /usr/local/bin/docker /usr/local/bin/docker
COPY dist/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
ENTRYPOINT ["docker-compose"]

Jenkinsfile

@@ -74,10 +74,11 @@ buildImage()
def testMatrix = [failFast: true]
def docker_versions = get_versions(2)
for (int i = 0 ;i < docker_versions.length ; i++) {
for (int i = 0; i < docker_versions.length; i++) {
def dockerVersion = docker_versions[i]
testMatrix["${dockerVersion}_py27"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py27"])
testMatrix["${dockerVersion}_py36"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py36"])
testMatrix["${dockerVersion}_py37"] = runTests([dockerVersions: dockerVersion, pythonVersions: "py37"])
}
parallel(testMatrix)

appveyor.yml

@@ -10,7 +10,7 @@ install:
build: false
test_script:
- "tox -e py27,py36 -- tests/unit"
- "tox -e py27,py36,py37 -- tests/unit"
- ps: ".\\script\\build\\windows.ps1"
artifacts:

compose/__init__.py

@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.21.1'
__version__ = '1.23.2'

compose/cli/command.py

@@ -23,7 +23,8 @@ log = logging.getLogger(__name__)
def project_from_options(project_dir, options):
environment = Environment.from_env_file(project_dir)
override_dir = options.get('--project-directory')
environment = Environment.from_env_file(override_dir or project_dir)
set_parallel_limit(environment)
host = options.get('--host')
@@ -37,7 +38,7 @@ def project_from_options(project_dir, options):
host=host,
tls_config=tls_config_from_options(options, environment),
environment=environment,
override_dir=options.get('--project-directory'),
override_dir=override_dir,
compatibility=options.get('--compatibility'),
)
@@ -59,12 +60,13 @@ def set_parallel_limit(environment):
def get_config_from_options(base_dir, options):
environment = Environment.from_env_file(base_dir)
override_dir = options.get('--project-directory')
environment = Environment.from_env_file(override_dir or base_dir)
config_path = get_config_path_from_options(
base_dir, options, environment
)
return config.load(
config.find(base_dir, config_path, environment),
config.find(base_dir, config_path, environment, override_dir),
options.get('--compatibility')
)

compose/cli/docker_client.py

@@ -117,6 +117,13 @@ def docker_client(environment, version=None, tls_config=None, host=None,
kwargs['user_agent'] = generate_user_agent()
# Workaround for
# https://pyinstaller.readthedocs.io/en/v3.3.1/runtime-information.html#ld-library-path-libpath-considerations
if 'LD_LIBRARY_PATH_ORIG' in environment:
kwargs['credstore_env'] = {
'LD_LIBRARY_PATH': environment.get('LD_LIBRARY_PATH_ORIG'),
}
client = APIClient(**kwargs)
client._original_base_url = kwargs.get('base_url')

compose/cli/errors.py

@@ -54,7 +54,7 @@ def handle_connection_errors(client):
except APIError as e:
log_api_error(e, client.api_version)
raise ConnectionError()
except (ReadTimeout, socket.timeout) as e:
except (ReadTimeout, socket.timeout):
log_timeout_error(client.timeout)
raise ConnectionError()
except Exception as e:

compose/cli/log_printer.py

@@ -210,10 +210,15 @@ def start_producer_thread(thread_args):
def watch_events(thread_map, event_stream, presenters, thread_args):
crashed_containers = set()
for event in event_stream:
if event['action'] == 'stop':
thread_map.pop(event['id'], None)
if event['action'] == 'die':
thread_map.pop(event['id'], None)
crashed_containers.add(event['id'])
if event['action'] != 'start':
continue
@@ -223,6 +228,11 @@ def watch_events(thread_map, event_stream, presenters, thread_args):
# Container was stopped and started, we need a new thread
thread_map.pop(event['id'], None)
# Container crashed so we should reattach to it
if event['id'] in crashed_containers:
event['container'].attach_log_stream()
crashed_containers.remove(event['id'])
thread_map[event['id']] = build_thread(
event['container'],
next(presenters),

compose/cli/main.py

@@ -238,11 +238,14 @@ class TopLevelCommand(object):
version Show the Docker-Compose version information
"""
def __init__(self, project, project_dir='.', options=None):
def __init__(self, project, options=None):
self.project = project
self.project_dir = '.'
self.toplevel_options = options or {}
@property
def project_dir(self):
return self.toplevel_options.get('--project-directory') or '.'
def build(self, options):
"""
Build or rebuild services.
@@ -260,6 +263,7 @@ class TopLevelCommand(object):
--pull Always attempt to pull a newer version of the image.
-m, --memory MEM Sets memory limit for the build container.
--build-arg key=val Set build-time variables for services.
--parallel Build images in parallel.
"""
service_names = options['SERVICE']
build_args = options.get('--build-arg', None)
@@ -280,6 +284,7 @@ class TopLevelCommand(object):
memory=options.get('--memory'),
build_args=build_args,
gzip=options.get('--compress', False),
parallel_build=options.get('--parallel', False),
)
def bundle(self, options):
@@ -301,7 +306,7 @@ class TopLevelCommand(object):
-o, --output PATH Path to write the bundle file to.
Defaults to "<project name>.dab".
"""
compose_config = get_config_from_options(self.project_dir, self.toplevel_options)
compose_config = get_config_from_options('.', self.toplevel_options)
output = options["--output"]
if not output:
@@ -326,10 +331,12 @@ class TopLevelCommand(object):
anything.
--services Print the service names, one per line.
--volumes Print the volume names, one per line.
--hash="*" Print the service config hash, one per line.
Set "service1,service2" for a list of specified services
or use the wildcard symbol to display all services
"""
compose_config = get_config_from_options(self.project_dir, self.toplevel_options)
compose_config = get_config_from_options('.', self.toplevel_options)
image_digests = None
if options['--resolve-image-digests']:
@@ -348,6 +355,15 @@ class TopLevelCommand(object):
print('\n'.join(volume for volume in compose_config.volumes))
return
if options['--hash'] is not None:
h = options['--hash']
self.project = project_from_options('.', self.toplevel_options)
services = [svc for svc in options['--hash'].split(',')] if h != '*' else None
with errors.handle_connection_errors(self.project.client):
for service in self.project.get_services(services):
print('{} {}'.format(service.name, service.config_hash))
return
print(serialize_config(compose_config, image_digests))
def create(self, options):
@@ -552,31 +568,43 @@ class TopLevelCommand(object):
if options['--quiet']:
for image in set(c.image for c in containers):
print(image.split(':')[1])
else:
headers = [
'Container',
'Repository',
'Tag',
'Image Id',
'Size'
]
rows = []
for container in containers:
image_config = container.image_config
repo_tags = (
image_config['RepoTags'][0].rsplit(':', 1) if image_config['RepoTags']
else ('<none>', '<none>')
)
image_id = image_config['Id'].split(':')[1][:12]
size = human_readable_file_size(image_config['Size'])
rows.append([
container.name,
repo_tags[0],
repo_tags[1],
image_id,
size
])
print(Formatter().table(headers, rows))
return
def add_default_tag(img_name):
if ':' not in img_name.split('/')[-1]:
return '{}:latest'.format(img_name)
return img_name
headers = [
'Container',
'Repository',
'Tag',
'Image Id',
'Size'
]
rows = []
for container in containers:
image_config = container.image_config
service = self.project.get_service(container.service)
index = 0
img_name = add_default_tag(service.image_name)
if img_name in image_config['RepoTags']:
index = image_config['RepoTags'].index(img_name)
repo_tags = (
image_config['RepoTags'][index].rsplit(':', 1) if image_config['RepoTags']
else ('<none>', '<none>')
)
image_id = image_config['Id'].split(':')[1][:12]
size = human_readable_file_size(image_config['Size'])
rows.append([
container.name,
repo_tags[0],
repo_tags[1],
image_id,
size
])
print(Formatter().table(headers, rows))
def kill(self, options):
"""
@@ -1085,12 +1113,15 @@ class TopLevelCommand(object):
)
self.project.stop(service_names=service_names, timeout=timeout)
if exit_value_from:
exit_code = compute_service_exit_code(exit_value_from, attached_containers)
sys.exit(exit_code)
@classmethod
def version(cls, options):
"""
Show version informations
Show version information
Usage: version [--short]
@@ -1103,33 +1134,33 @@ class TopLevelCommand(object):
print(get_version_info('full'))
def compute_service_exit_code(exit_value_from, attached_containers):
candidates = list(filter(
lambda c: c.service == exit_value_from,
attached_containers))
if not candidates:
log.error(
'No containers matching the spec "{0}" '
'were run.'.format(exit_value_from)
)
return 2
if len(candidates) > 1:
exit_values = filter(
lambda e: e != 0,
[c.inspect()['State']['ExitCode'] for c in candidates]
)
return exit_values[0]
return candidates[0].inspect()['State']['ExitCode']
def compute_exit_code(exit_value_from, attached_containers, cascade_starter, all_containers):
exit_code = 0
if exit_value_from:
candidates = list(filter(
lambda c: c.service == exit_value_from,
attached_containers))
if not candidates:
log.error(
'No containers matching the spec "{0}" '
'were run.'.format(exit_value_from)
)
exit_code = 2
elif len(candidates) > 1:
exit_values = filter(
lambda e: e != 0,
[c.inspect()['State']['ExitCode'] for c in candidates]
)
exit_code = exit_values[0]
else:
exit_code = candidates[0].inspect()['State']['ExitCode']
else:
for e in all_containers:
if (not e.is_running and cascade_starter == e.name):
if not e.exit_code == 0:
exit_code = e.exit_code
break
for e in all_containers:
if (not e.is_running and cascade_starter == e.name):
if not e.exit_code == 0:
exit_code = e.exit_code
break
return exit_code
@@ -1421,7 +1452,9 @@ def call_docker(args, dockeropts):
if verify:
tls_options.append('--tlsverify')
if host:
tls_options.extend(['--host', host.lstrip('=')])
tls_options.extend(
['--host', re.sub(r'^https?://', 'tcp://', host.lstrip('='))]
)
args = [executable_path] + tls_options + args
log.debug(" ".join(map(pipes.quote, args)))

compose/config/__init__.py

@@ -6,6 +6,7 @@ from . import environment
from .config import ConfigurationError
from .config import DOCKER_CONFIG_KEYS
from .config import find
from .config import is_url
from .config import load
from .config import merge_environment
from .config import merge_labels

compose/config/config.py

@@ -91,6 +91,7 @@ DOCKER_CONFIG_KEYS = [
'healthcheck',
'image',
'ipc',
'isolation',
'labels',
'links',
'mac_address',
@@ -918,12 +919,17 @@ def convert_restart_policy(name):
def translate_deploy_keys_to_container_config(service_dict):
if 'credential_spec' in service_dict:
del service_dict['credential_spec']
if 'configs' in service_dict:
del service_dict['configs']
if 'deploy' not in service_dict:
return service_dict, []
deploy_dict = service_dict['deploy']
ignored_keys = [
k for k in ['endpoint_mode', 'labels', 'update_config', 'placement']
k for k in ['endpoint_mode', 'labels', 'update_config', 'rollback_config', 'placement']
if k in deploy_dict
]
@@ -946,10 +952,6 @@ def translate_deploy_keys_to_container_config(service_dict):
)
del service_dict['deploy']
if 'credential_spec' in service_dict:
del service_dict['credential_spec']
if 'configs' in service_dict:
del service_dict['configs']
return service_dict, ignored_keys
@@ -1041,6 +1043,7 @@ def merge_service_dicts(base, override, version):
md.merge_mapping('networks', parse_networks)
md.merge_mapping('sysctls', parse_sysctls)
md.merge_mapping('depends_on', parse_depends_on)
md.merge_mapping('storage_opt', parse_flat_dict)
md.merge_sequence('links', ServiceLink.parse)
md.merge_sequence('secrets', types.ServiceSecret.parse)
md.merge_sequence('configs', types.ServiceConfig.parse)
@@ -1135,6 +1138,7 @@ def merge_deploy(base, override):
md.merge_scalar('replicas')
md.merge_mapping('labels', parse_labels)
md.merge_mapping('update_config')
md.merge_mapping('rollback_config')
md.merge_mapping('restart_policy')
if md.needs_merge('resources'):
resources_md = MergeDict(md.base.get('resources') or {}, md.override.get('resources') or {})
@@ -1434,15 +1438,15 @@ def has_uppercase(name):
return any(char in string.ascii_uppercase for char in name)
def load_yaml(filename, encoding=None):
def load_yaml(filename, encoding=None, binary=True):
try:
with io.open(filename, 'r', encoding=encoding) as fh:
with io.open(filename, 'rb' if binary else 'r', encoding=encoding) as fh:
return yaml.safe_load(fh)
except (IOError, yaml.YAMLError, UnicodeDecodeError) as e:
if encoding is None:
# Sometimes the user's locale sets an encoding that doesn't match
# the YAML files. In such cases, retry once with the "default"
# UTF-8 encoding
return load_yaml(filename, encoding='utf-8')
return load_yaml(filename, encoding='utf-8-sig', binary=False)
error_name = getattr(e, '__module__', '') + '.' + e.__class__.__name__
raise ConfigurationError(u"{}: {}".format(error_name, e))
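The retry above reads in binary mode first and falls back to text mode with 'utf-8-sig', which also strips the UTF-8 BOM some Windows editors prepend. A sketch of the same pattern (helper name is ours):

import io
import yaml

def load_yaml_sketch(filename, encoding=None, binary=True):
    try:
        # binary mode lets yaml handle the decoding on the first attempt
        with io.open(filename, 'rb' if binary else 'r', encoding=encoding) as fh:
            return yaml.safe_load(fh)
    except (IOError, yaml.YAMLError, UnicodeDecodeError):
        if encoding is None:
            # retry once in text mode; 'utf-8-sig' swallows a leading BOM
            return load_yaml_sketch(filename, encoding='utf-8-sig', binary=False)
        raise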

View File

@@ -311,7 +311,7 @@
"type": "object",
"properties": {
"subnet": {"type": "string"},
"iprange": {"type": "string"},
"ip_range": {"type": "string"},
"gateway": {"type": "string"},
"aux_addresses": {
"type": "object",

View File

@@ -365,7 +365,7 @@
"type": "object",
"properties": {
"subnet": {"type": "string"},
"iprange": {"type": "string"},
"ip_range": {"type": "string"},
"gateway": {"type": "string"},
"aux_addresses": {
"type": "object",

View File

@@ -374,7 +374,7 @@
"type": "object",
"properties": {
"subnet": {"type": "string"},
"iprange": {"type": "string"},
"ip_range": {"type": "string"},
"gateway": {"type": "string"},
"aux_addresses": {
"type": "object",

View File

@@ -418,7 +418,7 @@
"type": "object",
"properties": {
"subnet": {"type": "string"},
"iprange": {"type": "string"},
"ip_range": {"type": "string"},
"gateway": {"type": "string"},
"aux_addresses": {
"type": "object",

View File

@@ -346,6 +346,7 @@
"dependencies": {
"memswap_limit": ["mem_limit"]
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
@@ -409,6 +410,7 @@
"labels": {"$ref": "#/definitions/labels"},
"name": {"type": "string"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
@@ -417,7 +419,7 @@
"type": "object",
"properties": {
"subnet": {"type": "string"},
"iprange": {"type": "string"},
"ip_range": {"type": "string"},
"gateway": {"type": "string"},
"aux_addresses": {
"type": "object",
@@ -451,6 +453,7 @@
"labels": {"$ref": "#/definitions/labels"},
"name": {"type": "string"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},

View File

@@ -0,0 +1,602 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.7.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/secret"
}
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/config"
}
},
"additionalProperties": false
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"},
"target": {"type": "string"},
"shm_size": {"type": ["integer", "string"]}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"configs": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"container_name": {"type": "string"},
"credential_spec": {"type": "object", "properties": {
"file": {"type": "string"},
"registry": {"type": "string"}
}},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"init": {"type": "boolean"},
"ipc": {"type": "string"},
"isolation": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"oneOf": [
{"type": "number", "format": "ports"},
{"type": "string", "format": "ports"},
{
"type": "object",
"properties": {
"mode": {"type": "string"},
"target": {"type": "integer"},
"published": {"type": "integer"},
"protocol": {"type": "string"}
},
"additionalProperties": false
}
]
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"secrets": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string"},
"source": {"type": "string"},
"target": {"type": "string"},
"read_only": {"type": "boolean"},
"consistency": {"type": "string"},
"bind": {
"type": "object",
"properties": {
"propagation": {"type": "string"}
}
},
"volume": {
"type": "object",
"properties": {
"nocopy": {"type": "boolean"}
}
},
"tmpfs": {
"type": "object",
"properties": {
"size": {
"type": "integer",
"minimum": 0
}
}
}
},
"additionalProperties": false
}
],
"uniqueItems": true
}
},
"working_dir": {"type": "string"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string", "format": "duration"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string", "format": "duration"},
"start_period": {"type": "string", "format": "duration"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"endpoint_mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"rollback_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"},
"order": {"type": "string", "enum": [
"start-first", "stop-first"
]}
},
"additionalProperties": false
},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"},
"order": {"type": "string", "enum": [
"start-first", "stop-first"
]}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"reservations": {
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"},
"generic_resources": {"$ref": "#/definitions/generic_resources"}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}},
"preferences": {
"type": "array",
"items": {
"type": "object",
"properties": {
"spread": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"generic_resources": {
"id": "#/definitions/generic_resources",
"type": "array",
"items": {
"type": "object",
"properties": {
"discrete_resource_spec": {
"type": "object",
"properties": {
"kind": {"type": "string"},
"value": {"type": "number"}
},
"additionalProperties": false
}
},
"additionalProperties": false
}
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"name": {"type": "string"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"attachable": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"name": {"type": "string"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
"properties": {
"name": {"type": "string"},
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
"config": {
"id": "#/definitions/config",
"type": "object",
"properties": {
"name": {"type": "string"},
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}
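The new v3.7 schema mostly mirrors v3.6; the notable additions are the deploy.rollback_config block and the "^x-" extension-field escape hatch at the top level and on service, network, volume, secret and config objects. A minimal sketch of how one abridged fragment behaves under jsonschema (the validator pinned in requirements.txt):

import jsonschema

rollback_config_schema = {
    "type": "object",
    "properties": {
        "parallelism": {"type": "integer"},
        "order": {"type": "string", "enum": ["start-first", "stop-first"]},
    },
    "additionalProperties": False,
}

jsonschema.validate({"parallelism": 2, "order": "stop-first"}, rollback_config_schema)
try:
    jsonschema.validate({"paralelism": 2}, rollback_config_schema)  # misspelled key
except jsonschema.ValidationError as e:
    print(e.message)  # Additional properties are not allowed ('paralelism' was unexpected)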

View File

@@ -48,7 +48,7 @@ def interpolate_environment_variables(version, config, section, environment):
def get_config_path(config_key, section, name):
return '{}.{}.{}'.format(section, name, config_key)
return '{}/{}/{}'.format(section, name, config_key)
def interpolate_value(name, config_key, value, section, interpolator):
@@ -75,7 +75,7 @@ def interpolate_value(name, config_key, value, section, interpolator):
def recursive_interpolate(obj, interpolator, config_path):
def append(config_path, key):
return '{}.{}'.format(config_path, key)
return '{}/{}'.format(config_path, key)
if isinstance(obj, six.string_types):
return converter.convert(config_path, interpolator.interpolate(obj))
@@ -160,12 +160,12 @@ class UnsetRequiredSubstitution(Exception):
self.err = custom_err_msg
PATH_JOKER = '[^.]+'
PATH_JOKER = '[^/]+'
FULL_JOKER = '.+'
def re_path(*args):
return re.compile('^{}$'.format('\.'.join(args)))
return re.compile('^{}$'.format('/'.join(args)))
def re_path_basic(section, name):
@@ -248,6 +248,8 @@ class ConversionMap(object):
service_path('deploy', 'replicas'): to_int,
service_path('deploy', 'update_config', 'parallelism'): to_int,
service_path('deploy', 'update_config', 'max_failure_ratio'): to_float,
service_path('deploy', 'rollback_config', 'parallelism'): to_int,
service_path('deploy', 'rollback_config', 'max_failure_ratio'): to_float,
service_path('deploy', 'restart_policy', 'max_attempts'): to_int,
service_path('mem_swappiness'): to_int,
service_path('labels', FULL_JOKER): to_str,
@@ -286,7 +288,7 @@ class ConversionMap(object):
except ValueError as e:
raise ConfigurationError(
'Error while attempting to convert {} to appropriate type: {}'.format(
path, e
path.replace('/', '.'), e
)
)
return value
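Switching the interpolation path separator from '.' to '/' matters because a dot is a valid character in service, network and volume names; with the old separator a name like 'my.web' split into bogus path segments and the conversion map could not match it. A small sketch of the new matching:

import re

PATH_JOKER = '[^/]+'  # one path segment; dots inside a name no longer split it

def re_path(*args):
    return re.compile('^{}$'.format('/'.join(args)))

pattern = re_path('service', PATH_JOKER, 'mem_swappiness')
assert pattern.match('service/my.web/mem_swappiness')
assert not pattern.match('service/my/web/mem_swappiness')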

View File

@@ -78,7 +78,11 @@ def denormalize_config(config, image_digests=None):
config.version >= V3_0 and config.version < v3_introduced_name_key(key)):
del conf['name']
elif 'external' in conf:
conf['external'] = True
conf['external'] = bool(conf['external'])
if 'attachable' in conf and config.version < V3_2:
# For compatibility mode, this option is invalid in v2
del conf['attachable']
return result

View File

@@ -125,7 +125,7 @@ def parse_extra_hosts(extra_hosts_config):
def normalize_path_for_engine(path):
"""Windows paths, c:\my\path\shiny, need to be changed to be compatible with
"""Windows paths, c:\\my\\path\\shiny, need to be changed to be compatible with
the Engine. Volume paths are expected to be linux style /c/my/path/shiny/
"""
drive, tail = splitdrive(path)
@@ -136,6 +136,20 @@ def normalize_path_for_engine(path):
return path.replace('\\', '/')
def normpath(path, win_host=False):
""" Custom path normalizer that handles Compose-specific edge cases like
UNIX paths on Windows hosts and vice-versa. """
sysnorm = ntpath.normpath if win_host else os.path.normpath
# If a path looks like a UNIX absolute path on Windows, it probably is;
# we'll need to revert the backslashes to forward slashes after normalization
flip_slashes = path.startswith('/') and IS_WINDOWS_PLATFORM
path = sysnorm(path)
if flip_slashes:
path = path.replace('\\', '/')
return path
class MountSpec(object):
options_map = {
'volume': {
@@ -152,12 +166,11 @@ class MountSpec(object):
@classmethod
def parse(cls, mount_dict, normalize=False, win_host=False):
normpath = ntpath.normpath if win_host else os.path.normpath
if mount_dict.get('source'):
if mount_dict['type'] == 'tmpfs':
raise ConfigurationError('tmpfs mounts can not specify a source')
mount_dict['source'] = normpath(mount_dict['source'])
mount_dict['source'] = normpath(mount_dict['source'], win_host)
if normalize:
mount_dict['source'] = normalize_path_for_engine(mount_dict['source'])
@@ -247,7 +260,7 @@ class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')):
else:
external = parts[0]
parts = separate_next_section(parts[1])
external = ntpath.normpath(external)
external = normpath(external, True)
internal = parts[0]
if len(parts) > 1:
if ':' in parts[1]:
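normpath above exists because a Compose file on one platform can name paths in the other platform's style, e.g. a UNIX-looking absolute path on a Windows host. A standalone sketch, with on_windows standing in for the IS_WINDOWS_PLATFORM constant used in the real code:

import ntpath
import os.path

def normpath_sketch(path, win_host=False, on_windows=False):
    sysnorm = ntpath.normpath if win_host else os.path.normpath
    # a path that starts with '/' on a Windows box is probably a UNIX-style
    # path, so restore forward slashes after normalization
    flip_slashes = path.startswith('/') and on_windows
    path = sysnorm(path)
    if flip_slashes:
        path = path.replace('\\', '/')
    return path

assert normpath_sketch('/c/data/./logs', win_host=True, on_windows=True) == '/c/data/logs'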

View File

@@ -41,15 +41,15 @@ DOCKER_CONFIG_HINTS = {
}
VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]'
VALID_NAME_CHARS = r'[a-zA-Z0-9\._\-]'
VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'
VALID_IPV4_SEG = r'(\d{1,2}|1\d{2}|2[0-4]\d|25[0-5])'
VALID_IPV4_ADDR = "({IPV4_SEG}\.){{3}}{IPV4_SEG}".format(IPV4_SEG=VALID_IPV4_SEG)
VALID_REGEX_IPV4_CIDR = "^{IPV4_ADDR}/(\d|[1-2]\d|3[0-2])$".format(IPV4_ADDR=VALID_IPV4_ADDR)
VALID_IPV4_ADDR = r"({IPV4_SEG}\.){{3}}{IPV4_SEG}".format(IPV4_SEG=VALID_IPV4_SEG)
VALID_REGEX_IPV4_CIDR = r"^{IPV4_ADDR}/(\d|[1-2]\d|3[0-2])$".format(IPV4_ADDR=VALID_IPV4_ADDR)
VALID_IPV6_SEG = r'[0-9a-fA-F]{1,4}'
VALID_REGEX_IPV6_CIDR = "".join("""
VALID_REGEX_IPV6_CIDR = "".join(r"""
^
(
(({IPV6_SEG}:){{7}}{IPV6_SEG})|
@@ -330,7 +330,10 @@ def handle_generic_error(error, path):
def parse_key_from_error_msg(error):
return error.message.split("'")[1]
try:
return error.message.split("'")[1]
except IndexError:
return error.message.split('(')[1].split(' ')[0].strip("'")
def path_string(path):
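The widened parse_key_from_error_msg handles jsonschema messages whose offending key is not a quoted string, e.g. a numeric YAML key. A sketch against two representative messages (the wrapper name is ours):

def parse_key_sketch(message):
    try:
        # e.g. "Additional properties are not allowed ('nmae' was unexpected)"
        return message.split("'")[1]
    except IndexError:
        # e.g. "Additional properties are not allowed (80 was unexpected)"
        return message.split('(')[1].split(' ')[0].strip("'")

assert parse_key_sketch("Additional properties are not allowed ('nmae' was unexpected)") == 'nmae'
assert parse_key_sketch("Additional properties are not allowed (80 was unexpected)") == '80'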

View File

@@ -15,12 +15,14 @@ LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_NETWORK = 'com.docker.compose.network'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_SLUG = 'com.docker.compose.slug'
LABEL_VOLUME = 'com.docker.compose.volume'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
NANOCPUS_SCALE = 1000000000
PARALLEL_LIMIT = 64
SECRETS_PATH = '/run/secrets'
WINDOWS_LONGPATH_PREFIX = '\\\\?\\'
COMPOSEFILE_V1 = ComposeVersion('1')
COMPOSEFILE_V2_0 = ComposeVersion('2.0')
@@ -36,6 +38,7 @@ COMPOSEFILE_V3_3 = ComposeVersion('3.3')
COMPOSEFILE_V3_4 = ComposeVersion('3.4')
COMPOSEFILE_V3_5 = ComposeVersion('3.5')
COMPOSEFILE_V3_6 = ComposeVersion('3.6')
COMPOSEFILE_V3_7 = ComposeVersion('3.7')
API_VERSIONS = {
COMPOSEFILE_V1: '1.21',
@@ -51,6 +54,7 @@ API_VERSIONS = {
COMPOSEFILE_V3_4: '1.30',
COMPOSEFILE_V3_5: '1.30',
COMPOSEFILE_V3_6: '1.36',
COMPOSEFILE_V3_7: '1.38',
}
API_VERSION_TO_ENGINE_VERSION = {
@@ -67,4 +71,5 @@ API_VERSION_TO_ENGINE_VERSION = {
API_VERSIONS[COMPOSEFILE_V3_4]: '17.06.0',
API_VERSIONS[COMPOSEFILE_V3_5]: '17.06.0',
API_VERSIONS[COMPOSEFILE_V3_6]: '18.02.0',
API_VERSIONS[COMPOSEFILE_V3_7]: '18.06.0',
}

View File

@@ -7,8 +7,13 @@ import six
from docker.errors import ImageNotFound
from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_SLUG
from .const import LABEL_VERSION
from .utils import truncate_id
from .version import ComposeVersion
class Container(object):
@@ -78,18 +83,36 @@ class Container(object):
@property
def name_without_project(self):
if self.name.startswith('{0}_{1}'.format(self.project, self.service)):
return '{0}_{1}'.format(self.service, self.number)
return '{0}_{1}'.format(self.service, self.number if self.number is not None else self.slug)
else:
return self.name
@property
def number(self):
if self.one_off:
# One-off containers are no longer assigned numbers and use slugs instead.
return None
number = self.labels.get(LABEL_CONTAINER_NUMBER)
if not number:
raise ValueError("Container {0} does not have a {1} label".format(
self.short_id, LABEL_CONTAINER_NUMBER))
return int(number)
@property
def slug(self):
if not self.full_slug:
return None
return truncate_id(self.full_slug)
@property
def full_slug(self):
return self.labels.get(LABEL_SLUG)
@property
def one_off(self):
return self.labels.get(LABEL_ONE_OFF) == 'True'
@property
def ports(self):
self.inspect_if_not_inspected()
@@ -283,6 +306,12 @@ class Container(object):
def attach(self, *args, **kwargs):
return self.client.attach(self.id, *args, **kwargs)
def has_legacy_proj_name(self, project_name):
return (
ComposeVersion(self.labels.get(LABEL_VERSION)) < ComposeVersion('1.21.0') and
self.project != project_name
)
def __repr__(self):
return '<Container: %s (%s)>' % (self.name, self.id[:6])
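Net effect of the number/slug split: containers created by `up` keep their sequential container-number label, while one-off containers from `run` carry only a slug label, truncated to 12 characters for display. A sketch with hypothetical label dicts:

LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_SLUG = 'com.docker.compose.slug'

def display_suffix(labels, one_off):
    if one_off:
        full_slug = labels.get(LABEL_SLUG)
        return full_slug[:12] if full_slug else None  # what truncate_id does here
    return int(labels[LABEL_CONTAINER_NUMBER])

assert display_suffix({LABEL_CONTAINER_NUMBER: '2'}, one_off=False) == 2
assert display_suffix({LABEL_SLUG: '1e756bb1a840' + 'c' * 52}, one_off=True) == '1e756bb1a840'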

View File

@@ -323,7 +323,12 @@ def get_networks(service_dict, network_definitions):
'Service "{}" uses an undefined network "{}"'
.format(service_dict['name'], name))
return OrderedDict(sorted(
networks.items(),
key=lambda t: t[1].get('priority') or 0, reverse=True
))
if any([v.get('priority') for v in networks.values()]):
return OrderedDict(sorted(
networks.items(),
key=lambda t: t[1].get('priority') or 0, reverse=True
))
else:
# Ensure Compose will pick a consistent primary network if no
# priority is set
return OrderedDict(sorted(networks.items(), key=lambda t: t[0]))
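Concretely, the priority sort only kicks in when at least one network sets a priority; otherwise the alphabetical fallback makes the choice of primary network deterministic across runs. For example:

from collections import OrderedDict

networks = {'frontend': {}, 'backend': {'priority': 10}}
by_priority = OrderedDict(sorted(
    networks.items(), key=lambda t: t[1].get('priority') or 0, reverse=True
))
assert list(by_priority) == ['backend', 'frontend']  # highest priority is primary

no_priorities = {'frontend': {}, 'backend': {}}
by_name = OrderedDict(sorted(no_priorities.items(), key=lambda t: t[0]))
assert list(by_name) == ['backend', 'frontend']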

View File

@@ -313,6 +313,13 @@ class ParallelStreamWriter(object):
self._write_ansi(msg, obj_index, color_func(status))
def get_stream_writer():
instance = ParallelStreamWriter.instance
if instance is None:
raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
return instance
def parallel_operation(containers, operation, options, message):
parallel_execute(
containers,

View File

@@ -19,12 +19,11 @@ def write_to_stream(s, stream):
def stream_output(output, stream):
is_terminal = hasattr(stream, 'isatty') and stream.isatty()
stream = utils.get_output_stream(stream)
all_events = []
lines = {}
diff = 0
for event in utils.json_stream(output):
all_events.append(event)
yield event
is_progress_event = 'progress' in event or 'progressDetail' in event
if not is_progress_event:
@@ -57,8 +56,6 @@ def stream_output(output, stream):
stream.flush()
return all_events
def print_output_event(event, stream, is_terminal):
if 'errorDetail' in event:

View File

@@ -4,6 +4,7 @@ from __future__ import unicode_literals
import datetime
import logging
import operator
import re
from functools import reduce
import enum
@@ -30,10 +31,10 @@ from .service import ConvergenceStrategy
from .service import NetworkMode
from .service import PidMode
from .service import Service
from .service import ServiceName
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import microseconds_from_time_nano
from .utils import truncate_string
from .volume import ProjectVolumes
@@ -70,8 +71,11 @@ class Project(object):
self.networks = networks or ProjectNetworks({}, False)
self.config_version = config_version
def labels(self, one_off=OneOffFilter.exclude):
labels = ['{0}={1}'.format(LABEL_PROJECT, self.name)]
def labels(self, one_off=OneOffFilter.exclude, legacy=False):
name = self.name
if legacy:
name = re.sub(r'[_-]', '', name)
labels = ['{0}={1}'.format(LABEL_PROJECT, name)]
OneOffFilter.update_labels(one_off, labels)
return labels
@@ -128,7 +132,8 @@ class Project(object):
volumes_from=volumes_from,
secrets=secrets,
pid_mode=pid_mode,
platform=service_dict.pop('platform', default_platform),
platform=service_dict.pop('platform', None),
default_platform=default_platform,
**service_dict)
)
@@ -193,25 +198,6 @@ class Project(object):
service.remove_duplicate_containers()
return services
def get_scaled_services(self, services, scale_override):
"""
Returns a list of this project's services as scaled ServiceName objects.
services: a list of Service objects
scale_override: a dict with the scale to apply to each service (k: service_name, v: scale)
"""
service_names = []
for service in services:
if service.name in scale_override:
scale = scale_override[service.name]
else:
scale = service.scale_num
for i in range(1, scale + 1):
service_names.append(ServiceName(self.name, service.name, i))
return service_names
def get_links(self, service_dict):
links = []
if 'links' in service_dict:
@@ -367,13 +353,36 @@ class Project(object):
return containers
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, memory=None,
build_args=None, gzip=False):
build_args=None, gzip=False, parallel_build=False):
services = []
for service in self.get_services(service_names):
if service.can_be_built():
service.build(no_cache, pull, force_rm, memory, build_args, gzip)
services.append(service)
else:
log.info('%s uses an image, skipping' % service.name)
def build_service(service):
service.build(no_cache, pull, force_rm, memory, build_args, gzip)
if parallel_build:
_, errors = parallel.parallel_execute(
services,
build_service,
operator.attrgetter('name'),
'Building',
limit=5,
)
if len(errors):
combined_errors = '\n'.join([
e.decode('utf-8') if isinstance(e, six.binary_type) else e for e in errors.values()
])
raise ProjectError(combined_errors)
else:
for service in services:
build_service(service)
def create(
self,
service_names=None,
@@ -466,7 +475,6 @@ class Project(object):
svc.ensure_image_exists(do_build=do_build, silent=silent)
plans = self._get_convergence_plans(
services, strategy, always_recreate_deps=always_recreate_deps)
scaled_services = self.get_scaled_services(services, scale_override)
def do(service):
@@ -477,7 +485,6 @@ class Project(object):
scale_override=scale_override.get(service.name),
rescale=rescale,
start=start,
project_services=scaled_services,
reset_container_image=reset_container_image,
renew_anonymous_volumes=renew_anonymous_volumes,
)
@@ -543,16 +550,35 @@ class Project(object):
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
include_deps=False):
services = self.get_services(service_names, include_deps)
msg = not silent and 'Pulling' or None
if parallel_pull:
def pull_service(service):
service.pull(ignore_pull_failures, True)
strm = service.pull(ignore_pull_failures, True, stream=True)
if strm is None: # Attempting to pull service with no `image` key is a no-op
return
writer = parallel.get_stream_writer()
for event in strm:
if 'status' not in event:
continue
status = event['status'].lower()
if 'progressDetail' in event:
detail = event['progressDetail']
if 'current' in detail and 'total' in detail:
percentage = float(detail['current']) / float(detail['total'])
status = '{} ({:.1%})'.format(status, percentage)
writer.write(
msg, service.name, truncate_string(status), lambda s: s
)
_, errors = parallel.parallel_execute(
services,
pull_service,
operator.attrgetter('name'),
not silent and 'Pulling' or None,
msg,
limit=5,
)
if len(errors):
@@ -570,12 +596,21 @@ class Project(object):
service.push(ignore_push_failures)
def _labeled_containers(self, stopped=False, one_off=OneOffFilter.exclude):
return list(filter(None, [
ctnrs = list(filter(None, [
Container.from_ps(self.client, container)
for container in self.client.containers(
all=stopped,
filters={'label': self.labels(one_off=one_off)})])
)
if ctnrs:
return ctnrs
return list(filter(lambda c: c.has_legacy_proj_name(self.name), filter(None, [
Container.from_ps(self.client, container)
for container in self.client.containers(
all=stopped,
filters={'label': self.labels(one_off=one_off, legacy=True)})])
))
def containers(self, service_names=None, stopped=False, one_off=OneOffFilter.exclude):
if service_names:
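The per-event formatting in pull_service reduces to deriving a percentage from progressDetail. A sketch of that computation on a hypothetical pull event:

def format_pull_status(event):
    status = event['status'].lower()
    detail = event.get('progressDetail', {})
    if 'current' in detail and 'total' in detail:
        percentage = float(detail['current']) / float(detail['total'])
        status = '{} ({:.1%})'.format(status, percentage)
    return status

event = {'status': 'Downloading', 'progressDetail': {'current': 450, 'total': 1000}}
assert format_pull_status(event) == 'downloading (45.0%)'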

View File

@@ -1,6 +1,7 @@
from __future__ import absolute_import
from __future__ import unicode_literals
import itertools
import logging
import os
import re
@@ -26,6 +27,7 @@ from . import __version__
from . import const
from . import progress_stream
from .config import DOCKER_CONFIG_KEYS
from .config import is_url
from .config import merge_environment
from .config import merge_labels
from .config.errors import DependencyError
@@ -39,8 +41,10 @@ from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_SLUG
from .const import LABEL_VERSION
from .const import NANOCPUS_SCALE
from .const import WINDOWS_LONGPATH_PREFIX
from .container import Container
from .errors import HealthCheckFailed
from .errors import NoHealthCheckConfigured
@@ -48,10 +52,12 @@ from .errors import OperationFailedError
from .parallel import parallel_execute
from .progress_stream import stream_output
from .progress_stream import StreamOutputError
from .utils import generate_random_id
from .utils import json_hash
from .utils import parse_bytes
from .utils import parse_seconds_float
from .version import ComposeVersion
from .utils import truncate_id
from .utils import unique_everseen
log = logging.getLogger(__name__)
@@ -80,6 +86,7 @@ HOST_CONFIG_KEYS = [
'group_add',
'init',
'ipc',
'isolation',
'read_only',
'log_driver',
'log_opt',
@@ -172,6 +179,7 @@ class Service(object):
secrets=None,
scale=None,
pid_mode=None,
default_platform=None,
**options
):
self.name = name
@@ -185,13 +193,14 @@ class Service(object):
self.networks = networks or {}
self.secrets = secrets or []
self.scale_num = scale or 1
self.default_platform = default_platform
self.options = options
def __repr__(self):
return '<Service: {}>'.format(self.name)
def containers(self, stopped=False, one_off=False, filters={}):
filters.update({'label': self.labels(one_off=one_off)})
def containers(self, stopped=False, one_off=False, filters={}, labels=None):
filters.update({'label': self.labels(one_off=one_off) + (labels or [])})
result = list(filter(None, [
Container.from_ps(self.client, container)
@@ -202,10 +211,10 @@ class Service(object):
if result:
return result
filters.update({'label': self.labels(one_off=one_off, legacy=True)})
filters.update({'label': self.labels(one_off=one_off, legacy=True) + (labels or [])})
return list(
filter(
self.has_legacy_proj_name, filter(None, [
lambda c: c.has_legacy_proj_name(self.project), filter(None, [
Container.from_ps(self.client, container)
for container in self.client.containers(
all=stopped,
@@ -217,9 +226,8 @@ class Service(object):
"""Return a :class:`compose.container.Container` for this service. The
container must be active, and match `number`.
"""
labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]
for container in self.client.containers(filters={'label': labels}):
return Container.from_ps(self.client, container)
for container in self.containers(labels=['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]):
return container
raise ValueError("No container found for %s_%s" % (self.name, number))
@@ -256,6 +264,11 @@ class Service(object):
running_containers = self.containers(stopped=False)
num_running = len(running_containers)
for c in running_containers:
if not c.has_legacy_proj_name(self.project):
continue
log.info('Recreating container with legacy name %s' % c.name)
self.recreate_container(c, timeout, start_new_container=False)
if desired_num == num_running:
# do nothing as we already have the desired number
@@ -356,7 +369,16 @@ class Service(object):
@property
def image_name(self):
return self.options.get('image', '{s.project}_{s.name}'.format(s=self))
return self.options.get('image', '{project}_{s.name}'.format(
s=self, project=self.project.lstrip('_-')
))
@property
def platform(self):
platform = self.options.get('platform')
if not platform and version_gte(self.client.api_version, '1.35'):
platform = self.default_platform
return platform
def convergence_plan(self, strategy=ConvergenceStrategy.changed):
containers = self.containers(stopped=True)
@@ -395,7 +417,7 @@ class Service(object):
has_diverged = False
for c in containers:
if self.has_legacy_proj_name(c):
if c.has_legacy_proj_name(self.project):
log.debug('%s has diverged: Legacy project name' % c.name)
has_diverged = True
continue
@@ -409,27 +431,31 @@ class Service(object):
return has_diverged
def _execute_convergence_create(self, scale, detached, start, project_services=None):
i = self._next_container_number()
def _execute_convergence_create(self, scale, detached, start):
def create_and_start(service, n):
container = service.create_container(number=n, quiet=True)
if not detached:
container.attach_log_stream()
if start:
self.start_container(container)
return container
i = self._next_container_number()
containers, errors = parallel_execute(
[ServiceName(self.project, self.name, index) for index in range(i, i + scale)],
lambda service_name: create_and_start(self, service_name.number),
lambda service_name: self.get_container_name(service_name.service, service_name.number),
"Creating"
)
for error in errors.values():
raise OperationFailedError(error)
def create_and_start(service, n):
container = service.create_container(number=n, quiet=True)
if not detached:
container.attach_log_stream()
if start:
self.start_container(container)
return container
return containers
containers, errors = parallel_execute(
[
ServiceName(self.project, self.name, index)
for index in range(i, i + scale)
],
lambda service_name: create_and_start(self, service_name.number),
lambda service_name: self.get_container_name(service_name.service, service_name.number),
"Creating"
)
for error in errors.values():
raise OperationFailedError(error)
return containers
def _execute_convergence_recreate(self, containers, scale, timeout, detached, start,
renew_anonymous_volumes):
@@ -492,8 +518,8 @@ class Service(object):
def execute_convergence_plan(self, plan, timeout=None, detached=False,
start=True, scale_override=None,
rescale=True, project_services=None,
reset_container_image=False, renew_anonymous_volumes=False):
rescale=True, reset_container_image=False,
renew_anonymous_volumes=False):
(action, containers) = plan
scale = scale_override if scale_override is not None else self.scale_num
containers = sorted(containers, key=attrgetter('number'))
@@ -502,7 +528,7 @@ class Service(object):
if action == 'create':
return self._execute_convergence_create(
scale, detached, start, project_services
scale, detached, start
)
# The create action always needs an initial scale, but otherwise,
@@ -552,7 +578,7 @@ class Service(object):
container.rename_to_tmp_name()
new_container = self.create_container(
previous_container=container if not renew_anonymous_volumes else None,
number=container.labels.get(LABEL_CONTAINER_NUMBER),
number=container.number,
quiet=True,
)
if attach_logs:
@@ -640,9 +666,15 @@ class Service(object):
return json_hash(self.config_dict())
def config_dict(self):
def image_id():
try:
return self.image()['Id']
except NoSuchImageError:
return None
return {
'options': self.options,
'image_id': self.image()['Id'],
'image_id': image_id(),
'links': self.get_link_names(),
'net': self.network_mode.id,
'networks': self.networks,
@@ -701,14 +733,19 @@ class Service(object):
def get_volumes_from_names(self):
return [s.source.name for s in self.volumes_from if isinstance(s.source, Service)]
# TODO: this would benefit from github.com/docker/docker/pull/14699
# to remove the need to inspect every container
def _next_container_number(self, one_off=False):
containers = self._fetch_containers(
all=True,
filters={'label': self.labels(one_off=one_off)}
if one_off:
return None
containers = itertools.chain(
self._fetch_containers(
all=True,
filters={'label': self.labels(one_off=False)}
), self._fetch_containers(
all=True,
filters={'label': self.labels(one_off=False, legacy=True)}
)
)
numbers = [c.number for c in containers]
numbers = [c.number for c in containers if c.number is not None]
return 1 if not numbers else max(numbers) + 1
def _fetch_containers(self, **fetch_options):
@@ -786,6 +823,7 @@ class Service(object):
one_off=False,
previous_container=None):
add_config_hash = (not one_off and not override_options)
slug = generate_random_id() if one_off else None
container_options = dict(
(k, self.options[k])
@@ -794,7 +832,7 @@ class Service(object):
container_options.update(override_options)
if not container_options.get('name'):
container_options['name'] = self.get_container_name(self.name, number, one_off)
container_options['name'] = self.get_container_name(self.name, number, slug)
container_options.setdefault('detach', True)
@@ -846,7 +884,9 @@ class Service(object):
container_options.get('labels', {}),
self.labels(one_off=one_off),
number,
self.config_hash if add_config_hash else None)
self.config_hash if add_config_hash else None,
slug
)
# Delete options which are only used in HostConfig
for key in HOST_CONFIG_KEYS:
@@ -903,8 +943,9 @@ class Service(object):
override_options['mounts'] = override_options.get('mounts') or []
override_options['mounts'].extend([build_mount(v) for v in secret_volumes])
# Remove possible duplicates (see e.g. https://github.com/docker/compose/issues/5885)
override_options['binds'] = list(set(binds))
# Remove possible duplicates (see e.g. https://github.com/docker/compose/issues/5885).
# unique_everseen preserves order (see https://github.com/docker/compose/issues/6091).
override_options['binds'] = list(unique_everseen(binds))
return container_options, override_options
def _get_container_host_config(self, override_options, one_off=False):
@@ -1012,14 +1053,8 @@ class Service(object):
for k, v in self._parse_proxy_config().items():
build_args.setdefault(k, v)
# python2 os.stat() doesn't support unicode on some UNIX, so we
# encode it to a bytestring to be safe
path = build_opts.get('context')
if not six.PY3 and not IS_WINDOWS_PLATFORM:
path = path.encode('utf8')
platform = self.options.get('platform')
if platform and version_lt(self.client.api_version, '1.35'):
path = rewrite_build_path(build_opts.get('context'))
if self.platform and version_lt(self.client.api_version, '1.35'):
raise OperationFailedError(
'Impossible to perform platform-targeted builds for API version < 1.35'
)
@@ -1044,11 +1079,11 @@ class Service(object):
},
gzip=gzip,
isolation=build_opts.get('isolation', self.options.get('isolation', None)),
platform=platform,
platform=self.platform,
)
try:
all_events = stream_output(build_output, sys.stdout)
all_events = list(stream_output(build_output, sys.stdout))
except StreamOutputError as e:
raise BuildError(self, six.text_type(e))
@@ -1085,12 +1120,12 @@ class Service(object):
def custom_container_name(self):
return self.options.get('container_name')
def get_container_name(self, service_name, number, one_off=False):
if self.custom_container_name and not one_off:
def get_container_name(self, service_name, number, slug=None):
if self.custom_container_name and slug is None:
return self.custom_container_name
container_name = build_container_name(
self.project, service_name, number, one_off,
self.project, service_name, number, slug,
)
ext_links_origins = [l.split(':')[0] for l in self.options.get('external_links', [])]
if container_name in ext_links_origins:
@@ -1142,7 +1177,23 @@ class Service(object):
return any(has_host_port(binding) for binding in self.options.get('ports', []))
def pull(self, ignore_pull_failures=False, silent=False):
def _do_pull(self, repo, pull_kwargs, silent, ignore_pull_failures):
try:
output = self.client.pull(repo, **pull_kwargs)
if silent:
with open(os.devnull, 'w') as devnull:
for event in stream_output(output, devnull):
yield event
else:
for event in stream_output(output, sys.stdout):
yield event
except (StreamOutputError, NotFound) as e:
if not ignore_pull_failures:
raise
else:
log.error(six.text_type(e))
def pull(self, ignore_pull_failures=False, silent=False, stream=False):
if 'image' not in self.options:
return
@@ -1150,29 +1201,20 @@ class Service(object):
kwargs = {
'tag': tag or 'latest',
'stream': True,
'platform': self.options.get('platform'),
'platform': self.platform,
}
if not silent:
log.info('Pulling %s (%s%s%s)...' % (self.name, repo, separator, tag))
if kwargs['platform'] and version_lt(self.client.api_version, '1.35'):
raise OperationFailedError(
'Impossible to perform platform-targeted builds for API version < 1.35'
'Impossible to perform platform-targeted pulls for API version < 1.35'
)
try:
output = self.client.pull(repo, **kwargs)
if silent:
with open(os.devnull, 'w') as devnull:
return progress_stream.get_digest_from_pull(
stream_output(output, devnull))
else:
return progress_stream.get_digest_from_pull(
stream_output(output, sys.stdout))
except (StreamOutputError, NotFound) as e:
if not ignore_pull_failures:
raise
else:
log.error(six.text_type(e))
event_stream = self._do_pull(repo, kwargs, silent, ignore_pull_failures)
if stream:
return event_stream
return progress_stream.get_digest_from_pull(event_stream)
def push(self, ignore_push_failures=False):
if 'image' not in self.options or 'build' not in self.options:
@@ -1235,12 +1277,6 @@ class Service(object):
return result
def has_legacy_proj_name(self, ctnr):
return (
ComposeVersion(ctnr.labels.get(LABEL_VERSION)) < ComposeVersion('1.21.0') and
ctnr.project != self.project
)
def short_id_alias_exists(container, network):
aliases = container.get(
@@ -1346,11 +1382,13 @@ class ServiceNetworkMode(object):
# Names
def build_container_name(project, service, number, one_off=False):
bits = [project, service]
if one_off:
bits.append('run')
return '_'.join(bits + [str(number)])
def build_container_name(project, service, number, slug=None):
bits = [project.lstrip('-_'), service]
if slug:
bits.extend(['run', truncate_id(slug)])
else:
bits.append(str(number))
return '_'.join(bits)
# Images
@@ -1393,7 +1431,7 @@ def merge_volume_bindings(volumes, tmpfs, previous_container, mounts):
"""
affinity = {}
volume_bindings = dict(
volume_bindings = OrderedDict(
build_volume_binding(volume)
for volume in volumes
if volume.external
@@ -1453,6 +1491,11 @@ def get_container_data_volumes(container, volumes_option, tmpfs_option, mounts_o
if not mount.get('Name'):
continue
# Volume (probably an image volume) is overridden by a mount in the service's config
# and would cause a duplicate mountpoint error
if volume.internal in [m.target for m in mounts_option]:
continue
# Copy existing volume from old container
volume = volume._replace(external=mount['Name'])
volumes.append(volume)
@@ -1531,10 +1574,13 @@ def build_mount(mount_spec):
# Labels
def build_container_labels(label_options, service_labels, number, config_hash):
def build_container_labels(label_options, service_labels, number, config_hash, slug):
labels = dict(label_options or {})
labels.update(label.split('=', 1) for label in service_labels)
labels[LABEL_CONTAINER_NUMBER] = str(number)
if number is not None:
labels[LABEL_CONTAINER_NUMBER] = str(number)
if slug is not None:
labels[LABEL_SLUG] = slug
labels[LABEL_VERSION] = __version__
if config_hash:
@@ -1623,3 +1669,15 @@ def convert_blkio_config(blkio_config):
arr.append(dict([(k.capitalize(), v) for k, v in item.items()]))
result[field] = arr
return result
def rewrite_build_path(path):
# python2 os.stat() doesn't support unicode on some UNIX, so we
# encode it to a bytestring to be safe
if not six.PY3 and not IS_WINDOWS_PLATFORM:
path = path.encode('utf8')
if IS_WINDOWS_PLATFORM and not is_url(path) and not path.startswith(WINDOWS_LONGPATH_PREFIX):
path = WINDOWS_LONGPATH_PREFIX + os.path.normpath(path)
return path
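Putting the naming changes together: project names are stripped of leading '-' and '_' characters, `up` containers keep sequential numbers, and one-off `run` containers get a truncated slug. A self-contained sketch (the slug value is made up):

def truncate_id(value):
    if ':' in value:
        value = value[value.index(':') + 1:]
    return value[:12] if len(value) > 12 else value

def build_container_name(project, service, number, slug=None):
    bits = [project.lstrip('-_'), service]
    if slug:
        bits.extend(['run', truncate_id(slug)])
    else:
        bits.append(str(number))
    return '_'.join(bits)

assert build_container_name('_myproj', 'web', 1) == 'myproj_web_1'
assert build_container_name('myproj', 'web', None, slug='1e756bb1a840deadbeef') == 'myproj_web_run_1e756bb1a840'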

View File

@@ -7,6 +7,7 @@ import json
import json.decoder
import logging
import ntpath
import random
import six
from docker.errors import DockerException
@@ -151,3 +152,37 @@ def unquote_path(s):
if s[0] == '"' and s[-1] == '"':
return s[1:-1]
return s
def generate_random_id():
while True:
val = hex(random.getrandbits(32 * 8))[2:].rstrip('L')  # drop '0x' and the trailing 'L' py2 appends to longs
try:
int(truncate_id(val))
# an all-numeric truncated id could be mistaken for a container number; retry
continue
except ValueError:
return val
def truncate_id(value):
if ':' in value:
value = value[value.index(':') + 1:]
if len(value) > 12:
return value[:12]
return value
def unique_everseen(iterable, key=lambda x: x):
"List unique elements, preserving order. Remember all elements ever seen."
seen = set()
for element in iterable:
unique_key = key(element)
if unique_key not in seen:
seen.add(unique_key)
yield element
def truncate_string(s, max_chars=35):
if len(s) > max_chars:
return s[:max_chars - 2] + '...'
return s
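Quick sanity checks, runnable against the helpers defined above (values are arbitrary):

assert list(unique_everseen(['a', 'b', 'a', 'c', 'b'])) == ['a', 'b', 'c']
assert list(unique_everseen([-1, 1, 2], key=abs)) == [-1, 2]
assert truncate_string('a' * 40) == 'a' * 33 + '...'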

View File

@@ -60,7 +60,7 @@ class Volume(object):
def full_name(self):
if self.custom_name:
return self.name
return '{0}_{1}'.format(self.project, self.name)
return '{0}_{1}'.format(self.project.lstrip('-_'), self.name)
@property
def legacy_full_name(self):

View File

@@ -98,7 +98,7 @@ __docker_compose_complete_services() {
# The services for which at least one running container exists
__docker_compose_complete_running_services() {
local names=$(__docker_compose_complete_services --filter status=running)
local names=$(__docker_compose_services --filter status=running)
COMPREPLY=( $(compgen -W "$names" -- "$cur") )
}
@@ -136,7 +136,18 @@ _docker_compose_bundle() {
_docker_compose_config() {
COMPREPLY=( $( compgen -W "--help --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
case "$prev" in
--hash)
if [[ $cur == \\* ]] ; then
COMPREPLY=( '\*' )
else
COMPREPLY=( $(compgen -W "$(__docker_compose_services) \\\* " -- "$cur") )
fi
return
;;
esac
COMPREPLY=( $( compgen -W "--hash --help --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
}

contrib/completion/zsh/_docker-compose Normal file → Executable file
View File

@@ -23,7 +23,7 @@ __docker-compose_all_services_in_compose_file() {
local already_selected
local -a services
already_selected=$(echo $words | tr " " "|")
__docker-compose_q config --services \
__docker-compose_q ps --services "$@" \
| grep -Ev "^(${already_selected})$"
}
@@ -31,125 +31,42 @@ __docker-compose_all_services_in_compose_file() {
__docker-compose_services_all() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
services=$(__docker-compose_all_services_in_compose_file)
services=$(__docker-compose_all_services_in_compose_file "$@")
_alternative "args:services:($services)" && ret=0
return ret
}
# All services that have an entry with the given key in their docker-compose.yml section
__docker-compose_services_with_key() {
local already_selected
local -a buildable
already_selected=$(echo $words | tr " " "|")
# flatten sections to one line, then filter lines containing the key and return section name.
__docker-compose_q config \
| sed -n -e '/^services:/,/^[^ ]/p' \
| sed -n 's/^ //p' \
| awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' \
| grep " \+$1:" \
| cut -d: -f1 \
| grep -Ev "^(${already_selected})$"
}
# All services that are defined by a Dockerfile reference
__docker-compose_services_from_build() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
buildable=$(__docker-compose_services_with_key build)
_alternative "args:buildable services:($buildable)" && ret=0
return ret
__docker-compose_services_all --filter source=build
}
# All services that are defined by an image
__docker-compose_services_from_image() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
pullable=$(__docker-compose_services_with_key image)
_alternative "args:pullable services:($pullable)" && ret=0
return ret
}
__docker-compose_get_services() {
[[ $PREFIX = -* ]] && return 1
integer ret=1
local kind
declare -a running paused stopped lines args services
docker_status=$(docker ps > /dev/null 2>&1)
if [ $? -ne 0 ]; then
_message "Error! Docker is not running."
return 1
fi
kind=$1
shift
[[ $kind =~ (stopped|all) ]] && args=($args -a)
lines=(${(f)"$(_call_program commands docker $docker_options ps --format 'table' $args)"})
services=(${(f)"$(_call_program commands docker-compose 2>/dev/null $compose_options ps -q)"})
# Parse header line to find columns
local i=1 j=1 k header=${lines[1]}
declare -A begin end
while (( j < ${#header} - 1 )); do
i=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 1 ))
j=$(( i + ${${header[$i,-1]}[(i) ]} - 1 ))
k=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 2 ))
begin[${header[$i,$((j-1))]}]=$i
end[${header[$i,$((j-1))]}]=$k
done
lines=(${lines[2,-1]})
# Container ID
local line s name
local -a names
for line in $lines; do
if [[ ${services[@]} == *"${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"* ]]; then
names=(${(ps:,:)${${line[${begin[NAMES]},-1]}%% *}})
for name in $names; do
s="${${name%_*}#*_}:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
s="$s, ${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"
s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}"
if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
stopped=($stopped $s)
else
if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = *\(Paused\)* ]]; then
paused=($paused $s)
fi
running=($running $s)
fi
done
fi
done
[[ $kind =~ (running|all) ]] && _describe -t services-running "running services" running "$@" && ret=0
[[ $kind =~ (paused|all) ]] && _describe -t services-paused "paused services" paused "$@" && ret=0
[[ $kind =~ (stopped|all) ]] && _describe -t services-stopped "stopped services" stopped "$@" && ret=0
return ret
__docker-compose_services_all --filter source=image
}
__docker-compose_pausedservices() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services paused "$@"
__docker-compose_services_all --filter status=paused
}
__docker-compose_stoppedservices() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services stopped "$@"
__docker-compose_services_all --filter status=stopped
}
__docker-compose_runningservices() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services running "$@"
__docker-compose_services_all --filter status=running
}
__docker-compose_services() {
[[ $PREFIX = -* ]] && return 1
__docker-compose_get_services all "$@"
__docker-compose_services_all
}
__docker-compose_caching_policy() {
@@ -196,9 +113,10 @@ __docker-compose_subcommand() {
$opts_help \
"*--build-arg=[Set build-time variables for one service.]:<varname>=<value>: " \
'--force-rm[Always remove intermediate containers.]' \
'--memory[Memory limit for the build container.]' \
'(--memory -m)'{--memory,-m}'[Memory limit for the build container.]' \
'--no-cache[Do not use cache when building the image.]' \
'--pull[Always attempt to pull a newer version of the image.]' \
'--compress[Compress the build context using gzip.]' \
'*:services:__docker-compose_services_from_build' && ret=0
;;
(bundle)
@@ -213,7 +131,8 @@ __docker-compose_subcommand() {
'(--quiet -q)'{--quiet,-q}"[Only validate the configuration, don't print anything.]" \
'--resolve-image-digests[Pin image tags to digests.]' \
'--services[Print the service names, one per line.]' \
'--volumes[Print the volume names, one per line.]' && ret=0
'--volumes[Print the volume names, one per line.]' \
'--hash[Print the service config hash, one per line. Set "service1,service2" for a list of specified services.]' && ret=0
;;
(create)
_arguments \
@@ -222,11 +141,12 @@ __docker-compose_subcommand() {
$opts_no_recreate \
$opts_no_build \
"(--no-build)--build[Build images before creating containers.]" \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(down)
_arguments \
$opts_help \
$opts_timeout \
"--rmi[Remove images. Type must be one of: 'all': Remove all images used by any service. 'local': Remove only images that don't have a custom tag set by the \`image\` field.]:type:(all local)" \
'(-v --volumes)'{-v,--volumes}"[Remove named volumes declared in the \`volumes\` section of the Compose file and anonymous volumes attached to containers.]" \
$opts_remove_orphans && ret=0
@@ -235,16 +155,18 @@ __docker-compose_subcommand() {
_arguments \
$opts_help \
'--json[Output events as a stream of json objects]' \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(exec)
_arguments \
$opts_help \
'-d[Detached mode: Run command in the background.]' \
'--privileged[Give extended privileges to the process.]' \
'(-u --user)'{-u,--user=}'[Run the command as this user.]:username:_users' \
'(-u --user)'{-u,--user=}'[Run the command as this user.]:username:_users' \
'-T[Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY.]' \
'--index=[Index of the container if there are multiple instances of a service \[default: 1\]]:index: ' \
'*'{-e,--env}'[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \
'(-w --workdir)'{-w,--workdir=}'[Working directory inside the container]:workdir: ' \
'(-):running services:__docker-compose_runningservices' \
'(-):command: _command_names -e' \
'*::arguments: _normal' && ret=0
@@ -252,12 +174,12 @@ __docker-compose_subcommand() {
(help)
_arguments ':subcommand:__docker-compose_commands' && ret=0
;;
(images)
_arguments \
$opts_help \
'-q[Only display IDs]' \
'*:services:__docker-compose_services_all' && ret=0
;;
(images)
_arguments \
$opts_help \
'-q[Only display IDs]' \
'*:services:__docker-compose_services' && ret=0
;;
(kill)
_arguments \
$opts_help \
@@ -271,7 +193,7 @@ __docker-compose_subcommand() {
$opts_no_color \
'--tail=[Number of lines to show from the end of the logs for each container.]:number of lines: ' \
'(-t --timestamps)'{-t,--timestamps}'[Show timestamps]' \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(pause)
_arguments \
@@ -290,12 +212,16 @@ __docker-compose_subcommand() {
_arguments \
$opts_help \
'-q[Only display IDs]' \
'*:services:__docker-compose_services_all' && ret=0
'--filter KEY=VAL[Filter services by a property]:<filtername>=<value>:' \
'*:services:__docker-compose_services' && ret=0
;;
(pull)
_arguments \
$opts_help \
'--ignore-pull-failures[Pull what it can and ignore images with pull failures.]' \
'--no-parallel[Disable parallel pulling]' \
'(-q --quiet)'{-q,--quiet}'[Pull without printing progress information]' \
'--include-deps[Also pull services declared as dependencies]' \
'*:services:__docker-compose_services_from_image' && ret=0
;;
(push)
@@ -317,6 +243,7 @@ __docker-compose_subcommand() {
$opts_no_deps \
'-d[Detached mode: Run container in the background, print new container name.]' \
'*-e[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \
'*'{-l,--label}'[KEY=VAL Add or override a label (can be used multiple times)]:label KEY=VAL: ' \
'--entrypoint[Overwrite the entrypoint of the image.]:entry point: ' \
'--name=[Assign a name to the container]:name: ' \
'(-p --publish)'{-p,--publish=}"[Publish a container's port(s) to the host]" \
@@ -326,6 +253,7 @@ __docker-compose_subcommand() {
'(-u --user)'{-u,--user=}'[Run as specified username or uid]:username or uid:_users' \
'(-v --volume)*'{-v,--volume=}'[Bind mount a volume]:volume: ' \
'(-w --workdir)'{-w,--workdir=}'[Working directory inside the container]:workdir: ' \
"--use-aliases[Use the services network aliases in the network(s) the container connects to]" \
'(-):services:__docker-compose_services' \
'(-):command: _command_names -e' \
'*::arguments: _normal' && ret=0
@@ -369,8 +297,10 @@ __docker-compose_subcommand() {
"(--no-build)--build[Build images before starting containers.]" \
"(-d)--abort-on-container-exit[Stops all containers if any container was stopped. Incompatible with -d.]" \
'(-t --timeout)'{-t,--timeout}"[Use this timeout in seconds for container shutdown when attached or when containers are already running. (default: 10)]:seconds: " \
'--scale[SERVICE=NUM Scale SERVICE to NUM instances. Overrides the `scale` setting in the Compose file if present.]:service scale SERVICE=NUM: ' \
'--exit-code-from=[Return the exit code of the selected service container. Implies --abort-on-container-exit]:service:__docker-compose_services' \
$opts_remove_orphans \
'*:services:__docker-compose_services_all' && ret=0
'*:services:__docker-compose_services' && ret=0
;;
(version)
_arguments \
@@ -409,8 +339,11 @@ _docker-compose() {
'(- :)'{-h,--help}'[Get help]' \
'*'{-f,--file}"[${file_description}]:file:_files -g '*.yml'" \
'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
'--verbose[Show more output]' \
"--compatibility[If set, Compose will attempt to convert deploy keys in v3 files to their non-Swarm equivalent]" \
'(- :)'{-v,--version}'[Print version and exit]' \
'--verbose[Show more output]' \
'--log-level=[Set log level]:level:(DEBUG INFO WARNING ERROR CRITICAL)' \
'--no-ansi[Do not print ANSI control characters]' \
'(-H --host)'{-H,--host}'[Daemon socket to connect to]:host:' \
'--tls[Use TLS; implied by --tlsverify]' \
'--tlscacert=[Trust certs signed only by this CA]:ca path:' \


@@ -82,6 +82,11 @@ exe = EXE(pyz,
'compose/config/config_schema_v3.6.json',
'DATA'
),
(
'compose/config/config_schema_v3.7.json',
'compose/config/config_schema_v3.7.json',
'DATA'
),
(
'compose/GITSHA',
'compose/GITSHA',


@@ -1,5 +1,5 @@
coverage==4.4.2
flake8==3.5.0
mock>=1.0.1
pytest==2.9.2
pytest==3.6.3
pytest-cov==2.5.1


@@ -2,22 +2,22 @@ backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
docker==3.3.0
docker-pycreds==0.2.3
colorama==0.4.0; sys_platform == 'win32'
docker==3.6.0
docker-pycreds==0.3.0
dockerpty==0.4.1
docopt==0.6.2
enum34==1.1.6; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
git+git://github.com/tartley/colorama.git@bd378c725b45eba0b8e5cc091c3ca76a954c92ff; sys_platform == 'win32'
idna==2.5
ipaddress==1.0.18
jsonschema==2.6.0
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==220; sys_platform == 'win32' and python_version >= '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.6.7
PyYAML==3.12
requests==2.18.4
requests==2.20.0
six==1.10.0
texttable==0.9.1
urllib3==1.21.1
urllib3==1.21.1; python_version == '3.3'
websocket-client==0.32.0


@@ -1,11 +1,11 @@
#!/bin/bash
set -ex
PATH="/usr/local/bin:$PATH"
TOOLCHAIN_PATH="$(realpath $(dirname $0)/../../build/toolchain)"
rm -rf venv
virtualenv -p /usr/local/bin/python3 venv
virtualenv -p ${TOOLCHAIN_PATH}/bin/python3 venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-build.txt
venv/bin/pip install --no-deps .


@@ -44,16 +44,10 @@ virtualenv .\venv
# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"
# Install dependencies
# Fix for https://github.com/pypa/pip/issues/3964
# Remove-Item -Recurse -Force .\venv\Lib\site-packages\pip
# .\venv\Scripts\easy_install pip==9.0.1
# .\venv\Scripts\pip install --upgrade pip setuptools
# End fix
.\venv\Scripts\pip install pypiwin32==220
.\venv\Scripts\pip install pypiwin32==223
.\venv\Scripts\pip install -r requirements.txt
.\venv\Scripts\pip install --no-deps .
.\venv\Scripts\pip install --allow-external pyinstaller -r requirements-build.txt
.\venv\Scripts\pip install -r requirements-build.txt
git rev-parse --short HEAD | out-file -encoding ASCII compose\GITSHA


@@ -1,14 +0,0 @@
FROM python:3.6
RUN mkdir -p /src && pip install -U Jinja2==2.10 \
PyGithub==1.39 \
pypandoc==1.4 \
GitPython==2.1.9 \
requests==2.18.4 && \
apt-get update && apt-get install -y pandoc
VOLUME /src/script/release
WORKDIR /src
COPY . /src
RUN python setup.py develop
ENTRYPOINT ["python", "script/release/release.py"]
CMD ["--help"]


@@ -9,8 +9,7 @@ The following things are required to bring a release to a successful conclusion
### Local Docker engine (Linux Containers)
The release script runs inside a container and builds images that will be part
of the release.
The release script builds images that will be part of the release.
### Docker Hub account
@@ -20,6 +19,10 @@ following repositories:
- docker/compose
- docker/compose-tests
### Python
The release script is written in Python and requires Python 3.3 at minimum.
### A Github account and Github API token
Your Github account needs to have write access on the `docker/compose` repo.
@@ -53,6 +56,18 @@ Said account needs to be a member of the maintainers group for the
Moreover, the `~/.pypirc` file should exist on your host and contain the
relevant pypi credentials.
The following is a sample `.pypirc` provided as a guideline:
```
[distutils]
index-servers =
pypi
[pypi]
username = user
password = pass
```
## Start a feature release
A feature release is a release that includes all changes present in the


@@ -4,6 +4,7 @@ from __future__ import unicode_literals
import argparse
import os
import shutil
import sys
import time
from distutils.core import run_setup
@@ -16,6 +17,8 @@ from release.const import NAME
from release.const import REPO_ROOT
from release.downloader import BinaryDownloader
from release.images import ImageManager
from release.pypi import check_pypirc
from release.pypi import pypi_upload
from release.repository import delete_assets
from release.repository import get_contributors
from release.repository import Repository
@@ -57,8 +60,11 @@ def create_bump_commit(repository, release_branch, bintray_user, bintray_org):
repository.push_branch_to_remote(release_branch)
bintray_api = BintrayAPI(os.environ['BINTRAY_TOKEN'], bintray_user)
print('Creating data repository {} on bintray'.format(release_branch.name))
bintray_api.create_repository(bintray_org, release_branch.name, 'generic')
if not bintray_api.repository_exists(bintray_org, release_branch.name):
print('Creating data repository {} on bintray'.format(release_branch.name))
bintray_api.create_repository(bintray_org, release_branch.name, 'generic')
else:
print('Bintray repository {} already exists. Skipping'.format(release_branch.name))
def monitor_pr_status(pr_data):
@@ -71,20 +77,24 @@ def monitor_pr_status(pr_data):
'pending': 0,
'success': 0,
'failure': 0,
'error': 0,
}
for detail in status.statuses:
if detail.context == 'dco-signed':
# dco-signed check breaks on merge remote-tracking; ignore it
continue
summary[detail.state] += 1
print('{pending} pending, {success} successes, {failure} failures'.format(**summary))
if status.total_count == 0:
# Mostly for testing purposes against repos with no CI setup
return True
elif summary['pending'] == 0 and summary['failure'] == 0:
return True
elif summary['failure'] > 0:
if detail.state in summary:
summary[detail.state] += 1
print(
'{pending} pending, {success} successes, {failure} failures, '
'{error} errors'.format(**summary)
)
if summary['failure'] > 0 or summary['error'] > 0:
raise ScriptError('CI failures detected!')
elif summary['pending'] == 0 and summary['success'] > 0:
# This check assumes at least 1 non-DCO CI check to avoid race conditions.
# If testing on a repo without CI, use --skip-ci-checks to avoid looping eternally
return True
time.sleep(30)
elif status.state == 'success':
print('{} successes: all clear!'.format(status.total_count))
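As a rough illustration of the reworked tally above (with a hypothetical `Detail` stand-in for the PyGithub status objects): unknown states are now skipped instead of raising `KeyError`, and the `dco-signed` context is excluded:
```
from collections import namedtuple

Detail = namedtuple('Detail', 'context state')

def tally(statuses):
    summary = {'pending': 0, 'success': 0, 'failure': 0, 'error': 0}
    for detail in statuses:
        if detail.context == 'dco-signed':
            # dco-signed breaks on merge remote-tracking; ignore it
            continue
        if detail.state in summary:
            summary[detail.state] += 1
    return summary

# The dco-signed error is ignored; only the CI success is counted.
assert tally([Detail('ci/circleci', 'success'),
              Detail('dco-signed', 'error')])['success'] == 1
```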
@@ -92,12 +102,14 @@ def monitor_pr_status(pr_data):
def check_pr_mergeable(pr_data):
if not pr_data.mergeable:
if pr_data.mergeable is False:
# mergeable can also be null, in which case the warning would be a false positive.
print(
'WARNING!! PR #{} can not currently be merged. You will need to '
'resolve the conflicts manually before finalizing the release.'.format(pr_data.number)
)
return pr_data.mergeable
return pr_data.mergeable is True
def create_release_draft(repository, version, pr_data, files):
@@ -125,13 +137,42 @@ def print_final_instructions(args):
"You're almost done! Please verify that everything is in order and "
"you are ready to make the release public, then run the following "
"command:\n{exe} -b {user} finalize {version}".format(
exe=sys.argv[0], user=args.bintray_user, version=args.release
exe='./script/release/release.sh', user=args.bintray_user, version=args.release
)
)
def distclean():
print('Running distclean...')
dirs = [
os.path.join(REPO_ROOT, 'build'), os.path.join(REPO_ROOT, 'dist'),
os.path.join(REPO_ROOT, 'docker-compose.egg-info')
]
files = []
for base, dirnames, fnames in os.walk(REPO_ROOT):
for fname in fnames:
path = os.path.normpath(os.path.join(base, fname))
if fname.endswith('.pyc'):
files.append(path)
elif fname.startswith('.coverage.'):
files.append(path)
for dirname in dirnames:
path = os.path.normpath(os.path.join(base, dirname))
if dirname == '__pycache__':
dirs.append(path)
elif dirname == '.coverage-binfiles':
dirs.append(path)
for file in files:
os.unlink(file)
for folder in dirs:
shutil.rmtree(folder, ignore_errors=True)
def resume(args):
try:
distclean()
repository = Repository(REPO_ROOT, args.repo)
br_name = branch_name(args.release)
if not repository.branch_exists(br_name):
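For reference, a compact `pathlib` equivalent of the `distclean()` walk above (a sketch only; `REPO_ROOT` stands in for the constant imported from `release.const`):
```
import shutil
from pathlib import Path

REPO_ROOT = Path('.')  # stand-in for release.const.REPO_ROOT

def distclean():
    # Top-level build artifacts are removed wholesale...
    for top in ('build', 'dist', 'docker-compose.egg-info'):
        shutil.rmtree(REPO_ROOT / top, ignore_errors=True)
    # ...while bytecode and coverage leftovers are collected tree-wide.
    for pattern in ('*.pyc', '.coverage.*'):
        for f in list(REPO_ROOT.rglob(pattern)):
            f.unlink()
    for name in ('__pycache__', '.coverage-binfiles'):
        for d in list(REPO_ROOT.rglob(name)):
            shutil.rmtree(d, ignore_errors=True)
```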
@@ -156,7 +197,8 @@ def resume(args):
if not pr_data:
pr_data = repository.create_release_pull_request(args.release)
check_pr_mergeable(pr_data)
monitor_pr_status(pr_data)
if not args.skip_ci:
monitor_pr_status(pr_data)
downloader = BinaryDownloader(args.destination)
files = downloader.download_all(args.release)
if not gh_release:
@@ -182,6 +224,7 @@ def cancel(args):
bintray_api = BintrayAPI(os.environ['BINTRAY_TOKEN'], args.bintray_user)
print('Removing Bintray data repository for {}'.format(args.release))
bintray_api.delete_repository(args.bintray_org, branch_name(args.release))
distclean()
except ScriptError as e:
print(e)
return 1
@@ -190,12 +233,14 @@ def cancel(args):
def start(args):
distclean()
try:
repository = Repository(REPO_ROOT, args.repo)
create_initial_branch(repository, args)
pr_data = repository.create_release_pull_request(args.release)
check_pr_mergeable(pr_data)
monitor_pr_status(pr_data)
if not args.skip_ci:
monitor_pr_status(pr_data)
downloader = BinaryDownloader(args.destination)
files = downloader.download_all(args.release)
gh_release = create_release_draft(repository, args.release, pr_data, files)
@@ -211,7 +256,9 @@ def start(args):
def finalize(args):
distclean()
try:
check_pypirc()
repository = Repository(REPO_ROOT, args.repo)
img_manager = ImageManager(args.release)
pr_data = repository.find_release_pr(args.release)
@@ -219,7 +266,7 @@ def finalize(args):
raise ScriptError('No PR found for {}'.format(args.release))
if not check_pr_mergeable(pr_data):
raise ScriptError('Can not finalize release with an unmergeable PR')
if not img_manager.check_images(args.release):
if not img_manager.check_images():
raise ScriptError('Missing release image')
br_name = branch_name(args.release)
if not repository.branch_exists(br_name):
@@ -236,11 +283,14 @@ def finalize(args):
run_setup(os.path.join(REPO_ROOT, 'setup.py'), script_args=['sdist', 'bdist_wheel'])
merge_status = pr_data.merge()
if not merge_status.merged:
raise ScriptError('Unable to merge PR #{}: {}'.format(pr_data.number, merge_status.message))
print('Uploading to PyPi')
run_setup(os.path.join(REPO_ROOT, 'setup.py'), script_args=['upload'])
img_manager.push_images(args.release)
if not merge_status.merged and not args.finalize_resume:
raise ScriptError(
'Unable to merge PR #{}: {}'.format(pr_data.number, merge_status.message)
)
pypi_upload(args)
img_manager.push_images()
repository.publish_release(gh_release)
except ScriptError as e:
print(e)
@@ -258,13 +308,13 @@ ACTIONS = [
EPILOG = '''Example uses:
* Start a new feature release (includes all changes currently in master)
release.py -b user start 1.23.0
release.sh -b user start 1.23.0
* Start a new patch release
release.py -b user --patch 1.21.0 start 1.21.1
release.sh -b user --patch 1.21.0 start 1.21.1
* Cancel / rollback an existing release draft
release.py -b user cancel 1.23.0
release.sh -b user cancel 1.23.0
* Restart a previously aborted patch release
release.py -b user -p 1.21.0 resume 1.21.1
release.sh -b user -p 1.21.0 resume 1.21.1
'''
@@ -310,6 +360,14 @@ def main():
'--no-cherries', '-C', dest='cherries', action='store_false',
help='If set, the program will not prompt the user for PR numbers to cherry-pick'
)
parser.add_argument(
'--skip-ci-checks', dest='skip_ci', action='store_true',
help='If set, the program will not wait for CI jobs to complete'
)
parser.add_argument(
'--finalize-resume', dest='finalize_resume', action='store_true',
help='If set, finalize will continue through steps that have already been completed.'
)
args = parser.parse_args()
if args.action == 'start':


@@ -1,25 +1,13 @@
#!/bin/sh
docker image inspect compose/release-tool > /dev/null
if test $? -ne 0; then
docker build -t compose/release-tool -f $(pwd)/script/release/Dockerfile $(pwd)
if test -d ${VENV_DIR:-./.release-venv}; then
true
else
./script/release/setup-venv.sh
fi
if test -z $GITHUB_TOKEN; then
echo "GITHUB_TOKEN environment variable must be set"
exit 1
if test -z "$*"; then
args="--help"
fi
if test -z $BINTRAY_TOKEN; then
echo "BINTRAY_TOKEN environment variable must be set"
exit 1
fi
docker run -e GITHUB_TOKEN=$GITHUB_TOKEN -e BINTRAY_TOKEN=$BINTRAY_TOKEN -it \
--mount type=bind,source=$(pwd),target=/src \
--mount type=bind,source=$(pwd)/.git,target=/src/.git \
--mount type=bind,source=$HOME/.docker,target=/root/.docker \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--mount type=bind,source=$HOME/.ssh,target=/root/.ssh \
-v $HOME/.pypirc:/root/.pypirc \
compose/release-tool $*
${VENV_DIR:-./.release-venv}/bin/python ./script/release/release.py "$@"


@@ -15,7 +15,7 @@ class BintrayAPI(requests.Session):
self.base_url = 'https://api.bintray.com/'
def create_repository(self, subject, repo_name, repo_type='generic'):
url = '{base}/repos/{subject}/{repo_name}'.format(
url = '{base}repos/{subject}/{repo_name}'.format(
base=self.base_url, subject=subject, repo_name=repo_name,
)
data = {
@@ -27,10 +27,20 @@ class BintrayAPI(requests.Session):
}
return self.post_json(url, data)
def delete_repository(self, subject, repo_name):
def repository_exists(self, subject, repo_name):
url = '{base}/repos/{subject}/{repo_name}'.format(
base=self.base_url, subject=subject, repo_name=repo_name,
)
result = self.get(url)
if result.status_code == 404:
return False
result.raise_for_status()
return True
def delete_repository(self, subject, repo_name):
url = '{base}repos/{subject}/{repo_name}'.format(
base=self.base_url, subject=subject, repo_name=repo_name,
)
return self.delete(url)
def post_json(self, url, data, **kwargs):
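The one-character change in both URL builders matters because `base_url` already ends with a slash, so the old format string produced a double slash:
```
base = 'https://api.bintray.com/'
old = '{base}/repos/{subject}/{repo_name}'.format(
    base=base, subject='docker', repo_name='1.23.2')
new = '{base}repos/{subject}/{repo_name}'.format(
    base=base, subject='docker', repo_name='1.23.2')
assert old == 'https://api.bintray.com//repos/docker/1.23.2'
assert new == 'https://api.bintray.com/repos/docker/1.23.2'
```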


@@ -2,6 +2,8 @@ from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import base64
import json
import os
import shutil
@@ -15,16 +17,22 @@ class ImageManager(object):
def __init__(self, version):
self.docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
self.version = version
if 'HUB_CREDENTIALS' in os.environ:
print('HUB_CREDENTIALS found in environment, issuing login')
credentials = json.loads(base64.urlsafe_b64decode(os.environ['HUB_CREDENTIALS']))
self.docker_client.login(
username=credentials['Username'], password=credentials['Password']
)
def build_images(self, repository, files):
print("Building release images...")
repository.write_git_sha()
docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
distdir = os.path.join(REPO_ROOT, 'dist')
os.makedirs(distdir, exist_ok=True)
shutil.copy(files['docker-compose-Linux-x86_64'][0], distdir)
os.chmod(os.path.join(distdir, 'docker-compose-Linux-x86_64'), 0o755)
print('Building docker/compose image')
logstream = docker_client.build(
logstream = self.docker_client.build(
REPO_ROOT, tag='docker/compose:{}'.format(self.version), dockerfile='Dockerfile.run',
decode=True
)
@@ -35,7 +43,7 @@ class ImageManager(object):
print(chunk['stream'], end='')
print('Building test image (for UCP e2e)')
logstream = docker_client.build(
logstream = self.docker_client.build(
REPO_ROOT, tag='docker-compose-tests:tmp', decode=True
)
for chunk in logstream:
@@ -44,13 +52,15 @@ class ImageManager(object):
if 'stream' in chunk:
print(chunk['stream'], end='')
container = docker_client.create_container(
container = self.docker_client.create_container(
'docker-compose-tests:tmp', entrypoint='tox'
)
docker_client.commit(container, 'docker/compose-tests:latest')
docker_client.tag('docker/compose-tests:latest', 'docker/compose-tests:{}'.format(self.version))
docker_client.remove_container(container, force=True)
docker_client.remove_image('docker-compose-tests:tmp', force=True)
self.docker_client.commit(container, 'docker/compose-tests', 'latest')
self.docker_client.tag(
'docker/compose-tests:latest', 'docker/compose-tests:{}'.format(self.version)
)
self.docker_client.remove_container(container, force=True)
self.docker_client.remove_image('docker-compose-tests:tmp', force=True)
@property
def image_names(self):
@@ -60,23 +70,23 @@ class ImageManager(object):
'docker/compose:{}'.format(self.version)
]
def check_images(self, version):
docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
def check_images(self):
for name in self.image_names:
try:
docker_client.inspect_image(name)
self.docker_client.inspect_image(name)
except docker.errors.ImageNotFound:
print('Expected image {} was not found'.format(name))
return False
return True
def push_images(self):
docker_client = docker.APIClient(**docker.utils.kwargs_from_env())
for name in self.image_names:
print('Pushing {} to Docker Hub'.format(name))
logstream = docker_client.push(name, stream=True, decode=True)
logstream = self.docker_client.push(name, stream=True, decode=True)
for chunk in logstream:
if 'status' in chunk:
print(chunk['status'])
if 'error' in chunk:
raise ScriptError(
'Error pushing {name}: {err}'.format(name=name, err=chunk['error'])
)
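For the `HUB_CREDENTIALS` handling added to `__init__` above, the variable is expected to carry urlsafe-base64-encoded JSON with `Username` and `Password` keys. A round-trip sketch with sample values:
```
import base64
import json

creds = {'Username': 'someuser', 'Password': 'hunter2'}  # sample values
encoded = base64.urlsafe_b64encode(json.dumps(creds).encode())

# Mirrors the decode performed in ImageManager.__init__
decoded = json.loads(base64.urlsafe_b64decode(encoded))
assert decoded == creds
```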


@@ -0,0 +1,44 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from configparser import Error
from requests.exceptions import HTTPError
from twine.commands.upload import main as twine_upload
from twine.utils import get_config
from .utils import ScriptError
def pypi_upload(args):
print('Uploading to PyPi')
try:
rel = args.release.replace('-rc', 'rc')
twine_upload([
'dist/docker_compose-{}*.whl'.format(rel),
'dist/docker-compose-{}*.tar.gz'.format(rel)
])
except HTTPError as e:
if e.response.status_code == 400 and 'File already exists' in e.message:
if not args.finalize_resume:
raise ScriptError(
'Package already uploaded on PyPi.'
)
print('Skipping PyPi upload - package already uploaded')
else:
raise ScriptError('Unexpected HTTP error uploading package to PyPi: {}'.format(e))
def check_pypirc():
try:
config = get_config()
except Error as e:
raise ScriptError('Failed to parse .pypirc file: {}'.format(e))
if config is None:
raise ScriptError('Failed to parse .pypirc file')
if 'pypi' not in config:
raise ScriptError('Missing [pypi] section in .pypirc file')
if not (config['pypi'].get('username') and config['pypi'].get('password')):
raise ScriptError('Missing login/password pair for pypi repo')
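A minimal sketch of the `check_pypirc()` rules above, run against a plain dict rather than `twine.utils.get_config()` so it needs no `.pypirc` on disk:
```
def validate_pypirc(config):
    if config is None:
        raise ValueError('Failed to parse .pypirc file')
    if 'pypi' not in config:
        raise ValueError('Missing [pypi] section in .pypirc file')
    if not (config['pypi'].get('username') and config['pypi'].get('password')):
        raise ValueError('Missing login/password pair for pypi repo')

validate_pypirc({'pypi': {'username': 'user', 'password': 'pass'}})  # passes
```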


@@ -196,6 +196,24 @@ class Repository(object):
f.flush()
self.git_repo.git.am('--3way', f.name)
def get_prs_in_milestone(self, version):
milestones = self.gh_repo.get_milestones(state='open')
milestone = None
for ms in milestones:
if ms.title == version:
milestone = ms
break
if not milestone:
print('Didn\'t find a milestone matching "{}"'.format(version))
return None
issues = self.gh_repo.get_issues(milestone=milestone, state='all')
prs = []
for issue in issues:
if issue.pull_request is not None:
prs.append(issue.number)
return sorted(prs)
def get_contributors(pr_data):
commits = pr_data.get_commits()
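`get_prs_in_milestone()` leans on the GitHub convention that issues carrying a non-null `pull_request` attribute are pull requests. A sketch with `SimpleNamespace` stand-ins for the PyGithub types:
```
from types import SimpleNamespace as NS

def prs_in_milestone(milestones, issues, version):
    milestone = next((ms for ms in milestones if ms.title == version), None)
    if milestone is None:
        return None
    # Issues carrying a pull_request payload are PRs.
    return sorted(i.number for i in issues if i.pull_request is not None)

milestones = [NS(title='1.23.2')]
issues = [NS(number=6382, pull_request=NS()),   # a PR
          NS(number=6001, pull_request=None)]   # a plain issue
assert prs_in_milestone(milestones, issues, '1.23.2') == [6382]
```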

script/release/setup-venv.sh Executable file

@@ -0,0 +1,47 @@
#!/bin/bash
debian_based() { test -f /etc/debian_version; }
if test -z $VENV_DIR; then
VENV_DIR=./.release-venv
fi
if test -z $PYTHONBIN; then
PYTHONBIN=$(which python3)
if test -z $PYTHONBIN; then
PYTHONBIN=$(which python)
fi
fi
VERSION=$($PYTHONBIN -c "import sys; print('{}.{}'.format(*sys.version_info[0:2]))")
if test $(echo $VERSION | cut -d. -f1) -lt 3; then
echo "Python 3.3 or above is required"
fi
if test $(echo $VERSION | cut -d. -f2) -lt 3; then
echo "Python 3.3 or above is required"
fi
# Debian / Ubuntu workaround:
# https://askubuntu.com/questions/879437/ensurepip-is-disabled-in-debian-ubuntu-for-the-system-python
if debian_based; then
VENV_FLAGS="$VENV_FLAGS --without-pip"
fi
$PYTHONBIN -m venv $VENV_DIR $VENV_FLAGS
VENV_PYTHONBIN=$VENV_DIR/bin/python
if debian_based; then
curl https://bootstrap.pypa.io/get-pip.py -o $VENV_DIR/get-pip.py
$VENV_PYTHONBIN $VENV_DIR/get-pip.py
fi
$VENV_PYTHONBIN -m pip install -U Jinja2==2.10 \
PyGithub==1.39 \
pypandoc==1.4 \
GitPython==2.1.9 \
requests==2.18.4 \
twine==1.11.0
$VENV_PYTHONBIN setup.py develop


@@ -15,7 +15,7 @@
set -e
VERSION="1.21.1"
VERSION="1.23.2"
IMAGE="docker/compose:$VERSION"
@@ -47,11 +47,17 @@ if [ -n "$HOME" ]; then
fi
# Only allocate tty if we detect one
if [ -t 1 ]; then
DOCKER_RUN_OPTIONS="-t"
fi
if [ -t 0 ]; then
if [ -t 1 ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -t"
fi
else
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -i"
fi
# Handle userns security
if [ ! -z "$(docker info 2>/dev/null | grep userns)" ]; then
DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS --userns=host"
fi
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"


@@ -1,43 +1,104 @@
#!/bin/bash
#!/usr/bin/env bash
set -ex
python_version() {
python -V 2>&1
}
. $(dirname $0)/osx_helpers.sh
python3_version() {
python3 -V 2>&1
}
DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET:-"$(macos_version)"}
SDK_FETCH=
if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
SDK_FETCH=1
# SDK URL from https://github.com/docker/golang-cross/blob/master/osx-cross.sh
SDK_URL=https://s3.dockerproject.org/darwin/v2/MacOSX${DEPLOYMENT_TARGET}.sdk.tar.xz
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
fi
openssl_version() {
python -c "import ssl; print ssl.OPENSSL_VERSION"
}
OPENSSL_VERSION=1.1.0h
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
OPENSSL_SHA1=0fc39f6aa91b6e7f4d05018f7c5e991e1d2491fd
desired_python3_version="3.6.4"
desired_python3_brew_version="3.6.4_2"
python3_formula="https://raw.githubusercontent.com/Homebrew/homebrew-core/b4e69a9a592232fa5a82741f6acecffc2f1d198d/Formula/python3.rb"
PYTHON_VERSION=3.6.6
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
PYTHON_SHA1=ae1fc9ddd29ad8c1d5f7b0d799ff0787efeb9652
PATH="/usr/local/bin:$PATH"
if !(which brew); then
#
# Install prerequisites.
#
if ! [ -x "$(command -v brew)" ]; then
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
fi
brew update > /dev/null
if !(python3_version | grep "$desired_python3_version"); then
if brew list | grep python3; then
brew unlink python3
fi
brew install "$python3_formula"
brew switch python3 "$desired_python3_brew_version"
if ! [ -x "$(command -v grealpath)" ]; then
brew update > /dev/null
brew install coreutils
fi
echo "*** Using $(python3_version) ; $(python_version)"
echo "*** Using $(openssl_version)"
if !(which virtualenv); then
if ! [ -x "$(command -v python3)" ]; then
brew update > /dev/null
brew install python3
fi
if ! [ -x "$(command -v virtualenv)" ]; then
pip install virtualenv
fi
#
# Create toolchain directory.
#
BUILD_PATH="$(grealpath $(dirname $0)/../../build)"
mkdir -p ${BUILD_PATH}
TOOLCHAIN_PATH="${BUILD_PATH}/toolchain"
mkdir -p ${TOOLCHAIN_PATH}
#
# Set macOS SDK.
#
if [ ${SDK_FETCH} ]; then
SDK_PATH=${TOOLCHAIN_PATH}/MacOSX${DEPLOYMENT_TARGET}.sdk
fetch_tarball ${SDK_URL} ${SDK_PATH} ${SDK_SHA1}
else
SDK_PATH="$(xcode-select --print-path)/Platforms/MacOSX.platform/Developer/SDKs/MacOSX${DEPLOYMENT_TARGET}.sdk"
fi
#
# Build OpenSSL.
#
OPENSSL_SRC_PATH=${TOOLCHAIN_PATH}/openssl-${OPENSSL_VERSION}
if ! [ -f ${TOOLCHAIN_PATH}/bin/openssl ]; then
rm -rf ${OPENSSL_SRC_PATH}
fetch_tarball ${OPENSSL_URL} ${OPENSSL_SRC_PATH} ${OPENSSL_SHA1}
(
cd ${OPENSSL_SRC_PATH}
export MACOSX_DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET}
export SDKROOT=${SDK_PATH}
./Configure darwin64-x86_64-cc --prefix=${TOOLCHAIN_PATH}
make install_sw install_dev
)
fi
#
# Build Python.
#
PYTHON_SRC_PATH=${TOOLCHAIN_PATH}/Python-${PYTHON_VERSION}
if ! [ -f ${TOOLCHAIN_PATH}/bin/python3 ]; then
rm -rf ${PYTHON_SRC_PATH}
fetch_tarball ${PYTHON_URL} ${PYTHON_SRC_PATH} ${PYTHON_SHA1}
(
cd ${PYTHON_SRC_PATH}
./configure --prefix=${TOOLCHAIN_PATH} \
--enable-ipv6 --without-ensurepip --with-dtrace --without-gcc \
--datarootdir=${TOOLCHAIN_PATH}/share \
--datadir=${TOOLCHAIN_PATH}/share \
--enable-framework=${TOOLCHAIN_PATH}/Frameworks \
MACOSX_DEPLOYMENT_TARGET=${DEPLOYMENT_TARGET} \
CFLAGS="-isysroot ${SDK_PATH} -I${TOOLCHAIN_PATH}/include" \
CPPFLAGS="-I${SDK_PATH}/usr/include -I${TOOLCHAIN_PATH}include" \
LDFLAGS="-isysroot ${SDK_PATH} -L ${TOOLCHAIN_PATH}/lib"
make -j 4
make install PYTHONAPPSDIR=${TOOLCHAIN_PATH}
make frameworkinstallextras PYTHONAPPSDIR=${TOOLCHAIN_PATH}/share
)
fi
echo ""
echo "*** Targeting macOS: ${DEPLOYMENT_TARGET}"
echo "*** Using SDK ${SDK_PATH}"
echo "*** Using $(python3_version ${TOOLCHAIN_PATH})"
echo "*** Using $(openssl_version ${TOOLCHAIN_PATH})"


@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# Check file's ($1) SHA1 ($2).
check_sha1() {
echo -n "$2 *$1" | shasum -c -
}
# Download URL ($1) to path ($2).
download() {
curl -L $1 -o $2
}
# Extract tarball ($1) in folder ($2).
extract() {
tar xf $1 -C $2
}
# Download URL ($1), check SHA1 ($3), and extract utility ($2).
fetch_tarball() {
url=$1
tarball=$2.tarball
sha1=$3
download $url $tarball
check_sha1 $tarball $sha1
extract $tarball $(dirname $tarball)
}
# Version of Python at toolchain path ($1).
python3_version() {
$1/bin/python3 -V 2>&1
}
# Version of OpenSSL used by toolchain ($1) Python.
openssl_version() {
$1/bin/python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
}
# System macOS version.
macos_version() {
sw_vers -productVersion | cut -f1,2 -d'.'
}


@@ -5,7 +5,7 @@ set -ex
TAG="docker-compose:$(git rev-parse --short HEAD)"
# By default use the Dockerfile, but can be overriden to use an alternative file
# By default use the Dockerfile, but can be overridden to use an alternative file
# e.g DOCKERFILE=Dockerfile.armhf script/test/default
DOCKERFILE="${DOCKERFILE:-Dockerfile}"


@@ -36,23 +36,24 @@ import requests
GITHUB_API = 'https://api.github.com/repos'
STAGES = ['tp', 'beta', 'rc']
class Version(namedtuple('_Version', 'major minor patch rc edition')):
class Version(namedtuple('_Version', 'major minor patch stage edition')):
@classmethod
def parse(cls, version):
edition = None
version = version.lstrip('v')
version, _, rc = version.partition('-')
if rc:
if 'rc' not in rc:
edition = rc
rc = None
elif '-' in rc:
edition, rc = rc.split('-')
version, _, stage = version.partition('-')
if stage:
if not any(marker in stage for marker in STAGES):
edition = stage
stage = None
elif '-' in stage:
edition, stage = stage.split('-')
major, minor, patch = version.split('.', 3)
return cls(major, minor, patch, rc, edition)
return cls(major, minor, patch, stage, edition)
@property
def major_minor(self):
@@ -63,14 +64,22 @@ class Version(namedtuple('_Version', 'major minor patch rc edition')):
"""Return a representation that allows this object to be sorted
correctly with the default comparator.
"""
# rc releases should appear before official releases
rc = (0, self.rc) if self.rc else (1, )
return (int(self.major), int(self.minor), int(self.patch)) + rc
# non-GA releases should appear before GA releases
# Order: tp -> beta -> rc -> GA
if self.stage:
for st in STAGES:
if st in self.stage:
stage = (STAGES.index(st), self.stage)
break
else:
stage = (len(STAGES),)
return (int(self.major), int(self.minor), int(self.patch)) + stage
def __str__(self):
rc = '-{}'.format(self.rc) if self.rc else ''
stage = '-{}'.format(self.stage) if self.stage else ''
edition = '-{}'.format(self.edition) if self.edition else ''
return '.'.join(map(str, self[:3])) + edition + rc
return '.'.join(map(str, self[:3])) + edition + stage
BLACKLIST = [ # List of versions known to be broken and should not be used
@@ -113,9 +122,9 @@ def get_latest_versions(versions, num=1):
def get_default(versions):
"""Return a :class:`Version` for the latest non-rc version."""
"""Return a :class:`Version` for the latest GA version."""
for version in versions:
if not version.rc:
if not version.stage:
return version
@@ -123,8 +132,9 @@ def get_versions(tags):
for tag in tags:
try:
v = Version.parse(tag['name'])
if v not in BLACKLIST:
yield v
if v in BLACKLIST:
continue
yield v
except ValueError:
print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)


@@ -33,10 +33,10 @@ install_requires = [
'cached-property >= 1.2.0, < 2',
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4',
'requests >= 2.6.1, != 2.11.0, != 2.12.2, != 2.18.0, < 2.19',
'requests >= 2.6.1, != 2.11.0, != 2.12.2, != 2.18.0, < 2.21',
'texttable >= 0.9.0, < 0.10',
'websocket-client >= 0.32.0, < 1.0',
'docker >= 3.3.0, < 4.0',
'docker >= 3.6.0, < 4.0',
'dockerpty >= 0.4.1, < 0.5',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
@@ -55,7 +55,7 @@ extras_require = {
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5'],
':python_version < "3.3"': ['ipaddress >= 1.0.16'],
':sys_platform == "win32"': ['colorama >= 0.3.9, < 0.4'],
':sys_platform == "win32"': ['colorama >= 0.4, < 0.5'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
}
@@ -100,5 +100,6 @@ setup(
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
)


@@ -99,7 +99,14 @@ class ContainerStateCondition(object):
def __call__(self):
try:
container = self.client.inspect_container(self.name)
if self.name.endswith('*'):
ctnrs = self.client.containers(all=True, filters={'name': self.name[:-1]})
if len(ctnrs) > 0:
container = self.client.inspect_container(ctnrs[0]['Id'])
else:
return False
else:
container = self.client.inspect_container(self.name)
return container['State']['Status'] == self.status
except errors.APIError:
return False
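Because one-off containers now carry a random slug rather than a sequential number, the condition accepts a trailing `*` as a name-prefix match. A sketch of just that branch:
```
def name_matches(pattern, names):
    # A trailing '*' switches from exact match to prefix match.
    if pattern.endswith('*'):
        prefix = pattern[:-1]
        return [n for n in names if n.startswith(prefix)]
    return [n for n in names if n == pattern]

running = ['volume_test_run_37a1b2c3']  # hypothetical slugged name
assert name_matches('volume_test_run_*', running) == running
```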
@@ -222,6 +229,16 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = 'tests/fixtures/v2-full'
assert self.dispatch(['config', '--quiet']).stdout == ''
def test_config_with_hash_option(self):
self.base_dir = 'tests/fixtures/v2-full'
result = self.dispatch(['config', '--hash=*'])
for service in self.project.get_services():
assert '{} {}\n'.format(service.name, service.config_hash) in result.stdout
svc = self.project.get_service('other')
result = self.dispatch(['config', '--hash=other'])
assert result.stdout == '{} {}\n'.format(svc.name, svc.config_hash)
def test_config_default(self):
self.base_dir = 'tests/fixtures/v2-full'
result = self.dispatch(['config'])
@@ -293,6 +310,36 @@ class CLITestCase(DockerClientTestCase):
}
}
def test_config_with_dot_env(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['config'])
json_result = yaml.load(result.stdout)
assert json_result == {
'services': {
'web': {
'command': 'true',
'image': 'alpine:latest',
'ports': ['5643/tcp', '9999/tcp']
}
},
'version': '2.4'
}
def test_config_with_dot_env_and_override_dir(self):
self.base_dir = 'tests/fixtures/default-env-file'
result = self.dispatch(['--project-directory', 'alt/', 'config'])
json_result = yaml.load(result.stdout)
assert json_result == {
'services': {
'web': {
'command': 'echo uwu',
'image': 'alpine:3.4',
'ports': ['3341/tcp', '4449/tcp']
}
},
'version': '2.4'
}
def test_config_external_volume_v2(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes-v2.yml', 'config'])
@@ -481,6 +528,7 @@ class CLITestCase(DockerClientTestCase):
assert yaml.load(result.stdout) == {
'version': '2.3',
'volumes': {'foo': {'driver': 'default'}},
'networks': {'bar': {}},
'services': {
'foo': {
'command': '/bin/true',
@@ -490,9 +538,10 @@ class CLITestCase(DockerClientTestCase):
'mem_limit': '300M',
'mem_reservation': '100M',
'cpus': 0.7,
'volumes': ['foo:/bar:rw']
'volumes': ['foo:/bar:rw'],
'networks': {'bar': None},
}
}
},
}
def test_ps(self):
@@ -771,6 +820,13 @@ class CLITestCase(DockerClientTestCase):
assert 'does not exist, is not accessible, or is not a valid URL' in result.stderr
def test_build_parallel(self):
self.base_dir = 'tests/fixtures/build-multiple-composefile'
result = self.dispatch(['build', '--parallel'])
assert 'Successfully tagged build-multiple-composefile_a:latest' in result.stdout
assert 'Successfully tagged build-multiple-composefile_b:latest' in result.stdout
assert 'Successfully built' in result.stdout
def test_create(self):
self.dispatch(['create'])
service = self.project.get_service('simple')
@@ -909,11 +965,11 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['down', '--rmi=local', '--volumes'])
assert 'Stopping v2-full_web_1' in result.stderr
assert 'Stopping v2-full_other_1' in result.stderr
assert 'Stopping v2-full_web_run_2' in result.stderr
assert 'Stopping v2-full_web_run_' in result.stderr
assert 'Removing v2-full_web_1' in result.stderr
assert 'Removing v2-full_other_1' in result.stderr
assert 'Removing v2-full_web_run_1' in result.stderr
assert 'Removing v2-full_web_run_2' in result.stderr
assert 'Removing v2-full_web_run_' in result.stderr
assert 'Removing v2-full_web_run_' in result.stderr
assert 'Removing volume v2-full_data' in result.stderr
assert 'Removing image v2-full_web' in result.stderr
assert 'Removing image busybox' not in result.stderr
@@ -970,11 +1026,15 @@ class CLITestCase(DockerClientTestCase):
def test_up_attached(self):
self.base_dir = 'tests/fixtures/echo-services'
result = self.dispatch(['up', '--no-color'])
simple_name = self.project.get_service('simple').containers(stopped=True)[0].name_without_project
another_name = self.project.get_service('another').containers(
stopped=True
)[0].name_without_project
assert 'simple_1 | simple' in result.stdout
assert 'another_1 | another' in result.stdout
assert 'simple_1 exited with code 0' in result.stdout
assert 'another_1 exited with code 0' in result.stdout
assert '{} | simple'.format(simple_name) in result.stdout
assert '{} | another'.format(another_name) in result.stdout
assert '{} exited with code 0'.format(simple_name) in result.stdout
assert '{} exited with code 0'.format(another_name) in result.stdout
@v2_only()
def test_up(self):
@@ -1678,11 +1738,12 @@ class CLITestCase(DockerClientTestCase):
def test_run_rm(self):
self.base_dir = 'tests/fixtures/volume'
proc = start_process(self.base_dir, ['run', '--rm', 'test'])
service = self.project.get_service('test')
wait_on_condition(ContainerStateCondition(
self.project.client,
'volume_test_run_1',
'running'))
service = self.project.get_service('test')
'volume_test_run_*',
'running')
)
containers = service.containers(one_off=OneOffFilter.only)
assert len(containers) == 1
mounts = containers[0].get('Mounts')
@@ -2005,39 +2066,39 @@ class CLITestCase(DockerClientTestCase):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'running'))
os.kill(proc.pid, signal.SIGINT)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'exited'))
def test_run_handles_sigterm(self):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'running'))
os.kill(proc.pid, signal.SIGTERM)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'exited'))
def test_run_handles_sighup(self):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'running'))
os.kill(proc.pid, signal.SIGHUP)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simple-composefile_simple_run_1',
'simple-composefile_simple_run_*',
'exited'))
@mock.patch.dict(os.environ)
@@ -2237,19 +2298,44 @@ class CLITestCase(DockerClientTestCase):
proc = start_process(self.base_dir, ['logs', '-f'])
self.dispatch(['up', '-d', 'another'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'logs-composefile_another_1',
'exited'))
another_name = self.project.get_service('another').get_container().name_without_project
wait_on_condition(
ContainerStateCondition(
self.project.client,
'logs-composefile_another_*',
'exited'
)
)
simple_name = self.project.get_service('simple').get_container().name_without_project
self.dispatch(['kill', 'simple'])
result = wait_on_process(proc)
assert 'hello' in result.stdout
assert 'test' in result.stdout
assert 'logs-composefile_another_1 exited with code 0' in result.stdout
assert 'logs-composefile_simple_1 exited with code 137' in result.stdout
assert '{} exited with code 0'.format(another_name) in result.stdout
assert '{} exited with code 137'.format(simple_name) in result.stdout
def test_logs_follow_logs_from_restarted_containers(self):
self.base_dir = 'tests/fixtures/logs-restart-composefile'
proc = start_process(self.base_dir, ['up'])
wait_on_condition(
ContainerStateCondition(
self.project.client,
'logs-restart-composefile_another_*',
'exited'
)
)
self.dispatch(['kill', 'simple'])
result = wait_on_process(proc)
assert result.stdout.count(
r'logs-restart-composefile_another_1 exited with code 1'
) == 3
assert result.stdout.count('world') == 3
def test_logs_default(self):
self.base_dir = 'tests/fixtures/logs-composefile'
@@ -2274,17 +2360,17 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['up', '-d'])
result = self.dispatch(['logs', '-f', '-t'])
assert re.search('(\d{4})-(\d{2})-(\d{2})T(\d{2})\:(\d{2})\:(\d{2})', result.stdout)
assert re.search(r'(\d{4})-(\d{2})-(\d{2})T(\d{2})\:(\d{2})\:(\d{2})', result.stdout)
def test_logs_tail(self):
self.base_dir = 'tests/fixtures/logs-tail-composefile'
self.dispatch(['up'])
result = self.dispatch(['logs', '--tail', '2'])
assert 'c\n' in result.stdout
assert 'd\n' in result.stdout
assert 'a\n' not in result.stdout
assert 'b\n' not in result.stdout
assert 'y\n' in result.stdout
assert 'z\n' in result.stdout
assert 'w\n' not in result.stdout
assert 'x\n' not in result.stdout
def test_kill(self):
self.dispatch(['up', '-d'], None)
@@ -2458,9 +2544,9 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['port', '--index=' + str(index), 'simple', str(number)])
return result.stdout.rstrip()
assert get_port(3000) == containers[0].get_local_port(3000)
assert get_port(3000, index=1) == containers[0].get_local_port(3000)
assert get_port(3000, index=2) == containers[1].get_local_port(3000)
assert get_port(3000) in (containers[0].get_local_port(3000), containers[1].get_local_port(3000))
assert get_port(3000, index=containers[0].number) == containers[0].get_local_port(3000)
assert get_port(3000, index=containers[1].number) == containers[1].get_local_port(3000)
assert get_port(3002) == ""
def test_events_json(self):
@@ -2496,7 +2582,7 @@ class CLITestCase(DockerClientTestCase):
container, = self.project.containers()
expected_template = ' container {} {}'
expected_meta_info = ['image=busybox:latest', 'name=simple-composefile_simple_1']
expected_meta_info = ['image=busybox:latest', 'name=simple-composefile_simple_']
assert expected_template.format('create', container.id) in lines[0]
assert expected_template.format('start', container.id) in lines[1]
@@ -2578,8 +2664,11 @@ class CLITestCase(DockerClientTestCase):
assert len(containers) == 2
web = containers[1]
db_name = containers[0].name_without_project
assert set(get_links(web)) == set(['db', 'mydb_1', 'extends_mydb_1'])
assert set(get_links(web)) == set(
['db', db_name, 'extends_{}'.format(db_name)]
)
expected_env = set([
"FOO=1",
@@ -2612,17 +2701,27 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = 'tests/fixtures/exit-code-from'
proc = start_process(
self.base_dir,
['up', '--abort-on-container-exit', '--exit-code-from', 'another'])
['up', '--abort-on-container-exit', '--exit-code-from', 'another']
)
result = wait_on_process(proc, returncode=1)
assert 'exit-code-from_another_1 exited with code 1' in result.stdout
def test_exit_code_from_signal_stop(self):
self.base_dir = 'tests/fixtures/exit-code-from'
proc = start_process(
self.base_dir,
['up', '--abort-on-container-exit', '--exit-code-from', 'simple']
)
result = wait_on_process(proc, returncode=137) # SIGKILL
name = self.project.get_service('another').containers(stopped=True)[0].name_without_project
assert '{} exited with code 1'.format(name) in result.stdout
def test_images(self):
self.project.get_service('simple').create_container()
result = self.dispatch(['images'])
assert 'busybox' in result.stdout
assert 'simple-composefile_simple_1' in result.stdout
assert 'simple-composefile_simple_' in result.stdout
def test_images_default_composefile(self):
self.base_dir = 'tests/fixtures/multiple-composefiles'
@@ -2670,3 +2769,13 @@ class CLITestCase(DockerClientTestCase):
with pytest.raises(DuplicateOverrideFileFound):
get_project(self.base_dir, [])
self.base_dir = None
def test_images_use_service_tag(self):
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/images-service-tag'
self.dispatch(['up', '-d', '--build'])
result = self.dispatch(['images'])
assert re.search(r'foo1.+test[ \t]+dev', result.stdout) is not None
assert re.search(r'foo2.+test[ \t]+prod', result.stdout) is not None
assert re.search(r'foo3.+_foo3[ \t]+latest', result.stdout) is not None


@@ -0,0 +1,4 @@
FROM busybox:latest
RUN echo a
CMD top


@@ -0,0 +1,4 @@
FROM busybox:latest
RUN echo b
CMD top


@@ -0,0 +1,8 @@
version: "2"
services:
a:
build: ./a
b:
build: ./b


@@ -16,7 +16,13 @@ services:
memory: 100M
volumes:
- foo:/bar
networks:
- bar
volumes:
foo:
driver: default
networks:
bar:
attachable: true


@@ -0,0 +1,4 @@
IMAGE=alpine:3.4
COMMAND=echo uwu
PORT1=3341
PORT2=4449


@@ -1,4 +1,6 @@
web:
version: '2.4'
services:
web:
image: ${IMAGE}
command: ${COMMAND}
ports:


@@ -0,0 +1,2 @@
FROM busybox:latest
RUN touch /foo


@@ -0,0 +1,10 @@
version: "2.4"
services:
foo1:
build: .
image: test:dev
foo2:
build: .
image: test:prod
foo3:
build: .


@@ -0,0 +1,7 @@
simple:
image: busybox:latest
command: sh -c "echo hello && tail -f /dev/null"
another:
image: busybox:latest
command: sh -c "sleep 0.5 && echo world && /bin/false"
restart: "on-failure:2"


@@ -1,3 +1,3 @@
simple:
image: busybox:latest
command: sh -c "echo a && echo b && echo c && echo d"
command: sh -c "echo w && echo x && echo y && echo z"


@@ -2,17 +2,17 @@ version: "2"
services:
web:
image: busybox
image: alpine:3.7
command: top
networks: ["front"]
app:
image: busybox
image: alpine:3.7
command: top
networks: ["front", "back"]
links:
- "db:database"
db:
image: busybox
image: alpine:3.7
command: top
networks: ["back"]


@@ -90,7 +90,8 @@ class ProjectTest(DockerClientTestCase):
project.up()
containers = project.containers(['web'])
assert [c.name for c in containers] == ['composetest_web_1']
assert len(containers) == 1
assert containers[0].name.startswith('composetest_web_')
def test_containers_with_extra_service(self):
web = self.create_service('web')
@@ -104,6 +105,23 @@ class ProjectTest(DockerClientTestCase):
project = Project('composetest', [web, db], self.client)
assert set(project.containers(stopped=True)) == set([web_1, db_1])
def test_parallel_pull_with_no_image(self):
config_data = build_config(
version=V2_3,
services=[{
'name': 'web',
'build': {'context': '.'},
}],
)
project = Project.from_config(
name='composetest',
config_data=config_data,
client=self.client
)
project.pull(parallel_pull=True)
def test_volumes_from_service(self):
project = Project.from_config(
name='composetest',
@@ -431,7 +449,7 @@ class ProjectTest(DockerClientTestCase):
project.up(strategy=ConvergenceStrategy.always)
assert len(project.containers()) == 2
db_container = [c for c in project.containers() if 'db' in c.name][0]
db_container = [c for c in project.containers() if c.service == 'db'][0]
assert db_container.id != old_db_id
assert db_container.get('Volumes./etc') == db_volume_path
@@ -451,7 +469,7 @@ class ProjectTest(DockerClientTestCase):
project.up(strategy=ConvergenceStrategy.always)
assert len(project.containers()) == 2
db_container = [c for c in project.containers() if 'db' in c.name][0]
db_container = [c for c in project.containers() if c.service == 'db'][0]
assert db_container.id != old_db_id
assert db_container.get_mount('/etc')['Source'] == db_volume_path
@@ -464,14 +482,14 @@ class ProjectTest(DockerClientTestCase):
project.up(['db'])
assert len(project.containers()) == 1
old_db_id = project.containers()[0].id
container, = project.containers()
old_db_id = container.id
db_volume_path = container.get_mount('/var/db')['Source']
project.up(strategy=ConvergenceStrategy.never)
assert len(project.containers()) == 2
db_container = [c for c in project.containers() if 'db' in c.name][0]
db_container = [c for c in project.containers() if c.name == container.name][0]
assert db_container.id == old_db_id
assert db_container.get_mount('/var/db')['Source'] == db_volume_path
@@ -498,7 +516,7 @@ class ProjectTest(DockerClientTestCase):
assert len(new_containers) == 2
assert [c.is_running for c in new_containers] == [True, True]
db_container = [c for c in new_containers if 'db' in c.name][0]
db_container = [c for c in new_containers if c.service == 'db'][0]
assert db_container.id == old_db_id
assert db_container.get_mount('/var/db')['Source'] == db_volume_path
@@ -1915,3 +1933,65 @@ class ProjectTest(DockerClientTestCase):
assert len(remote_secopts) == 1
assert remote_secopts[0].startswith('seccomp=')
assert json.loads(remote_secopts[0].lstrip('seccomp=')) == seccomp_data
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_project_up_name_starts_with_illegal_char(self):
config_dict = {
'version': '2.3',
'services': {
'svc1': {
'image': 'busybox:latest',
'command': 'ls',
'volumes': ['foo:/foo:rw'],
'networks': ['bar'],
},
},
'volumes': {
'foo': {},
},
'networks': {
'bar': {},
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='_underscoretest', config_data=config_data, client=self.client
)
project.up()
self.addCleanup(project.down, None, True)
containers = project.containers(stopped=True)
assert len(containers) == 1
assert containers[0].name.startswith('underscoretest_svc1_')
assert containers[0].project == '_underscoretest'
full_vol_name = 'underscoretest_foo'
vol_data = self.get_volume_data(full_vol_name)
assert vol_data
assert vol_data['Labels'][LABEL_PROJECT] == '_underscoretest'
full_net_name = '_underscoretest_bar'
net_data = self.client.inspect_network(full_net_name)
assert net_data
assert net_data['Labels'][LABEL_PROJECT] == '_underscoretest'
project2 = Project.from_config(
name='-dashtest', config_data=config_data, client=self.client
)
project2.up()
self.addCleanup(project2.down, None, True)
containers = project2.containers(stopped=True)
assert len(containers) == 1
assert containers[0].name.startswith('dashtest_svc1_')
assert containers[0].project == '-dashtest'
full_vol_name = 'dashtest_foo'
vol_data = self.get_volume_data(full_vol_name)
assert vol_data
assert vol_data['Labels'][LABEL_PROJECT] == '-dashtest'
full_net_name = '-dashtest_bar'
net_data = self.client.inspect_network(full_net_name)
assert net_data
assert net_data['Labels'][LABEL_PROJECT] == '-dashtest'


@@ -67,7 +67,7 @@ class ServiceTest(DockerClientTestCase):
create_and_start_container(foo)
assert len(foo.containers()) == 1
assert foo.containers()[0].name == 'composetest_foo_1'
assert foo.containers()[0].name.startswith('composetest_foo_')
assert len(bar.containers()) == 0
create_and_start_container(bar)
@@ -77,8 +77,8 @@ class ServiceTest(DockerClientTestCase):
assert len(bar.containers()) == 2
names = [c.name for c in bar.containers()]
assert 'composetest_bar_1' in names
assert 'composetest_bar_2' in names
assert len(names) == 2
assert all(name.startswith('composetest_bar_') for name in names)
def test_containers_one_off(self):
db = self.create_service('db')
@@ -89,18 +89,18 @@ class ServiceTest(DockerClientTestCase):
def test_project_is_added_to_container_name(self):
service = self.create_service('web')
create_and_start_container(service)
assert service.containers()[0].name == 'composetest_web_1'
assert service.containers()[0].name.startswith('composetest_web_')
def test_create_container_with_one_off(self):
db = self.create_service('db')
container = db.create_container(one_off=True)
assert container.name == 'composetest_db_run_1'
assert container.name.startswith('composetest_db_run_')
def test_create_container_with_one_off_when_existing_container_is_running(self):
db = self.create_service('db')
db.start()
container = db.create_container(one_off=True)
assert container.name == 'composetest_db_run_1'
assert container.name.startswith('composetest_db_run_')
def test_create_container_with_unspecified_volume(self):
service = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')])
@@ -424,6 +424,22 @@ class ServiceTest(DockerClientTestCase):
new_container = service.recreate_container(old_container)
assert new_container.get_mount('/data')['Source'] == volume_path
def test_recreate_volume_to_mount(self):
# https://github.com/docker/compose/issues/6280
service = Service(
project='composetest',
name='db',
client=self.client,
build={'context': 'tests/fixtures/dockerfile-with-volume'},
volumes=[MountSpec.parse({
'type': 'volume',
'target': '/data',
})]
)
old_container = create_and_start_container(service)
new_container = service.recreate_container(old_container)
assert new_container.get_mount('/data')['Source']
def test_duplicate_volume_trailing_slash(self):
"""
When an image specifies a volume, and the Compose file specifies a host path
@@ -489,7 +505,7 @@ class ServiceTest(DockerClientTestCase):
assert old_container.get('Config.Entrypoint') == ['top']
assert old_container.get('Config.Cmd') == ['-d', '1']
assert 'FOO=1' in old_container.get('Config.Env')
assert old_container.name == 'composetest_db_1'
assert old_container.name.startswith('composetest_db_')
service.start_container(old_container)
old_container.inspect() # reload volume data
volume_path = old_container.get_mount('/etc')['Source']
@@ -503,7 +519,7 @@ class ServiceTest(DockerClientTestCase):
assert new_container.get('Config.Entrypoint') == ['top']
assert new_container.get('Config.Cmd') == ['-d', '1']
assert 'FOO=2' in new_container.get('Config.Env')
assert new_container.name == 'composetest_db_1'
assert new_container.name.startswith('composetest_db_')
assert new_container.get_mount('/etc')['Source'] == volume_path
if not is_cluster(self.client):
assert (
@@ -836,13 +852,13 @@ class ServiceTest(DockerClientTestCase):
db = self.create_service('db')
web = self.create_service('web', links=[(db, None)])
create_and_start_container(db)
create_and_start_container(db)
db1 = create_and_start_container(db)
db2 = create_and_start_container(db)
create_and_start_container(web)
assert set(get_links(web.containers()[0])) == set([
'composetest_db_1', 'db_1',
'composetest_db_2', 'db_2',
db1.name, db1.name_without_project,
db2.name, db2.name_without_project,
'db'
])
@@ -851,30 +867,33 @@ class ServiceTest(DockerClientTestCase):
db = self.create_service('db')
web = self.create_service('web', links=[(db, 'custom_link_name')])
create_and_start_container(db)
create_and_start_container(db)
db1 = create_and_start_container(db)
db2 = create_and_start_container(db)
create_and_start_container(web)
assert set(get_links(web.containers()[0])) == set([
'composetest_db_1', 'db_1',
'composetest_db_2', 'db_2',
db1.name, db1.name_without_project,
db2.name, db2.name_without_project,
'custom_link_name'
])
@no_cluster('No legacy links support in Swarm')
def test_start_container_with_external_links(self):
db = self.create_service('db')
web = self.create_service('web', external_links=['composetest_db_1',
'composetest_db_2',
'composetest_db_3:db_3'])
db_ctnrs = [create_and_start_container(db) for _ in range(3)]
web = self.create_service(
'web', external_links=[
db_ctnrs[0].name,
db_ctnrs[1].name,
'{}:db_3'.format(db_ctnrs[2].name)
]
)
for _ in range(3):
create_and_start_container(db)
create_and_start_container(web)
assert set(get_links(web.containers()[0])) == set([
'composetest_db_1',
'composetest_db_2',
db_ctnrs[0].name,
db_ctnrs[1].name,
'db_3'
])
@@ -892,14 +911,14 @@ class ServiceTest(DockerClientTestCase):
def test_start_one_off_container_creates_links_to_its_own_service(self):
db = self.create_service('db')
create_and_start_container(db)
create_and_start_container(db)
db1 = create_and_start_container(db)
db2 = create_and_start_container(db)
c = create_and_start_container(db, one_off=OneOffFilter.only)
assert set(get_links(c)) == set([
'composetest_db_1', 'db_1',
'composetest_db_2', 'db_2',
db1.name, db1.name_without_project,
db2.name, db2.name_without_project,
'db'
])
@@ -1137,6 +1156,21 @@ class ServiceTest(DockerClientTestCase):
service.build()
assert service.image()
def test_build_with_illegal_leading_chars(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write('FROM busybox\nRUN echo "Embodiment of Scarlet Devil"\n')
service = Service(
'build_leading_slug', client=self.client,
project='___-composetest', build={
'context': text_type(base_dir)
}
)
assert service.image_name == 'composetest_build_leading_slug'
service.build()
assert service.image()
def test_start_container_stays_unprivileged(self):
service = self.create_service('web')
container = create_and_start_container(service).inspect()
@@ -1234,17 +1268,15 @@ class ServiceTest(DockerClientTestCase):
test that those containers are restarted and not removed/recreated.
"""
service = self.create_service('web')
next_number = service._next_container_number()
valid_numbers = [next_number, next_number + 1]
service.create_container(number=next_number)
service.create_container(number=next_number + 1)
service.create_container(number=1)
service.create_container(number=2)
ParallelStreamWriter.instance = None
with mock.patch('sys.stderr', new_callable=StringIO) as mock_stderr:
service.scale(2)
for container in service.containers():
assert container.is_running
assert container.number in valid_numbers
assert container.number in [1, 2]
captured_output = mock_stderr.getvalue()
assert 'Creating' not in captured_output
@@ -1295,10 +1327,8 @@ class ServiceTest(DockerClientTestCase):
assert len(service.containers()) == 1
assert service.containers()[0].is_running
assert (
"ERROR: for composetest_web_2 Cannot create container for service"
" web: Boom" in mock_stderr.getvalue()
)
assert "ERROR: for composetest_web_" in mock_stderr.getvalue()
assert "Cannot create container for service web: Boom" in mock_stderr.getvalue()
def test_scale_with_unexpected_exception(self):
"""Test that when scaling if the API returns an error, that is not of type
@@ -1565,16 +1595,17 @@ class ServiceTest(DockerClientTestCase):
}
compose_labels = {
LABEL_CONTAINER_NUMBER: '1',
LABEL_ONE_OFF: 'False',
LABEL_PROJECT: 'composetest',
LABEL_SERVICE: 'web',
LABEL_VERSION: __version__,
LABEL_CONTAINER_NUMBER: '1'
}
expected = dict(labels_dict, **compose_labels)
service = self.create_service('web', labels=labels_dict)
labels = create_and_start_container(service).labels.items()
ctnr = create_and_start_container(service)
labels = ctnr.labels.items()
for pair in expected.items():
assert pair in labels
@@ -1640,7 +1671,7 @@ class ServiceTest(DockerClientTestCase):
def test_duplicate_containers(self):
service = self.create_service('web')
options = service._get_container_create_options({}, 1)
options = service._get_container_create_options({}, service._next_container_number())
original = Container.create(service.client, **options)
assert set(service.containers(stopped=True)) == set([original])


@@ -55,8 +55,8 @@ class BasicProjectTest(ProjectTestCase):
def test_partial_change(self):
old_containers = self.run_up(self.cfg)
-        old_db = [c for c in old_containers if c.name_without_project == 'db_1'][0]
-        old_web = [c for c in old_containers if c.name_without_project == 'web_1'][0]
+        old_db = [c for c in old_containers if c.name_without_project.startswith('db_')][0]
+        old_web = [c for c in old_containers if c.name_without_project.startswith('web_')][0]
self.cfg['web']['command'] = '/bin/true'
@@ -71,7 +71,7 @@ class BasicProjectTest(ProjectTestCase):
created = list(new_containers - old_containers)
assert len(created) == 1
-        assert created[0].name_without_project == 'web_1'
+        assert created[0].name_without_project == old_web.name_without_project
assert created[0].get('Config.Cmd') == ['/bin/true']
def test_all_change(self):
@@ -114,7 +114,7 @@ class ProjectWithDependenciesTest(ProjectTestCase):
def test_up(self):
containers = self.run_up(self.cfg)
-        assert set(c.name_without_project for c in containers) == set(['db_1', 'web_1', 'nginx_1'])
+        assert set(c.service for c in containers) == set(['db', 'web', 'nginx'])
def test_change_leaf(self):
old_containers = self.run_up(self.cfg)
@@ -122,7 +122,7 @@ class ProjectWithDependenciesTest(ProjectTestCase):
self.cfg['nginx']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg)
-        assert set(c.name_without_project for c in new_containers - old_containers) == set(['nginx_1'])
+        assert set(c.service for c in new_containers - old_containers) == set(['nginx'])
def test_change_middle(self):
old_containers = self.run_up(self.cfg)
@@ -130,7 +130,7 @@ class ProjectWithDependenciesTest(ProjectTestCase):
self.cfg['web']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg)
-        assert set(c.name_without_project for c in new_containers - old_containers) == set(['web_1'])
+        assert set(c.service for c in new_containers - old_containers) == set(['web'])
def test_change_middle_always_recreate_deps(self):
old_containers = self.run_up(self.cfg, always_recreate_deps=True)
@@ -138,8 +138,7 @@ class ProjectWithDependenciesTest(ProjectTestCase):
self.cfg['web']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg, always_recreate_deps=True)
-        assert set(c.name_without_project
-                   for c in new_containers - old_containers) == {'web_1', 'nginx_1'}
+        assert set(c.service for c in new_containers - old_containers) == {'web', 'nginx'}
def test_change_root(self):
old_containers = self.run_up(self.cfg)
@@ -147,7 +146,7 @@ class ProjectWithDependenciesTest(ProjectTestCase):
self.cfg['db']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg)
-        assert set(c.name_without_project for c in new_containers - old_containers) == set(['db_1'])
+        assert set(c.service for c in new_containers - old_containers) == set(['db'])
def test_change_root_always_recreate_deps(self):
old_containers = self.run_up(self.cfg, always_recreate_deps=True)
@@ -155,8 +154,9 @@ class ProjectWithDependenciesTest(ProjectTestCase):
self.cfg['db']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg, always_recreate_deps=True)
-        assert set(c.name_without_project
-                   for c in new_containers - old_containers) == {'db_1', 'web_1', 'nginx_1'}
+        assert set(c.service for c in new_containers - old_containers) == {
+            'db', 'web', 'nginx'
+        }
def test_change_root_no_recreate(self):
old_containers = self.run_up(self.cfg)
@@ -195,9 +195,18 @@ class ProjectWithDependenciesTest(ProjectTestCase):
web, = [c for c in containers if c.service == 'web']
nginx, = [c for c in containers if c.service == 'nginx']
db, = [c for c in containers if c.service == 'db']
-        assert set(get_links(web)) == {'composetest_db_1', 'db', 'db_1'}
-        assert set(get_links(nginx)) == {'composetest_web_1', 'web', 'web_1'}
+        assert set(get_links(web)) == {
+            'composetest_db_1',
+            'db',
+            'db_1',
+        }
+        assert set(get_links(nginx)) == {
+            'composetest_web_1',
+            'web',
+            'web_1',
+        }
class ServiceStateTest(DockerClientTestCase):


@@ -139,7 +139,9 @@ class DockerClientTestCase(unittest.TestCase):
def check_build(self, *args, **kwargs):
kwargs.setdefault('rm', True)
build_output = self.client.build(*args, **kwargs)
-        stream_output(build_output, open('/dev/null', 'w'))
+        with open(os.devnull, 'w') as devnull:
+            for event in stream_output(build_output, devnull):
+                pass
def require_api_version(self, minimum):
api_version = self.client.version()['ApiVersion']

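The rewritten check_build drains stream_output in a loop instead of calling it once for its side effects, which suggests stream_output now yields build events lazily; writing to os.devnull keeps the build log quiet while the loop forces the whole stream to be read. The pattern in miniature (stream contents invented):

    import json

    def fake_stream():
        yield b'{"stream": "Step 1/2 : FROM busybox"}'
        yield b'{"stream": "Successfully built 12345"}'

    # creating the generator does nothing; iterating is what
    # actually decodes and processes the events
    events = [json.loads(chunk) for chunk in fake_stream()]
    assert events[-1]['stream'].startswith('Successfully built')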

@@ -155,6 +155,14 @@ class TestCallDocker(object):
'docker', '--host', 'tcp://mydocker.net:2333', 'ps'
]
+    def test_with_http_host(self):
+        with mock.patch('subprocess.call') as fake_call:
+            call_docker(['ps'], {'--host': 'http://mydocker.net:2333'})
+            assert fake_call.call_args[0][0] == [
+                'docker', '--host', 'tcp://mydocker.net:2333', 'ps',
+            ]
def test_with_host_option_shorthand_equal(self):
with mock.patch('subprocess.call') as fake_call:
call_docker(['ps'], {'--host': '=tcp://mydocker.net:2333'})

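test_with_http_host pins down how call_docker is expected to normalize the --host option before invoking the docker CLI: an http:// scheme is rewritten to tcp://, and (per the shorthand test) a leading '=' from --host=... is tolerated. A minimal sketch of that normalization, with names not taken from the diff:

    def normalize_host(host):
        host = host.lstrip('=')  # accept the --host=tcp://... shorthand
        if host.startswith('http://'):
            # the docker CLI only understands tcp/unix/npipe schemes
            host = 'tcp://' + host[len('http://'):]
        return host

    assert normalize_host('http://mydocker.net:2333') == 'tcp://mydocker.net:2333'
    assert normalize_host('=tcp://mydocker.net:2333') == 'tcp://mydocker.net:2333'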

@@ -8,6 +8,7 @@ import os
import shutil
import tempfile
from operator import itemgetter
+from random import shuffle
import py
import pytest
@@ -42,7 +43,7 @@ from tests import unittest
DEFAULT_VERSION = V2_0
-def make_service_dict(name, service_dict, working_dir, filename=None):
+def make_service_dict(name, service_dict, working_dir='.', filename=None):
"""Test helper function to construct a ServiceExtendsResolver
"""
resolver = config.ServiceExtendsResolver(
@@ -612,6 +613,19 @@ class ConfigTest(unittest.TestCase):
excinfo.exconly()
)
+    def test_config_integer_service_property_raise_validation_error(self):
+        with pytest.raises(ConfigurationError) as excinfo:
+            config.load(
+                build_config_details({
+                    'version': '2.1',
+                    'services': {'foobar': {'image': 'busybox', 1234: 'hah'}}
+                }, 'working_dir', 'filename.yml')
+            )
+
+        assert (
+            "Unsupported config option for services.foobar: '1234'" in excinfo.exconly()
+        )
def test_config_invalid_service_name_raise_validation_error(self):
with pytest.raises(ConfigurationError) as excinfo:
config.load(
@@ -1291,7 +1305,7 @@ class ConfigTest(unittest.TestCase):
assert tmpfs_mount.target == '/tmpfs'
assert not tmpfs_mount.is_named_volume
-        assert host_mount.source == os.path.normpath('/abc')
+        assert host_mount.source == '/abc'
assert host_mount.target == '/xyz'
assert not host_mount.is_named_volume
@@ -1344,6 +1358,38 @@ class ConfigTest(unittest.TestCase):
assert ('networks.foo.ipam.config contains an invalid type,'
' it should be an object') in excinfo.exconly()
+    def test_config_valid_ipam_config(self):
+        ipam_config = {
+            'subnet': '172.28.0.0/16',
+            'ip_range': '172.28.5.0/24',
+            'gateway': '172.28.5.254',
+            'aux_addresses': {
+                'host1': '172.28.1.5',
+                'host2': '172.28.1.6',
+                'host3': '172.28.1.7',
+            },
+        }
+        networks = config.load(
+            build_config_details(
+                {
+                    'version': str(V2_1),
+                    'networks': {
+                        'foo': {
+                            'driver': 'default',
+                            'ipam': {
+                                'driver': 'default',
+                                'config': [ipam_config],
+                            }
+                        }
+                    }
+                },
+                filename='filename.yml',
+            )
+        ).networks
+
+        assert 'foo' in networks
+        assert networks['foo']['ipam']['config'] == [ipam_config]
def test_config_valid_service_names(self):
for valid_name in ['_', '-', '.__.', '_what-up.', 'what_.up----', 'whatup']:
services = config.load(
@@ -2611,6 +2657,45 @@ class ConfigTest(unittest.TestCase):
['c 7:128 rwm', 'x 3:244 rw', 'f 0:128 n']
)
+    def test_merge_isolation(self):
+        base = {
+            'image': 'bar',
+            'isolation': 'default',
+        }
+
+        override = {
+            'isolation': 'hyperv',
+        }
+
+        actual = config.merge_service_dicts(base, override, V2_3)
+        assert actual == {
+            'image': 'bar',
+            'isolation': 'hyperv',
+        }
+
+    def test_merge_storage_opt(self):
+        base = {
+            'image': 'bar',
+            'storage_opt': {
+                'size': '1G',
+                'readonly': 'false',
+            }
+        }
+
+        override = {
+            'storage_opt': {
+                'size': '2G',
+                'encryption': 'aes',
+            }
+        }
+
+        actual = config.merge_service_dicts(base, override, V2_3)
+        assert actual['storage_opt'] == {
+            'size': '2G',
+            'readonly': 'false',
+            'encryption': 'aes',
+        }
def test_external_volume_config(self):
config_details = build_config_details({
'version': '2',
@@ -3504,6 +3589,13 @@ class VolumeConfigTest(unittest.TestCase):
).services[0]
assert d['volumes'] == [VolumeSpec.parse('/host/path:/container/path')]
+    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths')
+    def test_volumes_order_is_preserved(self):
+        volumes = ['/{0}:/{0}'.format(i) for i in range(0, 6)]
+        shuffle(volumes)
+        cfg = make_service_dict('foo', {'build': '.', 'volumes': volumes})
+        assert cfg['volumes'] == volumes
@pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths')
@mock.patch.dict(os.environ)
def test_volume_binding_with_home(self):
@@ -5064,3 +5156,19 @@ class SerializeTest(unittest.TestCase):
serialized_config = yaml.load(serialize_config(config_dict))
serialized_service = serialized_config['services']['web']
assert serialized_service['command'] == 'echo 十六夜 咲夜'
+    def test_serialize_external_false(self):
+        cfg = {
+            'version': '3.4',
+            'volumes': {
+                'test': {
+                    'name': 'test-false',
+                    'external': False
+                }
+            }
+        }
+
+        config_dict = config.load(build_config_details(cfg))
+        serialized_config = yaml.load(serialize_config(config_dict))
+        serialized_volume = serialized_config['volumes']['test']
+        assert serialized_volume['external'] is False

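test_merge_isolation and test_merge_storage_opt above pin down two different merge rules: scalar options such as isolation are replaced wholesale by the override, while mapping options such as storage_opt are merged key by key with the override winning on conflicts. In outline (a sketch, not the merge_service_dicts implementation):

    def merge_option(base_value, override_value):
        # storage_opt-style mappings: union of keys, override wins ('size' -> '2G')
        if isinstance(base_value, dict) and isinstance(override_value, dict):
            merged = dict(base_value)
            merged.update(override_value)
            return merged
        # isolation-style scalars: the override simply replaces the base
        return override_value if override_value is not None else base_value

    assert merge_option('default', 'hyperv') == 'hyperv'
    assert merge_option({'size': '1G', 'readonly': 'false'},
                        {'size': '2G', 'encryption': 'aes'}) == {
        'size': '2G', 'readonly': 'false', 'encryption': 'aes'}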

@@ -332,6 +332,37 @@ def test_interpolate_environment_external_resource_convert_types(mock_env):
assert value == expected
+def test_interpolate_service_name_uses_dot(mock_env):
+    entry = {
+        'service.1': {
+            'image': 'busybox',
+            'ulimits': {
+                'nproc': '${POSINT}',
+                'nofile': {
+                    'soft': '${POSINT}',
+                    'hard': '${DEFAULT:-40000}'
+                },
+            },
+        }
+    }
+    expected = {
+        'service.1': {
+            'image': 'busybox',
+            'ulimits': {
+                'nproc': 50,
+                'nofile': {
+                    'soft': 50,
+                    'hard': 40000
+                },
+            },
+        }
+    }
+
+    value = interpolate_environment_variables(V3_4, entry, 'service', mock_env)
+    assert value == expected
def test_escaped_interpolation(defaults_interpolator):
assert defaults_interpolator('$${foo}') == '${foo}'

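test_interpolate_service_name_uses_dot guards the fix for dots in resource identifiers: typed interpolation (turning '${POSINT}' into the integer 50 for ulimits) matches conversion rules against a config path, and joining path components on '.' would incorrectly split 'service.1'. Keeping the components as a tuple avoids that; a hypothetical illustration (the matching logic here is invented):

    def matches(template, path):
        # None acts as a wildcard for the resource-name component
        return len(template) == len(path) and all(
            t is None or t == p for t, p in zip(template, path)
        )

    # 'service.1' stays a single component, dot and all
    assert matches(('service', None, 'ulimits', 'nproc'),
                   ('service', 'service.1', 'ulimits', 'nproc'))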

@@ -5,6 +5,8 @@ import docker
from .. import mock
from .. import unittest
+from compose.const import LABEL_ONE_OFF
+from compose.const import LABEL_SLUG
from compose.container import Container
from compose.container import get_container_name
@@ -30,7 +32,7 @@ class ContainerTest(unittest.TestCase):
"Labels": {
"com.docker.compose.project": "composetest",
"com.docker.compose.service": "web",
"com.docker.compose.container-number": 7,
"com.docker.compose.container-number": "7",
},
}
}
@@ -95,6 +97,15 @@ class ContainerTest(unittest.TestCase):
container = Container(None, self.container_dict, has_been_inspected=True)
assert container.name_without_project == "custom_name_of_container"
+    def test_name_without_project_one_off(self):
+        self.container_dict['Name'] = "/composetest_web_092cd63296f"
+        self.container_dict['Config']['Labels'][LABEL_SLUG] = (
+            "092cd63296fdc446ad432d3905dd1fcbe12a2ba6b52"
+        )
+        self.container_dict['Config']['Labels'][LABEL_ONE_OFF] = 'True'
+        container = Container(None, self.container_dict, has_been_inspected=True)
+        assert container.name_without_project == 'web_092cd63296fd'
def test_inspect_if_not_inspected(self):
mock_client = mock.create_autospec(docker.APIClient)
container = Container(mock_client, dict(Id="the_id"))

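test_name_without_project_one_off shows the display name of a one-off container being derived from labels rather than parsed out of the Docker name: the service label plus the first 12 characters of the LABEL_SLUG value. Roughly (label values assumed from compose.const):

    LABEL_ONE_OFF = 'com.docker.compose.oneoff'
    LABEL_SLUG = 'com.docker.compose.slug'

    def name_without_project(labels, number=None):
        service = labels['com.docker.compose.service']
        if labels.get(LABEL_ONE_OFF) == 'True' and labels.get(LABEL_SLUG):
            # one-off: service name plus a truncated slug
            return '{0}_{1}'.format(service, labels[LABEL_SLUG][:12])
        return '{0}_{1}'.format(service, number)

    labels = {
        'com.docker.compose.service': 'web',
        LABEL_ONE_OFF: 'True',
        LABEL_SLUG: '092cd63296fdc446ad432d3905dd1fcbe12a2ba6b52',
    }
    assert name_without_project(labels) == 'web_092cd63296fd'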

@@ -21,7 +21,7 @@ class ProgressStreamTestCase(unittest.TestCase):
b'31019763, "start": 1413653874, "total": 62763875}, '
b'"progress": "..."}',
]
-        events = progress_stream.stream_output(output, StringIO())
+        events = list(progress_stream.stream_output(output, StringIO()))
assert len(events) == 1
def test_stream_output_div_zero(self):
@@ -30,7 +30,7 @@ class ProgressStreamTestCase(unittest.TestCase):
b'0, "start": 1413653874, "total": 0}, '
b'"progress": "..."}',
]
-        events = progress_stream.stream_output(output, StringIO())
+        events = list(progress_stream.stream_output(output, StringIO()))
assert len(events) == 1
def test_stream_output_null_total(self):
@@ -39,7 +39,7 @@ class ProgressStreamTestCase(unittest.TestCase):
b'0, "start": 1413653874, "total": null}, '
b'"progress": "..."}',
]
-        events = progress_stream.stream_output(output, StringIO())
+        events = list(progress_stream.stream_output(output, StringIO()))
assert len(events) == 1
def test_stream_output_progress_event_tty(self):
@@ -52,7 +52,7 @@ class ProgressStreamTestCase(unittest.TestCase):
return True
output = TTYStringIO()
-        events = progress_stream.stream_output(events, output)
+        events = list(progress_stream.stream_output(events, output))
assert len(output.getvalue()) > 0
def test_stream_output_progress_event_no_tty(self):
@@ -61,7 +61,7 @@ class ProgressStreamTestCase(unittest.TestCase):
]
output = StringIO()
-        events = progress_stream.stream_output(events, output)
+        events = list(progress_stream.stream_output(events, output))
assert len(output.getvalue()) == 0
def test_stream_output_no_progress_event_no_tty(self):
@@ -70,7 +70,7 @@ class ProgressStreamTestCase(unittest.TestCase):
]
output = StringIO()
-        events = progress_stream.stream_output(events, output)
+        events = list(progress_stream.stream_output(events, output))
assert len(output.getvalue()) > 0
def test_mismatched_encoding_stream_write(self):

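Every assertion in this file gains a list() wrapper for the same reason check_build grew a loop earlier: once stream_output behaves as a generator, it has no len() and produces no output until consumed. The failure mode the wrapping avoids, in miniature:

    def gen():
        yield 'event'

    events = gen()
    try:
        len(events)  # TypeError: object of type 'generator' has no len()
    except TypeError:
        events = list(gen())  # materialize first, then count
    assert len(events) == 1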

@@ -29,6 +29,7 @@ class ProjectTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
self.mock_client._general_configs = {}
+        self.mock_client.api_version = docker.constants.DEFAULT_DOCKER_API_VERSION
def test_from_config_v1(self):
config = Config(
@@ -578,21 +579,21 @@ class ProjectTest(unittest.TestCase):
)
project = Project.from_config(name='test', client=self.mock_client, config_data=config_data)
-        assert project.get_service('web').options.get('platform') is None
+        assert project.get_service('web').platform is None

        project = Project.from_config(
            name='test', client=self.mock_client, config_data=config_data, default_platform='windows'
        )
-        assert project.get_service('web').options.get('platform') == 'windows'
+        assert project.get_service('web').platform == 'windows'

        service_config['platform'] = 'linux/s390x'
        project = Project.from_config(name='test', client=self.mock_client, config_data=config_data)
-        assert project.get_service('web').options.get('platform') == 'linux/s390x'
+        assert project.get_service('web').platform == 'linux/s390x'

        project = Project.from_config(
            name='test', client=self.mock_client, config_data=config_data, default_platform='windows'
        )
-        assert project.get_service('web').options.get('platform') == 'linux/s390x'
+        assert project.get_service('web').platform == 'linux/s390x'
@mock.patch('compose.parallel.ParallelStreamWriter._write_noansi')
def test_error_parallel_pull(self, mock_write):

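The switch from options.get('platform') to a platform property matches the precedence rules covered by the service unit tests further down: an explicit service-level platform beats default_platform, and both are dropped when the API version predates platform support (1.35 in these tests). As a sketch of that rule only:

    def effective_platform(service_platform, default_platform, api_version):
        # platform flags require API >= 1.35; older daemons would reject them
        if tuple(int(p) for p in api_version.split('.')) < (1, 35):
            return None
        return service_platform or default_platform

    assert effective_platform('linux/arm', 'osx', '1.35') == 'linux/arm'
    assert effective_platform(None, 'windows', '1.32') is None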

@@ -21,6 +21,7 @@ from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.const import SECRETS_PATH
+from compose.const import WINDOWS_LONGPATH_PREFIX
from compose.container import Container
from compose.errors import OperationFailedError
from compose.parallel import ParallelStreamWriter
@@ -38,6 +39,7 @@ from compose.service import NeedsBuildError
from compose.service import NetworkMode
from compose.service import NoSuchImageError
from compose.service import parse_repository_tag
+from compose.service import rewrite_build_path
from compose.service import Service
from compose.service import ServiceNetworkMode
from compose.service import warn_on_masked_volume
@@ -317,13 +319,14 @@ class ServiceTest(unittest.TestCase):
self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
prev_container = mock.Mock(
id='ababab',
-            image_config={'ContainerConfig': {}})
+            image_config={'ContainerConfig': {}}
+        )
+        prev_container.full_slug = 'abcdefff1234'
        prev_container.get.return_value = None

        opts = service._get_container_create_options(
-            {},
-            1,
-            previous_container=prev_container)
+            {}, 1, previous_container=prev_container
+        )
assert service.options['labels'] == labels
assert service.options['environment'] == environment
@@ -355,11 +358,13 @@ class ServiceTest(unittest.TestCase):
}.get(key, None)
prev_container.get.side_effect = container_get
+        prev_container.full_slug = 'abcdefff1234'

        opts = service._get_container_create_options(
            {},
            1,
-            previous_container=prev_container)
+            previous_container=prev_container
+        )
assert opts['environment'] == ['affinity:container==ababab']
@@ -370,6 +375,7 @@ class ServiceTest(unittest.TestCase):
id='ababab',
image_config={'ContainerConfig': {}})
prev_container.get.return_value = None
+        prev_container.full_slug = 'abcdefff1234'
opts = service._get_container_create_options(
{},
@@ -386,7 +392,7 @@ class ServiceTest(unittest.TestCase):
@mock.patch('compose.service.Container', autospec=True)
def test_get_container(self, mock_container_class):
-        container_dict = dict(Name='default_foo_2')
+        container_dict = dict(Name='default_foo_2_bdfa3ed91e2c')
self.mock_client.containers.return_value = [container_dict]
service = Service('foo', image='foo', client=self.mock_client)
@@ -446,9 +452,24 @@ class ServiceTest(unittest.TestCase):
with pytest.raises(OperationFailedError):
service.pull()
+    def test_pull_image_with_default_platform(self):
+        self.mock_client.api_version = '1.35'
+
+        service = Service(
+            'foo', client=self.mock_client, image='someimage:sometag',
+            default_platform='linux'
+        )
+        assert service.platform == 'linux'
+
+        service.pull()
+        assert self.mock_client.pull.call_count == 1
+        call_args = self.mock_client.pull.call_args
+        assert call_args[1]['platform'] == 'linux'
@mock.patch('compose.service.Container', autospec=True)
def test_recreate_container(self, _):
mock_container = mock.create_autospec(Container)
+        mock_container.full_slug = 'abcdefff1234'
service = Service('foo', client=self.mock_client, image='someimage')
service.image = lambda: {'Id': 'abc123'}
new_container = service.recreate_container(mock_container)
@@ -462,6 +483,7 @@ class ServiceTest(unittest.TestCase):
@mock.patch('compose.service.Container', autospec=True)
def test_recreate_container_with_timeout(self, _):
mock_container = mock.create_autospec(Container)
+        mock_container.full_slug = 'abcdefff1234'
self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
service = Service('foo', client=self.mock_client, image='someimage')
service.recreate_container(mock_container, timeout=1)
@@ -538,7 +560,7 @@ class ServiceTest(unittest.TestCase):
assert self.mock_client.build.call_count == 1
assert not self.mock_client.build.call_args[1]['pull']
-    def test_build_does_with_platform(self):
+    def test_build_with_platform(self):
self.mock_client.api_version = '1.35'
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
@@ -551,6 +573,47 @@ class ServiceTest(unittest.TestCase):
call_args = self.mock_client.build.call_args
assert call_args[1]['platform'] == 'linux'
+    def test_build_with_default_platform(self):
+        self.mock_client.api_version = '1.35'
+        self.mock_client.build.return_value = [
+            b'{"stream": "Successfully built 12345"}',
+        ]
+
+        service = Service(
+            'foo', client=self.mock_client, build={'context': '.'},
+            default_platform='linux'
+        )
+        assert service.platform == 'linux'
+        service.build()
+
+        assert self.mock_client.build.call_count == 1
+        call_args = self.mock_client.build.call_args
+        assert call_args[1]['platform'] == 'linux'
+
+    def test_service_platform_precedence(self):
+        self.mock_client.api_version = '1.35'
+
+        service = Service(
+            'foo', client=self.mock_client, platform='linux/arm',
+            default_platform='osx'
+        )
+        assert service.platform == 'linux/arm'
+
+    def test_service_ignore_default_platform_with_unsupported_api(self):
+        self.mock_client.api_version = '1.32'
+        self.mock_client.build.return_value = [
+            b'{"stream": "Successfully built 12345"}',
+        ]
+
+        service = Service(
+            'foo', client=self.mock_client, default_platform='windows', build={'context': '.'}
+        )
+        assert service.platform is None
+        service.build()
+        assert self.mock_client.build.call_count == 1
+        call_args = self.mock_client.build.call_args
+        assert call_args[1]['platform'] is None
def test_build_with_override_build_args(self):
self.mock_client.build.return_value = [
b'{"stream": "Successfully built 12345"}',
@@ -646,17 +709,19 @@ class ServiceTest(unittest.TestCase):
image='example.com/foo',
client=self.mock_client,
network_mode=NetworkMode('bridge'),
-            networks={'bridge': {}},
+            networks={'bridge': {}, 'net2': {}},
            links=[(Service('one', client=self.mock_client), 'one')],
-            volumes_from=[VolumeFromSpec(Service('two', client=self.mock_client), 'rw', 'service')]
+            volumes_from=[VolumeFromSpec(Service('two', client=self.mock_client), 'rw', 'service')],
+            volumes=[VolumeSpec('/ext', '/int', 'ro')],
+            build={'context': 'some/random/path'},
)
config_hash = service.config_hash
for api_version in set(API_VERSIONS.values()):
self.mock_client.api_version = api_version
-            assert service._get_container_create_options({}, 1)['labels'][LABEL_CONFIG_HASH] == (
-                config_hash
-            )
+            assert service._get_container_create_options(
+                {}, 1
+            )['labels'][LABEL_CONFIG_HASH] == config_hash
def test_remove_image_none(self):
web = Service('web', image='example', client=self.mock_client)
@@ -974,6 +1039,23 @@ class ServiceTest(unittest.TestCase):
assert len(override_opts['binds']) == 1
assert override_opts['binds'][0] == 'vol:/data:rw'
+    def test_volumes_order_is_preserved(self):
+        service = Service('foo', client=self.mock_client)
+        volumes = [
+            VolumeSpec.parse(cfg) for cfg in [
+                '/v{0}:/v{0}:rw'.format(i) for i in range(6)
+            ]
+        ]
+        ctnr_opts, override_opts = service._build_container_volume_options(
+            previous_container=None,
+            container_options={
+                'volumes': volumes,
+                'environment': {},
+            },
+            override_options={},
+        )
+        assert override_opts['binds'] == [vol.repr() for vol in volumes]
class TestServiceNetwork(unittest.TestCase):
def setUp(self):
@@ -1406,3 +1488,28 @@ class ServiceSecretTest(unittest.TestCase):
assert volumes[0].source == secret1['file']
assert volumes[0].target == '{}/{}'.format(SECRETS_PATH, secret1['secret'].source)
+class RewriteBuildPathTest(unittest.TestCase):
+    @mock.patch('compose.service.IS_WINDOWS_PLATFORM', True)
+    def test_rewrite_url_no_prefix(self):
+        urls = [
+            'http://test.com',
+            'https://test.com',
+            'git://test.com',
+            'github.com/test/test',
+            'git@test.com',
+        ]
+        for u in urls:
+            assert rewrite_build_path(u) == u
+
+    @mock.patch('compose.service.IS_WINDOWS_PLATFORM', True)
+    def test_rewrite_windows_path(self):
+        assert rewrite_build_path('C:\\context') == WINDOWS_LONGPATH_PREFIX + 'C:\\context'
+        assert rewrite_build_path(
+            rewrite_build_path('C:\\context')
+        ) == rewrite_build_path('C:\\context')
+
+    @mock.patch('compose.service.IS_WINDOWS_PLATFORM', False)
+    def test_rewrite_unix_path(self):
+        assert rewrite_build_path('/context') == '/context'

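Taken together, the three tests constrain rewrite_build_path: on Windows a local context path gains the long-path prefix, URLs and git remotes pass through untouched, and the rewrite is idempotent. A sketch consistent with those constraints (the prefix value is assumed; this is not necessarily the shipped code):

    IS_WINDOWS_PLATFORM = True           # patched per-test above
    WINDOWS_LONGPATH_PREFIX = '\\\\?\\'  # assumed value of the constant

    def rewrite_build_path(path):
        if not IS_WINDOWS_PLATFORM:
            return path
        if '://' in path or path.startswith(('git@', 'github.com/')):
            return path  # remote contexts are never local filesystem paths
        if path.startswith(WINDOWS_LONGPATH_PREFIX):
            return path  # already rewritten; keeps the function idempotent
        return WINDOWS_LONGPATH_PREFIX + path

    assert rewrite_build_path('C:\\context') == WINDOWS_LONGPATH_PREFIX + 'C:\\context'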

@@ -68,3 +68,11 @@ class TestParseBytes(object):
assert utils.parse_bytes(123) == 123
assert utils.parse_bytes('foobar') is None
assert utils.parse_bytes('123') == 123
+class TestMoreItertools(object):
+    def test_unique_everseen(self):
+        unique = utils.unique_everseen
+        assert list(unique([2, 1, 2, 1])) == [2, 1]
+        assert list(unique([2, 1, 2, 1], hash)) == [2, 1]
+        assert list(unique([2, 1, 2, 1], lambda x: 'key_%s' % x)) == [2, 1]

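utils.unique_everseen behaves like the classic itertools recipe of the same name: the first occurrence wins, input order is preserved, and an optional key function decides what counts as already seen. The recipe, for reference:

    def unique_everseen(iterable, key=None):
        seen = set()
        for item in iterable:
            marker = item if key is None else key(item)
            if marker not in seen:
                seen.add(marker)
                yield item

    assert list(unique_everseen([2, 1, 2, 1])) == [2, 1]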

@@ -1,5 +1,5 @@
[tox]
-envlist = py27,py36,pre-commit
+envlist = py27,py36,py37,pre-commit
[testenv]
usedevelop=True