Compare commits


85 Commits

Author SHA1 Message Date
Joffrey F
e12f3b9465 Bump 1.15.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-25 16:19:49 -07:00
Kirin Rastogi
daa6ae69f2 Add exclusion for networkname
Signed-off-by: Kirin Rastogi <kirin.Rastogi@avg.com>
Signed-off-by: Kirin Rastogi <rastogikirin@gmail.com>
2017-07-25 16:05:07 -07:00
Joffrey F
046d12fb33 Scripts build and push compose-tests image
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-25 16:05:07 -07:00
Joffrey F
34db8cc9e8 Some more test adjustments for Swarm support
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-25 16:05:07 -07:00
Evan Shaw
d2a1c3128c Always silence pull output with --parallel
This is how things were prior to the addition of the --quiet flag.
Making it not silent produces output that's weird and difficult to read.

Signed-off-by: Evan Shaw <evan@vendhq.com>
2017-07-25 16:05:04 -07:00
NikitaVlaznev
d8316704dd Fix double silent argument value
Fix for "TypeError: pull() got multiple values for keyword argument 'silent'."
Change e9b6cc23fc caused an additional value to be passed for the 'silent' argument, which was already being passed since f85da99ef3

Signed-off-by: Nikita Vlaznev <nikita.dto@gmail.com>
2017-07-25 16:05:00 -07:00
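The TypeError this commit fixes comes from supplying the same keyword argument twice — once explicitly and once via an unpacked dict. A minimal sketch (the `pull` signature here is hypothetical, not Compose's actual one) reproduces it:

```python
def pull(service, silent=False, **kwargs):
    # Stand-in for the real pull method; only the signature matters here.
    return silent

# A dict built elsewhere already contains 'silent'; passing it again
# explicitly triggers the crash described in the commit message:
opts = {'silent': True}
try:
    pull('web', silent=True, **opts)
    error = None
except TypeError as exc:
    error = str(exc)  # "pull() got multiple values for keyword argument 'silent'"
```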
Joel Barciauskas
27f48f6481 Add --quiet parameter to docker-compose pull, using existing silent flag
Signed-off-by: Joel Barciauskas <barciajo@gmail.com>
2017-07-25 16:04:53 -07:00
Alexey Rokhin
b4bec63ea8 service_test.py reorder imports
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-07-25 16:04:53 -07:00
Alexey Rokhin
f4824416a4 skip cpu_percent test for Linux
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-07-25 16:04:53 -07:00
Joffrey F
344a69331c Bump 1.15.0-rc1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
56a23bfcd2 Improved version comparisons throughout the codebase
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Vadim Semenov
6ff6528d45 Optimize "extends" without file specification
Loading the same config file adds about 100ms per extension
service, which results in painfully slow CLI calls when a config
consists of a couple dozen services.

This patch makes Compose re-use config files.

Signed-off-by: Vadim Semenov <protoss.player@gmail.com>
2017-07-13 17:37:26 -07:00
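The reuse described in the commit can be sketched with a simple memoizing loader — an assumption for illustration; the actual patch reuses already-constructed config objects rather than a cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def load_config_file(filename):
    # Stand-in for parsing YAML; the point is that repeated "extends"
    # references to the same file hit the cache instead of re-reading
    # and re-parsing the file each time.
    with open(filename) as f:
        return f.read()
```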
Joffrey F
c41057aa52 Code warning for the well-intentioned folks that keep wanting to change this
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
2d21bf6a50 Make sure y/n values are quoted in serialized output
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
af182bd3cc Add 'socks' extra to help with proxy environment.
SOCKS support will be included in the bundled (binary) version

Update some packages in requirements.txt and add some implicit deps

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
d475e0c1e3 Add "network" field to build configuration
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
0916f124d0 scale property should be merged according to standard scalar rules
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
ec4ba7752f Fix override volume merging + add acceptance test
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Andy Neff
4796e04cae Change --volume behavior to add instead of replace mounts
Signed-off-by: Andy Neff <andrew.neff@visionsystemsinc.com>
2017-07-13 17:37:26 -07:00
Evan Shaw
154adc5807 Align status output for parallel_execute
Previously docker-compose would output lines that looked like:

    Starting service ... done
    Starting short ...
    Starting service-with-a-long-name ... done

It's difficult to scan down this output and get an idea of what's happening.

Now the statuses are aligned, and output looks like this:

    Starting service                  ... done
    Starting short                    ...
    Starting service-with-a-long-name ... done

To me, this is quite a bit easier to read.

Signed-off-by: Evan Shaw <evan@vendhq.com>
2017-07-13 17:37:26 -07:00
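The alignment described above amounts to padding each service name to the width of the longest one before appending the status. A minimal sketch (helper name hypothetical):

```python
def aligned_statuses(events):
    """Pad each name to the longest name's width so the '...' markers
    and statuses line up in a single column."""
    width = max(len(name) for _, name, _ in events)
    return ['{} {} ... {}'.format(action, name.ljust(width), status).rstrip()
            for action, name, status in events]
```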
Joffrey F
41976b0f7f Add support for service:name pid config
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Evan Shaw
a891fc1d9a Always silence pull output with --parallel
This is how things were prior to the addition of the --quiet flag.
Making it not silent produces output that's weird and difficult to read.

Signed-off-by: Evan Shaw <evan@vendhq.com>
2017-07-13 17:37:26 -07:00
Joffrey F
5ee7aacca0 Bump docker Python SDK version -> 2.4.2
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
b4eaddf984 Add storage_opt to 2.2 schema
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
e22524474a Ignore test failures in storage_opt test
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
dinesh
6a957294df Add storage_opt in v2.1
Signed-off-by: dinesh <dineshpy07@gmail.com>
2017-07-13 17:37:26 -07:00
Joffrey F
1dfdbe6f94 Fix ports sorting on Python 3
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
bb4adf2b0f 1.15.0dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
NikitaVlaznev
3bd5a37429 Fix double silent argument value
Fix for "TypeError: pull() got multiple values for keyword argument 'silent'."
Change e9b6cc23fc caused an additional value to be passed for the 'silent' argument, which was already being passed since f85da99ef3

Signed-off-by: Nikita Vlaznev <nikita.dto@gmail.com>
2017-07-13 17:37:26 -07:00
Joffrey F
a0119ae1a5 Rewriting tests to be UCP/Swarm compatible
- Event may contain more information in some cases.
  Don't assume order or format
- Don't assume ports are always exposed on 0.0.0.0 by default
- Absence of HostConfig in a create payload sometimes causes an error at the
  engine level
- In Swarm, volume names are prefixed by "<node_name>/"
- When testing against Swarm, the default network driver is overlay
- Ensure custom test networks are always attachable
- Handle Swarm network names
- Some params moved to host config in recent (1.21+) version
- Conditional test skips for Swarm environments

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joel Barciauskas
59c4c2388e Add --quiet parameter to docker-compose pull, using existing silent flag
Signed-off-by: Joel Barciauskas <barciajo@gmail.com>
2017-07-13 17:37:26 -07:00
Joffrey F
86a0e36348 s/docker daemon/dockerd/
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Joffrey F
85d2c0a314 Take editions into account when selecting test engine versions
Get candidates from moby/moby and docker/docker-ce repos

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-07-13 17:37:26 -07:00
Stefan Pietsch
33c7c750e8 check hash sums of downloaded files
Signed-off-by: Stefan Pietsch <mail.ipv4v6+gh@gmail.com>
2017-07-13 17:37:26 -07:00
Sebastiaan van Stijn
74f5037f78 Add Joffrey to maintainers
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2017-07-13 17:37:26 -07:00
Colin Hebert
645d35612d Add support for labels during build
Signed-off-by: Colin Hebert <hebert.colin@gmail.com>
2017-07-13 17:37:26 -07:00
Alexey Rokhin
5067f7a77b service_test.py reorder imports
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-07-13 17:37:26 -07:00
Alexey Rokhin
50d405fea3 skip cpu_percent test for Linux
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-07-13 17:37:26 -07:00
Joffrey F
cffce0880b Bump 1.14.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-19 13:28:04 -07:00
Joffrey F
abac2eea37 Fix ps output to show all ports
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-19 13:28:04 -07:00
Joffrey F
5c3d0db3f2 ServicePort merge_field should account for external IP and protocol
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-19 13:28:04 -07:00
Joffrey F
cfe152f907 Bump 1.14.0-rc2
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
a85dddf83d Remedy test failures
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
e7b7480462 Interpolate configs values
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
bf3b62e2ff Add configs tests
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
70b2e64c1b Partial support for service configs
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
bfc7ac4995 Always convert port values in ServicePort to integer
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
ff720ba6b2 Bump docker version in requirements.txt
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-06 15:26:49 -07:00
Joffrey F
e6000051f7 Bump 1.14.0-rc1
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
909ef7f435 Add partial support (docker-compose config and warnings) for v3.3 credential_spec
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
5fb7675055 Add support for build labels in 2.1 and 2.2 format
Add cache_from in 2.2 format

Add integration test for build labels

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
f6aa53ea6c Network label mismatch now prints a warning instead of raising an error
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
150c44dc36 Merge all fields inside build dict
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
d29ed0d3e4 Fix improper use of project.stop
Add some better test coverage for rm --stop

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Pascal Vibet
c9ff9023b2 If COMPOSE_FILE is defined then set this variable in the container
Signed-off-by: Pascal Vibet <pvibet@gmail.com>
2017-05-30 15:23:00 -07:00
Joffrey F
d2a8a9edaa Rewrite duplicate override error message
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Eli Atzaba
88fa8db79a Raise exception when override.yaml & override.yml coexist
Signed-off-by: Eli Atzaba <eliat123@gmail.com>
2017-05-30 15:23:00 -07:00
Eli Atzaba
d0b80f537b Fix for yaml extension not working with override file
Signed-off-by: Eli Atzaba <eliat123@gmail.com>
2017-05-30 15:23:00 -07:00
Joffrey F
2ffa67cf92 Add 3.3 format support
Remove build.labels field from 3.2 schema

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Colin Hebert
2182329dae Fix test type
Signed-off-by: Colin Hebert <hebert.colin@gmail.com>
2017-05-30 15:23:00 -07:00
Colin Hebert
67e48ae4cb Add tests for the labels
Signed-off-by: Colin Hebert <hebert.colin@gmail.com>
2017-05-30 15:23:00 -07:00
Colin Hebert
3f920d515d Update tests to show labels set to None
Signed-off-by: Colin Hebert <hebert.colin@gmail.com>
2017-05-30 15:23:00 -07:00
Colin Hebert
d10d64ac82 Add support for labels during build
Signed-off-by: Colin Hebert <hebert.colin@gmail.com>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
201919824f move cpus validation to validation.py
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
b815a00e33 Implement review suggestions.
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
aeeed0cf2f service_test.py reorder imports
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
56f63c8586 skip cpu_percent test for Linux
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
e621117ab2 Fix testcases.py formatting
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
2d4fc2cd51 Fix cpu option checking.
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
Alexey Rokhin
93d1ce5a55 Add cpu_count, cpu_percent, cpus parameters.
Signed-off-by: Alexey Rokhin <arokhin@mail.ru>
2017-05-30 15:23:00 -07:00
mengskysama
511b981f11 fix python3.x _asdict() returning None
Signed-off-by: mengskysama <mengskysama@gmail.com>
2017-05-30 15:23:00 -07:00
Joffrey F
9daced4c04 Prevent dependencies rescaling when executing docker-compose run
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Harald Albers
50437bd6ea Add docker-compose exec -u to docs and completion
Signed-off-by: Harald Albers <github@albersweb.de>
2017-05-30 15:23:00 -07:00
Victoria Bialas
f1fd9eb1d0 Updated CLI help for docker-compose pull command
removed reference to docker-stack.yml in pull command help

referenced generic Compose file, consistent naming in Help, init caps

Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
a5837ba358 Use different method to compute ServicePort.repr
Workaround for https://bugs.python.org/issue24931

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
570cf951ac New network config whitelist option in unit test
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Michael Friis
9c0bbaad36 add exception for windows networking
Signed-off-by: Michael Friis <friism@gmail.com>
2017-05-30 15:23:00 -07:00
Joffrey F
57f647f03f 1.14.0dev
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
e27dfe8ccd Script downloading release binaries from bintray and appveyor
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
mrfly
b1e3228d19 Not colon but a dot.
hum...

Signed-off-by: wrfly <mr.wrfly@gmail.com>
2017-05-30 15:23:00 -07:00
Joffrey F
d3ad2ae7fe Add deprecation warning to scale command
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
1be40656a1 Prevent docker-compose scale to be used with a v2.2 config file
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
1646e75591 Properly relay errors in execute_convergence_plan
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
ffb8f9f1b4 Implement --scale option on up command, allow scale config in v2.2 format
docker-compose scale modified to reuse code between up and scale

Signed-off-by: Joffrey F <joffrey@docker.com>
2017-05-30 15:23:00 -07:00
Joffrey F
10267a83dc Merge pull request #4785 from docker/bump-1.13.0
Bump 1.13.0
2017-05-02 12:29:13 -07:00
64 changed files with 2233 additions and 429 deletions


@@ -7,3 +7,5 @@ coverage-html
docs/_site
venv
.tox
**/__pycache__
*.pyc


@@ -1,6 +1,108 @@
Change log
==========
1.15.0 (2017-07-26)
-------------------
### New features
#### Compose file version 2.2
- Added support for the `network` parameter in build configurations.
#### Compose file version 2.1 and up
- The `pid` option in a service's definition now supports a `service:<name>`
value.
- Added support for the `storage_opt` parameter in service definitions.
This option is not available for the v3 format
#### All formats
- Added `--quiet` flag to `docker-compose pull`, suppressing progress output
- Some improvements to CLI output
### Bugfixes
- Volumes specified through the `--volume` flag of `docker-compose run` now
complement volumes declared in the service's definition instead of replacing
them
- Fixed a bug where using multiple Compose files would unset the scale value
defined inside the Compose file.
- Fixed an issue where the `credHelpers` entries in the `config.json` file
were not being honored by Compose
- Fixed a bug where using multiple Compose files with port declarations
would cause failures in Python 3 environments
- Fixed a bug where some proxy-related options present in the user's
environment would prevent Compose from running
- Fixed an issue where the output of `docker-compose config` would be invalid
if the original file used `Y` or `N` values
- Fixed an issue preventing `up` operations on a previously created stack on
Windows Engine.
1.14.0 (2017-06-19)
-------------------
### New features
#### Compose file version 3.3
- Introduced version 3.3 of the `docker-compose.yml` specification.
This version requires Docker Engine 17.06.0 or above.
Note: the `credential_spec` and `configs` keys only apply to Swarm services
and will be ignored by Compose
#### Compose file version 2.2
- Added the following parameters in service definitions: `cpu_count`,
`cpu_percent`, `cpus`
#### Compose file version 2.1
- Added support for build labels. This feature is also available in the
2.2 and 3.3 formats.
#### All formats
- Added shorthand `-u` for `--user` flag in `docker-compose exec`
- Differences in labels between the Compose file and remote network
will now print a warning instead of preventing redeployment.
### Bugfixes
- Fixed a bug where a service's dependencies were being rescaled to their
default scale when running a `docker-compose run` command
- Fixed a bug where `docker-compose rm` with the `--stop` flag was not
behaving properly when provided with a list of services to remove
- Fixed a bug where `cache_from` in the build section would be ignored when
using more than one Compose file.
- Fixed a bug that prevented binding the same port to different IPs when
using more than one Compose file.
- Fixed a bug where override files would not be picked up by Compose if they
had the `.yaml` extension
- Fixed a bug on Windows Engine where networks would be incorrectly flagged
for recreation
- Fixed a bug where services declaring ports would cause crashes on some
versions of Python 3
- Fixed a bug where the output of `docker-compose config` would sometimes
contain invalid port definitions
1.13.0 (2017-05-02)
-------------------


@@ -19,34 +19,47 @@ RUN set -ex; \
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-1.8.3 \
-o /usr/local/bin/docker && \
SHA256=f024bc65c45a3778cf07213d26016075e8172de8f6e4b5702bedde06c241650f; \
echo "${SHA256} /usr/local/bin/docker" | sha256sum -c - && \
chmod +x /usr/local/bin/docker
# Build Python 2.7.13 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz | tar -xz; \
curl -LO https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
SHA256=a4f05a0720ce0fd92626f0278b6b433eee9a6173ddf2bced7957dfb599a5ece1; \
echo "${SHA256} Python-2.7.13.tgz" | sha256sum -c - && \
tar -xzf Python-2.7.13.tgz; \
cd Python-2.7.13; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-2.7.13
rm -rf /Python-2.7.13; \
rm Python-2.7.13.tgz
# Build python 3.4 from source
RUN set -ex; \
curl -L https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz | tar -xz; \
curl -LO https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz && \
SHA256=fe59daced99549d1d452727c050ae486169e9716a890cffb0d468b376d916b48; \
echo "${SHA256} Python-3.4.6.tgz" | sha256sum -c - && \
tar -xzf Python-3.4.6.tgz; \
cd Python-3.4.6; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-3.4.6
rm -rf /Python-3.4.6; \
rm Python-3.4.6.tgz
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib
# Install pip
RUN set -ex; \
curl -L https://bootstrap.pypa.io/get-pip.py | python
curl -LO https://bootstrap.pypa.io/get-pip.py && \
SHA256=19dae841a150c86e2a09d475b5eb0602861f2a5b7761ec268049a662dbd2bd0c; \
echo "${SHA256} get-pip.py" | sha256sum -c - && \
python get-pip.py
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
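The pattern this Dockerfile change adopts — download to a file, check its digest against a pinned value, fail the build on mismatch — is what `echo "${SHA256}  file" | sha256sum -c -` does in the shell. The equivalent check in Python:

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Stream the file through SHA-256 and compare against the pinned
    digest, mirroring the `sha256sum -c -` steps in the Dockerfile."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```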


@@ -15,6 +15,7 @@
"bfirsh",
"dnephin",
"mnowster",
"shin-",
]
[people]
@@ -44,3 +45,8 @@
Name = "Mazz Mosley"
Email = "mazz@houseofmnowster.com"
GitHub = "mnowster"
[People.shin-]
Name = "Joffrey F"
Email = "joffrey@docker.com"
GitHub = "shin-"


@@ -17,7 +17,7 @@ Using Compose is basically a three-step process.
1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment:
they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
A `docker-compose.yml` looks like this:


@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.13.0'
__version__ = '1.15.0'


@@ -17,6 +17,8 @@ try:
env[str('PIP_DISABLE_PIP_VERSION_CHECK')] = str('1')
s_cmd = subprocess.Popen(
# DO NOT replace this call with a `sys.executable` call. It breaks the binary
# distribution (with the binary calling itself recursively over and over).
['pip', 'freeze'], stderr=subprocess.PIPE, stdout=subprocess.PIPE,
env=env
)


@@ -171,12 +171,12 @@ class TopLevelCommand(object):
in the client certificate (for example if your docker host
is an IP address)
--project-directory PATH Specify an alternate working directory
(default: the path of the compose file)
(default: the path of the Compose file)
Commands:
build Build or rebuild services
bundle Generate a Docker bundle from the Compose file
config Validate and view the compose file
config Validate and view the Compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
@@ -273,7 +273,7 @@ class TopLevelCommand(object):
def config(self, config_options, options):
"""
Validate and view the compose file.
Validate and view the Compose file.
Usage: config [options]
@@ -391,7 +391,7 @@ class TopLevelCommand(object):
Options:
-d Detached mode: Run command in the background.
--privileged Give extended privileges to the process.
--user USER Run the command as this user.
-u, --user USER Run the command as this user.
-T Disable pseudo-tty allocation. By default `docker-compose exec`
allocates a TTY.
--index=index index of the container if there are multiple
@@ -627,18 +627,20 @@ class TopLevelCommand(object):
def pull(self, options):
"""
Pulls images for services.
Pulls images for services defined in a Compose file, but does not start the containers.
Usage: pull [options] [SERVICE...]
Options:
--ignore-pull-failures Pull what it can and ignores images with pull failures.
--parallel Pull multiple images in parallel.
--quiet Pull without printing progress information
"""
self.project.pull(
service_names=options['SERVICE'],
ignore_pull_failures=options.get('--ignore-pull-failures'),
parallel_pull=options.get('--parallel')
parallel_pull=options.get('--parallel'),
silent=options.get('--quiet'),
)
def push(self, options):
@@ -680,13 +682,7 @@ class TopLevelCommand(object):
one_off = OneOffFilter.include
if options.get('--stop'):
running_containers = self.project.containers(
service_names=options['SERVICE'], stopped=False, one_off=one_off
)
self.project.stop(
service_names=running_containers,
one_off=one_off
)
self.project.stop(service_names=options['SERVICE'], one_off=one_off)
all_containers = self.project.containers(
service_names=options['SERVICE'], stopped=True, one_off=one_off
@@ -764,6 +760,9 @@ class TopLevelCommand(object):
$ docker-compose scale web=2 worker=3
This command is deprecated. Use the up command with the `--scale` flag
instead.
Usage: scale [options] [SERVICE=NUM...]
Options:
@@ -777,6 +776,11 @@ class TopLevelCommand(object):
'The scale command is incompatible with the v2.2 format. '
'Use the up command with the --scale flag instead.'
)
else:
log.warn(
'The scale command is deprecated. '
'Use the up command with the --scale flag instead.'
)
for service_name, num in parse_scale_args(options['SERVICE=NUM']).items():
self.project.get_service(service_name).scale(num, timeout=timeout)
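The `parse_scale_args` helper used above turns `SERVICE=NUM` pairs into a mapping; a sketch consistent with that usage, with error handling simplified (the real helper raises a Compose-specific error type):

```python
def parse_scale_args(pairs):
    """Parse arguments like ['web=2', 'worker=3'] into {'web': 2, 'worker': 3}."""
    result = {}
    for pair in pairs:
        service, sep, num = pair.partition('=')
        if not sep or not num.isdigit():
            raise ValueError(
                'Arguments to scale should be in the form service=num: %s' % pair)
        result[service] = int(num)
    return result
```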
@@ -1130,7 +1134,9 @@ def run_one_off_container(container_options, project, service, options):
project.up(
service_names=deps,
start_deps=True,
strategy=ConvergenceStrategy.never)
strategy=ConvergenceStrategy.never,
rescale=False
)
project.initialize()


@@ -18,12 +18,14 @@ from ..const import COMPOSEFILE_V1 as V1
from ..utils import build_string_dict
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from ..version import ComposeVersion
from .environment import env_vars_from_file
from .environment import Environment
from .environment import split_env
from .errors import CircularReference
from .errors import ComposeFileNotFound
from .errors import ConfigurationError
from .errors import DuplicateOverrideFileFound
from .errors import VERSION_EXPLANATION
from .interpolation import interpolate_environment_variables
from .sort_services import get_container_name_from_network_mode
@@ -38,10 +40,12 @@ from .types import VolumeSpec
from .validation import match_named_volumes
from .validation import validate_against_config_schema
from .validation import validate_config_section
from .validation import validate_cpu
from .validation import validate_depends_on
from .validation import validate_extends_file_path
from .validation import validate_links
from .validation import validate_network_mode
from .validation import validate_pid_mode
from .validation import validate_service_constraints
from .validation import validate_top_level_object
from .validation import validate_ulimits
@@ -52,8 +56,11 @@ DOCKER_CONFIG_KEYS = [
'cap_drop',
'cgroup_parent',
'command',
'cpu_count',
'cpu_percent',
'cpu_quota',
'cpu_shares',
'cpus',
'cpuset',
'detach',
'devices',
@@ -103,12 +110,14 @@ DOCKER_CONFIG_KEYS = [
ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'build',
'container_name',
'credential_spec',
'dockerfile',
'log_driver',
'log_opt',
'logging',
'network_mode',
'init',
'scale',
]
DOCKER_VALID_URL_PREFIXES = (
@@ -124,7 +133,7 @@ SUPPORTED_FILENAMES = [
'docker-compose.yaml',
]
DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml'
DEFAULT_OVERRIDE_FILENAMES = ('docker-compose.override.yml', 'docker-compose.override.yaml')
log = logging.getLogger(__name__)
@@ -180,15 +189,16 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
if version == '1':
raise ConfigurationError(
'Version in "{}" is invalid. {}'
.format(self.filename, VERSION_EXPLANATION))
.format(self.filename, VERSION_EXPLANATION)
)
if version == '2':
version = const.COMPOSEFILE_V2_0
return const.COMPOSEFILE_V2_0
if version == '3':
version = const.COMPOSEFILE_V3_0
return const.COMPOSEFILE_V3_0
return version
return ComposeVersion(version)
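The version normalization in the hunk above maps the shorthand '2' and '3' to their canonical minor versions and wraps everything else in `ComposeVersion` for proper comparison. A sketch, with plain strings standing in for the `const.COMPOSEFILE_*` values and the `ComposeVersion` wrapper:

```python
def normalize_version(version):
    # An explicit version '1' is invalid: v1 files have no version key at all.
    if version == '1':
        raise ValueError('Version in file is invalid')
    if version == '2':
        return '2.0'   # stand-in for const.COMPOSEFILE_V2_0
    if version == '3':
        return '3.0'   # stand-in for const.COMPOSEFILE_V3_0
    return version     # the real code returns ComposeVersion(version)
```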
def get_service(self, name):
return self.get_service_dicts()[name]
@@ -205,8 +215,11 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
def get_secrets(self):
return {} if self.version < const.COMPOSEFILE_V3_1 else self.config.get('secrets', {})
def get_configs(self):
return {} if self.version < const.COMPOSEFILE_V3_3 else self.config.get('configs', {})
class Config(namedtuple('_Config', 'version services volumes networks secrets')):
class Config(namedtuple('_Config', 'version services volumes networks secrets configs')):
"""
:param version: configuration version
:type version: int
@@ -218,6 +231,8 @@ class Config(namedtuple('_Config', 'version services volumes networks secrets'))
:type networks: :class:`dict`
:param secrets: Dictionary mapping secret names to description dictionaries
:type secrets: :class:`dict`
:param configs: Dictionary mapping config names to description dictionaries
:type configs: :class:`dict`
"""
@@ -288,8 +303,12 @@ def get_default_config_files(base_dir):
def get_default_override_file(path):
override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME)
return [override_filename] if os.path.exists(override_filename) else []
override_files_in_path = [os.path.join(path, override_filename) for override_filename
in DEFAULT_OVERRIDE_FILENAMES
if os.path.exists(os.path.join(path, override_filename))]
if len(override_files_in_path) > 1:
raise DuplicateOverrideFileFound(override_files_in_path)
return override_files_in_path
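The new `get_default_override_file` scans for both extensions and refuses to proceed when `.yml` and `.yaml` override files coexist. A self-contained sketch (a generic `RuntimeError` stands in for Compose's `DuplicateOverrideFileFound`):

```python
import os

DEFAULT_OVERRIDE_FILENAMES = ('docker-compose.override.yml',
                              'docker-compose.override.yaml')

def get_default_override_file(path):
    """Collect existing override files; two matches means both extensions
    coexist, which Compose rejects rather than silently picking one."""
    found = [os.path.join(path, name)
             for name in DEFAULT_OVERRIDE_FILENAMES
             if os.path.exists(os.path.join(path, name))]
    if len(found) > 1:
        raise RuntimeError('Multiple override files found: %s' % ', '.join(found))
    return found
```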
def find_candidates_in_parent_dirs(filenames, path):
@@ -311,6 +330,28 @@ def find_candidates_in_parent_dirs(filenames, path):
return (candidates, path)
def check_swarm_only_config(service_dicts):
warning_template = (
"Some services ({services}) use the '{key}' key, which will be ignored. "
"Compose does not support '{key}' configuration - use "
"`docker stack deploy` to deploy to a swarm."
)
def check_swarm_only_key(service_dicts, key):
services = [s for s in service_dicts if s.get(key)]
if services:
log.warn(
warning_template.format(
services=", ".join(sorted(s['name'] for s in services)),
key=key
)
)
check_swarm_only_key(service_dicts, 'deploy')
check_swarm_only_key(service_dicts, 'credential_spec')
check_swarm_only_key(service_dicts, 'configs')
def load(config_details):
"""Load the configuration from a working directory and a list of
configuration files. Files are loaded in order, and merged on top
@@ -333,25 +374,24 @@ def load(config_details):
networks = load_mapping(
config_details.config_files, 'get_networks', 'Network'
)
secrets = load_secrets(config_details.config_files, config_details.working_dir)
secrets = load_mapping(
config_details.config_files, 'get_secrets', 'Secret', config_details.working_dir
)
configs = load_mapping(
config_details.config_files, 'get_configs', 'Config', config_details.working_dir
)
service_dicts = load_services(config_details, main_file)
if main_file.version != V1:
for service_dict in service_dicts:
match_named_volumes(service_dict, volumes)
services_using_deploy = [s for s in service_dicts if s.get('deploy')]
if services_using_deploy:
log.warn(
"Some services ({}) use the 'deploy' key, which will be ignored. "
"Compose does not support deploy configuration - use "
"`docker stack deploy` to deploy to a swarm."
.format(", ".join(sorted(s['name'] for s in services_using_deploy))))
check_swarm_only_config(service_dicts)
return Config(main_file.version, service_dicts, volumes, networks, secrets)
return Config(main_file.version, service_dicts, volumes, networks, secrets, configs)
def load_mapping(config_files, get_func, entity_type):
def load_mapping(config_files, get_func, entity_type, working_dir=None):
mapping = {}
for config_file in config_files:
@@ -376,6 +416,9 @@ def load_mapping(config_files, get_func, entity_type):
if 'labels' in config:
config['labels'] = parse_labels(config['labels'])
if 'file' in config:
config['file'] = expand_path(working_dir, config['file'])
return mapping
@@ -389,29 +432,6 @@ def validate_external(entity_type, name, config):
entity_type, name, ', '.join(k for k in config if k != 'external')))
def load_secrets(config_files, working_dir):
mapping = {}
for config_file in config_files:
for name, config in config_file.get_secrets().items():
mapping[name] = config or {}
if not config:
continue
external = config.get('external')
if external:
validate_external('Secret', name, config)
if isinstance(external, dict):
config['external_name'] = external.get('name')
else:
config['external_name'] = name
if 'file' in config:
config['file'] = expand_path(working_dir, config['file'])
return mapping
def load_services(config_details, config_file):
def build_service(service_name, service_dict, service_names):
service_config = ServiceConfig.with_abs_paths(
@@ -478,7 +498,7 @@ def process_config_file(config_file, environment, service_name=None):
'service',
environment)
if config_file.version != V1:
if config_file.version > V1:
processed_config = dict(config_file.config)
processed_config['services'] = services
processed_config['volumes'] = interpolate_config_section(
@@ -491,12 +511,19 @@ def process_config_file(config_file, environment, service_name=None):
config_file.get_networks(),
'network',
environment)
if config_file.version in (const.COMPOSEFILE_V3_1, const.COMPOSEFILE_V3_2):
if config_file.version >= const.COMPOSEFILE_V3_1:
processed_config['secrets'] = interpolate_config_section(
config_file,
config_file.get_secrets(),
'secrets',
environment)
if config_file.version >= const.COMPOSEFILE_V3_3:
processed_config['configs'] = interpolate_config_section(
config_file,
config_file.get_configs(),
'configs',
environment
)
else:
processed_config = services
@@ -544,12 +571,21 @@ class ServiceExtendsResolver(object):
config_path = self.get_extended_config_path(extends)
service_name = extends['service']
extends_file = ConfigFile.from_filename(config_path)
validate_config_version([self.config_file, extends_file])
extended_file = process_config_file(
extends_file, self.environment, service_name=service_name
)
service_config = extended_file.get_service(service_name)
if config_path == self.service_config.filename:
try:
service_config = self.config_file.get_service(service_name)
except KeyError:
raise ConfigurationError(
"Cannot extend service '{}' in {}: Service not found".format(
service_name, config_path)
)
else:
extends_file = ConfigFile.from_filename(config_path)
validate_config_version([self.config_file, extends_file])
extended_file = process_config_file(
extends_file, self.environment, service_name=service_name
)
service_config = extended_file.get_service(service_name)
return config_path, service_config, service_name
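The rewritten branch above short-circuits the same-file case: extending a sibling service in the current file no longer re-reads and re-processes it from disk. A simplified sketch of that control flow, with plain dicts and a callback standing in for `ConfigFile` and `process_config_file`:

```python
def resolve_extends(current_path, current_services, load_file, extends):
    # Same-file extends: reuse the already-parsed services instead of
    # re-reading the file from disk (the optimization in this commit).
    config_path = extends.get('file', current_path)
    service_name = extends['service']
    if config_path == current_path:
        if service_name not in current_services:
            raise ValueError(
                "Cannot extend service '{}' in {}: Service not found".format(
                    service_name, config_path))
        return current_services[service_name]
    return load_file(config_path)[service_name]

calls = []

def fake_loader(path):
    calls.append(path)
    return {'web': {'image': 'remote'}}

services = {'web': {'image': 'local'}}
assert resolve_extends('a.yml', services, fake_loader,
                       {'service': 'web'}) == {'image': 'local'}
assert calls == []          # same-file case never touched the "disk"
assert resolve_extends('a.yml', services, fake_loader,
                       {'file': 'b.yml', 'service': 'web'}) == {'image': 'remote'}
assert calls == ['b.yml']
```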
@@ -640,8 +676,10 @@ def validate_service(service_config, service_names, config_file):
validate_service_constraints(service_dict, service_name, config_file)
validate_paths(service_dict)
validate_cpu(service_config)
validate_ulimits(service_config)
validate_network_mode(service_config, service_names)
validate_pid_mode(service_config, service_names)
validate_depends_on(service_config, service_names)
validate_links(service_config, service_names)
@@ -789,6 +827,11 @@ def finalize_service(service_config, service_names, version, environment):
types.ServiceSecret.parse(s) for s in service_dict['secrets']
]
if 'configs' in service_dict:
service_dict['configs'] = [
types.ServiceConfig.parse(c) for c in service_dict['configs']
]
normalize_build(service_dict, service_config.working_dir, environment)
service_dict['name'] = service_config.name
@@ -874,12 +917,13 @@ def merge_service_dicts(base, override, version):
md.merge_mapping('environment', parse_environment)
md.merge_mapping('labels', parse_labels)
md.merge_mapping('ulimits', parse_ulimits)
md.merge_mapping('ulimits', parse_flat_dict)
md.merge_mapping('networks', parse_networks)
md.merge_mapping('sysctls', parse_sysctls)
md.merge_mapping('depends_on', parse_depends_on)
md.merge_sequence('links', ServiceLink.parse)
md.merge_sequence('secrets', types.ServiceSecret.parse)
md.merge_sequence('configs', types.ServiceConfig.parse)
md.merge_mapping('deploy', parse_deploy)
for field in ['volumes', 'devices']:
@@ -928,7 +972,7 @@ def merge_ports(md, base, override):
merged = parse_sequence_func(md.base.get(field, []))
merged.update(parse_sequence_func(md.override.get(field, [])))
md[field] = [item for item in sorted(merged.values())]
md[field] = [item for item in sorted(merged.values(), key=lambda x: x.target)]
def merge_build(output, base, override):
@@ -941,7 +985,10 @@ def merge_build(output, base, override):
md = MergeDict(to_dict(base), to_dict(override))
md.merge_scalar('context')
md.merge_scalar('dockerfile')
md.merge_scalar('network')
md.merge_mapping('args', parse_build_arguments)
md.merge_field('cache_from', merge_unique_items_lists, default=[])
md.merge_mapping('labels', parse_labels)
return dict(md)
@@ -1008,12 +1055,14 @@ parse_depends_on = functools.partial(
parse_deploy = functools.partial(parse_dict_or_list, split_kv, 'deploy')
def parse_ulimits(ulimits):
if not ulimits:
def parse_flat_dict(d):
if not d:
return {}
if isinstance(ulimits, dict):
return dict(ulimits)
if isinstance(d, dict):
return dict(d)
raise ConfigurationError("Invalid type: expected mapping")
def resolve_env_var(key, val, environment):

View File
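The rename of `parse_ulimits` to `parse_flat_dict` in the preceding file generalizes it so one merge helper can serve `ulimits` and other flat mappings. Its contract as a runnable sketch (`ValueError` stands in for `ConfigurationError`):

```python
def parse_flat_dict(d):
    # Accept a mapping (copied defensively) or a falsy value; reject
    # everything else, e.g. a list that slipped past validation.
    if not d:
        return {}
    if isinstance(d, dict):
        return dict(d)
    raise ValueError('Invalid type: expected mapping')

assert parse_flat_dict(None) == {}
assert parse_flat_dict({'nofile': {'soft': 1024, 'hard': 2048}}) == \
    {'nofile': {'soft': 1024, 'hard': 2048}}
try:
    parse_flat_dict(['nofile=1024'])
    raised = False
except ValueError:
    raised = True
assert raised
```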

@@ -58,7 +58,8 @@
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"}
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
}
@@ -228,6 +229,7 @@
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {

View File

@@ -58,7 +58,10 @@
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"}
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"}
},
"additionalProperties": false
}
@@ -74,8 +77,11 @@
]
},
"container_name": {"type": "string"},
"cpu_count": {"type": "integer", "minimum": 0},
"cpu_percent": {"type": "integer", "minimum": 0, "maximum": 100},
"cpu_shares": {"type": ["number", "string"]},
"cpu_quota": {"type": ["number", "string"]},
"cpus": {"type": "number", "minimum": 0},
"cpuset": {"type": "string"},
"depends_on": {
"oneOf": [
@@ -230,6 +236,7 @@
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {

View File

@@ -72,6 +72,7 @@
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"}
},
"additionalProperties": false

View File

@@ -0,0 +1,534 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.3.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/secret"
}
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/config"
}
},
"additionalProperties": false
}
},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"configs": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"container_name": {"type": "string"},
"credential_spec": {"type": "object", "properties": {
"file": {"type": "string"},
"registry": {"type": "string"}
}},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"oneOf": [
{"type": "number", "format": "ports"},
{"type": "string", "format": "ports"},
{
"type": "object",
"properties": {
"mode": {"type": "string"},
"target": {"type": "integer"},
"published": {"type": "integer"},
"protocol": {"type": "string"}
},
"additionalProperties": false
}
]
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"secrets": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string"},
"source": {"type": "string"},
"target": {"type": "string"},
"read_only": {"type": "boolean"},
"consistency": {"type": "string"},
"bind": {
"type": "object",
"properties": {
"propagation": {"type": "string"}
}
},
"volume": {
"type": "object",
"properties": {
"nocopy": {"type": "boolean"}
}
}
}
}
],
"uniqueItems": true
}
},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"endpoint_mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
}
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}},
"preferences": {
"type": "array",
"items": {
"type": "object",
"properties": {
"spread": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"resource": {
"id": "#/definitions/resource",
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"attachable": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"config": {
"id": "#/definitions/config",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}

View File
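Every top-level key in the new v3.3 schema above — services, networks, volumes, secrets, and the added configs — is gated by the same `patternProperties` regex `^[a-zA-Z0-9._-]+$`. What that pattern admits, checked with the stdlib:

```python
import re

KEY_PATTERN = re.compile(r'^[a-zA-Z0-9._-]+$')

def valid_key(name):
    # Same character class the schema applies to service/network/volume/
    # secret/config names.
    return bool(KEY_PATTERN.match(name))

assert valid_key('web')
assert valid_key('db-1.internal_replica')
assert not valid_key('bad name')   # whitespace is rejected
assert not valid_key('')           # empty string never matches the '+'
```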

@@ -4,7 +4,7 @@ from __future__ import unicode_literals
VERSION_EXPLANATION = (
'You might be seeing this error because you\'re using the wrong Compose file version. '
'Either specify a supported version ("2.0", "2.1", "3.0", "3.1", "3.2") and place '
'Either specify a supported version (e.g "2.2" or "3.3") and place '
'your service definitions under the `services` key, or omit the `version` key '
'and place your service definitions at the root of the file to use '
'version 1.\nFor more on the Compose file format versions, see '
@@ -44,3 +44,12 @@ class ComposeFileNotFound(ConfigurationError):
Supported filenames: %s
""" % ", ".join(supported_filenames))
class DuplicateOverrideFileFound(ConfigurationError):
def __init__(self, override_filenames):
self.override_filenames = override_filenames
super(DuplicateOverrideFileFound, self).__init__(
"Multiple override files found: {}. You may only use a single "
"override file.".format(", ".join(override_filenames))
)

View File

@@ -7,7 +7,6 @@ from string import Template
import six
from .errors import ConfigurationError
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
@@ -28,7 +27,7 @@ class Interpolator(object):
def interpolate_environment_variables(version, config, section, environment):
if version in (V2_0, V1):
if version <= V2_0:
interpolator = Interpolator(Template, environment)
else:
interpolator = Interpolator(TemplateWithDefaults, environment)

View File
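With ordered versions, the interpolation branch above reads `version <= V2_0` for plain `string.Template` substitution; later formats get `TemplateWithDefaults`, which additionally accepts `${VAR:-default}`. The plain-Template half of that split, straight from the stdlib:

```python
from string import Template

env = {'TAG': 'latest'}
assert Template('redis:${TAG}').substitute(env) == 'redis:latest'

# Plain Template has no default syntax: ${TAG:-stable} is not a valid
# placeholder for it, and substitution raises ValueError.
try:
    Template('redis:${TAG:-stable}').substitute(env)
    raised = False
except ValueError:
    raised = True
assert raised
```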

@@ -7,8 +7,7 @@ import yaml
from compose.config import types
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_1 as V2_1
from compose.const import COMPOSEFILE_V2_2 as V2_2
from compose.const import COMPOSEFILE_V3_1 as V3_1
from compose.const import COMPOSEFILE_V3_0 as V3_0
from compose.const import COMPOSEFILE_V3_2 as V3_2
@@ -21,14 +20,27 @@ def serialize_dict_type(dumper, data):
return dumper.represent_dict(data.repr())
def serialize_string(dumper, data):
""" Ensure boolean-like strings are quoted in the output """
representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
if data.lower() in ('y', 'n', 'yes', 'no', 'on', 'off', 'true', 'false'):
# Empirically only y/n appears to be an issue, but this might change
# depending on which PyYaml version is being used. Err on safe side.
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='"')
return representer(data)
yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.ServiceSecret, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServiceConfig, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)
yaml.SafeDumper.add_representer(str, serialize_string)
yaml.SafeDumper.add_representer(six.text_type, serialize_string)
def denormalize_config(config, image_digests=None):
result = {'version': V2_1 if config.version == V1 else config.version}
result = {'version': str(V2_1) if config.version == V1 else str(config.version)}
denormalized_services = [
denormalize_service_dict(
service_dict,
@@ -40,21 +52,15 @@ def denormalize_config(config, image_digests=None):
service_dict.pop('name'): service_dict
for service_dict in denormalized_services
}
result['networks'] = config.networks.copy()
for net_name, net_conf in result['networks'].items():
if 'external_name' in net_conf:
del net_conf['external_name']
for key in ('networks', 'volumes', 'secrets', 'configs'):
config_dict = getattr(config, key)
if not config_dict:
continue
result[key] = config_dict.copy()
for name, conf in result[key].items():
if 'external_name' in conf:
del conf['external_name']
result['volumes'] = config.volumes.copy()
for vol_name, vol_conf in result['volumes'].items():
if 'external_name' in vol_conf:
del vol_conf['external_name']
if config.version in (V3_1, V3_2):
result['secrets'] = config.secrets.copy()
for secret_name, secret_conf in result['secrets'].items():
if 'external_name' in secret_conf:
del secret_conf['external_name']
return result
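The hunk above collapses four near-identical copy-and-strip loops into one over `('networks', 'volumes', 'secrets', 'configs')`, dropping the internal `external_name` bookkeeping key before serialization. The pattern in isolation (a non-mutating sketch; the real code copies then deletes in place):

```python
def strip_external_names(config_sections):
    # config_sections: mapping of section name -> {resource: conf dict}.
    # The internal 'external_name' key must not leak into YAML output.
    result = {}
    for key, section in config_sections.items():
        if not section:
            continue
        result[key] = {name: {k: v for k, v in conf.items()
                              if k != 'external_name'}
                       for name, conf in section.items()}
    return result

out = strip_external_names({
    'networks': {'front': {'external': True, 'external_name': 'front'}},
    'volumes': {},   # empty sections are omitted entirely
})
assert out == {'networks': {'front': {'external': True}}}
```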
@@ -63,7 +69,8 @@ def serialize_config(config, image_digests=None):
denormalize_config(config, image_digests),
default_flow_style=False,
indent=2,
width=80)
width=80
)
def serialize_ns_time_value(value):
@@ -99,7 +106,7 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
if version == V1 and 'network_mode' not in service_dict:
service_dict['network_mode'] = 'bridge'
if 'depends_on' in service_dict and version not in (V2_1, V2_2):
if 'depends_on' in service_dict and (version < V2_1 or version >= V3_0):
service_dict['depends_on'] = sorted([
svc for svc in service_dict['depends_on'].keys()
])
@@ -114,7 +121,7 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
service_dict['healthcheck']['timeout']
)
if 'ports' in service_dict and version not in (V3_2,):
if 'ports' in service_dict and version < V3_2:
service_dict['ports'] = [
p.legacy_repr() if isinstance(p, types.ServicePort) else p
for p in service_dict['ports']

View File

@@ -38,6 +38,7 @@ def get_service_dependents(service_dict, services):
if (name in get_service_names(service.get('links', [])) or
name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
name == get_service_name_from_network_mode(service.get('network_mode')) or
name == get_service_name_from_network_mode(service.get('pid')) or
name in service.get('depends_on', []))
]

View File

@@ -238,8 +238,7 @@ class ServiceLink(namedtuple('_ServiceLink', 'target alias')):
return self.alias
class ServiceSecret(namedtuple('_ServiceSecret', 'source target uid gid mode')):
class ServiceConfigBase(namedtuple('_ServiceConfigBase', 'source target uid gid mode')):
@classmethod
def parse(cls, spec):
if isinstance(spec, six.string_types):
@@ -258,11 +257,35 @@ class ServiceSecret(namedtuple('_ServiceSecret', 'source target uid gid mode')):
def repr(self):
return dict(
[(k, v) for k, v in self._asdict().items() if v is not None]
[(k, v) for k, v in zip(self._fields, self) if v is not None]
)
class ServiceSecret(ServiceConfigBase):
pass
class ServiceConfig(ServiceConfigBase):
pass
class ServicePort(namedtuple('_ServicePort', 'target published protocol mode external_ip')):
def __new__(cls, target, published, *args, **kwargs):
try:
if target:
target = int(target)
except ValueError:
raise ConfigurationError('Invalid target port: {}'.format(target))
try:
if published:
published = int(published)
except ValueError:
raise ConfigurationError('Invalid published port: {}'.format(published))
return super(ServicePort, cls).__new__(
cls, target, published, *args, **kwargs
)
@classmethod
def parse(cls, spec):
@@ -272,24 +295,28 @@ class ServicePort(namedtuple('_ServicePort', 'target published protocol mode ext
if not isinstance(spec, dict):
result = []
for k, v in build_port_bindings([spec]).items():
if '/' in k:
target, proto = k.split('/', 1)
else:
target, proto = (k, None)
for pub in v:
if pub is None:
result.append(
cls(target, None, proto, None, None)
)
elif isinstance(pub, tuple):
result.append(
cls(target, pub[1], proto, None, pub[0])
)
try:
for k, v in build_port_bindings([spec]).items():
if '/' in k:
target, proto = k.split('/', 1)
else:
result.append(
cls(target, pub, proto, None, None)
)
target, proto = (k, None)
for pub in v:
if pub is None:
result.append(
cls(target, None, proto, None, None)
)
elif isinstance(pub, tuple):
result.append(
cls(target, pub[1], proto, None, pub[0])
)
else:
result.append(
cls(target, pub, proto, None, None)
)
except ValueError as e:
raise ConfigurationError(str(e))
return result
return [cls(
@@ -302,11 +329,11 @@ class ServicePort(namedtuple('_ServicePort', 'target published protocol mode ext
@property
def merge_field(self):
return (self.target, self.published)
return (self.target, self.published, self.external_ip, self.protocol)
def repr(self):
return dict(
[(k, v) for k, v in self._asdict().items() if v is not None]
[(k, v) for k, v in zip(self._fields, self) if v is not None]
)
def legacy_repr(self):

View File
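The new `ServicePort.__new__` in the preceding file coerces target and published ports to `int` up front, so `'8080'` and `8080` compare equal in `merge_field` and malformed values fail early. A standalone sketch of that coercion (`PortError` stands in for `ConfigurationError`):

```python
from collections import namedtuple

class PortError(ValueError):
    pass

class ServicePort(namedtuple('_ServicePort',
                             'target published protocol mode external_ip')):
    def __new__(cls, target, published, *args, **kwargs):
        # Normalize string ports to ints; leave falsy values (None) alone.
        try:
            if target:
                target = int(target)
        except ValueError:
            raise PortError('Invalid target port: {}'.format(target))
        try:
            if published:
                published = int(published)
        except ValueError:
            raise PortError('Invalid published port: {}'.format(published))
        return super(ServicePort, cls).__new__(cls, target, published,
                                               *args, **kwargs)

p = ServicePort('8080', '80', 'tcp', None, None)
assert (p.target, p.published) == (8080, 80)
```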

@@ -15,6 +15,7 @@ from jsonschema import RefResolver
from jsonschema import ValidationError
from ..const import COMPOSEFILE_V1 as V1
from ..const import NANOCPUS_SCALE
from .errors import ConfigurationError
from .errors import VERSION_EXPLANATION
from .sort_services import get_service_name_from_network_mode
@@ -171,6 +172,21 @@ def validate_network_mode(service_config, service_names):
"is undefined.".format(s=service_config, dep=dependency))
def validate_pid_mode(service_config, service_names):
pid_mode = service_config.config.get('pid')
if not pid_mode:
return
dependency = get_service_name_from_network_mode(pid_mode)
if not dependency:
return
if dependency not in service_names:
raise ConfigurationError(
"Service '{s.name}' uses the PID namespace of service '{dep}' which "
"is undefined.".format(s=service_config, dep=dependency)
)
def validate_links(service_config, service_names):
for link in service_config.config.get('links', []):
if link.split(':')[0] not in service_names:
@@ -387,6 +403,16 @@ def validate_service_constraints(config, service_name, config_file):
handle_errors(validator.iter_errors(config), handler, None)
def validate_cpu(service_config):
cpus = service_config.config.get('cpus')
if not cpus:
return
nano_cpus = cpus * NANOCPUS_SCALE
if isinstance(nano_cpus, float) and not nano_cpus.is_integer():
raise ConfigurationError(
"cpus must have nine or less digits after decimal point")
def get_schema_path():
return os.path.dirname(os.path.abspath(__file__))

View File
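The `validate_cpu` check added in the preceding file exists because the engine takes NanoCPUs as an integer: `cpus` is scaled by 10^9, so more than nine decimal digits cannot be represented. The check in isolation (same `is_integer` test as the diff, `ValueError` standing in for `ConfigurationError`):

```python
NANOCPUS_SCALE = 1000000000

def validate_cpus(cpus):
    # cpus * 1e9 must land on a whole number of nanocpus.
    if not cpus:
        return
    nano_cpus = cpus * NANOCPUS_SCALE
    if isinstance(nano_cpus, float) and not nano_cpus.is_integer():
        raise ValueError('cpus must have nine or less digits after decimal point')

validate_cpus(0.5)    # 500000000 nanocpus: fine
validate_cpus(1.5)
try:
    validate_cpus(0.0000000001)   # ten decimal digits -> 0.1 nanocpus
    ok = False
except ValueError:
    ok = True
assert ok
```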

@@ -3,6 +3,8 @@ from __future__ import unicode_literals
import sys
from .version import ComposeVersion
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IMAGE_EVENTS = ['delete', 'import', 'load', 'pull', 'push', 'save', 'tag', 'untag']
@@ -15,17 +17,19 @@ LABEL_NETWORK = 'com.docker.compose.network'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_VOLUME = 'com.docker.compose.volume'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
NANOCPUS_SCALE = 1000000000
SECRETS_PATH = '/run/secrets'
COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'
COMPOSEFILE_V2_1 = '2.1'
COMPOSEFILE_V2_2 = '2.2'
COMPOSEFILE_V1 = ComposeVersion('1')
COMPOSEFILE_V2_0 = ComposeVersion('2.0')
COMPOSEFILE_V2_1 = ComposeVersion('2.1')
COMPOSEFILE_V2_2 = ComposeVersion('2.2')
COMPOSEFILE_V3_0 = '3.0'
COMPOSEFILE_V3_1 = '3.1'
COMPOSEFILE_V3_2 = '3.2'
COMPOSEFILE_V3_0 = ComposeVersion('3.0')
COMPOSEFILE_V3_1 = ComposeVersion('3.1')
COMPOSEFILE_V3_2 = ComposeVersion('3.2')
COMPOSEFILE_V3_3 = ComposeVersion('3.3')
API_VERSIONS = {
COMPOSEFILE_V1: '1.21',
@@ -35,6 +39,7 @@ API_VERSIONS = {
COMPOSEFILE_V3_0: '1.25',
COMPOSEFILE_V3_1: '1.25',
COMPOSEFILE_V3_2: '1.25',
COMPOSEFILE_V3_3: '1.30',
}
API_VERSION_TO_ENGINE_VERSION = {
@@ -45,4 +50,5 @@ API_VERSION_TO_ENGINE_VERSION = {
API_VERSIONS[COMPOSEFILE_V3_0]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_1]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_2]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_3]: '17.06.0',
}

View File
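The switch above from plain strings to `ComposeVersion` (defined in compose/version.py, outside this diff) is what makes comparisons like `version >= COMPOSEFILE_V3_1` safe throughout the codebase. An illustrative stand-in showing the property that matters — component-wise numeric ordering rather than lexicographic — while remaining a `str`, so the `API_VERSIONS` dict keys keep working:

```python
class Version(str):
    # Illustrative stand-in for compose.version.ComposeVersion: a string
    # whose ordering is component-wise numeric, not lexicographic.
    def _key(self):
        return tuple(int(part) for part in self.split('.'))

    def __lt__(self, other):
        return self._key() < Version(other)._key()

    def __le__(self, other):
        return self._key() <= Version(other)._key()

    def __gt__(self, other):
        return self._key() > Version(other)._key()

    def __ge__(self, other):
        return self._key() >= Version(other)._key()

V2_0, V3_1, V3_3 = Version('2.0'), Version('3.1'), Version('3.3')
assert Version('1') < V2_0 < V3_1 <= Version('3.1')
assert Version('3.10') > V3_3    # plain strings would say '3.10' < '3.3'
assert '3.10' < '3.3'            # the lexicographic trap being avoided
assert {V3_1: '1.25'}[Version('3.1')] == '1.25'   # still hashable as a str
```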

@@ -96,12 +96,16 @@ class Container(object):
def human_readable_ports(self):
def format_port(private, public):
if not public:
return private
return '{HostIp}:{HostPort}->{private}'.format(
private=private, **public[0])
return [private]
return [
'{HostIp}:{HostPort}->{private}'.format(private=private, **pub)
for pub in public
]
return ', '.join(format_port(*item)
for item in sorted(six.iteritems(self.ports)))
return ', '.join(
','.join(format_port(*item))
for item in sorted(six.iteritems(self.ports))
)
@property
def labels(self):

View File
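The `human_readable_ports` change in the preceding file makes `format_port` return every host binding instead of only the first. How the new shape renders, with a sample ports mapping of the form Docker reports:

```python
def format_port(private, public):
    # New behavior from the diff: one entry per host binding,
    # not just public[0].
    if not public:
        return [private]
    return ['{HostIp}:{HostPort}->{private}'.format(private=private, **pub)
            for pub in public]

ports = {
    '6379/tcp': None,   # exposed but not published
    '8080/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '8080'},
                 {'HostIp': '::', 'HostPort': '8080'}],
}
line = ', '.join(','.join(format_port(*item)) for item in sorted(ports.items()))
assert line == '6379/tcp, 0.0.0.0:8080->8080/tcp,:::8080->8080/tcp'
```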

@@ -18,6 +18,8 @@ log = logging.getLogger(__name__)
OPTS_EXCEPTIONS = [
'com.docker.network.driver.overlay.vxlanid_list',
'com.docker.network.windowsshim.hnsid',
'com.docker.network.windowsshim.networkname'
]
@@ -187,10 +189,13 @@ def check_remote_network_config(remote, local):
local_labels = local.labels or {}
remote_labels = remote.get('Labels', {})
for k in set.union(set(remote_labels.keys()), set(local_labels.keys())):
if k.startswith('com.docker.compose.'): # We are only interested in user-specified labels
if k.startswith('com.docker.'): # We are only interested in user-specified labels
continue
if remote_labels.get(k) != local_labels.get(k):
raise NetworkConfigChangedError(local.full_name, 'label "{}"'.format(k))
log.warn(
'Network {}: label "{}" has changed. It may need to be'
' recreated.'.format(local.full_name, k)
)
def build_networks(name, config_data, client):

View File

@@ -38,7 +38,8 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None):
writer = ParallelStreamWriter(stream, msg)
for obj in objects:
writer.initialize(get_name(obj))
writer.add_object(get_name(obj))
writer.write_initial()
events = parallel_execute_iter(objects, func, get_deps, limit)
@@ -224,12 +225,18 @@ class ParallelStreamWriter(object):
self.stream = stream
self.msg = msg
self.lines = []
self.width = 0
def initialize(self, obj_index):
def add_object(self, obj_index):
self.lines.append(obj_index)
self.width = max(self.width, len(obj_index))
def write_initial(self):
if self.msg is None:
return
self.lines.append(obj_index)
self.stream.write("{} {} ... \r\n".format(self.msg, obj_index))
for line in self.lines:
self.stream.write("{} {:<{width}} ... \r\n".format(self.msg, line,
width=self.width))
self.stream.flush()
def write(self, obj_index, status):
@@ -241,7 +248,8 @@ class ParallelStreamWriter(object):
self.stream.write("%c[%dA" % (27, diff))
# erase
self.stream.write("%c[2K\r" % 27)
self.stream.write("{} {} ... {}\r".format(self.msg, obj_index, status))
self.stream.write("{} {:<{width}} ... {}\r".format(self.msg, obj_index,
status, width=self.width))
# move back down
self.stream.write("%c[%dB" % (27, diff))
self.stream.flush()

View File
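The `ParallelStreamWriter` refactor in the preceding file splits `initialize` into `add_object` plus `write_initial` so every name is registered before anything is printed: only then is the maximum width known, and all status lines can be padded to one column. The two-phase pattern in miniature:

```python
def render_initial(msg, names):
    # Register every object first so the column width is known, then
    # emit all the aligned "... " lines at once (as write_initial does).
    width = max(len(n) for n in names)
    return ['{} {:<{width}} ... \r\n'.format(msg, n, width=width)
            for n in names]

lines = render_initial('Pulling', ['web', 'db', 'long-service-name'])
assert len({len(l) for l in lines}) == 1   # every line padded to one width
assert lines[0].startswith('Pulling web ')
```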

@@ -24,10 +24,13 @@ from .network import get_networks
from .network import ProjectNetworks
from .service import BuildAction
from .service import ContainerNetworkMode
from .service import ContainerPidMode
from .service import ConvergenceStrategy
from .service import NetworkMode
from .service import PidMode
from .service import Service
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import microseconds_from_time_nano
from .volume import ProjectVolumes
@@ -97,6 +100,7 @@ class Project(object):
network_mode = project.get_network_mode(
service_dict, list(service_networks.keys())
)
pid_mode = project.get_pid_mode(service_dict)
volumes_from = get_volumes_from(project, service_dict)
if config_data.version != V1:
@@ -121,6 +125,7 @@ class Project(object):
network_mode=network_mode,
volumes_from=volumes_from,
secrets=secrets,
pid_mode=pid_mode,
**service_dict)
)
@@ -224,6 +229,27 @@ class Project(object):
return NetworkMode(network_mode)
def get_pid_mode(self, service_dict):
pid_mode = service_dict.pop('pid', None)
if not pid_mode:
return PidMode(None)
service_name = get_service_name_from_network_mode(pid_mode)
if service_name:
return ServicePidMode(self.get_service(service_name))
container_name = get_container_name_from_network_mode(pid_mode)
if container_name:
try:
return ContainerPidMode(Container.from_id(self.client, container_name))
except APIError:
raise ConfigurationError(
"Service '{name}' uses the PID namespace of container '{dep}' which "
"does not exist.".format(name=service_dict['name'], dep=container_name)
)
return PidMode(pid_mode)
def start(self, service_names=None, **options):
containers = []
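The new `get_pid_mode` above reuses the network-mode helpers to dispatch a `pid:` value to a service, a container, or a raw mode such as `host`. A sketch of that dispatch order, with plain string parsing standing in for `get_service_name_from_network_mode` and friends:

```python
def classify_pid_mode(pid_mode):
    # Resolution order from get_pid_mode: unset, service:<name>,
    # container:<name>, then any raw value (e.g. 'host').
    if not pid_mode:
        return ('none', None)
    if pid_mode.startswith('service:'):
        return ('service', pid_mode[len('service:'):])
    if pid_mode.startswith('container:'):
        return ('container', pid_mode[len('container:'):])
    return ('raw', pid_mode)

assert classify_pid_mode(None) == ('none', None)
assert classify_pid_mode('service:web') == ('service', 'web')
assert classify_pid_mode('container:abc123') == ('container', 'abc123')
assert classify_pid_mode('host') == ('raw', 'host')
```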
@@ -382,7 +408,8 @@ class Project(object):
timeout=None,
detached=False,
remove_orphans=False,
scale_override=None):
scale_override=None,
rescale=True):
warn_for_swarm_mode(self.client)
@@ -405,7 +432,8 @@ class Project(object):
plans[service.name],
timeout=timeout,
detached=detached,
scale_override=scale_override.get(service.name)
scale_override=scale_override.get(service.name),
rescale=rescale
)
def get_deps(service):
@@ -460,7 +488,7 @@ class Project(object):
return plans
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False):
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False):
services = self.get_services(service_names, include_deps=False)
if parallel_pull:
@@ -475,7 +503,7 @@ class Project(object):
limit=5)
else:
for service in services:
service.pull(ignore_pull_failures)
service.pull(ignore_pull_failures, silent=silent)
def push(self, service_names=None, ignore_push_failures=False):
for service in self.get_services(service_names, include_deps=False):

View File

@@ -34,6 +34,7 @@ from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_VERSION
from .const import NANOCPUS_SCALE
from .container import Container
from .errors import HealthCheckFailed
from .errors import NoHealthCheckConfigured
@@ -52,7 +53,12 @@ HOST_CONFIG_KEYS = [
'cap_add',
'cap_drop',
'cgroup_parent',
'cpu_count',
'cpu_percent',
'cpu_quota',
'cpu_shares',
'cpus',
'cpuset',
'devices',
'dns',
'dns_search',
@@ -76,9 +82,11 @@ HOST_CONFIG_KEYS = [
'restart',
'security_opt',
'shm_size',
'storage_opt',
'sysctls',
'userns_mode',
'volumes_from',
'volume_driver',
]
CONDITION_STARTED = 'service_started'
@@ -149,6 +157,7 @@ class Service(object):
networks=None,
secrets=None,
scale=None,
pid_mode=None,
**options
):
self.name = name
@@ -158,6 +167,7 @@ class Service(object):
self.links = links or []
self.volumes_from = volumes_from or []
self.network_mode = network_mode or NetworkMode(None)
self.pid_mode = pid_mode or PidMode(None)
self.networks = networks or {}
self.secrets = secrets or []
self.scale_num = scale or 1
@@ -390,7 +400,7 @@ class Service(object):
return containers
def _execute_convergence_recreate(self, containers, scale, timeout, detached, start):
if len(containers) > scale:
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
@@ -408,14 +418,14 @@ class Service(object):
for error in errors.values():
raise OperationFailedError(error)
if len(containers) < scale:
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _execute_convergence_start(self, containers, scale, timeout, detached, start):
if len(containers) > scale:
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
@@ -429,7 +439,7 @@ class Service(object):
for error in errors.values():
raise OperationFailedError(error)
if len(containers) < scale:
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
@@ -448,7 +458,7 @@ class Service(object):
)
def execute_convergence_plan(self, plan, timeout=None, detached=False,
start=True, scale_override=None):
start=True, scale_override=None, rescale=True):
(action, containers) = plan
scale = scale_override if scale_override is not None else self.scale_num
containers = sorted(containers, key=attrgetter('number'))
@@ -460,6 +470,11 @@ class Service(object):
scale, detached, start
)
# The create action always needs an initial scale, but otherwise
# we set scale to None in no-rescale scenarios (`run` dependencies)
if not rescale:
scale = None
if action == 'recreate':
return self._execute_convergence_recreate(
containers, scale, timeout, detached, start
@@ -594,15 +609,19 @@ class Service(object):
def get_dependency_names(self):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
return (
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
)
def get_dependency_configs(self):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
configs = dict(
[(name, None) for name in self.get_linked_service_names()]
)
@@ -610,6 +629,7 @@ class Service(object):
[(name, None) for name in self.get_volumes_from_names()]
))
configs.update({net_name: None} if net_name else {})
configs.update({pid_namespace: None} if pid_namespace else {})
configs.update(self.options.get('depends_on', {}))
for svc, config in self.options.get('depends_on', {}).items():
if config['condition'] == CONDITION_STARTED:
@@ -716,6 +736,7 @@ class Service(object):
container_options = dict(
(k, self.options[k])
for k in DOCKER_CONFIG_KEYS if k in self.options)
override_volumes = override_options.pop('volumes', [])
container_options.update(override_options)
if not container_options.get('name'):
@@ -739,6 +760,11 @@ class Service(object):
formatted_ports(container_options.get('ports', [])),
self.options)
if 'volumes' in container_options or override_volumes:
container_options['volumes'] = list(set(
container_options.get('volumes', []) + override_volumes
))
container_options['environment'] = merge_environment(
self.options.get('environment'),
override_options.get('environment'))
@@ -793,6 +819,10 @@ class Service(object):
init_path = options.get('init')
options['init'] = True
nano_cpus = None
if 'cpus' in options:
nano_cpus = int(options.get('cpus') * NANOCPUS_SCALE)
return self.client.create_host_config(
links=self._get_links(link_to_self=one_off),
port_bindings=build_port_bindings(
@@ -816,7 +846,7 @@ class Service(object):
log_config=log_config,
extra_hosts=options.get('extra_hosts'),
read_only=options.get('read_only'),
pid_mode=options.get('pid'),
pid_mode=self.pid_mode.mode,
security_opt=options.get('security_opt'),
ipc_mode=options.get('ipc'),
cgroup_parent=options.get('cgroup_parent'),
@@ -832,6 +862,13 @@ class Service(object):
init=options.get('init', None),
init_path=init_path,
isolation=options.get('isolation'),
cpu_count=options.get('cpu_count'),
cpu_percent=options.get('cpu_percent'),
nano_cpus=nano_cpus,
volume_driver=options.get('volume_driver'),
cpuset_cpus=options.get('cpuset'),
cpu_shares=options.get('cpu_shares'),
storage_opt=options.get('storage_opt')
)
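The `cpus` handling in the hunk above converts fractional CPUs into Docker's `NanoCpus` unit before `create_host_config` is called. A quick sketch of that conversion, assuming `NANOCPUS_SCALE` is `10**9` (its apparent value in `compose/const.py`):

```python
NANOCPUS_SCALE = 10 ** 9  # assumption: 1 CPU == 1e9 nanocpus, per compose/const.py


def to_nano_cpus(options):
    # Mirrors the hunk above: only convert when 'cpus' is present,
    # otherwise leave nano_cpus unset (None).
    if 'cpus' in options:
        return int(options['cpus'] * NANOCPUS_SCALE)
    return None


assert to_nano_cpus({'cpus': 0.5}) == 500000000
assert to_nano_cpus({}) is None
```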
def get_secret_volumes(self):
@@ -868,7 +905,9 @@ class Service(object):
nocache=no_cache,
dockerfile=build_opts.get('dockerfile', None),
cache_from=build_opts.get('cache_from', None),
buildargs=build_args
labels=build_opts.get('labels', None),
buildargs=build_args,
network_mode=build_opts.get('network', None),
)
try:
@@ -1031,6 +1070,46 @@ def short_id_alias_exists(container, network):
return container.short_id in aliases
class PidMode(object):
def __init__(self, mode):
self._mode = mode
@property
def mode(self):
return self._mode
@property
def service_name(self):
return None
class ServicePidMode(PidMode):
def __init__(self, service):
self.service = service
@property
def service_name(self):
return self.service.name
@property
def mode(self):
containers = self.service.containers()
if containers:
return 'container:' + containers[0].id
log.warn(
"Service %s is trying to reuse the PID namespace "
"of another service that is not running." % (self.service_name)
)
return None
class ContainerPidMode(PidMode):
def __init__(self, container):
self.container = container
self._mode = 'container:{}'.format(container.id)
class NetworkMode(object):
"""A `standard` network mode (ex: host, bridge)"""
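The PidMode hierarchy introduced above turns a compose-file `pid:` value into the string Docker's HostConfig expects (`host`, `container:<id>`, or unset). A minimal sketch of the base and container cases, simplified to take a container id directly instead of a `Container` object:

```python
class PidMode:
    """Pass a plain pid value through unchanged (e.g. 'host' or None)."""
    def __init__(self, mode):
        self._mode = mode

    @property
    def mode(self):
        return self._mode

    @property
    def service_name(self):
        # Only ServicePidMode introduces a dependency on another service.
        return None


class ContainerPidMode(PidMode):
    """Join the PID namespace of an existing container."""
    def __init__(self, container_id):
        # Simplified: the real class stores a Container and formats its id.
        super().__init__('container:{}'.format(container_id))


assert PidMode('host').mode == 'host'
assert PidMode(None).mode is None
assert ContainerPidMode('abc123').mode == 'container:abc123'
```

`ServicePidMode` (not sketched) resolves its mode lazily, since the target service's container may not exist yet when the config is parsed.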

compose/version.py Normal file

@@ -0,0 +1,10 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from distutils.version import LooseVersion
class ComposeVersion(LooseVersion):
""" A hashable version object """
def __hash__(self):
return hash(self.vstring)
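The new `ComposeVersion` exists so version objects can be used as set members and dict keys. A standalone sketch of the same idea that avoids `distutils` (removed in Python 3.12); the real class simply subclasses `LooseVersion` and hashes `vstring`:

```python
class ComposeVersion:
    """Hashable, comparable version string (sketch of compose/version.py)."""
    def __init__(self, vstring):
        self.vstring = vstring
        # Assumes purely numeric dotted versions for this sketch.
        self.parts = tuple(int(p) for p in vstring.split('.'))

    def __eq__(self, other):
        return self.parts == other.parts

    def __lt__(self, other):
        return self.parts < other.parts

    def __hash__(self):
        # Mirrors the diff: hash on the raw version string.
        return hash(self.vstring)


assert ComposeVersion('1.15.0') > ComposeVersion('1.6.0')
assert len({ComposeVersion('1.15.0'), ComposeVersion('1.15.0')}) == 1
```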


@@ -224,14 +224,14 @@ _docker_compose_events() {
_docker_compose_exec() {
case "$prev" in
--index|--user)
--index|--user|-u)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-d --help --index --privileged -T --user" -- "$cur" ) )
COMPREPLY=( $( compgen -W "-d --help --index --privileged -T --user -u" -- "$cur" ) )
;;
*)
__docker_compose_services_running


@@ -241,7 +241,7 @@ __docker-compose_subcommand() {
$opts_help \
'-d[Detached mode: Run command in the background.]' \
'--privileged[Give extended privileges to the process.]' \
'--user=[Run the command as this user.]:username:_users' \
'(-u --user)'{-u,--user=}'[Run the command as this user.]:username:_users' \
'-T[Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY.]' \
'--index=[Index of the container if there are multiple instances of a service \[default: 1\]]:index: ' \
'(-):running services:__docker-compose_runningservices' \


@@ -52,6 +52,11 @@ exe = EXE(pyz,
'compose/config/config_schema_v3.2.json',
'DATA'
),
(
'compose/config/config_schema_v3.3.json',
'compose/config/config_schema_v3.3.json',
'DATA'
),
(
'compose/GITSHA',
'compose/GITSHA',


@@ -1,16 +1,22 @@
PyYAML==3.11
PySocks==1.6.7
PyYAML==3.12
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.2.0
colorama==0.3.7
docker==2.2.1
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
colorama==0.3.9
docker==2.4.2
docker-pycreds==0.2.1
dockerpty==0.4.1
docopt==0.6.1
enum34==1.0.4; python_version < '3.4'
docopt==0.6.2
enum34==1.1.6; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
ipaddress==1.0.16
jsonschema==2.5.1
idna==2.5
ipaddress==1.0.18
jsonschema==2.6.0
pypiwin32==219; sys_platform == 'win32'
requests==2.11.1
six==1.10.0
texttable==0.8.4
texttable==0.8.8
urllib3==1.21.1
websocket-client==0.32.0

script/build/test-image Executable file

@@ -0,0 +1,17 @@
#!/bin/bash
set -e
if [ -z "$1" ]; then
>&2 echo "First argument must be image tag."
exit 1
fi
TAG=$1
docker build -t docker-compose-tests:tmp .
ctnr_id=$(docker create --entrypoint=tox docker-compose-tests:tmp)
docker commit $ctnr_id docker/compose-tests:latest
docker tag docker/compose-tests:latest docker/compose-tests:$TAG
docker rm -f $ctnr_id
docker rmi -f docker-compose-tests:tmp


@@ -27,6 +27,9 @@ script/build/linux
echo "Building the container distribution"
script/build/image $VERSION
echo "Building the compose-tests image"
script/build/test-image $VERSION
echo "Create a github release"
# TODO: script more of this https://developer.github.com/v3/repos/releases/
browser https://github.com/$REPO/releases/new


@@ -0,0 +1,32 @@
#!/bin/bash
function usage() {
>&2 cat << EOM
Download Linux, Mac OS and Windows binaries from remote endpoints
Usage:
$0 <version>
Options:
version version string for the release (ex: 1.6.0)
EOM
exit 1
}
[ -n "$1" ] || usage
VERSION=$1
BASE_BINTRAY_URL=https://dl.bintray.com/docker-compose/bump-$VERSION/
DESTINATION=binaries-$VERSION
APPVEYOR_URL=https://ci.appveyor.com/api/projects/docker/compose/\
artifacts/dist%2Fdocker-compose-Windows-x86_64.exe?branch=bump-$VERSION
mkdir $DESTINATION
wget -O $DESTINATION/docker-compose-Darwin-x86_64 $BASE_BINTRAY_URL/docker-compose-Darwin-x86_64
wget -O $DESTINATION/docker-compose-Linux-x86_64 $BASE_BINTRAY_URL/docker-compose-Linux-x86_64
wget -O $DESTINATION/docker-compose-Windows-x86_64.exe $APPVEYOR_URL


@@ -54,6 +54,10 @@ git push $GITHUB_REPO $VERSION
echo "Uploading the docker image"
docker push docker/compose:$VERSION
echo "Uploading the compose-tests image"
docker push docker/compose-tests:latest
docker push docker/compose-tests:$VERSION
echo "Uploading package to PyPI"
pandoc -f markdown -t rst README.md -o README.rst
sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst


@@ -15,7 +15,7 @@
set -e
VERSION="1.13.0"
VERSION="1.15.0"
IMAGE="docker/compose:$VERSION"
@@ -35,6 +35,7 @@ if [ "$(pwd)" != '/' ]; then
VOLUMES="-v $(pwd):$(pwd)"
fi
if [ -n "$COMPOSE_FILE" ]; then
COMPOSE_OPTIONS="$COMPOSE_OPTIONS -e COMPOSE_FILE=$COMPOSE_FILE"
compose_dir=$(realpath $(dirname $COMPOSE_FILE))
fi
# TODO: also check --file argument


@@ -14,7 +14,7 @@ docker run --rm \
get_versions="docker run --rm
--entrypoint=/code/.tox/py27/bin/python
$TAG
/code/script/test/versions.py docker/docker"
/code/script/test/versions.py docker/docker-ce,moby/moby"
if [ "$DOCKER_VERSIONS" == "" ]; then
DOCKER_VERSIONS="$($get_versions default)"
@@ -48,7 +48,7 @@ for version in $DOCKER_VERSIONS; do
--privileged \
--volume="/var/lib/docker" \
"$repo:$version" \
docker daemon -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
dockerd -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
2>&1 | tail -n 10
docker run \


@@ -37,14 +37,22 @@ import requests
GITHUB_API = 'https://api.github.com/repos'
class Version(namedtuple('_Version', 'major minor patch rc')):
class Version(namedtuple('_Version', 'major minor patch rc edition')):
@classmethod
def parse(cls, version):
edition = None
version = version.lstrip('v')
version, _, rc = version.partition('-')
if rc:
if 'rc' not in rc:
edition = rc
rc = None
elif '-' in rc:
edition, rc = rc.split('-')
major, minor, patch = version.split('.', 3)
return cls(major, minor, patch, rc)
return cls(major, minor, patch, rc, edition)
@property
def major_minor(self):
@@ -61,7 +69,8 @@ class Version(namedtuple('_Version', 'major minor patch rc')):
def __str__(self):
rc = '-{}'.format(self.rc) if self.rc else ''
return '.'.join(map(str, self[:3])) + rc
edition = '-{}'.format(self.edition) if self.edition else ''
return '.'.join(map(str, self[:3])) + edition + rc
def group_versions(versions):
@@ -94,6 +103,7 @@ def get_latest_versions(versions, num=1):
group.
"""
versions = group_versions(versions)
num = min(len(versions), num)
return [versions[index][0] for index in range(num)]
@@ -112,16 +122,18 @@ def get_versions(tags):
print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)
def get_github_releases(project):
def get_github_releases(projects):
"""Query the Github API for a list of version tags and return them in
sorted order.
See https://developer.github.com/v3/repos/#list-tags
"""
url = '{}/{}/tags'.format(GITHUB_API, project)
response = requests.get(url)
response.raise_for_status()
versions = get_versions(response.json())
versions = []
for project in projects:
url = '{}/{}/tags'.format(GITHUB_API, project)
response = requests.get(url)
response.raise_for_status()
versions.extend(get_versions(response.json()))
return sorted(versions, reverse=True, key=operator.attrgetter('order'))
@@ -136,7 +148,7 @@ def parse_args(argv):
def main(argv=None):
args = parse_args(argv)
versions = get_github_releases(args.project)
versions = get_github_releases(args.project.split(','))
if args.command == 'recent':
print(' '.join(map(str, get_latest_versions(versions, args.num))))
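The reworked `Version.parse` above copes with Docker CE tags such as `v17.06.0-ce-rc5`, where an edition suffix sits between the patch number and the rc. A self-contained sketch of the parsing and formatting shown in the hunks:

```python
from collections import namedtuple


class Version(namedtuple('_Version', 'major minor patch rc edition')):
    @classmethod
    def parse(cls, version):
        # Split 'v17.06.0-ce-rc5' into numeric part and suffix, then decide
        # whether the suffix is an rc, an edition, or an edition plus rc.
        edition = None
        version = version.lstrip('v')
        version, _, rc = version.partition('-')
        if rc:
            if 'rc' not in rc:
                edition = rc
                rc = None
            elif '-' in rc:
                edition, rc = rc.split('-')
        major, minor, patch = version.split('.', 3)
        return cls(major, minor, patch, rc, edition)

    def __str__(self):
        rc = '-{}'.format(self.rc) if self.rc else ''
        edition = '-{}'.format(self.edition) if self.edition else ''
        return '.'.join(map(str, self[:3])) + edition + rc


assert str(Version.parse('v17.06.0-ce-rc5')) == '17.06.0-ce-rc5'
assert Version.parse('17.06.0-ce').edition == 'ce'
assert Version.parse('1.13.1').edition is None
```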


@@ -37,7 +37,7 @@ install_requires = [
'requests >= 2.6.1, != 2.11.0, < 2.12',
'texttable >= 0.8.1, < 0.9',
'websocket-client >= 0.32.0, < 1.0',
'docker >= 2.2.1, < 3.0',
'docker >= 2.4.2, < 3.0',
'dockerpty >= 0.4.1, < 0.5',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
@@ -56,6 +56,7 @@ extras_require = {
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5'],
':python_version < "3.3"': ['ipaddress >= 1.0.16'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
}


@@ -21,17 +21,20 @@ from docker import errors
from .. import mock
from ..helpers import create_host_file
from compose.cli.command import get_project
from compose.config.errors import DuplicateOverrideFileFound
from compose.container import Container
from compose.project import OneOffFilter
from compose.utils import nanoseconds_from_time_seconds
from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import get_links
from tests.integration.testcases import is_cluster
from tests.integration.testcases import no_cluster
from tests.integration.testcases import pull_busybox
from tests.integration.testcases import SWARM_SKIP_RM_VOLUMES
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
@@ -68,7 +71,8 @@ def wait_on_condition(condition, delay=0.1, timeout=40):
def kill_service(service):
for container in service.containers():
container.kill()
if container.is_running:
container.kill()
class ContainerCountCondition(object):
@@ -78,7 +82,7 @@ class ContainerCountCondition(object):
self.expected = expected
def __call__(self):
return len(self.project.containers()) == self.expected
return len([c for c in self.project.containers() if c.is_running]) == self.expected
def __str__(self):
return "waiting for container count == %s" % self.expected
@@ -112,15 +116,18 @@ class CLITestCase(DockerClientTestCase):
def tearDown(self):
if self.base_dir:
self.project.kill()
self.project.remove_stopped()
self.project.down(None, True)
for container in self.project.containers(stopped=True, one_off=OneOffFilter.only):
container.remove(force=True)
networks = self.client.networks()
for n in networks:
if n['Name'].startswith('{}_'.format(self.project.name)):
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name)):
self.client.remove_network(n['Name'])
volumes = self.client.volumes().get('Volumes') or []
for v in volumes:
if v['Name'].split('/')[-1].startswith('{}_'.format(self.project.name)):
self.client.remove_volume(v['Name'])
if hasattr(self, '_project'):
del self._project
@@ -175,7 +182,10 @@ class CLITestCase(DockerClientTestCase):
def test_host_not_reachable_volumes_from_container(self):
self.base_dir = 'tests/fixtures/volumes-from-container'
container = self.client.create_container('busybox', 'true', name='composetest_data_container')
container = self.client.create_container(
'busybox', 'true', name='composetest_data_container',
host_config={}
)
self.addCleanup(self.client.remove_container, container)
result = self.dispatch(['-H=tcp://doesnotexist:8000', 'ps'], returncode=1)
@@ -258,8 +268,6 @@ class CLITestCase(DockerClientTestCase):
'restart': ''
},
},
'networks': {},
'volumes': {},
}
def test_config_external_network(self):
@@ -311,8 +319,6 @@ class CLITestCase(DockerClientTestCase):
'network_mode': 'service:net',
},
},
'networks': {},
'volumes': {},
}
@v3_only()
@@ -322,8 +328,6 @@ class CLITestCase(DockerClientTestCase):
assert yaml.load(result.stdout) == {
'version': '3.2',
'networks': {},
'secrets': {},
'volumes': {
'foobar': {
'labels': {
@@ -437,6 +441,10 @@ class CLITestCase(DockerClientTestCase):
assert ('repository nonexisting-image not found' in result.stderr or
'image library/nonexisting-image:latest not found' in result.stderr)
def test_pull_with_quiet(self):
assert self.dispatch(['pull', '--quiet']).stderr == ''
assert self.dispatch(['pull', '--quiet']).stdout == ''
def test_build_plain(self):
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', 'simple'])
@@ -547,42 +555,48 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['create'])
service = self.project.get_service('simple')
another = self.project.get_service('another')
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(another.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertEqual(len(another.containers(stopped=True)), 1)
service_containers = service.containers(stopped=True)
another_containers = another.containers(stopped=True)
assert len(service_containers) == 1
assert len(another_containers) == 1
assert not service_containers[0].is_running
assert not another_containers[0].is_running
def test_create_with_force_recreate(self):
self.dispatch(['create'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
old_ids = [c.id for c in service.containers(stopped=True)]
self.dispatch(['create', '--force-recreate'], None)
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
new_ids = [c.id for c in service.containers(stopped=True)]
new_ids = [c.id for c in service_containers]
self.assertNotEqual(old_ids, new_ids)
assert old_ids != new_ids
def test_create_with_no_recreate(self):
self.dispatch(['create'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
old_ids = [c.id for c in service.containers(stopped=True)]
self.dispatch(['create', '--no-recreate'], None)
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
new_ids = [c.id for c in service.containers(stopped=True)]
new_ids = [c.id for c in service_containers]
self.assertEqual(old_ids, new_ids)
assert old_ids == new_ids
def test_run_one_off_with_volume(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
@@ -595,8 +609,13 @@ class CLITestCase(DockerClientTestCase):
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
# FIXME: does not work with Python 3
# assert cmd_result.stdout.strip() == 'FILE_CONTENT'
service = self.project.get_service('simple')
container_data = service.containers(one_off=OneOffFilter.only, stopped=True)[0]
mount = container_data.get('Mounts')[0]
assert mount['Source'] == volume_path
assert mount['Destination'] == '/data'
assert mount['Type'] == 'bind'
def test_run_one_off_with_multiple_volumes(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
@@ -610,8 +629,6 @@ class CLITestCase(DockerClientTestCase):
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
# FIXME: does not work with Python 3
# assert cmd_result.stdout.strip() == 'FILE_CONTENT'
self.dispatch([
'run',
@@ -620,8 +637,30 @@ class CLITestCase(DockerClientTestCase):
'simple',
'test', '-f' '/data1/example.txt'
], returncode=0)
# FIXME: does not work with Python 3
# assert cmd_result.stdout.strip() == 'FILE_CONTENT'
def test_run_one_off_with_volume_merge(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
volume_path = os.path.abspath(os.path.join(os.getcwd(), self.base_dir, 'files'))
create_host_file(self.client, os.path.join(volume_path, 'example.txt'))
self.dispatch([
'-f', 'docker-compose.merge.yml',
'run',
'-v', '{}:/data'.format(volume_path),
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
service = self.project.get_service('simple')
container_data = service.containers(one_off=OneOffFilter.only, stopped=True)[0]
mounts = container_data.get('Mounts')
assert len(mounts) == 2
config_mount = [m for m in mounts if m['Destination'] == '/data1'][0]
override_mount = [m for m in mounts if m['Destination'] == '/data'][0]
assert config_mount['Type'] == 'volume'
assert override_mount['Source'] == volume_path
assert override_mount['Type'] == 'bind'
def test_create_with_force_recreate_and_no_recreate(self):
self.dispatch(
@@ -689,7 +728,7 @@ class CLITestCase(DockerClientTestCase):
network_name = self.project.networks.networks['default'].full_name
networks = self.client.networks(names=[network_name])
self.assertEqual(len(networks), 1)
self.assertEqual(networks[0]['Driver'], 'bridge')
assert networks[0]['Driver'] == 'bridge' if not is_cluster(self.client) else 'overlay'
assert 'com.docker.network.bridge.enable_icc' not in networks[0]['Options']
network = self.client.inspect_network(networks[0]['Id'])
@@ -735,11 +774,11 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
# Two networks were created: back and front
assert sorted(n['Name'] for n in networks) == [back_name, front_name]
assert sorted(n['Name'].split('/')[-1] for n in networks) == [back_name, front_name]
web_container = self.project.get_service('web').containers()[0]
back_aliases = web_container.get(
@@ -763,11 +802,11 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
# One network was created: internal
assert sorted(n['Name'] for n in networks) == [internal_net]
assert sorted(n['Name'].split('/')[-1] for n in networks) == [internal_net]
assert networks[0]['Internal'] is True
@@ -782,11 +821,11 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
# One network was created: front
assert sorted(n['Name'] for n in networks) == [static_net]
assert sorted(n['Name'].split('/')[-1] for n in networks) == [static_net]
web_container = self.project.get_service('web').containers()[0]
ipam_config = web_container.get(
@@ -805,14 +844,19 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
# Two networks were created: back and front
assert sorted(n['Name'] for n in networks) == [back_name, front_name]
assert sorted(n['Name'].split('/')[-1] for n in networks) == [back_name, front_name]
back_network = [n for n in networks if n['Name'] == back_name][0]
front_network = [n for n in networks if n['Name'] == front_name][0]
# lookup by ID instead of name in case of duplicates
back_network = self.client.inspect_network(
[n for n in networks if n['Name'] == back_name][0]['Id']
)
front_network = self.client.inspect_network(
[n for n in networks if n['Name'] == front_name][0]['Id']
)
web_container = self.project.get_service('web').containers()[0]
app_container = self.project.get_service('app').containers()[0]
@@ -849,8 +893,12 @@ class CLITestCase(DockerClientTestCase):
assert 'Service "web" uses an undefined network "foo"' in result.stderr
@v2_only()
@no_cluster('container networks not supported in Swarm')
def test_up_with_network_mode(self):
c = self.client.create_container('busybox', 'top', name='composetest_network_mode_container')
c = self.client.create_container(
'busybox', 'top', name='composetest_network_mode_container',
host_config={}
)
self.addCleanup(self.client.remove_container, c, force=True)
self.client.start(c)
container_mode_source = 'container:{}'.format(c['Id'])
@@ -864,7 +912,7 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert not networks
@@ -901,7 +949,7 @@ class CLITestCase(DockerClientTestCase):
network_names = ['{}_{}'.format(self.project.name, n) for n in ['foo', 'bar']]
for name in network_names:
self.client.create_network(name)
self.client.create_network(name, attachable=True)
self.dispatch(['-f', filename, 'up', '-d'])
container = self.project.containers()[0]
@@ -919,12 +967,12 @@ class CLITestCase(DockerClientTestCase):
networks = [
n['Name'] for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert not networks
network_name = 'composetest_external_network'
self.client.create_network(network_name)
self.client.create_network(network_name, attachable=True)
self.dispatch(['-f', filename, 'up', '-d'])
container = self.project.containers()[0]
@@ -943,10 +991,10 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert [n['Name'] for n in networks] == [network_with_label]
assert [n['Name'].split('/')[-1] for n in networks] == [network_with_label]
assert 'label_key' in networks[0]['Labels']
assert networks[0]['Labels']['label_key'] == 'label_val'
@@ -963,10 +1011,10 @@ class CLITestCase(DockerClientTestCase):
volumes = [
v for v in self.client.volumes().get('Volumes', [])
if v['Name'].startswith('{}_'.format(self.project.name))
if v['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert [v['Name'] for v in volumes] == [volume_with_label]
assert set([v['Name'].split('/')[-1] for v in volumes]) == set([volume_with_label])
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@@ -977,7 +1025,7 @@ class CLITestCase(DockerClientTestCase):
network_names = [
n['Name'] for n in self.client.networks()
if n['Name'].startswith('{}_'.format(self.project.name))
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
]
assert network_names == []
@@ -1012,6 +1060,7 @@ class CLITestCase(DockerClientTestCase):
assert "Unsupported config option for services.bar: 'net'" in result.stderr
@no_cluster("Legacy networking not supported on Swarm")
def test_up_with_net_v1(self):
self.base_dir = 'tests/fixtures/net-container'
self.dispatch(['up', '-d'], None)
@@ -1164,14 +1213,40 @@ class CLITestCase(DockerClientTestCase):
proc.wait()
self.assertEqual(proc.returncode, 1)
@v2_only()
@no_cluster('Container PID mode does not work across clusters')
def test_up_with_pid_mode(self):
c = self.client.create_container(
'busybox', 'top', name='composetest_pid_mode_container',
host_config={}
)
self.addCleanup(self.client.remove_container, c, force=True)
self.client.start(c)
container_mode_source = 'container:{}'.format(c['Id'])
self.base_dir = 'tests/fixtures/pid-mode'
self.dispatch(['up', '-d'], None)
service_mode_source = 'container:{}'.format(
self.project.get_service('container').containers()[0].id)
service_mode_container = self.project.get_service('service').containers()[0]
assert service_mode_container.get('HostConfig.PidMode') == service_mode_source
container_mode_container = self.project.get_service('container').containers()[0]
assert container_mode_container.get('HostConfig.PidMode') == container_mode_source
host_mode_container = self.project.get_service('host').containers()[0]
assert host_mode_container.get('HostConfig.PidMode') == 'host'
def test_exec_without_tty(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', 'console'])
self.assertEqual(len(self.project.containers()), 1)
stdout, stderr = self.dispatch(['exec', '-T', 'console', 'ls', '-1d', '/'])
self.assertEqual(stdout, "/\n")
self.assertEqual(stderr, "")
self.assertEqual(stdout, "/\n")
def test_exec_custom_user(self):
self.base_dir = 'tests/fixtures/links-composefile'
@@ -1211,6 +1286,17 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(console.containers()), 0)
def test_run_service_with_scaled_dependencies(self):
self.base_dir = 'tests/fixtures/v2-dependencies'
self.dispatch(['up', '-d', '--scale', 'db=2', '--scale', 'console=0'])
db = self.project.get_service('db')
console = self.project.get_service('console')
assert len(db.containers()) == 2
assert len(console.containers()) == 0
self.dispatch(['run', 'web', '/bin/true'], None)
assert len(db.containers()) == 2
assert len(console.containers()) == 0
def test_run_with_no_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['run', '--no-deps', 'web', '/bin/true'])
@@ -1252,6 +1338,7 @@ class CLITestCase(DockerClientTestCase):
[u'/bin/true'],
)
@py.test.mark.skipif(SWARM_SKIP_RM_VOLUMES, reason='Swarm DELETE /containers/<id> bug')
def test_run_rm(self):
self.base_dir = 'tests/fixtures/volume'
proc = start_process(self.base_dir, ['run', '--rm', 'test'])
@@ -1265,7 +1352,7 @@ class CLITestCase(DockerClientTestCase):
mounts = containers[0].get('Mounts')
for mount in mounts:
if mount['Destination'] == '/container-path':
anonymousName = mount['Name']
anonymous_name = mount['Name']
break
os.kill(proc.pid, signal.SIGINT)
wait_on_process(proc, 1)
@@ -1278,9 +1365,11 @@ class CLITestCase(DockerClientTestCase):
if volume.internal == '/container-named-path':
name = volume.external
break
-volumeNames = [v['Name'] for v in volumes]
-assert name in volumeNames
-assert anonymousName not in volumeNames
+volume_names = [v['Name'].split('/')[-1] for v in volumes]
+assert name in volume_names
+if not is_cluster(self.client):
+    # The `-v` flag for `docker rm` in Swarm seems to be broken
+    assert anonymous_name not in volume_names
def test_run_service_with_dockerfile_entrypoint(self):
self.base_dir = 'tests/fixtures/entrypoint-dockerfile'
@@ -1402,11 +1491,10 @@ class CLITestCase(DockerClientTestCase):
container.stop()
# check the ports
-self.assertNotEqual(port_random, None)
-self.assertIn("0.0.0.0", port_random)
-self.assertEqual(port_assigned, "0.0.0.0:49152")
-self.assertEqual(port_range[0], "0.0.0.0:49153")
-self.assertEqual(port_range[1], "0.0.0.0:49154")
+assert port_random is not None
+assert port_assigned.endswith(':49152')
+assert port_range[0].endswith(':49153')
+assert port_range[1].endswith(':49154')
def test_run_service_with_explicitly_mapped_ports(self):
# create one off container
@@ -1422,8 +1510,8 @@ class CLITestCase(DockerClientTestCase):
container.stop()
# check the ports
-self.assertEqual(port_short, "0.0.0.0:30000")
-self.assertEqual(port_full, "0.0.0.0:30001")
+assert port_short.endswith(':30000')
+assert port_full.endswith(':30001')
def test_run_service_with_explicitly_mapped_ip_ports(self):
# create one off container
@@ -1616,8 +1704,24 @@ class CLITestCase(DockerClientTestCase):
service = self.project.get_service('simple')
service.create_container()
self.dispatch(['rm', '-fs'], None)
self.assertEqual(len(service.containers(stopped=True)), 0)
def test_rm_stop(self):
self.dispatch(['up', '-d'], None)
simple = self.project.get_service('simple')
self.assertEqual(len(simple.containers()), 0)
another = self.project.get_service('another')
assert len(simple.containers()) == 1
assert len(another.containers()) == 1
self.dispatch(['rm', '-fs'], None)
assert len(simple.containers(stopped=True)) == 0
assert len(another.containers(stopped=True)) == 0
self.dispatch(['up', '-d'], None)
assert len(simple.containers()) == 1
assert len(another.containers()) == 1
self.dispatch(['rm', '-fs', 'another'], None)
assert len(simple.containers()) == 1
assert len(another.containers(stopped=True)) == 0
def test_rm_all(self):
service = self.project.get_service('simple')
@@ -1723,7 +1827,13 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['logs', '-f'])
-assert result.stdout.count('\n') == 5
+if not is_cluster(self.client):
+    assert result.stdout.count('\n') == 5
+else:
+    # Sometimes logs are picked up from old containers that haven't yet
+    # been removed (removal in Swarm is async)
+    assert result.stdout.count('\n') >= 5
assert 'simple' in result.stdout
assert 'another' in result.stdout
assert 'exited with code 0' in result.stdout
@@ -1779,7 +1889,10 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['up'])
result = self.dispatch(['logs', '--tail', '2'])
assert result.stdout.count('\n') == 3
assert 'c\n' in result.stdout
assert 'd\n' in result.stdout
assert 'a\n' not in result.stdout
assert 'b\n' not in result.stdout
def test_kill(self):
self.dispatch(['up', '-d'], None)
@@ -1928,9 +2041,9 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['port', 'simple', str(number)])
return result.stdout.rstrip()
-self.assertEqual(get_port(3000), container.get_local_port(3000))
-self.assertEqual(get_port(3001), "0.0.0.0:49152")
-self.assertEqual(get_port(3002), "0.0.0.0:49153")
+assert get_port(3000) == container.get_local_port(3000)
+assert ':49152' in get_port(3001)
+assert ':49153' in get_port(3002)
def test_expanded_port(self):
self.base_dir = 'tests/fixtures/ports-composefile'
@@ -1941,9 +2054,9 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['port', 'simple', str(number)])
return result.stdout.rstrip()
-self.assertEqual(get_port(3000), container.get_local_port(3000))
-self.assertEqual(get_port(3001), "0.0.0.0:49152")
-self.assertEqual(get_port(3002), "0.0.0.0:49153")
+assert get_port(3000) == container.get_local_port(3000)
+assert ':53222' in get_port(3001)
+assert ':53223' in get_port(3002)
def test_port_with_scale(self):
self.base_dir = 'tests/fixtures/ports-composefile-scale'
@@ -1996,12 +2109,14 @@ class CLITestCase(DockerClientTestCase):
assert len(lines) == 2
container, = self.project.containers()
-expected_template = (
-    ' container {} {} (image=busybox:latest, '
-    'name=simplecomposefile_simple_1)')
+expected_template = ' container {} {}'
+expected_meta_info = ['image=busybox:latest', 'name=simplecomposefile_simple_1']
 assert expected_template.format('create', container.id) in lines[0]
 assert expected_template.format('start', container.id) in lines[1]
+for line in lines:
+    for info in expected_meta_info:
+        assert info in line
assert has_timestamp(lines[0])
@@ -2044,7 +2159,6 @@ class CLITestCase(DockerClientTestCase):
'docker-compose.yml',
'docker-compose.override.yml',
'extra.yml',
]
self._project = get_project(self.base_dir, config_paths)
self.dispatch(
@@ -2061,7 +2175,6 @@ class CLITestCase(DockerClientTestCase):
web, other, db = containers
self.assertEqual(web.human_readable_command, 'top')
self.assertTrue({'db', 'other'} <= set(get_links(web)))
self.assertEqual(db.human_readable_command, 'top')
self.assertEqual(other.human_readable_command, 'top')
@@ -2138,3 +2251,25 @@ class CLITestCase(DockerClientTestCase):
assert 'busybox' in result.stdout
assert 'multiplecomposefiles_another_1' in result.stdout
assert 'multiplecomposefiles_simple_1' in result.stdout
def test_up_with_override_yaml(self):
self.base_dir = 'tests/fixtures/override-yaml-files'
self._project = get_project(self.base_dir, [])
self.dispatch(
[
'up', '-d',
],
None)
containers = self.project.containers()
self.assertEqual(len(containers), 2)
web, db = containers
self.assertEqual(web.human_readable_command, 'sleep 100')
self.assertEqual(db.human_readable_command, 'top')
def test_up_with_duplicate_override_yaml_files(self):
self.base_dir = 'tests/fixtures/duplicate-override-yaml-files'
with self.assertRaises(DuplicateOverrideFileFound):
get_project(self.base_dir, [])
self.base_dir = None
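The new test_up_with_duplicate_override_yaml_files case expects get_project to raise DuplicateOverrideFileFound when both a .yml and a .yaml override file are present. A minimal sketch of that detection logic, using hypothetical helper names (the real code lives in compose.config):

```python
import os

SUPPORTED_EXTENSIONS = ('.yml', '.yaml')


class DuplicateOverrideFileFound(Exception):
    pass


def find_override_file(base_dir):
    # Collect every override candidate that actually exists on disk
    candidates = [
        os.path.join(base_dir, 'docker-compose.override' + ext)
        for ext in SUPPORTED_EXTENSIONS
    ]
    matches = [path for path in candidates if os.path.exists(path)]
    if len(matches) > 1:
        # Both .yml and .yaml override files present: ambiguous, so fail loudly
        raise DuplicateOverrideFileFound(matches)
    return matches[0] if matches else None
```

The test fixture in tests/fixtures/duplicate-override-yaml-files exercises exactly the multi-match branch.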

@@ -0,0 +1,3 @@
db:
  command: "top"

@@ -0,0 +1,3 @@
db:
  command: "sleep 300"

@@ -0,0 +1,10 @@
web:
  image: busybox:latest
  command: "sleep 100"
  links:
    - db
db:
  image: busybox:latest
  command: "sleep 200"

@@ -1,6 +1,7 @@
-web:
+version: '2.2'
+services:
+  web:
     command: "top"
-db:
+  db:
     command: "top"

@@ -1,10 +1,10 @@
-web:
+version: '2.2'
+services:
+  web:
     image: busybox:latest
     command: "sleep 200"
-  links:
+    depends_on:
       - db
-db:
+  db:
     image: busybox:latest
     command: "sleep 200"

@@ -1,9 +1,10 @@
-web:
-  links:
+version: '2.2'
+services:
+  web:
+    depends_on:
       - db
       - other
-other:
+  other:
     image: busybox:latest
     command: "top"

@@ -0,0 +1,3 @@
db:
  command: "top"

@@ -0,0 +1,10 @@
web:
  image: busybox:latest
  command: "sleep 100"
  links:
    - db
db:
  image: busybox:latest
  command: "sleep 200"

@@ -0,0 +1,17 @@
version: "2.2"
services:
  service:
    image: busybox
    command: top
    pid: "service:container"
  container:
    image: busybox
    command: top
    pid: "container:composetest_pid_mode_container"
  host:
    image: busybox
    command: top
    pid: host
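The fixture above declares the three pid-mode forms Compose accepts: join another service's pid namespace, join a named container's, or share the host's. A standalone sketch of how such a value might resolve into a Docker HostConfig PidMode string (hypothetical resolver; compose.service.PidMode, imported in service_test.py below, is the real implementation):

```python
class PidMode(object):
    """Resolve a Compose-style pid value to a Docker PidMode string."""

    def __init__(self, value):
        self.value = value

    def resolve(self, service_container_ids):
        # service_container_ids maps service name -> a running container id
        if self.value is None:
            return None
        if self.value == 'host':
            # Share the host's pid namespace
            return 'host'
        if self.value.startswith('container:'):
            # Join the pid namespace of an explicitly named container
            return self.value
        if self.value.startswith('service:'):
            # Look up a container of the referenced service first
            name = self.value.split(':', 1)[1]
            return 'container:' + service_container_ids[name]
        raise ValueError('unsupported pid mode: %r' % self.value)
```

With this fixture, `pid: "service:container"` would only resolve once the `container` service has a running container to point at.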

@@ -6,10 +6,10 @@ services:
     ports:
       - target: 3000
       - target: 3001
-        published: 49152
+        published: 53222
       - target: 3002
-        published: 49153
+        published: 53223
         protocol: tcp
       - target: 3003
-        published: 49154
+        published: 53224
         protocol: udp

@@ -0,0 +1,9 @@
version: '2.2'
services:
  simple:
    image: busybox:latest
    volumes:
      - datastore:/data1
volumes:
  datastore:

@@ -6,12 +6,14 @@ import random
import py
import pytest
from docker.errors import APIError
from docker.errors import NotFound
from .. import mock
from ..helpers import build_config as load_config
from ..helpers import create_host_file
from .testcases import DockerClientTestCase
from .testcases import SWARM_SKIP_CONTAINERS_ALL
from compose.config import config
from compose.config import ConfigurationError
from compose.config import types
@@ -29,7 +31,10 @@ from compose.errors import NoHealthCheckConfigured
from compose.project import Project
from compose.project import ProjectError
from compose.service import ConvergenceStrategy
from tests.integration.testcases import is_cluster
from tests.integration.testcases import no_cluster
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_2_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
@@ -40,7 +45,9 @@ def build_config(**kwargs):
services=kwargs.get('services'),
volumes=kwargs.get('volumes'),
networks=kwargs.get('networks'),
-secrets=kwargs.get('secrets'))
+secrets=kwargs.get('secrets'),
+configs=kwargs.get('configs'),
+)
class ProjectTest(DockerClientTestCase):
@@ -55,6 +62,20 @@ class ProjectTest(DockerClientTestCase):
containers = project.containers()
self.assertEqual(len(containers), 2)
@pytest.mark.skipif(SWARM_SKIP_CONTAINERS_ALL, reason='Swarm /containers/json bug')
def test_containers_stopped(self):
web = self.create_service('web')
db = self.create_service('db')
project = Project('composetest', [web, db], self.client)
project.up()
assert len(project.containers()) == 2
assert len(project.containers(stopped=True)) == 2
project.stop()
assert len(project.containers()) == 0
assert len(project.containers(stopped=True)) == 2
def test_containers_with_service_names(self):
web = self.create_service('web')
db = self.create_service('db')
@@ -108,6 +129,7 @@ class ProjectTest(DockerClientTestCase):
volumes=['/var/data'],
name='composetest_data_container',
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
project = Project.from_config(
name='composetest',
@@ -123,12 +145,13 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(db._get_volumes_from(), [data_container.id + ':rw'])
@v2_only()
@no_cluster('container networks not supported in Swarm')
def test_network_mode_from_service(self):
project = Project.from_config(
name='composetest',
client=self.client,
config_data=load_config({
-'version': V2_0,
+'version': str(V2_0),
'services': {
'net': {
'image': 'busybox:latest',
@@ -150,12 +173,13 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(web.network_mode.mode, 'container:' + net.containers()[0].id)
@v2_only()
@no_cluster('container networks not supported in Swarm')
def test_network_mode_from_container(self):
def get_project():
return Project.from_config(
name='composetest',
config_data=load_config({
-'version': V2_0,
+'version': str(V2_0),
'services': {
'web': {
'image': 'busybox:latest',
@@ -177,6 +201,7 @@ class ProjectTest(DockerClientTestCase):
name='composetest_net_container',
command='top',
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
net_container.start()
@@ -186,6 +211,7 @@ class ProjectTest(DockerClientTestCase):
web = project.get_service('web')
self.assertEqual(web.network_mode.mode, 'container:' + net_container.id)
@no_cluster('container networks not supported in Swarm')
def test_net_from_service_v1(self):
project = Project.from_config(
name='composetest',
@@ -209,6 +235,7 @@ class ProjectTest(DockerClientTestCase):
net = project.get_service('net')
self.assertEqual(web.network_mode.mode, 'container:' + net.containers()[0].id)
@no_cluster('container networks not supported in Swarm')
def test_net_from_container_v1(self):
def get_project():
return Project.from_config(
@@ -233,6 +260,7 @@ class ProjectTest(DockerClientTestCase):
name='composetest_net_container',
command='top',
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
net_container.start()
@@ -258,12 +286,12 @@ class ProjectTest(DockerClientTestCase):
project.start(service_names=['web'])
self.assertEqual(
-set(c.name for c in project.containers()),
+set(c.name for c in project.containers() if c.is_running),
set([web_container_1.name, web_container_2.name]))
project.start()
self.assertEqual(
-set(c.name for c in project.containers()),
+set(c.name for c in project.containers() if c.is_running),
set([web_container_1.name, web_container_2.name, db_container.name]))
project.pause(service_names=['web'])
@@ -283,10 +311,12 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len([c.name for c in project.containers() if c.is_paused]), 0)
project.stop(service_names=['web'], timeout=1)
-self.assertEqual(set(c.name for c in project.containers()), set([db_container.name]))
+self.assertEqual(
+    set(c.name for c in project.containers() if c.is_running), set([db_container.name])
+)
project.kill(service_names=['db'])
-self.assertEqual(len(project.containers()), 0)
+self.assertEqual(len([c for c in project.containers() if c.is_running]), 0)
self.assertEqual(len(project.containers(stopped=True)), 3)
project.remove_stopped(service_names=['web'])
@@ -301,11 +331,13 @@ class ProjectTest(DockerClientTestCase):
project = Project('composetest', [web, db], self.client)
project.create(['db'])
-self.assertEqual(len(project.containers()), 0)
-self.assertEqual(len(project.containers(stopped=True)), 1)
-self.assertEqual(len(db.containers()), 0)
-self.assertEqual(len(db.containers(stopped=True)), 1)
-self.assertEqual(len(web.containers(stopped=True)), 0)
+containers = project.containers(stopped=True)
+assert len(containers) == 1
+assert not containers[0].is_running
+db_containers = db.containers(stopped=True)
+assert len(db_containers) == 1
+assert not db_containers[0].is_running
+assert len(web.containers(stopped=True)) == 0
def test_create_twice(self):
web = self.create_service('web')
@@ -314,12 +346,14 @@ class ProjectTest(DockerClientTestCase):
project.create(['db', 'web'])
project.create(['db', 'web'])
-self.assertEqual(len(project.containers()), 0)
-self.assertEqual(len(project.containers(stopped=True)), 2)
-self.assertEqual(len(db.containers()), 0)
-self.assertEqual(len(db.containers(stopped=True)), 1)
-self.assertEqual(len(web.containers()), 0)
-self.assertEqual(len(web.containers(stopped=True)), 1)
+containers = project.containers(stopped=True)
+assert len(containers) == 2
+db_containers = db.containers(stopped=True)
+assert len(db_containers) == 1
+assert not db_containers[0].is_running
+web_containers = web.containers(stopped=True)
+assert len(web_containers) == 1
+assert not web_containers[0].is_running
def test_create_with_links(self):
db = self.create_service('db')
@@ -327,12 +361,11 @@ class ProjectTest(DockerClientTestCase):
project = Project('composetest', [db, web], self.client)
project.create(['web'])
-self.assertEqual(len(project.containers()), 0)
-self.assertEqual(len(project.containers(stopped=True)), 2)
-self.assertEqual(len(db.containers()), 0)
-self.assertEqual(len(db.containers(stopped=True)), 1)
-self.assertEqual(len(web.containers()), 0)
-self.assertEqual(len(web.containers(stopped=True)), 1)
+# self.assertEqual(len(project.containers()), 0)
+assert len(project.containers(stopped=True)) == 2
+assert not [c for c in project.containers(stopped=True) if c.is_running]
+assert len(db.containers(stopped=True)) == 1
+assert len(web.containers(stopped=True)) == 1
def test_create_strategy_always(self):
db = self.create_service('db')
@@ -341,11 +374,11 @@ class ProjectTest(DockerClientTestCase):
old_id = project.containers(stopped=True)[0].id
project.create(['db'], strategy=ConvergenceStrategy.always)
 self.assertEqual(len(project.containers()), 0)
-self.assertEqual(len(project.containers(stopped=True)), 1)
+assert len(project.containers(stopped=True)) == 1
 db_container = project.containers(stopped=True)[0]
-self.assertNotEqual(db_container.id, old_id)
+assert not db_container.is_running
+assert db_container.id != old_id
def test_create_strategy_never(self):
db = self.create_service('db')
@@ -354,11 +387,11 @@ class ProjectTest(DockerClientTestCase):
old_id = project.containers(stopped=True)[0].id
project.create(['db'], strategy=ConvergenceStrategy.never)
 self.assertEqual(len(project.containers()), 0)
-self.assertEqual(len(project.containers(stopped=True)), 1)
+assert len(project.containers(stopped=True)) == 1
 db_container = project.containers(stopped=True)[0]
-self.assertEqual(db_container.id, old_id)
+assert not db_container.is_running
+assert db_container.id == old_id
def test_project_up(self):
web = self.create_service('web')
@@ -548,8 +581,8 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.containers(stopped=True)), 2)
self.assertEqual(len(project.get_service('web').containers()), 0)
self.assertEqual(len(project.get_service('db').containers()), 1)
-self.assertEqual(len(project.get_service('data').containers()), 0)
 self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1)
+assert not project.get_service('data').containers(stopped=True)[0].is_running
self.assertEqual(len(project.get_service('console').containers()), 0)
def test_project_up_recreate_with_tmpfs_volume(self):
@@ -735,10 +768,10 @@ class ProjectTest(DockerClientTestCase):
"com.docker.compose.network.test": "9-29-045"
}
-@v2_only()
+@v2_1_only()
def test_up_with_network_static_addresses(self):
config_data = build_config(
-version=V2_0,
+version=V2_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -764,7 +797,8 @@ class ProjectTest(DockerClientTestCase):
{"subnet": "fe80::/64",
"gateway": "fe80::1001:1"}
]
-}
+},
+'enable_ipv6': True,
}
}
)
@@ -775,13 +809,8 @@ class ProjectTest(DockerClientTestCase):
)
project.up(detached=True)
network = self.client.networks(names=['static_test'])[0]
service_container = project.get_service('web').containers()[0]
assert network['Options'] == {
"com.docker.network.enable_ipv6": "true"
}
IPAMConfig = (service_container.inspect().get('NetworkSettings', {}).
get('Networks', {}).get('composetest_static_test', {}).
get('IPAMConfig', {}))
@@ -792,7 +821,7 @@ class ProjectTest(DockerClientTestCase):
def test_up_with_enable_ipv6(self):
self.require_api_version('1.23')
config_data = build_config(
-version=V2_0,
+version=V2_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -823,7 +852,7 @@ class ProjectTest(DockerClientTestCase):
config_data=config_data,
)
project.up(detached=True)
-network = self.client.networks(names=['static_test'])[0]
+network = [n for n in self.client.networks() if 'static_test' in n['Name']][0]
service_container = project.get_service('web').containers()[0]
assert network['EnableIPv6'] is True
@@ -975,7 +1004,7 @@ class ProjectTest(DockerClientTestCase):
network_name = 'network_with_label'
config_data = build_config(
-version=V2_0,
+version=V2_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -1024,8 +1053,8 @@ class ProjectTest(DockerClientTestCase):
project.up()
self.assertEqual(len(project.containers()), 1)
-volume_data = self.client.inspect_volume(full_vol_name)
-self.assertEqual(volume_data['Name'], full_vol_name)
+volume_data = self.get_volume_data(full_vol_name)
+assert volume_data['Name'].split('/')[-1] == full_vol_name
self.assertEqual(volume_data['Driver'], 'local')
@v2_1_only()
@@ -1035,7 +1064,7 @@ class ProjectTest(DockerClientTestCase):
volume_name = 'volume_with_label'
config_data = build_config(
-version=V2_0,
+version=V2_1,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -1060,10 +1089,12 @@ class ProjectTest(DockerClientTestCase):
volumes = [
v for v in self.client.volumes().get('Volumes', [])
-if v['Name'].startswith('composetest_')
+if v['Name'].split('/')[-1].startswith('composetest_')
]
-assert [v['Name'] for v in volumes] == ['composetest_{}'.format(volume_name)]
+assert set([v['Name'].split('/')[-1] for v in volumes]) == set(
+    ['composetest_{}'.format(volume_name)]
+)
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
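Several hunks in this file normalize names with split('/')[-1]: on a Swarm Classic cluster, inspecting a volume can return a node-prefixed name such as "node-1/composetest_data", while a single-engine daemon returns the bare name. A tiny standalone illustration of the pattern (not Compose code):

```python
def normalize_volume_name(name):
    # Swarm Classic prefixes volume names with the node, e.g.
    # "node-1/composetest_data"; keep only the final path component so
    # assertions pass against both single engines and clusters.
    return name.split('/')[-1]
```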
@@ -1073,7 +1104,7 @@ class ProjectTest(DockerClientTestCase):
base_file = config.ConfigFile(
'base.yml',
{
-'version': V2_0,
+'version': str(V2_0),
'services': {
'simple': {'image': 'busybox:latest', 'command': 'top'},
'another': {
@@ -1092,7 +1123,7 @@ class ProjectTest(DockerClientTestCase):
override_file = config.ConfigFile(
'override.yml',
{
-'version': V2_0,
+'version': str(V2_0),
'services': {
'another': {
'logging': {
@@ -1125,7 +1156,7 @@ class ProjectTest(DockerClientTestCase):
base_file = config.ConfigFile(
'base.yml',
{
-'version': V2_0,
+'version': str(V2_0),
'services': {
'simple': {
'image': 'busybox:latest',
@@ -1138,7 +1169,7 @@ class ProjectTest(DockerClientTestCase):
override_file = config.ConfigFile(
'override.yml',
{
-'version': V2_0,
+'version': str(V2_0),
'services': {
'simple': {
'ports': ['1234:1234']
@@ -1156,6 +1187,7 @@ class ProjectTest(DockerClientTestCase):
containers = project.containers()
self.assertEqual(len(containers), 1)
@v2_2_only()
def test_project_up_config_scale(self):
config_data = build_config(
version=V2_2,
@@ -1203,8 +1235,8 @@ class ProjectTest(DockerClientTestCase):
)
project.volumes.initialize()
-volume_data = self.client.inspect_volume(full_vol_name)
-assert volume_data['Name'] == full_vol_name
+volume_data = self.get_volume_data(full_vol_name)
+assert volume_data['Name'].split('/')[-1] == full_vol_name
assert volume_data['Driver'] == 'local'
@v2_only()
@@ -1227,8 +1259,8 @@ class ProjectTest(DockerClientTestCase):
)
project.up()
-volume_data = self.client.inspect_volume(full_vol_name)
-self.assertEqual(volume_data['Name'], full_vol_name)
+volume_data = self.get_volume_data(full_vol_name)
+assert volume_data['Name'].split('/')[-1] == full_vol_name
self.assertEqual(volume_data['Driver'], 'local')
@v3_only()
@@ -1285,10 +1317,11 @@ class ProjectTest(DockerClientTestCase):
name='composetest',
config_data=config_data, client=self.client
)
-with self.assertRaises(config.ConfigurationError):
+with self.assertRaises(APIError if is_cluster(self.client) else config.ConfigurationError):
project.volumes.initialize()
@v2_only()
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_initialize_volumes_updated_driver(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
@@ -1308,8 +1341,8 @@ class ProjectTest(DockerClientTestCase):
)
project.volumes.initialize()
-volume_data = self.client.inspect_volume(full_vol_name)
-self.assertEqual(volume_data['Name'], full_vol_name)
+volume_data = self.get_volume_data(full_vol_name)
+assert volume_data['Name'].split('/')[-1] == full_vol_name
self.assertEqual(volume_data['Driver'], 'local')
config_data = config_data._replace(
@@ -1346,8 +1379,8 @@ class ProjectTest(DockerClientTestCase):
)
project.volumes.initialize()
-volume_data = self.client.inspect_volume(full_vol_name)
-self.assertEqual(volume_data['Name'], full_vol_name)
+volume_data = self.get_volume_data(full_vol_name)
+assert volume_data['Name'].split('/')[-1] == full_vol_name
self.assertEqual(volume_data['Driver'], 'local')
config_data = config_data._replace(
@@ -1359,11 +1392,12 @@ class ProjectTest(DockerClientTestCase):
client=self.client
)
project.volumes.initialize()
-volume_data = self.client.inspect_volume(full_vol_name)
-self.assertEqual(volume_data['Name'], full_vol_name)
+volume_data = self.get_volume_data(full_vol_name)
+assert volume_data['Name'].split('/')[-1] == full_vol_name
self.assertEqual(volume_data['Driver'], 'local')
@v2_only()
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_initialize_volumes_external_volumes(self):
# Use composetest_ prefix so it gets garbage-collected in tearDown()
vol_name = 'composetest_{0:x}'.format(random.getrandbits(32))
@@ -1422,7 +1456,7 @@ class ProjectTest(DockerClientTestCase):
base_file = config.ConfigFile(
'base.yml',
{
-'version': V2_0,
+'version': str(V2_0),
'services': {
'simple': {
'image': 'busybox:latest',

@@ -16,9 +16,12 @@ from .. import mock
from .testcases import DockerClientTestCase
from .testcases import get_links
from .testcases import pull_busybox
from .testcases import SWARM_SKIP_CONTAINERS_ALL
from .testcases import SWARM_SKIP_CPU_SHARES
from compose import __version__
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_CONTAINER_NUMBER
from compose.const import LABEL_ONE_OFF
@@ -31,8 +34,12 @@ from compose.project import OneOffFilter
from compose.service import ConvergencePlan
from compose.service import ConvergenceStrategy
from compose.service import NetworkMode
from compose.service import PidMode
from compose.service import Service
from tests.integration.testcases import is_cluster
from tests.integration.testcases import no_cluster
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_2_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
@@ -98,6 +105,7 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
self.assertEqual('foodriver', container.get('HostConfig.VolumeDriver'))
@pytest.mark.skipif(SWARM_SKIP_CPU_SHARES, reason='Swarm --cpu-shares bug')
def test_create_container_with_cpu_shares(self):
service = self.create_service('db', cpu_shares=73)
container = service.create_container()
@@ -110,6 +118,31 @@ class ServiceTest(DockerClientTestCase):
container.start()
self.assertEqual(container.get('HostConfig.CpuQuota'), 40000)
@v2_2_only()
def test_create_container_with_cpu_count(self):
self.require_api_version('1.25')
service = self.create_service('db', cpu_count=2)
container = service.create_container()
service.start_container(container)
self.assertEqual(container.get('HostConfig.CpuCount'), 2)
@v2_2_only()
@pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='cpu_percent is not supported for Linux')
def test_create_container_with_cpu_percent(self):
self.require_api_version('1.25')
service = self.create_service('db', cpu_percent=12)
container = service.create_container()
service.start_container(container)
self.assertEqual(container.get('HostConfig.CpuPercent'), 12)
@v2_2_only()
def test_create_container_with_cpus(self):
self.require_api_version('1.25')
service = self.create_service('db', cpus=1)
container = service.create_container()
service.start_container(container)
self.assertEqual(container.get('HostConfig.NanoCpus'), 1000000000)
def test_create_container_with_shm_size(self):
self.require_api_version('1.22')
service = self.create_service('db', shm_size=67108864)
@@ -124,6 +157,7 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
assert container.get('HostConfig.Init') is True
@pytest.mark.xfail(True, reason='Option has been removed in Engine 17.06.0')
def test_create_container_with_init_path(self):
self.require_api_version('1.25')
docker_init_path = find_executable('docker-init')
@@ -175,6 +209,14 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt))
@pytest.mark.xfail(True, reason='Not supported on most drivers')
def test_create_container_with_storage_opt(self):
storage_opt = {'size': '1G'}
service = self.create_service('db', storage_opt=storage_opt)
container = service.create_container()
service.start_container(container)
self.assertEqual(container.get('HostConfig.StorageOpt'), storage_opt)
def test_create_container_with_mac_address(self):
service = self.create_service('db', mac_address='02:42:ac:11:65:43')
container = service.create_container()
@@ -222,6 +264,7 @@ class ServiceTest(DockerClientTestCase):
'busybox', 'true',
volumes={container_path: {}},
labels={'com.docker.compose.test_image': 'true'},
host_config={}
)
image = self.client.commit(tmp_container)['Id']
@@ -251,6 +294,7 @@ class ServiceTest(DockerClientTestCase):
image='busybox:latest',
command=["top"],
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
host_service = self.create_service(
'host',
@@ -294,9 +338,15 @@ class ServiceTest(DockerClientTestCase):
self.assertIn('FOO=2', new_container.get('Config.Env'))
self.assertEqual(new_container.name, 'composetest_db_1')
self.assertEqual(new_container.get_mount('/etc')['Source'], volume_path)
-self.assertIn(
-    'affinity:container==%s' % old_container.id,
-    new_container.get('Config.Env'))
+if not is_cluster(self.client):
+    assert (
+        'affinity:container==%s' % old_container.id in
+        new_container.get('Config.Env')
+    )
+else:
+    # In Swarm, the env marker is consumed and the container should be deployed
+    # on the same node.
+    assert old_container.get('Node.Name') == new_container.get('Node.Name')
self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
self.assertNotEqual(old_container.id, new_container.id)
@@ -323,8 +373,13 @@ class ServiceTest(DockerClientTestCase):
ConvergencePlan('recreate', [orig_container]))
assert new_container.get_mount('/etc')['Source'] == volume_path
-assert ('affinity:container==%s' % orig_container.id in
-        new_container.get('Config.Env'))
+if not is_cluster(self.client):
+    assert ('affinity:container==%s' % orig_container.id in
+            new_container.get('Config.Env'))
+else:
+    # In Swarm, the env marker is consumed and the container should be deployed
+    # on the same node.
+    assert orig_container.get('Node.Name') == new_container.get('Node.Name')
orig_container = new_container
@@ -437,18 +492,21 @@ class ServiceTest(DockerClientTestCase):
)
containers = service.execute_convergence_plan(ConvergencePlan('create', []), start=False)
-self.assertEqual(len(service.containers()), 0)
-self.assertEqual(len(service.containers(stopped=True)), 1)
+service_containers = service.containers(stopped=True)
+assert len(service_containers) == 1
+assert not service_containers[0].is_running
containers = service.execute_convergence_plan(
ConvergencePlan('recreate', containers),
start=False)
-self.assertEqual(len(service.containers()), 0)
-self.assertEqual(len(service.containers(stopped=True)), 1)
+service_containers = service.containers(stopped=True)
+assert len(service_containers) == 1
+assert not service_containers[0].is_running
service.execute_convergence_plan(ConvergencePlan('start', containers), start=False)
-self.assertEqual(len(service.containers()), 0)
-self.assertEqual(len(service.containers(stopped=True)), 1)
+service_containers = service.containers(stopped=True)
+assert len(service_containers) == 1
+assert not service_containers[0].is_running
def test_start_container_passes_through_options(self):
db = self.create_service('db')
@@ -460,6 +518,7 @@ class ServiceTest(DockerClientTestCase):
create_and_start_container(db)
self.assertEqual(db.containers()[0].environment['FOO'], 'BAR')
@no_cluster('No legacy links support in Swarm')
def test_start_container_creates_links(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, None)])
@@ -476,6 +535,7 @@ class ServiceTest(DockerClientTestCase):
'db'])
)
@no_cluster('No legacy links support in Swarm')
def test_start_container_creates_links_with_names(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, 'custom_link_name')])
@@ -492,6 +552,7 @@ class ServiceTest(DockerClientTestCase):
'custom_link_name'])
)
@no_cluster('No legacy links support in Swarm')
def test_start_container_with_external_links(self):
db = self.create_service('db')
web = self.create_service('web', external_links=['composetest_db_1',
@@ -510,6 +571,7 @@ class ServiceTest(DockerClientTestCase):
'db_3']),
)
@no_cluster('No legacy links support in Swarm')
def test_start_normal_container_does_not_create_links_to_its_own_service(self):
db = self.create_service('db')
@@ -519,6 +581,7 @@ class ServiceTest(DockerClientTestCase):
c = create_and_start_container(db)
self.assertEqual(set(get_links(c)), set([]))
@no_cluster('No legacy links support in Swarm')
def test_start_one_off_container_creates_links_to_its_own_service(self):
db = self.create_service('db')
@@ -545,7 +608,7 @@ class ServiceTest(DockerClientTestCase):
container = create_and_start_container(service)
container.wait()
self.assertIn(b'success', container.logs())
-self.assertEqual(len(self.client.images(name='composetest_test')), 1)
+assert len(self.client.images(name='composetest_test')) >= 1
def test_start_container_uses_tagged_image_if_it_exists(self):
self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test')
@@ -572,7 +635,10 @@ class ServiceTest(DockerClientTestCase):
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
-self.create_service('web', build={'context': base_dir}).build()
+service = self.create_service('web', build={'context': base_dir})
+service.build()
+self.addCleanup(self.client.remove_image, service.image_name)
assert self.client.inspect_image('composetest_web')
def test_build_non_ascii_filename(self):
@@ -585,7 +651,9 @@ class ServiceTest(DockerClientTestCase):
with open(os.path.join(base_dir.encode('utf8'), b'foo\xE2bar'), 'w') as f:
f.write("hello world\n")
self.create_service('web', build={'context': text_type(base_dir)}).build()
service = self.create_service('web', build={'context': text_type(base_dir)})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert self.client.inspect_image('composetest_web')
def test_build_with_image_name(self):
@@ -620,6 +688,7 @@ class ServiceTest(DockerClientTestCase):
build={'context': text_type(base_dir),
'args': {"build_version": "1"}})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
assert "build_version=1" in service.image()['ContainerConfig']['Cmd']
@@ -636,9 +705,55 @@ class ServiceTest(DockerClientTestCase):
build={'context': text_type(base_dir),
'args': {"build_version": "1"}})
service.build(build_args_override={'build_version': '2'})
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
assert "build_version=2" in service.image()['ContainerConfig']['Cmd']
def test_build_with_build_labels(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write('FROM busybox\n')
service = self.create_service('buildlabels', build={
'context': text_type(base_dir),
'labels': {'com.docker.compose.test': 'true'}
})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
assert service.image()['Config']['Labels']['com.docker.compose.test'] == 'true'
@no_cluster('Container networks not on Swarm')
def test_build_with_network(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write('FROM busybox\n')
f.write('RUN ping -c1 google.local\n')
net_container = self.client.create_container(
'busybox', 'top', host_config=self.client.create_host_config(
extra_hosts={'google.local': '8.8.8.8'}
), name='composetest_build_network'
)
self.addCleanup(self.client.remove_container, net_container, force=True)
self.client.start(net_container)
service = self.create_service('buildwithnet', build={
'context': text_type(base_dir),
'network': 'container:{}'.format(net_container['Id'])
})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
def test_start_container_stays_unprivileged(self):
service = self.create_service('web')
container = create_and_start_container(service).inspect()
@@ -677,20 +792,27 @@ class ServiceTest(DockerClientTestCase):
'0.0.0.0:9001:9000/udp',
])
container = create_and_start_container(service).inspect()
self.assertEqual(container['NetworkSettings']['Ports'], {
'8000/tcp': [
{
'HostIp': '127.0.0.1',
'HostPort': '8001',
},
],
'9000/udp': [
{
'HostIp': '0.0.0.0',
'HostPort': '9001',
},
],
})
assert container['NetworkSettings']['Ports']['8000/tcp'] == [{
'HostIp': '127.0.0.1',
'HostPort': '8001',
}]
assert container['NetworkSettings']['Ports']['9000/udp'][0]['HostPort'] == '9001'
if not is_cluster(self.client):
assert container['NetworkSettings']['Ports']['9000/udp'][0]['HostIp'] == '0.0.0.0'
# self.assertEqual(container['NetworkSettings']['Ports'], {
# '8000/tcp': [
# {
# 'HostIp': '127.0.0.1',
# 'HostPort': '8001',
# },
# ],
# '9000/udp': [
# {
# 'HostIp': '0.0.0.0',
# 'HostPort': '9001',
# },
# ],
# })
def test_create_with_image_id(self):
# Get image id for the current busybox:latest
@@ -718,6 +840,10 @@ class ServiceTest(DockerClientTestCase):
service.scale(0)
self.assertEqual(len(service.containers()), 0)
@pytest.mark.skipif(
SWARM_SKIP_CONTAINERS_ALL,
reason='Swarm /containers/json bug'
)
def test_scale_with_stopped_containers(self):
"""
Given there are some stopped containers and scale is called with a
@@ -880,12 +1006,12 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(container.get('HostConfig.NetworkMode'), 'host')
def test_pid_mode_none_defined(self):
service = self.create_service('web', pid=None)
service = self.create_service('web', pid_mode=None)
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), '')
def test_pid_mode_host(self):
service = self.create_service('web', pid='host')
service = self.create_service('web', pid_mode=PidMode('host'))
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), 'host')
@@ -1017,6 +1143,8 @@ class ServiceTest(DockerClientTestCase):
build={'context': base_dir,
'cache_from': ['build1']})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
@mock.patch.dict(os.environ)


@@ -251,7 +251,7 @@ class ServiceStateTest(DockerClientTestCase):
container = web.create_container()
# update the image
c = self.client.create_container(image, ['touch', '/hello.txt'])
c = self.client.create_container(image, ['touch', '/hello.txt'], host_config={})
self.client.commit(c, repository=repo, tag=tag)
self.client.remove_container(c)


@@ -4,8 +4,9 @@ from __future__ import unicode_literals
import functools
import os
import pytest
from docker.errors import APIError
from docker.utils import version_lt
from pytest import skip
from .. import unittest
from compose.cli.docker_client import docker_client
@@ -15,11 +16,19 @@ from compose.const import API_VERSIONS
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import COMPOSEFILE_V2_1 as V2_1
from compose.const import COMPOSEFILE_V2_2 as V2_2
from compose.const import COMPOSEFILE_V3_0 as V3_0
from compose.const import COMPOSEFILE_V3_2 as V3_2
from compose.const import COMPOSEFILE_V3_3 as V3_3
from compose.const import LABEL_PROJECT
from compose.progress_stream import stream_output
from compose.service import Service
SWARM_SKIP_CONTAINERS_ALL = os.environ.get('SWARM_SKIP_CONTAINERS_ALL', '0') != '0'
SWARM_SKIP_CPU_SHARES = os.environ.get('SWARM_SKIP_CPU_SHARES', '0') != '0'
SWARM_SKIP_RM_VOLUMES = os.environ.get('SWARM_SKIP_RM_VOLUMES', '0') != '0'
SWARM_ASSUME_MULTINODE = os.environ.get('SWARM_ASSUME_MULTINODE', '0') != '0'
def pull_busybox(client):
client.pull('busybox:latest', stream=False)
@@ -37,7 +46,7 @@ def get_links(container):
def engine_max_version():
if 'DOCKER_VERSION' not in os.environ:
return V3_2
return V3_3
version = os.environ['DOCKER_VERSION'].partition('-')[0]
if version_lt(version, '1.10'):
return V1
@@ -45,33 +54,32 @@ def engine_max_version():
return V2_0
if version_lt(version, '1.13'):
return V2_1
return V3_2
if version_lt(version, '17.06'):
return V3_2
return V3_3
def build_version_required_decorator(ignored_versions):
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
max_version = engine_max_version()
if max_version in ignored_versions:
skip("Engine version %s is too low" % max_version)
return
return f(self, *args, **kwargs)
return wrapper
return decorator
def min_version_skip(version):
return pytest.mark.skipif(
engine_max_version() < version,
reason="Engine version %s is too low" % version
)
def v2_only():
return build_version_required_decorator((V1,))
return min_version_skip(V2_0)
def v2_1_only():
return build_version_required_decorator((V1, V2_0))
return min_version_skip(V2_1)
def v2_2_only():
return min_version_skip(V2_2)
def v3_only():
return build_version_required_decorator((V1, V2_0, V2_1))
return min_version_skip(V3_0)
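
# Note: the refactor above replaces the wrapper-based version decorators with
# pytest.mark.skipif markers built by a small factory. A minimal sketch of that
# factory pattern, where ENGINE_VERSION is an assumed stand-in for
# engine_max_version():

```python
import pytest

ENGINE_VERSION = 3  # assumed stand-in for engine_max_version()

def min_version_skip(version):
    # Build a skipif marker instead of hand-wrapping each test function.
    return pytest.mark.skipif(
        ENGINE_VERSION < version,
        reason="Engine version %s is too low" % version,
    )

marker = min_version_skip(5)
assert marker.args == (5 > ENGINE_VERSION,)   # condition evaluated eagerly
assert "too low" in marker.kwargs['reason']
```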
class DockerClientTestCase(unittest.TestCase):
@@ -92,7 +100,7 @@ class DockerClientTestCase(unittest.TestCase):
for i in self.client.images(
filters={'label': 'com.docker.compose.test_image'}):
self.client.remove_image(i)
self.client.remove_image(i, force=True)
volumes = self.client.volumes().get('Volumes') or []
for v in volumes:
@@ -127,4 +135,44 @@ class DockerClientTestCase(unittest.TestCase):
def require_api_version(self, minimum):
api_version = self.client.version()['ApiVersion']
if version_lt(api_version, minimum):
skip("API version is too low ({} < {})".format(api_version, minimum))
pytest.skip("API version is too low ({} < {})".format(api_version, minimum))
def get_volume_data(self, volume_name):
if not is_cluster(self.client):
return self.client.inspect_volume(volume_name)
volumes = self.client.volumes(filters={'name': volume_name})['Volumes']
assert len(volumes) > 0
return self.client.inspect_volume(volumes[0]['Name'])
def is_cluster(client):
if SWARM_ASSUME_MULTINODE:
return True
def get_nodes_number():
try:
return len(client.nodes())
except APIError:
# If the Engine is not part of a Swarm, the SDK will raise
# an APIError
return 0
if not hasattr(is_cluster, 'nodes') or is_cluster.nodes is None:
# Only make the API call if the value hasn't been cached yet
is_cluster.nodes = get_nodes_number()
return is_cluster.nodes > 1
def no_cluster(reason):
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
if is_cluster(self.client):
pytest.skip("Test will not be run in cluster mode: %s" % reason)
return
return f(self, *args, **kwargs)
return wrapper
return decorator
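
# Note: the is_cluster helper above memoizes the node count on the function
# object itself, so the Swarm API is queried at most once per test run. A
# standalone sketch of that caching pattern; node_count() is a hypothetical
# stand-in for len(client.nodes()):

```python
calls = []

def node_count():
    calls.append(1)  # track how often the expensive lookup runs
    return 3         # pretend the Swarm has three nodes

def is_cluster():
    if getattr(is_cluster, 'nodes', None) is None:
        is_cluster.nodes = node_count()  # cache on the function object
    return is_cluster.nodes > 1

assert is_cluster() is True
assert is_cluster() is True
assert len(calls) == 1  # the lookup only ran once
```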


@@ -4,6 +4,7 @@ from __future__ import unicode_literals
from docker.errors import DockerException
from .testcases import DockerClientTestCase
from .testcases import no_cluster
from compose.const import LABEL_PROJECT
from compose.const import LABEL_VOLUME
from compose.volume import Volume
@@ -35,26 +36,28 @@ class VolumeTest(DockerClientTestCase):
def test_create_volume(self):
vol = self.create_volume('volume01')
vol.create()
info = self.client.inspect_volume(vol.full_name)
assert info['Name'] == vol.full_name
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.full_name
def test_recreate_existing_volume(self):
vol = self.create_volume('volume01')
vol.create()
info = self.client.inspect_volume(vol.full_name)
assert info['Name'] == vol.full_name
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.full_name
vol.create()
info = self.client.inspect_volume(vol.full_name)
assert info['Name'] == vol.full_name
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.full_name
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_inspect_volume(self):
vol = self.create_volume('volume01')
vol.create()
info = vol.inspect()
assert info['Name'] == vol.full_name
@no_cluster('remove volume by name defect on Swarm Classic')
def test_remove_volume(self):
vol = Volume(self.client, 'composetest', 'volume01')
vol.create()
@@ -62,6 +65,7 @@ class VolumeTest(DockerClientTestCase):
volumes = self.client.volumes()['Volumes']
assert len([v for v in volumes if v['Name'] == vol.full_name]) == 0
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_external_volume(self):
vol = self.create_volume('composetest_volume_ext', external=True)
assert vol.external is True
@@ -70,6 +74,7 @@ class VolumeTest(DockerClientTestCase):
info = vol.inspect()
assert info['Name'] == vol.name
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_external_aliased_volume(self):
alias_name = 'composetest_alias01'
vol = self.create_volume('volume01', external=alias_name)
@@ -79,24 +84,28 @@ class VolumeTest(DockerClientTestCase):
info = vol.inspect()
assert info['Name'] == alias_name
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_exists(self):
vol = self.create_volume('volume01')
assert vol.exists() is False
vol.create()
assert vol.exists() is True
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_exists_external(self):
vol = self.create_volume('volume01', external=True)
assert vol.exists() is False
vol.create()
assert vol.exists() is True
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_exists_external_aliased(self):
vol = self.create_volume('volume01', external='composetest_alias01')
assert vol.exists() is False
vol.create()
assert vol.exists() is True
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_volume_default_labels(self):
vol = self.create_volume('volume01')
vol.create()


@@ -9,6 +9,7 @@ from compose import bundle
from compose import service
from compose.cli.errors import UserError
from compose.config.config import Config
from compose.const import COMPOSEFILE_V2_0 as V2_0
@pytest.fixture
@@ -74,11 +75,13 @@ def test_to_bundle():
{'name': 'b', 'build': './b'},
]
config = Config(
version=2,
version=V2_0,
services=services,
volumes={'special': {}},
networks={'extra': {}},
secrets={})
secrets={},
configs={}
)
with mock.patch('compose.bundle.log.warn', autospec=True) as mock_log:
output = bundle.to_bundle(config, image_digests)


@@ -27,9 +27,11 @@ from compose.config.types import VolumeSpec
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import COMPOSEFILE_V2_1 as V2_1
from compose.const import COMPOSEFILE_V2_2 as V2_2
from compose.const import COMPOSEFILE_V3_0 as V3_0
from compose.const import COMPOSEFILE_V3_1 as V3_1
from compose.const import COMPOSEFILE_V3_2 as V3_2
from compose.const import COMPOSEFILE_V3_3 as V3_3
from compose.const import IS_WINDOWS_PLATFORM
from compose.utils import nanoseconds_from_time_seconds
from tests import mock
@@ -174,6 +176,9 @@ class ConfigTest(unittest.TestCase):
cfg = config.load(build_config_details({'version': '2.1'}))
assert cfg.version == V2_1
cfg = config.load(build_config_details({'version': '2.2'}))
assert cfg.version == V2_2
for version in ['3', '3.0']:
cfg = config.load(build_config_details({'version': version}))
assert cfg.version == V3_0
@@ -373,7 +378,7 @@ class ConfigTest(unittest.TestCase):
base_file = config.ConfigFile(
'base.yaml',
{
'version': V2_1,
'version': str(V2_1),
'services': {
'web': {
'image': 'example/web',
@@ -821,6 +826,33 @@ class ConfigTest(unittest.TestCase):
assert service['build']['args']['opt1'] == '42'
assert service['build']['args']['opt2'] == 'foobar'
def test_load_with_build_labels(self):
service = config.load(
build_config_details(
{
'version': str(V3_3),
'services': {
'web': {
'build': {
'context': '.',
'dockerfile': 'Dockerfile-alt',
'labels': {
'label1': 42,
'label2': 'foobar'
}
}
}
}
},
'tests/fixtures/extends',
'filename.yml'
)
).services[0]
assert 'labels' in service['build']
assert 'label1' in service['build']['labels']
assert service['build']['labels']['label1'] == 42
assert service['build']['labels']['label2'] == 'foobar'
def test_build_args_allow_empty_properties(self):
service = config.load(
build_config_details(
@@ -1491,7 +1523,7 @@ class ConfigTest(unittest.TestCase):
def test_isolation_option(self):
actual = config.load(build_config_details({
'version': V2_1,
'version': str(V2_1),
'services': {
'web': {
'image': 'win10',
@@ -1583,6 +1615,22 @@ class ConfigTest(unittest.TestCase):
'ports': types.ServicePort.parse('5432')
}
def test_merge_service_dicts_ports_sorting(self):
base = {
'ports': [5432]
}
override = {
'image': 'alpine:edge',
'ports': ['5432/udp']
}
actual = config.merge_service_dicts_from_files(
base,
override,
DEFAULT_VERSION)
assert len(actual['ports']) == 2
assert types.ServicePort.parse('5432')[0] in actual['ports']
assert types.ServicePort.parse('5432/udp')[0] in actual['ports']
def test_merge_service_dicts_heterogeneous_volumes(self):
base = {
'volumes': ['/a:/b', '/x:/z'],
@@ -1833,7 +1881,7 @@ class ConfigTest(unittest.TestCase):
{
'target': '1245',
'published': '1245',
'protocol': 'tcp',
'protocol': 'udp',
}
]
}
@@ -1950,6 +1998,38 @@ class ConfigTest(unittest.TestCase):
actual = config.merge_service_dicts(base, override, V3_1)
assert actual['secrets'] == override['secrets']
def test_merge_different_configs(self):
base = {
'image': 'busybox',
'configs': [
{'source': 'src.txt'}
]
}
override = {'configs': ['other-src.txt']}
actual = config.merge_service_dicts(base, override, V3_3)
assert secret_sort(actual['configs']) == secret_sort([
{'source': 'src.txt'},
{'source': 'other-src.txt'}
])
def test_merge_configs_override(self):
base = {
'image': 'busybox',
'configs': ['src.txt'],
}
override = {
'configs': [
{
'source': 'src.txt',
'target': 'data.txt',
'mode': 0o400
}
]
}
actual = config.merge_service_dicts(base, override, V3_3)
assert actual['configs'] == override['configs']
def test_merge_deploy(self):
base = {
'image': 'busybox',
@@ -2001,6 +2081,36 @@ class ConfigTest(unittest.TestCase):
}
}
def test_merge_credential_spec(self):
base = {
'image': 'bb',
'credential_spec': {
'file': '/hello-world',
}
}
override = {
'credential_spec': {
'registry': 'revolution.com',
}
}
actual = config.merge_service_dicts(base, override, V3_3)
assert actual['credential_spec'] == override['credential_spec']
def test_merge_scale(self):
base = {
'image': 'bar',
'scale': 2,
}
override = {
'scale': 4,
}
actual = config.merge_service_dicts(base, override, V2_2)
assert actual == {'image': 'bar', 'scale': 4}
def test_external_volume_config(self):
config_details = build_config_details({
'version': '2',
@@ -2165,6 +2275,91 @@ class ConfigTest(unittest.TestCase):
]
assert service_sort(service_dicts) == service_sort(expected)
def test_load_configs(self):
base_file = config.ConfigFile(
'base.yaml',
{
'version': '3.3',
'services': {
'web': {
'image': 'example/web',
'configs': [
'one',
{
'source': 'source',
'target': 'target',
'uid': '100',
'gid': '200',
'mode': 0o777,
},
],
},
},
'configs': {
'one': {'file': 'secret.txt'},
},
})
details = config.ConfigDetails('.', [base_file])
service_dicts = config.load(details).services
expected = [
{
'name': 'web',
'image': 'example/web',
'configs': [
types.ServiceConfig('one', None, None, None, None),
types.ServiceConfig('source', 'target', '100', '200', 0o777),
],
},
]
assert service_sort(service_dicts) == service_sort(expected)
def test_load_configs_multi_file(self):
base_file = config.ConfigFile(
'base.yaml',
{
'version': '3.3',
'services': {
'web': {
'image': 'example/web',
'configs': ['one'],
},
},
'configs': {
'one': {'file': 'secret.txt'},
},
})
override_file = config.ConfigFile(
'base.yaml',
{
'version': '3.3',
'services': {
'web': {
'configs': [
{
'source': 'source',
'target': 'target',
'uid': '100',
'gid': '200',
'mode': 0o777,
},
],
},
},
})
details = config.ConfigDetails('.', [base_file, override_file])
service_dicts = config.load(details).services
expected = [
{
'name': 'web',
'image': 'example/web',
'configs': [
types.ServiceConfig('one', None, None, None, None),
types.ServiceConfig('source', 'target', '100', '200', 0o777),
],
},
]
assert service_sort(service_dicts) == service_sort(expected)
class NetworkModeTest(unittest.TestCase):
@@ -2484,6 +2679,24 @@ class InterpolationTest(unittest.TestCase):
}
}
@mock.patch.dict(os.environ)
def test_interpolation_configs_section(self):
os.environ['FOO'] = 'baz.bar'
config_dict = config.load(build_config_details({
'version': '3.3',
'configs': {
'configdata': {
'external': {'name': '$FOO'}
}
}
}))
assert config_dict.configs == {
'configdata': {
'external': {'name': 'baz.bar'},
'external_name': 'baz.bar'
}
}
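
# Note: interpolation tests like the one above substitute $FOO from the
# environment. The core mechanism can be sketched with the stdlib
# string.Template, which compose's TemplateWithDefaults (imported above)
# builds on:

```python
from string import Template

env = {'FOO': 'baz.bar'}
# string.Template performs the basic $VAR / ${VAR} substitution at the
# heart of compose's interpolation step.
assert Template('$FOO').substitute(env) == 'baz.bar'
assert Template('${FOO}-suffix').substitute(env) == 'baz.bar-suffix'
```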
class VolumeConfigTest(unittest.TestCase):
@@ -2815,6 +3028,74 @@ class MergeLabelsTest(unittest.TestCase):
assert service_dict['labels'] == {'foo': '1', 'bar': ''}
class MergeBuildTest(unittest.TestCase):
def test_full(self):
base = {
'context': '.',
'dockerfile': 'Dockerfile',
'args': {
'x': '1',
'y': '2',
},
'cache_from': ['ubuntu'],
'labels': ['com.docker.compose.test=true']
}
override = {
'context': './prod',
'dockerfile': 'Dockerfile.prod',
'args': ['x=12'],
'cache_from': ['debian'],
'labels': {
'com.docker.compose.test': 'false',
'com.docker.compose.prod': 'true',
}
}
result = config.merge_build(None, {'build': base}, {'build': override})
assert result['context'] == override['context']
assert result['dockerfile'] == override['dockerfile']
assert result['args'] == {'x': '12', 'y': '2'}
assert set(result['cache_from']) == set(['ubuntu', 'debian'])
assert result['labels'] == override['labels']
def test_empty_override(self):
base = {
'context': '.',
'dockerfile': 'Dockerfile',
'args': {
'x': '1',
'y': '2',
},
'cache_from': ['ubuntu'],
'labels': {
'com.docker.compose.test': 'true'
}
}
override = {}
result = config.merge_build(None, {'build': base}, {'build': override})
assert result == base
def test_empty_base(self):
base = {}
override = {
'context': './prod',
'dockerfile': 'Dockerfile.prod',
'args': {'x': '12'},
'cache_from': ['debian'],
'labels': {
'com.docker.compose.test': 'false',
'com.docker.compose.prod': 'true',
}
}
result = config.merge_build(None, {'build': base}, {'build': override})
assert result == override
class MemoryOptionsTest(unittest.TestCase):
def test_validation_fails_with_just_memswap_limit(self):
@@ -3841,13 +4122,62 @@ class SerializeTest(unittest.TestCase):
assert serialized_config['secrets']['two'] == secrets_dict['two']
def test_serialize_ports(self):
config_dict = config.Config(version='2.0', services=[
config_dict = config.Config(version=V2_0, services=[
{
'ports': [types.ServicePort('80', '8080', None, None, None)],
'image': 'alpine',
'name': 'web'
}
], volumes={}, networks={}, secrets={})
], volumes={}, networks={}, secrets={}, configs={})
serialized_config = yaml.load(serialize_config(config_dict))
assert '8080:80/tcp' in serialized_config['services']['web']['ports']
def test_serialize_configs(self):
service_dict = {
'image': 'example/web',
'configs': [
{'source': 'one'},
{
'source': 'source',
'target': 'target',
'uid': '100',
'gid': '200',
'mode': 0o777,
}
]
}
configs_dict = {
'one': {'file': '/one.txt'},
'source': {'file': '/source.pem'},
'two': {'external': True},
}
config_dict = config.load(build_config_details({
'version': '3.3',
'services': {'web': service_dict},
'configs': configs_dict
}))
serialized_config = yaml.load(serialize_config(config_dict))
serialized_service = serialized_config['services']['web']
assert secret_sort(serialized_service['configs']) == secret_sort(service_dict['configs'])
assert 'configs' in serialized_config
assert serialized_config['configs']['two'] == configs_dict['two']
def test_serialize_bool_string(self):
cfg = {
'version': '2.2',
'services': {
'web': {
'image': 'example/web',
'command': 'true',
'environment': {'FOO': 'Y', 'BAR': 'on'}
}
}
}
config_dict = config.load(build_config_details(cfg))
serialized_config = serialize_config(config_dict)
assert 'command: "true"\n' in serialized_config
assert 'FOO: "Y"\n' in serialized_config
assert 'BAR: "on"\n' in serialized_config


@@ -8,6 +8,8 @@ from compose.config.interpolation import interpolate_environment_variables
from compose.config.interpolation import Interpolator
from compose.config.interpolation import InvalidInterpolation
from compose.config.interpolation import TemplateWithDefaults
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import COMPOSEFILE_V3_1 as V3_1
@pytest.fixture
@@ -50,7 +52,7 @@ def test_interpolate_environment_variables_in_services(mock_env):
}
}
}
value = interpolate_environment_variables("2.0", services, 'service', mock_env)
value = interpolate_environment_variables(V2_0, services, 'service', mock_env)
assert value == expected
@@ -75,7 +77,7 @@ def test_interpolate_environment_variables_in_volumes(mock_env):
},
'other': {},
}
value = interpolate_environment_variables("2.0", volumes, 'volume', mock_env)
value = interpolate_environment_variables(V2_0, volumes, 'volume', mock_env)
assert value == expected
@@ -100,7 +102,7 @@ def test_interpolate_environment_variables_in_secrets(mock_env):
},
'other': {},
}
value = interpolate_environment_variables("3.1", secrets, 'volume', mock_env)
value = interpolate_environment_variables(V3_1, secrets, 'volume', mock_env)
assert value == expected


@@ -57,15 +57,15 @@ class TestServicePort(object):
def test_parse_simple_target_port(self):
ports = ServicePort.parse(8000)
assert len(ports) == 1
assert ports[0].target == '8000'
assert ports[0].target == 8000
def test_parse_complete_port_definition(self):
port_def = '1.1.1.1:3000:3000/udp'
ports = ServicePort.parse(port_def)
assert len(ports) == 1
assert ports[0].repr() == {
'target': '3000',
'published': '3000',
'target': 3000,
'published': 3000,
'external_ip': '1.1.1.1',
'protocol': 'udp',
}
@@ -77,7 +77,7 @@ class TestServicePort(object):
assert len(ports) == 1
assert ports[0].legacy_repr() == port_def + '/tcp'
assert ports[0].repr() == {
'target': '3000',
'target': 3000,
'external_ip': '1.1.1.1',
}
@@ -86,14 +86,19 @@ class TestServicePort(object):
assert len(ports) == 2
reprs = [p.repr() for p in ports]
assert {
'target': '4000',
'published': '25000'
'target': 4000,
'published': 25000
} in reprs
assert {
'target': '4001',
'published': '25001'
'target': 4001,
'published': 25001
} in reprs
def test_parse_invalid_port(self):
port_def = '4000p'
with pytest.raises(ConfigurationError):
ServicePort.parse(port_def)
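
# Note: the updated assertions above reflect that parsed port fields are now
# integers rather than strings. A hypothetical, simplified parser illustrating
# the 'ip:published:target/protocol' split; compose's real ServicePort.parse
# handles many more forms (ranges, mappings, etc.):

```python
def parse_port(defn):
    # Simplified sketch, not compose's actual implementation.
    spec, _, proto = str(defn).partition('/')
    parts = spec.split(':')
    external_ip = parts[0] if len(parts) == 3 else None
    target = int(parts[-1])
    published = int(parts[-2]) if len(parts) >= 2 else None
    return {'target': target, 'published': published,
            'external_ip': external_ip, 'protocol': proto or 'tcp'}

assert parse_port('1.1.1.1:3000:3000/udp') == {
    'target': 3000, 'published': 3000,
    'external_ip': '1.1.1.1', 'protocol': 'udp',
}
assert parse_port(8000)['target'] == 8000  # int-typed, as the tests assert
```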
class TestVolumeSpec(object):


@@ -3,6 +3,7 @@ from __future__ import unicode_literals
import pytest
from .. import mock
from .. import unittest
from compose.network import check_remote_network_config
from compose.network import Network
@@ -66,7 +67,8 @@ class NetworkTest(unittest.TestCase):
options = {'com.docker.network.driver.foo': 'bar'}
remote_options = {
'com.docker.network.driver.overlay.vxlanid_list': '257',
'com.docker.network.driver.foo': 'bar'
'com.docker.network.driver.foo': 'bar',
'com.docker.network.windowsshim.hnsid': 'aac3fd4887daaec1e3b',
}
net = Network(
None, 'compose_test', 'net1', 'overlay',
@@ -151,7 +153,9 @@ class NetworkTest(unittest.TestCase):
'com.project.touhou.character': 'marisa.kirisame',
}
}
with pytest.raises(NetworkConfigChangedError) as e:
with mock.patch('compose.network.log') as mock_log:
check_remote_network_config(remote, net)
assert 'label "com.project.touhou.character" has changed' in str(e.value)
mock_log.warn.assert_called_once_with(mock.ANY)
_, args, kwargs = mock_log.warn.mock_calls[0]
assert 'label "com.project.touhou.character" has changed' in args[0]


@@ -115,3 +115,18 @@ def test_parallel_execute_with_upstream_errors():
assert (data_volume, None, APIError) in events
assert (db, None, UpstreamError) in events
assert (web, None, UpstreamError) in events
def test_parallel_execute_alignment(capsys):
results, errors = parallel_execute(
objects=["short", "a very long name"],
func=lambda x: x,
get_name=six.text_type,
msg="Aligning",
)
assert errors == {}
_, err = capsys.readouterr()
a, b = err.split('\n')[:2]
assert a.index('...') == b.index('...')
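
# Note: the alignment test above verifies that parallel_execute pads object
# names so the '...' status columns line up. A minimal sketch of that padding
# idea; the exact line format here is an assumption, not compose's real writer:

```python
def format_status_lines(names, status='...'):
    # Pad each name to the longest name's width so columns align.
    width = max(len(name) for name in names)
    return ['{} {}'.format(name.ljust(width), status) for name in names]

lines = format_status_lines(['short', 'a very long name'])
assert lines[0].index('...') == lines[1].index('...')
```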


@@ -10,6 +10,8 @@ from .. import mock
from .. import unittest
from compose.config.config import Config
from compose.config.types import VolumeFromSpec
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.project import Project
@@ -21,9 +23,9 @@ class ProjectTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
def test_from_config(self):
def test_from_config_v1(self):
config = Config(
version=None,
version=V1,
services=[
{
'name': 'web',
@@ -37,6 +39,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
)
project = Project.from_config(
name='composetest',
@@ -52,7 +55,7 @@ class ProjectTest(unittest.TestCase):
def test_from_config_v2(self):
config = Config(
version=2,
version=V2_0,
services=[
{
'name': 'web',
@@ -66,6 +69,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
)
project = Project.from_config('composetest', config, None)
self.assertEqual(len(project.services), 2)
@@ -164,7 +168,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=None,
version=V2_0,
services=[{
'name': 'test',
'image': 'busybox:latest',
@@ -173,6 +177,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
assert project.get_service('test')._get_volumes_from() == [container_id + ":rw"]
@@ -191,7 +196,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=None,
version=V2_0,
services=[
{
'name': 'vol',
@@ -206,6 +211,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
assert project.get_service('test')._get_volumes_from() == [container_name + ":rw"]
@@ -217,7 +223,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=None,
config_data=Config(
version=None,
version=V2_0,
services=[
{
'name': 'vol',
@@ -232,6 +238,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
with mock.patch.object(Service, 'containers') as mock_return:
@@ -356,7 +363,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=None,
version=V1,
services=[
{
'name': 'test',
@@ -366,6 +373,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
service = project.get_service('test')
@@ -380,7 +388,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=None,
version=V2_0,
services=[
{
'name': 'test',
@@ -391,6 +399,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
service = project.get_service('test')
@@ -410,7 +419,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=None,
version=V2_0,
services=[
{
'name': 'aaa',
@@ -425,6 +434,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
@@ -436,7 +446,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=2,
version=V2_0,
services=[
{
'name': 'foo',
@@ -446,6 +456,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
@@ -456,7 +467,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=2,
version=V2_0,
services=[
{
'name': 'foo',
@@ -467,6 +478,7 @@ class ProjectTest(unittest.TestCase):
networks={'custom': {}},
volumes=None,
secrets=None,
configs=None,
),
)
@@ -490,7 +502,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=None,
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -498,6 +510,7 @@ class ProjectTest(unittest.TestCase):
networks=None,
volumes=None,
secrets=None,
configs=None,
),
)
self.assertEqual([c.id for c in project.containers()], ['1'])
@@ -507,7 +520,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version='2',
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -515,6 +528,7 @@ class ProjectTest(unittest.TestCase):
networks={'default': {}},
volumes={'data': {}},
secrets=None,
configs=None,
),
)
self.mock_client.remove_network.side_effect = NotFound(None, None, 'oops')


@@ -471,7 +471,9 @@ class ServiceTest(unittest.TestCase):
nocache=False,
rm=True,
buildargs={},
labels=None,
cache_from=None,
network_mode=None,
)
def test_ensure_image_exists_no_build(self):
@@ -508,7 +510,9 @@ class ServiceTest(unittest.TestCase):
nocache=False,
rm=True,
buildargs={},
labels=None,
cache_from=None,
network_mode=None,
)
def test_build_does_not_pull(self):


@@ -9,6 +9,8 @@ passenv =
DOCKER_CERT_PATH
DOCKER_TLS_VERIFY
DOCKER_VERSION
SWARM_SKIP_*
SWARM_ASSUME_MULTINODE
setenv =
HOME=/tmp
deps =