Compare commits


79 Commits
1.5.0 ... 1.5.2

Author SHA1 Message Date
Daniel Nephin
7240ff35ee Bump 1.5.2
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-03 11:18:32 -08:00
Daniel Nephin
aaf66e3485 FAQ document for Compose
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-03 11:05:06 -08:00
Aanand Prasad
96f4a42a35 Validate the 'expose' option
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-12-03 11:05:05 -08:00
Aanand Prasad
e6fbca42a1 Split out ports validation tests into type, uniqueness, format
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-12-03 11:05:05 -08:00
Aanand Prasad
527bf3b023 Fix ports validation message
- The `raises` kwarg to the `cls_check` decorator was being used
  incorrectly (it should be an exception class, not an object).

- We need to check for `error.cause` and get the message out of the
  exception object.

NB: The only case where client-side validation of `ports` currently fails is
when ranges don't match in length - no further validation is performed
client-side.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-12-03 11:05:05 -08:00
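For illustration, a minimal sketch of the one client-side check mentioned above, where a mapping whose host and container port ranges differ in length is rejected; the helper names are hypothetical, not Compose's actual API:

```python
def range_length(spec):
    # "8000-8005" spans 6 ports; a bare "8000" spans 1
    start, _, end = spec.partition('-')
    return int(end or start) - int(start) + 1

def check_port_ranges(mapping):
    # hypothetical stand-in for the real client-side check
    host, _, container = mapping.rpartition(':')
    if host and '-' in container and range_length(host) != range_length(container):
        raise ValueError("Port ranges don't match in length: %s" % mapping)

check_port_ranges("8000-8005:9000-9005")    # ok: both ranges span 6 ports
# check_port_ranges("8000-8005:9000-9010")  # raises ValueError
```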
Aanand Prasad
ab36c9c6cd Refactor ports section of fields schema
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-12-03 11:05:05 -08:00
Aanand Prasad
e67419065a Fix ports validation test
We were essentially only testing that *at least one* of the invalid
values fails the validation check, rather than that *all* of them fail.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-12-03 11:05:05 -08:00
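An illustrative pytest pattern for this kind of fix: parametrizing over the invalid values asserts that each one fails on its own, rather than a single assert that can pass once any one value fails. `validate_port` is a hypothetical stand-in for the validator under test.

```python
import pytest

def validate_port(value):                 # hypothetical validator under test
    if not str(value).isdigit():
        raise ValueError(value)

@pytest.mark.parametrize('value', ['invalid', 'host:port', '1.2.3'])
def test_each_invalid_value_is_rejected(value):
    with pytest.raises(ValueError):
        validate_port(value)
```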
Daniel Nephin
69e956ce8b Add integration test and docs for build with a git url.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-03 11:05:05 -08:00
Jonas Eckerström
0dbd99bad2 Added support for url build paths
Signed-off-by: Jonas Eckerström <jonaseck@gmail.com>
2015-12-03 11:05:05 -08:00
Daniel Nephin
fa975d7fbe Properly resolve environment from all sources.
Split env resolving into two phases. The first phase is to expand the paths
of env_files, which is done before merging extends. Once all files are merged
together, the final phase is to read the env_files and use them as the base
for environment variables.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-03 11:05:01 -08:00
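A rough sketch of the two phases described above, with illustrative helper names (not the actual functions from this change):

```python
import os

def expand_env_file_paths(service_dict, working_dir):
    """Phase 1, before `extends` is merged: make env_file paths absolute."""
    if 'env_file' in service_dict:
        service_dict['env_file'] = [
            os.path.join(working_dir, path)
            for path in service_dict['env_file']]
    return service_dict

def resolve_environment(service_dict, read_env_file):
    """Phase 2, after all files are merged: env_files form the base,
    and explicit `environment` entries override them."""
    env = {}
    for path in service_dict.get('env_file', []):
        env.update(read_env_file(path))
    env.update(service_dict.get('environment', {}))
    return env
```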
Daniel Nephin
81f0e72bd2 Move service sorting to config package.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
da27f8e7e2 Remove unnecessary intermediate variables in get_container_host_config.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
8572d50903 Move volume parsing to config.types module
This removes the last of the old service.ConfigError

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
5d39813e1b Fixes #2008 - re-use list_or_dict schema for all the types
At the same time, moves extra_hosts validation to the config module.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
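The options covered by `list_or_dict` accept either a list of `KEY=value` strings or a mapping; a hedged sketch of the normalization, with an illustrative helper name:

```python
def to_mapping(list_or_dict):
    """Accept ['KEY=value', ...] or {'KEY': 'value', ...} interchangeably."""
    if isinstance(list_or_dict, dict):
        return dict(list_or_dict)
    return dict(item.split('=', 1) for item in list_or_dict)

assert to_mapping(['A=1', 'B=2']) == to_mapping({'A': '1', 'B': '2'})
```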
Daniel Nephin
b19315b57e Move restart spec to the config.types module.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
e549875e89 Move parsing of volumes_from to the last step of config parsing.
Includes creating a new compose.config.types module for all the domain objects.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
7e21b05f05 Remove project name validation
The project name is already normalized to a valid name before a service is created.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
bea2072b95 Add the git sha to version output
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
3b6cc7a7bb Add missing assert and autospec.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
a264470cc0 Make sure we always have the latest busybox image, so that build --pull tests don't flake.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
844e2c3d26 Fix use case link in readme.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
210a14cf28 Add note about required pip version.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Brandon Burton
9ce4024951 Fixing matrix include so os: linux goes to trusty
Signed-off-by: Brandon Burton <brandon@inatree.org>
2015-12-02 17:31:48 -08:00
Daniel Nephin
8fb6fb7b19 Fix env_file and environment when used with extends.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
83760d0e9e Handle both SIGINT and SIGTERM for docker-compose run.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
be5b7b6f0e Handle both SIGINT and SIGTERM for docker-compose up.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
e5a02d3052 Fix extra warnings on masked volumes.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Stéphane Seguin
3a395892fc Fix restart with stopped containers. Fixes #1814
Signed-off-by: Stéphane Seguin <stephseguin93@gmail.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
09f6a876cf Fixes #2398 - the build progress stream can contain empty json objects.
Previously these empty objects would hit a bug in object splitting, causing a crash.
With this fix the empty objects are returned properly.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
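A small sketch of the behaviour being fixed, assuming a splitter that scans a buffer for complete JSON objects: an empty `{}` in the stream is yielded like any other object instead of derailing the split.

```python
import json

def split_json_objects(buffer):
    decoder = json.JSONDecoder()
    rest = buffer.lstrip()
    while rest:
        obj, index = decoder.raw_decode(rest)
        yield obj                # empty objects are returned, not dropped
        rest = rest[index:].lstrip()

print(list(split_json_objects('{} {"stream": "Step 1/3"}')))
# [{}, {'stream': 'Step 1/3'}]
```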
Stefan Scherer
0117148a36 Use uname to build target name for different platforms
Signed-off-by: Stefan Scherer <scherer_stefan@icloud.com>
2015-12-02 17:31:48 -08:00
Simon van der Veldt
8f70c8cdeb run.sh script: Also pass DOCKER_TLS_VERIFY and DOCKER_CERT_PATH env vars to compose container
Signed-off-by: Simon van der Veldt <simon.vanderveldt@gmail.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
16a74f3797 Fix texttable dep. 0.8.2 was removed from pypi.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-12-02 17:31:48 -08:00
Viranch Mehta
c42918ec7c Fix specifies_host_port() to handle port binding with host IP but no host port
Signed-off-by: Viranch Mehta <viranch.mehta@gmail.com>
2015-12-02 17:31:48 -08:00
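An illustrative reduction of the case being fixed: a binding like `127.0.0.1::8000` names a host IP but leaves the host port empty, so it must not count as specifying one. This is a sketch, not the function's real implementation.

```python
def specifies_host_port(binding):        # simplified sketch
    parts = str(binding).split(':')
    if len(parts) == 3:                  # ip:host_port:container_port
        return parts[1] != ''
    if len(parts) == 2:                  # host_port:container_port
        return parts[0] != ''
    return False                         # bare container port

assert specifies_host_port('127.0.0.1:8000:8000')
assert not specifies_host_port('127.0.0.1::8000')   # host IP, no host port
assert not specifies_host_port('8000')
```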
Mazz Mosley
d28b2027b8 Clarify dockerfile requires build key
Credit to @funkyfuture for the first PR addressing the clarification.
https://github.com/docker/compose/pull/1767

Signed-off-by: Mazz Mosley <mazz@houseofmnowster.com>
2015-12-02 17:31:48 -08:00
Mazz Mosley
8d816fc2f3 Add cross references for env/cli
Signed-off-by: Mazz Mosley <mazz@houseofmnowster.com>
2015-12-02 17:31:48 -08:00
Daniel Nephin
f476436027 Merge remote-tracking branch 'docker/release' into bump-1.5.2
2015-12-02 16:56:55 -08:00
Daniel Nephin
fae20305ec Merge pull request #2384 from dnephin/bump-1.5.1
**WIP** Bump 1.5.1
2015-11-12 17:29:43 -05:00
Daniel Nephin
4628e93fb2 Bump 1.5.1
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 15:02:12 -05:00
Daniel Nephin
82086a4e92 Remove name field from the list of ALLOWED_KEYS
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 15:02:12 -05:00
Daniel Nephin
96e9b47059 Include the filename in validation errors.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 15:02:12 -05:00
Daniel Nephin
34166ef5a4 Refactor process_errors into smaller functions
So that it passes the new max-complexity requirement

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 15:02:12 -05:00
Daniel Nephin
285e52cc7c Add ids to config schemas
Also enforce a max complexity for functions and add some new tests for config.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 15:02:12 -05:00
Joffrey F
d52c969f94 Add test for environment variable dashes support
Signed-off-by: Joffrey F <joffrey@docker.com>
2015-11-12 13:54:41 -05:00
Joffrey F
63c3e6f58c Allow dashes in environment variable names
See http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html
Environment variable names used by the utilities in the Shell and
Utilities volume of POSIX.1-2008 consist solely of uppercase letters,
digits, and the <underscore> ( '_' ) from the characters defined in
Portable Character Set and do not begin with a digit. Other characters may
be permitted by an implementation; applications shall tolerate the
presence of such names.

Signed-off-by: Joffrey F <joffrey@docker.com>
2015-11-12 13:54:41 -05:00
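In schema terms (see the fields_schema.json diff below), the change drops the `^[^-]+$` key pattern that rejected any environment name containing a dash:

```python
import re

OLD_KEY_PATTERN = re.compile(r'^[^-]+$')      # pattern removed by this change
assert OLD_KEY_PATTERN.match('RACK_ENV')      # accepted before and after
assert not OLD_KEY_PATTERN.match('RACK-ENV')  # rejected before this change
```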
Daniel Nephin
0ab76bb8bc Add a test for invalid field 'name', and fix an existing test for invalid service names.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
7fc577c31d Remove name from config schema.
Refactors config validation of a service to use a ServiceConfig data object.
Instead of passing around a bunch of related scalars, we can use the
ServiceConfig object as a parameter to most of the service validation functions.

This allows for a fix to the config schema, where the name is a field in the
schema, but not actually in the configuration. By passing the name around as
part of the ServiceConfig object, we don't need to add it to the config options.
Fixes #2299

validate_against_service_schema() is moved from a conditional branch in
ServiceExtendsResolver() to happen as one of the last steps after all
configuration is merged. This schema contains only constraints which need
to be true at the very end of merging.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
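ServiceConfig itself is a plain namedtuple (see the config.py diff below); a minimal usage sketch:

```python
from collections import namedtuple

ServiceConfig = namedtuple('ServiceConfig', 'working_dir filename name config')

svc = ServiceConfig('/project', '/project/docker-compose.yml', 'web',
                    {'image': 'busybox'})
print(svc.name, svc.config['image'])  # the name travels with the config
                                      # without being a schema field
```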
Daniel Nephin
3a43110f06 Fix a bug in ExtendsResolver where the service name of the extended service was wrong.
This bug can be seen by the change to the test case. When the extended service
uses a different name, the error was reported incorrectly.

By fixing this bug we can simplify self.signature and self.detect_cycles to
always use self.service_name.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
87d79d4d99 Rename ServiceLoader to ServiceExtendsResolver
ServiceLoader has evolved to be not really all that related to "loading" a
service. Its responsibility is more to do with handling the `extends`
field, which is only part of loading.  The class and its primary method
(make_service_dict()) were renamed to better reflect their responsibility.

As part of that change process_container_options() was removed from
make_service_dict() and renamed to process_service().  It contains logic for
handling the non-extends options.

This change allows us to remove the hacks from testcase.py and only call
the functions we need to format a service dict correctly for integration tests.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
83581c3a0f Validate additional files before merging them.
Consolidates all the top level config handling into `process_config_file` which
is now used for both files and merge sources.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Joffrey F
ba90f55075 Reorganize conditional branches to improve readability
Signed-off-by: Joffrey F <joffrey@docker.com>
2015-11-12 13:54:41 -05:00
Yves Peter
3313dcb1ce Fixes #1490: progress_stream would print a lot of new lines on "docker-compose pull" if there's no tty.
Signed-off-by: Yves Peter <ypdraw@gmail.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
92d56fab47 Add a warning when the host volume config is being ignored.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
1208f92d9c Update doc wording for ulimits.
and move tests to the correct module

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Kevin Greene
8444551373 Added ulimits functionality to docker compose
Signed-off-by: Kevin Greene <kevin@spantree.net>
2015-11-12 13:54:41 -05:00
Daniel Nephin
73ebd7e560 Only create the default network if at least one service needs it.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
0a96f86f74 Cleanup workaround in testcase.py
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
36176befb0 Fix #1549 - flush after each line of logs.
Includes some refactoring of log_printer_test to support checking for flush(), and so that each test calls the unit-under-test directly, instead of through a helper function.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
de08da278d Re-order flags in bash completion
and remove unnecessary variables from build command.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Adrian Budau
4c2eb17ccd Added --force-rm to compose build.
It's a flag passed to docker build that removes the intermediate
containers left behind by failed builds.

Signed-off-by: Adrian Budau <budau.adi@gmail.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
8fb44db92b Cleanup some unit tests and whitespace.
Remove some unnecessary newlines.
Remove a unittest that was attempting to test behaviour that was removed a while ago, so it wasn't testing anything.
Updated some unit tests to use mocks instead of a custom fake.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:41 -05:00
Daniel Nephin
4105c3017c Move cli tests to a new testing package.
These cli tests are now a different kind of test that runs the compose binary. They are not the same as integration tests, which exercise internal interfaces.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:40 -05:00
Daniel Nephin
7f2f4eef48 Update cli tests to use subprocess.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:40 -05:00
Joffrey F
666c3cb1c7 Use exit code 1 when encountering a ReadTimeout
Signed-off-by: Joffrey F <joffrey@docker.com>
2015-11-12 13:54:40 -05:00
Daniel Nephin
886134c1f3 Recreate dependents when a dependency is created (not just when it's recreated).
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:40 -05:00
Daniel Nephin
ba61a6c5fb Don't set the hostname to the service name with networking.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:40 -05:00
Daniel Nephin
3f14df374f Handle non-utf8 unicode without raising an error.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:40 -05:00
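The diff below switches the decode to the `'replace'` error handler; a one-line illustration with a made-up byte string:

```python
bad = b'FOO=caf\xe9'                   # latin-1 byte, invalid as UTF-8
# bad.decode('utf-8')                  # would raise UnicodeDecodeError
print(bad.decode('utf-8', 'replace'))  # 'FOO=caf\ufffd' (U+FFFD replacement)
```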
Daniel Nephin
e6755d1e7c Use VolumeSpec instead of re-parsing the volume string.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:40 -05:00
Daniel Nephin
c4f59e731d Make working_dir consistent in the config package.
- make it a positional arg, since it's required
- make it the first argument for all functions that require it
- remove an unnecessary one-line function that was only called in one place

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
805ed344c0 Refactor ServiceLoader to be immutable.
Mutable objects are harder to debug and harder to reason about. ServiceLoader was almost immutable. There was just a single function which set fields for a second function. Instead of mutating the object, we can pass those values as parameters to the next function.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
a5959d9be2 Some minor style cleanup
- fixed a docstring to make it PEP257 compliant
- wrapped some long lines
- used a more specific error

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Mazz Mosley
0375dccf64 Handle non-ascii chars in volume directories
Signed-off-by: Mazz Mosley <mazz@houseofmnowster.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
3c4bb5358e Upgrade pyyaml to 3.11
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
e317d2db9d Remove service.start_container()
It has been an unnecessary wrapper around container.start() for a little while now, so we can call it directly.

Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Joffrey F
3daecfa8e4 Update service config_dict computation to include volumes_from mode
Ensure config_hash is updated when volumes_from mode is changed, and
service is recreated on next up as a result.

Signed-off-by: Joffrey F <joffrey@docker.com>
2015-11-12 13:54:39 -05:00
Aanand Prasad
cf93362368 Fix parallel output
We were outputting an extra line, which in *some* cases, on *some*
terminals, was causing the output of parallel actions to get messed up.

In particular, it would happen when the terminal had just been cleared
or hadn't yet filled up with a screen's worth of text.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
23d4eda2a5 Fix service recreate when image changes to build.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
718ae13ae1 Move config hash tests to service_test.py
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-11-12 13:54:39 -05:00
Daniel Nephin
cddbe9fbf1 Merge remote-tracking branch 'docker/release' into bump-1.5.1
2015-11-12 13:03:47 -05:00
Aanand Prasad
9c8173dbfd Merge pull request #2309 from dnephin/bump-1.5.0
WIP: Bump 1.5.0
2015-11-03 18:31:56 +00:00
67 changed files with 2606 additions and 1591 deletions

.gitignore

@@ -8,3 +8,4 @@
/docs/_site
/venv
README.rst
compose/GITSHA

.travis.yml

@@ -2,16 +2,14 @@ sudo: required
language: python
services:
- docker
matrix:
include:
- os: linux
services:
- docker
- os: osx
language: generic
install: ./script/travis/install
script:

CHANGELOG.md

@@ -1,6 +1,81 @@
Change log
==========

1.5.2 (2015-12-03)
------------------

- Fixed a bug which broke the use of `environment` and `env_file` with
  `extends`, and caused environment keys without values to have a `None`
  value instead of a value from the host environment.
- Fixed a regression in 1.5.1 that caused a warning about volumes to be
  raised incorrectly when containers were recreated.
- Fixed a bug which prevented building a `Dockerfile` that used `ADD <url>`.
- Fixed a bug with `docker-compose restart` which prevented it from
  starting stopped containers.
- Fixed handling of SIGTERM and SIGINT to properly stop containers.
- Add support for using a url as the value of `build`.
- Improved the validation of the `expose` option.

1.5.1 (2015-11-12)
------------------

- Add the `--force-rm` option to `build`.
- Add the `ulimit` option for services in the Compose file.
- Fixed a bug where `up` would error with "service needs to be built" if
  a service changed from using `image` to using `build`.
- Fixed a bug that would cause incorrect output of parallel operations
  on some terminals.
- Fixed a bug that prevented a container from being recreated when the
  mode of a `volumes_from` was changed.
- Fixed a regression in 1.5.0 where non-utf-8 unicode characters would cause
  `up` or `logs` to crash.
- Fixed a regression in 1.5.0 where Compose would use a success exit status
  code when a command fails due to an HTTP timeout communicating with the
  docker daemon.
- Fixed a regression in 1.5.0 where `name` was being accepted as a valid
  service option which would override the actual name of the service.
- When using `--x-networking` Compose no longer sets the hostname to the
  container name.
- When using `--x-networking` Compose will only create the default network
  if at least one container is using the network.
- When printing logs during `up` or `logs`, flush the output buffer after
  each line to prevent buffering issues from hiding logs.
- Recreate a container if one of its dependencies is being created.
  Previously a container was only recreated if its dependencies already
  existed, but were being recreated as well.
- Add a warning when a `volume` in the Compose file is being ignored
  and masked by a container volume from a previous container.
- Improve the output of `pull` when run without a tty.
- When using multiple Compose files, validate each before attempting to merge
  them together. Previously, invalid files would produce unhelpful errors.
- Allow dashes in keys in the `environment` service option.
- Improve validation error messages by including the filename as part of the
  error message.

1.5.0 (2015-11-03)
------------------

Dockerfile.run

@@ -8,6 +8,6 @@ COPY requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
ADD dist/docker-compose-release.tar.gz /code/docker-compose
RUN pip install /code/docker-compose/docker-compose-*
RUN pip install --no-deps /code/docker-compose/docker-compose-*
ENTRYPOINT ["/usr/bin/docker-compose"]

MANIFEST.in

@@ -7,6 +7,7 @@ include *.md
exclude README.md
include README.rst
include compose/config/*.json
include compose/GITSHA
recursive-include contrib/completion *
recursive-include tests *
global-exclude *.pyc

README.md

@@ -10,7 +10,7 @@ see [the list of features](docs/index.md#features).
Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](#common-use-cases).
[Common Use Cases](docs/index.md#common-use-cases).
Using Compose is basically a three-step process.

compose/__init__.py

@@ -1,3 +1,3 @@
from __future__ import unicode_literals
__version__ = '1.5.0'
__version__ = '1.5.2'

compose/cli/command.py

@@ -12,12 +12,11 @@ from requests.exceptions import SSLError
from . import errors
from . import verbose_proxy
from .. import __version__
from .. import config
from ..project import Project
from ..service import ConfigError
from .docker_client import docker_client
from .utils import call_silently
from .utils import get_version_info
from .utils import is_mac
from .utils import is_ubuntu
@@ -71,7 +70,7 @@ def get_client(verbose=False, version=None):
client = docker_client(version=version)
if verbose:
version_info = six.iteritems(client.version())
log.info("Compose version %s", __version__)
log.info(get_version_info('full'))
log.info("Docker base_url: %s", client.base_url)
log.info("Docker version: %s",
", ".join("%s=%s" % item for item in version_info))
@@ -84,16 +83,12 @@ def get_project(base_dir, config_path=None, project_name=None, verbose=False,
config_details = config.find(base_dir, config_path)
api_version = '1.21' if use_networking else None
try:
return Project.from_dicts(
get_project_name(config_details.working_dir, project_name),
config.load(config_details),
get_client(verbose=verbose, version=api_version),
use_networking=use_networking,
network_driver=network_driver,
)
except ConfigError as e:
raise errors.UserError(six.text_type(e))
return Project.from_dicts(
get_project_name(config_details.working_dir, project_name),
config.load(config_details),
get_client(verbose=verbose, version=api_version),
use_networking=use_networking,
network_driver=network_driver)
def get_project_name(working_dir, project_name=None):

compose/cli/log_printer.py

@@ -26,6 +26,7 @@ class LogPrinter(object):
generators = list(self._make_log_generators(self.monochrome, prefix_width))
for line in Multiplexer(generators).loop():
self.output.write(line)
self.output.flush()
def _make_log_generators(self, monochrome, prefix_width):
def no_color(text):

compose/cli/main.py

@@ -13,12 +13,12 @@ from requests.exceptions import ReadTimeout
from .. import __version__
from .. import legacy
from ..config import ConfigurationError
from ..config import parse_environment
from ..const import DEFAULT_TIMEOUT
from ..const import HTTP_TIMEOUT
from ..const import IS_WINDOWS_PLATFORM
from ..progress_stream import StreamOutputError
from ..project import ConfigurationError
from ..project import NoSuchService
from ..service import BuildError
from ..service import ConvergenceStrategy
@@ -80,6 +80,7 @@ def main():
"If you encounter this issue regularly because of slow network conditions, consider setting "
"COMPOSE_HTTP_TIMEOUT to a higher value (current value: %s)." % HTTP_TIMEOUT
)
sys.exit(1)
def setup_logging():
@@ -180,12 +181,15 @@ class TopLevelCommand(DocoptCommand):
Usage: build [options] [SERVICE...]
Options:
--force-rm Always remove intermediate containers.
--no-cache Do not use cache when building the image.
--pull Always attempt to pull a newer version of the image.
"""
no_cache = bool(options.get('--no-cache', False))
pull = bool(options.get('--pull', False))
project.build(service_names=options['SERVICE'], no_cache=no_cache, pull=pull)
project.build(
service_names=options['SERVICE'],
no_cache=bool(options.get('--no-cache', False)),
pull=bool(options.get('--pull', False)),
force_rm=bool(options.get('--force-rm', False)))
def help(self, project, options):
"""
@@ -364,7 +368,6 @@ class TopLevelCommand(DocoptCommand):
allocates a TTY.
"""
service = project.get_service(options['SERVICE'])
detach = options['-d']
if IS_WINDOWS_PLATFORM and not detach:
@@ -376,22 +379,6 @@ class TopLevelCommand(DocoptCommand):
if options['--allow-insecure-ssl']:
log.warn(INSECURE_SSL_WARNING)
if not options['--no-deps']:
deps = service.get_linked_service_names()
if len(deps) > 0:
project.up(
service_names=deps,
start_deps=True,
strategy=ConvergenceStrategy.never,
)
elif project.use_networking:
project.ensure_network_exists()
tty = True
if detach or options['-T'] or not sys.stdin.isatty():
tty = False
if options['COMMAND']:
command = [options['COMMAND']] + options['ARGS']
else:
@@ -399,7 +386,7 @@ class TopLevelCommand(DocoptCommand):
container_options = {
'command': command,
'tty': tty,
'tty': not (detach or options['-T'] or not sys.stdin.isatty()),
'stdin_open': not detach,
'detach': detach,
}
@@ -431,31 +418,7 @@ class TopLevelCommand(DocoptCommand):
if options['--name']:
container_options['name'] = options['--name']
try:
container = service.create_container(
quiet=True,
one_off=True,
**container_options
)
except APIError as e:
legacy.check_for_legacy_containers(
project.client,
project.name,
[service.name],
allow_one_off=False,
)
raise e
if detach:
service.start_container(container)
print(container.name)
else:
dockerpty.start(project.client, container.id, interactive=not options['-T'])
exit_code = container.wait()
if options['--rm']:
project.client.remove_container(container.id)
sys.exit(exit_code)
run_one_off_container(container_options, project, service, options)
def scale(self, project, options):
"""
@@ -643,6 +606,58 @@ def convergence_strategy_from_opts(options):
return ConvergenceStrategy.changed
def run_one_off_container(container_options, project, service, options):
if not options['--no-deps']:
deps = service.get_linked_service_names()
if deps:
project.up(
service_names=deps,
start_deps=True,
strategy=ConvergenceStrategy.never)
if project.use_networking:
project.ensure_network_exists()
try:
container = service.create_container(
quiet=True,
one_off=True,
**container_options)
except APIError:
legacy.check_for_legacy_containers(
project.client,
project.name,
[service.name],
allow_one_off=False)
raise
if options['-d']:
container.start()
print(container.name)
return
def remove_container(force=False):
if options['--rm']:
project.client.remove_container(container.id, force=True)
def force_shutdown(signal, frame):
project.client.kill(container.id)
remove_container(force=True)
sys.exit(2)
def shutdown(signal, frame):
set_signal_handler(force_shutdown)
project.client.stop(container.id)
remove_container()
sys.exit(1)
set_signal_handler(shutdown)
dockerpty.start(project.client, container.id, interactive=not options['-T'])
exit_code = container.wait()
remove_container()
sys.exit(exit_code)
def build_log_printer(containers, service_names, monochrome):
if service_names:
containers = [
@@ -653,18 +668,25 @@ def build_log_printer(containers, service_names, monochrome):
def attach_to_logs(project, log_printer, service_names, timeout):
print("Attaching to", list_containers(log_printer.containers))
try:
log_printer.run()
finally:
def handler(signal, frame):
project.kill(service_names=service_names)
sys.exit(0)
signal.signal(signal.SIGINT, handler)
def force_shutdown(signal, frame):
project.kill(service_names=service_names)
sys.exit(2)
def shutdown(signal, frame):
set_signal_handler(force_shutdown)
print("Gracefully stopping... (press Ctrl+C again to force)")
project.stop(service_names=service_names, timeout=timeout)
print("Attaching to", list_containers(log_printer.containers))
set_signal_handler(shutdown)
log_printer.run()
def set_signal_handler(handler):
signal.signal(signal.SIGINT, handler)
signal.signal(signal.SIGTERM, handler)
def list_containers(containers):
return ", ".join(c.name for c in containers)

compose/cli/utils.py

@@ -7,10 +7,10 @@ import platform
import ssl
import subprocess
from docker import version as docker_py_version
import docker
from six.moves import input
from .. import __version__
import compose
def yesno(prompt, default=None):
@@ -57,13 +57,32 @@ def is_ubuntu():
def get_version_info(scope):
versioninfo = 'docker-compose version: %s' % __version__
versioninfo = 'docker-compose version {}, build {}'.format(
compose.__version__,
get_build_version())
if scope == 'compose':
return versioninfo
elif scope == 'full':
return versioninfo + '\n' \
+ "docker-py version: %s\n" % docker_py_version \
+ "%s version: %s\n" % (platform.python_implementation(), platform.python_version()) \
+ "OpenSSL version: %s" % ssl.OPENSSL_VERSION
else:
raise RuntimeError('passed unallowed value to `cli.utils.get_version_info`')
if scope == 'full':
return (
"{}\n"
"docker-py version: {}\n"
"{} version: {}\n"
"OpenSSL version: {}"
).format(
versioninfo,
docker.version,
platform.python_implementation(),
platform.python_version(),
ssl.OPENSSL_VERSION)
raise ValueError("{} is not a valid version scope".format(scope))
def get_build_version():
filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA')
if not os.path.exists(filename):
return 'unknown'
with open(filename) as fh:
return fh.read().strip()

compose/config/__init__.py

@@ -1,9 +1,7 @@
# flake8: noqa
from .config import ConfigDetails
from .config import ConfigurationError
from .config import DOCKER_CONFIG_KEYS
from .config import find
from .config import get_service_name_from_net
from .config import load
from .config import merge_environment
from .config import parse_environment

compose/config/config.py

@@ -1,3 +1,5 @@
from __future__ import absolute_import
import codecs
import logging
import os
@@ -11,9 +13,14 @@ from .errors import CircularReference
from .errors import ComposeFileNotFound
from .errors import ConfigurationError
from .interpolation import interpolate_environment_variables
from .sort_services import get_service_name_from_net
from .sort_services import sort_service_dicts
from .types import parse_extra_hosts
from .types import parse_restart_spec
from .types import VolumeFromSpec
from .types import VolumeSpec
from .validation import validate_against_fields_schema
from .validation import validate_against_service_schema
from .validation import validate_extended_service_exists
from .validation import validate_extends_file_path
from .validation import validate_top_level_object
@@ -66,9 +73,15 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'dockerfile',
'expose',
'external_links',
'name',
]
DOCKER_VALID_URL_PREFIXES = (
'http://',
'https://',
'git://',
'github.com/',
'git@',
)
SUPPORTED_FILENAMES = [
'docker-compose.yml',
@@ -99,6 +112,24 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
:type config: :class:`dict`
"""
@classmethod
def from_filename(cls, filename):
return cls(filename, load_yaml(filename))
class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')):
@classmethod
def with_abs_paths(cls, working_dir, filename, name, config):
if not working_dir:
raise ValueError("No working_dir for ServiceConfig.")
return cls(
os.path.abspath(working_dir),
os.path.abspath(filename) if filename else filename,
name,
config)
def find(base_dir, filenames):
if filenames == ['-']:
@@ -114,7 +145,7 @@ def find(base_dir, filenames):
log.debug("Using configuration files: {}".format(",".join(filenames)))
return ConfigDetails(
os.path.dirname(filenames[0]),
[ConfigFile(f, load_yaml(f)) for f in filenames])
[ConfigFile.from_filename(f) for f in filenames])
def get_default_config_files(base_dir):
@@ -174,22 +205,27 @@ def load(config_details):
"""
def build_service(filename, service_name, service_dict):
loader = ServiceLoader(
service_config = ServiceConfig.with_abs_paths(
config_details.working_dir,
filename,
service_name,
service_dict)
service_dict = loader.make_service_dict()
resolver = ServiceExtendsResolver(service_config)
service_dict = process_service(resolver.run())
# TODO: move to validate_service()
validate_against_service_schema(service_dict, service_config.name)
validate_paths(service_dict)
service_dict = finalize_service(service_config._replace(config=service_dict))
service_dict['name'] = service_config.name
return service_dict
def load_file(filename, config):
processed_config = interpolate_environment_variables(config)
validate_against_fields_schema(processed_config)
return [
build_service(filename, name, service_config)
for name, service_config in processed_config.items()
]
def build_services(config_file):
return sort_service_dicts([
build_service(config_file.filename, name, service_dict)
for name, service_dict in config_file.config.items()
])
def merge_services(base, override):
all_service_names = set(base) | set(override)
@@ -200,159 +236,186 @@ def load(config_details):
for name in all_service_names
}
config_file = config_details.config_files[0]
validate_top_level_object(config_file.config)
config_file = process_config_file(config_details.config_files[0])
for next_file in config_details.config_files[1:]:
validate_top_level_object(next_file.config)
next_file = process_config_file(next_file)
config_file = ConfigFile(
config_file.filename,
merge_services(config_file.config, next_file.config))
config = merge_services(config_file.config, next_file.config)
config_file = config_file._replace(config=config)
return load_file(config_file.filename, config_file.config)
return build_services(config_file)
class ServiceLoader(object):
def __init__(self, working_dir, filename, service_name, service_dict, already_seen=None):
if working_dir is None:
raise Exception("No working_dir passed to ServiceLoader()")
def process_config_file(config_file, service_name=None):
validate_top_level_object(config_file)
processed_config = interpolate_environment_variables(config_file.config)
validate_against_fields_schema(processed_config, config_file.filename)
self.working_dir = os.path.abspath(working_dir)
if service_name and service_name not in processed_config:
raise ConfigurationError(
"Cannot extend service '{}' in {}: Service not found".format(
service_name, config_file.filename))
if filename:
self.filename = os.path.abspath(filename)
else:
self.filename = filename
return config_file._replace(config=processed_config)
class ServiceExtendsResolver(object):
def __init__(self, service_config, already_seen=None):
self.service_config = service_config
self.working_dir = service_config.working_dir
self.already_seen = already_seen or []
self.service_dict = service_dict.copy()
self.service_name = service_name
self.service_dict['name'] = service_name
def detect_cycle(self, name):
if self.signature(name) in self.already_seen:
raise CircularReference(self.already_seen + [self.signature(name)])
@property
def signature(self):
return self.service_config.filename, self.service_config.name
def make_service_dict(self):
self.resolve_environment()
if 'extends' in self.service_dict:
self.validate_and_construct_extends()
self.service_dict = self.resolve_extends()
def detect_cycle(self):
if self.signature in self.already_seen:
raise CircularReference(self.already_seen + [self.signature])
if not self.already_seen:
validate_against_service_schema(self.service_dict, self.service_name)
def run(self):
self.detect_cycle()
return process_container_options(self.service_dict, working_dir=self.working_dir)
if 'extends' in self.service_config.config:
service_dict = self.resolve_extends(*self.validate_and_construct_extends())
return self.service_config._replace(config=service_dict)
def resolve_environment(self):
"""
Unpack any environment variables from an env_file, if set.
Interpolate environment values if set.
"""
if 'environment' not in self.service_dict and 'env_file' not in self.service_dict:
return
env = {}
if 'env_file' in self.service_dict:
for f in get_env_files(self.service_dict, working_dir=self.working_dir):
env.update(env_vars_from_file(f))
del self.service_dict['env_file']
env.update(parse_environment(self.service_dict.get('environment')))
env = dict(resolve_env_var(k, v) for k, v in six.iteritems(env))
self.service_dict['environment'] = env
return self.service_config
def validate_and_construct_extends(self):
extends = self.service_dict['extends']
extends = self.service_config.config['extends']
if not isinstance(extends, dict):
extends = {'service': extends}
validate_extends_file_path(
self.service_name,
extends,
self.filename
)
self.extended_config_path = self.get_extended_config_path(extends)
self.extended_service_name = extends['service']
config_path = self.get_extended_config_path(extends)
service_name = extends['service']
config = load_yaml(self.extended_config_path)
validate_top_level_object(config)
full_extended_config = interpolate_environment_variables(config)
extended_file = process_config_file(
ConfigFile.from_filename(config_path),
service_name=service_name)
service_config = extended_file.config[service_name]
return config_path, service_config, service_name
validate_extended_service_exists(
self.extended_service_name,
full_extended_config,
self.extended_config_path
)
validate_against_fields_schema(full_extended_config)
def resolve_extends(self, extended_config_path, service_dict, service_name):
resolver = ServiceExtendsResolver(
ServiceConfig.with_abs_paths(
os.path.dirname(extended_config_path),
extended_config_path,
service_name,
service_dict),
already_seen=self.already_seen + [self.signature])
self.extended_config = full_extended_config[self.extended_service_name]
def resolve_extends(self):
other_working_dir = os.path.dirname(self.extended_config_path)
other_already_seen = self.already_seen + [self.signature(self.service_name)]
other_loader = ServiceLoader(
working_dir=other_working_dir,
filename=self.extended_config_path,
service_name=self.service_name,
service_dict=self.extended_config,
already_seen=other_already_seen,
)
other_loader.detect_cycle(self.extended_service_name)
other_service_dict = other_loader.make_service_dict()
service_config = resolver.run()
other_service_dict = process_service(service_config)
validate_extended_service_dict(
other_service_dict,
filename=self.extended_config_path,
service=self.extended_service_name,
extended_config_path,
service_name,
)
return merge_service_dicts(other_service_dict, self.service_dict)
return merge_service_dicts(other_service_dict, self.service_config.config)
def get_extended_config_path(self, extends_options):
"""
Service we are extending either has a value for 'file' set, which we
"""Service we are extending either has a value for 'file' set, which we
need to obtain a full path too or we are extending from a service
defined in our own file.
"""
filename = self.service_config.filename
validate_extends_file_path(
self.service_config.name,
extends_options,
filename)
if 'file' in extends_options:
extends_from_filename = extends_options['file']
return expand_path(self.working_dir, extends_from_filename)
return expand_path(self.working_dir, extends_options['file'])
return filename
return self.filename
def signature(self, name):
return (self.filename, name)
def resolve_environment(service_dict):
"""Unpack any environment variables from an env_file, if set.
Interpolate environment values if set.
"""
env = {}
for env_file in service_dict.get('env_file', []):
env.update(env_vars_from_file(env_file))
env.update(parse_environment(service_dict.get('environment')))
return dict(resolve_env_var(k, v) for k, v in six.iteritems(env))
def validate_extended_service_dict(service_dict, filename, service):
error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)
if 'links' in service_dict:
raise ConfigurationError("%s services with 'links' cannot be extended" % error_prefix)
raise ConfigurationError(
"%s services with 'links' cannot be extended" % error_prefix)
if 'volumes_from' in service_dict:
raise ConfigurationError("%s services with 'volumes_from' cannot be extended" % error_prefix)
raise ConfigurationError(
"%s services with 'volumes_from' cannot be extended" % error_prefix)
if 'net' in service_dict:
if get_service_name_from_net(service_dict['net']) is not None:
raise ConfigurationError("%s services with 'net: container' cannot be extended" % error_prefix)
raise ConfigurationError(
"%s services with 'net: container' cannot be extended" % error_prefix)
def process_container_options(service_dict, working_dir=None):
service_dict = service_dict.copy()
def validate_ulimits(ulimit_config):
for limit_name, soft_hard_values in six.iteritems(ulimit_config):
if isinstance(soft_hard_values, dict):
if not soft_hard_values['soft'] <= soft_hard_values['hard']:
raise ConfigurationError(
"ulimit_config \"{}\" cannot contain a 'soft' value higher "
"than 'hard' value".format(ulimit_config))
# TODO: rename to normalize_service
def process_service(service_config):
working_dir = service_config.working_dir
service_dict = dict(service_config.config)
if 'env_file' in service_dict:
service_dict['env_file'] = [
expand_path(working_dir, path)
for path in to_list(service_dict['env_file'])
]
if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
service_dict['volumes'] = resolve_volume_paths(service_dict, working_dir=working_dir)
service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)
if 'build' in service_dict:
service_dict['build'] = resolve_build_path(service_dict['build'], working_dir=working_dir)
service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
if 'labels' in service_dict:
service_dict['labels'] = parse_labels(service_dict['labels'])
if 'extra_hosts' in service_dict:
service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])
# TODO: move to a validate_service()
if 'ulimits' in service_dict:
validate_ulimits(service_dict['ulimits'])
return service_dict
def finalize_service(service_config):
service_dict = dict(service_config.config)
if 'environment' in service_dict or 'env_file' in service_dict:
service_dict['environment'] = resolve_environment(service_dict)
service_dict.pop('env_file', None)
if 'volumes_from' in service_dict:
service_dict['volumes_from'] = [
VolumeFromSpec.parse(vf) for vf in service_dict['volumes_from']]
if 'volumes' in service_dict:
service_dict['volumes'] = [
VolumeSpec.parse(v) for v in service_dict['volumes']]
if 'restart' in service_dict:
service_dict['restart'] = parse_restart_spec(service_dict['restart'])
return service_dict
@@ -403,7 +466,7 @@ def merge_service_dicts(base, override):
if key in base or key in override:
d[key] = base.get(key, []) + override.get(key, [])
list_or_string_keys = ['dns', 'dns_search']
list_or_string_keys = ['dns', 'dns_search', 'env_file']
for key in list_or_string_keys:
if key in base or key in override:
@@ -424,17 +487,6 @@ def merge_environment(base, override):
return env
def get_env_files(options, working_dir=None):
if 'env_file' not in options:
return {}
env_files = options.get('env_file', [])
if not isinstance(env_files, list):
env_files = [env_files]
return [expand_path(working_dir, path) for path in env_files]
def parse_environment(environment):
if not environment:
return {}
@@ -453,7 +505,7 @@ def parse_environment(environment):
def split_env(env):
if isinstance(env, six.binary_type):
env = env.decode('utf-8')
env = env.decode('utf-8', 'replace')
if '=' in env:
return env.split('=', 1)
else:
@@ -484,39 +536,45 @@ def env_vars_from_file(filename):
return env
def resolve_volume_paths(service_dict, working_dir=None):
if working_dir is None:
raise Exception("No working_dir passed to resolve_volume_paths()")
def resolve_volume_paths(working_dir, service_dict):
return [
resolve_volume_path(v, working_dir, service_dict['name'])
for v in service_dict['volumes']
resolve_volume_path(working_dir, volume)
for volume in service_dict['volumes']
]
def resolve_volume_path(volume, working_dir, service_name):
def resolve_volume_path(working_dir, volume):
container_path, host_path = split_path_mapping(volume)
if host_path is not None:
if host_path.startswith('.'):
host_path = expand_path(working_dir, host_path)
host_path = os.path.expanduser(host_path)
return "{}:{}".format(host_path, container_path)
return u"{}:{}".format(host_path, container_path)
else:
return container_path
def resolve_build_path(build_path, working_dir=None):
if working_dir is None:
raise Exception("No working_dir passed to resolve_build_path")
def resolve_build_path(working_dir, build_path):
if is_url(build_path):
return build_path
return expand_path(working_dir, build_path)
def is_url(build_path):
return build_path.startswith(DOCKER_VALID_URL_PREFIXES)
def validate_paths(service_dict):
if 'build' in service_dict:
build_path = service_dict['build']
if not os.path.exists(build_path) or not os.access(build_path, os.R_OK):
raise ConfigurationError("build path %s either does not exist or is not accessible." % build_path)
if (
not is_url(build_path) and
(not os.path.exists(build_path) or not os.access(build_path, os.R_OK))
):
raise ConfigurationError(
"build path %s either does not exist, is not accessible, "
"or is not a valid URL." % build_path)
def merge_path_mappings(base, override):
@@ -578,7 +636,7 @@ def parse_labels(labels):
return dict(split_label(e) for e in labels)
if isinstance(labels, dict):
return labels
return dict(labels)
def split_label(label):
@@ -601,17 +659,6 @@ def to_list(value):
return value
def get_service_name_from_net(net_config):
if not net_config:
return
if not net_config.startswith('container:'):
return
_, net_name = net_config.split(':', 1)
return net_name
def load_yaml(filename):
try:
with open(filename, 'r') as fh:

compose/config/errors.py

@@ -6,6 +6,10 @@ class ConfigurationError(Exception):
return self.msg
class DependencyError(ConfigurationError):
pass
class CircularReference(ConfigurationError):
def __init__(self, trail):
self.trail = trail

compose/config/fields_schema.json

@@ -2,15 +2,18 @@
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"id": "fields_schema.json",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
@@ -34,26 +37,14 @@
"domainname": {"type": "string"},
"entrypoint": {"$ref": "#/definitions/string_or_list"},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {
"oneOf": [
{
"type": "object",
"patternProperties": {
"^[^-]+$": {
"type": ["string", "number", "boolean", "null"],
"format": "environment"
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {"type": ["string", "number"]},
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
@@ -89,23 +80,14 @@
"mac_address": {"type": "string"},
"mem_limit": {"type": ["number", "string"]},
"memswap_limit": {"type": ["number", "string"]},
"name": {"type": "string"},
"net": {"type": "string"},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"oneOf": [
{
"type": "string",
"format": "ports"
},
{
"type": "number",
"format": "ports"
}
]
"type": ["string", "number"],
"format": "ports"
},
"uniqueItems": true
},
@@ -116,6 +98,25 @@
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"stdin_open": {"type": "boolean"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"volume_driver": {"type": "string"},
@@ -144,11 +145,18 @@
"list_or_dict": {
"oneOf": [
{"type": "array", "items": {"type": "string"}, "uniqueItems": true},
{"type": "object"}
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "boolean", "null"],
"format": "bool-value-in-mapping"
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
}
},
"additionalProperties": false
}
}

compose/config/interpolation.py

@@ -18,13 +18,6 @@ def interpolate_environment_variables(config):
def process_service(service_name, service_dict, mapping):
if not isinstance(service_dict, dict):
raise ConfigurationError(
'Service "%s" doesn\'t have any configuration options. '
'All top level keys in your docker-compose.yml must map '
'to a dictionary of configuration options.' % service_name
)
return dict(
(key, interpolate_value(service_name, key, val, mapping))
for (key, val) in service_dict.items()

compose/config/service_schema.json

@@ -1,21 +1,17 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "service_schema.json",
"type": "object",
"properties": {
"name": {"type": "string"}
},
"required": ["name"],
"allOf": [
{"$ref": "fields_schema.json#/definitions/service"},
{"$ref": "#/definitions/service_constraints"}
{"$ref": "#/definitions/constraints"}
],
"definitions": {
"service_constraints": {
"constraints": {
"id": "#/definitions/constraints",
"anyOf": [
{
"required": ["build"],
@@ -27,13 +23,8 @@
{"required": ["build"]},
{"required": ["dockerfile"]}
]}
},
{
"required": ["extends"],
"not": {"required": ["build", "image"]}
}
]
}
}
}

compose/config/sort_services.py

@@ -0,0 +1,55 @@
from compose.config.errors import DependencyError
def get_service_name_from_net(net_config):
if not net_config:
return
if not net_config.startswith('container:'):
return
_, net_name = net_config.split(':', 1)
return net_name
def sort_service_dicts(services):
# Topological sort (Cormen/Tarjan algorithm).
unmarked = services[:]
temporary_marked = set()
sorted_services = []
def get_service_names(links):
return [link.split(':')[0] for link in links]
def get_service_names_from_volumes_from(volumes_from):
return [volume_from.source for volume_from in volumes_from]
def get_service_dependents(service_dict, services):
name = service_dict['name']
return [
service for service in services
if (name in get_service_names(service.get('links', [])) or
name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
name == get_service_name_from_net(service.get('net')))
]
def visit(n):
if n['name'] in temporary_marked:
if n['name'] in get_service_names(n.get('links', [])):
raise DependencyError('A service can not link to itself: %s' % n['name'])
if n['name'] in n.get('volumes_from', []):
raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
else:
raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
if n in unmarked:
temporary_marked.add(n['name'])
for m in get_service_dependents(n, services):
visit(m)
temporary_marked.remove(n['name'])
unmarked.remove(n)
sorted_services.insert(0, n)
while unmarked:
visit(unmarked[-1])
return sorted_services

compose/config/types.py (new file)

@@ -0,0 +1,120 @@
"""
Types for objects parsed from the configuration.
"""
from __future__ import absolute_import
from __future__ import unicode_literals
import os
from collections import namedtuple
from compose.config.errors import ConfigurationError
from compose.const import IS_WINDOWS_PLATFORM
class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode')):
@classmethod
def parse(cls, volume_from_config):
parts = volume_from_config.split(':')
if len(parts) > 2:
raise ConfigurationError(
"volume_from {} has incorrect format, should be "
"service[:mode]".format(volume_from_config))
if len(parts) == 1:
source = parts[0]
mode = 'rw'
else:
source, mode = parts
return cls(source, mode)
def parse_restart_spec(restart_config):
if not restart_config:
return None
parts = restart_config.split(':')
if len(parts) > 2:
raise ConfigurationError(
"Restart %s has incorrect format, should be "
"mode[:max_retry]" % restart_config)
if len(parts) == 2:
name, max_retry_count = parts
else:
name, = parts
max_retry_count = 0
return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
def parse_extra_hosts(extra_hosts_config):
if not extra_hosts_config:
return {}
if isinstance(extra_hosts_config, dict):
return dict(extra_hosts_config)
if isinstance(extra_hosts_config, list):
extra_hosts_dict = {}
for extra_hosts_line in extra_hosts_config:
# TODO: validate string contains ':' ?
host, ip = extra_hosts_line.split(':')
extra_hosts_dict[host.strip()] = ip.strip()
return extra_hosts_dict
def normalize_paths_for_engine(external_path, internal_path):
"""Windows paths, c:\my\path\shiny, need to be changed to be compatible with
the Engine. Volume paths are expected to be linux style /c/my/path/shiny/
"""
if not IS_WINDOWS_PLATFORM:
return external_path, internal_path
if external_path:
drive, tail = os.path.splitdrive(external_path)
if drive:
external_path = '/' + drive.lower().rstrip(':') + tail
external_path = external_path.replace('\\', '/')
return external_path, internal_path.replace('\\', '/')
class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')):
@classmethod
def parse(cls, volume_config):
"""Parse a volume_config path and split it into external:internal[:mode]
parts to be returned as a valid VolumeSpec.
"""
if IS_WINDOWS_PLATFORM:
# relative paths in windows expand to include the drive, eg C:\
# so we join the first 2 parts back together to count as one
drive, tail = os.path.splitdrive(volume_config)
parts = tail.split(":")
if drive:
parts[0] = drive + parts[0]
else:
parts = volume_config.split(':')
if len(parts) > 3:
raise ConfigurationError(
"Volume %s has incorrect format, should be "
"external:internal[:mode]" % volume_config)
if len(parts) == 1:
external, internal = normalize_paths_for_engine(
None,
os.path.normpath(parts[0]))
else:
external, internal = normalize_paths_for_engine(
os.path.normpath(parts[0]),
os.path.normpath(parts[1]))
mode = 'rw'
if len(parts) == 3:
mode = parts[2]
return cls(external, internal, mode)

compose/config/validation.py

@@ -1,6 +1,7 @@
import json
import logging
import os
import re
import sys
import six
@@ -34,22 +35,29 @@ DOCKER_CONFIG_HINTS = {
VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]'
VALID_EXPOSE_FORMAT = r'^\d+(\/[a-zA-Z]+)?$'
@FormatChecker.cls_checks(
format="ports",
raises=ValidationError(
"Invalid port formatting, it should be "
"'[[remote_ip:]remote_port:]port[/protocol]'"))
@FormatChecker.cls_checks(format="ports", raises=ValidationError)
def format_ports(instance):
try:
split_port(instance)
except ValueError:
return False
except ValueError as e:
raise ValidationError(six.text_type(e))
return True
@FormatChecker.cls_checks(format="environment")
@FormatChecker.cls_checks(format="expose", raises=ValidationError)
def format_expose(instance):
if isinstance(instance, six.string_types):
if not re.match(VALID_EXPOSE_FORMAT, instance):
raise ValidationError(
"should be of the format 'PORT[/PROTOCOL]'")
return True
@FormatChecker.cls_checks(format="bool-value-in-mapping")
def format_boolean_in_environment(instance):
"""
Check if there is a boolean in the environment and display a warning.
@@ -66,21 +74,38 @@ def format_boolean_in_environment(instance):
return True
def validate_service_names(config):
for service_name in config.keys():
def validate_top_level_service_objects(config_file):
"""Perform some high level validation of the service name and value.
This validation must happen before interpolation, which must happen
before the rest of validation, which is why it's separate from the
rest of the service validation.
"""
for service_name, service_dict in config_file.config.items():
if not isinstance(service_name, six.string_types):
raise ConfigurationError(
"Service name: {} needs to be a string, eg '{}'".format(
"In file '{}' service name: {} needs to be a string, eg '{}'".format(
config_file.filename,
service_name,
service_name))
if not isinstance(service_dict, dict):
raise ConfigurationError(
"In file '{}' service '{}' doesn\'t have any configuration options. "
"All top level keys in your docker-compose.yml must map "
"to a dictionary of configuration options.".format(
config_file.filename,
service_name))
def validate_top_level_object(config):
if not isinstance(config, dict):
def validate_top_level_object(config_file):
if not isinstance(config_file.config, dict):
raise ConfigurationError(
"Top level object needs to be a dictionary. Check your .yml file "
"that you have defined a service at the top level.")
validate_service_names(config)
"Top level object in '{}' needs to be an object not '{}'. Check "
"that you have defined a service at the top level.".format(
config_file.filename,
type(config_file.config)))
validate_top_level_service_objects(config_file)
def validate_extends_file_path(service_name, extends_options, filename):
@@ -96,14 +121,6 @@ def validate_extends_file_path(service_name, extends_options, filename):
)
def validate_extended_service_exists(extended_service_name, full_extended_config, extended_config_path):
if extended_service_name not in full_extended_config:
msg = (
"Cannot extend service '%s' in %s: Service not found"
) % (extended_service_name, extended_config_path)
raise ConfigurationError(msg)
def get_unsupported_config_msg(service_name, error_key):
msg = "Unsupported config option for '{}' service: '{}'".format(service_name, error_key)
if error_key in DOCKER_CONFIG_HINTS:
@@ -117,189 +134,175 @@ def anglicize_validator(validator):
return 'a ' + validator
def process_errors(errors, service_name=None):
def handle_error_for_schema_with_id(error, service_name):
schema_id = error.schema['id']
if schema_id == 'fields_schema.json' and error.validator == 'additionalProperties':
return "Invalid service name '{}' - only {} characters are allowed".format(
# The service_name is the key to the json object
list(error.instance)[0],
VALID_NAME_CHARS)
if schema_id == '#/definitions/constraints':
if 'image' in error.instance and 'build' in error.instance:
return (
"Service '{}' has both an image and build path specified. "
"A service can either be built to image or use an existing "
"image, not both.".format(service_name))
if 'image' not in error.instance and 'build' not in error.instance:
return (
"Service '{}' has neither an image nor a build path "
"specified. Exactly one must be provided.".format(service_name))
if 'image' in error.instance and 'dockerfile' in error.instance:
return (
"Service '{}' has both an image and alternate Dockerfile. "
"A service can either be built to image or use an existing "
"image, not both.".format(service_name))
if schema_id == '#/definitions/service':
if error.validator == 'additionalProperties':
invalid_config_key = parse_key_from_error_msg(error)
return get_unsupported_config_msg(service_name, invalid_config_key)
def handle_generic_service_error(error, service_name):
config_key = " ".join("'%s'" % k for k in error.path)
msg_format = None
error_msg = error.message
if error.validator == 'oneOf':
msg_format = "Service '{}' configuration key {} {}"
error_msg = _parse_oneof_validator(error)
elif error.validator == 'type':
msg_format = ("Service '{}' configuration key {} contains an invalid "
"type, it should be {}")
error_msg = _parse_valid_types_from_validator(error.validator_value)
# TODO: no test case for this branch, there are no config options
# which exercise this branch
elif error.validator == 'required':
msg_format = "Service '{}' configuration key '{}' is invalid, {}"
elif error.validator == 'dependencies':
msg_format = "Service '{}' configuration key '{}' is invalid: {}"
config_key = list(error.validator_value.keys())[0]
required_keys = ",".join(error.validator_value[config_key])
error_msg = "when defining '{}' you must set '{}' as well".format(
config_key,
required_keys)
elif error.cause:
error_msg = six.text_type(error.cause)
msg_format = "Service '{}' configuration key {} is invalid: {}"
elif error.path:
msg_format = "Service '{}' configuration key {} value {}"
if msg_format:
return msg_format.format(service_name, config_key, error_msg)
return error.message
def parse_key_from_error_msg(error):
return error.message.split("'")[1]
def _parse_valid_types_from_validator(validator):
"""A validator value can be either an array of valid types or a string of
a valid type. Parse the valid types and prefix with the correct article.
"""
if not isinstance(validator, list):
return anglicize_validator(validator)
if len(validator) == 1:
return anglicize_validator(validator[0])
return "{}, or {}".format(
", ".join([anglicize_validator(validator[0])] + validator[1:-1]),
anglicize_validator(validator[-1]))
def _parse_oneof_validator(error):
"""oneOf has multiple schemas, so we need to reason about which schema, sub
schema or constraint the validation is failing on.
Inspecting the context value of a ValidationError gives us information about
which sub schema failed and which kind of error it is.
"""
types = []
for context in error.context:
if context.validator == 'required':
return context.message
if context.validator == 'additionalProperties':
invalid_config_key = parse_key_from_error_msg(context)
return "contains unsupported option: '{}'".format(invalid_config_key)
if context.path:
invalid_config_key = " ".join(
"'{}' ".format(fragment) for fragment in context.path
if isinstance(fragment, six.string_types)
)
return "{}contains {}, which is an invalid type, it should be {}".format(
invalid_config_key,
context.instance,
_parse_valid_types_from_validator(context.validator_value))
if context.validator == 'uniqueItems':
return "contains non unique items, please remove duplicates from {}".format(
context.instance)
if context.validator == 'type':
types.append(context.validator_value)
valid_types = _parse_valid_types_from_validator(types)
return "contains an invalid type, it should be {}".format(valid_types)
def process_errors(errors, service_name=None):
"""jsonschema gives us an error tree full of information to explain what has
gone wrong. Process each error and pull out relevant information and re-write
helpful error messages that are relevant.
"""
def _parse_key_from_error_msg(error):
return error.message.split("'")[1]
def format_error_message(error, service_name):
if not service_name and error.path:
# field_schema errors will have service name on the path
service_name = error.path.popleft()
def _clean_error_message(message):
return message.replace("u'", "'")
if 'id' in error.schema:
error_msg = handle_error_for_schema_with_id(error, service_name)
if error_msg:
return error_msg
def _parse_valid_types_from_validator(validator):
"""
A validator value can be either an array of valid types or a string of
a valid type. Parse the valid types and prefix with the correct article.
"""
if isinstance(validator, list):
if len(validator) >= 2:
first_type = anglicize_validator(validator[0])
last_type = anglicize_validator(validator[-1])
types_from_validator = ", ".join([first_type] + validator[1:-1])
return handle_generic_service_error(error, service_name)
msg = "{} or {}".format(
types_from_validator,
last_type
)
else:
msg = "{}".format(anglicize_validator(validator[0]))
else:
msg = "{}".format(anglicize_validator(validator))
return msg
def _parse_oneof_validator(error):
"""
oneOf has multiple schemas, so we need to reason about which schema, sub
schema or constraint the validation is failing on.
Inspecting the context value of a ValidationError gives us information about
which sub schema failed and which kind of error it is.
"""
required = [context for context in error.context if context.validator == 'required']
if required:
return required[0].message
additionalProperties = [context for context in error.context if context.validator == 'additionalProperties']
if additionalProperties:
invalid_config_key = _parse_key_from_error_msg(additionalProperties[0])
return "contains unsupported option: '{}'".format(invalid_config_key)
constraint = [context for context in error.context if len(context.path) > 0]
if constraint:
valid_types = _parse_valid_types_from_validator(constraint[0].validator_value)
invalid_config_key = "".join(
"'{}' ".format(fragment) for fragment in constraint[0].path
if isinstance(fragment, six.string_types)
)
msg = "{}contains {}, which is an invalid type, it should be {}".format(
invalid_config_key,
constraint[0].instance,
valid_types
)
return msg
uniqueness = [context for context in error.context if context.validator == 'uniqueItems']
if uniqueness:
msg = "contains non unique items, please remove duplicates from {}".format(
uniqueness[0].instance
)
return msg
types = [context.validator_value for context in error.context if context.validator == 'type']
valid_types = _parse_valid_types_from_validator(types)
msg = "contains an invalid type, it should be {}".format(valid_types)
return msg
root_msgs = []
invalid_keys = []
required = []
type_errors = []
other_errors = []
for error in errors:
# handle root level errors
if len(error.path) == 0 and not error.instance.get('name'):
if error.validator == 'type':
msg = "Top level object needs to be a dictionary. Check your .yml file that you have defined a service at the top level."
root_msgs.append(msg)
elif error.validator == 'additionalProperties':
invalid_service_name = _parse_key_from_error_msg(error)
msg = "Invalid service name '{}' - only {} characters are allowed".format(invalid_service_name, VALID_NAME_CHARS)
root_msgs.append(msg)
else:
root_msgs.append(_clean_error_message(error.message))
else:
if not service_name:
# field_schema errors will have service name on the path
service_name = error.path[0]
error.path.popleft()
else:
# service_schema errors have the service name passed in, as that
# is not available on error.path or necessarily error.instance
service_name = service_name
if error.validator == 'additionalProperties':
invalid_config_key = _parse_key_from_error_msg(error)
invalid_keys.append(get_unsupported_config_msg(service_name, invalid_config_key))
elif error.validator == 'anyOf':
if 'image' in error.instance and 'build' in error.instance:
required.append(
"Service '{}' has both an image and build path specified. "
"A service can either be built to image or use an existing "
"image, not both.".format(service_name))
elif 'image' not in error.instance and 'build' not in error.instance:
required.append(
"Service '{}' has neither an image nor a build path "
"specified. Exactly one must be provided.".format(service_name))
elif 'image' in error.instance and 'dockerfile' in error.instance:
required.append(
"Service '{}' has both an image and alternate Dockerfile. "
"A service can either be built to image or use an existing "
"image, not both.".format(service_name))
else:
required.append(_clean_error_message(error.message))
elif error.validator == 'oneOf':
config_key = error.path[0]
msg = _parse_oneof_validator(error)
type_errors.append("Service '{}' configuration key '{}' {}".format(
service_name, config_key, msg)
)
elif error.validator == 'type':
msg = _parse_valid_types_from_validator(error.validator_value)
if len(error.path) > 0:
config_key = " ".join(["'%s'" % k for k in error.path])
type_errors.append(
"Service '{}' configuration key {} contains an invalid "
"type, it should be {}".format(
service_name,
config_key,
msg))
else:
root_msgs.append(
"Service '{}' doesn\'t have any configuration options. "
"All top level keys in your docker-compose.yml must map "
"to a dictionary of configuration options.'".format(service_name))
elif error.validator == 'required':
config_key = error.path[0]
required.append(
"Service '{}' option '{}' is invalid, {}".format(
service_name,
config_key,
_clean_error_message(error.message)))
elif error.validator == 'dependencies':
dependency_key = list(error.validator_value.keys())[0]
required_keys = ",".join(error.validator_value[dependency_key])
required.append("Invalid '{}' configuration for '{}' service: when defining '{}' you must set '{}' as well".format(
dependency_key, service_name, dependency_key, required_keys))
else:
config_key = " ".join(["'%s'" % k for k in error.path])
err_msg = "Service '{}' configuration key {} value {}".format(service_name, config_key, error.message)
other_errors.append(err_msg)
return "\n".join(root_msgs + invalid_keys + required + type_errors + other_errors)
return '\n'.join(format_error_message(error, service_name) for error in errors)
def validate_against_fields_schema(config):
schema_filename = "fields_schema.json"
format_checkers = ["ports", "environment"]
return _validate_against_schema(config, schema_filename, format_checkers)
def validate_against_fields_schema(config, filename):
_validate_against_schema(
config,
"fields_schema.json",
format_checker=["ports", "expose", "bool-value-in-mapping"],
filename=filename)
def validate_against_service_schema(config, service_name):
schema_filename = "service_schema.json"
format_checkers = ["ports"]
return _validate_against_schema(config, schema_filename, format_checkers, service_name)
_validate_against_schema(
config,
"service_schema.json",
format_checker=["ports"],
service_name=service_name)
def _validate_against_schema(config, schema_filename, format_checker=[], service_name=None):
def _validate_against_schema(
config,
schema_filename,
format_checker=(),
service_name=None,
filename=None):
config_source_dir = os.path.dirname(os.path.abspath(__file__))
if sys.platform == "win32":
@@ -315,9 +318,17 @@ def _validate_against_schema(config, schema_filename, format_checker=[], service
schema = json.load(schema_fh)
resolver = RefResolver(resolver_full_path, schema)
validation_output = Draft4Validator(schema, resolver=resolver, format_checker=FormatChecker(format_checker))
validation_output = Draft4Validator(
schema,
resolver=resolver,
format_checker=FormatChecker(format_checker))
errors = [error for error in sorted(validation_output.iter_errors(config), key=str)]
if errors:
error_msg = process_errors(errors, service_name)
raise ConfigurationError("Validation failed, reason(s):\n{}".format(error_msg))
if not errors:
return
error_msg = process_errors(errors, service_name)
file_msg = " in file '{}'".format(filename) if filename else ''
raise ConfigurationError("Validation failed{}, reason(s):\n{}".format(
file_msg,
error_msg))
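
The moving parts here (a custom format checker whose `raises` is an exception class, surfaced later through `error.cause`) can be sketched in isolation. This is a minimal sketch, not Compose's actual schema; the `expose-demo` format name is made up to avoid clashing with the real registration:

```python
import re

from jsonschema import Draft4Validator, FormatChecker, ValidationError

@FormatChecker.cls_checks(format='expose-demo', raises=ValidationError)
def check_expose_demo(instance):
    # Raising an exception listed in `raises` marks the value as invalid
    # and attaches the exception to the resulting error as error.cause.
    if isinstance(instance, str) and not re.match(r'^\d+(/[a-zA-Z]+)?$', instance):
        raise ValidationError("should be of the format 'PORT[/PROTOCOL]'")
    return True

schema = {'type': 'string', 'format': 'expose-demo'}
validator = Draft4Validator(
    schema, format_checker=FormatChecker(['expose-demo']))
for error in validator.iter_errors('80/udp/x'):
    print(error.message, '|', error.cause)
```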

View File

@@ -14,26 +14,34 @@ def stream_output(output, stream):
for event in utils.json_stream(output):
all_events.append(event)
is_progress_event = 'progress' in event or 'progressDetail' in event
if 'progress' in event or 'progressDetail' in event:
image_id = event.get('id')
if not image_id:
continue
if not is_progress_event:
print_output_event(event, stream, is_terminal)
stream.flush()
continue
if image_id in lines:
diff = len(lines) - lines[image_id]
else:
lines[image_id] = len(lines)
stream.write("\n")
diff = 0
if not is_terminal:
continue
if is_terminal:
# move cursor up `diff` rows
stream.write("%c[%dA" % (27, diff))
# if it's a progress event and we have a terminal, then display the progress bars
image_id = event.get('id')
if not image_id:
continue
if image_id in lines:
diff = len(lines) - lines[image_id]
else:
lines[image_id] = len(lines)
stream.write("\n")
diff = 0
# move cursor up `diff` rows
stream.write("%c[%dA" % (27, diff))
print_output_event(event, stream, is_terminal)
if 'id' in event and is_terminal:
if 'id' in event:
# move cursor back down
stream.write("%c[%dB" % (27, diff))

View File

@@ -8,7 +8,7 @@ from docker.errors import APIError
from docker.errors import NotFound
from .config import ConfigurationError
from .config import get_service_name_from_net
from .config.sort_services import get_service_name_from_net
from .const import DEFAULT_TIMEOUT
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
@@ -18,62 +18,14 @@ from .legacy import check_for_legacy_containers
from .service import ContainerNet
from .service import ConvergenceStrategy
from .service import Net
from .service import parse_volume_from_spec
from .service import Service
from .service import ServiceNet
from .service import VolumeFromSpec
from .utils import parallel_execute
log = logging.getLogger(__name__)
def sort_service_dicts(services):
# Topological sort (Cormen/Tarjan algorithm).
unmarked = services[:]
temporary_marked = set()
sorted_services = []
def get_service_names(links):
return [link.split(':')[0] for link in links]
def get_service_names_from_volumes_from(volumes_from):
return [
parse_volume_from_spec(volume_from).source
for volume_from in volumes_from
]
def get_service_dependents(service_dict, services):
name = service_dict['name']
return [
service for service in services
if (name in get_service_names(service.get('links', [])) or
name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
name == get_service_name_from_net(service.get('net')))
]
def visit(n):
if n['name'] in temporary_marked:
if n['name'] in get_service_names(n.get('links', [])):
raise DependencyError('A service can not link to itself: %s' % n['name'])
if n['name'] in n.get('volumes_from', []):
raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
else:
raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
if n in unmarked:
temporary_marked.add(n['name'])
for m in get_service_dependents(n, services):
visit(m)
temporary_marked.remove(n['name'])
unmarked.remove(n)
sorted_services.insert(0, n)
while unmarked:
visit(unmarked[-1])
return sorted_services
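
`sort_service_dicts` (moved to the `compose.config` package in this changeset) is a depth-first topological sort; a condensed, self-contained sketch with the dependency lookup passed in as a parameter:

```python
# Sketch only: `dependents_of` stands in for get_service_dependents above.
def sort_services_sketch(services, dependents_of):
    unmarked = services[:]
    temporary = set()
    ordered = []

    def visit(svc):
        if svc['name'] in temporary:
            raise ValueError('Circular dependency involving %s' % svc['name'])
        if svc in unmarked:
            temporary.add(svc['name'])
            for dependent in dependents_of(svc, services):
                visit(dependent)
            temporary.remove(svc['name'])
            unmarked.remove(svc)
            ordered.insert(0, svc)

    while unmarked:
        visit(unmarked[-1])
    return ordered

services = [{'name': 'web', 'links': ['db:db']}, {'name': 'db'}]
def dependents_of(svc, all_services):
    return [s for s in all_services
            if svc['name'] in [l.split(':')[0] for l in s.get('links', [])]]
print([s['name'] for s in sort_services_sketch(services, dependents_of)])
# -> ['db', 'web']
```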
class Project(object):
"""
A collection of services.
@@ -101,7 +53,7 @@ class Project(object):
if use_networking:
remove_links(service_dicts)
for service_dict in sort_service_dicts(service_dicts):
for service_dict in service_dicts:
links = project.get_links(service_dict)
volumes_from = project.get_volumes_from(service_dict)
net = project.get_net(service_dict)
@@ -192,16 +144,15 @@ class Project(object):
def get_volumes_from(self, service_dict):
volumes_from = []
if 'volumes_from' in service_dict:
for volume_from_config in service_dict.get('volumes_from', []):
volume_from_spec = parse_volume_from_spec(volume_from_config)
for volume_from_spec in service_dict.get('volumes_from', []):
# Get service
try:
service_name = self.get_service(volume_from_spec.source)
volume_from_spec = VolumeFromSpec(service_name, volume_from_spec.mode)
service = self.get_service(volume_from_spec.source)
volume_from_spec = volume_from_spec._replace(source=service)
except NoSuchService:
try:
container_name = Container.from_id(self.client, volume_from_spec.source)
volume_from_spec = VolumeFromSpec(container_name, volume_from_spec.mode)
container = Container.from_id(self.client, volume_from_spec.source)
volume_from_spec = volume_from_spec._replace(source=container)
except APIError:
raise ConfigurationError(
'Service "%s" mounts volumes from "%s", which is '
@@ -278,10 +229,10 @@ class Project(object):
for service in self.get_services(service_names):
service.restart(**options)
def build(self, service_names=None, no_cache=False, pull=False):
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False):
for service in self.get_services(service_names):
if service.can_be_built():
service.build(no_cache, pull)
service.build(no_cache, pull, force_rm)
else:
log.info('%s uses an image, skipping' % service.name)
@@ -300,7 +251,7 @@ class Project(object):
plans = self._get_convergence_plans(services, strategy)
if self.use_networking:
if self.use_networking and self.uses_default_network():
self.ensure_network_exists()
return [
@@ -322,7 +273,7 @@ class Project(object):
name
for name in service.get_dependency_names()
if name in plans
and plans[name].action == 'recreate'
and plans[name].action in ('recreate', 'create')
]
if updated_dependencies and strategy.allows_recreate:
@@ -383,7 +334,10 @@ class Project(object):
def remove_network(self):
network = self.get_network()
if network:
self.client.remove_network(network['id'])
self.client.remove_network(network['Id'])
def uses_default_network(self):
return any(service.net.mode == self.name for service in self.services)
def _inject_deps(self, acc, service):
dep_names = service.get_dependency_names()
@@ -427,7 +381,3 @@ class NoSuchService(Exception):
def __str__(self):
return self.msg
class DependencyError(ConfigurationError):
pass

View File

@@ -2,7 +2,6 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import logging
import os
import re
import sys
from collections import namedtuple
@@ -18,9 +17,8 @@ from docker.utils.ports import split_port
from . import __version__
from .config import DOCKER_CONFIG_KEYS
from .config import merge_environment
from .config.validation import VALID_NAME_CHARS
from .config.types import VolumeSpec
from .const import DEFAULT_TIMEOUT
from .const import IS_WINDOWS_PLATFORM
from .const import LABEL_CONFIG_HASH
from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
@@ -68,10 +66,6 @@ class BuildError(Exception):
self.reason = reason
class ConfigError(ValueError):
pass
class NeedsBuildError(Exception):
def __init__(self, service):
self.service = service
@@ -81,12 +75,6 @@ class NoSuchImageError(Exception):
pass
VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')
VolumeFromSpec = namedtuple('VolumeFromSpec', 'source mode')
ServiceName = namedtuple('ServiceName', 'project service number')
@@ -119,9 +107,6 @@ class Service(object):
net=None,
**options
):
if not re.match('^%s+$' % VALID_NAME_CHARS, project):
raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
self.name = name
self.client = client
self.project = project
@@ -185,7 +170,7 @@ class Service(object):
c.kill(**options)
def restart(self, **options):
for c in self.containers():
for c in self.containers(stopped=True):
log.info("Restarting %s" % c.name)
c.restart(**options)
@@ -300,9 +285,7 @@ class Service(object):
Create a container for this service. If the image doesn't exist, attempt to pull
it.
"""
self.ensure_image_exists(
do_build=do_build,
)
self.ensure_image_exists(do_build=do_build)
container_options = self._get_container_create_options(
override_options,
@@ -316,9 +299,7 @@ class Service(object):
return Container.create(self.client, **container_options)
def ensure_image_exists(self,
do_build=True):
def ensure_image_exists(self, do_build=True):
try:
self.image()
return
@@ -410,7 +391,7 @@ class Service(object):
if should_attach_logs:
container.attach_log_stream()
self.start_container(container)
container.start()
return [container]
@@ -418,6 +399,7 @@ class Service(object):
return [
self.recreate_container(
container,
do_build=do_build,
timeout=timeout,
attach_logs=should_attach_logs
)
@@ -439,10 +421,12 @@ class Service(object):
else:
raise Exception("Invalid action: {}".format(action))
def recreate_container(self,
container,
timeout=DEFAULT_TIMEOUT,
attach_logs=False):
def recreate_container(
self,
container,
do_build=False,
timeout=DEFAULT_TIMEOUT,
attach_logs=False):
"""Recreate a container.
The original container is renamed to a temporary name so that data
@@ -454,28 +438,23 @@ class Service(object):
container.stop(timeout=timeout)
container.rename_to_tmp_name()
new_container = self.create_container(
do_build=False,
do_build=do_build,
previous_container=container,
number=container.labels.get(LABEL_CONTAINER_NUMBER),
quiet=True,
)
if attach_logs:
new_container.attach_log_stream()
self.start_container(new_container)
new_container.start()
container.remove()
return new_container
def start_container_if_stopped(self, container, attach_logs=False):
if container.is_running:
return container
else:
if not container.is_running:
log.info("Starting %s" % container.name)
if attach_logs:
container.attach_log_stream()
return self.start_container(container)
def start_container(self, container):
container.start()
container.start()
return container
def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT):
@@ -508,7 +487,9 @@ class Service(object):
'image_id': self.image()['Id'],
'links': self.get_link_names(),
'net': self.net.id,
'volumes_from': self.get_volumes_from_names(),
'volumes_from': [
(v.source.name, v.mode) for v in self.volumes_from if isinstance(v.source, Service)
],
}
def get_dependency_names(self):
@@ -530,7 +511,7 @@ class Service(object):
# TODO: Implement issue #652 here
return build_container_name(self.project, self.name, number, one_off)
# TODO: this would benefit from github.com/docker/docker/pull/11943
# TODO: this would benefit from github.com/docker/docker/pull/14699
# to remove the need to inspect every container
def _next_container_number(self, one_off=False):
containers = filter(None, [
@@ -605,9 +586,6 @@ class Service(object):
container_options['hostname'] = parts[0]
container_options['domainname'] = parts[2]
if 'hostname' not in container_options and self.use_networking:
container_options['hostname'] = self.name
if 'ports' in container_options or 'expose' in self.options:
ports = []
all_ports = container_options.get('ports', []) + self.options.get('expose', [])
@@ -626,8 +604,7 @@ class Service(object):
if 'volumes' in container_options:
container_options['volumes'] = dict(
(parse_volume_spec(v).internal, {})
for v in container_options['volumes'])
(v.internal, {}) for v in container_options['volumes'])
container_options['environment'] = merge_environment(
self.options.get('environment'),
@@ -656,59 +633,37 @@ class Service(object):
def _get_container_host_config(self, override_options, one_off=False):
options = dict(self.options, **override_options)
port_bindings = build_port_bindings(options.get('ports') or [])
privileged = options.get('privileged', False)
cap_add = options.get('cap_add', None)
cap_drop = options.get('cap_drop', None)
log_config = LogConfig(
type=options.get('log_driver', ""),
config=options.get('log_opt', None)
)
pid = options.get('pid', None)
security_opt = options.get('security_opt', None)
dns = options.get('dns', None)
if isinstance(dns, six.string_types):
dns = [dns]
dns_search = options.get('dns_search', None)
if isinstance(dns_search, six.string_types):
dns_search = [dns_search]
restart = parse_restart_spec(options.get('restart', None))
extra_hosts = build_extra_hosts(options.get('extra_hosts', None))
read_only = options.get('read_only', None)
devices = options.get('devices', None)
cgroup_parent = options.get('cgroup_parent', None)
return self.client.create_host_config(
links=self._get_links(link_to_self=one_off),
port_bindings=port_bindings,
port_bindings=build_port_bindings(options.get('ports') or []),
binds=options.get('binds'),
volumes_from=self._get_volumes_from(),
privileged=privileged,
privileged=options.get('privileged', False),
network_mode=self.net.mode,
devices=devices,
dns=dns,
dns_search=dns_search,
restart_policy=restart,
cap_add=cap_add,
cap_drop=cap_drop,
devices=options.get('devices'),
dns=options.get('dns'),
dns_search=options.get('dns_search'),
restart_policy=options.get('restart'),
cap_add=options.get('cap_add'),
cap_drop=options.get('cap_drop'),
mem_limit=options.get('mem_limit'),
memswap_limit=options.get('memswap_limit'),
ulimits=build_ulimits(options.get('ulimits')),
log_config=log_config,
extra_hosts=extra_hosts,
read_only=read_only,
pid_mode=pid,
security_opt=security_opt,
extra_hosts=options.get('extra_hosts'),
read_only=options.get('read_only'),
pid_mode=options.get('pid'),
security_opt=options.get('security_opt'),
ipc_mode=options.get('ipc'),
cgroup_parent=cgroup_parent
cgroup_parent=options.get('cgroup_parent'),
)
def build(self, no_cache=False, pull=False):
def build(self, no_cache=False, pull=False, force_rm=False):
log.info('Building %s' % self.name)
path = self.options['build']
@@ -722,6 +677,7 @@ class Service(object):
tag=self.image_name,
stream=True,
rm=True,
forcerm=force_rm,
pull=pull,
nocache=no_cache,
dockerfile=self.options.get('dockerfile', None),
@@ -771,10 +727,28 @@ class Service(object):
return self.options.get('container_name')
def specifies_host_port(self):
for port in self.options.get('ports', []):
if ':' in str(port):
def has_host_port(binding):
_, external_bindings = split_port(binding)
# there are no external bindings
if external_bindings is None:
return False
# we only need to check the first binding from the range
external_binding = external_bindings[0]
# non-tuple binding means there is a host port specified
if not isinstance(external_binding, tuple):
return True
return False
# extract actual host port from tuple of (host_ip, host_port)
_, host_port = external_binding
if host_port is not None:
return True
return False
return any(has_host_port(binding) for binding in self.options.get('ports', []))
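
`split_port` comes from docker-py (`docker.utils.ports`); the shapes below are illustrative of the cases `has_host_port` distinguishes and may vary slightly between docker-py versions:

```python
from docker.utils.ports import split_port

split_port('8000')             # (['8000'], None)                  -> no host port
split_port('8000:8000')        # (['8000'], ['8000'])              -> host port
split_port('127.0.0.1::8000')  # (['8000'], [('127.0.0.1', None)]) -> no host port
```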
def pull(self, ignore_pull_failures=False):
if 'image' not in self.options:
@@ -895,18 +869,20 @@ def parse_repository_tag(repo_path):
# Volumes
def merge_volume_bindings(volumes_option, previous_container):
def merge_volume_bindings(volumes, previous_container):
"""Return a list of volume bindings for a container. Container data volumes
are replaced by those from the previous container.
"""
volume_bindings = dict(
build_volume_binding(parse_volume_spec(volume))
for volume in volumes_option or []
if ':' in volume)
build_volume_binding(volume)
for volume in volumes
if volume.external)
if previous_container:
data_volumes = get_container_data_volumes(previous_container, volumes)
warn_on_masked_volume(volumes, data_volumes, previous_container.service)
volume_bindings.update(
get_container_data_volumes(previous_container, volumes_option))
build_volume_binding(volume) for volume in data_volumes)
return list(volume_bindings.values())
@@ -916,13 +892,14 @@ def get_container_data_volumes(container, volumes_option):
a mapping of volume bindings for those volumes.
"""
volumes = []
volumes_option = volumes_option or []
container_volumes = container.get('Volumes') or {}
image_volumes = container.image_config['ContainerConfig'].get('Volumes') or {}
image_volumes = [
VolumeSpec.parse(volume)
for volume in
container.image_config['ContainerConfig'].get('Volumes') or {}
]
for volume in set(volumes_option + list(image_volumes)):
volume = parse_volume_spec(volume)
for volume in set(volumes_option + image_volumes):
# No need to preserve host volumes
if volume.external:
continue
@@ -934,65 +911,36 @@ def get_container_data_volumes(container, volumes_option):
# Copy existing volume from old container
volume = volume._replace(external=volume_path)
volumes.append(build_volume_binding(volume))
volumes.append(volume)
return dict(volumes)
return volumes
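
Both this function and `Project.get_volumes_from` lean on `namedtuple._replace`, which returns a copy with one field swapped while leaving the original intact; a minimal illustration:

```python
from collections import namedtuple

VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')

spec = VolumeSpec(None, '/var/db', 'rw')
copied = spec._replace(external='/mnt/volumes/abc123')
print(spec.external, copied.external)  # None /mnt/volumes/abc123
```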
def warn_on_masked_volume(volumes_option, container_volumes, service):
container_volumes = dict(
(volume.internal, volume.external)
for volume in container_volumes)
for volume in volumes_option:
if (
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
):
log.warn((
"Service \"{service}\" is using volume \"{volume}\" from the "
"previous container. Host mapping \"{host_path}\" has no effect. "
"Remove the existing containers (with `docker-compose rm {service}`) "
"to use the host volume mapping."
).format(
service=service,
volume=volume.internal,
host_path=volume.external))
def build_volume_binding(volume_spec):
return volume_spec.internal, "{}:{}:{}".format(*volume_spec)
def normalize_paths_for_engine(external_path, internal_path):
"""Windows paths, c:\my\path\shiny, need to be changed to be compatible with
the Engine. Volume paths are expected to be linux style /c/my/path/shiny/
"""
if not IS_WINDOWS_PLATFORM:
return external_path, internal_path
if external_path:
drive, tail = os.path.splitdrive(external_path)
if drive:
external_path = '/' + drive.lower().rstrip(':') + tail
external_path = external_path.replace('\\', '/')
return external_path, internal_path.replace('\\', '/')
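
The host-side transformation can be tried on any platform by using `ntpath` directly; a sketch (the sample path is illustrative):

```python
import ntpath  # Windows path semantics, available on every platform

def to_engine_path(path):
    drive, tail = ntpath.splitdrive(path)
    if drive:
        path = '/' + drive.lower().rstrip(':') + tail
    return path.replace('\\', '/')

print(to_engine_path('C:\\my\\path\\shiny'))  # -> /c/my/path/shiny
```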
def parse_volume_spec(volume_config):
"""
Parse a volume_config path and split it into external:internal[:mode]
parts to be returned as a valid VolumeSpec.
"""
if IS_WINDOWS_PLATFORM:
# relative paths in windows expand to include the drive, eg C:\
# so we join the first 2 parts back together to count as one
drive, tail = os.path.splitdrive(volume_config)
parts = tail.split(":")
if drive:
parts[0] = drive + parts[0]
else:
parts = volume_config.split(':')
if len(parts) > 3:
raise ConfigError("Volume %s has incorrect format, should be "
"external:internal[:mode]" % volume_config)
if len(parts) == 1:
external, internal = normalize_paths_for_engine(None, os.path.normpath(parts[0]))
else:
external, internal = normalize_paths_for_engine(os.path.normpath(parts[0]), os.path.normpath(parts[1]))
mode = 'rw'
if len(parts) == 3:
mode = parts[2]
return VolumeSpec(external, internal, mode)
def build_volume_from(volume_from_spec):
"""
volume_from can be either a service or a container. We want to return the
@@ -1009,21 +957,6 @@ def build_volume_from(volume_from_spec):
return ["{}:{}".format(volume_from_spec.source.id, volume_from_spec.mode)]
def parse_volume_from_spec(volume_from_config):
parts = volume_from_config.split(':')
if len(parts) > 2:
raise ConfigError("Volume %s has incorrect format, should be "
"external:internal[:mode]" % volume_from_config)
if len(parts) == 1:
source = parts[0]
mode = 'rw'
else:
source, mode = parts
return VolumeFromSpec(source, mode)
# Labels
@@ -1040,48 +973,19 @@ def build_container_labels(label_options, service_labels, number, config_hash):
return labels
# Restart policy
# Ulimits
def parse_restart_spec(restart_config):
if not restart_config:
def build_ulimits(ulimit_config):
if not ulimit_config:
return None
parts = restart_config.split(':')
if len(parts) > 2:
raise ConfigError("Restart %s has incorrect format, should be "
"mode[:max_retry]" % restart_config)
if len(parts) == 2:
name, max_retry_count = parts
else:
name, = parts
max_retry_count = 0
ulimits = []
for limit_name, soft_hard_values in six.iteritems(ulimit_config):
if isinstance(soft_hard_values, six.integer_types):
ulimits.append({'name': limit_name, 'soft': soft_hard_values, 'hard': soft_hard_values})
elif isinstance(soft_hard_values, dict):
ulimit_dict = {'name': limit_name}
ulimit_dict.update(soft_hard_values)
ulimits.append(ulimit_dict)
return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
# Extra hosts
def build_extra_hosts(extra_hosts_config):
if not extra_hosts_config:
return {}
if isinstance(extra_hosts_config, list):
extra_hosts_dict = {}
for extra_hosts_line in extra_hosts_config:
if not isinstance(extra_hosts_line, six.string_types):
raise ConfigError(
"extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
extra_hosts_config
)
host, ip = extra_hosts_line.split(':')
extra_hosts_dict.update({host.strip(): ip.strip()})
extra_hosts_config = extra_hosts_dict
if isinstance(extra_hosts_config, dict):
return extra_hosts_config
raise ConfigError(
"extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
extra_hosts_config
)
return ulimits
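
For the two YAML forms documented later in this changeset (a single integer, or a soft/hard mapping), `build_ulimits` would produce shapes like the following; list order follows dict iteration and is illustrative:

```python
ulimit_config = {'nproc': 65535, 'nofile': {'soft': 20000, 'hard': 40000}}
expected = [
    {'name': 'nproc', 'soft': 65535, 'hard': 65535},
    {'name': 'nofile', 'soft': 20000, 'hard': 40000},
]
```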

View File

@@ -95,14 +95,14 @@ def stream_as_text(stream):
"""
for data in stream:
if not isinstance(data, six.text_type):
data = data.decode('utf-8')
data = data.decode('utf-8', 'replace')
yield data
def line_splitter(buffer, separator=u'\n'):
index = buffer.find(six.text_type(separator))
if index == -1:
return None, None
return None
return buffer[:index + 1], buffer[index + 1:]
@@ -120,11 +120,11 @@ def split_buffer(stream, splitter=None, decoder=lambda a: a):
for data in stream_as_text(stream):
buffered += data
while True:
item, rest = splitter(buffered)
if not item:
buffer_split = splitter(buffered)
if buffer_split is None:
break
buffered = rest
item, buffered = buffer_split
yield item
if buffered:
@@ -140,7 +140,7 @@ def json_splitter(buffer):
rest = buffer[json.decoder.WHITESPACE.match(buffer, index).end():]
return obj, rest
except ValueError:
return None, None
return None
def json_stream(stream):
@@ -148,7 +148,7 @@ def json_stream(stream):
This handles streams which are inconsistently buffered (some entries may
be newline delimited, and others are not).
"""
return split_buffer(stream_as_text(stream), json_splitter, json_decoder.decode)
return split_buffer(stream, json_splitter, json_decoder.decode)
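
A hedged sketch of the input `json_stream` is built for: JSON objects split arbitrarily across chunks, some newline-delimited and some not:

```python
chunks = [b'{"status": "Pul', b'ling"}\n{"id": "abc",', b' "progress": "50%"}']
# json_stream(chunks) would yield, in order:
#   {'status': 'Pulling'}
#   {'id': 'abc', 'progress': '50%'}
```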
def write_out_msg(stream, lines, msg_index, msg, status="done"):
@@ -164,7 +164,7 @@ def write_out_msg(stream, lines, msg_index, msg, status="done"):
stream.write("%c[%dA" % (27, diff))
# erase
stream.write("%c[2K\r" % 27)
stream.write("{} {} ... {}\n".format(msg, obj_index, status))
stream.write("{} {} ... {}\r".format(msg, obj_index, status))
# move back down
stream.write("%c[%dB" % (27, diff))
else:

View File

@@ -87,7 +87,7 @@ __docker_compose_services_stopped() {
_docker_compose_build() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --no-cache --pull" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--force-rm --help --no-cache --pull" -- "$cur" ) )
;;
*)
__docker_compose_services_from_build

View File

@@ -192,6 +192,7 @@ __docker-compose_subcommand() {
(build)
_arguments \
$opts_help \
'--force-rm[Always remove intermediate containers.]' \
'--no-cache[Do not use cache when building the image]' \
'--pull[Always attempt to pull a newer version of the image.]' \
'*:services:__docker-compose_services_from_build' && ret=0

View File

@@ -9,18 +9,32 @@ a = Analysis(['bin/docker-compose'],
runtime_hooks=None,
cipher=block_cipher)
pyz = PYZ(a.pure,
cipher=block_cipher)
pyz = PYZ(a.pure, cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[('compose/config/fields_schema.json', 'compose/config/fields_schema.json', 'DATA')],
[('compose/config/service_schema.json', 'compose/config/service_schema.json', 'DATA')],
[
(
'compose/config/fields_schema.json',
'compose/config/fields_schema.json',
'DATA'
),
(
'compose/config/service_schema.json',
'compose/config/service_schema.json',
'DATA'
),
(
'compose/GITSHA',
'compose/GITSHA',
'DATA'
)
],
name='docker-compose',
debug=False,
strip=None,
upx=True,
console=True )
console=True)

View File

@@ -31,15 +31,18 @@ definition.
### build
Path to a directory containing a Dockerfile. When the value supplied is a
relative path, it is interpreted as relative to the location of the yml file
itself. This directory is also the build context that is sent to the Docker daemon.
Either a path to a directory containing a Dockerfile, or a url to a git repository.
When the value supplied is a relative path, it is interpreted as relative to the
location of the Compose file. This directory is also the build context that is
sent to the Docker daemon.
Compose will build and tag it with a generated name, and use that image thereafter.
build: /path/to/build/dir
Using `build` together with `image` is not allowed. Attempting to do so results in an error.
Using `build` together with `image` is not allowed. Attempting to do so results in
an error.
### cap_add, cap_drop
@@ -105,8 +108,10 @@ Custom DNS search domains. Can be a single value or a list.
Alternate Dockerfile.
Compose will use an alternate file to build with.
Compose will use an alternate file to build with. A build path must also be
specified using the `build` key.
build: /path/to/build/dir
dockerfile: Dockerfile-alternate
Using `dockerfile` together with `image` is not allowed. Attempting to do so results in an error.
@@ -331,6 +336,18 @@ Override the default labeling scheme for each container.
- label:user:USER
- label:role:ROLE
### ulimits
Override the default ulimits for a container. You can either specify a single
limit as an integer or soft/hard limits as a mapping.
ulimits:
nproc: 65535
nofile:
soft: 20000
hard: 40000
### volumes, volume\_driver
Mount paths as volumes, optionally specifying a path on the host machine

docs/faq.md Normal file
View File

@@ -0,0 +1,139 @@
<!--[metadata]>
+++
title = "Frequently Asked Questions"
description = "Docker Compose FAQ"
keywords = "documentation, docs, docker, compose, faq"
[menu.main]
parent="smn_workw_compose"
weight=9
+++
<![end-metadata]-->
# Frequently asked questions
If you don't see your question here, feel free to drop by `#docker-compose` on
freenode IRC and ask the community.
## Why do my services take 10 seconds to stop?
Compose stop attempts to stop a container by sending a `SIGTERM`. It then waits
for a [default timeout of 10 seconds](./reference/stop.md). After the timeout,
a `SIGKILL` is sent to the container to forcefully kill it. If you
are waiting for this timeout, it means that your containers aren't shutting down
when they receive the `SIGTERM` signal.
There has already been a lot written about this problem of
[processes handling signals](https://medium.com/@gchudnov/trapping-signals-in-docker-containers-7a57fdda7d86)
in containers.
To fix this problem, try the following:
* Make sure you're using the JSON form of `CMD` and `ENTRYPOINT`
in your Dockerfile.
For example, use `["program", "arg1", "arg2"]`, not `"program arg1 arg2"`.
Using the string form causes Docker to run your process using `bash`, which
doesn't handle signals properly. Compose always uses the JSON form, so don't
worry if you override the command or entrypoint in your Compose file.
* If you are able, modify the application that you're running to
add an explicit signal handler for `SIGTERM`.
* If you can't modify the application, wrap the application in a lightweight init
system (like [s6](http://skarnet.org/software/s6/)) or a signal proxy (like
[dumb-init](https://github.com/Yelp/dumb-init) or
[tini](https://github.com/krallin/tini)). Either of these wrappers takes care of
handling `SIGTERM` properly.
## How do I run multiple copies of a Compose file on the same host?
Compose uses the project name to create unique identifiers for all of a
project's containers and other resources. To run multiple copies of a project,
set a custom project name using the [`-p` command line
option](./reference/docker-compose.md) or the [`COMPOSE_PROJECT_NAME`
environment variable](./reference/overview.md#compose-project-name).
## What's the difference between `up`, `run`, and `start`?
Typically, you want `docker-compose up`. Use `up` to start or restart all the
services defined in a `docker-compose.yml`. In the default "attached"
mode, you'll see all the logs from all the containers. In "detached" mode (`-d`),
Compose exits after starting the containers, but the containers continue to run
in the background.
The `docker-compose run` command is for running "one-off" or "ad hoc" tasks. It
requires the service name you want to run and only starts containers for services
that the running service depends on. Use `run` to run tests or perform
an administrative task such as removing or adding data to a data volume
container. The `run` command acts like `docker run -ti` in that it opens an
interactive terminal to the container and returns an exit status matching the
exit status of the process in the container.
The `docker-compose start` command is useful only to restart containers
that were previously created, but were stopped. It never creates new
containers.
## Can I use json instead of yaml for my Compose file?
Yes. [YAML is a superset of JSON](http://stackoverflow.com/a/1729545/444646), so
any JSON file should be valid YAML. To use a JSON file with Compose,
specify the filename to use, for example:
```bash
docker-compose -f docker-compose.json up
```
## How do I get Compose to wait for my database to be ready before starting my application?
Unfortunately, Compose won't do that for you, but for good reason.
The problem of waiting for a database to be ready is really just a subset of a
much larger problem of distributed systems. In production, your database could
become unavailable or move hosts at any time. The application needs to be
resilient to these types of failures.
To handle this, the application would attempt to re-establish a connection to
the database after a failure. If the application retries the connection,
it should eventually be able to connect to the database.
To wait for the application to be in a good state, you can implement a
healthcheck. A healthcheck makes a request to the application and checks
the response for a success status code. If it is not successful, it waits
for a short period of time and tries again. After some timeout, the check
stops trying and reports a failure.
If you need to run tests against your application, you can start by running a
healthcheck. Once the healthcheck gets a successful response, you can start
running your tests.
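
For example, a minimal polling healthcheck might look like the following sketch (the endpoint, retry count, and delay are hypothetical):

```python
import time
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

def wait_for(url, retries=30, delay=1):
    for _ in range(retries):
        try:
            if urlopen(url).getcode() == 200:
                return True
        except Exception:
            pass  # not up yet; wait and retry
        time.sleep(delay)
    return False

if not wait_for('http://localhost:8000/health'):
    raise SystemExit('service never became healthy')
```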
## Should I include my code with `COPY`/`ADD` or a volume?
You can add your code to the image using the `COPY` or `ADD` directive in a
`Dockerfile`. This is useful if you need to relocate your code along with the
Docker image, for example when you're sending code to another environment
(production, CI, etc).
You should use a `volume` if you want to make changes to your code and see them
reflected immediately, for example when you're developing code and your server
supports hot code reloading or live-reload.
There may be cases where you'll want to use both. You can have the image
include the code using a `COPY`, and use a `volume` in your Compose file to
include the code from the host during development. The volume overrides
the directory contents of the image.
## Where can I find example compose files?
There are [many examples of Compose files on
github](https://github.com/search?q=in%3Apath+docker-compose.yml+extension%3Ayml&type=Code).
## Compose documentation
- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

View File

@@ -59,6 +59,7 @@ Compose has commands for managing the whole lifecycle of your application:
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Frequently asked questions](faq.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

View File

@@ -39,7 +39,7 @@ which the release page specifies, in your terminal.
The following is an example command illustrating the format:
curl -L https://github.com/docker/compose/releases/download/VERSION_NUM/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
curl -L https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
If you have problems installing with `curl`, see
[Alternative Install Options](#alternative-install-options).
@@ -54,7 +54,7 @@ which the release page specifies, in your terminal.
7. Test the installation.
$ docker-compose --version
docker-compose version: 1.5.0
docker-compose version: 1.5.2
## Alternative install options
@@ -70,13 +70,14 @@ to get started.
$ pip install docker-compose
> **Note:** pip version 6.0 or greater is required
### Install as a container
Compose can also be run inside a container, from a small bash script wrapper.
To install compose as a container run:
$ curl -L https://github.com/docker/compose/releases/download/1.5.0/run.sh > /usr/local/bin/docker-compose
$ curl -L https://github.com/docker/compose/releases/download/1.5.2/run.sh > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
## Master builds

View File

@@ -15,6 +15,7 @@ parent = "smn_compose_cli"
Usage: build [options] [SERVICE...]
Options:
--force-rm Always remove intermediate containers.
--no-cache Do not use cache when building the image.
--pull Always attempt to pull a newer version of the image.
```

View File

@@ -87,15 +87,18 @@ relative to the current working directory.
The `-f` flag is optional. If you don't provide this flag on the command line,
Compose traverses the working directory and its subdirectories looking for a
`docker-compose.yml` and a `docker-compose.override.yml` file. You must supply
at least the `docker-compose.yml` file. If both files are present, Compose
combines the two files into a single configuration. The configuration in the
`docker-compose.override.yml` file is applied over and in addition to the values
in the `docker-compose.yml` file.
`docker-compose.yml` and a `docker-compose.override.yml` file. You must
supply at least the `docker-compose.yml` file. If both files are present,
Compose combines the two files into a single configuration. The configuration
in the `docker-compose.override.yml` file is applied over and in addition to
the values in the `docker-compose.yml` file.
See also the `COMPOSE_FILE` [environment variable](overview.md#compose-file).
Each configuration has a project name. If you supply a `-p` flag, you can
specify a project name. If you don't specify the flag, Compose uses the current
directory name.
directory name. See also the `COMPOSE_PROJECT_NAME` [environment variable](
overview.md#compose-project-name)
## Where to go next

View File

@@ -32,11 +32,16 @@ Docker command-line client. If you're using `docker-machine`, then the `eval "$(
Sets the project name. This value is prepended, along with the service name, to the container name on startup. For example, if your project name is `myapp` and it includes two services `db` and `web`, then Compose starts containers named `myapp_db_1` and `myapp_web_1` respectively.
Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME` defaults to the `basename` of the current working directory.
Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME`
defaults to the `basename` of the project directory. See also the `-p`
[command-line option](docker-compose.md).
### COMPOSE\_FILE
Specify the file containing the compose configuration. If not provided, Compose looks for a file named `docker-compose.yml` in the current directory and then each parent directory in succession until a file by that name is found.
Specify the file containing the compose configuration. If not provided,
Compose looks for a file named `docker-compose.yml` in the current directory
and then each parent directory in succession until a file by that name is
found. See also the `-f` [command-line option](docker-compose.md).
### COMPOSE\_API\_VERSION

View File

@@ -1,4 +1,4 @@
PyYAML==3.10
PyYAML==3.11
docker-py==1.5.0
dockerpty==0.3.4
docopt==0.6.1
@@ -6,5 +6,5 @@ enum34==1.0.4
jsonschema==2.5.1
requests==2.7.0
six==1.7.3
texttable==0.8.2
texttable==0.8.4
websocket-client==0.32.0

View File

@@ -10,6 +10,7 @@ fi
TAG=$1
VERSION="$(python setup.py --version)"
./script/write-git-sha
python setup.py sdist
cp dist/docker-compose-$VERSION.tar.gz dist/docker-compose-release.tar.gz
docker build -t docker/compose:$TAG -f Dockerfile.run .

View File

@@ -9,4 +9,5 @@ docker build -t "$TAG" . | tail -n 200
docker run \
--rm --entrypoint="script/build-linux-inner" \
-v $(pwd)/dist:/code/dist \
-v $(pwd)/.git:/code/.git \
"$TAG"

View File

@@ -2,13 +2,14 @@
set -ex
TARGET=dist/docker-compose-Linux-x86_64
TARGET=dist/docker-compose-$(uname -s)-$(uname -m)
VENV=/code/.tox/py27
mkdir -p `pwd`/dist
chmod 777 `pwd`/dist
$VENV/bin/pip install -q -r requirements-build.txt
./script/write-git-sha
su -c "$VENV/bin/pyinstaller docker-compose.spec" user
mv dist/docker-compose $TARGET
$TARGET version

View File

@@ -9,6 +9,7 @@ virtualenv -p /usr/local/bin/python venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-build.txt
venv/bin/pip install --no-deps .
./script/write-git-sha
venv/bin/pyinstaller docker-compose.spec
mv dist/docker-compose dist/docker-compose-Darwin-x86_64
dist/docker-compose-Darwin-x86_64 version

View File

@@ -47,6 +47,8 @@ virtualenv .\venv
.\venv\Scripts\pip install --no-deps .
.\venv\Scripts\pip install --allow-external pyinstaller -r requirements-build.txt
git rev-parse --short HEAD | out-file -encoding ASCII compose\GITSHA
# Build binary
# pyinstaller has lots of warnings, so we need to run with ErrorAction = Continue
$ErrorActionPreference = "Continue"

View File

@@ -57,6 +57,7 @@ docker push docker/compose:$VERSION
echo "Uploading sdist to pypi"
pandoc -f markdown -t rst README.md -o README.rst
sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst
./script/write-git-sha
python setup.py sdist
if [ "$(command -v twine 2> /dev/null)" ]; then
twine upload ./dist/docker-compose-${VERSION}.tar.gz

View File

@@ -15,7 +15,7 @@
set -e
VERSION="1.5.0"
VERSION="1.5.2"
IMAGE="docker/compose:$VERSION"
@@ -26,7 +26,7 @@ fi
if [ -S "$DOCKER_HOST" ]; then
DOCKER_ADDR="-v $DOCKER_HOST:$DOCKER_HOST -e DOCKER_HOST"
else
DOCKER_ADDR="-e DOCKER_HOST"
DOCKER_ADDR="-e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH"
fi

View File

@@ -1,4 +1,6 @@
#!/usr/bin/env python
from __future__ import print_function
import datetime
import os.path
import sys
@@ -6,4 +8,4 @@ import sys
os.environ['DATE'] = str(datetime.date.today())
for line in sys.stdin:
print os.path.expandvars(line),
print(os.path.expandvars(line), end='')

script/write-git-sha Executable file
View File

@@ -0,0 +1,7 @@
#!/bin/bash
#
# Write the current commit sha to the file GITSHA. This file is included in
# packaging so that `docker-compose version` can include the git sha.
#
set -e
git rev-parse --short HEAD > compose/GITSHA

View File

@@ -2,30 +2,94 @@ from __future__ import absolute_import
import os
import shlex
import sys
import signal
import subprocess
import time
from collections import namedtuple
from operator import attrgetter
from six import StringIO
from docker import errors
from .. import mock
from .testcases import DockerClientTestCase
from compose.cli.command import get_project
from compose.cli.docker_client import docker_client
from compose.cli.errors import UserError
from compose.cli.main import TopLevelCommand
from compose.project import NoSuchService
from compose.container import Container
from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import pull_busybox
ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
BUILD_CACHE_TEXT = 'Using cache'
BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:latest'
def start_process(base_dir, options):
proc = subprocess.Popen(
['docker-compose'] + options,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=base_dir)
print("Running process: %s" % proc.pid)
return proc
def wait_on_process(proc, returncode=0):
stdout, stderr = proc.communicate()
if proc.returncode != returncode:
print(stderr.decode('utf-8'))
assert proc.returncode == returncode
return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))
def wait_on_condition(condition, delay=0.1, timeout=5):
start_time = time.time()
while not condition():
if time.time() - start_time > timeout:
raise AssertionError("Timeout: %s" % condition)
time.sleep(delay)
class ContainerCountCondition(object):
def __init__(self, project, expected):
self.project = project
self.expected = expected
def __call__(self):
return len(self.project.containers()) == self.expected
def __str__(self):
return "waiting for counter count == %s" % self.expected
class ContainerStateCondition(object):
def __init__(self, client, name, running):
self.client = client
self.name = name
self.running = running
# State.Running == true
def __call__(self):
try:
container = self.client.inspect_container(self.name)
return container['State']['Running'] == self.running
except errors.APIError:
return False
def __str__(self):
return "waiting for container to have state %s" % self.expected
class CLITestCase(DockerClientTestCase):
def setUp(self):
super(CLITestCase, self).setUp()
self.old_sys_exit = sys.exit
sys.exit = lambda code=0: None
self.command = TopLevelCommand()
self.command.base_dir = 'tests/fixtures/simple-composefile'
self.base_dir = 'tests/fixtures/simple-composefile'
def tearDown(self):
sys.exit = self.old_sys_exit
self.project.kill()
self.project.remove_stopped()
for container in self.project.containers(stopped=True, one_off=True):
@@ -34,129 +98,141 @@ class CLITestCase(DockerClientTestCase):
@property
def project(self):
# Hack: allow project to be overridden. This needs refactoring so that
# the project object is built exactly once, by the command object, and
# accessed by the test case object.
if hasattr(self, '_project'):
return self._project
# Hack: allow project to be overridden
if not hasattr(self, '_project'):
self._project = get_project(self.base_dir)
return self._project
return get_project(self.command.base_dir)
def dispatch(self, options, project_options=None, returncode=0):
project_options = project_options or []
proc = start_process(self.base_dir, project_options + options)
return wait_on_process(proc, returncode=returncode)
def test_help(self):
old_base_dir = self.command.base_dir
self.command.base_dir = 'tests/fixtures/no-composefile'
with self.assertRaises(SystemExit) as exc_context:
self.command.dispatch(['help', 'up'], None)
self.assertIn('Usage: up [options] [SERVICE...]', str(exc_context.exception))
old_base_dir = self.base_dir
self.base_dir = 'tests/fixtures/no-composefile'
result = self.dispatch(['help', 'up'], returncode=1)
assert 'Usage: up [options] [SERVICE...]' in result.stderr
# self.project.kill() fails during teardown
# unless there is a composefile.
self.command.base_dir = old_base_dir
self.base_dir = old_base_dir
# TODO: address the "Inappropriate ioctl for device" warnings in test output
def test_ps(self):
self.project.get_service('simple').create_container()
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['ps'], None)
self.assertIn('simplecomposefile_simple_1', mock_stdout.getvalue())
result = self.dispatch(['ps'])
assert 'simplecomposefile_simple_1' in result.stdout
def test_ps_default_composefile(self):
self.command.base_dir = 'tests/fixtures/multiple-composefiles'
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['up', '-d'], None)
self.command.dispatch(['ps'], None)
self.base_dir = 'tests/fixtures/multiple-composefiles'
self.dispatch(['up', '-d'])
result = self.dispatch(['ps'])
output = mock_stdout.getvalue()
self.assertIn('multiplecomposefiles_simple_1', output)
self.assertIn('multiplecomposefiles_another_1', output)
self.assertNotIn('multiplecomposefiles_yetanother_1', output)
self.assertIn('multiplecomposefiles_simple_1', result.stdout)
self.assertIn('multiplecomposefiles_another_1', result.stdout)
self.assertNotIn('multiplecomposefiles_yetanother_1', result.stdout)
def test_ps_alternate_composefile(self):
config_path = os.path.abspath(
'tests/fixtures/multiple-composefiles/compose2.yml')
self._project = get_project(self.command.base_dir, [config_path])
self._project = get_project(self.base_dir, [config_path])
self.command.base_dir = 'tests/fixtures/multiple-composefiles'
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['-f', 'compose2.yml', 'up', '-d'], None)
self.command.dispatch(['-f', 'compose2.yml', 'ps'], None)
self.base_dir = 'tests/fixtures/multiple-composefiles'
self.dispatch(['-f', 'compose2.yml', 'up', '-d'])
result = self.dispatch(['-f', 'compose2.yml', 'ps'])
output = mock_stdout.getvalue()
self.assertNotIn('multiplecomposefiles_simple_1', output)
self.assertNotIn('multiplecomposefiles_another_1', output)
self.assertIn('multiplecomposefiles_yetanother_1', output)
self.assertNotIn('multiplecomposefiles_simple_1', result.stdout)
self.assertNotIn('multiplecomposefiles_another_1', result.stdout)
self.assertIn('multiplecomposefiles_yetanother_1', result.stdout)
@mock.patch('compose.service.log')
def test_pull(self, mock_logging):
self.command.dispatch(['pull'], None)
mock_logging.info.assert_any_call('Pulling simple (busybox:latest)...')
mock_logging.info.assert_any_call('Pulling another (busybox:latest)...')
def test_pull(self):
result = self.dispatch(['pull'])
assert sorted(result.stderr.split('\n'))[1:] == [
'Pulling another (busybox:latest)...',
'Pulling simple (busybox:latest)...',
]
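A note on the assertion above: splitting stderr on '\n' leaves an empty element from the trailing newline, and the empty string sorts first, so the [1:] slice drops it before comparing the two "Pulling ..." lines:

>>> sorted('Pulling simple ...\nPulling another ...\n'.split('\n'))
['', 'Pulling another ...', 'Pulling simple ...']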
@mock.patch('compose.service.log')
def test_pull_with_digest(self, mock_logging):
self.command.dispatch(['-f', 'digest.yml', 'pull'], None)
mock_logging.info.assert_any_call('Pulling simple (busybox:latest)...')
mock_logging.info.assert_any_call(
'Pulling digest (busybox@'
'sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d)...')
def test_pull_with_digest(self):
result = self.dispatch(['-f', 'digest.yml', 'pull'])
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert ('Pulling digest (busybox@'
'sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b520'
'04ee8502d)...') in result.stderr
@mock.patch('compose.service.log')
def test_pull_with_ignore_pull_failures(self, mock_logging):
self.command.dispatch(['-f', 'ignore-pull-failures.yml', 'pull', '--ignore-pull-failures'], None)
mock_logging.info.assert_any_call('Pulling simple (busybox:latest)...')
mock_logging.info.assert_any_call('Pulling another (nonexisting-image:latest)...')
mock_logging.error.assert_any_call('Error: image library/nonexisting-image:latest not found')
def test_pull_with_ignore_pull_failures(self):
result = self.dispatch([
'-f', 'ignore-pull-failures.yml',
'pull', '--ignore-pull-failures'])
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert 'Pulling another (nonexisting-image:latest)...' in result.stderr
assert 'Error: image library/nonexisting-image:latest not found' in result.stderr
def test_build_plain(self):
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
self.command.dispatch(['build', 'simple'], None)
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', 'simple'])
cache_indicator = 'Using cache'
pull_indicator = 'Status: Image is up to date for busybox:latest'
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['build', 'simple'], None)
output = mock_stdout.getvalue()
self.assertIn(cache_indicator, output)
self.assertNotIn(pull_indicator, output)
result = self.dispatch(['build', 'simple'])
assert BUILD_CACHE_TEXT in result.stdout
assert BUILD_PULL_TEXT not in result.stdout
def test_build_no_cache(self):
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
self.command.dispatch(['build', 'simple'], None)
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', 'simple'])
cache_indicator = 'Using cache'
pull_indicator = 'Status: Image is up to date for busybox:latest'
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['build', '--no-cache', 'simple'], None)
output = mock_stdout.getvalue()
self.assertNotIn(cache_indicator, output)
self.assertNotIn(pull_indicator, output)
result = self.dispatch(['build', '--no-cache', 'simple'])
assert BUILD_CACHE_TEXT not in result.stdout
assert BUILD_PULL_TEXT not in result.stdout
def test_build_pull(self):
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
self.command.dispatch(['build', 'simple'], None)
# Make sure we have the latest busybox already
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', 'simple'], None)
cache_indicator = 'Using cache'
pull_indicator = 'Status: Image is up to date for busybox:latest'
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['build', '--pull', 'simple'], None)
output = mock_stdout.getvalue()
self.assertIn(cache_indicator, output)
self.assertIn(pull_indicator, output)
result = self.dispatch(['build', '--pull', 'simple'])
assert BUILD_CACHE_TEXT in result.stdout
assert BUILD_PULL_TEXT in result.stdout
def test_build_no_cache_pull(self):
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
self.command.dispatch(['build', 'simple'], None)
# Make sure we have the latest busybox already
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', 'simple'])
cache_indicator = 'Using cache'
pull_indicator = 'Status: Image is up to date for busybox:latest'
with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout:
self.command.dispatch(['build', '--no-cache', '--pull', 'simple'], None)
output = mock_stdout.getvalue()
self.assertNotIn(cache_indicator, output)
self.assertIn(pull_indicator, output)
result = self.dispatch(['build', '--no-cache', '--pull', 'simple'])
assert BUILD_CACHE_TEXT not in result.stdout
assert BUILD_PULL_TEXT in result.stdout
def test_build_failed(self):
self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
self.dispatch(['build', 'simple'], returncode=1)
labels = ["com.docker.compose.test_failing_image=true"]
containers = [
Container.from_ps(self.project.client, c)
for c in self.project.client.containers(
all=True,
filters={"label": labels})
]
assert len(containers) == 1
def test_build_failed_forcerm(self):
self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
self.dispatch(['build', '--force-rm', 'simple'], returncode=1)
labels = ["com.docker.compose.test_failing_image=true"]
containers = [
Container.from_ps(self.project.client, c)
for c in self.project.client.containers(
all=True,
filters={"label": labels})
]
assert not containers
def test_up_detached(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'])
service = self.project.get_service('simple')
another = self.project.get_service('another')
self.assertEqual(len(service.containers()), 1)
@@ -169,28 +245,17 @@ class CLITestCase(DockerClientTestCase):
self.assertFalse(container.get('Config.AttachStdin'))
def test_up_attached(self):
with mock.patch(
'compose.cli.main.attach_to_logs',
autospec=True
) as mock_attach:
self.command.dispatch(['up'], None)
_, args, kwargs = mock_attach.mock_calls[0]
_project, log_printer, _names, _timeout = args
self.base_dir = 'tests/fixtures/echo-services'
result = self.dispatch(['up', '--no-color'])
service = self.project.get_service('simple')
another = self.project.get_service('another')
self.assertEqual(len(service.containers()), 1)
self.assertEqual(len(another.containers()), 1)
self.assertEqual(
set(log_printer.containers),
set(self.project.containers())
)
assert 'simple_1 | simple' in result.stdout
assert 'another_1 | another' in result.stdout
def test_up_without_networking(self):
self.require_api_version('1.21')
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['up', '-d'], None)
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d'], None)
client = docker_client(version='1.21')
networks = client.networks(names=[self.project.name])
@@ -207,8 +272,8 @@ class CLITestCase(DockerClientTestCase):
def test_up_with_networking(self):
self.require_api_version('1.21')
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['--x-networking', 'up', '-d'], None)
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['--x-networking', 'up', '-d'], None)
client = docker_client(version='1.21')
services = self.project.get_services()
@@ -226,14 +291,13 @@ class CLITestCase(DockerClientTestCase):
containers = service.containers()
self.assertEqual(len(containers), 1)
self.assertIn(containers[0].id, network['Containers'])
self.assertEqual(containers[0].get('Config.Hostname'), service.name)
web_container = self.project.get_service('web').containers()[0]
self.assertFalse(web_container.get('HostConfig.Links'))
def test_up_with_links(self):
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['up', '-d', 'web'], None)
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', 'web'], None)
web = self.project.get_service('web')
db = self.project.get_service('db')
console = self.project.get_service('console')
@@ -242,8 +306,8 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(len(console.containers()), 0)
def test_up_with_no_deps(self):
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['up', '-d', '--no-deps', 'web'], None)
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', '--no-deps', 'web'], None)
web = self.project.get_service('web')
db = self.project.get_service('db')
console = self.project.get_service('console')
@@ -252,13 +316,13 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(len(console.containers()), 0)
def test_up_with_force_recreate(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
old_ids = [c.id for c in service.containers()]
self.command.dispatch(['up', '-d', '--force-recreate'], None)
self.dispatch(['up', '-d', '--force-recreate'], None)
self.assertEqual(len(service.containers()), 1)
new_ids = [c.id for c in service.containers()]
@@ -266,13 +330,13 @@ class CLITestCase(DockerClientTestCase):
self.assertNotEqual(old_ids, new_ids)
def test_up_with_no_recreate(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
old_ids = [c.id for c in service.containers()]
self.command.dispatch(['up', '-d', '--no-recreate'], None)
self.dispatch(['up', '-d', '--no-recreate'], None)
self.assertEqual(len(service.containers()), 1)
new_ids = [c.id for c in service.containers()]
@@ -280,11 +344,12 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(old_ids, new_ids)
def test_up_with_force_recreate_and_no_recreate(self):
with self.assertRaises(UserError):
self.command.dispatch(['up', '-d', '--force-recreate', '--no-recreate'], None)
self.dispatch(
['up', '-d', '--force-recreate', '--no-recreate'],
returncode=1)
def test_up_with_timeout(self):
self.command.dispatch(['up', '-d', '-t', '1'], None)
self.dispatch(['up', '-d', '-t', '1'])
service = self.project.get_service('simple')
another = self.project.get_service('another')
self.assertEqual(len(service.containers()), 1)
@@ -296,10 +361,23 @@ class CLITestCase(DockerClientTestCase):
self.assertFalse(config['AttachStdout'])
self.assertFalse(config['AttachStdin'])
@mock.patch('dockerpty.start')
def test_run_service_without_links(self, mock_stdout):
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['run', 'console', '/bin/true'], None)
def test_up_handles_sigint(self):
proc = start_process(self.base_dir, ['up', '-t', '2'])
wait_on_condition(ContainerCountCondition(self.project, 2))
os.kill(proc.pid, signal.SIGINT)
wait_on_condition(ContainerCountCondition(self.project, 0))
def test_up_handles_sigterm(self):
proc = start_process(self.base_dir, ['up', '-t', '2'])
wait_on_condition(ContainerCountCondition(self.project, 2))
os.kill(proc.pid, signal.SIGTERM)
wait_on_condition(ContainerCountCondition(self.project, 0))
def test_run_service_without_links(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['run', 'console', '/bin/true'])
self.assertEqual(len(self.project.containers()), 0)
# Ensure stdin/out was open
@@ -309,44 +387,40 @@ class CLITestCase(DockerClientTestCase):
self.assertTrue(config['AttachStdout'])
self.assertTrue(config['AttachStdin'])
@mock.patch('dockerpty.start')
def test_run_service_with_links(self, _):
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['run', 'web', '/bin/true'], None)
def test_run_service_with_links(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['run', 'web', '/bin/true'], None)
db = self.project.get_service('db')
console = self.project.get_service('console')
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(console.containers()), 0)
@mock.patch('dockerpty.start')
def test_run_with_no_deps(self, _):
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['run', '--no-deps', 'web', '/bin/true'], None)
def test_run_with_no_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['run', '--no-deps', 'web', '/bin/true'])
db = self.project.get_service('db')
self.assertEqual(len(db.containers()), 0)
@mock.patch('dockerpty.start')
def test_run_does_not_recreate_linked_containers(self, _):
self.command.base_dir = 'tests/fixtures/links-composefile'
self.command.dispatch(['up', '-d', 'db'], None)
def test_run_does_not_recreate_linked_containers(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', 'db'])
db = self.project.get_service('db')
self.assertEqual(len(db.containers()), 1)
old_ids = [c.id for c in db.containers()]
self.command.dispatch(['run', 'web', '/bin/true'], None)
self.dispatch(['run', 'web', '/bin/true'], None)
self.assertEqual(len(db.containers()), 1)
new_ids = [c.id for c in db.containers()]
self.assertEqual(old_ids, new_ids)
@mock.patch('dockerpty.start')
def test_run_without_command(self, _):
self.command.base_dir = 'tests/fixtures/commands-composefile'
def test_run_without_command(self):
self.base_dir = 'tests/fixtures/commands-composefile'
self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test')
self.command.dispatch(['run', 'implicit'], None)
self.dispatch(['run', 'implicit'])
service = self.project.get_service('implicit')
containers = service.containers(stopped=True, one_off=True)
self.assertEqual(
@@ -354,7 +428,7 @@ class CLITestCase(DockerClientTestCase):
[u'/bin/sh -c echo "success"'],
)
self.command.dispatch(['run', 'explicit'], None)
self.dispatch(['run', 'explicit'])
service = self.project.get_service('explicit')
containers = service.containers(stopped=True, one_off=True)
self.assertEqual(
@@ -362,14 +436,10 @@ class CLITestCase(DockerClientTestCase):
[u'/bin/true'],
)
@mock.patch('dockerpty.start')
def test_run_service_with_entrypoint_overridden(self, _):
self.command.base_dir = 'tests/fixtures/dockerfile_with_entrypoint'
def test_run_service_with_entrypoint_overridden(self):
self.base_dir = 'tests/fixtures/dockerfile_with_entrypoint'
name = 'service'
self.command.dispatch(
['run', '--entrypoint', '/bin/echo', name, 'helloworld'],
None
)
self.dispatch(['run', '--entrypoint', '/bin/echo', name, 'helloworld'])
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
self.assertEqual(
@@ -377,37 +447,34 @@ class CLITestCase(DockerClientTestCase):
[u'/bin/echo', u'helloworld'],
)
@mock.patch('dockerpty.start')
def test_run_service_with_user_overridden(self, _):
self.command.base_dir = 'tests/fixtures/user-composefile'
def test_run_service_with_user_overridden(self):
self.base_dir = 'tests/fixtures/user-composefile'
name = 'service'
user = 'sshd'
args = ['run', '--user={user}'.format(user=user), name]
self.command.dispatch(args, None)
self.dispatch(['run', '--user={user}'.format(user=user), name], returncode=1)
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
self.assertEqual(user, container.get('Config.User'))
@mock.patch('dockerpty.start')
def test_run_service_with_user_overridden_short_form(self, _):
self.command.base_dir = 'tests/fixtures/user-composefile'
def test_run_service_with_user_overridden_short_form(self):
self.base_dir = 'tests/fixtures/user-composefile'
name = 'service'
user = 'sshd'
args = ['run', '-u', user, name]
self.command.dispatch(args, None)
self.dispatch(['run', '-u', user, name], returncode=1)
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
self.assertEqual(user, container.get('Config.User'))
@mock.patch('dockerpty.start')
def test_run_service_with_environement_overridden(self, _):
def test_run_service_with_environement_overridden(self):
name = 'service'
self.command.base_dir = 'tests/fixtures/environment-composefile'
self.command.dispatch(
['run', '-e', 'foo=notbar', '-e', 'allo=moto=bobo',
'-e', 'alpha=beta', name],
None
)
self.base_dir = 'tests/fixtures/environment-composefile'
self.dispatch([
'run', '-e', 'foo=notbar',
'-e', 'allo=moto=bobo',
'-e', 'alpha=beta',
name,
'/bin/true',
])
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
# env overridden
@@ -419,11 +486,10 @@ class CLITestCase(DockerClientTestCase):
# make sure a value containing an '=' doesn't crash out
self.assertEqual('moto=bobo', container.environment['allo'])
@mock.patch('dockerpty.start')
def test_run_service_without_map_ports(self, _):
def test_run_service_without_map_ports(self):
# create a one-off container
self.command.base_dir = 'tests/fixtures/ports-composefile'
self.command.dispatch(['run', '-d', 'simple'], None)
self.base_dir = 'tests/fixtures/ports-composefile'
self.dispatch(['run', '-d', 'simple'])
container = self.project.get_service('simple').containers(one_off=True)[0]
# get port information
@@ -437,12 +503,10 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(port_random, None)
self.assertEqual(port_assigned, None)
@mock.patch('dockerpty.start')
def test_run_service_with_map_ports(self, _):
def test_run_service_with_map_ports(self):
# create a one-off container
self.command.base_dir = 'tests/fixtures/ports-composefile'
self.command.dispatch(['run', '-d', '--service-ports', 'simple'], None)
self.base_dir = 'tests/fixtures/ports-composefile'
self.dispatch(['run', '-d', '--service-ports', 'simple'])
container = self.project.get_service('simple').containers(one_off=True)[0]
# get port information
@@ -460,12 +524,10 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(port_range[0], "0.0.0.0:49153")
self.assertEqual(port_range[1], "0.0.0.0:49154")
@mock.patch('dockerpty.start')
def test_run_service_with_explicitly_maped_ports(self, _):
def test_run_service_with_explicitly_maped_ports(self):
# create a one-off container
self.command.base_dir = 'tests/fixtures/ports-composefile'
self.command.dispatch(['run', '-d', '-p', '30000:3000', '--publish', '30001:3001', 'simple'], None)
self.base_dir = 'tests/fixtures/ports-composefile'
self.dispatch(['run', '-d', '-p', '30000:3000', '--publish', '30001:3001', 'simple'])
container = self.project.get_service('simple').containers(one_off=True)[0]
# get port information
@@ -479,12 +541,10 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(port_short, "0.0.0.0:30000")
self.assertEqual(port_full, "0.0.0.0:30001")
@mock.patch('dockerpty.start')
def test_run_service_with_explicitly_maped_ip_ports(self, _):
def test_run_service_with_explicitly_maped_ip_ports(self):
# create a one-off container
self.command.base_dir = 'tests/fixtures/ports-composefile'
self.command.dispatch(['run', '-d', '-p', '127.0.0.1:30000:3000', '--publish', '127.0.0.1:30001:3001', 'simple'], None)
self.base_dir = 'tests/fixtures/ports-composefile'
self.dispatch(['run', '-d', '-p', '127.0.0.1:30000:3000', '--publish', '127.0.0.1:30001:3001', 'simple'], None)
container = self.project.get_service('simple').containers(one_off=True)[0]
# get port information
@@ -498,22 +558,20 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(port_short, "127.0.0.1:30000")
self.assertEqual(port_full, "127.0.0.1:30001")
@mock.patch('dockerpty.start')
def test_run_with_custom_name(self, _):
self.command.base_dir = 'tests/fixtures/environment-composefile'
def test_run_with_custom_name(self):
self.base_dir = 'tests/fixtures/environment-composefile'
name = 'the-container-name'
self.command.dispatch(['run', '--name', name, 'service'], None)
self.dispatch(['run', '--name', name, 'service', '/bin/true'])
service = self.project.get_service('service')
container, = service.containers(stopped=True, one_off=True)
self.assertEqual(container.name, name)
@mock.patch('dockerpty.start')
def test_run_with_networking(self, _):
def test_run_with_networking(self):
self.require_api_version('1.21')
client = docker_client(version='1.21')
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
self.command.dispatch(['--x-networking', 'run', 'simple', 'true'], None)
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['--x-networking', 'run', 'simple', 'true'], None)
service = self.project.get_service('simple')
container, = service.containers(stopped=True, one_off=True)
networks = client.networks(names=[self.project.name])
@@ -522,76 +580,101 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(len(networks), 1)
self.assertEqual(container.human_readable_command, u'true')
def test_run_handles_sigint(self):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simplecomposefile_simple_run_1',
running=True))
os.kill(proc.pid, signal.SIGINT)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simplecomposefile_simple_run_1',
running=False))
def test_run_handles_sigterm(self):
proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top'])
wait_on_condition(ContainerStateCondition(
self.project.client,
'simplecomposefile_simple_run_1',
running=True))
os.kill(proc.pid, signal.SIGTERM)
wait_on_condition(ContainerStateCondition(
self.project.client,
'simplecomposefile_simple_run_1',
running=False))
def test_rm(self):
service = self.project.get_service('simple')
service.create_container()
service.kill()
self.assertEqual(len(service.containers(stopped=True)), 1)
self.command.dispatch(['rm', '--force'], None)
self.dispatch(['rm', '--force'], None)
self.assertEqual(len(service.containers(stopped=True)), 0)
service = self.project.get_service('simple')
service.create_container()
service.kill()
self.assertEqual(len(service.containers(stopped=True)), 1)
self.command.dispatch(['rm', '-f'], None)
self.dispatch(['rm', '-f'], None)
self.assertEqual(len(service.containers(stopped=True)), 0)
def test_stop(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['stop', '-t', '1'], None)
self.dispatch(['stop', '-t', '1'], None)
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertFalse(service.containers(stopped=True)[0].is_running)
def test_pause_unpause(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertFalse(service.containers()[0].is_paused)
self.command.dispatch(['pause'], None)
self.dispatch(['pause'], None)
self.assertTrue(service.containers()[0].is_paused)
self.command.dispatch(['unpause'], None)
self.dispatch(['unpause'], None)
self.assertFalse(service.containers()[0].is_paused)
def test_logs_invalid_service_name(self):
with self.assertRaises(NoSuchService):
self.command.dispatch(['logs', 'madeupname'], None)
self.dispatch(['logs', 'madeupname'], returncode=1)
def test_kill(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['kill'], None)
self.dispatch(['kill'], None)
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertFalse(service.containers(stopped=True)[0].is_running)
def test_kill_signal_sigstop(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['kill', '-s', 'SIGSTOP'], None)
self.dispatch(['kill', '-s', 'SIGSTOP'], None)
self.assertEqual(len(service.containers()), 1)
# The container is still running. It has only been paused
self.assertTrue(service.containers()[0].is_running)
def test_kill_stopped_service(self):
self.command.dispatch(['up', '-d'], None)
self.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.command.dispatch(['kill', '-s', 'SIGSTOP'], None)
self.dispatch(['kill', '-s', 'SIGSTOP'], None)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['kill', '-s', 'SIGKILL'], None)
self.dispatch(['kill', '-s', 'SIGKILL'], None)
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertFalse(service.containers(stopped=True)[0].is_running)
@@ -599,9 +682,9 @@ class CLITestCase(DockerClientTestCase):
def test_restart(self):
service = self.project.get_service('simple')
container = service.create_container()
service.start_container(container)
container.start()
started_at = container.dictionary['State']['StartedAt']
self.command.dispatch(['restart', '-t', '1'], None)
self.dispatch(['restart', '-t', '1'], None)
container.inspect()
self.assertNotEqual(
container.dictionary['State']['FinishedAt'],
@@ -612,56 +695,63 @@ class CLITestCase(DockerClientTestCase):
started_at,
)
def test_restart_stopped_container(self):
service = self.project.get_service('simple')
container = service.create_container()
container.start()
container.kill()
self.assertEqual(len(service.containers(stopped=True)), 1)
self.dispatch(['restart', '-t', '1'], None)
self.assertEqual(len(service.containers(stopped=False)), 1)
def test_scale(self):
project = self.project
self.command.scale(project, {'SERVICE=NUM': ['simple=1']})
self.dispatch(['scale', 'simple=1'])
self.assertEqual(len(project.get_service('simple').containers()), 1)
self.command.scale(project, {'SERVICE=NUM': ['simple=3', 'another=2']})
self.dispatch(['scale', 'simple=3', 'another=2'])
self.assertEqual(len(project.get_service('simple').containers()), 3)
self.assertEqual(len(project.get_service('another').containers()), 2)
self.command.scale(project, {'SERVICE=NUM': ['simple=1', 'another=1']})
self.dispatch(['scale', 'simple=1', 'another=1'])
self.assertEqual(len(project.get_service('simple').containers()), 1)
self.assertEqual(len(project.get_service('another').containers()), 1)
self.command.scale(project, {'SERVICE=NUM': ['simple=1', 'another=1']})
self.dispatch(['scale', 'simple=1', 'another=1'])
self.assertEqual(len(project.get_service('simple').containers()), 1)
self.assertEqual(len(project.get_service('another').containers()), 1)
self.command.scale(project, {'SERVICE=NUM': ['simple=0', 'another=0']})
self.dispatch(['scale', 'simple=0', 'another=0'])
self.assertEqual(len(project.get_service('simple').containers()), 0)
self.assertEqual(len(project.get_service('another').containers()), 0)
def test_port(self):
self.command.base_dir = 'tests/fixtures/ports-composefile'
self.command.dispatch(['up', '-d'], None)
self.base_dir = 'tests/fixtures/ports-composefile'
self.dispatch(['up', '-d'], None)
container = self.project.get_service('simple').get_container()
@mock.patch('sys.stdout', new_callable=StringIO)
def get_port(number, mock_stdout):
self.command.dispatch(['port', 'simple', str(number)], None)
return mock_stdout.getvalue().rstrip()
def get_port(number):
result = self.dispatch(['port', 'simple', str(number)])
return result.stdout.rstrip()
self.assertEqual(get_port(3000), container.get_local_port(3000))
self.assertEqual(get_port(3001), "0.0.0.0:49152")
self.assertEqual(get_port(3002), "0.0.0.0:49153")
def test_port_with_scale(self):
self.command.base_dir = 'tests/fixtures/ports-composefile-scale'
self.command.dispatch(['scale', 'simple=2'], None)
self.base_dir = 'tests/fixtures/ports-composefile-scale'
self.dispatch(['scale', 'simple=2'], None)
containers = sorted(
self.project.containers(service_names=['simple']),
key=attrgetter('name'))
@mock.patch('sys.stdout', new_callable=StringIO)
def get_port(number, mock_stdout, index=None):
if index is None:
self.command.dispatch(['port', 'simple', str(number)], None)
else:
self.command.dispatch(['port', '--index=' + str(index), 'simple', str(number)], None)
return mock_stdout.getvalue().rstrip()
def get_port(number, index=None):
if index is None:
result = self.dispatch(['port', 'simple', str(number)])
else:
result = self.dispatch(['port', '--index=' + str(index), 'simple', str(number)])
return result.stdout.rstrip()
self.assertEqual(get_port(3000), containers[0].get_local_port(3000))
self.assertEqual(get_port(3000, index=1), containers[0].get_local_port(3000))
@@ -670,8 +760,8 @@ class CLITestCase(DockerClientTestCase):
def test_env_file_relative_to_compose_file(self):
config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml')
self.command.dispatch(['-f', config_path, 'up', '-d'], None)
self._project = get_project(self.command.base_dir, [config_path])
self.dispatch(['-f', config_path, 'up', '-d'], None)
self._project = get_project(self.base_dir, [config_path])
containers = self.project.containers(stopped=True)
self.assertEqual(len(containers), 1)
@@ -681,20 +771,18 @@ class CLITestCase(DockerClientTestCase):
def test_home_and_env_var_in_volume_path(self):
os.environ['VOLUME_NAME'] = 'my-volume'
os.environ['HOME'] = '/tmp/home-dir'
expected_host_path = os.path.join(os.environ['HOME'], os.environ['VOLUME_NAME'])
self.command.base_dir = 'tests/fixtures/volume-path-interpolation'
self.command.dispatch(['up', '-d'], None)
self.base_dir = 'tests/fixtures/volume-path-interpolation'
self.dispatch(['up', '-d'], None)
container = self.project.containers(stopped=True)[0]
actual_host_path = container.get('Volumes')['/container-path']
components = actual_host_path.split('/')
self.assertTrue(components[-2:] == ['home-dir', 'my-volume'],
msg="Last two components differ: %s, %s" % (actual_host_path, expected_host_path))
assert components[-2:] == ['home-dir', 'my-volume']
def test_up_with_default_override_file(self):
self.command.base_dir = 'tests/fixtures/override-files'
self.command.dispatch(['up', '-d'], None)
self.base_dir = 'tests/fixtures/override-files'
self.dispatch(['up', '-d'], None)
containers = self.project.containers()
self.assertEqual(len(containers), 2)
@@ -704,15 +792,15 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(db.human_readable_command, 'top')
def test_up_with_multiple_files(self):
self.command.base_dir = 'tests/fixtures/override-files'
self.base_dir = 'tests/fixtures/override-files'
config_paths = [
'docker-compose.yml',
'docker-compose.override.yml',
'extra.yml',
]
self._project = get_project(self.command.base_dir, config_paths)
self.command.dispatch(
self._project = get_project(self.base_dir, config_paths)
self.dispatch(
[
'-f', config_paths[0],
'-f', config_paths[1],
@@ -731,8 +819,8 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(other.human_readable_command, 'top')
def test_up_with_extends(self):
self.command.base_dir = 'tests/fixtures/extends'
self.command.dispatch(['up', '-d'], None)
self.base_dir = 'tests/fixtures/extends'
self.dispatch(['up', '-d'], None)
self.assertEqual(
set([s.name for s in self.project.services]),

@@ -0,0 +1,6 @@
simple:
image: busybox:latest
command: echo simple
another:
image: busybox:latest
command: echo another

@@ -5,7 +5,7 @@ bar:
web:
extends:
file: circle-2.yml
service: web
service: other
baz:
image: busybox
quux:

@@ -2,7 +2,7 @@ foo:
image: busybox
bar:
image: busybox
web:
other:
extends:
file: circle-1.yml
service: web

@@ -0,0 +1,7 @@
FROM busybox:latest
LABEL com.docker.compose.test_image=true
LABEL com.docker.compose.test_failing_image=true
# With the following label the container will be cleaned up automatically
# Must be kept in sync with LABEL_PROJECT from compose/const.py
LABEL com.docker.compose.project=composetest
RUN exit 1
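The label comments above describe Compose's label-based bookkeeping: because every container started from this image carries the composetest project label, a test run can find and delete leftovers even when the build itself fails. A minimal sketch of that lookup with docker-py (hypothetical helper, not part of this change):

def remove_test_containers(client, project='composetest'):
    # Find every container, running or not, that carries the project
    # label, then force-remove it.
    label = 'com.docker.compose.project=%s' % project
    for info in client.containers(all=True, filters={'label': label}):
        client.remove_container(info['Id'], force=True)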

@@ -0,0 +1,2 @@
simple:
build: .

@@ -3,11 +3,13 @@ from __future__ import unicode_literals
from .testcases import DockerClientTestCase
from compose.cli.docker_client import docker_client
from compose.config import config
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_PROJECT
from compose.container import Container
from compose.project import Project
from compose.service import ConvergenceStrategy
from compose.service import VolumeFromSpec
from compose.service import Net
def build_service_dicts(service_config):
@@ -111,6 +113,7 @@ class ProjectTest(DockerClientTestCase):
network_name = 'network_does_exist'
project = Project(network_name, [], client)
client.create_network(network_name)
self.addCleanup(client.remove_network, network_name)
assert project.get_network()['Name'] == network_name
def test_net_from_service(self):
@@ -212,7 +215,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/var/db'])
db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')])
project = Project('composetest', [web, db], self.client)
project.start()
self.assertEqual(len(project.containers()), 0)
@@ -236,7 +239,7 @@ class ProjectTest(DockerClientTestCase):
def test_recreate_preserves_volumes(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/etc'])
db = self.create_service('db', volumes=[VolumeSpec.parse('/etc')])
project = Project('composetest', [web, db], self.client)
project.start()
self.assertEqual(len(project.containers()), 0)
@@ -255,7 +258,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up_with_no_recreate_running(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/var/db'])
db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')])
project = Project('composetest', [web, db], self.client)
project.start()
self.assertEqual(len(project.containers()), 0)
@@ -275,7 +278,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up_with_no_recreate_stopped(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/var/db'])
db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')])
project = Project('composetest', [web, db], self.client)
project.start()
self.assertEqual(len(project.containers()), 0)
@@ -314,7 +317,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up_starts_links(self):
console = self.create_service('console')
db = self.create_service('db', volumes=['/var/db'])
db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')])
web = self.create_service('web', links=[(db, 'db')])
project = Project('composetest', [web, db, console], self.client)
@@ -398,6 +401,20 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1)
self.assertEqual(len(project.get_service('console').containers()), 0)
def test_project_up_with_custom_network(self):
self.require_api_version('1.21')
client = docker_client(version='1.21')
network_name = 'composetest-custom'
client.create_network(network_name)
self.addCleanup(client.remove_network, network_name)
web = self.create_service('web', net=Net(network_name))
project = Project('composetest', [web], client, use_networking=True)
project.up()
assert project.get_network() is None
def test_unscale_after_restart(self):
web = self.create_service('web')
project = Project('composetest', [web], self.client)

@@ -3,17 +3,21 @@ from __future__ import unicode_literals
from .. import mock
from .testcases import DockerClientTestCase
from compose.config.types import VolumeSpec
from compose.project import Project
from compose.service import ConvergenceStrategy
class ResilienceTest(DockerClientTestCase):
def setUp(self):
self.db = self.create_service('db', volumes=['/var/db'], command='top')
self.db = self.create_service(
'db',
volumes=[VolumeSpec.parse('/var/db')],
command='top')
self.project = Project('composetest', [self.db], self.client)
container = self.db.create_container()
self.db.start_container(container)
container.start()
self.host_path = container.get('Volumes')['/var/db']
def test_successful_recreate(self):
@@ -31,7 +35,7 @@ class ResilienceTest(DockerClientTestCase):
self.assertEqual(container.get('Volumes')['/var/db'], self.host_path)
def test_start_failure(self):
with mock.patch('compose.service.Service.start_container', crash):
with mock.patch('compose.container.Container.start', crash):
with self.assertRaises(Crash):
self.project.up(strategy=ConvergenceStrategy.always)

@@ -14,23 +14,25 @@ from .. import mock
from .testcases import DockerClientTestCase
from .testcases import pull_busybox
from compose import __version__
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_CONTAINER_NUMBER
from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.const import LABEL_VERSION
from compose.container import Container
from compose.service import build_extra_hosts
from compose.service import ConfigError
from compose.service import ConvergencePlan
from compose.service import ConvergenceStrategy
from compose.service import Net
from compose.service import Service
from compose.service import VolumeFromSpec
def create_and_start_container(service, **override_options):
container = service.create_container(**override_options)
return service.start_container(container)
container.start()
return container
class ServiceTest(DockerClientTestCase):
@@ -113,59 +115,28 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(container.name, 'composetest_db_run_1')
def test_create_container_with_unspecified_volume(self):
service = self.create_service('db', volumes=['/var/db'])
service = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')])
container = service.create_container()
service.start_container(container)
container.start()
self.assertIn('/var/db', container.get('Volumes'))
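From here on, volume strings are parsed up front into VolumeSpec values from compose.config.types. Based on how the tests use it, the assumed shape is a three-field namedtuple with a parse classmethod; a rough sketch (the real parser lives in compose.config.types and may handle more cases):

from collections import namedtuple

class VolumeSpec(namedtuple('VolumeSpec', 'external internal mode')):
    @classmethod
    def parse(cls, volume_config):
        # '/var/db'            -> VolumeSpec(None, '/var/db', 'rw')
        # '/tmp/db:/var/db'    -> VolumeSpec('/tmp/db', '/var/db', 'rw')
        # '/tmp/db:/var/db:ro' -> VolumeSpec('/tmp/db', '/var/db', 'ro')
        parts = volume_config.split(':')
        if len(parts) == 1:
            return cls(None, parts[0], 'rw')
        if len(parts) == 2:
            return cls(parts[0], parts[1], 'rw')
        return cls(parts[0], parts[1], parts[2])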
def test_create_container_with_volume_driver(self):
service = self.create_service('db', volume_driver='foodriver')
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual('foodriver', container.get('Config.VolumeDriver'))
def test_create_container_with_cpu_shares(self):
service = self.create_service('db', cpu_shares=73)
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(container.get('HostConfig.CpuShares'), 73)
def test_build_extra_hosts(self):
# string
self.assertRaises(ConfigError, lambda: build_extra_hosts("www.example.com: 192.168.0.17"))
# list of strings
self.assertEqual(build_extra_hosts(
["www.example.com:192.168.0.17"]),
{'www.example.com': '192.168.0.17'})
self.assertEqual(build_extra_hosts(
["www.example.com: 192.168.0.17"]),
{'www.example.com': '192.168.0.17'})
self.assertEqual(build_extra_hosts(
["www.example.com: 192.168.0.17",
"static.example.com:192.168.0.19",
"api.example.com: 192.168.0.18"]),
{'www.example.com': '192.168.0.17',
'static.example.com': '192.168.0.19',
'api.example.com': '192.168.0.18'})
# list of dictionaries
self.assertRaises(ConfigError, lambda: build_extra_hosts(
[{'www.example.com': '192.168.0.17'},
{'api.example.com': '192.168.0.18'}]))
# dictionaries
self.assertEqual(build_extra_hosts(
{'www.example.com': '192.168.0.17',
'api.example.com': '192.168.0.18'}),
{'www.example.com': '192.168.0.17',
'api.example.com': '192.168.0.18'})
def test_create_container_with_extra_hosts_list(self):
extra_hosts = ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
service = self.create_service('db', extra_hosts=extra_hosts)
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts))
def test_create_container_with_extra_hosts_dicts(self):
@@ -173,42 +144,44 @@ class ServiceTest(DockerClientTestCase):
extra_hosts_list = ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
service = self.create_service('db', extra_hosts=extra_hosts)
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts_list))
def test_create_container_with_cpu_set(self):
service = self.create_service('db', cpuset='0')
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(container.get('HostConfig.CpusetCpus'), '0')
def test_create_container_with_read_only_root_fs(self):
read_only = True
service = self.create_service('db', read_only=read_only)
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(container.get('HostConfig.ReadonlyRootfs'), read_only, container.get('HostConfig'))
def test_create_container_with_security_opt(self):
security_opt = ['label:disable']
service = self.create_service('db', security_opt=security_opt)
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt))
def test_create_container_with_mac_address(self):
service = self.create_service('db', mac_address='02:42:ac:11:65:43')
container = service.create_container()
service.start_container(container)
container.start()
self.assertEqual(container.inspect()['Config']['MacAddress'], '02:42:ac:11:65:43')
def test_create_container_with_specified_volume(self):
host_path = '/tmp/host-path'
container_path = '/container-path'
service = self.create_service('db', volumes=['%s:%s' % (host_path, container_path)])
service = self.create_service(
'db',
volumes=[VolumeSpec(host_path, container_path, 'rw')])
container = service.create_container()
service.start_container(container)
container.start()
volumes = container.inspect()['Volumes']
self.assertIn(container_path, volumes)
@@ -219,11 +192,10 @@ class ServiceTest(DockerClientTestCase):
msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))
def test_recreate_preserves_volume_with_trailing_slash(self):
"""
When the Compose file specifies a trailing slash in the container path, make
"""When the Compose file specifies a trailing slash in the container path, make
sure we copy the volume over when recreating.
"""
service = self.create_service('data', volumes=['/data/'])
service = self.create_service('data', volumes=[VolumeSpec.parse('/data/')])
old_container = create_and_start_container(service)
volume_path = old_container.get('Volumes')['/data']
@@ -237,7 +209,7 @@ class ServiceTest(DockerClientTestCase):
"""
host_path = '/tmp/data'
container_path = '/data'
volumes = ['{}:{}/'.format(host_path, container_path)]
volumes = [VolumeSpec.parse('{}:{}/'.format(host_path, container_path))]
tmp_container = self.client.create_container(
'busybox', 'true',
@@ -281,7 +253,7 @@ class ServiceTest(DockerClientTestCase):
]
)
host_container = host_service.create_container()
host_service.start_container(host_container)
host_container.start()
self.assertIn(volume_container_1.id + ':rw',
host_container.get('HostConfig.VolumesFrom'))
self.assertIn(volume_container_2.id + ':rw',
@@ -291,7 +263,7 @@ class ServiceTest(DockerClientTestCase):
service = self.create_service(
'db',
environment={'FOO': '1'},
volumes=['/etc'],
volumes=[VolumeSpec.parse('/etc')],
entrypoint=['top'],
command=['-d', '1']
)
@@ -300,7 +272,7 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(old_container.get('Config.Cmd'), ['-d', '1'])
self.assertIn('FOO=1', old_container.get('Config.Env'))
self.assertEqual(old_container.name, 'composetest_db_1')
service.start_container(old_container)
old_container.start()
old_container.inspect() # reload volume data
volume_path = old_container.get('Volumes')['/etc']
@@ -329,7 +301,7 @@ class ServiceTest(DockerClientTestCase):
service = self.create_service(
'db',
environment={'FOO': '1'},
volumes=['/var/db'],
volumes=[VolumeSpec.parse('/var/db')],
entrypoint=['top'],
command=['-d', '1']
)
@@ -366,6 +338,31 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(list(new_container.get('Volumes')), ['/data'])
self.assertEqual(new_container.get('Volumes')['/data'], volume_path)
def test_execute_convergence_plan_when_image_volume_masks_config(self):
service = self.create_service(
'db',
build='tests/fixtures/dockerfile-with-volume',
)
old_container = create_and_start_container(service)
self.assertEqual(list(old_container.get('Volumes').keys()), ['/data'])
volume_path = old_container.get('Volumes')['/data']
service.options['volumes'] = [VolumeSpec.parse('/tmp:/data')]
with mock.patch('compose.service.log') as mock_log:
new_container, = service.execute_convergence_plan(
ConvergencePlan('recreate', [old_container]))
mock_log.warn.assert_called_once_with(mock.ANY)
_, args, kwargs = mock_log.warn.mock_calls[0]
self.assertIn(
"Service \"db\" is using volume \"/data\" from the previous container",
args[0])
self.assertEqual(list(new_container.get('Volumes')), ['/data'])
self.assertEqual(new_container.get('Volumes')['/data'], volume_path)
def test_start_container_passes_through_options(self):
db = self.create_service('db')
create_and_start_container(db, environment={'FOO': 'BAR'})
@@ -504,6 +501,13 @@ class ServiceTest(DockerClientTestCase):
self.create_service('web', build=text_type(base_dir)).build()
self.assertEqual(len(self.client.images(name='composetest_web')), 1)
def test_build_with_git_url(self):
build_url = "https://github.com/dnephin/docker-build-from-url.git"
service = self.create_service('buildwithurl', build=build_url)
self.addCleanup(self.client.remove_image, service.image_name)
service.build()
assert service.image()
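The end-user configuration this exercises is a build entry pointing at a git URL; a minimal hypothetical compose file for the feature, in the same style as the fixtures above:

buildwithurl:
build: https://github.com/dnephin/docker-build-from-url.git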
def test_start_container_stays_unpriviliged(self):
service = self.create_service('web')
container = create_and_start_container(service).inspect()
@@ -749,23 +753,21 @@ class ServiceTest(DockerClientTestCase):
container = create_and_start_container(service)
self.assertIsNone(container.get('HostConfig.Dns'))
def test_dns_single_value(self):
service = self.create_service('web', dns='8.8.8.8')
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.Dns'), ['8.8.8.8'])
def test_dns_list(self):
service = self.create_service('web', dns=['8.8.8.8', '9.9.9.9'])
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.Dns'), ['8.8.8.8', '9.9.9.9'])
def test_restart_always_value(self):
service = self.create_service('web', restart='always')
service = self.create_service('web', restart={'Name': 'always'})
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.RestartPolicy.Name'), 'always')
def test_restart_on_failure_value(self):
service = self.create_service('web', restart='on-failure:5')
service = self.create_service('web', restart={
'Name': 'on-failure',
'MaximumRetryCount': 5
})
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.RestartPolicy.Name'), 'on-failure')
self.assertEqual(container.get('HostConfig.RestartPolicy.MaximumRetryCount'), 5)
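The dict form used above matches what a restart-spec parser would produce from the old string values; a sketch of the assumed parsing, derived from these tests rather than from the compose.config.types source:

def parse_restart_spec(restart_config):
    # 'always'       -> {'Name': 'always'}
    # 'on-failure:5' -> {'Name': 'on-failure', 'MaximumRetryCount': 5}
    if not restart_config:
        return None
    parts = restart_config.split(':', 1)
    if len(parts) == 1:
        return {'Name': parts[0]}
    return {'Name': parts[0], 'MaximumRetryCount': int(parts[1])}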
@@ -780,17 +782,7 @@ class ServiceTest(DockerClientTestCase):
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.CapDrop'), ['SYS_ADMIN', 'NET_ADMIN'])
def test_dns_search_no_value(self):
service = self.create_service('web')
container = create_and_start_container(service)
self.assertIsNone(container.get('HostConfig.DnsSearch'))
def test_dns_search_single_value(self):
service = self.create_service('web', dns_search='example.com')
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.DnsSearch'), ['example.com'])
def test_dns_search_list(self):
def test_dns_search(self):
service = self.create_service('web', dns_search=['dc1.example.com', 'dc2.example.com'])
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.DnsSearch'), ['dc1.example.com', 'dc2.example.com'])
@@ -812,7 +804,13 @@ class ServiceTest(DockerClientTestCase):
environment=['ONE=1', 'TWO=2', 'THREE=3'],
env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env'])
env = create_and_start_container(service).environment
for k, v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.items():
for k, v in {
'ONE': '1',
'TWO': '2',
'THREE': '3',
'FOO': 'baz',
'DOO': 'dah'
}.items():
self.assertEqual(env[k], v)
@mock.patch.dict(os.environ)
@@ -820,9 +818,22 @@ class ServiceTest(DockerClientTestCase):
os.environ['FILE_DEF'] = 'E1'
os.environ['FILE_DEF_EMPTY'] = 'E2'
os.environ['ENV_DEF'] = 'E3'
service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
service = self.create_service(
'web',
environment={
'FILE_DEF': 'F1',
'FILE_DEF_EMPTY': '',
'ENV_DEF': None,
'NO_DEF': None
}
)
env = create_and_start_container(service).environment
for k, v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.items():
for k, v in {
'FILE_DEF': 'F1',
'FILE_DEF_EMPTY': '',
'ENV_DEF': 'E3',
'NO_DEF': ''
}.items():
self.assertEqual(env[k], v)
def test_with_high_enough_api_version_we_get_default_network_mode(self):
@@ -853,22 +864,11 @@ class ServiceTest(DockerClientTestCase):
for pair in expected.items():
self.assertIn(pair, labels)
service.kill()
service.remove_stopped()
labels_list = ["%s=%s" % pair for pair in labels_dict.items()]
service = self.create_service('web', labels=labels_list)
labels = create_and_start_container(service).labels.items()
for pair in expected.items():
self.assertIn(pair, labels)
def test_empty_labels(self):
labels_list = ['foo', 'bar']
service = self.create_service('web', labels=labels_list)
labels_dict = {'foo': '', 'bar': ''}
service = self.create_service('web', labels=labels_dict)
labels = create_and_start_container(service).labels.items()
for name in labels_list:
for name in labels_dict:
self.assertIn((name, ''), labels)
def test_custom_container_name(self):
@@ -929,3 +929,38 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(set(service.containers(stopped=True)), set([original, duplicate]))
self.assertEqual(set(service.duplicate_containers()), set([duplicate]))
def converge(service,
strategy=ConvergenceStrategy.changed,
do_build=True):
"""Create a converge plan from a strategy and execute the plan."""
plan = service.convergence_plan(strategy)
return service.execute_convergence_plan(plan, do_build=do_build, timeout=1)
class ConfigHashTest(DockerClientTestCase):
def test_no_config_hash_when_one_off(self):
web = self.create_service('web')
container = web.create_container(one_off=True)
self.assertNotIn(LABEL_CONFIG_HASH, container.labels)
def test_no_config_hash_when_overriding_options(self):
web = self.create_service('web')
container = web.create_container(environment={'FOO': '1'})
self.assertNotIn(LABEL_CONFIG_HASH, container.labels)
def test_config_hash_with_custom_labels(self):
web = self.create_service('web', labels={'foo': '1'})
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
self.assertIn('foo', container.labels)
def test_config_hash_sticks_around(self):
web = self.create_service('web', command=["top"])
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
web = self.create_service('web', command=["top", "-d", "1"])
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)

@@ -4,13 +4,10 @@ by `docker-compose up`.
"""
from __future__ import unicode_literals
import os
import shutil
import tempfile
import py
from .testcases import DockerClientTestCase
from compose.config import config
from compose.const import LABEL_CONFIG_HASH
from compose.project import Project
from compose.service import ConvergenceStrategy
@@ -179,13 +176,18 @@ class ProjectWithDependenciesTest(ProjectTestCase):
containers = self.run_up(next_cfg)
self.assertEqual(len(containers), 2)
def test_service_recreated_when_dependency_created(self):
containers = self.run_up(self.cfg, service_names=['web'], start_deps=False)
self.assertEqual(len(containers), 1)
containers = self.run_up(self.cfg)
self.assertEqual(len(containers), 3)
web, = [c for c in containers if c.service == 'web']
nginx, = [c for c in containers if c.service == 'nginx']
self.assertEqual(web.links(), ['composetest_db_1', 'db', 'db_1'])
self.assertEqual(nginx.links(), ['composetest_web_1', 'web', 'web_1'])
def converge(service,
strategy=ConvergenceStrategy.changed,
do_build=True):
"""Create a converge plan from a strategy and execute the plan."""
plan = service.convergence_plan(strategy)
return service.execute_convergence_plan(plan, do_build=do_build, timeout=1)
class ServiceStateTest(DockerClientTestCase):
@@ -241,67 +243,49 @@ class ServiceStateTest(DockerClientTestCase):
image_id = self.client.images(name='busybox')[0]['Id']
self.client.tag(image_id, repository=repo, tag=tag)
try:
web = self.create_service('web', image=image)
container = web.create_container()
# update the image
c = self.client.create_container(image, ['touch', '/hello.txt'])
self.client.commit(c, repository=repo, tag=tag)
self.client.remove_container(c)
web = self.create_service('web', image=image)
self.assertEqual(('recreate', [container]), web.convergence_plan())
finally:
self.client.remove_image(image)
self.addCleanup(self.client.remove_image, image)
web = self.create_service('web', image=image)
container = web.create_container()
# update the image
c = self.client.create_container(image, ['touch', '/hello.txt'])
self.client.commit(c, repository=repo, tag=tag)
self.client.remove_container(c)
web = self.create_service('web', image=image)
self.assertEqual(('recreate', [container]), web.convergence_plan())
def test_trigger_recreate_with_build(self):
context = tempfile.mkdtemp()
base_image = "FROM busybox\nLABEL com.docker.compose.test_image=true\n"
try:
dockerfile = os.path.join(context, 'Dockerfile')
with open(dockerfile, 'w') as f:
f.write(base_image)
web = self.create_service('web', build=context)
container = web.create_container()
with open(dockerfile, 'w') as f:
f.write(base_image + 'CMD echo hello world\n')
web.build()
web = self.create_service('web', build=context)
self.assertEqual(('recreate', [container]), web.convergence_plan())
finally:
shutil.rmtree(context)
def test_trigger_recreate_with_build(self):
context = py.test.ensuretemp('test_trigger_recreate_with_build')
self.addCleanup(context.remove)
base_image = "FROM busybox\nLABEL com.docker.compose.test_image=true\n"
dockerfile = context.join('Dockerfile')
dockerfile.write(base_image)
web = self.create_service('web', build=str(context))
container = web.create_container()
dockerfile.write(base_image + 'CMD echo hello world\n')
web.build()
web = self.create_service('web', build=str(context))
self.assertEqual(('recreate', [container]), web.convergence_plan())
def test_image_changed_to_build(self):
context = py.test.ensuretemp('test_image_changed_to_build')
self.addCleanup(context.remove)
context.join('Dockerfile').write("""
FROM busybox
LABEL com.docker.compose.test_image=true
""")
web = self.create_service('web', image='busybox')
container = web.create_container()
web = self.create_service('web', build=str(context))
plan = web.convergence_plan()
self.assertEqual(('recreate', [container]), plan)
containers = web.execute_convergence_plan(plan)
self.assertEqual(len(containers), 1)
class ConfigHashTest(DockerClientTestCase):
def test_no_config_hash_when_one_off(self):
web = self.create_service('web')
container = web.create_container(one_off=True)
self.assertNotIn(LABEL_CONFIG_HASH, container.labels)
def test_no_config_hash_when_overriding_options(self):
web = self.create_service('web')
container = web.create_container(environment={'FOO': '1'})
self.assertNotIn(LABEL_CONFIG_HASH, container.labels)
def test_config_hash_with_custom_labels(self):
web = self.create_service('web', labels={'foo': '1'})
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
self.assertIn('foo', container.labels)
def test_config_hash_sticks_around(self):
web = self.create_service('web', command=["top"])
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
web = self.create_service('web', command=["top", "-d", "1"])
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)

View File

@@ -1,23 +1,19 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from docker import errors
from docker.utils import version_lt
from pytest import skip
from .. import unittest
from compose.cli.docker_client import docker_client
from compose.config.config import ServiceLoader
from compose.config.config import resolve_environment
from compose.const import LABEL_PROJECT
from compose.progress_stream import stream_output
from compose.service import Service
def pull_busybox(client):
try:
client.inspect_image('busybox:latest')
except errors.APIError:
client.pull('busybox:latest', stream=False)
client.pull('busybox:latest', stream=False)
class DockerClientTestCase(unittest.TestCase):
@@ -42,34 +38,11 @@ class DockerClientTestCase(unittest.TestCase):
if 'command' not in kwargs:
kwargs['command'] = ["top"]
links = kwargs.get('links', None)
volumes_from = kwargs.get('volumes_from', None)
net = kwargs.get('net', None)
workaround_options = ['links', 'volumes_from', 'net']
for key in workaround_options:
try:
del kwargs[key]
except KeyError:
pass
        options = ServiceLoader(working_dir='.', filename=None, service_name=name, service_dict=kwargs).make_service_dict()
        labels = options.setdefault('labels', {})
        labels['com.docker.compose.test-name'] = self.id()
        if links:
            options['links'] = links
        if volumes_from:
            options['volumes_from'] = volumes_from
        if net:
            options['net'] = net
        return Service(
            project='composetest',
            client=self.client,
            **options
        )

        kwargs['environment'] = resolve_environment(kwargs)
        labels = dict(kwargs.setdefault('labels', {}))
        labels['com.docker.compose.test-name'] = self.id()
        return Service(name, client=self.client, project='composetest', **kwargs)
def check_build(self, *args, **kwargs):
kwargs.setdefault('rm', True)

View File

@@ -1,13 +1,13 @@
from __future__ import absolute_import
from __future__ import unicode_literals
import mock
import pytest
import six
from compose.cli.log_printer import LogPrinter
from compose.cli.log_printer import wait_on_exit
from compose.container import Container
from tests import unittest
from tests import mock
def build_mock_container(reader):
@@ -22,40 +22,52 @@ def build_mock_container(reader):
)
class LogPrinterTest(unittest.TestCase):
    def get_default_output(self, monochrome=False):
        def reader(*args, **kwargs):
            yield b"hello\nworld"
        container = build_mock_container(reader)
        output = run_log_printer([container], monochrome=monochrome)
        return output

    def test_single_container(self):
        output = self.get_default_output()
        self.assertIn('hello', output)
        self.assertIn('world', output)

    def test_monochrome(self):
        output = self.get_default_output(monochrome=True)
        self.assertNotIn('\033[', output)

    def test_polychrome(self):
        output = self.get_default_output()
        self.assertIn('\033[', output)

    def test_unicode(self):
        glyph = u'\u2022'

        def reader(*args, **kwargs):
            yield glyph.encode('utf-8') + b'\n'

        container = build_mock_container(reader)
        output = run_log_printer([container])
        if six.PY2:
            output = output.decode('utf-8')
        self.assertIn(glyph, output)

@pytest.fixture
def output_stream():
    output = six.StringIO()
    output.flush = mock.Mock()
    return output

@pytest.fixture
def mock_container():
    def reader(*args, **kwargs):
        yield b"hello\nworld"
    return build_mock_container(reader)

class TestLogPrinter(object):
    def test_single_container(self, output_stream, mock_container):
        LogPrinter([mock_container], output=output_stream).run()

        output = output_stream.getvalue()
        assert 'hello' in output
        assert 'world' in output
        # Call count is 2 lines + "container exited line"
        assert output_stream.flush.call_count == 3

    def test_monochrome(self, output_stream, mock_container):
        LogPrinter([mock_container], output=output_stream, monochrome=True).run()
        assert '\033[' not in output_stream.getvalue()

    def test_polychrome(self, output_stream, mock_container):
        LogPrinter([mock_container], output=output_stream).run()
        assert '\033[' in output_stream.getvalue()

    def test_unicode(self, output_stream):
        glyph = u'\u2022'

        def reader(*args, **kwargs):
            yield glyph.encode('utf-8') + b'\n'

        container = build_mock_container(reader)
        LogPrinter([container], output=output_stream).run()
        output = output_stream.getvalue()
        if six.PY2:
            output = output.decode('utf-8')
        assert glyph in output

    def test_wait_on_exit(self):
        exit_status = 3
@@ -65,24 +77,12 @@ class LogPrinterTest(unittest.TestCase):
wait=mock.Mock(return_value=exit_status))
expected = '{} exited with code {}\n'.format(mock_container.name, exit_status)
self.assertEqual(expected, wait_on_exit(mock_container))
assert expected == wait_on_exit(mock_container)
    def test_generator_with_no_logs(self):
        mock_container = mock.Mock(
            spec=Container,
            has_api_logs=False,
            log_driver='none',
            name_without_project='web_1',
            wait=mock.Mock(return_value=0))

        output = run_log_printer([mock_container])
        self.assertIn(
            "WARNING: no logs are available with the 'none' log driver\n",
            output
        )

    def test_generator_with_no_logs(self, mock_container, output_stream):
        mock_container.has_api_logs = False
        mock_container.log_driver = 'none'
        LogPrinter([mock_container], output=output_stream).run()

        output = output_stream.getvalue()
        assert "WARNING: no logs are available with the 'none' log driver\n" in output

def run_log_printer(containers, monochrome=False):
    output = six.StringIO()
    LogPrinter(containers, output=output, monochrome=monochrome).run()
    return output.getvalue()

View File

@@ -57,11 +57,11 @@ class CLIMainTestCase(unittest.TestCase):
with mock.patch('compose.cli.main.signal', autospec=True) as mock_signal:
attach_to_logs(project, log_printer, service_names, timeout)
mock_signal.signal.assert_called_once_with(mock_signal.SIGINT, mock.ANY)
assert mock_signal.signal.mock_calls == [
mock.call(mock_signal.SIGINT, mock.ANY),
mock.call(mock_signal.SIGTERM, mock.ANY),
]
log_printer.run.assert_called_once_with()
project.stop.assert_called_once_with(
service_names=service_names,
timeout=timeout)
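The updated assertion reflects that attach_to_logs now registers a handler for SIGTERM as well as SIGINT before tailing logs. A minimal sketch of the registration the mock_calls list checks (hypothetical helper and handler names):

    import signal

    def install_signal_handlers(handler):
        # the same handler is bound to both signals, as the two
        # mock.call entries assert
        signal.signal(signal.SIGINT, handler)
        signal.signal(signal.SIGTERM, handler)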
class SetupConsoleHandlerTestCase(unittest.TestCase):

View File

@@ -124,7 +124,7 @@ class CLITestCase(unittest.TestCase):
mock_project.get_service.return_value = Service(
'service',
client=mock_client,
restart='always',
restart={'Name': 'always', 'MaximumRetryCount': 0},
image='someimage')
command.run(mock_project, {
'SERVICE': 'service',

View File

@@ -10,7 +10,9 @@ import py
import pytest
from compose.config import config
from compose.config.config import resolve_environment
from compose.config.errors import ConfigurationError
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM
from tests import mock
from tests import unittest
@@ -18,20 +20,21 @@ from tests import unittest
def make_service_dict(name, service_dict, working_dir, filename=None):
"""
Test helper function to construct a ServiceLoader
Test helper function to construct a ServiceExtendsResolver
"""
return config.ServiceLoader(
resolver = config.ServiceExtendsResolver(config.ServiceConfig(
working_dir=working_dir,
filename=filename,
service_name=name,
service_dict=service_dict).make_service_dict()
name=name,
config=service_dict))
return config.process_service(resolver.run())
def service_sort(services):
return sorted(services, key=itemgetter('name'))
def build_config_details(contents, working_dir, filename):
def build_config_details(contents, working_dir='working_dir', filename='filename.yml'):
return config.ConfigDetails(
working_dir,
[config.ConfigFile(filename, contents)])
@@ -75,19 +78,39 @@ class ConfigTest(unittest.TestCase):
)
)
def test_config_invalid_service_names(self):
with self.assertRaises(ConfigurationError):
for invalid_name in ['?not?allowed', ' ', '', '!', '/', '\xe2']:
config.load(
build_config_details(
{invalid_name: {'image': 'busybox'}},
'working_dir',
'filename.yml'
)
)
def test_load_config_invalid_service_names(self):
for invalid_name in ['?not?allowed', ' ', '', '!', '/', '\xe2']:
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details(
{invalid_name: {'image': 'busybox'}},
'working_dir',
'filename.yml'))
assert 'Invalid service name \'%s\'' % invalid_name in exc.exconly()
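The rewritten test moves pytest.raises inside the loop; the old shape (a single assertRaises around the loop) passes as soon as the first value raises, so later values are never exercised. A standalone illustration of the difference (not from the diff):

    import pytest

    def must_be_string(value):
        if not isinstance(value, str):
            raise ValueError(value)

    invalid_values = [1, 2.5, None]

    # Old shape: only the first value is actually checked, because the
    # exception aborts the loop on iteration one.
    with pytest.raises(ValueError):
        for value in invalid_values:
            must_be_string(value)

    # New shape: every value must independently raise.
    for value in invalid_values:
        with pytest.raises(ValueError):
            must_be_string(value)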
def test_load_with_invalid_field_name(self):
config_details = build_config_details(
{'web': {'image': 'busybox', 'name': 'bogus'}},
'working_dir',
'filename.yml')
with pytest.raises(ConfigurationError) as exc:
config.load(config_details)
error_msg = "Unsupported config option for 'web' service: 'name'"
assert error_msg in exc.exconly()
assert "Validation failed in file 'filename.yml'" in exc.exconly()
def test_load_invalid_service_definition(self):
config_details = build_config_details(
{'web': 'wrong'},
'working_dir',
'filename.yml')
with pytest.raises(ConfigurationError) as exc:
config.load(config_details)
error_msg = "service 'web' doesn't have any configuration options"
assert error_msg in exc.exconly()
def test_config_integer_service_name_raise_validation_error(self):
expected_error_msg = "Service name: 1 needs to be a string, eg '1'"
expected_error_msg = ("In file 'filename.yml' service name: 1 needs to "
"be a string, eg '1'")
with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
config.load(
build_config_details(
@@ -126,7 +149,7 @@ class ConfigTest(unittest.TestCase):
'name': 'web',
'build': '/',
'links': ['db'],
'volumes': ['/home/user/project:/code'],
'volumes': [VolumeSpec.parse('/home/user/project:/code')],
},
{
'name': 'db',
@@ -137,25 +160,26 @@ class ConfigTest(unittest.TestCase):
def test_load_with_multiple_files_and_empty_override(self):
base_file = config.ConfigFile(
'base.yaml',
'base.yml',
{'web': {'image': 'example/web'}})
override_file = config.ConfigFile('override.yaml', None)
override_file = config.ConfigFile('override.yml', None)
details = config.ConfigDetails('.', [base_file, override_file])
with pytest.raises(ConfigurationError) as exc:
config.load(details)
assert 'Top level object needs to be a dictionary' in exc.exconly()
error_msg = "Top level object in 'override.yml' needs to be an object"
assert error_msg in exc.exconly()
def test_load_with_multiple_files_and_empty_base(self):
base_file = config.ConfigFile('base.yaml', None)
base_file = config.ConfigFile('base.yml', None)
override_file = config.ConfigFile(
'override.yaml',
'override.yml',
{'web': {'image': 'example/web'}})
details = config.ConfigDetails('.', [base_file, override_file])
with pytest.raises(ConfigurationError) as exc:
config.load(details)
assert 'Top level object needs to be a dictionary' in exc.exconly()
assert "Top level object in 'base.yml' needs to be an object" in exc.exconly()
def test_load_with_multiple_files_and_extends_in_override_file(self):
base_file = config.ConfigFile(
@@ -177,6 +201,7 @@ class ConfigTest(unittest.TestCase):
details = config.ConfigDetails('.', [base_file, override_file])
tmpdir = py.test.ensuretemp('config_test')
self.addCleanup(tmpdir.remove)
tmpdir.join('common.yml').write("""
base:
labels: ['label=one']
@@ -188,44 +213,55 @@ class ConfigTest(unittest.TestCase):
{
'name': 'web',
'image': 'example/web',
'volumes': ['/home/user/project:/code'],
'volumes': [VolumeSpec.parse('/home/user/project:/code')],
'labels': {'label': 'one'},
},
]
self.assertEqual(service_sort(service_dicts), service_sort(expected))
def test_load_with_multiple_files_and_invalid_override(self):
base_file = config.ConfigFile(
'base.yaml',
{'web': {'image': 'example/web'}})
override_file = config.ConfigFile(
'override.yaml',
{'bogus': 'thing'})
details = config.ConfigDetails('.', [base_file, override_file])
with pytest.raises(ConfigurationError) as exc:
config.load(details)
assert "service 'bogus' doesn't have any configuration" in exc.exconly()
assert "In file 'override.yaml'" in exc.exconly()
def test_load_sorts_in_dependency_order(self):
config_details = build_config_details({
'web': {
'image': 'busybox:latest',
'links': ['db'],
},
'db': {
'image': 'busybox:latest',
'volumes_from': ['volume:ro']
},
'volume': {
'image': 'busybox:latest',
'volumes': ['/tmp'],
}
})
services = config.load(config_details)
assert services[0]['name'] == 'volume'
assert services[1]['name'] == 'db'
assert services[2]['name'] == 'web'
    def test_config_valid_service_names(self):
        for valid_name in ['_', '-', '.__.', '_what-up.', 'what_.up----', 'whatup']:
            config.load(
                build_config_details(
                    {valid_name: {'image': 'busybox'}},
                    'tests/fixtures/extends',
                    'common.yml'
                )
            )

    def test_config_invalid_ports_format_validation(self):
        expected_error_msg = "Service 'web' configuration key 'ports' contains an invalid type"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            for invalid_ports in [{"1": "8000"}, False, 0, "8000", 8000, ["8000", "8000"]]:
                config.load(
                    build_config_details(
                        {'web': {'image': 'busybox', 'ports': invalid_ports}},
                        'working_dir',
                        'filename.yml'
                    )
                )

    def test_config_valid_ports_format_validation(self):
        valid_ports = [["8000", "9000"], ["8000/8050"], ["8000"], [8000], ["49153-49154:3002-3003"]]
        for ports in valid_ports:
            config.load(
                build_config_details(
                    {'web': {'image': 'busybox', 'ports': ports}},
                    'working_dir',
                    'filename.yml'
                )
            )

    def test_config_valid_service_names(self):
        for valid_name in ['_', '-', '.__.', '_what-up.', 'what_.up----', 'whatup']:
            services = config.load(
                build_config_details(
                    {valid_name: {'image': 'busybox'}},
                    'tests/fixtures/extends',
                    'common.yml'))
            assert services[0]['name'] == valid_name
def test_config_hint(self):
expected_error_msg = "(did you mean 'privileged'?)"
@@ -267,7 +303,8 @@ class ConfigTest(unittest.TestCase):
)
def test_invalid_config_not_a_dictionary(self):
expected_error_msg = "Top level object needs to be a dictionary."
expected_error_msg = ("Top level object in 'filename.yml' needs to be "
"an object.")
with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
config.load(
build_config_details(
@@ -348,6 +385,60 @@ class ConfigTest(unittest.TestCase):
)
)
def test_config_ulimits_invalid_keys_validation_error(self):
expected = ("Service 'web' configuration key 'ulimits' 'nofile' contains "
"unsupported option: 'not_soft_or_hard'")
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details(
{
'web': {
'image': 'busybox',
'ulimits': {
'nofile': {
"not_soft_or_hard": 100,
"soft": 10000,
"hard": 20000,
}
}
}
},
'working_dir',
'filename.yml'))
assert expected in exc.exconly()
def test_config_ulimits_required_keys_validation_error(self):
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details(
{
'web': {
'image': 'busybox',
'ulimits': {'nofile': {"soft": 10000}}
}
},
'working_dir',
'filename.yml'))
assert "Service 'web' configuration key 'ulimits' 'nofile'" in exc.exconly()
assert "'hard' is a required property" in exc.exconly()
def test_config_ulimits_soft_greater_than_hard_error(self):
expected = "cannot contain a 'soft' value higher than 'hard' value"
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details(
{
'web': {
'image': 'busybox',
'ulimits': {
'nofile': {"soft": 10000, "hard": 1000}
}
}
},
'working_dir',
'filename.yml'))
assert expected in exc.exconly()
def test_valid_config_which_allows_two_type_definitions(self):
expose_values = [["8000"], [8000]]
for expose in expose_values:
@@ -395,23 +486,22 @@ class ConfigTest(unittest.TestCase):
self.assertTrue(mock_logging.warn.called)
self.assertTrue(expected_warning_msg in mock_logging.warn.call_args[0][0])
def test_config_invalid_environment_dict_key_raises_validation_error(self):
expected_error_msg = "Service 'web' configuration key 'environment' contains unsupported option: '---'"
with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
config.load(
build_config_details(
{'web': {
'image': 'busybox',
'environment': {'---': 'nope'}
}},
'working_dir',
'filename.yml'
)
def test_config_valid_environment_dict_key_contains_dashes(self):
services = config.load(
build_config_details(
{'web': {
'image': 'busybox',
'environment': {'SPRING_JPA_HIBERNATE_DDL-AUTO': 'none'}
}},
'working_dir',
'filename.yml'
)
)
self.assertEqual(services[0]['environment']['SPRING_JPA_HIBERNATE_DDL-AUTO'], 'none')
def test_load_yaml_with_yaml_error(self):
tmpdir = py.test.ensuretemp('invalid_yaml_test')
self.addCleanup(tmpdir.remove)
invalid_yaml_file = tmpdir.join('docker-compose.yml')
invalid_yaml_file.write("""
web:
@@ -422,6 +512,120 @@ class ConfigTest(unittest.TestCase):
assert 'line 3, column 32' in exc.exconly()
def test_validate_extra_hosts_invalid(self):
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details({
'web': {
'image': 'alpine',
'extra_hosts': "www.example.com: 192.168.0.17",
}
}))
assert "'extra_hosts' contains an invalid type" in exc.exconly()
def test_validate_extra_hosts_invalid_list(self):
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details({
'web': {
'image': 'alpine',
'extra_hosts': [
{'www.example.com': '192.168.0.17'},
{'api.example.com': '192.168.0.18'}
],
}
}))
assert "which is an invalid type" in exc.exconly()
class PortsTest(unittest.TestCase):
INVALID_PORTS_TYPES = [
{"1": "8000"},
False,
"8000",
8000,
]
NON_UNIQUE_SINGLE_PORTS = [
["8000", "8000"],
]
INVALID_PORT_MAPPINGS = [
["8000-8001:8000"],
]
VALID_SINGLE_PORTS = [
["8000"],
["8000/tcp"],
["8000", "9000"],
[8000],
[8000, 9000],
]
VALID_PORT_MAPPINGS = [
["8000:8050"],
["49153-49154:3002-3003"],
]
def test_config_invalid_ports_type_validation(self):
for invalid_ports in self.INVALID_PORTS_TYPES:
with pytest.raises(ConfigurationError) as exc:
self.check_config({'ports': invalid_ports})
assert "contains an invalid type" in exc.value.msg
def test_config_non_unique_ports_validation(self):
for invalid_ports in self.NON_UNIQUE_SINGLE_PORTS:
with pytest.raises(ConfigurationError) as exc:
self.check_config({'ports': invalid_ports})
assert "non-unique" in exc.value.msg
def test_config_invalid_ports_format_validation(self):
for invalid_ports in self.INVALID_PORT_MAPPINGS:
with pytest.raises(ConfigurationError) as exc:
self.check_config({'ports': invalid_ports})
assert "Port ranges don't match in length" in exc.value.msg
def test_config_valid_ports_format_validation(self):
for valid_ports in self.VALID_SINGLE_PORTS + self.VALID_PORT_MAPPINGS:
self.check_config({'ports': valid_ports})
def test_config_invalid_expose_type_validation(self):
for invalid_expose in self.INVALID_PORTS_TYPES:
with pytest.raises(ConfigurationError) as exc:
self.check_config({'expose': invalid_expose})
assert "contains an invalid type" in exc.value.msg
def test_config_non_unique_expose_validation(self):
for invalid_expose in self.NON_UNIQUE_SINGLE_PORTS:
with pytest.raises(ConfigurationError) as exc:
self.check_config({'expose': invalid_expose})
assert "non-unique" in exc.value.msg
def test_config_invalid_expose_format_validation(self):
# Valid port mappings ARE NOT valid 'expose' entries
for invalid_expose in self.INVALID_PORT_MAPPINGS + self.VALID_PORT_MAPPINGS:
with pytest.raises(ConfigurationError) as exc:
self.check_config({'expose': invalid_expose})
assert "should be of the format" in exc.value.msg
def test_config_valid_expose_format_validation(self):
# Valid single ports ARE valid 'expose' entries
for valid_expose in self.VALID_SINGLE_PORTS:
self.check_config({'expose': valid_expose})
def check_config(self, cfg):
config.load(
build_config_details(
{'web': dict(image='busybox', **cfg)},
'working_dir',
'filename.yml'
)
)
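Taken together, the class constants encode that a host:container mapping is a valid 'ports' entry but never a valid 'expose' entry. A sketch of one more case in the same style as the tests above (hypothetical test method, reusing the check_config helper):

    def test_port_mapping_valid_for_ports_but_not_expose(self):
        self.check_config({'ports': ['8000:8050']})
        with pytest.raises(ConfigurationError) as exc:
            self.check_config({'expose': ['8000:8050']})
        assert "should be of the format" in exc.value.msg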
class InterpolationTest(unittest.TestCase):
@mock.patch.dict(os.environ)
@@ -513,14 +717,11 @@ class VolumeConfigTest(unittest.TestCase):
@mock.patch.dict(os.environ)
def test_volume_binding_with_environment_variable(self):
os.environ['VOLUME_PATH'] = '/host/path'
d = config.load(
build_config_details(
{'foo': {'build': '.', 'volumes': ['${VOLUME_PATH}:/container/path']}},
'.',
None,
)
)[0]
self.assertEqual(d['volumes'], ['/host/path:/container/path'])
d = config.load(build_config_details(
{'foo': {'build': '.', 'volumes': ['${VOLUME_PATH}:/container/path']}},
'.',
))[0]
self.assertEqual(d['volumes'], [VolumeSpec.parse('/host/path:/container/path')])
@pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths')
@mock.patch.dict(os.environ)
@@ -573,6 +774,11 @@ class VolumeConfigTest(unittest.TestCase):
}, working_dir='.')
self.assertEqual(d['volumes'], ['~:/data'])
def test_volume_path_with_non_ascii_directory(self):
volume = u'/Füü/data:/data'
container_path = config.resolve_volume_path(".", volume)
self.assertEqual(container_path, volume)
class MergePathMappingTest(object):
def config_name(self):
@@ -768,7 +974,7 @@ class MemoryOptionsTest(unittest.TestCase):
a mem_limit
"""
expected_error_msg = (
"Invalid 'memswap_limit' configuration for 'foo' service: when "
"Service 'foo' configuration key 'memswap_limit' is invalid: when "
"defining 'memswap_limit' you must set 'mem_limit' as well"
)
with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
@@ -836,65 +1042,54 @@ class EnvTest(unittest.TestCase):
os.environ['FILE_DEF_EMPTY'] = 'E2'
os.environ['ENV_DEF'] = 'E3'
        service_dict = make_service_dict(
            'foo', {
                'build': '.',
                'environment': {
                    'FILE_DEF': 'F1',
                    'FILE_DEF_EMPTY': '',
                    'ENV_DEF': None,
                    'NO_DEF': None
                },
            },
            'tests/'
        )
        self.assertEqual(
            service_dict['environment'],
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
        )

        service_dict = {
            'build': '.',
            'environment': {
                'FILE_DEF': 'F1',
                'FILE_DEF_EMPTY': '',
                'ENV_DEF': None,
                'NO_DEF': None
            },
        }
        self.assertEqual(
            resolve_environment(service_dict),
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
        )

    def test_env_from_file(self):
        service_dict = make_service_dict(
            'foo',
            {'build': '.', 'env_file': 'one.env'},
            'tests/fixtures/env',
        )
        self.assertEqual(
            service_dict['environment'],
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'bar'},
        )

    def test_resolve_environment_from_env_file(self):
        self.assertEqual(
            resolve_environment({'env_file': ['tests/fixtures/env/one.env']}),
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'bar'},
        )

    def test_env_from_multiple_files(self):
        service_dict = make_service_dict(
            'foo',
            {'build': '.', 'env_file': ['one.env', 'two.env']},
            'tests/fixtures/env',
        )
        self.assertEqual(
            service_dict['environment'],
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'},
        )

    def test_resolve_environment_with_multiple_env_files(self):
        service_dict = {
            'env_file': [
                'tests/fixtures/env/one.env',
                'tests/fixtures/env/two.env'
            ]
        }
        self.assertEqual(
            resolve_environment(service_dict),
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'},
        )

    def test_env_nonexistent_file(self):
        options = {'env_file': 'nonexistent.env'}
        self.assertRaises(
            ConfigurationError,
            lambda: make_service_dict('foo', options, 'tests/fixtures/env'),
        )

    def test_resolve_environment_nonexistent_file(self):
        with pytest.raises(ConfigurationError) as exc:
            config.load(build_config_details(
                {'foo': {'image': 'example', 'env_file': 'nonexistent.env'}},
                working_dir='tests/fixtures/env'))
        assert 'Couldn\'t find env file' in exc.exconly()
        assert 'nonexistent.env' in exc.exconly()

    @mock.patch.dict(os.environ)
    def test_resolve_environment_from_file(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        service_dict = make_service_dict(
            'foo',
            {'build': '.', 'env_file': 'resolve.env'},
            'tests/fixtures/env',
        )
        self.assertEqual(
            service_dict['environment'],

    @mock.patch.dict(os.environ)
    def test_resolve_environment_from_env_file_with_empty_values(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        self.assertEqual(
            resolve_environment({'env_file': ['tests/fixtures/env/resolve.env']}),
            {
                'FILE_DEF': u'bär',
                'FILE_DEF_EMPTY': '',
@@ -913,19 +1108,21 @@ class EnvTest(unittest.TestCase):
build_config_details(
{'foo': {'build': '.', 'volumes': ['$HOSTENV:$CONTAINERENV']}},
"tests/fixtures/env",
None,
)
)[0]
self.assertEqual(set(service_dict['volumes']), set(['/tmp:/host/tmp']))
self.assertEqual(
set(service_dict['volumes']),
set([VolumeSpec.parse('/tmp:/host/tmp')]))
service_dict = config.load(
build_config_details(
{'foo': {'build': '.', 'volumes': ['/opt${HOSTENV}:/opt${CONTAINERENV}']}},
"tests/fixtures/env",
None,
)
)[0]
self.assertEqual(set(service_dict['volumes']), set(['/opt/tmp:/opt/host/tmp']))
self.assertEqual(
set(service_dict['volumes']),
set([VolumeSpec.parse('/opt/tmp:/opt/host/tmp')]))
def load_from_filename(filename):
@@ -999,18 +1196,19 @@ class ExtendsTest(unittest.TestCase):
]))
    def test_circular(self):
        try:
            load_from_filename('tests/fixtures/extends/circle-1.yml')
            raise Exception("Expected config.CircularReference to be raised")
        except config.CircularReference as e:
            self.assertEqual(
                [(os.path.basename(filename), service_name) for (filename, service_name) in e.trail],
                [
                    ('circle-1.yml', 'web'),
                    ('circle-2.yml', 'web'),
                    ('circle-1.yml', 'web'),
                ],
            )

    def test_circular(self):
        with pytest.raises(config.CircularReference) as exc:
            load_from_filename('tests/fixtures/extends/circle-1.yml')

        path = [
            (os.path.basename(filename), service_name)
            for (filename, service_name) in exc.value.trail
        ]
        expected = [
            ('circle-1.yml', 'web'),
            ('circle-2.yml', 'other'),
            ('circle-1.yml', 'web'),
        ]
        self.assertEqual(path, expected)
def test_extends_validation_empty_dictionary(self):
with self.assertRaisesRegexp(ConfigurationError, 'service'):
@@ -1171,8 +1369,14 @@ class ExtendsTest(unittest.TestCase):
dicts = load_from_filename('tests/fixtures/volume-path/docker-compose.yml')
paths = [
'%s:/foo' % os.path.abspath('tests/fixtures/volume-path/common/foo'),
'%s:/bar' % os.path.abspath('tests/fixtures/volume-path/bar'),
VolumeSpec(
os.path.abspath('tests/fixtures/volume-path/common/foo'),
'/foo',
'rw'),
VolumeSpec(
os.path.abspath('tests/fixtures/volume-path/bar'),
'/bar',
'rw')
]
self.assertEqual(set(dicts[0]['volumes']), set(paths))
@@ -1221,6 +1425,70 @@ class ExtendsTest(unittest.TestCase):
},
]))
def test_extends_with_environment_and_env_files(self):
tmpdir = py.test.ensuretemp('test_extends_with_environment')
self.addCleanup(tmpdir.remove)
commondir = tmpdir.mkdir('common')
commondir.join('base.yml').write("""
app:
image: 'example/app'
env_file:
- 'envs'
environment:
- SECRET
- TEST_ONE=common
- TEST_TWO=common
""")
tmpdir.join('docker-compose.yml').write("""
ext:
extends:
file: common/base.yml
service: app
env_file:
- 'envs'
environment:
- THING
- TEST_ONE=top
""")
commondir.join('envs').write("""
COMMON_ENV_FILE
TEST_ONE=common-env-file
TEST_TWO=common-env-file
TEST_THREE=common-env-file
TEST_FOUR=common-env-file
""")
tmpdir.join('envs').write("""
TOP_ENV_FILE
TEST_ONE=top-env-file
TEST_TWO=top-env-file
TEST_THREE=top-env-file
""")
expected = [
{
'name': 'ext',
'image': 'example/app',
'environment': {
'SECRET': 'secret',
'TOP_ENV_FILE': 'secret',
'COMMON_ENV_FILE': 'secret',
'THING': 'thing',
'TEST_ONE': 'top',
'TEST_TWO': 'common',
'TEST_THREE': 'top-env-file',
'TEST_FOUR': 'common-env-file',
},
},
]
with mock.patch.dict(os.environ):
os.environ['SECRET'] = 'secret'
os.environ['THING'] = 'thing'
os.environ['COMMON_ENV_FILE'] = 'secret'
os.environ['TOP_ENV_FILE'] = 'secret'
config = load_from_filename(str(tmpdir.join('docker-compose.yml')))
assert config == expected
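The expected dict pins down the merge precedence across extends: 'environment' entries beat env_file entries, and the extending (top) file beats the extended (base) one, with bare names like SECRET and THING resolved from the shell at the end. A rough sketch of that merge order (hypothetical helper, ignoring the final shell resolution step):

    def resolve_precedence(base_env_file, base_environment,
                           top_env_file, top_environment):
        # lowest to highest: base env_file < top env_file
        # < base environment < top environment
        result = {}
        result.update(base_env_file)
        result.update(top_env_file)
        result.update(base_environment)
        result.update(top_environment)
        return result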
@pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
class ExpandPathTest(unittest.TestCase):
@@ -1297,6 +1565,34 @@ class BuildPathTest(unittest.TestCase):
service_dict = load_from_filename('tests/fixtures/build-path/docker-compose.yml')
self.assertEquals(service_dict, [{'name': 'foo', 'build': self.abs_context_path}])
def test_valid_url_in_build_path(self):
valid_urls = [
'git://github.com/docker/docker',
'git@github.com:docker/docker.git',
'git@bitbucket.org:atlassianlabs/atlassian-docker.git',
'https://github.com/docker/docker.git',
'http://github.com/docker/docker.git',
'github.com/docker/docker.git',
]
for valid_url in valid_urls:
service_dict = config.load(build_config_details({
'validurl': {'build': valid_url},
}, '.', None))
assert service_dict[0]['build'] == valid_url
def test_invalid_url_in_build_path(self):
invalid_urls = [
'example.com/bogus',
'ftp://example.com/',
'/path/does/not/exist',
]
for invalid_url in invalid_urls:
with pytest.raises(ConfigurationError) as exc:
config.load(build_config_details({
'invalidurl': {'build': invalid_url},
}, '.', None))
assert 'build path' in exc.exconly()
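A rough sketch of the kind of prefix check that would accept the URL forms above while rejecting the plain paths below it (hypothetical, not the actual compose implementation):

    # Accepts the forms in test_valid_url_in_build_path; 'example.com/bogus'
    # and 'ftp://example.com/' start with none of these prefixes.
    URL_PREFIXES = ('git://', 'git@', 'https://', 'http://', 'github.com/')

    def looks_like_build_url(path):
        return path.startswith(URL_PREFIXES)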
class GetDefaultConfigFilesTestCase(unittest.TestCase):

View File

@@ -1,6 +1,7 @@
from .. import unittest
from compose.project import DependencyError
from compose.project import sort_service_dicts
from compose.config.errors import DependencyError
from compose.config.sort_services import sort_service_dicts
from compose.config.types import VolumeFromSpec
from tests import unittest
class SortServiceTest(unittest.TestCase):
@@ -73,7 +74,7 @@ class SortServiceTest(unittest.TestCase):
},
{
'name': 'parent',
'volumes_from': ['child']
'volumes_from': [VolumeFromSpec('child', 'rw')]
},
{
'links': ['parent'],
@@ -116,7 +117,7 @@ class SortServiceTest(unittest.TestCase):
},
{
'name': 'parent',
'volumes_from': ['child']
'volumes_from': [VolumeFromSpec('child', 'ro')]
},
{
'name': 'child'
@@ -141,7 +142,7 @@ class SortServiceTest(unittest.TestCase):
},
{
'name': 'two',
'volumes_from': ['one']
'volumes_from': [VolumeFromSpec('one', 'rw')]
},
{
'name': 'one'

View File

@@ -0,0 +1,66 @@
import pytest
from compose.config.errors import ConfigurationError
from compose.config.types import parse_extra_hosts
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM
def test_parse_extra_hosts_list():
expected = {'www.example.com': '192.168.0.17'}
assert parse_extra_hosts(["www.example.com:192.168.0.17"]) == expected
expected = {'www.example.com': '192.168.0.17'}
assert parse_extra_hosts(["www.example.com: 192.168.0.17"]) == expected
assert parse_extra_hosts([
"www.example.com: 192.168.0.17",
"static.example.com:192.168.0.19",
"api.example.com: 192.168.0.18"
]) == {
'www.example.com': '192.168.0.17',
'static.example.com': '192.168.0.19',
'api.example.com': '192.168.0.18'
}
def test_parse_extra_hosts_dict():
assert parse_extra_hosts({
'www.example.com': '192.168.0.17',
'api.example.com': '192.168.0.18'
}) == {
'www.example.com': '192.168.0.17',
'api.example.com': '192.168.0.18'
}
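A minimal implementation sketch consistent with the cases above (simplified, not the exact compose code): list entries split on the first ':' with surrounding whitespace stripped, dict input passed through.

    def parse_extra_hosts_sketch(extra_hosts):
        if isinstance(extra_hosts, dict):
            return dict(extra_hosts)
        parsed = {}
        for entry in extra_hosts:
            host, address = entry.split(':', 1)
            parsed[host.strip()] = address.strip()
        return parsed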
class TestVolumeSpec(object):
def test_parse_volume_spec_only_one_path(self):
spec = VolumeSpec.parse('/the/volume')
assert spec == (None, '/the/volume', 'rw')
def test_parse_volume_spec_internal_and_external(self):
spec = VolumeSpec.parse('external:interval')
assert spec == ('external', 'interval', 'rw')
def test_parse_volume_spec_with_mode(self):
spec = VolumeSpec.parse('external:interval:ro')
assert spec == ('external', 'interval', 'ro')
spec = VolumeSpec.parse('external:interval:z')
assert spec == ('external', 'interval', 'z')
def test_parse_volume_spec_too_many_parts(self):
with pytest.raises(ConfigurationError) as exc:
VolumeSpec.parse('one:two:three:four')
assert 'has incorrect format' in exc.exconly()
@pytest.mark.xfail((not IS_WINDOWS_PLATFORM), reason='does not have a drive')
def test_parse_volume_windows_absolute_path(self):
windows_path = "c:\\Users\\me\\Documents\\shiny\\config:\\opt\\shiny\\config:ro"
assert VolumeSpec.parse(windows_path) == (
"/c/Users/me/Documents/shiny/config",
"/opt/shiny/config",
"ro"
)
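A rough sketch of the parsing rules these tests pin down (simplified; the real VolumeSpec is a namedtuple in compose.config.types and raises ConfigurationError rather than ValueError):

    from collections import namedtuple

    VolumeSpecSketch = namedtuple('VolumeSpecSketch', 'external internal mode')

    def parse_volume(volume_config):
        parts = volume_config.split(':')
        if len(parts) > 3:
            raise ValueError("Volume %s has incorrect format" % volume_config)
        if len(parts) == 1:
            return VolumeSpecSketch(None, parts[0], 'rw')   # internal only
        if len(parts) == 2:
            return VolumeSpecSketch(parts[0], parts[1], 'rw')
        return VolumeSpecSketch(parts[0], parts[1], parts[2])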

View File

@@ -34,3 +34,34 @@ class ProgressStreamTestCase(unittest.TestCase):
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)
def test_stream_output_progress_event_tty(self):
events = [
b'{"status": "Already exists", "progressDetail": {}, "id": "8d05e3af52b0"}'
]
class TTYStringIO(StringIO):
def isatty(self):
return True
output = TTYStringIO()
events = progress_stream.stream_output(events, output)
self.assertTrue(len(output.getvalue()) > 0)
def test_stream_output_progress_event_no_tty(self):
events = [
b'{"status": "Already exists", "progressDetail": {}, "id": "8d05e3af52b0"}'
]
output = StringIO()
events = progress_stream.stream_output(events, output)
self.assertEqual(len(output.getvalue()), 0)
def test_stream_output_no_progress_event_no_tty(self):
events = [
b'{"status": "Pulling from library/xy", "id": "latest"}'
]
output = StringIO()
events = progress_stream.stream_output(events, output)
self.assertTrue(len(output.getvalue()) > 0)
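The three cases hinge on output.isatty(): events carrying progress detail are only rendered to a terminal, while plain status events are always written. A minimal sketch of that dispatch (simplified, not the actual compose.progress_stream code):

    def should_write(event, output):
        is_progress_event = 'progress' in event or 'progressDetail' in event
        return not is_progress_event or output.isatty()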

View File

@@ -4,9 +4,12 @@ import docker
from .. import mock
from .. import unittest
from compose.config.types import VolumeFromSpec
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.project import Project
from compose.service import ContainerNet
from compose.service import Net
from compose.service import Service
@@ -31,29 +34,6 @@ class ProjectTest(unittest.TestCase):
self.assertEqual(project.get_service('db').name, 'db')
self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')
def test_from_dict_sorts_in_dependency_order(self):
project = Project.from_dicts('composetest', [
{
'name': 'web',
'image': 'busybox:latest',
'links': ['db'],
},
{
'name': 'db',
'image': 'busybox:latest',
'volumes_from': ['volume']
},
{
'name': 'volume',
'image': 'busybox:latest',
'volumes': ['/tmp'],
}
], None)
self.assertEqual(project.services[0].name, 'volume')
self.assertEqual(project.services[1].name, 'db')
self.assertEqual(project.services[2].name, 'web')
def test_from_config(self):
dicts = [
{
@@ -165,7 +145,7 @@ class ProjectTest(unittest.TestCase):
{
'name': 'test',
'image': 'busybox:latest',
'volumes_from': ['aaa']
'volumes_from': [VolumeFromSpec('aaa', 'rw')]
}
], self.mock_client)
self.assertEqual(project.get_service('test')._get_volumes_from(), [container_id + ":rw"])
@@ -188,17 +168,13 @@ class ProjectTest(unittest.TestCase):
{
'name': 'test',
'image': 'busybox:latest',
'volumes_from': ['vol']
'volumes_from': [VolumeFromSpec('vol', 'rw')]
}
], self.mock_client)
self.assertEqual(project.get_service('test')._get_volumes_from(), [container_name + ":rw"])
@mock.patch.object(Service, 'containers')
def test_use_volumes_from_service_container(self, mock_return):
def test_use_volumes_from_service_container(self):
container_ids = ['aabbccddee', '12345']
mock_return.return_value = [
mock.Mock(id=container_id, spec=Container)
for container_id in container_ids]
project = Project.from_dicts('test', [
{
@@ -208,10 +184,16 @@ class ProjectTest(unittest.TestCase):
{
'name': 'test',
'image': 'busybox:latest',
'volumes_from': ['vol']
'volumes_from': [VolumeFromSpec('vol', 'rw')]
}
], None)
self.assertEqual(project.get_service('test')._get_volumes_from(), [container_ids[0] + ':rw'])
with mock.patch.object(Service, 'containers') as mock_return:
mock_return.return_value = [
mock.Mock(id=container_id, spec=Container)
for container_id in container_ids]
self.assertEqual(
project.get_service('test')._get_volumes_from(),
[container_ids[0] + ':rw'])
def test_net_unset(self):
project = Project.from_dicts('test', [
@@ -263,6 +245,32 @@ class ProjectTest(unittest.TestCase):
service = project.get_service('test')
self.assertEqual(service.net.mode, 'container:' + container_name)
def test_uses_default_network_true(self):
web = Service('web', project='test', image="alpine", net=Net('test'))
db = Service('web', project='test', image="alpine", net=Net('other'))
project = Project('test', [web, db], None)
assert project.uses_default_network()
def test_uses_default_network_custom_name(self):
web = Service('web', project='test', image="alpine", net=Net('other'))
project = Project('test', [web], None)
assert not project.uses_default_network()
def test_uses_default_network_host(self):
web = Service('web', project='test', image="alpine", net=Net('host'))
project = Project('test', [web], None)
assert not project.uses_default_network()
def test_uses_default_network_container(self):
container = mock.Mock(id='test')
web = Service(
'web',
project='test',
image="alpine",
net=ContainerNet(container))
project = Project('test', [web], None)
assert not project.uses_default_network()
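The four cases encode one rule: the project uses its default network only when some service's network is the project's own named network; 'host' and 'container:<id>' modes never match. A deliberately simplified, self-contained sketch:

    def uses_default_network_sketch(project_name, net_names):
        # net_names: one entry per service, e.g. ['test', 'other'] in the
        # first case above
        return project_name in net_names

    assert uses_default_network_sketch('test', ['test', 'other'])
    assert not uses_default_network_sketch('test', ['other'])
    assert not uses_default_network_sketch('test', ['host'])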
def test_container_without_name(self):
self.mock_client.containers.return_value = [
{'Image': 'busybox:latest', 'Id': '1', 'Name': '1'},

View File

@@ -2,18 +2,18 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import docker
import pytest
from .. import mock
from .. import unittest
from compose.const import IS_WINDOWS_PLATFORM
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.service import build_ulimits
from compose.service import build_volume_binding
from compose.service import ConfigError
from compose.service import ContainerNet
from compose.service import get_container_data_volumes
from compose.service import merge_volume_bindings
@@ -21,10 +21,9 @@ from compose.service import NeedsBuildError
from compose.service import Net
from compose.service import NoSuchImageError
from compose.service import parse_repository_tag
from compose.service import parse_volume_spec
from compose.service import Service
from compose.service import ServiceNet
from compose.service import VolumeFromSpec
from compose.service import warn_on_masked_volume
class ServiceTest(unittest.TestCase):
@@ -32,11 +31,6 @@ class ServiceTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.Client)
def test_project_validation(self):
self.assertRaises(ConfigError, lambda: Service(name='foo', project='>', image='foo'))
Service(name='foo', project='bar.bar__', image='foo')
def test_containers(self):
service = Service('db', self.mock_client, 'myproject', image='foo')
self.mock_client.containers.return_value = []
@@ -213,16 +207,6 @@ class ServiceTest(unittest.TestCase):
opts = service._get_container_create_options({'image': 'foo'}, 1)
self.assertIsNone(opts.get('hostname'))
def test_hostname_defaults_to_service_name_when_using_networking(self):
service = Service(
'foo',
image='foo',
use_networking=True,
client=self.mock_client,
)
opts = service._get_container_create_options({'image': 'foo'}, 1)
self.assertEqual(opts['hostname'], 'foo')
def test_get_container_create_options_with_name_option(self):
service = Service(
'foo',
@@ -349,44 +333,38 @@ class ServiceTest(unittest.TestCase):
self.assertEqual(parse_repository_tag("user/repo@sha256:digest"), ("user/repo", "sha256:digest", "@"))
self.assertEqual(parse_repository_tag("url:5000/repo@sha256:digest"), ("url:5000/repo", "sha256:digest", "@"))
@mock.patch('compose.service.Container', autospec=True)
def test_create_container_latest_is_used_when_no_tag_specified(self, mock_container):
service = Service('foo', client=self.mock_client, image='someimage')
images = []
def pull(repo, tag=None, **kwargs):
self.assertEqual('someimage', repo)
self.assertEqual('latest', tag)
images.append({'Id': 'abc123'})
return []
service.image = lambda *args, **kwargs: mock_get_image(images)
self.mock_client.pull = pull
service.create_container()
self.assertEqual(1, len(images))
def test_create_container_with_build(self):
service = Service('foo', client=self.mock_client, build='.')
images = []
service.image = lambda *args, **kwargs: mock_get_image(images)
service.build = lambda: images.append({'Id': 'abc123'})
self.mock_client.inspect_image.side_effect = [
NoSuchImageError,
{'Id': 'abc123'},
]
self.mock_client.build.return_value = [
'{"stream": "Successfully built abcd"}',
]
service.create_container(do_build=True)
self.assertEqual(1, len(images))
self.mock_client.build.assert_called_once_with(
tag='default_foo',
dockerfile=None,
stream=True,
path='.',
pull=False,
forcerm=False,
nocache=False,
rm=True,
)
def test_create_container_no_build(self):
service = Service('foo', client=self.mock_client, build='.')
service.image = lambda: {'Id': 'abc123'}
self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
service.create_container(do_build=False)
self.assertFalse(self.mock_client.build.called)
def test_create_container_no_build_but_needs_build(self):
service = Service('foo', client=self.mock_client, build='.')
service.image = lambda *args, **kwargs: mock_get_image([])
self.mock_client.inspect_image.side_effect = NoSuchImageError
with self.assertRaises(NeedsBuildError):
service.create_container(do_build=False)
@@ -417,7 +395,7 @@ class ServiceTest(unittest.TestCase):
'options': {'image': 'example.com/foo'},
'links': [('one', 'one')],
'net': 'other',
'volumes_from': ['two'],
'volumes_from': [('two', 'rw')],
}
self.assertEqual(config_dict, expected)
@@ -442,6 +420,68 @@ class ServiceTest(unittest.TestCase):
}
self.assertEqual(config_dict, expected)
def test_specifies_host_port_with_no_ports(self):
service = Service(
'foo',
image='foo')
self.assertEqual(service.specifies_host_port(), False)
def test_specifies_host_port_with_container_port(self):
service = Service(
'foo',
image='foo',
ports=["2000"])
self.assertEqual(service.specifies_host_port(), False)
def test_specifies_host_port_with_host_port(self):
service = Service(
'foo',
image='foo',
ports=["1000:2000"])
self.assertEqual(service.specifies_host_port(), True)
def test_specifies_host_port_with_host_ip_no_port(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1::2000"])
self.assertEqual(service.specifies_host_port(), False)
def test_specifies_host_port_with_host_ip_and_port(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1:1000:2000"])
self.assertEqual(service.specifies_host_port(), True)
def test_specifies_host_port_with_container_port_range(self):
service = Service(
'foo',
image='foo',
ports=["2000-3000"])
self.assertEqual(service.specifies_host_port(), False)
def test_specifies_host_port_with_host_port_range(self):
service = Service(
'foo',
image='foo',
ports=["1000-2000:2000-3000"])
self.assertEqual(service.specifies_host_port(), True)
def test_specifies_host_port_with_host_ip_no_port_range(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1::2000-3000"])
self.assertEqual(service.specifies_host_port(), False)
def test_specifies_host_port_with_host_ip_and_port_range(self):
service = Service(
'foo',
image='foo',
ports=["127.0.0.1:1000-2000:2000-3000"])
self.assertEqual(service.specifies_host_port(), True)
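The nine cases above reduce to one rule: a ports entry specifies a host port when, after allowing for an optional leading host IP, a non-empty host part precedes the container part. A simplified sketch (not the actual compose.service code):

    def specifies_host_port_sketch(ports):
        def has_host_port(entry):
            parts = str(entry).split(':')
            if len(parts) == 1:        # "2000" or "2000-3000"
                return False
            if len(parts) == 2:        # "1000:2000"
                return True
            # "127.0.0.1:1000:2000" vs "127.0.0.1::2000"
            return parts[1] != ''
        return any(has_host_port(entry) for entry in ports or [])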
def test_get_links_with_networking(self):
service = Service(
'foo',
@@ -451,6 +491,47 @@ class ServiceTest(unittest.TestCase):
self.assertEqual(service._get_links(link_to_self=True), [])
def sort_by_name(dictionary_list):
return sorted(dictionary_list, key=lambda k: k['name'])
class BuildUlimitsTestCase(unittest.TestCase):
def test_build_ulimits_with_dict(self):
ulimits = build_ulimits(
{
'nofile': {'soft': 10000, 'hard': 20000},
'nproc': {'soft': 65535, 'hard': 65535}
}
)
expected = [
{'name': 'nofile', 'soft': 10000, 'hard': 20000},
{'name': 'nproc', 'soft': 65535, 'hard': 65535}
]
assert sort_by_name(ulimits) == sort_by_name(expected)
def test_build_ulimits_with_ints(self):
ulimits = build_ulimits({'nofile': 20000, 'nproc': 65535})
expected = [
{'name': 'nofile', 'soft': 20000, 'hard': 20000},
{'name': 'nproc', 'soft': 65535, 'hard': 65535}
]
assert sort_by_name(ulimits) == sort_by_name(expected)
def test_build_ulimits_with_integers_and_dicts(self):
ulimits = build_ulimits(
{
'nproc': 65535,
'nofile': {'soft': 10000, 'hard': 20000}
}
)
expected = [
{'name': 'nofile', 'soft': 10000, 'hard': 20000},
{'name': 'nproc', 'soft': 65535, 'hard': 65535}
]
assert sort_by_name(ulimits) == sort_by_name(expected)
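A sketch matching the three cases above (simplified; the real helper lives in compose.service): an integer value sets soft and hard to the same limit, a dict passes through with the limit name added.

    def build_ulimits_sketch(ulimit_config):
        ulimits = []
        for limit_name, soft_hard_values in (ulimit_config or {}).items():
            if isinstance(soft_hard_values, int):
                ulimits.append({'name': limit_name,
                                'soft': soft_hard_values,
                                'hard': soft_hard_values})
            else:
                ulimits.append(dict(soft_hard_values, name=limit_name))
        return ulimits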
class NetTestCase(unittest.TestCase):
def test_net(self):
@@ -494,62 +575,21 @@ class NetTestCase(unittest.TestCase):
self.assertEqual(net.service_name, service_name)
def mock_get_image(images):
if images:
return images[0]
else:
raise NoSuchImageError()
class ServiceVolumesTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.Client)
def test_parse_volume_spec_only_one_path(self):
spec = parse_volume_spec('/the/volume')
self.assertEqual(spec, (None, '/the/volume', 'rw'))
def test_parse_volume_spec_internal_and_external(self):
spec = parse_volume_spec('external:interval')
self.assertEqual(spec, ('external', 'interval', 'rw'))
def test_parse_volume_spec_with_mode(self):
spec = parse_volume_spec('external:interval:ro')
self.assertEqual(spec, ('external', 'interval', 'ro'))
spec = parse_volume_spec('external:interval:z')
self.assertEqual(spec, ('external', 'interval', 'z'))
def test_parse_volume_spec_too_many_parts(self):
with self.assertRaises(ConfigError):
parse_volume_spec('one:two:three:four')
@pytest.mark.xfail((not IS_WINDOWS_PLATFORM), reason='does not have a drive')
def test_parse_volume_windows_absolute_path(self):
windows_absolute_path = "c:\\Users\\me\\Documents\\shiny\\config:\\opt\\shiny\\config:ro"
spec = parse_volume_spec(windows_absolute_path)
self.assertEqual(
spec,
(
"/c/Users/me/Documents/shiny/config",
"/opt/shiny/config",
"ro"
)
)
def test_build_volume_binding(self):
binding = build_volume_binding(parse_volume_spec('/outside:/inside'))
self.assertEqual(binding, ('/inside', '/outside:/inside:rw'))
binding = build_volume_binding(VolumeSpec.parse('/outside:/inside'))
assert binding == ('/inside', '/outside:/inside:rw')
def test_get_container_data_volumes(self):
options = [
options = [VolumeSpec.parse(v) for v in [
'/host/volume:/host/volume:ro',
'/new/volume',
'/existing/volume',
]
]]
self.mock_client.inspect_image.return_value = {
'ContainerConfig': {
@@ -568,20 +608,20 @@ class ServiceVolumesTest(unittest.TestCase):
},
}, has_been_inspected=True)
expected = {
'/existing/volume': '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
'/mnt/image/data': '/var/lib/docker/cccccccc:/mnt/image/data:rw',
}
expected = [
VolumeSpec.parse('/var/lib/docker/aaaaaaaa:/existing/volume:rw'),
VolumeSpec.parse('/var/lib/docker/cccccccc:/mnt/image/data:rw'),
]
binds = get_container_data_volumes(container, options)
self.assertEqual(binds, expected)
volumes = get_container_data_volumes(container, options)
assert sorted(volumes) == sorted(expected)
def test_merge_volume_bindings(self):
options = [
'/host/volume:/host/volume:ro',
'/host/rw/volume:/host/rw/volume',
'/new/volume',
'/existing/volume',
VolumeSpec.parse('/host/volume:/host/volume:ro'),
VolumeSpec.parse('/host/rw/volume:/host/rw/volume'),
VolumeSpec.parse('/new/volume'),
VolumeSpec.parse('/existing/volume'),
]
self.mock_client.inspect_image.return_value = {
@@ -607,8 +647,8 @@ class ServiceVolumesTest(unittest.TestCase):
'web',
image='busybox',
volumes=[
'/host/path:/data1',
'/host/path:/data2',
VolumeSpec.parse('/host/path:/data1'),
VolumeSpec.parse('/host/path:/data2'),
],
client=self.mock_client,
)
@@ -637,7 +677,7 @@ class ServiceVolumesTest(unittest.TestCase):
service = Service(
'web',
image='busybox',
volumes=['/host/path:/data'],
volumes=[VolumeSpec.parse('/host/path:/data')],
client=self.mock_client,
)
@@ -669,25 +709,53 @@ class ServiceVolumesTest(unittest.TestCase):
['/mnt/sda1/host/path:/data:rw'],
)
def test_warn_on_masked_volume_no_warning_when_no_container_volumes(self):
volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
container_volumes = []
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
assert not mock_log.warn.called
def test_warn_on_masked_volume_when_masked(self):
volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
container_volumes = [
VolumeSpec('/var/lib/docker/path', '/path', 'rw'),
VolumeSpec('/var/lib/docker/path', '/other', 'rw'),
]
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
mock_log.warn.assert_called_once_with(mock.ANY)
def test_warn_on_masked_no_warning_with_same_path(self):
volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
container_volumes = [VolumeSpec('/home/user', '/path', 'rw')]
service = 'service_name'
with mock.patch('compose.service.log', autospec=True) as mock_log:
warn_on_masked_volume(volumes_option, container_volumes, service)
assert not mock_log.warn.called
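The three tests pin down the masking rule: warn when a requested bind mount hides an existing container volume at the same internal path but with a different external source. A self-contained sketch of that predicate (hypothetical names):

    from collections import namedtuple

    Vol = namedtuple('Vol', 'external internal mode')

    def is_masked(volume_option, container_volume):
        return (volume_option.internal == container_volume.internal
                and volume_option.external != container_volume.external)

    assert is_masked(Vol('/home/user', '/path', 'rw'),
                     Vol('/var/lib/docker/path', '/path', 'rw'))
    assert not is_masked(Vol('/home/user', '/path', 'rw'),
                         Vol('/home/user', '/path', 'rw'))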
def test_create_with_special_volume_mode(self):
self.mock_client.inspect_image.return_value = {'Id': 'imageid'}
create_calls = []
def create_container(*args, **kwargs):
create_calls.append((args, kwargs))
return {'Id': 'containerid'}
self.mock_client.create_container = create_container
volumes = ['/tmp:/foo:z']
self.mock_client.create_container.return_value = {'Id': 'containerid'}
volume = '/tmp:/foo:z'
Service(
'web',
client=self.mock_client,
image='busybox',
volumes=volumes,
volumes=[VolumeSpec.parse(volume)],
).create_container()
self.assertEqual(len(create_calls), 1)
self.assertEqual(self.mock_client.create_host_config.call_args[1]['binds'], volumes)
assert self.mock_client.create_container.call_count == 1
self.assertEqual(
self.mock_client.create_host_config.call_args[1]['binds'],
[volume])

View File

@@ -1,16 +1,44 @@
from .. import unittest
# encoding: utf-8
from __future__ import unicode_literals
from compose import utils
class JsonSplitterTestCase(unittest.TestCase):
class TestJsonSplitter(object):
def test_json_splitter_no_object(self):
data = '{"foo": "bar'
self.assertEqual(utils.json_splitter(data), (None, None))
assert utils.json_splitter(data) is None
def test_json_splitter_with_object(self):
data = '{"foo": "bar"}\n \n{"next": "obj"}'
self.assertEqual(
utils.json_splitter(data),
({'foo': 'bar'}, '{"next": "obj"}')
)
assert utils.json_splitter(data) == ({'foo': 'bar'}, '{"next": "obj"}')
class TestStreamAsText(object):
def test_stream_with_non_utf_unicode_character(self):
stream = [b'\xed\xf3\xf3']
output, = utils.stream_as_text(stream)
assert output == '���'
def test_stream_with_utf_character(self):
stream = ['ěĝ'.encode('utf-8')]
output, = utils.stream_as_text(stream)
assert output == 'ěĝ'
class TestJsonStream(object):
def test_with_falsy_entries(self):
stream = [
'{"one": "two"}\n{}\n',
"[1, 2, 3]\n[]\n",
]
output = list(utils.json_stream(stream))
assert output == [
{'one': 'two'},
{},
[1, 2, 3],
[],
]
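The json_splitter contract the tests above describe: pop one complete JSON document off the front of the buffer and return (document, remainder), or None when the buffer holds only a partial document. A minimal sketch of that behavior (not the actual implementation):

    import json

    def json_splitter_sketch(buffer):
        decoder = json.JSONDecoder()
        try:
            obj, index = decoder.raw_decode(buffer)
            return obj, buffer[index:].lstrip()
        except ValueError:
            return None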

View File

@@ -43,4 +43,6 @@ directory = coverage-html
[flake8]
# Allow really long lines for now
max-line-length = 140
# Set this high for now
max-complexity = 20
exclude = compose/packages