Compare commits


3 Commits

Author SHA1 Message Date
Joffrey F
c7bdf9e392 Bump 1.14.0
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-15 17:10:46 -07:00
Joffrey F
4f532f6f20 Fix ps output to show all ports
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-15 17:07:59 -07:00
Joffrey F
696ddf616b ServicePort merge_field should account for external IP and protocol
Signed-off-by: Joffrey F <joffrey@docker.com>
2017-06-15 17:07:59 -07:00
85 changed files with 552 additions and 3996 deletions

View File

@@ -7,5 +7,3 @@ coverage-html
docs/_site
venv
.tox
**/__pycache__
*.pyc

View File

@@ -1,5 +1,5 @@
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: 'v0.9.1'
sha: 'v0.4.2'
hooks:
- id: check-added-large-files
- id: check-docstring-first
@@ -14,7 +14,7 @@
- id: requirements-txt-fixer
- id: trailing-whitespace
- repo: git://github.com/asottile/reorder_python_imports
sha: v0.3.5
sha: v0.1.0
hooks:
- id: reorder-python-imports
language_version: 'python2.7'

View File

@@ -1,186 +1,6 @@
Change log
==========
1.17.1 (2017-11-08)
-------------------
### Bugfixes
- Fixed a bug that would prevent creating new containers when using
container labels in the list format as part of the service's definition.
1.17.0 (2017-11-02)
-------------------
### New features
#### Compose file version 3.4
- Introduced version 3.4 of the `docker-compose.yml` specification.
This version must be used with Docker Engine 17.06.0 or above.
- Added support for `cache_from`, `network` and `target` options in build
configurations
- Added support for the `order` parameter in the `update_config` section
- Added support for setting a custom name in volume definitions using
the `name` parameter
#### Compose file version 2.3
- Added support for `shm_size` option in build configuration
#### Compose file version 2.x
- Added support for extension fields (`x-*`). Also available for v3.4 files
#### All formats
- Added new `--no-start` to the `up` command, allowing users to create all
resources (networks, volumes, containers) without starting services.
The `create` command is deprecated in favor of this new option
### Bugfixes
- Fixed a bug where `extra_hosts` values would be overridden by extension
files instead of merging together
- Fixed a bug where the validation for v3.2 files would prevent using the
`consistency` field in service volume definitions
- Fixed a bug that would cause a crash when configuration fields expecting
unique items would contain duplicates
- Fixed a bug where mount overrides with a different mode would create a
duplicate entry instead of overriding the original entry
- Fixed a bug where build labels declared as a list wouldn't be properly
parsed
- Fixed a bug where the output of `docker-compose config` would be invalid
for some versions if the file contained custom-named external volumes
- Improved error handling when issuing a build command on Windows using an
unsupported file version
- Fixed an issue where networks with identical names would sometimes be
created when running `up` commands concurrently.
1.16.1 (2017-09-01)
-------------------
### Bugfixes
- Fixed bug that prevented using `extra_hosts` in several configuration files.
1.16.0 (2017-08-31)
-------------------
### New features
#### Compose file version 2.3
- Introduced version 2.3 of the `docker-compose.yml` specification.
This version must be used with Docker Engine 17.06.0 or above.
- Added support for the `target` parameter in build configurations
- Added support for the `start_period` parameter in healthcheck
configurations
#### Compose file version 2.x
- Added support for the `blkio_config` parameter in service definitions
- Added support for setting a custom name in volume definitions using
the `name` parameter (not available for version 2.0)
#### All formats
- Added new CLI flag `--no-ansi` to suppress ANSI control characters in
output
### Bugfixes
- Fixed a bug where nested `extends` instructions weren't resolved
properly, causing "file not found" errors
- Fixed several issues with `.dockerignore` parsing
- Fixed issues where logs of TTY-enabled services were being printed
incorrectly and causing `MemoryError` exceptions
- Fixed a bug where printing application logs would sometimes be interrupted
by a `UnicodeEncodeError` exception on Python 3
- The `$` character in the output of `docker-compose config` is now
properly escaped
- Fixed a bug where running `docker-compose top` would sometimes fail
with an uncaught exception
- Fixed a bug where `docker-compose pull` with the `--parallel` flag
would return a `0` exit code when failing
- Fixed an issue where keys in `deploy.resources` were not being validated
- Fixed an issue where the `logging` options in the output of
`docker-compose config` would be set to `null`, an invalid value
- Fixed the output of the `docker-compose images` command when an image
would come from a private repository using an explicit port number
- Fixed the output of `docker-compose config` when a port definition used
`0` as the value for the published port
1.15.0 (2017-07-26)
-------------------
### New features
#### Compose file version 2.2
- Added support for the `network` parameter in build configurations.
#### Compose file version 2.1 and up
- The `pid` option in a service's definition now supports a `service:<name>`
value.
- Added support for the `storage_opt` parameter in service definitions.
This option is not available for the v3 format
#### All formats
- Added `--quiet` flag to `docker-compose pull`, suppressing progress output
- Some improvements to CLI output
### Bugfixes
- Volumes specified through the `--volume` flag of `docker-compose run` now
complement volumes declared in the service's definition instead of replacing
them
- Fixed a bug where using multiple Compose files would unset the scale value
defined inside the Compose file.
- Fixed an issue where the `credHelpers` entries in the `config.json` file
were not being honored by Compose
- Fixed a bug where using multiple Compose files with port declarations
would cause failures in Python 3 environments
- Fixed a bug where some proxy-related options present in the user's
environment would prevent Compose from running
- Fixed an issue where the output of `docker-compose config` would be invalid
if the original file used `Y` or `N` values
- Fixed an issue preventing `up` operations on a previously created stack on
Windows Engine.
1.14.0 (2017-06-19)
-------------------

View File

@@ -19,47 +19,34 @@ RUN set -ex; \
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-1.8.3 \
-o /usr/local/bin/docker && \
SHA256=f024bc65c45a3778cf07213d26016075e8172de8f6e4b5702bedde06c241650f; \
echo "${SHA256} /usr/local/bin/docker" | sha256sum -c - && \
chmod +x /usr/local/bin/docker
# Build Python 2.7.13 from source
RUN set -ex; \
curl -LO https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz && \
SHA256=a4f05a0720ce0fd92626f0278b6b433eee9a6173ddf2bced7957dfb599a5ece1; \
echo "${SHA256} Python-2.7.13.tgz" | sha256sum -c - && \
tar -xzf Python-2.7.13.tgz; \
curl -L https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz | tar -xz; \
cd Python-2.7.13; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-2.7.13; \
rm Python-2.7.13.tgz
rm -rf /Python-2.7.13
# Build python 3.4 from source
RUN set -ex; \
curl -LO https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz && \
SHA256=fe59daced99549d1d452727c050ae486169e9716a890cffb0d468b376d916b48; \
echo "${SHA256} Python-3.4.6.tgz" | sha256sum -c - && \
tar -xzf Python-3.4.6.tgz; \
curl -L https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz | tar -xz; \
cd Python-3.4.6; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-3.4.6; \
rm Python-3.4.6.tgz
rm -rf /Python-3.4.6
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib
# Install pip
RUN set -ex; \
curl -LO https://bootstrap.pypa.io/get-pip.py && \
SHA256=19dae841a150c86e2a09d475b5eb0602861f2a5b7761ec268049a662dbd2bd0c; \
echo "${SHA256} get-pip.py" | sha256sum -c - && \
python get-pip.py
curl -L https://bootstrap.pypa.io/get-pip.py | python
# Python3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen

View File

@@ -1,15 +0,0 @@
FROM s390x/alpine:3.6
ARG COMPOSE_VERSION=1.16.1
RUN apk add --update --no-cache \
python \
py-pip \
&& pip install --no-cache-dir docker-compose==$COMPOSE_VERSION \
&& rm -rf /var/cache/apk/*
WORKDIR /data
VOLUME /data
ENTRYPOINT ["docker-compose"]

View File

@@ -15,7 +15,6 @@
"bfirsh",
"dnephin",
"mnowster",
"shin-",
]
[people]
@@ -45,8 +44,3 @@
Name = "Mazz Mosley"
Email = "mazz@houseofmnowster.com"
GitHub = "mnowster"
[People.shin-]
Name = "Joffrey F"
Email = "joffrey@docker.com"
GitHub = "shin-"

View File

@@ -1,6 +1,3 @@
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import unicode_literals
from compose.cli.main import main
main()

View File

@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
__version__ = '1.17.1'
__version__ = '1.14.0'

View File

@@ -121,7 +121,7 @@ def get_image_digest(service, allow_push=False):
def push_image(service):
try:
digest = service.push()
except Exception:
except:
log.error(
"Failed to push image for service '{s.name}'. Please use an "
"image tag that can be pushed to a Docker "

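The `push_image` hunk above narrows a bare `except:` to `except Exception:`. A minimal sketch of why the narrower form matters: `SystemExit` and `KeyboardInterrupt` derive from `BaseException`, not `Exception`, so `except Exception` lets them propagate while still logging ordinary push failures (the `push` function here is a hypothetical stand-in):

```python
# Sketch: `except Exception:` vs a bare `except:`.
# A bare except would also swallow SystemExit and KeyboardInterrupt,
# which subclass BaseException; except Exception lets them through.
def push(fail_with):
    raise fail_with


def push_image_guarded(fail_with):
    try:
        push(fail_with)
    except Exception:
        return 'logged push failure'


result = push_image_guarded(RuntimeError('boom'))  # caught and logged

try:
    push_image_guarded(SystemExit(1))  # not caught: propagates to the caller
    propagated = False
except SystemExit:
    propagated = True
```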
View File

@@ -17,8 +17,6 @@ try:
env[str('PIP_DISABLE_PIP_VERSION_CHECK')] = str('1')
s_cmd = subprocess.Popen(
# DO NOT replace this call with a `sys.executable` call. It breaks the binary
# distribution (with the binary calling itself recursively over and over).
['pip', 'freeze'], stderr=subprocess.PIPE, stdout=subprocess.PIPE,
env=env
)

View File

@@ -1,7 +1,7 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from ..const import IS_WINDOWS_PLATFORM
import colorama
NAMES = [
'grey',
@@ -33,9 +33,7 @@ def make_color_fn(code):
return lambda s: ansi_color(code, s)
if IS_WINDOWS_PLATFORM:
import colorama
colorama.init(strip=False)
colorama.init(strip=False)
for (name, code) in get_pairs():
globals()[name] = make_color_fn(code)

View File

@@ -57,26 +57,6 @@ def handle_connection_errors(client):
except (ReadTimeout, socket.timeout) as e:
log_timeout_error(client.timeout)
raise ConnectionError()
except Exception as e:
if is_windows():
import pywintypes
if isinstance(e, pywintypes.error):
log_windows_pipe_error(e)
raise ConnectionError()
raise
def log_windows_pipe_error(exc):
if exc.winerror == 232: # https://github.com/docker/compose/issues/5005
log.error(
"The current Compose file version is not compatible with your engine version. "
"Please upgrade your Compose file to a more recent version, or set "
"a COMPOSE_API_VERSION in your environment."
)
else:
log.error(
"Windows named pipe error: {} (code: {})".format(exc.strerror, exc.winerror)
)
def log_timeout_error(timeout):

View File

@@ -102,18 +102,8 @@ class LogPrinter(object):
# active containers to tail, so continue
continue
self.write(line)
def write(self, line):
try:
self.output.write(line)
except UnicodeEncodeError:
# This may happen if the user's locale settings don't support UTF-8
# and UTF-8 characters are present in the log line. The following
# will output a "degraded" log with unsupported characters
# replaced by `?`
self.output.write(line.encode('ascii', 'replace').decode())
self.output.flush()
self.output.flush()
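The `write()` method removed in this hunk degrades log lines that the active locale cannot encode, replacing unsupported characters with `?`. A standalone sketch of the same fallback:

```python
# Mirrors line.encode('ascii', 'replace').decode() from the hunk above:
# characters the target encoding cannot represent become '?'.
def degrade(line):
    return line.encode('ascii', 'replace').decode()
```

For example, `degrade('héllo wörld')` yields a "degraded" but printable line instead of raising `UnicodeEncodeError`.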
def remove_stopped_threads(thread_map):

View File

@@ -97,10 +97,7 @@ def dispatch():
{'options_first': True, 'version': get_version_info('compose')})
options, handler, command_options = dispatcher.parse(sys.argv[1:])
setup_console_handler(console_handler, options.get('--verbose'), options.get('--no-ansi'))
setup_parallel_logger(options.get('--no-ansi'))
if options.get('--no-ansi'):
command_options['--no-color'] = True
setup_console_handler(console_handler, options.get('--verbose'))
return functools.partial(perform_command, options, handler, command_options)
@@ -130,14 +127,8 @@ def setup_logging():
logging.getLogger("requests").propagate = False
def setup_parallel_logger(noansi):
if noansi:
import compose.parallel
compose.parallel.ParallelStreamWriter.set_noansi()
def setup_console_handler(handler, verbose, noansi=False):
if handler.stream.isatty() and noansi is False:
def setup_console_handler(handler, verbose):
if handler.stream.isatty():
format_class = ConsoleWarningFormatter
else:
format_class = logging.Formatter
@@ -168,7 +159,6 @@ class TopLevelCommand(object):
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name (default: directory name)
--verbose Show more output
--no-ansi Do not print ANSI control characters
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
@@ -319,7 +309,6 @@ class TopLevelCommand(object):
def create(self, options):
"""
Creates containers for a service.
This command is deprecated. Use the `up` command with `--no-start` instead.
Usage: create [options] [SERVICE...]
@@ -333,11 +322,6 @@ class TopLevelCommand(object):
"""
service_names = options['SERVICE']
log.warn(
'The create command is deprecated. '
'Use the up command with the --no-start flag instead.'
)
self.project.create(
service_names=service_names,
strategy=convergence_strategy_from_opts(options),
@@ -514,7 +498,7 @@ class TopLevelCommand(object):
rows = []
for container in containers:
image_config = container.image_config
repo_tags = image_config['RepoTags'][0].rsplit(':', 1)
repo_tags = image_config['RepoTags'][0].split(':')
image_id = image_config['Id'].split(':')[1][:12]
size = human_readable_file_size(image_config['Size'])
rows.append([
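This hunk is the `docker-compose images` fix mentioned in the 1.16.0 changelog: an image pulled from a private registry with an explicit port contains an extra `:` in the repository part, so the tag must be split off from the right. A small illustration (the reference string is hypothetical):

```python
# Splitting on the first ':' cuts the reference at the registry port;
# rsplit(':', 1) correctly separates only the tag.
ref = 'registry.example.com:5000/myimage:latest'
repo, tag = ref.rsplit(':', 1)
broken = ref.split(':')  # three pieces instead of repo + tag
```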
@@ -650,13 +634,11 @@ class TopLevelCommand(object):
Options:
--ignore-pull-failures Pull what it can and ignores images with pull failures.
--parallel Pull multiple images in parallel.
--quiet Pull without printing progress information
"""
self.project.pull(
service_names=options['SERVICE'],
ignore_pull_failures=options.get('--ignore-pull-failures'),
parallel_pull=options.get('--parallel'),
silent=options.get('--quiet'),
parallel_pull=options.get('--parallel')
)
def push(self, options):
@@ -908,7 +890,6 @@ class TopLevelCommand(object):
--no-recreate If containers already exist, don't recreate them.
Incompatible with --force-recreate.
--no-build Don't build an image, even if it's missing.
--no-start Don't start the services after creating them.
--build Build images before starting containers.
--abort-on-container-exit Stops all containers if any container was stopped.
Incompatible with -d.
@@ -929,16 +910,10 @@ class TopLevelCommand(object):
timeout = timeout_from_opts(options)
remove_orphans = options['--remove-orphans']
detached = options.get('-d')
no_start = options.get('--no-start')
if detached and (cascade_stop or exit_value_from):
if detached and cascade_stop:
raise UserError("--abort-on-container-exit and -d cannot be combined.")
if no_start:
for excluded in ['-d', '--abort-on-container-exit', '--exit-code-from']:
if options.get(excluded):
raise UserError('--no-start and {} cannot be combined.'.format(excluded))
with up_shutdown_context(self.project, service_names, timeout, detached):
to_attach = self.project.up(
service_names=service_names,
@@ -949,10 +924,9 @@ class TopLevelCommand(object):
detached=detached,
remove_orphans=remove_orphans,
scale_override=parse_scale_args(options['--scale']),
start=not no_start
)
if detached or no_start:
if detached:
return
attached_containers = filter_containers_to_service_names(to_attach, service_names)
@@ -969,10 +943,33 @@ class TopLevelCommand(object):
if cascade_stop:
print("Aborting on container exit...")
all_containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
exit_code = compute_exit_code(
exit_value_from, attached_containers, cascade_starter, all_containers
)
exit_code = 0
if exit_value_from:
candidates = list(filter(
lambda c: c.service == exit_value_from,
attached_containers))
if not candidates:
log.error(
'No containers matching the spec "{0}" '
'were run.'.format(exit_value_from)
)
exit_code = 2
elif len(candidates) > 1:
exit_values = filter(
lambda e: e != 0,
[c.inspect()['State']['ExitCode'] for c in candidates]
)
exit_code = exit_values[0]
else:
exit_code = candidates[0].inspect()['State']['ExitCode']
else:
for e in self.project.containers(service_names=options['SERVICE'], stopped=True):
if (not e.is_running and cascade_starter == e.name):
if not e.exit_code == 0:
exit_code = e.exit_code
break
self.project.stop(service_names=service_names, timeout=timeout)
sys.exit(exit_code)
@@ -993,37 +990,6 @@ class TopLevelCommand(object):
print(get_version_info('full'))
def compute_exit_code(exit_value_from, attached_containers, cascade_starter, all_containers):
exit_code = 0
if exit_value_from:
candidates = list(filter(
lambda c: c.service == exit_value_from,
attached_containers))
if not candidates:
log.error(
'No containers matching the spec "{0}" '
'were run.'.format(exit_value_from)
)
exit_code = 2
elif len(candidates) > 1:
exit_values = filter(
lambda e: e != 0,
[c.inspect()['State']['ExitCode'] for c in candidates]
)
exit_code = exit_values[0]
else:
exit_code = candidates[0].inspect()['State']['ExitCode']
else:
for e in all_containers:
if (not e.is_running and cascade_starter == e.name):
if not e.exit_code == 0:
exit_code = e.exit_code
break
return exit_code
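One detail worth noting in both versions of this exit-code logic: `exit_values[0]` indexes the result of `filter()`, which is a list under Python 2 but a lazy, non-subscriptable iterator under Python 3. A minimal Python 3-safe sketch of the same step:

```python
# On Python 3, filter() returns an iterator; materialize it with list()
# before indexing to pick the first non-zero exit code.
codes = [0, 0, 137, 1]
exit_values = list(filter(lambda e: e != 0, codes))
exit_code = exit_values[0] if exit_values else 0
```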
def convergence_strategy_from_opts(options):
no_recreate = options['--no-recreate']
force_recreate = options['--force-recreate']

View File

@@ -15,14 +15,9 @@ from cached_property import cached_property
from . import types
from .. import const
from ..const import COMPOSEFILE_V1 as V1
from ..const import COMPOSEFILE_V2_1 as V2_1
from ..const import COMPOSEFILE_V3_0 as V3_0
from ..const import COMPOSEFILE_V3_4 as V3_4
from ..utils import build_string_dict
from ..utils import parse_bytes
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from ..version import ComposeVersion
from .environment import env_vars_from_file
from .environment import Environment
from .environment import split_env
@@ -49,7 +44,6 @@ from .validation import validate_depends_on
from .validation import validate_extends_file_path
from .validation import validate_links
from .validation import validate_network_mode
from .validation import validate_pid_mode
from .validation import validate_service_constraints
from .validation import validate_top_level_object
from .validation import validate_ulimits
@@ -112,7 +106,6 @@ DOCKER_CONFIG_KEYS = [
]
ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'blkio_config',
'build',
'container_name',
'credential_spec',
@@ -122,7 +115,6 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'logging',
'network_mode',
'init',
'scale',
]
DOCKER_VALID_URL_PREFIXES = (
@@ -194,16 +186,15 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
if version == '1':
raise ConfigurationError(
'Version in "{}" is invalid. {}'
.format(self.filename, VERSION_EXPLANATION)
)
.format(self.filename, VERSION_EXPLANATION))
if version == '2':
return const.COMPOSEFILE_V2_0
version = const.COMPOSEFILE_V2_0
if version == '3':
return const.COMPOSEFILE_V3_0
version = const.COMPOSEFILE_V3_0
return ComposeVersion(version)
return version
def get_service(self, name):
return self.get_service_dicts()[name]
@@ -407,12 +398,11 @@ def load_mapping(config_files, get_func, entity_type, working_dir=None):
external = config.get('external')
if external:
name_field = 'name' if entity_type == 'Volume' else 'external_name'
validate_external(entity_type, name, config, config_file.version)
validate_external(entity_type, name, config)
if isinstance(external, dict):
config[name_field] = external.get('name')
elif not config.get('name'):
config[name_field] = name
config['external_name'] = external.get('name')
else:
config['external_name'] = name
if 'driver_opts' in config:
config['driver_opts'] = build_string_dict(
@@ -428,12 +418,14 @@ def load_mapping(config_files, get_func, entity_type, working_dir=None):
return mapping
def validate_external(entity_type, name, config, version):
if (version < V2_1 or (version >= V3_0 and version < V3_4)) and len(config.keys()) > 1:
raise ConfigurationError(
"{} {} declared as external but specifies additional attributes "
"({}).".format(
entity_type, name, ', '.join(k for k in config if k != 'external')))
def validate_external(entity_type, name, config):
if len(config.keys()) <= 1:
return
raise ConfigurationError(
"{} {} declared as external but specifies additional attributes "
"({}).".format(
entity_type, name, ', '.join(k for k in config if k != 'external')))
def load_services(config_details, config_file):
@@ -502,7 +494,7 @@ def process_config_file(config_file, environment, service_name=None):
'service',
environment)
if config_file.version > V1:
if config_file.version != V1:
processed_config = dict(config_file.config)
processed_config['services'] = services
processed_config['volumes'] = interpolate_config_section(
@@ -515,13 +507,14 @@ def process_config_file(config_file, environment, service_name=None):
config_file.get_networks(),
'network',
environment)
if config_file.version >= const.COMPOSEFILE_V3_1:
if config_file.version in (const.COMPOSEFILE_V3_1, const.COMPOSEFILE_V3_2,
const.COMPOSEFILE_V3_3):
processed_config['secrets'] = interpolate_config_section(
config_file,
config_file.get_secrets(),
'secrets',
environment)
if config_file.version >= const.COMPOSEFILE_V3_3:
if config_file.version in (const.COMPOSEFILE_V3_3):
processed_config['configs'] = interpolate_config_section(
config_file,
config_file.get_configs(),
@@ -575,21 +568,12 @@ class ServiceExtendsResolver(object):
config_path = self.get_extended_config_path(extends)
service_name = extends['service']
if config_path == self.config_file.filename:
try:
service_config = self.config_file.get_service(service_name)
except KeyError:
raise ConfigurationError(
"Cannot extend service '{}' in {}: Service not found".format(
service_name, config_path)
)
else:
extends_file = ConfigFile.from_filename(config_path)
validate_config_version([self.config_file, extends_file])
extended_file = process_config_file(
extends_file, self.environment, service_name=service_name
)
service_config = extended_file.get_service(service_name)
extends_file = ConfigFile.from_filename(config_path)
validate_config_version([self.config_file, extends_file])
extended_file = process_config_file(
extends_file, self.environment, service_name=service_name
)
service_config = extended_file.get_service(service_name)
return config_path, service_config, service_name
@@ -683,7 +667,6 @@ def validate_service(service_config, service_names, config_file):
validate_cpu(service_config)
validate_ulimits(service_config)
validate_network_mode(service_config, service_names)
validate_pid_mode(service_config, service_names)
validate_depends_on(service_config, service_names)
validate_links(service_config, service_names)
@@ -706,41 +689,36 @@ def process_service(service_config):
]
if 'build' in service_dict:
process_build_section(service_dict, working_dir)
if isinstance(service_dict['build'], six.string_types):
service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
elif isinstance(service_dict['build'], dict) and 'context' in service_dict['build']:
path = service_dict['build']['context']
service_dict['build']['context'] = resolve_build_path(working_dir, path)
if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)
if 'sysctls' in service_dict:
service_dict['sysctls'] = build_string_dict(parse_sysctls(service_dict['sysctls']))
if 'labels' in service_dict:
service_dict['labels'] = parse_labels(service_dict['labels'])
if 'extra_hosts' in service_dict:
service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])
if 'sysctls' in service_dict:
service_dict['sysctls'] = build_string_dict(parse_sysctls(service_dict['sysctls']))
service_dict = process_depends_on(service_dict)
for field in ['dns', 'dns_search', 'tmpfs']:
if field in service_dict:
service_dict[field] = to_list(service_dict[field])
service_dict = process_blkio_config(process_ports(
process_healthcheck(service_dict, service_config.name)
))
service_dict = process_healthcheck(service_dict, service_config.name)
service_dict = process_ports(service_dict)
return service_dict
def process_build_section(service_dict, working_dir):
if isinstance(service_dict['build'], six.string_types):
service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
elif isinstance(service_dict['build'], dict):
if 'context' in service_dict['build']:
path = service_dict['build']['context']
service_dict['build']['context'] = resolve_build_path(working_dir, path)
if 'labels' in service_dict['build']:
service_dict['build']['labels'] = parse_labels(service_dict['build']['labels'])
def process_ports(service_dict):
if 'ports' not in service_dict:
return service_dict
@@ -763,31 +741,6 @@ def process_depends_on(service_dict):
return service_dict
def process_blkio_config(service_dict):
if not service_dict.get('blkio_config'):
return service_dict
for field in ['device_read_bps', 'device_write_bps']:
if field in service_dict['blkio_config']:
for v in service_dict['blkio_config'].get(field, []):
rate = v.get('rate', 0)
v['rate'] = parse_bytes(rate)
if v['rate'] is None:
raise ConfigurationError('Invalid format for bytes value: "{}"'.format(rate))
for field in ['device_read_iops', 'device_write_iops']:
if field in service_dict['blkio_config']:
for v in service_dict['blkio_config'].get(field, []):
try:
v['rate'] = int(v.get('rate', 0))
except ValueError:
raise ConfigurationError(
'Invalid IOPS value: "{}". Must be a positive integer.'.format(v.get('rate'))
)
return service_dict
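`process_blkio_config` above normalizes byte-rate strings through compose's `parse_bytes` helper (imported from `..utils`). A simplified, self-contained stand-in for that flow, with an assumed unit table; the real helper is more complete:

```python
# Simplified stand-in for compose's parse_bytes; the unit table here is
# an assumption for illustration only.
def parse_bytes(value):
    units = {'k': 1024, 'm': 1024 ** 2, 'g': 1024 ** 3}
    s = str(value).lower().rstrip('b')
    if s and s[-1] in units:
        try:
            return int(float(s[:-1]) * units[s[-1]])
        except ValueError:
            return None
    try:
        return int(s)
    except ValueError:
        return None


def normalize_rates(blkio_config):
    # Mirrors the device_read_bps/device_write_bps loop in the hunk above.
    for field in ('device_read_bps', 'device_write_bps'):
        for entry in blkio_config.get(field, []):
            rate = parse_bytes(entry.get('rate', 0))
            if rate is None:
                raise ValueError(
                    'Invalid format for bytes value: "{}"'.format(entry.get('rate')))
            entry['rate'] = rate
    return blkio_config
```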
def process_healthcheck(service_dict, service_name):
if 'healthcheck' not in service_dict:
return service_dict
@@ -805,12 +758,16 @@ def process_healthcheck(service_dict, service_name):
elif 'test' in raw:
hc['test'] = raw['test']
for field in ['interval', 'timeout', 'start_period']:
if field in raw:
if not isinstance(raw[field], six.integer_types):
hc[field] = parse_nanoseconds_int(raw[field])
else: # Conversion has been done previously
hc[field] = raw[field]
if 'interval' in raw:
if not isinstance(raw['interval'], six.integer_types):
hc['interval'] = parse_nanoseconds_int(raw['interval'])
else: # Conversion has been done previously
hc['interval'] = raw['interval']
if 'timeout' in raw:
if not isinstance(raw['timeout'], six.integer_types):
hc['timeout'] = parse_nanoseconds_int(raw['timeout'])
else: # Conversion has been done previously
hc['timeout'] = raw['timeout']
if 'retries' in raw:
hc['retries'] = raw['retries']
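The newer `process_healthcheck` folds the per-field conversions into one loop over `interval`, `timeout`, and `start_period`, each parsed with `parse_nanoseconds_int`. A simplified stand-in for that parser (the duration grammar here is an assumption; the real helper lives in compose's utils):

```python
import re

# Nanoseconds per unit; 'ms' must precede 'm' and 's' in the regex
# alternation so that "500ms" is not parsed as "500m" + dangling "s".
UNITS_NS = {'h': 3600 * 10 ** 9, 'm': 60 * 10 ** 9, 's': 10 ** 9,
            'ms': 10 ** 6, 'us': 10 ** 3, 'ns': 1}


def parse_nanoseconds_int(value):
    total = 0
    for amount, unit in re.findall(r'(\d+(?:\.\d+)?)(ms|us|ns|h|m|s)', value):
        total += int(float(amount) * UNITS_NS[unit])
    return total
```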
@@ -955,7 +912,6 @@ def merge_service_dicts(base, override, version):
md.merge_sequence('secrets', types.ServiceSecret.parse)
md.merge_sequence('configs', types.ServiceConfig.parse)
md.merge_mapping('deploy', parse_deploy)
md.merge_mapping('extra_hosts', parse_extra_hosts)
for field in ['volumes', 'devices']:
md.merge_field(field, merge_path_mappings)
@@ -971,8 +927,6 @@ def merge_service_dicts(base, override, version):
md.merge_field('logging', merge_logging, default={})
merge_ports(md, base, override)
md.merge_field('blkio_config', merge_blkio_config, default={})
md.merge_field('healthcheck', merge_healthchecks, default={})
for field in set(ALLOWED_KEYS) - set(md):
md.merge_scalar(field)
@@ -991,14 +945,6 @@ def merge_unique_items_lists(base, override):
return sorted(set().union(base, override))
def merge_healthchecks(base, override):
if override.get('disabled') is True:
return override
result = base.copy()
result.update(override)
return result
def merge_ports(md, base, override):
def parse_sequence_func(seq):
acc = []
@@ -1013,7 +959,7 @@ def merge_ports(md, base, override):
merged = parse_sequence_func(md.base.get(field, []))
merged.update(parse_sequence_func(md.override.get(field, [])))
md[field] = [item for item in sorted(merged.values(), key=lambda x: x.target)]
md[field] = [item for item in sorted(merged.values())]
def merge_build(output, base, override):
@@ -1026,42 +972,19 @@ def merge_build(output, base, override):
md = MergeDict(to_dict(base), to_dict(override))
md.merge_scalar('context')
md.merge_scalar('dockerfile')
md.merge_scalar('network')
md.merge_scalar('target')
md.merge_scalar('shm_size')
md.merge_mapping('args', parse_build_arguments)
md.merge_field('cache_from', merge_unique_items_lists, default=[])
md.merge_mapping('labels', parse_labels)
return dict(md)
def merge_blkio_config(base, override):
md = MergeDict(base, override)
md.merge_scalar('weight')
def merge_blkio_limits(base, override):
index = dict((b['path'], b) for b in base)
for o in override:
index[o['path']] = o
return sorted(list(index.values()), key=lambda x: x['path'])
for field in [
"device_read_bps", "device_read_iops", "device_write_bps",
"device_write_iops", "weight_device",
]:
md.merge_field(field, merge_blkio_limits, default=[])
return dict(md)
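The `merge_blkio_limits` helper in the hunk above deduplicates per-device limit entries by device path, letting override entries win, then re-sorts. As a standalone sketch:

```python
# Later (override) entries replace base entries that target the same
# device path; the merged list is sorted by path for stable output.
def merge_blkio_limits(base, override):
    index = {b['path']: b for b in base}
    for o in override:
        index[o['path']] = o
    return sorted(index.values(), key=lambda x: x['path'])
```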
def merge_logging(base, override):
md = MergeDict(base, override)
md.merge_scalar('driver')
if md.get('driver') == base.get('driver') or base.get('driver') is None:
md.merge_mapping('options', lambda m: m or {})
elif override.get('options'):
md['options'] = override.get('options', {})
else:
md['options'] = override.get('options')
return dict(md)
@@ -1145,30 +1068,24 @@ def resolve_volume_paths(working_dir, service_dict):
def resolve_volume_path(working_dir, volume):
mount_params = None
if isinstance(volume, dict):
container_path = volume.get('target')
host_path = volume.get('source')
mode = None
container_path = volume.get('target')
if host_path:
if volume.get('read_only'):
mode = 'ro'
container_path += ':ro'
if volume.get('volume', {}).get('nocopy'):
mode = 'nocopy'
mount_params = (host_path, mode)
container_path += ':nocopy'
else:
container_path, mount_params = split_path_mapping(volume)
container_path, host_path = split_path_mapping(volume)
if mount_params is not None:
host_path, mode = mount_params
if host_path is None:
return container_path
if host_path is not None:
if host_path.startswith('.'):
host_path = expand_path(working_dir, host_path)
host_path = os.path.expanduser(host_path)
return u"{}:{}{}".format(host_path, container_path, (':' + mode if mode else ''))
return container_path
return u"{}:{}".format(host_path, container_path)
else:
return container_path
def normalize_build(service_dict, working_dir, environment):
@@ -1248,12 +1165,7 @@ def split_path_mapping(volume_path):
if ':' in volume_config:
(host, container) = volume_config.split(':', 1)
container_drive, container_path = splitdrive(container)
mode = None
if ':' in container_path:
container_path, mode = container_path.rsplit(':', 1)
return (container_drive + container_path, (drive + host, mode))
return (container, drive + host)
else:
return (volume_path, None)
@@ -1265,11 +1177,7 @@ def join_path_mapping(pair):
elif host is None:
return container
else:
host, mode = host
result = ":".join((host, container))
if mode:
result += ":" + mode
return result
return ":".join((host, container))
def expand_path(working_dir, path):

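`resolve_volume_path` and `split_path_mapping` above turn short volume syntax into a container path plus `(host, mode)` details. A simplified sketch of that parse, ignoring the Windows drive-letter handling the real code does via `splitdrive`:

```python
# Simplified parse of short volume syntax ("host:container[:mode]").
# Windows drive letters are NOT handled here; the real split_path_mapping
# uses splitdrive to avoid treating "C:" as a host/container separator.
def split_volume(volume_path):
    parts = volume_path.split(':')
    if len(parts) == 1:               # anonymous volume: container path only
        return volume_path, None
    if len(parts) == 2:               # host:container
        host, container = parts
        return container, (host, None)
    host, container, mode = parts[:3]  # host:container:mode
    return container, (host, mode)
```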
View File

@@ -41,7 +41,6 @@
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
@@ -51,33 +50,6 @@
"type": "object",
"properties": {
"blkio_config": {
"type": "object",
"properties": {
"device_read_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_read_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"weight": {"type": "integer"},
"weight_device": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_weight"}
}
},
"additionalProperties": false
},
"build": {
"oneOf": [
{"type": "string"},
@@ -354,23 +326,6 @@
]
},
"blkio_limit": {
"type": "object",
"properties": {
"path": {"type": "string"},
"rate": {"type": ["integer", "string"]}
},
"additionalProperties": false
},
"blkio_weight": {
"type": "object",
"properties": {
"path": {"type": "string"},
"weight": {"type": "integer"}
},
"additionalProperties": false
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",

View File

@@ -41,7 +41,6 @@
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
@@ -51,34 +50,6 @@
"type": "object",
"properties": {
"blkio_config": {
"type": "object",
"properties": {
"device_read_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_read_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"weight": {"type": "integer"},
"weight_device": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_weight"}
}
},
"additionalProperties": false
},
"build": {
"oneOf": [
{"type": "string"},
@@ -258,7 +229,6 @@
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
@@ -372,8 +342,7 @@
},
"additionalProperties": false
},
-        "labels": {"$ref": "#/definitions/list_or_dict"},
-        "name": {"type": "string"}
+        "labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
@@ -406,23 +375,6 @@
]
},
"blkio_limit": {
"type": "object",
"properties": {
"path": {"type": "string"},
"rate": {"type": ["integer", "string"]}
},
"additionalProperties": false
},
"blkio_weight": {
"type": "object",
"properties": {
"path": {"type": "string"},
"weight": {"type": "integer"}
},
"additionalProperties": false
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",

View File

@@ -41,7 +41,6 @@
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
@@ -51,34 +50,6 @@
"type": "object",
"properties": {
"blkio_config": {
"type": "object",
"properties": {
"device_read_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_read_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"weight": {"type": "integer"},
"weight_device": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_weight"}
}
},
"additionalProperties": false
},
"build": {
"oneOf": [
{"type": "string"},
@@ -89,8 +60,7 @@
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
-        "cache_from": {"$ref": "#/definitions/list_of_strings"},
-        "network": {"type": "string"}
+        "cache_from": {"$ref": "#/definitions/list_of_strings"}
},
"additionalProperties": false
}
@@ -265,7 +235,6 @@
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
@@ -379,8 +348,7 @@
},
"additionalProperties": false
},
-        "labels": {"$ref": "#/definitions/list_or_dict"},
-        "name": {"type": "string"}
+        "labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
@@ -413,23 +381,6 @@
]
},
"blkio_limit": {
"type": "object",
"properties": {
"path": {"type": "string"},
"rate": {"type": ["integer", "string"]}
},
"additionalProperties": false
},
"blkio_weight": {
"type": "object",
"properties": {
"path": {"type": "string"},
"weight": {"type": "integer"}
},
"additionalProperties": false
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",

View File

@@ -1,451 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v2.3.json",
"type": "object",
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"blkio_config": {
"type": "object",
"properties": {
"device_read_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_read_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_bps": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"device_write_iops": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_limit"}
},
"weight": {"type": "integer"},
"weight_device": {
"type": "array",
"items": {"$ref": "#/definitions/blkio_weight"}
}
},
"additionalProperties": false
},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"},
"target": {"type": "string"},
"shm_size": {"type": ["integer", "string"]}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"container_name": {"type": "string"},
"cpu_count": {"type": "integer", "minimum": 0},
"cpu_percent": {"type": "integer", "minimum": 0, "maximum": 100},
"cpu_shares": {"type": ["number", "string"]},
"cpu_quota": {"type": ["number", "string"]},
"cpus": {"type": "number", "minimum": 0},
"cpuset": {"type": "string"},
"depends_on": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"additionalProperties": false,
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"type": "object",
"additionalProperties": false,
"properties": {
"condition": {
"type": "string",
"enum": ["service_started", "service_healthy"]
}
},
"required": ["condition"]
}
}
}
]
},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns_opt": {
"type": "array",
"items": {
"type": "string"
},
"uniqueItems": true
},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"extends": {
"oneOf": [
{
"type": "string"
},
{
"type": "object",
"properties": {
"service": {"type": "string"},
"file": {"type": "string"}
},
"required": ["service"],
"additionalProperties": false
}
]
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"init": {"type": ["boolean", "string"]},
"ipc": {"type": "string"},
"isolation": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {"type": "object"}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"mem_limit": {"type": ["number", "string"]},
"mem_reservation": {"type": ["string", "integer"]},
"mem_swappiness": {"type": "integer"},
"memswap_limit": {"type": ["number", "string"]},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"},
"link_local_ips": {"$ref": "#/definitions/list_of_strings"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"oom_score_adj": {"type": "integer", "minimum": -1000, "maximum": 1000},
"group_add": {
"type": "array",
"items": {
"type": ["string", "number"]
},
"uniqueItems": true
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "ports"
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"scale": {"type": "integer"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"pids_limit": {"type": ["number", "string"]},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"volume_driver": {"type": "string"},
"volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"working_dir": {"type": "string"}
},
"dependencies": {
"memswap_limit": ["mem_limit"]
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"start_period": {"type": "string"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"network": {
"id": "#/definitions/network",
"type": "object",
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array"
},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": "string"}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"enable_ipv6": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"},
"name": {"type": "string"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"blkio_limit": {
"type": "object",
"properties": {
"path": {"type": "string"},
"rate": {"type": ["integer", "string"]}
},
"additionalProperties": false
},
"blkio_weight": {
"type": "object",
"properties": {
"path": {"type": "string"},
"weight": {"type": "integer"}
},
"additionalProperties": false
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}
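The schema above only admits service, network, and volume keys matching the `^[a-zA-Z0-9._-]+$` pattern (see `patternProperties`), with `additionalProperties: false` rejecting everything else. A small stdlib-only sketch of that name check; `check_service_names` is an illustrative helper, not part of Compose itself:

```python
import re

# The key pattern used by the schema's patternProperties blocks.
SERVICE_NAME = re.compile(r'^[a-zA-Z0-9._-]+$')


def check_service_names(config):
    """Return the service names the schema would reject.

    `config` is a parsed compose file (a dict). Illustrative helper,
    not Compose's own validation path, which uses the full JSON Schema.
    """
    services = config.get('services', {})
    return [name for name in services if not SERVICE_NAME.match(name)]
```

In Compose itself this check is performed by a full draft-04 JSON Schema validator; the regex alone is just the key-naming rule.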

View File

@@ -240,8 +240,7 @@
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
-        },
-        "additionalProperties": false
+        }
},
"restart_policy": {
"type": "object",

View File

@@ -269,8 +269,7 @@
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
-        },
-        "additionalProperties": false
+        }
},
"restart_policy": {
"type": "object",

View File

@@ -72,7 +72,6 @@
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"}
},
"additionalProperties": false
@@ -170,8 +169,7 @@
"type": "array",
"items": {
"oneOf": [
-            {"type": "number", "format": "ports"},
-            {"type": "string", "format": "ports"},
+            {"type": ["string", "number"], "format": "ports"},
{
"type": "object",
"properties": {
@@ -250,7 +248,6 @@
"source": {"type": "string"},
"target": {"type": "string"},
"read_only": {"type": "boolean"},
"consistency": {"type": "string"},
"bind": {
"type": "object",
"properties": {
@@ -315,8 +312,7 @@
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
-        },
-        "additionalProperties": false
+        }
},
"restart_policy": {
"type": "object",

View File

@@ -348,8 +348,7 @@
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
-        },
-        "additionalProperties": false
+        }
},
"restart_policy": {
"type": "object",

View File

@@ -1,544 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.4.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/secret"
}
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/config"
}
},
"additionalProperties": false
}
},
"patternProperties": {"^x-": {}},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"},
"target": {"type": "string"}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"configs": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"container_name": {"type": "string"},
"credential_spec": {"type": "object", "properties": {
"file": {"type": "string"},
"registry": {"type": "string"}
}},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"oneOf": [
{"type": "number", "format": "ports"},
{"type": "string", "format": "ports"},
{
"type": "object",
"properties": {
"mode": {"type": "string"},
"target": {"type": "integer"},
"published": {"type": "integer"},
"protocol": {"type": "string"}
},
"additionalProperties": false
}
]
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"secrets": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string"},
"source": {"type": "string"},
"target": {"type": "string"},
"read_only": {"type": "boolean"},
"consistency": {"type": "string"},
"bind": {
"type": "object",
"properties": {
"propagation": {"type": "string"}
}
},
"volume": {
"type": "object",
"properties": {
"nocopy": {"type": "boolean"}
}
}
}
}
],
"uniqueItems": true
}
},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string", "format": "duration"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string", "format": "duration"},
"start_period": {"type": "string", "format": "duration"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"endpoint_mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"},
"order": {"type": "string", "enum": [
"start-first", "stop-first"
]}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
},
"additionalProperties": false
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}},
"preferences": {
"type": "array",
"items": {
"type": "object",
"properties": {
"spread": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"resource": {
"id": "#/definitions/resource",
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"attachable": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"name": {"type": "string"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"config": {
"id": "#/definitions/config",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}
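The `ports` section of the schema above accepts either short-syntax strings/numbers or long-syntax objects with `mode`, `target`, `published`, and `protocol`. One of the commits in this compare ("ServicePort merge_field should account for external IP and protocol") hinges on exactly these components. A sketch of expanding short syntax into the long form; Compose's real `ServicePort` type handles more edge cases, and the external IP captured here has no long-syntax field in v3.4, so the `external_ip` key is purely illustrative:

```python
def normalize_port(port):
    """Expand a short-syntax port entry ("[ip:][published:]target[/protocol]")
    into a long-syntax-style dict. Sketch only, not Compose's ServicePort."""
    if isinstance(port, int):
        return {'target': port, 'protocol': 'tcp', 'mode': 'ingress'}
    spec, _, protocol = port.partition('/')
    parts = spec.split(':')
    result = {'protocol': protocol or 'tcp', 'mode': 'ingress'}
    if len(parts) == 3:
        # "ip:published:target" -- keep the IP under an illustrative key;
        # the v3.4 long syntax has no field for it.
        result['external_ip'] = parts[0]
        parts = parts[1:]
    if len(parts) == 2:
        result['published'] = int(parts[0])
    result['target'] = int(parts[-1])
    return result
```

Because the external IP and protocol distinguish otherwise-identical mappings, they must participate in merging and display logic, which is what the `ps` output fix in this compare addresses.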

View File

@@ -1,542 +0,0 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"id": "config_schema_v3.5.json",
"type": "object",
"required": ["version"],
"properties": {
"version": {
"type": "string"
},
"services": {
"id": "#/properties/services",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/service"
}
},
"additionalProperties": false
},
"networks": {
"id": "#/properties/networks",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/network"
}
}
},
"volumes": {
"id": "#/properties/volumes",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/volume"
}
},
"additionalProperties": false
},
"secrets": {
"id": "#/properties/secrets",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/secret"
}
},
"additionalProperties": false
},
"configs": {
"id": "#/properties/configs",
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"$ref": "#/definitions/config"
}
},
"additionalProperties": false
}
},
"additionalProperties": false,
"definitions": {
"service": {
"id": "#/definitions/service",
"type": "object",
"properties": {
"deploy": {"$ref": "#/definitions/deployment"},
"build": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"context": {"type": "string"},
"dockerfile": {"type": "string"},
"args": {"$ref": "#/definitions/list_or_dict"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"cache_from": {"$ref": "#/definitions/list_of_strings"},
"network": {"type": "string"},
"target": {"type": "string"},
"shm_size": {"type": ["integer", "string"]}
},
"additionalProperties": false
}
]
},
"cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"cgroup_parent": {"type": "string"},
"command": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"configs": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"container_name": {"type": "string"},
"credential_spec": {"type": "object", "properties": {
"file": {"type": "string"},
"registry": {"type": "string"}
}},
"depends_on": {"$ref": "#/definitions/list_of_strings"},
"devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"dns": {"$ref": "#/definitions/string_or_list"},
"dns_search": {"$ref": "#/definitions/string_or_list"},
"domainname": {"type": "string"},
"entrypoint": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"env_file": {"$ref": "#/definitions/string_or_list"},
"environment": {"$ref": "#/definitions/list_or_dict"},
"expose": {
"type": "array",
"items": {
"type": ["string", "number"],
"format": "expose"
},
"uniqueItems": true
},
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"healthcheck": {"$ref": "#/definitions/healthcheck"},
"hostname": {"type": "string"},
"image": {"type": "string"},
"ipc": {"type": "string"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"logging": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"options": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number", "null"]}
}
}
},
"additionalProperties": false
},
"mac_address": {"type": "string"},
"network_mode": {"type": "string"},
"networks": {
"oneOf": [
{"$ref": "#/definitions/list_of_strings"},
{
"type": "object",
"patternProperties": {
"^[a-zA-Z0-9._-]+$": {
"oneOf": [
{
"type": "object",
"properties": {
"aliases": {"$ref": "#/definitions/list_of_strings"},
"ipv4_address": {"type": "string"},
"ipv6_address": {"type": "string"}
},
"additionalProperties": false
},
{"type": "null"}
]
}
},
"additionalProperties": false
}
]
},
"pid": {"type": ["string", "null"]},
"ports": {
"type": "array",
"items": {
"oneOf": [
{"type": "number", "format": "ports"},
{"type": "string", "format": "ports"},
{
"type": "object",
"properties": {
"mode": {"type": "string"},
"target": {"type": "integer"},
"published": {"type": "integer"},
"protocol": {"type": "string"}
},
"additionalProperties": false
}
]
},
"uniqueItems": true
},
"privileged": {"type": "boolean"},
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"shm_size": {"type": ["number", "string"]},
"secrets": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"properties": {
"source": {"type": "string"},
"target": {"type": "string"},
"uid": {"type": "string"},
"gid": {"type": "string"},
"mode": {"type": "number"}
}
}
]
}
},
"sysctls": {"$ref": "#/definitions/list_or_dict"},
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {
"type": "object",
"patternProperties": {
"^[a-z]+$": {
"oneOf": [
{"type": "integer"},
{
"type":"object",
"properties": {
"hard": {"type": "integer"},
"soft": {"type": "integer"}
},
"required": ["soft", "hard"],
"additionalProperties": false
}
]
}
}
},
"user": {"type": "string"},
"userns_mode": {"type": "string"},
"volumes": {
"type": "array",
"items": {
"oneOf": [
{"type": "string"},
{
"type": "object",
"required": ["type"],
"properties": {
"type": {"type": "string"},
"source": {"type": "string"},
"target": {"type": "string"},
"read_only": {"type": "boolean"},
"consistency": {"type": "string"},
"bind": {
"type": "object",
"properties": {
"propagation": {"type": "string"}
}
},
"volume": {
"type": "object",
"properties": {
"nocopy": {"type": "boolean"}
}
}
}
}
],
"uniqueItems": true
}
},
"working_dir": {"type": "string"}
},
"additionalProperties": false
},
"healthcheck": {
"id": "#/definitions/healthcheck",
"type": "object",
"additionalProperties": false,
"properties": {
"disable": {"type": "boolean"},
"interval": {"type": "string"},
"retries": {"type": "number"},
"test": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"timeout": {"type": "string"}
}
},
"deployment": {
"id": "#/definitions/deployment",
"type": ["object", "null"],
"properties": {
"mode": {"type": "string"},
"endpoint_mode": {"type": "string"},
"replicas": {"type": "integer"},
"labels": {"$ref": "#/definitions/list_or_dict"},
"update_config": {
"type": "object",
"properties": {
"parallelism": {"type": "integer"},
"delay": {"type": "string", "format": "duration"},
"failure_action": {"type": "string"},
"monitor": {"type": "string", "format": "duration"},
"max_failure_ratio": {"type": "number"},
"order": {"type": "string", "enum": [
"start-first", "stop-first"
]}
},
"additionalProperties": false
},
"resources": {
"type": "object",
"properties": {
"limits": {"$ref": "#/definitions/resource"},
"reservations": {"$ref": "#/definitions/resource"}
},
"additionalProperties": false
},
"restart_policy": {
"type": "object",
"properties": {
"condition": {"type": "string"},
"delay": {"type": "string", "format": "duration"},
"max_attempts": {"type": "integer"},
"window": {"type": "string", "format": "duration"}
},
"additionalProperties": false
},
"placement": {
"type": "object",
"properties": {
"constraints": {"type": "array", "items": {"type": "string"}},
"preferences": {
"type": "array",
"items": {
"type": "object",
"properties": {
"spread": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
}
},
"additionalProperties": false
},
"resource": {
"id": "#/definitions/resource",
"type": "object",
"properties": {
"cpus": {"type": "string"},
"memory": {"type": "string"}
},
"additionalProperties": false
},
"network": {
"id": "#/definitions/network",
"type": ["object", "null"],
"properties": {
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"ipam": {
"type": "object",
"properties": {
"driver": {"type": "string"},
"config": {
"type": "array",
"items": {
"type": "object",
"properties": {
"subnet": {"type": "string"}
},
"additionalProperties": false
}
}
},
"additionalProperties": false
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"internal": {"type": "boolean"},
"attachable": {"type": "boolean"},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"volume": {
"id": "#/definitions/volume",
"type": ["object", "null"],
"properties": {
"name": {"type": "string"},
"driver": {"type": "string"},
"driver_opts": {
"type": "object",
"patternProperties": {
"^.+$": {"type": ["string", "number"]}
}
},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
},
"additionalProperties": false
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"secret": {
"id": "#/definitions/secret",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"config": {
"id": "#/definitions/config",
"type": "object",
"properties": {
"file": {"type": "string"},
"external": {
"type": ["boolean", "object"],
"properties": {
"name": {"type": "string"}
}
},
"labels": {"$ref": "#/definitions/list_or_dict"}
},
"additionalProperties": false
},
"string_or_list": {
"oneOf": [
{"type": "string"},
{"$ref": "#/definitions/list_of_strings"}
]
},
"list_of_strings": {
"type": "array",
"items": {"type": "string"},
"uniqueItems": true
},
"list_or_dict": {
"oneOf": [
{
"type": "object",
"patternProperties": {
".+": {
"type": ["string", "number", "null"]
}
},
"additionalProperties": false
},
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
]
},
"constraints": {
"service": {
"id": "#/definitions/constraints/service",
"anyOf": [
{"required": ["build"]},
{"required": ["image"]}
],
"properties": {
"build": {
"required": ["context"]
}
}
}
}
}
}
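The `ulimits` definition in the schema above accepts either a bare integer or an object that must contain exactly the integer keys `soft` and `hard` (`required` plus `additionalProperties: false`). A hand-rolled sketch of that `oneOf` rule — not the jsonschema machinery Compose actually uses, just the shape it enforces:

```python
def validate_ulimit(value):
    """Mirror the schema's oneOf: a bare integer, or a dict that
    must contain exactly the integer keys 'soft' and 'hard'."""
    if isinstance(value, int):
        return True
    if isinstance(value, dict):
        return (set(value) == {"soft", "hard"}
                and all(isinstance(v, int) for v in value.values()))
    return False

print(validate_ulimit(1024))                          # bare integer form
print(validate_ulimit({"soft": 1024, "hard": 2048}))  # object form
print(validate_ulimit({"soft": 1024}))                # missing 'hard'
```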

View File

@@ -4,7 +4,7 @@ from __future__ import unicode_literals
VERSION_EXPLANATION = (
'You might be seeing this error because you\'re using the wrong Compose file version. '
'Either specify a supported version (e.g "2.2" or "3.3") and place '
'Either specify a supported version ("2.0", "2.1", "3.0", "3.1", "3.2") and place '
'your service definitions under the `services` key, or omit the `version` key '
'and place your service definitions at the root of the file to use '
'version 1.\nFor more on the Compose file format versions, see '

View File

@@ -7,6 +7,7 @@ from string import Template
import six
from .errors import ConfigurationError
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
@@ -27,7 +28,7 @@ class Interpolator(object):
def interpolate_environment_variables(version, config, section, environment):
if version <= V2_0:
if version in (V2_0, V1):
interpolator = Interpolator(Template, environment)
else:
interpolator = Interpolator(TemplateWithDefaults, environment)
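The hunk above routes v1/v2.0 files through plain `string.Template` and newer file versions through `TemplateWithDefaults` (Compose's own subclass that adds `${VAR:-default}` syntax, not shown in this diff). A minimal sketch of the plain-Template path, with a hypothetical `interpolate` helper:

```python
from string import Template

def interpolate(value, environment):
    # v1/v2.0 behaviour: plain $VAR / ${VAR} substitution, no defaults
    return Template(value).substitute(environment)

print(interpolate("${IMAGE}:${TAG}", {"IMAGE": "redis", "TAG": "4.0"}))
```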

View File

@@ -7,9 +7,9 @@ import yaml
from compose.config import types
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_1 as V2_1
from compose.const import COMPOSEFILE_V3_0 as V3_0
from compose.const import COMPOSEFILE_V2_2 as V2_2
from compose.const import COMPOSEFILE_V3_2 as V3_2
from compose.const import COMPOSEFILE_V3_4 as V3_4
from compose.const import COMPOSEFILE_V3_3 as V3_3
def serialize_config_type(dumper, data):
@@ -21,30 +21,15 @@ def serialize_dict_type(dumper, data):
return dumper.represent_dict(data.repr())
def serialize_string(dumper, data):
""" Ensure boolean-like strings are quoted in the output and escape $ characters """
representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
data = data.replace('$', '$$')
if data.lower() in ('y', 'n', 'yes', 'no', 'on', 'off', 'true', 'false'):
# Empirically only y/n appears to be an issue, but this might change
# depending on which PyYaml version is being used. Err on safe side.
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='"')
return representer(data)
yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.ServiceSecret, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServiceConfig, serialize_dict_type)
yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)
yaml.SafeDumper.add_representer(str, serialize_string)
yaml.SafeDumper.add_representer(six.text_type, serialize_string)
def denormalize_config(config, image_digests=None):
result = {'version': str(V2_1) if config.version == V1 else str(config.version)}
result = {'version': V2_1 if config.version == V1 else config.version}
denormalized_services = [
denormalize_service_dict(
service_dict,
@@ -56,7 +41,6 @@ def denormalize_config(config, image_digests=None):
service_dict.pop('name'): service_dict
for service_dict in denormalized_services
}
for key in ('networks', 'volumes', 'secrets', 'configs'):
config_dict = getattr(config, key)
if not config_dict:
@@ -66,12 +50,6 @@ def denormalize_config(config, image_digests=None):
if 'external_name' in conf:
del conf['external_name']
if 'name' in conf:
if config.version < V2_1 or (config.version >= V3_0 and config.version < V3_4):
del conf['name']
elif 'external' in conf:
conf['external'] = True
return result
@@ -80,8 +58,7 @@ def serialize_config(config, image_digests=None):
denormalize_config(config, image_digests),
default_flow_style=False,
indent=2,
width=80
)
width=80)
def serialize_ns_time_value(value):
@@ -117,7 +94,7 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
if version == V1 and 'network_mode' not in service_dict:
service_dict['network_mode'] = 'bridge'
if 'depends_on' in service_dict and (version < V2_1 or version >= V3_0):
if 'depends_on' in service_dict and version not in (V2_1, V2_2):
service_dict['depends_on'] = sorted([
svc for svc in service_dict['depends_on'].keys()
])
@@ -132,11 +109,7 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
service_dict['healthcheck']['timeout']
)
if 'start_period' in service_dict['healthcheck']:
service_dict['healthcheck']['start_period'] = serialize_ns_time_value(
service_dict['healthcheck']['start_period']
)
if 'ports' in service_dict and version < V3_2:
if 'ports' in service_dict and version not in (V3_2, V3_3):
service_dict['ports'] = [
p.legacy_repr() if isinstance(p, types.ServicePort) else p
for p in service_dict['ports']
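The `serialize_string` hunk above quotes boolean-like strings (YAML 1.1 parses bare `y`/`no`/`on`/… as booleans) and doubles `$` so later interpolation passes leave it alone. A minimal restatement of those two checks without the PyYAML dumper plumbing (helper names are illustrative):

```python
def needs_quoting(data):
    """YAML 1.1 parses these bare words as booleans, so they must
    be emitted as quoted strings -- same guard as serialize_string."""
    return data.lower() in ('y', 'n', 'yes', 'no', 'on', 'off', 'true', 'false')

def escape_dollars(data):
    # '$' is doubled so a later interpolation pass leaves it intact
    return data.replace('$', '$$')

print(needs_quoting('no'))       # would be parsed as a boolean unquoted
print(needs_quoting('node'))     # safe bare word
print(escape_dollars('$HOME'))
```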

View File

@@ -38,7 +38,6 @@ def get_service_dependents(service_dict, services):
if (name in get_service_names(service.get('links', [])) or
name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
name == get_service_name_from_network_mode(service.get('network_mode')) or
name == get_service_name_from_network_mode(service.get('pid')) or
name in service.get('depends_on', []))
]

View File

@@ -295,28 +295,24 @@ class ServicePort(namedtuple('_ServicePort', 'target published protocol mode ext
if not isinstance(spec, dict):
result = []
try:
for k, v in build_port_bindings([spec]).items():
if '/' in k:
target, proto = k.split('/', 1)
for k, v in build_port_bindings([spec]).items():
if '/' in k:
target, proto = k.split('/', 1)
else:
target, proto = (k, None)
for pub in v:
if pub is None:
result.append(
cls(target, None, proto, None, None)
)
elif isinstance(pub, tuple):
result.append(
cls(target, pub[1], proto, None, pub[0])
)
else:
target, proto = (k, None)
for pub in v:
if pub is None:
result.append(
cls(target, None, proto, None, None)
)
elif isinstance(pub, tuple):
result.append(
cls(target, pub[1], proto, None, pub[0])
)
else:
result.append(
cls(target, pub, proto, None, None)
)
except ValueError as e:
raise ConfigurationError(str(e))
result.append(
cls(target, pub, proto, None, None)
)
return result
return [cls(
@@ -343,7 +339,7 @@ class ServicePort(namedtuple('_ServicePort', 'target published protocol mode ext
def normalize_port_dict(port):
return '{external_ip}{has_ext_ip}{published}{is_pub}{target}/{protocol}'.format(
published=port.get('published', ''),
is_pub=(':' if port.get('published') is not None or port.get('external_ip') else ''),
is_pub=(':' if port.get('published') or port.get('external_ip') else ''),
target=port.get('target'),
protocol=port.get('protocol', 'tcp'),
external_ip=port.get('external_ip', ''),
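The `normalize_port_dict` hunk above rebuilds the short `ip:published:target/protocol` syntax; its format call also references a `has_ext_ip` argument supplied outside the visible hunk. A self-contained sketch of the whole function, showing why the `published or external_ip` change matters for ports like `127.0.0.1::80`:

```python
def normalize_port_dict(port):
    """Rebuild the short port syntax from a parsed port dict,
    including external IP and protocol."""
    return '{external_ip}{has_ext_ip}{published}{is_pub}{target}/{protocol}'.format(
        published=port.get('published', ''),
        is_pub=(':' if port.get('published') or port.get('external_ip') else ''),
        target=port.get('target'),
        protocol=port.get('protocol', 'tcp'),
        external_ip=port.get('external_ip', ''),
        has_ext_ip=(':' if port.get('external_ip') else ''),
    )

print(normalize_port_dict({'target': 80}))  # -> 80/tcp
print(normalize_port_dict({'target': 80, 'published': 8080,
                           'protocol': 'udp', 'external_ip': '127.0.0.1'}))
```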

View File

@@ -172,21 +172,6 @@ def validate_network_mode(service_config, service_names):
"is undefined.".format(s=service_config, dep=dependency))
def validate_pid_mode(service_config, service_names):
pid_mode = service_config.config.get('pid')
if not pid_mode:
return
dependency = get_service_name_from_network_mode(pid_mode)
if not dependency:
return
if dependency not in service_names:
raise ConfigurationError(
"Service '{s.name}' uses the PID namespace of service '{dep}' which "
"is undefined.".format(s=service_config, dep=dependency)
)
def validate_links(service_config, service_names):
for link in service_config.config.get('links', []):
if link.split(':')[0] not in service_names:
@@ -239,16 +224,6 @@ def handle_error_for_schema_with_id(error, path):
invalid_config_key = parse_key_from_error_msg(error)
return get_unsupported_config_msg(path, invalid_config_key)
if schema_id.startswith('config_schema_v'):
invalid_config_key = parse_key_from_error_msg(error)
return ('Invalid top-level property "{key}". Valid top-level '
'sections for this Compose file are: {properties}, and '
'extensions starting with "x-".\n\n{explanation}').format(
key=invalid_config_key,
properties=', '.join(error.schema['properties'].keys()),
explanation=VERSION_EXPLANATION
)
if not error.path:
return '{}\n\n{}'.format(error.message, VERSION_EXPLANATION)
@@ -325,6 +300,7 @@ def _parse_oneof_validator(error):
"""
types = []
for context in error.context:
if context.validator == 'oneOf':
_, error_msg = _parse_oneof_validator(context)
return path_string(context.path), error_msg
@@ -336,13 +312,6 @@ def _parse_oneof_validator(error):
invalid_config_key = parse_key_from_error_msg(context)
return (None, "contains unsupported option: '{}'".format(invalid_config_key))
if context.validator == 'uniqueItems':
return (
path_string(context.path) if context.path else None,
"contains non-unique items, please remove duplicates from {}".format(
context.instance),
)
if context.path:
return (
path_string(context.path),
@@ -351,6 +320,13 @@ def _parse_oneof_validator(error):
_parse_valid_types_from_validator(context.validator_value)),
)
if context.validator == 'uniqueItems':
return (
None,
"contains non unique items, please remove duplicates from {}".format(
context.instance),
)
if context.validator == 'type':
types.append(context.validator_value)

View File

@@ -3,8 +3,6 @@ from __future__ import unicode_literals
import sys
from .version import ComposeVersion
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IMAGE_EVENTS = ['delete', 'import', 'load', 'pull', 'push', 'save', 'tag', 'untag']
@@ -21,31 +19,25 @@ NANOCPUS_SCALE = 1000000000
SECRETS_PATH = '/run/secrets'
COMPOSEFILE_V1 = ComposeVersion('1')
COMPOSEFILE_V2_0 = ComposeVersion('2.0')
COMPOSEFILE_V2_1 = ComposeVersion('2.1')
COMPOSEFILE_V2_2 = ComposeVersion('2.2')
COMPOSEFILE_V2_3 = ComposeVersion('2.3')
COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'
COMPOSEFILE_V2_1 = '2.1'
COMPOSEFILE_V2_2 = '2.2'
COMPOSEFILE_V3_0 = ComposeVersion('3.0')
COMPOSEFILE_V3_1 = ComposeVersion('3.1')
COMPOSEFILE_V3_2 = ComposeVersion('3.2')
COMPOSEFILE_V3_3 = ComposeVersion('3.3')
COMPOSEFILE_V3_4 = ComposeVersion('3.4')
COMPOSEFILE_V3_5 = ComposeVersion('3.5')
COMPOSEFILE_V3_0 = '3.0'
COMPOSEFILE_V3_1 = '3.1'
COMPOSEFILE_V3_2 = '3.2'
COMPOSEFILE_V3_3 = '3.3'
API_VERSIONS = {
COMPOSEFILE_V1: '1.21',
COMPOSEFILE_V2_0: '1.22',
COMPOSEFILE_V2_1: '1.24',
COMPOSEFILE_V2_2: '1.25',
COMPOSEFILE_V2_3: '1.30',
COMPOSEFILE_V3_0: '1.25',
COMPOSEFILE_V3_1: '1.25',
COMPOSEFILE_V3_2: '1.25',
COMPOSEFILE_V3_3: '1.30',
COMPOSEFILE_V3_4: '1.30',
COMPOSEFILE_V3_5: '1.30',
}
API_VERSION_TO_ENGINE_VERSION = {
@@ -53,11 +45,8 @@ API_VERSION_TO_ENGINE_VERSION = {
API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0',
API_VERSIONS[COMPOSEFILE_V2_1]: '1.12.0',
API_VERSIONS[COMPOSEFILE_V2_2]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V2_3]: '17.06.0',
API_VERSIONS[COMPOSEFILE_V3_0]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_1]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_2]: '1.13.0',
API_VERSIONS[COMPOSEFILE_V3_3]: '17.06.0',
API_VERSIONS[COMPOSEFILE_V3_4]: '17.06.0',
API_VERSIONS[COMPOSEFILE_V3_5]: '17.06.0',
}
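The constants hunk above replaces plain version strings with `ComposeVersion`, a `LooseVersion` subclass, because lexicographic string comparison misorders multi-digit components. A minimal illustration using tuple parsing (ComposeVersion itself wraps distutils' `LooseVersion`, which also tolerates non-numeric components):

```python
def parse_version(v):
    # numeric, component-wise comparison -- what ComposeVersion
    # provides over comparing raw strings
    return tuple(int(part) for part in v.split('.'))

# string comparison gets '3.10' vs '3.2' wrong; numeric parsing does not
print('3.10' < '3.2')
print(parse_version('3.10') < parse_version('3.2'))
```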

View File

@@ -18,8 +18,7 @@ log = logging.getLogger(__name__)
OPTS_EXCEPTIONS = [
'com.docker.network.driver.overlay.vxlanid_list',
'com.docker.network.windowsshim.hnsid',
'com.docker.network.windowsshim.networkname'
'com.docker.network.windowsshim.hnsid'
]
@@ -79,7 +78,6 @@ class Network(object):
enable_ipv6=self.enable_ipv6,
labels=self._labels,
attachable=version_gte(self.client._version, '1.24') or None,
check_duplicate=True,
)
def remove(self):

View File

@@ -38,8 +38,7 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None):
writer = ParallelStreamWriter(stream, msg)
for obj in objects:
writer.add_object(get_name(obj))
writer.write_initial()
writer.initialize(get_name(obj))
events = parallel_execute_iter(objects, func, get_deps, limit)
@@ -49,16 +48,16 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None):
for obj, result, exception in events:
if exception is None:
writer.write(get_name(obj), 'done', green)
writer.write(get_name(obj), green('done'))
results.append(result)
elif isinstance(exception, APIError):
errors[get_name(obj)] = exception.explanation
writer.write(get_name(obj), 'error', red)
writer.write(get_name(obj), red('error'))
elif isinstance(exception, (OperationFailedError, HealthCheckFailed, NoHealthCheckConfigured)):
errors[get_name(obj)] = exception.msg
writer.write(get_name(obj), 'error', red)
writer.write(get_name(obj), red('error'))
elif isinstance(exception, UpstreamError):
writer.write(get_name(obj), 'error', red)
writer.write(get_name(obj), red('error'))
else:
errors[get_name(obj)] = exception
error_to_reraise = exception
@@ -221,64 +220,39 @@ class ParallelStreamWriter(object):
to jump to the correct line, and write over the line.
"""
noansi = False
@classmethod
def set_noansi(cls, value=True):
cls.noansi = value
def __init__(self, stream, msg):
self.stream = stream
self.msg = msg
self.lines = []
self.width = 0
def add_object(self, obj_index):
self.lines.append(obj_index)
self.width = max(self.width, len(obj_index))
def write_initial(self):
def initialize(self, obj_index):
if self.msg is None:
return
for line in self.lines:
self.stream.write("{} {:<{width}} ... \r\n".format(self.msg, line,
width=self.width))
self.lines.append(obj_index)
self.stream.write("{} {} ... \r\n".format(self.msg, obj_index))
self.stream.flush()
def _write_ansi(self, obj_index, status):
def write(self, obj_index, status):
if self.msg is None:
return
position = self.lines.index(obj_index)
diff = len(self.lines) - position
# move up
self.stream.write("%c[%dA" % (27, diff))
# erase
self.stream.write("%c[2K\r" % 27)
self.stream.write("{} {:<{width}} ... {}\r".format(self.msg, obj_index,
status, width=self.width))
self.stream.write("{} {} ... {}\r".format(self.msg, obj_index, status))
# move back down
self.stream.write("%c[%dB" % (27, diff))
self.stream.flush()
def _write_noansi(self, obj_index, status):
self.stream.write("{} {:<{width}} ... {}\r\n".format(self.msg, obj_index,
status, width=self.width))
self.stream.flush()
def write(self, obj_index, status, color_func):
if self.msg is None:
return
if self.noansi:
self._write_noansi(obj_index, status)
else:
self._write_ansi(obj_index, color_func(status))
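`ParallelStreamWriter._write_ansi` above drives the terminal with raw escape sequences: `ESC[nA` moves the cursor up `n` lines, `ESC[2K` erases the current line, and `ESC[nB` moves back down. A sketch of that overwrite trick against an in-memory stream (hypothetical helper name):

```python
from io import StringIO

def write_over(stream, lines_up, text):
    """Overwrite a line `lines_up` rows above the cursor, then
    return the cursor to where it started, using ANSI escapes."""
    stream.write("%c[%dA" % (27, lines_up))   # cursor up
    stream.write("%c[2K\r" % 27)              # erase line, carriage return
    stream.write(text + "\r")
    stream.write("%c[%dB" % (27, lines_up))   # cursor back down

out = StringIO()
write_over(out, 2, "web ... done")
print(repr(out.getvalue()))
```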
def parallel_operation(containers, operation, options, message):
parallel_execute(
containers,
operator.methodcaller(operation, **options),
operator.attrgetter('name'),
message,
)
message)
def parallel_remove(containers, options):

View File

@@ -24,13 +24,10 @@ from .network import get_networks
from .network import ProjectNetworks
from .service import BuildAction
from .service import ContainerNetworkMode
from .service import ContainerPidMode
from .service import ConvergenceStrategy
from .service import NetworkMode
from .service import PidMode
from .service import Service
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import microseconds_from_time_nano
from .volume import ProjectVolumes
@@ -100,7 +97,6 @@ class Project(object):
network_mode = project.get_network_mode(
service_dict, list(service_networks.keys())
)
pid_mode = project.get_pid_mode(service_dict)
volumes_from = get_volumes_from(project, service_dict)
if config_data.version != V1:
@@ -125,7 +121,6 @@ class Project(object):
network_mode=network_mode,
volumes_from=volumes_from,
secrets=secrets,
pid_mode=pid_mode,
**service_dict)
)
@@ -229,27 +224,6 @@ class Project(object):
return NetworkMode(network_mode)
def get_pid_mode(self, service_dict):
pid_mode = service_dict.pop('pid', None)
if not pid_mode:
return PidMode(None)
service_name = get_service_name_from_network_mode(pid_mode)
if service_name:
return ServicePidMode(self.get_service(service_name))
container_name = get_container_name_from_network_mode(pid_mode)
if container_name:
try:
return ContainerPidMode(Container.from_id(self.client, container_name))
except APIError:
raise ConfigurationError(
"Service '{name}' uses the PID namespace of container '{dep}' which "
"does not exist.".format(name=service_dict['name'], dep=container_name)
)
return PidMode(pid_mode)
def start(self, service_names=None, **options):
containers = []
@@ -270,8 +244,7 @@ class Project(object):
start_service,
operator.attrgetter('name'),
'Starting',
get_deps,
)
get_deps)
return containers
@@ -289,8 +262,7 @@ class Project(object):
self.build_container_operation_with_timeout_func('stop', options),
operator.attrgetter('name'),
'Stopping',
get_deps,
)
get_deps)
def pause(self, service_names=None, **options):
containers = self.containers(service_names)
@@ -333,8 +305,7 @@ class Project(object):
containers,
self.build_container_operation_with_timeout_func('restart', options),
operator.attrgetter('name'),
'Restarting',
)
'Restarting')
return containers
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, build_args=None):
@@ -412,8 +383,7 @@ class Project(object):
detached=False,
remove_orphans=False,
scale_override=None,
rescale=True,
start=True):
rescale=True):
warn_for_swarm_mode(self.client)
@@ -437,8 +407,7 @@ class Project(object):
timeout=timeout,
detached=detached,
scale_override=scale_override.get(service.name),
rescale=rescale,
start=start
rescale=rescale
)
def get_deps(service):
@@ -452,7 +421,7 @@ class Project(object):
do,
operator.attrgetter('name'),
None,
get_deps,
get_deps
)
if errors:
raise ProjectError(
@@ -493,25 +462,22 @@ class Project(object):
return plans
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False):
def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False):
services = self.get_services(service_names, include_deps=False)
if parallel_pull:
def pull_service(service):
service.pull(ignore_pull_failures, True)
_, errors = parallel.parallel_execute(
parallel.parallel_execute(
services,
pull_service,
operator.attrgetter('name'),
'Pulling',
limit=5,
)
if len(errors):
raise ProjectError(b"\n".join(errors.values()))
limit=5)
else:
for service in services:
service.pull(ignore_pull_failures, silent=silent)
service.pull(ignore_pull_failures)
def push(self, service_names=None, ignore_push_failures=False):
for service in self.get_services(service_names, include_deps=False):

View File

@@ -43,7 +43,6 @@ from .parallel import parallel_execute
from .progress_stream import stream_output
from .progress_stream import StreamOutputError
from .utils import json_hash
from .utils import parse_bytes
from .utils import parse_seconds_float
@@ -57,9 +56,7 @@ HOST_CONFIG_KEYS = [
'cpu_count',
'cpu_percent',
'cpu_quota',
'cpu_shares',
'cpus',
'cpuset',
'devices',
'dns',
'dns_search',
@@ -83,11 +80,9 @@ HOST_CONFIG_KEYS = [
'restart',
'security_opt',
'shm_size',
'storage_opt',
'sysctls',
'userns_mode',
'volumes_from',
'volume_driver',
]
CONDITION_STARTED = 'service_started'
@@ -158,7 +153,6 @@ class Service(object):
networks=None,
secrets=None,
scale=None,
pid_mode=None,
**options
):
self.name = name
@@ -168,7 +162,6 @@ class Service(object):
self.links = links or []
self.volumes_from = volumes_from or []
self.network_mode = network_mode or NetworkMode(None)
self.pid_mode = pid_mode or PidMode(None)
self.networks = networks or {}
self.secrets = secrets or []
self.scale_num = scale or 1
@@ -393,7 +386,7 @@ class Service(object):
range(i, i + scale),
lambda n: create_and_start(self, n),
lambda n: self.get_container_name(n),
"Creating",
"Creating"
)
for error in errors.values():
raise OperationFailedError(error)
@@ -414,7 +407,7 @@ class Service(object):
containers,
recreate,
lambda c: c.name,
"Recreating",
"Recreating"
)
for error in errors.values():
raise OperationFailedError(error)
@@ -434,7 +427,7 @@ class Service(object):
containers,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached),
lambda c: c.name,
"Starting",
"Starting"
)
for error in errors.values():
@@ -610,19 +603,15 @@ class Service(object):
def get_dependency_names(self):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
return (
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
)
def get_dependency_configs(self):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
configs = dict(
[(name, None) for name in self.get_linked_service_names()]
)
@@ -630,7 +619,6 @@ class Service(object):
[(name, None) for name in self.get_volumes_from_names()]
))
configs.update({net_name: None} if net_name else {})
configs.update({pid_namespace: None} if pid_namespace else {})
configs.update(self.options.get('depends_on', {}))
for svc, config in self.options.get('depends_on', {}).items():
if config['condition'] == CONDITION_STARTED:
@@ -737,7 +725,6 @@ class Service(object):
container_options = dict(
(k, self.options[k])
for k in DOCKER_CONFIG_KEYS if k in self.options)
override_volumes = override_options.pop('volumes', [])
container_options.update(override_options)
if not container_options.get('name'):
@@ -761,11 +748,6 @@ class Service(object):
formatted_ports(container_options.get('ports', [])),
self.options)
if 'volumes' in container_options or override_volumes:
container_options['volumes'] = list(set(
container_options.get('volumes', []) + override_volumes
))
container_options['environment'] = merge_environment(
self.options.get('environment'),
override_options.get('environment'))
@@ -814,7 +796,6 @@ class Service(object):
options = dict(self.options, **override_options)
logging_dict = options.get('logging', None)
blkio_config = convert_blkio_config(options.get('blkio_config', None))
log_config = get_log_config(logging_dict)
init_path = None
if isinstance(options.get('init'), six.string_types):
@@ -848,7 +829,7 @@ class Service(object):
log_config=log_config,
extra_hosts=options.get('extra_hosts'),
read_only=options.get('read_only'),
pid_mode=self.pid_mode.mode,
pid_mode=options.get('pid'),
security_opt=options.get('security_opt'),
ipc_mode=options.get('ipc'),
cgroup_parent=options.get('cgroup_parent'),
@@ -867,26 +848,13 @@ class Service(object):
cpu_count=options.get('cpu_count'),
cpu_percent=options.get('cpu_percent'),
nano_cpus=nano_cpus,
volume_driver=options.get('volume_driver'),
cpuset_cpus=options.get('cpuset'),
cpu_shares=options.get('cpu_shares'),
storage_opt=options.get('storage_opt'),
blkio_weight=blkio_config.get('weight'),
blkio_weight_device=blkio_config.get('weight_device'),
device_read_bps=blkio_config.get('device_read_bps'),
device_read_iops=blkio_config.get('device_read_iops'),
device_write_bps=blkio_config.get('device_write_bps'),
device_write_iops=blkio_config.get('device_write_iops'),
)
def get_secret_volumes(self):
def build_spec(secret):
target = secret['secret'].target
if target is None:
target = '{}/{}'.format(const.SECRETS_PATH, secret['secret'].source)
elif not os.path.isabs(target):
target = '{}/{}'.format(const.SECRETS_PATH, target)
target = '{}/{}'.format(
const.SECRETS_PATH,
secret['secret'].target or secret['secret'].source)
return VolumeSpec(secret['file'], target, 'ro')
return [build_spec(secret) for secret in self.secrets]
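The `build_spec` change above resolves a secret's mount target in three cases: no target defaults to `/run/secrets/<source>`, a relative target is anchored under `/run/secrets`, and an absolute target is kept as-is. A standalone sketch of that resolution (the original uses `os.path.isabs`; `posixpath` is used here to keep the behavior platform-neutral):

```python
import posixpath

SECRETS_PATH = '/run/secrets'  # compose.const.SECRETS_PATH

def resolve_secret_target(source, target):
    """Default to the secret's source name, anchor relative targets
    under SECRETS_PATH, and keep absolute paths unchanged."""
    if target is None:
        return '{}/{}'.format(SECRETS_PATH, source)
    if not posixpath.isabs(target):
        return '{}/{}'.format(SECRETS_PATH, target)
    return target

print(resolve_secret_target('db_password', None))
print(resolve_secret_target('db_password', 'creds.txt'))
print(resolve_secret_target('db_password', '/etc/creds'))
```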
@@ -917,10 +885,7 @@ class Service(object):
dockerfile=build_opts.get('dockerfile', None),
cache_from=build_opts.get('cache_from', None),
labels=build_opts.get('labels', None),
buildargs=build_args,
network_mode=build_opts.get('network', None),
target=build_opts.get('target', None),
shmsize=parse_bytes(build_opts.get('shm_size')) if build_opts.get('shm_size') else None,
buildargs=build_args
)
try:
@@ -1083,46 +1048,6 @@ def short_id_alias_exists(container, network):
return container.short_id in aliases
class PidMode(object):
def __init__(self, mode):
self._mode = mode
@property
def mode(self):
return self._mode
@property
def service_name(self):
return None
class ServicePidMode(PidMode):
def __init__(self, service):
self.service = service
@property
def service_name(self):
return self.service.name
@property
def mode(self):
containers = self.service.containers()
if containers:
return 'container:' + containers[0].id
log.warn(
"Service %s is trying to reuse the PID namespace "
"of another service that is not running." % (self.service_name)
)
return None
class ContainerPidMode(PidMode):
def __init__(self, container):
self.container = container
self._mode = 'container:{}'.format(container.id)
class NetworkMode(object):
"""A `standard` network mode (ex: host, bridge)"""
@@ -1407,22 +1332,3 @@ def build_container_ports(container_ports, options):
port = tuple(port.split('/'))
ports.append(port)
return ports
def convert_blkio_config(blkio_config):
result = {}
if blkio_config is None:
return result
result['weight'] = blkio_config.get('weight')
for field in [
"device_read_bps", "device_read_iops", "device_write_bps",
"device_write_iops", "weight_device",
]:
if field not in blkio_config:
continue
arr = []
for item in blkio_config[field]:
arr.append(dict([(k.capitalize(), v) for k, v in item.items()]))
result[field] = arr
return result
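The `convert_blkio_config` helper above maps compose-file blkio keys to the capitalized form the Docker API expects (`path` becomes `Path`, `rate` becomes `Rate`). A self-contained restatement of the same logic:

```python
def convert_blkio_config(blkio_config):
    """Capitalize per-device blkio keys for the Docker API
    (e.g. 'path' -> 'Path', 'rate' -> 'Rate')."""
    result = {}
    if blkio_config is None:
        return result
    result['weight'] = blkio_config.get('weight')
    for field in ("device_read_bps", "device_read_iops",
                  "device_write_bps", "device_write_iops", "weight_device"):
        if field not in blkio_config:
            continue
        result[field] = [
            {k.capitalize(): v for k, v in item.items()}
            for item in blkio_config[field]
        ]
    return result

print(convert_blkio_config(
    {'weight': 300,
     'device_read_bps': [{'path': '/dev/sda', 'rate': 20}]}))
```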

View File

@@ -9,11 +9,8 @@ import logging
import ntpath
import six
from docker.errors import DockerException
from docker.utils import parse_bytes as sdk_parse_bytes
from .errors import StreamParseError
from .timeparse import MULTIPLIERS
from .timeparse import timeparse
@@ -112,7 +109,7 @@ def microseconds_from_time_nano(time_nano):
def nanoseconds_from_time_seconds(time_seconds):
return int(time_seconds / MULTIPLIERS['nano'])
return time_seconds * 1000000000
def parse_seconds_float(value):
@@ -123,7 +120,7 @@ def parse_nanoseconds_int(value):
parsed = timeparse(value or '')
if parsed is None:
return None
return nanoseconds_from_time_seconds(parsed)
return int(parsed * 1000000000)
def build_string_dict(source_dict):
@@ -136,10 +133,3 @@ def splitdrive(path):
if path[0] in ['.', '\\', '/', '~']:
return ('', path)
return ntpath.splitdrive(path)
def parse_bytes(n):
try:
return sdk_parse_bytes(n)
except DockerException:
return None
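The time-conversion hunk in this file shows two equivalent formulations of seconds-to-nanoseconds: `int(time_seconds / MULTIPLIERS['nano'])` (where `MULTIPLIERS['nano']` is `1e-9` in `compose.timeparse`) and `int(parsed * 1000000000)`. A sketch of the multiplication form, which avoids the float division:

```python
NANO = 1000000000  # nanoseconds per second

def nanoseconds_from_time_seconds(time_seconds):
    # truncate toward zero, matching the int() casts in the hunk above
    return int(time_seconds * NANO)

print(nanoseconds_from_time_seconds(1.5))
print(nanoseconds_from_time_seconds(2))
```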

View File

@@ -1,10 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals
from distutils.version import LooseVersion
class ComposeVersion(LooseVersion):
""" A hashable version object """
def __hash__(self):
return hash(self.vstring)
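`LooseVersion` defines `__eq__`, and in Python 3 a class that overrides `__eq__` without `__hash__` becomes unhashable; the `__hash__` above restores hashability so versions can key dicts such as `API_VERSIONS`. A minimal demonstration of that Python rule with a stand-in class (not using distutils):

```python
class Version:
    def __init__(self, vstring):
        self.vstring = vstring

    def __eq__(self, other):
        return self.vstring == other.vstring

    # Defining __eq__ alone sets __hash__ to None in Python 3;
    # without this, instances could not be used as dict keys.
    def __hash__(self):
        return hash(self.vstring)

api_versions = {Version('2.1'): '1.24'}
print(api_versions[Version('2.1')])
```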

View File

@@ -15,15 +15,14 @@ log = logging.getLogger(__name__)
class Volume(object):
def __init__(self, client, project, name, driver=None, driver_opts=None,
external=False, labels=None, custom_name=False):
external_name=None, labels=None):
self.client = client
self.project = project
self.name = name
self.driver = driver
self.driver_opts = driver_opts
self.external = external
self.external_name = external_name
self.labels = labels
self.custom_name = custom_name
def create(self):
return self.client.create_volume(
@@ -47,10 +46,14 @@ class Volume(object):
return False
return True
@property
def external(self):
return bool(self.external_name)
@property
def full_name(self):
if self.custom_name:
return self.name
if self.external_name:
return self.external_name
return '{0}_{1}'.format(self.project, self.name)
@property
@@ -77,12 +80,11 @@ class ProjectVolumes(object):
vol_name: Volume(
client=client,
project=name,
name=data.get('name', vol_name),
name=vol_name,
driver=data.get('driver'),
driver_opts=data.get('driver_opts'),
custom_name=data.get('name') is not None,
labels=data.get('labels'),
external=bool(data.get('external', False))
external_name=data.get('external_name'),
labels=data.get('labels')
)
for vol_name, data in config_volumes.items()
}

View File

@@ -149,7 +149,7 @@ _docker_compose_config() {
_docker_compose_create() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--build --force-recreate --help --no-build --no-recreate" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--force-recreate --help --no-build --no-recreate" -- "$cur" ) )
;;
*)
__docker_compose_services_all
@@ -179,7 +179,7 @@ _docker_compose_docker_compose() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "$top_level_boolean_options $top_level_options_with_args --help -h --no-ansi --verbose --version -v" -- "$cur" ) )
COMPREPLY=( $( compgen -W "$top_level_boolean_options $top_level_options_with_args --help -h --verbose --version -v" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) )
@@ -341,7 +341,7 @@ _docker_compose_ps() {
_docker_compose_pull() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --ignore-pull-failures --parallel --quiet" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--help --ignore-pull-failures --parallel" -- "$cur" ) )
;;
*)
__docker_compose_services_from_image
@@ -518,7 +518,7 @@ _docker_compose_up() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--abort-on-container-exit --build -d --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --no-start --remove-orphans --scale --timeout -t" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--abort-on-container-exit --build -d --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --remove-orphans --scale --timeout -t" -- "$cur" ) )
;;
*)
__docker_compose_services_all
@@ -569,10 +569,8 @@ _docker_compose() {
version
)
# Options for the docker daemon that have to be passed to secondary calls to
# docker-compose executed by this script.
# Other global options that are not relevant for secondary calls are defined in
# `_docker_compose_docker_compose`.
# options for the docker daemon that have to be passed to secondary calls to
# docker-compose executed by this script
local top_level_boolean_options="
--skip-hostname-check
--tls


@@ -37,11 +37,6 @@ exe = EXE(pyz,
'compose/config/config_schema_v2.2.json',
'DATA'
),
(
'compose/config/config_schema_v2.3.json',
'compose/config/config_schema_v2.3.json',
'DATA'
),
(
'compose/config/config_schema_v3.0.json',
'compose/config/config_schema_v3.0.json',
@@ -62,11 +57,6 @@ exe = EXE(pyz,
'compose/config/config_schema_v3.3.json',
'DATA'
),
(
'compose/config/config_schema_v3.4.json',
'compose/config/config_schema_v3.4.json',
'DATA'
),
(
'compose/GITSHA',
'compose/GITSHA',


@@ -24,7 +24,7 @@ As part of this script you'll be asked to:
If the next release will be an RC, append `-rcN`, e.g. `1.4.0-rc1`.
2. Write release notes in `CHANGELOG.md`.
2. Write release notes in `CHANGES.md`.
Almost every feature enhancement should be mentioned, with the most
visible/exciting ones first. Use descriptive sentences and give context
@@ -67,13 +67,16 @@ Check out the bump branch and run the `build-binaries` script
When prompted build the non-linux binaries and test them.
1. Download the different platform binaries by running the following script:
1. Download the osx binary from Bintray. Make sure that the latest Travis
build has finished, otherwise you'll be downloading an old binary.
`./script/release/download-binaries $VERSION`
https://dl.bintray.com/docker-compose/$BRANCH_NAME/
The binaries for Linux, OSX and Windows will be downloaded in the `binaries-$VERSION` folder.
2. Download the windows binary from AppVeyor
3. Draft a release from the tag on GitHub (the `build-binaries` script will open the window for
https://ci.appveyor.com/project/docker/compose
3. Draft a release from the tag on GitHub (the script will open the window for
you)
The tag will only be present on Github when you run the `push-release`
@@ -84,30 +87,18 @@ When prompted build the non-linux binaries and test them.
If you're a Mac or Windows user, the best way to install Compose and keep it up-to-date is **[Docker for Mac and Windows](https://www.docker.com/products/docker)**.
Docker for Mac and Windows will automatically install the latest version of Docker Engine for you.
Note that Compose 1.9.0 requires Docker Engine 1.10.0 or later for version 2 of the Compose File format, and Docker Engine 1.9.1 or later for version 1. Docker for Mac and Windows will automatically install the latest version of Docker Engine for you.
Alternatively, you can use the usual commands to install or upgrade Compose:
```
curl -L https://github.com/docker/compose/releases/download/1.16.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
See the [install docs](https://docs.docker.com/compose/install/) for more install options and instructions.
## Compose file format compatibility matrix
| Compose file format | Docker Engine |
| --- | --- |
| 3.3 | 17.06.0+ |
| 3.0 &ndash; 3.2 | 1.13.0+ |
| 2.3| 17.06.0+ |
| 2.2 | 1.13.0+ |
| 2.1 | 1.12.0+ |
| 2.0 | 1.10.0+ |
| 1.0 | 1.9.1+ |
## Changes
Here's what's new:
...release notes go here...
@@ -128,7 +119,7 @@ When prompted build the non-linux binaries and test them.
9. Check that all the binaries download (following the install instructions) and run.
10. Announce the release on the appropriate Slack channel(s).
10. Email maintainers@dockerproject.org and engineering@docker.com about the new release.
## If it's a stable release (not an RC)


@@ -1,5 +1,4 @@
coverage==3.7.1
flake8==3.5.0
mock>=1.0.1
pytest==2.7.2
pytest-cov==2.1.0


@@ -1,22 +1,16 @@
PyYAML==3.11
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
colorama==0.3.9; sys_platform == 'win32'
docker==2.5.1
docker-pycreds==0.2.1
cached-property==1.2.0
colorama==0.3.7
docker==2.3.0
dockerpty==0.4.1
docopt==0.6.2
enum34==1.1.6; python_version < '3.4'
docopt==0.6.1
enum34==1.0.4; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
idna==2.5
ipaddress==1.0.18
jsonschema==2.6.0
ipaddress==1.0.16
jsonschema==2.5.1
pypiwin32==219; sys_platform == 'win32'
PySocks==1.6.7
PyYAML==3.12
requests==2.11.1
six==1.10.0
texttable==0.9.1
urllib3==1.21.1
texttable==0.8.4
websocket-client==0.32.0


@@ -1,17 +0,0 @@
#!/bin/bash
set -e
if [ -z "$1" ]; then
>&2 echo "First argument must be image tag."
exit 1
fi
TAG=$1
docker build -t docker-compose-tests:tmp .
ctnr_id=$(docker create --entrypoint=tox docker-compose-tests:tmp)
docker commit $ctnr_id docker/compose-tests:latest
docker tag docker/compose-tests:latest docker/compose-tests:$TAG
docker rm -f $ctnr_id
docker rmi -f docker-compose-tests:tmp


@@ -27,9 +27,6 @@ script/build/linux
echo "Building the container distribution"
script/build/image $VERSION
echo "Building the compose-tests image"
script/build/test-image $VERSION
echo "Create a github release"
# TODO: script more of this https://developer.github.com/v3/repos/releases/
browser https://github.com/$REPO/releases/new


@@ -65,7 +65,8 @@ git config "branch.${BRANCH}.release" $VERSION
editor=${EDITOR:-vim}
echo "Update versions in compose/__init__.py, script/run/run.sh"
echo "Update versions in docs/install.md, compose/__init__.py, script/run/run.sh"
$editor docs/install.md
$editor compose/__init__.py
$editor script/run/run.sh


@@ -54,10 +54,6 @@ git push $GITHUB_REPO $VERSION
echo "Uploading the docker image"
docker push docker/compose:$VERSION
echo "Uploading the compose-tests image"
docker push docker/compose-tests:latest
docker push docker/compose-tests:$VERSION
echo "Uploading package to PyPI"
pandoc -f markdown -t rst README.md -o README.rst
sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst


@@ -15,7 +15,7 @@
set -e
VERSION="1.17.1"
VERSION="1.14.0"
IMAGE="docker/compose:$VERSION"


@@ -50,3 +50,4 @@ echo "*** Using $(openssl_version)"
if !(which virtualenv); then
pip install virtualenv
fi


@@ -14,7 +14,7 @@ docker run --rm \
get_versions="docker run --rm
--entrypoint=/code/.tox/py27/bin/python
$TAG
/code/script/test/versions.py docker/docker-ce,moby/moby"
/code/script/test/versions.py docker/docker"
if [ "$DOCKER_VERSIONS" == "" ]; then
DOCKER_VERSIONS="$($get_versions default)"
@@ -48,7 +48,7 @@ for version in $DOCKER_VERSIONS; do
--privileged \
--volume="/var/lib/docker" \
"$repo:$version" \
dockerd -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
docker daemon -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
2>&1 | tail -n 10
docker run \


@@ -37,22 +37,14 @@ import requests
GITHUB_API = 'https://api.github.com/repos'
class Version(namedtuple('_Version', 'major minor patch rc edition')):
class Version(namedtuple('_Version', 'major minor patch rc')):
@classmethod
def parse(cls, version):
edition = None
version = version.lstrip('v')
version, _, rc = version.partition('-')
if rc:
if 'rc' not in rc:
edition = rc
rc = None
elif '-' in rc:
edition, rc = rc.split('-')
major, minor, patch = version.split('.', 3)
return cls(major, minor, patch, rc, edition)
return cls(major, minor, patch, rc)
@property
def major_minor(self):
@@ -69,8 +61,7 @@ class Version(namedtuple('_Version', 'major minor patch rc edition')):
def __str__(self):
rc = '-{}'.format(self.rc) if self.rc else ''
edition = '-{}'.format(self.edition) if self.edition else ''
return '.'.join(map(str, self[:3])) + edition + rc
return '.'.join(map(str, self[:3])) + rc
def group_versions(versions):
@@ -103,7 +94,6 @@ def get_latest_versions(versions, num=1):
group.
"""
versions = group_versions(versions)
num = min(len(versions), num)
return [versions[index][0] for index in range(num)]
@@ -122,18 +112,16 @@ def get_versions(tags):
print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)
def get_github_releases(projects):
def get_github_releases(project):
"""Query the Github API for a list of version tags and return them in
sorted order.
See https://developer.github.com/v3/repos/#list-tags
"""
versions = []
for project in projects:
url = '{}/{}/tags'.format(GITHUB_API, project)
response = requests.get(url)
response.raise_for_status()
versions.extend(get_versions(response.json()))
url = '{}/{}/tags'.format(GITHUB_API, project)
response = requests.get(url)
response.raise_for_status()
versions = get_versions(response.json())
return sorted(versions, reverse=True, key=operator.attrgetter('order'))
@@ -148,7 +136,7 @@ def parse_args(argv):
def main(argv=None):
args = parse_args(argv)
versions = get_github_releases(args.project.split(','))
versions = get_github_releases(args.project)
if args.command == 'recent':
print(' '.join(map(str, get_latest_versions(versions, args.num))))


@@ -31,12 +31,13 @@ def find_version(*file_paths):
install_requires = [
'cached-property >= 1.2.0, < 2',
'colorama >= 0.3.7, < 0.4',
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4',
'requests >= 2.6.1, != 2.11.0, < 2.12',
'texttable >= 0.9.0, < 0.10',
'texttable >= 0.8.1, < 0.9',
'websocket-client >= 0.32.0, < 1.0',
'docker >= 2.5.1, < 3.0',
'docker >= 2.3.0, < 3.0',
'dockerpty >= 0.4.1, < 0.5',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
@@ -55,8 +56,6 @@ extras_require = {
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5'],
':python_version < "3.3"': ['ipaddress >= 1.0.16'],
':sys_platform == "win32"': ['colorama >= 0.3.7, < 0.4'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
}


@@ -6,7 +6,6 @@ import datetime
import json
import os
import os.path
import re
import signal
import subprocess
import time
@@ -14,7 +13,7 @@ from collections import Counter
from collections import namedtuple
from operator import attrgetter
import pytest
import py
import six
import yaml
from docker import errors
@@ -28,10 +27,7 @@ from compose.project import OneOffFilter
from compose.utils import nanoseconds_from_time_seconds
from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import get_links
from tests.integration.testcases import is_cluster
from tests.integration.testcases import no_cluster
from tests.integration.testcases import pull_busybox
from tests.integration.testcases import SWARM_SKIP_RM_VOLUMES
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
@@ -72,8 +68,7 @@ def wait_on_condition(condition, delay=0.1, timeout=40):
def kill_service(service):
for container in service.containers():
if container.is_running:
container.kill()
container.kill()
class ContainerCountCondition(object):
@@ -83,7 +78,7 @@ class ContainerCountCondition(object):
self.expected = expected
def __call__(self):
return len([c for c in self.project.containers() if c.is_running]) == self.expected
return len(self.project.containers()) == self.expected
def __str__(self):
return "waiting for counter count == %s" % self.expected
@@ -117,18 +112,15 @@ class CLITestCase(DockerClientTestCase):
def tearDown(self):
if self.base_dir:
self.project.kill()
self.project.down(None, True)
self.project.remove_stopped()
for container in self.project.containers(stopped=True, one_off=OneOffFilter.only):
container.remove(force=True)
networks = self.client.networks()
for n in networks:
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name)):
if n['Name'].startswith('{}_'.format(self.project.name)):
self.client.remove_network(n['Name'])
volumes = self.client.volumes().get('Volumes') or []
for v in volumes:
if v['Name'].split('/')[-1].startswith('{}_'.format(self.project.name)):
self.client.remove_volume(v['Name'])
if hasattr(self, '_project'):
del self._project
@@ -183,10 +175,7 @@ class CLITestCase(DockerClientTestCase):
def test_host_not_reachable_volumes_from_container(self):
self.base_dir = 'tests/fixtures/volumes-from-container'
container = self.client.create_container(
'busybox', 'true', name='composetest_data_container',
host_config={}
)
container = self.client.create_container('busybox', 'true', name='composetest_data_container')
self.addCleanup(self.client.remove_container, container)
result = self.dispatch(['-H=tcp://doesnotexist:8000', 'ps'], returncode=1)
@@ -285,67 +274,17 @@ class CLITestCase(DockerClientTestCase):
}
}
def test_config_external_volume_v2(self):
def test_config_external_volume(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes-v2.yml', 'config'])
result = self.dispatch(['-f', 'external-volumes.yml', 'config'])
json_result = yaml.load(result.stdout)
assert 'volumes' in json_result
assert json_result['volumes'] == {
'foo': {
'external': True,
'external': True
},
'bar': {
'external': {
'name': 'some_bar',
},
}
}
def test_config_external_volume_v2_x(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes-v2-x.yml', 'config'])
json_result = yaml.load(result.stdout)
assert 'volumes' in json_result
assert json_result['volumes'] == {
'foo': {
'external': True,
'name': 'some_foo',
},
'bar': {
'external': True,
'name': 'some_bar',
}
}
def test_config_external_volume_v3_x(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes-v3-x.yml', 'config'])
json_result = yaml.load(result.stdout)
assert 'volumes' in json_result
assert json_result['volumes'] == {
'foo': {
'external': True,
},
'bar': {
'external': {
'name': 'some_bar',
},
}
}
def test_config_external_volume_v3_4(self):
self.base_dir = 'tests/fixtures/volumes'
result = self.dispatch(['-f', 'external-volumes-v3-4.yml', 'config'])
json_result = yaml.load(result.stdout)
assert 'volumes' in json_result
assert json_result['volumes'] == {
'foo': {
'external': True,
'name': 'some_foo',
},
'bar': {
'external': True,
'name': 'some_bar',
'external': {'name': 'some_bar'}
}
}
@@ -490,32 +429,14 @@ class CLITestCase(DockerClientTestCase):
assert 'Pulling simple (busybox:latest)...' in result.stderr
assert 'Pulling another (nonexisting-image:latest)...' in result.stderr
assert ('repository nonexisting-image not found' in result.stderr or
'image library/nonexisting-image:latest not found' in result.stderr or
'pull access denied for nonexisting-image' in result.stderr)
def test_pull_with_parallel_failure(self):
result = self.dispatch([
'-f', 'ignore-pull-failures.yml', 'pull', '--parallel'],
returncode=1
)
self.assertRegexpMatches(result.stderr, re.compile('^Pulling simple', re.MULTILINE))
self.assertRegexpMatches(result.stderr, re.compile('^Pulling another', re.MULTILINE))
self.assertRegexpMatches(result.stderr,
re.compile('^ERROR: for another .*does not exist.*', re.MULTILINE))
self.assertRegexpMatches(result.stderr,
re.compile('''^(ERROR: )?(b')?.* nonexisting-image''',
re.MULTILINE))
def test_pull_with_quiet(self):
assert self.dispatch(['pull', '--quiet']).stderr == ''
assert self.dispatch(['pull', '--quiet']).stdout == ''
'image library/nonexisting-image:latest not found' in result.stderr)
def test_build_plain(self):
self.base_dir = 'tests/fixtures/simple-dockerfile'
self.dispatch(['build', 'simple'])
result = self.dispatch(['build', 'simple'])
assert BUILD_CACHE_TEXT in result.stdout
assert BUILD_PULL_TEXT not in result.stdout
def test_build_no_cache(self):
@@ -533,9 +454,7 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['build', 'simple'], None)
result = self.dispatch(['build', '--pull', 'simple'])
if not is_cluster(self.client):
# If previous build happened on another node, cache won't be available
assert BUILD_CACHE_TEXT in result.stdout
assert BUILD_CACHE_TEXT in result.stdout
assert BUILD_PULL_TEXT in result.stdout
def test_build_no_cache_pull(self):
@@ -548,7 +467,6 @@ class CLITestCase(DockerClientTestCase):
assert BUILD_CACHE_TEXT not in result.stdout
assert BUILD_PULL_TEXT in result.stdout
@pytest.mark.xfail(reason='17.10.0 RC bug remove after GA https://github.com/moby/moby/issues/35116')
def test_build_failed(self):
self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
self.dispatch(['build', 'simple'], returncode=1)
@@ -562,7 +480,6 @@ class CLITestCase(DockerClientTestCase):
]
assert len(containers) == 1
@pytest.mark.xfail(reason='17.10.0 RC bug remove after GA https://github.com/moby/moby/issues/35116')
def test_build_failed_forcerm(self):
self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
self.dispatch(['build', '--force-rm', 'simple'], returncode=1)
@@ -577,15 +494,9 @@ class CLITestCase(DockerClientTestCase):
]
assert not containers
def test_build_shm_size_build_option(self):
pull_busybox(self.client)
self.base_dir = 'tests/fixtures/build-shm-size'
result = self.dispatch(['build', '--no-cache'], None)
assert 'shm_size: 96' in result.stdout
def test_bundle_with_digests(self):
self.base_dir = 'tests/fixtures/bundle-with-digests/'
tmpdir = pytest.ensuretemp('cli_test_bundle')
tmpdir = py.test.ensuretemp('cli_test_bundle')
self.addCleanup(tmpdir.remove)
filename = str(tmpdir.join('example.dab'))
@@ -630,115 +541,81 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['create'])
service = self.project.get_service('simple')
another = self.project.get_service('another')
service_containers = service.containers(stopped=True)
another_containers = another.containers(stopped=True)
assert len(service_containers) == 1
assert len(another_containers) == 1
assert not service_containers[0].is_running
assert not another_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(another.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertEqual(len(another.containers(stopped=True)), 1)
def test_create_with_force_recreate(self):
self.dispatch(['create'], None)
service = self.project.get_service('simple')
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
old_ids = [c.id for c in service.containers(stopped=True)]
self.dispatch(['create', '--force-recreate'], None)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
new_ids = [c.id for c in service_containers]
new_ids = [c.id for c in service.containers(stopped=True)]
assert old_ids != new_ids
self.assertNotEqual(old_ids, new_ids)
def test_create_with_no_recreate(self):
self.dispatch(['create'], None)
service = self.project.get_service('simple')
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
old_ids = [c.id for c in service.containers(stopped=True)]
self.dispatch(['create', '--no-recreate'], None)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
new_ids = [c.id for c in service_containers]
new_ids = [c.id for c in service.containers(stopped=True)]
assert old_ids == new_ids
self.assertEqual(old_ids, new_ids)
def test_run_one_off_with_volume(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
volume_path = os.path.abspath(os.path.join(os.getcwd(), self.base_dir, 'files'))
node = create_host_file(self.client, os.path.join(volume_path, 'example.txt'))
self.dispatch([
'run',
'-v', '{}:/data'.format(volume_path),
'-e', 'constraint:node=={}'.format(node if node is not None else '*'),
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
service = self.project.get_service('simple')
container_data = service.containers(one_off=OneOffFilter.only, stopped=True)[0]
mount = container_data.get('Mounts')[0]
assert mount['Source'] == volume_path
assert mount['Destination'] == '/data'
assert mount['Type'] == 'bind'
def test_run_one_off_with_multiple_volumes(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
volume_path = os.path.abspath(os.path.join(os.getcwd(), self.base_dir, 'files'))
node = create_host_file(self.client, os.path.join(volume_path, 'example.txt'))
self.dispatch([
'run',
'-v', '{}:/data'.format(volume_path),
'-v', '{}:/data1'.format(volume_path),
'-e', 'constraint:node=={}'.format(node if node is not None else '*'),
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
self.dispatch([
'run',
'-v', '{}:/data'.format(volume_path),
'-v', '{}:/data1'.format(volume_path),
'-e', 'constraint:node=={}'.format(node if node is not None else '*'),
'simple',
'test', '-f' '/data1/example.txt'
], returncode=0)
def test_run_one_off_with_volume_merge(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
volume_path = os.path.abspath(os.path.join(os.getcwd(), self.base_dir, 'files'))
create_host_file(self.client, os.path.join(volume_path, 'example.txt'))
self.dispatch([
'-f', 'docker-compose.merge.yml',
'run',
'-v', '{}:/data'.format(volume_path),
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
# FIXME: does not work with Python 3
# assert cmd_result.stdout.strip() == 'FILE_CONTENT'
service = self.project.get_service('simple')
container_data = service.containers(one_off=OneOffFilter.only, stopped=True)[0]
mounts = container_data.get('Mounts')
assert len(mounts) == 2
config_mount = [m for m in mounts if m['Destination'] == '/data1'][0]
override_mount = [m for m in mounts if m['Destination'] == '/data'][0]
def test_run_one_off_with_multiple_volumes(self):
self.base_dir = 'tests/fixtures/simple-composefile-volume-ready'
volume_path = os.path.abspath(os.path.join(os.getcwd(), self.base_dir, 'files'))
create_host_file(self.client, os.path.join(volume_path, 'example.txt'))
assert config_mount['Type'] == 'volume'
assert override_mount['Source'] == volume_path
assert override_mount['Type'] == 'bind'
self.dispatch([
'run',
'-v', '{}:/data'.format(volume_path),
'-v', '{}:/data1'.format(volume_path),
'simple',
'test', '-f', '/data/example.txt'
], returncode=0)
# FIXME: does not work with Python 3
# assert cmd_result.stdout.strip() == 'FILE_CONTENT'
self.dispatch([
'run',
'-v', '{}:/data'.format(volume_path),
'-v', '{}:/data1'.format(volume_path),
'simple',
'test', '-f' '/data1/example.txt'
], returncode=0)
# FIXME: does not work with Python 3
# assert cmd_result.stdout.strip() == 'FILE_CONTENT'
def test_create_with_force_recreate_and_no_recreate(self):
self.dispatch(
@@ -806,7 +683,7 @@ class CLITestCase(DockerClientTestCase):
network_name = self.project.networks.networks['default'].full_name
networks = self.client.networks(names=[network_name])
self.assertEqual(len(networks), 1)
assert networks[0]['Driver'] == 'bridge' if not is_cluster(self.client) else 'overlay'
self.assertEqual(networks[0]['Driver'], 'bridge')
assert 'com.docker.network.bridge.enable_icc' not in networks[0]['Options']
network = self.client.inspect_network(networks[0]['Id'])
@@ -828,45 +705,6 @@ class CLITestCase(DockerClientTestCase):
for service in services:
assert self.lookup(container, service.name)
@v2_only()
def test_up_no_start(self):
self.base_dir = 'tests/fixtures/v2-full'
self.dispatch(['up', '--no-start'], None)
services = self.project.get_services()
default_network = self.project.networks.networks['default'].full_name
front_network = self.project.networks.networks['front'].full_name
networks = self.client.networks(names=[default_network, front_network])
assert len(networks) == 2
for service in services:
containers = service.containers(stopped=True)
assert len(containers) == 1
container = containers[0]
assert not container.is_running
assert container.get('State.Status') == 'created'
volumes = self.project.volumes.volumes
assert 'data' in volumes
volume = volumes['data']
# The code below is a Swarm-compatible equivalent to volume.exists()
remote_volumes = [
v for v in self.client.volumes().get('Volumes', [])
if v['Name'].split('/')[-1] == volume.full_name
]
assert len(remote_volumes) > 0
@v2_only()
def test_up_no_ansi(self):
self.base_dir = 'tests/fixtures/v2-simple'
result = self.dispatch(['--no-ansi', 'up', '-d'], None)
assert "%c[2K\r" % 27 not in result.stderr
assert "%c[1A" % 27 not in result.stderr
assert "%c[1B" % 27 not in result.stderr
@v2_only()
def test_up_with_default_network_config(self):
filename = 'default-network-config.yml'
@@ -891,11 +729,11 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
# Two networks were created: back and front
assert sorted(n['Name'].split('/')[-1] for n in networks) == [back_name, front_name]
assert sorted(n['Name'] for n in networks) == [back_name, front_name]
web_container = self.project.get_service('web').containers()[0]
back_aliases = web_container.get(
@@ -919,11 +757,11 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
# One network was created: internal
assert sorted(n['Name'].split('/')[-1] for n in networks) == [internal_net]
assert sorted(n['Name'] for n in networks) == [internal_net]
assert networks[0]['Internal'] is True
@@ -938,11 +776,11 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
# One network was created: front
assert sorted(n['Name'].split('/')[-1] for n in networks) == [static_net]
assert sorted(n['Name'] for n in networks) == [static_net]
web_container = self.project.get_service('web').containers()[0]
ipam_config = web_container.get(
@@ -961,19 +799,14 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
# Two networks were created: back and front
assert sorted(n['Name'].split('/')[-1] for n in networks) == [back_name, front_name]
assert sorted(n['Name'] for n in networks) == [back_name, front_name]
# lookup by ID instead of name in case of duplicates
back_network = self.client.inspect_network(
[n for n in networks if n['Name'] == back_name][0]['Id']
)
front_network = self.client.inspect_network(
[n for n in networks if n['Name'] == front_name][0]['Id']
)
back_network = [n for n in networks if n['Name'] == back_name][0]
front_network = [n for n in networks if n['Name'] == front_name][0]
web_container = self.project.get_service('web').containers()[0]
app_container = self.project.get_service('app').containers()[0]
@@ -1010,12 +843,8 @@ class CLITestCase(DockerClientTestCase):
assert 'Service "web" uses an undefined network "foo"' in result.stderr
@v2_only()
@no_cluster('container networks not supported in Swarm')
def test_up_with_network_mode(self):
c = self.client.create_container(
'busybox', 'top', name='composetest_network_mode_container',
host_config={}
)
c = self.client.create_container('busybox', 'top', name='composetest_network_mode_container')
self.addCleanup(self.client.remove_container, c, force=True)
self.client.start(c)
container_mode_source = 'container:{}'.format(c['Id'])
@@ -1029,7 +858,7 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
assert not networks
@@ -1066,7 +895,7 @@ class CLITestCase(DockerClientTestCase):
network_names = ['{}_{}'.format(self.project.name, n) for n in ['foo', 'bar']]
for name in network_names:
self.client.create_network(name, attachable=True)
self.client.create_network(name)
self.dispatch(['-f', filename, 'up', '-d'])
container = self.project.containers()[0]
@@ -1084,12 +913,12 @@ class CLITestCase(DockerClientTestCase):
networks = [
n['Name'] for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
assert not networks
network_name = 'composetest_external_network'
self.client.create_network(network_name, attachable=True)
self.client.create_network(network_name)
self.dispatch(['-f', filename, 'up', '-d'])
container = self.project.containers()[0]
@@ -1108,10 +937,10 @@ class CLITestCase(DockerClientTestCase):
networks = [
n for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
assert [n['Name'].split('/')[-1] for n in networks] == [network_with_label]
assert [n['Name'] for n in networks] == [network_with_label]
assert 'label_key' in networks[0]['Labels']
assert networks[0]['Labels']['label_key'] == 'label_val'
@@ -1128,10 +957,10 @@ class CLITestCase(DockerClientTestCase):
volumes = [
v for v in self.client.volumes().get('Volumes', [])
if v['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if v['Name'].startswith('{}_'.format(self.project.name))
]
assert set([v['Name'].split('/')[-1] for v in volumes]) == set([volume_with_label])
assert [v['Name'] for v in volumes] == [volume_with_label]
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@@ -1142,7 +971,7 @@ class CLITestCase(DockerClientTestCase):
network_names = [
n['Name'] for n in self.client.networks()
if n['Name'].split('/')[-1].startswith('{}_'.format(self.project.name))
if n['Name'].startswith('{}_'.format(self.project.name))
]
assert network_names == []
@@ -1177,7 +1006,6 @@ class CLITestCase(DockerClientTestCase):
assert "Unsupported config option for services.bar: 'net'" in result.stderr
@no_cluster("Legacy networking not supported on Swarm")
def test_up_with_net_v1(self):
self.base_dir = 'tests/fixtures/net-container'
self.dispatch(['up', '-d'], None)
@@ -1330,40 +1158,14 @@ class CLITestCase(DockerClientTestCase):
proc.wait()
self.assertEqual(proc.returncode, 1)
@v2_only()
@no_cluster('Container PID mode does not work across clusters')
def test_up_with_pid_mode(self):
c = self.client.create_container(
'busybox', 'top', name='composetest_pid_mode_container',
host_config={}
)
self.addCleanup(self.client.remove_container, c, force=True)
self.client.start(c)
container_mode_source = 'container:{}'.format(c['Id'])
self.base_dir = 'tests/fixtures/pid-mode'
self.dispatch(['up', '-d'], None)
service_mode_source = 'container:{}'.format(
self.project.get_service('container').containers()[0].id)
service_mode_container = self.project.get_service('service').containers()[0]
assert service_mode_container.get('HostConfig.PidMode') == service_mode_source
container_mode_container = self.project.get_service('container').containers()[0]
assert container_mode_container.get('HostConfig.PidMode') == container_mode_source
host_mode_container = self.project.get_service('host').containers()[0]
assert host_mode_container.get('HostConfig.PidMode') == 'host'
def test_exec_without_tty(self):
self.base_dir = 'tests/fixtures/links-composefile'
self.dispatch(['up', '-d', 'console'])
self.assertEqual(len(self.project.containers()), 1)
stdout, stderr = self.dispatch(['exec', '-T', 'console', 'ls', '-1d', '/'])
self.assertEqual(stderr, "")
self.assertEqual(stdout, "/\n")
self.assertEqual(stderr, "")
def test_exec_custom_user(self):
self.base_dir = 'tests/fixtures/links-composefile'
@@ -1455,7 +1257,6 @@ class CLITestCase(DockerClientTestCase):
[u'/bin/true'],
)
@pytest.mark.skipif(SWARM_SKIP_RM_VOLUMES, reason='Swarm DELETE /containers/<id> bug')
def test_run_rm(self):
self.base_dir = 'tests/fixtures/volume'
proc = start_process(self.base_dir, ['run', '--rm', 'test'])
@@ -1469,7 +1270,7 @@ class CLITestCase(DockerClientTestCase):
mounts = containers[0].get('Mounts')
for mount in mounts:
if mount['Destination'] == '/container-path':
anonymous_name = mount['Name']
anonymousName = mount['Name']
break
os.kill(proc.pid, signal.SIGINT)
wait_on_process(proc, 1)
@@ -1482,9 +1283,9 @@ class CLITestCase(DockerClientTestCase):
if volume.internal == '/container-named-path':
name = volume.external
break
volume_names = [v['Name'].split('/')[-1] for v in volumes]
assert name in volume_names
assert anonymous_name not in volume_names
volumeNames = [v['Name'] for v in volumes]
assert name in volumeNames
assert anonymousName not in volumeNames
def test_run_service_with_dockerfile_entrypoint(self):
self.base_dir = 'tests/fixtures/entrypoint-dockerfile'
@@ -1606,10 +1407,11 @@ class CLITestCase(DockerClientTestCase):
container.stop()
# check the ports
assert port_random is not None
assert port_assigned.endswith(':49152')
assert port_range[0].endswith(':49153')
assert port_range[1].endswith(':49154')
self.assertNotEqual(port_random, None)
self.assertIn("0.0.0.0", port_random)
self.assertEqual(port_assigned, "0.0.0.0:49152")
self.assertEqual(port_range[0], "0.0.0.0:49153")
self.assertEqual(port_range[1], "0.0.0.0:49154")
def test_run_service_with_explicitly_mapped_ports(self):
# create one off container
@@ -1625,8 +1427,8 @@ class CLITestCase(DockerClientTestCase):
container.stop()
# check the ports
assert port_short.endswith(':30000')
assert port_full.endswith(':30001')
self.assertEqual(port_short, "0.0.0.0:30000")
self.assertEqual(port_full, "0.0.0.0:30001")
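The hunks above trade exact `0.0.0.0:<port>` equality checks for suffix checks (and back), because on a Swarm cluster the published binding can carry a node address rather than `0.0.0.0`. A minimal, illustrative binding parser (not part of Compose) that supports bind-address-agnostic assertions:

```python
def split_binding(binding):
    """Split an IPv4 'host:port' binding such as '0.0.0.0:49152'.

    Splitting on the last ':' keeps the host part intact; the port is
    returned as an int so tests can compare it regardless of the address.
    """
    host, _, port = binding.rpartition(':')
    return host, int(port)
```

With a helper like this, a cluster-agnostic check reads `split_binding(port_assigned)[1] == 49152` instead of pinning the `0.0.0.0` prefix.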
def test_run_service_with_explicitly_mapped_ip_ports(self):
# create one off container
@@ -1942,13 +1744,7 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['logs', '-f'])
if not is_cluster(self.client):
assert result.stdout.count('\n') == 5
else:
# Sometimes logs are picked up from old containers that haven't yet
# been removed (removal in Swarm is async)
assert result.stdout.count('\n') >= 5
assert result.stdout.count('\n') == 5
assert 'simple' in result.stdout
assert 'another' in result.stdout
assert 'exited with code 0' in result.stdout
@@ -2004,10 +1800,7 @@ class CLITestCase(DockerClientTestCase):
self.dispatch(['up'])
result = self.dispatch(['logs', '--tail', '2'])
assert 'c\n' in result.stdout
assert 'd\n' in result.stdout
assert 'a\n' not in result.stdout
assert 'b\n' not in result.stdout
assert result.stdout.count('\n') == 3
def test_kill(self):
self.dispatch(['up', '-d'], None)
@@ -2156,9 +1949,9 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['port', 'simple', str(number)])
return result.stdout.rstrip()
assert get_port(3000) == container.get_local_port(3000)
assert ':49152' in get_port(3001)
assert ':49153' in get_port(3002)
self.assertEqual(get_port(3000), container.get_local_port(3000))
self.assertEqual(get_port(3001), "0.0.0.0:49152")
self.assertEqual(get_port(3002), "0.0.0.0:49153")
def test_expanded_port(self):
self.base_dir = 'tests/fixtures/ports-composefile'
@@ -2169,9 +1962,9 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['port', 'simple', str(number)])
return result.stdout.rstrip()
assert get_port(3000) == container.get_local_port(3000)
assert ':53222' in get_port(3001)
assert ':53223' in get_port(3002)
self.assertEqual(get_port(3000), container.get_local_port(3000))
self.assertEqual(get_port(3001), "0.0.0.0:49152")
self.assertEqual(get_port(3002), "0.0.0.0:49153")
def test_port_with_scale(self):
self.base_dir = 'tests/fixtures/ports-composefile-scale'
@@ -2224,14 +2017,12 @@ class CLITestCase(DockerClientTestCase):
assert len(lines) == 2
container, = self.project.containers()
expected_template = ' container {} {}'
expected_meta_info = ['image=busybox:latest', 'name=simplecomposefile_simple_1']
expected_template = (
' container {} {} (image=busybox:latest, '
'name=simplecomposefile_simple_1)')
assert expected_template.format('create', container.id) in lines[0]
assert expected_template.format('start', container.id) in lines[1]
for line in lines:
for info in expected_meta_info:
assert info in line
assert has_timestamp(lines[0])
@@ -2274,6 +2065,7 @@ class CLITestCase(DockerClientTestCase):
'docker-compose.yml',
'docker-compose.override.yml',
'extra.yml',
]
self._project = get_project(self.base_dir, config_paths)
self.dispatch(
@@ -2290,6 +2082,7 @@ class CLITestCase(DockerClientTestCase):
web, other, db = containers
self.assertEqual(web.human_readable_command, 'top')
self.assertTrue({'db', 'other'} <= set(get_links(web)))
self.assertEqual(db.human_readable_command, 'top')
self.assertEqual(other.human_readable_command, 'top')


@@ -1,4 +0,0 @@
FROM busybox
# Report the shm_size (through the size of /dev/shm)
RUN echo "shm_size:" $(df -h /dev/shm | tail -n 1 | awk '{print $2}')


@@ -1,7 +0,0 @@
version: '3.5'
services:
custom_shm_size:
build:
context: .
shm_size: 100663296 # =96M


@@ -1,4 +1,4 @@
IMAGE=alpine:latest
COMMAND=true
PORT1=5643
PORT2=9999
PORT2=9999


@@ -1 +1 @@
FOO=1
FOO=1


@@ -1,7 +1,6 @@
version: '2.2'
services:
web:
web:
command: "top"
db:
db:
command: "top"


@@ -1,10 +1,10 @@
version: '2.2'
services:
web:
web:
image: busybox:latest
command: "sleep 200"
depends_on:
links:
- db
db:
db:
image: busybox:latest
command: "sleep 200"


@@ -1,10 +1,9 @@
version: '2.2'
services:
web:
depends_on:
web:
links:
- db
- other
other:
other:
image: busybox:latest
command: "top"


@@ -1,17 +0,0 @@
version: "2.2"
services:
service:
image: busybox
command: top
pid: "service:container"
container:
image: busybox
command: top
pid: "container:composetest_pid_mode_container"
host:
image: busybox
command: top
pid: host
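The fixture above exercises the three `pid` forms (`service:<name>`, `container:<name>`, and `host`). A rough sketch, not Compose internals, of how each form could resolve to the `HostConfig.PidMode` values asserted in `test_up_with_pid_mode`:

```python
def resolve_pid_mode(pid_value, containers_by_service):
    """Illustrative resolution of a compose `pid` value to a PidMode string.

    `containers_by_service` maps service names to a running container id;
    both the function and the mapping are hypothetical helpers for this sketch.
    """
    if pid_value == 'host':
        return 'host'
    if pid_value.startswith('service:'):
        # Point at whichever container the named service is running
        service = pid_value.split(':', 1)[1]
        return 'container:{0}'.format(containers_by_service[service])
    # 'container:<name>' is already a direct container reference
    return pid_value
```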


@@ -6,10 +6,10 @@ services:
ports:
- target: 3000
- target: 3001
published: 53222
published: 49152
- target: 3002
published: 53223
published: 49153
protocol: tcp
- target: 3003
published: 53224
published: 49154
protocol: udp


@@ -1,9 +0,0 @@
version: '2.2'
services:
simple:
image: busybox:latest
volumes:
- datastore:/data1
volumes:
datastore:


@@ -1,17 +0,0 @@
version: "2.1"
services:
web:
image: busybox
command: top
volumes:
- foo:/var/lib/
- bar:/etc/
volumes:
foo:
external: true
name: some_foo
bar:
external:
name: some_bar


@@ -1,16 +0,0 @@
version: "2"
services:
web:
image: busybox
command: top
volumes:
- foo:/var/lib/
- bar:/etc/
volumes:
foo:
external: true
bar:
external:
name: some_bar


@@ -1,17 +0,0 @@
version: "3.4"
services:
web:
image: busybox
command: top
volumes:
- foo:/var/lib/
- bar:/etc/
volumes:
foo:
external: true
name: some_foo
bar:
external:
name: some_bar


@@ -1,4 +1,4 @@
version: "3.0"
version: "2.1"
services:
web:


@@ -42,9 +42,5 @@ def create_host_file(client, filename):
output = client.logs(container)
raise Exception(
"Container exited with code {}:\n{}".format(exitcode, output))
container_info = client.inspect_container(container)
if 'Node' in container_info:
return container_info['Node']['Name']
finally:
client.remove_container(container, force=True)
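The hunk above inspects the helper container and, only when a `Node` key is present (i.e. when talking to a Swarm Classic manager), reports which node it ran on. A hedged sketch of that pattern, with `client` assumed to be a docker-py client and all names illustrative:

```python
def node_name_for(client, container):
    """Return the Swarm node a container ran on, or None outside Swarm.

    The container is removed in `finally` so cleanup happens even if
    inspection raises.
    """
    try:
        info = client.inspect_container(container)
        # 'Node' only appears in inspect output from a Swarm Classic manager
        if 'Node' in info:
            return info['Node']['Name']
        return None
    finally:
        client.remove_container(container, force=True)
```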


@@ -6,14 +6,12 @@ import random
import py
import pytest
from docker.errors import APIError
from docker.errors import NotFound
from .. import mock
from ..helpers import build_config as load_config
from ..helpers import create_host_file
from .testcases import DockerClientTestCase
from .testcases import SWARM_SKIP_CONTAINERS_ALL
from compose.config import config
from compose.config import ConfigurationError
from compose.config import types
@@ -31,10 +29,7 @@ from compose.errors import NoHealthCheckConfigured
from compose.project import Project
from compose.project import ProjectError
from compose.service import ConvergenceStrategy
from tests.integration.testcases import is_cluster
from tests.integration.testcases import no_cluster
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_2_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
@@ -62,20 +57,6 @@ class ProjectTest(DockerClientTestCase):
containers = project.containers()
self.assertEqual(len(containers), 2)
@pytest.mark.skipif(SWARM_SKIP_CONTAINERS_ALL, reason='Swarm /containers/json bug')
def test_containers_stopped(self):
web = self.create_service('web')
db = self.create_service('db')
project = Project('composetest', [web, db], self.client)
project.up()
assert len(project.containers()) == 2
assert len(project.containers(stopped=True)) == 2
project.stop()
assert len(project.containers()) == 0
assert len(project.containers(stopped=True)) == 2
def test_containers_with_service_names(self):
web = self.create_service('web')
db = self.create_service('db')
@@ -129,7 +110,6 @@ class ProjectTest(DockerClientTestCase):
volumes=['/var/data'],
name='composetest_data_container',
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
project = Project.from_config(
name='composetest',
@@ -145,13 +125,12 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(db._get_volumes_from(), [data_container.id + ':rw'])
@v2_only()
@no_cluster('container networks not supported in Swarm')
def test_network_mode_from_service(self):
project = Project.from_config(
name='composetest',
client=self.client,
config_data=load_config({
'version': str(V2_0),
'version': V2_0,
'services': {
'net': {
'image': 'busybox:latest',
@@ -173,13 +152,12 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(web.network_mode.mode, 'container:' + net.containers()[0].id)
@v2_only()
@no_cluster('container networks not supported in Swarm')
def test_network_mode_from_container(self):
def get_project():
return Project.from_config(
name='composetest',
config_data=load_config({
'version': str(V2_0),
'version': V2_0,
'services': {
'web': {
'image': 'busybox:latest',
@@ -201,7 +179,6 @@ class ProjectTest(DockerClientTestCase):
name='composetest_net_container',
command='top',
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
net_container.start()
@@ -211,7 +188,6 @@ class ProjectTest(DockerClientTestCase):
web = project.get_service('web')
self.assertEqual(web.network_mode.mode, 'container:' + net_container.id)
@no_cluster('container networks not supported in Swarm')
def test_net_from_service_v1(self):
project = Project.from_config(
name='composetest',
@@ -235,7 +211,6 @@ class ProjectTest(DockerClientTestCase):
net = project.get_service('net')
self.assertEqual(web.network_mode.mode, 'container:' + net.containers()[0].id)
@no_cluster('container networks not supported in Swarm')
def test_net_from_container_v1(self):
def get_project():
return Project.from_config(
@@ -260,7 +235,6 @@ class ProjectTest(DockerClientTestCase):
name='composetest_net_container',
command='top',
labels={LABEL_PROJECT: 'composetest'},
host_config={},
)
net_container.start()
@@ -286,12 +260,12 @@ class ProjectTest(DockerClientTestCase):
project.start(service_names=['web'])
self.assertEqual(
set(c.name for c in project.containers() if c.is_running),
set(c.name for c in project.containers()),
set([web_container_1.name, web_container_2.name]))
project.start()
self.assertEqual(
set(c.name for c in project.containers() if c.is_running),
set(c.name for c in project.containers()),
set([web_container_1.name, web_container_2.name, db_container.name]))
project.pause(service_names=['web'])
@@ -311,12 +285,10 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len([c.name for c in project.containers() if c.is_paused]), 0)
project.stop(service_names=['web'], timeout=1)
self.assertEqual(
set(c.name for c in project.containers() if c.is_running), set([db_container.name])
)
self.assertEqual(set(c.name for c in project.containers()), set([db_container.name]))
project.kill(service_names=['db'])
self.assertEqual(len([c for c in project.containers() if c.is_running]), 0)
self.assertEqual(len(project.containers()), 0)
self.assertEqual(len(project.containers(stopped=True)), 3)
project.remove_stopped(service_names=['web'])
@@ -331,13 +303,11 @@ class ProjectTest(DockerClientTestCase):
project = Project('composetest', [web, db], self.client)
project.create(['db'])
containers = project.containers(stopped=True)
assert len(containers) == 1
assert not containers[0].is_running
db_containers = db.containers(stopped=True)
assert len(db_containers) == 1
assert not db_containers[0].is_running
assert len(web.containers(stopped=True)) == 0
self.assertEqual(len(project.containers()), 0)
self.assertEqual(len(project.containers(stopped=True)), 1)
self.assertEqual(len(db.containers()), 0)
self.assertEqual(len(db.containers(stopped=True)), 1)
self.assertEqual(len(web.containers(stopped=True)), 0)
def test_create_twice(self):
web = self.create_service('web')
@@ -346,14 +316,12 @@ class ProjectTest(DockerClientTestCase):
project.create(['db', 'web'])
project.create(['db', 'web'])
containers = project.containers(stopped=True)
assert len(containers) == 2
db_containers = db.containers(stopped=True)
assert len(db_containers) == 1
assert not db_containers[0].is_running
web_containers = web.containers(stopped=True)
assert len(web_containers) == 1
assert not web_containers[0].is_running
self.assertEqual(len(project.containers()), 0)
self.assertEqual(len(project.containers(stopped=True)), 2)
self.assertEqual(len(db.containers()), 0)
self.assertEqual(len(db.containers(stopped=True)), 1)
self.assertEqual(len(web.containers()), 0)
self.assertEqual(len(web.containers(stopped=True)), 1)
def test_create_with_links(self):
db = self.create_service('db')
@@ -361,11 +329,12 @@ class ProjectTest(DockerClientTestCase):
project = Project('composetest', [db, web], self.client)
project.create(['web'])
# self.assertEqual(len(project.containers()), 0)
assert len(project.containers(stopped=True)) == 2
assert not [c for c in project.containers(stopped=True) if c.is_running]
assert len(db.containers(stopped=True)) == 1
assert len(web.containers(stopped=True)) == 1
self.assertEqual(len(project.containers()), 0)
self.assertEqual(len(project.containers(stopped=True)), 2)
self.assertEqual(len(db.containers()), 0)
self.assertEqual(len(db.containers(stopped=True)), 1)
self.assertEqual(len(web.containers()), 0)
self.assertEqual(len(web.containers(stopped=True)), 1)
def test_create_strategy_always(self):
db = self.create_service('db')
@@ -374,11 +343,11 @@ class ProjectTest(DockerClientTestCase):
old_id = project.containers(stopped=True)[0].id
project.create(['db'], strategy=ConvergenceStrategy.always)
assert len(project.containers(stopped=True)) == 1
self.assertEqual(len(project.containers()), 0)
self.assertEqual(len(project.containers(stopped=True)), 1)
db_container = project.containers(stopped=True)[0]
assert not db_container.is_running
assert db_container.id != old_id
self.assertNotEqual(db_container.id, old_id)
def test_create_strategy_never(self):
db = self.create_service('db')
@@ -387,11 +356,11 @@ class ProjectTest(DockerClientTestCase):
old_id = project.containers(stopped=True)[0].id
project.create(['db'], strategy=ConvergenceStrategy.never)
assert len(project.containers(stopped=True)) == 1
self.assertEqual(len(project.containers()), 0)
self.assertEqual(len(project.containers(stopped=True)), 1)
db_container = project.containers(stopped=True)[0]
assert not db_container.is_running
assert db_container.id == old_id
self.assertEqual(db_container.id, old_id)
def test_project_up(self):
web = self.create_service('web')
@@ -581,8 +550,8 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.containers(stopped=True)), 2)
self.assertEqual(len(project.get_service('web').containers()), 0)
self.assertEqual(len(project.get_service('db').containers()), 1)
self.assertEqual(len(project.get_service('data').containers()), 0)
self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1)
assert not project.get_service('data').containers(stopped=True)[0].is_running
self.assertEqual(len(project.get_service('console').containers()), 0)
def test_project_up_recreate_with_tmpfs_volume(self):
@@ -768,10 +737,10 @@ class ProjectTest(DockerClientTestCase):
"com.docker.compose.network.test": "9-29-045"
}
@v2_1_only()
@v2_only()
def test_up_with_network_static_addresses(self):
config_data = build_config(
version=V2_1,
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -797,8 +766,7 @@ class ProjectTest(DockerClientTestCase):
{"subnet": "fe80::/64",
"gateway": "fe80::1001:1"}
]
},
'enable_ipv6': True,
}
}
}
)
@@ -809,8 +777,13 @@ class ProjectTest(DockerClientTestCase):
)
project.up(detached=True)
network = self.client.networks(names=['static_test'])[0]
service_container = project.get_service('web').containers()[0]
assert network['Options'] == {
"com.docker.network.enable_ipv6": "true"
}
IPAMConfig = (service_container.inspect().get('NetworkSettings', {}).
get('Networks', {}).get('composetest_static_test', {}).
get('IPAMConfig', {}))
@@ -821,7 +794,7 @@ class ProjectTest(DockerClientTestCase):
def test_up_with_enable_ipv6(self):
self.require_api_version('1.23')
config_data = build_config(
version=V2_1,
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -852,7 +825,7 @@ class ProjectTest(DockerClientTestCase):
config_data=config_data,
)
project.up(detached=True)
network = [n for n in self.client.networks() if 'static_test' in n['Name']][0]
network = self.client.networks(names=['static_test'])[0]
service_container = project.get_service('web').containers()[0]
assert network['EnableIPv6'] is True
@@ -1004,7 +977,7 @@ class ProjectTest(DockerClientTestCase):
network_name = 'network_with_label'
config_data = build_config(
version=V2_1,
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -1053,8 +1026,8 @@ class ProjectTest(DockerClientTestCase):
project.up()
self.assertEqual(len(project.containers()), 1)
volume_data = self.get_volume_data(full_vol_name)
assert volume_data['Name'].split('/')[-1] == full_vol_name
volume_data = self.client.inspect_volume(full_vol_name)
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
@v2_1_only()
@@ -1064,7 +1037,7 @@ class ProjectTest(DockerClientTestCase):
volume_name = 'volume_with_label'
config_data = build_config(
version=V2_1,
version=V2_0,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -1089,12 +1062,10 @@ class ProjectTest(DockerClientTestCase):
volumes = [
v for v in self.client.volumes().get('Volumes', [])
if v['Name'].split('/')[-1].startswith('composetest_')
if v['Name'].startswith('composetest_')
]
assert set([v['Name'].split('/')[-1] for v in volumes]) == set(
['composetest_{}'.format(volume_name)]
)
assert [v['Name'] for v in volumes] == ['composetest_{}'.format(volume_name)]
assert 'label_key' in volumes[0]['Labels']
assert volumes[0]['Labels']['label_key'] == 'label_val'
@@ -1104,7 +1075,7 @@ class ProjectTest(DockerClientTestCase):
base_file = config.ConfigFile(
'base.yml',
{
'version': str(V2_0),
'version': V2_0,
'services': {
'simple': {'image': 'busybox:latest', 'command': 'top'},
'another': {
@@ -1123,7 +1094,7 @@ class ProjectTest(DockerClientTestCase):
override_file = config.ConfigFile(
'override.yml',
{
'version': str(V2_0),
'version': V2_0,
'services': {
'another': {
'logging': {
@@ -1156,7 +1127,7 @@ class ProjectTest(DockerClientTestCase):
base_file = config.ConfigFile(
'base.yml',
{
'version': str(V2_0),
'version': V2_0,
'services': {
'simple': {
'image': 'busybox:latest',
@@ -1169,7 +1140,7 @@ class ProjectTest(DockerClientTestCase):
override_file = config.ConfigFile(
'override.yml',
{
'version': str(V2_0),
'version': V2_0,
'services': {
'simple': {
'ports': ['1234:1234']
@@ -1187,7 +1158,6 @@ class ProjectTest(DockerClientTestCase):
containers = project.containers()
self.assertEqual(len(containers), 1)
@v2_2_only()
def test_project_up_config_scale(self):
config_data = build_config(
version=V2_2,
@@ -1235,8 +1205,8 @@ class ProjectTest(DockerClientTestCase):
)
project.volumes.initialize()
volume_data = self.get_volume_data(full_vol_name)
assert volume_data['Name'].split('/')[-1] == full_vol_name
volume_data = self.client.inspect_volume(full_vol_name)
assert volume_data['Name'] == full_vol_name
assert volume_data['Driver'] == 'local'
@v2_only()
@@ -1259,13 +1229,13 @@ class ProjectTest(DockerClientTestCase):
)
project.up()
volume_data = self.get_volume_data(full_vol_name)
assert volume_data['Name'].split('/')[-1] == full_vol_name
volume_data = self.client.inspect_volume(full_vol_name)
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
@v3_only()
def test_project_up_with_secrets(self):
node = create_host_file(self.client, os.path.abspath('tests/fixtures/secrets/default'))
create_host_file(self.client, os.path.abspath('tests/fixtures/secrets/default'))
config_data = build_config(
version=V3_1,
@@ -1276,7 +1246,6 @@ class ProjectTest(DockerClientTestCase):
'secrets': [
types.ServiceSecret.parse({'source': 'super', 'target': 'special'}),
],
'environment': ['constraint:node=={}'.format(node if node is not None else '*')]
}],
secrets={
'super': {
@@ -1318,11 +1287,10 @@ class ProjectTest(DockerClientTestCase):
name='composetest',
config_data=config_data, client=self.client
)
with self.assertRaises(APIError if is_cluster(self.client) else config.ConfigurationError):
with self.assertRaises(config.ConfigurationError):
project.volumes.initialize()
@v2_only()
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_initialize_volumes_updated_driver(self):
vol_name = '{0:x}'.format(random.getrandbits(32))
full_vol_name = 'composetest_{0}'.format(vol_name)
@@ -1342,8 +1310,8 @@ class ProjectTest(DockerClientTestCase):
)
project.volumes.initialize()
volume_data = self.get_volume_data(full_vol_name)
assert volume_data['Name'].split('/')[-1] == full_vol_name
volume_data = self.client.inspect_volume(full_vol_name)
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
config_data = config_data._replace(
@@ -1380,8 +1348,8 @@ class ProjectTest(DockerClientTestCase):
)
project.volumes.initialize()
volume_data = self.get_volume_data(full_vol_name)
assert volume_data['Name'].split('/')[-1] == full_vol_name
volume_data = self.client.inspect_volume(full_vol_name)
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
config_data = config_data._replace(
@@ -1393,12 +1361,11 @@ class ProjectTest(DockerClientTestCase):
client=self.client
)
project.volumes.initialize()
volume_data = self.get_volume_data(full_vol_name)
assert volume_data['Name'].split('/')[-1] == full_vol_name
volume_data = self.client.inspect_volume(full_vol_name)
self.assertEqual(volume_data['Name'], full_vol_name)
self.assertEqual(volume_data['Driver'], 'local')
@v2_only()
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_initialize_volumes_external_volumes(self):
# Use composetest_ prefix so it gets garbage-collected in tearDown()
vol_name = 'composetest_{0:x}'.format(random.getrandbits(32))
@@ -1412,7 +1379,7 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={
vol_name: {'external': True, 'name': vol_name}
vol_name: {'external': True, 'external_name': vol_name}
},
)
project = Project.from_config(
@@ -1436,7 +1403,7 @@ class ProjectTest(DockerClientTestCase):
'command': 'top'
}],
volumes={
vol_name: {'external': True, 'name': vol_name}
vol_name: {'external': True, 'external_name': vol_name}
},
)
project = Project.from_config(
@@ -1457,7 +1424,7 @@ class ProjectTest(DockerClientTestCase):
base_file = config.ConfigFile(
'base.yml',
{
'version': str(V2_0),
'version': V2_0,
'services': {
'simple': {
'image': 'busybox:latest',


@@ -16,8 +16,6 @@ from .. import mock
from .testcases import DockerClientTestCase
from .testcases import get_links
from .testcases import pull_busybox
from .testcases import SWARM_SKIP_CONTAINERS_ALL
from .testcases import SWARM_SKIP_CPU_SHARES
from compose import __version__
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
@@ -34,14 +32,9 @@ from compose.project import OneOffFilter
from compose.service import ConvergencePlan
from compose.service import ConvergenceStrategy
from compose.service import NetworkMode
from compose.service import PidMode
from compose.service import Service
from compose.utils import parse_nanoseconds_int
from tests.integration.testcases import is_cluster
from tests.integration.testcases import no_cluster
from tests.integration.testcases import v2_1_only
from tests.integration.testcases import v2_2_only
from tests.integration.testcases import v2_3_only
from tests.integration.testcases import v2_only
from tests.integration.testcases import v3_only
@@ -107,7 +100,6 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
self.assertEqual('foodriver', container.get('HostConfig.VolumeDriver'))
@pytest.mark.skipif(SWARM_SKIP_CPU_SHARES, reason='Swarm --cpu-shares bug')
def test_create_container_with_cpu_shares(self):
service = self.create_service('db', cpu_shares=73)
container = service.create_container()
@@ -159,7 +151,6 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
assert container.get('HostConfig.Init') is True
@pytest.mark.xfail(True, reason='Option has been removed in Engine 17.06.0')
def test_create_container_with_init_path(self):
self.require_api_version('1.25')
docker_init_path = find_executable('docker-init')
@@ -204,34 +195,6 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
assert container.get('HostConfig.ReadonlyRootfs') == read_only
def test_create_container_with_blkio_config(self):
blkio_config = {
'weight': 300,
'weight_device': [{'path': '/dev/sda', 'weight': 200}],
'device_read_bps': [{'path': '/dev/sda', 'rate': 1024 * 1024 * 100}],
'device_read_iops': [{'path': '/dev/sda', 'rate': 1000}],
'device_write_bps': [{'path': '/dev/sda', 'rate': 1024 * 1024}],
'device_write_iops': [{'path': '/dev/sda', 'rate': 800}]
}
service = self.create_service('web', blkio_config=blkio_config)
container = service.create_container()
assert container.get('HostConfig.BlkioWeight') == 300
assert container.get('HostConfig.BlkioWeightDevice') == [{
'Path': '/dev/sda', 'Weight': 200
}]
assert container.get('HostConfig.BlkioDeviceReadBps') == [{
'Path': '/dev/sda', 'Rate': 1024 * 1024 * 100
}]
assert container.get('HostConfig.BlkioDeviceWriteBps') == [{
'Path': '/dev/sda', 'Rate': 1024 * 1024
}]
assert container.get('HostConfig.BlkioDeviceReadIOps') == [{
'Path': '/dev/sda', 'Rate': 1000
}]
assert container.get('HostConfig.BlkioDeviceWriteIOps') == [{
'Path': '/dev/sda', 'Rate': 800
}]
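The removed test asserts a key-for-key translation from the compose-style `blkio_config` mapping to `HostConfig` fields. An illustrative (non-Compose) sketch of that translation, using the same key names the assertions check:

```python
def blkio_config_to_host_config(blkio_config):
    """Translate compose-style blkio keys to HostConfig-style keys."""
    def rate_list(entries):
        # Per-device throttles become {'Path': ..., 'Rate': ...} dicts
        return [{'Path': e['path'], 'Rate': e['rate']} for e in entries]

    return {
        'BlkioWeight': blkio_config.get('weight'),
        'BlkioWeightDevice': [
            {'Path': e['path'], 'Weight': e['weight']}
            for e in blkio_config.get('weight_device', [])
        ],
        'BlkioDeviceReadBps': rate_list(blkio_config.get('device_read_bps', [])),
        'BlkioDeviceWriteBps': rate_list(blkio_config.get('device_write_bps', [])),
        'BlkioDeviceReadIOps': rate_list(blkio_config.get('device_read_iops', [])),
        'BlkioDeviceWriteIOps': rate_list(blkio_config.get('device_write_iops', [])),
    }
```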
def test_create_container_with_security_opt(self):
security_opt = ['label:disable']
service = self.create_service('db', security_opt=security_opt)
@@ -239,15 +202,6 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt))
# @pytest.mark.xfail(True, reason='Not supported on most drivers')
@pytest.mark.skipif(True, reason='https://github.com/moby/moby/issues/34270')
def test_create_container_with_storage_opt(self):
storage_opt = {'size': '1G'}
service = self.create_service('db', storage_opt=storage_opt)
container = service.create_container()
service.start_container(container)
self.assertEqual(container.get('HostConfig.StorageOpt'), storage_opt)
def test_create_container_with_mac_address(self):
service = self.create_service('db', mac_address='02:42:ac:11:65:43')
container = service.create_container()
@@ -271,24 +225,6 @@ class ServiceTest(DockerClientTestCase):
self.assertTrue(path.basename(actual_host_path) == path.basename(host_path),
msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))
def test_create_container_with_healthcheck_config(self):
one_second = parse_nanoseconds_int('1s')
healthcheck = {
'test': ['true'],
'interval': 2 * one_second,
'timeout': 5 * one_second,
'retries': 5,
'start_period': 2 * one_second
}
service = self.create_service('db', healthcheck=healthcheck)
container = service.create_container()
remote_healthcheck = container.get('Config.Healthcheck')
assert remote_healthcheck['Test'] == healthcheck['test']
assert remote_healthcheck['Interval'] == healthcheck['interval']
assert remote_healthcheck['Timeout'] == healthcheck['timeout']
assert remote_healthcheck['Retries'] == healthcheck['retries']
assert remote_healthcheck['StartPeriod'] == healthcheck['start_period']
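The removed healthcheck test relies on `parse_nanoseconds_int` to turn durations like `'1s'` into integer nanoseconds. A rough stand-in for illustration only; the real helper lives in `compose.utils` and may behave differently:

```python
# Nanoseconds per unit suffix; longer suffixes must be matched first so
# that 'ms' is not mistaken for 's'.
NANOS = {'us': 10 ** 3, 'ms': 10 ** 6, 's': 10 ** 9,
         'm': 60 * 10 ** 9, 'h': 3600 * 10 ** 9}


def to_nanoseconds(value):
    """Parse a duration string such as '1s' or '500ms' into nanoseconds."""
    for suffix, factor in sorted(NANOS.items(), key=lambda kv: -len(kv[0])):
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    # A bare number is taken as nanoseconds already
    return int(value)
```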
def test_recreate_preserves_volume_with_trailing_slash(self):
"""When the Compose file specifies a trailing slash in the container path, make
sure we copy the volume over when recreating.
@@ -313,7 +249,6 @@ class ServiceTest(DockerClientTestCase):
'busybox', 'true',
volumes={container_path: {}},
labels={'com.docker.compose.test_image': 'true'},
host_config={}
)
image = self.client.commit(tmp_container)['Id']
@@ -343,16 +278,13 @@ class ServiceTest(DockerClientTestCase):
image='busybox:latest',
command=["top"],
labels={LABEL_PROJECT: 'composetest'},
host_config={},
environment=['affinity:container=={}'.format(volume_container_1.id)],
)
host_service = self.create_service(
'host',
volumes_from=[
VolumeFromSpec(volume_service, 'rw', 'service'),
VolumeFromSpec(volume_container_2, 'rw', 'container')
],
environment=['affinity:container=={}'.format(volume_container_1.id)],
]
)
host_container = host_service.create_container()
host_service.start_container(host_container)
@@ -389,15 +321,9 @@ class ServiceTest(DockerClientTestCase):
self.assertIn('FOO=2', new_container.get('Config.Env'))
self.assertEqual(new_container.name, 'composetest_db_1')
self.assertEqual(new_container.get_mount('/etc')['Source'], volume_path)
if not is_cluster(self.client):
assert (
'affinity:container==%s' % old_container.id in
new_container.get('Config.Env')
)
else:
# In Swarm, the env marker is consumed and the container should be deployed
# on the same node.
assert old_container.get('Node.Name') == new_container.get('Node.Name')
self.assertIn(
'affinity:container==%s' % old_container.id,
new_container.get('Config.Env'))
self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
self.assertNotEqual(old_container.id, new_container.id)
@@ -424,13 +350,8 @@ class ServiceTest(DockerClientTestCase):
ConvergencePlan('recreate', [orig_container]))
assert new_container.get_mount('/etc')['Source'] == volume_path
if not is_cluster(self.client):
assert ('affinity:container==%s' % orig_container.id in
new_container.get('Config.Env'))
else:
# In Swarm, the env marker is consumed and the container should be deployed
# on the same node.
assert orig_container.get('Node.Name') == new_container.get('Node.Name')
assert ('affinity:container==%s' % orig_container.id in
new_container.get('Config.Env'))
orig_container = new_container
@@ -543,21 +464,18 @@ class ServiceTest(DockerClientTestCase):
)
containers = service.execute_convergence_plan(ConvergencePlan('create', []), start=False)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
containers = service.execute_convergence_plan(
ConvergencePlan('recreate', containers),
start=False)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
service.execute_convergence_plan(ConvergencePlan('start', containers), start=False)
service_containers = service.containers(stopped=True)
assert len(service_containers) == 1
assert not service_containers[0].is_running
self.assertEqual(len(service.containers()), 0)
self.assertEqual(len(service.containers(stopped=True)), 1)
def test_start_container_passes_through_options(self):
db = self.create_service('db')
@@ -569,7 +487,6 @@ class ServiceTest(DockerClientTestCase):
create_and_start_container(db)
self.assertEqual(db.containers()[0].environment['FOO'], 'BAR')
@no_cluster('No legacy links support in Swarm')
def test_start_container_creates_links(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, None)])
@@ -586,7 +503,6 @@ class ServiceTest(DockerClientTestCase):
'db'])
)
@no_cluster('No legacy links support in Swarm')
def test_start_container_creates_links_with_names(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, 'custom_link_name')])
@@ -603,7 +519,6 @@ class ServiceTest(DockerClientTestCase):
'custom_link_name'])
)
@no_cluster('No legacy links support in Swarm')
def test_start_container_with_external_links(self):
db = self.create_service('db')
web = self.create_service('web', external_links=['composetest_db_1',
@@ -622,7 +537,6 @@ class ServiceTest(DockerClientTestCase):
'db_3']),
)
@no_cluster('No legacy links support in Swarm')
def test_start_normal_container_does_not_create_links_to_its_own_service(self):
db = self.create_service('db')
@@ -632,7 +546,6 @@ class ServiceTest(DockerClientTestCase):
c = create_and_start_container(db)
self.assertEqual(set(get_links(c)), set([]))
@no_cluster('No legacy links support in Swarm')
def test_start_one_off_container_creates_links_to_its_own_service(self):
db = self.create_service('db')
@@ -659,7 +572,7 @@ class ServiceTest(DockerClientTestCase):
container = create_and_start_container(service)
container.wait()
self.assertIn(b'success', container.logs())
assert len(self.client.images(name='composetest_test')) >= 1
self.assertEqual(len(self.client.images(name='composetest_test')), 1)
def test_start_container_uses_tagged_image_if_it_exists(self):
self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test')
@@ -686,10 +599,7 @@ class ServiceTest(DockerClientTestCase):
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
service = self.create_service('web', build={'context': base_dir})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
self.create_service('web', build={'context': base_dir}).build()
assert self.client.inspect_image('composetest_web')
def test_build_non_ascii_filename(self):
@@ -702,9 +612,7 @@ class ServiceTest(DockerClientTestCase):
with open(os.path.join(base_dir.encode('utf8'), b'foo\xE2bar'), 'w') as f:
f.write("hello world\n")
service = self.create_service('web', build={'context': text_type(base_dir)})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
self.create_service('web', build={'context': text_type(base_dir)}).build()
assert self.client.inspect_image('composetest_web')
def test_build_with_image_name(self):
@@ -739,7 +647,6 @@ class ServiceTest(DockerClientTestCase):
build={'context': text_type(base_dir),
'args': {"build_version": "1"}})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
assert "build_version=1" in service.image()['ContainerConfig']['Cmd']
@@ -756,8 +663,6 @@ class ServiceTest(DockerClientTestCase):
build={'context': text_type(base_dir),
'args': {"build_version": "1"}})
service.build(build_args_override={'build_version': '2'})
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
assert "build_version=2" in service.image()['ContainerConfig']['Cmd']
@@ -773,61 +678,9 @@ class ServiceTest(DockerClientTestCase):
'labels': {'com.docker.compose.test': 'true'}
})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
assert service.image()['Config']['Labels']['com.docker.compose.test'] == 'true'
@no_cluster('Container networks not on Swarm')
def test_build_with_network(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write('FROM busybox\n')
f.write('RUN ping -c1 google.local\n')
net_container = self.client.create_container(
'busybox', 'top', host_config=self.client.create_host_config(
extra_hosts={'google.local': '127.0.0.1'}
), name='composetest_build_network'
)
self.addCleanup(self.client.remove_container, net_container, force=True)
self.client.start(net_container)
service = self.create_service('buildwithnet', build={
'context': text_type(base_dir),
'network': 'container:{}'.format(net_container['Id'])
})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
@v2_3_only()
@no_cluster('Not supported on UCP 2.2.0-beta1') # FIXME: remove once support is added
def test_build_with_target(self):
self.require_api_version('1.30')
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write('FROM busybox as one\n')
f.write('LABEL com.docker.compose.test=true\n')
f.write('LABEL com.docker.compose.test.target=one\n')
f.write('FROM busybox as two\n')
f.write('LABEL com.docker.compose.test.target=two\n')
service = self.create_service('buildtarget', build={
'context': text_type(base_dir),
'target': 'one'
})
service.build()
assert service.image()
assert service.image()['Config']['Labels']['com.docker.compose.test.target'] == 'one'
def test_start_container_stays_unprivileged(self):
service = self.create_service('web')
container = create_and_start_container(service).inspect()
@@ -866,27 +719,20 @@ class ServiceTest(DockerClientTestCase):
'0.0.0.0:9001:9000/udp',
])
container = create_and_start_container(service).inspect()
assert container['NetworkSettings']['Ports']['8000/tcp'] == [{
'HostIp': '127.0.0.1',
'HostPort': '8001',
}]
assert container['NetworkSettings']['Ports']['9000/udp'][0]['HostPort'] == '9001'
if not is_cluster(self.client):
assert container['NetworkSettings']['Ports']['9000/udp'][0]['HostIp'] == '0.0.0.0'
# self.assertEqual(container['NetworkSettings']['Ports'], {
# '8000/tcp': [
# {
# 'HostIp': '127.0.0.1',
# 'HostPort': '8001',
# },
# ],
# '9000/udp': [
# {
# 'HostIp': '0.0.0.0',
# 'HostPort': '9001',
# },
# ],
# })
self.assertEqual(container['NetworkSettings']['Ports'], {
'8000/tcp': [
{
'HostIp': '127.0.0.1',
'HostPort': '8001',
},
],
'9000/udp': [
{
'HostIp': '0.0.0.0',
'HostPort': '9001',
},
],
})
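The port assertions above map a published spec like `127.0.0.1:8001:8000` onto the `NetworkSettings.Ports` structure Docker reports. A rough sketch of that mapping (simplified; real specs also allow omitting the host IP or host port):

```python
def parse_port_binding(spec):
    # "HOSTIP:HOSTPORT:CONTAINERPORT[/proto]" -> (ports key, binding dict)
    host_ip, host_port, container = spec.split(':')
    port, _, proto = container.partition('/')
    key = '%s/%s' % (port, proto or 'tcp')
    return key, {'HostIp': host_ip, 'HostPort': host_port}

key, binding = parse_port_binding('127.0.0.1:8001:8000')
assert key == '8000/tcp'
assert binding == {'HostIp': '127.0.0.1', 'HostPort': '8001'}

key, _ = parse_port_binding('0.0.0.0:9001:9000/udp')
assert key == '9000/udp'
```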
def test_create_with_image_id(self):
# Get image id for the current busybox:latest
@@ -914,10 +760,6 @@ class ServiceTest(DockerClientTestCase):
service.scale(0)
self.assertEqual(len(service.containers()), 0)
@pytest.mark.skipif(
SWARM_SKIP_CONTAINERS_ALL,
reason='Swarm /containers/json bug'
)
def test_scale_with_stopped_containers(self):
"""
Given there are some stopped containers and scale is called with a
@@ -1080,12 +922,12 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(container.get('HostConfig.NetworkMode'), 'host')
def test_pid_mode_none_defined(self):
service = self.create_service('web', pid_mode=None)
service = self.create_service('web', pid=None)
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), '')
def test_pid_mode_host(self):
service = self.create_service('web', pid_mode=PidMode('host'))
service = self.create_service('web', pid='host')
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), 'host')
@@ -1217,8 +1059,6 @@ class ServiceTest(DockerClientTestCase):
build={'context': base_dir,
'cache_from': ['build1']})
service.build()
self.addCleanup(self.client.remove_image, service.image_name)
assert service.image()
@mock.patch.dict(os.environ)


@@ -6,11 +6,9 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import py
from docker.errors import ImageNotFound
from .testcases import DockerClientTestCase
from .testcases import get_links
from .testcases import no_cluster
from compose.config import config
from compose.project import Project
from compose.service import ConvergenceStrategy
@@ -245,34 +243,21 @@ class ServiceStateTest(DockerClientTestCase):
tag = 'latest'
image = '{}:{}'.format(repo, tag)
def safe_remove_image(image):
try:
self.client.remove_image(image)
except ImageNotFound:
pass
image_id = self.client.images(name='busybox')[0]['Id']
self.client.tag(image_id, repository=repo, tag=tag)
self.addCleanup(safe_remove_image, image)
self.addCleanup(self.client.remove_image, image)
web = self.create_service('web', image=image)
container = web.create_container()
# update the image
c = self.client.create_container(image, ['touch', '/hello.txt'], host_config={})
# In the case of a cluster, there's a chance we pick up the old image when
# calculating the new hash. To circumvent that, untag the old image first
# See also: https://github.com/moby/moby/issues/26852
self.client.remove_image(image, force=True)
c = self.client.create_container(image, ['touch', '/hello.txt'])
self.client.commit(c, repository=repo, tag=tag)
self.client.remove_container(c)
web = self.create_service('web', image=image)
self.assertEqual(('recreate', [container]), web.convergence_plan())
@no_cluster('Can not guarantee the build will be run on the same node the service is deployed')
def test_trigger_recreate_with_build(self):
context = py.test.ensuretemp('test_trigger_recreate_with_build')
self.addCleanup(context.remove)


@@ -4,9 +4,8 @@ from __future__ import unicode_literals
import functools
import os
import pytest
from docker.errors import APIError
from docker.utils import version_lt
from pytest import skip
from .. import unittest
from compose.cli.docker_client import docker_client
@@ -17,19 +16,11 @@ from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import COMPOSEFILE_V2_1 as V2_1
from compose.const import COMPOSEFILE_V2_2 as V2_2
from compose.const import COMPOSEFILE_V2_3 as V2_3
from compose.const import COMPOSEFILE_V3_0 as V3_0
from compose.const import COMPOSEFILE_V3_2 as V3_2
from compose.const import COMPOSEFILE_V3_3 as V3_3
from compose.const import LABEL_PROJECT
from compose.progress_stream import stream_output
from compose.service import Service
SWARM_SKIP_CONTAINERS_ALL = os.environ.get('SWARM_SKIP_CONTAINERS_ALL', '0') != '0'
SWARM_SKIP_CPU_SHARES = os.environ.get('SWARM_SKIP_CPU_SHARES', '0') != '0'
SWARM_SKIP_RM_VOLUMES = os.environ.get('SWARM_SKIP_RM_VOLUMES', '0') != '0'
SWARM_ASSUME_MULTINODE = os.environ.get('SWARM_ASSUME_MULTINODE', '0') != '0'
def pull_busybox(client):
client.pull('busybox:latest', stream=False)
@@ -47,7 +38,7 @@ def get_links(container):
def engine_max_version():
if 'DOCKER_VERSION' not in os.environ:
return V3_3
return V3_2
version = os.environ['DOCKER_VERSION'].partition('-')[0]
if version_lt(version, '1.10'):
return V1
@@ -55,36 +46,37 @@ def engine_max_version():
return V2_0
if version_lt(version, '1.13'):
return V2_1
if version_lt(version, '17.06'):
return V3_2
return V3_3
return V3_2
def min_version_skip(version):
return pytest.mark.skipif(
engine_max_version() < version,
reason="Engine version %s is too low" % version
)
def build_version_required_decorator(ignored_versions):
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
max_version = engine_max_version()
if max_version in ignored_versions:
skip("Engine version %s is too low" % max_version)
return
return f(self, *args, **kwargs)
return wrapper
return decorator
def v2_only():
return min_version_skip(V2_0)
return build_version_required_decorator((V1,))
def v2_1_only():
return min_version_skip(V2_1)
return build_version_required_decorator((V1, V2_0))
def v2_2_only():
return min_version_skip(V2_2)
def v2_3_only():
return min_version_skip(V2_3)
return build_version_required_decorator((V1, V2_0, V2_1))
def v3_only():
return min_version_skip(V3_0)
return build_version_required_decorator((V1, V2_0, V2_1, V2_2))
class DockerClientTestCase(unittest.TestCase):
@@ -105,11 +97,7 @@ class DockerClientTestCase(unittest.TestCase):
for i in self.client.images(
filters={'label': 'com.docker.compose.test_image'}):
try:
self.client.remove_image(i, force=True)
except APIError as e:
if e.is_server_error():
pass
self.client.remove_image(i)
volumes = self.client.volumes().get('Volumes') or []
for v in volumes:
@@ -144,44 +132,4 @@ class DockerClientTestCase(unittest.TestCase):
def require_api_version(self, minimum):
api_version = self.client.version()['ApiVersion']
if version_lt(api_version, minimum):
pytest.skip("API version is too low ({} < {})".format(api_version, minimum))
def get_volume_data(self, volume_name):
if not is_cluster(self.client):
return self.client.inspect_volume(volume_name)
volumes = self.client.volumes(filters={'name': volume_name})['Volumes']
assert len(volumes) > 0
return self.client.inspect_volume(volumes[0]['Name'])
def is_cluster(client):
if SWARM_ASSUME_MULTINODE:
return True
def get_nodes_number():
try:
return len(client.nodes())
except APIError:
# If the Engine is not part of a Swarm, the SDK will raise
# an APIError
return 0
if not hasattr(is_cluster, 'nodes') or is_cluster.nodes is None:
# Only make the API call if the value hasn't been cached yet
is_cluster.nodes = get_nodes_number()
return is_cluster.nodes > 1
def no_cluster(reason):
def decorator(f):
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
if is_cluster(self.client):
pytest.skip("Test will not be run in cluster mode: %s" % reason)
return
return f(self, *args, **kwargs)
return wrapper
return decorator
skip("API version is too low ({} < {})".format(api_version, minimum))
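The version gates in `engine_max_version` above rest on `docker.utils.version_lt`, a numeric comparison of dotted version strings. A self-contained sketch of the idea (ignoring suffixes like `-ce`, which the real helper also handles):

```python
def version_lt(v1, v2):
    # Compare dotted versions component by component as integers,
    # so '1.10' sorts after '1.9' (unlike a plain string compare).
    def parts(v):
        return tuple(int(p) for p in v.split('.'))
    return parts(v1) < parts(v2)

assert version_lt('1.9', '1.10')
assert not version_lt('17.06', '1.13')
```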


@@ -1,11 +1,9 @@
from __future__ import absolute_import
from __future__ import unicode_literals
import six
from docker.errors import DockerException
from .testcases import DockerClientTestCase
from .testcases import no_cluster
from compose.const import LABEL_PROJECT
from compose.const import LABEL_VOLUME
from compose.volume import Volume
@@ -24,15 +22,12 @@ class VolumeTest(DockerClientTestCase):
del self.tmp_volumes
super(VolumeTest, self).tearDown()
def create_volume(self, name, driver=None, opts=None, external=None, custom_name=False):
if external:
custom_name = True
if isinstance(external, six.text_type):
name = external
def create_volume(self, name, driver=None, opts=None, external=None):
if external and isinstance(external, bool):
external = name
vol = Volume(
self.client, 'composetest', name, driver=driver, driver_opts=opts,
external=bool(external), custom_name=custom_name
external_name=external
)
self.tmp_volumes.append(vol)
return vol
@@ -40,35 +35,26 @@ class VolumeTest(DockerClientTestCase):
def test_create_volume(self):
vol = self.create_volume('volume01')
vol.create()
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.full_name
def test_create_volume_custom_name(self):
vol = self.create_volume('volume01', custom_name=True)
assert vol.name == vol.full_name
vol.create()
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.name
info = self.client.inspect_volume(vol.full_name)
assert info['Name'] == vol.full_name
def test_recreate_existing_volume(self):
vol = self.create_volume('volume01')
vol.create()
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.full_name
info = self.client.inspect_volume(vol.full_name)
assert info['Name'] == vol.full_name
vol.create()
info = self.get_volume_data(vol.full_name)
assert info['Name'].split('/')[-1] == vol.full_name
info = self.client.inspect_volume(vol.full_name)
assert info['Name'] == vol.full_name
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_inspect_volume(self):
vol = self.create_volume('volume01')
vol.create()
info = vol.inspect()
assert info['Name'] == vol.full_name
@no_cluster('remove volume by name defect on Swarm Classic')
def test_remove_volume(self):
vol = Volume(self.client, 'composetest', 'volume01')
vol.create()
@@ -76,7 +62,6 @@ class VolumeTest(DockerClientTestCase):
volumes = self.client.volumes()['Volumes']
assert len([v for v in volumes if v['Name'] == vol.full_name]) == 0
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_external_volume(self):
vol = self.create_volume('composetest_volume_ext', external=True)
assert vol.external is True
@@ -85,7 +70,6 @@ class VolumeTest(DockerClientTestCase):
info = vol.inspect()
assert info['Name'] == vol.name
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_external_aliased_volume(self):
alias_name = 'composetest_alias01'
vol = self.create_volume('volume01', external=alias_name)
@@ -95,28 +79,24 @@ class VolumeTest(DockerClientTestCase):
info = vol.inspect()
assert info['Name'] == alias_name
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_exists(self):
vol = self.create_volume('volume01')
assert vol.exists() is False
vol.create()
assert vol.exists() is True
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_exists_external(self):
vol = self.create_volume('volume01', external=True)
assert vol.exists() is False
vol.create()
assert vol.exists() is True
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_exists_external_aliased(self):
vol = self.create_volume('volume01', external='composetest_alias01')
assert vol.exists() is False
vol.create()
assert vol.exists() is True
@no_cluster('inspect volume by name defect on Swarm Classic')
def test_volume_default_labels(self):
vol = self.create_volume('volume01')
vol.create()


@@ -9,7 +9,6 @@ from compose import bundle
from compose import service
from compose.cli.errors import UserError
from compose.config.config import Config
from compose.const import COMPOSEFILE_V2_0 as V2_0
@pytest.fixture
@@ -75,7 +74,7 @@ def test_to_bundle():
{'name': 'b', 'build': './b'},
]
config = Config(
version=V2_0,
version=2,
services=services,
volumes={'special': {}},
networks={'extra': {}},


@@ -7,7 +7,6 @@ from requests.exceptions import ConnectionError
from compose.cli import errors
from compose.cli.errors import handle_connection_errors
from compose.const import IS_WINDOWS_PLATFORM
from tests import mock
@@ -66,23 +65,3 @@ class TestHandleConnectionErrors(object):
raise APIError(None, None, msg)
mock_logging.error.assert_called_once_with(msg)
@pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='Needs pywin32')
def test_windows_pipe_error_no_data(self, mock_logging):
import pywintypes
with pytest.raises(errors.ConnectionError):
with handle_connection_errors(mock.Mock(api_version='1.22')):
raise pywintypes.error(232, 'WriteFile', 'The pipe is being closed.')
_, args, _ = mock_logging.error.mock_calls[0]
assert "The current Compose file version is not compatible with your engine version." in args[0]
@pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='Needs pywin32')
def test_windows_pipe_error_misc(self, mock_logging):
import pywintypes
with pytest.raises(errors.ConnectionError):
with handle_connection_errors(mock.Mock(api_version='1.22')):
raise pywintypes.error(231, 'WriteFile', 'The pipe is busy.')
_, args, _ = mock_logging.error.mock_calls[0]
assert "Windows named pipe error: The pipe is busy. (code: 231)" == args[0]


@@ -28,7 +28,6 @@ from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import COMPOSEFILE_V2_1 as V2_1
from compose.const import COMPOSEFILE_V2_2 as V2_2
from compose.const import COMPOSEFILE_V2_3 as V2_3
from compose.const import COMPOSEFILE_V3_0 as V3_0
from compose.const import COMPOSEFILE_V3_1 as V3_1
from compose.const import COMPOSEFILE_V3_2 as V3_2
@@ -180,9 +179,6 @@ class ConfigTest(unittest.TestCase):
cfg = config.load(build_config_details({'version': '2.2'}))
assert cfg.version == V2_2
cfg = config.load(build_config_details({'version': '2.3'}))
assert cfg.version == V2_3
for version in ['3', '3.0']:
cfg = config.load(build_config_details({'version': version}))
assert cfg.version == V3_0
@@ -251,7 +247,7 @@ class ConfigTest(unittest.TestCase):
)
)
assert 'Invalid top-level property "web"' in excinfo.exconly()
assert 'Additional properties are not allowed' in excinfo.exconly()
assert VERSION_EXPLANATION in excinfo.exconly()
def test_named_volume_config_empty(self):
@@ -382,7 +378,7 @@ class ConfigTest(unittest.TestCase):
base_file = config.ConfigFile(
'base.yaml',
{
'version': str(V2_1),
'version': V2_1,
'services': {
'web': {
'image': 'example/web',
@@ -407,32 +403,6 @@ class ConfigTest(unittest.TestCase):
}
}
def test_load_config_service_labels(self):
base_file = config.ConfigFile(
'base.yaml',
{
'version': '2.1',
'services': {
'web': {
'image': 'example/web',
'labels': ['label_key=label_val']
},
'db': {
'image': 'example/db',
'labels': {
'label_key': 'label_val'
}
}
},
}
)
details = config.ConfigDetails('.', [base_file])
service_dicts = config.load(details).services
for service in service_dicts:
assert service['labels'] == {
'label_key': 'label_val'
}
def test_load_config_volume_and_network_labels(self):
base_file = config.ConfigFile(
'base.yaml',
@@ -461,23 +431,30 @@ class ConfigTest(unittest.TestCase):
)
details = config.ConfigDetails('.', [base_file])
loaded_config = config.load(details)
network_dict = config.load(details).networks
volume_dict = config.load(details).volumes
assert loaded_config.networks == {
'with_label': {
'labels': {
'label_key': 'label_val'
self.assertEqual(
network_dict,
{
'with_label': {
'labels': {
'label_key': 'label_val'
}
}
}
}
)
assert loaded_config.volumes == {
'with_label': {
'labels': {
'label_key': 'label_val'
self.assertEqual(
volume_dict,
{
'with_label': {
'labels': {
'label_key': 'label_val'
}
}
}
}
)
def test_load_config_invalid_service_names(self):
for invalid_name in ['?not?allowed', ' ', '', '!', '/', '\xe2']:
@@ -600,20 +577,6 @@ class ConfigTest(unittest.TestCase):
assert 'Invalid service name \'mong\\o\'' in excinfo.exconly()
def test_config_duplicate_cache_from_values_validation_error(self):
with pytest.raises(ConfigurationError) as exc:
config.load(
build_config_details({
'version': '2.3',
'services': {
'test': {'build': {'context': '.', 'cache_from': ['a', 'b', 'a']}}
}
})
)
assert 'build.cache_from contains non-unique items' in exc.exconly()
def test_load_with_multiple_files_v1(self):
base_file = config.ConfigFile(
'base.yaml',
@@ -735,42 +698,6 @@ class ConfigTest(unittest.TestCase):
]
self.assertEqual(service_sort(service_dicts), service_sort(expected))
def test_load_mixed_extends_resolution(self):
main_file = config.ConfigFile(
'main.yml', {
'version': '2.2',
'services': {
'prodweb': {
'extends': {
'service': 'web',
'file': 'base.yml'
},
'environment': {'PROD': 'true'},
},
},
}
)
tmpdir = pytest.ensuretemp('config_test')
self.addCleanup(tmpdir.remove)
tmpdir.join('base.yml').write("""
version: '2.2'
services:
base:
image: base
web:
extends: base
""")
details = config.ConfigDetails('.', [main_file])
with tmpdir.as_cwd():
service_dicts = config.load(details).services
assert service_dicts[0] == {
'name': 'prodweb',
'image': 'base',
'environment': {'PROD': 'true'},
}
def test_load_with_multiple_files_and_invalid_override(self):
base_file = config.ConfigFile(
'base.yaml',
@@ -806,18 +733,6 @@ class ConfigTest(unittest.TestCase):
assert services[1]['name'] == 'db'
assert services[2]['name'] == 'web'
def test_load_with_extensions(self):
config_details = build_config_details({
'version': '2.3',
'x-data': {
'lambda': 3,
'excess': [True, {}]
}
})
config_data = config.load(config_details)
assert config_data.services == []
def test_config_build_configuration(self):
service = config.load(
build_config_details(
@@ -911,11 +826,11 @@ class ConfigTest(unittest.TestCase):
assert service['build']['args']['opt1'] == '42'
assert service['build']['args']['opt2'] == 'foobar'
def test_load_build_labels_dict(self):
def test_load_with_build_labels(self):
service = config.load(
build_config_details(
{
'version': str(V3_3),
'version': V3_3,
'services': {
'web': {
'build': {
@@ -938,28 +853,6 @@ class ConfigTest(unittest.TestCase):
assert service['build']['labels']['label1'] == 42
assert service['build']['labels']['label2'] == 'foobar'
def test_load_build_labels_list(self):
base_file = config.ConfigFile(
'base.yml',
{
'version': '2.3',
'services': {
'web': {
'build': {
'context': '.',
'labels': ['foo=bar', 'baz=true', 'foobar=1']
},
},
},
}
)
details = config.ConfigDetails('.', [base_file])
service = config.load(details).services[0]
assert service['build']['labels'] == {
'foo': 'bar', 'baz': 'true', 'foobar': '1'
}
def test_build_args_allow_empty_properties(self):
service = config.load(
build_config_details(
@@ -1142,38 +1035,6 @@ class ConfigTest(unittest.TestCase):
['/anonymous', '/c:/b:rw', 'vol:/x:ro']
)
@mock.patch.dict(os.environ)
def test_volume_mode_override(self):
os.environ['COMPOSE_CONVERT_WINDOWS_PATHS'] = 'true'
base_file = config.ConfigFile(
'base.yaml',
{
'version': '2.3',
'services': {
'web': {
'image': 'example/web',
'volumes': ['/c:/b:rw']
}
},
}
)
override_file = config.ConfigFile(
'override.yaml',
{
'version': '2.3',
'services': {
'web': {
'volumes': ['/c:/b:ro']
}
}
}
)
details = config.ConfigDetails('.', [base_file, override_file])
service_dicts = config.load(details).services
svc_volumes = list(map(lambda v: v.repr(), service_dicts[0]['volumes']))
assert svc_volumes == ['/c:/b:ro']
def test_undeclared_volume_v2(self):
base_file = config.ConfigFile(
'base.yaml',
@@ -1662,7 +1523,7 @@ class ConfigTest(unittest.TestCase):
def test_isolation_option(self):
actual = config.load(build_config_details({
'version': str(V2_1),
'version': V2_1,
'services': {
'web': {
'image': 'win10',
@@ -1754,22 +1615,6 @@ class ConfigTest(unittest.TestCase):
'ports': types.ServicePort.parse('5432')
}
def test_merge_service_dicts_ports_sorting(self):
base = {
'ports': [5432]
}
override = {
'image': 'alpine:edge',
'ports': ['5432/udp']
}
actual = config.merge_service_dicts_from_files(
base,
override,
DEFAULT_VERSION)
assert len(actual['ports']) == 2
assert types.ServicePort.parse('5432')[0] in actual['ports']
assert types.ServicePort.parse('5432/udp')[0] in actual['ports']
def test_merge_service_dicts_heterogeneous_volumes(self):
base = {
'volumes': ['/a:/b', '/x:/z'],
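The `test_merge_service_dicts_ports_sorting` test in the hunk above, together with the "ServicePort merge_field should account for external IP and protocol" commit in this compare, pins down the merge key: ports that differ in protocol (or bound external IP) must survive as separate entries. A simplified sketch of that keying (not Compose's actual `ServicePort` type):

```python
def merge_ports(base, override):
    # Key each entry on (published port, protocol) so '5432' and
    # '5432/udp' stay distinct; an override with the same key wins.
    merged = {}
    for port in base + override:
        published, _, proto = str(port).partition('/')
        merged[(published, proto or 'tcp')] = port
    return list(merged.values())

assert len(merge_ports([5432], ['5432/udp'])) == 2
assert merge_ports(['5432'], ['5432']) == ['5432']
```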
@@ -1963,6 +1808,7 @@ class ConfigTest(unittest.TestCase):
'image': 'alpine:edge',
'logging': {
'driver': 'syslog',
'options': None
}
}
@@ -2236,153 +2082,6 @@ class ConfigTest(unittest.TestCase):
actual = config.merge_service_dicts(base, override, V3_3)
assert actual['credential_spec'] == override['credential_spec']
def test_merge_scale(self):
base = {
'image': 'bar',
'scale': 2,
}
override = {
'scale': 4,
}
actual = config.merge_service_dicts(base, override, V2_2)
assert actual == {'image': 'bar', 'scale': 4}
def test_merge_blkio_config(self):
base = {
'image': 'bar',
'blkio_config': {
'weight': 300,
'weight_device': [
{'path': '/dev/sda1', 'weight': 200}
],
'device_read_iops': [
{'path': '/dev/sda1', 'rate': 300}
],
'device_write_iops': [
{'path': '/dev/sda1', 'rate': 1000}
]
}
}
override = {
'blkio_config': {
'weight': 450,
'weight_device': [
{'path': '/dev/sda2', 'weight': 400}
],
'device_read_iops': [
{'path': '/dev/sda1', 'rate': 2000}
],
'device_read_bps': [
{'path': '/dev/sda1', 'rate': 1024}
]
}
}
actual = config.merge_service_dicts(base, override, V2_2)
assert actual == {
'image': 'bar',
'blkio_config': {
'weight': override['blkio_config']['weight'],
'weight_device': (
base['blkio_config']['weight_device'] +
override['blkio_config']['weight_device']
),
'device_read_iops': override['blkio_config']['device_read_iops'],
'device_read_bps': override['blkio_config']['device_read_bps'],
'device_write_iops': base['blkio_config']['device_write_iops']
}
}
def test_merge_extra_hosts(self):
base = {
'image': 'bar',
'extra_hosts': {
'foo': '1.2.3.4',
}
}
override = {
'extra_hosts': ['bar:5.6.7.8', 'foo:127.0.0.1']
}
actual = config.merge_service_dicts(base, override, V2_0)
assert actual['extra_hosts'] == {
'foo': '127.0.0.1',
'bar': '5.6.7.8',
}
def test_merge_healthcheck_config(self):
base = {
'image': 'bar',
'healthcheck': {
'start_period': 1000,
'interval': 3000,
'test': ['true']
}
}
override = {
'healthcheck': {
'interval': 5000,
'timeout': 10000,
'test': ['echo', 'OK'],
}
}
actual = config.merge_service_dicts(base, override, V2_3)
assert actual['healthcheck'] == {
'start_period': base['healthcheck']['start_period'],
'test': override['healthcheck']['test'],
'interval': override['healthcheck']['interval'],
'timeout': override['healthcheck']['timeout'],
}
def test_merge_healthcheck_override_disables(self):
base = {
'image': 'bar',
'healthcheck': {
'start_period': 1000,
'interval': 3000,
'timeout': 2000,
'retries': 3,
'test': ['true']
}
}
override = {
'healthcheck': {
'disabled': True
}
}
actual = config.merge_service_dicts(base, override, V2_3)
assert actual['healthcheck'] == {'disabled': True}
def test_merge_healthcheck_override_enables(self):
base = {
'image': 'bar',
'healthcheck': {
'disabled': True
}
}
override = {
'healthcheck': {
'disabled': False,
'start_period': 1000,
'interval': 3000,
'timeout': 2000,
'retries': 3,
'test': ['true']
}
}
actual = config.merge_service_dicts(base, override, V2_3)
assert actual['healthcheck'] == override['healthcheck']
def test_external_volume_config(self):
config_details = build_config_details({
'version': '2',
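The healthcheck merge tests in the hunk above encode a two-part rule: fields merge with the override taking precedence, except that flipping `disabled` makes the override replace the base wholesale. A hedged sketch of that behaviour (simplified from Compose's real merge logic):

```python
def merge_healthcheck(base, override):
    # Flipping the `disabled` flag replaces the config wholesale;
    # otherwise merge field by field, the override winning on conflicts.
    if base.get('disabled') != override.get('disabled'):
        return dict(override)
    merged = dict(base)
    merged.update(override)
    return merged

base = {'start_period': 1000, 'interval': 3000, 'test': ['true']}
override = {'interval': 5000, 'timeout': 10000, 'test': ['echo', 'OK']}
merged = merge_healthcheck(base, override)
assert merged['start_period'] == 1000 and merged['interval'] == 5000

assert merge_healthcheck(base, {'disabled': True}) == {'disabled': True}
```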
@@ -2838,12 +2537,11 @@ class PortsTest(unittest.TestCase):
def check_config(self, cfg):
config.load(
build_config_details({
'version': '2.3',
'services': {
'web': dict(image='busybox', **cfg)
},
}, 'working_dir', 'filename.yml')
build_config_details(
{'web': dict(image='busybox', **cfg)},
'working_dir',
'filename.yml'
)
)
@@ -4091,7 +3789,7 @@ class VolumePathTest(unittest.TestCase):
def test_split_path_mapping_with_windows_path(self):
host_path = "c:\\Users\\msamblanet\\Documents\\anvil\\connect\\config"
windows_volume_path = host_path + ":/opt/connect/config:ro"
expected_mapping = ("/opt/connect/config", (host_path, 'ro'))
expected_mapping = ("/opt/connect/config:ro", host_path)
mapping = config.split_path_mapping(windows_volume_path)
assert mapping == expected_mapping
@@ -4099,7 +3797,7 @@ class VolumePathTest(unittest.TestCase):
def test_split_path_mapping_with_windows_path_in_container(self):
host_path = 'c:\\Users\\remilia\\data'
container_path = 'c:\\scarletdevil\\data'
expected_mapping = (container_path, (host_path, None))
expected_mapping = (container_path, host_path)
mapping = config.split_path_mapping('{0}:{1}'.format(host_path, container_path))
assert mapping == expected_mapping
@@ -4107,7 +3805,7 @@ class VolumePathTest(unittest.TestCase):
def test_split_path_mapping_with_root_mount(self):
host_path = '/'
container_path = '/var/hostroot'
expected_mapping = (container_path, (host_path, None))
expected_mapping = (container_path, host_path)
mapping = config.split_path_mapping('{0}:{1}'.format(host_path, container_path))
assert mapping == expected_mapping
@@ -4195,7 +3893,6 @@ class HealthcheckTest(unittest.TestCase):
'interval': '1s',
'timeout': '1m',
'retries': 3,
'start_period': '10s'
}},
'.',
)
@@ -4205,7 +3902,6 @@ class HealthcheckTest(unittest.TestCase):
'interval': nanoseconds_from_time_seconds(1),
'timeout': nanoseconds_from_time_seconds(60),
'retries': 3,
'start_period': nanoseconds_from_time_seconds(10)
}
def test_disable(self):
@@ -4336,17 +4032,15 @@ class SerializeTest(unittest.TestCase):
'test': 'exit 1',
'interval': '1m40s',
'timeout': '30s',
'retries': 5,
'start_period': '2s90ms'
'retries': 5
}
}
processed_service = config.process_service(config.ServiceConfig(
'.', 'test', 'test', service_dict
))
denormalized_service = denormalize_service_dict(processed_service, V2_3)
denormalized_service = denormalize_service_dict(processed_service, V2_1)
assert denormalized_service['healthcheck']['interval'] == '100s'
assert denormalized_service['healthcheck']['timeout'] == '30s'
assert denormalized_service['healthcheck']['start_period'] == '2090ms'
def test_denormalize_image_has_digest(self):
service_dict = {
@@ -4399,7 +4093,7 @@ class SerializeTest(unittest.TestCase):
assert serialized_config['secrets']['two'] == secrets_dict['two']
def test_serialize_ports(self):
config_dict = config.Config(version=V2_0, services=[
config_dict = config.Config(version='2.0', services=[
{
'ports': [types.ServicePort('80', '8080', None, None, None)],
'image': 'alpine',
@@ -4440,43 +4134,3 @@ class SerializeTest(unittest.TestCase):
assert secret_sort(serialized_service['configs']) == secret_sort(service_dict['configs'])
assert 'configs' in serialized_config
assert serialized_config['configs']['two'] == configs_dict['two']
def test_serialize_bool_string(self):
cfg = {
'version': '2.2',
'services': {
'web': {
'image': 'example/web',
'command': 'true',
'environment': {'FOO': 'Y', 'BAR': 'on'}
}
}
}
config_dict = config.load(build_config_details(cfg))
serialized_config = serialize_config(config_dict)
assert 'command: "true"\n' in serialized_config
assert 'FOO: "Y"\n' in serialized_config
assert 'BAR: "on"\n' in serialized_config
def test_serialize_escape_dollar_sign(self):
cfg = {
'version': '2.2',
'services': {
'web': {
'image': 'busybox',
'command': 'echo $$FOO',
'environment': {
'CURRENCY': '$$'
},
'entrypoint': ['$$SHELL', '-c'],
}
}
}
config_dict = config.load(build_config_details(cfg))
serialized_config = yaml.load(serialize_config(config_dict))
serialized_service = serialized_config['services']['web']
assert serialized_service['environment']['CURRENCY'] == '$$'
assert serialized_service['command'] == 'echo $$FOO'
assert serialized_service['entrypoint'][0] == '$$SHELL'


@@ -8,8 +8,6 @@ from compose.config.interpolation import interpolate_environment_variables
from compose.config.interpolation import Interpolator
from compose.config.interpolation import InvalidInterpolation
from compose.config.interpolation import TemplateWithDefaults
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import COMPOSEFILE_V3_1 as V3_1
@pytest.fixture
@@ -52,7 +50,7 @@ def test_interpolate_environment_variables_in_services(mock_env):
}
}
}
value = interpolate_environment_variables(V2_0, services, 'service', mock_env)
value = interpolate_environment_variables("2.0", services, 'service', mock_env)
assert value == expected
@@ -77,7 +75,7 @@ def test_interpolate_environment_variables_in_volumes(mock_env):
},
'other': {},
}
value = interpolate_environment_variables(V2_0, volumes, 'volume', mock_env)
value = interpolate_environment_variables("2.0", volumes, 'volume', mock_env)
assert value == expected
@@ -102,7 +100,7 @@ def test_interpolate_environment_variables_in_secrets(mock_env):
},
'other': {},
}
value = interpolate_environment_variables(V3_1, secrets, 'volume', mock_env)
value = interpolate_environment_variables("3.1", secrets, 'volume', mock_env)
assert value == expected
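The interpolation tests above feed a service mapping and a mock environment through `interpolate_environment_variables`. A minimal illustration of the idea, using the standard library's `string.Template` rather than compose's `TemplateWithDefaults` (which additionally supports `${VAR:-default}` syntax); the service names and values here are hypothetical:

```python
from string import Template

def interpolate_value(value, env):
    # Resolve $VAR / ${VAR} references in string values; leave others as-is.
    return Template(value).substitute(env) if isinstance(value, str) else value

mock_env = {'USER': 'jenny', 'FOO': 'bar'}
services = {'servicea': {'image': 'example/${USER}', 'volumes': ['$FOO:/target']}}

resolved = {
    name: {key: [interpolate_value(v, mock_env) for v in val]
           if isinstance(val, list) else interpolate_value(val, mock_env)
           for key, val in cfg.items()}
    for name, cfg in services.items()
}
assert resolved['servicea']['image'] == 'example/jenny'
```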


@@ -81,12 +81,6 @@ class TestServicePort(object):
'external_ip': '1.1.1.1',
}
def test_repr_published_port_0(self):
port_def = '0:4000'
ports = ServicePort.parse(port_def)
assert len(ports) == 1
assert ports[0].legacy_repr() == port_def + '/tcp'
def test_parse_port_range(self):
ports = ServicePort.parse('25000-25001:4000-4001')
assert len(ports) == 2
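`test_parse_port_range` above expects a published range to expand pairwise into one entry per port. A hypothetical mini-model of that expansion (the real `compose.config.types.ServicePort` also tracks protocol, external IP, and legacy representations):

```python
from collections import namedtuple

MiniPort = namedtuple('MiniPort', 'target published')

def parse_port_spec(spec):
    # "25000-25001:4000-4001" -> published range on the left, targets on the right.
    published, _, target = spec.rpartition(':')

    def expand(part):
        lo, _, hi = part.partition('-')
        return list(range(int(lo), int(hi or lo) + 1))

    targets = expand(target)
    published_list = expand(published) if published else [None] * len(targets)
    return [MiniPort(t, p) for t, p in zip(targets, published_list)]

ports = parse_port_spec('25000-25001:4000-4001')
assert len(ports) == 2 and ports[0] == MiniPort(4000, 25000)
```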


@@ -8,7 +8,6 @@ from docker.errors import APIError
from compose.parallel import parallel_execute
from compose.parallel import parallel_execute_iter
from compose.parallel import ParallelStreamWriter
from compose.parallel import UpstreamError
@@ -63,7 +62,7 @@ def test_parallel_execute_with_limit():
limit=limit,
)
assert results == tasks * [None]
assert results == tasks*[None]
assert errors == {}
@@ -116,48 +115,3 @@ def test_parallel_execute_with_upstream_errors():
assert (data_volume, None, APIError) in events
assert (db, None, UpstreamError) in events
assert (web, None, UpstreamError) in events
def test_parallel_execute_alignment(capsys):
results, errors = parallel_execute(
objects=["short", "a very long name"],
func=lambda x: x,
get_name=six.text_type,
msg="Aligning",
)
assert errors == {}
_, err = capsys.readouterr()
a, b = err.split('\n')[:2]
assert a.index('...') == b.index('...')
def test_parallel_execute_ansi(capsys):
ParallelStreamWriter.set_noansi(value=False)
results, errors = parallel_execute(
objects=["something", "something more"],
func=lambda x: x,
get_name=six.text_type,
msg="Control characters",
)
assert errors == {}
_, err = capsys.readouterr()
assert "\x1b" in err
def test_parallel_execute_noansi(capsys):
ParallelStreamWriter.set_noansi()
results, errors = parallel_execute(
objects=["something", "something more"],
func=lambda x: x,
get_name=six.text_type,
msg="Control characters",
)
assert errors == {}
_, err = capsys.readouterr()
assert "\x1b" not in err


@@ -10,8 +10,6 @@ from .. import mock
from .. import unittest
from compose.config.config import Config
from compose.config.types import VolumeFromSpec
from compose.const import COMPOSEFILE_V1 as V1
from compose.const import COMPOSEFILE_V2_0 as V2_0
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.project import Project
@@ -23,9 +21,9 @@ class ProjectTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
def test_from_config_v1(self):
def test_from_config(self):
config = Config(
version=V1,
version=None,
services=[
{
'name': 'web',
@@ -55,7 +53,7 @@ class ProjectTest(unittest.TestCase):
def test_from_config_v2(self):
config = Config(
version=V2_0,
version=2,
services=[
{
'name': 'web',
@@ -168,7 +166,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=None,
services=[{
'name': 'test',
'image': 'busybox:latest',
@@ -196,7 +194,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=None,
services=[
{
'name': 'vol',
@@ -223,7 +221,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=None,
config_data=Config(
version=V2_0,
version=None,
services=[
{
'name': 'vol',
@@ -363,7 +361,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V1,
version=None,
services=[
{
'name': 'test',
@@ -388,7 +386,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=None,
services=[
{
'name': 'test',
@@ -419,7 +417,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=None,
services=[
{
'name': 'aaa',
@@ -446,7 +444,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=2,
services=[
{
'name': 'foo',
@@ -467,7 +465,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=2,
services=[
{
'name': 'foo',
@@ -502,7 +500,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version=None,
services=[{
'name': 'web',
'image': 'busybox:latest',
@@ -520,7 +518,7 @@ class ProjectTest(unittest.TestCase):
name='test',
client=self.mock_client,
config_data=Config(
version=V2_0,
version='2',
services=[{
'name': 'web',
'image': 'busybox:latest',


@@ -9,14 +9,12 @@ from .. import mock
from .. import unittest
from compose.config.errors import DependencyError
from compose.config.types import ServicePort
from compose.config.types import ServiceSecret
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.const import SECRETS_PATH
from compose.container import Container
from compose.project import OneOffFilter
from compose.service import build_ulimits
@@ -475,9 +473,6 @@ class ServiceTest(unittest.TestCase):
buildargs={},
labels=None,
cache_from=None,
network_mode=None,
target=None,
shmsize=None,
)
def test_ensure_image_exists_no_build(self):
@@ -516,9 +511,6 @@ class ServiceTest(unittest.TestCase):
buildargs={},
labels=None,
cache_from=None,
network_mode=None,
target=None,
shmsize=None
)
def test_build_does_not_pull(self):
@@ -1091,56 +1083,3 @@ class ServiceVolumesTest(unittest.TestCase):
self.assertEqual(
self.mock_client.create_host_config.call_args[1]['binds'],
[volume])
class ServiceSecretTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.APIClient)
def test_get_secret_volumes(self):
secret1 = {
'secret': ServiceSecret.parse({'source': 'secret1', 'target': 'b.txt'}),
'file': 'a.txt'
}
service = Service(
'web',
client=self.mock_client,
image='busybox',
secrets=[secret1]
)
volumes = service.get_secret_volumes()
assert volumes[0].external == secret1['file']
assert volumes[0].internal == '{}/{}'.format(SECRETS_PATH, secret1['secret'].target)
def test_get_secret_volumes_abspath(self):
secret1 = {
'secret': ServiceSecret.parse({'source': 'secret1', 'target': '/d.txt'}),
'file': 'c.txt'
}
service = Service(
'web',
client=self.mock_client,
image='busybox',
secrets=[secret1]
)
volumes = service.get_secret_volumes()
assert volumes[0].external == secret1['file']
assert volumes[0].internal == secret1['secret'].target
def test_get_secret_volumes_no_target(self):
secret1 = {
'secret': ServiceSecret.parse({'source': 'secret1'}),
'file': 'c.txt'
}
service = Service(
'web',
client=self.mock_client,
image='busybox',
secrets=[secret1]
)
volumes = service.get_secret_volumes()
assert volumes[0].external == secret1['file']
assert volumes[0].internal == '{}/{}'.format(SECRETS_PATH, secret1['secret'].source)


@@ -60,11 +60,3 @@ class TestJsonStream(object):
{'three': 'four'},
{'x': 2}
]
class TestParseBytes(object):
def test_parse_bytes(self):
assert utils.parse_bytes('123kb') == 123 * 1024
assert utils.parse_bytes(123) == 123
assert utils.parse_bytes('foobar') is None
assert utils.parse_bytes('123') == 123
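The removed `TestParseBytes` above pins down the helper's contract: ints pass through, digit strings are converted, unit suffixes use binary multipliers, and unparseable input yields `None`. A hedged, self-contained sketch consistent with those assertions (the real implementation lives in `compose.utils`):

```python
def parse_bytes(value):
    # Binary multipliers, matching the '123kb' == 123 * 1024 assertion above.
    units = {'b': 1, 'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3}
    if isinstance(value, int):
        return value
    s = str(value).strip().lower()
    if s.isdigit():
        return int(s)
    # Try the longest suffixes first so 'kb' wins over the bare 'b'.
    for suffix in sorted(units, key=len, reverse=True):
        if s.endswith(suffix) and s[:-len(suffix)].isdigit():
            return int(s[:-len(suffix)]) * units[suffix]
    return None

assert parse_bytes('123kb') == 123 * 1024
```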


@@ -21,6 +21,6 @@ class TestVolume(object):
mock_client.remove_volume.assert_called_once_with('foo_project')
def test_remove_external_volume(self, mock_client):
vol = volume.Volume(mock_client, 'foo', 'project', external=True)
vol = volume.Volume(mock_client, 'foo', 'project', external_name='data')
vol.remove()
assert not mock_client.remove_volume.called


@@ -9,8 +9,6 @@ passenv =
DOCKER_CERT_PATH
DOCKER_TLS_VERIFY
DOCKER_VERSION
SWARM_SKIP_*
SWARM_ASSUME_MULTINODE
setenv =
HOME=/tmp
deps =
@@ -18,7 +16,6 @@ deps =
-rrequirements-dev.txt
commands =
py.test -v \
--full-trace \
--cov=compose \
--cov-report html \
--cov-report term \