Compare commits


24 Commits

Author SHA1 Message Date
aiordache
67630359cf "Bump 1.28.2"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:23:53 +01:00
aiordache
c99c1556aa Add cgroup1 label to Release.Jenkinsfile
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
aiordache
0e529bf29b "Bump 1.28.1"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
Harald Albers
27d039d39a Fix formatting of help output for up|logs --no-log-prefix
Signed-off-by: Harald Albers <github@albersweb.de>
2021-01-26 20:15:15 +01:00
Harald Albers
ad1baff1b3 Add bash completion for logs|up --no-log-prefix
This adds bash completion for https://github.com/docker/compose/pull/7435

Signed-off-by: Harald Albers <github@albersweb.de>
2021-01-26 20:15:15 +01:00
Chris Crone
59e9ebe428 build.linux: Revert to Python 3.7
This allows us to revert from Debian Buster to Stretch which allows
us to relax the glibc version requirements.

Signed-off-by: Chris Crone <christopher.crone@docker.com>
2021-01-26 20:15:15 +01:00
aiordache
90373e9e63 "Bump 1.28.0"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
Ulysses Souza
786822e921 Update compose-spec
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:15 +01:00
Ulysses Souza
95c6adeecf Remove restriction on docker version
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:15 +01:00
Mike Seplowitz
b6ddddc31a Improve control over ANSI output (#6858)
* Move global console_handler into function scope

Signed-off-by: Mike Seplowitz <mseplowitz@bloomberg.net>

* Improve control over ANSI output

- Disabled parallel logger ANSI output if not attached to a tty.
  The console handler and progress stream already checked whether the
  output stream is a tty, but ParallelStreamWriter did not.

- Added --ansi=(never|always|auto) option to allow clearer control over
  ANSI output. Since --no-ansi is the same as --ansi=never, --no-ansi is
  now deprecated.

Signed-off-by: Mike Seplowitz <mseplowitz@bloomberg.net>
2021-01-26 20:15:15 +01:00
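
For context on the commit above, here is a minimal, illustrative Python sketch (not the actual compose implementation; the names are assumed) of how an --ansi=(never|always|auto) setting typically resolves to an on/off decision, with "auto" falling back to the tty check that the parallel logger had been missing:

```python
import enum
import sys


class AnsiMode(enum.Enum):
    NEVER = "never"
    ALWAYS = "always"
    AUTO = "auto"


def use_ansi(mode, stream=sys.stdout):
    """Decide whether ANSI control sequences should be written to `stream`."""
    if mode is AnsiMode.ALWAYS:
        return True
    if mode is AnsiMode.NEVER:
        return False
    # "auto": only emit ANSI output when attached to a real terminal,
    # e.g. not when output is piped or redirected to a file.
    return stream.isatty()
```
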
aiordache
e1fb1e9a3a "Bump 1.28.0-rc3"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:15 +01:00
Mark Gallagher
c27c73efae Remove duplicate values check for build.cache_from
The `docker` command accepts duplicate values, so there is no benefit to
performing this check.

Fixes #7342.

Signed-off-by: Mark Gallagher <mark@fts.scot>
2021-01-26 20:15:15 +01:00
Sebastiaan van Stijn
a5863de31a Make COMPOSE_DOCKER_CLI_BUILD=1 the default
This changes compose to use "native" build through the CLI
by default. With this, docker-compose can take advantage of
BuildKit (which is now enabled by default on Docker Desktop
2.5 and up).

Users that want to use the python client for building can
opt-out of this feature by setting COMPOSE_DOCKER_CLI_BUILD=0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-01-26 20:15:14 +01:00
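
As a rough illustration of the opt-out described above (the helper name is assumed, not taken from the compose source), the behaviour amounts to reading COMPOSE_DOCKER_CLI_BUILD with a default of "1":

```python
import os


def use_cli_build(environment=None):
    """Native (docker CLI / BuildKit) builds are the default; setting
    COMPOSE_DOCKER_CLI_BUILD=0 opts back into the python-client build."""
    environment = os.environ if environment is None else environment
    return environment.get("COMPOSE_DOCKER_CLI_BUILD", "1") != "0"
```
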
guillaume.tardif
97056552dc Support windows npipe, set content type & correct URL /usage. Also fixed socket name for desktop mac
Signed-off-by: guillaume.tardif <guillaume.tardif@gmail.com>
2021-01-26 20:15:14 +01:00
Ulysses Souza
318741ca5e Add metrics
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:14 +01:00
aiordache
aa8b7bb392 "Bump 1.28.0-rc2"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
Daniil Sigalov
a8ffcfaefb Only attach services we'll read logs from in up
When 'up' is run with explicit list of services, compose will
start them together with their dependencies. It will attach to all
started services, but won't read output from dependencies (their
logs are not printed by 'up') - so the receive buffer of
dependencies will fill and at some point will start blocking those
services. Fix that by only attaching to services given in the
list.
To do that, move logic of choosing which services to attach from
cli/main.py to utils.py and use it from project.py to decide if
service should be attached.

Fixes #6018

Signed-off-by: Daniil Sigalov <asterite@seclab.cs.msu.ru>
2021-01-26 20:15:14 +01:00
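
A minimal sketch of the fix described above, with illustrative names (the actual helper moved from cli/main.py to utils.py may differ): when an explicit service list is given, only containers of the listed services are attached, so dependency output never piles up in an unread buffer:

```python
def filter_attached_containers(containers, service_names, attach_dependencies=False):
    """Return only the containers whose logs `up` will actually read."""
    if attach_dependencies or not service_names:
        # Plain `up` (or --attach-dependencies) still attaches to everything.
        return containers
    return [c for c in containers if c.service in service_names]
```
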
Ulysses Souza
97e009a8cb Avoid setting unsupported parameter for subprocess.Popen on Windows
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:14 +01:00
aiordache
186e3913f0 "Bump 1.28.0-rc1"
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
dependabot-preview[bot]
7bc945654f Bump virtualenv from 20.0.30 to 20.2.2
Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
(cherry picked from commit 8785279ffd)
Signed-off-by: Ulysses Souza <ulyssessouza@gmail.com>
2021-01-26 20:15:14 +01:00
dependabot-preview[bot]
cc299f5cd5 Bump bcrypt from 3.1.7 to 3.2.0
Bumps [bcrypt](https://github.com/pyca/bcrypt) from 3.1.7 to 3.2.0.
- [Release notes](https://github.com/pyca/bcrypt/releases)
- [Changelog](https://github.com/pyca/bcrypt/blob/master/release.py)
- [Commits](https://github.com/pyca/bcrypt/compare/3.1.7...3.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2021-01-26 20:15:14 +01:00
Anca Iordache
536bea0859 Revert "Bump virtualenv from 20.0.30 to 20.2.1" (#7975)
This reverts commit 8785279ffd.

Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
Anca Iordache
db7b666e40 Revert "Bump gitpython from 3.1.7 to 3.1.11" (#7974)
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
aiordache
945123145f Bump docker-py in setup.py
Signed-off-by: aiordache <anca.iordache@docker.com>
2021-01-26 20:15:14 +01:00
37 changed files with 185 additions and 509 deletions

View File

@@ -1,122 +1,6 @@
Change log
==========
1.29.2 (2021-05-10)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/59?closed=1)
### Miscellaneous
- Remove advertisement for `docker compose` in the `up` command to avoid annoyance
- Bump `py` to `1.10.0` in `requirements-indirect.txt`
1.29.1 (2021-04-13)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/58?closed=1)
### Bugs
- Fix for invalid handler warning on Windows builds
- Fix config hash to trigger container recreation on IPC mode updates
- Fix conversion map for `placement.max_replicas_per_node`
- Remove extra scan suggestion on build
1.29.0 (2021-04-06)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/56?closed=1)
### Features
- Add profile filter to `docker-compose config`
- Add a `depends_on` condition to wait for successful service completion
### Miscellaneous
- Add image scan message on build
- Update warning message for `--no-ansi` to mention `--ansi never` as alternative
- Bump docker-py to 5.0.0
- Bump PyYAML to 5.4.1
- Bump python-dotenv to 0.17.0
1.28.6 (2021-03-23)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/57?closed=1)
### Bugs
- Make `--env-file` relative to the current working directory and error out for invalid paths. Environment file paths set with `--env-file` are relative to the current working directory while the default `.env` file is located in the project directory which by default is the base directory of the Compose file.
- Fix missing service property `storage_opt` by updating the compose schema
- Fix build `extra_hosts` list format
- Remove extra error message on `exec`
### Miscellaneous
- Add `compose.yml` and `compose.yaml` to default filename list
1.28.5 (2021-02-25)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/55?closed=1)
### Bugs
- Fix OpenSSL version mismatch error when shelling out to the ssh client (via bump to docker-py 4.4.4 which contains the fix)
- Add missing build flags to the native builder: `platform`, `isolation` and `extra_hosts`
- Remove info message on native build
- Avoid fetching logs when service logging driver is set to 'none'
1.28.4 (2021-02-18)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/54?closed=1)
### Bugs
- Fix SSH port parsing by bumping docker-py to 4.4.3
### Miscellaneous
- Bump Python to 3.7.10
1.28.3 (2021-02-17)
-------------------
[List of PRs / issues for this release](https://github.com/docker/compose/milestone/53?closed=1)
### Bugs
- Fix SSH hostname parsing when it contains leading s/h, and remove the quiet option that was hiding the error (via docker-py bump to 4.4.2)
- Fix key error for '--no-log-prefix' option
- Fix incorrect CLI environment variable name for service profiles: `COMPOSE_PROFILES` instead of `COMPOSE_PROFILE`
- Fix fish completion
### Miscellaneous
- Bump cryptography to 3.3.2
- Remove log driver filter
1.28.2 (2021-01-26)
-------------------
@@ -197,7 +81,6 @@ Change log
- Updates of READMEs
1.27.4 (2020-09-24)
-------------------

View File

@@ -1,5 +1,5 @@
ARG DOCKER_VERSION=19.03
ARG PYTHON_VERSION=3.7.10
ARG PYTHON_VERSION=3.7.9
ARG BUILD_ALPINE_VERSION=3.12
ARG BUILD_CENTOS_VERSION=7

12
Jenkinsfile vendored
View File

@@ -23,7 +23,7 @@ pipeline {
parallel {
stage('alpine') {
agent {
label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
label 'ubuntu && amd64 && !zfs'
}
steps {
buildImage('alpine')
@@ -31,7 +31,7 @@ pipeline {
}
stage('debian') {
agent {
label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
label 'ubuntu && amd64 && !zfs'
}
steps {
buildImage('debian')
@@ -62,7 +62,7 @@ pipeline {
def buildImage(baseImage) {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
image = docker.image(imageName)
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -87,9 +87,9 @@ def buildImage(baseImage) {
def runTests(dockerVersion, pythonVersion, baseImage) {
return {
stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
node("ubuntu-2004 && amd64 && !zfs && cgroup1") {
node("ubuntu && amd64 && !zfs") {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -99,8 +99,6 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
--volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-e "DOCKER_TLS_CERTDIR=" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersion}" \\

View File

@@ -23,7 +23,7 @@ pipeline {
parallel {
stage('alpine') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
buildImage('alpine')
@@ -31,7 +31,7 @@ pipeline {
}
stage('debian') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
buildImage('debian')
@@ -41,7 +41,7 @@ pipeline {
}
stage('Test') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
// TODO use declarative 1.5.0 `matrix` once available on CI
@@ -61,7 +61,7 @@ pipeline {
}
stage('Generate Changelog') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
checkout scm
@@ -98,7 +98,7 @@ pipeline {
}
stage('linux binary') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
checkout scm
@@ -134,7 +134,7 @@ pipeline {
}
stage('alpine image') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
buildRuntimeImage('alpine')
@@ -142,7 +142,7 @@ pipeline {
}
stage('debian image') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
buildRuntimeImage('debian')
@@ -157,7 +157,7 @@ pipeline {
parallel {
stage('Pushing images') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
steps {
pushRuntimeImage('alpine')
@@ -166,7 +166,7 @@ pipeline {
}
stage('Creating Github Release') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
environment {
GITHUB_TOKEN = credentials('github-release-token')
@@ -198,7 +198,7 @@ pipeline {
}
stage('Publishing Python packages') {
agent {
label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
label 'linux && docker && ubuntu-2004 && cgroup1'
}
environment {
PYPIRC = credentials('pypirc-docker-dsg-cibot')
@@ -222,7 +222,7 @@ pipeline {
def buildImage(baseImage) {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
image = docker.image(imageName)
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -247,9 +247,9 @@ def buildImage(baseImage) {
def runTests(dockerVersion, pythonVersion, baseImage) {
return {
stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
node("linux && docker && ubuntu-2004 && amd64 && cgroup1") {
node("linux && docker && ubuntu-2004 && cgroup1") {
def scmvar = checkout(scm)
def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
echo "Using local system's storage driver: ${storageDriver}"
withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -259,8 +259,6 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
--privileged \\
--volume="\$(pwd)/.git:/code/.git" \\
--volume="/var/run/docker.sock:/var/run/docker.sock" \\
--volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-e "DOCKER_TLS_CERTDIR=" \\
-e "TAG=${imageName}" \\
-e "STORAGE_DRIVER=${storageDriver}" \\
-e "DOCKER_VERSIONS=${dockerVersion}" \\

View File

@@ -1 +1 @@
__version__ = '1.29.2'
__version__ = '1.28.2'

View File

@@ -129,7 +129,7 @@ def get_profiles_from_options(options, environment):
if profile_option:
return profile_option
profiles = environment.get('COMPOSE_PROFILES')
profiles = environment.get('COMPOSE_PROFILE')
if profiles:
return profiles.split(',')

View File

@@ -158,8 +158,10 @@ class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')):
def tail_container_logs(container, presenter, queue, log_args):
generator = get_log_generator(container)
try:
for item in build_log_generator(container, log_args):
for item in generator(container, log_args):
queue.put(QueueItem.new(presenter.present(container, item)))
except Exception as e:
queue.put(QueueItem.exception(e))
@@ -169,6 +171,20 @@ def tail_container_logs(container, presenter, queue, log_args):
queue.put(QueueItem.stop(container.name))
def get_log_generator(container):
if container.has_api_logs:
return build_log_generator
return build_no_log_generator
def build_no_log_generator(container, log_args):
"""Return a generator that prints a warning about logs and waits for
container to exit.
"""
yield "WARNING: no logs are available with the '{}' log driver\n".format(
container.log_driver)
def build_log_generator(container, log_args):
# if the container doesn't have a log_stream we need to attach to container
# before log printer starts running

View File

@@ -23,7 +23,6 @@ from ..config import resolve_build_args
from ..config.environment import Environment
from ..config.serialize import serialize_config
from ..config.types import VolumeSpec
from ..const import IS_LINUX_PLATFORM
from ..const import IS_WINDOWS_PLATFORM
from ..errors import StreamParseError
from ..metrics.decorator import metrics
@@ -79,10 +78,8 @@ def main(): # noqa: C901
try:
command_func = dispatch()
command_func()
if not IS_LINUX_PLATFORM and command == 'help':
print("\nDocker Compose is now in the Docker CLI, try `docker compose` help")
except (KeyboardInterrupt, signals.ShutdownException):
exit_with_metrics(command, "Aborting.", status=Status.CANCELED)
exit_with_metrics(command, "Aborting.", status=Status.FAILURE)
except (UserError, NoSuchService, ConfigurationError,
ProjectError, OperationFailedError) as e:
exit_with_metrics(command, e.msg, status=Status.FAILURE)
@@ -101,10 +98,7 @@ def main(): # noqa: C901
e.service.name), status=Status.FAILURE)
except NoSuchCommand as e:
commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
if not IS_LINUX_PLATFORM:
commands += "\n\nDocker Compose is now in the Docker CLI, try `docker compose`"
exit_with_metrics("", log_msg="No such command: {}\n\n{}".format(
e.command, commands), status=Status.FAILURE)
exit_with_metrics(e.command, "No such command: {}\n\n{}".format(e.command, commands))
except (errors.ConnectionError, StreamParseError):
exit_with_metrics(command, status=Status.FAILURE)
except SystemExit as e:
@@ -122,10 +116,6 @@ def main(): # noqa: C901
code = 0
if isinstance(e.code, int):
code = e.code
if not IS_LINUX_PLATFORM and not command:
msg += "\n\nDocker Compose is now in the Docker CLI, try `docker compose`"
exit_with_metrics(command, log_msg=msg, status=status,
exit_code=code)
@@ -138,7 +128,7 @@ def get_filtered_args(args):
def exit_with_metrics(command, log_msg=None, status=Status.SUCCESS, exit_code=1):
if log_msg and command != 'exec':
if log_msg:
if not exit_code:
log.info(log_msg)
else:
@@ -172,8 +162,7 @@ def dispatch():
if options.get("--no-ansi"):
if options.get("--ansi"):
raise UserError("--no-ansi and --ansi cannot be combined.")
log.warning('--no-ansi option is deprecated and will be removed in future versions. '
'Use `--ansi never` instead.')
log.warning('--no-ansi option is deprecated and will be removed in future versions.')
ansi_mode = AnsiMode.NEVER
setup_console_handler(console_handler,
@@ -392,7 +381,6 @@ class TopLevelCommand:
--no-interpolate Don't interpolate environment variables.
-q, --quiet Only validate the configuration, don't print
anything.
--profiles Print the profile names, one per line.
--services Print the service names, one per line.
--volumes Print the volume names, one per line.
--hash="*" Print the service config hash, one per line.
@@ -412,15 +400,6 @@ class TopLevelCommand:
if options['--quiet']:
return
if options['--profiles']:
profiles = set()
for service in compose_config.services:
if 'profiles' in service:
for profile in service['profiles']:
profiles.add(profile)
print('\n'.join(sorted(profiles)))
return
if options['--services']:
print('\n'.join(service['name'] for service in compose_config.services))
return
@@ -1142,7 +1121,7 @@ class TopLevelCommand:
detached = options.get('--detach')
no_start = options.get('--no-start')
attach_dependencies = options.get('--attach-dependencies')
keep_prefix = not options.get('--no-log-prefix')
keep_prefix = not options['--no-log-prefix']
if detached and (cascade_stop or exit_value_from or attach_dependencies):
raise UserError(
@@ -1503,7 +1482,7 @@ def log_printer_from_project(
keep_prefix=True,
):
return LogPrinter(
[c for c in containers if c.log_driver not in (None, 'none')],
containers,
build_log_presenters(project.service_names, monochrome, keep_prefix),
event_stream or project.events(),
cascade_stop=cascade_stop,

View File

@@ -188,7 +188,7 @@
"properties": {
"condition": {
"type": "string",
"enum": ["service_started", "service_healthy", "service_completed_successfully"]
"enum": ["service_started", "service_healthy"]
}
},
"required": ["condition"]
@@ -335,6 +335,7 @@
"read_only": {"type": "boolean"},
"restart": {"type": "string"},
"runtime": {
"deprecated": true,
"type": "string"
},
"scale": {
@@ -366,7 +367,6 @@
"stdin_open": {"type": "boolean"},
"stop_grace_period": {"type": "string", "format": "duration"},
"stop_signal": {"type": "string"},
"storage_opt": {"type": "object"},
"tmpfs": {"$ref": "#/definitions/string_or_list"},
"tty": {"type": "boolean"},
"ulimits": {

View File

@@ -10,11 +10,7 @@ from operator import attrgetter
from operator import itemgetter
import yaml
try:
from functools import cached_property
except ImportError:
from cached_property import cached_property
from cached_property import cached_property
from . import types
from ..const import COMPOSE_SPEC as VERSION
@@ -153,14 +149,9 @@ DOCKER_VALID_URL_PREFIXES = (
SUPPORTED_FILENAMES = [
'docker-compose.yml',
'docker-compose.yaml',
'compose.yml',
'compose.yaml',
]
DEFAULT_OVERRIDE_FILENAMES = ('docker-compose.override.yml',
'docker-compose.override.yaml',
'compose.override.yml',
'compose.override.yaml')
DEFAULT_OVERRIDE_FILENAMES = ('docker-compose.override.yml', 'docker-compose.override.yaml')
log = logging.getLogger(__name__)
@@ -313,16 +304,7 @@ def find(base_dir, filenames, environment, override_dir=None):
if filenames:
filenames = [os.path.join(base_dir, f) for f in filenames]
else:
# search for compose files in the base dir and its parents
filenames = get_default_config_files(base_dir)
if not filenames and not override_dir:
# none found in base_dir and no override_dir defined
raise ComposeFileNotFound(SUPPORTED_FILENAMES)
if not filenames:
# search for compose files in the project directory and its parents
filenames = get_default_config_files(override_dir)
if not filenames:
raise ComposeFileNotFound(SUPPORTED_FILENAMES)
log.debug("Using configuration files: {}".format(",".join(filenames)))
return ConfigDetails(
@@ -353,7 +335,7 @@ def get_default_config_files(base_dir):
(candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)
if not candidates:
return None
raise ComposeFileNotFound(SUPPORTED_FILENAMES)
winner = candidates[0]
@@ -574,7 +556,8 @@ def process_config_section(config_file, config, section, environment, interpolat
config_file.version,
config,
section,
environment)
environment
)
else:
return config

View File

@@ -54,10 +54,9 @@ class Environment(dict):
if base_dir is None:
return result
if env_file:
env_file_path = os.path.join(os.getcwd(), env_file)
return cls(env_vars_from_file(env_file_path))
env_file_path = os.path.join(base_dir, '.env')
env_file_path = os.path.join(base_dir, env_file)
else:
env_file_path = os.path.join(base_dir, '.env')
try:
return cls(env_vars_from_file(env_file_path))
except EnvFileNotFound:

View File

@@ -243,7 +243,6 @@ class ConversionMap:
service_path('healthcheck', 'disable'): to_boolean,
service_path('deploy', 'labels', PATH_JOKER): to_str,
service_path('deploy', 'replicas'): to_int,
service_path('deploy', 'placement', 'max_replicas_per_node'): to_int,
service_path('deploy', 'resources', 'limits', "cpus"): to_float,
service_path('deploy', 'update_config', 'parallelism'): to_int,
service_path('deploy', 'update_config', 'max_failure_ratio'): to_float,

View File

@@ -5,7 +5,6 @@ from .version import ComposeVersion
DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
IS_LINUX_PLATFORM = (sys.platform == "linux")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'

View File

@@ -186,6 +186,11 @@ class Container:
def log_driver(self):
return self.get('HostConfig.LogConfig.Type')
@property
def has_api_logs(self):
log_type = self.log_driver
return not log_type or log_type in ('json-file', 'journald', 'local')
@property
def human_readable_health_status(self):
""" Generate UP status string with up time and health
@@ -199,7 +204,11 @@ class Container:
return status_string
def attach_log_stream(self):
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
"""A log stream can only be attached if the container uses a
json-file, journald or local log driver.
"""
if self.has_api_logs:
self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
def get(self, key):
"""Return a value from the container or None if the value is not set.

View File

@@ -27,8 +27,3 @@ class NoHealthCheckConfigured(HealthCheckException):
service_name
)
)
class CompletedUnsuccessfully(Exception):
def __init__(self, container_id, exit_code):
self.msg = 'Container "{}" exited with code {}.'.format(container_id, exit_code)

View File

@@ -36,7 +36,7 @@ class MetricsCommand(requests.Session):
context_type=None, status=Status.SUCCESS,
source=MetricsSource.CLI, uri=None):
super().__init__()
self.command = ("compose " + command).strip() if command else "compose --help"
self.command = "compose " + command if command else "compose --help"
self.context = context_type or ContextAPI.get_current_context().context_type or 'moby'
self.source = source
self.status = status.value

View File

@@ -16,7 +16,6 @@ from compose.cli.colors import green
from compose.cli.colors import red
from compose.cli.signals import ShutdownException
from compose.const import PARALLEL_LIMIT
from compose.errors import CompletedUnsuccessfully
from compose.errors import HealthCheckFailed
from compose.errors import NoHealthCheckConfigured
from compose.errors import OperationFailedError
@@ -62,8 +61,7 @@ def parallel_execute_watch(events, writer, errors, results, msg, get_name, fail_
elif isinstance(exception, APIError):
errors[get_name(obj)] = exception.explanation
writer.write(msg, get_name(obj), 'error', red)
elif isinstance(exception, (OperationFailedError, HealthCheckFailed, NoHealthCheckConfigured,
CompletedUnsuccessfully)):
elif isinstance(exception, (OperationFailedError, HealthCheckFailed, NoHealthCheckConfigured)):
errors[get_name(obj)] = exception.msg
writer.write(msg, get_name(obj), 'error', red)
elif isinstance(exception, UpstreamError):
@@ -243,12 +241,6 @@ def feed_queue(objects, func, get_deps, results, state, limiter):
'not processing'.format(obj)
)
results.put((obj, None, e))
except CompletedUnsuccessfully as e:
log.debug(
'Service(s) upstream of {} did not completed successfully - '
'not processing'.format(obj)
)
results.put((obj, None, e))
if state.is_done():
results.put(STOP)

View File

@@ -490,6 +490,8 @@ class Project:
log.info('%s uses an image, skipping' % service.name)
if cli:
log.info("Building with native build. Learn about native build in Compose here: "
"https://docs.docker.com/go/compose-native-build/")
if parallel_build:
log.warning("Flag '--parallel' is ignored when building with "
"COMPOSE_DOCKER_CLI_BUILD=1")
@@ -649,6 +651,10 @@ class Project:
override_options=None,
):
if cli:
log.info("Building with native build. Learn about native build in Compose here: "
"https://docs.docker.com/go/compose-native-build/")
self.initialize()
if not ignore_orphans:
self.find_orphan_containers(remove_orphans)

View File

@@ -1,5 +1,6 @@
import enum
import itertools
import json
import logging
import os
import re
@@ -44,7 +45,6 @@ from .const import LABEL_VERSION
from .const import NANOCPUS_SCALE
from .const import WINDOWS_LONGPATH_PREFIX
from .container import Container
from .errors import CompletedUnsuccessfully
from .errors import HealthCheckFailed
from .errors import NoHealthCheckConfigured
from .errors import OperationFailedError
@@ -112,7 +112,6 @@ HOST_CONFIG_KEYS = [
CONDITION_STARTED = 'service_started'
CONDITION_HEALTHY = 'service_healthy'
CONDITION_COMPLETED_SUCCESSFULLY = 'service_completed_successfully'
class BuildError(Exception):
@@ -713,7 +712,6 @@ class Service:
'image_id': image_id(),
'links': self.get_link_names(),
'net': self.network_mode.id,
'ipc_mode': self.ipc_mode.mode,
'networks': self.networks,
'secrets': self.secrets,
'volumes_from': [
@@ -755,8 +753,6 @@ class Service:
configs[svc] = lambda s: True
elif config['condition'] == CONDITION_HEALTHY:
configs[svc] = lambda s: s.is_healthy()
elif config['condition'] == CONDITION_COMPLETED_SUCCESSFULLY:
configs[svc] = lambda s: s.is_completed_successfully()
else:
# The config schema already prevents this, but it might be
# bypassed if Compose is called programmatically.
@@ -1107,9 +1103,8 @@ class Service:
'Impossible to perform platform-targeted builds for API version < 1.35'
)
builder = _ClientBuilder(self.client) if not cli else _CLIBuilder(progress)
return builder.build(
service=self,
builder = self.client if not cli else _CLIBuilder(progress)
build_output = builder.build(
path=path,
tag=self.image_name,
rm=rm,
@@ -1130,7 +1125,30 @@ class Service:
gzip=gzip,
isolation=build_opts.get('isolation', self.options.get('isolation', None)),
platform=self.platform,
output_stream=output_stream)
)
try:
all_events = list(stream_output(build_output, output_stream))
except StreamOutputError as e:
raise BuildError(self, str(e))
# Ensure the HTTP connection is not reused for another
# streaming command, as the Docker daemon can sometimes
# complain about it
self.client.close()
image_id = None
for event in all_events:
if 'stream' in event:
match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
if match:
image_id = match.group(1)
if image_id is None:
raise BuildError(self, event if all_events else 'Unknown')
return image_id
def get_cache_from(self, build_opts):
cache_from = build_opts.get('cache_from', None)
@@ -1286,21 +1304,6 @@ class Service:
raise HealthCheckFailed(ctnr.short_id)
return result
def is_completed_successfully(self):
""" Check that all containers for this service has completed successfully
Returns false if at least one container does not exited and
raises CompletedUnsuccessfully exception if at least one container
exited with non-zero exit code.
"""
result = True
for ctnr in self.containers(stopped=True):
ctnr.inspect()
if ctnr.get('State.Status') != 'exited':
result = False
elif ctnr.exit_code != 0:
raise CompletedUnsuccessfully(ctnr.short_id, ctnr.exit_code)
return result
def _parse_proxy_config(self):
client = self.client
if 'proxies' not in client._general_configs:
@@ -1787,77 +1790,20 @@ def rewrite_build_path(path):
return path
class _ClientBuilder:
def __init__(self, client):
self.client = client
def build(self, service, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True, output_stream=sys.stdout):
build_output = self.client.build(
path=path,
tag=tag,
nocache=nocache,
rm=rm,
pull=pull,
forcerm=forcerm,
dockerfile=dockerfile,
labels=labels,
cache_from=cache_from,
buildargs=buildargs,
network_mode=network_mode,
target=target,
shmsize=shmsize,
extra_hosts=extra_hosts,
container_limits=container_limits,
gzip=gzip,
isolation=isolation,
platform=platform)
try:
all_events = list(stream_output(build_output, output_stream))
except StreamOutputError as e:
raise BuildError(service, str(e))
# Ensure the HTTP connection is not reused for another
# streaming command, as the Docker daemon can sometimes
# complain about it
self.client.close()
image_id = None
for event in all_events:
if 'stream' in event:
match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', ''))
if match:
image_id = match.group(1)
if image_id is None:
raise BuildError(service, event if all_events else 'Unknown')
return image_id
class _CLIBuilder:
def __init__(self, progress):
self._progress = progress
def build(self, service, path, tag=None, quiet=False, fileobj=None,
def build(self, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True, output_stream=sys.stdout):
use_config_proxy=True):
"""
Args:
service (str): Service to be built
path (str): Path to the directory containing the Dockerfile
buildargs (dict): A dictionary of build arguments
cache_from (:py:class:`list`): A list of images used for build
@@ -1906,11 +1852,10 @@ class _CLIBuilder:
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
output_stream (writer): stream to use for build logs
Returns:
A generator for the build output.
"""
if dockerfile and os.path.isdir(path):
if dockerfile:
dockerfile = os.path.join(path, dockerfile)
iidfile = tempfile.mktemp()
@@ -1928,29 +1873,35 @@ class _CLIBuilder:
command_builder.add_arg("--tag", tag)
command_builder.add_arg("--target", target)
command_builder.add_arg("--iidfile", iidfile)
command_builder.add_arg("--platform", platform)
command_builder.add_arg("--isolation", isolation)
if extra_hosts:
if isinstance(extra_hosts, dict):
extra_hosts = ["{}:{}".format(host, ip) for host, ip in extra_hosts.items()]
for host in extra_hosts:
command_builder.add_arg("--add-host", "{}".format(host))
args = command_builder.build([path])
with subprocess.Popen(args, stdout=output_stream, stderr=sys.stderr,
magic_word = "Successfully built "
appear = False
with subprocess.Popen(args, stdout=subprocess.PIPE,
universal_newlines=True) as p:
while True:
line = p.stdout.readline()
if not line:
break
if line.startswith(magic_word):
appear = True
yield json.dumps({"stream": line})
p.communicate()
if p.returncode != 0:
raise BuildError(service, "Build failed")
raise StreamOutputError()
with open(iidfile) as f:
line = f.readline()
image_id = line.split(":")[1].strip()
os.remove(iidfile)
return image_id
# In case of `DOCKER_BUILDKIT=1`
# there is no success message already present in the output.
# Since that's the way `Service::build` gets the `image_id`
# it has to be added `manually`
if not appear:
yield json.dumps({"stream": "{}{}\n".format(magic_word, image_id)})
class _CommandBuilder:

View File

@@ -138,7 +138,7 @@ _docker_compose_config() {
;;
esac
COMPREPLY=( $( compgen -W "--hash --help --no-interpolate --profiles --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--hash --help --no-interpolate --quiet -q --resolve-image-digests --services --volumes" -- "$cur" ) )
}
@@ -172,10 +172,6 @@ _docker_compose_docker_compose() {
COMPREPLY=( $( compgen -W "debug info warning error critical" -- "$cur" ) )
return
;;
--profile)
COMPREPLY=( $( compgen -W "$(__docker_compose_q config --profiles)" -- "$cur" ) )
return
;;
--project-directory)
_filedir -d
return
@@ -622,11 +618,10 @@ _docker_compose() {
--tlskey
"
# These options require special treatment when searching the command.
# These options are require special treatment when searching the command.
local top_level_options_with_args="
--ansi
--log-level
--profile
"
COMPREPLY=()

View File

@@ -22,6 +22,6 @@ complete -c docker-compose -l tlskey -r -d 'Path to TLS key fi
complete -c docker-compose -l tlsverify -d 'Use TLS and verify the remote'
complete -c docker-compose -l skip-hostname-check -d "Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)"
complete -c docker-compose -l no-ansi -d 'Do not print ANSI control characters'
complete -c docker-compose -l ansi -a 'never always auto' -d 'Control when to print ANSI control characters'
complete -c docker-compose -l ansi -a never always auto -d 'Control when to print ANSI control characters'
complete -c docker-compose -s h -l help -d 'Print usage'
complete -c docker-compose -s v -l version -d 'Print version and exit'

View File

@@ -1,5 +1,5 @@
Click==7.1.2
coverage==5.5
coverage==5.2.1
ddt==1.4.1
flake8==3.8.3
gitpython==3.1.11
@@ -7,3 +7,4 @@ mock==3.0.5
pytest==6.0.1; python_version >= '3.5'
pytest==4.6.5; python_version < '3.5'
pytest-cov==2.10.1
PyYAML==5.3.1

View File

@@ -3,7 +3,7 @@ appdirs==1.4.4
attrs==20.3.0
bcrypt==3.2.0
cffi==1.14.4
cryptography==3.3.2
cryptography==3.2.1
distlib==0.3.1
entrypoints==0.3
filelock==3.0.12
@@ -11,9 +11,9 @@ gitdb2==4.0.2
mccabe==0.6.1
more-itertools==8.6.0; python_version >= '3.5'
more-itertools==5.0.0; python_version < '3.5'
packaging==20.9
packaging==20.4
pluggy==0.13.1
py==1.10.0
py==1.9.0
pycodestyle==2.6.0
pycparser==2.20
pyflakes==2.2.0

View File

@@ -1,10 +1,10 @@
backports.shutil_get_terminal_size==1.0.0
cached-property==1.5.1; python_version < '3.8'
cached-property==1.5.1
certifi==2020.6.20
chardet==3.0.4
colorama==0.4.3; sys_platform == 'win32'
distro==1.5.0
docker==5.0.0
docker==4.4.1
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
@@ -13,9 +13,9 @@ ipaddress==1.0.23
jsonschema==3.2.0
paramiko==2.7.1
PySocks==1.7.1
python-dotenv==0.17.0
python-dotenv==0.14.0
pywin32==227; sys_platform == 'win32'
PyYAML==5.4.1
PyYAML==5.3.1
requests==2.24.0
texttable==1.6.2
urllib3==1.25.10; python_version == '3.3'

View File

@@ -15,7 +15,7 @@
set -e
VERSION="1.29.2"
VERSION="1.28.2"
IMAGE="docker/compose:$VERSION"

View File

@@ -38,23 +38,17 @@ for version in $DOCKER_VERSIONS; do
trap "on_exit" EXIT
repo="dockerswarm/dind"
docker run \
-d \
--name "$daemon_container" \
--privileged \
--volume="/var/lib/docker" \
-e "DOCKER_TLS_CERTDIR=" \
"docker:$version-dind" \
"$repo:$version" \
dockerd -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
2>&1 | tail -n 10
docker exec "$daemon_container" sh -c "apk add --no-cache git"
# copy docker config from host for authentication with Docker Hub
docker exec "$daemon_container" sh -c "mkdir /root/.docker"
docker cp /root/.docker/config.json $daemon_container:/root/.docker/config.json
docker exec "$daemon_container" sh -c "chmod 644 /root/.docker/config.json"
docker run \
--rm \
--tty \

View File

@@ -25,13 +25,14 @@ def find_version(*file_paths):
install_requires = [
'cached-property >= 1.2.0, < 2',
'docopt >= 0.6.1, < 1',
'PyYAML >= 3.10, < 6',
'requests >= 2.20.0, < 3',
'texttable >= 0.9.0, < 2',
'websocket-client >= 0.32.0, < 1',
'distro >= 1.5.0, < 2',
'docker[ssh] >= 5',
'docker[ssh] >= 4.4.0, < 5',
'dockerpty >= 0.4.1, < 1',
'jsonschema >= 2.5.1, < 4',
'python-dotenv >= 0.13.0, < 1',
@@ -49,7 +50,6 @@ if sys.version_info[:2] < (3, 4):
extras_require = {
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5, < 4'],
':python_version < "3.8"': ['cached-property >= 1.2.0, < 2'],
':sys_platform == "win32"': ['colorama >= 0.4, < 1'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
'tests': tests_require,

View File

@@ -237,11 +237,6 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['-H=tcp://doesnotexist:8000', 'ps'], returncode=1)
assert "Couldn't connect to Docker daemon" in result.stderr
def test_config_list_profiles(self):
self.base_dir = 'tests/fixtures/config-profiles'
result = self.dispatch(['config', '--profiles'])
assert set(result.stdout.rstrip().split('\n')) == {'debug', 'frontend', 'gui'}
def test_config_list_services(self):
self.base_dir = 'tests/fixtures/v2-full'
result = self.dispatch(['config', '--services'])

View File

@@ -1,15 +0,0 @@
version: '3.8'
services:
frontend:
image: frontend
profiles: ["frontend", "gui"]
phpmyadmin:
image: phpmyadmin
depends_on:
- db
profiles:
- debug
backend:
image: backend
db:
image: mysql

View File

@@ -1 +0,0 @@
WHEREAMI=default

View File

@@ -1,6 +1,5 @@
import tempfile
import pytest
from ddt import data
from ddt import ddt
@@ -9,7 +8,6 @@ from ..acceptance.cli_test import dispatch
from compose.cli.command import get_project
from compose.cli.command import project_from_options
from compose.config.environment import Environment
from compose.config.errors import EnvFileNotFound
from tests.integration.testcases import DockerClientTestCase
@@ -57,36 +55,13 @@ services:
class EnvironmentOverrideFileTest(DockerClientTestCase):
def test_env_file_override(self):
base_dir = 'tests/fixtures/env-file-override'
# '--env-file' are relative to the current working dir
env = Environment.from_env_file(base_dir, base_dir+'/.env.override')
dispatch(base_dir, ['--env-file', '.env.override', 'up'])
project = get_project(project_dir=base_dir,
config_path=['docker-compose.yml'],
environment=env,
environment=Environment.from_env_file(base_dir, '.env.override'),
override_dir=base_dir)
containers = project.containers(stopped=True)
assert len(containers) == 1
assert "WHEREAMI=override" in containers[0].get('Config.Env')
assert "DEFAULT_CONF_LOADED=true" in containers[0].get('Config.Env')
dispatch(base_dir, ['--env-file', '.env.override', 'down'], None)
def test_env_file_not_found_error(self):
base_dir = 'tests/fixtures/env-file-override'
with pytest.raises(EnvFileNotFound) as excinfo:
Environment.from_env_file(base_dir, '.env.override')
assert "Couldn't find env file" in excinfo.exconly()
def test_dot_env_file(self):
base_dir = 'tests/fixtures/env-file-override'
# '.env' is relative to the project_dir (base_dir)
env = Environment.from_env_file(base_dir, None)
dispatch(base_dir, ['up'])
project = get_project(project_dir=base_dir,
config_path=['docker-compose.yml'],
environment=env,
override_dir=base_dir)
containers = project.containers(stopped=True)
assert len(containers) == 1
assert "WHEREAMI=default" in containers[0].get('Config.Env')
dispatch(base_dir, ['down'], None)

View File

@@ -25,7 +25,6 @@ from compose.const import COMPOSE_SPEC as VERSION
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.errors import CompletedUnsuccessfully
from compose.errors import HealthCheckFailed
from compose.errors import NoHealthCheckConfigured
from compose.project import Project
@@ -1900,106 +1899,6 @@ class ProjectTest(DockerClientTestCase):
with pytest.raises(NoHealthCheckConfigured):
svc1.is_healthy()
def test_project_up_completed_successfully_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'true'
},
'svc2': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_completed_successfully'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
project.up()
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
assert svc2.containers()[0].is_running
assert len(svc1.containers()) == 0
assert svc1.is_completed_successfully()
def test_project_up_completed_unsuccessfully_dependency(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'false'
},
'svc2': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_completed_successfully'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
with pytest.raises(ProjectError):
project.up()
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
assert 'svc1' in svc2.get_dependency_names()
assert len(svc2.containers()) == 0
with pytest.raises(CompletedUnsuccessfully):
svc1.is_completed_successfully()
def test_project_up_completed_differently_dependencies(self):
config_dict = {
'version': '2.1',
'services': {
'svc1': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'true'
},
'svc2': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'false'
},
'svc3': {
'image': BUSYBOX_IMAGE_WITH_TAG,
'command': 'top',
'depends_on': {
'svc1': {'condition': 'service_completed_successfully'},
'svc2': {'condition': 'service_completed_successfully'},
}
}
}
}
config_data = load_config(config_dict)
project = Project.from_config(
name='composetest', config_data=config_data, client=self.client
)
with pytest.raises(ProjectError):
project.up()
svc1 = project.get_service('svc1')
svc2 = project.get_service('svc2')
svc3 = project.get_service('svc3')
assert ['svc1', 'svc2'] == svc3.get_dependency_names()
assert svc1.is_completed_successfully()
assert len(svc3.containers()) == 0
with pytest.raises(CompletedUnsuccessfully):
svc2.is_completed_successfully()
def test_project_up_seccomp_profile(self):
seccomp_data = {
'defaultAction': 'SCMP_ACT_ALLOW',

View File

@@ -8,6 +8,7 @@ from docker.errors import APIError
from compose.cli.log_printer import build_log_generator
from compose.cli.log_printer import build_log_presenters
from compose.cli.log_printer import build_no_log_generator
from compose.cli.log_printer import consume_queue
from compose.cli.log_printer import QueueItem
from compose.cli.log_printer import wait_on_exit
@@ -74,6 +75,14 @@ def test_wait_on_exit_raises():
assert expected in wait_on_exit(mock_container)
def test_build_no_log_generator(mock_container):
mock_container.has_api_logs = False
mock_container.log_driver = 'none'
output, = build_no_log_generator(mock_container, None)
assert "WARNING: no logs are available with the 'none' log driver\n" in output
assert "exited with code" not in output
class TestBuildLogGenerator:
def test_no_log_stream(self, mock_container):

View File

@@ -2397,8 +2397,7 @@ web:
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_completed_successfully'}
'app2': {'condition': 'service_healthy'}
}
}
override = {}
@@ -2410,12 +2409,11 @@ web:
'image': 'busybox',
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_completed_successfully'}
'app2': {'condition': 'service_healthy'}
}
}
override = {
'depends_on': ['app4']
'depends_on': ['app3']
}
actual = config.merge_service_dicts(base, override, VERSION)
@@ -2424,8 +2422,7 @@ web:
'depends_on': {
'app1': {'condition': 'service_started'},
'app2': {'condition': 'service_healthy'},
'app3': {'condition': 'service_completed_successfully'},
'app4': {'condition': 'service_started'},
'app3': {'condition': 'service_started'}
}
}
@@ -3570,11 +3567,9 @@ class InterpolationTest(unittest.TestCase):
@mock.patch.dict(os.environ)
def test_config_file_with_options_environment_file(self):
project_dir = 'tests/fixtures/default-env-file'
# env-file is relative to current working dir
env = Environment.from_env_file(project_dir, project_dir + '/.env2')
service_dicts = config.load(
config.find(
project_dir, None, env
project_dir, None, Environment.from_env_file(project_dir, '.env2')
)
).services
@@ -5238,8 +5233,6 @@ class GetDefaultConfigFilesTestCase(unittest.TestCase):
files = [
'docker-compose.yml',
'docker-compose.yaml',
'compose.yml',
'compose.yaml',
]
def test_get_config_path_default_file_in_basedir(self):
@@ -5273,10 +5266,8 @@ def get_config_filename_for_files(filenames, subdir=None):
base_dir = tempfile.mkdtemp(dir=project_dir)
else:
base_dir = project_dir
filenames = config.get_default_config_files(base_dir)
if not filenames:
raise config.ComposeFileNotFound(config.SUPPORTED_FILENAMES)
return os.path.basename(filenames[0])
filename, = config.get_default_config_files(base_dir)
return os.path.basename(filename)
finally:
shutil.rmtree(project_dir)

View File

@@ -221,6 +221,34 @@ class ContainerTest(unittest.TestCase):
container = Container(None, self.container_dict, has_been_inspected=True)
assert container.short_id == self.container_id[:12]
def test_has_api_logs(self):
container_dict = {
'HostConfig': {
'LogConfig': {
'Type': 'json-file'
}
}
}
container = Container(None, container_dict, has_been_inspected=True)
assert container.has_api_logs is True
container_dict['HostConfig']['LogConfig']['Type'] = 'none'
container = Container(None, container_dict, has_been_inspected=True)
assert container.has_api_logs is False
container_dict['HostConfig']['LogConfig']['Type'] = 'syslog'
container = Container(None, container_dict, has_been_inspected=True)
assert container.has_api_logs is False
container_dict['HostConfig']['LogConfig']['Type'] = 'journald'
container = Container(None, container_dict, has_been_inspected=True)
assert container.has_api_logs is True
container_dict['HostConfig']['LogConfig']['Type'] = 'foobar'
container = Container(None, container_dict, has_been_inspected=True)
assert container.has_api_logs is False
class GetContainerNameTestCase(unittest.TestCase):

View File

@@ -330,7 +330,7 @@ class ServiceTest(unittest.TestCase):
assert service.options['environment'] == environment
assert opts['labels'][LABEL_CONFIG_HASH] == \
'6da0f3ec0d5adf901de304bdc7e0ee44ec5dd7adb08aebc20fe0dd791d4ee5a8'
'689149e6041a85f6fb4945a2146a497ed43c8a5cbd8991753d875b165f1b4de4'
assert opts['environment'] == ['also=real']
def test_get_container_create_options_sets_affinity_with_binds(self):
@@ -700,7 +700,6 @@ class ServiceTest(unittest.TestCase):
config_dict = service.config_dict()
expected = {
'image_id': 'abcd',
'ipc_mode': None,
'options': {'image': 'example.com/foo'},
'links': [('one', 'one')],
'net': 'other',
@@ -724,7 +723,6 @@ class ServiceTest(unittest.TestCase):
config_dict = service.config_dict()
expected = {
'image_id': 'abcd',
'ipc_mode': None,
'options': {'image': 'example.com/foo'},
'links': [],
'networks': {},

View File

@@ -50,7 +50,7 @@ directory = coverage-html
[flake8]
max-line-length = 105
# Set this high for now
max-complexity = 12
max-complexity = 11
exclude = compose/packages
[pytest]