Compare commits


28 Commits
py2 ... v1.24.0

Author SHA1 Message Date
Chris Crone
f85950ebec Merge pull request #6763 from docker/1.24.1-patch
1.24.1 patch
2019-06-24 10:28:50 +02:00
Djordje Lukic
3fbb9fe51e Bump docker-py
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-06-21 17:30:32 +02:00
Ulysses Souza
d9fa8158c3 Merge pull request #6616 from docker/bump-1.24.0
Bump 1.24.0
2019-03-28 18:58:40 +01:00
Ulysses Souza
0aa590649c "Bump 1.24.0"
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-28 18:34:02 +01:00
Ulysses Souza
eb2fdf81b4 Bump docker-py 3.7.2
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-28 18:29:27 +01:00
Ulysses Souza
917c2701f2 Fix script for release file already present case
This avoids an "AttributeError: 'HTTPError' object has no attribute 'message'".

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-28 18:29:25 +01:00
Ulysses Souza
3a3288c54b Merge pull request #6609 from docker/bump-1.24.0-rc3
Bump 1.24.0-rc3
2019-03-22 15:51:21 +01:00
Ulysses Souza
428942498b "Bump 1.24.0-rc3"
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:52 +01:00
Ulysses Souza
c54341758a Fix bintray docker-compose link
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:22 +01:00
Ulysses Souza
662761dbba Fix typo on finalize
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:22 +01:00
Ulysses Souza
0e05ac6d2c Use os.system() instead of run_setup()
Use `os.system()` instead of `run_setup()` because the latter was not taking effect

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:22 +01:00
Ulysses Souza
295dd9abda Bump docker-py version to 3.7.1
This docker-py version includes ssh fixes

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:21 +01:00
Ben Firshman
81b30c4380 Enable bootloader_ignore_signals in pyinstaller
Fixes #3347

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2019-03-22 15:15:19 +01:00
Michael Irwin
360753ecc1 Added test case to verify fix for #6525
Signed-off-by: Michael Irwin <mikesir87@gmail.com>
2019-03-22 15:15:18 +01:00
Michael Irwin
3fae0119ca Fix merging of compose files when network has None config
Signed-off-by: Michael Irwin <mikesir87@gmail.com>

Resolves #6525
2019-03-22 15:15:18 +01:00
Christopher Crone
0fdb9783cd circleci: Fix virtualenv version to 16.2.0
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-03-22 15:15:17 +01:00
Ulysses Souza
0dec6b5ff1 Fix Flake8 lint
This removes extra indentation and replaces the use of `is` with `==` when
comparing strings

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:16 +01:00
Christopher Crone
e0412a2488 Dockerfile: Force version of virtualenv to 16.2.0
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-03-22 15:15:16 +01:00
Christopher Crone
3fc5c6f563 script.build.linux: Do not tail image build logs
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-03-22 15:15:16 +01:00
Christopher Crone
28310b3ba4 requirements-dev: Fix version of mock to 2.0.0
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-03-22 15:15:16 +01:00
Christopher Crone
4585db124a macOS: Bump Python and OpenSSL
Signed-off-by: Christopher Crone <christopher.crone@docker.com>
2019-03-22 15:15:16 +01:00
Ulysses Souza
1f9b20d97b Add --parallel to docker build's options in bash and zsh completion
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-03-22 15:15:13 +01:00
Djordje Lukic
82a89aef1c Support for credential_spec
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-03-22 15:15:12 +01:00
Harald Albers
3934617e37 Add bash completion for ps --all|-a
Signed-off-by: Harald Albers <github@albersweb.de>
2019-03-22 15:15:10 +01:00
Djordje Lukic
82db4fd4f2 Merge pull request #6455 from docker/bump-1.24.0-rc1
Bump 1.24.0-rc1
2019-01-14 17:54:34 +01:00
Djordje Lukic
0f3d4ddaa7 "Bump 1.24.0-rc1"
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-01-11 18:28:30 +01:00
Djordje Lukic
2007951731 "Bump 1.24.0-rc1"
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-01-11 17:57:05 +01:00
Djordje Lukic
60f8ce09f9 "Bump 1.24.0-rc1"
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-01-11 17:43:47 +01:00
28 changed files with 194 additions and 80 deletions

View File

@@ -10,7 +10,7 @@ jobs:
command: ./script/setup/osx
- run:
name: install tox
-command: sudo pip install --upgrade tox==2.1.1
+command: sudo pip install --upgrade tox==2.1.1 virtualenv==16.2.0
- run:
name: unit tests
command: tox -e py27,py36,py37 -- tests/unit
@@ -22,7 +22,7 @@ jobs:
- checkout
- run:
name: upgrade python tools
-command: sudo pip install --upgrade pip virtualenv
+command: sudo pip install --upgrade pip virtualenv==16.2.0
- run:
name: setup script
command: DEPLOYMENT_TARGET=10.11 ./script/setup/osx

View File

@@ -1,6 +1,58 @@
Change log
==========
1.24.0 (2019-03-22)
-------------------
### Features
- Added support for connecting to the Docker Engine using the `ssh` protocol.
- Added a `--all` flag to `docker-compose ps` to include stopped one-off containers
in the command's output.
- Add bash completion for `ps --all|-a`
- Support for credential_spec
- Add `--parallel` to `docker build`'s options in `bash` and `zsh` completion
### Bugfixes
- Fixed a bug where some valid credential helpers weren't properly handled by Compose
when attempting to pull images from private registries.
- Fixed an issue where the output of `docker-compose start` before containers were created
was misleading
- To match the Docker CLI behavior and to avoid confusing issues, Compose will no longer
accept whitespace in variable names sourced from environment files.
- Compose will now report a configuration error if a service attempts to declare
duplicate mount points in the volumes section.
- Fixed an issue with the containerized version of Compose that prevented users from
writing to stdin during interactive sessions started by `run` or `exec`.
- One-off containers started by `run` no longer adopt the restart policy of the service,
and are instead set to never restart.
- Fixed an issue that caused some container events to not appear in the output of
the `docker-compose events` command.
- Missing images will no longer stop the execution of `docker-compose down` commands
(a warning will be displayed instead).
- Force `virtualenv` version for macOS CI
- Fix merging of compose files when network has `None` config
- Fix `CTRL+C` issues by enabling `bootloader_ignore_signals` in `pyinstaller`
- Bump `docker-py` version to `3.7.2` to fix SSH and proxy config issues
- Fix release script and some typos on release documentation
1.23.2 (2018-11-28)
-------------------
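
The `ssh` protocol support listed in the changelog above comes in through the bumped docker-py dependency. A minimal sketch of what that enables, assuming docker-py 3.7+ with `paramiko` installed and SSH access to a remote engine (the host name is a placeholder):

```
import docker

# ssh:// base URLs are accepted by docker-py >= 3.7, which is what lets
# Compose honour e.g. DOCKER_HOST=ssh://user@remote-host.
client = docker.DockerClient(base_url="ssh://user@remote-host")
print(client.version()["Version"])
```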

View File

@@ -17,6 +17,8 @@ ENV LANG en_US.UTF-8
RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/
+# FIXME(chris-crone): virtualenv 16.3.0 breaks build, force 16.2.0 until fixed
+RUN pip install virtualenv==16.2.0
RUN pip install tox==2.1.1
ADD requirements.txt /code/
@@ -25,6 +27,7 @@ ADD .pre-commit-config.yaml /code/
ADD setup.py /code/
ADD tox.ini /code/
ADD compose /code/compose/
+ADD README.md /code/
RUN tox --notest
ADD . /code/

View File

@@ -1,4 +1,4 @@
from __future__ import absolute_import
from __future__ import unicode_literals
-__version__ = '1.24.0dev'
+__version__ = '1.24.0'

View File

@@ -206,8 +206,8 @@ class TopLevelCommand(object):
name specified in the client certificate
--project-directory PATH Specify an alternate working directory
(default: the path of the Compose file)
---compatibility If set, Compose will attempt to convert deploy
-keys in v3 files to their non-Swarm equivalent
+--compatibility If set, Compose will attempt to convert keys
+in v3 files to their non-Swarm equivalent
Commands:
build Build or rebuild services

View File

@@ -51,6 +51,7 @@ from .validation import match_named_volumes
from .validation import validate_against_config_schema
from .validation import validate_config_section
from .validation import validate_cpu
+from .validation import validate_credential_spec
from .validation import validate_depends_on
from .validation import validate_extends_file_path
from .validation import validate_healthcheck
@@ -369,7 +370,6 @@ def check_swarm_only_config(service_dicts, compatibility=False):
)
if not compatibility:
check_swarm_only_key(service_dicts, 'deploy')
-check_swarm_only_key(service_dicts, 'credential_spec')
check_swarm_only_key(service_dicts, 'configs')
@@ -706,6 +706,7 @@ def validate_service(service_config, service_names, config_file):
validate_depends_on(service_config, service_names)
validate_links(service_config, service_names)
validate_healthcheck(service_config)
+validate_credential_spec(service_config)
if not service_dict.get('image') and has_uppercase(service_name):
raise ConfigurationError(
@@ -894,6 +895,7 @@ def finalize_service(service_config, service_names, version, environment, compat
normalize_build(service_dict, service_config.working_dir, environment)
if compatibility:
+service_dict = translate_credential_spec_to_security_opt(service_dict)
service_dict, ignored_keys = translate_deploy_keys_to_container_config(
service_dict
)
@@ -930,6 +932,25 @@ def convert_restart_policy(name):
raise ConfigurationError('Invalid restart policy "{}"'.format(name))
+def convert_credential_spec_to_security_opt(credential_spec):
+    if 'file' in credential_spec:
+        return 'file://{file}'.format(file=credential_spec['file'])
+    return 'registry://{registry}'.format(registry=credential_spec['registry'])
+
+
+def translate_credential_spec_to_security_opt(service_dict):
+    result = []
+
+    if 'credential_spec' in service_dict:
+        spec = convert_credential_spec_to_security_opt(service_dict['credential_spec'])
+        result.append('credentialspec={spec}'.format(spec=spec))
+
+    if result:
+        service_dict['security_opt'] = result
+
+    return service_dict
def translate_deploy_keys_to_container_config(service_dict):
if 'credential_spec' in service_dict:
del service_dict['credential_spec']
@@ -1172,7 +1193,7 @@ def merge_networks(base, override):
base = {k: {} for k in base} if isinstance(base, list) else base
override = {k: {} for k in override} if isinstance(override, list) else override
for network_name in all_network_names:
-md = MergeDict(base.get(network_name, {}), override.get(network_name, {}))
+md = MergeDict(base.get(network_name) or {}, override.get(network_name) or {})
md.merge_field('aliases', merge_unique_items_lists, [])
md.merge_field('link_local_ips', merge_unique_items_lists, [])
md.merge_scalar('priority')
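
A small standalone sketch of the `credential_spec` translation added above: under `--compatibility`, a `credential_spec` with a `file` (or `registry`) key is rewritten into a `security_opt` entry, matching the unit-test expectation further down (`credentialspec=file://spec.json`). The service dict here is a made-up example:

```
def convert_credential_spec_to_security_opt(credential_spec):
    # file:// takes precedence; otherwise fall back to registry://
    if 'file' in credential_spec:
        return 'file://{file}'.format(file=credential_spec['file'])
    return 'registry://{registry}'.format(registry=credential_spec['registry'])


service = {'image': 'foo', 'credential_spec': {'file': 'spec.json'}}
spec = convert_credential_spec_to_security_opt(service['credential_spec'])
print('credentialspec={}'.format(spec))  # credentialspec=file://spec.json
```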

View File

@@ -240,6 +240,18 @@ def validate_depends_on(service_config, service_names):
)
+def validate_credential_spec(service_config):
+    credential_spec = service_config.config.get('credential_spec')
+    if not credential_spec:
+        return
+
+    if 'registry' not in credential_spec and 'file' not in credential_spec:
+        raise ConfigurationError(
+            "Service '{s.name}' is missing 'credential_spec.file' or "
+            "credential_spec.registry'".format(s=service_config)
+        )
def get_unsupported_config_msg(path, error_key):
msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key)
if error_key in DOCKER_CONFIG_HINTS:

View File

@@ -291,7 +291,7 @@ class Service(object):
c for c in stopped_containers if self._containers_have_diverged([c])
]
for c in divergent_containers:
c.remove()
c.remove()
all_containers = list(set(all_containers) - set(divergent_containers))
@@ -461,50 +461,50 @@ class Service(object):
def _execute_convergence_recreate(self, containers, scale, timeout, detached, start,
renew_anonymous_volumes):
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
def recreate(container):
return self.recreate_container(
container, timeout=timeout, attach_logs=not detached,
start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
)
containers, errors = parallel_execute(
containers,
recreate,
lambda c: c.name,
"Recreating",
def recreate(container):
return self.recreate_container(
container, timeout=timeout, attach_logs=not detached,
start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
)
containers, errors = parallel_execute(
containers,
recreate,
lambda c: c.name,
"Recreating",
)
for error in errors.values():
raise OperationFailedError(error)
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _execute_convergence_start(self, containers, scale, timeout, detached, start):
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
_, errors = parallel_execute(
containers,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
lambda c: c.name,
"Starting",
)
for error in errors.values():
raise OperationFailedError(error)
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _execute_convergence_start(self, containers, scale, timeout, detached, start):
if scale is not None and len(containers) > scale:
self._downscale(containers[scale:], timeout)
containers = containers[:scale]
if start:
_, errors = parallel_execute(
containers,
lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
lambda c: c.name,
"Starting",
)
for error in errors.values():
raise OperationFailedError(error)
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
if scale is not None and len(containers) < scale:
containers.extend(self._execute_convergence_create(
scale - len(containers), detached, start
))
return containers
def _downscale(self, containers, timeout=None):
def stop_and_remove(container):

View File

@@ -114,7 +114,7 @@ _docker_compose_build() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--build-arg --compress --force-rm --help --memory --no-cache --pull" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--build-arg --compress --force-rm --help --memory --no-cache --pull --parallel" -- "$cur" ) )
;;
*)
__docker_compose_complete_services --filter source=build
@@ -361,7 +361,7 @@ _docker_compose_ps() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --quiet -q --services --filter" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--all -a --filter --help --quiet -q --services" -- "$cur" ) )
;;
*)
__docker_compose_complete_services

View File

@@ -117,6 +117,7 @@ __docker-compose_subcommand() {
'--no-cache[Do not use cache when building the image.]' \
'--pull[Always attempt to pull a newer version of the image.]' \
'--compress[Compress the build context using gzip.]' \
+'--parallel[Build images in parallel.]' \
'*:services:__docker-compose_services_from_build' && ret=0
;;
(bundle)
@@ -339,7 +340,7 @@ _docker-compose() {
'(- :)'{-h,--help}'[Get help]' \
'*'{-f,--file}"[${file_description}]:file:_files -g '*.yml'" \
'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
"--compatibility[If set, Compose will attempt to convert deploy keys in v3 files to their non-Swarm equivalent]" \
"--compatibility[If set, Compose will attempt to convert keys in v3 files to their non-Swarm equivalent]" \
'(- :)'{-v,--version}'[Print version and exit]' \
'--verbose[Show more output]' \
'--log-level=[Set log level]:level:(DEBUG INFO WARNING ERROR CRITICAL)' \

View File

@@ -98,4 +98,5 @@ exe = EXE(pyz,
debug=False,
strip=None,
upx=True,
-console=True)
+console=True,
+bootloader_ignore_signals=True)

View File

@@ -1,5 +1,5 @@
coverage==4.4.2
flake8==3.5.0
-mock>=1.0.1
+mock==2.0.0
pytest==3.6.3
pytest-cov==2.5.1

View File

@@ -3,7 +3,7 @@ cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
colorama==0.4.0; sys_platform == 'win32'
-docker==3.7.0
+docker==3.7.3
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
@@ -21,4 +21,4 @@ requests==2.20.0
six==1.10.0
texttable==0.9.1
urllib3==1.21.1; python_version == '3.3'
-websocket-client==0.32.0
+websocket-client==0.56.0

View File

@@ -5,7 +5,7 @@ set -ex
./script/clean
TAG="docker-compose"
docker build -t "$TAG" . | tail -n 200
docker build -t "$TAG" .
docker run \
--rm --entrypoint="script/build/linux-entrypoint" \
-v $(pwd)/dist:/code/dist \

View File

@@ -40,7 +40,7 @@ This API token should be exposed to the release script through the
### A Bintray account and Bintray API key
Your Bintray account will need to be an admin member of the
-[docker-compose organization](https://github.com/settings/tokens).
+[docker-compose organization](https://bintray.com/docker-compose).
Additionally, you should generate a personal API key. To do so, click your
username in the top-right hand corner and select "Edit profile" ; on the new
page, select "API key" in the left-side menu.
@@ -129,7 +129,7 @@ assets public), proceed to the "Finalize a release" section of this guide.
Once you're ready to make your release public, you may execute the following
command from the root of the Compose repository:
```
-./script/release/release.sh -b <BINTRAY_USERNAME> finalize RELEAE_VERSION
+./script/release/release.sh -b <BINTRAY_USERNAME> finalize RELEASE_VERSION
```
Note that this command will create and publish versioned assets to the public.

View File

@@ -7,7 +7,6 @@ import os
import shutil
import sys
import time
-from distutils.core import run_setup
from jinja2 import Template
from release.bintray import BintrayAPI
@@ -276,7 +275,8 @@ def finalize(args):
repository.checkout_branch(br_name)
-run_setup(os.path.join(REPO_ROOT, 'setup.py'), script_args=['sdist', 'bdist_wheel'])
+os.system('python {setup_script} sdist bdist_wheel'.format(
+    setup_script=os.path.join(REPO_ROOT, 'setup.py')))
merge_status = pr_data.merge()
if not merge_status.merged and not args.finalize_resume:

View File

@@ -18,7 +18,7 @@ def pypi_upload(args):
'dist/docker-compose-{}*.tar.gz'.format(rel)
])
except HTTPError as e:
-if e.response.status_code == 400 and 'File already exists' in e.message:
+if e.response.status_code == 400 and 'File already exists' in str(e):
if not args.finalize_resume:
raise ScriptError(
'Package already uploaded on PyPi.'
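
For context on the `str(e)` change above, a minimal reproduction of the failure mode mentioned in the commit message, assuming the `requests` package is available: on Python 3, `HTTPError` has no `.message` attribute, so the error text has to be read through `str(e)`.

```
from requests.exceptions import HTTPError

try:
    raise HTTPError("400 Client Error: File already exists")
except HTTPError as e:
    # e.message would raise AttributeError here (Python 3), hence str(e)
    if "File already exists" in str(e):
        print("release file already uploaded, resuming")
```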

View File

@@ -219,6 +219,8 @@ def get_contributors(pr_data):
commits = pr_data.get_commits()
authors = {}
for commit in commits:
+if not commit.author:
+    continue
author = commit.author.login
authors[author] = authors.get(author, 0) + 1
return [x[0] for x in sorted(list(authors.items()), key=lambda x: x[1])]
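
An illustrative sketch of why the guard above is needed (PyGithub-style commit objects assumed, not part of this diff): commits whose git author email is not linked to a GitHub account come back with `author` set to `None`, so `commit.author.login` would raise `AttributeError` without the check.

```
def count_commits_per_author(commits):
    authors = {}
    for commit in commits:
        if not commit.author:  # author not resolvable to a GitHub user
            continue
        authors[commit.author.login] = authors.get(commit.author.login, 0) + 1
    return authors
```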

View File

@@ -15,7 +15,7 @@
set -e
VERSION="1.23.2"
VERSION="1.24.0"
IMAGE="docker/compose:$VERSION"

View File

@@ -13,13 +13,13 @@ if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
fi
-OPENSSL_VERSION=1.1.0h
+OPENSSL_VERSION=1.1.0j
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
-OPENSSL_SHA1=0fc39f6aa91b6e7f4d05018f7c5e991e1d2491fd
+OPENSSL_SHA1=dcad1efbacd9a4ed67d4514470af12bbe2a1d60a
-PYTHON_VERSION=3.6.6
+PYTHON_VERSION=3.6.8
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
-PYTHON_SHA1=ae1fc9ddd29ad8c1d5f7b0d799ff0787efeb9652
+PYTHON_SHA1=09fcc4edaef0915b4dedbfb462f1cd15f82d3a6f
#
# Install prerequisites.

View File

@@ -40,7 +40,7 @@ ProcessResult = namedtuple('ProcessResult', 'stdout stderr')
BUILD_CACHE_TEXT = 'Using cache'
-BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:latest'
+BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:1.27.2'
def start_process(base_dir, options):
@@ -658,15 +658,15 @@ class CLITestCase(DockerClientTestCase):
self.base_dir = 'tests/fixtures/links-composefile'
result = self.dispatch(['pull', '--no-parallel', 'web'])
assert sorted(result.stderr.split('\n'))[1:] == [
-'Pulling web (busybox:latest)...',
+'Pulling web (busybox:1.27.2)...',
]
def test_pull_with_include_deps(self):
self.base_dir = 'tests/fixtures/links-composefile'
result = self.dispatch(['pull', '--no-parallel', '--include-deps', 'web'])
assert sorted(result.stderr.split('\n'))[1:] == [
-'Pulling db (busybox:latest)...',
-'Pulling web (busybox:latest)...',
+'Pulling db (busybox:1.27.2)...',
+'Pulling web (busybox:1.27.2)...',
]
def test_build_plain(self):
@@ -2592,7 +2592,7 @@ class CLITestCase(DockerClientTestCase):
container, = self.project.containers()
expected_template = ' container {} {}'
-expected_meta_info = ['image=busybox:latest', 'name=simple-composefile_simple_']
+expected_meta_info = ['image=busybox:1.27.2', 'name=simple-composefile_simple_']
assert expected_template.format('create', container.id) in lines[0]
assert expected_template.format('start', container.id) in lines[1]

View File

@@ -2,7 +2,7 @@ version: "2.2"
services:
service:
-image: busybox:latest
+image: busybox:1.27.2
command: top
environment:

View File

@@ -1,11 +1,11 @@
db:
-image: busybox:latest
+image: busybox:1.27.2
command: top
web:
-image: busybox:latest
+image: busybox:1.27.2
command: top
links:
- db:db
console:
-image: busybox:latest
+image: busybox:1.27.2
command: top

View File

@@ -1,5 +1,5 @@
simple:
-image: busybox:latest
+image: busybox:1.27.2
command: top
another:
image: busybox:latest

View File

@@ -1,3 +1,3 @@
-FROM busybox:latest
+FROM busybox:1.27.2
LABEL com.docker.compose.test_image=true
CMD echo "success"

View File

@@ -1,8 +1,8 @@
version: "2"
services:
simple:
-image: busybox:latest
+image: busybox:1.27.2
command: top
another:
-image: busybox:latest
+image: busybox:1.27.2
command: top

View File

@@ -193,7 +193,7 @@ class TestConsumeQueue(object):
queue.put(item)
generator = consume_queue(queue, True)
-assert next(generator) is 'foobar-1'
+assert next(generator) == 'foobar-1'
def test_item_is_none_when_timeout_is_hit(self):
queue = Queue()

View File

@@ -3593,6 +3593,9 @@ class InterpolationTest(unittest.TestCase):
'reservations': {'memory': '100M'},
},
},
+'credential_spec': {
+    'file': 'spec.json'
+},
},
},
})
@@ -3610,7 +3613,8 @@ class InterpolationTest(unittest.TestCase):
'mem_limit': '300M',
'mem_reservation': '100M',
'cpus': 0.7,
-'name': 'foo'
+'name': 'foo',
+'security_opt': ['credentialspec=file://spec.json'],
}
@mock.patch.dict(os.environ)
@@ -3928,6 +3932,24 @@ class MergeNetworksTest(unittest.TestCase, MergeListsTest):
}
}
+def test_network_has_none_value(self):
+    service_dict = config.merge_service_dicts(
+        {self.config_name: {
+            'default': None
+        }},
+        {self.config_name: {
+            'default': {
+                'aliases': []
+            }
+        }},
+        DEFAULT_VERSION)
+
+    assert service_dict[self.config_name] == {
+        'default': {
+            'aliases': []
+        }
+    }
def test_all_properties(self):
service_dict = config.merge_service_dicts(
{self.config_name: {