Compare commits

...

71 Commits

Author SHA1 Message Date
Nicolas De Loof
9ad10575d1 Prepare drop of python 2.x support
see https://github.com/docker/compose/issues/6890

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-11-20 16:00:53 +01:00
Ulysses Souza
2887d82d16 Merge pull request #6982 from smamessier/fix_non_ascii_error
Fixed non-ascii error when using COMPOSE_DOCKER_CLI_BUILD=1 for Buildkit
2019-11-18 16:45:04 +01:00
Ulysses Souza
2919bebea4 Fix non ascii chars error. Python2 only
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-11-18 15:43:50 +01:00
Djordje Lukic
5478c966f1 Merge pull request #7008 from zelahi/fix-readme-link
Fixed broken README link for common use cases
2019-11-07 09:59:49 +01:00
Zuhayr Elahi
e546533cfe Fixed broken README link for common use cases
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-11-06 17:10:48 -08:00
Jean-Christophe Sirot
abef11b2a6 Merge pull request #6996 from ajlai/fix-color-order-and-remove-red
Make container service color deterministic, remove red from chosen colors
2019-11-06 16:11:57 +01:00
Anthony Lai
802fa20228 Make container service color deterministic, remove red from chosen colors
Signed-off-by: Anthony Lai <anthonyjlai@gmail.com>
2019-11-03 23:44:31 +00:00
Djordje Lukic
fa34ee7362 Merge pull request #6973 from glours/set_no_color_if_clicolor_defined_to_0
Set no-colors to true if CLICOLOR env variable is set to 0
2019-10-31 16:45:10 +01:00
Sebastien Mamessier
a3a23bf949 Fixed error when using startswith on non-ascii string
Signed-off-by: Sebastien Mamessier <smamessier@uber.com>
2019-10-30 13:57:08 +01:00
Jean-Christophe Sirot
cfc48f2c13 Merge pull request #6986 from rumpl/fix-unit-test-close-fd
Cleanup all open files
2019-10-28 16:07:18 +01:00
Djordje Lukic
f8142a899c Cleanup all open files
If the fd is not closed the cleanup will fail on windows.

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2019-10-28 15:36:05 +01:00
Guillaume Lours
2e7493a889 Set no-colors to true if CLICOLOR env variable is set to 0
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-21 11:37:46 +02:00
Jean-Christophe Sirot
4be2fa010a Merge pull request #6972 from glours/align_image_size_display_to_docker_cli
Format image size as decimal to align with the Docker CLI
2019-10-18 15:26:15 +02:00
Guillaume Lours
386bdda246 Format image size as decimal to align with the Docker CLI
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-18 12:50:38 +02:00
okor
17bbbba7d6 update docker-py
Signed-off-by: Jason Ormand <jason.ormand1@gmail.com>
2019-10-18 09:37:24 +02:00
Nicolas De Loof
1ca10f90fb Fix acceptance tests
tty is now (correctly) reported to have 80 columns, which splits the service
ID across two lines

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-16 14:31:27 +02:00
Nicolas De Loof
452880af7c Use python Posix support to get tty size
stty is not portable outside *nix
Note: shutil.get_terminal_size require python 3.3

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-16 14:31:27 +02:00
Guillaume LOURS
944660048d Merge pull request #6964 from guillaumerose/addmorelabels
Add working dir, config files and env file in service labels
2019-10-15 10:06:42 +02:00
Guillaume Rose
dbe4d7323e Add working dir, config files and env file in service labels
Signed-off-by: Guillaume Rose <guillaume.rose@docker.com>
2019-10-15 09:18:09 +02:00
Guillaume Rose
1678a4fbe4 Run CI on amd64
Signed-off-by: Guillaume Rose <guillaume.rose@docker.com>
2019-10-14 22:01:04 +02:00
Guillaume LOURS
4e83bafec6 Merge pull request #6955 from ndeloof/paramiko
Bump paramiko to 2.6.0
2019-10-10 10:59:44 +02:00
Nicolas De Loof
8973a940e6 Bump paramiko to 2.6.0
close #6953

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2019-10-10 08:55:15 +02:00
Zuhayr Elahi
8835056ce4 UPDATED log message
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-10-10 07:08:42 +02:00
Zuhayr Elahi
3135a0a839 Added log message to check compose file
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-10-10 07:08:42 +02:00
Guillaume Lours
cdae06a89c exclude issues flagged with kind/feature from stale process
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 21:51:34 +02:00
Guillaume Lours
79bf9ed652 correct invalid yaml indentation
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 21:10:18 +02:00
Nicolas De loof
29af1a84ca Merge pull request #6952 from glours/stale_configuration
Add config file for @probot/stale
2019-10-09 16:41:58 +02:00
Guillaume Lours
9375c15bad Add config file for @probot/stale
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-09 16:16:57 +02:00
Chris Crone
8ebb1a6f19 Merge pull request #6949 from jcsirot/fix-pushbin-script-verbosity
Remove set -x to make this script less verbose
2019-10-09 12:04:24 +02:00
Jean-Christophe Sirot
37be2ad9cd Remove set -x to make this script less verbose
Signed-off-by: Jean-Christophe Sirot <jean-christophe.sirot@docker.com>
2019-10-09 10:51:17 +02:00
Nicolas De loof
6fe35498a5 Add dependencies for ARM build (#6908)
Add dependencies for ARM build
2019-10-09 09:38:58 +02:00
Stefan Scherer
ce52f597a0 Enhance build script for different CPU architectures
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
2019-10-09 09:11:29 +02:00
Stefan Scherer
79f29dda23 Add dependencies for ARM build
Signed-off-by: Stefan Scherer <scherer_stefan@icloud.com>
2019-10-09 09:11:29 +02:00
Nicolas De loof
7172849913 Fix "extends" same file optimization (#6425)
Fix "extends" same file optimization
2019-10-09 08:50:54 +02:00
Aleksandr Mezin
c24b7b6464 Fix same file 'extends' optimization
Signed-off-by: Aleksandr Mezin <mezin.alexander@gmail.com>
2019-10-09 11:36:17 +06:00
Aleksandr Mezin
74f892de95 Add test to verify same file 'extends' optimization
Signed-off-by: Aleksandr Mezin <mezin.alexander@gmail.com>
2019-10-09 11:36:17 +06:00
Nicolas De loof
09acc5febf [TAR-995] ADDED a stage for executing License Scans (#6875)
[TAR-995] ADDED a stage for executing License Scans
2019-10-08 16:25:28 +02:00
Nicolas De loof
1f16a7929d Merge pull request #6864 from samueljsb/formatter_class
Change Formatter.table method to staticmethod
2019-10-08 16:24:40 +02:00
Nicolas De loof
f9113202e8 Add automatic labeling of bug, feature & question issues (#6944)
Add automatic labeling of bug, feature & question issues
2019-10-08 16:23:15 +02:00
Nicolas De loof
5f2161cad9 Merge pull request #6912 from cranzy/fixing_broken_link
Fixing features broken link
2019-10-08 16:19:31 +02:00
Guillaume LOURS
70f8e38b1d Add automatic labeling of bug, feature & question issues
Signed-off-by: Guillaume Lours <guillaume.lours@docker.com>
2019-10-08 11:07:04 +02:00
Ulysses Souza
186aa6e5c3 Merge pull request #6914 from lukas9393/6913-progress-arg
Fix --progress arg when run docker-compose build
2019-10-07 12:28:49 +02:00
Guillaume LOURS
bc57a1bd54 Merge pull request #6925 from ulyssessouza/fix-secrets-warning-message
Fix secret missing warning
2019-09-27 10:51:37 +02:00
ulyssessouza
eca358e2f0 Fix secret missing warning
Signed-off-by: ulyssessouza <ulyssessouza@gmail.com>
2019-09-27 09:10:49 +02:00
Lukas Hettwer
32ac6edb86 Fix --progress arg when run docker-compose build
--progress is no longer processed as flag but as argument with value.

Signed-off-by: Lukas Hettwer <lukas.hettwer@aboutyou.de>

Resolve: [#6913]
2019-09-24 16:02:12 +02:00
Dimitar Dimitrov
475f8199f7 Fixing features broken link
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@docker.com>
2019-09-24 13:31:30 +03:00
Zuhayr Elahi
98d7cc8d0c ADDED a stage for executing License Scans
Signed-off-by: Zuhayr Elahi <elahi.zuhayr@gmail.com>
2019-09-13 14:25:06 -07:00
Ulysses Souza
d7c7e21921 Merge pull request #6131 from sagarafr/fix-5920-missing-secret-message
Add a warning message to secret file
2019-09-09 17:45:08 +02:00
Ulysses Souza
70ead597d2 Add tests to 'get_secret' warnings
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-09-09 10:04:05 +02:00
Marian Gappa
b9092cacdb Fix missing secret error message
Add a warning message when the secret file doesn't exist

Fixes #5920

Signed-off-by: Marian Gappa <marian.gappa@gmail.com>
2019-09-09 10:04:05 +02:00
Silvin Lubecki
1566930a70 Merge pull request #6862 from deathtracktor/master
Fix KeyError when remote network labels are None.
2019-09-06 11:13:48 +02:00
Danil Kister
a5fbf91b72 Prevent KeyError when remote network labels are None.
Signed-off-by: Danil Kister <danil.kister@gmail.com>
2019-09-05 21:36:10 +02:00
Ulysses Souza
ecf03fe280 Merge pull request #6882 from ulyssessouza/fix_attach_restarting_container
Fix race condition on watch_events
2019-09-05 16:46:14 +02:00
Ulysses Souza
47d170b06a Fix race condition on watch_events
Avoid attaching to restarting containers and ignore
race conditions when trying to attach to already
dead containers

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-09-04 17:55:05 +02:00
Chris Crone
9973f051ba Merge pull request #6878 from ulyssessouza/bump-debian
Bump runtime debian
2019-08-30 16:42:56 +02:00
Ulysses Souza
2199278b44 Merge pull request #6865 from ulyssessouza/support-cli-build
Add support to CLI build
2019-08-30 13:46:21 +02:00
Ulysses Souza
5add9192ac Rename envvar switch to COMPOSE_DOCKER_CLI_BUILD
From `COMPOSE_NATIVE_BUILDER` to `COMPOSE_DOCKER_CLI_BUILD`

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-30 12:11:09 +02:00
Ulysses Souza
0c6fce271e Bump runtime debian
From `stretch-20190708-slim` to `stretch-20190812-slim`

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 17:45:21 +02:00
Ulysses Souza
9d7ad3bac1 Add comment on native build and fix typo
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 16:30:50 +02:00
Nao YONASHIRO
719a1b0581 fix: use subprocess32 for python2
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-29 14:21:19 +02:00
Ulysses Souza
bbdb3cab88 Add integration tests to native builder
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-29 09:31:16 +02:00
Ulysses Souza
ee8ca5d6f8 Rephrase warnings when building with the cli
Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-28 17:24:15 +02:00
Nao YONASHIRO
15e8edca3c feat: add a warning if someone uses the --compress or --parallel flag
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:15 +02:00
Nao YONASHIRO
81e223d499 feat: add --progress flag
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:14 +02:00
Nao YONASHIRO
862a13b8f3 fix: add build flags
Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>
2019-08-28 17:24:14 +02:00
Nao YONASHIRO
cacbcccc0c Add support to CLI build
This can be enabled by setting the env var
`COMPOSE_NATIVE_BUILDER=1`.

Signed-off-by: Nao YONASHIRO <yonashiro@r.recruit.co.jp>

Signed-off-by: Ulysses Souza <ulysses.souza@docker.com>
2019-08-28 17:24:14 +02:00
Samuel Searles-Bryant
672ced8742 Change Formatter.table method to staticmethod
Make this a staticmethod so it's easier to use without needing to init a
Formatter object first.

Signed-off-by: Samuel Searles-Bryant <samuel.searles-bryant@unipart.io>
2019-08-22 14:25:15 +01:00
Djordje Lukic
4cfa622de8 Merge pull request #6631 from chibby0ne/update_jsonschema_dependency
requirements: update jsonschema dependency
2019-08-22 12:54:48 +02:00
Ulysses Souza
525bc9ef7a Merge pull request #6856 from aiordache/bump-alpine
update alpine version to 3.10.1
2019-08-21 15:49:14 +02:00
aiordache
60dcf87cc0 update alpine version to 3.10.1
Signed-off-by: aiordache <anca.iordache@docker.com>
2019-08-20 12:10:26 +02:00
Antonio Gutierrez
66856e884c requirements: update jsonschema dependency
Fixes: https://github.com/docker/compose/issues/6347

Signed-off-by: Antonio Gutierrez <chibby0ne@gmail.com>
2019-07-27 21:43:40 +02:00
38 changed files with 606 additions and 117 deletions


@@ -1,6 +1,9 @@
---
name: Bug report
about: Report a bug encountered while using docker-compose
title: ''
labels: kind/bug
assignees: ''
---


@@ -1,6 +1,9 @@
---
name: Feature request
about: Suggest an idea to improve Compose
title: ''
labels: kind/feature
assignees: ''
---


@@ -1,6 +1,9 @@
---
name: Question about using Compose
about: This is not the appropriate channel
title: ''
labels: kind/question
assignees: ''
---

.github/stale.yml (new file, 59 lines)

@@ -0,0 +1,59 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 180
# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
daysUntilClose: 7
# Only issues or pull requests with all of these labels are check if stale. Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
exemptLabels:
- kind/feature
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: false
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: false
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: true
# Label to use when marking as stale
staleLabel: stale
# Comment to post when marking as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
# Comment to post when removing the stale label.
unmarkComment: >
This issue has been automatically marked as not stale anymore due to the recent activity.
# Comment to post when closing a stale Issue or Pull Request.
closeComment: >
This issue has been automatically closed because it had not recent activity during the stale period.
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30
# Limit to only `issues` or `pulls`
only: issues
# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
# pulls:
# daysUntilStale: 30
# markComment: >
# This pull request has been automatically marked as stale because it has not had
# recent activity. It will be closed if no further activity occurs. Thank you
# for your contributions.
# issues:
# exemptLabels:
# - confirmed


@@ -2,8 +2,8 @@ ARG DOCKER_VERSION=18.09.7
 ARG PYTHON_VERSION=3.7.4
 ARG BUILD_ALPINE_VERSION=3.10
 ARG BUILD_DEBIAN_VERSION=slim-stretch
-ARG RUNTIME_ALPINE_VERSION=3.10.0
-ARG RUNTIME_DEBIAN_VERSION=stretch-20190708-slim
+ARG RUNTIME_ALPINE_VERSION=3.10.1
+ARG RUNTIME_DEBIAN_VERSION=stretch-20190812-slim
 ARG BUILD_PLATFORM=alpine
@@ -30,15 +30,18 @@ RUN apk add --no-cache \
 ENV BUILD_BOOTLOADER=1
 FROM python:${PYTHON_VERSION}-${BUILD_DEBIAN_VERSION} AS build-debian
-RUN apt-get update && apt-get install -y \
+RUN apt-get update && apt-get install --no-install-recommends -y \
     curl \
     gcc \
     git \
     libc-dev \
     libffi-dev \
     libgcc-6-dev \
     libssl-dev \
     make \
     openssl \
-    python2.7-dev
+    python2.7-dev \
+    zlib1g-dev
 FROM build-${BUILD_PLATFORM} AS build
 COPY docker-compose-entrypoint.sh /usr/local/bin/


@@ -1,4 +1,4 @@
-FROM s390x/alpine:3.6
+FROM s390x/alpine:3.10.1
 ARG COMPOSE_VERSION=1.16.1

Jenkinsfile (15 changed lines)

@@ -2,7 +2,7 @@
 def buildImage = { String baseImage ->
     def image
-    wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
+    wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
         stage("build image for \"${baseImage}\"") {
             checkout(scm)
             def imageName = "dockerbuildbot/compose:${baseImage}-${gitCommit()}"
@@ -29,9 +29,9 @@ def buildImage = { String baseImage ->
 def get_versions = { String imageId, int number ->
     def docker_versions
-    wrappedNode(label: "ubuntu && !zfs") {
+    wrappedNode(label: "ubuntu && amd64 && !zfs") {
         def result = sh(script: """docker run --rm \\
-            --entrypoint=/code/.tox/py27/bin/python \\
+            --entrypoint=/code/.tox/py37/bin/python \\
             ${imageId} \\
             /code/script/test/versions.py -n ${number} docker/docker-ce recent
             """, returnStdout: true
@@ -48,14 +48,14 @@ def runTests = { Map settings ->
     def imageName = settings.get("image", null)
     if (!pythonVersions) {
-        throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py27,py37')`")
+        throw new Exception("Need Python versions to test. e.g.: `runTests(pythonVersions: 'py37')`")
     }
     if (!dockerVersions) {
         throw new Exception("Need Docker versions to test. e.g.: `runTests(dockerVersions: 'all')`")
     }
     { ->
-        wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
+        wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
             stage("test python=${pythonVersions} / docker=${dockerVersions} / baseImage=${baseImage}") {
                 checkout(scm)
                 def storageDriver = sh(script: 'docker info | awk -F \': \' \'$1 == "Storage Driver" { print $2; exit }\'', returnStdout: true).trim()
@@ -82,13 +82,10 @@ def runTests = { Map settings ->
 def testMatrix = [failFast: true]
 def baseImages = ['alpine', 'debian']
-def pythonVersions = ['py27', 'py37']
 baseImages.each { baseImage ->
     def imageName = buildImage(baseImage)
     get_versions(imageName, 2).each { dockerVersion ->
-        pythonVersions.each { pyVersion ->
-            testMatrix["${baseImage}_${dockerVersion}_${pyVersion}"] = runTests([baseImage: baseImage, image: imageName, dockerVersions: dockerVersion, pythonVersions: pyVersion])
-        }
+        testMatrix["${baseImage}_${dockerVersion}"] = runTests([baseImage: baseImage, image: imageName, dockerVersions: dockerVersion, pythonVersions: 'py37'])
     }
 }


@@ -6,11 +6,11 @@ Compose is a tool for defining and running multi-container Docker applications.
 With Compose, you use a Compose file to configure your application's services.
 Then, using a single command, you create and start all the services
 from your configuration. To learn more about all the features of Compose
-see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
+see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
 Compose is great for development, testing, and staging environments, as well as
 CI workflows. You can learn more about each case in
-[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases).
+[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/index.md#common-use-cases).
 Using Compose is basically a three-step process.


@@ -41,9 +41,9 @@ for (name, code) in get_pairs():
 def rainbow():
-    cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue',
+    cs = ['cyan', 'yellow', 'green', 'magenta', 'blue',
           'intense_cyan', 'intense_yellow', 'intense_green',
-          'intense_magenta', 'intense_red', 'intense_blue']
+          'intense_magenta', 'intense_blue']
     for c in cs:
         yield globals()[c]


@@ -13,6 +13,9 @@ from .. import config
 from .. import parallel
 from ..config.environment import Environment
 from ..const import API_VERSIONS
+from ..const import LABEL_CONFIG_FILES
+from ..const import LABEL_ENVIRONMENT_FILE
+from ..const import LABEL_WORKING_DIR
 from ..project import Project
 from .docker_client import docker_client
 from .docker_client import get_tls_version
@@ -57,7 +60,8 @@ def project_from_options(project_dir, options, additional_options={}):
         environment=environment,
         override_dir=override_dir,
         compatibility=options.get('--compatibility'),
-        interpolate=(not additional_options.get('--no-interpolate'))
+        interpolate=(not additional_options.get('--no-interpolate')),
+        environment_file=environment_file
     )
@@ -125,7 +129,7 @@ def get_client(environment, verbose=False, version=None, tls_config=None, host=N
 def get_project(project_dir, config_path=None, project_name=None, verbose=False,
                 host=None, tls_config=None, environment=None, override_dir=None,
-                compatibility=False, interpolate=True):
+                compatibility=False, interpolate=True, environment_file=None):
     if not environment:
         environment = Environment.from_env_file(project_dir)
     config_details = config.find(project_dir, config_path, environment, override_dir)
@@ -145,10 +149,30 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
     with errors.handle_connection_errors(client):
         return Project.from_config(
-            project_name, config_data, client, environment.get('DOCKER_DEFAULT_PLATFORM')
+            project_name,
+            config_data,
+            client,
+            environment.get('DOCKER_DEFAULT_PLATFORM'),
+            execution_context_labels(config_details, environment_file),
         )
+
+def execution_context_labels(config_details, environment_file):
+    extra_labels = [
+        '{0}={1}'.format(LABEL_WORKING_DIR, os.path.abspath(config_details.working_dir)),
+        '{0}={1}'.format(LABEL_CONFIG_FILES, config_files_label(config_details)),
+    ]
+    if environment_file is not None:
+        extra_labels.append('{0}={1}'.format(LABEL_ENVIRONMENT_FILE,
+                                             os.path.normpath(environment_file)))
+    return extra_labels
+
+def config_files_label(config_details):
+    return ",".join(
+        map(str, (os.path.normpath(c.filename) for c in config_details.config_files)))
+
 def get_project_name(working_dir, project_name=None, environment=None):
     def normalize_name(name):
         return re.sub(r'[^-_a-z0-9]', '', name.lower())
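Taken together, the new labelling helpers can be exercised standalone. This is a condensed sketch, not the exact code: plain `working_dir`/`config_files` arguments stand in for the `ConfigDetails` object the real helper receives, while the label names mirror the ones added to `compose/const.py`:

```python
import os

# Label names introduced by this change (mirrors compose.const).
LABEL_WORKING_DIR = 'com.docker.compose.project.working_dir'
LABEL_CONFIG_FILES = 'com.docker.compose.project.config_files'
LABEL_ENVIRONMENT_FILE = 'com.docker.compose.project.environment_file'


def execution_context_labels(working_dir, config_files, environment_file=None):
    # Build "key=value" labels describing where the project was started from.
    extra_labels = [
        '{0}={1}'.format(LABEL_WORKING_DIR, os.path.abspath(working_dir)),
        '{0}={1}'.format(LABEL_CONFIG_FILES,
                         ','.join(os.path.normpath(str(f)) for f in config_files)),
    ]
    # The environment-file label is only attached when one was actually used.
    if environment_file is not None:
        extra_labels.append('{0}={1}'.format(
            LABEL_ENVIRONMENT_FILE, os.path.normpath(environment_file)))
    return extra_labels
```

These labels then flow through `Project.from_config` into every service container, so `docker inspect` can reveal which compose files created it.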


@@ -2,25 +2,32 @@ from __future__ import absolute_import
 from __future__ import unicode_literals
 import logging
 import os
+import shutil
 import six
 import texttable
 from compose.cli import colors
+
+if hasattr(shutil, "get_terminal_size"):
+    from shutil import get_terminal_size
+else:
+    from backports.shutil_get_terminal_size import get_terminal_size
+
 def get_tty_width():
-    tty_size = os.popen('stty size 2> /dev/null', 'r').read().split()
-    if len(tty_size) != 2:
+    try:
+        width, _ = get_terminal_size()
+        return int(width)
+    except OSError:
         return 0
-    _, width = tty_size
-    return int(width)
-class Formatter(object):
+class Formatter:
     """Format tabular data for printing."""
-    def table(self, headers, rows):
+    @staticmethod
+    def table(headers, rows):
         table = texttable.Texttable(max_width=get_tty_width())
         table.set_cols_dtype(['t' for h in headers])
         table.add_rows([headers] + rows)

@@ -134,7 +134,10 @@ def build_thread(container, presenter, queue, log_args):
 def build_thread_map(initial_containers, presenters, thread_args):
     return {
         container.id: build_thread(container, next(presenters), *thread_args)
-        for container in initial_containers
+        # Container order is unspecified, so they are sorted by name in order to make
+        # container:presenter (log color) assignment deterministic when given a list of containers
+        # with the same names.
+        for container in sorted(initial_containers, key=lambda c: c.name)
     }
@@ -230,7 +233,13 @@ def watch_events(thread_map, event_stream, presenters, thread_args):
             # Container crashed so we should reattach to it
             if event['id'] in crashed_containers:
-                event['container'].attach_log_stream()
+                container = event['container']
+                if not container.is_restarting:
+                    try:
+                        container.attach_log_stream()
+                    except APIError:
+                        # Just ignore errors when reattaching to already crashed containers
+                        pass
                 crashed_containers.remove(event['id'])
             thread_map[event['id']] = build_thread(
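The deterministic-color idea is simple enough to demonstrate in isolation. A minimal sketch (function names are illustrative, not compose's; plain strings stand in for container objects, and the palette mirrors the trimmed list from `colors.py` above):

```python
import itertools


def rainbow():
    # Trimmed palette: red was removed so it stays visually reserved
    # for error output.
    return itertools.cycle(['cyan', 'yellow', 'green', 'magenta', 'blue'])


def assign_presenters(container_names):
    presenters = rainbow()
    # Sorting first makes the name -> color mapping stable no matter
    # what order the Docker API happened to return the containers in.
    return {name: next(presenters) for name in sorted(container_names)}
```

Two runs over the same set of names always produce the same mapping, which is the property the commit is after.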


@@ -6,6 +6,7 @@ import contextlib
 import functools
 import json
 import logging
+import os
 import pipes
 import re
 import subprocess
@@ -102,9 +103,9 @@ def dispatch():
     options, handler, command_options = dispatcher.parse(sys.argv[1:])
     setup_console_handler(console_handler,
                           options.get('--verbose'),
-                          options.get('--no-ansi'),
+                          set_no_color_if_clicolor(options.get('--no-ansi')),
                           options.get("--log-level"))
-    setup_parallel_logger(options.get('--no-ansi'))
+    setup_parallel_logger(set_no_color_if_clicolor(options.get('--no-ansi')))
     if options.get('--no-ansi'):
         command_options['--no-color'] = True
     return functools.partial(perform_command, options, handler, command_options)
@@ -263,14 +264,17 @@ class TopLevelCommand(object):
         Usage: build [options] [--build-arg key=val...] [SERVICE...]
         Options:
+            --build-arg key=val     Set build-time variables for services.
             --compress              Compress the build context using gzip.
             --force-rm              Always remove intermediate containers.
+            -m, --memory MEM        Set memory limit for the build container.
             --no-cache              Do not use cache when building the image.
             --no-rm                 Do not remove intermediate containers after a successful build.
-            --pull                  Always attempt to pull a newer version of the image.
-            -m, --memory MEM        Sets memory limit for the build container.
-            --build-arg key=val     Set build-time variables for services.
             --parallel              Build images in parallel.
+            --progress string       Set type of progress output (auto, plain, tty).
+                                    EXPERIMENTAL flag for native builder.
+                                    To enable, run with COMPOSE_DOCKER_CLI_BUILD=1)
+            --pull                  Always attempt to pull a newer version of the image.
             -q, --quiet             Don't print anything to STDOUT
         """
         service_names = options['SERVICE']
@@ -283,6 +287,8 @@ class TopLevelCommand(object):
         )
         build_args = resolve_build_args(build_args, self.toplevel_environment)
+
+        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
+
         self.project.build(
             service_names=options['SERVICE'],
             no_cache=bool(options.get('--no-cache', False)),
@@ -293,7 +299,9 @@ class TopLevelCommand(object):
             build_args=build_args,
             gzip=options.get('--compress', False),
             parallel_build=options.get('--parallel', False),
-            silent=options.get('--quiet', False)
+            silent=options.get('--quiet', False),
+            cli=native_builder,
+            progress=options.get('--progress'),
         )
 
     def bundle(self, options):
@@ -613,7 +621,7 @@ class TopLevelCommand(object):
                 image_id,
                 size
             ])
-        print(Formatter().table(headers, rows))
+        print(Formatter.table(headers, rows))
 
     def kill(self, options):
         """
@@ -659,7 +667,7 @@ class TopLevelCommand(object):
         log_printer_from_project(
             self.project,
             containers,
-            options['--no-color'],
+            set_no_color_if_clicolor(options['--no-color']),
             log_args,
             event_stream=self.project.events(service_names=options['SERVICE'])).run()
@@ -747,7 +755,7 @@ class TopLevelCommand(object):
                 container.human_readable_state,
                 container.human_readable_ports,
             ])
-        print(Formatter().table(headers, rows))
+        print(Formatter.table(headers, rows))
 
     def pull(self, options):
         """
@@ -987,7 +995,7 @@ class TopLevelCommand(object):
             rows.append(process)
             print(container.name)
-            print(Formatter().table(headers, rows))
+            print(Formatter.table(headers, rows))
 
     def unpause(self, options):
         """
@@ -1071,6 +1079,8 @@ class TopLevelCommand(object):
         for excluded in [x for x in opts if options.get(x) and no_start]:
             raise UserError('--no-start and {} cannot be combined.'.format(excluded))
+
+        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
+
         with up_shutdown_context(self.project, service_names, timeout, detached):
             warn_for_swarm_mode(self.project.client)
@@ -1090,6 +1100,7 @@ class TopLevelCommand(object):
                 reset_container_image=rebuild,
                 renew_anonymous_volumes=options.get('--renew-anon-volumes'),
                 silent=options.get('--quiet-pull'),
+                cli=native_builder,
             )
 
             try:
@@ -1114,7 +1125,7 @@ class TopLevelCommand(object):
             log_printer = log_printer_from_project(
                 self.project,
                 attached_containers,
-                options['--no-color'],
+                set_no_color_if_clicolor(options['--no-color']),
                 {'follow': True},
                 cascade_stop,
                 event_stream=self.project.events(service_names=service_names))
@@ -1592,3 +1603,7 @@ def warn_for_swarm_mode(client):
             "To deploy your application across the swarm, "
             "use `docker stack deploy`.\n"
         )
+
+
+def set_no_color_if_clicolor(no_color_flag):
+    return no_color_flag or os.environ.get('CLICOLOR') == "0"
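Extracted so it can be run standalone, the new helper is just this (`CLICOLOR=0` is an informal cross-tool convention for disabling colored output, not a Docker-specific variable):

```python
import os


def set_no_color_if_clicolor(no_color_flag):
    # CLICOLOR=0 force-disables colored output; any other value (or an
    # unset variable) leaves the user's --no-ansi / --no-color choice
    # untouched.
    return no_color_flag or os.environ.get('CLICOLOR') == "0"
```

Note the environment variable can only *add* the no-color behavior; it never re-enables color when the flag was passed explicitly.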


@@ -133,12 +133,12 @@ def generate_user_agent():
 def human_readable_file_size(size):
     suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ]
-    order = int(math.log(size, 2) / 10) if size else 0
+    order = int(math.log(size, 1000)) if size else 0
     if order >= len(suffixes):
         order = len(suffixes) - 1
     return '{0:.4g} {1}'.format(
-        size / float(1 << (order * 10)),
+        size / pow(10, order * 3),
         suffixes[order]
     )
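The switch is from binary units (1 KiB = 1024 B) to decimal SI units (1 kB = 1000 B), matching how `docker images` reports sizes. Restated as a self-contained, runnable function:

```python
import math


def human_readable_file_size(size):
    # Decimal (SI) units after this change: order = floor(log1000(size))
    # picks the suffix, and the value is divided by 10^(3*order).
    suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB']
    order = int(math.log(size, 1000)) if size else 0
    if order >= len(suffixes):
        order = len(suffixes) - 1
    return '{0:.4g} {1}'.format(size / pow(10, order * 3), suffixes[order])
```

So a 123 456 789-byte image now shows as "123.5 MB" rather than the "117.7 MB" the old base-1024 math produced.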


@@ -615,7 +615,7 @@ class ServiceExtendsResolver(object):
         config_path = self.get_extended_config_path(extends)
         service_name = extends['service']
 
-        if config_path == self.config_file.filename:
+        if config_path == os.path.abspath(self.config_file.filename):
             try:
                 service_config = self.config_file.get_service(service_name)
             except KeyError:
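The bug being fixed is a path-normalization mismatch: `get_extended_config_path` returns an absolute path, while `self.config_file.filename` may be relative, so the equality check silently failed and the same-file optimization was skipped. A minimal sketch of the corrected comparison (the helper name is mine, not compose's):

```python
import os


def is_same_config_file(extended_config_path, current_filename):
    # extended_config_path is already absolute; the current file's name
    # must be made absolute too, otherwise a relative filename can never
    # compare equal to it.
    return extended_config_path == os.path.abspath(current_filename)
```

With the old bare `==`, the first assertion below would fail whenever the compose file was given as a relative path.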


@@ -11,6 +11,9 @@ IS_WINDOWS_PLATFORM = (sys.platform == "win32")
 LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
 LABEL_ONE_OFF = 'com.docker.compose.oneoff'
 LABEL_PROJECT = 'com.docker.compose.project'
+LABEL_WORKING_DIR = 'com.docker.compose.project.working_dir'
+LABEL_CONFIG_FILES = 'com.docker.compose.project.config_files'
+LABEL_ENVIRONMENT_FILE = 'com.docker.compose.project.environment_file'
 LABEL_SERVICE = 'com.docker.compose.service'
 LABEL_NETWORK = 'com.docker.compose.network'
 LABEL_VERSION = 'com.docker.compose.version'


@@ -226,7 +226,7 @@ def check_remote_network_config(remote, local):
         raise NetworkConfigChangedError(local.true_name, 'enable_ipv6')
 
     local_labels = local.labels or {}
-    remote_labels = remote.get('Labels', {})
+    remote_labels = remote.get('Labels') or {}
     for k in set.union(set(remote_labels.keys()), set(local_labels.keys())):
         if k.startswith('com.docker.'):  # We are only interested in user-specified labels
             continue
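The subtlety here is that `docker network inspect` can report `"Labels": null`, so the key is *present* but maps to `None`; `dict.get('Labels', {})` only covers a *missing* key, hence the `or {}` fallback. A small illustration (the wrapper function is hypothetical, written for this example):

```python
def user_labels(remote_inspect, local_labels=None):
    # .get('Labels') returns None when the key maps to null, so the
    # `or {}` is what prevents iterating over None (the old KeyError /
    # TypeError path).
    remote_labels = remote_inspect.get('Labels') or {}
    local = local_labels or {}
    return {k for k in set(remote_labels) | set(local)
            if not k.startswith('com.docker.')}
```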


@@ -6,6 +6,7 @@ import logging
 import operator
 import re
 from functools import reduce
+from os import path
 
 import enum
 import six
@@ -82,7 +83,7 @@ class Project(object):
         return labels
 
     @classmethod
-    def from_config(cls, name, config_data, client, default_platform=None):
+    def from_config(cls, name, config_data, client, default_platform=None, extra_labels=[]):
         """
         Construct a Project from a config.Config object.
         """
@@ -135,6 +136,7 @@ class Project(object):
                     pid_mode=pid_mode,
                     platform=service_dict.pop('platform', None),
                     default_platform=default_platform,
+                    extra_labels=extra_labels,
                     **service_dict)
             )
@@ -355,7 +357,8 @@ class Project(object):
         return containers
 
     def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, memory=None,
-              build_args=None, gzip=False, parallel_build=False, rm=True, silent=False):
+              build_args=None, gzip=False, parallel_build=False, rm=True, silent=False, cli=False,
+              progress=None):
 
         services = []
         for service in self.get_services(service_names):
@@ -364,8 +367,17 @@ class Project(object):
             elif not silent:
                 log.info('%s uses an image, skipping' % service.name)
 
+        if cli:
+            log.warning("Native build is an experimental feature and could change at any time")
+            if parallel_build:
+                log.warning("Flag '--parallel' is ignored when building with "
+                            "COMPOSE_DOCKER_CLI_BUILD=1")
+            if gzip:
+                log.warning("Flag '--compress' is ignored when building with "
+                            "COMPOSE_DOCKER_CLI_BUILD=1")
+
         def build_service(service):
-            service.build(no_cache, pull, force_rm, memory, build_args, gzip, rm, silent)
+            service.build(no_cache, pull, force_rm, memory, build_args, gzip, rm, silent, cli, progress)
 
         if parallel_build:
             _, errors = parallel.parallel_execute(
                 services,
@@ -509,8 +521,12 @@ class Project(object):
             reset_container_image=False,
             renew_anonymous_volumes=False,
             silent=False,
+            cli=False,
     ):
 
+        if cli:
+            log.warning("Native build is an experimental feature and could change at any time")
+
         self.initialize()
         if not ignore_orphans:
             self.find_orphan_containers(remove_orphans)
@@ -523,7 +539,7 @@ class Project(object):
                 include_deps=start_deps)
 
         for svc in services:
-            svc.ensure_image_exists(do_build=do_build, silent=silent)
+            svc.ensure_image_exists(do_build=do_build, silent=silent, cli=cli)
 
         plans = self._get_convergence_plans(
             services, strategy, always_recreate_deps=always_recreate_deps)
@@ -793,7 +809,15 @@ def get_secrets(service, service_secrets, secret_defs):
                 )
             )
 
-        secrets.append({'secret': secret, 'file': secret_def.get('file')})
+        secret_file = secret_def.get('file')
+        if not path.isfile(str(secret_file)):
+            log.warning(
+                "Service \"{service}\" uses an undefined secret file \"{secret_file}\", "
+                "the following file should be created \"{secret_file}\"".format(
+                    service=service, secret_file=secret_file
+                )
+            )
+        secrets.append({'secret': secret, 'file': secret_file})
 
     return secrets
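The new check is easy to exercise on its own. This sketch extracts the warning into a standalone function (a name of my choosing, for illustration); note `str(secret_file)` guards against a `None` value, so the check degrades to a warning rather than a `TypeError`:

```python
import logging
from os import path

log = logging.getLogger(__name__)


def warn_if_secret_file_missing(service, secret_file):
    # A missing file-based secret now produces an up-front warning
    # telling the user which file to create, instead of a confusing
    # failure later when the container starts.
    if not path.isfile(str(secret_file)):
        log.warning(
            'Service "{service}" uses an undefined secret file "{secret_file}", '
            'the following file should be created "{secret_file}"'.format(
                service=service, secret_file=secret_file))
        return False
    return True
```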


@@ -2,10 +2,12 @@ from __future__ import absolute_import
 from __future__ import unicode_literals
 
 import itertools
+import json
 import logging
 import os
 import re
 import sys
+import tempfile
 from collections import namedtuple
 from collections import OrderedDict
 from operator import attrgetter
@@ -59,8 +61,12 @@ from .utils import parse_seconds_float
 from .utils import truncate_id
 from .utils import unique_everseen
 
-log = logging.getLogger(__name__)
+if six.PY2:
+    import subprocess32 as subprocess
+else:
+    import subprocess
 
+log = logging.getLogger(__name__)
 
 HOST_CONFIG_KEYS = [
     'cap_add',
@@ -130,7 +136,6 @@ class NoSuchImageError(Exception):
ServiceName = namedtuple('ServiceName', 'project service number')
ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
@@ -166,20 +171,21 @@ class BuildAction(enum.Enum):
class Service(object):
def __init__(
self,
name,
client=None,
project='default',
use_networking=False,
links=None,
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
scale=1,
pid_mode=None,
default_platform=None,
**options
self,
name,
client=None,
project='default',
use_networking=False,
links=None,
volumes_from=None,
network_mode=None,
networks=None,
secrets=None,
scale=1,
pid_mode=None,
default_platform=None,
extra_labels=[],
**options
):
self.name = name
self.client = client
@@ -194,6 +200,7 @@ class Service(object):
self.scale_num = scale
self.default_platform = default_platform
self.options = options
self.extra_labels = extra_labels
def __repr__(self):
return '<Service: {}>'.format(self.name)
@@ -208,7 +215,7 @@ class Service(object):
for container in self.client.containers(
all=stopped,
filters=filters)])
)
)
if result:
return result
@@ -338,9 +345,9 @@ class Service(object):
raise OperationFailedError("Cannot create container for service %s: %s" %
(self.name, ex.explanation))
def ensure_image_exists(self, do_build=BuildAction.none, silent=False):
def ensure_image_exists(self, do_build=BuildAction.none, silent=False, cli=False):
if self.can_be_built() and do_build == BuildAction.force:
self.build()
self.build(cli=cli)
return
try:
@@ -356,7 +363,7 @@ class Service(object):
if do_build == BuildAction.skip:
raise NeedsBuildError(self)
self.build()
self.build(cli=cli)
log.warning(
"Image for service {} was built because it did not already exist. To "
"rebuild this image you must use `docker-compose build` or "
@@ -397,8 +404,8 @@ class Service(object):
return ConvergencePlan('start', containers)
if (
strategy is ConvergenceStrategy.always or
self._containers_have_diverged(containers)
strategy is ConvergenceStrategy.always or
self._containers_have_diverged(containers)
):
return ConvergencePlan('recreate', containers)
@@ -475,6 +482,7 @@ class Service(object):
container, timeout=timeout, attach_logs=not detached,
start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
)
containers, errors = parallel_execute(
containers,
recreate,
@@ -616,6 +624,8 @@ class Service(object):
try:
container.start()
except APIError as ex:
if "driver failed programming external connectivity" in ex.explanation:
log.warning("Host is already in use by another container")
raise OperationFailedError("Cannot start service %s: %s" % (self.name, ex.explanation))
return container
@@ -696,11 +706,11 @@ class Service(object):
net_name = self.network_mode.service_name
pid_namespace = self.pid_mode.service_name
return (
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
self.get_linked_service_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []) +
([pid_namespace] if pid_namespace else []) +
list(self.options.get('depends_on', {}).keys())
)
def get_dependency_configs(self):
@@ -890,7 +900,7 @@ class Service(object):
container_options['labels'] = build_container_labels(
container_options.get('labels', {}),
self.labels(one_off=one_off),
self.labels(one_off=one_off) + self.extra_labels,
number,
self.config_hash if add_config_hash else None,
slug
@@ -1049,7 +1059,7 @@ class Service(object):
return [build_spec(secret) for secret in self.secrets]
def build(self, no_cache=False, pull=False, force_rm=False, memory=None, build_args_override=None,
gzip=False, rm=True, silent=False):
gzip=False, rm=True, silent=False, cli=False, progress=None):
output_stream = open(os.devnull, 'w')
if not silent:
output_stream = sys.stdout
@@ -1070,7 +1080,8 @@ class Service(object):
'Impossible to perform platform-targeted builds for API version < 1.35'
)
build_output = self.client.build(
builder = self.client if not cli else _CLIBuilder(progress)
build_output = builder.build(
path=path,
tag=self.image_name,
rm=rm,
@@ -1542,9 +1553,9 @@ def warn_on_masked_volume(volumes_option, container_volumes, service):
for volume in volumes_option:
if (
volume.external and
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
volume.external and
volume.internal in container_volumes and
container_volumes.get(volume.internal) != volume.external
):
log.warning((
"Service \"{service}\" is using volume \"{volume}\" from the "
@@ -1591,6 +1602,7 @@ def build_mount(mount_spec):
read_only=mount_spec.read_only, consistency=mount_spec.consistency, **kwargs
)
# Labels
@@ -1645,6 +1657,7 @@ def format_environment(environment):
if isinstance(value, six.binary_type):
value = value.decode('utf-8')
return '{key}={value}'.format(key=key, value=value)
return [format_env(*item) for item in environment.items()]
@@ -1701,3 +1714,139 @@ def rewrite_build_path(path):
path = WINDOWS_LONGPATH_PREFIX + os.path.normpath(path)
return path
class _CLIBuilder(object):
def __init__(self, progress):
self._progress = progress
def build(self, path, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, timeout=None,
custom_context=False, encoding=None, pull=False,
forcerm=False, dockerfile=None, container_limits=None,
decode=False, buildargs=None, gzip=False, shmsize=None,
labels=None, cache_from=None, target=None, network_mode=None,
squash=None, extra_hosts=None, platform=None, isolation=None,
use_config_proxy=True):
"""
Args:
path (str): Path to the directory containing the Dockerfile
buildargs (dict): A dictionary of build arguments
cache_from (:py:class:`list`): A list of images used for build
cache resolution
container_limits (dict): A dictionary of limits applied to each
container created by the build process. Valid keys:
- memory (int): set memory limit for build
- memswap (int): Total memory (memory + swap), -1 to disable
swap
- cpushares (int): CPU shares (relative weight)
- cpusetcpus (str): CPUs in which to allow execution, e.g.,
``"0-3"``, ``"0,1"``
custom_context (bool): Optional if using ``fileobj``
decode (bool): If set to ``True``, the returned stream will be
decoded into dicts on the fly. Default ``False``
dockerfile (str): path within the build context to the Dockerfile
encoding (str): The encoding for a stream. Set to ``gzip`` for
compressing
extra_hosts (dict): Extra hosts to add to /etc/hosts in building
containers, as a mapping of hostname to IP address.
fileobj: A file object to use as the Dockerfile. (Or a file-like
object)
forcerm (bool): Always remove intermediate containers, even after
unsuccessful builds
isolation (str): Isolation technology used during build.
Default: `None`.
labels (dict): A dictionary of labels to set on the image
network_mode (str): networking mode for the run commands during
build
nocache (bool): Don't use the cache when set to ``True``
platform (str): Platform in the format ``os[/arch[/variant]]``
pull (bool): Downloads any updates to the FROM image in Dockerfiles
quiet (bool): Whether to return the status
rm (bool): Remove intermediate containers. The ``docker build``
command now defaults to ``--rm=true``, but we have kept the old
default of `False` to preserve backward compatibility
shmsize (int): Size of `/dev/shm` in bytes. The size must be
greater than 0. If omitted the system uses 64MB
squash (bool): Squash the resulting images layers into a
single layer.
tag (str): A tag to add to the final image
target (str): Name of the build-stage to build in a multi-stage
Dockerfile
timeout (int): HTTP timeout
use_config_proxy (bool): If ``True``, and if the docker client
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
Returns:
A generator for the build output.
"""
if dockerfile:
dockerfile = os.path.join(path, dockerfile)
iidfile = tempfile.mktemp()
command_builder = _CommandBuilder()
command_builder.add_params("--build-arg", buildargs)
command_builder.add_list("--cache-from", cache_from)
command_builder.add_arg("--file", dockerfile)
command_builder.add_flag("--force-rm", forcerm)
command_builder.add_arg("--memory", container_limits.get("memory"))
command_builder.add_flag("--no-cache", nocache)
command_builder.add_arg("--progress", self._progress)
command_builder.add_flag("--pull", pull)
command_builder.add_arg("--tag", tag)
command_builder.add_arg("--target", target)
command_builder.add_arg("--iidfile", iidfile)
args = command_builder.build([path])
magic_word = "Successfully built "
appear = False
with subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True) as p:
while True:
line = p.stdout.readline()
if not line:
break
# Fix non ascii chars on Python2. To remove when #6890 is complete.
if six.PY2:
magic_word = str(magic_word)
if line.startswith(magic_word):
appear = True
yield json.dumps({"stream": line})
with open(iidfile) as f:
line = f.readline()
image_id = line.split(":")[1].strip()
os.remove(iidfile)
# In case of `DOCKER_BUILDKIT=1`
# there is no success message already present in the output.
# Since that's the way `Service::build` gets the `image_id`
# it has to be added `manually`
if not appear:
yield json.dumps({"stream": "{}{}\n".format(magic_word, image_id)})
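For reference, the `--iidfile` handling above can be reproduced in isolation. This sketch uses a hypothetical digest (the run of `a` characters) purely to show how the image id is recovered from the file contents:

```python
import os
import tempfile

# `docker build --iidfile <path>` writes the image digest as "sha256:<id>".
# The digest below is hypothetical, only to illustrate the parsing.
fd, iidfile = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("sha256:" + "a" * 64)

with open(iidfile) as f:
    # Same split-on-colon logic as _CLIBuilder.build above
    image_id = f.readline().split(":")[1].strip()
os.remove(iidfile)

print(image_id == "a" * 64)  # → True
```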
class _CommandBuilder(object):
def __init__(self):
self._args = ["docker", "build"]
def add_arg(self, name, value):
if value:
self._args.extend([name, str(value)])
def add_flag(self, name, flag):
if flag:
self._args.extend([name])
def add_params(self, name, params):
if params:
for key, val in params.items():
self._args.extend([name, "{}={}".format(key, val)])
def add_list(self, name, values):
if values:
for val in values:
self._args.extend([name, val])
def build(self, args):
return self._args + args
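A quick way to see what `_CommandBuilder` produces for a given set of options; the class is re-declared here so the sketch is self-contained, and the build arguments are made-up sample values:

```python
class CommandBuilder(object):
    """Accumulates `docker build` CLI arguments (mirrors _CommandBuilder above)."""
    def __init__(self):
        self._args = ["docker", "build"]

    def add_arg(self, name, value):
        if value:
            self._args.extend([name, str(value)])

    def add_flag(self, name, flag):
        if flag:
            self._args.extend([name])

    def add_params(self, name, params):
        if params:
            for key, val in params.items():
                self._args.extend([name, "{}={}".format(key, val)])

    def build(self, args):
        return self._args + args


cb = CommandBuilder()
cb.add_params("--build-arg", {"VERSION": "1.0"})  # sample build arg
cb.add_flag("--no-cache", True)
cb.add_arg("--tag", "myimage:latest")             # sample tag
print(cb.build(["."]))
# → ['docker', 'build', '--build-arg', 'VERSION=1.0', '--no-cache', '--tag', 'myimage:latest', '.']
```

Falsy values are skipped entirely (`add_arg("--file", None)` adds nothing), which is why `_CLIBuilder.build` can pass every parameter through unconditionally.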


@@ -1 +1 @@
pyinstaller==3.4
pyinstaller==3.5

View File

@@ -1,9 +1,10 @@
backports.shutil_get_terminal_size==1.0.0
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.3.0
certifi==2017.4.17
chardet==3.0.4
colorama==0.4.0; sys_platform == 'win32'
docker==4.0.1
docker==4.1.0
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
@@ -11,14 +12,14 @@ enum34==1.1.6; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
idna==2.5
ipaddress==1.0.18
jsonschema==2.6.0
paramiko==2.4.2
jsonschema==3.0.1
paramiko==2.6.0
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.6.7
PyYAML==4.2b1
requests==2.22.0
six==1.10.0
six==1.12.0
texttable==1.6.2
urllib3==1.24.2; python_version == '3.3'
websocket-client==0.32.0

script/Jenkinsfile.fossa Normal file

@@ -0,0 +1,20 @@
pipeline {
agent any
stages {
stage("License Scan") {
agent {
label 'ubuntu-1604-aufs-edge'
}
steps {
withCredentials([
string(credentialsId: 'fossa-api-key', variable: 'FOSSA_API_KEY')
]) {
checkout scm
sh "FOSSA_API_KEY='${FOSSA_API_KEY}' BRANCH_NAME='${env.BRANCH_NAME}' make -f script/fossa.mk fossa-analyze"
sh "FOSSA_API_KEY='${FOSSA_API_KEY}' make -f script/fossa.mk fossa-test"
}
}
}
}
}


@@ -12,6 +12,7 @@ docker build -t "${TAG}" . \
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
TMP_CONTAINER=$(docker create "${TAG}")
mkdir -p dist
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose dist/docker-compose-Linux-x86_64
ARCH=$(uname -m)
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
docker container rm -f "${TMP_CONTAINER}"
docker image rm -f "${TAG}"


@@ -20,10 +20,11 @@ echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
export PATH="${CODE_PATH}/pyinstaller:${PATH}"
if [ ! -z "${BUILD_BOOTLOADER}" ]; then
# Build bootloader for alpine
git clone --single-branch --branch master https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
# Build bootloader for alpine; develop is the main branch
git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
cd /tmp/pyinstaller/bootloader
git checkout v3.4
# Checkout commit corresponding to version in requirements-build
git checkout v3.5
"${VENV}"/bin/python3 ./waf configure --no-lsb all
"${VENV}"/bin/pip3 install ..
cd "${CODE_PATH}"


@@ -1,7 +1,5 @@
#!/bin/bash
set -x
curl -f -u$BINTRAY_USERNAME:$BINTRAY_API_KEY -X GET \
https://api.bintray.com/repos/docker-compose/${CIRCLE_BRANCH}

script/fossa.mk Normal file

@@ -0,0 +1,16 @@
# Variables for Fossa
BUILD_ANALYZER?=docker/fossa-analyzer
FOSSA_OPTS?=--option all-tags:true --option allow-unresolved:true
fossa-analyze:
docker run --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
-w /go/src/github.com/docker/compose \
$(BUILD_ANALYZER) analyze ${FOSSA_OPTS} --branch ${BRANCH_NAME}
# This command is used to run the fossa test command
fossa-test:
docker run -i -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
-w /go/src/github.com/docker/compose \
$(BUILD_ANALYZER) test


@@ -39,7 +39,7 @@ install_requires = [
'docker[ssh] >= 3.7.0, < 5',
'dockerpty >= 0.4.1, < 1',
'six >= 1.3.0, < 2',
'jsonschema >= 2.5.1, < 3',
'jsonschema >= 2.5.1, < 4',
]
@@ -52,9 +52,11 @@ if sys.version_info[:2] < (3, 4):
tests_require.append('mock >= 1.0.1, < 4')
extras_require = {
':python_version < "3.2"': ['subprocess32 >= 3.5.4, < 4'],
':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5, < 4'],
':python_version < "3.3"': ['ipaddress >= 1.0.16, < 2'],
':python_version < "3.3"': ['backports.shutil_get_terminal_size == 1.0.0',
'ipaddress >= 1.0.16, < 2'],
':sys_platform == "win32"': ['colorama >= 0.4, < 1'],
'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
}


@@ -360,7 +360,7 @@ class CLITestCase(DockerClientTestCase):
'services': {
'web': {
'command': 'echo uwu',
'image': 'alpine:3.4',
'image': 'alpine:3.10.1',
'ports': ['3341/tcp', '4449/tcp']
}
},
@@ -559,7 +559,7 @@ class CLITestCase(DockerClientTestCase):
'services': {
'foo': {
'command': '/bin/true',
'image': 'alpine:3.7',
'image': 'alpine:3.10.1',
'scale': 3,
'restart': 'always:7',
'mem_limit': '300M',
@@ -2816,8 +2816,8 @@ class CLITestCase(DockerClientTestCase):
result = self.dispatch(['images'])
assert 'busybox' in result.stdout
assert 'multiple-composefiles_another_1' in result.stdout
assert 'multiple-composefiles_simple_1' in result.stdout
assert '_another_1' in result.stdout
assert '_simple_1' in result.stdout
@mock.patch.dict(os.environ)
def test_images_tagless_image(self):
@@ -2865,4 +2865,4 @@ class CLITestCase(DockerClientTestCase):
assert re.search(r'foo1.+test[ \t]+dev', result.stdout) is not None
assert re.search(r'foo2.+test[ \t]+prod', result.stdout) is not None
assert re.search(r'foo3.+_foo3[ \t]+latest', result.stdout) is not None
assert re.search(r'foo3.+test[ \t]+latest', result.stdout) is not None


@@ -1,7 +1,7 @@
version: '3.5'
services:
foo:
image: alpine:3.7
image: alpine:3.10.1
command: /bin/true
deploy:
replicas: 3


@@ -1,4 +1,4 @@
IMAGE=alpine:3.4
IMAGE=alpine:3.10.1
COMMAND=echo uwu
PORT1=3341
PORT2=4449


@@ -8,3 +8,4 @@ services:
image: test:prod
foo3:
build: .
image: test:latest


@@ -2,17 +2,17 @@ version: "2"
services:
web:
image: alpine:3.7
image: alpine:3.10.1
command: top
networks: ["front"]
app:
image: alpine:3.7
image: alpine:3.10.1
command: top
networks: ["front", "back"]
links:
- "db:database"
db:
image: alpine:3.7
image: alpine:3.10.1
command: top
networks: ["back"]


@@ -38,6 +38,8 @@ from compose.container import Container
from compose.errors import OperationFailedError
from compose.parallel import ParallelStreamWriter
from compose.project import OneOffFilter
from compose.project import Project
from compose.service import BuildAction
from compose.service import ConvergencePlan
from compose.service import ConvergenceStrategy
from compose.service import NetworkMode
@@ -966,6 +968,43 @@ class ServiceTest(DockerClientTestCase):
assert self.client.inspect_image('composetest_web')
def test_build_cli(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
service = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '1',
'DOCKER_BUILDKIT': '1',
})
service.build(cli=True)
self.addCleanup(self.client.remove_image, service.image_name)
assert self.client.inspect_image('composetest_web')
def test_up_build_cli(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
web = self.create_service('web',
build={'context': base_dir},
environment={
'COMPOSE_DOCKER_CLI_BUILD': '1',
'DOCKER_BUILDKIT': '1',
})
project = Project('composetest', [web], self.client)
project.up(do_build=BuildAction.force)
containers = project.containers(['web'])
assert len(containers) == 1
assert containers[0].name.startswith('composetest_web_')
def test_build_non_ascii_filename(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)


@@ -152,6 +152,17 @@ class TestWatchEvents(object):
*thread_args)
assert container_id in thread_map
def test_container_attach_event(self, thread_map, mock_presenters):
container_id = 'abcd'
mock_container = mock.Mock(is_restarting=False)
mock_container.attach_log_stream.side_effect = APIError("race condition")
event_die = {'action': 'die', 'id': container_id}
event_start = {'action': 'start', 'id': container_id, 'container': mock_container}
event_stream = [event_die, event_start]
thread_args = 'foo', 'bar'
watch_events(thread_map, event_stream, mock_presenters, thread_args)
assert mock_container.attach_log_stream.called
def test_other_event(self, thread_map, mock_presenters):
container_id = 'abcd'
event_stream = [{'action': 'create', 'id': container_id}]


@@ -29,16 +29,20 @@ class HumanReadableFileSizeTest(unittest.TestCase):
assert human_readable_file_size(100) == '100 B'
def test_1kb(self):
assert human_readable_file_size(1024) == '1 kB'
assert human_readable_file_size(1000) == '1 kB'
assert human_readable_file_size(1024) == '1.024 kB'
def test_1023b(self):
assert human_readable_file_size(1023) == '1023 B'
assert human_readable_file_size(1023) == '1.023 kB'
def test_999b(self):
assert human_readable_file_size(999) == '999 B'
def test_units(self):
assert human_readable_file_size((2 ** 10) ** 0) == '1 B'
assert human_readable_file_size((2 ** 10) ** 1) == '1 kB'
assert human_readable_file_size((2 ** 10) ** 2) == '1 MB'
assert human_readable_file_size((2 ** 10) ** 3) == '1 GB'
assert human_readable_file_size((2 ** 10) ** 4) == '1 TB'
assert human_readable_file_size((2 ** 10) ** 5) == '1 PB'
assert human_readable_file_size((2 ** 10) ** 6) == '1 EB'
assert human_readable_file_size((10 ** 3) ** 0) == '1 B'
assert human_readable_file_size((10 ** 3) ** 1) == '1 kB'
assert human_readable_file_size((10 ** 3) ** 2) == '1 MB'
assert human_readable_file_size((10 ** 3) ** 3) == '1 GB'
assert human_readable_file_size((10 ** 3) ** 4) == '1 TB'
assert human_readable_file_size((10 ** 3) ** 5) == '1 PB'
assert human_readable_file_size((10 ** 3) ** 6) == '1 EB'


@@ -18,6 +18,7 @@ from ...helpers import build_config_details
from ...helpers import BUSYBOX_IMAGE_WITH_TAG
from compose.config import config
from compose.config import types
from compose.config.config import ConfigFile
from compose.config.config import resolve_build_args
from compose.config.config import resolve_environment
from compose.config.environment import Environment
@@ -3620,7 +3621,7 @@ class InterpolationTest(unittest.TestCase):
'version': '3.5',
'services': {
'foo': {
'image': 'alpine:3.7',
'image': 'alpine:3.10.1',
'deploy': {
'replicas': 3,
'restart_policy': {
@@ -3646,7 +3647,7 @@ class InterpolationTest(unittest.TestCase):
service_dict = cfg.services[0]
assert service_dict == {
'image': 'alpine:3.7',
'image': 'alpine:3.10.1',
'scale': 3,
'restart': {'MaximumRetryCount': 7, 'Name': 'always'},
'mem_limit': '300M',
@@ -4887,6 +4888,11 @@ class ExtendsTest(unittest.TestCase):
assert types.SecurityOpt.parse('apparmor:unconfined') in svc['security_opt']
assert types.SecurityOpt.parse('seccomp:unconfined') in svc['security_opt']
@mock.patch.object(ConfigFile, 'from_filename', wraps=ConfigFile.from_filename)
def test_extends_same_file_optimization(self, from_filename_mock):
load_from_filename('tests/fixtures/extends/no-file-specified.yml')
from_filename_mock.assert_called_once()
@pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
class ExpandPathTest(unittest.TestCase):


@@ -168,3 +168,8 @@ class NetworkTest(unittest.TestCase):
mock_log.warning.assert_called_once_with(mock.ANY)
_, args, kwargs = mock_log.warning.mock_calls[0]
assert 'label "com.project.touhou.character" has changed' in args[0]
def test_remote_config_labels_none(self):
remote = {'Labels': None}
local = Network(None, 'test_project', 'test_network')
check_remote_network_config(remote, local)


@@ -3,6 +3,8 @@ from __future__ import absolute_import
from __future__ import unicode_literals
import datetime
import os
import tempfile
import docker
import pytest
@@ -11,6 +13,7 @@ from docker.errors import NotFound
from .. import mock
from .. import unittest
from ..helpers import BUSYBOX_IMAGE_WITH_TAG
from compose.config import ConfigurationError
from compose.config.config import Config
from compose.config.types import VolumeFromSpec
from compose.const import COMPOSEFILE_V1 as V1
@@ -21,6 +24,7 @@ from compose.const import DEFAULT_TIMEOUT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.errors import OperationFailedError
from compose.project import get_secrets
from compose.project import NoSuchService
from compose.project import Project
from compose.project import ProjectError
@@ -841,3 +845,84 @@ class ProjectTest(unittest.TestCase):
with mock.patch('compose.service.Service.push') as fake_push:
project.push()
assert fake_push.call_count == 2
def test_get_secrets_no_secret_def(self):
service = 'foo'
secret_source = 'bar'
secret_defs = mock.Mock()
secret_defs.get.return_value = None
secret = mock.Mock(source=secret_source)
with self.assertRaises(ConfigurationError):
get_secrets(service, [secret], secret_defs)
def test_get_secrets_external_warning(self):
service = 'foo'
secret_source = 'bar'
secret_def = mock.Mock()
secret_def.get.return_value = True
secret_defs = mock.Mock()
secret_defs.get.side_effect = secret_def
secret = mock.Mock(source=secret_source)
with mock.patch('compose.project.log') as mock_log:
get_secrets(service, [secret], secret_defs)
mock_log.warning.assert_called_with("Service \"{service}\" uses secret \"{secret}\" "
"which is external. External secrets are not available"
" to containers created by docker-compose."
.format(service=service, secret=secret_source))
def test_get_secrets_uid_gid_mode_warning(self):
service = 'foo'
secret_source = 'bar'
fd, filename_path = tempfile.mkstemp()
os.close(fd)
self.addCleanup(os.remove, filename_path)
def mock_get(key):
return {'external': False, 'file': filename_path}[key]
secret_def = mock.MagicMock()
secret_def.get = mock.MagicMock(side_effect=mock_get)
secret_defs = mock.Mock()
secret_defs.get.return_value = secret_def
secret = mock.Mock(uid=True, gid=True, mode=True, source=secret_source)
with mock.patch('compose.project.log') as mock_log:
get_secrets(service, [secret], secret_defs)
mock_log.warning.assert_called_with("Service \"{service}\" uses secret \"{secret}\" with uid, "
"gid, or mode. These fields are not supported by this "
"implementation of the Compose file"
.format(service=service, secret=secret_source))
def test_get_secrets_secret_file_warning(self):
service = 'foo'
secret_source = 'bar'
not_a_path = 'NOT_A_PATH'
def mock_get(key):
return {'external': False, 'file': not_a_path}[key]
secret_def = mock.MagicMock()
secret_def.get = mock.MagicMock(side_effect=mock_get)
secret_defs = mock.Mock()
secret_defs.get.return_value = secret_def
secret = mock.Mock(uid=False, gid=False, mode=False, source=secret_source)
with mock.patch('compose.project.log') as mock_log:
get_secrets(service, [secret], secret_defs)
mock_log.warning.assert_called_with("Service \"{service}\" uses an undefined secret file "
"\"{secret_file}\", the following file should be created "
"\"{secret_file}\""
.format(service=service, secret_file=not_a_path))