Mirror of https://github.com/docker/compose.git (synced 2026-02-10 10:39:23 +08:00)

Compare commits: 30 commits
| SHA1 |
|---|
| 18f557f920 |
| c5d32da9ad |
| 509cfb9979 |
| 4d8d0769a4 |
| 980ec85bf4 |
| ad87891ef8 |
| 7d2a308b44 |
| b920010afe |
| 687ba365cd |
| 45c6730e64 |
| dcb1d3b781 |
| 0f651d71c7 |
| b6e84b0f1c |
| 0dad2367e6 |
| dfd5ff396a |
| 3f18d599b4 |
| ae5e505de0 |
| 89cf753299 |
| ea7772d599 |
| 2963363240 |
| 8041319bfd |
| 824b4943ed |
| 35d71511b3 |
| 7d73cb76b3 |
| 796588ec35 |
| d120a6f07b |
| 5ddd881dbd |
| 22b0f5d20c |
| f74ff28728 |
| 1285960d3c |
CHANGELOG.md (158 changed lines)
@@ -1,164 +1,6 @@
 Change log
 ==========
 
-1.28.5 (2021-02-25)
--------------------
-
-[List of PRs / issues for this release](https://github.com/docker/compose/milestone/55?closed=1)
-
-### Bugs
-
-- Fix OpenSSL version mismatch error when shelling out to the ssh client (via bump to docker-py 4.4.4, which contains the fix)
-- Add missing build flags to the native builder: `platform`, `isolation` and `extra_hosts`
-- Remove info message on native build
-- Avoid fetching logs when the service logging driver is set to 'none'
-
-1.28.4 (2021-02-18)
--------------------
-
-[List of PRs / issues for this release](https://github.com/docker/compose/milestone/54?closed=1)
-
-### Bugs
-
-- Fix SSH port parsing by bumping docker-py to 4.4.3
-
-### Miscellaneous
-
-- Bump Python to 3.7.10
-
-1.28.3 (2021-02-17)
--------------------
-
-[List of PRs / issues for this release](https://github.com/docker/compose/milestone/53?closed=1)
-
-### Bugs
-
-- Fix SSH hostname parsing when it contains a leading s/h, and remove the quiet option that was hiding the error (via docker-py bump to 4.4.2)
-- Fix key error for the `--no-log-prefix` option
-- Fix incorrect CLI environment variable name for service profiles: `COMPOSE_PROFILES` instead of `COMPOSE_PROFILE`
-- Fix fish completion
-
-### Miscellaneous
-
-- Bump cryptography to 3.3.2
-- Remove log driver filter
-
-1.28.2 (2021-01-26)
--------------------
-
-### Miscellaneous
-
-- CI setup update
-
-1.28.1 (2021-01-25)
--------------------
-
-### Bugs
-
-- Revert to Python 3.7 bump for Linux static builds
-- Add bash completion for `docker-compose logs|up --no-log-prefix`
-
-1.28.0 (2021-01-20)
--------------------
-
-### Features
-
-- Support for Nvidia GPUs via device requests
-- Support for service profiles
-- Change the SSH connection approach to the Docker CLI's, via shellout to the local SSH client (old behaviour enabled by setting the `COMPOSE_PARAMIKO_SSH` environment variable)
-- Add flag to disable log prefix
-- Add flag for ANSI output control
-
-### Bugs
-
-- Make `parallel_pull=True` by default
-- Bring back warning for configs in non-swarm mode
-- Take `--file` into account when defining `project_dir`
-- On `compose up`, attach only to services we read logs from
-
-### Miscellaneous
-
-- Make COMPOSE_DOCKER_CLI_BUILD=1 the default
-- Add usage metrics
-- Sync schema with COMPOSE specification
-- Improve failure report for missing mandatory environment variables
-- Bump attrs to 20.3.0
-- Bump more_itertools to 8.6.0
-- Bump cryptography to 3.2.1
-- Bump cffi to 1.14.4
-- Bump virtualenv to 20.2.2
-- Bump bcrypt to 3.2.0
-- Bump gitpython to 3.1.11
-- Bump docker-py to 4.4.1
-- Bump Python to 3.9
-- Linux: bump Debian base image from stretch to buster (required for Python 3.9)
-- macOS: OpenSSL 1.1.1g to 1.1.1h, Python 3.7.7 to 3.9.0
-- Bump PyInstaller to 4.1
-- Loosen restriction on base images to latest minor
-- Updates of READMEs
-
-<<<<<<< HEAD
-=======
-
->>>>>>> master
-1.27.4 (2020-09-24)
--------------------
-
-### Bugs
-
-- Remove path checks for bind mounts
-- Fix port rendering to output long form syntax for non-v1
-- Add protocol to the docker socket address
-
-1.27.3 (2020-09-16)
--------------------
-
-### Bugs
-
-- Merge `max_replicas_per_node` on `docker-compose config`
-- Fix `depends_on` serialization on `docker-compose config`
-- Fix scaling when some containers are not running on `docker-compose up`
-- Enable relative paths for `driver_opts.device` for the `local` driver
-- Allow strings for `cpus` fields
-
 1.27.2 (2020-09-10)
 -------------------
 
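The "Make COMPOSE_DOCKER_CLI_BUILD=1 the default" entry above comes down to how a boolean environment toggle is read with a default of true. A minimal sketch of that pattern (the helper name `get_boolean_env` is illustrative, not Compose's actual API):

```python
import os

def get_boolean_env(name, default=False):
    """Interpret an environment variable as a boolean flag.

    Unset -> default; '', '0', 'false', 'no' -> False; anything else -> True.
    """
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() not in ('', '0', 'false', 'no')

# With the 1.28.0 change, the native (CLI) builder is on unless it is
# explicitly disabled, e.g. by exporting COMPOSE_DOCKER_CLI_BUILD=0.
native_builder = get_boolean_env('COMPOSE_DOCKER_CLI_BUILD', default=True)
print(native_builder)
```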
Dockerfile (60 changed lines)
@@ -1,15 +1,11 @@
-ARG DOCKER_VERSION=19.03
-ARG PYTHON_VERSION=3.7.10
-
-ARG BUILD_ALPINE_VERSION=3.12
-ARG BUILD_CENTOS_VERSION=7
+ARG DOCKER_VERSION=19.03.8
+ARG PYTHON_VERSION=3.7.7
+ARG BUILD_ALPINE_VERSION=3.11
+ARG BUILD_DEBIAN_VERSION=slim-stretch
+ARG RUNTIME_ALPINE_VERSION=3.11.5
+ARG RUNTIME_DEBIAN_VERSION=stretch-20200414-slim
 
-ARG RUNTIME_ALPINE_VERSION=3.12
-ARG RUNTIME_CENTOS_VERSION=7
-ARG RUNTIME_DEBIAN_VERSION=stretch-slim
-
-ARG DISTRO=alpine
+ARG BUILD_PLATFORM=alpine
 
 FROM docker:${DOCKER_VERSION} AS docker-cli
 
@@ -44,56 +40,32 @@ RUN apt-get update && apt-get install --no-install-recommends -y \
     openssl \
     zlib1g-dev
 
-FROM centos:${BUILD_CENTOS_VERSION} AS build-centos
-RUN yum install -y \
-    gcc \
-    git \
-    libffi-devel \
-    make \
-    openssl \
-    openssl-devel
-WORKDIR /tmp/python3/
-ARG PYTHON_VERSION
-RUN curl -L https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz | tar xzf - \
-    && cd Python-${PYTHON_VERSION} \
-    && ./configure --enable-optimizations --enable-shared --prefix=/usr LDFLAGS="-Wl,-rpath /usr/lib" \
-    && make altinstall
-RUN alternatives --install /usr/bin/python python /usr/bin/python2.7 50
-RUN alternatives --install /usr/bin/python python /usr/bin/python$(echo "${PYTHON_VERSION%.*}") 60
-RUN curl https://bootstrap.pypa.io/get-pip.py | python -
-
-FROM build-${DISTRO} AS build
-ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
-WORKDIR /code/
+FROM build-${BUILD_PLATFORM} AS build
 COPY docker-compose-entrypoint.sh /usr/local/bin/
+ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
 COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
-RUN pip install \
-    virtualenv==20.4.0 \
-    tox==3.21.2
-COPY requirements-dev.txt .
+WORKDIR /code/
+# FIXME(chris-crone): virtualenv 16.3.0 breaks build, force 16.2.0 until fixed
+RUN pip install virtualenv==20.0.30
+RUN pip install tox==3.19.0
 
 COPY requirements-indirect.txt .
 COPY requirements.txt .
-RUN pip install -r requirements.txt -r requirements-indirect.txt -r requirements-dev.txt
+COPY requirements-dev.txt .
 COPY .pre-commit-config.yaml .
 COPY tox.ini .
 COPY setup.py .
 COPY README.md .
 COPY compose compose/
-RUN tox -e py37 --notest
+RUN tox --notest
 COPY . .
 ARG GIT_COMMIT=unknown
 ENV DOCKER_COMPOSE_GITSHA=$GIT_COMMIT
 RUN script/build/linux-entrypoint
 
-FROM scratch AS bin
-ARG TARGETARCH
-ARG TARGETOS
-COPY --from=build /usr/local/bin/docker-compose /docker-compose-${TARGETOS}-${TARGETARCH}
-
 FROM alpine:${RUNTIME_ALPINE_VERSION} AS runtime-alpine
 FROM debian:${RUNTIME_DEBIAN_VERSION} AS runtime-debian
-FROM centos:${RUNTIME_CENTOS_VERSION} AS runtime-centos
-FROM runtime-${DISTRO} AS runtime
+FROM runtime-${BUILD_PLATFORM} AS runtime
 COPY docker-compose-entrypoint.sh /usr/local/bin/
 ENTRYPOINT ["sh", "/usr/local/bin/docker-compose-entrypoint.sh"]
 COPY --from=docker-cli /usr/local/bin/docker /usr/local/bin/docker
Jenkinsfile (vendored, 17 changed lines)
@@ -1,6 +1,6 @@
 #!groovy
 
-def dockerVersions = ['19.03.13']
+def dockerVersions = ['19.03.8']
 def baseImages = ['alpine', 'debian']
 def pythonVersions = ['py37']
 
@@ -13,9 +13,6 @@ pipeline {
         timeout(time: 2, unit: 'HOURS')
         timestamps()
     }
-    environment {
-        DOCKER_BUILDKIT="1"
-    }
 
     stages {
         stage('Build test images') {
@@ -23,7 +20,7 @@ pipeline {
             parallel {
                 stage('alpine') {
                     agent {
-                        label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
+                        label 'ubuntu && amd64 && !zfs'
                     }
                     steps {
                         buildImage('alpine')
@@ -31,7 +28,7 @@ pipeline {
                 }
                 stage('debian') {
                     agent {
-                        label 'ubuntu-2004 && amd64 && !zfs && cgroup1'
+                        label 'ubuntu && amd64 && !zfs'
                     }
                     steps {
                         buildImage('debian')
@@ -62,7 +59,7 @@ pipeline {
 
 def buildImage(baseImage) {
     def scmvar = checkout(scm)
-    def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
+    def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
     image = docker.image(imageName)
 
     withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -72,7 +69,7 @@ def buildImage(baseImage) {
         ansiColor('xterm') {
             sh """docker build -t ${imageName} \\
                 --target build \\
-                --build-arg DISTRO="${baseImage}" \\
+                --build-arg BUILD_PLATFORM="${baseImage}" \\
                 --build-arg GIT_COMMIT="${scmvar.GIT_COMMIT}" \\
                 .\\
             """
@@ -89,7 +86,7 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
     stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
         node("ubuntu && amd64 && !zfs") {
             def scmvar = checkout(scm)
-            def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
+            def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
             def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
             echo "Using local system's storage driver: ${storageDriver}"
             withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -99,8 +96,6 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
                 --privileged \\
                 --volume="\$(pwd)/.git:/code/.git" \\
                 --volume="/var/run/docker.sock:/var/run/docker.sock" \\
-                --volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-                -e "DOCKER_TLS_CERTDIR=" \\
                 -e "TAG=${imageName}" \\
                 -e "STORAGE_DRIVER=${storageDriver}" \\
                 -e "DOCKER_VERSIONS=${dockerVersion}" \\
Makefile (57 changed lines)
@@ -1,57 +0,0 @@
-TAG = "docker-compose:alpine-$(shell git rev-parse --short HEAD)"
-GIT_VOLUME = "--volume=$(shell pwd)/.git:/code/.git"
-
-DOCKERFILE ?="Dockerfile"
-DOCKER_BUILD_TARGET ?="build"
-
-UNAME_S := $(shell uname -s)
-ifeq ($(UNAME_S),Linux)
-	BUILD_SCRIPT = linux
-endif
-ifeq ($(UNAME_S),Darwin)
-	BUILD_SCRIPT = osx
-endif
-
-COMPOSE_SPEC_SCHEMA_PATH = "compose/config/compose_spec.json"
-COMPOSE_SPEC_RAW_URL = "https://raw.githubusercontent.com/compose-spec/compose-spec/master/schema/compose-spec.json"
-
-all: cli
-
-cli: download-compose-spec ## Compile the cli
-	./script/build/$(BUILD_SCRIPT)
-
-download-compose-spec: ## Download the compose-spec schema from its repo
-	curl -so $(COMPOSE_SPEC_SCHEMA_PATH) $(COMPOSE_SPEC_RAW_URL)
-
-cache-clear: ## Clear the builder cache
-	@docker builder prune --force --filter type=exec.cachemount --filter=unused-for=24h
-
-base-image: ## Build base image
-	docker build -f $(DOCKERFILE) -t $(TAG) --target $(DOCKER_BUILD_TARGET) .
-
-lint: base-image ## Run linter
-	docker run --rm \
-		--tty \
-		$(GIT_VOLUME) \
-		$(TAG) \
-		tox -e pre-commit
-
-test-unit: base-image ## Run tests
-	docker run --rm \
-		--tty \
-		$(GIT_VOLUME) \
-		$(TAG) \
-		pytest -v tests/unit/
-
-test: ## Run all tests
-	./script/test/default
-
-pre-commit: lint test-unit cli
-
-help: ## Show help
-	@echo Please specify a build target. The choices are:
-	@grep -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
-
-FORCE:
-
-.PHONY: all cli download-compose-spec cache-clear base-image lint test-unit test pre-commit help
README.md (106 changed lines)
@@ -1,86 +1,62 @@
 Docker Compose
 ==============
-[](https://ci-next.docker.com/public/job/compose/job/master/)
 
 
 
-Docker Compose is a tool for running multi-container applications on Docker
-defined using the [Compose file format](https://compose-spec.io).
-A Compose file is used to define how the one or more containers that make up
-your application are configured.
-Once you have a Compose file, you can create and start your application with a
-single command: `docker-compose up`.
+Compose is a tool for defining and running multi-container Docker applications.
+With Compose, you use a Compose file to configure your application's services.
+Then, using a single command, you create and start all the services
+from your configuration. To learn more about all the features of Compose
+see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
 
-Compose files can be used to deploy applications locally, or to the cloud on
-[Amazon ECS](https://aws.amazon.com/ecs) or
-[Microsoft ACI](https://azure.microsoft.com/services/container-instances/) using
-the Docker CLI. You can read more about how to do this:
-- [Compose for Amazon ECS](https://docs.docker.com/engine/context/ecs-integration/)
-- [Compose for Microsoft ACI](https://docs.docker.com/engine/context/aci-integration/)
+Compose is great for development, testing, and staging environments, as well as
+CI workflows. You can learn more about each case in
+[Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/index.md#common-use-cases).
 
-Where to get Docker Compose
-----------------------------
-
-### Windows and macOS
-
-Docker Compose is included in
-[Docker Desktop](https://www.docker.com/products/docker-desktop)
-for Windows and macOS.
-
-### Linux
-
-You can download Docker Compose binaries from the
-[release page](https://github.com/docker/compose/releases) on this repository.
-
-### Using pip
-
-If your platform is not supported, you can download Docker Compose using `pip`:
-
-```console
-pip install docker-compose
-```
-
-> **Note:** Docker Compose requires Python 3.6 or later.
-
-Quick Start
------------
-
-Using Docker Compose is basically a three-step process:
+Using Compose is basically a three-step process.
 1. Define your app's environment with a `Dockerfile` so it can be
-   reproduced anywhere.
+reproduced anywhere.
 2. Define the services that make up your app in `docker-compose.yml` so
-   they can be run together in an isolated environment.
-3. Lastly, run `docker-compose up` and Compose will start and run your entire
-   app.
+they can be run together in an isolated environment.
+3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
 
-A Compose file looks like this:
+A `docker-compose.yml` looks like this:
 
-```yaml
-services:
-  web:
-    build: .
-    ports:
-      - "5000:5000"
-    volumes:
-      - .:/code
-  redis:
-    image: redis
-```
-
-You can find examples of Compose applications in our
-[Awesome Compose repository](https://github.com/docker/awesome-compose).
+    version: '2'
+
+    services:
+      web:
+        build: .
+        ports:
+          - "5000:5000"
+        volumes:
+          - .:/code
+      redis:
+        image: redis
 
-For more information about the Compose format, see the
-[Compose file reference](https://docs.docker.com/compose/compose-file/).
+For more information about the Compose file, see the
+[Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md).
 
+Compose has commands for managing the whole lifecycle of your application:
+
+ * Start, stop and rebuild services
+ * View the status of running services
+ * Stream the log output of running services
+ * Run a one-off command on a service
+
+Installation and documentation
+------------------------------
+
+- Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
+- Code repository for Compose is on [GitHub](https://github.com/docker/compose).
+- If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new/choose). Thank you!
+
 Contributing
 ------------
 
-Want to help develop Docker Compose? Check out our
-[contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
+[](https://ci-next.docker.com/public/job/compose/job/master/)
 
-If you find an issue, please report it on the
-[issue tracker](https://github.com/docker/compose/issues/new/choose).
+Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).
 
 Releasing
 ---------
@@ -1,6 +1,6 @@
 #!groovy
 
-def dockerVersions = ['19.03.13', '18.09.9']
+def dockerVersions = ['19.03.8', '18.09.9']
 def baseImages = ['alpine', 'debian']
 def pythonVersions = ['py37']
 
@@ -13,9 +13,6 @@ pipeline {
         timeout(time: 2, unit: 'HOURS')
         timestamps()
     }
-    environment {
-        DOCKER_BUILDKIT="1"
-    }
 
     stages {
         stage('Build test images') {
@@ -23,7 +20,7 @@ pipeline {
             parallel {
                 stage('alpine') {
                     agent {
-                        label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                        label 'linux && docker && ubuntu-2004'
                     }
                     steps {
                         buildImage('alpine')
@@ -31,7 +28,7 @@ pipeline {
                 }
                 stage('debian') {
                     agent {
-                        label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                        label 'linux && docker && ubuntu-2004'
                    }
                    steps {
                        buildImage('debian')
@@ -41,7 +38,7 @@ pipeline {
        }
        stage('Test') {
            agent {
-                label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                label 'linux && docker && ubuntu-2004'
            }
            steps {
                // TODO use declarative 1.5.0 `matrix` once available on CI
@@ -61,7 +58,7 @@ pipeline {
        }
        stage('Generate Changelog') {
            agent {
-                label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                label 'linux && docker && ubuntu-2004'
            }
            steps {
                checkout scm
@@ -84,7 +81,7 @@ pipeline {
            steps {
                checkout scm
                sh './script/setup/osx'
-                sh 'tox -e py39 -- tests/unit'
+                sh 'tox -e py37 -- tests/unit'
                sh './script/build/osx'
                dir ('dist') {
                    checksum('docker-compose-Darwin-x86_64')
@@ -98,7 +95,7 @@ pipeline {
        }
        stage('linux binary') {
            agent {
-                label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                label 'linux && docker && ubuntu-2004'
            }
            steps {
                checkout scm
@@ -117,11 +114,11 @@ pipeline {
                label 'windows-python'
            }
            environment {
-                PATH = "C:\\Python39;C:\\Python39\\Scripts;$PATH"
+                PATH = "$PATH;C:\\Python37;C:\\Python37\\Scripts"
            }
            steps {
                checkout scm
-                bat 'tox.exe -e py39 -- tests/unit'
+                bat 'tox.exe -e py37 -- tests/unit'
                powershell '.\\script\\build\\windows.ps1'
                dir ('dist') {
                    checksum('docker-compose-Windows-x86_64.exe')
@@ -134,7 +131,7 @@ pipeline {
        }
        stage('alpine image') {
            agent {
-                label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                label 'linux && docker && ubuntu-2004'
            }
            steps {
                buildRuntimeImage('alpine')
@@ -142,7 +139,7 @@ pipeline {
        }
        stage('debian image') {
            agent {
-                label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                label 'linux && docker && ubuntu-2004'
            }
            steps {
                buildRuntimeImage('debian')
@@ -157,7 +154,7 @@ pipeline {
            parallel {
                stage('Pushing images') {
                    agent {
-                        label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                        label 'linux && docker && ubuntu-2004'
                    }
                    steps {
                        pushRuntimeImage('alpine')
@@ -166,7 +163,7 @@ pipeline {
                }
                stage('Creating Github Release') {
                    agent {
-                        label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                        label 'linux && docker && ubuntu-2004'
                    }
                    environment {
                        GITHUB_TOKEN = credentials('github-release-token')
@@ -198,7 +195,7 @@ pipeline {
                }
                stage('Publishing Python packages') {
                    agent {
-                        label 'linux && docker && ubuntu-2004 && amd64 && cgroup1'
+                        label 'linux && docker && ubuntu-2004'
                    }
                    environment {
                        PYPIRC = credentials('pypirc-docker-dsg-cibot')
@@ -222,7 +219,7 @@ pipeline {
 
 def buildImage(baseImage) {
     def scmvar = checkout(scm)
-    def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
+    def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
     image = docker.image(imageName)
 
     withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -232,7 +229,7 @@ def buildImage(baseImage) {
         ansiColor('xterm') {
             sh """docker build -t ${imageName} \\
                 --target build \\
-                --build-arg DISTRO="${baseImage}" \\
+                --build-arg BUILD_PLATFORM="${baseImage}" \\
                 --build-arg GIT_COMMIT="${scmvar.GIT_COMMIT}" \\
                 .\\
             """
@@ -247,9 +244,9 @@ def buildImage(baseImage) {
 def runTests(dockerVersion, pythonVersion, baseImage) {
     return {
         stage("python=${pythonVersion} docker=${dockerVersion} ${baseImage}") {
-            node("linux && docker && ubuntu-2004 && amd64 && cgroup1") {
+            node("linux && docker && ubuntu-2004") {
                 def scmvar = checkout(scm)
-                def imageName = "dockerpinata/compose:${baseImage}-${scmvar.GIT_COMMIT}"
+                def imageName = "dockerbuildbot/compose:${baseImage}-${scmvar.GIT_COMMIT}"
                 def storageDriver = sh(script: "docker info -f \'{{.Driver}}\'", returnStdout: true).trim()
                 echo "Using local system's storage driver: ${storageDriver}"
                 withDockerRegistry(credentialsId:'dockerbuildbot-index.docker.io') {
@@ -259,8 +256,6 @@ def runTests(dockerVersion, pythonVersion, baseImage) {
                     --privileged \\
                     --volume="\$(pwd)/.git:/code/.git" \\
                     --volume="/var/run/docker.sock:/var/run/docker.sock" \\
-                    --volume="\${DOCKER_CONFIG}/config.json:/root/.docker/config.json" \\
-                    -e "DOCKER_TLS_CERTDIR=" \\
                     -e "TAG=${imageName}" \\
                     -e "STORAGE_DRIVER=${storageDriver}" \\
                     -e "DOCKER_VERSIONS=${dockerVersion}" \\
@@ -281,7 +276,7 @@ def buildRuntimeImage(baseImage) {
     def imageName = "docker/compose:${baseImage}-${env.BRANCH_NAME}"
     ansiColor('xterm') {
         sh """docker build -t ${imageName} \\
-            --build-arg DISTRO="${baseImage}" \\
+            --build-arg BUILD_PLATFORM="${baseImage}" \\
             --build-arg GIT_COMMIT="${scmvar.GIT_COMMIT.take(7)}" \\
             .
         """
@@ -1 +1 @@
-__version__ = '1.28.5'
+__version__ = '1.27.2'
@@ -1,6 +1,3 @@
-import enum
-import os
-
 from ..const import IS_WINDOWS_PLATFORM
 
 NAMES = [
@@ -15,21 +12,6 @@ NAMES = [
 ]
 
 
-@enum.unique
-class AnsiMode(enum.Enum):
-    """Enumeration for when to output ANSI colors."""
-    NEVER = "never"
-    ALWAYS = "always"
-    AUTO = "auto"
-
-    def use_ansi_codes(self, stream):
-        if self is AnsiMode.ALWAYS:
-            return True
-        if self is AnsiMode.NEVER or os.environ.get('CLICOLOR') == '0':
-            return False
-        return stream.isatty()
-
-
 def get_pairs():
     for i, name in enumerate(NAMES):
         yield (name, str(30 + i))
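The removed `AnsiMode` enum is self-contained enough to exercise on its own. A minimal sketch of how the three modes decide whether to colorize, mirroring the code shown above (note that `CLICOLOR=0` forces colors off in `auto` mode but not in `always`):

```python
import enum
import io
import os
import sys


@enum.unique
class AnsiMode(enum.Enum):
    """When to emit ANSI color codes (mirrors the removed compose code)."""
    NEVER = "never"
    ALWAYS = "always"
    AUTO = "auto"

    def use_ansi_codes(self, stream):
        if self is AnsiMode.ALWAYS:
            return True
        if self is AnsiMode.NEVER or os.environ.get('CLICOLOR') == '0':
            return False
        return stream.isatty()


# 'always' colorizes even a plain in-memory buffer; 'never' never does.
assert AnsiMode.ALWAYS.use_ansi_codes(io.StringIO())
assert not AnsiMode.NEVER.use_ansi_codes(sys.stdout)
print(AnsiMode.AUTO.use_ansi_codes(sys.stdout))  # True only on a real TTY
```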
@@ -35,7 +35,7 @@ SILENT_COMMANDS = {
 
 def project_from_options(project_dir, options, additional_options=None):
     additional_options = additional_options or {}
-    override_dir = get_project_dir(options)
+    override_dir = options.get('--project-directory')
     environment_file = options.get('--env-file')
     environment = Environment.from_env_file(override_dir or project_dir, environment_file)
     environment.silent = options.get('COMMAND', None) in SILENT_COMMANDS
@@ -59,15 +59,14 @@ def project_from_options(project_dir, options, additional_options=None):
 
     return get_project(
         project_dir,
-        get_config_path_from_options(options, environment),
+        get_config_path_from_options(project_dir, options, environment),
         project_name=options.get('--project-name'),
         verbose=options.get('--verbose'),
         context=context,
         environment=environment,
         override_dir=override_dir,
         interpolate=(not additional_options.get('--no-interpolate')),
-        environment_file=environment_file,
-        enabled_profiles=get_profiles_from_options(options, environment)
+        environment_file=environment_file
     )
 
 
@@ -87,29 +86,21 @@ def set_parallel_limit(environment):
     parallel.GlobalLimit.set_global_limit(parallel_limit)
 
 
-def get_project_dir(options):
-    override_dir = None
-    files = get_config_path_from_options(options, os.environ)
-    if files:
-        if files[0] == '-':
-            return '.'
-        override_dir = os.path.dirname(files[0])
-    return options.get('--project-directory') or override_dir
-
-
 def get_config_from_options(base_dir, options, additional_options=None):
     additional_options = additional_options or {}
-    override_dir = get_project_dir(options)
+    override_dir = options.get('--project-directory')
     environment_file = options.get('--env-file')
     environment = Environment.from_env_file(override_dir or base_dir, environment_file)
-    config_path = get_config_path_from_options(options, environment)
+    config_path = get_config_path_from_options(
+        base_dir, options, environment
+    )
     return config.load(
         config.find(base_dir, config_path, environment, override_dir),
         not additional_options.get('--no-interpolate')
     )
 
 
-def get_config_path_from_options(options, environment):
+def get_config_path_from_options(base_dir, options, environment):
     def unicode_paths(paths):
         return [p.decode('utf-8') if isinstance(p, bytes) else p for p in paths]
 
@@ -124,21 +115,9 @@ def get_config_path_from_options(options, environment):
     return None
 
 
-def get_profiles_from_options(options, environment):
-    profile_option = options.get('--profile')
-    if profile_option:
-        return profile_option
-
-    profiles = environment.get('COMPOSE_PROFILES')
-    if profiles:
-        return profiles.split(',')
-
-    return []
-
-
 def get_project(project_dir, config_path=None, project_name=None, verbose=False,
                 context=None, environment=None, override_dir=None,
-                interpolate=True, environment_file=None, enabled_profiles=None):
+                interpolate=True, environment_file=None):
     if not environment:
         environment = Environment.from_env_file(project_dir)
     config_details = config.find(project_dir, config_path, environment, override_dir)
@@ -160,7 +139,6 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
         client,
         environment.get('DOCKER_DEFAULT_PLATFORM'),
         execution_context_labels(config_details, environment_file),
-        enabled_profiles,
     )
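The removed `get_profiles_from_options` encodes a small precedence rule: `--profile` flags win, then the comma-separated `COMPOSE_PROFILES` variable, then no profiles at all. A standalone sketch of that resolution order (the function name `resolve_profiles` is illustrative):

```python
def resolve_profiles(cli_profiles, environment):
    """Resolve enabled profiles: --profile flags first, then COMPOSE_PROFILES."""
    if cli_profiles:  # e.g. docker-compose --profile debug --profile web up
        return list(cli_profiles)
    env_value = environment.get('COMPOSE_PROFILES')
    if env_value:
        return env_value.split(',')
    return []

assert resolve_profiles(['debug'], {'COMPOSE_PROFILES': 'web'}) == ['debug']
assert resolve_profiles([], {'COMPOSE_PROFILES': 'web,worker'}) == ['web', 'worker']
assert resolve_profiles([], {}) == []
```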
@@ -166,8 +166,8 @@ def docker_client(environment, version=None, context=None, tls_version=None):
         kwargs['credstore_env'] = {
             'LD_LIBRARY_PATH': environment.get('LD_LIBRARY_PATH_ORIG'),
         }
-    use_paramiko_ssh = int(environment.get('COMPOSE_PARAMIKO_SSH', 0))
-    client = APIClient(use_ssh_client=not use_paramiko_ssh, **kwargs)
 
+    client = APIClient(**kwargs)
     client._original_base_url = kwargs.get('base_url')
 
     return client
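The removed lines flip the SSH transport: docker-py's `APIClient` is given `use_ssh_client=True` (shell out to the local `ssh` binary) unless `COMPOSE_PARAMIKO_SSH` re-enables the old paramiko path. A sketch of just that toggle (the `choose_ssh_transport` wrapper is illustrative, not Compose's API):

```python
def choose_ssh_transport(environment):
    """Return True when the local `ssh` client should be used (the 1.28 default).

    Setting COMPOSE_PARAMIKO_SSH=1 restores the older paramiko behaviour.
    """
    use_paramiko_ssh = int(environment.get('COMPOSE_PARAMIKO_SSH', 0))
    return not use_paramiko_ssh

assert choose_ssh_transport({}) is True
assert choose_ssh_transport({'COMPOSE_PARAMIKO_SSH': '1'}) is False
# The result is then passed straight through, e.g.:
#   client = APIClient(use_ssh_client=choose_ssh_transport(env), **kwargs)
```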
@@ -17,16 +17,10 @@ class DocoptDispatcher:
         self.command_class = command_class
         self.options = options
 
-    @classmethod
-    def get_command_and_options(cls, doc_entity, argv, options):
-        command_help = getdoc(doc_entity)
-        opt = docopt_full_help(command_help, argv, **options)
-        command = opt['COMMAND']
-        return command_help, opt, command
-
     def parse(self, argv):
-        command_help, options, command = DocoptDispatcher.get_command_and_options(
-            self.command_class, argv, self.options)
+        command_help = getdoc(self.command_class)
+        options = docopt_full_help(command_help, argv, **self.options)
+        command = options['COMMAND']
 
         if command is None:
             raise SystemExit(command_help)
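Both versions of `parse` rest on docopt's `options_first` behaviour: global options are parsed up to the first positional, which becomes `COMMAND`, and everything after it is left untouched in `ARGS` for the subcommand to parse itself. A minimal sketch of that two-stage pattern with plain `docopt` (the usage string here is a toy, not Compose's real one):

```python
from docopt import docopt  # pip install docopt

TOPLEVEL_DOC = """Toy dispatcher.

Usage:
  tool [--verbose] [COMMAND] [ARGS...]
"""

def parse_toplevel(argv):
    # options_first stops option parsing at the first positional, leaving
    # subcommand arguments untouched in ARGS for a second docopt pass.
    opts = docopt(TOPLEVEL_DOC, argv=argv, options_first=True)
    return opts['COMMAND'], opts['ARGS'], opts['--verbose']

command, args, verbose = parse_toplevel(['--verbose', 'up', '--detach', 'web'])
assert (command, args, verbose) == ('up', ['--detach', 'web'], True)
```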
@@ -16,22 +16,18 @@ from compose.utils import split_buffer
 
 class LogPresenter:
 
-    def __init__(self, prefix_width, color_func, keep_prefix=True):
+    def __init__(self, prefix_width, color_func):
         self.prefix_width = prefix_width
         self.color_func = color_func
-        self.keep_prefix = keep_prefix
 
     def present(self, container, line):
-        to_log = '{line}'.format(line=line)
-
-        if self.keep_prefix:
-            prefix = container.name_without_project.ljust(self.prefix_width)
-            to_log = '{prefix} '.format(prefix=self.color_func(prefix + ' |')) + to_log
-
-        return to_log
+        prefix = container.name_without_project.ljust(self.prefix_width)
+        return '{prefix} {line}'.format(
+            prefix=self.color_func(prefix + ' |'),
+            line=line)
 
 
-def build_log_presenters(service_names, monochrome, keep_prefix=True):
+def build_log_presenters(service_names, monochrome):
     """Return an iterable of functions.
 
     Each function can be used to format the logs output of a container.
@@ -42,7 +38,7 @@ def build_log_presenters(service_names, monochrome):
         return text
 
     for color_func in cycle([no_color] if monochrome else colors.rainbow()):
-        yield LogPresenter(prefix_width, color_func, keep_prefix)
+        yield LogPresenter(prefix_width, color_func)
 
 
 def max_name_width(service_names, max_index_width=3):
@@ -158,8 +154,10 @@ class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')):
 
 
 def tail_container_logs(container, presenter, queue, log_args):
+    generator = get_log_generator(container)
+
     try:
-        for item in build_log_generator(container, log_args):
+        for item in generator(container, log_args):
             queue.put(QueueItem.new(presenter.present(container, item)))
     except Exception as e:
         queue.put(QueueItem.exception(e))
@@ -169,6 +167,20 @@ def tail_container_logs(container, presenter, queue, log_args):
     queue.put(QueueItem.stop(container.name))
 
 
+def get_log_generator(container):
+    if container.has_api_logs:
+        return build_log_generator
+    return build_no_log_generator
+
+
+def build_no_log_generator(container, log_args):
+    """Return a generator that prints a warning about logs and waits for
+    container to exit.
+    """
+    yield "WARNING: no logs are available with the '{}' log driver\n".format(
+        container.log_driver)
+
+
 def build_log_generator(container, log_args):
     # if the container doesn't have a log_stream we need to attach to container
     # before log printer starts running
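The `keep_prefix` change above is easiest to see with a stripped-down presenter. A sketch of the removed behaviour, using no real containers, just a stand-in object with the one attribute `present` touches:

```python
class FakeContainer:
    name_without_project = 'web_1'

def no_color(text):
    return text

class LogPresenter:
    """Stripped-down version of the presenter removed above."""
    def __init__(self, prefix_width, color_func, keep_prefix=True):
        self.prefix_width = prefix_width
        self.color_func = color_func
        self.keep_prefix = keep_prefix

    def present(self, container, line):
        to_log = line
        if self.keep_prefix:
            prefix = container.name_without_project.ljust(self.prefix_width)
            to_log = self.color_func(prefix + ' |') + ' ' + to_log
        return to_log

with_prefix = LogPresenter(8, no_color)
without_prefix = LogPresenter(8, no_color, keep_prefix=False)
assert with_prefix.present(FakeContainer(), 'ready') == 'web_1    | ready'
assert without_prefix.present(FakeContainer(), 'ready') == 'ready'
```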
@@ -2,6 +2,7 @@ import contextlib
 import functools
 import json
 import logging
+import os
 import pipes
 import re
 import subprocess
@@ -25,8 +26,6 @@ from ..config.serialize import serialize_config
 from ..config.types import VolumeSpec
 from ..const import IS_WINDOWS_PLATFORM
 from ..errors import StreamParseError
-from ..metrics.decorator import metrics
-from ..parallel import ParallelStreamWriter
 from ..progress_stream import StreamOutputError
 from ..project import get_image_digests
 from ..project import MissingDigests
@@ -39,10 +38,7 @@ from ..service import ConvergenceStrategy
 from ..service import ImageType
 from ..service import NeedsBuildError
 from ..service import OperationFailedError
-from ..utils import filter_attached_for_up
-from .colors import AnsiMode
 from .command import get_config_from_options
-from .command import get_project_dir
 from .command import project_from_options
 from .docopt_command import DocoptDispatcher
 from .docopt_command import get_handler
@@ -55,122 +51,60 @@ from .log_printer import LogPrinter
 from .utils import get_version_info
 from .utils import human_readable_file_size
 from .utils import yesno
-from compose.metrics.client import MetricsCommand
-from compose.metrics.client import Status
 
 
 if not IS_WINDOWS_PLATFORM:
     from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation
 
 log = logging.getLogger(__name__)
+console_handler = logging.StreamHandler(sys.stderr)
 
 
-def main():  # noqa: C901
+def main():
     signals.ignore_sigpipe()
-    command = None
-    try:
-        _, opts, command = DocoptDispatcher.get_command_and_options(
-            TopLevelCommand,
-            get_filtered_args(sys.argv[1:]),
-            {'options_first': True, 'version': get_version_info('compose')})
-    except Exception:
-        pass
     try:
-        command_func = dispatch()
-        command_func()
+        command = dispatch()
+        command()
     except (KeyboardInterrupt, signals.ShutdownException):
-        exit_with_metrics(command, "Aborting.", status=Status.FAILURE)
+        log.error("Aborting.")
+        sys.exit(1)
     except (UserError, NoSuchService, ConfigurationError,
             ProjectError, OperationFailedError) as e:
-        exit_with_metrics(command, e.msg, status=Status.FAILURE)
+        log.error(e.msg)
+        sys.exit(1)
    except BuildError as e:
        reason = ""
        if e.reason:
            reason = " : " + e.reason
-        exit_with_metrics(command,
-                          "Service '{}' failed to build{}".format(e.service.name, reason),
-                          status=Status.FAILURE)
+        log.error("Service '{}' failed to build{}".format(e.service.name, reason))
+        sys.exit(1)
    except StreamOutputError as e:
-        exit_with_metrics(command, e, status=Status.FAILURE)
+        log.error(e)
+        sys.exit(1)
    except NeedsBuildError as e:
-        exit_with_metrics(command,
-                          "Service '{}' needs to be built, but --no-build was passed.".format(
-                              e.service.name), status=Status.FAILURE)
+        log.error("Service '{}' needs to be built, but --no-build was passed.".format(e.service.name))
+        sys.exit(1)
    except NoSuchCommand as e:
        commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))
-        exit_with_metrics(e.command, "No such command: {}\n\n{}".format(e.command, commands))
+        log.error("No such command: %s\n\n%s", e.command, commands)
+        sys.exit(1)
    except (errors.ConnectionError, StreamParseError):
-        exit_with_metrics(command, status=Status.FAILURE)
-    except SystemExit as e:
-        status = Status.SUCCESS
-        if len(sys.argv) > 1 and '--help' not in sys.argv:
-            status = Status.FAILURE
-
-        if command and len(sys.argv) >= 3 and sys.argv[2] == '--help':
-            command = '--help ' + command
-
-        if not command and len(sys.argv) >= 2 and sys.argv[1] == '--help':
-            command = '--help'
-
-        msg = e.args[0] if len(e.args) else ""
-        code = 0
-        if isinstance(e.code, int):
-            code = e.code
-        exit_with_metrics(command, log_msg=msg, status=status,
-                          exit_code=code)
-
-
-def get_filtered_args(args):
-    if args[0] in ('-h', '--help'):
-        return []
-    if args[0] == '--version':
-        return ['version']
-
-
-def exit_with_metrics(command, log_msg=None, status=Status.SUCCESS, exit_code=1):
-    if log_msg:
-        if not exit_code:
-            log.info(log_msg)
-        else:
-            log.error(log_msg)
-
-    MetricsCommand(command, status=status).send_metrics()
-    sys.exit(exit_code)
+        sys.exit(1)
 
 
 def dispatch():
-    console_stream = sys.stderr
-    console_handler = logging.StreamHandler(console_stream)
-    setup_logging(console_handler)
+    setup_logging()
    dispatcher = DocoptDispatcher(
        TopLevelCommand,
        {'options_first': True, 'version': get_version_info('compose')})
 
    options, handler, command_options = dispatcher.parse(sys.argv[1:])
 
-    ansi_mode = AnsiMode.AUTO
-    try:
-        if options.get("--ansi"):
-            ansi_mode = AnsiMode(options.get("--ansi"))
-    except ValueError:
-        raise UserError(
-            'Invalid value for --ansi: {}. Expected one of {}.'.format(
-                options.get("--ansi"),
-                ', '.join(m.value for m in AnsiMode)
-            )
-        )
-    if options.get("--no-ansi"):
-        if options.get("--ansi"):
-            raise UserError("--no-ansi and --ansi cannot be combined.")
-        log.warning('--no-ansi option is deprecated and will be removed in future versions.')
-        ansi_mode = AnsiMode.NEVER
-
    setup_console_handler(console_handler,
                          options.get('--verbose'),
-                          ansi_mode.use_ansi_codes(console_handler.stream),
+                          set_no_color_if_clicolor(options.get('--no-ansi')),
                          options.get("--log-level"))
-    setup_parallel_logger(ansi_mode)
-    if ansi_mode is AnsiMode.NEVER:
+    setup_parallel_logger(set_no_color_if_clicolor(options.get('--no-ansi')))
+    if options.get('--no-ansi'):
        command_options['--no-color'] = True
    return functools.partial(perform_command, options, handler, command_options)
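The removed error handling funnels every exit through `exit_with_metrics`, which logs, reports one usage metric for the command, and then terminates with the right code. The essence of that funnel, sketched without the real `MetricsCommand` client (the `send_metrics` stub below stands in for it):

```python
import sys

def send_metrics(command, status):
    # Stand-in for compose.metrics.client.MetricsCommand(...).send_metrics().
    print('metric: command=%r status=%s' % (command, status), file=sys.stderr)

def exit_with_metrics(command, log_msg=None, status='success', exit_code=1):
    """Log, report one usage metric, then exit (mirrors the removed helper)."""
    if log_msg:
        # Successful exits (code 0) log at info level, failures at error level.
        stream = sys.stdout if not exit_code else sys.stderr
        print(log_msg, file=stream)
    send_metrics(command, status)
    sys.exit(exit_code)

try:
    exit_with_metrics('up', 'Aborting.', status='failure')
except SystemExit as e:
    assert e.code == 1
```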
@@ -192,23 +126,23 @@ def perform_command(options, handler, command_options)
     handler(command, command_options)
 
 
-def setup_logging(console_handler):
+def setup_logging():
     root_logger = logging.getLogger()
     root_logger.addHandler(console_handler)
     root_logger.setLevel(logging.DEBUG)
 
-    # Disable requests and docker-py logging
-    logging.getLogger("urllib3").propagate = False
+    # Disable requests logging
     logging.getLogger("requests").propagate = False
-    logging.getLogger("docker").propagate = False
 
 
-def setup_parallel_logger(ansi_mode):
-    ParallelStreamWriter.set_default_ansi_mode(ansi_mode)
+def setup_parallel_logger(noansi):
+    if noansi:
+        import compose.parallel
+        compose.parallel.ParallelStreamWriter.set_noansi()
 
 
-def setup_console_handler(handler, verbose, use_console_formatter=True, level=None):
-    if use_console_formatter:
+def setup_console_handler(handler, verbose, noansi=False, level=None):
+    if handler.stream.isatty() and noansi is False:
         format_class = ConsoleWarningFormatter
     else:
         format_class = logging.Formatter
@@ -248,7 +182,7 @@ class TopLevelCommand:
     """Define and run multi-container applications with Docker.
 
     Usage:
-      docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
+      docker-compose [-f <arg>...] [options] [--] [COMMAND] [ARGS...]
       docker-compose -h|--help
 
     Options:
@@ -256,12 +190,10 @@ class TopLevelCommand:
                                   (default: docker-compose.yml)
       -p, --project-name NAME     Specify an alternate project name
                                   (default: directory name)
-      --profile NAME              Specify a profile to enable
       -c, --context NAME          Specify a context name
       --verbose                   Show more output
       --log-level LEVEL           Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
-      --ansi (never|always|auto)  Control when to print ANSI control characters
-      --no-ansi                   Do not print ANSI control characters (DEPRECATED)
+      --no-ansi                   Do not print ANSI control characters
       -v, --version               Print version and exit
       -H, --host HOST             Daemon socket to connect to
 
@@ -282,7 +214,7 @@ class TopLevelCommand:
       build              Build or rebuild services
       config             Validate and view the Compose file
       create             Create services
-      down               Stop and remove resources
+      down               Stop and remove containers, networks, images, and volumes
       events             Receive real time events from containers
       exec               Execute a command in a running container
       help               Get help on a command
@@ -312,14 +244,13 @@ class TopLevelCommand:
 
     @property
     def project_dir(self):
-        return get_project_dir(self.toplevel_options)
+        return self.toplevel_options.get('--project-directory') or '.'
 
     @property
     def toplevel_environment(self):
         environment_file = self.toplevel_options.get('--env-file')
         return Environment.from_env_file(self.project_dir, environment_file)
 
-    @metrics()
     def build(self, options):
         """
         Build or rebuild services.
@@ -339,6 +270,8 @@ class TopLevelCommand:
         --no-rm                 Do not remove intermediate containers after a successful build.
         --parallel              Build images in parallel.
         --progress string       Set type of progress output (auto, plain, tty).
+                                EXPERIMENTAL flag for native builder.
+                                To enable, run with COMPOSE_DOCKER_CLI_BUILD=1)
         --pull                  Always attempt to pull a newer version of the image.
         -q, --quiet             Don't print anything to STDOUT
         """
@@ -352,7 +285,7 @@ class TopLevelCommand:
         )
         build_args = resolve_build_args(build_args, self.toplevel_environment)
 
-        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)
+        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
 
         self.project.build(
             service_names=options['SERVICE'],
@@ -369,7 +302,6 @@ class TopLevelCommand:
             progress=options.get('--progress'),
         )
 
-    @metrics()
     def config(self, options):
         """
         Validate and view the Compose file.
@@ -419,7 +351,6 @@ class TopLevelCommand:
 
         print(serialize_config(compose_config, image_digests, not options['--no-interpolate']))
 
-    @metrics()
     def create(self, options):
         """
         Creates containers for a service.
@@ -448,7 +379,6 @@ class TopLevelCommand:
             do_build=build_action_from_opts(options),
         )
 
-    @metrics()
     def down(self, options):
         """
         Stops containers and removes containers, networks, volumes, and images
@@ -500,7 +430,6 @@ class TopLevelCommand:
         Options:
             --json      Output events as a stream of json objects
         """
-
         def format_event(event):
             attributes = ["%s=%s" % item for item in event['attributes'].items()]
             return ("{time} {type} {action} {id} ({attrs})").format(
@@ -517,7 +446,6 @@ class TopLevelCommand:
             print(formatter(event))
             sys.stdout.flush()
 
-    @metrics("exec")
     def exec_command(self, options):
         """
         Execute a command in a running container
@@ -594,7 +522,6 @@ class TopLevelCommand:
         sys.exit(exit_code)
 
     @classmethod
-    @metrics()
     def help(cls, options):
         """
         Get help on a command.
@@ -608,7 +535,6 @@ class TopLevelCommand:
 
         print(getdoc(subject))
 
-    @metrics()
     def images(self, options):
         """
         List images used by the created containers.
@@ -663,7 +589,6 @@ class TopLevelCommand:
             ])
         print(Formatter.table(headers, rows))
 
-    @metrics()
     def kill(self, options):
         """
         Force stop service containers.
@@ -678,7 +603,6 @@ class TopLevelCommand:
 
         self.project.kill(service_names=options['SERVICE'], signal=signal)
 
-    @metrics()
     def logs(self, options):
         """
         View output from containers.
@@ -686,12 +610,11 @@ class TopLevelCommand:
         Usage: logs [options] [--] [SERVICE...]
 
         Options:
-            --no-color          Produce monochrome output.
-            -f, --follow        Follow log output.
-            -t, --timestamps    Show timestamps.
-            --tail="all"        Number of lines to show from the end of the logs
-                                for each container.
-            --no-log-prefix     Don't print prefix in logs.
+            --no-color      Produce monochrome output.
+            -f, --follow    Follow log output.
+            -t, --timestamps    Show timestamps.
+            --tail="all"    Number of lines to show from the end of the logs
+                            for each container.
         """
         containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
@@ -710,12 +633,10 @@ class TopLevelCommand:
         log_printer_from_project(
             self.project,
             containers,
-            options['--no-color'],
+            set_no_color_if_clicolor(options['--no-color']),
             log_args,
-            event_stream=self.project.events(service_names=options['SERVICE']),
-            keep_prefix=not options['--no-log-prefix']).run()
+            event_stream=self.project.events(service_names=options['SERVICE'])).run()
 
-    @metrics()
     def pause(self, options):
         """
         Pause services.
@@ -725,7 +646,6 @@ class TopLevelCommand:
         containers = self.project.pause(service_names=options['SERVICE'])
         exit_if(not containers, 'No containers to pause', 1)
 
-    @metrics()
     def port(self, options):
         """
         Print the public port for a port binding.
@@ -747,7 +667,6 @@ class TopLevelCommand:
             options['PRIVATE_PORT'],
             protocol=options.get('--protocol') or 'tcp') or '')
 
-    @metrics()
     def ps(self, options):
         """
         List containers.
@@ -804,7 +723,6 @@ class TopLevelCommand:
             ])
         print(Formatter.table(headers, rows))
 
-    @metrics()
     def pull(self, options):
         """
         Pulls images for services defined in a Compose file, but does not start the containers.
@@ -828,7 +746,6 @@ class TopLevelCommand:
             include_deps=options.get('--include-deps'),
         )
 
-    @metrics()
     def push(self, options):
         """
         Pushes images for services.
@@ -843,7 +760,6 @@ class TopLevelCommand:
             ignore_push_failures=options.get('--ignore-push-failures')
         )
 
-    @metrics()
     def rm(self, options):
         """
         Removes stopped service containers.
@@ -888,7 +804,6 @@ class TopLevelCommand:
         else:
             print("No stopped containers")
 
-    @metrics()
     def run(self, options):
         """
         Run a one-off command on a service.
@@ -949,7 +864,6 @@ class TopLevelCommand:
             self.toplevel_options, self.toplevel_environment
         )
 
-    @metrics()
     def scale(self, options):
         """
         Set number of containers to run for a service.
@@ -978,7 +892,6 @@ class TopLevelCommand:
         for service_name, num in parse_scale_args(options['SERVICE=NUM']).items():
             self.project.get_service(service_name).scale(num, timeout=timeout)
 
-    @metrics()
     def start(self, options):
         """
         Start existing containers.
@@ -988,7 +901,6 @@ class TopLevelCommand:
         containers = self.project.start(service_names=options['SERVICE'])
         exit_if(not containers, 'No containers to start', 1)
 
-    @metrics()
     def stop(self, options):
         """
         Stop running containers without removing them.
@@ -1004,7 +916,6 @@ class TopLevelCommand:
         timeout = timeout_from_opts(options)
         self.project.stop(service_names=options['SERVICE'], timeout=timeout)
 
-    @metrics()
     def restart(self, options):
         """
         Restart running containers.
@@ -1019,7 +930,6 @@ class TopLevelCommand:
         containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout)
         exit_if(not containers, 'No containers to restart', 1)
 
-    @metrics()
     def top(self, options):
         """
         Display the running processes
@@ -1047,7 +957,6 @@ class TopLevelCommand:
             print(container.name)
             print(Formatter.table(headers, rows))
 
-    @metrics()
     def unpause(self, options):
         """
         Unpause services.
@@ -1057,7 +966,6 @@ class TopLevelCommand:
         containers = self.project.unpause(service_names=options['SERVICE'])
         exit_if(not containers, 'No containers to unpause', 1)
 
-    @metrics()
     def up(self, options):
         """
         Builds, (re)creates, starts, and attaches to containers for a service.
@@ -1109,7 +1017,6 @@ class TopLevelCommand:
                                        container. Implies --abort-on-container-exit.
             --scale SERVICE=NUM        Scale SERVICE to NUM instances. Overrides the
                                        `scale` setting in the Compose file if present.
-            --no-log-prefix            Don't print prefix in logs.
         """
         start_deps = not options['--no-deps']
         always_recreate_deps = options['--always-recreate-deps']
@@ -1121,7 +1028,6 @@ class TopLevelCommand:
         detached = options.get('--detach')
         no_start = options.get('--no-start')
         attach_dependencies = options.get('--attach-dependencies')
-        keep_prefix = not options.get('--no-log-prefix')
 
         if detached and (cascade_stop or exit_value_from or attach_dependencies):
             raise UserError(
@@ -1136,7 +1042,7 @@ class TopLevelCommand:
         for excluded in [x for x in opts if options.get(x) and no_start]:
             raise UserError('--no-start and {} cannot be combined.'.format(excluded))
 
-        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD', True)
+        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
 
         with up_shutdown_context(self.project, service_names, timeout, detached):
             warn_for_swarm_mode(self.project.client)
@@ -1158,7 +1064,6 @@ class TopLevelCommand:
                 renew_anonymous_volumes=options.get('--renew-anon-volumes'),
                 silent=options.get('--quiet-pull'),
                 cli=native_builder,
-                attach_dependencies=attach_dependencies,
             )
 
             try:
@@ -1186,11 +1091,10 @@ class TopLevelCommand:
                 log_printer = log_printer_from_project(
                     self.project,
                     attached_containers,
-                    options['--no-color'],
+                    set_no_color_if_clicolor(options['--no-color']),
                     {'follow': True},
                     cascade_stop,
-                    event_stream=self.project.events(service_names=service_names),
-                    keep_prefix=keep_prefix)
+                    event_stream=self.project.events(service_names=service_names))
                 print("Attaching to", list_containers(log_printer.containers))
                 cascade_starter = log_printer.run()
 
@@ -1208,7 +1112,6 @@ class TopLevelCommand:
             sys.exit(exit_code)
 
     @classmethod
-    @metrics()
     def version(cls, options):
         """
         Show version information and quit.
@@ -1473,28 +1376,29 @@ def get_docker_start_call(container_options, container_id):
 
 
 def log_printer_from_project(
-        project,
-        containers,
-        monochrome,
-        log_args,
-        cascade_stop=False,
-        event_stream=None,
-        keep_prefix=True,
+    project,
+    containers,
+    monochrome,
+    log_args,
+    cascade_stop=False,
+    event_stream=None,
 ):
     return LogPrinter(
-        [c for c in containers if c.log_driver not in (None, 'none')],
-        build_log_presenters(project.service_names, monochrome, keep_prefix),
+        containers,
+        build_log_presenters(project.service_names, monochrome),
         event_stream or project.events(),
         cascade_stop=cascade_stop,
         log_args=log_args)
 
 
 def filter_attached_containers(containers, service_names, attach_dependencies=False):
-    return filter_attached_for_up(
-        containers,
-        service_names,
-        attach_dependencies,
-        lambda container: container.service)
+    if attach_dependencies or not service_names:
+        return containers
+
+    return [
+        container
+        for container in containers if container.service in service_names
+    ]
 
 
 @contextlib.contextmanager
@@ -1670,3 +1574,7 @@ def warn_for_swarm_mode(client):
             "To deploy your application across the swarm, "
             "use `docker stack deploy`.\n"
         )
+
+
+def set_no_color_if_clicolor(no_color_flag):
+    return no_color_flag or os.environ.get('CLICOLOR') == "0"
@@ -20,7 +20,6 @@ from ..utils import json_hash
from ..utils import parse_bytes
from ..utils import parse_nanoseconds_int
from ..utils import splitdrive
from ..version import ComposeVersion
from .environment import env_vars_from_file
from .environment import Environment
from .environment import split_env
@@ -133,7 +132,6 @@ ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
    'logging',
    'network_mode',
    'platform',
    'profiles',
    'scale',
    'stop_grace_period',
]
@@ -186,13 +184,6 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
    def from_filename(cls, filename):
        return cls(filename, load_yaml(filename))

    @cached_property
    def config_version(self):
        version = self.config.get('version', None)
        if isinstance(version, dict):
            return V1
        return ComposeVersion(version) if version else self.version

    @cached_property
    def version(self):
        version = self.config.get('version', None)
@@ -231,13 +222,15 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
                'Version "{}" in "{}" is invalid.'
                .format(version, self.filename))

        if version.startswith("1"):
        if version.startswith("1"):
            version = V1

        if version == V1:
            raise ConfigurationError(
                'Version in "{}" is invalid. {}'
                .format(self.filename, VERSION_EXPLANATION)
            )

        return VERSION
        return version

    def get_service(self, name):
        return self.get_service_dicts()[name]
@@ -260,10 +253,8 @@ class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
        return {} if self.version == V1 else self.config.get('configs', {})


class Config(namedtuple('_Config', 'config_version version services volumes networks secrets configs')):
class Config(namedtuple('_Config', 'version services volumes networks secrets configs')):
    """
    :param config_version: configuration file version
    :type config_version: int
    :param version: configuration version
    :type version: int
    :param services: List of service description dictionaries
@@ -374,23 +365,6 @@ def find_candidates_in_parent_dirs(filenames, path):
    return (candidates, path)


def check_swarm_only_config(service_dicts):
    warning_template = (
        "Some services ({services}) use the '{key}' key, which will be ignored. "
        "Compose does not support '{key}' configuration - use "
        "`docker stack deploy` to deploy to a swarm."
    )
    key = 'configs'
    services = [s for s in service_dicts if s.get(key)]
    if services:
        log.warning(
            warning_template.format(
                services=", ".join(sorted(s['name'] for s in services)),
                key=key
            )
        )


def load(config_details, interpolate=True):
    """Load the configuration from a working directory and a list of
    configuration files. Files are loaded in order, and merged on top
@@ -427,10 +401,9 @@ def load(config_details, interpolate=True):
    for service_dict in service_dicts:
        match_named_volumes(service_dict, volumes)

    check_swarm_only_config(service_dicts)
    version = main_file.version

    return Config(main_file.config_version, main_file.version,
                  service_dicts, volumes, networks, secrets, configs)
    return Config(version, service_dicts, volumes, networks, secrets, configs)


def load_mapping(config_files, get_func, entity_type, working_dir=None):
@@ -450,36 +423,20 @@ def load_mapping(config_files, get_func, entity_type, working_dir=None):
        elif not config.get('name'):
            config['name'] = name

        if 'driver_opts' in config:
            config['driver_opts'] = build_string_dict(
                config['driver_opts']
            )

        if 'labels' in config:
            config['labels'] = parse_labels(config['labels'])

        if 'file' in config:
            config['file'] = expand_path(working_dir, config['file'])

        if 'driver_opts' in config:
            config['driver_opts'] = build_string_dict(
                config['driver_opts']
            )
            device = format_device_option(entity_type, config)
            if device:
                config['driver_opts']['device'] = device
    return mapping


def format_device_option(entity_type, config):
    if entity_type != 'Volume':
        return
    # default driver is 'local'
    driver = config.get('driver', 'local')
    if driver != 'local':
        return
    o = config['driver_opts'].get('o')
    device = config['driver_opts'].get('device')
    if o and o == 'bind' and device:
        fullpath = os.path.abspath(os.path.expanduser(device))
        return fullpath


def validate_external(entity_type, name, config, version):
    for k in config.keys():
        if entity_type == 'Network' and k == 'driver':
@@ -1067,7 +1024,7 @@ def merge_service_dicts(base, override, version):

    for field in [
        'cap_add', 'cap_drop', 'expose', 'external_links',
        'volumes_from', 'device_cgroup_rules', 'profiles',
        'volumes_from', 'device_cgroup_rules',
    ]:
        md.merge_field(field, merge_unique_items_lists, default=[])

@@ -1157,7 +1114,6 @@ def merge_deploy(base, override):
        md['resources'] = dict(resources_md)
    if md.needs_merge('placement'):
        placement_md = MergeDict(md.base.get('placement') or {}, md.override.get('placement') or {})
        placement_md.merge_scalar('max_replicas_per_node')
        placement_md.merge_field('constraints', merge_unique_items_lists, default=[])
        placement_md.merge_field('preferences', merge_unique_objects_lists, default=[])
        md['placement'] = dict(placement_md)
@@ -1186,7 +1142,6 @@ def merge_reservations(base, override):
    md.merge_scalar('cpus')
    md.merge_scalar('memory')
    md.merge_sequence('generic_resources', types.GenericResource.parse)
    md.merge_field('devices', merge_unique_objects_lists, default=[])
    return dict(md)
@@ -1,16 +1,14 @@
{
  "$schema": "http://json-schema.org/draft/2019-09/schema#",
  "id": "compose_spec.json",
  "id": "config_schema_compose_spec.json",
  "type": "object",
  "title": "Compose Specification",
  "description": "The Compose file is a YAML file defining a multi-containers based application.",

  "properties": {
    "version": {
      "type": "string",
      "description": "Version of the Compose specification used. Tools not implementing required version MUST reject the configuration file."
    },

    "services": {
      "id": "#/properties/services",
      "type": "object",
@@ -21,7 +19,6 @@
      },
      "additionalProperties": false
    },

    "networks": {
      "id": "#/properties/networks",
      "type": "object",
@@ -31,7 +28,6 @@
        }
      }
    },

    "volumes": {
      "id": "#/properties/volumes",
      "type": "object",
@@ -42,7 +38,6 @@
      },
      "additionalProperties": false
    },

    "secrets": {
      "id": "#/properties/secrets",
      "type": "object",
@@ -53,7 +48,6 @@
      },
      "additionalProperties": false
    },

    "configs": {
      "id": "#/properties/configs",
      "type": "object",
@@ -65,16 +59,12 @@
      "additionalProperties": false
    }
  },

  "patternProperties": {"^x-": {}},
  "additionalProperties": false,

  "definitions": {

    "service": {
      "id": "#/definitions/service",
      "type": "object",

      "properties": {
        "deploy": {"$ref": "#/definitions/deployment"},
        "build": {
@@ -87,7 +77,7 @@
            "dockerfile": {"type": "string"},
            "args": {"$ref": "#/definitions/list_or_dict"},
            "labels": {"$ref": "#/definitions/list_or_dict"},
            "cache_from": {"type": "array", "items": {"type": "string"}},
            "cache_from": {"$ref": "#/definitions/list_of_strings"},
            "network": {"type": "string"},
            "target": {"type": "string"},
            "shm_size": {"type": ["integer", "string"]},
@@ -163,7 +153,7 @@
        "cpu_period": {"type": ["number", "string"]},
        "cpu_rt_period": {"type": ["number", "string"]},
        "cpu_rt_runtime": {"type": ["number", "string"]},
        "cpus": {"type": ["number", "string"]},
        "cpus": {"type": "number", "minimum": 0},
        "cpuset": {"type": "string"},
        "credential_spec": {
          "type": "object",
@@ -200,6 +190,7 @@
        "device_cgroup_rules": {"$ref": "#/definitions/list_of_strings"},
        "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "dns": {"$ref": "#/definitions/string_or_list"},

        "dns_opt": {"type": "array","items": {"type": "string"}, "uniqueItems": true},
        "dns_search": {"$ref": "#/definitions/string_or_list"},
        "domainname": {"type": "string"},
@@ -220,12 +211,12 @@
          },
          "uniqueItems": true
        },

        "extends": {
          "oneOf": [
            {"type": "string"},
            {
              "type": "object",

              "properties": {
                "service": {"type": "string"},
                "file": {"type": "string"}
@@ -254,7 +245,6 @@
        "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "logging": {
          "type": "object",

          "properties": {
            "driver": {"type": "string"},
            "options": {
@@ -268,7 +258,7 @@
          "patternProperties": {"^x-": {}}
        },
        "mac_address": {"type": "string"},
        "mem_limit": {"type": ["number", "string"]},
        "mem_limit": {"type": "string"},
        "mem_reservation": {"type": ["string", "integer"]},
        "mem_swappiness": {"type": "integer"},
        "memswap_limit": {"type": ["number", "string"]},
@@ -328,9 +318,8 @@
          "uniqueItems": true
        },
        "privileged": {"type": "boolean"},
        "profiles": {"$ref": "#/definitions/list_of_strings"},
        "pull_policy": {"type": "string", "enum": [
          "always", "never", "if_not_present", "build"
          "always", "never", "if_not_present"
        ]},
        "read_only": {"type": "boolean"},
        "restart": {"type": "string"},
@@ -436,9 +425,9 @@
              "additionalProperties": false,
              "patternProperties": {"^x-": {}}
            }
          ]
          },
          "uniqueItems": true
          ],
          "uniqueItems": true
        }
        },
        "volumes_from": {
          "type": "array",
@@ -514,7 +503,7 @@
        "limits": {
          "type": "object",
          "properties": {
            "cpus": {"type": ["number", "string"]},
            "cpus": {"type": "number", "minimum": 0},
            "memory": {"type": "string"}
          },
          "additionalProperties": false,
@@ -523,10 +512,9 @@
        "reservations": {
          "type": "object",
          "properties": {
            "cpus": {"type": ["number", "string"]},
            "cpus": {"type": "number", "minimum": 0},
            "memory": {"type": "string"},
            "generic_resources": {"$ref": "#/definitions/generic_resources"},
            "devices": {"$ref": "#/definitions/devices"}
            "generic_resources": {"$ref": "#/definitions/generic_resources"}
          },
          "additionalProperties": false,
          "patternProperties": {"^x-": {}}
@@ -570,7 +558,6 @@
      "additionalProperties": false,
      "patternProperties": {"^x-": {}}
    },

    "generic_resources": {
      "id": "#/definitions/generic_resources",
      "type": "array",
@@ -591,24 +578,6 @@
        "patternProperties": {"^x-": {}}
      }
    },

    "devices": {
      "id": "#/definitions/devices",
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "capabilities": {"$ref": "#/definitions/list_of_strings"},
          "count": {"type": ["string", "integer"]},
          "device_ids": {"$ref": "#/definitions/list_of_strings"},
          "driver":{"type": "string"},
          "options":{"$ref": "#/definitions/list_or_dict"}
        },
        "additionalProperties": false,
        "patternProperties": {"^x-": {}}
      }
    },

    "network": {
      "id": "#/definitions/network",
      "type": ["object", "null"],
@@ -638,10 +607,10 @@
              "additionalProperties": false,
              "patternProperties": {"^.+$": {"type": "string"}}
            }
          },
          "additionalProperties": false,
          "patternProperties": {"^x-": {}}
        }
      }
        },
        "additionalProperties": false,
        "patternProperties": {"^x-": {}}
      },
      "options": {
        "type": "object",
@@ -671,7 +640,6 @@
      "additionalProperties": false,
      "patternProperties": {"^x-": {}}
    },

    "volume": {
      "id": "#/definitions/volume",
      "type": ["object", "null"],
@@ -700,7 +668,6 @@
      "additionalProperties": false,
      "patternProperties": {"^x-": {}}
    },

    "secret": {
      "id": "#/definitions/secret",
      "type": "object",
@@ -726,7 +693,6 @@
      "additionalProperties": false,
      "patternProperties": {"^x-": {}}
    },

    "config": {
      "id": "#/definitions/config",
      "type": "object",
@@ -748,20 +714,17 @@
      "additionalProperties": false,
      "patternProperties": {"^x-": {}}
    },

    "string_or_list": {
      "oneOf": [
        {"type": "string"},
        {"$ref": "#/definitions/list_of_strings"}
      ]
    },

    "list_of_strings": {
      "type": "array",
      "items": {"type": "string"},
      "uniqueItems": true
    },

    "list_or_dict": {
      "oneOf": [
        {
@@ -776,7 +739,6 @@
        {"type": "array", "items": {"type": "string"}, "uniqueItems": true}
      ]
    },

    "blkio_limit": {
      "type": "object",
      "properties": {
@@ -793,7 +755,6 @@
      },
      "additionalProperties": false
    },

    "constraints": {
      "service": {
        "id": "#/definitions/constraints/service",
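For orientation, a schema like the one above is typically applied with the `jsonschema` library pinned in the requirements further down. A minimal sketch, assuming the renamed `config_schema_compose_spec.json` path from the `load_jsonschema` hunk below:

```python
import json

import jsonschema

# Path assumed from the load_jsonschema change shown later in this diff.
with open('compose/config/config_schema_compose_spec.json') as f:
    schema = json.load(f)

# Satisfies the service definition either way: under the stricter variant
# above, `cpus` must be a non-negative number rather than a string.
config = {'services': {'web': {'image': 'alpine', 'cpus': 0.5}}}
jsonschema.validate(config, schema)
```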
@@ -113,13 +113,13 @@ class Environment(dict):
            )
        return super().get(key, *args, **kwargs)

    def get_boolean(self, key, default=False):
    def get_boolean(self, key):
        # Convert a value to a boolean using "common sense" rules.
        # Unset, empty, "0" and "false" (i-case) yield False.
        # All other values yield True.
        value = self.get(key)
        if not value:
            return default
            return False
        if value.lower() in ['0', 'false']:
            return False
        return True
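The behavioural difference in the `get_boolean` hunk above, restated as a runnable sketch (the plain dict stands in for an `Environment`; the keys are illustrative):

```python
def get_boolean(env, key, default=False):
    # Unset, empty, "0" and "false" (any case) yield False; the variant
    # with `default` lets callers pick the fallback for unset/empty values.
    value = env.get(key)
    if not value:
        return default
    return value.lower() not in ('0', 'false')

env = {'COMPOSE_PARALLEL_PULL': '0', 'COMPOSE_INTERACTIVE': 'yes'}
assert get_boolean(env, 'COMPOSE_PARALLEL_PULL') is False   # "0" -> False
assert get_boolean(env, 'COMPOSE_INTERACTIVE') is True      # anything else -> True
assert get_boolean(env, 'MISSING', default=True) is True    # only hits the default
```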
@@ -111,14 +111,12 @@ class TemplateWithDefaults(Template):
                var, _, err = braced.partition(':?')
                result = mapping.get(var)
                if not result:
                    err = err or var
                    raise UnsetRequiredSubstitution(err)
                return result
            elif '?' == sep:
                var, _, err = braced.partition('?')
                if var in mapping:
                    return mapping.get(var)
                err = err or var
                raise UnsetRequiredSubstitution(err)

    # Modified from python2.7/string.py
@@ -44,7 +44,7 @@ yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type)


def denormalize_config(config, image_digests=None):
    result = {'version': str(config.config_version)}
    result = {'version': str(config.version)}
    denormalized_services = [
        denormalize_service_dict(
            service_dict,
@@ -121,6 +121,11 @@ def denormalize_service_dict(service_dict, version, image_digest=None):
    if version == V1 and 'network_mode' not in service_dict:
        service_dict['network_mode'] = 'bridge'

    if 'depends_on' in service_dict:
        service_dict['depends_on'] = sorted([
            svc for svc in service_dict['depends_on'].keys()
        ])

    if 'healthcheck' in service_dict:
        if 'interval' in service_dict['healthcheck']:
            service_dict['healthcheck']['interval'] = serialize_ns_time_value(
@@ -502,13 +502,13 @@ def get_schema_path():


def load_jsonschema(version):
    name = "compose_spec"
    suffix = "compose_spec"
    if version == V1:
        name = "config_schema_v1"
        suffix = "v1"

    filename = os.path.join(
        get_schema_path(),
        "{}.json".format(name))
        "config_schema_{}.json".format(suffix))

    if not os.path.exists(filename):
        raise ConfigurationError(
@@ -186,6 +186,11 @@ class Container:
    def log_driver(self):
        return self.get('HostConfig.LogConfig.Type')

    @property
    def has_api_logs(self):
        log_type = self.log_driver
        return not log_type or log_type in ('json-file', 'journald', 'local')

    @property
    def human_readable_health_status(self):
        """ Generate UP status string with up time and health

@@ -199,7 +204,11 @@ class Container:
        return status_string

    def attach_log_stream(self):
        self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
        """A log stream can only be attached if the container uses a
        json-file, journald or local log driver.
        """
        if self.has_api_logs:
            self.log_stream = self.attach(stdout=True, stderr=True, stream=True)

    def get(self, key):
        """Return a value from the container or None if the value is not set.
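The guard added to `attach_log_stream` reflects which log drivers the Docker API can read back; a standalone sketch of that predicate:

```python
# Only these drivers (or the daemon default, reported as an empty value)
# support fetching logs back through the API.
API_LOG_DRIVERS = ('json-file', 'journald', 'local')

def has_api_logs(log_driver):
    return not log_driver or log_driver in API_LOG_DRIVERS

assert has_api_logs(None)           # daemon default driver
assert has_api_logs('json-file')
assert not has_api_logs('none')     # logging disabled: nothing to attach
assert not has_api_logs('syslog')   # shipped elsewhere, not readable via API
```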
@@ -1,64 +0,0 @@
import os
from enum import Enum

import requests
from docker import ContextAPI
from docker.transport import UnixHTTPAdapter

from compose.const import IS_WINDOWS_PLATFORM

if IS_WINDOWS_PLATFORM:
    from docker.transport import NpipeHTTPAdapter


class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"
    CANCELED = "canceled"


class MetricsSource:
    CLI = "docker-compose"


if IS_WINDOWS_PLATFORM:
    METRICS_SOCKET_FILE = 'npipe://\\\\.\\pipe\\docker_cli'
else:
    METRICS_SOCKET_FILE = 'http+unix:///var/run/docker-cli.sock'


class MetricsCommand(requests.Session):
    """
    Representation of a command in the metrics.
    """

    def __init__(self, command,
                 context_type=None, status=Status.SUCCESS,
                 source=MetricsSource.CLI, uri=None):
        super().__init__()
        self.command = "compose " + command if command else "compose --help"
        self.context = context_type or ContextAPI.get_current_context().context_type or 'moby'
        self.source = source
        self.status = status.value
        self.uri = uri or os.environ.get("METRICS_SOCKET_FILE", METRICS_SOCKET_FILE)
        if IS_WINDOWS_PLATFORM:
            self.mount("http+unix://", NpipeHTTPAdapter(self.uri))
        else:
            self.mount("http+unix://", UnixHTTPAdapter(self.uri))

    def send_metrics(self):
        try:
            return self.post("http+unix://localhost/usage",
                             json=self.to_map(),
                             timeout=.05,
                             headers={'Content-Type': 'application/json'})
        except Exception as e:
            return e

    def to_map(self):
        return {
            'command': self.command,
            'context': self.context,
            'source': self.source,
            'status': self.status,
        }
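Illustrative use of the `MetricsCommand` session defined above: report a failed `up` invocation to the CLI metrics socket. The call is fire-and-forget; as shown in the source, `send_metrics()` returns any exception instead of raising it.

```python
from compose.metrics.client import MetricsCommand, Status

# Posts {"command": "compose up", ..., "status": "failure"} to the socket;
# a connection error is swallowed and returned rather than raised.
MetricsCommand('up', status=Status.FAILURE).send_metrics()
```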
@@ -1,21 +0,0 @@
import functools

from compose.metrics.client import MetricsCommand
from compose.metrics.client import Status


class metrics:
    def __init__(self, command_name=None):
        self.command_name = command_name

    def __call__(self, fn):
        @functools.wraps(fn,
                         assigned=functools.WRAPPER_ASSIGNMENTS,
                         updated=functools.WRAPPER_UPDATES)
        def wrapper(*args, **kwargs):
            if not self.command_name:
                self.command_name = fn.__name__
            result = fn(*args, **kwargs)
            MetricsCommand(self.command_name, status=Status.SUCCESS).send_metrics()
            return result
        return wrapper
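Usage sketch for the `metrics` decorator above (the module path is assumed from the imports in the file): with no argument it falls back to the wrapped function's name, so both spellings report the same command.

```python
from compose.metrics.decorator import metrics  # assumed module path

@metrics()
def up(options):
    ...

@metrics('up')          # equivalent: the command name given explicitly
def up_with_name(options):
    ...
```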
@@ -11,7 +11,6 @@ from threading import Thread
from docker.errors import APIError
from docker.errors import ImageNotFound

from compose.cli.colors import AnsiMode
from compose.cli.colors import green
from compose.cli.colors import red
from compose.cli.signals import ShutdownException
@@ -84,7 +83,10 @@ def parallel_execute(objects, func, get_name, msg, get_deps=None, limit=None, fa
    objects = list(objects)
    stream = sys.stderr

    writer = ParallelStreamWriter.get_or_assign_instance(ParallelStreamWriter(stream))
    if ParallelStreamWriter.instance:
        writer = ParallelStreamWriter.instance
    else:
        writer = ParallelStreamWriter(stream)

    for obj in objects:
        writer.add_object(msg, get_name(obj))
@@ -257,37 +259,19 @@ class ParallelStreamWriter:
    to jump to the correct line, and write over the line.
    """

    default_ansi_mode = AnsiMode.AUTO
    write_lock = Lock()

    noansi = False
    lock = Lock()
    instance = None
    instance_lock = Lock()

    @classmethod
    def get_instance(cls):
        return cls.instance
    def set_noansi(cls, value=True):
        cls.noansi = value

    @classmethod
    def get_or_assign_instance(cls, writer):
        cls.instance_lock.acquire()
        try:
            if cls.instance is None:
                cls.instance = writer
            return cls.instance
        finally:
            cls.instance_lock.release()

    @classmethod
    def set_default_ansi_mode(cls, ansi_mode):
        cls.default_ansi_mode = ansi_mode

    def __init__(self, stream, ansi_mode=None):
        if ansi_mode is None:
            ansi_mode = self.default_ansi_mode
    def __init__(self, stream):
        self.stream = stream
        self.use_ansi_codes = ansi_mode.use_ansi_codes(stream)
        self.lines = []
        self.width = 0
        ParallelStreamWriter.instance = self

    def add_object(self, msg, obj_index):
        if msg is None:
@@ -301,7 +285,7 @@ class ParallelStreamWriter:
        return self._write_noansi(msg, obj_index, '')

    def _write_ansi(self, msg, obj_index, status):
        self.write_lock.acquire()
        self.lock.acquire()
        position = self.lines.index(msg + obj_index)
        diff = len(self.lines) - position
        # move up
@@ -313,7 +297,7 @@ class ParallelStreamWriter:
        # move back down
        self.stream.write("%c[%dB" % (27, diff))
        self.stream.flush()
        self.write_lock.release()
        self.lock.release()

    def _write_noansi(self, msg, obj_index, status):
        self.stream.write(
@@ -326,10 +310,17 @@ class ParallelStreamWriter:
    def write(self, msg, obj_index, status, color_func):
        if msg is None:
            return
        if self.use_ansi_codes:
            self._write_ansi(msg, obj_index, color_func(status))
        else:
        if self.noansi:
            self._write_noansi(msg, obj_index, status)
        else:
            self._write_ansi(msg, obj_index, color_func(status))


def get_stream_writer():
    instance = ParallelStreamWriter.instance
    if instance is None:
        raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
    return instance


def parallel_operation(containers, operation, options, message):
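A self-contained sketch of the `get_or_assign_instance` pattern above: the first writer assigned under the lock wins, and every later caller receives that shared instance.

```python
import sys
from threading import Lock

class WriterSingleton:
    instance = None
    instance_lock = Lock()

    def __init__(self, stream):
        self.stream = stream

    @classmethod
    def get_or_assign_instance(cls, writer):
        # Keep the first writer ever assigned; discard later candidates.
        with cls.instance_lock:
            if cls.instance is None:
                cls.instance = writer
            return cls.instance

first = WriterSingleton.get_or_assign_instance(WriterSingleton(sys.stderr))
second = WriterSingleton.get_or_assign_instance(WriterSingleton(sys.stdout))
assert first is second and second.stream is sys.stderr
```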
@@ -39,7 +39,6 @@ from .service import Service
from .service import ServiceIpcMode
from .service import ServiceNetworkMode
from .service import ServicePidMode
from .utils import filter_attached_for_up
from .utils import microseconds_from_time_nano
from .utils import truncate_string
from .volume import ProjectVolumes
@@ -69,15 +68,13 @@ class Project:
    """
    A collection of services.
    """
    def __init__(self, name, services, client, networks=None, volumes=None, config_version=None,
                 enabled_profiles=None):
    def __init__(self, name, services, client, networks=None, volumes=None, config_version=None):
        self.name = name
        self.services = services
        self.client = client
        self.volumes = volumes or ProjectVolumes({})
        self.networks = networks or ProjectNetworks({}, False)
        self.config_version = config_version
        self.enabled_profiles = enabled_profiles or []

    def labels(self, one_off=OneOffFilter.exclude, legacy=False):
        name = self.name
@@ -89,8 +86,7 @@ class Project:
        return labels

    @classmethod
    def from_config(cls, name, config_data, client, default_platform=None, extra_labels=None,
                    enabled_profiles=None):
    def from_config(cls, name, config_data, client, default_platform=None, extra_labels=None):
        """
        Construct a Project from a config.Config object.
        """
@@ -102,7 +98,7 @@ class Project:
            networks,
            use_networking)
        volumes = ProjectVolumes.from_config(name, config_data, client)
        project = cls(name, [], client, project_networks, volumes, config_data.version, enabled_profiles)
        project = cls(name, [], client, project_networks, volumes, config_data.version)

        for service_dict in config_data.services:
            service_dict = dict(service_dict)
@@ -132,7 +128,7 @@ class Project:
                config_data.secrets)

            service_dict['scale'] = project.get_service_scale(service_dict)
            service_dict['device_requests'] = project.get_device_requests(service_dict)

            service_dict = translate_credential_spec_to_security_opt(service_dict)
            service_dict, ignored_keys = translate_deploy_keys_to_container_config(
                service_dict
@@ -189,7 +185,7 @@ class Project:
        if name not in valid_names:
            raise NoSuchService(name)

    def get_services(self, service_names=None, include_deps=False, auto_enable_profiles=True):
    def get_services(self, service_names=None, include_deps=False):
        """
        Returns a list of this project's services filtered
        by the provided list of names, or all services if service_names is None
@@ -202,36 +198,15 @@
        reordering as needed to resolve dependencies.

        Raises NoSuchService if any of the named services do not exist.

        Raises ConfigurationError if any service depended on is not enabled by active profiles
        """
        # create a copy so we can *locally* add auto-enabled profiles later
        enabled_profiles = self.enabled_profiles.copy()

        if service_names is None or len(service_names) == 0:
            auto_enable_profiles = False
            service_names = [
                service.name
                for service in self.services
                if service.enabled_for_profiles(enabled_profiles)
            ]
            service_names = self.service_names

        unsorted = [self.get_service(name) for name in service_names]
        services = [s for s in self.services if s in unsorted]

        if auto_enable_profiles:
            # enable profiles of explicitly targeted services
            for service in services:
                for profile in service.get_profiles():
                    if profile not in enabled_profiles:
                        enabled_profiles.append(profile)

        if include_deps:
            services = reduce(
                lambda acc, s: self._inject_deps(acc, s, enabled_profiles),
                services,
                []
            )
            services = reduce(self._inject_deps, services, [])

        uniques = []
        [uniques.append(s) for s in services if s not in uniques]
@@ -356,31 +331,6 @@ class Project:
                    max_replicas))
        return scale

    def get_device_requests(self, service_dict):
        deploy_dict = service_dict.get('deploy', None)
        if not deploy_dict:
            return

        resources = deploy_dict.get('resources', None)
        if not resources or not resources.get('reservations', None):
            return
        devices = resources['reservations'].get('devices')
        if not devices:
            return

        for dev in devices:
            count = dev.get("count", -1)
            if not isinstance(count, int):
                if count != "all":
                    raise ConfigurationError(
                        'Invalid value "{}" for devices count'.format(dev["count"]),
                        '(expected integer or "all")')
                dev["count"] = -1

            if 'capabilities' in dev:
                dev['capabilities'] = [dev['capabilities']]
        return devices

    def start(self, service_names=None, **options):
        containers = []

@@ -462,12 +412,10 @@ class Project:
            self.remove_images(remove_image_type)

    def remove_images(self, remove_image_type):
        for service in self.services:
        for service in self.get_services():
            service.remove_image(remove_image_type)

    def restart(self, service_names=None, **options):
        # filter service_names by enabled profiles
        service_names = [s.name for s in self.get_services(service_names)]
        containers = self.containers(service_names, stopped=True)

        parallel.parallel_execute(
@@ -490,6 +438,7 @@ class Project:
                log.info('%s uses an image, skipping' % service.name)

        if cli:
            log.warning("Native build is an experimental feature and could change at any time")
            if parallel_build:
                log.warning("Flag '--parallel' is ignored when building with "
                            "COMPOSE_DOCKER_CLI_BUILD=1")
@@ -645,10 +594,12 @@ class Project:
           silent=False,
           cli=False,
           one_off=False,
           attach_dependencies=False,
           override_options=None,
           ):

        if cli:
            log.warning("Native build is an experimental feature and could change at any time")

        self.initialize()
        if not ignore_orphans:
            self.find_orphan_containers(remove_orphans)
@@ -669,17 +620,12 @@ class Project:
            one_off=service_names if one_off else [],
        )

        services_to_attach = filter_attached_for_up(
            services,
            service_names,
            attach_dependencies,
            lambda service: service.name)

        def do(service):

            return service.execute_convergence_plan(
                plans[service.name],
                timeout=timeout,
                detached=detached or (service not in services_to_attach),
                detached=detached,
                scale_override=scale_override.get(service.name),
                rescale=rescale,
                start=start,
@@ -749,7 +695,7 @@ class Project:

        return plans

    def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=True, silent=False,
    def pull(self, service_names=None, ignore_pull_failures=False, parallel_pull=False, silent=False,
             include_deps=False):
        services = self.get_services(service_names, include_deps)

@@ -783,9 +729,7 @@ class Project:
            return

        try:
            writer = parallel.ParallelStreamWriter.get_instance()
            if writer is None:
                raise RuntimeError('ParallelStreamWriter has not yet been instantiated')
            writer = parallel.get_stream_writer()
            for event in strm:
                if 'status' not in event:
                    continue
@@ -886,26 +830,14 @@ class Project:
                )
            )

    def _inject_deps(self, acc, service, enabled_profiles):
    def _inject_deps(self, acc, service):
        dep_names = service.get_dependency_names()

        if len(dep_names) > 0:
            dep_services = self.get_services(
                service_names=list(set(dep_names)),
                include_deps=True,
                auto_enable_profiles=False
                include_deps=True
            )

            for dep in dep_services:
                if not dep.enabled_for_profiles(enabled_profiles):
                    raise ConfigurationError(
                        'Service "{dep_name}" was pulled in as a dependency of '
                        'service "{service_name}" but is not enabled by the '
                        'active profiles. '
                        'You may fix this by adding a common profile to '
                        '"{dep_name}" and "{service_name}".'
                        .format(dep_name=dep.name, service_name=service.name)
                    )
        else:
            dep_services = []
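The `get_device_requests` hunk above normalises the `count` field of a GPU device reservation; a sketch of just that rule (`-1` is how the Docker API encodes "all available devices"):

```python
def normalize_device_count(count):
    if isinstance(count, int):
        return count
    if count == "all":
        return -1        # the Docker API's encoding for "every device"
    raise ValueError('expected an integer or "all", got {!r}'.format(count))

assert normalize_device_count(2) == 2
assert normalize_device_count("all") == -1
```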
@@ -77,7 +77,6 @@ HOST_CONFIG_KEYS = [
    'cpuset',
    'device_cgroup_rules',
    'devices',
    'device_requests',
    'dns',
    'dns_search',
    'dns_opt',
@@ -412,7 +411,7 @@ class Service:
        stopped = [c for c in containers if not c.is_running]

        if stopped:
            return ConvergencePlan('start', containers)
            return ConvergencePlan('start', stopped)

        return ConvergencePlan('noop', containers)

@@ -515,9 +514,8 @@ class Service:
            self._downscale(containers[scale:], timeout)
            containers = containers[:scale]
        if start:
            stopped = [c for c in containers if not c.is_running]
            _, errors = parallel_execute(
                stopped,
                containers,
                lambda c: self.start_container_if_stopped(c, attach_logs=not detached, quiet=True),
                lambda c: c.name,
                "Starting",
@@ -717,7 +715,7 @@ class Service:
            'volumes_from': [
                (v.source.name, v.mode)
                for v in self.volumes_from if isinstance(v.source, Service)
            ]
            ],
        }

    def get_dependency_names(self):
@@ -1017,7 +1015,6 @@ class Service:
            privileged=options.get('privileged', False),
            network_mode=self.network_mode.mode,
            devices=options.get('devices'),
            device_requests=options.get('device_requests'),
            dns=options.get('dns'),
            dns_opt=options.get('dns_opt'),
            dns_search=options.get('dns_search'),
@@ -1329,24 +1326,6 @@ class Service:

        return result

    def get_profiles(self):
        if 'profiles' not in self.options:
            return []

        return self.options.get('profiles')

    def enabled_for_profiles(self, enabled_profiles):
        # if service has no profiles specified it is always enabled
        if 'profiles' not in self.options:
            return True

        service_profiles = self.options.get('profiles')
        for profile in enabled_profiles:
            if profile in service_profiles:
                return True

        return False


def short_id_alias_exists(container, network):
    aliases = container.get(
@@ -1873,13 +1852,6 @@ class _CLIBuilder:
        command_builder.add_arg("--tag", tag)
        command_builder.add_arg("--target", target)
        command_builder.add_arg("--iidfile", iidfile)
        command_builder.add_arg("--platform", platform)
        command_builder.add_arg("--isolation", isolation)

        if extra_hosts:
            for host, ip in extra_hosts.items():
                command_builder.add_arg("--add-host", "{}:{}".format(host, ip))

        args = command_builder.build([path])

        magic_word = "Successfully built "
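The profile rule in the `Service` hunk above, condensed into a standalone sketch: a service with no `profiles` key is always enabled; otherwise at least one of its profiles must be active.

```python
def enabled_for_profiles(service_profiles, enabled_profiles):
    if service_profiles is None:      # no profiles declared: always enabled
        return True
    return any(p in service_profiles for p in enabled_profiles)

assert enabled_for_profiles(None, [])
assert enabled_for_profiles(['debug'], ['debug', 'test'])
assert not enabled_for_profiles(['debug'], ['test'])
```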
@@ -174,18 +174,3 @@ def truncate_string(s, max_chars=35):
    if len(s) > max_chars:
        return s[:max_chars - 2] + '...'
    return s


def filter_attached_for_up(items, service_names, attach_dependencies=False,
                           item_to_service_name=lambda x: x):
    """This function contains the logic of choosing which services to
    attach when doing docker-compose up. It may be used both with containers
    and services, and any other entities that map to service names -
    this mapping is provided by item_to_service_name."""
    if attach_dependencies or not service_names:
        return items

    return [
        item
        for item in items if item_to_service_name(item) in service_names
    ]
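`filter_attached_for_up` above is generic over anything that maps to a service name: with containers the mapping is `container.service`, with services it is `service.name`. A usage sketch with plain strings as the items (the default identity mapping):

```python
services = ['web', 'db', 'cache']

# Explicit targets: only those services get attached...
assert filter_attached_for_up(services, ['web']) == ['web']
# ...unless dependencies are attached too, or no names were given.
assert filter_attached_for_up(services, ['web'], attach_dependencies=True) == services
assert filter_attached_for_up(services, []) == services
```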
@@ -164,10 +164,6 @@ _docker_compose_docker_compose() {
			_filedir "y?(a)ml"
			return
			;;
		--ansi)
			COMPREPLY=( $( compgen -W "never always auto" -- "$cur" ) )
			return
			;;
		--log-level)
			COMPREPLY=( $( compgen -W "debug info warning error critical" -- "$cur" ) )
			return
@@ -294,7 +290,7 @@ _docker_compose_logs() {

	case "$cur" in
		-*)
			COMPREPLY=( $( compgen -W "--follow -f --help --no-color --no-log-prefix --tail --timestamps -t" -- "$cur" ) )
			COMPREPLY=( $( compgen -W "--follow -f --help --no-color --tail --timestamps -t" -- "$cur" ) )
			;;
		*)
			__docker_compose_complete_services
@@ -549,7 +545,7 @@ _docker_compose_up() {

	case "$cur" in
		-*)
			COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --attach-dependencies --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-log-prefix --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
			COMPREPLY=( $( compgen -W "--abort-on-container-exit --always-recreate-deps --attach-dependencies --build -d --detach --exit-code-from --force-recreate --help --no-build --no-color --no-deps --no-recreate --no-start --renew-anon-volumes -V --remove-orphans --scale --timeout -t" -- "$cur" ) )
			;;
		*)
			__docker_compose_complete_services
@@ -620,7 +616,6 @@ _docker_compose() {

	# These options require special treatment when searching the command.
	local top_level_options_with_args="
		--ansi
		--log-level
	"
@@ -21,7 +21,5 @@ complete -c docker-compose -l tlscert -r -d 'Path to TLS certif
complete -c docker-compose -l tlskey -r -d 'Path to TLS key file'
complete -c docker-compose -l tlsverify -d 'Use TLS and verify the remote'
complete -c docker-compose -l skip-hostname-check -d "Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)"
complete -c docker-compose -l no-ansi -d 'Do not print ANSI control characters'
complete -c docker-compose -l ansi -a 'never always auto' -d 'Control when to print ANSI control characters'
complete -c docker-compose -s h -l help -d 'Print usage'
complete -c docker-compose -s v -l version -d 'Print version and exit'
@@ -342,7 +342,6 @@ _docker-compose() {
        '--verbose[Show more output]' \
        '--log-level=[Set log level]:level:(DEBUG INFO WARNING ERROR CRITICAL)' \
        '--no-ansi[Do not print ANSI control characters]' \
        '--ansi=[Control when to print ANSI control characters]:when:(never always auto)' \
        '(-H --host)'{-H,--host}'[Daemon socket to connect to]:host:' \
        '--tls[Use TLS; implied by --tlsverify]' \
        '--tlscacert=[Trust certs signed only by this CA]:ca path:' \
@@ -23,8 +23,8 @@ exe = EXE(pyz,
        'DATA'
    ),
    (
        'compose/config/compose_spec.json',
        'compose/config/compose_spec.json',
        'compose/config/config_schema_compose_spec.json',
        'compose/config/config_schema_compose_spec.json',
        'DATA'
    ),
    (

@@ -32,8 +32,8 @@ coll = COLLECT(exe,
        'DATA'
    ),
    (
        'compose/config/compose_spec.json',
        'compose/config/compose_spec.json',
        'compose/config/config_schema_compose_spec.json',
        'compose/config/config_schema_compose_spec.json',
        'DATA'
    ),
    (
@@ -1 +1 @@
pyinstaller==4.1
pyinstaller==3.6
@@ -2,9 +2,8 @@ Click==7.1.2
coverage==5.2.1
ddt==1.4.1
flake8==3.8.3
gitpython==3.1.11
gitpython==3.1.7
mock==3.0.5
pytest==6.0.1; python_version >= '3.5'
pytest==4.6.5; python_version < '3.5'
pytest-cov==2.10.1
PyYAML==5.3.1
@@ -1,15 +1,15 @@
altgraph==0.17
appdirs==1.4.4
attrs==20.3.0
bcrypt==3.2.0
cffi==1.14.4
cryptography==3.3.2
attrs==20.1.0
bcrypt==3.1.7
cffi==1.14.1
cryptography==3.0
distlib==0.3.1
entrypoints==0.3
filelock==3.0.12
gitdb2==4.0.2
mccabe==0.6.1
more-itertools==8.6.0; python_version >= '3.5'
more-itertools==8.4.0; python_version >= '3.5'
more-itertools==5.0.0; python_version < '3.5'
packaging==20.4
pluggy==0.13.1
@@ -23,6 +23,6 @@ pyrsistent==0.16.0
smmap==3.0.4
smmap2==3.0.1
toml==0.10.1
tox==3.21.2
virtualenv==20.4.0
tox==3.19.0
virtualenv==20.0.30
wcwidth==0.2.5
@@ -4,7 +4,7 @@ certifi==2020.6.20
chardet==3.0.4
colorama==0.4.3; sys_platform == 'win32'
distro==1.5.0
docker==4.4.4
docker==4.3.1
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
@@ -12,9 +12,10 @@ idna==2.10
ipaddress==1.0.23
jsonschema==3.2.0
paramiko==2.7.1
pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
PySocks==1.7.1
python-dotenv==0.14.0
pywin32==227; sys_platform == 'win32'
PyYAML==5.3.1
requests==2.24.0
texttable==1.6.2
@@ -5,12 +5,14 @@ set -ex
./script/clean

DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
TAG="docker/compose:tmp-glibc-linux-binary-${DOCKER_COMPOSE_GITSHA}"

docker build . \
    --target bin \
    --build-arg DISTRO=debian \
    --build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}" \
    --output dist/
docker build -t "${TAG}" . \
    --build-arg BUILD_PLATFORM=debian \
    --build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
TMP_CONTAINER=$(docker create "${TAG}")
mkdir -p dist
ARCH=$(uname -m)
# Ensure that we output the binary with the same name as we did before
mv dist/docker-compose-linux-amd64 "dist/docker-compose-Linux-${ARCH}"
docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
docker container rm -f "${TMP_CONTAINER}"
docker image rm -f "${TAG}"
@@ -24,7 +24,7 @@ if [ ! -z "${BUILD_BOOTLOADER}" ]; then
|
||||
git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
|
||||
cd /tmp/pyinstaller/bootloader
|
||||
# Checkout commit corresponding to version in requirements-build
|
||||
git checkout v4.1
|
||||
git checkout v3.6
|
||||
"${VENV}"/bin/python3 ./waf configure --no-lsb all
|
||||
"${VENV}"/bin/pip3 install ..
|
||||
cd "${CODE_PATH}"
|
||||
|
||||
@@ -13,6 +13,6 @@ IMAGE="docker/compose-tests"
|
||||
DOCKER_COMPOSE_GITSHA="$(script/build/write-git-sha)"
|
||||
docker build -t "${IMAGE}:${TAG}" . \
|
||||
--target build \
|
||||
--build-arg DISTRO="debian" \
|
||||
--build-arg BUILD_PLATFORM="debian" \
|
||||
--build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
|
||||
docker tag "${IMAGE}":"${TAG}" "${IMAGE}":latest
|
||||
|
||||
@@ -6,17 +6,17 @@
#
#     http://git-scm.com/download/win
#
# 2. Install Python 3.9.x:
# 2. Install Python 3.7.x:
#
#    https://www.python.org/downloads/
#
# 3. Append ";C:\Python39;C:\Python39\Scripts" to the "Path" environment variable:
# 3. Append ";C:\Python37;C:\Python37\Scripts" to the "Path" environment variable:
#
#    https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
#
# 4. In Powershell, run the following commands:
#
#    $ pip install 'virtualenv==20.2.2'
#    $ pip install 'virtualenv==20.0.30'
#    $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
#
# 5. Clone the repository:
@@ -39,7 +39,7 @@ if (Test-Path venv) {
Get-ChildItem -Recurse -Include *.pyc | foreach ($_) { Remove-Item $_.FullName }

# Create virtualenv
virtualenv -p C:\Python39\python.exe .\venv
virtualenv -p C:\Python37\python.exe .\venv

# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"
@@ -15,16 +15,16 @@

set -e

VERSION="1.28.5"
VERSION="1.27.2"
IMAGE="docker/compose:$VERSION"


# Setup options for connecting to docker host
if [ -z "$DOCKER_HOST" ]; then
    DOCKER_HOST='unix:///var/run/docker.sock'
    DOCKER_HOST="/var/run/docker.sock"
fi
if [ -S "${DOCKER_HOST#unix://}" ]; then
    DOCKER_ADDR="-v ${DOCKER_HOST#unix://}:${DOCKER_HOST#unix://} -e DOCKER_HOST"
if [ -S "$DOCKER_HOST" ]; then
    DOCKER_ADDR="-v $DOCKER_HOST:$DOCKER_HOST -e DOCKER_HOST"
else
    DOCKER_ADDR="-e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH"
fi
@@ -44,34 +44,13 @@ fi
if [ -n "$COMPOSE_PROJECT_NAME" ]; then
    COMPOSE_OPTIONS="-e COMPOSE_PROJECT_NAME $COMPOSE_OPTIONS"
fi
# TODO: also check --file argument
if [ -n "$compose_dir" ]; then
    VOLUMES="$VOLUMES -v $compose_dir:$compose_dir"
fi
if [ -n "$HOME" ]; then
    VOLUMES="$VOLUMES -v $HOME:$HOME -e HOME" # Pass in HOME to share docker.config and allow ~/-relative paths to work.
fi
i=$#
while [ $i -gt 0 ]; do
    arg=$1
    i=$((i - 1))
    shift

    case "$arg" in
        -f|--file)
            value=$1
            i=$((i - 1))
            shift
            set -- "$@" "$arg" "$value"

            file_dir=$(realpath "$(dirname "$value")")
            VOLUMES="$VOLUMES -v $file_dir:$file_dir"
            ;;
        *) set -- "$@" "$arg" ;;
    esac
done

# Setup environment variables for compose config and context
ENV_OPTIONS=$(printenv | sed -E "/^PATH=.*/d; s/^/-e /g; s/=.*//g; s/\n/ /g")

# Only allocate tty if we detect one
if [ -t 0 ] && [ -t 1 ]; then
@@ -88,4 +67,4 @@ if docker info --format '{{json .SecurityOptions}}' 2>/dev/null | grep -q 'name=
fi

# shellcheck disable=SC2086
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $ENV_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"
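The `run.sh` change above normalises `DOCKER_HOST` to a `unix://` URL and strips the scheme before testing for the socket; the same `"${DOCKER_HOST#unix://}"` prefix-stripping, expressed in Python for clarity:

```python
host = 'unix:///var/run/docker.sock'
# "${VAR#prefix}" in shell removes a leading prefix if present.
path = host[len('unix://'):] if host.startswith('unix://') else host
assert path == '/var/run/docker.sock'
```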
@@ -13,13 +13,13 @@ if ! [ ${DEPLOYMENT_TARGET} == "$(macos_version)" ]; then
|
||||
SDK_SHA1=dd228a335194e3392f1904ce49aff1b1da26ca62
|
||||
fi
|
||||
|
||||
OPENSSL_VERSION=1.1.1h
|
||||
OPENSSL_VERSION=1.1.1g
|
||||
OPENSSL_URL=https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz
|
||||
OPENSSL_SHA1=8d0d099e8973ec851368c8c775e05e1eadca1794
|
||||
OPENSSL_SHA1=b213a293f2127ec3e323fb3cfc0c9807664fd997
|
||||
|
||||
PYTHON_VERSION=3.9.0
|
||||
PYTHON_VERSION=3.7.7
|
||||
PYTHON_URL=https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz
|
||||
PYTHON_SHA1=5744a10ba989d2badacbab3c00cdcb83c83106c7
|
||||
PYTHON_SHA1=8e9968663a214aea29659ba9dfa959e8a7d82b39
|
||||
|
||||
#
|
||||
# Install prerequisites.
|
||||
@@ -36,7 +36,7 @@ if ! [ -x "$(command -v python3)" ]; then
|
||||
brew install python3
|
||||
fi
|
||||
if ! [ -x "$(command -v virtualenv)" ]; then
|
||||
pip3 install virtualenv==20.2.2
|
||||
pip3 install virtualenv==20.0.30
|
||||
fi
|
||||
|
||||
#
|
||||
|
||||
@@ -21,6 +21,7 @@ elif [ "$DOCKER_VERSIONS" == "all" ]; then
|
||||
DOCKER_VERSIONS=$($get_versions -n 2 recent)
|
||||
fi
|
||||
|
||||
|
||||
BUILD_NUMBER=${BUILD_NUMBER-$USER}
|
||||
PY_TEST_VERSIONS=${PY_TEST_VERSIONS:-py37}
|
||||
|
||||
@@ -38,19 +39,17 @@ for version in $DOCKER_VERSIONS; do
|
||||
|
||||
trap "on_exit" EXIT
|
||||
|
||||
repo="dockerswarm/dind"
|
||||
|
||||
docker run \
|
||||
-d \
|
||||
--name "$daemon_container" \
|
||||
--privileged \
|
||||
--volume="/var/lib/docker" \
|
||||
-v $DOCKER_CONFIG/config.json:/root/.docker/config.json \
|
||||
-e "DOCKER_TLS_CERTDIR=" \
|
||||
"docker:$version-dind" \
|
||||
"$repo:$version" \
|
||||
dockerd -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
|
||||
2>&1 | tail -n 10
|
||||
|
||||
docker exec "$daemon_container" sh -c "apk add --no-cache git"
|
||||
|
||||
docker run \
|
||||
--rm \
|
||||
--tty \
|
||||
|
||||
setup.py
@@ -32,7 +32,7 @@ install_requires = [
    'texttable >= 0.9.0, < 2',
    'websocket-client >= 0.32.0, < 1',
    'distro >= 1.5.0, < 2',
    'docker[ssh] >= 4.4.4, < 5',
    'docker[ssh] >= 4.3.1, < 5',
    'dockerpty >= 0.4.1, < 1',
    'jsonschema >= 2.5.1, < 4',
    'python-dotenv >= 0.13.0, < 1',
@@ -102,7 +102,5 @@ setup(
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
        'Programming Language :: Python :: 3.9',
    ],
)
@@ -58,16 +58,13 @@ COMPOSE_COMPATIBILITY_DICT = {
}


def start_process(base_dir, options, executable=None, env=None):
    executable = executable or DOCKER_COMPOSE_EXECUTABLE
def start_process(base_dir, options):
    proc = subprocess.Popen(
        [executable] + options,
        [DOCKER_COMPOSE_EXECUTABLE] + options,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        cwd=base_dir,
        env=env,
    )
        cwd=base_dir)
    print("Running process: %s" % proc.pid)
    return proc

@@ -81,10 +78,9 @@ def wait_on_process(proc, returncode=0, stdin=None):
    return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))


def dispatch(base_dir, options,
             project_options=None, returncode=0, stdin=None, executable=None, env=None):
def dispatch(base_dir, options, project_options=None, returncode=0, stdin=None):
    project_options = project_options or []
    proc = start_process(base_dir, project_options + options, executable=executable, env=env)
    proc = start_process(base_dir, project_options + options)
    return wait_on_process(proc, returncode=returncode, stdin=stdin)
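Usage sketch for the extended test helper above: `dispatch` can run the compose binary in a fixture directory with a custom environment. The fixture path and variable mirror the metrics test at the end of this diff.

```python
import os

env = os.environ.copy()
env['METRICS_SOCKET_FILE'] = '/tmp/test-metrics.sock'
result = dispatch('tests/fixtures/v3-full', ['ps'], env=env)
print(result.stdout)
```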
@@ -363,7 +359,7 @@ services:
            'web': {
                'command': 'true',
                'image': 'alpine:latest',
                'ports': [{'target': 5643}, {'target': 9999}]
                'ports': ['5643/tcp', '9999/tcp']
            }
        }
    }
@@ -378,7 +374,7 @@ services:
            'web': {
                'command': 'false',
                'image': 'alpine:latest',
                'ports': [{'target': 5644}, {'target': 9998}]
                'ports': ['5644/tcp', '9998/tcp']
            }
        }
    }
@@ -393,7 +389,7 @@ services:
            'web': {
                'command': 'echo uwu',
                'image': 'alpine:3.10.1',
                'ports': [{'target': 3341}, {'target': 4449}]
                'ports': ['3341/tcp', '4449/tcp']
            }
        }
    }
@@ -787,11 +783,7 @@ services:
        assert BUILD_CACHE_TEXT not in result.stdout
        assert BUILD_PULL_TEXT in result.stdout

    @mock.patch.dict(os.environ)
    def test_build_log_level(self):
        os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
        os.environ['DOCKER_BUILDKIT'] = '0'
        self.test_env_file_relative_to_compose_file()
        self.base_dir = 'tests/fixtures/simple-dockerfile'
        result = self.dispatch(['--log-level', 'warning', 'build', 'simple'])
        assert result.stderr == ''
@@ -853,17 +845,13 @@ services:
        for c in self.project.client.containers(all=True):
            self.addCleanup(self.project.client.remove_container, c, force=True)

    @mock.patch.dict(os.environ)
    def test_build_shm_size_build_option(self):
        os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
        pull_busybox(self.client)
        self.base_dir = 'tests/fixtures/build-shm-size'
        result = self.dispatch(['build', '--no-cache'], None)
        assert 'shm_size: 96' in result.stdout

    @mock.patch.dict(os.environ)
    def test_build_memory_build_option(self):
        os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '0'
        pull_busybox(self.client)
        self.base_dir = 'tests/fixtures/build-memory'
        result = self.dispatch(['build', '--no-cache', '--memory', '96m', 'service'], None)
@@ -1731,98 +1719,6 @@ services:
        shareable_mode_container = self.project.get_service('shareable').containers()[0]
        assert shareable_mode_container.get('HostConfig.IpcMode') == 'shareable'

    def test_profiles_up_with_no_profile(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['up'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'foo' in service_names
        assert len(containers) == 1

    def test_profiles_up_with_profile(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['--profile', 'test', 'up'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'foo' in service_names
        assert 'bar' in service_names
        assert 'baz' in service_names
        assert len(containers) == 3

    def test_profiles_up_invalid_dependency(self):
        self.base_dir = 'tests/fixtures/profiles'
        result = self.dispatch(['--profile', 'debug', 'up'], returncode=1)

        assert ('Service "bar" was pulled in as a dependency of service "zot" '
                'but is not enabled by the active profiles.') in result.stderr

    def test_profiles_up_with_multiple_profiles(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['--profile', 'debug', '--profile', 'test', 'up'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'foo' in service_names
        assert 'bar' in service_names
        assert 'baz' in service_names
        assert 'zot' in service_names
        assert len(containers) == 4

    def test_profiles_up_with_profile_enabled_by_service(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['up', 'bar'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'bar' in service_names
        assert len(containers) == 1

    def test_profiles_up_with_dependency_and_profile_enabled_by_service(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['up', 'baz'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'bar' in service_names
        assert 'baz' in service_names
        assert len(containers) == 2

    def test_profiles_up_with_invalid_dependency_for_target_service(self):
        self.base_dir = 'tests/fixtures/profiles'
        result = self.dispatch(['up', 'zot'], returncode=1)

        assert ('Service "bar" was pulled in as a dependency of service "zot" '
                'but is not enabled by the active profiles.') in result.stderr

    def test_profiles_up_with_profile_for_dependency(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['--profile', 'test', 'up', 'zot'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'bar' in service_names
        assert 'zot' in service_names
        assert len(containers) == 2

    def test_profiles_up_with_merged_profiles(self):
        self.base_dir = 'tests/fixtures/profiles'
        self.dispatch(['-f', 'docker-compose.yml', '-f', 'merge-profiles.yml', 'up', 'zot'])

        containers = self.project.containers(stopped=True)
        service_names = [c.service for c in containers]

        assert 'bar' in service_names
        assert 'zot' in service_names
        assert len(containers) == 2

    def test_exec_without_tty(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['up', '-d', 'console'])
@@ -3138,12 +3034,3 @@ services:
        another = self.project.get_service('--log-service')
        assert len(service.containers()) == 1
        assert len(another.containers()) == 1

    def test_up_no_log_prefix(self):
        self.base_dir = 'tests/fixtures/echo-services'
        result = self.dispatch(['up', '--no-log-prefix'])

        assert 'simple' in result.stdout
        assert 'another' in result.stdout
        assert 'exited with code 0' in result.stdout
        assert 'exited with code 0' in result.stdout
tests/fixtures/profiles/docker-compose.yml
@@ -1,20 +0,0 @@
version: "3"
services:
  foo:
    image: busybox:1.31.0-uclibc
  bar:
    image: busybox:1.31.0-uclibc
    profiles:
      - test
  baz:
    image: busybox:1.31.0-uclibc
    depends_on:
      - bar
    profiles:
      - test
  zot:
    image: busybox:1.31.0-uclibc
    depends_on:
      - bar
    profiles:
      - debug
tests/fixtures/profiles/merge-profiles.yml
@@ -1,5 +0,0 @@
version: "3"
services:
  bar:
    profiles:
      - debug
@@ -1,125 +0,0 @@
import logging
import os
import socket
from http.server import BaseHTTPRequestHandler
from http.server import HTTPServer
from threading import Thread

import requests
from docker.transport import UnixHTTPAdapter

from tests.acceptance.cli_test import dispatch
from tests.integration.testcases import DockerClientTestCase


TEST_SOCKET_FILE = '/tmp/test-metrics-docker-cli.sock'


class MetricsTest(DockerClientTestCase):
    test_session = requests.sessions.Session()
    test_env = None
    base_dir = 'tests/fixtures/v3-full'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        MetricsTest.test_session.mount("http+unix://", UnixHTTPAdapter(TEST_SOCKET_FILE))
        MetricsTest.test_env = os.environ.copy()
        MetricsTest.test_env['METRICS_SOCKET_FILE'] = TEST_SOCKET_FILE
        MetricsServer().start()

    @classmethod
    def test_metrics_help(cls):
        # root `docker-compose` command is considered as a `--help`
        dispatch(cls.base_dir, [], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['help', 'run'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose help", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['--help'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['run', '--help'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help run", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['up', '--help', 'extra_args'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose --help up", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'

    @classmethod
    def test_metrics_simple_commands(cls):
        dispatch(cls.base_dir, ['ps'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose ps", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['version'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose version", "context": "moby", ' \
            b'"source": "docker-compose", "status": "success"}'
        dispatch(cls.base_dir, ['version', '--yyy'], env=MetricsTest.test_env)
        assert cls.get_content() == \
            b'{"command": "compose version", "context": "moby", ' \
            b'"source": "docker-compose", "status": "failure"}'

    @staticmethod
    def get_content():
        resp = MetricsTest.test_session.get("http+unix://localhost")
        print(resp.content)
        return resp.content


def start_server(uri=TEST_SOCKET_FILE):
    try:
        os.remove(uri)
    except OSError:
        pass
    httpd = HTTPServer(uri, MetricsHTTPRequestHandler, False)
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(TEST_SOCKET_FILE)
    sock.listen(0)
    httpd.socket = sock
    print('Serving on ', uri)
    httpd.serve_forever()
    sock.shutdown(socket.SHUT_RDWR)
    sock.close()
    os.remove(uri)


class MetricsServer:
    @classmethod
    def start(cls):
        t = Thread(target=start_server, daemon=True)
        t.start()


class MetricsHTTPRequestHandler(BaseHTTPRequestHandler):
    usages = []

    def do_GET(self):
        self.client_address = ('',)  # avoid exception in BaseHTTPServer.py log_message()
        self.send_response(200)
        self.end_headers()
        for u in MetricsHTTPRequestHandler.usages:
            self.wfile.write(u)
        MetricsHTTPRequestHandler.usages = []

    def do_POST(self):
        self.client_address = ('',)  # avoid exception in BaseHTTPServer.py log_message()
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)
        print(body)
        MetricsHTTPRequestHandler.usages.append(body)
        self.send_response(200)
        self.end_headers()


if __name__ == '__main__':
    logging.getLogger("urllib3").propagate = False
    logging.getLogger("requests").propagate = False
    start_server()
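
The handler above buffers whatever the CLI POSTs to the Unix socket and replays it on the next GET. For reference, a record can be pushed by hand with the same requests/UnixHTTPAdapter pairing the test relies on; this snippet is illustrative and not part of the diff:

import requests
from docker.transport import UnixHTTPAdapter

session = requests.sessions.Session()
session.mount("http+unix://", UnixHTTPAdapter('/tmp/test-metrics-docker-cli.sock'))
# Payload shape mirrors the bodies asserted in MetricsTest above.
session.post("http+unix://localhost", json={
    "command": "compose ps", "context": "moby",
    "source": "docker-compose", "status": "success",
})
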
@@ -37,7 +37,6 @@ from tests.integration.testcases import no_cluster

def build_config(**kwargs):
    return config.Config(
        config_version=kwargs.get('version', VERSION),
        version=kwargs.get('version', VERSION),
        services=kwargs.get('services'),
        volumes=kwargs.get('volumes'),
@@ -1348,36 +1347,6 @@ class ProjectTest(DockerClientTestCase):
        project.up()
        assert len(project.containers()) == 3

    def test_project_up_scale_with_stopped_containers(self):
        config_data = build_config(
            services=[{
                'name': 'web',
                'image': BUSYBOX_IMAGE_WITH_TAG,
                'command': 'top',
                'scale': 2
            }]
        )
        project = Project.from_config(
            name='composetest', config_data=config_data, client=self.client
        )

        project.up()
        containers = project.containers()
        assert len(containers) == 2

        self.client.stop(containers[0].id)
        project.up(scale_override={'web': 2})
        containers = project.containers()
        assert len(containers) == 2

        self.client.stop(containers[0].id)
        project.up(scale_override={'web': 3})
        assert len(project.containers()) == 3

        self.client.stop(containers[0].id)
        project.up(scale_override={'web': 1})
        assert len(project.containers()) == 1

    def test_initialize_volumes(self):
        vol_name = '{:x}'.format(random.getrandbits(32))
        full_vol_name = 'composetest_{}'.format(vol_name)

@@ -948,12 +948,7 @@ class ServiceTest(DockerClientTestCase):
        with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
            f.write("FROM busybox\n")

        service = self.create_service('web',
                                      build={'context': base_dir},
                                      environment={
                                          'COMPOSE_DOCKER_CLI_BUILD': '0',
                                          'DOCKER_BUILDKIT': '0',
                                      })
        service = self.create_service('web', build={'context': base_dir})
        service.build()
        self.addCleanup(self.client.remove_image, service.image_name)

@@ -969,6 +964,7 @@ class ServiceTest(DockerClientTestCase):
        service = self.create_service('web',
                                      build={'context': base_dir},
                                      environment={
                                          'COMPOSE_DOCKER_CLI_BUILD': '1',
                                          'DOCKER_BUILDKIT': '1',
                                      })
        service.build(cli=True)
@@ -1019,6 +1015,7 @@ class ServiceTest(DockerClientTestCase):
        web = self.create_service('web',
                                  build={'context': base_dir},
                                  environment={
                                      'COMPOSE_DOCKER_CLI_BUILD': '1',
                                      'DOCKER_BUILDKIT': '1',
                                  })
        project = Project('composetest', [web], self.client)

@@ -375,7 +375,7 @@ class ServiceStateTest(DockerClientTestCase):

        assert [c.is_running for c in containers] == [False, True]

        assert ('start', containers) == web.convergence_plan()
        assert ('start', containers[0:1]) == web.convergence_plan()

    def test_trigger_recreate_with_config_change(self):
        web = self.create_service('web', command=["top"])

@@ -61,7 +61,6 @@ class DockerClientTestCase(unittest.TestCase):

    @classmethod
    def tearDownClass(cls):
        cls.client.close()
        del cls.client

    def tearDown(self):

@@ -1,56 +0,0 @@
import os

import pytest

from compose.cli.colors import AnsiMode
from tests import mock


@pytest.fixture
def tty_stream():
    stream = mock.Mock()
    stream.isatty.return_value = True
    return stream


@pytest.fixture
def non_tty_stream():
    stream = mock.Mock()
    stream.isatty.return_value = False
    return stream


class TestAnsiModeTestCase:

    @mock.patch.dict(os.environ)
    def test_ansi_mode_never(self, tty_stream, non_tty_stream):
        if "CLICOLOR" in os.environ:
            del os.environ["CLICOLOR"]
        assert not AnsiMode.NEVER.use_ansi_codes(tty_stream)
        assert not AnsiMode.NEVER.use_ansi_codes(non_tty_stream)

        os.environ["CLICOLOR"] = "0"
        assert not AnsiMode.NEVER.use_ansi_codes(tty_stream)
        assert not AnsiMode.NEVER.use_ansi_codes(non_tty_stream)

    @mock.patch.dict(os.environ)
    def test_ansi_mode_always(self, tty_stream, non_tty_stream):
        if "CLICOLOR" in os.environ:
            del os.environ["CLICOLOR"]
        assert AnsiMode.ALWAYS.use_ansi_codes(tty_stream)
        assert AnsiMode.ALWAYS.use_ansi_codes(non_tty_stream)

        os.environ["CLICOLOR"] = "0"
        assert AnsiMode.ALWAYS.use_ansi_codes(tty_stream)
        assert AnsiMode.ALWAYS.use_ansi_codes(non_tty_stream)

    @mock.patch.dict(os.environ)
    def test_ansi_mode_auto(self, tty_stream, non_tty_stream):
        if "CLICOLOR" in os.environ:
            del os.environ["CLICOLOR"]
        assert AnsiMode.AUTO.use_ansi_codes(tty_stream)
        assert not AnsiMode.AUTO.use_ansi_codes(non_tty_stream)

        os.environ["CLICOLOR"] = "0"
        assert not AnsiMode.AUTO.use_ansi_codes(tty_stream)
        assert not AnsiMode.AUTO.use_ansi_codes(non_tty_stream)
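
These tests fully determine the decision table: NEVER and ALWAYS ignore both the stream and CLICOLOR, while AUTO requires a TTY and honours CLICOLOR=0 as an opt-out. A sketch of an enum with exactly that behaviour (assumed shape; the real AnsiMode lives in compose.cli.colors):

import os
from enum import Enum

class AnsiModeSketch(Enum):  # illustrative stand-in, not compose.cli.colors.AnsiMode
    NEVER = 'never'
    ALWAYS = 'always'
    AUTO = 'auto'

    def use_ansi_codes(self, stream):
        if self is AnsiModeSketch.NEVER:
            return False
        if self is AnsiModeSketch.ALWAYS:
            return True
        # AUTO: only a TTY gets ANSI codes, and CLICOLOR=0 disables even that.
        if os.environ.get('CLICOLOR') == '0':
            return False
        return stream.isatty()
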
@@ -14,41 +14,49 @@ class TestGetConfigPathFromOptions:
        paths = ['one.yml', 'two.yml']
        opts = {'--file': paths}
        environment = Environment.from_env_file('.')
        assert get_config_path_from_options(opts, environment) == paths
        assert get_config_path_from_options('.', opts, environment) == paths

    def test_single_path_from_env(self):
        with mock.patch.dict(os.environ):
            os.environ['COMPOSE_FILE'] = 'one.yml'
            environment = Environment.from_env_file('.')
            assert get_config_path_from_options({}, environment) == ['one.yml']
            assert get_config_path_from_options('.', {}, environment) == ['one.yml']

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix separator')
    def test_multiple_path_from_env(self):
        with mock.patch.dict(os.environ):
            os.environ['COMPOSE_FILE'] = 'one.yml:two.yml'
            environment = Environment.from_env_file('.')
            assert get_config_path_from_options({}, environment) == ['one.yml', 'two.yml']
            assert get_config_path_from_options(
                '.', {}, environment
            ) == ['one.yml', 'two.yml']

    @pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='windows separator')
    def test_multiple_path_from_env_windows(self):
        with mock.patch.dict(os.environ):
            os.environ['COMPOSE_FILE'] = 'one.yml;two.yml'
            environment = Environment.from_env_file('.')
            assert get_config_path_from_options({}, environment) == ['one.yml', 'two.yml']
            assert get_config_path_from_options(
                '.', {}, environment
            ) == ['one.yml', 'two.yml']

    def test_multiple_path_from_env_custom_separator(self):
        with mock.patch.dict(os.environ):
            os.environ['COMPOSE_PATH_SEPARATOR'] = '^'
            os.environ['COMPOSE_FILE'] = 'c:\\one.yml^.\\semi;colon.yml'
            environment = Environment.from_env_file('.')
            assert get_config_path_from_options({}, environment) == ['c:\\one.yml', '.\\semi;colon.yml']
            assert get_config_path_from_options(
                '.', {}, environment
            ) == ['c:\\one.yml', '.\\semi;colon.yml']

    def test_no_path(self):
        environment = Environment.from_env_file('.')
        assert not get_config_path_from_options({}, environment)
        assert not get_config_path_from_options('.', {}, environment)

    def test_unicode_path_from_options(self):
        paths = [b'\xe5\xb0\xb1\xe5\x90\x83\xe9\xa5\xad/docker-compose.yml']
        opts = {'--file': paths}
        environment = Environment.from_env_file('.')
        assert get_config_path_from_options(opts, environment) == ['就吃饭/docker-compose.yml']
        assert get_config_path_from_options(
            '.', opts, environment
        ) == ['就吃饭/docker-compose.yml']

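The variable handling these tests exercise splits COMPOSE_FILE on COMPOSE_PATH_SEPARATOR when set, otherwise on the platform default (':' on POSIX, ';' on Windows). A self-contained sketch of that parsing rule, assumed from the assertions rather than copied from compose:

def compose_file_paths(environ, is_windows=False):
    value = environ.get('COMPOSE_FILE')
    if not value:
        return []  # matches test_no_path: falsy when COMPOSE_FILE is unset
    separator = environ.get('COMPOSE_PATH_SEPARATOR') or (';' if is_windows else ':')
    return value.split(separator)

assert compose_file_paths({'COMPOSE_FILE': 'one.yml:two.yml'}) == ['one.yml', 'two.yml']
assert compose_file_paths({'COMPOSE_PATH_SEPARATOR': '^',
                           'COMPOSE_FILE': 'c:\\one.yml^.\\semi;colon.yml'}) \
    == ['c:\\one.yml', '.\\semi;colon.yml']
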
@@ -8,6 +8,7 @@ from docker.errors import APIError

from compose.cli.log_printer import build_log_generator
from compose.cli.log_printer import build_log_presenters
from compose.cli.log_printer import build_no_log_generator
from compose.cli.log_printer import consume_queue
from compose.cli.log_printer import QueueItem
from compose.cli.log_printer import wait_on_exit
@@ -74,6 +75,14 @@ def test_wait_on_exit_raises():
    assert expected in wait_on_exit(mock_container)


def test_build_no_log_generator(mock_container):
    mock_container.has_api_logs = False
    mock_container.log_driver = 'none'
    output, = build_no_log_generator(mock_container, None)
    assert "WARNING: no logs are available with the 'none' log driver\n" in output
    assert "exited with code" not in output


class TestBuildLogGenerator:

    def test_no_log_stream(self, mock_container):

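Since the new test unpacks exactly one element from the generator, the function it covers presumably yields a single warning string in place of log lines. A plausible mirror, stated as an assumption rather than the compose source:

def build_no_log_generator_sketch(container, log_args=None):
    # One warning instead of a log stream when the driver keeps no logs.
    yield "WARNING: no logs are available with the '{}' log driver\n".format(
        container.log_driver)
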
@@ -137,20 +137,21 @@ class TestCLIMainTestCase:

class TestSetupConsoleHandlerTestCase:

    def test_with_console_formatter_verbose(self, logging_handler):
    def test_with_tty_verbose(self, logging_handler):
        setup_console_handler(logging_handler, True)
        assert type(logging_handler.formatter) == ConsoleWarningFormatter
        assert '%(name)s' in logging_handler.formatter._fmt
        assert '%(funcName)s' in logging_handler.formatter._fmt

    def test_with_console_formatter_not_verbose(self, logging_handler):
    def test_with_tty_not_verbose(self, logging_handler):
        setup_console_handler(logging_handler, False)
        assert type(logging_handler.formatter) == ConsoleWarningFormatter
        assert '%(name)s' not in logging_handler.formatter._fmt
        assert '%(funcName)s' not in logging_handler.formatter._fmt

    def test_without_console_formatter(self, logging_handler):
        setup_console_handler(logging_handler, False, use_console_formatter=False)
    def test_with_not_a_tty(self, logging_handler):
        logging_handler.stream.isatty.return_value = False
        setup_console_handler(logging_handler, False)
        assert type(logging_handler.formatter) == logging.Formatter

@@ -168,14 +168,12 @@ class ConfigTest(unittest.TestCase):
                }
            })
        )
        assert cfg.config_version == VERSION
        assert cfg.version == VERSION

        for version in ['2', '2.0', '2.1', '2.2', '2.3',
                        '3', '3.0', '3.1', '3.2', '3.3', '3.4', '3.5', '3.6', '3.7', '3.8']:
            cfg = config.load(build_config_details({'version': version}))
            assert cfg.config_version == version
            assert cfg.version == VERSION
            assert cfg.version == version

    def test_v1_file_version(self):
        cfg = config.load(build_config_details({'web': {'image': 'busybox'}}))
@@ -238,9 +236,7 @@ class ConfigTest(unittest.TestCase):
            )
        )

        assert "compose.config.errors.ConfigurationError: " \
            "The Compose file 'filename.yml' is invalid because:\n" \
            "'web' does not match any of the regexes: '^x-'" in excinfo.exconly()
        assert 'Invalid top-level property "web"' in excinfo.exconly()
        assert VERSION_EXPLANATION in excinfo.exconly()

    def test_named_volume_config_empty(self):
@@ -669,7 +665,7 @@ class ConfigTest(unittest.TestCase):

        assert 'Invalid service name \'mong\\o\'' in excinfo.exconly()

    def test_config_duplicate_cache_from_values_no_validation_error(self):
    def test_config_duplicate_cache_from_values_validation_error(self):
        with pytest.raises(ConfigurationError) as exc:
            config.load(
                build_config_details({
@@ -681,7 +677,7 @@ class ConfigTest(unittest.TestCase):
                })
            )

        assert 'build.cache_from contains non-unique items' not in exc.exconly()
        assert 'build.cache_from contains non-unique items' in exc.exconly()

    def test_load_with_multiple_files_v1(self):
        base_file = config.ConfigFile(
@@ -2547,7 +2543,6 @@ web:
                'labels': ['com.docker.compose.a=1', 'com.docker.compose.b=2'],
                'mode': 'replicated',
                'placement': {
                    'max_replicas_per_node': 1,
                    'constraints': [
                        'node.role == manager', 'engine.labels.aws == true'
                    ],
@@ -2604,7 +2599,6 @@ web:
                    'com.docker.compose.c': '3'
                },
                'placement': {
                    'max_replicas_per_node': 1,
                    'constraints': [
                        'engine.labels.aws == true', 'engine.labels.dev == true',
                        'node.role == manager', 'node.role == worker'
@@ -5273,7 +5267,7 @@ def get_config_filename_for_files(filenames, subdir=None):


class SerializeTest(unittest.TestCase):
    def test_denormalize_depends(self):
    def test_denormalize_depends_on_v3(self):
        service_dict = {
            'image': 'busybox',
            'command': 'true',
@@ -5283,7 +5277,27 @@ class SerializeTest(unittest.TestCase):
            }
        }

        assert denormalize_service_dict(service_dict, VERSION) == service_dict
        assert denormalize_service_dict(service_dict, VERSION) == {
            'image': 'busybox',
            'command': 'true',
            'depends_on': ['service2', 'service3']
        }

    def test_denormalize_depends_on_v2_1(self):
        service_dict = {
            'image': 'busybox',
            'command': 'true',
            'depends_on': {
                'service2': {'condition': 'service_started'},
                'service3': {'condition': 'service_started'},
            }
        }

        assert denormalize_service_dict(service_dict, VERSION) == {
            'image': 'busybox',
            'command': 'true',
            'depends_on': ['service2', 'service3']
        }

    def test_serialize_time(self):
        data = {
@@ -5373,7 +5387,7 @@ class SerializeTest(unittest.TestCase):
        assert serialized_config['secrets']['two'] == {'external': True, 'name': 'two'}

    def test_serialize_ports(self):
        config_dict = config.Config(config_version=VERSION, version=VERSION, services=[
        config_dict = config.Config(version=VERSION, services=[
            {
                'ports': [types.ServicePort('80', '8080', None, None, None)],
                'image': 'alpine',
@@ -5384,20 +5398,8 @@ class SerializeTest(unittest.TestCase):
        serialized_config = yaml.safe_load(serialize_config(config_dict))
        assert [{'published': 8080, 'target': 80}] == serialized_config['services']['web']['ports']

    def test_serialize_ports_v1(self):
        config_dict = config.Config(config_version=V1, version=V1, services=[
            {
                'ports': [types.ServicePort('80', '8080', None, None, None)],
                'image': 'alpine',
                'name': 'web'
            }
        ], volumes={}, networks={}, secrets={}, configs={})

        serialized_config = yaml.safe_load(serialize_config(config_dict))
        assert ['8080:80/tcp'] == serialized_config['services']['web']['ports']

    def test_serialize_ports_with_ext_ip(self):
        config_dict = config.Config(config_version=VERSION, version=VERSION, services=[
        config_dict = config.Config(version=VERSION, services=[
            {
                'ports': [types.ServicePort('80', '8080', None, None, '127.0.0.1')],
                'image': 'alpine',

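An aside on the depends_on round-trip asserted in the SerializeTest hunks above: the v2.1-style mapping with conditions collapses to a plain list of service names on serialization. A sketch of that collapse, assuming only what the assertions show:

def denormalize_depends_on_sketch(service_dict):
    # When depends_on is a mapping of name -> condition, keep just the names.
    depends_on = service_dict.get('depends_on')
    if isinstance(depends_on, dict):
        return dict(service_dict, depends_on=sorted(depends_on))
    return service_dict

assert denormalize_depends_on_sketch({
    'image': 'busybox',
    'command': 'true',
    'depends_on': {
        'service2': {'condition': 'service_started'},
        'service3': {'condition': 'service_started'},
    },
}) == {'image': 'busybox', 'command': 'true',
       'depends_on': ['service2', 'service3']}
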
@@ -416,7 +416,7 @@ def test_interpolate_mandatory_no_err_msg(defaults_interpolator):
    with pytest.raises(UnsetRequiredSubstitution) as e:
        defaults_interpolator("not ok ${BAZ?}")

    assert e.value.err == 'BAZ'
    assert e.value.err == ''


def test_interpolate_mixed_separators(defaults_interpolator):

@@ -221,6 +221,34 @@ class ContainerTest(unittest.TestCase):
        container = Container(None, self.container_dict, has_been_inspected=True)
        assert container.short_id == self.container_id[:12]

    def test_has_api_logs(self):
        container_dict = {
            'HostConfig': {
                'LogConfig': {
                    'Type': 'json-file'
                }
            }
        }

        container = Container(None, container_dict, has_been_inspected=True)
        assert container.has_api_logs is True

        container_dict['HostConfig']['LogConfig']['Type'] = 'none'
        container = Container(None, container_dict, has_been_inspected=True)
        assert container.has_api_logs is False

        container_dict['HostConfig']['LogConfig']['Type'] = 'syslog'
        container = Container(None, container_dict, has_been_inspected=True)
        assert container.has_api_logs is False

        container_dict['HostConfig']['LogConfig']['Type'] = 'journald'
        container = Container(None, container_dict, has_been_inspected=True)
        assert container.has_api_logs is True

        container_dict['HostConfig']['LogConfig']['Type'] = 'foobar'
        container = Container(None, container_dict, has_been_inspected=True)
        assert container.has_api_logs is False


class GetContainerNameTestCase(unittest.TestCase):

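The new test fixes the contract for has_api_logs: only drivers the Docker API can read back (json-file and journald here) report True, while 'none', 'syslog', and unknown drivers report False. A sketch of that contract; the function and constant names below are assumptions, not compose internals:

API_READABLE_LOG_DRIVERS = {'json-file', 'journald'}  # hypothetical constant

def has_api_logs_sketch(inspect_dict):
    # Read the log driver type out of the inspect payload used in the test.
    log_type = inspect_dict.get('HostConfig', {}).get('LogConfig', {}).get('Type')
    return log_type in API_READABLE_LOG_DRIVERS
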
@@ -1,36 +0,0 @@
import unittest

from compose.metrics.client import MetricsCommand
from compose.metrics.client import Status


class MetricsTest(unittest.TestCase):
    @classmethod
    def test_metrics(cls):
        assert MetricsCommand('up', 'moby').to_map() == {
            'command': 'compose up',
            'context': 'moby',
            'status': 'success',
            'source': 'docker-compose',
        }

        assert MetricsCommand('down', 'local').to_map() == {
            'command': 'compose down',
            'context': 'local',
            'status': 'success',
            'source': 'docker-compose',
        }

        assert MetricsCommand('help', 'aci', Status.FAILURE).to_map() == {
            'command': 'compose help',
            'context': 'aci',
            'status': 'failure',
            'source': 'docker-compose',
        }

        assert MetricsCommand('run', 'ecs').to_map() == {
            'command': 'compose run',
            'context': 'ecs',
            'status': 'success',
            'source': 'docker-compose',
        }
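
Taken together, these assertions determine MetricsCommand.to_map(): prefix the subcommand with 'compose', carry the context through, and default the status to success. A minimal stand-in with the same observable behaviour (sketch, not the real compose.metrics.client):

from enum import Enum

class StatusSketch(Enum):  # illustrative stand-in for Status
    SUCCESS = 'success'
    FAILURE = 'failure'

class MetricsCommandSketch:  # illustrative stand-in for MetricsCommand
    def __init__(self, command, context, status=StatusSketch.SUCCESS):
        self.command, self.context, self.status = command, context, status

    def to_map(self):
        return {
            'command': 'compose ' + self.command,
            'context': self.context,
            'status': self.status.value,
            'source': 'docker-compose',
        }

assert MetricsCommandSketch('up', 'moby').to_map()['command'] == 'compose up'
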
@@ -3,7 +3,6 @@ from threading import Lock

from docker.errors import APIError

from compose.cli.colors import AnsiMode
from compose.parallel import GlobalLimit
from compose.parallel import parallel_execute
from compose.parallel import parallel_execute_iter
@@ -157,7 +156,7 @@ def test_parallel_execute_alignment(capsys):

def test_parallel_execute_ansi(capsys):
    ParallelStreamWriter.instance = None
    ParallelStreamWriter.set_default_ansi_mode(AnsiMode.ALWAYS)
    ParallelStreamWriter.set_noansi(value=False)
    results, errors = parallel_execute(
        objects=["something", "something more"],
        func=lambda x: x,
@@ -173,7 +172,7 @@ def test_parallel_execute_ansi(capsys):

def test_parallel_execute_noansi(capsys):
    ParallelStreamWriter.instance = None
    ParallelStreamWriter.set_default_ansi_mode(AnsiMode.NEVER)
    ParallelStreamWriter.set_noansi()
    results, errors = parallel_execute(
        objects=["something", "something more"],
        func=lambda x: x,

@@ -28,7 +28,6 @@ from compose.service import Service

def build_config(**kwargs):
    return Config(
        config_version=kwargs.get('config_version', VERSION),
        version=kwargs.get('version', VERSION),
        services=kwargs.get('services'),
        volumes=kwargs.get('volumes'),