Compare commits


212 Commits
1.2.0 ... 1.3.3

Author SHA1 Message Date
Aanand Prasad
8cff440800 Bump 1.3.3
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-16 11:21:01 +01:00
Mazz Mosley
e5f6ae767d Merge pull request #1704 from aanand/fix-timeout-type
Make sure up/restart/stop timeout is an int
(cherry picked from commit c7dccccd1f)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-16 11:19:21 +01:00
Aanand Prasad
cd44179305 Merge pull request #1705 from aanand/fix-labels-null
Handle case where /containers/json returns "Labels": null
(cherry picked from commit 7b9664be8e)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-15 17:33:08 +01:00
Aanand Prasad
c3c5b354b8 Merge pull request #1690 from aanand/bump-1.3.2
Bump 1.3.2
2015-07-14 18:04:22 +01:00
Aanand Prasad
95cf195dbd Bump 1.3.2
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:40:43 +01:00
Aanand Prasad
a80afd67ab Merge pull request #1688 from aanand/use-docker-py-1.3.0
Use docker-py 1.3.0
(cherry picked from commit 1e71eebc74)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:29:25 +01:00
Aanand Prasad
4bc4d273ac Merge pull request #1643 from aanand/warn-about-legacy-one-off-containers
Show an error on 'run' when there are legacy one-off containers
(cherry picked from commit 81707ef1ad)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:29:12 +01:00
Aanand Prasad
4911c77134 Merge pull request #1489 from dnephin/faster_integration_tests
Faster integration tests
(cherry picked from commit 5231288b4e)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>

Conflicts:
	compose/cli/main.py
2015-07-14 17:28:54 +01:00
Aanand Prasad
c1b9a76a54 Merge pull request #1658 from aanand/fix-smart-recreate-nonexistent-image
Fix smart recreate when 'image' is changed to something nonexistent
(cherry picked from commit 2bc10db545)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:25:33 +01:00
Aanand Prasad
c31e25af72 Merge pull request #1642 from aanand/fix-1573
Fix bug where duplicate container is leftover after 'up' fails
(cherry picked from commit f42fd6a3ad)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:25:03 +01:00
Aanand Prasad
c8295d36cc Merge pull request #1644 from aanand/fix-rm-bug
Stop 'rm' and 'ps' listing services not defined in the current file
(cherry picked from commit d85688892c)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:24:15 +01:00
Aanand Prasad
b12c29479e Merge pull request #1521 from dano/validate-service-names
Validate that service names passed to Project.containers aren't bogus.
(cherry picked from commit bc14c473c9)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-07-14 17:24:15 +01:00
Aanand Prasad
cd47829f3d Merge pull request #1588 from aanand/bump-1.3.1
Bump 1.3.1
2015-06-22 08:01:13 -07:00
Aanand Prasad
4d4ef4e0b3 Bump 1.3.1
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-21 17:32:36 -07:00
Aanand Prasad
882ef2ccd8 Merge pull request #1578 from aanand/fix-migrate-help
Fix 'docker-compose help migrate-to-labels'
(cherry picked from commit c8751980f9)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-21 17:28:18 -07:00
Aanand Prasad
d6cd76c3c1 Merge pull request #1570 from aanand/fix-build-pull
Explicitly set pull=False when building
(cherry picked from commit 4f83a18912)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-21 17:28:09 -07:00
Ben Firshman
bd0be2cdc7 Merge pull request #1580 from aanand/dont-set-network-mode-when-none-is-specified
Don't set network mode when none is specified
(cherry picked from commit 911cd60360)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-21 17:27:59 -07:00
Aanand Prasad
a8d7ebd987 Merge pull request #1461 from aanand/bump-1.3.0
Bump 1.3.0
2015-06-18 11:41:40 -07:00
Aanand Prasad
00f61196a4 Bump 1.3.0
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-18 11:25:10 -07:00
Aanand Prasad
c21d6706b6 Merge pull request #1565 from aanand/use-docker-1.7.0
Use docker 1.7.0 and docker-py 1.2.3
(cherry picked from commit 8ffeaf2a54)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>

Conflicts:
	Dockerfile
2015-06-18 11:25:10 -07:00
Aanand Prasad
c3c5d91c47 Merge pull request #1563 from moxiegirl/hugo-test-fixes
Hugo final 1.7 Documentation PR -- please read carefully
(cherry picked from commit 4e73e86d94)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-18 11:25:10 -07:00
Aanand Prasad
7fa4cd1214 Merge pull request #1552 from aanand/add-upgrade-instructions
Add upgrading instructions to install docs
(cherry picked from commit bc7161b475)
2015-06-16 16:29:18 -07:00
Aanand Prasad
f353d9fbc0 Merge pull request #1406 from vdemeester/667-compose-port-scale
Fixing docker-compose port with scale (#667)
(cherry picked from commit 5b2a0cc73d)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:08 -07:00
Daniel Nephin
09018855ce Merge pull request #1550 from aanand/update-docker-py
Update setup.py with new docker-py minimum
(cherry picked from commit b3b44b8e4c)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:08 -07:00
Aanand Prasad
719954b02f Merge pull request #1545 from moxiegirl/test-tooling
Updated for new documentation tooling
(cherry picked from commit aaccd12d3d)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:08 -07:00
Daniel Nephin
67bc3fabe4 Merge pull request #1544 from aanand/fix-volume-deduping
Fix volume binds de-duplication
(cherry picked from commit 77e594dc94)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:08 -07:00
Daniel Nephin
e724a346c7 Merge pull request #1526 from aanand/remove-start-or-create-containers
Remove Service.start_or_create_containers()
(cherry picked from commit 38a11c4c28)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:08 -07:00
Daniel Nephin
87b4545b44 Merge pull request #1508 from thaJeztah/update-dockerproject-links
Update dockerproject.com links
(cherry picked from commit 417e6ce0c9)
2015-06-15 11:22:07 -07:00
Aanand Prasad
58a7844129 Merge pull request #1482 from bfirsh/add-build-and-dist-to-dockerignore
Make it possible to run tests remotely
(cherry picked from commit c8e096e089)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Daniel Nephin
4353f7b9f9 Merge pull request #1475 from fordhurley/patch-1
Fix markdown formatting for `--service-ports` example
(cherry picked from commit d64bf88e26)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Aanand Prasad
8f8693e13e Merge pull request #1480 from bfirsh/change-sigint-test-to-use-sigstop
Change kill SIGINT test to use SIGSTOP
(cherry picked from commit a15f996744)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Ben Firshman
363a6563c7 Merge pull request #1537 from aanand/reorder-service-utils
Reorder service.py utility methods
(cherry picked from commit e3525d64b5)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Aanand Prasad
59d6af73fa Merge pull request #1539 from bfirsh/add-image-affinity-to-test
Add image affinity to test script
(cherry picked from commit 4c2112dbfd)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Aanand Prasad
cd7f67018e Merge pull request #1466 from noironetworks/changing-scale-to-warning
Modified scale awareness from exception to warning
(cherry picked from commit 7d2a89427c)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Ben Firshman
b7e8770c4f Merge pull request #1538 from thieman/tnt-serivce-misspelled
Correct misspelling of "Service" in an error message
(cherry picked from commit bd246fb011)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:07 -07:00
Aanand Prasad
ad4cc5d6df Merge pull request #1497 from aanand/use-1.7-rc1
Run tests against Docker 1.7 RC2
(cherry picked from commit 0e9ccd36f3)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:06 -07:00
Aanand Prasad
ca14ed68f7 Merge pull request #1533 from edmorley/update-b2d-shellinit-example
Docs: Update boot2docker shellinit example to use 'eval'
(cherry picked from commit 17e03b29f9)
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:06 -07:00
Daniel Nephin
71514cb380 Merge pull request #1531 from aanand/test-crash-resilience
Test that data volumes now survive a crash when recreating
(cherry picked from commit 87c30ae6e4)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-15 11:22:06 -07:00
Daniel Nephin
8212f1bd45 Merge pull request #1529 from aanand/update-dockerpty
Update dockerpty to 0.3.4
(cherry picked from commit 95b2eaac04)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-09 18:24:14 -04:00
Daniel Nephin
dca3bbdea3 Merge pull request #1527 from aanand/remove-logging-on-run-rm
Remove logging on run --rm
(cherry picked from commit 5578ccbb01)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-09 14:39:53 -04:00
Daniel Nephin
8ed7dfef6f Merge pull request #1525 from aanand/fix-duplicate-logging
Fix duplicate logging on up/run
(cherry picked from commit e2b790f732)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-09 14:39:52 -04:00
Daniel Nephin
631f5be02f Merge pull request #1481 from albers/completion-smart-recreate
Support --x-smart-recreate in bash completion
(cherry picked from commit 9a0bb325f2)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-09 14:39:52 -04:00
Ben Firshman
4f4ea2a402 Merge pull request #1325 from sdurrheimer/master
Zsh completion for docker-compose
(cherry picked from commit b638728d6c)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>

Conflicts:
	docs/completion.md
2015-06-09 14:39:50 -04:00
Aanand Prasad
5a5bffebd1 Merge pull request #1464 from twhiteman/bug1461
Possible division by zero error when pulling an image - fixes #1463
(cherry picked from commit d0e87929a1)

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-09 14:39:05 -04:00
Aanand Prasad
8749bc0844 Build Python 2.7.9 in Docker image
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-02 11:39:17 +01:00
Aanand Prasad
f3d0c63db2 Make sure we use Python 2.7.9 and OpenSSL 1.0.1 when building OSX binary
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-06-02 11:39:17 +01:00
Aanand Prasad
93a846db31 Report Python and OpenSSL versions in --version output
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>

Conflicts:
	compose/cli/utils.py
2015-06-02 11:39:17 +01:00
Aanand Prasad
686c25d50f Script to prepare OSX build environment
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-27 17:52:23 +01:00
Aanand Prasad
ef6555f084 Merge branch 'release' into bump-1.3.0 2015-05-26 17:45:28 +01:00
Aanand Prasad
1344099e29 Merge pull request #1444 from aanand/migrate-in-dependency-order
Migrate containers in dependency order
2015-05-26 17:30:14 +01:00
Daniel Nephin
48f3d41947 Merge pull request #1447 from aanand/fix-convergence-when-service-not-created
Fix regression in `docker-compose up`
2015-05-26 10:54:28 -05:00
Aanand Prasad
7da8e6be3b Migrate containers in dependency order
This fixes a bug where migration would fail with an error if a
downstream container was migrated before its upstream dependencies, due
to `check_for_legacy_containers()` being implicitly called when we fetch
`links`, `volumes_from` or `net` dependencies.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-26 16:03:06 +01:00
Aanand Prasad
4795fd874f Fix regression in docker-compose up
When an upstream dependency (e.g. a db) has a container but a downstream
service (e.g. a web app) doesn't, a web container is not created on
`docker-compose up`.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-26 16:01:05 +01:00
Aanand Prasad
276fee105b Merge pull request #1459 from bfirsh/update-description
Update description of Compose
2015-05-26 15:57:26 +01:00
Ben Firshman
8af4ae7935 Merge pull request #1441 from aanand/abort-on-legacy-containers
Bail out immediately if there are legacy containers
2015-05-26 15:45:08 +01:00
Ben Firshman
91ceb33d5a Update description of Compose
"Define and run multi-container applications with Docker"

Not just development environments, and "complex" is not clear and
not really true.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2015-05-26 15:42:55 +01:00
Aanand Prasad
0b4d9401ee Bail out immediately if there are legacy containers
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-26 11:53:51 +01:00
Daniel Nephin
889d3636f4 Merge pull request #1440 from aanand/legacy-fixes
Legacy fixes
2015-05-24 12:42:14 -05:00
Daniel Nephin
b0f945d2da Merge pull request #1432 from albers/completion-migrate_to_labels
bash completion for migrate_to_labels
2015-05-23 18:21:44 -05:00
Daniel Nephin
93c529182e Merge pull request #1446 from aanand/fix-create-logging
Fix missing logging on container creation
2015-05-21 17:34:54 -05:00
Harald Albers
412034a023 bash completion for migrate-to-labels
Signed-off-by: Harald Albers <github@albersweb.de>
2015-05-21 12:45:04 -07:00
Aanand Prasad
30c9e7323a Fix missing logging on container creation
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-21 20:06:25 +01:00
Aanand Prasad
051f56a1e6 Fix bugs with one-off legacy containers
- One-off containers were included in the warning log messages, which can
  make for unreadable output when there are lots (as there often are).

- Compose was attempting to recreate one-off containers as normal
  containers when migrating.

Fixed by implementing the exact naming logic from before we used labels.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-21 18:21:49 +01:00
Aanand Prasad
b5ce23885b Split out fetching of legacy names so we can test it
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-21 18:21:49 +01:00
Aanand Prasad
0fdb8bf814 Refactor migration logic
- Rename `migration` module to `legacy` to make its legacy-ness explicit

- Move `check_for_legacy_containers` into `legacy` module

- Fix migration test so it can be run in isolation

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-21 18:21:09 +01:00
Aanand Prasad
e538923545 Merge pull request #1442 from aanand/dashes-in-migration-command
Rename migrate_to_labels -> migrate-to-labels
2015-05-21 18:19:30 +01:00
Daniel Nephin
c0f65a9f4c Merge pull request #1445 from aanand/replace-sleep-with-top
Use 'top' instead of 'sleep' as a dummy command
2015-05-21 11:51:23 -05:00
Aanand Prasad
b0cb31c186 Use 'top' instead of 'sleep' as a dummy command
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-21 16:24:29 +01:00
Aanand Prasad
3080244c0b Rename migrate_to_labels -> migrate-to-labels
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-21 14:54:41 +01:00
Aanand Prasad
b183a66db1 Merge pull request #1437 from dnephin/fix_project_containers
Project.containers with service_names
2015-05-21 10:42:08 +01:00
Daniel Nephin
022f81711e Fixes #1434, Project.containers with service_names.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-05-20 20:47:34 -04:00
Aanand Prasad
4f40d0c168 Merge pull request #1433 from bfirsh/remove-whitespace-from-json-representation-of-container-config
Remove whitespace from json hash
2015-05-20 16:55:02 +01:00
Ben Firshman
f5ac1fa073 Remove whitespace from json hash
Reasoning:

e5d8447f06 (commitcomment-11243708)

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2015-05-20 16:02:08 +01:00
Ben Firshman
f79eb7b9ad Merge pull request #1382 from lsowen/security_opt
Add security_opt as a docker-compose.yml option
2015-05-20 13:40:42 +01:00
Aanand Prasad
b0b6ed31c4 Merge pull request #1430 from albers/fix-1426
Fix #1426 - migrate_to_labels not found
2015-05-20 12:31:18 +01:00
lsowen
ea7ee301c0 Add security_opt as a docker-compose.yml option
Signed-off-by: Logan Owen <lsowen@s1network.com>
2015-05-19 13:47:41 -04:00
Harald Albers
41315b32cb Fix #1426 - migrate_to_labels not found
Signed-off-by: Harald Albers <github@albersweb.de>
2015-05-19 16:37:50 +02:00
Aanand Prasad
80eaf4cc9f Merge pull request #1399 from aanand/state
Only recreate what's changed
2015-05-18 19:25:42 +01:00
Aanand Prasad
ef4eb66723 Implement smart recreate behind an experimental CLI flag
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-18 18:39:18 +01:00
Aanand Prasad
82bc7cd5ba Remove override_options arg from recreate_container(s)
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-18 17:42:09 +01:00
Aanand Prasad
3304c68891 Only set AttachStdin/out/err for one-off containers
If we're just streaming logs from `docker-compose up`, we don't need
to set AttachStdin/out/err, and doing so results in containers with
different configuration depending on whether `up` or `run` were invoked
with `-d` or not.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-18 17:41:04 +01:00
Aanand Prasad
1e6d912fbc Merge pull request #1356 from dnephin/use_labels_instead_of_names
Use labels instead of names to identify containers
2015-05-18 17:38:46 +01:00
Aanand Prasad
4ef3bbcdf2 Merge pull request #1415 from aanand/fix-run-race-condition
Fix race condition in `docker-compose run`
2015-05-18 16:16:31 +01:00
Daniel Nephin
62059d55e6 Add migration warning and option to migrate to labels.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-05-18 10:55:12 -04:00
Daniel Nephin
ed50a0a3a0 Resolves #1066, use labels to identify containers
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-05-18 10:47:26 -04:00
Daniel Nephin
28d2aff8b8 Fix teardown for integration tests.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-05-18 10:44:44 -04:00
Aanand Prasad
862971cffa Fix race condition in docker-compose run
We shouldn't start the container before handing it off to dockerpty -
dockerpty will start it after attaching, which is the correct order.
Otherwise the container might exit before we attach to it, which can
lead to weird bugs.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-15 12:16:24 +01:00
Daniel Nephin
c8022457eb Merge pull request #1413 from aanand/update-dockerpty
Update dockerpty to 0.3.3
2015-05-14 21:23:46 -04:00
Aanand Prasad
9bbf1a33d1 Update dockerpty to 0.3.3
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-14 20:03:50 +01:00
Aanand Prasad
0ac8c3cb03 Merge pull request #858 from dnephin/fix_volumes_recreate_on_1.4.1
Preserve individual volumes on recreate
2015-05-14 16:12:08 +01:00
Daniel Nephin
d5c9626040 Merge pull request #1411 from aanand/fix-extends-docs
Fix typo in extends.md
2015-05-14 09:53:38 -04:00
Aanand Prasad
ad9c5ad938 Fix typo in extends.md
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-14 10:48:35 +01:00
Daniel Nephin
70d2e64dfe Merge pull request #1407 from aanand/update-docker-py
Update docker-py to 1.2.2
2015-05-12 20:22:48 -04:00
Aanand Prasad
1dccd58209 Update docker-py to 1.2.2
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-05-12 18:51:45 +01:00
Aanand Prasad
e0103ac0d4 Merge pull request #1405 from bfirsh/link-to-getting-started-guides-from-each-page
Link to getting started guides from each page
2015-05-12 14:40:53 +01:00
Ben Firshman
4d745ab87a Link to getting started guides from each page
These are really hard to find.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2015-05-12 12:44:43 +01:00
Daniel Nephin
417d9c2d51 Use individual volumes for recreate instead of volumes_from
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-05-11 13:01:43 -04:00
Daniel Nephin
4997facbb4 Merge pull request #1400 from DanElbert/754-device_option
Added devices config handling and device HostConfig handling
2015-05-11 12:42:48 -04:00
delbert@umn.edu
df87bd91c8 Added devices configuration option
Signed-off-by: Dan Elbert <dan.elbert@gmail.com>
2015-05-11 10:50:58 -05:00
Aanand Prasad
1748b0f81a Merge pull request #1349 from dnephin/rename_instead_of_intermediate
Rename container when recreating it
2015-05-08 10:33:28 +01:00
Daniel Nephin
6829efd4d3 Resolves #874, Rename instead of use an intermediate.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-05-07 21:53:41 -04:00
Daniel Nephin
99f2a3a583 Merge pull request #1396 from albers/completion-doc
Fix markdown formatting issue
2015-05-07 14:04:16 -04:00
Daniel Nephin
0f2f9db6d8 Merge pull request #1388 from vdemeester/1303-log-driver-support
Add support for log-driver as a docker-compose.yml option
2015-05-07 12:00:42 -04:00
Harald Albers
d6223371d6 Fix markdown formatting issue
Signed-off-by: Harald Albers <github@albersweb.de>
2015-05-07 04:41:12 -07:00
Daniel Nephin
4817d5944c Merge pull request #1391 from albers/completion-extglob
Fix #1386 by ensuring that extglob is set in bash completion
2015-05-06 10:46:13 -04:00
Vincent Demeester
f626fc5ce8 Add support for log-driver in docker-compose.yml
Closes #1303

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2015-05-06 13:18:58 +02:00
Harald Albers
1579a125a3 Ensure that extglob is set in bash completion
Signed-off-by: Harald Albers <github@albersweb.de>
2015-05-06 09:33:22 +02:00
Daniel Nephin
7fb9ec29c4 Merge pull request #1335 from chernjie/pid_readonly
docker-compose create --readonly
2015-05-05 20:50:01 -04:00
Daniel Nephin
f78e89f265 Merge pull request #1381 from sherter/help-fix
Show proper command in help text of build subcommand
2015-05-05 20:48:25 -04:00
CJ
b06294399a See #1335: Added --read-only
Signed-off-by: CJ <lim@chernjie.com>
2015-05-02 23:39:39 +08:00
Simon Herter
b8e0aed21c Show proper command in help text of build subcommand
The help text of the build subcommand suggested to use 'compose build' (instead of 'docker-compose build') to rebuild images.

Signed-off-by: Simon Herter <sim.herter@gmail.com>
2015-05-01 18:58:55 -04:00
Daniel Nephin
4bce388b51 Merge pull request #1376 from aanand/fix-build-non-ascii-filename
Make sure the build path we pass to docker-py is a binary string
2015-04-30 20:54:21 -04:00
Daniel Nephin
6c95eed781 Merge pull request #1269 from aanand/labels
Implement 'labels' option
2015-04-30 20:49:11 -04:00
Aanand Prasad
4f366d8355 Make sure the build path we pass to docker-py is a binary string
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-30 11:57:46 +01:00
Daniel Nephin
878d90febf Merge pull request #1374 from aanand/close-before-attaching
Close connection after building or pulling
2015-04-29 18:56:03 -04:00
Aanand Prasad
1a77feea3f Close connection before attaching on 'up' and 'run'
This ensures that the connection is not recycled, which can cause the
Docker daemon to complain if we've already performed another streaming
call such as doing a build.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-29 18:42:03 +01:00
Aanand Prasad
7e0ab0714f Merge pull request #1344 from dnephin/fix_pull_with_sha
Support image with ids instead of names
2015-04-29 16:46:34 +01:00
Aanand Prasad
2e6bc078fb Implement 'labels' option
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-29 16:45:18 +01:00
Daniel Nephin
3dd860f0ba Fix #923, support image with ids instead of names.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2015-04-29 10:13:18 -04:00
Daniel Nephin
de800dea0f Merge pull request #1370 from aanand/update-docker-version
Update Docker version to 1.6 stable
2015-04-29 10:03:13 -04:00
Aanand Prasad
fed4377ef6 Merge pull request #1351 from mchasal/1301-alphabetize_usage
Fix for #1301, Alphabetize Commands
2015-04-29 14:21:46 +01:00
Aanand Prasad
021bf46557 Update Docker version to 1.6 stable
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-29 13:58:32 +01:00
Aanand Prasad
b7e5116267 Merge pull request #1352 from bfirsh/add-irccloud-invite-link
Use cool new IRCCloud links for IRC channel
2015-04-29 13:25:25 +01:00
Daniel Nephin
9532e5a4f2 Merge pull request #1331 from xuxinkun/cpuset20150424
Add cpuset config.
2015-04-28 13:21:00 -04:00
Ben Firshman
e5a118e3ce Use cool new IRCCloud links for IRC channel
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2015-04-28 14:24:48 +01:00
Daniel Nephin
a631c1eddb Merge pull request #1357 from turtlemonvh/1350-extends_parent_build_directory_dne_error
Fix for #1350, nonexisting build path in parent section causes extending section to fail
2015-04-27 13:49:01 -04:00
Timothy Van Heest
855855a0e6 Fix for #1350, nonexisting build path in parent section causes extending section to fail
Signed-off-by: Timothy Van Heest <timothy.vanheest@gmail.com>
2015-04-27 10:55:30 -04:00
Daniel Nephin
b808674132 Merge pull request #1360 from aanand/remove-wercker
Remove wercker.yml
2015-04-27 10:17:41 -04:00
Ben Firshman
7e574fca71 Merge pull request #1358 from aanand/update-readme
Update README.md with changes to docs/index.md
2015-04-27 15:14:06 +01:00
Aanand Prasad
7d617d60bc Remove wercker.yml
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-27 15:10:01 +01:00
Aanand Prasad
da71e01d30 Merge pull request #1359 from aanand/remove-dco-validation
Remove DCO validation from CI script
2015-04-27 15:08:51 +01:00
Daniel Nephin
a89bc304f6 Merge pull request #1075 from KyleJamesWalker/master
Support alternate Dockerfile name.
2015-04-27 10:06:43 -04:00
Aanand Prasad
240495f07f Remove DCO validation from CI script
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-27 15:02:24 +01:00
Aanand Prasad
2e19887bf1 Update README.md with changes to docs/index.md
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-27 14:59:42 +01:00
Daniel Nephin
a982e516fc Merge pull request #1354 from xwisen/master
modified the release notes section: the first [PR #972] to [PR #1088]
2015-04-27 09:56:46 -04:00
Ben Firshman
3af56e1602 Merge pull request #1200 from chanezon/1148-pat-paul
paulczar fixes plus example file
2015-04-27 14:54:14 +01:00
Aanand Prasad
16f8106149 Merge pull request #1158 from chernjie/addhosts
Add extra_hosts to yml configuration --add-hosts
2015-04-27 12:26:29 +01:00
CJ
86a08c00f2 See https://github.com/docker/compose/pull/1158#discussion_r29063218
Signed-off-by: CJ <lim@chernjie.com>
2015-04-27 14:07:21 +08:00
xwisen
0ca9fa8b2b modified the release notes section: the first [PR #972] to [PR #1088]
Signed-off-by: xwisen <xwisen@gmail.com>
2015-04-26 13:16:23 +08:00
xuxinkun
688f82c1cf Add cpuset config.
Signed-off-by: xuxinkun <xuxinkun@gmail.com>
2015-04-26 00:14:52 +08:00
Michael Chase-Salerno
9a44708081 Fix for #1301, Alphabetize Commands
Signed-off-by: Michael Chase-Salerno <bratac@linux.vnet.ibm.com>
2015-04-24 20:45:18 +00:00
Daniel Nephin
89789c54ad Merge pull request #1232 from aleksandr-vin/add-parent-directories-search-for-default-compose-files
Add parent directories search for default compose-files
2015-04-24 13:12:24 -04:00
Kyle Walker
d17c4d27fa Support alternate Dockerfile name.
Signed-off-by: Kyle James Walker <KyleJamesWalker@gmail.com>
2015-04-24 08:30:36 -07:00
CJ
25ee3f0033 Remove extra s from --add-host
linting...
six.string_types
list-of-strings in examples
disallow extra_hosts support for list-of-dicts
A more thorough sets of tests for extra_hosts
Provide better examples
As per @aanand's [comment](https://github.com/docker/compose/pull/1158/files#r28326312)

  I think it'd be better to check `if not isinstance(extra_hosts_line,
  six.string_types)` and raise an error saying `extra_hosts_config must be
  either a list of strings or a string->string mapping`. We shouldn't need
  to do anything special with the list-of-dicts case.
order result to work with assert
use set() instead of sort()

Signed-off-by: CJ <lim@chernjie.com>
2015-04-24 09:21:29 +08:00
Thomas Desvenain
8098b65576 Fix when pyyaml has interpreted line as a dictionary
Added unit tests in build_extra_hosts + fix

Signed-off-by: CJ <lim@chernjie.com>
2015-04-24 09:21:21 +08:00
Sam Wing
fb81c37ca6 added the extra_hosts option to the yml configuration which exposes the --add-host flag from the docker client
Signed-off-by: Sam Wing <sampwing@gmail.com>
2015-04-23 21:54:59 +08:00
Daniel Nephin
e6ec76161d Merge pull request #1293 from mchasal/1224
1224: Check that image or build is specified.
2015-04-21 15:19:11 -04:00
Aanand Prasad
b317071cf3 Merge pull request #1205 from josephpage/run-rm-restart
[cli] run --rm overrides restart: always
2015-04-21 15:48:43 +01:00
Aanand Prasad
bb922d63f5 Merge pull request #1318 from aanand/fix-restart-timeout
Fix --timeout flag on restart, add tests for stop and restart
2015-04-21 15:07:07 +01:00
Aanand Prasad
2291fa2d45 Fix --timeout flag on restart, add tests for stop and restart
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-21 11:59:33 +01:00
Ben Firshman
3c6652c101 Merge pull request #1308 from aanand/update-docs-1.2.0
Update docs for 1.2.0
2015-04-17 10:35:06 -07:00
Aanand Prasad
43af1684c1 Update docs for 1.2.0
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-17 16:02:57 +01:00
Aanand Prasad
2cdde099fa Merge pull request #1297 from aanand/bump-1.3.0-dev
Bump 1.3.0dev
2015-04-16 18:02:59 +01:00
Aanand Prasad
310c7623f9 Bump 1.3.0dev
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-16 17:54:18 +01:00
Aanand Prasad
6e64802545 Merge pull request #1296 from aanand/merge-release-1.2.0
Merge release 1.2.0
2015-04-16 17:52:47 +01:00
Aanand Prasad
8b5015c10f Bump 1.2.0
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-16 17:48:28 +01:00
Aanand Prasad
ed549155b3 Merge pull request #1159 from aanand/bump-1.2.0
Bump 1.2.0
2015-04-16 17:46:47 +01:00
Michael Chase-Salerno
24a6c240fc Testcase for #1224, check that image or build is specified
Signed-off-by: Michael Chase-Salerno <bratac@linux.vnet.ibm.com>
2015-04-15 21:38:24 +00:00
Michael Chase-Salerno
15b763acdb Fix for #1224, check that image or build is specified
Signed-off-by: Michael Chase-Salerno <bratac@linux.vnet.ibm.com>
2015-04-15 02:03:02 +00:00
Daniel Nephin
3cd116b99d Merge pull request #1278 from albers/completion-run-user
Add bash completion for docker-compose run --user
2015-04-14 11:04:03 -04:00
Aanand Prasad
b559653c8c Merge pull request #1277 from fredlf/add-help-text
Adds Where to Get Help section
2015-04-14 15:16:11 +01:00
Harald Albers
5f17423d3e Add bash completion for docker-compose run --user
Signed-off-by: Harald Albers <github@albersweb.de>
2015-04-10 19:52:13 +02:00
Fred Lifton
2a442ec6d9 Adds Where to Get Help section
Signed-off-by: Fred Lifton <fred.lifton@docker.com>
2015-04-09 16:23:25 -07:00
Aleksandr Vinokurov
ceff5cb9ca Add parent directories search for default compose-files
Does not change directory to the parent with the compose-file found.
Works like passing '--file' or setting 'COMPOSE_FILE' with absolute path.
Resolves issue #946.

Signed-off-by: Aleksandr Vinokurov <aleksandr.vin@gmail.com>
2015-04-09 22:36:47 +00:00
Ben Firshman
4926f8aef6 Merge pull request #1261 from aanand/fix-vars-in-volume-paths
Fix vars in volume paths
2015-04-09 14:44:07 +01:00
Daniel Nephin
927115c3d4 Merge pull request #1271 from sdake/master
Remove stray print
2015-04-09 09:41:27 -04:00
Steven Dake
1d7247b67e Remove stray print
A previous commit introduced a stray print operation.  Remove it.

Signed-off-by: Steven Dake <stdake@cisco.com>
2015-04-08 12:49:37 -07:00
Ben Firshman
a1cd00e3f0 Merge pull request #1251 from aanand/extends-guide
Add tutorial and reference for `extends`
2015-04-08 15:47:39 +01:00
Aanand Prasad
fd568b389d Fix home directory and env expansion in volume paths
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-07 16:23:45 +01:00
Aanand Prasad
4f95e81c6d Merge pull request #1166 from spk/fix_example_env_file
Docs: fix env_file example
2015-04-07 15:52:10 +01:00
Aanand Prasad
619e783a05 Merge pull request #1011 from sdake/master
Add a --pid=host feature to expose the host PID space to the container
2015-04-07 13:49:53 +01:00
Aanand Prasad
f3f7f000fe Add tutorial and reference for extends
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-07 13:46:14 +01:00
Aanand Prasad
219751abc7 Merge pull request #1258 from fredlf/1.6-docs-updates
Prepping for 1.6 release.
2015-04-07 11:22:02 +01:00
Joseph Page
0b48e137e8 add unit tests for run --rm with restart
Signed-off-by: Joseph Page <joseph.page@rednet.io>
2015-04-07 10:18:25 +02:00
Fred Lifton
947742852e Prepping for 1.6 release.
Adds release notes and edits/revises new Compose in production doc.
2015-04-06 16:47:07 -07:00
Steven Dake
94277a3eb0 Add --pid=host support
Allow docker-compose to use the docker --pid=host API available in 1.17

Signed-off-by: Steven Dake <stdake@cisco.com>
2015-04-06 12:44:35 -07:00
Steven Dake
11a2100d53 Add a --pid=host feature to expose the host PID space to the container
Docker 1.5.0+ introduces a --pid=host feature which allows sharing of PID
namespaces between baremetal and containers.  This is useful for atomic
upgrades, atomic rollbacks, and monitoring.

For more details of a real-life use case, check out:
http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/

Signed-off-by: Steven Dake <stdake@cisco.com>
2015-04-06 11:45:37 -07:00
Fred Lifton
530d7af5cf Merge pull request #1253 from aanand/production-guide
Add guide to using Compose in production
2015-04-06 10:49:49 -07:00
Aanand Prasad
502d58abe6 Add guide to using Compose in production
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-03 18:56:29 -04:00
Fred Lifton
eb073c53f4 Merge pull request #1249 from asveepay/update_doc
Update install docs for permission denied error
2015-04-03 12:31:38 -07:00
Roland Cooper
d866415b9a Update install docs for permission denied error
Signed-off-by: Roland Cooper <rcooper@enova.com>
2015-04-03 12:21:15 -05:00
Aanand Prasad
dd40658f87 Merge pull request #1238 from aanand/use-docker-1.6-rc3
Use Docker 1.6 RC3
2015-04-03 13:00:49 -04:00
Aanand Prasad
b3382ffd4f Use Docker 1.6 RC4
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-04-03 10:57:28 -04:00
Aanand Prasad
15a0fac939 Merge pull request #1141 from bfirsh/contributing-section-in-readme
Add contributing section to readme
2015-04-03 10:56:06 -04:00
Laurent Arnoud
e3cff5d17d Docs: fix env_file example
Thanks-to: @aanand
Signed-off-by: Laurent Arnoud <laurent@spkdev.net>
2015-04-01 11:15:09 +02:00
Daniel Nephin
0f70b8638f Merge pull request #1213 from moysesb/relative_build
Make value of 'build:' relative to the yml file.
2015-03-31 21:20:02 -04:00
Moysés Borges
8584525e8d Interpret 'build:' as relative to the yml file
* This fix introduces one side-effect: the build parameter is now
validated early, when the service dictionary is first constructed.
That leads to less scary stack traces when the path is not valid.

* The tests for the changes introduced here alter the fixtures
of those (otherwise unrelated) tests that make use of the 'build:'
parameter.

Signed-off-by: Moysés Borges Furtado <moyses.furtado@wplex.com.br>
2015-03-31 18:47:26 -03:00
Aanand Prasad
e3e2247159 Merge pull request #1231 from aanand/docker-1.6rc2
Test against Docker 1.6 RC2 only
2015-03-31 16:31:03 -04:00
Aanand Prasad
0650c4485a Test against Docker 1.6 RC2 only
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-03-31 16:04:22 -04:00
Aanand Prasad
e708f4f59d Merge pull request #1226 from aanand/merge-multi-value-options
Merge multi-value options when extending
2015-03-31 16:01:22 -04:00
Aanand Prasad
907918b492 Merge multi-value options when extending
Closes #1143.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-03-31 15:30:59 -04:00
Aanand Prasad
6dbe321a45 Merge pull request #1225 from aanand/fix-1222
When extending, `build` replaces `image` and vice versa
2015-03-31 15:23:34 -04:00
Aanand Prasad
2a415ede08 When extending, build replaces image and vice versa
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-03-30 17:14:19 -04:00
Ben Firshman
43369cda9c Merge pull request #1221 from aanand/update-swarm-doc
Update Swarm doc
2015-03-30 18:08:16 +01:00
Aanand Prasad
a2557a3354 Merge pull request #1198 from funkyfuture/reformat-contributing.md
Reformat CONTRIBUTING.md
2015-03-30 12:08:56 -04:00
Aanand Prasad
1a14449fe6 Update Swarm doc
- Co-scheduling will now work, so we can remove the stuff about
  `volumes_from` and `net` and manual affinity filters.

- Added a section about building.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-03-30 11:34:26 -04:00
Joseph Page
0b89ae6f20 [cli] run --rm overrides restart: always
docker/compose#1013

Signed-off-by: Joseph Page <joseph.page@rednet.io>
2015-03-30 14:45:24 +02:00
Patrick Chanezon
cec6dc28bb implemented @fredl suggestions
Signed-off-by: Patrick Chanezon <patlist@chanezon.com>
2015-03-27 17:12:29 -07:00
Aanand Prasad
853ce255ea Merge pull request #1202 from aanand/jenkins-script
WIP: Jenkins script
2015-03-27 14:59:49 -07:00
Aanand Prasad
db852e14e4 Add script/ci
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-03-27 14:39:32 -07:00
Patrick Chanezon
98dd0cd1f8 implemented @aanand comments
Signed-off-by: Patrick Chanezon <patlist@chanezon.com>
2015-03-27 13:26:51 -07:00
Patrick Chanezon
c441ac90d6 paulczar fixes plus example file
Signed-off-by: Patrick Chanezon <patlist@chanezon.com>
2015-03-26 16:35:53 -07:00
Aanand Prasad
9d7b54d8fd Merge pull request #1183 from pborreli/patch-2
Fixed typo
2015-03-26 12:31:07 -07:00
Pascal Borreli
59f04c6e29 Fixed typo
Signed-off-by: Pascal Borreli <pascal@borreli.com>
2015-03-26 19:03:06 +00:00
Aanand Prasad
367ae0c848 Merge pull request #1185 from akoskaaa/master
Make test files and config files pep8 valid
2015-03-26 10:04:37 -07:00
akoskaaa
4e0f555c58 make flake8 a bit more specific
Signed-off-by: akoskaaa <akos.hochrein@prezi.com>
2015-03-26 09:09:15 -07:00
Ben Firshman
baf18decae Merge pull request #1179 from aanand/test-1.6-rc
Add Docker 1.6 RC2 to tested versions
2015-03-26 14:18:36 +00:00
funkyfuture
826b8ca4d3 Reformat CONTRIBUTING.md
- some reformatting to make it more readable in smaller terminals
- adds a note that suggests validating DCO before pushing

Signed-off-by: funkyfuture <funkyfuture@riseup.net>
2015-03-26 13:11:05 +01:00
akoskaaa
fa2fb6bd38 [pep8] flake8 run for everything, fix items from this change
Signed-off-by: akoskaaa <akos.hochrein@prezi.com>
2015-03-25 23:15:34 -07:00
akoskaaa
f9ea5ecf40 [pep8] make test files and config files pep8 valid
Signed-off-by: akoskaaa <akos.hochrein@prezi.com>
2015-03-25 20:20:38 -07:00
Aanand Prasad
99f7eba930 Add Docker 1.6 RC2 to tested versions
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2015-03-25 14:48:39 -07:00
Ben Firshman
e1b27acd02 Add contributing section to readme
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2015-03-25 17:22:04 +01:00
83 changed files with 4187 additions and 1050 deletions

View File

@@ -1,2 +1,4 @@
.git
build
dist
venv

View File

@@ -1,6 +1,83 @@
Change log
==========
1.3.3 (2015-07-15)
------------------
Two regressions have been fixed:
- When stopping containers gracefully, Compose was setting the timeout to 0, effectively forcing a SIGKILL every time.
- Compose would sometimes crash depending on the formatting of container data returned from the Docker API.
1.3.2 (2015-07-14)
------------------
The following bugs have been fixed:
- When there were one-off containers created by running `docker-compose run` on an older version of Compose, `docker-compose run` would fail with a name collision. Compose now shows an error if you have leftover containers of this type lying around, and tells you how to remove them.
- Compose was not reading Docker authentication config files created in the new location, `~/.docker/config.json`, and authentication against private registries would therefore fail.
- When a container had a pseudo-TTY attached, its output in `docker-compose up` would be truncated.
- `docker-compose up --x-smart-recreate` would sometimes fail when an image tag was updated.
- `docker-compose up` would sometimes create two containers with the same numeric suffix.
- `docker-compose rm` and `docker-compose ps` would sometimes list services that aren't part of the current project (though no containers were erroneously removed).
- Some `docker-compose` commands would not show an error if invalid service names were passed in.
Thanks @dano, @josephpage, @kevinsimper, @lieryan, @phemmer, @soulrebel and @sschepens!
1.3.1 (2015-06-21)
------------------
The following bugs have been fixed:
- `docker-compose build` would always attempt to pull the base image before building.
- `docker-compose help migrate-to-labels` failed with an error.
- If no network mode was specified, Compose would set it to "bridge", rather than allowing the Docker daemon to use its configured default network mode.
1.3.0 (2015-06-18)
------------------
Firstly, two important notes:
- **This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app** - see the [upgrading section of the install docs](https://github.com/docker/compose/blob/1.3.0rc1/docs/install.md#upgrading) for details.
- Compose now requires Docker 1.6.0 or later.
We've done a lot of work in this release to remove hacks and make Compose more stable:
- Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools.
- Compose no longer uses "intermediate containers" when recreating containers for a service. This makes `docker-compose up` less complex and more resilient to failure.
There are some new features:
- `docker-compose up` has an **experimental** new behaviour: it will only recreate containers for services whose configuration has changed in `docker-compose.yml`. This will eventually become the default, but for now you can take it for a spin:
$ docker-compose up --x-smart-recreate
- When invoked in a subdirectory of a project, `docker-compose` will now climb up through parent directories until it finds a `docker-compose.yml`.
Several new configuration keys have been added to `docker-compose.yml`:
- `dockerfile`, like `docker build --file`, lets you specify an alternate Dockerfile to use with `build`.
- `labels`, like `docker run --label`, lets you add custom metadata to containers.
- `extra_hosts`, like `docker run --add-host`, lets you add entries to a container's `/etc/hosts` file.
- `pid: host`, like `docker run --pid=host`, lets you reuse the same PID namespace as the host machine.
- `cpuset`, like `docker run --cpuset-cpus`, lets you specify which CPUs to allow execution in.
- `read_only`, like `docker run --read-only`, lets you mount a container's filesystem as read-only.
- `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/reference/run/#security-configuration).
- `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/reference/run/#logging-drivers-log-driver).
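Taken together, these keys slot straight into a service definition. An illustrative sketch (the values and image metadata here are made up for illustration, not taken from this changelog):

```yaml
web:
  build: .
  dockerfile: Dockerfile-alternate   # build from a non-default Dockerfile
  labels:
    com.example.description: "Accounting webapp"
  extra_hosts:
    - "somehost:162.242.195.82"      # extra /etc/hosts entry
  pid: host                          # share the host's PID namespace
  cpuset: "0,1"                      # pin to CPUs 0 and 1
  read_only: true                    # mount the container filesystem read-only
  security_opt:
    - label:type:svirt_apache_t
  log_driver: syslog
```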
Many bugs have been fixed, including the following:
- The output of `docker-compose run` was sometimes truncated, especially when running under Jenkins.
- A service's volumes would sometimes not update after volume configuration was changed in `docker-compose.yml`.
- Authenticating against third-party registries would sometimes fail.
- `docker-compose run --rm` would fail to remove the container if the service had a `restart` policy in place.
- `docker-compose scale` would refuse to scale a service beyond 1 container if it exposed a specific port number on the host.
- Compose would refuse to create multiple volume entries with the same host path.
Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily!
1.2.0 (2015-04-16)
------------------

View File

@@ -1,6 +1,8 @@
# Contributing to Compose
Compose is a part of the Docker project, and follows the same rules and principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md) to get an overview.
Compose is a part of the Docker project, and follows the same rules and
principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md)
to get an overview.
## TL;DR
@@ -17,22 +19,32 @@ If you're looking to contribute to Compose
but you're new to the project or maybe even to Python, here are the steps
that should get you started.
1. Fork [https://github.com/docker/compose](https://github.com/docker/compose) to your username.
1. Clone your forked repository locally `git clone git@github.com:yourusername/compose.git`.
1. Enter the local directory `cd compose`.
1. Set up a development environment by running `python setup.py develop`. This will install the dependencies and set up a symlink from your `docker-compose` executable to the checkout of the repository. When you now run `docker-compose` from anywhere on your machine, it will run your development version of Compose.
1. Fork [https://github.com/docker/compose](https://github.com/docker/compose)
to your username.
2. Clone your forked repository locally `git clone git@github.com:yourusername/compose.git`.
3. Enter the local directory `cd compose`.
4. Set up a development environment by running `python setup.py develop`. This
will install the dependencies and set up a symlink from your `docker-compose`
executable to the checkout of the repository. When you now run
`docker-compose` from anywhere on your machine, it will run your development
version of Compose.
## Running the test suite
Use the test script to run linting checks and then the full test suite:
Use the test script to run linting checks and then the full test suite against
different Python interpreters:
$ script/test
Tests are run against a Docker daemon inside a container, so that we can test against multiple Docker versions. By default they'll run against only the latest Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run against all supported versions:
Tests are run against a Docker daemon inside a container, so that we can test
against multiple Docker versions. By default they'll run against only the latest
Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run
against all supported versions:
$ DOCKER_VERSIONS=all script/test
Arguments to `script/test` are passed through to the `nosetests` executable, so you can specify a test directory, file, module, class or method:
Arguments to `script/test` are passed through to the `nosetests` executable, so
you can specify a test directory, file, module, class or method:
$ script/test tests/unit
$ script/test tests/unit/cli_test.py
@@ -41,35 +53,34 @@ Arguments to `script/test` are passed through to the `nosetests` executable, so
## Building binaries
Linux:
`script/build-linux` will build the Linux binary inside a Docker container:
$ script/build-linux
OS X:
`script/build-osx` will build the Mac OS X binary inside a virtualenv:
$ script/build-osx
Note that this only works on Mountain Lion, not Mavericks, due to a [bug in PyInstaller](http://www.pyinstaller.org/ticket/807).
For official releases, you should build inside a Mountain Lion VM for proper
compatibility. Run this script first to prepare the environment before
building - it will use Homebrew to make sure Python is installed and
up-to-date.
$ script/prepare-osx
## Release process
1. Open pull request that:
- Updates the version in `compose/__init__.py`
- Updates the binary URL in `docs/install.md`
- Updates the script URL in `docs/completion.md`
- Adds release notes to `CHANGES.md`
2. Create unpublished GitHub release with release notes
3. Build Linux version on any Docker host with `script/build-linux` and attach to release
4. Build OS X version on Mountain Lion with `script/build-osx` and attach to release as `docker-compose-Darwin-x86_64` and `docker-compose-Linux-x86_64`.
3. Build Linux version on any Docker host with `script/build-linux` and attach
to release
4. Build OS X version on Mountain Lion with `script/build-osx` and attach to
release as `docker-compose-Darwin-x86_64` and `docker-compose-Linux-x86_64`.
5. Publish GitHub release, creating tag
6. Update website with `script/deploy-docs`
7. Upload PyPi package
$ git checkout $VERSION

View File

@@ -3,9 +3,11 @@ FROM debian:wheezy
RUN set -ex; \
apt-get update -qq; \
apt-get install -y \
python \
python-pip \
python-dev \
gcc \
make \
zlib1g \
zlib1g-dev \
libssl-dev \
git \
apt-transport-https \
ca-certificates \
@@ -15,16 +17,47 @@ RUN set -ex; \
; \
rm -rf /var/lib/apt/lists/*
ENV ALL_DOCKER_VERSIONS 1.3.3 1.4.1 1.5.0
# Build Python 2.7.9 from source
RUN set -ex; \
curl -LO https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz; \
tar -xzf Python-2.7.9.tgz; \
cd Python-2.7.9; \
./configure --enable-shared; \
make; \
make install; \
cd ..; \
rm -rf /Python-2.7.9; \
rm Python-2.7.9.tgz
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib
# Install setuptools
RUN set -ex; \
curl -LO https://bootstrap.pypa.io/ez_setup.py; \
python ez_setup.py; \
rm ez_setup.py
# Install pip
RUN set -ex; \
curl -LO https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz; \
tar -xzf pip-7.0.1.tar.gz; \
cd pip-7.0.1; \
python setup.py install; \
cd ..; \
rm -rf pip-7.0.1; \
rm pip-7.0.1.tar.gz
ENV ALL_DOCKER_VERSIONS 1.6.0 1.7.0
RUN set -ex; \
for v in ${ALL_DOCKER_VERSIONS}; do \
curl https://get.docker.com/builds/Linux/x86_64/docker-$v -o /usr/local/bin/docker-$v; \
chmod +x /usr/local/bin/docker-$v; \
done
curl https://get.docker.com/builds/Linux/x86_64/docker-1.6.0 -o /usr/local/bin/docker-1.6.0; \
chmod +x /usr/local/bin/docker-1.6.0; \
curl https://test.docker.com/builds/Linux/x86_64/docker-1.7.0 -o /usr/local/bin/docker-1.7.0; \
chmod +x /usr/local/bin/docker-1.7.0
# Set the default Docker to be run
RUN ln -s /usr/local/bin/docker-1.3.3 /usr/local/bin/docker
RUN ln -s /usr/local/bin/docker-1.6.0 /usr/local/bin/docker
RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/

View File

@@ -1,45 +1,35 @@
Docker Compose
==============
[![Build Status](http://jenkins.dockerproject.com/buildStatus/icon?job=Compose Master)](http://jenkins.dockerproject.com/job/Compose%20Master/)
*(Previously known as Fig)*
Compose is a tool for defining and running complex applications with Docker.
With Compose, you define a multi-container application in a single file, then
spin your application up in a single command which does everything that needs to
be done to get it running.
Compose is a tool for defining and running multi-container applications with
Docker. With Compose, you define a multi-container application in a single
file, then spin your application up in a single command which does everything
that needs to be done to get it running.
Compose is great for development environments, staging servers, and CI. We don't
recommend that you use it in production yet.
Using Compose is basically a three-step process.
First, you define your app's environment with a `Dockerfile` so it can be
reproduced anywhere:
```Dockerfile
FROM python:2.7
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
CMD python app.py
```
Next, you define the services that make up your app in `docker-compose.yml` so
1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment:
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
```yaml
web:
build: .
links:
- db
ports:
- "8000:8000"
db:
image: postgres
```
A `docker-compose.yml` looks like this:
Lastly, run `docker-compose up` and Compose will start and run your entire app.
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
links:
- redis
redis:
image: redis
Compose has commands for managing the whole lifecycle of your application:
@@ -52,4 +42,11 @@ Installation and documentation
------------------------------
- Full documentation is available on [Docker's website](http://docs.docker.com/compose/).
- Hop into #docker-compose on Freenode if you have any questions.
- If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
Contributing
------------
[![Build Status](http://jenkins.dockerproject.org/buildStatus/icon?job=Compose%20Master)](http://jenkins.dockerproject.org/job/Compose%20Master/)
Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).

View File

@@ -9,43 +9,24 @@ Still, Compose and Swarm can be useful in a “batch processing” scenario (whe
A number of things need to happen before full integration is achieved, which are documented below.
Re-deploying containers with `docker-compose up`
------------------------------------------------
Repeated invocations of `docker-compose up` will not work reliably when used against a Swarm cluster because of an under-the-hood design problem; [this will be fixed](https://github.com/docker/fig/pull/972) in the next version of Compose. For now, containers must be completely removed and re-created:
$ docker-compose kill
$ docker-compose rm --force
$ docker-compose up
Links and networking
--------------------
The primary thing stopping multi-container apps from working seamlessly on Swarm is getting them to talk to one another: enabling private communication between containers on different hosts hasn't been solved in a non-hacky way.
Long-term, networking is [getting overhauled](https://github.com/docker/docker/issues/9983) in such a way that it'll fit the multi-host model much better. For now, containers on different hosts cannot be linked. In the next version of Compose, linked services will be automatically scheduled on the same host; for now, this must be done manually (see “Co-scheduling containers” below).
Long-term, networking is [getting overhauled](https://github.com/docker/docker/issues/9983) in such a way that it'll fit the multi-host model much better. For now, **linked containers are automatically scheduled on the same host**.
`volumes_from` and `net: container`
-----------------------------------
Building
--------
For containers to share volumes or a network namespace, they must be scheduled on the same host - this is, after all, inherent to how both volumes and network namespaces work. In the next version of Compose, this co-scheduling will be automatic whenever `volumes_from` or `net: "container:..."` is specified; for now, containers which share volumes or a network namespace must be co-scheduled manually (see “Co-scheduling containers” below).
`docker build` against a Swarm cluster is not implemented, so for now the `build` option will not work - you will need to manually build your service's image, push it somewhere and use `image` to instruct Compose to pull it. Here's an example using the Docker Hub:
Co-scheduling containers
------------------------
For now, containers can be manually scheduled on the same host using Swarms [affinity filters](https://github.com/docker/swarm/blob/master/scheduler/filter/README.md#affinity-filter). Heres a simple example:
```yaml
web:
image: my-web-image
links: ["db"]
environment:
- "affinity:container==myproject_db_*"
db:
image: postgres
```
Here, we express an affinity filter on all web containers, saying that each one must run alongside a container whose name begins with `myproject_db_`.
- `myproject` is the common prefix Compose gives to all containers in your project, which is either generated from the name of the current directory or specified with `-p` or the `DOCKER_COMPOSE_PROJECT_NAME` environment variable.
- `*` is a wildcard, which works just like filename wildcards in a Unix shell.
$ docker build -t myusername/web .
$ docker push myusername/web
$ cat docker-compose.yml
web:
image: myusername/web
links: ["db"]
db:
image: postgres
$ docker-compose up -d

View File

@@ -1,4 +1,3 @@
from __future__ import unicode_literals
from .service import Service # noqa:flake8
__version__ = '1.2.0'
__version__ = '1.3.3'

View File

@@ -10,7 +10,7 @@ from .. import config
from ..project import Project
from ..service import ConfigError
from .docopt_command import DocoptCommand
from .utils import call_silently, is_mac, is_ubuntu
from .utils import call_silently, is_mac, is_ubuntu, find_candidates_in_parent_dirs
from .docker_client import docker_client
from . import verbose_proxy
from . import errors
@@ -18,6 +18,13 @@ from .. import __version__
log = logging.getLogger(__name__)
SUPPORTED_FILENAMES = [
'docker-compose.yml',
'docker-compose.yaml',
'fig.yml',
'fig.yaml',
]
class Command(DocoptCommand):
base_dir = '.'
@@ -100,20 +107,10 @@ class Command(DocoptCommand):
if file_path:
return os.path.join(self.base_dir, file_path)
supported_filenames = [
'docker-compose.yml',
'docker-compose.yaml',
'fig.yml',
'fig.yaml',
]
def expand(filename):
return os.path.join(self.base_dir, filename)
candidates = [filename for filename in supported_filenames if os.path.exists(expand(filename))]
(candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, self.base_dir)
if len(candidates) == 0:
raise errors.ComposeFileNotFound(supported_filenames)
raise errors.ComposeFileNotFound(SUPPORTED_FILENAMES)
winner = candidates[0]
@@ -130,4 +127,4 @@ class Command(DocoptCommand):
log.warning("%s is deprecated and will not be supported in future. "
"Please rename your config file to docker-compose.yml\n" % winner)
return expand(winner)
return os.path.join(path, winner)

View File

@@ -32,4 +32,4 @@ def docker_client():
)
timeout = int(os.environ.get('DOCKER_CLIENT_TIMEOUT', 60))
return Client(base_url=base_url, tls=tls_config, version='1.15', timeout=timeout)
return Client(base_url=base_url, tls=tls_config, version='1.18', timeout=timeout)

View File

@@ -33,10 +33,7 @@ class DocoptCommand(object):
if command is None:
raise SystemExit(getdoc(self))
if not hasattr(self, command):
raise NoSuchCommand(command, self)
handler = getattr(self, command)
handler = self.get_handler(command)
docstring = getdoc(handler)
if docstring is None:
@@ -45,6 +42,14 @@ class DocoptCommand(object):
command_options = docopt_full_help(docstring, options['ARGS'], options_first=True)
return options, handler, command_options
def get_handler(self, command):
command = command.replace('-', '_')
if not hasattr(self, command):
raise NoSuchCommand(command, self)
return getattr(self, command)
class NoSuchCommand(Exception):
def __init__(self, command, supercommand):

View File

@@ -58,7 +58,7 @@ class ConnectionErrorGeneric(UserError):
class ComposeFileNotFound(UserError):
def __init__(self, supported_filenames):
super(ComposeFileNotFound, self).__init__("""
Can't find a suitable configuration file. Are you in the right directory?
Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
Supported filenames: %s
""" % ", ".join(supported_filenames))

View File

@@ -10,16 +10,17 @@ import sys
from docker.errors import APIError
import dockerpty
from .. import __version__
from .. import legacy
from ..const import DEFAULT_TIMEOUT
from ..project import NoSuchService, ConfigurationError
from ..service import BuildError, CannotBeScaledError
from ..service import BuildError, NeedsBuildError
from ..config import parse_environment
from .command import Command
from .docopt_command import NoSuchCommand
from .errors import UserError
from .formatter import Formatter
from .log_printer import LogPrinter
from .utils import yesno
from .utils import get_version_info, yesno
log = logging.getLogger(__name__)
@@ -32,7 +33,7 @@ def main():
except KeyboardInterrupt:
log.error("\nAborting.")
sys.exit(1)
except (UserError, NoSuchService, ConfigurationError) as e:
except (UserError, NoSuchService, ConfigurationError, legacy.LegacyError) as e:
log.error(e.msg)
sys.exit(1)
except NoSuchCommand as e:
@@ -46,6 +47,9 @@ def main():
except BuildError as e:
log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
sys.exit(1)
except NeedsBuildError as e:
log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name)
sys.exit(1)
def setup_logging():
@@ -68,38 +72,39 @@ def parse_doc_section(name, source):
class TopLevelCommand(Command):
"""Fast, isolated development environments using Docker.
"""Define and run multi-container applications with Docker.
Usage:
docker-compose [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
--verbose Show more output
--version Print version and exit
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name (default: directory name)
--verbose Show more output
-v, --version Print version and exit
Commands:
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
restart Restart services
up Create and start containers
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
up Create and start containers
migrate-to-labels Recreate containers to add labels
"""
def docopt_options(self):
options = super(TopLevelCommand, self).docopt_options()
options['version'] = "docker-compose %s" % __version__
options['version'] = get_version_info()
return options
def build(self, project, options):
@@ -108,7 +113,7 @@ class TopLevelCommand(Command):
Services are built once and then tagged as `project_service`,
e.g. `composetest_db`. If you change a service's `Dockerfile` or the
contents of its build directory, you can run `compose build` to rebuild it.
contents of its build directory, you can run `docker-compose build` to rebuild it.
Usage: build [options] [SERVICE...]
@@ -124,10 +129,8 @@ class TopLevelCommand(Command):
Usage: help COMMAND
"""
command = options['COMMAND']
if not hasattr(self, command):
raise NoSuchCommand(command, self)
raise SystemExit(getdoc(getattr(self, command)))
handler = self.get_handler(options['COMMAND'])
raise SystemExit(getdoc(handler))
def kill(self, project, options):
"""
@@ -165,13 +168,14 @@ class TopLevelCommand(Command):
Usage: port [options] SERVICE PRIVATE_PORT
Options:
--protocol=proto tcp or udp (defaults to tcp)
--protocol=proto tcp or udp [default: tcp]
--index=index index of the container if there are multiple
instances of a service (defaults to 1)
instances of a service [default: 1]
"""
index = int(options.get('--index'))
service = project.get_service(options['SERVICE'])
try:
container = service.get_container(number=options.get('--index') or 1)
container = service.get_container(number=index)
except ValueError as e:
raise UserError(str(e))
print(container.get_local_port(
@@ -295,9 +299,8 @@ class TopLevelCommand(Command):
project.up(
service_names=deps,
start_deps=True,
recreate=False,
allow_recreate=False,
insecure_registry=insecure_registry,
detach=options['-d']
)
tty = True
@@ -317,35 +320,44 @@ class TopLevelCommand(Command):
}
if options['-e']:
# Merge environment from config with -e command line
container_options['environment'] = dict(
parse_environment(service.options.get('environment')),
**parse_environment(options['-e']))
container_options['environment'] = parse_environment(options['-e'])
if options['--entrypoint']:
container_options['entrypoint'] = options.get('--entrypoint')
if options['--rm']:
container_options['restart'] = None
if options['--user']:
container_options['user'] = options.get('--user')
if not options['--service-ports']:
container_options['ports'] = []
container = service.create_container(
one_off=True,
insecure_registry=insecure_registry,
**container_options
)
try:
container = service.create_container(
quiet=True,
one_off=True,
insecure_registry=insecure_registry,
**container_options
)
except APIError as e:
legacy.check_for_legacy_containers(
project.client,
project.name,
[service.name],
allow_one_off=False,
)
raise e
if options['-d']:
service.start_container(container)
print(container.name)
else:
service.start_container(container)
dockerpty.start(project.client, container.id, interactive=not options['-T'])
exit_code = container.wait()
if options['--rm']:
log.info("Removing %s..." % container.name)
project.client.remove_container(container.id)
sys.exit(exit_code)
@@ -369,15 +381,7 @@ class TopLevelCommand(Command):
except ValueError:
raise UserError('Number of containers for service "%s" is not a '
'number' % service_name)
try:
project.get_service(service_name).scale(num)
except CannotBeScaledError:
raise UserError(
'Service "%s" cannot be scaled because it specifies a port '
'on the host. If multiple containers for this service were '
'created, the port would clash.\n\nRemove the ":" from the '
'port definition in docker-compose.yml so Docker can choose a random '
'port for each container.' % service_name)
project.get_service(service_name).scale(num)
def start(self, project, options):
"""
@@ -399,9 +403,8 @@ class TopLevelCommand(Command):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = options.get('--timeout')
params = {} if timeout is None else {'timeout': int(timeout)}
project.stop(service_names=options['SERVICE'], **params)
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
project.stop(service_names=options['SERVICE'], timeout=timeout)
def restart(self, project, options):
"""
@@ -413,9 +416,8 @@ class TopLevelCommand(Command):
-t, --timeout TIMEOUT Specify a shutdown timeout in seconds.
(default: 10)
"""
timeout = options.get('--timeout')
params = {} if timeout is None else {'timeout': int(timeout)}
project.restart(service_names=options['SERVICE'], **params)
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
project.restart(service_names=options['SERVICE'], timeout=timeout)
def up(self, project, options):
"""
@@ -440,11 +442,13 @@ class TopLevelCommand(Command):
print new container names.
--no-color Produce monochrome output.
--no-deps Don't start linked services.
--x-smart-recreate Only recreate containers whose configuration or
image needs to be updated. (EXPERIMENTAL)
--no-recreate If containers already exist, don't recreate them.
--no-build Don't build an image, even if it's missing
-t, --timeout TIMEOUT When attached, use this timeout in seconds
for the shutdown. (default: 10)
-t, --timeout TIMEOUT Use this timeout in seconds for container shutdown
when attached or when containers are already
running. (default: 10)
"""
insecure_registry = options['--allow-insecure-ssl']
detached = options['-d']
@@ -452,16 +456,19 @@ class TopLevelCommand(Command):
monochrome = options['--no-color']
start_deps = not options['--no-deps']
recreate = not options['--no-recreate']
allow_recreate = not options['--no-recreate']
smart_recreate = options['--x-smart-recreate']
service_names = options['SERVICE']
timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT)
project.up(
service_names=service_names,
start_deps=start_deps,
recreate=recreate,
allow_recreate=allow_recreate,
smart_recreate=smart_recreate,
insecure_registry=insecure_registry,
detach=detached,
do_build=not options['--no-build'],
timeout=timeout
)
to_attach = [c for s in project.get_services(service_names) for c in s.containers()]
@@ -479,9 +486,33 @@ class TopLevelCommand(Command):
signal.signal(signal.SIGINT, handler)
print("Gracefully stopping... (press Ctrl+C again to force)")
timeout = options.get('--timeout')
params = {} if timeout is None else {'timeout': int(timeout)}
project.stop(service_names=service_names, **params)
project.stop(service_names=service_names, timeout=timeout)
def migrate_to_labels(self, project, _options):
"""
Recreate containers to add labels
If you're coming from Compose 1.2 or earlier, you'll need to remove or
migrate your existing containers after upgrading Compose. This is
because, as of version 1.3, Compose uses Docker labels to keep track
of containers, and so they need to be recreated with labels added.
If Compose detects containers that were created without labels, it
will refuse to run so that you don't end up with two sets of them. If
you want to keep using your existing containers (for example, because
they have data volumes you want to preserve) you can migrate them with
the following command:
docker-compose migrate-to-labels
Alternatively, if you're not worried about keeping them, you can
remove them - Compose will just create new ones.
docker rm -f myapp_web_1 myapp_db_1 ...
Usage: migrate-to-labels
"""
legacy.migrate_project_to_labels(project)
def list_containers(containers):


@@ -5,6 +5,9 @@ import datetime
import os
import subprocess
import platform
import ssl
from .. import __version__
def yesno(prompt, default=None):
@@ -62,6 +65,25 @@ def mkdir(path, permissions=0o700):
return path
def find_candidates_in_parent_dirs(filenames, path):
"""
Given a directory path to start, looks for filenames in the
directory, and then each parent directory successively,
until found.
Returns tuple (candidates, path).
"""
candidates = [filename for filename in filenames
if os.path.exists(os.path.join(path, filename))]
if len(candidates) == 0:
parent_dir = os.path.join(path, '..')
if os.path.abspath(parent_dir) != os.path.abspath(path):
return find_candidates_in_parent_dirs(filenames, parent_dir)
return (candidates, path)
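The parent-directory search above can be exercised standalone. This sketch reproduces the recursive walk and drives it from a temporary directory tree (the tree layout and file name are illustrative, not from the diff):

```python
import os
import tempfile

def find_candidates_in_parent_dirs(filenames, path):
    # Look for any of `filenames` in `path`, then in each parent
    # directory, stopping at the filesystem root.
    candidates = [f for f in filenames
                  if os.path.exists(os.path.join(path, f))]
    if not candidates:
        parent_dir = os.path.join(path, '..')
        # Joining '..' stops changing the path once we reach the root.
        if os.path.abspath(parent_dir) != os.path.abspath(path):
            return find_candidates_in_parent_dirs(filenames, parent_dir)
    return (candidates, path)

root = tempfile.mkdtemp()
nested = os.path.join(root, 'a', 'b')
os.makedirs(nested)
open(os.path.join(root, 'docker-compose.yml'), 'w').close()

candidates, found_in = find_candidates_in_parent_dirs(
    ['docker-compose.yml'], nested)
```

Searching from the nested subdirectory walks two levels up and finds the compose file at the tree root.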
def split_buffer(reader, separator):
"""
Given a generator which yields strings and a separator string,
@@ -101,3 +123,11 @@ def is_mac():
def is_ubuntu():
return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'
def get_version_info():
return '\n'.join([
'docker-compose version: %s' % __version__,
"%s version: %s" % (platform.python_implementation(), platform.python_version()),
"OpenSSL version: %s" % ssl.OPENSSL_VERSION,
])


@@ -7,22 +7,30 @@ DOCKER_CONFIG_KEYS = [
'cap_add',
'cap_drop',
'cpu_shares',
'cpuset',
'command',
'detach',
'devices',
'dns',
'dns_search',
'domainname',
'entrypoint',
'env_file',
'environment',
'extra_hosts',
'read_only',
'hostname',
'image',
'labels',
'links',
'mem_limit',
'net',
'log_driver',
'pid',
'ports',
'privileged',
'restart',
'security_opt',
'stdin_open',
'tty',
'user',
@@ -33,20 +41,25 @@ DOCKER_CONFIG_KEYS = [
ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
'build',
'dockerfile',
'expose',
'external_links',
'name',
]
DOCKER_CONFIG_HINTS = {
'cpu_share' : 'cpu_shares',
'link' : 'links',
'port' : 'ports',
'privilege' : 'privileged',
'cpu_share': 'cpu_shares',
'add_host': 'extra_hosts',
'hosts': 'extra_hosts',
'extra_host': 'extra_hosts',
'device': 'devices',
'link': 'links',
'port': 'ports',
'privilege': 'privileged',
'priviliged': 'privileged',
'privilige' : 'privileged',
'volume' : 'volumes',
'workdir' : 'working_dir',
'privilige': 'privileged',
'volume': 'volumes',
'workdir': 'working_dir',
}
@@ -63,6 +76,7 @@ def from_dictionary(dictionary, working_dir=None, filename=None):
raise ConfigurationError('Service "%s" doesn\'t have any configuration options. All top level keys in your docker-compose.yml must map to a dictionary of configuration options.' % service_name)
loader = ServiceLoader(working_dir=working_dir, filename=filename)
service_dict = loader.make_service_dict(service_name, service_dict)
validate_paths(service_dict)
service_dicts.append(service_dict)
return service_dicts
@@ -174,6 +188,9 @@ def process_container_options(service_dict, working_dir=None):
if 'build' in service_dict:
service_dict['build'] = resolve_build_path(service_dict['build'], working_dir=working_dir)
if 'labels' in service_dict:
service_dict['labels'] = parse_labels(service_dict['labels'])
return service_dict
@@ -186,10 +203,19 @@ def merge_service_dicts(base, override):
override.get('environment'),
)
if 'volumes' in base or 'volumes' in override:
d['volumes'] = merge_volumes(
base.get('volumes'),
override.get('volumes'),
path_mapping_keys = ['volumes', 'devices']
for key in path_mapping_keys:
if key in base or key in override:
d[key] = merge_path_mappings(
base.get(key),
override.get(key),
)
if 'labels' in base or 'labels' in override:
d['labels'] = merge_labels(
base.get('labels'),
override.get('labels'),
)
if 'image' in override and 'build' in d:
@@ -210,7 +236,7 @@ def merge_service_dicts(base, override):
if key in base or key in override:
d[key] = to_list(base.get(key)) + to_list(override.get(key))
already_merged_keys = ['environment', 'volumes'] + list_keys + list_or_string_keys
already_merged_keys = ['environment', 'labels'] + path_mapping_keys + list_keys + list_or_string_keys
for k in set(ALLOWED_KEYS) - set(already_merged_keys):
if k in override:
@@ -326,7 +352,7 @@ def resolve_host_paths(volumes, working_dir=None):
def resolve_host_path(volume, working_dir):
container_path, host_path = split_volume(volume)
container_path, host_path = split_path_mapping(volume)
if host_path is not None:
host_path = os.path.expanduser(host_path)
host_path = os.path.expandvars(host_path)
@@ -338,32 +364,34 @@ def resolve_host_path(volume, working_dir):
def resolve_build_path(build_path, working_dir=None):
if working_dir is None:
raise Exception("No working_dir passed to resolve_build_path")
_path = expand_path(working_dir, build_path)
if not os.path.exists(_path) or not os.access(_path, os.R_OK):
raise ConfigurationError("build path %s either does not exist or is not accessible." % _path)
else:
return _path
return expand_path(working_dir, build_path)
def merge_volumes(base, override):
d = dict_from_volumes(base)
d.update(dict_from_volumes(override))
return volumes_from_dict(d)
def validate_paths(service_dict):
if 'build' in service_dict:
build_path = service_dict['build']
if not os.path.exists(build_path) or not os.access(build_path, os.R_OK):
raise ConfigurationError("build path %s either does not exist or is not accessible." % build_path)
def dict_from_volumes(volumes):
if volumes:
return dict(split_volume(v) for v in volumes)
def merge_path_mappings(base, override):
d = dict_from_path_mappings(base)
d.update(dict_from_path_mappings(override))
return path_mappings_from_dict(d)
def dict_from_path_mappings(path_mappings):
if path_mappings:
return dict(split_path_mapping(v) for v in path_mappings)
else:
return {}
def volumes_from_dict(d):
return [join_volume(v) for v in d.items()]
def path_mappings_from_dict(d):
return [join_path_mapping(v) for v in d.items()]
def split_volume(string):
def split_path_mapping(string):
if ':' in string:
(host, container) = string.split(':', 1)
return (container, host)
@@ -371,7 +399,7 @@ def split_volume(string):
return (string, None)
def join_volume(pair):
def join_path_mapping(pair):
(container, host) = pair
if host is None:
return container
@@ -379,6 +407,35 @@ def join_volume(pair):
return ":".join((host, container))
def merge_labels(base, override):
labels = parse_labels(base)
labels.update(parse_labels(override))
return labels
def parse_labels(labels):
if not labels:
return {}
if isinstance(labels, list):
return dict(split_label(e) for e in labels)
if isinstance(labels, dict):
return labels
raise ConfigurationError(
"labels \"%s\" must be a list or mapping" %
labels
)
def split_label(label):
if '=' in label:
return label.split('=', 1)
else:
return label, ''
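The new `labels` option accepts either a list of `key=value` strings or a mapping, and both normalize to a dict. A standalone sketch of the parsing added above (the example label keys are illustrative):

```python
def split_label(label):
    # "key=value" -> ("key", "value"); a bare "key" maps to "".
    if '=' in label:
        return label.split('=', 1)
    return label, ''

def parse_labels(labels):
    if not labels:
        return {}
    if isinstance(labels, list):
        return dict(split_label(e) for e in labels)
    if isinstance(labels, dict):
        return labels
    raise ValueError('labels "%s" must be a list or mapping' % labels)

as_list = parse_labels(['com.example.tier=web', 'com.example.debug'])
as_dict = parse_labels({'com.example.tier': 'web'})
```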
def expand_path(working_dir, path):
return os.path.abspath(os.path.join(working_dir, path))

compose/const.py Normal file

@@ -0,0 +1,8 @@
DEFAULT_TIMEOUT = 10
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'
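These constants are the namespaced labels Compose now stamps on every container it creates; `Project.labels()` later in this diff turns two of them into a Docker API label filter. A minimal sketch of that filter construction (pure string building, no Docker client required):

```python
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'

def project_label_filters(project_name, one_off=False):
    # Mirrors Project.labels(): the resulting list is passed to the
    # Docker API as {'label': [...]} so listing containers returns
    # only this project's containers (one-off or long-running).
    return [
        '{0}={1}'.format(LABEL_PROJECT, project_name),
        '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
    ]

filters = project_label_filters('myapp')
```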


@@ -4,6 +4,8 @@ from __future__ import absolute_import
import six
from functools import reduce
from .const import LABEL_CONTAINER_NUMBER, LABEL_SERVICE
class Container(object):
"""
@@ -44,6 +46,10 @@ class Container(object):
def image(self):
return self.dictionary['Image']
@property
def image_config(self):
return self.client.inspect_image(self.image)
@property
def short_id(self):
return self.id[:10]
@@ -54,14 +60,15 @@ class Container(object):
@property
def name_without_project(self):
return '_'.join(self.dictionary['Name'].split('_')[1:])
return '{0}_{1}'.format(self.labels.get(LABEL_SERVICE), self.number)
@property
def number(self):
try:
return int(self.name.split('_')[-1])
except ValueError:
return None
number = self.labels.get(LABEL_CONTAINER_NUMBER)
if not number:
raise ValueError("Container {0} does not have a {1} label".format(
self.short_id, LABEL_CONTAINER_NUMBER))
return int(number)
@property
def ports(self):
@@ -79,6 +86,14 @@ class Container(object):
return ', '.join(format_port(*item)
for item in sorted(six.iteritems(self.ports)))
@property
def labels(self):
return self.get('Config.Labels') or {}
@property
def log_config(self):
return self.get('HostConfig.LogConfig') or None
@property
def human_readable_state(self):
if self.is_running:
@@ -126,8 +141,8 @@ class Container(object):
def kill(self, **options):
return self.client.kill(self.id, **options)
def restart(self):
return self.client.restart(self.id)
def restart(self, **options):
return self.client.restart(self.id, **options)
def remove(self, **options):
return self.client.remove_container(self.id, **options)
@@ -147,6 +162,7 @@ class Container(object):
self.has_been_inspected = True
return self.dictionary
# TODO: only used by tests, move to test module
def links(self):
links = []
for container in self.client.containers():
@@ -163,13 +179,16 @@ class Container(object):
return self.client.attach_socket(self.id, **kwargs)
def __repr__(self):
return '<Container: %s>' % self.name
return '<Container: %s (%s)>' % (self.name, self.id[:6])
def __eq__(self, other):
if type(self) != type(other):
return False
return self.id == other.id
def __hash__(self):
return self.id.__hash__()
def get_container_name(container):
if not container.get('Name') and not container.get('Names'):

compose/legacy.py Normal file

@@ -0,0 +1,180 @@
import logging
import re
from .const import LABEL_VERSION
from .container import get_container_name, Container
log = logging.getLogger(__name__)
# TODO: remove this section when migrate_project_to_labels is removed
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
ERROR_MESSAGE_FORMAT = """
Compose found the following containers without labels:
{names_list}
As of Compose 1.3.0, containers are identified with labels instead of naming convention. If you want to continue using these containers, run:
$ docker-compose migrate-to-labels
Alternatively, remove them:
$ docker rm -f {rm_args}
"""
ONE_OFF_ADDENDUM_FORMAT = """
You should also remove your one-off containers:
$ docker rm -f {rm_args}
"""
ONE_OFF_ERROR_MESSAGE_FORMAT = """
Compose found the following containers without labels:
{names_list}
As of Compose 1.3.0, containers are identified with labels instead of naming convention.
Remove them before continuing:
$ docker rm -f {rm_args}
"""
def check_for_legacy_containers(
client,
project,
services,
allow_one_off=True):
"""Check if there are containers named using the old naming convention
and warn the user that those containers may need to be migrated to
using labels, so that compose can find them.
"""
containers = get_legacy_containers(client, project, services, one_off=False)
if containers:
one_off_containers = get_legacy_containers(client, project, services, one_off=True)
raise LegacyContainersError(
[c.name for c in containers],
[c.name for c in one_off_containers],
)
if not allow_one_off:
one_off_containers = get_legacy_containers(client, project, services, one_off=True)
if one_off_containers:
raise LegacyOneOffContainersError(
[c.name for c in one_off_containers],
)
class LegacyError(Exception):
def __unicode__(self):
return self.msg
__str__ = __unicode__
class LegacyContainersError(LegacyError):
def __init__(self, names, one_off_names):
self.names = names
self.one_off_names = one_off_names
self.msg = ERROR_MESSAGE_FORMAT.format(
names_list="\n".join(" {}".format(name) for name in names),
rm_args=" ".join(names),
)
if one_off_names:
self.msg += ONE_OFF_ADDENDUM_FORMAT.format(rm_args=" ".join(one_off_names))
class LegacyOneOffContainersError(LegacyError):
def __init__(self, one_off_names):
self.one_off_names = one_off_names
self.msg = ONE_OFF_ERROR_MESSAGE_FORMAT.format(
names_list="\n".join(" {}".format(name) for name in one_off_names),
rm_args=" ".join(one_off_names),
)
def add_labels(project, container):
project_name, service_name, one_off, number = NAME_RE.match(container.name).groups()
if project_name != project.name or service_name not in project.service_names:
return
service = project.get_service(service_name)
service.recreate_container(container)
def migrate_project_to_labels(project):
log.info("Running migration to labels for project %s", project.name)
containers = get_legacy_containers(
project.client,
project.name,
project.service_names,
one_off=False,
)
for container in containers:
add_labels(project, container)
def get_legacy_containers(
client,
project,
services,
one_off=False):
return list(_get_legacy_containers_iter(
client,
project,
services,
one_off=one_off,
))
def _get_legacy_containers_iter(
client,
project,
services,
one_off=False):
containers = client.containers(all=True)
for service in services:
for container in containers:
if LABEL_VERSION in (container.get('Labels') or {}):
continue
name = get_container_name(container)
if has_container(project, service, name, one_off=one_off):
yield Container.from_ps(client, container)
def has_container(project, service, name, one_off=False):
if not name or not is_valid_name(name, one_off):
return False
container_project, container_service, _container_number = parse_name(name)
return container_project == project and container_service == service
def is_valid_name(name, one_off=False):
match = NAME_RE.match(name)
if match is None:
return False
if one_off:
return match.group(3) == 'run_'
else:
return match.group(3) is None
def parse_name(name):
match = NAME_RE.match(name)
(project, service_name, _, suffix) = match.groups()
return (project, service_name, int(suffix))
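The legacy naming convention that `NAME_RE` recognizes can be demonstrated directly; the optional third group is what distinguishes one-off (`run_`) containers from regular ones (the wrapper function below is a sketch combining `parse_name` and `is_valid_name` from above; the container names are illustrative):

```python
import re

NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')

def parse_legacy_name(name):
    # Returns (project, service, is_one_off, number) for a
    # pre-1.3 container name like "myapp_web_1" or "myapp_web_run_2".
    project, service, one_off, number = NAME_RE.match(name).groups()
    return (project, service, one_off == 'run_', int(number))

normal = parse_legacy_name('myapp_web_1')
one_off = parse_legacy_name('myapp_web_run_2')
```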


@@ -74,8 +74,9 @@ def print_output_event(event, stream, is_terminal):
stream.write("%s %s%s" % (status, event['progress'], terminator))
elif 'progressDetail' in event:
detail = event['progressDetail']
if 'current' in detail:
percentage = float(detail['current']) / float(detail['total']) * 100
total = detail.get('total')
if 'current' in detail and total:
percentage = float(detail['current']) / float(total) * 100
stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
else:
stream.write('%s%s' % (status, terminator))
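The change above guards against pull events whose `progressDetail` carries no usable `total` (unknown layer size, or zero), which previously caused a division error. A sketch of just the percentage logic (extracted from the hunk; the event dicts are illustrative):

```python
def progress_percentage(detail):
    # Some pull events have no 'total', or total == 0; report None
    # instead of dividing by None or zero.
    total = detail.get('total')
    if 'current' in detail and total:
        return float(detail['current']) / float(total) * 100
    return None

with_total = progress_percentage({'current': 50, 'total': 200})
without_total = progress_percentage({'current': 50})
zero_total = progress_percentage({'current': 50, 'total': 0})
```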


@@ -1,12 +1,15 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import logging
from functools import reduce
from docker.errors import APIError
from .config import get_service_name_from_net, ConfigurationError
from .const import LABEL_PROJECT, LABEL_SERVICE, LABEL_ONE_OFF, DEFAULT_TIMEOUT
from .service import Service
from .container import Container
from docker.errors import APIError
from .legacy import check_for_legacy_containers
log = logging.getLogger(__name__)
@@ -60,6 +63,12 @@ class Project(object):
self.services = services
self.client = client
def labels(self, one_off=False):
return [
'{0}={1}'.format(LABEL_PROJECT, self.name),
'{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"),
]
@classmethod
def from_dicts(cls, name, service_dicts, client):
"""
@@ -75,6 +84,10 @@ class Project(object):
volumes_from=volumes_from, **service_dict))
return project
@property
def service_names(self):
return [service.name for service in self.services]
def get_service(self, name):
"""
Retrieve a service by name. Raises NoSuchService
@@ -86,6 +99,16 @@ class Project(object):
raise NoSuchService(name)
def validate_service_names(self, service_names):
"""
Validate that the given list of service names only contains valid
services. Raises NoSuchService if one of the names is invalid.
"""
valid_names = self.service_names
for name in service_names:
if name not in valid_names:
raise NoSuchService(name)
def get_services(self, service_names=None, include_deps=False):
"""
Returns a list of this project's services filtered
@@ -102,7 +125,7 @@ class Project(object):
"""
if service_names is None or len(service_names) == 0:
return self.get_services(
service_names=[s.name for s in self.services],
service_names=self.service_names,
include_deps=include_deps
)
else:
@@ -158,14 +181,14 @@ class Project(object):
try:
net = Container.from_id(self.client, net_name)
except APIError:
raise ConfigurationError('Serivce "%s" is trying to use the network of "%s", which is not the name of a service or container.' % (service_dict['name'], net_name))
raise ConfigurationError('Service "%s" is trying to use the network of "%s", which is not the name of a service or container.' % (service_dict['name'], net_name))
else:
net = service_dict['net']
del service_dict['net']
else:
net = 'bridge'
net = None
return net
@@ -195,26 +218,67 @@ class Project(object):
def up(self,
service_names=None,
start_deps=True,
recreate=True,
allow_recreate=True,
smart_recreate=False,
insecure_registry=False,
detach=False,
do_build=True):
running_containers = []
for service in self.get_services(service_names, include_deps=start_deps):
if recreate:
for (_, container) in service.recreate_containers(
insecure_registry=insecure_registry,
detach=detach,
do_build=do_build):
running_containers.append(container)
else:
for container in service.start_or_create_containers(
insecure_registry=insecure_registry,
detach=detach,
do_build=do_build):
running_containers.append(container)
do_build=True,
timeout=DEFAULT_TIMEOUT):
return running_containers
services = self.get_services(service_names, include_deps=start_deps)
for service in services:
service.remove_duplicate_containers()
plans = self._get_convergence_plans(
services,
allow_recreate=allow_recreate,
smart_recreate=smart_recreate,
)
return [
container
for service in services
for container in service.execute_convergence_plan(
plans[service.name],
insecure_registry=insecure_registry,
do_build=do_build,
timeout=timeout
)
]
def _get_convergence_plans(self,
services,
allow_recreate=True,
smart_recreate=False):
plans = {}
for service in services:
updated_dependencies = [
name
for name in service.get_dependency_names()
if name in plans
and plans[name].action == 'recreate'
]
if updated_dependencies:
log.debug(
'%s has upstream changes (%s)',
service.name, ", ".join(updated_dependencies),
)
plan = service.convergence_plan(
allow_recreate=allow_recreate,
smart_recreate=False,
)
else:
plan = service.convergence_plan(
allow_recreate=allow_recreate,
smart_recreate=smart_recreate,
)
plans[service.name] = plan
return plans
def pull(self, service_names=None, insecure_registry=False):
for service in self.get_services(service_names, include_deps=True):
@@ -225,16 +289,31 @@ class Project(object):
service.remove_stopped(**options)
def containers(self, service_names=None, stopped=False, one_off=False):
return [Container.from_ps(self.client, container)
for container in self.client.containers(all=stopped)
for service in self.get_services(service_names)
if service.has_container(container, one_off=one_off)]
if service_names:
self.validate_service_names(service_names)
else:
service_names = self.service_names
containers = [
Container.from_ps(self.client, container)
for container in self.client.containers(
all=stopped,
filters={'label': self.labels(one_off=one_off)})]
def matches_service_names(container):
return container.labels.get(LABEL_SERVICE) in service_names
if not containers:
check_for_legacy_containers(
self.client,
self.name,
self.service_names,
)
return filter(matches_service_names, containers)
def _inject_deps(self, acc, service):
net_name = service.get_net_name()
dep_names = (service.get_linked_names() +
service.get_volumes_from_names() +
([net_name] if net_name else []))
dep_names = service.get_dependency_names()
if len(dep_names) > 0:
dep_services = self.get_services(


@@ -3,16 +3,28 @@ from __future__ import absolute_import
from collections import namedtuple
import logging
import re
from operator import attrgetter
import sys
from operator import attrgetter
import six
from docker.errors import APIError
from docker.utils import create_host_config
from docker.utils import create_host_config, LogConfig
from .config import DOCKER_CONFIG_KEYS
from .container import Container, get_container_name
from . import __version__
from .config import DOCKER_CONFIG_KEYS, merge_environment
from .const import (
DEFAULT_TIMEOUT,
LABEL_CONTAINER_NUMBER,
LABEL_ONE_OFF,
LABEL_PROJECT,
LABEL_SERVICE,
LABEL_VERSION,
LABEL_CONFIG_HASH,
)
from .container import Container
from .legacy import check_for_legacy_containers
from .progress_stream import stream_output, StreamOutputError
from .utils import json_hash
log = logging.getLogger(__name__)
@@ -20,12 +32,19 @@ log = logging.getLogger(__name__)
DOCKER_START_KEYS = [
'cap_add',
'cap_drop',
'devices',
'dns',
'dns_search',
'env_file',
'extra_hosts',
'read_only',
'net',
'log_driver',
'pid',
'privileged',
'restart',
'volumes_from',
'security_opt',
]
VALID_NAME_CHARS = '[a-zA-Z0-9]'
@@ -37,11 +56,16 @@ class BuildError(Exception):
self.reason = reason
class CannotBeScaledError(Exception):
class ConfigError(ValueError):
pass
class ConfigError(ValueError):
class NeedsBuildError(Exception):
def __init__(self, service):
self.service = service
class NoSuchImageError(Exception):
pass
@@ -51,6 +75,9 @@ VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')
ServiceName = namedtuple('ServiceName', 'project service number')
ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
class Service(object):
def __init__(self, name, client=None, project='default', links=None, external_links=None, volumes_from=None, net=None, **options):
if not re.match('^%s+$' % VALID_NAME_CHARS, name):
@@ -59,6 +86,8 @@ class Service(object):
raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
if 'image' in options and 'build' in options:
raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)
if 'image' not in options and 'build' not in options:
raise ConfigError('Service %s has neither an image nor a build path specified. Exactly one must be provided.' % name)
self.name = name
self.client = client
@@ -70,28 +99,28 @@ class Service(object):
self.options = options
def containers(self, stopped=False, one_off=False):
return [Container.from_ps(self.client, container)
for container in self.client.containers(all=stopped)
if self.has_container(container, one_off=one_off)]
containers = [
Container.from_ps(self.client, container)
for container in self.client.containers(
all=stopped,
filters={'label': self.labels(one_off=one_off)})]
def has_container(self, container, one_off=False):
"""Return True if `container` was created to fulfill this service."""
name = get_container_name(container)
if not name or not is_valid_name(name, one_off):
return False
project, name, _number = parse_name(name)
return project == self.project and name == self.name
if not containers:
check_for_legacy_containers(
self.client,
self.project,
[self.name],
)
return containers
def get_container(self, number=1):
"""Return a :class:`compose.container.Container` for this service. The
container must be active, and match `number`.
"""
for container in self.client.containers():
if not self.has_container(container):
continue
_, _, container_number = parse_name(get_container_name(container))
if container_number == number:
return Container.from_ps(self.client, container)
labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]
for container in self.client.containers(filters={'label': labels}):
return Container.from_ps(self.client, container)
raise ValueError("No container found for %s_%s" % (self.name, number))
@@ -125,13 +154,14 @@ class Service(object):
- removes all stopped containers
"""
if not self.can_be_scaled():
raise CannotBeScaledError()
log.warn('Service %s specifies a port on the host. If multiple containers '
'for this service are created on a single host, the port will clash.'
% self.name)
# Create enough containers
containers = self.containers(stopped=True)
while len(containers) < desired_num:
log.info("Creating %s..." % self._next_container_name(containers))
containers.append(self.create_container(detach=True))
containers.append(self.create_container())
running_containers = []
stopped_containers = []
@@ -169,67 +199,166 @@ class Service(object):
one_off=False,
insecure_registry=False,
do_build=True,
intermediate_container=None,
previous_container=None,
number=None,
quiet=False,
**override_options):
"""
Create a container for this service. If the image doesn't exist, attempt to pull
it.
"""
container_options = self._get_container_create_options(
override_options,
one_off=one_off,
intermediate_container=intermediate_container,
self.ensure_image_exists(
do_build=do_build,
insecure_registry=insecure_registry,
)
if (do_build and
self.can_be_built() and
not self.client.images(name=self.full_name)):
self.build()
container_options = self._get_container_create_options(
override_options,
number or self._next_container_number(one_off=one_off),
one_off=one_off,
previous_container=previous_container,
)
if 'name' in container_options and not quiet:
log.info("Creating %s..." % container_options['name'])
return Container.create(self.client, **container_options)
def ensure_image_exists(self,
do_build=True,
insecure_registry=False):
try:
return Container.create(self.client, **container_options)
self.image()
return
except NoSuchImageError:
pass
if self.can_be_built():
if do_build:
self.build()
else:
raise NeedsBuildError(self)
else:
self.pull(insecure_registry=insecure_registry)
def image(self):
try:
return self.client.inspect_image(self.image_name)
except APIError as e:
if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation):
log.info('Pulling image %s...' % container_options['image'])
output = self.client.pull(
container_options['image'],
stream=True,
insecure_registry=insecure_registry
)
stream_output(output, sys.stdout)
return Container.create(self.client, **container_options)
raise
raise NoSuchImageError("Image '{}' not found".format(self.image_name))
else:
raise
@property
def image_name(self):
if self.can_be_built():
return self.full_name
else:
return self.options['image']
def convergence_plan(self,
allow_recreate=True,
smart_recreate=False):
def recreate_containers(self, insecure_registry=False, do_build=True, **override_options):
"""
If a container for this service doesn't exist, create and start one. If there are
any, stop them, create+start new ones, and remove the old containers.
"""
containers = self.containers(stopped=True)
if not containers:
log.info("Creating %s..." % self._next_container_name(containers))
return ConvergencePlan('create', [])
if smart_recreate and not self._containers_have_diverged(containers):
stopped = [c for c in containers if not c.is_running]
if stopped:
return ConvergencePlan('start', stopped)
return ConvergencePlan('noop', containers)
if not allow_recreate:
return ConvergencePlan('start', containers)
return ConvergencePlan('recreate', containers)
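The plan selection above is a pure decision over container state, which makes it easy to sketch in isolation. Here the divergence check is passed in as a boolean and containers are plain dicts, both simplifications of the real objects:

```python
from collections import namedtuple

ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')

def convergence_plan(containers, diverged,
                     allow_recreate=True, smart_recreate=False):
    # No containers yet: create from scratch.
    if not containers:
        return ConvergencePlan('create', [])
    # Smart recreate: if config hasn't diverged, just start stopped
    # containers (or do nothing at all).
    if smart_recreate and not diverged:
        stopped = [c for c in containers if not c['is_running']]
        if stopped:
            return ConvergencePlan('start', stopped)
        return ConvergencePlan('noop', containers)
    if not allow_recreate:
        return ConvergencePlan('start', containers)
    return ConvergencePlan('recreate', containers)

plan = convergence_plan([{'is_running': True}], diverged=False,
                        smart_recreate=True)
```

With `--x-smart-recreate` and an unchanged config, a running container yields a `noop` plan; any divergence falls through to `recreate`.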
def _containers_have_diverged(self, containers):
config_hash = None
try:
config_hash = self.config_hash()
except NoSuchImageError as e:
log.debug(
'Service %s has diverged: %s',
self.name, six.text_type(e),
)
return True
has_diverged = False
for c in containers:
container_config_hash = c.labels.get(LABEL_CONFIG_HASH, None)
if container_config_hash != config_hash:
log.debug(
'%s has diverged: %s != %s',
c.name, container_config_hash, config_hash,
)
has_diverged = True
return has_diverged
    def execute_convergence_plan(self,
                                 plan,
                                 insecure_registry=False,
                                 do_build=True,
                                 timeout=DEFAULT_TIMEOUT):
        (action, containers) = plan

        if action == 'create':
            container = self.create_container(
                insecure_registry=insecure_registry,
                do_build=do_build,
            )
            self.start_container(container)

            return [container]

        elif action == 'recreate':
            return [
                self.recreate_container(
                    c,
                    insecure_registry=insecure_registry,
                    timeout=timeout
                )
                for c in containers
            ]

        elif action == 'start':
            for c in containers:
                self.start_container_if_stopped(c)

            return containers

        elif action == 'noop':
            for c in containers:
                log.info("%s is up-to-date" % c.name)

            return containers

        else:
            raise Exception("Invalid action: {}".format(action))
def recreate_container(self,
container,
insecure_registry=False,
timeout=DEFAULT_TIMEOUT):
"""Recreate a container.
The original container is renamed to a temporary name so that data
volumes can be copied to the new container, before the original
container is removed.
"""
log.info("Recreating %s..." % container.name)
        try:
            container.stop(timeout=timeout)
except APIError as e:
if (e.response.status_code == 500
and e.explanation
@@ -238,29 +367,21 @@ class Service(object):
else:
raise
        # Use a hopefully unique container name by prepending the short id
        self.client.rename(
            container.id,
            '%s_%s' % (container.short_id, container.name))

        new_container = self.create_container(
            insecure_registry=insecure_registry,
            do_build=False,
            previous_container=container,
            number=container.labels.get(LABEL_CONTAINER_NUMBER),
            quiet=True,
        )
        self.start_container(new_container)
        container.remove()
        return new_container
def start_container_if_stopped(self, container):
if container.is_running:
@@ -273,23 +394,40 @@ class Service(object):
container.start()
return container
    def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT):
        for c in self.duplicate_containers():
            log.info('Removing %s...' % c.name)
            c.stop(timeout=timeout)
            c.remove()

    def duplicate_containers(self):
        containers = sorted(
            self.containers(stopped=True),
            key=lambda c: c.get('Created'),
        )

        numbers = set()

        for c in containers:
            if c.number in numbers:
                yield c
            else:
                numbers.add(c.number)
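Duplicate detection sorts containers by creation time and flags any later container that reuses an already-seen number. A self-contained sketch, with plain dicts standing in for `Container` objects:

```python
# Sketch of duplicate_containers: the oldest container keeps each
# number; any later container reusing it is a duplicate.
def duplicates(containers):
    numbers = set()
    for c in sorted(containers, key=lambda c: c['created']):
        if c['number'] in numbers:
            yield c
        else:
            numbers.add(c['number'])

cs = [
    {'name': 'web_1', 'number': 1, 'created': 1},
    {'name': 'web_1_dup', 'number': 1, 'created': 2},
    {'name': 'web_2', 'number': 2, 'created': 3},
]
assert [c['name'] for c in duplicates(cs)] == ['web_1_dup']
```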
def config_hash(self):
return json_hash(self.config_dict())
def config_dict(self):
return {
'options': self.options,
'image_id': self.image()['Id'],
}
def get_dependency_names(self):
net_name = self.get_net_name()
return (self.get_linked_names() +
self.get_volumes_from_names() +
([net_name] if net_name else []))
def get_linked_names(self):
return [s.name for (s, _) in self.links]
@@ -303,14 +441,19 @@ class Service(object):
else:
return
    def get_container_name(self, number, one_off=False):
        # TODO: Implement issue #652 here
        return build_container_name(self.project, self.name, number, one_off)

    # TODO: this would benefit from github.com/docker/docker/pull/11943
    # to remove the need to inspect every container
    def _next_container_number(self, one_off=False):
        numbers = [
            Container.from_ps(self.client, container).number
            for container in self.client.containers(
                all=True,
                filters={'label': self.labels(one_off=one_off)})
        ]
        return 1 if not numbers else max(numbers) + 1
def _get_links(self, link_to_self):
@@ -333,7 +476,7 @@ class Service(object):
links.append((external_link, link_name))
return links
    def _get_volumes_from(self):
        volumes_from = []
        for volume_source in self.volumes_from:
            if isinstance(volume_source, Service):
@@ -346,14 +489,11 @@ class Service(object):
            elif isinstance(volume_source, Container):
                volumes_from.append(volume_source.id)

        return volumes_from
def _get_net(self):
        if not self.net:
            return None
if isinstance(self.net, Service):
containers = self.net.containers()
@@ -370,15 +510,31 @@ class Service(object):
return net
    def _get_container_create_options(
            self,
            override_options,
            number,
            one_off=False,
            previous_container=None):
add_config_hash = (not one_off and not override_options)
container_options = dict(
(k, self.options[k])
for k in DOCKER_CONFIG_KEYS if k in self.options)
container_options.update(override_options)
        container_options['name'] = self.get_container_name(number, one_off)
if add_config_hash:
config_hash = self.config_hash()
if 'labels' not in container_options:
container_options['labels'] = {}
container_options['labels'][LABEL_CONFIG_HASH] = config_hash
log.debug("Added config hash: %s" % config_hash)
if 'detach' not in container_options:
container_options['detach'] = True
# If a qualified hostname was given, split it into an
# unqualified hostname and a domainname unless domainname
@@ -403,36 +559,49 @@ class Service(object):
ports.append(port)
container_options['ports'] = ports
override_options['binds'] = merge_volume_bindings(
container_options.get('volumes') or [],
previous_container)
if 'volumes' in container_options:
container_options['volumes'] = dict(
(parse_volume_spec(v).internal, {})
for v in container_options['volumes'])
        container_options['environment'] = merge_environment(
            self.options.get('environment'),
            override_options.get('environment'))

        if previous_container:
            container_options['environment']['affinity:container'] = ('=' + previous_container.id)

        container_options['image'] = self.image_name
container_options['labels'] = build_container_labels(
container_options.get('labels', {}),
self.labels(one_off=one_off),
number)
# Delete options which are only used when starting
for key in DOCKER_START_KEYS:
container_options.pop(key, None)
        container_options['host_config'] = self._get_container_host_config(
            override_options,
            one_off=one_off)
return container_options
    def _get_container_host_config(self, override_options, one_off=False):
        options = dict(self.options, **override_options)

        port_bindings = build_port_bindings(options.get('ports') or [])
privileged = options.get('privileged', False)
cap_add = options.get('cap_add', None)
cap_drop = options.get('cap_drop', None)
log_config = LogConfig(type=options.get('log_driver', 'json-file'))
pid = options.get('pid', None)
security_opt = options.get('security_opt', None)
dns = options.get('dns', None)
if isinstance(dns, six.string_types):
@@ -444,35 +613,44 @@ class Service(object):
restart = parse_restart_spec(options.get('restart', None))
extra_hosts = build_extra_hosts(options.get('extra_hosts', None))
read_only = options.get('read_only', None)
devices = options.get('devices', None)
return create_host_config(
links=self._get_links(link_to_self=one_off),
port_bindings=port_bindings,
            binds=options.get('binds'),
            volumes_from=self._get_volumes_from(),
privileged=privileged,
network_mode=self._get_net(),
devices=devices,
dns=dns,
dns_search=dns_search,
restart_policy=restart,
cap_add=cap_add,
cap_drop=cap_drop,
log_config=log_config,
extra_hosts=extra_hosts,
read_only=read_only,
pid_mode=pid,
security_opt=security_opt
)
    def build(self, no_cache=False):
        log.info('Building %s...' % self.name)

        path = six.binary_type(self.options['build'])

        build_output = self.client.build(
            path=path,
            tag=self.image_name,
            stream=True,
            rm=True,
            pull=False,
            nocache=no_cache,
            dockerfile=self.options.get('dockerfile', None),
        )
try:
@@ -480,6 +658,11 @@ class Service(object):
except StreamOutputError as e:
raise BuildError(self, unicode(e))
# Ensure the HTTP connection is not reused for another
# streaming command, as the Docker daemon can sometimes
# complain about it
self.client.close()
image_id = None
for event in all_events:
@@ -503,6 +686,13 @@ class Service(object):
"""
return '%s_%s' % (self.project, self.name)
def labels(self, one_off=False):
return [
'{0}={1}'.format(LABEL_PROJECT, self.project),
'{0}={1}'.format(LABEL_SERVICE, self.name),
'{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False")
]
def can_be_scaled(self):
for port in self.options.get('ports', []):
if ':' in str(port):
@@ -510,48 +700,91 @@ class Service(object):
return True
    def pull(self, insecure_registry=False):
        if 'image' not in self.options:
            return

        repo, tag = parse_repository_tag(self.options['image'])
        tag = tag or 'latest'
        log.info('Pulling %s (%s:%s)...' % (self.name, repo, tag))
        output = self.client.pull(
            repo,
            tag=tag,
            stream=True,
            insecure_registry=insecure_registry)
        stream_output(output, sys.stdout)
# Names


def build_container_name(project, service, number, one_off=False):
    bits = [project, service]
    if one_off:
        bits.append('run')
    return '_'.join(bits + [str(number)])
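The naming scheme is `<project>_<service>[_run]_<number>`, with the `run` segment reserved for one-off containers. A self-contained sketch of the intended outputs:

```python
# Sketch of compose's container-naming scheme.
def build_container_name(project, service, number, one_off=False):
    bits = [project, service]
    if one_off:
        bits.append('run')
    return '_'.join(bits + [str(number)])

assert build_container_name('myapp', 'web', 1) == 'myapp_web_1'
# One-off containers (from `docker-compose run`) get a 'run' segment.
assert build_container_name('myapp', 'web', 2, one_off=True) == 'myapp_web_run_2'
```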
# Images


def parse_repository_tag(s):
    if ":" not in s:
        return s, ""
    repo, tag = s.rsplit(":", 1)
    if "/" in tag:
        return s, ""
    return repo, tag
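`parse_repository_tag` treats the last colon as a tag separator only when no `/` follows it, so registry addresses with ports survive intact. A self-contained sketch of the cases it is designed for:

```python
def parse_repository_tag(s):
    # Split "repo:tag", but leave registry ports (e.g. host:5000/img) alone.
    if ":" not in s:
        return s, ""
    repo, tag = s.rsplit(":", 1)
    if "/" in tag:
        return s, ""
    return repo, tag

assert parse_repository_tag("ubuntu") == ("ubuntu", "")
assert parse_repository_tag("ubuntu:14.04") == ("ubuntu", "14.04")
# A registry port is not a tag: the text after the colon contains a "/".
assert parse_repository_tag("localhost:5000/ubuntu") == ("localhost:5000/ubuntu", "")
```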
# Volumes
def merge_volume_bindings(volumes_option, previous_container):
"""Return a list of volume bindings for a container. Container data volumes
are replaced by those from the previous container.
"""
volume_bindings = dict(
build_volume_binding(parse_volume_spec(volume))
for volume in volumes_option or []
if ':' in volume)
if previous_container:
volume_bindings.update(
get_container_data_volumes(previous_container, volumes_option))
return volume_bindings.values()
def get_container_data_volumes(container, volumes_option):
"""Find the container data volumes that are in `volumes_option`, and return
a mapping of volume bindings for those volumes.
"""
volumes = []
volumes_option = volumes_option or []
container_volumes = container.get('Volumes') or {}
image_volumes = container.image_config['ContainerConfig'].get('Volumes') or {}
for volume in set(volumes_option + image_volumes.keys()):
volume = parse_volume_spec(volume)
# No need to preserve host volumes
if volume.external:
continue
volume_path = container_volumes.get(volume.internal)
# New volume, doesn't exist in the old container
if not volume_path:
continue
# Copy existing volume from old container
volume = volume._replace(external=volume_path)
volumes.append(build_volume_binding(volume))
return dict(volumes)
def build_volume_binding(volume_spec):
return volume_spec.internal, "{}:{}:{}".format(*volume_spec)
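`build_volume_binding` flattens a parsed volume spec back into the `external:internal:mode` bind string, keyed by the internal path. A sketch assuming a `VolumeSpec` namedtuple with those three fields, as the parsing code above implies:

```python
from collections import namedtuple

# Assumed shape of the VolumeSpec produced by parse_volume_spec.
VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')

def build_volume_binding(volume_spec):
    return volume_spec.internal, "{}:{}:{}".format(*volume_spec)

spec = VolumeSpec('/host/data', '/data', 'rw')
# The internal path keys the binding; the value is the bind string.
assert build_volume_binding(spec) == ('/data', '/host/data:/data:rw')
```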
def parse_volume_spec(volume_config):
@@ -574,18 +807,7 @@ def parse_volume_spec(volume_config):
return VolumeSpec(external, internal, mode)
# Ports
def build_port_bindings(ports):
@@ -614,3 +836,61 @@ def split_port(port):
external_ip, external_port, internal_port = parts
return internal_port, (external_ip, external_port or None)
# Labels
def build_container_labels(label_options, service_labels, number, one_off=False):
labels = label_options or {}
labels.update(label.split('=', 1) for label in service_labels)
labels[LABEL_CONTAINER_NUMBER] = str(number)
labels[LABEL_VERSION] = __version__
return labels
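`build_container_labels` merges user labels, the service's `key=value` label strings, and the bookkeeping labels compose relies on. A sketch; the `com.docker.compose.*` key strings are assumed values for the `LABEL_CONTAINER_NUMBER` and `LABEL_VERSION` constants:

```python
# Sketch of build_container_labels; label key strings are assumptions.
def build_container_labels(label_options, service_labels, number, version='1.3.3'):
    labels = dict(label_options or {})
    # service_labels arrive as "key=value" strings (see labels() above).
    labels.update(label.split('=', 1) for label in service_labels)
    labels['com.docker.compose.container-number'] = str(number)  # LABEL_CONTAINER_NUMBER (assumed)
    labels['com.docker.compose.version'] = version               # LABEL_VERSION (assumed)
    return labels

labels = build_container_labels(
    {'custom': 'yes'},
    ['com.docker.compose.project=myapp', 'com.docker.compose.service=web'],
    1)
assert labels['com.docker.compose.project'] == 'myapp'
assert labels['com.docker.compose.container-number'] == '1'
assert labels['custom'] == 'yes'
```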
# Restart policy
def parse_restart_spec(restart_config):
if not restart_config:
return None
parts = restart_config.split(':')
if len(parts) > 2:
raise ConfigError("Restart %s has incorrect format, should be "
"mode[:max_retry]" % restart_config)
if len(parts) == 2:
name, max_retry_count = parts
else:
name, = parts
max_retry_count = 0
return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
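The restart spec accepts `mode` or `mode:max_retry` and normalizes it into the dict Docker's API expects. A self-contained sketch, with `ValueError` standing in for the real code's `ConfigError`:

```python
# Sketch of parse_restart_spec: "mode" or "mode:max_retry".
def parse_restart_spec(restart_config):
    if not restart_config:
        return None
    parts = restart_config.split(':')
    if len(parts) > 2:
        # ConfigError in the real code
        raise ValueError("Restart %s has incorrect format" % restart_config)
    if len(parts) == 2:
        name, max_retry_count = parts
    else:
        name, = parts
        max_retry_count = 0
    return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}

assert parse_restart_spec('always') == {'Name': 'always', 'MaximumRetryCount': 0}
assert parse_restart_spec('on-failure:5') == {'Name': 'on-failure', 'MaximumRetryCount': 5}
assert parse_restart_spec(None) is None
```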
# Extra hosts
def build_extra_hosts(extra_hosts_config):
if not extra_hosts_config:
return {}
if isinstance(extra_hosts_config, list):
extra_hosts_dict = {}
for extra_hosts_line in extra_hosts_config:
if not isinstance(extra_hosts_line, six.string_types):
raise ConfigError(
"extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
extra_hosts_config
)
host, ip = extra_hosts_line.split(':')
extra_hosts_dict.update({host.strip(): ip.strip()})
extra_hosts_config = extra_hosts_dict
if isinstance(extra_hosts_config, dict):
return extra_hosts_config
raise ConfigError(
"extra_hosts_config \"%s\" must be either a list of strings or a string->string mapping," %
extra_hosts_config
)
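`build_extra_hosts` accepts either a list of `host:ip` strings or a `host -> ip` mapping and always returns a dict. A sketch of the normalization, omitting the type-error branches above:

```python
# Sketch of build_extra_hosts' happy paths.
def build_extra_hosts(extra_hosts_config):
    if not extra_hosts_config:
        return {}
    if isinstance(extra_hosts_config, list):
        result = {}
        for line in extra_hosts_config:
            host, ip = line.split(':')
            result[host.strip()] = ip.strip()
        return result
    return dict(extra_hosts_config)

# List form: "host:ip" strings, whitespace is tolerated.
assert build_extra_hosts(['somehost: 162.242.195.82']) == {'somehost': '162.242.195.82'}
# Mapping form passes through.
assert build_extra_hosts({'otherhost': '50.31.209.229'}) == {'otherhost': '50.31.209.229'}
assert build_extra_hosts(None) == {}
```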

compose/state.py (new file)

compose/utils.py (new file)

@@ -0,0 +1,9 @@
import json
import hashlib
def json_hash(obj):
dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
h = hashlib.sha256()
h.update(dump)
return h.hexdigest()
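Because `json_hash` serializes with `sort_keys=True` and compact separators, logically equal configurations always hash the same regardless of dict ordering, which is what makes the config-hash label comparison reliable. A Python 3-friendly sketch (the original hashes the `str` directly, which works on Python 2):

```python
import hashlib
import json

def json_hash(obj):
    dump = json.dumps(obj, sort_keys=True, separators=(',', ':'))
    h = hashlib.sha256()
    h.update(dump.encode('utf8'))  # encode for Python 3
    return h.hexdigest()

# Key order doesn't matter: the canonical dump is identical.
a = json_hash({'image': 'busybox', 'command': 'true'})
b = json_hash({'command': 'true', 'image': 'busybox'})
assert a == b
assert len(a) == 64  # hex-encoded SHA-256

# Any config change flips the hash, marking the container diverged.
c = json_hash({'image': 'busybox', 'command': 'sleep 300'})
assert c != a
```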


@@ -94,7 +94,7 @@ _docker-compose_build() {
_docker-compose_docker-compose() {
case "$prev" in
--file|-f)
			_filedir "y?(a)ml"
return
;;
--project-name|-p)
@@ -104,7 +104,7 @@ _docker-compose_docker-compose() {
case "$cur" in
-*)
			COMPREPLY=( $( compgen -W "--help -h --verbose --version -v --file -f --project-name -p" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) )
@@ -293,7 +293,7 @@ _docker-compose_up() {
case "$cur" in
-*)
			COMPREPLY=( $( compgen -W "--allow-insecure-ssl -d --no-build --no-color --no-deps --no-recreate -t --timeout --x-smart-recreate" -- "$cur" ) )
;;
*)
__docker-compose_services_all
@@ -303,11 +303,15 @@ _docker-compose_up() {
_docker-compose() {
local previous_extglob_setting=$(shopt -p extglob)
shopt -s extglob
local commands=(
build
help
kill
logs
migrate-to-labels
port
ps
pull
@@ -352,6 +356,7 @@ _docker-compose() {
local completions_func=_docker-compose_${command}
declare -F $completions_func >/dev/null && $completions_func
eval "$previous_extglob_setting"
return 0
}


@@ -0,0 +1,304 @@
#compdef docker-compose
# Description
# -----------
# zsh completion for docker-compose
# https://github.com/sdurrheimer/docker-compose-zsh-completion
# -------------------------------------------------------------------------
# Version
# -------
# 0.1.0
# -------------------------------------------------------------------------
# Authors
# -------
# * Steve Durrheimer <s.durrheimer@gmail.com>
# -------------------------------------------------------------------------
# Inspiration
# -----------
# * @albers docker-compose bash completion script
# * @felixr docker zsh completion script : https://github.com/felixr/docker-zsh-completion
# -------------------------------------------------------------------------
# For compatibility reasons, Compose and therefore its completion supports several
# stack composition files as listed here, in descending priority.
# Support for these filenames might be dropped in some future version.
__docker-compose_compose_file() {
local file
for file in docker-compose.y{,a}ml fig.y{,a}ml ; do
[ -e $file ] && {
echo $file
return
}
done
echo docker-compose.yml
}
# Extracts all service names from docker-compose.yml.
___docker-compose_all_services_in_compose_file() {
local already_selected
local -a services
already_selected=$(echo ${words[@]} | tr " " "|")
awk -F: '/^[a-zA-Z0-9]/{print $1}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null | grep -Ev "$already_selected"
}
# All services, even those without an existing container
__docker-compose_services_all() {
services=$(___docker-compose_all_services_in_compose_file)
_alternative "args:services:($services)"
}
# All services that have an entry with the given key in their docker-compose.yml section
___docker-compose_services_with_key() {
local already_selected
local -a buildable
already_selected=$(echo ${words[@]} | tr " " "|")
# flatten sections to one line, then filter lines containing the key and return section name.
awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null | awk -F: -v key=": +$1:" '$0 ~ key {print $1}' 2>/dev/null | grep -Ev "$already_selected"
}
# All services that are defined by a Dockerfile reference
__docker-compose_services_from_build() {
buildable=$(___docker-compose_services_with_key build)
_alternative "args:buildable services:($buildable)"
}
# All services that are defined by an image
__docker-compose_services_from_image() {
pullable=$(___docker-compose_services_with_key image)
_alternative "args:pullable services:($pullable)"
}
__docker-compose_get_services() {
local kind expl
declare -a running stopped lines args services
docker_status=$(docker ps > /dev/null 2>&1)
if [ $? -ne 0 ]; then
_message "Error! Docker is not running."
return 1
fi
kind=$1
shift
[[ $kind = (stopped|all) ]] && args=($args -a)
lines=(${(f)"$(_call_program commands docker ps ${args})"})
services=(${(f)"$(_call_program commands docker-compose 2>/dev/null ${compose_file:+-f $compose_file} ${compose_project:+-p $compose_project} ps -q)"})
# Parse header line to find columns
local i=1 j=1 k header=${lines[1]}
declare -A begin end
while (( $j < ${#header} - 1 )) {
i=$(( $j + ${${header[$j,-1]}[(i)[^ ]]} - 1))
j=$(( $i + ${${header[$i,-1]}[(i) ]} - 1))
k=$(( $j + ${${header[$j,-1]}[(i)[^ ]]} - 2))
begin[${header[$i,$(($j-1))]}]=$i
end[${header[$i,$(($j-1))]}]=$k
}
lines=(${lines[2,-1]})
# Container ID
local line s name
local -a names
for line in $lines; do
if [[ $services == *"${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"* ]]; then
names=(${(ps:,:)${${line[${begin[NAMES]},-1]}%% *}})
for name in $names; do
s="${${name%_*}#*_}:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}"
s="$s, ${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"
s="$s, ${${${line[$begin[IMAGE],$end[IMAGE]]}/:/\\:}%% ##}"
if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then
stopped=($stopped $s)
else
running=($running $s)
fi
done
fi
done
[[ $kind = (running|all) ]] && _describe -t services-running "running services" running
[[ $kind = (stopped|all) ]] && _describe -t services-stopped "stopped services" stopped
}
__docker-compose_stoppedservices() {
__docker-compose_get_services stopped "$@"
}
__docker-compose_runningservices() {
__docker-compose_get_services running "$@"
}
__docker-compose_services () {
__docker-compose_get_services all "$@"
}
__docker-compose_caching_policy() {
oldp=( "$1"(Nmh+1) ) # 1 hour
(( $#oldp ))
}
__docker-compose_commands () {
local cache_policy
zstyle -s ":completion:${curcontext}:" cache-policy cache_policy
if [[ -z "$cache_policy" ]]; then
zstyle ":completion:${curcontext}:" cache-policy __docker-compose_caching_policy
fi
if ( [[ ${+_docker_compose_subcommands} -eq 0 ]] || _cache_invalid docker_compose_subcommands) \
&& ! _retrieve_cache docker_compose_subcommands;
then
local -a lines
lines=(${(f)"$(_call_program commands docker-compose 2>&1)"})
_docker_compose_subcommands=(${${${lines[$((${lines[(i)Commands:]} + 1)),${lines[(I) *]}]}## #}/ ##/:})
_store_cache docker_compose_subcommands _docker_compose_subcommands
fi
_describe -t docker-compose-commands "docker-compose command" _docker_compose_subcommands
}
__docker-compose_subcommand () {
local -a _command_args
integer ret=1
case "$words[1]" in
(build)
_arguments \
'--no-cache[Do not use cache when building the image]' \
'*:services:__docker-compose_services_from_build' && ret=0
;;
(help)
_arguments ':subcommand:__docker-compose_commands' && ret=0
;;
(kill)
_arguments \
'-s[SIGNAL to send to the container. Default signal is SIGKILL.]:signal:_signals' \
'*:running services:__docker-compose_runningservices' && ret=0
;;
(logs)
_arguments \
'--no-color[Produce monochrome output.]' \
'*:services:__docker-compose_services_all' && ret=0
;;
(migrate-to-labels)
_arguments \
'(-):Recreate containers to add labels' && ret=0
;;
(port)
_arguments \
'--protocol=-[tcp or udp (defaults to tcp)]:protocol:(tcp udp)' \
'--index=-[index of the container if there are multiple instances of a service (defaults to 1)]:index: ' \
'1:running services:__docker-compose_runningservices' \
'2:port:_ports' && ret=0
;;
(ps)
_arguments \
'-q[Only display IDs]' \
'*:services:__docker-compose_services_all' && ret=0
;;
(pull)
_arguments \
'--allow-insecure-ssl[Allow insecure connections to the docker registry]' \
'*:services:__docker-compose_services_from_image' && ret=0
;;
(rm)
_arguments \
'(-f --force)'{-f,--force}"[Don't ask to confirm removal]" \
'-v[Remove volumes associated with containers]' \
'*:stopped services:__docker-compose_stoppedservices' && ret=0
;;
(run)
_arguments \
'--allow-insecure-ssl[Allow insecure connections to the docker registry]' \
'-d[Detached mode: Run container in the background, print new container name.]' \
'--entrypoint[Overwrite the entrypoint of the image.]:entry point: ' \
'*-e[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \
'(-u --user)'{-u,--user=-}'[Run as specified username or uid]:username or uid:_users' \
"--no-deps[Don't start linked services.]" \
'--rm[Remove container after run. Ignored in detached mode.]' \
"--service-ports[Run command with the service's ports enabled and mapped to the host.]" \
'-T[Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY.]' \
'(-):services:__docker-compose_services' \
'(-):command: _command_names -e' \
'*::arguments: _normal' && ret=0
;;
(scale)
_arguments '*:running services:__docker-compose_runningservices' && ret=0
;;
(start)
_arguments '*:stopped services:__docker-compose_stoppedservices' && ret=0
;;
(stop|restart)
_arguments \
'(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \
'*:running services:__docker-compose_runningservices' && ret=0
;;
(up)
_arguments \
'--allow-insecure-ssl[Allow insecure connections to the docker registry]' \
'-d[Detached mode: Run containers in the background, print new container names.]' \
'--no-color[Produce monochrome output.]' \
"--no-deps[Don't start linked services.]" \
"--no-recreate[If containers already exist, don't recreate them.]" \
"--no-build[Don't build an image, even if it's missing]" \
'(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \
"--x-smart-recreate[Only recreate containers whose configuration or image needs to be updated. (EXPERIMENTAL)]" \
'*:services:__docker-compose_services_all' && ret=0
;;
(*)
_message 'Unknown sub command'
esac
return ret
}
_docker-compose () {
# Support for subservices, which allows for `compdef _docker docker-shell=_docker_containers`.
# Based on /usr/share/zsh/functions/Completion/Unix/_git without support for `ret`.
if [[ $service != docker-compose ]]; then
_call_function - _$service
return
fi
local curcontext="$curcontext" state line ret=1
typeset -A opt_args
_arguments -C \
'(- :)'{-h,--help}'[Get help]' \
'--verbose[Show more output]' \
'(- :)'{-v,--version}'[Print version and exit]' \
'(-f --file)'{-f,--file}'[Specify an alternate docker-compose file (default: docker-compose.yml)]:file:_files -g "*.yml"' \
'(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \
'(-): :->command' \
'(-)*:: :->option-or-argument' && ret=0
local counter=1
#local compose_file compose_project
while [ $counter -lt ${#words[@]} ]; do
case "${words[$counter]}" in
-f|--file)
(( counter++ ))
compose_file="${words[$counter]}"
;;
-p|--project-name)
(( counter++ ))
compose_project="${words[$counter]}"
;;
*)
;;
esac
(( counter++ ))
done
case $state in
(command)
__docker-compose_commands && ret=0
;;
(option-or-argument)
curcontext=${curcontext%:*:*}:docker-compose-$words[1]:
__docker-compose_subcommand && ret=0
;;
esac
return ret
}
_docker-compose "$@"


@@ -1,15 +1,24 @@
FROM docs/base:hugo
MAINTAINER Mary Anthony <mary@docker.com> (@moxiegirl)
# To get the git info for this repo
COPY . /src
# Reset the /docs dir so we can replace the theme meta with the new repo's git info
RUN git reset --hard
COPY . /docs/content/compose/
RUN grep "__version" /src/compose/__init__.py | sed "s/.*'\(.*\)'/\1/" > /docs/VERSION
# Sed to process GitHub Markdown
# 1-2 Remove comment code from metadata block
# 3 Change ](/word to ](/project/ in links
# 4 Change ](word.md) to ](/project/word)
# 5 Remove .md extension from link text
# 6 Change ](../ to ](/project/word)
# 7 Change ](../../ to ](/project/ --> not implemented
#
#
RUN find /docs/content/compose -type f -name "*.md" -exec sed -i.old \
-e '/^<!.*metadata]>/g' \
-e '/^<!.*end-metadata.*>/g' \
-e 's/\(\]\)\([(]\)\(\/\)/\1\2\/compose\//g' \
-e 's/\(\][(]\)\([A-z].*\)\(\.md\)/\1\/compose\/\2/g' \
-e 's/\([(]\)\(.*\)\(\.md\)/\1\2/g' \
-e 's/\(\][(]\)\(\.\.\/\)/\1\/compose\//g' {} \;

docs/Makefile (new file)

@@ -0,0 +1,55 @@
.PHONY: all binary build cross default docs docs-build docs-shell shell test test-unit test-integration test-integration-cli test-docker-py validate
# env vars passed through directly to Docker's build scripts
# to allow things like `make DOCKER_CLIENTONLY=1 binary` easily
# `docs/sources/contributing/devenvironment.md ` and `project/PACKAGERS.md` have some limited documentation of some of these
DOCKER_ENVS := \
-e BUILDFLAGS \
-e DOCKER_CLIENTONLY \
-e DOCKER_EXECDRIVER \
-e DOCKER_GRAPHDRIVER \
-e TESTDIRS \
-e TESTFLAGS \
-e TIMEOUT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make DOCSDIR=docs docs-shell` (to create a bind mount in docs)
DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR)/$(DOCSDIR):/$(DOCSDIR))
# to allow `make DOCSPORT=9000 docs`
DOCSPORT := 8000
# Get the IP ADDRESS
DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''")
HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)")
HUGO_BIND_IP=0.0.0.0
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_DOCS_IMAGE := docs-base$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
# for some docs workarounds (see below in "docs-build" target)
GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)
default: docs
docs: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)
docs-draft: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)
docs-shell: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash
docs-build:
# ( git remote | grep -v upstream ) || git diff --name-status upstream/release..upstream/docs ./ > ./changed-files
# echo "$(GIT_BRANCH)" > GIT_BRANCH
# echo "$(AWS_S3_BUCKET)" > AWS_S3_BUCKET
# echo "$(GITCOMMIT)" > GITCOMMIT
docker build -t "$(DOCKER_DOCS_IMAGE)" .

docs/README.md (new file)

@@ -0,0 +1,77 @@
# Contributing to the Docker Compose documentation
The documentation in this directory is part of the [https://docs.docker.com](https://docs.docker.com) website. Docker uses [the Hugo static generator](http://gohugo.io/overview/introduction/) to convert project Markdown files to a static HTML site.
You don't need to be a Hugo expert to contribute to the compose documentation. If you are familiar with Markdown, you can modify the content in the `docs` files.
If you want to add a new file or change the location of the document in the menu, you do need to know a little more.
## Documentation contributing workflow
1. Edit a Markdown file in the tree.
2. Save your changes.
3. Make sure you are in the `docs` subdirectory.
4. Build the documentation.
$ make docs
---> ffcf3f6c4e97
Removing intermediate container a676414185e8
Successfully built ffcf3f6c4e97
docker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 -e DOCKERHOST "docs-base:test-tooling" hugo server --port=8000 --baseUrl=192.168.59.103 --bind=0.0.0.0
ERROR: 2015/06/13 MenuEntry's .Url is deprecated and will be removed in Hugo 0.15. Use .URL instead.
0 of 4 drafts rendered
0 future content
12 pages created
0 paginator pages created
0 tags created
0 categories created
in 55 ms
Serving pages from /docs/public
Web Server is available at http://0.0.0.0:8000/
Press Ctrl+C to stop
5. Open the server address shown (`http://0.0.0.0:8000/`) in your browser.
The documentation server has the complete menu but only the Docker Compose
documentation resolves. You can't access the other project docs from this
localized build.
## Tips on Hugo metadata and menu positioning
The top of each Docker Compose documentation file contains TOML metadata. The metadata is commented out to prevent it from appearing on GitHub.
<!--[metadata]>
+++
title = "Extending services in Compose"
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
weight=2
+++
<![end-metadata]-->
The metadata alone has this structure:
+++
title = "Extending services in Compose"
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
weight=2
+++
The `[menu.main]` section refers to navigation defined [in the main Docker menu](https://github.com/docker/docs-base/blob/hugo/config.toml). This metadata says *add a menu item called* Extending services in Compose *to the menu with the* `smn_workw_compose` *identifier*. If you locate the menu in the configuration, you'll find *Create multi-container applications* is the menu title.
You can move an article in the tree by specifying a new parent. You can shift the location of the item by changing its weight. Higher numbers are heavier and shift the item toward the bottom of the menu. Low or no numbers shift it up.
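The weight-based ordering described above can be sketched in a few lines of Python. This is a hypothetical illustration of the sorting behavior, not Hugo's actual implementation; the `order_menu` helper and its tuple format are invented for this example.

```python
def order_menu(entries):
    """Sort menu entries the way Hugo weights behave: entries is a list of
    (title, weight-or-None) tuples. Heavier items sink toward the bottom;
    unweighted items are treated as weight 0 and float to the top."""
    return [title for title, weight in
            sorted(entries, key=lambda e: e[1] if e[1] is not None else 0)]
```

So an entry with `weight=2` lands between siblings weighted 1 and 3, regardless of the order in which the files are defined.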
## Other key documentation repositories
The `docker/docs-base` repository contains [the Hugo theme and menu configuration](https://github.com/docker/docs-base). If you open the `Dockerfile` you'll see that `make docs` relies on this repository as a base image for building the Compose documentation.
The `docker/docs.docker.com` repository contains [the build system for building the Docker documentation site](https://github.com/docker/docs.docker.com). Fork this repository to build the entire documentation site.


@@ -1,9 +1,16 @@
page_title: Compose CLI reference
page_description: Compose CLI reference
page_keywords: fig, composition, compose, docker, orchestration, cli, reference
<!--[metadata]>
+++
title = "Compose CLI reference"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
identifier = "smn_install_compose"
parent = "smn_compose_ref"
+++
<![end-metadata]-->
# CLI reference
# Compose CLI reference
Most Docker Compose commands are run against one or more services. If
the service is not specified, the command will apply to all services.
@@ -47,6 +54,10 @@ Lists containers.
Pulls service images.
### restart
Restarts services.
### rm
Removes stopped service containers.
@@ -91,7 +102,9 @@ specify the `--no-deps` flag:
Similarly, if you do want the service's ports to be created and mapped to the
host, specify the `--service-ports` flag:
$ docker-compose run --service-ports web python manage.py shell
### scale
@@ -130,13 +143,16 @@ By default, if there are existing containers for a service, `docker-compose up`
Shows more output
### --version
### -v, --version
Prints version and exits
### -f, --file FILE
Specifies an alternate Compose yaml file (default: `docker-compose.yml`)
Specify what file to read configuration from. If not provided, Compose will look
for `docker-compose.yml` in the current working directory, and then each parent
directory successively, until found.
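The parent-directory lookup described above can be sketched in Python. This is a minimal, hypothetical sketch of the search order, not Compose's actual code; the `find_compose_file` helper is invented for illustration.

```python
import os

def find_compose_file(start_dir, filename="docker-compose.yml"):
    """Walk upward from start_dir through each parent directory and
    return the path of the first `filename` found, or None if the
    filesystem root is reached without finding one."""
    current = os.path.abspath(start_dir)
    while True:
        candidate = os.path.join(current, filename)
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root
            return None
        current = parent
```

This is why you can run `docker-compose` from any subdirectory of your project and it still finds the top-level configuration file.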
### -p, --project-name NAME
@@ -148,7 +164,7 @@ By default, if there are existing containers for a service, `docker-compose up`
Several environment variables are available for you to configure Compose's behaviour.
Variables starting with `DOCKER_` are the same as those used to configure the
Docker command-line client. If you're using boot2docker, `$(boot2docker shellinit)`
Docker command-line client. If you're using boot2docker, `eval "$(boot2docker shellinit)"`
will set them to their correct values.
### COMPOSE\_PROJECT\_NAME
@@ -157,7 +173,9 @@ Sets the project name, which is prepended to the name of every container started
### COMPOSE\_FILE
Sets the path to the `docker-compose.yml` to use. Defaults to `docker-compose.yml` in the current working directory.
Specify what file to read configuration from. If not provided, Compose will look
for `docker-compose.yml` in the current working directory, and then each parent
directory successively, until found.
### DOCKER\_HOST
@@ -174,8 +192,11 @@ Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TL
## Compose documentation
- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
- [Compose command line completion](completion.md)


@@ -1,28 +1,53 @@
---
layout: default
title: Command Completion
---
<!--[metadata]>
+++
title = "Command Completion"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
parent="smn_workw_compose"
weight=3
+++
<![end-metadata]-->
Command Completion
==================
# Command Completion
Compose comes with [command completion](http://en.wikipedia.org/wiki/Command-line_completion)
for the bash shell.
for the bash and zsh shell.
Installing Command Completion
-----------------------------
## Installing Command Completion
### Bash
Make sure bash completion is installed. On a current Linux distribution with a non-minimal installation, bash completion should already be available.
On a Mac, install it with `brew install bash-completion`.
Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g.
curl -L https://raw.githubusercontent.com/docker/compose/1.2.0/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g.
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk '{print $2}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose
Completion will be available upon next login.
Available completions
---------------------
### Zsh
Place the completion script in your `/path/to/zsh/completion`, using e.g. `~/.zsh/completion/`
mkdir -p ~/.zsh/completion
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk '{print $2}')/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose
Include the directory in your `$fpath`, e.g. by adding in `~/.zshrc`
fpath=(~/.zsh/completion $fpath)
Make sure `compinit` is loaded or do it by adding in `~/.zshrc`
autoload -Uz compinit && compinit -i
Then reload your shell
exec $SHELL -l
## Available completions
Depending on what you typed on the command line so far, it will complete
- available docker-compose commands
@@ -34,8 +59,11 @@ Enjoy working with Compose faster and with fewer typos!
## Compose documentation
- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)


@@ -1,10 +1,16 @@
page_title: Quickstart Guide: Compose and Django
page_description: Getting started with Docker Compose and Django
page_keywords: documentation, docs, docker, compose, orchestration, containers,
django
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Django"
description = "Getting started with Docker Compose and Django"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=4
+++
<![end-metadata]-->
## Getting started with Compose and Django
## Quickstart Guide: Compose and Django
This Quick-start Guide will demonstrate how to use Compose to set up and run a
@@ -119,8 +125,11 @@ example, run `docker-compose up` and in another terminal run:
## More Compose documentation
- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)


@@ -1,9 +1,15 @@
---
layout: default
title: Compose environment variables reference
---
<!--[metadata]>
+++
title = "Compose environment variables reference"
description = "Compose CLI reference"
keywords = ["fig, composition, compose, docker, orchestration, cli, reference"]
[menu.main]
parent="smn_compose_ref"
weight=3
+++
<![end-metadata]-->
Environment variables reference
===============================
# Compose environment variables reference
**Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](yml.md#links) for details.
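The shift described in the note above can be sketched in Python. This is a hypothetical contrast of the two styles for a service linked as `db` exposing Postgres; `DB_PORT_5432_TCP_ADDR` is the variable name Docker links inject for that alias and port, and the helper functions are invented for illustration.

```python
import os

def db_host_legacy():
    # Deprecated style: read the address Compose/Docker injected into
    # the environment for the linked `db` service.
    return os.environ.get("DB_PORT_5432_TCP_ADDR", "localhost")

def db_host_recommended():
    # Recommended style: simply use the link name as the hostname;
    # Docker's links make it resolvable inside the container.
    return "db"
```

The recommended style keeps application code free of Docker-specific environment variable names.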
@@ -34,8 +40,11 @@ Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`
## Compose documentation
- [User guide](/)
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose command line completion](completion.md)

docs/extends.md Normal file

@@ -0,0 +1,381 @@
<!--[metadata]>
+++
title = "Extending services in Compose"
description = "How to use Docker Compose's extends keyword to share configuration between files and projects"
keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"]
[menu.main]
parent="smn_workw_compose"
weight=2
+++
<![end-metadata]-->
## Extending services in Compose
Docker Compose's `extends` keyword enables sharing of common configurations
among different files, or even different projects entirely. Extending services
is useful if you have several applications that reuse commonly-defined services.
Using `extends` you can define a service in one place and refer to it from
anywhere.
Alternatively, you can deploy the same application to multiple environments with
a slightly different set of services in each case (or with changes to the
configuration of some services). Moreover, you can do so without copy-pasting
the configuration around.
### Understand the extends configuration
When defining any service in `docker-compose.yml`, you can declare that you are
extending another service like this:
```yaml
web:
extends:
file: common-services.yml
service: webapp
```
This instructs Compose to re-use the configuration for the `webapp` service
defined in the `common-services.yml` file. Suppose that `common-services.yml`
looks like this:
```yaml
webapp:
build: .
ports:
- "8000:8000"
volumes:
- "/data"
```
In this case, you'll get exactly the same result as if you wrote
`docker-compose.yml` with that `build`, `ports` and `volumes` configuration
defined directly under `web`.
You can go further and define (or re-define) configuration locally in
`docker-compose.yml`:
```yaml
web:
extends:
file: common-services.yml
service: webapp
environment:
- DEBUG=1
cpu_shares: 5
```
You can also write other services and link your `web` service to them:
```yaml
web:
extends:
file: common-services.yml
service: webapp
environment:
- DEBUG=1
cpu_shares: 5
links:
- db
db:
image: postgres
```
For full details on how to use `extends`, refer to the [reference](#reference).
### Example use case
In this example, you'll repurpose the example app from the [quick start
guide](index.md). (If you're not familiar with Compose, it's recommended that
you go through the quick start first.) This example assumes you want to use
Compose both to develop an application locally and then deploy it to a
production environment.
The local and production environments are similar, but there are some
differences. In development, you mount the application code as a volume so that
it can pick up changes; in production, the code should be immutable from the
outside. This ensures it's not accidentally changed. The development environment
uses a local Redis container, but in production another team manages the Redis
service, which is listening at `redis-production.example.com`.
To configure with `extends` for this sample, you must:
1. Define the web application as a Docker image in `Dockerfile` and a Compose
service in `common.yml`.
2. Define the development environment in the standard Compose file,
`docker-compose.yml`.
- Use `extends` to pull in the web service.
- Configure a volume to enable code reloading.
- Create an additional Redis service for the application to use locally.
3. Define the production environment in a third Compose file, `production.yml`.
- Use `extends` to pull in the web service.
- Configure the web service to talk to the external, production Redis service.
#### Define the web app
Defining the web application requires the following:
1. Create an `app.py` file.
This file contains a simple Python application that uses Flask to serve HTTP
and increments a counter in Redis:
from flask import Flask
from redis import Redis
import os
app = Flask(__name__)
redis = Redis(host=os.environ['REDIS_HOST'], port=6379)
@app.route('/')
def hello():
redis.incr('hits')
return 'Hello World! I have been seen %s times.\n' % redis.get('hits')
if __name__ == "__main__":
app.run(host="0.0.0.0", debug=True)
This code uses a `REDIS_HOST` environment variable to determine where to
find Redis.
2. Define the Python dependencies in a `requirements.txt` file:
flask
redis
3. Create a `Dockerfile` to build an image containing the app:
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD python app.py
4. Create a Compose configuration file called `common.yml`:
This configuration defines how to run the app.
web:
build: .
ports:
- "5000:5000"
Typically, you would have dropped this configuration into your
`docker-compose.yml` file, but in order to pull it into multiple files with
`extends`, it needs to be in a separate file.
#### Define the development environment
1. Create a `docker-compose.yml` file.
The `extends` option pulls in the `web` service from the `common.yml` file
you created in the previous section.
web:
extends:
file: common.yml
service: web
volumes:
- .:/code
links:
- redis
environment:
- REDIS_HOST=redis
redis:
image: redis
The new addition defines a `web` service that:
- Fetches the base configuration for `web` out of `common.yml`.
- Adds `volumes` and `links` configuration to the base (`common.yml`)
configuration.
- Sets the `REDIS_HOST` environment variable to point to the linked redis
container. This environment uses a stock `redis` image from the Docker Hub.
2. Run `docker-compose up`.
Compose creates and starts the web and redis containers, linked together.
It mounts your application code inside the web container.
3. Verify that the code is mounted by changing the message in
`app.py`&mdash;say, from `Hello world!` to `Hello from Compose!`.
Don't forget to refresh your browser to see the change!
#### Define the production environment
You are almost done. Now, define your production environment:
1. Create a `production.yml` file.
As with `docker-compose.yml`, the `extends` option pulls in the `web` service
from `common.yml`.
web:
extends:
file: common.yml
service: web
environment:
- REDIS_HOST=redis-production.example.com
2. Run `docker-compose -f production.yml up`.
Compose creates *just* a web container and configures the Redis connection via
the `REDIS_HOST` environment variable. This variable points to the production
Redis instance.
> **Note**: If you try to load up the webapp in your browser you'll get an
> error&mdash;`redis-production.example.com` isn't actually a Redis server.
You've now done a basic `extends` configuration. As your application develops,
you can make any necessary changes to the web service in `common.yml`. Compose
picks up both the development and production environments when you next run
`docker-compose`. You don't have to do any copy-and-paste, and you don't have to
manually keep both environments in sync.
### Reference
You can use `extends` on any service together with other configuration keys. It
expects a dictionary containing two keys: `file` and `service`.
The `file` key specifies which file to look in. It can be an absolute path or a
relative one&mdash;if relative, it's treated as relative to the current file.
The `service` key specifies the name of the service to extend, for example `web`
or `database`.
You can extend a service that itself extends another. You can extend
indefinitely. Compose does not support circular references and `docker-compose`
returns an error if it encounters them.
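Following a chain of `extends` declarations with cycle detection can be sketched in Python. This is a hypothetical illustration of the behavior described above, not Compose's actual implementation; the `resolve_chain` helper and its input format are invented for this example.

```python
def resolve_chain(services, name):
    """services maps service name -> config dict, where a config may
    contain an 'extends' key naming another service in the same map.
    Returns the chain of names from `name` down to the base service.
    Raises ValueError on a circular reference, as docker-compose does."""
    chain, seen = [], set()
    while name is not None:
        if name in seen:
            raise ValueError("circular reference involving %r" % name)
        seen.add(name)
        chain.append(name)
        name = services[name].get("extends")
    return chain
```

For example, a `web` service extending `base` resolves to the chain `["web", "base"]`, while `a` extending `b` extending `a` raises an error.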
#### Adding and overriding configuration
Compose copies configurations from the original service over to the local one,
**except** for `links` and `volumes_from`. These exceptions exist to avoid
implicit dependencies&mdash;you always define `links` and `volumes_from`
locally. This ensures dependencies between services are clearly visible when
reading the current file. Defining these locally also ensures changes to the
referenced file don't result in breakage.
If a configuration option is defined in both the original service and the local
service, the local value either *overrides* or *extends* the definition of the
original service, depending on the option.
For single-value options like `image`, `command` or `mem_limit`, the new value
replaces the old value. **This is the default behaviour - all exceptions are
listed below.**
```yaml
# original service
command: python app.py
# local service
command: python otherapp.py
# result
command: python otherapp.py
```
In the case of `build` and `image`, using one in the local service causes
Compose to discard the other, if it was defined in the original service.
```yaml
# original service
build: .
# local service
image: redis
# result
image: redis
```
```yaml
# original service
image: redis
# local service
build: .
# result
build: .
```
For the **multi-value options** `ports`, `expose`, `external_links`, `dns` and
`dns_search`, Compose concatenates both sets of values:
```yaml
# original service
expose:
- "3000"
# local service
expose:
- "4000"
- "5000"
# result
expose:
- "3000"
- "4000"
- "5000"
```
In the case of `environment` and `labels`, Compose "merges" entries together
with locally-defined values taking precedence:
```yaml
# original service
environment:
- FOO=original
- BAR=original
# local service
environment:
- BAR=local
- BAZ=local
# result
environment:
- FOO=original
- BAR=local
- BAZ=local
```
Finally, for `volumes` and `devices`, Compose "merges" entries together with
locally-defined bindings taking precedence:
```yaml
# original service
volumes:
- /original-dir/foo:/foo
- /original-dir/bar:/bar
# local service
volumes:
- /local-dir/bar:/bar
- /local-dir/baz:/baz
# result
volumes:
- /original-dir/foo:/foo
- /local-dir/bar:/bar
- /local-dir/baz:/baz
```
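The override and merge rules above can be summarized in a short Python sketch. This is a hypothetical model of the documented semantics (single-value replace, multi-value concatenation, key-wise merge for `environment`/`labels`, and `build`/`image` discarding each other), not Compose's real implementation; the helper names are invented for this example.

```python
# Options whose values are concatenated rather than replaced.
MULTI_VALUE = {"ports", "expose", "external_links", "dns", "dns_search"}

def _to_dict(pairs):
    """Turn ["KEY=value", ...] entries into a dict keyed by KEY."""
    return dict(p.split("=", 1) for p in pairs)

def merge_service(original, local):
    """Apply the local service's configuration on top of the original's."""
    result = dict(original)
    for key, value in local.items():
        if key in MULTI_VALUE:
            # Concatenate both sets of values.
            result[key] = original.get(key, []) + value
        elif key in ("environment", "labels"):
            # Merge entries key-wise; locally-defined values win.
            merged = _to_dict(original.get(key, []))
            merged.update(_to_dict(value))
            result[key] = ["%s=%s" % kv for kv in sorted(merged.items())]
        else:
            # Single-value options: the local value replaces the original.
            result[key] = value
    # `build` and `image` discard each other.
    if "image" in local:
        result.pop("build", None)
    elif "build" in local:
        result.pop("image", None)
    return result
```

Running this on the `expose` and `environment` examples above reproduces the documented results: `["3000", "4000", "5000"]` and `BAR=local` winning over `BAR=original`.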
## Compose documentation
- [User guide](/)
- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose command line completion](completion.md)


@@ -1,48 +1,47 @@
page_title: Compose: Multi-container orchestration for Docker
page_description: Introduction and Overview of Compose
page_keywords: documentation, docs, docker, compose, orchestration, containers
<!--[metadata]>
+++
title = "Overview of Docker Compose"
description = "Introduction and Overview of Compose"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
+++
<![end-metadata]-->
# Docker Compose
# Overview of Docker Compose
Compose is a tool for defining and running complex applications with Docker.
With Compose, you define a multi-container application in a single file, then
spin your application up in a single command which does everything that needs to
be done to get it running.
Compose is a tool for defining and running multi-container applications with
Docker. With Compose, you define a multi-container application in a single
file, then spin your application up in a single command which does everything
that needs to be done to get it running.
Compose is great for development environments, staging servers, and CI. We don't
recommend that you use it in production yet.
Using Compose is basically a three-step process.
First, you define your app's environment with a `Dockerfile` so it can be
reproduced anywhere:
```Dockerfile
FROM python:2.7
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
CMD python app.py
```
Next, you define the services that make up your app in `docker-compose.yml` so
1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment:
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.
A `docker-compose.yml` looks like this:
```yaml
web:
build: .
links:
- db
ports:
- "8000:8000"
db:
image: postgres
- "5000:5000"
volumes:
- .:/code
links:
- redis
redis:
image: redis
```
Lastly, run `docker-compose up` and Compose will start and run your entire app.
Compose has commands for managing the whole lifecycle of your application:
* Start, stop and rebuild services
@@ -53,6 +52,9 @@ Compose has commands for managing the whole lifecycle of your application:
## Compose documentation
- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
@@ -108,13 +110,19 @@ specify how to build the image using a file called
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD python app.py
This tells Docker to include Python, your code, and your Python dependencies in
a Docker image. For more information on how to write Dockerfiles, see the
[Docker user
guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile)
and the
[Dockerfile reference](http://docs.docker.com/reference/builder/).
This tells Docker to:
* Build an image starting with the Python 2.7 image.
* Add the current directory `.` into the path `/code` in the image.
* Set the working directory to `/code`.
* Install your Python dependencies.
* Set the default command for the container to `python app.py`
For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
You can test that this builds by running `docker build -t web .`.
### Define services
@@ -122,7 +130,6 @@ Next, define a set of services using `docker-compose.yml`:
web:
build: .
command: python app.py
ports:
- "5000:5000"
volumes:
@@ -134,19 +141,20 @@ Next, define a set of services using `docker-compose.yml`:
This defines two services:
- `web`, which is built from the `Dockerfile` in the current directory. It also
says to run the command `python app.py` inside the image, forward the exposed
port 5000 on the container to port 5000 on the host machine, connect up the
Redis service, and mount the current directory inside the container so we can
work on code without having to rebuild the image.
- `redis`, which uses the public image
[redis](https://registry.hub.docker.com/_/redis/), which gets pulled from the
Docker Hub registry.
#### web
* Builds from the `Dockerfile` in the current directory.
* Forwards the exposed port 5000 on the container to port 5000 on the host machine.
* Connects the web container to the Redis service via a link.
* Mounts the current directory on the host to `/code` inside the container allowing you to modify the code without having to rebuild the image.
#### redis
* Uses the public [Redis](https://registry.hub.docker.com/_/redis/) image which gets pulled from the Docker Hub registry.
### Build and run your app with Compose
Now, when you run `docker-compose up`, Compose will pull a Redis image, build an
image for your code, and start everything up:
Now, when you run `docker-compose up`, Compose will pull a Redis image, build an image for your code, and start everything up:
$ docker-compose up
Pulling image redis...
@@ -157,7 +165,12 @@ image for your code, and start everything up:
web_1 | * Running on http://0.0.0.0:5000/
The web app should now be listening on port 5000 on your Docker daemon host (if
you're using Boot2docker, `boot2docker ip` will tell you its address).
you're using Boot2docker, `boot2docker ip` will tell you its address). In a browser,
open `http://ip-from-boot2docker:5000` and you should get a message in your browser saying:
`Hello World! I have been seen 1 times.`
Refreshing the page will increment the number.
If you want to run your services in the background, you can pass the `-d` flag
(for daemon mode) to `docker-compose up` and use `docker-compose ps` to see what
@@ -191,3 +204,31 @@ At this point, you have seen the basics of how Compose works.
[Rails](rails.md), or [Wordpress](wordpress.md).
- See the reference guides for complete details on the [commands](cli.md), the
[configuration file](yml.md) and [environment variables](env.md).
## Release Notes
### Version 1.2.0 (April 7, 2015)
For complete information on this release, see the [1.2.0 Milestone project page](https://github.com/docker/compose/wiki/1.2.0-Milestone-Project-Page).
In addition to bug fixes and refinements, this release adds the following:
* The `extends` keyword, which adds the ability to extend services by sharing common configurations. For details, see
[PR #1088](https://github.com/docker/compose/pull/1088).
* Better integration with Swarm. Swarm will now schedule inter-dependent
containers on the same host. For details, see
[PR #972](https://github.com/docker/compose/pull/972).
## Getting help
Docker Compose is still in its infancy and under active development. If you need
help, would like to contribute, or simply want to talk about the project with
like-minded individuals, we have a number of open channels for communication.
* To report bugs or file feature requests: please use the [issue tracker on Github](https://github.com/docker/compose/issues).
* To talk about the project with people in real time: please join the `#docker-compose` channel on IRC.
* To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/compose/pulls).
For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/project/get-help/).


@@ -1,28 +1,37 @@
page_title: Installing Compose
page_description: How to install Docker Compose
page_keywords: compose, orchestration, install, installation, docker, documentation
<!--[metadata]>
+++
title = "Docker Compose"
description = "How to install Docker Compose"
keywords = ["compose, orchestration, install, installation, docker, documentation"]
[menu.main]
parent="mn_install"
weight=4
+++
<![end-metadata]-->
## Installing Compose
# Install Docker Compose
To install Compose, you'll need to install Docker first. You'll then install
Compose with a `curl` command.
### Install Docker
## Install Docker
First, install Docker version 1.3 or greater:
First, install Docker version 1.6 or greater:
- [Instructions for Mac OS X](http://docs.docker.com/installation/mac/)
- [Instructions for Ubuntu](http://docs.docker.com/installation/ubuntulinux/)
- [Instructions for other systems](http://docs.docker.com/installation/)
### Install Compose
## Install Compose
To install Compose, run the following commands:
curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
> Note: If you get a "Permission denied" error, your `/usr/local/bin` directory probably isn't writable and you'll need to install Compose as the superuser. Run `sudo -i`, then the two commands above, then `exit`.
Optionally, you can also install [command completion](completion.md) for the
bash shell.
@@ -31,12 +40,27 @@ Compose can also be installed as a Python package:
$ sudo pip install -U docker-compose
No further steps are required; Compose should now be successfully installed.
You can test the installation by running `docker-compose --version`.
### Upgrading
If you're coming from Compose 1.2 or earlier, you'll need to remove or migrate your existing containers after upgrading Compose. This is because, as of version 1.3, Compose uses Docker labels to keep track of containers, and so they need to be recreated with labels added.
If Compose detects containers that were created without labels, it will refuse to run so that you don't end up with two sets of them. If you want to keep using your existing containers (for example, because they have data volumes you want to preserve) you can migrate them with the following command:
docker-compose migrate-to-labels
Alternatively, if you're not worried about keeping them, you can remove them - Compose will just create new ones.
docker rm -f myapp_web_1 myapp_db_1 ...
## Compose documentation
- [User guide](index.md)
- [User guide](/)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)


@@ -1,10 +0,0 @@
- ['compose/index.md', 'User Guide', 'Docker Compose' ]
- ['compose/install.md', 'Installation', 'Docker Compose']
- ['compose/cli.md', 'Reference', 'Compose command line']
- ['compose/yml.md', 'Reference', 'Compose yml']
- ['compose/env.md', 'Reference', 'Compose ENV variables']
- ['compose/completion.md', 'Reference', 'Compose commandline completion']
- ['compose/django.md', 'Examples', 'Getting started with Compose and Django']
- ['compose/rails.md', 'Examples', 'Getting started with Compose and Rails']
- ['compose/wordpress.md', 'Examples', 'Getting started with Compose and Wordpress']

docs/production.md Normal file

@@ -0,0 +1,96 @@
<!--[metadata]>
+++
title = "Using Compose in production"
description = "Guide to using Docker Compose in production"
keywords = ["documentation, docs, docker, compose, orchestration, containers, production"]
[menu.main]
parent="smn_workw_compose"
weight=1
+++
<![end-metadata]-->
## Using Compose in production
While **Compose is not yet considered production-ready**, this guide can help
if you'd like to experiment with and learn more about using it in production
deployments. The project is actively working towards becoming production-ready;
check out the [roadmap](https://github.com/docker/compose/blob/master/ROADMAP.md)
to see how it's coming along and what still needs to be done.
When deploying to production, you'll almost certainly want to make changes to
your app configuration that are more appropriate to a live environment. These
changes may include:
- Removing any volume bindings for application code, so that code stays inside
the container and can't be changed from outside
- Binding to different ports on the host
- Setting environment variables differently (e.g., to decrease the verbosity of
logging, or to enable email sending)
- Specifying a restart policy (e.g., `restart: always`) to avoid downtime
- Adding extra services (e.g., a log aggregator)
For this reason, you'll probably want to define a separate Compose file, say
`production.yml`, which specifies production-appropriate configuration.
> **Note:** The [extends](extends.md) keyword is useful for maintaining multiple
> Compose files which re-use common services without having to manually copy and
> paste.
Once you've got an alternate configuration file, make Compose use it
by setting the `COMPOSE_FILE` environment variable:
$ COMPOSE_FILE=production.yml
$ docker-compose up -d
> **Note:** You can also use the file for a one-off command without setting
> an environment variable. You do this by passing the `-f` flag, e.g.,
> `docker-compose -f production.yml up -d`.
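The precedence described above can be sketched as follows. This is an illustrative helper, not Compose's actual resolution code, and the function name is hypothetical:

```python
import os

DEFAULT_COMPOSE_FILE = "docker-compose.yml"

def resolve_compose_file(cli_flag=None, environ=None):
    """Pick the Compose file: the -f flag wins, then COMPOSE_FILE, then the default."""
    if environ is None:
        environ = os.environ
    if cli_flag:  # e.g. docker-compose -f production.yml up -d
        return cli_flag
    return environ.get("COMPOSE_FILE", DEFAULT_COMPOSE_FILE)
```

For example, with `COMPOSE_FILE=production.yml` set and no `-f` flag, this resolves to `production.yml`.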
### Deploying changes
When you make changes to your app code, you'll need to rebuild your image and
recreate your app's containers. To redeploy a service called
`web`, you would use:
$ docker-compose build web
$ docker-compose up --no-deps -d web
This will first rebuild the image for `web` and then stop, destroy, and recreate
*just* the `web` service. The `--no-deps` flag prevents Compose from also
recreating any services which `web` depends on.
### Running Compose on a single server
You can use Compose to deploy an app to a remote Docker host by setting the
`DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables
appropriately. For tasks like this,
[Docker Machine](https://docs.docker.com/machine) makes managing local and
remote Docker hosts very easy, and is recommended even if you're not deploying
remotely.
Once you've set up your environment variables, all the normal `docker-compose`
commands will work with no further configuration.
### Running Compose on a Swarm cluster
[Docker Swarm](https://docs.docker.com/swarm), a Docker-native clustering
system, exposes the same API as a single Docker host, which means you can use
Compose against a Swarm instance and run your apps across multiple hosts.
Compose/Swarm integration is still in the experimental stage, and Swarm is still
in beta, but if you'd like to explore and experiment, check out the
[integration guide](https://github.com/docker/compose/blob/master/SWARM.md).
## Compose documentation
- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)
- [Compose command line completion](completion.md)

View File

@@ -1,10 +1,15 @@
page_title: Quickstart Guide: Compose and Rails
page_description: Getting started with Docker Compose and Rails
page_keywords: documentation, docs, docker, compose, orchestration, containers, rails
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Rails"
description = "Getting started with Docker Compose and Rails"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=5
+++
<![end-metadata]-->
## Quickstart Guide: Compose and Rails
This Quickstart guide will show you how to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md).
@@ -119,8 +124,11 @@ you're using Boot2docker, `boot2docker ip` will tell you its address).
## More Compose documentation
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)

View File

@@ -1,14 +1,21 @@
page_title: Quickstart Guide: Compose and Wordpress
page_description: Getting started with Docker Compose and Rails
page_keywords: documentation, docs, docker, compose, orchestration, containers, wordpress
<!--[metadata]>
+++
title = "Quickstart Guide: Compose and Wordpress"
description = "Getting started with Compose and Wordpress"
keywords = ["documentation, docs, docker, compose, orchestration, containers"]
[menu.main]
parent="smn_workw_compose"
weight=6
+++
<![end-metadata]-->
# Quickstart Guide: Compose and Wordpress
You can use Compose to easily run Wordpress in an isolated environment built
with Docker containers.
## Define the project
First, [Install Compose](install.md) and then download Wordpress into the
current directory:
@@ -114,8 +121,11 @@ address).
## More Compose documentation
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Yaml file reference](yml.md)
- [Compose environment variables](env.md)

View File

@@ -1,10 +1,13 @@
---
layout: default
title: docker-compose.yml reference
page_title: docker-compose.yml reference
page_description: docker-compose.yml reference
page_keywords: fig, composition, compose, docker
---
<!--[metadata]>
+++
title = "docker-compose.yml reference"
description = "docker-compose.yml reference"
keywords = ["fig, composition, compose, docker"]
[menu.main]
parent="smn_compose_ref"
+++
<![end-metadata]-->
# docker-compose.yml reference
@@ -29,8 +32,8 @@ image: a4bc65fd
### build
Path to a directory containing a Dockerfile. When the value supplied is a
relative path, it is interpreted as relative to the location of the yml file
itself. This directory is also the build context that is sent to the Docker daemon.
Compose will build and tag it with a generated name, and use that image thereafter.
@@ -39,6 +42,16 @@ Compose will build and tag it with a generated name, and use that image thereaft
build: /path/to/build/dir
```
### dockerfile
Alternate Dockerfile.
Compose will use this file instead of the default `Dockerfile` when building the image.
```
dockerfile: Dockerfile-alternate
```
### command
Override the default command.
@@ -87,6 +100,23 @@ external_links:
- project_db_1:postgresql
```
### extra_hosts
Add hostname mappings. Use the same values as the docker client `--add-host` parameter.
```
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
```
An entry with the IP address and hostname will be created in `/etc/hosts` inside containers for this service, e.g.:
```
162.242.195.82 somehost
50.31.209.229 otherhost
```
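The mapping from `extra_hosts` entries to `/etc/hosts` lines can be sketched like this (a hypothetical helper for illustration, not Compose's internal code):

```python
def extra_hosts_to_etc_hosts(entries):
    """Turn 'hostname:ip' strings into 'ip hostname' /etc/hosts lines."""
    lines = []
    for entry in entries:
        hostname, ip = entry.split(":", 1)
        lines.append("%s %s" % (ip, hostname))
    return lines
```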
### ports
Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container
@@ -173,8 +203,12 @@ env_file:
- /opt/secrets.env
```
Compose expects each line in an env file to be in `VAR=VAL` format. Lines
beginning with `#` (i.e. comments) are ignored, as are blank lines.
```
# Set Rails/Rack environment
RACK_ENV=development
```
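A minimal parser following these rules might look like the following sketch (for illustration only, not Compose's own implementation):

```python
def parse_env_file(text):
    """Parse env-file text into a dict: VAR=VAL lines; '#' comments and blanks skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env
```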
### extends
@@ -217,42 +251,42 @@ Here, the `web` service in **development.yml** inherits the configuration of
the `webapp` service in **common.yml** - the `build` and `environment` keys -
and adds `ports` and `links` configuration. It overrides one of the defined
environment variables (DEBUG) with a new value, and the other one
(SEND_EMAILS) is left untouched. It's exactly as if you defined `web` like
this:
```yaml
web:
  build: ./webapp
  ports:
    - "8000:8000"
  links:
    - db
  environment:
    - DEBUG=true
    - SEND_EMAILS=false
```
For more on `extends`, see the [tutorial](extends.md#example) and
[reference](extends.md#reference).
The `extends` option is great for sharing configuration between different
apps, or for configuring the same app differently for different environments.
You could write a new file for a staging environment, **staging.yml**, which
binds to a different port and doesn't turn on debugging:
```
web:
  extends:
    file: common.yml
    service: webapp
  ports:
    - "80:8000"
  links:
    - db
db:
  image: postgres
```
### labels
Add metadata to containers using [Docker labels](http://docs.docker.com/userguide/labels-custom-metadata/). You can use either an array or a dictionary.
It's recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software.
```
labels:
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"
```
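Both forms describe the same mapping; a conversion between them can be sketched as follows (hypothetical helper, not the actual Compose code):

```python
def normalize_labels(labels):
    """Accept a dict or a list of 'key=value' strings and return a dict.

    A list entry without '=' becomes a key with an empty value.
    """
    if isinstance(labels, dict):
        return dict(labels)
    result = {}
    for item in labels:
        key, _, value = item.partition("=")
        result[key] = value
    return result
```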
> **Note:** When you extend a service, `links` and `volumes_from`
> configuration options are **not** inherited - you will have to define
> those manually each time you extend it.
### log driver
Specify a logging driver for the service's containers, as with the ``--log-driver`` option for docker run ([documented here](http://docs.docker.com/reference/run/#logging-drivers-log-driver)).
Allowed values are currently ``json-file``, ``syslog`` and ``none``. The list will change over time as more drivers are added to the Docker engine.
The default value is `json-file`.
```
log_driver: "json-file"
log_driver: "syslog"
log_driver: "none"
```
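Since only a fixed set of drivers is accepted at this version, a configuration check can be sketched as below (hypothetical helper; the allowed set will grow as the Docker engine adds drivers):

```python
ALLOWED_LOG_DRIVERS = {"json-file", "syslog", "none"}

def validate_log_driver(value):
    """Return the driver name if currently supported, otherwise raise ValueError."""
    if value not in ALLOWED_LOG_DRIVERS:
        raise ValueError("Unsupported log_driver: %r" % value)
    return value
```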
### net
@@ -264,6 +298,16 @@ net: "none"
net: "container:[name or id]"
net: "host"
```
### pid
```
pid: "host"
```
Sets the PID mode to the host PID mode. This turns on sharing of the PID
address space between the container and the host operating system. Containers
launched with this flag can access and manipulate other containers in the
bare-metal machine's namespace, and vice versa.
### dns
@@ -301,13 +345,34 @@ dns_search:
- dc2.example.com
```
### devices
List of device mappings. Uses the same format as the `--device` docker
client create option.
```
devices:
- "/dev/ttyUSB0:/dev/ttyUSB0"
```
### security_opt
Override the default labeling scheme for each container.
```
security_opt:
- label:user:USER
- label:role:ROLE
```
### working\_dir, entrypoint, user, hostname, domainname, mem\_limit, privileged, restart, stdin\_open, tty, cpu\_shares, cpuset, read\_only
Each of these is a single value, analogous to its
[docker run](https://docs.docker.com/reference/run/) counterpart.
```
cpu_shares: 73
cpuset: 0,1
working_dir: /code
entrypoint: /code/entrypoint.sh
@@ -323,12 +388,16 @@ restart: always
stdin_open: true
tty: true
read_only: true
```
## Compose documentation
- [Installing Compose](install.md)
- [User guide](index.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with Wordpress](wordpress.md)
- [Command line reference](cli.md)
- [Compose environment variables](env.md)
- [Compose command line completion](completion.md)

View File

@@ -1,8 +1,8 @@
PyYAML==3.10
docker-py==1.0.0
dockerpty==0.3.2
docker-py==1.3.0
dockerpty==0.3.4
docopt==0.6.1
requests==2.2.1
requests==2.6.1
six==1.7.3
texttable==0.8.2
websocket-client==0.11.0
websocket-client==0.32.0

View File

@@ -1,33 +0,0 @@
#!/bin/bash
if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/docker/fig.git'
VALIDATE_BRANCH='master'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
VALIDATE_BRANCH="${TRAVIS_BRANCH}"
fi
VALIDATE_HEAD="$(git rev-parse --verify HEAD)"
git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"
VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
validate_diff() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git diff "$VALIDATE_COMMIT_DIFF" "$@"
fi
}
validate_log() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git log "$VALIDATE_COMMIT_LOG" "$@"
fi
}
fi

View File

@@ -1,7 +1,10 @@
#!/bin/bash
set -ex
PATH="/usr/local/bin:$PATH"
rm -rf venv
virtualenv venv
virtualenv -p /usr/local/bin/python venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-dev.txt
venv/bin/pip install .

View File

@@ -8,9 +8,6 @@
set -e
>&2 echo "Validating DCO"
script/validate-dco
export DOCKER_VERSIONS=all
. script/test-versions

script/prepare-osx Executable file
View File

@@ -0,0 +1,53 @@
#!/bin/bash
set -ex
python_version() {
python -V 2>&1
}
openssl_version() {
python -c "import ssl; print ssl.OPENSSL_VERSION"
}
desired_python_version="2.7.9"
desired_python_brew_version="2.7.9"
python_formula="https://raw.githubusercontent.com/Homebrew/homebrew/1681e193e4d91c9620c4901efd4458d9b6fcda8e/Library/Formula/python.rb"
desired_openssl_version="1.0.1j"
desired_openssl_brew_version="1.0.1j_1"
openssl_formula="https://raw.githubusercontent.com/Homebrew/homebrew/62fc2a1a65e83ba9dbb30b2e0a2b7355831c714b/Library/Formula/openssl.rb"
PATH="/usr/local/bin:$PATH"
if !(which brew); then
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
fi
brew update
if !(python_version | grep "$desired_python_version"); then
if brew list | grep python; then
brew unlink python
fi
brew install "$python_formula"
brew switch python "$desired_python_brew_version"
fi
if !(openssl_version | grep "$desired_openssl_version"); then
if brew list | grep openssl; then
brew unlink openssl
fi
brew install "$openssl_formula"
brew switch openssl "$desired_openssl_brew_version"
fi
echo "*** Using $(python_version)"
echo "*** Using $(openssl_version)"
if !(which virtualenv); then
pip install virtualenv
fi

View File

@@ -9,9 +9,9 @@ docker build -t "$TAG" .
docker run \
--rm \
--volume="/var/run/docker.sock:/var/run/docker.sock" \
--volume="$(pwd):/code" \
-e DOCKER_VERSIONS \
-e "TAG=$TAG" \
-e "affinity:image==$TAG" \
--entrypoint="script/test-versions" \
"$TAG" \
"$@"

View File

@@ -5,10 +5,10 @@
set -e
>&2 echo "Running lint checks"
flake8 compose
flake8 compose tests setup.py
if [ "$DOCKER_VERSIONS" == "" ]; then
DOCKER_VERSIONS="1.5.0"
DOCKER_VERSIONS="default"
elif [ "$DOCKER_VERSIONS" == "all" ]; then
DOCKER_VERSIONS="$ALL_DOCKER_VERSIONS"
fi

View File

@@ -1,58 +0,0 @@
#!/bin/bash
set -e
source "$(dirname "$BASH_SOURCE")/.validate"
adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }')
dels=$(validate_diff --numstat | awk '{ s += $2 } END { print s }')
notDocs="$(validate_diff --numstat | awk '$3 !~ /^docs\// { print $3 }')"
: ${adds:=0}
: ${dels:=0}
# "Username may only contain alphanumeric characters or dashes and cannot begin with a dash"
githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'
# https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work
dcoPrefix='Signed-off-by:'
dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"
check_dco() {
grep -qE "$dcoRegex"
}
if [ $adds -eq 0 -a $dels -eq 0 ]; then
echo '0 adds, 0 deletions; nothing to validate! :)'
elif [ -z "$notDocs" -a $adds -le 1 -a $dels -le 1 ]; then
echo 'Congratulations! DCO small-patch-exception material!'
else
commits=( $(validate_log --format='format:%H%n') )
badCommits=()
for commit in "${commits[@]}"; do
if [ -z "$(git log -1 --format='format:' --name-status "$commit")" ]; then
# no content (ie, Merge commit, etc)
continue
fi
if ! git log -1 --format='format:%B' "$commit" | check_dco; then
badCommits+=( "$commit" )
fi
done
if [ ${#badCommits[@]} -eq 0 ]; then
echo "Congratulations! All commits are properly signed with the DCO!"
else
{
echo "These commits do not have a proper '$dcoPrefix' marker:"
for commit in "${badCommits[@]}"; do
echo " - $commit"
done
echo
echo 'Please amend each commit to include a properly formatted DCO marker.'
echo
echo 'Visit the following URL for information about the Docker DCO:'
echo ' https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work'
echo
} >&2
false
fi
fi

View File

@@ -1,11 +1,9 @@
#!/bin/bash
if [ "$DOCKER_VERSION" == "" ]; then
DOCKER_VERSION="1.5.0"
if [ "$DOCKER_VERSION" != "" ] && [ "$DOCKER_VERSION" != "default" ]; then
ln -fs "/usr/local/bin/docker-$DOCKER_VERSION" "/usr/local/bin/docker"
fi
ln -fs "/usr/local/bin/docker-$DOCKER_VERSION" "/usr/local/bin/docker"
# If a pidfile is still around (for example after a container restart),
# delete it so that docker can start.
rm -rf /var/run/docker.pid

View File

@@ -27,14 +27,15 @@ def find_version(*file_paths):
install_requires = [
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4',
'requests >= 2.2.1, < 2.6',
'requests >= 2.6.1, < 2.7',
'texttable >= 0.8.1, < 0.9',
'websocket-client >= 0.11.0, < 1.0',
'docker-py >= 1.0.0, < 1.2',
'dockerpty >= 0.3.2, < 0.4',
'docker-py >= 1.3.0, < 1.4',
'dockerpty >= 0.3.4, < 0.4',
'six >= 1.3.0, < 2',
]
tests_require = [
'mock >= 1.0.1',
'nose',
@@ -54,7 +55,7 @@ setup(
url='https://www.docker.com/',
author='Docker, Inc.',
license='Apache License 2.0',
packages=find_packages(exclude=[ 'tests.*', 'tests' ]),
packages=find_packages(exclude=['tests.*', 'tests']),
include_package_data=True,
test_suite='nose.collector',
install_requires=install_requires,

View File

@@ -1,7 +1,6 @@
import sys
if sys.version_info >= (2,7):
import unittest
if sys.version_info >= (2, 7):
import unittest # NOQA
else:
import unittest2 as unittest
import unittest2 as unittest # NOQA

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
command: /bin/sleep 300
command: top
another:
image: busybox:latest
command: /bin/sleep 300
command: top

View File

@@ -1,2 +1,3 @@
FROM busybox:latest
LABEL com.docker.compose.test_image=true
CMD echo "success"

View File

@@ -1,3 +1,4 @@
FROM busybox
FROM busybox:latest
LABEL com.docker.compose.test_image=true
VOLUME /data
CMD sleep 3000
CMD top

View File

@@ -1,2 +1,3 @@
FROM busybox:latest
LABEL com.docker.compose.test_image=true
ENTRYPOINT echo "From prebuilt entrypoint"

View File

@@ -1,6 +1,6 @@
service:
image: busybox:latest
command: sleep 5
command: top
environment:
foo: bar

View File

@@ -2,7 +2,7 @@ myweb:
extends:
file: common.yml
service: web
command: sleep 300
command: top
links:
- "mydb:db"
environment:
@@ -13,4 +13,4 @@ myweb:
BAZ: "2"
mydb:
image: busybox
command: sleep 300
command: top

View File

@@ -0,0 +1,6 @@
dnebase:
build: nonexistent.path
command: /bin/true
environment:
- FOO=1
- BAR=1

View File

@@ -0,0 +1,8 @@
dnechild:
extends:
file: nonexistent-path-base.yml
service: dnebase
image: busybox
command: /bin/true
environment:
- BAR=2

View File

@@ -1,11 +1,11 @@
db:
image: busybox:latest
command: /bin/sleep 300
command: top
web:
image: busybox:latest
command: /bin/sleep 300
command: top
links:
- db:db
console:
image: busybox:latest
command: /bin/sleep 300
command: top

View File

@@ -1,3 +1,3 @@
definedinyamlnotyml:
image: busybox:latest
command: /bin/sleep 300
command: top

View File

@@ -1,3 +1,3 @@
yetanother:
image: busybox:latest
command: /bin/sleep 300
command: top

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
command: /bin/sleep 300
command: top
another:
image: busybox:latest
command: /bin/sleep 300
command: top

View File

@@ -0,0 +1,6 @@
simple:
image: busybox:latest
command: /bin/sleep 300
ports:
- '3000'

View File

@@ -1,7 +1,7 @@
simple:
image: busybox:latest
command: /bin/sleep 300
command: top
ports:
- '3000'
- '49152:3001'

View File

@@ -1,6 +1,6 @@
simple:
image: busybox:latest
command: /bin/sleep 300
command: top
another:
image: busybox:latest
command: /bin/sleep 300
command: top

View File

@@ -1,2 +1,3 @@
FROM busybox:latest
LABEL com.docker.compose.test_image=true
CMD echo "success"

View File

@@ -1,12 +1,15 @@
from __future__ import absolute_import
from operator import attrgetter
import sys
import os
import shlex
from six import StringIO
from mock import patch
from .testcases import DockerClientTestCase
from compose.cli.main import TopLevelCommand
from compose.project import NoSuchService
class CLITestCase(DockerClientTestCase):
@@ -21,6 +24,9 @@ class CLITestCase(DockerClientTestCase):
sys.exit = self.old_sys_exit
self.project.kill()
self.project.remove_stopped()
for container in self.project.containers(stopped=True, one_off=True):
container.remove(force=True)
super(CLITestCase, self).tearDown()
@property
def project(self):
@@ -62,6 +68,10 @@ class CLITestCase(DockerClientTestCase):
@patch('sys.stdout', new_callable=StringIO)
def test_ps_alternate_composefile(self, mock_stdout):
config_path = os.path.abspath(
'tests/fixtures/multiple-composefiles/compose2.yml')
self._project = self.command.get_project(config_path)
self.command.base_dir = 'tests/fixtures/multiple-composefiles'
self.command.dispatch(['-f', 'compose2.yml', 'up', '-d'], None)
self.command.dispatch(['-f', 'compose2.yml', 'ps'], None)
@@ -154,6 +164,19 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(old_ids, new_ids)
def test_up_with_timeout(self):
self.command.dispatch(['up', '-d', '-t', '1'], None)
service = self.project.get_service('simple')
another = self.project.get_service('another')
self.assertEqual(len(service.containers()), 1)
self.assertEqual(len(another.containers()), 1)
# Ensure containers don't have stdin and stdout connected in -d mode
config = service.containers()[0].inspect()['Config']
self.assertFalse(config['AttachStderr'])
self.assertFalse(config['AttachStdout'])
self.assertFalse(config['AttachStdin'])
@patch('dockerpty.start')
def test_run_service_without_links(self, mock_stdout):
self.command.base_dir = 'tests/fixtures/links-composefile'
@@ -200,13 +223,10 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(old_ids, new_ids)
@patch('dockerpty.start')
def test_run_without_command(self, __):
def test_run_without_command(self, _):
self.command.base_dir = 'tests/fixtures/commands-composefile'
self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test')
for c in self.project.containers(stopped=True, one_off=True):
c.remove()
self.command.dispatch(['run', 'implicit'], None)
service = self.project.get_service('implicit')
containers = service.containers(stopped=True, one_off=True)
@@ -234,8 +254,8 @@ class CLITestCase(DockerClientTestCase):
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
self.assertEqual(
container.human_readable_command,
u'/bin/echo helloworld'
shlex.split(container.human_readable_command),
[u'/bin/echo', u'helloworld'],
)
@patch('dockerpty.start')
@@ -332,6 +352,21 @@ class CLITestCase(DockerClientTestCase):
self.command.dispatch(['rm', '-f'], None)
self.assertEqual(len(service.containers(stopped=True)), 0)
def test_stop(self):
self.command.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['stop', '-t', '1'], None)
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertFalse(service.containers(stopped=True)[0].is_running)
def test_logs_invalid_service_name(self):
with self.assertRaises(NoSuchService):
self.command.dispatch(['logs', 'madeupname'], None)
def test_kill(self):
self.command.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
@@ -343,22 +378,22 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(len(service.containers(stopped=True)), 1)
self.assertFalse(service.containers(stopped=True)[0].is_running)
def test_kill_signal_sigint(self):
def test_kill_signal_sigstop(self):
self.command.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['kill', '-s', 'SIGINT'], None)
self.command.dispatch(['kill', '-s', 'SIGSTOP'], None)
self.assertEqual(len(service.containers()), 1)
# The container is still running. It has been only interrupted
# The container is still running. It has only been paused
self.assertTrue(service.containers()[0].is_running)
def test_kill_interrupted_service(self):
def test_kill_stopped_service(self):
self.command.dispatch(['up', '-d'], None)
service = self.project.get_service('simple')
self.command.dispatch(['kill', '-s', 'SIGINT'], None)
self.command.dispatch(['kill', '-s', 'SIGSTOP'], None)
self.assertTrue(service.containers()[0].is_running)
self.command.dispatch(['kill', '-s', 'SIGKILL'], None)
@@ -371,7 +406,7 @@ class CLITestCase(DockerClientTestCase):
container = service.create_container()
service.start_container(container)
started_at = container.dictionary['State']['StartedAt']
self.command.dispatch(['restart'], None)
self.command.dispatch(['restart', '-t', '1'], None)
container.inspect()
self.assertNotEqual(
container.dictionary['State']['FinishedAt'],
@@ -405,7 +440,6 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(len(project.get_service('another').containers()), 0)
def test_port(self):
self.command.base_dir = 'tests/fixtures/ports-composefile'
self.command.dispatch(['up', '-d'], None)
container = self.project.get_service('simple').get_container()
@@ -419,6 +453,27 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(get_port(3001), "0.0.0.0:49152")
self.assertEqual(get_port(3002), "")
def test_port_with_scale(self):
self.command.base_dir = 'tests/fixtures/ports-composefile-scale'
self.command.dispatch(['scale', 'simple=2'], None)
containers = sorted(
self.project.containers(service_names=['simple']),
key=attrgetter('name'))
@patch('sys.stdout', new_callable=StringIO)
def get_port(number, mock_stdout, index=None):
if index is None:
self.command.dispatch(['port', 'simple', str(number)], None)
else:
self.command.dispatch(['port', '--index=' + str(index), 'simple', str(number)], None)
return mock_stdout.getvalue().rstrip()
self.assertEqual(get_port(3000), containers[0].get_local_port(3000))
self.assertEqual(get_port(3000, index=1), containers[0].get_local_port(3000))
self.assertEqual(get_port(3000, index=2), containers[1].get_local_port(3000))
self.assertEqual(get_port(3002), "")
def test_env_file_relative_to_compose_file(self):
config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml')
self.command.dispatch(['-f', config_path, 'up', '-d'], None)

View File

@@ -0,0 +1,207 @@
import unittest
from mock import Mock
from docker.errors import APIError
from compose import legacy
from compose.project import Project
from .testcases import DockerClientTestCase
class UtilitiesTestCase(unittest.TestCase):
def test_has_container(self):
self.assertTrue(
legacy.has_container("composetest", "web", "composetest_web_1", one_off=False),
)
self.assertFalse(
legacy.has_container("composetest", "web", "composetest_web_run_1", one_off=False),
)
def test_has_container_one_off(self):
self.assertFalse(
legacy.has_container("composetest", "web", "composetest_web_1", one_off=True),
)
self.assertTrue(
legacy.has_container("composetest", "web", "composetest_web_run_1", one_off=True),
)
def test_has_container_different_project(self):
self.assertFalse(
legacy.has_container("composetest", "web", "otherapp_web_1", one_off=False),
)
self.assertFalse(
legacy.has_container("composetest", "web", "otherapp_web_run_1", one_off=True),
)
def test_has_container_different_service(self):
self.assertFalse(
legacy.has_container("composetest", "web", "composetest_db_1", one_off=False),
)
self.assertFalse(
legacy.has_container("composetest", "web", "composetest_db_run_1", one_off=True),
)
def test_is_valid_name(self):
self.assertTrue(
legacy.is_valid_name("composetest_web_1", one_off=False),
)
self.assertFalse(
legacy.is_valid_name("composetest_web_run_1", one_off=False),
)
def test_is_valid_name_one_off(self):
self.assertFalse(
legacy.is_valid_name("composetest_web_1", one_off=True),
)
self.assertTrue(
legacy.is_valid_name("composetest_web_run_1", one_off=True),
)
def test_is_valid_name_invalid(self):
self.assertFalse(
legacy.is_valid_name("foo"),
)
self.assertFalse(
legacy.is_valid_name("composetest_web_lol_1", one_off=True),
)
def test_get_legacy_containers_no_labels(self):
client = Mock()
client.containers.return_value = [
{
"Id": "abc123",
"Image": "def456",
"Name": "composetest_web_1",
"Labels": None,
},
]
containers = list(legacy.get_legacy_containers(
client, "composetest", ["web"]))
self.assertEqual(len(containers), 1)
class LegacyTestCase(DockerClientTestCase):
def setUp(self):
super(LegacyTestCase, self).setUp()
self.containers = []
db = self.create_service('db')
web = self.create_service('web', links=[(db, 'db')])
nginx = self.create_service('nginx', links=[(web, 'web')])
self.services = [db, web, nginx]
self.project = Project('composetest', self.services, self.client)
# Create a legacy container for each service
for service in self.services:
service.ensure_image_exists()
container = self.client.create_container(
name='{}_{}_1'.format(self.project.name, service.name),
**service.options
)
self.client.start(container)
self.containers.append(container)
# Create a single one-off legacy container
self.containers.append(self.client.create_container(
name='{}_{}_run_1'.format(self.project.name, db.name),
**self.services[0].options
))
def tearDown(self):
super(LegacyTestCase, self).tearDown()
for container in self.containers:
try:
self.client.kill(container)
except APIError:
pass
try:
self.client.remove_container(container)
except APIError:
pass
def get_legacy_containers(self, **kwargs):
return legacy.get_legacy_containers(
self.client,
self.project.name,
[s.name for s in self.services],
**kwargs
)
def test_get_legacy_container_names(self):
self.assertEqual(len(self.get_legacy_containers()), len(self.services))
def test_get_legacy_container_names_one_off(self):
self.assertEqual(len(self.get_legacy_containers(one_off=True)), 1)
def test_migration_to_labels(self):
# Trying to get the container list raises an exception
with self.assertRaises(legacy.LegacyContainersError) as cm:
self.project.containers(stopped=True)
self.assertEqual(
set(cm.exception.names),
set(['composetest_db_1', 'composetest_web_1', 'composetest_nginx_1']),
)
self.assertEqual(
set(cm.exception.one_off_names),
set(['composetest_db_run_1']),
)
# Migrate the containers
legacy.migrate_project_to_labels(self.project)
# Getting the list no longer raises an exception
containers = self.project.containers(stopped=True)
self.assertEqual(len(containers), len(self.services))
def test_migration_one_off(self):
# We've already migrated
legacy.migrate_project_to_labels(self.project)
# Trying to create a one-off container results in a Docker API error
with self.assertRaises(APIError) as cm:
self.project.get_service('db').create_container(one_off=True)
# Checking for legacy one-off containers raises an exception
with self.assertRaises(legacy.LegacyOneOffContainersError) as cm:
legacy.check_for_legacy_containers(
self.client,
self.project.name,
['db'],
allow_one_off=False,
)
self.assertEqual(
set(cm.exception.one_off_names),
set(['composetest_db_run_1']),
)
# Remove the old one-off container
c = self.client.inspect_container('composetest_db_run_1')
self.client.remove_container(c)
# Checking no longer raises an exception
legacy.check_for_legacy_containers(
self.client,
self.project.name,
['db'],
allow_one_off=False,
)
# Creating a one-off container no longer results in an API error
self.project.get_service('db').create_container(one_off=True)
self.assertIsInstance(self.client.inspect_container('composetest_db_run_1'), dict)
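The legacy tests above hinge on the pre-1.3 convention of identifying containers purely by name — `<project>_<service>_<number>`, with a `run_` segment for one-off containers — which 1.3 replaces with labels. A minimal self-contained sketch of that naming scheme (helper names are hypothetical, not Compose's actual API):

```python
import re

def legacy_container_name(project, service, number, one_off=False):
    """Build a pre-1.3 container name, e.g. 'composetest_db_1'.

    One-off containers created by 'run' insert a 'run_' segment
    before the number, e.g. 'composetest_db_run_1'.
    """
    middle = 'run_' if one_off else ''
    return '{0}_{1}_{2}{3}'.format(project, service, middle, number)

def is_legacy_one_off(name, project, service):
    """Match the one-off pattern the migration code has to detect."""
    pattern = r'^%s_%s_run_\d+$' % (re.escape(project), re.escape(service))
    return re.match(pattern, name) is not None
```

This is why `composetest_db_run_1` shows up in `exception.one_off_names` above: only the name encodes the fact that the container was a one-off.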



@@ -1,11 +1,51 @@
from __future__ import unicode_literals
from compose import config
from compose.const import LABEL_PROJECT
from compose.project import Project
from compose.container import Container
from .testcases import DockerClientTestCase
class ProjectTest(DockerClientTestCase):
def test_containers(self):
web = self.create_service('web')
db = self.create_service('db')
project = Project('composetest', [web, db], self.client)
project.up()
containers = project.containers()
self.assertEqual(len(containers), 2)
def test_containers_with_service_names(self):
web = self.create_service('web')
db = self.create_service('db')
project = Project('composetest', [web, db], self.client)
project.up()
containers = project.containers(['web'])
self.assertEqual(
[c.name for c in containers],
['composetest_web_1'])
def test_containers_with_extra_service(self):
web = self.create_service('web')
web_1 = web.create_container()
db = self.create_service('db')
db_1 = db.create_container()
self.create_service('extra').create_container()
project = Project('composetest', [web, db], self.client)
self.assertEqual(
set(project.containers(stopped=True)),
set([web_1, db_1]),
)
def test_volumes_from_service(self):
service_dicts = config.from_dictionary({
'data': {
@@ -32,6 +72,7 @@ class ProjectTest(DockerClientTestCase):
image='busybox:latest',
volumes=['/var/data'],
name='composetest_data_container',
labels={LABEL_PROJECT: 'composetest'},
)
project = Project.from_dicts(
name='composetest',
@@ -46,21 +87,18 @@ class ProjectTest(DockerClientTestCase):
db = project.get_service('db')
self.assertEqual(db.volumes_from, [data_container])
project.kill()
project.remove_stopped()
def test_net_from_service(self):
project = Project.from_dicts(
name='composetest',
service_dicts=config.from_dictionary({
'net': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"]
'command': ["top"]
},
'web': {
'image': 'busybox:latest',
'net': 'container:net',
'command': ["/bin/sleep", "300"]
'command': ["top"]
},
}),
client=self.client,
@@ -70,17 +108,15 @@ class ProjectTest(DockerClientTestCase):
web = project.get_service('web')
net = project.get_service('net')
self.assertEqual(web._get_net(), 'container:'+net.containers()[0].id)
project.kill()
project.remove_stopped()
self.assertEqual(web._get_net(), 'container:' + net.containers()[0].id)
def test_net_from_container(self):
net_container = Container.create(
self.client,
image='busybox:latest',
name='composetest_net_container',
command='/bin/sleep 300'
command='top',
labels={LABEL_PROJECT: 'composetest'},
)
net_container.start()
@@ -98,10 +134,7 @@ class ProjectTest(DockerClientTestCase):
project.up()
web = project.get_service('web')
self.assertEqual(web._get_net(), 'container:'+net_container.id)
project.kill()
project.remove_stopped()
self.assertEqual(web._get_net(), 'container:' + net_container.id)
def test_start_stop_kill_remove(self):
web = self.create_service('web')
@@ -148,8 +181,17 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(web.containers()), 0)
project.kill()
project.remove_stopped()
def test_project_up_starts_uncreated_services(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, 'db')])
project = Project('composetest', [db, web], self.client)
project.up(['db'])
self.assertEqual(len(project.containers()), 1)
project.up()
self.assertEqual(len(project.containers()), 2)
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(web.containers()), 1)
def test_project_up_recreates_containers(self):
web = self.create_service('web')
@@ -170,9 +212,6 @@ class ProjectTest(DockerClientTestCase):
self.assertNotEqual(db_container.id, old_db_id)
self.assertEqual(db_container.get('Volumes./etc'), db_volume_path)
project.kill()
project.remove_stopped()
def test_project_up_with_no_recreate_running(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/var/db'])
@@ -185,7 +224,7 @@ class ProjectTest(DockerClientTestCase):
old_db_id = project.containers()[0].id
db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db']
project.up(recreate=False)
project.up(allow_recreate=False)
self.assertEqual(len(project.containers()), 2)
db_container = [c for c in project.containers() if 'db' in c.name][0]
@@ -193,9 +232,6 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(db_container.inspect()['Volumes']['/var/db'],
db_volume_path)
project.kill()
project.remove_stopped()
def test_project_up_with_no_recreate_stopped(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/var/db'])
@@ -204,7 +240,7 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.containers()), 0)
project.up(['db'])
project.stop()
project.kill()
old_containers = project.containers(stopped=True)
@@ -212,19 +248,17 @@ class ProjectTest(DockerClientTestCase):
old_db_id = old_containers[0].id
db_volume_path = old_containers[0].inspect()['Volumes']['/var/db']
project.up(recreate=False)
project.up(allow_recreate=False)
new_containers = project.containers(stopped=True)
self.assertEqual(len(new_containers), 2)
self.assertEqual([c.is_running for c in new_containers], [True, True])
db_container = [c for c in new_containers if 'db' in c.name][0]
self.assertEqual(db_container.id, old_db_id)
self.assertEqual(db_container.inspect()['Volumes']['/var/db'],
db_volume_path)
project.kill()
project.remove_stopped()
def test_project_up_without_all_services(self):
console = self.create_service('console')
db = self.create_service('db')
@@ -237,9 +271,6 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(console.containers()), 1)
project.kill()
project.remove_stopped()
def test_project_up_starts_links(self):
console = self.create_service('console')
db = self.create_service('db', volumes=['/var/db'])
@@ -255,29 +286,26 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(console.containers()), 0)
project.kill()
project.remove_stopped()
def test_project_up_starts_depends(self):
project = Project.from_dicts(
name='composetest',
service_dicts=config.from_dictionary({
'console': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"],
'command': ["top"],
},
'data' : {
'data': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"]
'command': ["top"]
},
'db': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"],
'command': ["top"],
'volumes_from': ['data'],
},
'web': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"],
'command': ["top"],
'links': ['db'],
},
}),
@@ -293,29 +321,26 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.get_service('data').containers()), 1)
self.assertEqual(len(project.get_service('console').containers()), 0)
project.kill()
project.remove_stopped()
def test_project_up_with_no_deps(self):
project = Project.from_dicts(
name='composetest',
service_dicts=config.from_dictionary({
'console': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"],
'command': ["top"],
},
'data' : {
'data': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"]
'command': ["top"]
},
'db': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"],
'command': ["top"],
'volumes_from': ['data'],
},
'web': {
'image': 'busybox:latest',
'command': ["/bin/sleep", "300"],
'command': ["top"],
'links': ['db'],
},
}),
@@ -332,9 +357,6 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1)
self.assertEqual(len(project.get_service('console').containers()), 0)
project.kill()
project.remove_stopped()
def test_unscale_after_restart(self):
web = self.create_service('web')
project = Project('composetest', [web], self.client)
@@ -359,5 +381,3 @@ class ProjectTest(DockerClientTestCase):
project.up()
service = project.get_service('web')
self.assertEqual(len(service.containers()), 1)
project.kill()
project.remove_stopped()


@@ -0,0 +1,48 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import mock
from compose.project import Project
from .testcases import DockerClientTestCase
class ResilienceTest(DockerClientTestCase):
def setUp(self):
self.db = self.create_service('db', volumes=['/var/db'], command='top')
self.project = Project('composetest', [self.db], self.client)
container = self.db.create_container()
self.db.start_container(container)
self.host_path = container.get('Volumes')['/var/db']
def test_successful_recreate(self):
self.project.up()
container = self.db.containers()[0]
self.assertEqual(container.get('Volumes')['/var/db'], self.host_path)
def test_create_failure(self):
with mock.patch('compose.service.Service.create_container', crash):
with self.assertRaises(Crash):
self.project.up()
self.project.up()
container = self.db.containers()[0]
self.assertEqual(container.get('Volumes')['/var/db'], self.host_path)
def test_start_failure(self):
with mock.patch('compose.service.Service.start_container', crash):
with self.assertRaises(Crash):
self.project.up()
self.project.up()
container = self.db.containers()[0]
self.assertEqual(container.get('Volumes')['/var/db'], self.host_path)
class Crash(Exception):
pass
def crash(*args, **kwargs):
raise Crash()


@@ -2,12 +2,28 @@ from __future__ import unicode_literals
from __future__ import absolute_import
import os
from os import path
import mock
from compose import Service
from compose.service import CannotBeScaledError
from compose.container import Container
from docker.errors import APIError
import mock
import tempfile
import shutil
import six
from compose import __version__
from compose.const import (
LABEL_CONTAINER_NUMBER,
LABEL_ONE_OFF,
LABEL_PROJECT,
LABEL_SERVICE,
LABEL_VERSION,
)
from compose.service import (
ConfigError,
ConvergencePlan,
Service,
build_extra_hosts,
)
from compose.container import Container
from .testcases import DockerClientTestCase
@@ -99,7 +115,7 @@ class ServiceTest(DockerClientTestCase):
service = self.create_service('db', volumes=['/var/db'])
container = service.create_container()
service.start_container(container)
self.assertIn('/var/db', container.inspect()['Volumes'])
self.assertIn('/var/db', container.get('Volumes'))
def test_create_container_with_cpu_shares(self):
service = self.create_service('db', cpu_shares=73)
@@ -107,6 +123,82 @@ class ServiceTest(DockerClientTestCase):
service.start_container(container)
self.assertEqual(container.inspect()['Config']['CpuShares'], 73)
def test_build_extra_hosts(self):
# string
self.assertRaises(ConfigError, lambda: build_extra_hosts("www.example.com: 192.168.0.17"))
# list of strings
self.assertEqual(build_extra_hosts(
["www.example.com:192.168.0.17"]),
{'www.example.com': '192.168.0.17'})
self.assertEqual(build_extra_hosts(
["www.example.com: 192.168.0.17"]),
{'www.example.com': '192.168.0.17'})
self.assertEqual(build_extra_hosts(
["www.example.com: 192.168.0.17",
"static.example.com:192.168.0.19",
"api.example.com: 192.168.0.18"]),
{'www.example.com': '192.168.0.17',
'static.example.com': '192.168.0.19',
'api.example.com': '192.168.0.18'})
# list of dictionaries
self.assertRaises(ConfigError, lambda: build_extra_hosts(
[{'www.example.com': '192.168.0.17'},
{'api.example.com': '192.168.0.18'}]))
# dictionaries
self.assertEqual(build_extra_hosts(
{'www.example.com': '192.168.0.17',
'api.example.com': '192.168.0.18'}),
{'www.example.com': '192.168.0.17',
'api.example.com': '192.168.0.18'})
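The accepted `extra_hosts` shapes asserted above — a list of `"host:ip"` strings or a dict is normalized to a dict, while a bare string or a list of dicts is rejected — can be summarized by a small normalizer. This is only a sketch of the behaviour the tests describe, not Compose's actual `build_extra_hosts` implementation:

```python
class ConfigError(ValueError):
    """Stand-in for compose.service.ConfigError."""

def build_extra_hosts(extra_hosts):
    # A dict passes through unchanged.
    if isinstance(extra_hosts, dict):
        return dict(extra_hosts)
    # A list must contain 'host:ip' strings; whitespace around
    # the ip (e.g. 'www.example.com: 192.168.0.17') is tolerated.
    if isinstance(extra_hosts, list):
        result = {}
        for item in extra_hosts:
            if not isinstance(item, str):
                raise ConfigError('extra_hosts entries must be strings')
            host, _, ip = item.partition(':')
            result[host.strip()] = ip.strip()
        return result
    # Anything else (notably a bare string) is a config error.
    raise ConfigError('extra_hosts must be a list or a dict')
```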
def test_create_container_with_extra_hosts_list(self):
extra_hosts = ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
service = self.create_service('db', extra_hosts=extra_hosts)
container = service.create_container()
service.start_container(container)
self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts))
def test_create_container_with_extra_hosts_string(self):
extra_hosts = 'somehost:162.242.195.82'
service = self.create_service('db', extra_hosts=extra_hosts)
self.assertRaises(ConfigError, lambda: service.create_container())
def test_create_container_with_extra_hosts_list_of_dicts(self):
extra_hosts = [{'somehost': '162.242.195.82'}, {'otherhost': '50.31.209.229'}]
service = self.create_service('db', extra_hosts=extra_hosts)
self.assertRaises(ConfigError, lambda: service.create_container())
def test_create_container_with_extra_hosts_dicts(self):
extra_hosts = {'somehost': '162.242.195.82', 'otherhost': '50.31.209.229'}
extra_hosts_list = ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
service = self.create_service('db', extra_hosts=extra_hosts)
container = service.create_container()
service.start_container(container)
self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts_list))
def test_create_container_with_cpu_set(self):
service = self.create_service('db', cpuset='0')
container = service.create_container()
service.start_container(container)
self.assertEqual(container.inspect()['Config']['Cpuset'], '0')
def test_create_container_with_read_only_root_fs(self):
read_only = True
service = self.create_service('db', read_only=read_only)
container = service.create_container()
service.start_container(container)
self.assertEqual(container.get('HostConfig.ReadonlyRootfs'), read_only, container.get('HostConfig'))
def test_create_container_with_security_opt(self):
security_opt = ['label:disable']
service = self.create_service('db', security_opt=security_opt)
container = service.create_container()
service.start_container(container)
self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt))
def test_create_container_with_specified_volume(self):
host_path = '/tmp/host-path'
container_path = '/container-path'
@@ -121,7 +213,7 @@ class ServiceTest(DockerClientTestCase):
# Match the last component ("host-path"), because boot2docker symlinks /tmp
actual_host_path = volumes[container_path]
self.assertTrue(path.basename(actual_host_path) == path.basename(host_path),
msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))
msg=("Last component differs: %s, %s" % (actual_host_path, host_path)))
@mock.patch.dict(os.environ)
def test_create_container_with_home_and_env_var_in_volume_path(self):
@@ -144,7 +236,12 @@ class ServiceTest(DockerClientTestCase):
def test_create_container_with_volumes_from(self):
volume_service = self.create_service('data')
volume_container_1 = volume_service.create_container()
volume_container_2 = Container.create(self.client, image='busybox:latest', command=["/bin/sleep", "300"])
volume_container_2 = Container.create(
self.client,
image='busybox:latest',
command=["top"],
labels={LABEL_PROJECT: 'composetest'},
)
host_service = self.create_service('host', volumes_from=[volume_service, volume_container_2])
host_container = host_service.create_container()
host_service.start_container(host_container)
@@ -153,60 +250,68 @@ class ServiceTest(DockerClientTestCase):
self.assertIn(volume_container_2.id,
host_container.get('HostConfig.VolumesFrom'))
def test_recreate_containers(self):
def test_execute_convergence_plan_recreate(self):
service = self.create_service(
'db',
environment={'FOO': '1'},
volumes=['/etc'],
entrypoint=['sleep'],
command=['300']
entrypoint=['top'],
command=['-d', '1']
)
old_container = service.create_container()
self.assertEqual(old_container.dictionary['Config']['Entrypoint'], ['sleep'])
self.assertEqual(old_container.dictionary['Config']['Cmd'], ['300'])
self.assertIn('FOO=1', old_container.dictionary['Config']['Env'])
self.assertEqual(old_container.get('Config.Entrypoint'), ['top'])
self.assertEqual(old_container.get('Config.Cmd'), ['-d', '1'])
self.assertIn('FOO=1', old_container.get('Config.Env'))
self.assertEqual(old_container.name, 'composetest_db_1')
service.start_container(old_container)
volume_path = old_container.inspect()['Volumes']['/etc']
old_container.inspect() # reload volume data
volume_path = old_container.get('Volumes')['/etc']
num_containers_before = len(self.client.containers(all=True))
service.options['environment']['FOO'] = '2'
tuples = service.recreate_containers()
self.assertEqual(len(tuples), 1)
new_container, = service.execute_convergence_plan(
ConvergencePlan('recreate', [old_container]))
intermediate_container = tuples[0][0]
new_container = tuples[0][1]
self.assertEqual(intermediate_container.dictionary['Config']['Entrypoint'], ['/bin/echo'])
self.assertEqual(new_container.dictionary['Config']['Entrypoint'], ['sleep'])
self.assertEqual(new_container.dictionary['Config']['Cmd'], ['300'])
self.assertIn('FOO=2', new_container.dictionary['Config']['Env'])
self.assertEqual(new_container.get('Config.Entrypoint'), ['top'])
self.assertEqual(new_container.get('Config.Cmd'), ['-d', '1'])
self.assertIn('FOO=2', new_container.get('Config.Env'))
self.assertEqual(new_container.name, 'composetest_db_1')
self.assertEqual(new_container.inspect()['Volumes']['/etc'], volume_path)
self.assertIn(intermediate_container.id, new_container.dictionary['HostConfig']['VolumesFrom'])
self.assertEqual(new_container.get('Volumes')['/etc'], volume_path)
self.assertIn(
'affinity:container==%s' % old_container.id,
new_container.get('Config.Env'))
self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
self.assertNotEqual(old_container.id, new_container.id)
self.assertRaises(APIError,
self.client.inspect_container,
intermediate_container.id)
old_container.id)
def test_recreate_containers_when_containers_are_stopped(self):
def test_execute_convergence_plan_when_containers_are_stopped(self):
service = self.create_service(
'db',
environment={'FOO': '1'},
volumes=['/var/db'],
entrypoint=['sleep'],
command=['300']
entrypoint=['top'],
command=['-d', '1']
)
old_container = service.create_container()
self.assertEqual(len(service.containers(stopped=True)), 1)
service.recreate_containers()
self.assertEqual(len(service.containers(stopped=True)), 1)
service.create_container()
containers = service.containers(stopped=True)
self.assertEqual(len(containers), 1)
container, = containers
self.assertFalse(container.is_running)
def test_recreate_containers_with_image_declared_volume(self):
service.execute_convergence_plan(ConvergencePlan('start', [container]))
containers = service.containers()
self.assertEqual(len(containers), 1)
container.inspect()
self.assertEqual(container, containers[0])
self.assertTrue(container.is_running)
def test_execute_convergence_plan_with_image_declared_volume(self):
service = Service(
project='composetest',
name='db',
@@ -218,9 +323,8 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(old_container.get('Volumes').keys(), ['/data'])
volume_path = old_container.get('Volumes')['/data']
service.recreate_containers()
new_container = service.containers()[0]
service.start_container(new_container)
new_container, = service.execute_convergence_plan(
ConvergencePlan('recreate', [old_container]))
self.assertEqual(new_container.get('Volumes').keys(), ['/data'])
self.assertEqual(new_container.get('Volumes')['/data'], volume_path)
@@ -247,8 +351,7 @@ class ServiceTest(DockerClientTestCase):
set([
'composetest_db_1', 'db_1',
'composetest_db_2', 'db_2',
'db',
]),
'db'])
)
def test_start_container_creates_links_with_names(self):
@@ -264,8 +367,7 @@ class ServiceTest(DockerClientTestCase):
set([
'composetest_db_1', 'db_1',
'composetest_db_2', 'db_2',
'custom_link_name',
]),
'custom_link_name'])
)
def test_start_container_with_external_links(self):
@@ -283,8 +385,7 @@ class ServiceTest(DockerClientTestCase):
set([
'composetest_db_1',
'composetest_db_2',
'db_3',
]),
'db_3']),
)
def test_start_normal_container_does_not_create_links_to_its_own_service(self):
@@ -309,8 +410,7 @@ class ServiceTest(DockerClientTestCase):
set([
'composetest_db_1', 'db_1',
'composetest_db_2', 'db_2',
'db',
]),
'db'])
)
def test_start_container_builds_images(self):
@@ -326,7 +426,7 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(len(self.client.images(name='composetest_test')), 1)
def test_start_container_uses_tagged_image_if_it_exists(self):
self.client.build('tests/fixtures/simple-dockerfile', tag='composetest_test')
self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test')
service = Service(
name='test',
client=self.client,
@@ -343,13 +443,36 @@ class ServiceTest(DockerClientTestCase):
self.assertEqual(list(container['NetworkSettings']['Ports'].keys()), ['8000/tcp'])
self.assertNotEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8000')
def test_build(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
self.create_service('web', build=base_dir).build()
self.assertEqual(len(self.client.images(name='composetest_web')), 1)
def test_build_non_ascii_filename(self):
base_dir = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, base_dir)
with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
f.write("FROM busybox\n")
with open(os.path.join(base_dir, b'foo\xE2bar'), 'w') as f:
f.write("hello world\n")
self.create_service('web', build=six.text_type(base_dir)).build()
self.assertEqual(len(self.client.images(name='composetest_web')), 1)
def test_start_container_stays_unpriviliged(self):
service = self.create_service('web')
container = create_and_start_container(service).inspect()
self.assertEqual(container['HostConfig']['Privileged'], False)
def test_start_container_becomes_priviliged(self):
service = self.create_service('web', privileged = True)
service = self.create_service('web', privileged=True)
container = create_and_start_container(service).inspect()
self.assertEqual(container['HostConfig']['Privileged'], True)
@@ -396,6 +519,11 @@ class ServiceTest(DockerClientTestCase):
],
})
def test_create_with_image_id(self):
# Image id for the current busybox:latest
service = self.create_service('foo', image='8c2e06607696')
service.create_container()
def test_scale(self):
service = self.create_service('web')
service.scale(1)
@@ -415,10 +543,6 @@ class ServiceTest(DockerClientTestCase):
service.scale(0)
self.assertEqual(len(service.containers()), 0)
def test_scale_on_service_that_cannot_be_scaled(self):
service = self.create_service('web', ports=['8000:8000'])
self.assertRaises(CannotBeScaledError, lambda: service.scale(1))
def test_scale_sets_ports(self):
service = self.create_service('web', ports=['8000'])
service.scale(2)
@@ -442,6 +566,16 @@ class ServiceTest(DockerClientTestCase):
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.NetworkMode'), 'host')
def test_pid_mode_none_defined(self):
service = self.create_service('web', pid=None)
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), '')
def test_pid_mode_host(self):
service = self.create_service('web', pid='host')
container = create_and_start_container(service)
self.assertEqual(container.get('HostConfig.PidMode'), 'host')
def test_dns_no_value(self):
service = self.create_service('web')
container = create_and_start_container(service)
@@ -501,13 +635,13 @@ class ServiceTest(DockerClientTestCase):
def test_split_env(self):
service = self.create_service('web', environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='])
env = create_and_start_container(service).environment
for k,v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items():
for k, v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items():
self.assertEqual(env[k], v)
def test_env_from_file_combined_with_env(self):
service = self.create_service('web', environment=['ONE=1', 'TWO=2', 'THREE=3'], env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env'])
env = create_and_start_container(service).environment
for k,v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.items():
for k, v in {'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}.items():
self.assertEqual(env[k], v)
@mock.patch.dict(os.environ)
@@ -517,5 +651,90 @@ class ServiceTest(DockerClientTestCase):
os.environ['ENV_DEF'] = 'E3'
service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
env = create_and_start_container(service).environment
for k,v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.items():
for k, v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.items():
self.assertEqual(env[k], v)
def test_labels(self):
labels_dict = {
'com.example.description': "Accounting webapp",
'com.example.department': "Finance",
'com.example.label-with-empty-value': "",
}
compose_labels = {
LABEL_CONTAINER_NUMBER: '1',
LABEL_ONE_OFF: 'False',
LABEL_PROJECT: 'composetest',
LABEL_SERVICE: 'web',
LABEL_VERSION: __version__,
}
expected = dict(labels_dict, **compose_labels)
service = self.create_service('web', labels=labels_dict)
labels = create_and_start_container(service).labels.items()
for pair in expected.items():
self.assertIn(pair, labels)
service.kill()
service.remove_stopped()
labels_list = ["%s=%s" % pair for pair in labels_dict.items()]
service = self.create_service('web', labels=labels_list)
labels = create_and_start_container(service).labels.items()
for pair in expected.items():
self.assertIn(pair, labels)
def test_empty_labels(self):
labels_list = ['foo', 'bar']
service = self.create_service('web', labels=labels_list)
labels = create_and_start_container(service).labels.items()
for name in labels_list:
self.assertIn((name, ''), labels)
def test_log_drive_invalid(self):
service = self.create_service('web', log_driver='xxx')
self.assertRaises(ValueError, lambda: create_and_start_container(service))
def test_log_drive_empty_default_jsonfile(self):
service = self.create_service('web')
log_config = create_and_start_container(service).log_config
self.assertEqual('json-file', log_config['Type'])
self.assertFalse(log_config['Config'])
def test_log_drive_none(self):
service = self.create_service('web', log_driver='none')
log_config = create_and_start_container(service).log_config
self.assertEqual('none', log_config['Type'])
self.assertFalse(log_config['Config'])
def test_devices(self):
service = self.create_service('web', devices=["/dev/random:/dev/mapped-random"])
device_config = create_and_start_container(service).get('HostConfig.Devices')
device_dict = {
'PathOnHost': '/dev/random',
'CgroupPermissions': 'rwm',
'PathInContainer': '/dev/mapped-random'
}
self.assertEqual(1, len(device_config))
self.assertDictEqual(device_dict, device_config[0])
def test_duplicate_containers(self):
service = self.create_service('web')
options = service._get_container_create_options({}, 1)
original = Container.create(service.client, **options)
self.assertEqual(set(service.containers(stopped=True)), set([original]))
self.assertEqual(set(service.duplicate_containers()), set())
options['name'] = 'temporary_container_name'
duplicate = Container.create(service.client, **options)
self.assertEqual(set(service.containers(stopped=True)), set([original, duplicate]))
self.assertEqual(set(service.duplicate_containers()), set([duplicate]))


@@ -0,0 +1,295 @@
from __future__ import unicode_literals
import tempfile
import shutil
import os
from compose import config
from compose.project import Project
from compose.const import LABEL_CONFIG_HASH
from .testcases import DockerClientTestCase
class ProjectTestCase(DockerClientTestCase):
def run_up(self, cfg, **kwargs):
kwargs.setdefault('smart_recreate', True)
kwargs.setdefault('timeout', 1)
project = self.make_project(cfg)
project.up(**kwargs)
return set(project.containers(stopped=True))
def make_project(self, cfg):
return Project.from_dicts(
name='composetest',
client=self.client,
service_dicts=config.from_dictionary(cfg),
)
class BasicProjectTest(ProjectTestCase):
def setUp(self):
super(BasicProjectTest, self).setUp()
self.cfg = {
'db': {'image': 'busybox:latest'},
'web': {'image': 'busybox:latest'},
}
def test_no_change(self):
old_containers = self.run_up(self.cfg)
self.assertEqual(len(old_containers), 2)
new_containers = self.run_up(self.cfg)
self.assertEqual(len(new_containers), 2)
self.assertEqual(old_containers, new_containers)
def test_partial_change(self):
old_containers = self.run_up(self.cfg)
old_db = [c for c in old_containers if c.name_without_project == 'db_1'][0]
old_web = [c for c in old_containers if c.name_without_project == 'web_1'][0]
self.cfg['web']['command'] = '/bin/true'
new_containers = self.run_up(self.cfg)
self.assertEqual(len(new_containers), 2)
preserved = list(old_containers & new_containers)
self.assertEqual(preserved, [old_db])
removed = list(old_containers - new_containers)
self.assertEqual(removed, [old_web])
created = list(new_containers - old_containers)
self.assertEqual(len(created), 1)
self.assertEqual(created[0].name_without_project, 'web_1')
self.assertEqual(created[0].get('Config.Cmd'), ['/bin/true'])
def test_all_change(self):
old_containers = self.run_up(self.cfg)
self.assertEqual(len(old_containers), 2)
self.cfg['web']['command'] = '/bin/true'
self.cfg['db']['command'] = '/bin/true'
new_containers = self.run_up(self.cfg)
self.assertEqual(len(new_containers), 2)
unchanged = old_containers & new_containers
self.assertEqual(len(unchanged), 0)
new = new_containers - old_containers
self.assertEqual(len(new), 2)
class ProjectWithDependenciesTest(ProjectTestCase):
def setUp(self):
super(ProjectWithDependenciesTest, self).setUp()
self.cfg = {
'db': {
'image': 'busybox:latest',
'command': 'tail -f /dev/null',
},
'web': {
'image': 'busybox:latest',
'command': 'tail -f /dev/null',
'links': ['db'],
},
'nginx': {
'image': 'busybox:latest',
'command': 'tail -f /dev/null',
'links': ['web'],
},
}
def test_up(self):
containers = self.run_up(self.cfg)
self.assertEqual(
set(c.name_without_project for c in containers),
set(['db_1', 'web_1', 'nginx_1']),
)
def test_change_leaf(self):
old_containers = self.run_up(self.cfg)
self.cfg['nginx']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg)
self.assertEqual(
set(c.name_without_project for c in new_containers - old_containers),
set(['nginx_1']),
)
def test_change_middle(self):
old_containers = self.run_up(self.cfg)
self.cfg['web']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg)
self.assertEqual(
set(c.name_without_project for c in new_containers - old_containers),
set(['web_1', 'nginx_1']),
)
def test_change_root(self):
old_containers = self.run_up(self.cfg)
self.cfg['db']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg)
self.assertEqual(
set(c.name_without_project for c in new_containers - old_containers),
set(['db_1', 'web_1', 'nginx_1']),
)
def test_change_root_no_recreate(self):
old_containers = self.run_up(self.cfg)
self.cfg['db']['environment'] = {'NEW_VAR': '1'}
new_containers = self.run_up(self.cfg, allow_recreate=False)
self.assertEqual(new_containers - old_containers, set())
def converge(service,
allow_recreate=True,
smart_recreate=False,
insecure_registry=False,
do_build=True):
"""
If a container for this service doesn't exist, create and start one. If there are
any, stop them, create+start new ones, and remove the old containers.
"""
plan = service.convergence_plan(
allow_recreate=allow_recreate,
smart_recreate=smart_recreate,
)
return service.execute_convergence_plan(
plan,
insecure_registry=insecure_registry,
do_build=do_build,
timeout=1,
)
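The `converge` helper above drives `Service.convergence_plan`, which, judging by the `ServiceStateTest` cases in this file, returns an `(action, containers)` tuple. A minimal sketch of that decision logic, with illustrative names (not the actual Compose implementation), might look like:

```python
from collections import namedtuple

# Illustrative stand-in for a container; only is_running matters here.
FakeContainer = namedtuple('FakeContainer', 'name is_running')

def sketch_convergence_plan(containers, config_changed=False, smart_recreate=False):
    """Hypothetical decision logic, inferred from the ServiceStateTest cases.

    Returns an (action, containers) tuple like Service.convergence_plan.
    """
    if not containers:
        return ('create', [])
    if smart_recreate and config_changed:
        return ('recreate', containers)
    stopped = [c for c in containers if not c.is_running]
    if stopped:
        return ('start', stopped)
    return ('noop', containers)
```

The four branches line up with `test_trigger_create`, `test_trigger_recreate_*`, `test_trigger_start`, and `test_trigger_noop` respectively.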
class ServiceStateTest(DockerClientTestCase):
"""Test cases for Service.convergence_plan."""
def test_trigger_create(self):
web = self.create_service('web')
self.assertEqual(('create', []), web.convergence_plan(smart_recreate=True))
def test_trigger_noop(self):
web = self.create_service('web')
container = web.create_container()
web.start()
web = self.create_service('web')
self.assertEqual(('noop', [container]), web.convergence_plan(smart_recreate=True))
def test_trigger_start(self):
options = dict(command=["top"])
web = self.create_service('web', **options)
web.scale(2)
containers = web.containers(stopped=True)
containers[0].stop()
containers[0].inspect()
self.assertEqual([c.is_running for c in containers], [False, True])
web = self.create_service('web', **options)
self.assertEqual(
('start', containers[0:1]),
web.convergence_plan(smart_recreate=True),
)
def test_trigger_recreate_with_config_change(self):
web = self.create_service('web', command=["top"])
container = web.create_container()
web = self.create_service('web', command=["top", "-d", "1"])
self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))
def test_trigger_recreate_with_nonexistent_image_tag(self):
web = self.create_service('web', image="busybox:latest")
container = web.create_container()
web = self.create_service('web', image="nonexistent-image")
self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))
def test_trigger_recreate_with_image_change(self):
repo = 'composetest_myimage'
tag = 'latest'
image = '{}:{}'.format(repo, tag)
image_id = self.client.images(name='busybox')[0]['Id']
self.client.tag(image_id, repository=repo, tag=tag)
try:
web = self.create_service('web', image=image)
container = web.create_container()
# update the image
c = self.client.create_container(image, ['touch', '/hello.txt'])
self.client.commit(c, repository=repo, tag=tag)
self.client.remove_container(c)
web = self.create_service('web', image=image)
self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))
finally:
self.client.remove_image(image)
def test_trigger_recreate_with_build(self):
context = tempfile.mkdtemp()
base_image = "FROM busybox\nLABEL com.docker.compose.test_image=true\n"
try:
dockerfile = os.path.join(context, 'Dockerfile')
with open(dockerfile, 'w') as f:
f.write(base_image)
web = self.create_service('web', build=context)
container = web.create_container()
with open(dockerfile, 'w') as f:
f.write(base_image + 'CMD echo hello world\n')
web.build()
web = self.create_service('web', build=context)
self.assertEqual(('recreate', [container]), web.convergence_plan(smart_recreate=True))
finally:
shutil.rmtree(context)
class ConfigHashTest(DockerClientTestCase):
def test_no_config_hash_when_one_off(self):
web = self.create_service('web')
container = web.create_container(one_off=True)
self.assertNotIn(LABEL_CONFIG_HASH, container.labels)
def test_no_config_hash_when_overriding_options(self):
web = self.create_service('web')
container = web.create_container(environment={'FOO': '1'})
self.assertNotIn(LABEL_CONFIG_HASH, container.labels)
def test_config_hash_with_custom_labels(self):
web = self.create_service('web', labels={'foo': '1'})
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
self.assertIn('foo', container.labels)
def test_config_hash_sticks_around(self):
web = self.create_service('web', command=["top"])
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
web = self.create_service('web', command=["top", "-d", "1"])
container = converge(web)[0]
self.assertIn(LABEL_CONFIG_HASH, container.labels)
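The `ConfigHashTest` cases check that a hash of the service's configuration is stored in a container label (`LABEL_CONFIG_HASH`) so later runs can detect config changes. One plausible way to compute such a stable hash — the real implementation may differ — is to serialize the config with sorted keys and digest it:

```python
import hashlib
import json

def config_hash(config_dict):
    # Sort keys so dict ordering never changes the digest; the hash
    # only changes when the configuration itself changes.
    dump = json.dumps(config_dict, sort_keys=True)
    return hashlib.sha256(dump.encode('utf8')).hexdigest()
```

With this scheme, two configs that differ only in key order hash identically, while any option change (as in `test_config_hash_sticks_around`) yields a new label value.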


@@ -2,6 +2,7 @@ from __future__ import unicode_literals
from __future__ import absolute_import
from compose.service import Service
from compose.config import make_service_dict
from compose.const import LABEL_PROJECT
from compose.cli.docker_client import docker_client
from compose.progress_stream import stream_output
from .. import unittest
@@ -12,20 +13,22 @@ class DockerClientTestCase(unittest.TestCase):
def setUpClass(cls):
cls.client = docker_client()
def setUp(self):
for c in self.client.containers(all=True):
if c['Names'] and 'composetest' in c['Names'][0]:
self.client.kill(c['Id'])
self.client.remove_container(c['Id'])
for i in self.client.images():
if isinstance(i.get('Tag'), basestring) and 'composetest' in i['Tag']:
self.client.remove_image(i)
def tearDown(self):
for c in self.client.containers(
all=True,
filters={'label': '%s=composetest' % LABEL_PROJECT}):
self.client.kill(c['Id'])
self.client.remove_container(c['Id'])
for i in self.client.images(
filters={'label': 'com.docker.compose.test_image'}):
self.client.remove_image(i)
def create_service(self, name, **kwargs):
kwargs['image'] = "busybox:latest"
if 'image' not in kwargs and 'build' not in kwargs:
kwargs['image'] = 'busybox:latest'
if 'command' not in kwargs:
kwargs['command'] = ["/bin/sleep", "300"]
kwargs['command'] = ["top"]
return Service(
project='composetest',
@@ -34,5 +37,6 @@ class DockerClientTestCase(unittest.TestCase):
)
def check_build(self, *args, **kwargs):
kwargs.setdefault('rm', True)
build_output = self.client.build(*args, **kwargs)
stream_output(build_output, open('/dev/null', 'w'))


@@ -5,7 +5,7 @@ import os
import mock
from tests import unittest
from compose.cli import docker_client
class DockerClientTestCase(unittest.TestCase):


@@ -8,10 +8,10 @@ from .. import unittest
import docker
import mock
from six import StringIO
from compose.cli import main
from compose.cli.main import TopLevelCommand
from compose.cli.docopt_command import NoSuchCommand
from compose.cli.errors import ComposeFileNotFound
from compose.service import Service
@@ -63,30 +63,32 @@ class CLITestCase(unittest.TestCase):
self.assertEquals(project_name, name)
def test_filename_check(self):
self.assertEqual('docker-compose.yml', get_config_filename_for_files([
files = [
'docker-compose.yml',
'docker-compose.yaml',
'fig.yml',
'fig.yaml',
]))
]
self.assertEqual('docker-compose.yaml', get_config_filename_for_files([
'docker-compose.yaml',
'fig.yml',
'fig.yaml',
]))
self.assertEqual('fig.yml', get_config_filename_for_files([
'fig.yml',
'fig.yaml',
]))
self.assertEqual('fig.yaml', get_config_filename_for_files([
'fig.yaml',
]))
"""Test with files placed in the basedir"""
self.assertEqual('docker-compose.yml', get_config_filename_for_files(files[0:]))
self.assertEqual('docker-compose.yaml', get_config_filename_for_files(files[1:]))
self.assertEqual('fig.yml', get_config_filename_for_files(files[2:]))
self.assertEqual('fig.yaml', get_config_filename_for_files(files[3:]))
self.assertRaises(ComposeFileNotFound, lambda: get_config_filename_for_files([]))
"""Test with files placed in the subdir"""
def get_config_filename_for_files_in_subdir(files):
return get_config_filename_for_files(files, subdir=True)
self.assertEqual('docker-compose.yml', get_config_filename_for_files_in_subdir(files[0:]))
self.assertEqual('docker-compose.yaml', get_config_filename_for_files_in_subdir(files[1:]))
self.assertEqual('fig.yml', get_config_filename_for_files_in_subdir(files[2:]))
self.assertEqual('fig.yaml', get_config_filename_for_files_in_subdir(files[3:]))
self.assertRaises(ComposeFileNotFound, lambda: get_config_filename_for_files_in_subdir([]))
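The assertions above encode a fixed precedence among the supported config filenames. Stripped of the temp-directory plumbing, the lookup that the test helper exercises can be sketched as follows (helper and exception defined locally for illustration; the real code lives in `compose.cli`):

```python
class ComposeFileNotFound(Exception):
    pass

# Candidate filenames in precedence order: docker-compose.* beats the
# legacy fig.* names, and .yml beats .yaml.
SUPPORTED_FILENAMES = [
    'docker-compose.yml',
    'docker-compose.yaml',
    'fig.yml',
    'fig.yaml',
]

def pick_config_filename(files_present):
    for candidate in SUPPORTED_FILENAMES:
        if candidate in files_present:
            return candidate
    raise ComposeFileNotFound(SUPPORTED_FILENAMES)
```

Slicing `files[0:]`, `files[1:]`, … in the test walks down exactly this precedence list.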
def test_get_project(self):
command = TopLevelCommand()
command.base_dir = 'tests/fixtures/longer-filename-composefile'
@@ -100,6 +102,22 @@ class CLITestCase(unittest.TestCase):
with self.assertRaises(SystemExit):
command.dispatch(['-h'], None)
def test_command_help(self):
with self.assertRaises(SystemExit) as ctx:
TopLevelCommand().dispatch(['help', 'up'], None)
self.assertIn('Usage: up', str(ctx.exception))
def test_command_help_dashes(self):
with self.assertRaises(SystemExit) as ctx:
TopLevelCommand().dispatch(['help', 'migrate-to-labels'], None)
self.assertIn('Usage: migrate-to-labels', str(ctx.exception))
def test_command_help_nonexistent(self):
with self.assertRaises(NoSuchCommand):
TopLevelCommand().dispatch(['help', 'nonexistent'], None)
def test_setup_logging(self):
main.setup_logging()
self.assertEqual(logging.getLogger().level, logging.DEBUG)
@@ -109,7 +127,7 @@ class CLITestCase(unittest.TestCase):
def test_run_with_environment_merged_with_options_list(self, mock_dockerpty):
command = TopLevelCommand()
mock_client = mock.create_autospec(docker.Client)
mock_project = mock.Mock()
mock_project = mock.Mock(client=mock_client)
mock_project.get_service.return_value = Service(
'service',
client=mock_client,
@@ -135,13 +153,65 @@ class CLITestCase(unittest.TestCase):
call_kwargs['environment'],
{'FOO': 'ONE', 'BAR': 'NEW', 'OTHER': 'THREE'})
def test_run_service_with_restart_always(self):
command = TopLevelCommand()
mock_client = mock.create_autospec(docker.Client)
mock_project = mock.Mock(client=mock_client)
mock_project.get_service.return_value = Service(
'service',
client=mock_client,
restart='always',
image='someimage')
command.run(mock_project, {
'SERVICE': 'service',
'COMMAND': None,
'-e': [],
'--user': None,
'--no-deps': None,
'--allow-insecure-ssl': None,
'-d': True,
'-T': None,
'--entrypoint': None,
'--service-ports': None,
'--rm': None,
})
_, _, call_kwargs = mock_client.create_container.mock_calls[0]
self.assertEquals(call_kwargs['host_config']['RestartPolicy']['Name'], 'always')
def get_config_filename_for_files(filenames):
command = TopLevelCommand()
mock_client = mock.create_autospec(docker.Client)
mock_project = mock.Mock(client=mock_client)
mock_project.get_service.return_value = Service(
'service',
client=mock_client,
restart='always',
image='someimage')
command.run(mock_project, {
'SERVICE': 'service',
'COMMAND': None,
'-e': [],
'--user': None,
'--no-deps': None,
'--allow-insecure-ssl': None,
'-d': True,
'-T': None,
'--entrypoint': None,
'--service-ports': None,
'--rm': True,
})
_, _, call_kwargs = mock_client.create_container.mock_calls[0]
self.assertFalse('RestartPolicy' in call_kwargs['host_config'])
def get_config_filename_for_files(filenames, subdir=None):
project_dir = tempfile.mkdtemp()
try:
make_files(project_dir, filenames)
command = TopLevelCommand()
command.base_dir = project_dir
if subdir:
command.base_dir = tempfile.mkdtemp(dir=project_dir)
else:
command.base_dir = project_dir
return os.path.basename(command.get_config_path())
finally:
shutil.rmtree(project_dir)
@@ -151,4 +221,3 @@ def make_files(dirname, filenames):
for fname in filenames:
with open(os.path.join(dirname, fname), 'w') as f:
f.write('')


@@ -4,6 +4,7 @@ from .. import unittest
from compose import config
class ConfigTest(unittest.TestCase):
def test_from_dictionary(self):
service_dicts = config.from_dictionary({
@@ -53,46 +54,61 @@ class VolumePathTest(unittest.TestCase):
self.assertEqual(d['volumes'], ['/home/user:/container/path'])
class MergeVolumesTest(unittest.TestCase):
class MergePathMappingTest(object):
def config_name(self):
return ""
def test_empty(self):
service_dict = config.merge_service_dicts({}, {})
self.assertNotIn('volumes', service_dict)
self.assertNotIn(self.config_name(), service_dict)
def test_no_override(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/data']},
{self.config_name(): ['/foo:/code', '/data']},
{},
)
self.assertEqual(set(service_dict['volumes']), set(['/foo:/code', '/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/foo:/code', '/data']))
def test_no_base(self):
service_dict = config.merge_service_dicts(
{},
{'volumes': ['/bar:/code']},
{self.config_name(): ['/bar:/code']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code']))
def test_override_explicit_path(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/data']},
{'volumes': ['/bar:/code']},
{self.config_name(): ['/foo:/code', '/data']},
{self.config_name(): ['/bar:/code']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/data']))
def test_add_explicit_path(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/data']},
{'volumes': ['/bar:/code', '/quux:/data']},
{self.config_name(): ['/foo:/code', '/data']},
{self.config_name(): ['/bar:/code', '/quux:/data']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/quux:/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/quux:/data']))
def test_remove_explicit_path(self):
service_dict = config.merge_service_dicts(
{'volumes': ['/foo:/code', '/quux:/data']},
{'volumes': ['/bar:/code', '/data']},
{self.config_name(): ['/foo:/code', '/quux:/data']},
{self.config_name(): ['/bar:/code', '/data']},
)
self.assertEqual(set(service_dict['volumes']), set(['/bar:/code', '/data']))
self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/data']))
class MergeVolumesTest(unittest.TestCase, MergePathMappingTest):
def config_name(self):
return 'volumes'
class MergeDevicesTest(unittest.TestCase, MergePathMappingTest):
def config_name(self):
return 'devices'
class BuildOrImageMergeTest(unittest.TestCase):
def test_merge_build_or_image_no_override(self):
self.assertEqual(
config.merge_service_dicts({'build': '.'}, {}),
@@ -184,9 +200,50 @@ class MergeStringsOrListsTest(unittest.TestCase):
self.assertEqual(set(service_dict['dns']), set(['8.8.8.8', '9.9.9.9']))
class MergeLabelsTest(unittest.TestCase):
def test_empty(self):
service_dict = config.merge_service_dicts({}, {})
self.assertNotIn('labels', service_dict)
def test_no_override(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar']}),
config.make_service_dict('foo', {}),
)
self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': ''})
def test_no_base(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {}),
config.make_service_dict('foo', {'labels': ['foo=2']}),
)
self.assertEqual(service_dict['labels'], {'foo': '2'})
def test_override_explicit_value(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar']}),
config.make_service_dict('foo', {'labels': ['foo=2']}),
)
self.assertEqual(service_dict['labels'], {'foo': '2', 'bar': ''})
def test_add_explicit_value(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar']}),
config.make_service_dict('foo', {'labels': ['bar=2']}),
)
self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': '2'})
def test_remove_explicit_value(self):
service_dict = config.merge_service_dicts(
config.make_service_dict('foo', {'labels': ['foo=1', 'bar=2']}),
config.make_service_dict('foo', {'labels': ['bar']}),
)
self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': ''})
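These assertions show labels being normalized from list form (`'foo=1'`, bare `'bar'`) into a dict before merging, with the override winning per key. A minimal sketch of that normalization and merge, using hypothetical helper names:

```python
def parse_labels(labels):
    """Normalize labels from list form ('key=value' or bare 'key') to a dict."""
    if isinstance(labels, dict):
        return dict(labels)
    result = {}
    for label in labels or []:
        key, _, value = label.partition('=')
        result[key] = value  # a bare 'key' becomes key -> ''
    return result

def merge_labels(base, override):
    merged = parse_labels(base)
    merged.update(parse_labels(override))
    return merged
```

Note the asymmetry exercised by `test_remove_explicit_value`: listing a bare `'bar'` in the override resets its value to the empty string rather than deleting the key.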
class EnvTest(unittest.TestCase):
def test_parse_environment_as_list(self):
environment = [
'NORMAL=F1',
'CONTAINS_EQUALS=F=2',
'TRAILING_EQUALS=',
@@ -218,9 +275,8 @@ class EnvTest(unittest.TestCase):
os.environ['ENV_DEF'] = 'E3'
service_dict = config.make_service_dict(
'foo',
{
'environment': {
'foo', {
'environment': {
'FILE_DEF': 'F1',
'FILE_DEF_EMPTY': '',
'ENV_DEF': None,
@@ -278,6 +334,7 @@ class EnvTest(unittest.TestCase):
{'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
)
class ExtendsTest(unittest.TestCase):
def test_extends(self):
service_dicts = config.load('tests/fixtures/extends/docker-compose.yml')
@@ -291,12 +348,12 @@ class ExtendsTest(unittest.TestCase):
{
'name': 'mydb',
'image': 'busybox',
'command': 'sleep 300',
'command': 'top',
},
{
'name': 'myweb',
'image': 'busybox',
'command': 'sleep 300',
'command': 'top',
'links': ['mydb:db'],
'environment': {
"FOO": "1",
@@ -335,10 +392,11 @@ class ExtendsTest(unittest.TestCase):
],
)
def test_extends_validation(self):
dictionary = {'extends': None}
load_config = lambda: config.make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends')
def load_config():
return config.make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends')
self.assertRaisesRegexp(config.ConfigurationError, 'dictionary', load_config)
@@ -396,6 +454,21 @@ class ExtendsTest(unittest.TestCase):
self.assertEqual(set(dicts[0]['volumes']), set(paths))
def test_parent_build_path_dne(self):
child = config.load('tests/fixtures/extends/nonexistent-path-child.yml')
self.assertEqual(child, [
{
'name': 'dnechild',
'image': 'busybox',
'command': '/bin/true',
'environment': {
"FOO": "1",
"BAR": "2",
},
},
])
class BuildPathTest(unittest.TestCase):
def setUp(self):
@@ -405,7 +478,10 @@ class BuildPathTest(unittest.TestCase):
options = {'build': 'nonexistent.path'}
self.assertRaises(
config.ConfigurationError,
lambda: config.make_service_dict('foo', options, 'tests/fixtures/build-path'),
lambda: config.from_dictionary({
'foo': options,
'working_dir': 'tests/fixtures/build-path'
})
)
def test_relative_path(self):


@@ -5,16 +5,16 @@ import mock
import docker
from compose.container import Container
from compose.container import get_container_name
class ContainerTest(unittest.TestCase):
def setUp(self):
self.container_dict = {
"Id": "abc",
"Image": "busybox:latest",
"Command": "sleep 300",
"Command": "top",
"Created": 1387384730,
"Status": "Up 8 seconds",
"Ports": None,
@@ -24,17 +24,26 @@ class ContainerTest(unittest.TestCase):
"NetworkSettings": {
"Ports": {},
},
"Config": {
"Labels": {
"com.docker.compose.project": "composetest",
"com.docker.compose.service": "web",
"com.docker.compose.container-number": 7,
},
}
}
def test_from_ps(self):
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.dictionary, {
"Id": "abc",
"Image":"busybox:latest",
"Name": "/composetest_db_1",
})
self.assertEqual(
container.dictionary,
{
"Id": "abc",
"Image": "busybox:latest",
"Name": "/composetest_db_1",
})
def test_from_ps_prefixed(self):
self.container_dict['Names'] = ['/swarm-host-1' + n for n in self.container_dict['Names']]
@@ -44,7 +53,7 @@ class ContainerTest(unittest.TestCase):
has_been_inspected=True)
self.assertEqual(container.dictionary, {
"Id": "abc",
"Image":"busybox:latest",
"Image": "busybox:latest",
"Name": "/composetest_db_1",
})
@@ -64,10 +73,8 @@ class ContainerTest(unittest.TestCase):
})
def test_number(self):
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.number, 1)
container = Container(None, self.container_dict, has_been_inspected=True)
self.assertEqual(container.number, 7)
def test_name(self):
container = Container.from_ps(None,
@@ -76,10 +83,8 @@ class ContainerTest(unittest.TestCase):
self.assertEqual(container.name, "composetest_db_1")
def test_name_without_project(self):
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.name_without_project, "db_1")
container = Container(None, self.container_dict, has_been_inspected=True)
self.assertEqual(container.name_without_project, "web_7")
def test_inspect_if_not_inspected(self):
mock_client = mock.create_autospec(docker.Client)
@@ -100,7 +105,7 @@ class ContainerTest(unittest.TestCase):
def test_human_readable_ports_public_and_private(self):
self.container_dict['NetworkSettings']['Ports'].update({
"45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ],
"45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
"45453/tcp": [],
})
container = Container(None, self.container_dict, has_been_inspected=True)
@@ -110,7 +115,7 @@ class ContainerTest(unittest.TestCase):
def test_get_local_port(self):
self.container_dict['NetworkSettings']['Ports'].update({
"45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ],
"45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
})
container = Container(None, self.container_dict, has_been_inspected=True)
@@ -120,12 +125,21 @@ class ContainerTest(unittest.TestCase):
def test_get(self):
container = Container(None, {
"Status":"Up 8 seconds",
"Status": "Up 8 seconds",
"HostConfig": {
"VolumesFrom": ["volume_id",]
"VolumesFrom": ["volume_id"]
},
}, has_been_inspected=True)
self.assertEqual(container.get('Status'), "Up 8 seconds")
self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id",])
self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id"])
self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None)
class GetContainerNameTestCase(unittest.TestCase):
def test_get_container_name(self):
self.assertIsNone(get_container_name({}))
self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
self.assertEqual(get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}), 'myproject_db_1')
self.assertEqual(get_container_name({'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']}), 'myproject_db_1')
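These assertions pin down how a name is picked from the two payload shapes the Docker API returns: an inspect payload with a single `Name`, or a `/containers/json` entry whose `Names` list mixes the real name with link aliases (and, under Swarm, a host prefix). A sketch consistent with the assertions — link aliases always have more path components, so the shortest entry is the container's own name:

```python
def get_container_name_sketch(container):
    # Inspect payloads carry a single 'Name'; ps payloads carry 'Names'.
    if not container.get('Name') and not container.get('Names'):
        return None
    if 'Name' in container:
        return container['Name'].lstrip('/')
    # Link aliases ('/myproject_web_1/db') and Swarm host prefixes add
    # path components; the entry with the fewest is the real name.
    shortest = min(container['Names'], key=lambda name: len(name.split('/')))
    return shortest.split('/')[-1]
```

This is an inferred reconstruction, not the verbatim `compose.container.get_container_name`.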


@@ -2,10 +2,9 @@ from __future__ import unicode_literals
from __future__ import absolute_import
from tests import unittest
import mock
from six import StringIO
from compose import progress_stream
class ProgressStreamTestCase(unittest.TestCase):
@@ -18,3 +17,21 @@ class ProgressStreamTestCase(unittest.TestCase):
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)
def test_stream_output_div_zero(self):
output = [
'{"status": "Downloading", "progressDetail": {"current": '
'0, "start": 1413653874, "total": 0}, '
'"progress": "..."}',
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)
def test_stream_output_null_total(self):
output = [
'{"status": "Downloading", "progressDetail": {"current": '
'0, "start": 1413653874, "total": null}, '
'"progress": "..."}',
]
events = progress_stream.stream_output(output, StringIO())
self.assertEqual(len(events), 1)
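Both new tests feed `stream_output` a `progressDetail` whose `total` is `0` or `null`; a naive percentage computation (`current / total`) would raise `ZeroDivisionError` or `TypeError`. The guard being tested can be sketched as follows (illustrative, not the actual `stream_output` code):

```python
def format_percent(current, total):
    # total may be 0 (size unknown) or None (JSON null): skip the
    # percentage rather than dividing by it.
    if not total:
        return None
    return '{0:.1f}%'.format(float(current) / total * 100)
```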


@@ -8,6 +8,7 @@ from compose import config
import mock
import docker
class ProjectTest(unittest.TestCase):
def test_from_dict(self):
project = Project.from_dicts('composetest', [
@@ -79,10 +80,12 @@ class ProjectTest(unittest.TestCase):
web = Service(
project='composetest',
name='web',
image='foo',
)
console = Service(
project='composetest',
name='console',
image='foo',
)
project = Project('test', [web, console], None)
self.assertEqual(project.get_services(), [web, console])
@@ -91,10 +94,12 @@ class ProjectTest(unittest.TestCase):
web = Service(
project='composetest',
name='web',
image='foo',
)
console = Service(
project='composetest',
name='console',
image='foo',
)
project = Project('test', [web, console], None)
self.assertEqual(project.get_services(['console']), [console])
@@ -103,19 +108,23 @@ class ProjectTest(unittest.TestCase):
db = Service(
project='composetest',
name='db',
image='foo',
)
web = Service(
project='composetest',
name='web',
image='foo',
links=[(db, 'database')]
)
cache = Service(
project='composetest',
name='cache'
name='cache',
image='foo'
)
console = Service(
project='composetest',
name='console',
image='foo',
links=[(web, 'web')]
)
project = Project('test', [web, db, cache, console], None)
@@ -128,10 +137,12 @@ class ProjectTest(unittest.TestCase):
db = Service(
project='composetest',
name='db',
image='foo',
)
web = Service(
project='composetest',
name='web',
image='foo',
links=[(db, 'database')]
)
project = Project('test', [web, db], None)
@@ -198,6 +209,18 @@ class ProjectTest(unittest.TestCase):
], None)
self.assertEqual(project.get_service('test')._get_volumes_from(), container_ids)
def test_net_unset(self):
mock_client = mock.create_autospec(docker.Client)
project = Project.from_dicts('test', [
{
'name': 'test',
'image': 'busybox:latest',
}
], mock_client)
service = project.get_service('test')
self.assertEqual(service._get_net(), None)
self.assertNotIn('NetworkMode', service._get_container_host_config({}))
def test_use_net_from_container(self):
container_id = 'aabbccddee'
container_dict = dict(Name='aaa', Id=container_id)
@@ -211,7 +234,7 @@ class ProjectTest(unittest.TestCase):
}
], mock_client)
service = project.get_service('test')
self.assertEqual(service._get_net(), 'container:'+container_id)
self.assertEqual(service._get_net(), 'container:' + container_id)
def test_use_net_from_service(self):
container_name = 'test_aaa_1'
@@ -237,4 +260,4 @@ class ProjectTest(unittest.TestCase):
], mock_client)
service = project.get_service('test')
self.assertEqual(service._get_net(), 'container:'+container_name)
self.assertEqual(service._get_net(), 'container:' + container_name)


@@ -5,16 +5,18 @@ from .. import unittest
import mock
import docker
from requests import Response
from compose import Service
from compose.service import Service
from compose.container import Container
from compose.const import LABEL_SERVICE, LABEL_PROJECT, LABEL_ONE_OFF
from compose.service import (
APIError,
ConfigError,
NeedsBuildError,
NoSuchImageError,
build_port_bindings,
build_volume_binding,
get_container_name,
get_container_data_volumes,
merge_volume_bindings,
parse_repository_tag,
parse_volume_spec,
split_port,
@@ -38,59 +40,45 @@ class ServiceTest(unittest.TestCase):
self.assertRaises(ConfigError, lambda: Service(name='foo_bar'))
self.assertRaises(ConfigError, lambda: Service(name='__foo_bar__'))
Service('a')
Service('foo')
Service('a', image='foo')
Service('foo', image='foo')
def test_project_validation(self):
self.assertRaises(ConfigError, lambda: Service(name='foo', project='_'))
Service(name='foo', project='bar')
def test_get_container_name(self):
self.assertIsNone(get_container_name({}))
self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
self.assertEqual(get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}), 'myproject_db_1')
self.assertEqual(get_container_name({'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']}), 'myproject_db_1')
self.assertRaises(ConfigError, lambda: Service('bar'))
self.assertRaises(ConfigError, lambda: Service(name='foo', project='_', image='foo'))
Service(name='foo', project='bar', image='foo')
def test_containers(self):
service = Service('db', client=self.mock_client, project='myproject')
service = Service('db', self.mock_client, 'myproject', image='foo')
self.mock_client.containers.return_value = []
self.assertEqual(service.containers(), [])
def test_containers_with_containers(self):
self.mock_client.containers.return_value = [
{'Image': 'busybox', 'Id': 'OUT_1', 'Names': ['/myproject', '/foo/bar']},
{'Image': 'busybox', 'Id': 'OUT_2', 'Names': ['/myproject_db']},
{'Image': 'busybox', 'Id': 'OUT_3', 'Names': ['/db_1']},
{'Image': 'busybox', 'Id': 'IN_1', 'Names': ['/myproject_db_1', '/myproject_web_1/db']},
dict(Name=str(i), Image='foo', Id=i) for i in range(3)
]
self.assertEqual([c.id for c in service.containers()], ['IN_1'])
service = Service('db', self.mock_client, 'myproject', image='foo')
self.assertEqual([c.id for c in service.containers()], range(3))
def test_containers_prefixed(self):
service = Service('db', client=self.mock_client, project='myproject')
self.mock_client.containers.return_value = [
{'Image': 'busybox', 'Id': 'OUT_1', 'Names': ['/swarm-host-1/myproject', '/swarm-host-1/foo/bar']},
{'Image': 'busybox', 'Id': 'OUT_2', 'Names': ['/swarm-host-1/myproject_db']},
{'Image': 'busybox', 'Id': 'OUT_3', 'Names': ['/swarm-host-1/db_1']},
{'Image': 'busybox', 'Id': 'IN_1', 'Names': ['/swarm-host-1/myproject_db_1', '/swarm-host-1/myproject_web_1/db']},
expected_labels = [
'{0}=myproject'.format(LABEL_PROJECT),
'{0}=db'.format(LABEL_SERVICE),
'{0}=False'.format(LABEL_ONE_OFF),
]
self.assertEqual([c.id for c in service.containers()], ['IN_1'])
self.mock_client.containers.assert_called_once_with(
all=False,
filters={'label': expected_labels})
def test_get_volumes_from_container(self):
container_id = 'aabbccddee'
service = Service(
'test',
image='foo',
volumes_from=[mock.Mock(id=container_id, spec=Container)])
self.assertEqual(service._get_volumes_from(), [container_id])
def test_get_volumes_from_intermediate_container(self):
container_id = 'aabbccddee'
service = Service('test')
container = mock.Mock(id=container_id, spec=Container)
self.assertEqual(service._get_volumes_from(container), [container_id])
def test_get_volumes_from_service_container_exists(self):
container_ids = ['aabbccddee', '12345']
from_service = mock.create_autospec(Service)
@@ -98,7 +86,7 @@ class ServiceTest(unittest.TestCase):
mock.Mock(id=container_id, spec=Container)
for container_id in container_ids
]
service = Service('test', volumes_from=[from_service])
service = Service('test', volumes_from=[from_service], image='foo')
self.assertEqual(service._get_volumes_from(), container_ids)
@@ -109,7 +97,7 @@ class ServiceTest(unittest.TestCase):
from_service.create_container.return_value = mock.Mock(
id=container_id,
spec=Container)
service = Service('test', volumes_from=[from_service])
service = Service('test', image='foo', volumes_from=[from_service])
self.assertEqual(service._get_volumes_from(), [container_id])
from_service.create_container.assert_called_once_with()
@@ -145,56 +133,62 @@ class ServiceTest(unittest.TestCase):
def test_build_port_bindings_with_one_port(self):
port_bindings = build_port_bindings(["127.0.0.1:1000:1000"])
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000")])
self.assertEqual(port_bindings["1000"], [("127.0.0.1", "1000")])
def test_build_port_bindings_with_matching_internal_ports(self):
port_bindings = build_port_bindings(["127.0.0.1:1000:1000","127.0.0.1:2000:1000"])
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000"),("127.0.0.1","2000")])
port_bindings = build_port_bindings(["127.0.0.1:1000:1000", "127.0.0.1:2000:1000"])
self.assertEqual(port_bindings["1000"], [("127.0.0.1", "1000"), ("127.0.0.1", "2000")])
def test_build_port_bindings_with_nonmatching_internal_ports(self):
port_bindings = build_port_bindings(["127.0.0.1:1000:1000","127.0.0.1:2000:2000"])
self.assertEqual(port_bindings["1000"],[("127.0.0.1","1000")])
self.assertEqual(port_bindings["2000"],[("127.0.0.1","2000")])
port_bindings = build_port_bindings(["127.0.0.1:1000:1000", "127.0.0.1:2000:2000"])
self.assertEqual(port_bindings["1000"], [("127.0.0.1", "1000")])
self.assertEqual(port_bindings["2000"], [("127.0.0.1", "2000")])
def test_split_domainname_none(self):
service = Service('foo', hostname='name', client=self.mock_client)
service = Service('foo', image='foo', hostname='name', client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({'image': 'foo'})
opts = service._get_container_create_options({'image': 'foo'}, 1)
self.assertEqual(opts['hostname'], 'name', 'hostname')
self.assertFalse('domainname' in opts, 'domainname')
def test_split_domainname_fqdn(self):
service = Service('foo',
hostname='name.domain.tld',
client=self.mock_client)
service = Service(
'foo',
hostname='name.domain.tld',
image='foo',
client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({'image': 'foo'})
opts = service._get_container_create_options({'image': 'foo'}, 1)
self.assertEqual(opts['hostname'], 'name', 'hostname')
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
def test_split_domainname_both(self):
service = Service('foo',
hostname='name',
domainname='domain.tld',
client=self.mock_client)
service = Service(
'foo',
hostname='name',
image='foo',
domainname='domain.tld',
client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({'image': 'foo'})
opts = service._get_container_create_options({'image': 'foo'}, 1)
self.assertEqual(opts['hostname'], 'name', 'hostname')
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
     def test_split_domainname_weird(self):
-        service = Service('foo',
-                          hostname='name.sub',
-                          domainname='domain.tld',
-                          client=self.mock_client)
+        service = Service(
+            'foo',
+            hostname='name.sub',
+            domainname='domain.tld',
+            image='foo',
+            client=self.mock_client)
         self.mock_client.containers.return_value = []
-        opts = service._get_container_create_options({'image': 'foo'})
+        opts = service._get_container_create_options({'image': 'foo'}, 1)
         self.assertEqual(opts['hostname'], 'name.sub', 'hostname')
         self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
     def test_get_container_not_found(self):
         self.mock_client.containers.return_value = []
-        service = Service('foo', client=self.mock_client)
+        service = Service('foo', client=self.mock_client, image='foo')

         self.assertRaises(ValueError, service.get_container)
@@ -202,7 +196,7 @@ class ServiceTest(unittest.TestCase):
     def test_get_container(self, mock_container_class):
         container_dict = dict(Name='default_foo_2')
         self.mock_client.containers.return_value = [container_dict]
-        service = Service('foo', client=self.mock_client)
+        service = Service('foo', image='foo', client=self.mock_client)

         container = service.get_container(number=2)
         self.assertEqual(container, mock_container_class.from_ps.return_value)
@@ -213,33 +207,62 @@ class ServiceTest(unittest.TestCase):
     def test_pull_image(self, mock_log):
         service = Service('foo', client=self.mock_client, image='someimage:sometag')
         service.pull(insecure_registry=True)
-        self.mock_client.pull.assert_called_once_with('someimage:sometag', insecure_registry=True)
-        mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')
-
-    @mock.patch('compose.service.Container', autospec=True)
-    @mock.patch('compose.service.log', autospec=True)
-    def test_create_container_from_insecure_registry(
-            self,
-            mock_log,
-            mock_container):
-        service = Service('foo', client=self.mock_client, image='someimage:sometag')
-        mock_response = mock.Mock(Response)
-        mock_response.status_code = 404
-        mock_response.reason = "Not Found"
-        mock_container.create.side_effect = APIError(
-            'Mock error', mock_response, "No such image")
-
-        # We expect the APIError because our service requires a
-        # non-existent image.
-        with self.assertRaises(APIError):
-            service.create_container(insecure_registry=True)
-
         self.mock_client.pull.assert_called_once_with(
-            'someimage:sometag',
+            'someimage',
+            tag='sometag',
             insecure_registry=True,
             stream=True)
-        mock_log.info.assert_called_once_with(
-            'Pulling image someimage:sometag...')
+        mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')
+
+    def test_pull_image_no_tag(self):
+        service = Service('foo', client=self.mock_client, image='ababab')
+        service.pull()
+        self.mock_client.pull.assert_called_once_with(
+            'ababab',
+            tag='latest',
+            insecure_registry=False,
+            stream=True)
+
+    def test_create_container_from_insecure_registry(self):
+        service = Service('foo', client=self.mock_client, image='someimage:sometag')
+        images = []
+
+        def pull(repo, tag=None, insecure_registry=False, **kwargs):
+            self.assertEqual('someimage', repo)
+            self.assertEqual('sometag', tag)
+            self.assertTrue(insecure_registry)
+            images.append({'Id': 'abc123'})
+            return []
+
+        service.image = lambda *args, **kwargs: mock_get_image(images)
+        self.mock_client.pull = pull
+
+        service.create_container(insecure_registry=True)
+        self.assertEqual(1, len(images))
     @mock.patch('compose.service.Container', autospec=True)
     def test_recreate_container(self, _):
         mock_container = mock.create_autospec(Container)
         service = Service('foo', client=self.mock_client, image='someimage')
+        service.image = lambda: {'Id': 'abc123'}
         new_container = service.recreate_container(mock_container)

         mock_container.stop.assert_called_once_with(timeout=10)
         self.mock_client.rename.assert_called_once_with(
             mock_container.id,
             '%s_%s' % (mock_container.short_id, mock_container.name))

         new_container.start.assert_called_once_with()
         mock_container.remove.assert_called_once_with()

     @mock.patch('compose.service.Container', autospec=True)
     def test_recreate_container_with_timeout(self, _):
         mock_container = mock.create_autospec(Container)
+        self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
         service = Service('foo', client=self.mock_client, image='someimage')
         service.recreate_container(mock_container, timeout=1)

         mock_container.stop.assert_called_once_with(timeout=1)
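The two recreate tests above assert a stop → rename → start → remove sequence: the old container is stopped, renamed out of the way (freeing its name for the replacement), the new container is started, and only then is the old one deleted. A hypothetical, self-contained sketch of that flow; `FakeContainer`, `FakeClient`, and the free function are illustrative stand-ins, not Compose's actual `Service.recreate_container` API:

```python
class FakeContainer:
    # Stand-in for compose.container.Container.
    def __init__(self, container_id, name):
        self.id = container_id
        self.short_id = container_id[:6]
        self.name = name
        self.stopped = self.started = self.removed = False

    def stop(self, timeout=10):
        self.stopped = True

    def start(self):
        self.started = True

    def remove(self):
        self.removed = True


class FakeClient:
    # Stand-in for the docker client; records rename calls.
    def __init__(self):
        self.renames = []

    def rename(self, container_id, new_name):
        self.renames.append((container_id, new_name))


def recreate_container(client, old, new, timeout=10):
    # The sequence the tests assert: stop, rename (e.g. "abc123_db_1"),
    # start the replacement, then remove the old container.
    old.stop(timeout=timeout)
    client.rename(old.id, '%s_%s' % (old.short_id, old.name))
    new.start()
    old.remove()
    return new
```

Renaming before removal means the replacement can claim the original name even while the old container still exists, which keeps the window without a named container as small as possible.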
     def test_parse_repository_tag(self):
         self.assertEqual(parse_repository_tag("root"), ("root", ""))
@@ -249,32 +272,71 @@ class ServiceTest(unittest.TestCase):
         self.assertEqual(parse_repository_tag("url:5000/repo"), ("url:5000/repo", ""))
         self.assertEqual(parse_repository_tag("url:5000/repo:tag"), ("url:5000/repo", "tag"))
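The assertions above document the rule `parse_repository_tag` follows: the text after the last colon is a tag only if it contains no `/`; otherwise the colon belongs to a registry host:port and the whole string is the repository. A sketch matching those assertions (docker-py ships the real implementation):

```python
def parse_repository_tag(repo):
    # Split "repo[:tag]" into (repository, tag). A colon followed by a
    # "/" is a registry port (e.g. "url:5000/repo"), not a tag separator.
    column_index = repo.rfind(':')
    if column_index < 0:
        return repo, ''
    tag = repo[column_index + 1:]
    if '/' not in tag:
        return repo[:column_index], tag
    return repo, ''
```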
-    def test_latest_is_used_when_tag_is_not_specified(self):
+    @mock.patch('compose.service.Container', autospec=True)
+    def test_create_container_latest_is_used_when_no_tag_specified(self, mock_container):
         service = Service('foo', client=self.mock_client, image='someimage')
-        Container.create = mock.Mock()
+        images = []
+
+        def pull(repo, tag=None, **kwargs):
+            self.assertEqual('someimage', repo)
+            self.assertEqual('latest', tag)
+            images.append({'Id': 'abc123'})
+            return []
+
+        service.image = lambda *args, **kwargs: mock_get_image(images)
+        self.mock_client.pull = pull

         service.create_container()
-        self.assertEqual(Container.create.call_args[1]['image'], 'someimage:latest')
+        self.assertEqual(1, len(images))
     def test_create_container_with_build(self):
-        self.mock_client.images.return_value = []
         service = Service('foo', client=self.mock_client, build='.')
-        service.build = mock.create_autospec(service.build)
-        service.create_container(do_build=True)
-
-        self.mock_client.images.assert_called_once_with(name=service.full_name)
-        service.build.assert_called_once_with()
+
+        images = []
+        service.image = lambda *args, **kwargs: mock_get_image(images)
+        service.build = lambda: images.append({'Id': 'abc123'})
+
+        service.create_container(do_build=True)
+        self.assertEqual(1, len(images))
     def test_create_container_no_build(self):
-        self.mock_client.images.return_value = []
         service = Service('foo', client=self.mock_client, build='.')
-        service.create_container(do_build=False)
+        service.image = lambda: {'Id': 'abc123'}

-        self.assertFalse(self.mock_client.images.called)
+        service.create_container(do_build=False)
         self.assertFalse(self.mock_client.build.called)
+    def test_create_container_no_build_but_needs_build(self):
+        service = Service('foo', client=self.mock_client, build='.')
+        service.image = lambda *args, **kwargs: mock_get_image([])
+        with self.assertRaises(NeedsBuildError):
+            service.create_container(do_build=False)
+
+    def test_build_does_not_pull(self):
+        self.mock_client.build.return_value = [
+            '{"stream": "Successfully built 12345"}',
+        ]
+
+        service = Service('foo', client=self.mock_client, build='.')
+        service.build()
+
+        self.assertEqual(self.mock_client.build.call_count, 1)
+        self.assertFalse(self.mock_client.build.call_args[1]['pull'])
+
+
+def mock_get_image(images):
+    if images:
+        return images[0]
+    else:
+        raise NoSuchImageError()
 class ServiceVolumesTest(unittest.TestCase):

+    def setUp(self):
+        self.mock_client = mock.create_autospec(docker.Client)
+
     def test_parse_volume_spec_only_one_path(self):
         spec = parse_volume_spec('/the/volume')
         self.assertEqual(spec, (None, '/the/volume', 'rw'))
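The volume tests compare against an `(external, internal, mode)` triple: a bare path becomes an anonymous read-write volume with no external half. A sketch assuming the three spec forms these tests exercise; Compose's real `parse_volume_spec` returns a namedtuple (which compares equal to a plain tuple) and validates more cases:

```python
def parse_volume_spec(volume_config):
    # Accepts "internal", "external:internal", or "external:internal:mode".
    parts = volume_config.split(':')
    if len(parts) > 3:
        raise ValueError('Volume %s has incorrect format' % volume_config)
    if len(parts) == 1:
        return (None, parts[0], 'rw')   # anonymous volume, default read-write
    if len(parts) == 2:
        parts.append('rw')              # mode defaults to read-write
    external, internal, mode = parts
    return (external, internal, mode)
```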
@@ -297,6 +359,129 @@ class ServiceVolumesTest(unittest.TestCase):
     def test_build_volume_binding(self):
         binding = build_volume_binding(parse_volume_spec('/outside:/inside'))
+        self.assertEqual(binding, ('/inside', '/outside:/inside:rw'))
+
+    def test_get_container_data_volumes(self):
+        options = [
+            '/host/volume:/host/volume:ro',
+            '/new/volume',
+            '/existing/volume',
+        ]
+
+        self.mock_client.inspect_image.return_value = {
+            'ContainerConfig': {
+                'Volumes': {
+                    '/mnt/image/data': {},
+                }
+            }
+        }
+        container = Container(self.mock_client, {
+            'Image': 'ababab',
+            'Volumes': {
+                '/host/volume': '/host/volume',
+                '/existing/volume': '/var/lib/docker/aaaaaaaa',
+                '/removed/volume': '/var/lib/docker/bbbbbbbb',
+                '/mnt/image/data': '/var/lib/docker/cccccccc',
+            },
+        }, has_been_inspected=True)
+
+        expected = {
+            '/existing/volume': '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
+            '/mnt/image/data': '/var/lib/docker/cccccccc:/mnt/image/data:rw',
+        }
+
+        binds = get_container_data_volumes(container, options)
+        self.assertEqual(binds, expected)
+
+    def test_merge_volume_bindings(self):
+        options = [
+            '/host/volume:/host/volume:ro',
+            '/host/rw/volume:/host/rw/volume',
+            '/new/volume',
+            '/existing/volume',
+        ]
+
+        self.mock_client.inspect_image.return_value = {
+            'ContainerConfig': {'Volumes': {}}
+        }
+
+        intermediate_container = Container(self.mock_client, {
+            'Image': 'ababab',
+            'Volumes': {'/existing/volume': '/var/lib/docker/aaaaaaaa'},
+        }, has_been_inspected=True)
+
+        expected = [
+            '/host/volume:/host/volume:ro',
+            '/host/rw/volume:/host/rw/volume:rw',
+            '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
+        ]
+
+        binds = merge_volume_bindings(options, intermediate_container)
+        self.assertEqual(set(binds), set(expected))
+
+    def test_mount_same_host_path_to_two_volumes(self):
+        service = Service(
+            'web',
+            image='busybox',
+            volumes=[
+                '/host/path:/data1',
+                '/host/path:/data2',
+            ],
+            client=self.mock_client,
+        )
+
+        self.mock_client.inspect_image.return_value = {
+            'Id': 'ababab',
+            'ContainerConfig': {
+                'Volumes': {}
+            }
+        }
+
+        create_options = service._get_container_create_options(
+            override_options={},
+            number=1,
+        )
+
         self.assertEqual(
-            binding,
-            ('/outside', dict(bind='/inside', ro=False)))
+            set(create_options['host_config']['Binds']),
+            set([
+                '/host/path:/data1:rw',
+                '/host/path:/data2:rw',
+            ]),
+        )
+    def test_different_host_path_in_container_json(self):
+        service = Service(
+            'web',
+            image='busybox',
+            volumes=['/host/path:/data'],
+            client=self.mock_client,
+        )
+
+        self.mock_client.inspect_image.return_value = {
+            'Id': 'ababab',
+            'ContainerConfig': {
+                'Volumes': {
+                    '/data': {},
+                }
+            }
+        }
+
+        self.mock_client.inspect_container.return_value = {
+            'Id': '123123123',
+            'Image': 'ababab',
+            'Volumes': {
+                '/data': '/mnt/sda1/host/path',
+            },
+        }
+
+        create_options = service._get_container_create_options(
+            override_options={},
+            number=1,
+            previous_container=Container(self.mock_client, {'Id': '123123123'}),
+        )
+
+        self.assertEqual(
+            create_options['host_config']['Binds'],
+            ['/mnt/sda1/host/path:/data:rw'],
+        )
@@ -3,6 +3,7 @@ from __future__ import absolute_import
 from compose.cli.utils import split_buffer
+from .. import unittest


 class SplitBufferTest(unittest.TestCase):
     def test_single_line_chunks(self):
         def reader():
@@ -8,7 +8,7 @@ deps =
     -rrequirements-dev.txt
 commands =
     nosetests -v {posargs}
-    flake8 compose
+    flake8 compose tests setup.py

 [flake8]
 # ignore line-length for now
@@ -1,12 +0,0 @@
-box: wercker-labs/docker
-build:
-  steps:
-    - script:
-        name: validate DCO
-        code: script/validate-dco
-    - script:
-        name: run tests
-        code: script/test
-    - script:
-        name: build binary
-        code: script/build-linux