Compare commits


119 Commits
0.5.2 ... 1.0.0

Author SHA1 Message Date
Aanand Prasad
6580c5609c Merge pull request #526 from bfirsh/ship-1.0.0
WIP: Ship 1.0.0
2014-10-16 18:54:56 +01:00
Ben Firshman
a07c83659d Merge pull request #537 from aanand/tls
TLS support
2014-10-16 18:53:59 +01:00
Aanand Prasad
0f58b9f6b4 Update docker-py to 0.5.3
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-10-16 18:46:55 +01:00
Aanand Prasad
fed391a23e Update documentation for TLS and boot2docker
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-10-16 18:46:48 +01:00
Ben Firshman
4d1b2f1547 Ship 1.0.0
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-10-16 16:27:13 +01:00
Aanand Prasad
60411e9f05 Vendor dockerpty at c8b493553477c9a57d163c71c97b2102f44a6ce7
Include TLS fixes and relative imports:

c8b4935534

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-10-16 16:24:39 +01:00
Aanand Prasad
b318585f3c TLS support, with same env vars as docker client
Thanks to @jkingyens for the bulk of the work.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-10-16 10:41:26 +01:00
Ben Firshman
1820306d0a Merge pull request #529 from chmouel/override-environement
Allow overriding environments on command line
2014-10-14 11:27:20 +01:00
Aanand Prasad
bf1c1b4c17 Merge pull request #534 from bfirsh/use-bin-echo-for-intermediate-container
Use /bin/echo for intermediate container
2014-10-10 13:12:33 +01:00
Ben Firshman
352062c2dc Use /bin/echo for intermediate container
In cases where the service is using a minimal container,
/bin/echo can be created but echo cannot.

See #517

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-10-10 12:24:00 +01:00
Chmouel Boudjnah
92249364b6 Allow overriding environments on command line
Add a new command line option -e to override environment variables when
running a service.

Signed-off-by: Chmouel Boudjnah <chmouel@chmouel.com>
2014-10-09 16:33:41 +02:00
Ben Firshman
872a1b5a5c Merge pull request #523 from aanand/remove-references-to-docker-osx
Remove references to docker-osx
2014-10-06 16:05:06 +01:00
Aanand Prasad
b969988ccb Remove references to docker-osx
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-10-06 14:21:49 +01:00
Aanand Prasad
a5aac7d59e Merge pull request #508 from bfirsh/recommend-boot2docker-in-install-instructions
Recommend boot2docker in installation instructions
2014-10-06 13:44:23 +01:00
Ben Firshman
c16f4d4041 Recommend boot2docker in installation instructions
Fixes #26

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-10-06 13:14:12 +01:00
Aanand Prasad
d91c458d52 Merge pull request #509 from bfirsh/smarter-binary-urls
Use uname to generate binary download URL
2014-10-06 12:35:36 +01:00
Ben Firshman
475a7e19a9 Merge pull request #521 from aanand/clean-up-env-docs
Clean up environment variable docs
2014-10-06 12:34:35 +01:00
Aanand Prasad
f43bfaadaa Clean up environment variable docs
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-10-06 11:58:16 +01:00
Aanand Prasad
2680756dd6 Merge pull request #518 from dnephin/env_docs
Add environment variables to cli.md docs
2014-10-06 11:36:21 +01:00
Daniel Nephin
837f368361 Add environment variables to cli.md docs.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-10-03 13:47:54 -04:00
Aanand Prasad
9df5481066 Merge pull request #512 from bfirsh/fix-entrypoint
Fix fig run entrypoint option
2014-10-01 15:39:00 -07:00
Aanand Prasad
59c7528b4e Merge pull request #513 from bfirsh/update-wercker-badge
Update wercker badge
2014-10-01 14:57:47 -07:00
Ben Firshman
d2385e3c2c Fix fig run entrypoint option
Slipped through because Wercker didn't report build status.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-10-01 14:19:23 -07:00
Ben Firshman
6b600faf0b Update wercker badge
I moved the project.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-10-01 12:50:30 -07:00
Aanand Prasad
431fdaa0f1 Merge pull request #490 from LuminosoInsight/insecure-pull
Allow pulls from an insecure registry
2014-10-01 10:52:54 -07:00
Aanand Prasad
ed12f2539c Merge pull request #511 from bfirsh/support_entrypoint
Add support for entrypoint to "fig run"
2014-10-01 10:51:50 -07:00
Ben Firshman
0dc19cb885 Order "fig run" options alphabetically
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-10-01 09:28:28 -07:00
satoru
62b9c64311 Add support for the --entrypoint option of docker run
Signed-off-by: Satoru Logic <satorulogic@gmail.com>
2014-10-01 09:28:28 -07:00
Moss Collum
3408e0d463 Add unit test for Service.pull
Signed-off-by: Moss Collum <mcollum@luminoso.com>
2014-10-01 11:09:19 -04:00
Ben Firshman
f4b599551a Use uname to generate binary download URL
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-30 18:59:18 -07:00
Aanand Prasad
267be12bb2 Merge pull request #456 from dnephin/volumes_from_service
Fix volumes_from a service with no containers
2014-09-30 18:33:39 -07:00
Aanand Prasad
ec5c864cc7 Merge pull request #460 from dnephin/fix_tests_for_docker_1.1
Fix test failures on docker 1.1.2
2014-09-30 17:52:38 -07:00
Aanand Prasad
cabe47a379 Merge pull request #505 from bfirsh/use-debian-wheezy-base-image
Compile against older version of GLIBC
2014-09-30 17:07:15 -07:00
Ben Firshman
b4fbab4b56 Run pyinstaller build as normal user
... and test build on CI so we don't break it again!

Fixes #503

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-30 16:25:40 -07:00
Alexander Holbreich
6797a322b5 Changing to stable debian (wheezy).
Therefore it compiles against a more common version of GLIBC.

Now works out of the box on Debian wheezy, centos:centos6 and other stable distributions.
Of course it works on new distributions. Tested with:
    debian:jessie and
    ubuntu:14.04

Signed-off-by: Alexander Holbreich <alexander@holbreich.org>

Conflicts:
	Dockerfile
2014-09-30 16:25:13 -07:00
Aanand Prasad
e04c5cb52c Merge pull request #506 from bfirsh/make-test-script-use-docker
Make script/test use docker
2014-09-30 15:16:07 -07:00
Ben Firshman
a3f70a9f64 Make script/test use Docker
Really easy to run entire test suite with Docker now. Also switch
Wercker to use the same script.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-30 13:51:42 -07:00
Ben Firshman
f407504679 Remove old Travis files
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-30 13:47:47 -07:00
Aanand Prasad
253b245a1c Merge pull request #492 from dnephin/project_name_from_env
Support setting project name from the environment
2014-09-29 15:32:29 -07:00
Daniel Nephin
fac49b62b6 Support setting project name from the environment.
Signed-off-by: Daniel Nephin <dnephin@yelp.com>
2014-09-29 18:01:08 -04:00
Ben Firshman
92ae5af019 Merge pull request #501 from aanand/fix-build-error
Fix race condition in cli_test.py
2014-09-28 04:54:08 +01:00
Jason Bernardino Alonso
c270e9d622 Require docker-py 0.5 or later for insecure_registry kwarg
Signed-off-by: Jason Bernardino Alonso <jalonso@luminoso.com>
2014-09-26 16:36:42 -04:00
Jason Bernardino Alonso
1c5194e2ec Allow pulls from an insecure registry
Signed-off-by: Jason Bernardino Alonso <jalonso@luminoso.com>
2014-09-26 16:36:36 -04:00
Aanand Prasad
537d435a28 Fix race condition in cli_test.py
We weren't waiting for the build to finish, causing the rest of the test
to occasionally fail when looking for the image.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-09-26 11:42:16 -07:00
Aanand Prasad
5d76d183b4 Merge pull request #483 from dnephin/sort_containers_by_name_in_ps
Sort containers in ps output by name
2014-09-26 10:18:08 -07:00
Aanand Prasad
d4b7ed94e1 Merge pull request #499 from bfirsh/wercker-badge
Add wercker badge to readme
2014-09-25 11:37:03 -07:00
Aanand Prasad
d978787fcc Merge pull request #439 from dnephin/faster_integration_tests
Faster integration testing
2014-09-24 17:34:52 -07:00
Ben Firshman
3535270ef0 Add wercker badge to readme
And removed PyPi badge because who cares.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-24 14:19:27 -07:00
Ben Firshman
b9eb55a225 Merge pull request #498 from bfirsh/mieciu-patch-1
Fix the broken URL
2014-09-24 21:53:57 +01:00
mieciu
35b217a0a4 Fix the broken URL
It was not redirecting to the proper page

Signed-off-by: Przemek Hejman <przemyslaw.hejman@gmail.com>
2014-09-24 13:52:08 -07:00
Aanand Prasad
c37dc558fb Merge pull request #497 from bfirsh/wercker
Add wercker.yml
2014-09-24 10:51:56 -07:00
Ben Firshman
7a8f5e10fd Add wercker.yml
Changed Dockerfile to run as root so it has access to
/var/run/docker.sock.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-23 10:40:51 -07:00
Daniel Nephin
192fce9153 Resolves #43 - sort containers in ps output by name, so services are grouped together.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-14 16:06:23 -04:00
Ben Firshman
fc4c35e977 Merge pull request #411 from Banno/fig-pull
adding "fig pull [SERVICE]" to pull service images
2014-09-10 23:57:20 +01:00
Luke Amdor
648c89768b adding 'fig pull' to cli docs
Signed-off-by: Luke Amdor <luke.amdor@gmail.com>
2014-09-10 16:20:14 -05:00
Daniel Nephin
e0b0801e87 Do less work during integration testing.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-09 22:00:21 -04:00
Daniel Nephin
71e7103662 Fix some tests failing with docker 1.1.2 and add a comment to recreate_container() explaining what it does.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-09 20:59:28 -04:00
Daniel Nephin
dbd723659b Add container.get() which removes the duplication of container.inspect() in every property, and provides a nicer interface for querying container data.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-09 20:59:28 -04:00
Aanand Prasad
6b221d5687 Merge pull request #477 from bfirsh/docker-py-0.5.0
Upgrade to docker-py 0.5.0
2014-09-08 14:20:12 -07:00
Ben Firshman
ce8ef23c09 Merge pull request #393 from marksteve/restart
Implement restart command (Closes #98)
2014-09-08 17:47:08 +01:00
Ben Firshman
b0159e5100 Upgrade to docker-py 0.5.0
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-08 09:23:01 -07:00
Ben Firshman
ee6bb9a252 Merge pull request #475 from bfirsh/fix-missing-six-package
Fix missing six package
2014-09-06 02:20:55 +01:00
Ben Firshman
866050937a Fix missing six package
I had some .pyc files kicking around, urgh.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-09-05 17:19:53 -07:00
Ben Firshman
41ee65b664 Merge branch 'dnephin-replace_packages_with_deps'
Closes #375
2014-09-05 11:55:17 -07:00
Daniel Nephin
7fd37c89b9 Remove fig.packages replace with real deps.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-05 11:44:49 -07:00
Aanand Prasad
8d3c9dccc5 Merge pull request #402 from dnephin/fig_ports.rebase
Fig port command
2014-09-05 11:09:40 -07:00
Daniel Nephin
f5d43b6452 Resolves #455 - volumes_from a service with no containers.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-04 22:21:49 -04:00
Daniel Nephin
c48ee5caef Add a new fig command for retrieving the locally bound port of a service.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-09-04 22:09:12 -04:00
Ben Firshman
2827786886 Merge pull request #364 from docker/non-numeric-link-alias
Non-numeric link alias
2014-09-04 20:39:18 +01:00
Aanand Prasad
7ad91f3f00 Merge pull request #442 from dnephin/fix_create_container_volumes
Additional validation for container volumes and ports.
2014-09-04 12:12:09 -07:00
Daniel Nephin
24044fa704 Some additional validation for container ports, and a couple extra test cases.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-08-31 02:00:58 -04:00
Daniel Nephin
07fa169fd2 Resolves #260, #301 and #449
Adds ~ support and ro mode support for volumes, along with some additional validation.

Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-08-31 02:00:52 -04:00
Daniel Nephin
8157f0887d Fix the return value of get_tty_width() it should return an int.
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-08-25 22:20:07 -04:00
Aanand Prasad
6dab8c1b89 Merge pull request #420 from bfirsh/inline-installation-commands
Collapse install instructions into single line
2014-08-21 17:28:08 -07:00
Ben Firshman
63bd05d40e Collapse install instructions into single line
For maximum copy and paste happiness

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-18 13:36:37 -07:00
Luke Amdor
e51851c884 adding "fig pull [SERVICE]" to pull service images
Fixes #158

Signed-off-by: Luke Amdor <luke.amdor@gmail.com>
2014-08-15 09:24:15 -05:00
Mark Steve Samson
e224c4caa4 Add integration test for restart command
Signed-off-by: Mark Steve Samson <hello@marksteve.com>
2014-08-13 10:01:27 +08:00
Aanand Prasad
dc857a7ad5 Merge pull request #404 from bfirsh/validate-dco
Validate DCO on Travis
2014-08-12 11:20:38 -07:00
Ben Firshman
aca0e42178 Validate DCO on Travis
Copied straight from Docker, replacing github.com/docker/docker
with github.com/docker/fig in .validate.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-12 10:58:16 -07:00
Ben Firshman
15037ce0e5 Merge pull request #397 from alunduil/master
Exclude tests package from installation.
2014-08-12 18:16:50 +01:00
Mark Steve Samson
9d55e01e2a Implement restart command (Closes #98)
Signed-off-by: Mark Steve Samson <hello@marksteve.com>
2014-08-12 10:07:20 +08:00
Aanand Prasad
8eee2bf913 Merge pull request #396 from dnephin/remove_extra_calls
Remove extra calls, test bug fixes
2014-08-11 11:54:57 -07:00
Alex Brandt
3965db9dff Exclude tests package from installation.
Installing the top-level tests package is asking for conflicts with
other python packages and isn't required to run fig.  This simply lets
find_packages know to ignore tests and any sub-packages.
2014-08-11 09:00:16 -05:00
Daniel Nephin
294453433d Remove extra calls to docker server.
Fix broken integration tests

Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-08-10 11:24:20 -04:00
Chris Corbyn
22f897ed09 Merge pull request #370 from dnephin/debug_option
Add a --debug flag for debugging docker calls
2014-08-10 21:36:17 +10:00
Daniel Nephin
df7c2cc43f Resolves #369, add verbose output on --verbose flag
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-08-09 21:05:54 -04:00
Aanand Prasad
f2bf7f9e0d Merge pull request #384 from timfreund/master
Enable monochrome output in the 'up' and 'logs' commands
2014-08-08 14:55:31 -07:00
Tim Freund
69c241ba12 Enable monochrome output in the 'up' and 'logs' commands
Some systems, like Jenkins or other build servers, cannot correctly
render ANSI color codes.  The '--no-color' option enables monochrome
output in the 'up' and 'logs' commands to improve readability in those
systems.

Signed-off-by: Tim Freund <tim@freunds.net>
2014-08-08 16:15:27 -04:00
Aanand Prasad
59e31ff544 Update docs to remove numeric suffix
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-08-08 13:08:27 -07:00
Aanand Prasad
62a4d214e8 Default link alias which is just the service name
Closes #37.

Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-08-08 13:05:42 -07:00
Aanand Prasad
73bd4aca74 Use hostnames everywhere in docs, add YAML note and deprecate env.md
Signed-off-by: Aanand Prasad <aanand.prasad@gmail.com>
2014-08-08 11:58:41 -07:00
Aanand Prasad
342ed948ec Merge pull request #379 from docker/use-library-python-image-for-dockerfile
Use library python image for dockerfile
2014-08-07 17:06:38 -07:00
Ben Firshman
3fc7ad3291 Don't use deprecated orchardup/python image
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 16:43:59 -07:00
Aanand Prasad
a39460d7b2 Merge pull request #392 from bfirsh/more-official-images
More official images
2014-08-07 16:42:36 -07:00
Chris Corbyn
6ab084a338 Merge pull request #390 from bfirsh/remove-integration-tests-from-ci
Remove integration tests from ci
2014-08-08 08:37:12 +10:00
Ben Firshman
406425a6b9 Use official images in rails tutorial
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:54:59 -07:00
Chris Corbyn
b690b0d20e Merge pull request #387 from dnephin/make_ps_work_on_jekins
tty width on jenkins
2014-08-08 07:54:00 +10:00
Ben Firshman
796df302dd Use official images on getting started guide
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:50:55 -07:00
Aanand Prasad
1346805bef Merge pull request #391 from bfirsh/repository-move
github.com/docker/fig
2014-08-07 14:49:26 -07:00
Ben Firshman
16744bc78b Switch to official images in readme
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:48:29 -07:00
Ben Firshman
fc3c12ad90 Update repository URL
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:42:49 -07:00
Ben Firshman
bbcbe9df9f Upload PyPi package manually
This never worked properly anyway.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:36:49 -07:00
Ben Firshman
1a240f50ae Remove integration tests from Travis
Orchard is going away and Travis will be able to run these soon
anyway.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:35:22 -07:00
Ben Firshman
47970761c5 Remove Orchard message from readme
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 14:19:27 -07:00
Aanand Prasad
df7bc8cbb8 Merge pull request #389 from orchardup/update-irc-channel
Update IRC channel to be #docker-fig
2014-08-07 13:43:12 -07:00
Ben Firshman
ba9d744293 Update IRC channel to be #docker-fig
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 13:39:27 -07:00
Aanand Prasad
d5854bb625 Merge pull request #388 from orchardup/dockerize
Point at Docker website and Docker IRC channel
2014-08-07 10:16:16 -07:00
Ben Firshman
e5f7690137 Point at Docker website and Docker IRC channel
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-07 10:15:01 -07:00
Daniel Nephin
b0f398caaa Resolves #386, tty width on jenkins
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-08-07 12:30:01 -04:00
Aanand Prasad
0a15e7fe9c Merge pull request #378 from orchardup/use-ruby-image-for-rails-tutorial
Use ruby image for rails docs
2014-08-06 16:51:04 -07:00
Aanand Prasad
255b9419dd Merge pull request #382 from orchardup/use-official-repositories-in-django-example
Use official images in Django example
2014-08-06 16:50:45 -07:00
Aanand Prasad
c63947b5e2 Merge pull request #383 from orchardup/use-official-repos-on-homepage
Use official repos on home page
2014-08-06 16:40:48 -07:00
Ben Firshman
b5d1979d58 Use official repos on home page
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-06 16:33:31 -07:00
Ben Firshman
94aa097bc3 Use official images in Django example
Much neater.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-06 16:30:27 -07:00
Aanand Prasad
72ce7ce374 Merge pull request #380 from orchardup/use-hostname-for-redis-on-home-page-example
Use hostname on home page example
2014-08-06 16:11:44 -07:00
Ben Firshman
6d64f20ad6 Use hostname on home page example
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-06 15:53:45 -07:00
Ben Firshman
506f54e9c3 Use ruby image for rails docs
Fixes #376

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
2014-08-06 13:42:56 -07:00
Chris Corbyn
90f5eda930 Merge pull request #367 from dnephin/fix_366_python_dependencies
Don't pin versions in setup.py install_requires
2014-08-05 08:17:23 +10:00
Daniel Nephin
c0450f7df0 Resolves #366, non-pinned versions in setup.py:install_requires
Signed-off-by: Daniel Nephin <dnephin@gmail.com>
2014-07-29 17:13:22 -07:00
68 changed files with 2017 additions and 2282 deletions

View File

@@ -1,30 +0,0 @@
language: python
python:
- '2.6'
- '2.7'
env:
  global:
  - secure: exbot0LTV/0Wic6ElKCrOZmh2ZrieuGwEqfYKf5rVuwu1sLngYRihh+lBL/hTwc79NSu829pbwiWfsQZrXbk/yvaS7avGR0CLDoipyPxlYa2/rfs/o4OdTZqXv0LcFmmd54j5QBMpWU1S+CYOwNkwas57trrvIpPbzWjMtfYzOU=
install:
- pip install .
- pip install -r requirements.txt
- pip install -r requirements-dev.txt
- sudo curl -L -o /usr/local/bin/orchard https://github.com/orchardup/go-orchard/releases/download/2.0.5/linux
- sudo chmod +x /usr/local/bin/orchard
before_script:
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then orchard hosts rm -f $TRAVIS_JOB_ID || true; fi'
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then orchard hosts create $TRAVIS_JOB_ID; fi'
script:
- nosetests tests/unit
- flake8 fig
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then script/travis-integration; fi'
after_script:
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then orchard hosts rm -f $TRAVIS_JOB_ID; fi'
deploy:
  provider: pypi
  user: orchard
  password:
    secure: M8UMupCLSsB1hV00Zn6ra8Vg81SCFBpbcRsa0nUw9kgXn9hOCESWYVHTqQ1ksWZOa8z6WMaqYtoosPKXGJQNf0wF/kEVDsMUeaZWOF/PqDkx1EwQ1diVfwlbN4/k0iX+Se7SrZfiWnJiAqiIPqToQipvLlJohqf8WwfPcVvILVE=
  on:
    tags: true
    repo: orchardup/fig

View File

@@ -1,6 +1,48 @@
Change log
==========
1.0.0 (2014-10-07)
------------------
The highlights:
- [Fig has joined Docker.](https://www.orchardup.com/blog/orchard-is-joining-docker) Fig will continue to be maintained, but we'll also be incorporating the best bits of Fig into Docker itself.
This means the GitHub repository has moved to [https://github.com/docker/fig](https://github.com/docker/fig) and our IRC channel is now #docker-fig on Freenode.
- Fig can be used with the [official Docker OS X installer](https://docs.docker.com/installation/mac/). Boot2Docker will mount the home directory from your host machine so volumes work as expected.
- Fig supports Docker 1.3.
- It is now possible to connect to the Docker daemon using TLS by using the `DOCKER_CERT_PATH` and `DOCKER_TLS_VERIFY` environment variables.
- There is a new `fig port` command which outputs the host port binding of a service, in a similar way to `docker port`.
- There is a new `fig pull` command which pulls the latest images for a service.
- There is a new `fig restart` command which restarts a service's containers.
- Fig creates multiple containers in a service by appending a number to the service name (e.g. `db_1`, `db_2`, etc.). As a convenience, Fig will now give the first container an alias of the service name (e.g. `db`).
This link alias is also a valid hostname and added to `/etc/hosts` so you can connect to linked services using their hostname. For example, instead of resolving the environment variables `DB_PORT_5432_TCP_ADDR` and `DB_PORT_5432_TCP_PORT`, you could just use the hostname `db` and port `5432` directly.
- Volume definitions now support `ro` mode, expanding `~` and expanding environment variables.
- `.dockerignore` is supported when building.
- The project name can be set with the `FIG_PROJECT_NAME` environment variable.
- The `--env` and `--entrypoint` options have been added to `fig run`.
- The Fig binary for Linux is now linked against an older version of glibc so it works on CentOS 6 and Debian Wheezy.
Other things:
- `fig ps` now works on Jenkins and makes fewer API calls to the Docker daemon.
- `--verbose` displays more useful debugging output.
- When starting a service where `volumes_from` points to a service without any containers running, that service will now be started.
- Lots of docs improvements. Notably, environment variables are documented and official repositories are used throughout.
0.5.2 (2014-07-28)
------------------
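To make the 1.0.0 link-alias change above concrete, here is a minimal Python sketch (not part of this diff): application code can reach a linked service by hostname instead of the old `DB_1_PORT_5432_TCP_*` variables. It assumes a linked `db` service running the stock `postgres` image with its default `postgres` user.

```
# Connect to the linked "db" service by hostname, as the 1.0.0
# changelog describes. Host "db" and port 5432 come from the link
# alias and the exposed port; the credentials are the postgres
# image defaults and are assumptions here.
import psycopg2

conn = psycopg2.connect(host='db', port=5432, user='postgres', dbname='postgres')
cur = conn.cursor()
cur.execute('SELECT 1')
print(cur.fetchone())  # (1,)
conn.close()
```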

View File

@@ -6,7 +6,7 @@ If you're looking contribute to [Fig](http://www.fig.sh/)
but you're new to the project or maybe even to Python, here are the steps
that should get you started.
1. Fork [https://github.com/orchardup/fig](https://github.com/orchardup/fig) to your username. kvz in this example.
1. Fork [https://github.com/docker/fig](https://github.com/docker/fig) to your username. kvz in this example.
1. Clone your forked repository locally `git clone git@github.com:kvz/fig.git`.
1. Enter the local directory `cd fig`.
1. Set up a development environment `python setup.py develop`. That will install the dependencies and set up a symlink from your `fig` executable to the checkout of the repo. So from any of your fig projects, `fig` now refers to your development project. Time to start hacking : )
@@ -86,8 +86,13 @@ The easiest way to do this is to use the `--signoff` flag when committing. E.g.:
3. Build Linux version on any Docker host with `script/build-linux` and attach to release
4. Build OS X version on Mountain Lion with `script/build-osx` and attach to release
4. Build OS X version on Mountain Lion with `script/build-osx` and attach to release as `fig-Darwin-x86_64` and `fig-Linux-x86_64`.
5. Publish GitHub release, creating tag
6. Update website with `script/deploy-docs`
7. Upload PyPi package
$ git checkout $VERSION
$ python setup.py sdist upload

View File

@@ -1,11 +1,15 @@
FROM orchardup/python:2.7
ADD requirements.txt /code/
FROM debian:wheezy
RUN apt-get update -qq && apt-get install -qy python python-pip python-dev git && apt-get clean
RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD requirements-dev.txt /code/
RUN pip install -r requirements-dev.txt
ADD . /code/
RUN python setup.py develop
RUN useradd -d /home/user -m -s /bin/bash user
RUN python setup.py install
RUN chown -R user /code/
USER user

View File

@@ -1,14 +1,13 @@
Fig
===
[![Build Status](https://travis-ci.org/orchardup/fig.svg?branch=master)](https://travis-ci.org/orchardup/fig)
[![PyPI version](https://badge.fury.io/py/fig.png)](http://badge.fury.io/py/fig)
[![wercker status](https://app.wercker.com/status/d5dbac3907301c3d5ce735e2d5e95a5b/s/master "wercker status")](https://app.wercker.com/project/bykey/d5dbac3907301c3d5ce735e2d5e95a5b)
Fast, isolated development environments using Docker.
Define your app's environment with Docker so it can be reproduced anywhere:
FROM orchardup/python:2.7
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
@@ -25,7 +24,7 @@ web:
- "8000:8000"
- "49100:22"
db:
image: orchardup/postgresql
image: postgres
```
(No more installing Postgres on your laptop!)
@@ -41,8 +40,6 @@ There are commands to:
- tail running services' log output
- run a one-off command on a service
Fig is a project from [Orchard](https://orchardup.com), a Docker hosting service. [Follow us on Twitter](https://twitter.com/orchardup) to keep up to date with Fig and other Docker news.
Installation and documentation
------------------------------

View File

@@ -44,15 +44,14 @@
</ul>
</ul>
<ul class="nav">
<li><a href="https://github.com/orchardup/fig">Fig on GitHub</a></li>
<li><a href="http://webchat.freenode.net/?channels=%23orchardup&uio=d4">#orchardup on Freenode</a></li>
<li><a href="https://github.com/docker/fig">Fig on GitHub</a></li>
<li><a href="http://webchat.freenode.net/?channels=%23docker-fig&uio=d4">#docker-fig on Freenode</a></li>
</ul>
<p>Fig is a project from <a href="https://www.orchardup.com">Orchard</a>, a Docker hosting service.</p>
<p><a href="https://twitter.com/orchardup">Follow us on Twitter</a> to keep up to date with Fig and other Docker news.</p>
<p>Fig is a project from <a href="https://www.docker.com">Docker</a>.</p>
<div class="badges">
<iframe src="http://ghbtns.com/github-btn.html?user=orchardup&amp;repo=fig&amp;type=watch&amp;count=true" allowtransparency="true" frameborder="0" scrolling="0" width="100" height="20"></iframe>
<iframe src="http://ghbtns.com/github-btn.html?user=docker&amp;repo=fig&amp;type=watch&amp;count=true" allowtransparency="true" frameborder="0" scrolling="0" width="100" height="20"></iframe>
<a href="https://twitter.com/share" class="twitter-share-button" data-url="http://orchardup.github.io/fig/">Tweet</a>
</div>
</div>

View File

@@ -10,34 +10,44 @@ Most commands are run against one or more services. If the service is omitted, i
Run `fig [COMMAND] --help` for full usage.
## build
## Commands
### build
Build or rebuild services.
Services are built once and then tagged as `project_service`, e.g. `figtest_db`. If you change a service's `Dockerfile` or the contents of its build directory, you can run `fig build` to rebuild it.
## help
### help
Get help on a command.
## kill
### kill
Force stop service containers.
## logs
### logs
View output from services.
## ps
### port
Print the public port for a port binding
### ps
List containers.
## rm
### pull
Pulls service images.
### rm
Remove stopped service containers.
## run
### run
Run a one-off command on a service.
@@ -51,13 +61,13 @@ One-off commands are started in new containers with the same config as a normal
Links are also created between one-off commands and the other containers for that service so you can do stuff like this:
$ fig run db /bin/sh -c "psql -h \$DB_1_PORT_5432_TCP_ADDR -U docker"
$ fig run db psql -h db -U docker
If you do not want linked containers to be started when running the one-off command, specify the `--no-deps` flag:
$ fig run --no-deps web python manage.py shell
## scale
### scale
Set number of containers to run for a service.
@@ -66,15 +76,15 @@ For example:
$ fig scale web=2 worker=3
## start
### start
Start existing containers for a service.
## stop
### stop
Stop running containers without removing them. They can be started again with `fig start`.
## up
### up
Build, (re)create, start and attach to containers for a service.
@@ -85,3 +95,30 @@ By default, `fig up` will aggregate the output of each container, and when it ex
By default if there are existing containers for a service, `fig up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `fig.yml` are picked up. If you do not want containers to be stopped and recreated, use `fig up --no-recreate`. This will still start any stopped containers, if needed.
[volumes-from]: http://docs.docker.io/en/latest/use/working_with_volumes/
## Environment Variables
Several environment variables can be used to configure Fig's behaviour.
Variables starting with `DOCKER_` are the same as those used to configure the Docker command-line client. If you're using boot2docker, `$(boot2docker shellinit)` will set them to their correct values.
### FIG\_PROJECT\_NAME
Set the project name, which is prepended to the name of every container started by Fig. Defaults to the `basename` of the current working directory.
### FIG\_FILE
Set the path to the `fig.yml` to use. Defaults to `fig.yml` in the current working directory.
### DOCKER\_HOST
Set the URL to the docker daemon. Defaults to `unix:///var/run/docker.sock`, as with the docker client.
### DOCKER\_TLS\_VERIFY
When set to anything other than an empty string, enables TLS communication with the daemon.
### DOCKER\_CERT\_PATH
Configure the path to the `ca.pem`, `cert.pem` and `key.pem` files used for TLS verification. Defaults to `~/.docker`.
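As a rough illustration of the defaulting rules documented above, a short Python sketch (not Fig's own code); the variable names and defaults are taken from the text.

```
# Resolve Fig/Docker settings from the environment, falling back to
# the documented defaults.
import os

config_path = os.environ.get('FIG_FILE', 'fig.yml')
project_name = os.environ.get('FIG_PROJECT_NAME') or os.path.basename(os.getcwd())
docker_host = os.environ.get('DOCKER_HOST', 'unix:///var/run/docker.sock')
tls_enabled = os.environ.get('DOCKER_TLS_VERIFY', '') != ''
cert_path = os.environ.get('DOCKER_CERT_PATH') or os.path.expanduser('~/.docker')

print(config_path, project_name, docker_host, tls_enabled, cert_path)
```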

View File

@@ -10,9 +10,8 @@ Let's use Fig to set up and run a Django/PostgreSQL app. Before starting, you'll
Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:
FROM orchardup/python:2.7
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN apt-get update -qq && apt-get install -y python-psycopg2
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
@@ -24,11 +23,12 @@ That'll install our application inside an image with Python installed alongside
Second, we define our Python dependencies in a file called `requirements.txt`:
Django
psycopg2
Simple enough. Finally, this is all tied together with a file called `fig.yml`. It describes the services that our app comprises of (a web server and database), what Docker images they use, how they link together, what volumes will be mounted inside the containers and what ports they expose.
db:
image: orchardup/postgresql
image: postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
@@ -57,15 +57,14 @@ First thing we need to do is set up the database connection. Replace the `DATABA
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'docker',
'USER': 'docker',
'PASSWORD': 'docker',
'HOST': os.environ.get('DB_1_PORT_5432_TCP_ADDR'),
'PORT': os.environ.get('DB_1_PORT_5432_TCP_PORT'),
'NAME': 'postgres',
'USER': 'postgres',
'HOST': 'db',
'PORT': 5432,
}
}
These settings are determined by the [orchardup/postgresql](https://github.com/orchardup/docker-postgresql) Docker image we are using.
These settings are determined by the [postgres](https://registry.hub.docker.com/_/postgres/) Docker image we are using.
Then, run `fig up`:
@@ -84,7 +83,7 @@ Then, run `fig up`:
myapp_web_1 | Starting development server at http://0.0.0.0:8000/
myapp_web_1 | Quit the server with CONTROL-C.
And your Django app should be running at [localhost:8000](http://localhost:8000) (or [localdocker:8000](http://localdocker:8000) if you're using docker-osx).
And your Django app should be running at port 8000 on your docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).
You can also run management commands with Docker. To set up your database, for example, run `fig up` and in another terminal run:

View File

@@ -6,26 +6,28 @@ title: Fig environment variables reference
Environment variables reference
===============================
**Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [fig.yml documentation](yml.html#links) for details.
Fig uses [Docker links] to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container.
To see what environment variables are available to a service, run `fig run SERVICE env`.
<b><i>name</i>\_PORT</b><br>
Full URL, e.g. `DB_1_PORT=tcp://172.17.0.5:5432`
Full URL, e.g. `DB_PORT=tcp://172.17.0.5:5432`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i></b><br>
Full URL, e.g. `DB_1_PORT_5432_TCP=tcp://172.17.0.5:5432`
Full URL, e.g. `DB_PORT_5432_TCP=tcp://172.17.0.5:5432`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i>\_ADDR</b><br>
Container's IP address, e.g. `DB_1_PORT_5432_TCP_ADDR=172.17.0.5`
Container's IP address, e.g. `DB_PORT_5432_TCP_ADDR=172.17.0.5`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i>\_PORT</b><br>
Exposed port number, e.g. `DB_1_PORT_5432_TCP_PORT=5432`
Exposed port number, e.g. `DB_PORT_5432_TCP_PORT=5432`
<b><i>name</i>\_PORT\_<i>num</i>\_<i>protocol</i>\_PROTO</b><br>
Protocol (tcp or udp), e.g. `DB_1_PORT_5432_TCP_PROTO=tcp`
Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp`
<b><i>name</i>\_NAME</b><br>
Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`
[Docker links]: http://docs.docker.io/en/latest/use/port_redirection/#linking-a-container
[Docker links]: http://docs.docker.com/userguide/dockerlinks/
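A small Python sketch (not part of this diff) of reading these link variables from inside a container, for example via `fig run web env`; as the note at the top of this page says, connecting by hostname is now the preferred approach.

```
# Read the injected link variables for a service linked as "db".
# The example values in the comments mirror the documentation above.
import os

db_url = os.environ.get('DB_PORT')                 # e.g. tcp://172.17.0.5:5432
db_addr = os.environ.get('DB_PORT_5432_TCP_ADDR')  # e.g. 172.17.0.5
db_port = os.environ.get('DB_PORT_5432_TCP_PORT')  # e.g. 5432
print(db_url, db_addr, db_port)
```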

View File

@@ -7,7 +7,7 @@ title: Fig | Fast, isolated development environments using Docker
Define your app's environment with Docker so it can be reproduced anywhere:
FROM orchardup/python:2.7
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
@@ -23,7 +23,7 @@ web:
ports:
- "8000:8000"
db:
image: orchardup/postgresql
image: postgres
```
(No more installing Postgres on your laptop!)
@@ -59,10 +59,7 @@ from flask import Flask
from redis import Redis
import os
app = Flask(__name__)
redis = Redis(
host=os.environ.get('REDIS_1_PORT_6379_TCP_ADDR'),
port=int(os.environ.get('REDIS_1_PORT_6379_TCP_PORT'))
)
redis = Redis(host='redis', port=6379)
@app.route('/')
def hello():
@@ -80,7 +77,7 @@ We define our Python dependencies in a file called `requirements.txt`:
Next, we want to create a Docker image containing all of our app's dependencies. We specify how to build one using a file called `Dockerfile`:
FROM orchardup/python:2.7
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
@@ -99,24 +96,24 @@ We then define a set of services using `fig.yml`:
links:
- redis
redis:
image: orchardup/redis
image: redis
This defines two services:
- `web`, which is built from `Dockerfile` in the current directory. It also says to run the command `python app.py` inside the image, forward the exposed port 5000 on the container to port 5000 on the host machine, connect up the Redis service, and mount the current directory inside the container so we can work on code without having to rebuild the image.
- `redis`, which uses the public image [orchardup/redis](https://index.docker.io/u/orchardup/redis/).
- `redis`, which uses the public image [redis](https://registry.hub.docker.com/_/redis/).
Now if we run `fig up`, it'll pull a Redis image, build an image for our own code, and start everything up:
$ fig up
Pulling image orchardup/redis...
Pulling image redis...
Building web...
Starting figtest_redis_1...
Starting figtest_web_1...
redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
web_1 | * Running on http://0.0.0.0:5000/
Open up [http://localhost:5000](http://localhost:5000) in your browser (or [http://localdocker:5000](http://localdocker:5000) if you're using [docker-osx](https://github.com/noplay/docker-osx)) and you should see it running!
The web app should now be listening on port 5000 on your docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).
If you want to run your services in the background, you can pass the `-d` flag to `fig up` and use `fig ps` to see what is currently running:
@@ -140,4 +137,4 @@ If you started Fig with `fig up -d`, you'll probably want to stop your services
$ fig stop
That's more-or-less how Fig works. See the reference section below for full details on the commands, configuration file and environment variables. If you have any thoughts or suggestions, [open an issue on GitHub](https://github.com/orchardup/fig) or [email us](mailto:hello@orchardup.com).
That's more-or-less how Fig works. See the reference section below for full details on the commands, configuration file and environment variables. If you have any thoughts or suggestions, [open an issue on GitHub](https://github.com/docker/fig).

View File

@@ -6,25 +6,21 @@ title: Installing Fig
Installing Fig
==============
First, install Docker version 1.0 or greater. If you're on OS X, you can use [docker-osx](https://github.com/noplay/docker-osx):
First, install Docker version 1.3 or greater.
$ curl https://raw.githubusercontent.com/noplay/docker-osx/1.1.1/docker-osx > /usr/local/bin/docker-osx
$ chmod +x /usr/local/bin/docker-osx
$ docker-osx shell
If you're on OS X, you can use the [OS X installer](https://docs.docker.com/installation/mac/) to install both Docker and boot2docker. Once boot2docker is running, set the environment variables that'll configure Docker and Fig to talk to it:
Docker has guides for [Ubuntu](http://docs.docker.io/en/latest/installation/ubuntulinux/) and [other platforms](http://docs.docker.io/en/latest/installation/) in their documentation.
$(boot2docker shellinit)
Next, install Fig. On OS X:
To persist the environment variables across shell sessions, you can add that line to your `~/.bashrc` file.
$ curl -L https://github.com/orchardup/fig/releases/download/0.5.2/darwin > /usr/local/bin/fig
$ chmod +x /usr/local/bin/fig
There are also guides for [Ubuntu](https://docs.docker.com/installation/ubuntulinux/) and [other platforms](https://docs.docker.com/installation/) in Docker's documentation.
On 64-bit Linux:
Next, install Fig:
$ curl -L https://github.com/orchardup/fig/releases/download/0.5.2/linux > /usr/local/bin/fig
$ chmod +x /usr/local/bin/fig
curl -L https://github.com/docker/fig/releases/download/1.0.0/fig-`uname -s`-`uname -m` > /usr/local/bin/fig; chmod +x /usr/local/bin/fig
Fig is also available as a Python package if you're on another platform (or if you prefer that sort of thing):
Releases are available for OS X and 64-bit Linux. Fig is also available as a Python package if you're on another platform (or if you prefer that sort of thing):
$ sudo pip install -U fig

View File

@@ -10,7 +10,7 @@ We're going to use Fig to set up and run a Rails/PostgreSQL app. Before starting
Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:
FROM binaryphile/ruby:2.0.0-p247
FROM ruby
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir /myapp
WORKDIR /myapp
@@ -28,7 +28,7 @@ Next, we have a bootstrap `Gemfile` which just loads Rails. It'll be overwritten
Finally, `fig.yml` is where the magic happens. It describes what services our app comprises (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration we need to link them together and expose the web app's port.
db:
image: orchardup/postgresql
image: postgres
ports:
- "5432"
web:
@@ -62,19 +62,18 @@ Now that we've got a new `Gemfile`, we need to build the image again. (This, and
$ fig build
The app is now bootable, but we're not quite there yet. By default, Rails expects a database to be running on `localhost` - we need to point it at the `db` container instead. We also need to change the username and password to align with the defaults set by `orchardup/postgresql`.
The app is now bootable, but we're not quite there yet. By default, Rails expects a database to be running on `localhost` - we need to point it at the `db` container instead. We also need to change the database and username to align with the defaults set by the `postgres` image.
Open up your newly-generated `database.yml`. Replace its contents with the following:
development: &default
adapter: postgresql
encoding: unicode
database: myapp_development
database: postgres
pool: 5
username: docker
password: docker
host: <%= ENV.fetch('DB_1_PORT_5432_TCP_ADDR', 'localhost') %>
port: <%= ENV.fetch('DB_1_PORT_5432_TCP_PORT', '5432') %>
username: postgres
password:
host: db
test:
<<: *default
@@ -94,6 +93,6 @@ Finally, we just need to create the database. In another terminal, run:
$ fig run web rake db:create
And we're rolling—see for yourself at [localhost:3000](http://localhost:3000) (or [localdocker:3000](http://localdocker:3000) if you're using docker-osx).
And we're rolling—your app should now be running on port 3000 on your docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).
![Screenshot of Rails' stock index.html](https://orchardup.com/static/images/fig-rails-screenshot.png)

View File

@@ -44,7 +44,7 @@ Two supporting files are needed to get this working - first up, `wp-config.php`
define('DB_NAME', 'wordpress');
define('DB_USER', 'root');
define('DB_PASSWORD', '');
define('DB_HOST', getenv("DB_1_PORT_3306_TCP_ADDR") . ":" . getenv("DB_1_PORT_3306_TCP_PORT"));
define('DB_HOST', "db:3306");
define('DB_CHARSET', 'utf8');
define('DB_COLLATE', '');
@@ -88,4 +88,4 @@ if(file_exists($root.$path))
}else include_once 'index.php';
```
With those four files in place, run `fig up` inside your Wordpress directory and it'll pull and build the images we need, and then start the web and database containers. You'll then be able to visit Wordpress and set it up by visiting [localhost:8000](http://localhost:8000) - or [localdocker:8000](http://localdocker:8000) if you're using docker-osx.
With those four files in place, run `fig up` inside your Wordpress directory and it'll pull and build the images we need, and then start the web and database containers. You'll then be able to visit Wordpress at port 8000 on your docker daemon (if you're using boot2docker, `boot2docker ip` will tell you its address).

View File

@@ -36,10 +36,10 @@ Override the default command.
command: bundle exec thin -p 3000
```
<a name="links"></a>
### links
Link to containers in another service. Optionally specify an alternate name for the link, which will determine how environment variables are prefixed, e.g. `db` -> `DB_1_PORT`, `db:database` -> `DATABASE_1_PORT`
Link to containers in another service. Either specify both the service name and the link alias (`SERVICE:ALIAS`), or just the service name (which will also be used for the alias).
```
links:
@@ -48,6 +48,16 @@ links:
- redis
```
An entry with the alias' name will be created in `/etc/hosts` inside containers for this service, e.g:
```
172.17.2.186 db
172.17.2.186 database
172.17.2.187 redis
```
Environment variables will also be created - see the [environment variable reference](env.html) for details.
### ports
Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container port (a random host port will be chosen).
@@ -74,14 +84,14 @@ expose:
### volumes
Mount paths as volumes, optionally specifying a path on the host machine (`HOST:CONTAINER`).
Note: Mapping local volumes is currently unsupported on boot2docker. We recommend you use [docker-osx](https://github.com/noplay/docker-osx) if want to map local volumes.
Mount paths as volumes, optionally specifying a path on the host machine
(`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`).
```
volumes:
- /var/lib/mysql
- cache/:/tmp/cache
- ~/configs:/etc/configs/:ro
```
### volumes_from

View File

@@ -1,4 +1,4 @@
from __future__ import unicode_literals
from .service import Service # noqa:flake8
__version__ = '0.5.2'
__version__ = '1.0.0'

View File

@@ -1,20 +1,21 @@
from __future__ import unicode_literals
from __future__ import absolute_import
from ..packages.docker import Client
from requests.exceptions import ConnectionError
from requests.exceptions import ConnectionError, SSLError
import errno
import logging
import os
import re
import yaml
from ..packages import six
import six
from ..project import Project
from ..service import ConfigError
from .docopt_command import DocoptCommand
from .formatter import Formatter
from .utils import cached_property, docker_url, call_silently, is_mac, is_ubuntu
from .utils import call_silently, is_mac, is_ubuntu
from .docker_client import docker_client
from . import verbose_proxy
from . import errors
from .. import __version__
log = logging.getLogger(__name__)
@@ -22,13 +23,11 @@ log = logging.getLogger(__name__)
class Command(DocoptCommand):
base_dir = '.'
def __init__(self):
self._yaml_path = os.environ.get('FIG_FILE', None)
self.explicit_project_name = None
def dispatch(self, *args, **kwargs):
try:
super(Command, self).dispatch(*args, **kwargs)
except SSLError, e:
raise errors.UserError('SSL error: %s' % e)
except ConnectionError:
if call_silently(['which', 'docker']) != 0:
if is_mac():
@@ -37,63 +36,74 @@ class Command(DocoptCommand):
raise errors.DockerNotFoundUbuntu()
else:
raise errors.DockerNotFoundGeneric()
elif call_silently(['which', 'docker-osx']) == 0:
raise errors.ConnectionErrorDockerOSX()
elif call_silently(['which', 'boot2docker']) == 0:
raise errors.ConnectionErrorBoot2Docker()
else:
raise errors.ConnectionErrorGeneric(self.client.base_url)
raise errors.ConnectionErrorGeneric(self.get_client().base_url)
def perform_command(self, options, *args, **kwargs):
if options['--file'] is not None:
self.yaml_path = os.path.join(self.base_dir, options['--file'])
if options['--project-name'] is not None:
self.explicit_project_name = options['--project-name']
return super(Command, self).perform_command(options, *args, **kwargs)
def perform_command(self, options, handler, command_options):
explicit_config_path = options.get('--file') or os.environ.get('FIG_FILE')
project = self.get_project(
self.get_config_path(explicit_config_path),
project_name=options.get('--project-name'),
verbose=options.get('--verbose'))
@cached_property
def client(self):
return Client(docker_url())
handler(project, command_options)
@cached_property
def project(self):
def get_client(self, verbose=False):
client = docker_client()
if verbose:
version_info = six.iteritems(client.version())
log.info("Fig version %s", __version__)
log.info("Docker base_url: %s", client.base_url)
log.info("Docker version: %s",
", ".join("%s=%s" % item for item in version_info))
return verbose_proxy.VerboseProxy('docker', client)
return client
def get_config(self, config_path):
try:
config = yaml.safe_load(open(self.yaml_path))
with open(config_path, 'r') as fh:
return yaml.safe_load(fh)
except IOError as e:
if e.errno == errno.ENOENT:
raise errors.FigFileNotFound(os.path.basename(e.filename))
raise errors.UserError(six.text_type(e))
def get_project(self, config_path, project_name=None, verbose=False):
try:
return Project.from_config(self.project_name, config, self.client)
return Project.from_config(
self.get_project_name(config_path, project_name),
self.get_config(config_path),
self.get_client(verbose=verbose))
except ConfigError as e:
raise errors.UserError(six.text_type(e))
@cached_property
def project_name(self):
project = os.path.basename(os.path.dirname(os.path.abspath(self.yaml_path)))
if self.explicit_project_name is not None:
project = self.explicit_project_name
project = re.sub(r'[^a-zA-Z0-9]', '', project)
if not project:
project = 'default'
return project
def get_project_name(self, config_path, project_name=None):
def normalize_name(name):
return re.sub(r'[^a-zA-Z0-9]', '', name)
@cached_property
def formatter(self):
return Formatter()
project_name = project_name or os.environ.get('FIG_PROJECT_NAME')
if project_name is not None:
return normalize_name(project_name)
@cached_property
def yaml_path(self):
if self._yaml_path is not None:
return self._yaml_path
elif os.path.exists(os.path.join(self.base_dir, 'fig.yaml')):
project = os.path.basename(os.path.dirname(os.path.abspath(config_path)))
if project:
return normalize_name(project)
log.warning("Fig just read the file 'fig.yaml' on startup, rather than 'fig.yml'")
log.warning("Please be aware that fig.yml the expected extension in most cases, and using .yaml can cause compatibility issues in future")
return 'default'
def get_config_path(self, file_path=None):
if file_path:
return os.path.join(self.base_dir, file_path)
if os.path.exists(os.path.join(self.base_dir, 'fig.yaml')):
log.warning("Fig just read the file 'fig.yaml' on startup, rather "
"than 'fig.yml'")
log.warning("Please be aware that fig.yml the expected extension "
"in most cases, and using .yaml can cause compatibility "
"issues in future")
return os.path.join(self.base_dir, 'fig.yaml')
else:
return os.path.join(self.base_dir, 'fig.yml')
@yaml_path.setter
def yaml_path(self, value):
self._yaml_path = value
return os.path.join(self.base_dir, 'fig.yml')
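A standalone sketch (not from this diff) of the project-name rule in `get_project_name()` above: non-alphanumeric characters are stripped, and an empty result falls back to `default`.

```
import re

def normalize_name(name):
    # Same normalisation as get_project_name(): keep letters and digits only.
    return re.sub(r'[^a-zA-Z0-9]', '', name)

print(normalize_name('my-app'))             # myapp
print(normalize_name('fig_test'))           # figtest
print(normalize_name('---') or 'default')   # default
```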

fig/cli/docker_client.py (new file, 34 lines)
View File

@@ -0,0 +1,34 @@
from docker import Client
from docker import tls
import ssl
import os
def docker_client():
    """
    Returns a docker-py client configured using environment variables
    according to the same logic as the official Docker client.
    """
    cert_path = os.environ.get('DOCKER_CERT_PATH', '')
    if cert_path == '':
        cert_path = os.path.join(os.environ.get('HOME'), '.docker')

    base_url = os.environ.get('DOCKER_HOST')
    tls_config = None

    if os.environ.get('DOCKER_TLS_VERIFY', '') != '':
        parts = base_url.split('://', 1)
        base_url = '%s://%s' % ('https', parts[1])

        client_cert = (os.path.join(cert_path, 'cert.pem'), os.path.join(cert_path, 'key.pem'))
        ca_cert = os.path.join(cert_path, 'ca.pem')

        tls_config = tls.TLSConfig(
            ssl_version=ssl.PROTOCOL_TLSv1,
            verify=True,
            assert_hostname=False,
            client_cert=client_cert,
            ca_cert=ca_cert,
        )

    return Client(base_url=base_url, tls=tls_config)
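A possible usage sketch for the new `docker_client()` helper; the host address and certificate path below are illustrative boot2docker-style values, not anything from this changeset.

```
import os
from fig.cli.docker_client import docker_client

# Example values only; normally these would come from `$(boot2docker shellinit)`.
os.environ.setdefault('DOCKER_HOST', 'tcp://192.168.59.103:2376')
os.environ.setdefault('DOCKER_TLS_VERIFY', '1')
os.environ.setdefault('DOCKER_CERT_PATH',
                      os.path.expanduser('~/.boot2docker/certs/boot2docker-vm'))

client = docker_client()
print(client.base_url)   # becomes https://... once TLS verification is enabled
print(client.version())  # queries the daemon over TLS
```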

View File

@@ -23,7 +23,7 @@ class DocoptCommand(object):
def dispatch(self, argv, global_options):
self.perform_command(*self.parse(argv, global_options))
def perform_command(self, options, command, handler, command_options):
def perform_command(self, options, handler, command_options):
handler(command_options)
def parse(self, argv, global_options):
@@ -43,7 +43,7 @@ class DocoptCommand(object):
raise NoSuchCommand(command, self)
command_options = docopt_full_help(docstring, options['ARGS'], options_first=True)
return (options, command, handler, command_options)
return options, handler, command_options
class NoSuchCommand(Exception):

View File

@@ -9,6 +9,8 @@ class UserError(Exception):
def __unicode__(self):
return self.msg
__str__ = __unicode__
class DockerNotFoundMac(UserError):
def __init__(self):
@@ -37,10 +39,10 @@ class DockerNotFoundGeneric(UserError):
""")
class ConnectionErrorDockerOSX(UserError):
class ConnectionErrorBoot2Docker(UserError):
def __init__(self):
super(ConnectionErrorDockerOSX, self).__init__("""
Couldn't connect to Docker daemon - you might need to run `docker-osx shell`.
super(ConnectionErrorBoot2Docker, self).__init__("""
Couldn't connect to Docker daemon - you might need to run `boot2docker up`.
""")

View File

@@ -4,11 +4,17 @@ import os
import texttable
def get_tty_width():
tty_size = os.popen('stty size', 'r').read().split()
if len(tty_size) != 2:
return 80
_, width = tty_size
return int(width)
class Formatter(object):
def table(self, headers, rows):
height, width = os.popen('stty size', 'r').read().split()
table = texttable.Texttable(max_width=width)
table = texttable.Texttable(max_width=get_tty_width())
table.set_cols_dtype(['t' for h in headers])
table.add_rows([headers] + rows)
table.set_deco(table.HEADER)

View File

@@ -10,11 +10,11 @@ from .utils import split_buffer
class LogPrinter(object):
def __init__(self, containers, attach_params=None, output=sys.stdout):
def __init__(self, containers, attach_params=None, output=sys.stdout, monochrome=False):
self.containers = containers
self.attach_params = attach_params or {}
self.prefix_width = self._calculate_prefix_width(containers)
self.generators = self._make_log_generators()
self.generators = self._make_log_generators(monochrome)
self.output = output
def run(self):
@@ -35,12 +35,15 @@ class LogPrinter(object):
prefix_width = max(prefix_width, len(container.name_without_project))
return prefix_width
def _make_log_generators(self):
def _make_log_generators(self, monochrome):
color_fns = cycle(colors.rainbow())
generators = []
for container in self.containers:
color_fn = color_fns.next()
if monochrome:
color_fn = lambda s: s
else:
color_fn = color_fns.next()
generators.append(self._make_log_generator(container, color_fn))
return generators

View File

@@ -4,9 +4,10 @@ import logging
import sys
import re
import signal
from operator import attrgetter
from inspect import getdoc
import dockerpty
from fig.packages import dockerpty
from .. import __version__
from ..project import NoSuchService, ConfigurationError
@@ -16,7 +17,7 @@ from .formatter import Formatter
from .log_printer import LogPrinter
from .utils import yesno
from ..packages.docker.errors import APIError
from docker.errors import APIError
from .errors import UserError
from .docopt_command import NoSuchCommand
@@ -84,12 +85,15 @@ class TopLevelCommand(Command):
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
restart Restart services
up Create and start containers
"""
@@ -98,7 +102,7 @@ class TopLevelCommand(Command):
options['version'] = "fig %s" % __version__
return options
def build(self, options):
def build(self, project, options):
"""
Build or rebuild services.
@@ -112,9 +116,9 @@ class TopLevelCommand(Command):
--no-cache Do not use cache when building the image.
"""
no_cache = bool(options.get('--no-cache', False))
self.project.build(service_names=options['SERVICE'], no_cache=no_cache)
project.build(service_names=options['SERVICE'], no_cache=no_cache)
def help(self, options):
def help(self, project, options):
"""
Get help on a command.
@@ -125,25 +129,50 @@ class TopLevelCommand(Command):
raise NoSuchCommand(command, self)
raise SystemExit(getdoc(getattr(self, command)))
def kill(self, options):
def kill(self, project, options):
"""
Force stop service containers.
Usage: kill [SERVICE...]
"""
self.project.kill(service_names=options['SERVICE'])
project.kill(service_names=options['SERVICE'])
def logs(self, options):
def logs(self, project, options):
"""
View output from containers.
Usage: logs [SERVICE...]
"""
containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
print("Attaching to", list_containers(containers))
LogPrinter(containers, attach_params={'logs': True}).run()
Usage: logs [options] [SERVICE...]
def ps(self, options):
Options:
--no-color Produce monochrome output.
"""
containers = project.containers(service_names=options['SERVICE'], stopped=True)
monochrome = options['--no-color']
print("Attaching to", list_containers(containers))
LogPrinter(containers, attach_params={'logs': True}, monochrome=monochrome).run()
def port(self, project, options):
"""
Print the public port for a port binding.
Usage: port [options] SERVICE PRIVATE_PORT
Options:
--protocol=proto tcp or udp (defaults to tcp)
--index=index index of the container if there are multiple
instances of a service (defaults to 1)
"""
service = project.get_service(options['SERVICE'])
try:
container = service.get_container(number=options.get('--index') or 1)
except ValueError as e:
raise UserError(str(e))
print(container.get_local_port(
options['PRIVATE_PORT'],
protocol=options.get('--protocol') or 'tcp') or '')
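`fig port SERVICE PRIVATE_PORT` resolves the host-side binding from the container's inspect data via `get_local_port()` (added to Container further down in this diff). A sketch of that lookup over hypothetical inspect data:

    ports = {  # hypothetical NetworkSettings.Ports from `docker inspect`
        "8000/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49153"}],
        "5432/tcp": None,  # exposed but not published to the host
    }

    def get_local_port(ports, private_port, protocol="tcp"):
        binding = ports.get("%s/%s" % (private_port, protocol))
        return "{HostIp}:{HostPort}".format(**binding[0]) if binding else None

    print(get_local_port(ports, 8000))        # 0.0.0.0:49153
    print(get_local_port(ports, 5432) or "")  # empty line, like the command itself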
def ps(self, project, options):
"""
List containers.
@@ -152,7 +181,10 @@ class TopLevelCommand(Command):
Options:
-q Only display IDs
"""
containers = self.project.containers(service_names=options['SERVICE'], stopped=True) + self.project.containers(service_names=options['SERVICE'], one_off=True)
containers = sorted(
project.containers(service_names=options['SERVICE'], stopped=True) +
project.containers(service_names=options['SERVICE'], one_off=True),
key=attrgetter('name'))
if options['-q']:
for container in containers:
@@ -177,7 +209,23 @@ class TopLevelCommand(Command):
])
print(Formatter().table(headers, rows))
def rm(self, options):
def pull(self, project, options):
"""
Pulls images for services.
Usage: pull [options] [SERVICE...]
Options:
--allow-insecure-ssl Allow insecure connections to the docker
registry
"""
insecure_registry = options['--allow-insecure-ssl']
project.pull(
service_names=options['SERVICE'],
insecure_registry=insecure_registry
)
def rm(self, project, options):
"""
Remove stopped service containers.
@@ -187,21 +235,21 @@ class TopLevelCommand(Command):
--force Don't ask to confirm removal
-v Remove volumes associated with containers
"""
all_containers = self.project.containers(service_names=options['SERVICE'], stopped=True)
all_containers = project.containers(service_names=options['SERVICE'], stopped=True)
stopped_containers = [c for c in all_containers if not c.is_running]
if len(stopped_containers) > 0:
print("Going to remove", list_containers(stopped_containers))
if options.get('--force') \
or yesno("Are you sure? [yN] ", default=False):
self.project.remove_stopped(
project.remove_stopped(
service_names=options['SERVICE'],
v=options.get('-v', False)
)
else:
print("No stopped containers")
def run(self, options):
def run(self, project, options):
"""
Run a one-off command on a service.
@@ -213,24 +261,25 @@ class TopLevelCommand(Command):
running. If you do not want to start linked services, use
`fig run --no-deps SERVICE COMMAND [ARGS...]`.
Usage: run [options] SERVICE [COMMAND] [ARGS...]
Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]
Options:
-d Detached mode: Run container in the background, print
new container name.
-T Disable pseudo-tty allocation. By default `fig run`
allocates a TTY.
--rm Remove container after run. Ignored in detached mode.
--no-deps Don't start linked services.
-d Detached mode: Run container in the background, print
new container name.
--entrypoint CMD Override the entrypoint of the image.
-e KEY=VAL Set an environment variable (can be used multiple times)
--no-deps Don't start linked services.
--rm Remove container after run. Ignored in detached mode.
-T Disable pseudo-tty allocation. By default `fig run`
allocates a TTY.
"""
service = self.project.get_service(options['SERVICE'])
service = project.get_service(options['SERVICE'])
if not options['--no-deps']:
deps = service.get_linked_names()
if len(deps) > 0:
self.project.up(
project.up(
service_names=deps,
start_links=True,
recreate=False,
@@ -250,20 +299,31 @@ class TopLevelCommand(Command):
'tty': tty,
'stdin_open': not options['-d'],
}
if options['-e']:
for option in options['-e']:
if 'environment' not in service.options:
service.options['environment'] = {}
k, v = option.split('=', 1)
service.options['environment'][k] = v
if options['--entrypoint']:
container_options['entrypoint'] = options.get('--entrypoint')
container = service.create_container(one_off=True, **container_options)
if options['-d']:
service.start_container(container, ports=None, one_off=True)
print(container.name)
else:
service.start_container(container, ports=None, one_off=True)
dockerpty.start(self.client, container.id)
dockerpty.start(project.client, container.id)
exit_code = container.wait()
if options['--rm']:
log.info("Removing %s..." % container.name)
self.client.remove_container(container.id)
project.client.remove_container(container.id)
sys.exit(exit_code)
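The new `-e KEY=VAL` option splits each pair on the first `=` only, so values that themselves contain `=` (connection strings, for example) pass through intact. A small standalone sketch of that parsing:

    def parse_env_overrides(pairs):
        # Same split as the -e handling above: one split, from the left.
        env = {}
        for pair in pairs:
            key, value = pair.split("=", 1)
            env[key] = value
        return env

    print(parse_env_overrides(["DEBUG=1", "DSN=postgres://user:pass@db/app"]))
    # {'DEBUG': '1', 'DSN': 'postgres://user:pass@db/app'} (key order may vary)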
def scale(self, options):
def scale(self, project, options):
"""
Set number of containers to run for a service.
@@ -284,19 +344,24 @@ class TopLevelCommand(Command):
raise UserError('Number of containers for service "%s" is not a '
'number' % service_name)
try:
self.project.get_service(service_name).scale(num)
project.get_service(service_name).scale(num)
except CannotBeScaledError:
raise UserError('Service "%s" cannot be scaled because it specifies a port on the host. If multiple containers for this service were created, the port would clash.\n\nRemove the ":" from the port definition in fig.yml so Docker can choose a random port for each container.' % service_name)
raise UserError(
'Service "%s" cannot be scaled because it specifies a port '
'on the host. If multiple containers for this service were '
'created, the port would clash.\n\nRemove the ":" from the '
'port definition in fig.yml so Docker can choose a random '
'port for each container.' % service_name)
def start(self, options):
def start(self, project, options):
"""
Start existing containers.
Usage: start [SERVICE...]
"""
self.project.start(service_names=options['SERVICE'])
project.start(service_names=options['SERVICE'])
def stop(self, options):
def stop(self, project, options):
"""
Stop running containers without removing them.
@@ -304,9 +369,17 @@ class TopLevelCommand(Command):
Usage: stop [SERVICE...]
"""
self.project.stop(service_names=options['SERVICE'])
project.stop(service_names=options['SERVICE'])
def up(self, options):
def restart(self, project, options):
"""
Restart running containers.
Usage: restart [SERVICE...]
"""
project.restart(service_names=options['SERVICE'])
def up(self, project, options):
"""
Build, (re)create, start and attach to containers for a service.
@@ -325,37 +398,40 @@ class TopLevelCommand(Command):
Options:
-d Detached mode: Run containers in the background,
print new container names.
--no-color Produce monochrome output.
--no-deps Don't start linked services.
--no-recreate If containers already exist, don't recreate them.
"""
detached = options['-d']
monochrome = options['--no-color']
start_links = not options['--no-deps']
recreate = not options['--no-recreate']
service_names = options['SERVICE']
self.project.up(
project.up(
service_names=service_names,
start_links=start_links,
recreate=recreate
)
to_attach = [c for s in self.project.get_services(service_names) for c in s.containers()]
to_attach = [c for s in project.get_services(service_names) for c in s.containers()]
if not detached:
print("Attaching to", list_containers(to_attach))
log_printer = LogPrinter(to_attach, attach_params={"logs": True})
log_printer = LogPrinter(to_attach, attach_params={"logs": True}, monochrome=monochrome)
try:
log_printer.run()
finally:
def handler(signal, frame):
self.project.kill(service_names=service_names)
project.kill(service_names=service_names)
sys.exit(0)
signal.signal(signal.SIGINT, handler)
print("Gracefully stopping... (press Ctrl+C again to force)")
self.project.stop(service_names=service_names)
project.stop(service_names=service_names)
def list_containers(containers):
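The signal handling in `up` is worth spelling out: logs stream until the first Ctrl+C, the `finally` block then installs a handler and stops containers gracefully, and a second Ctrl+C reaches that handler and force-kills. A standalone sketch of the same flow (the callables are placeholders, not fig's API):

    import signal
    import sys

    def attach_then_stop(stream_logs, stop_all, kill_all):
        try:
            stream_logs()                      # interrupted by the first Ctrl+C
        finally:
            def handler(signum, frame):        # runs on a second Ctrl+C
                kill_all()
                sys.exit(0)
            signal.signal(signal.SIGINT, handler)
            print("Gracefully stopping... (press Ctrl+C again to force)")
            stop_all()

    if __name__ == "__main__":
        def noop():
            pass
        attach_then_stop(stream_logs=noop, stop_all=noop, kill_all=noop)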


@@ -7,25 +7,6 @@ import subprocess
import platform
def cached_property(f):
"""
returns a cached property that is calculated by function f
http://code.activestate.com/recipes/576563-cached-property/
"""
def get(self):
try:
return self._property_cache[f]
except AttributeError:
self._property_cache = {}
x = self._property_cache[f] = f(self)
return x
except KeyError:
x = self._property_cache[f] = f(self)
return x
return property(get)
def yesno(prompt, default=None):
"""
Prompt the user for a yes or no.
@@ -81,10 +62,6 @@ def mkdir(path, permissions=0o700):
return path
def docker_url():
return os.environ.get('DOCKER_HOST')
def split_buffer(reader, separator):
"""
Given a generator which yields strings and a separator string,

fig/cli/verbose_proxy.py (new file, 58 lines)

@@ -0,0 +1,58 @@
import functools
from itertools import chain
import logging
import pprint
import six
def format_call(args, kwargs):
args = (repr(a) for a in args)
kwargs = ("{0!s}={1!r}".format(*item) for item in six.iteritems(kwargs))
return "({0})".format(", ".join(chain(args, kwargs)))
def format_return(result, max_lines):
if isinstance(result, (list, tuple, set)):
return "({0} with {1} items)".format(type(result).__name__, len(result))
if result:
lines = pprint.pformat(result).split('\n')
extra = '\n...' if len(lines) > max_lines else ''
return '\n'.join(lines[:max_lines]) + extra
return result
class VerboseProxy(object):
"""Proxy all function calls to another class and log method name, arguments
and return values for each call.
"""
def __init__(self, obj_name, obj, log_name=None, max_lines=10):
self.obj_name = obj_name
self.obj = obj
self.max_lines = max_lines
self.log = logging.getLogger(log_name or __name__)
def __getattr__(self, name):
attr = getattr(self.obj, name)
if not six.callable(attr):
return attr
return functools.partial(self.proxy_callable, name)
def proxy_callable(self, call_name, *args, **kwargs):
self.log.info("%s %s <- %s",
self.obj_name,
call_name,
format_call(args, kwargs))
result = getattr(self.obj, call_name)(*args, **kwargs)
self.log.info("%s %s -> %s",
self.obj_name,
call_name,
format_return(result, self.max_lines))
return result
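VerboseProxy wraps an arbitrary object and logs every method call with its arguments and a summarised return value. A usage sketch, assuming fig 1.0.0 (and therefore the module path shown above) is importable:

    import logging
    from fig.cli.verbose_proxy import VerboseProxy  # module added in this diff

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    class FakeClient(object):
        def containers(self, all=False):
            return [{"Id": "abc123"}, {"Id": "def456"}]

    client = VerboseProxy("docker", FakeClient())
    client.containers(all=True)
    # Each call is logged twice: once with its arguments "(all=True)" and once
    # with a summary of the return value, e.g. "(list with 2 items)".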


@@ -1,6 +1,8 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import six
class Container(object):
"""
@@ -63,50 +65,58 @@ class Container(object):
return None
@property
def human_readable_ports(self):
def ports(self):
self.inspect_if_not_inspected()
if not self.dictionary['NetworkSettings']['Ports']:
return ''
ports = []
for private, public in list(self.dictionary['NetworkSettings']['Ports'].items()):
if public:
ports.append('%s->%s' % (public[0]['HostPort'], private))
else:
ports.append(private)
return ', '.join(ports)
return self.get('NetworkSettings.Ports') or {}
@property
def human_readable_ports(self):
def format_port(private, public):
if not public:
return private
return '{HostIp}:{HostPort}->{private}'.format(
private=private, **public[0])
return ', '.join(format_port(*item)
for item in sorted(six.iteritems(self.ports)))
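The reworked human_readable_ports now includes the host IP and sorts the bindings. Exercising the same format_port logic over hypothetical inspect data:

    ports = {  # hypothetical NetworkSettings.Ports mapping
        "5432/tcp": None,
        "8000/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49153"}],
    }

    def format_port(private, public):
        if not public:
            return private
        return "{HostIp}:{HostPort}->{private}".format(private=private, **public[0])

    print(", ".join(format_port(*item) for item in sorted(ports.items())))
    # 5432/tcp, 0.0.0.0:49153->8000/tcp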
@property
def human_readable_state(self):
self.inspect_if_not_inspected()
if self.dictionary['State']['Running']:
if self.dictionary['State'].get('Ghost'):
return 'Ghost'
else:
return 'Up'
if self.is_running:
return 'Ghost' if self.get('State.Ghost') else 'Up'
else:
return 'Exit %s' % self.dictionary['State']['ExitCode']
return 'Exit %s' % self.get('State.ExitCode')
@property
def human_readable_command(self):
self.inspect_if_not_inspected()
if self.dictionary['Config']['Cmd']:
return ' '.join(self.dictionary['Config']['Cmd'])
else:
return ''
entrypoint = self.get('Config.Entrypoint') or []
cmd = self.get('Config.Cmd') or []
return ' '.join(entrypoint + cmd)
@property
def environment(self):
self.inspect_if_not_inspected()
out = {}
for var in self.dictionary.get('Config', {}).get('Env', []):
k, v = var.split('=', 1)
out[k] = v
return out
return dict(var.split("=", 1) for var in self.get('Config.Env') or [])
@property
def is_running(self):
return self.get('State.Running')
def get(self, key):
"""Return a value from the container or None if the value is not set.
:param key: a string using dotted notation for nested dictionary
lookups
"""
self.inspect_if_not_inspected()
return self.dictionary['State']['Running']
def get_value(dictionary, key):
return (dictionary or {}).get(key)
return reduce(get_value, key.split('.'), self.dictionary)
def get_local_port(self, port, protocol='tcp'):
port = self.ports.get("%s/%s" % (port, protocol))
return "{HostIp}:{HostPort}".format(**port[0]) if port else None
def start(self, **options):
return self.client.start(self.id, **options)
@@ -117,6 +127,9 @@ class Container(object):
def kill(self):
return self.client.kill(self.id)
def restart(self):
return self.client.restart(self.id)
def remove(self, **options):
return self.client.remove_container(self.id, **options)
@@ -132,6 +145,7 @@ class Container(object):
def inspect(self):
self.dictionary = self.client.inspect_container(self.id)
self.has_been_inspected = True
return self.dictionary
def links(self):


@@ -1,20 +0,0 @@
# Copyright 2013 dotCloud inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .version import version
__version__ = version
__title__ = 'docker-py'
from .client import Client # flake8: noqa


@@ -1,7 +0,0 @@
from .auth import (
INDEX_URL,
encode_header,
load_config,
resolve_authconfig,
resolve_repository_name
) # flake8: noqa


@@ -1,167 +0,0 @@
# Copyright 2013 dotCloud inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import fileinput
import json
import os
from fig.packages import six
from ..utils import utils
from .. import errors
INDEX_URL = 'https://index.docker.io/v1/'
DOCKER_CONFIG_FILENAME = '.dockercfg'
def swap_protocol(url):
if url.startswith('http://'):
return url.replace('http://', 'https://', 1)
if url.startswith('https://'):
return url.replace('https://', 'http://', 1)
return url
def expand_registry_url(hostname):
if hostname.startswith('http:') or hostname.startswith('https:'):
if '/' not in hostname[9:]:
hostname = hostname + '/v1/'
return hostname
if utils.ping('https://' + hostname + '/v1/_ping'):
return 'https://' + hostname + '/v1/'
return 'http://' + hostname + '/v1/'
def resolve_repository_name(repo_name):
if '://' in repo_name:
raise errors.InvalidRepository(
'Repository name cannot contain a scheme ({0})'.format(repo_name))
parts = repo_name.split('/', 1)
if '.' not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost':
# This is a docker index repo (ex: foo/bar or ubuntu)
return INDEX_URL, repo_name
if len(parts) < 2:
raise errors.InvalidRepository(
'Invalid repository name ({0})'.format(repo_name))
if 'index.docker.io' in parts[0]:
raise errors.InvalidRepository(
'Invalid repository name, try "{0}" instead'.format(parts[1]))
return expand_registry_url(parts[0]), parts[1]
def resolve_authconfig(authconfig, registry=None):
"""Return the authentication data from the given auth configuration for a
specific registry. We'll do our best to infer the correct URL for the
registry, trying both http and https schemes. Returns an empty dictionary
if no data exists."""
# Default to the public index server
registry = registry or INDEX_URL
# If it's not the index server there are three cases:
#
# 1. this is a full config url -> it should be used as is
# 2. it could be a full url, but with the wrong protocol
# 3. it can be the hostname optionally with a port
#
# as there is only one auth entry which is fully qualified we need to start
# parsing and matching
if '/' not in registry:
registry = registry + '/v1/'
if not registry.startswith('http:') and not registry.startswith('https:'):
registry = 'https://' + registry
if registry in authconfig:
return authconfig[registry]
return authconfig.get(swap_protocol(registry), None)
def encode_auth(auth_info):
return base64.b64encode(auth_info.get('username', '') + b':' +
auth_info.get('password', ''))
def decode_auth(auth):
if isinstance(auth, six.string_types):
auth = auth.encode('ascii')
s = base64.b64decode(auth)
login, pwd = s.split(b':')
return login.decode('ascii'), pwd.decode('ascii')
def encode_header(auth):
auth_json = json.dumps(auth).encode('ascii')
return base64.b64encode(auth_json)
def encode_full_header(auth):
""" Returns the given auth block encoded for the X-Registry-Config header.
"""
return encode_header({'configs': auth})
def load_config(root=None):
"""Loads authentication data from a Docker configuration file in the given
root directory."""
conf = {}
data = None
config_file = os.path.join(root or os.environ.get('HOME', '.'),
DOCKER_CONFIG_FILENAME)
# First try as JSON
try:
with open(config_file) as f:
conf = {}
for registry, entry in six.iteritems(json.load(f)):
username, password = decode_auth(entry['auth'])
conf[registry] = {
'username': username,
'password': password,
'email': entry['email'],
'serveraddress': registry,
}
return conf
except:
pass
# If that fails, we assume the configuration file contains a single
# authentication token for the public registry in the following format:
#
# auth = AUTH_TOKEN
# email = email@domain.com
try:
data = []
for line in fileinput.input(config_file):
data.append(line.strip().split(' = ')[1])
if len(data) < 2:
# Not enough data
raise errors.InvalidConfigFile(
'Invalid or empty configuration file!')
username, password = decode_auth(data[0])
conf[INDEX_URL] = {
'username': username,
'password': password,
'email': data[1],
'serveraddress': INDEX_URL,
}
return conf
except:
pass
# If all fails, return an empty config
return {}


@@ -1,860 +0,0 @@
# Copyright 2013 dotCloud inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import re
import shlex
import struct
import warnings
import requests
import requests.exceptions
from fig.packages import six
from .auth import auth
from .unixconn import unixconn
from .utils import utils
from . import errors
if not six.PY3:
import websocket
DEFAULT_DOCKER_API_VERSION = '1.12'
DEFAULT_TIMEOUT_SECONDS = 60
STREAM_HEADER_SIZE_BYTES = 8
class Client(requests.Session):
def __init__(self, base_url=None, version=DEFAULT_DOCKER_API_VERSION,
timeout=DEFAULT_TIMEOUT_SECONDS):
super(Client, self).__init__()
if base_url is None:
base_url = "http+unix://var/run/docker.sock"
if 'unix:///' in base_url:
base_url = base_url.replace('unix:/', 'unix:')
if base_url.startswith('unix:'):
base_url = "http+" + base_url
if base_url.startswith('tcp:'):
base_url = base_url.replace('tcp:', 'http:')
if base_url.endswith('/'):
base_url = base_url[:-1]
self.base_url = base_url
self._version = version
self._timeout = timeout
self._auth_configs = auth.load_config()
self.mount('http+unix://', unixconn.UnixAdapter(base_url, timeout))
def _set_request_timeout(self, kwargs):
"""Prepare the kwargs for an HTTP request by inserting the timeout
parameter, if not already present."""
kwargs.setdefault('timeout', self._timeout)
return kwargs
def _post(self, url, **kwargs):
return self.post(url, **self._set_request_timeout(kwargs))
def _get(self, url, **kwargs):
return self.get(url, **self._set_request_timeout(kwargs))
def _delete(self, url, **kwargs):
return self.delete(url, **self._set_request_timeout(kwargs))
def _url(self, path):
return '{0}/v{1}{2}'.format(self.base_url, self._version, path)
def _raise_for_status(self, response, explanation=None):
"""Raises stored :class:`APIError`, if one occurred."""
try:
response.raise_for_status()
except requests.exceptions.HTTPError as e:
raise errors.APIError(e, response, explanation=explanation)
def _result(self, response, json=False, binary=False):
assert not (json and binary)
self._raise_for_status(response)
if json:
return response.json()
if binary:
return response.content
return response.text
def _container_config(self, image, command, hostname=None, user=None,
detach=False, stdin_open=False, tty=False,
mem_limit=0, ports=None, environment=None, dns=None,
volumes=None, volumes_from=None,
network_disabled=False, entrypoint=None,
cpu_shares=None, working_dir=None, domainname=None,
memswap_limit=0):
if isinstance(command, six.string_types):
command = shlex.split(str(command))
if isinstance(environment, dict):
environment = [
'{0}={1}'.format(k, v) for k, v in environment.items()
]
if isinstance(ports, list):
exposed_ports = {}
for port_definition in ports:
port = port_definition
proto = 'tcp'
if isinstance(port_definition, tuple):
if len(port_definition) == 2:
proto = port_definition[1]
port = port_definition[0]
exposed_ports['{0}/{1}'.format(port, proto)] = {}
ports = exposed_ports
if isinstance(volumes, list):
volumes_dict = {}
for vol in volumes:
volumes_dict[vol] = {}
volumes = volumes_dict
if volumes_from:
if not isinstance(volumes_from, six.string_types):
volumes_from = ','.join(volumes_from)
else:
# Force None, an empty list or dict causes client.start to fail
volumes_from = None
attach_stdin = False
attach_stdout = False
attach_stderr = False
stdin_once = False
if not detach:
attach_stdout = True
attach_stderr = True
if stdin_open:
attach_stdin = True
stdin_once = True
if utils.compare_version('1.10', self._version) >= 0:
message = ('{0!r} parameter has no effect on create_container().'
' It has been moved to start()')
if dns is not None:
raise errors.DockerException(message.format('dns'))
if volumes_from is not None:
raise errors.DockerException(message.format('volumes_from'))
return {
'Hostname': hostname,
'Domainname': domainname,
'ExposedPorts': ports,
'User': user,
'Tty': tty,
'OpenStdin': stdin_open,
'StdinOnce': stdin_once,
'Memory': mem_limit,
'AttachStdin': attach_stdin,
'AttachStdout': attach_stdout,
'AttachStderr': attach_stderr,
'Env': environment,
'Cmd': command,
'Dns': dns,
'Image': image,
'Volumes': volumes,
'VolumesFrom': volumes_from,
'NetworkDisabled': network_disabled,
'Entrypoint': entrypoint,
'CpuShares': cpu_shares,
'WorkingDir': working_dir,
'MemorySwap': memswap_limit
}
def _post_json(self, url, data, **kwargs):
# Go <1.1 can't unserialize null to a string
# so we do this disgusting thing here.
data2 = {}
if data is not None:
for k, v in six.iteritems(data):
if v is not None:
data2[k] = v
if 'headers' not in kwargs:
kwargs['headers'] = {}
kwargs['headers']['Content-Type'] = 'application/json'
return self._post(url, data=json.dumps(data2), **kwargs)
def _attach_params(self, override=None):
return override or {
'stdout': 1,
'stderr': 1,
'stream': 1
}
def _attach_websocket(self, container, params=None):
if six.PY3:
raise NotImplementedError("This method is not currently supported "
"under python 3")
url = self._url("/containers/{0}/attach/ws".format(container))
req = requests.Request("POST", url, params=self._attach_params(params))
full_url = req.prepare().url
full_url = full_url.replace("http://", "ws://", 1)
full_url = full_url.replace("https://", "wss://", 1)
return self._create_websocket_connection(full_url)
def _create_websocket_connection(self, url):
return websocket.create_connection(url)
def _get_raw_response_socket(self, response):
self._raise_for_status(response)
if six.PY3:
return response.raw._fp.fp.raw._sock
else:
return response.raw._fp.fp._sock
def _stream_helper(self, response):
"""Generator for data coming from a chunked-encoded HTTP response."""
socket_fp = self._get_raw_response_socket(response)
socket_fp.setblocking(1)
socket = socket_fp.makefile()
while True:
# Because Docker introduced newlines at the end of chunks in v0.9,
# and only on some API endpoints, we have to cater for both cases.
size_line = socket.readline()
if size_line == '\r\n':
size_line = socket.readline()
size = int(size_line, 16)
if size <= 0:
break
data = socket.readline()
if not data:
break
yield data
def _multiplexed_buffer_helper(self, response):
"""A generator of multiplexed data blocks read from a buffered
response."""
buf = self._result(response, binary=True)
walker = 0
while True:
if len(buf[walker:]) < 8:
break
_, length = struct.unpack_from('>BxxxL', buf[walker:])
start = walker + STREAM_HEADER_SIZE_BYTES
end = start + length
walker = end
yield buf[start:end]
def _multiplexed_socket_stream_helper(self, response):
"""A generator of multiplexed data blocks coming from a response
socket."""
socket = self._get_raw_response_socket(response)
def recvall(socket, size):
blocks = []
while size > 0:
block = socket.recv(size)
if not block:
return None
blocks.append(block)
size -= len(block)
sep = bytes() if six.PY3 else str()
data = sep.join(blocks)
return data
while True:
socket.settimeout(None)
header = recvall(socket, STREAM_HEADER_SIZE_BYTES)
if not header:
break
_, length = struct.unpack('>BxxxL', header)
if not length:
break
data = recvall(socket, length)
if not data:
break
yield data
def attach(self, container, stdout=True, stderr=True,
stream=False, logs=False):
if isinstance(container, dict):
container = container.get('Id')
params = {
'logs': logs and 1 or 0,
'stdout': stdout and 1 or 0,
'stderr': stderr and 1 or 0,
'stream': stream and 1 or 0,
}
u = self._url("/containers/{0}/attach".format(container))
response = self._post(u, params=params, stream=stream)
# Stream multi-plexing was only introduced in API v1.6. Anything before
# that needs old-style streaming.
if utils.compare_version('1.6', self._version) < 0:
def stream_result():
self._raise_for_status(response)
for line in response.iter_lines(chunk_size=1,
decode_unicode=True):
# filter out keep-alive new lines
if line:
yield line
return stream_result() if stream else \
self._result(response, binary=True)
sep = bytes() if six.PY3 else str()
return stream and self._multiplexed_socket_stream_helper(response) or \
sep.join([x for x in self._multiplexed_buffer_helper(response)])
def attach_socket(self, container, params=None, ws=False):
if params is None:
params = {
'stdout': 1,
'stderr': 1,
'stream': 1
}
if ws:
return self._attach_websocket(container, params)
if isinstance(container, dict):
container = container.get('Id')
u = self._url("/containers/{0}/attach".format(container))
return self._get_raw_response_socket(self.post(
u, None, params=self._attach_params(params), stream=True))
def build(self, path=None, tag=None, quiet=False, fileobj=None,
nocache=False, rm=False, stream=False, timeout=None,
custom_context=False, encoding=None):
remote = context = headers = None
if path is None and fileobj is None:
raise TypeError("Either path or fileobj needs to be provided.")
if custom_context:
if not fileobj:
raise TypeError("You must specify fileobj with custom_context")
context = fileobj
elif fileobj is not None:
context = utils.mkbuildcontext(fileobj)
elif path.startswith(('http://', 'https://',
'git://', 'github.com/')):
remote = path
else:
context = utils.tar(path)
if utils.compare_version('1.8', self._version) >= 0:
stream = True
u = self._url('/build')
params = {
't': tag,
'remote': remote,
'q': quiet,
'nocache': nocache,
'rm': rm
}
if context is not None:
headers = {'Content-Type': 'application/tar'}
if encoding:
headers['Content-Encoding'] = encoding
if utils.compare_version('1.9', self._version) >= 0:
# If we don't have any auth data so far, try reloading the config
# file one more time in case anything showed up in there.
if not self._auth_configs:
self._auth_configs = auth.load_config()
# Send the full auth configuration (if any exists), since the build
# could use any (or all) of the registries.
if self._auth_configs:
headers['X-Registry-Config'] = auth.encode_full_header(
self._auth_configs
)
response = self._post(
u,
data=context,
params=params,
headers=headers,
stream=stream,
timeout=timeout,
)
if context is not None:
context.close()
if stream:
return self._stream_helper(response)
else:
output = self._result(response)
srch = r'Successfully built ([0-9a-f]+)'
match = re.search(srch, output)
if not match:
return None, output
return match.group(1), output
def commit(self, container, repository=None, tag=None, message=None,
author=None, conf=None):
params = {
'container': container,
'repo': repository,
'tag': tag,
'comment': message,
'author': author
}
u = self._url("/commit")
return self._result(self._post_json(u, data=conf, params=params),
json=True)
def containers(self, quiet=False, all=False, trunc=True, latest=False,
since=None, before=None, limit=-1, size=False):
params = {
'limit': 1 if latest else limit,
'all': 1 if all else 0,
'size': 1 if size else 0,
'trunc_cmd': 1 if trunc else 0,
'since': since,
'before': before
}
u = self._url("/containers/json")
res = self._result(self._get(u, params=params), True)
if quiet:
return [{'Id': x['Id']} for x in res]
return res
def copy(self, container, resource):
if isinstance(container, dict):
container = container.get('Id')
res = self._post_json(
self._url("/containers/{0}/copy".format(container)),
data={"Resource": resource},
stream=True
)
self._raise_for_status(res)
return res.raw
def create_container(self, image, command=None, hostname=None, user=None,
detach=False, stdin_open=False, tty=False,
mem_limit=0, ports=None, environment=None, dns=None,
volumes=None, volumes_from=None,
network_disabled=False, name=None, entrypoint=None,
cpu_shares=None, working_dir=None, domainname=None,
memswap_limit=0):
config = self._container_config(
image, command, hostname, user, detach, stdin_open, tty, mem_limit,
ports, environment, dns, volumes, volumes_from, network_disabled,
entrypoint, cpu_shares, working_dir, domainname, memswap_limit
)
return self.create_container_from_config(config, name)
def create_container_from_config(self, config, name=None):
u = self._url("/containers/create")
params = {
'name': name
}
res = self._post_json(u, data=config, params=params)
return self._result(res, True)
def diff(self, container):
if isinstance(container, dict):
container = container.get('Id')
return self._result(self._get(self._url("/containers/{0}/changes".
format(container))), True)
def events(self):
return self._stream_helper(self.get(self._url('/events'), stream=True))
def export(self, container):
if isinstance(container, dict):
container = container.get('Id')
res = self._get(self._url("/containers/{0}/export".format(container)),
stream=True)
self._raise_for_status(res)
return res.raw
def get_image(self, image):
res = self._get(self._url("/images/{0}/get".format(image)),
stream=True)
self._raise_for_status(res)
return res.raw
def history(self, image):
res = self._get(self._url("/images/{0}/history".format(image)))
self._raise_for_status(res)
return self._result(res)
def images(self, name=None, quiet=False, all=False, viz=False):
if viz:
if utils.compare_version('1.7', self._version) >= 0:
raise Exception('Viz output is not supported in API >= 1.7!')
return self._result(self._get(self._url("images/viz")))
params = {
'filter': name,
'only_ids': 1 if quiet else 0,
'all': 1 if all else 0,
}
res = self._result(self._get(self._url("/images/json"), params=params),
True)
if quiet:
return [x['Id'] for x in res]
return res
def import_image(self, src=None, repository=None, tag=None, image=None):
u = self._url("/images/create")
params = {
'repo': repository,
'tag': tag
}
if src:
try:
# XXX: this is far from optimal, but it is the only way
# for now to import tarballs through the API
fic = open(src)
data = fic.read()
fic.close()
src = "-"
except IOError:
# file does not exist or is not a file (URL)
data = None
if isinstance(src, six.string_types):
params['fromSrc'] = src
return self._result(self._post(u, data=data, params=params))
return self._result(self._post(u, data=src, params=params))
if image:
params['fromImage'] = image
return self._result(self._post(u, data=None, params=params))
raise Exception("Must specify a src or image")
def info(self):
return self._result(self._get(self._url("/info")),
True)
def insert(self, image, url, path):
if utils.compare_version('1.12', self._version) >= 0:
raise errors.DeprecatedMethod(
'insert is not available for API version >=1.12'
)
api_url = self._url("/images/" + image + "/insert")
params = {
'url': url,
'path': path
}
return self._result(self._post(api_url, params=params))
def inspect_container(self, container):
if isinstance(container, dict):
container = container.get('Id')
return self._result(
self._get(self._url("/containers/{0}/json".format(container))),
True)
def inspect_image(self, image_id):
return self._result(
self._get(self._url("/images/{0}/json".format(image_id))),
True
)
def kill(self, container, signal=None):
if isinstance(container, dict):
container = container.get('Id')
url = self._url("/containers/{0}/kill".format(container))
params = {}
if signal is not None:
params['signal'] = signal
res = self._post(url, params=params)
self._raise_for_status(res)
def load_image(self, data):
res = self._post(self._url("/images/load"), data=data)
self._raise_for_status(res)
def login(self, username, password=None, email=None, registry=None,
reauth=False):
# If we don't have any auth data so far, try reloading the config file
# one more time in case anything showed up in there.
if not self._auth_configs:
self._auth_configs = auth.load_config()
registry = registry or auth.INDEX_URL
authcfg = auth.resolve_authconfig(self._auth_configs, registry)
# If we found an existing auth config for this registry and username
# combination, we can return it immediately unless reauth is requested.
if authcfg and authcfg.get('username', None) == username \
and not reauth:
return authcfg
req_data = {
'username': username,
'password': password,
'email': email,
'serveraddress': registry,
}
response = self._post_json(self._url('/auth'), data=req_data)
if response.status_code == 200:
self._auth_configs[registry] = req_data
return self._result(response, json=True)
def logs(self, container, stdout=True, stderr=True, stream=False,
timestamps=False):
if isinstance(container, dict):
container = container.get('Id')
if utils.compare_version('1.11', self._version) >= 0:
params = {'stderr': stderr and 1 or 0,
'stdout': stdout and 1 or 0,
'timestamps': timestamps and 1 or 0,
'follow': stream and 1 or 0}
url = self._url("/containers/{0}/logs".format(container))
res = self._get(url, params=params, stream=stream)
if stream:
return self._multiplexed_socket_stream_helper(res)
elif six.PY3:
return bytes().join(
[x for x in self._multiplexed_buffer_helper(res)]
)
else:
return str().join(
[x for x in self._multiplexed_buffer_helper(res)]
)
return self.attach(
container,
stdout=stdout,
stderr=stderr,
stream=stream,
logs=True
)
def ping(self):
return self._result(self._get(self._url('/_ping')))
def port(self, container, private_port):
if isinstance(container, dict):
container = container.get('Id')
res = self._get(self._url("/containers/{0}/json".format(container)))
self._raise_for_status(res)
json_ = res.json()
s_port = str(private_port)
h_ports = None
h_ports = json_['NetworkSettings']['Ports'].get(s_port + '/udp')
if h_ports is None:
h_ports = json_['NetworkSettings']['Ports'].get(s_port + '/tcp')
return h_ports
def pull(self, repository, tag=None, stream=False):
if not tag:
repository, tag = utils.parse_repository_tag(repository)
registry, repo_name = auth.resolve_repository_name(repository)
if repo_name.count(":") == 1:
repository, tag = repository.rsplit(":", 1)
params = {
'tag': tag,
'fromImage': repository
}
headers = {}
if utils.compare_version('1.5', self._version) >= 0:
# If we don't have any auth data so far, try reloading the config
# file one more time in case anything showed up in there.
if not self._auth_configs:
self._auth_configs = auth.load_config()
authcfg = auth.resolve_authconfig(self._auth_configs, registry)
# Do not fail here if no authentication exists for this specific
# registry as we can have a readonly pull. Just put the header if
# we can.
if authcfg:
headers['X-Registry-Auth'] = auth.encode_header(authcfg)
response = self._post(self._url('/images/create'), params=params,
headers=headers, stream=stream, timeout=None)
if stream:
return self._stream_helper(response)
else:
return self._result(response)
def push(self, repository, stream=False):
registry, repo_name = auth.resolve_repository_name(repository)
u = self._url("/images/{0}/push".format(repository))
headers = {}
if utils.compare_version('1.5', self._version) >= 0:
# If we don't have any auth data so far, try reloading the config
# file one more time in case anything showed up in there.
if not self._auth_configs:
self._auth_configs = auth.load_config()
authcfg = auth.resolve_authconfig(self._auth_configs, registry)
# Do not fail here if no authentication exists for this specific
# registry as we can have a readonly pull. Just put the header if
# we can.
if authcfg:
headers['X-Registry-Auth'] = auth.encode_header(authcfg)
response = self._post_json(u, None, headers=headers, stream=stream)
else:
response = self._post_json(u, None, stream=stream)
return stream and self._stream_helper(response) \
or self._result(response)
def remove_container(self, container, v=False, link=False, force=False):
if isinstance(container, dict):
container = container.get('Id')
params = {'v': v, 'link': link, 'force': force}
res = self._delete(self._url("/containers/" + container),
params=params)
self._raise_for_status(res)
def remove_image(self, image, force=False, noprune=False):
params = {'force': force, 'noprune': noprune}
res = self._delete(self._url("/images/" + image), params=params)
self._raise_for_status(res)
def restart(self, container, timeout=10):
if isinstance(container, dict):
container = container.get('Id')
params = {'t': timeout}
url = self._url("/containers/{0}/restart".format(container))
res = self._post(url, params=params)
self._raise_for_status(res)
def search(self, term):
return self._result(self._get(self._url("/images/search"),
params={'term': term}),
True)
def start(self, container, binds=None, port_bindings=None, lxc_conf=None,
publish_all_ports=False, links=None, privileged=False,
dns=None, dns_search=None, volumes_from=None, network_mode=None):
if isinstance(container, dict):
container = container.get('Id')
if isinstance(lxc_conf, dict):
formatted = []
for k, v in six.iteritems(lxc_conf):
formatted.append({'Key': k, 'Value': str(v)})
lxc_conf = formatted
start_config = {
'LxcConf': lxc_conf
}
if binds:
start_config['Binds'] = utils.convert_volume_binds(binds)
if port_bindings:
start_config['PortBindings'] = utils.convert_port_bindings(
port_bindings
)
start_config['PublishAllPorts'] = publish_all_ports
if links:
if isinstance(links, dict):
links = six.iteritems(links)
formatted_links = [
'{0}:{1}'.format(k, v) for k, v in sorted(links)
]
start_config['Links'] = formatted_links
start_config['Privileged'] = privileged
if utils.compare_version('1.10', self._version) >= 0:
if dns is not None:
start_config['Dns'] = dns
if volumes_from is not None:
if isinstance(volumes_from, six.string_types):
volumes_from = volumes_from.split(',')
start_config['VolumesFrom'] = volumes_from
else:
warning_message = ('{0!r} parameter is discarded. It is only'
' available for API version greater or equal'
' than 1.10')
if dns is not None:
warnings.warn(warning_message.format('dns'),
DeprecationWarning)
if volumes_from is not None:
warnings.warn(warning_message.format('volumes_from'),
DeprecationWarning)
if dns_search:
start_config['DnsSearch'] = dns_search
if network_mode:
start_config['NetworkMode'] = network_mode
url = self._url("/containers/{0}/start".format(container))
res = self._post_json(url, data=start_config)
self._raise_for_status(res)
def resize(self, container, height, width):
if isinstance(container, dict):
container = container.get('Id')
params = {'h': height, 'w': width}
url = self._url("/containers/{0}/resize".format(container))
res = self._post(url, params=params)
self._raise_for_status(res)
def stop(self, container, timeout=10):
if isinstance(container, dict):
container = container.get('Id')
params = {'t': timeout}
url = self._url("/containers/{0}/stop".format(container))
res = self._post(url, params=params,
timeout=max(timeout, self._timeout))
self._raise_for_status(res)
def tag(self, image, repository, tag=None, force=False):
params = {
'tag': tag,
'repo': repository,
'force': 1 if force else 0
}
url = self._url("/images/{0}/tag".format(image))
res = self._post(url, params=params)
self._raise_for_status(res)
return res.status_code == 201
def top(self, container):
u = self._url("/containers/{0}/top".format(container))
return self._result(self._get(u), True)
def version(self):
return self._result(self._get(self._url("/version")), True)
def wait(self, container):
if isinstance(container, dict):
container = container.get('Id')
url = self._url("/containers/{0}/wait".format(container))
res = self._post(url, timeout=None)
self._raise_for_status(res)
json_ = res.json()
if 'StatusCode' in json_:
return json_['StatusCode']
return -1


@@ -1,65 +0,0 @@
# Copyright 2014 dotCloud inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import requests
class APIError(requests.exceptions.HTTPError):
def __init__(self, message, response, explanation=None):
# requests 1.2 supports response as a keyword argument, but
# requests 1.1 doesn't
super(APIError, self).__init__(message)
self.response = response
self.explanation = explanation
if self.explanation is None and response.content:
self.explanation = response.content.strip()
def __str__(self):
message = super(APIError, self).__str__()
if self.is_client_error():
message = '%s Client Error: %s' % (
self.response.status_code, self.response.reason)
elif self.is_server_error():
message = '%s Server Error: %s' % (
self.response.status_code, self.response.reason)
if self.explanation:
message = '%s ("%s")' % (message, self.explanation)
return message
def is_client_error(self):
return 400 <= self.response.status_code < 500
def is_server_error(self):
return 500 <= self.response.status_code < 600
class DockerException(Exception):
pass
class InvalidRepository(DockerException):
pass
class InvalidConfigFile(DockerException):
pass
class DeprecatedMethod(DockerException):
pass


@@ -1 +0,0 @@
from .unixconn import UnixAdapter # flake8: noqa


@@ -1,71 +0,0 @@
# Copyright 2013 dotCloud inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from fig.packages import six
if six.PY3:
import http.client as httplib
else:
import httplib
import requests.adapters
import socket
try:
import requests.packages.urllib3.connectionpool as connectionpool
except ImportError:
import urllib3.connectionpool as connectionpool
class UnixHTTPConnection(httplib.HTTPConnection, object):
def __init__(self, base_url, unix_socket, timeout=60):
httplib.HTTPConnection.__init__(self, 'localhost', timeout=timeout)
self.base_url = base_url
self.unix_socket = unix_socket
self.timeout = timeout
def connect(self):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(self.timeout)
sock.connect(self.base_url.replace("http+unix:/", ""))
self.sock = sock
def _extract_path(self, url):
# remove the base_url entirely..
return url.replace(self.base_url, "")
def request(self, method, url, **kwargs):
url = self._extract_path(self.unix_socket)
super(UnixHTTPConnection, self).request(method, url, **kwargs)
class UnixHTTPConnectionPool(connectionpool.HTTPConnectionPool):
def __init__(self, base_url, socket_path, timeout=60):
connectionpool.HTTPConnectionPool.__init__(self, 'localhost',
timeout=timeout)
self.base_url = base_url
self.socket_path = socket_path
self.timeout = timeout
def _new_conn(self):
return UnixHTTPConnection(self.base_url, self.socket_path,
self.timeout)
class UnixAdapter(requests.adapters.HTTPAdapter):
def __init__(self, base_url, timeout=60):
self.base_url = base_url
self.timeout = timeout
super(UnixAdapter, self).__init__()
def get_connection(self, socket_path, proxies=None):
return UnixHTTPConnectionPool(self.base_url, socket_path, self.timeout)


@@ -1,4 +0,0 @@
from .utils import (
compare_version, convert_port_bindings, convert_volume_binds,
mkbuildcontext, ping, tar, parse_repository_tag
) # flake8: noqa


@@ -1,147 +0,0 @@
# Copyright 2013 dotCloud inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import tarfile
import tempfile
from distutils.version import StrictVersion
import requests
from fig.packages import six
def mkbuildcontext(dockerfile):
f = tempfile.NamedTemporaryFile()
t = tarfile.open(mode='w', fileobj=f)
if isinstance(dockerfile, io.StringIO):
dfinfo = tarfile.TarInfo('Dockerfile')
if six.PY3:
raise TypeError('Please use io.BytesIO to create in-memory '
'Dockerfiles with Python 3')
else:
dfinfo.size = len(dockerfile.getvalue())
elif isinstance(dockerfile, io.BytesIO):
dfinfo = tarfile.TarInfo('Dockerfile')
dfinfo.size = len(dockerfile.getvalue())
else:
dfinfo = t.gettarinfo(fileobj=dockerfile, arcname='Dockerfile')
t.addfile(dfinfo, dockerfile)
t.close()
f.seek(0)
return f
def tar(path):
f = tempfile.NamedTemporaryFile()
t = tarfile.open(mode='w', fileobj=f)
t.add(path, arcname='.')
t.close()
f.seek(0)
return f
def compare_version(v1, v2):
"""Compare docker versions
>>> v1 = '1.9'
>>> v2 = '1.10'
>>> compare_version(v1, v2)
1
>>> compare_version(v2, v1)
-1
>>> compare_version(v2, v2)
0
"""
s1 = StrictVersion(v1)
s2 = StrictVersion(v2)
if s1 == s2:
return 0
elif s1 > s2:
return -1
else:
return 1
def ping(url):
try:
res = requests.get(url)
except Exception:
return False
else:
return res.status_code < 400
def _convert_port_binding(binding):
result = {'HostIp': '', 'HostPort': ''}
if isinstance(binding, tuple):
if len(binding) == 2:
result['HostPort'] = binding[1]
result['HostIp'] = binding[0]
elif isinstance(binding[0], six.string_types):
result['HostIp'] = binding[0]
else:
result['HostPort'] = binding[0]
elif isinstance(binding, dict):
if 'HostPort' in binding:
result['HostPort'] = binding['HostPort']
if 'HostIp' in binding:
result['HostIp'] = binding['HostIp']
else:
raise ValueError(binding)
else:
result['HostPort'] = binding
if result['HostPort'] is None:
result['HostPort'] = ''
else:
result['HostPort'] = str(result['HostPort'])
return result
def convert_port_bindings(port_bindings):
result = {}
for k, v in six.iteritems(port_bindings):
key = str(k)
if '/' not in key:
key = key + '/tcp'
if isinstance(v, list):
result[key] = [_convert_port_binding(binding) for binding in v]
else:
result[key] = [_convert_port_binding(v)]
return result
def convert_volume_binds(binds):
result = []
for k, v in binds.items():
if isinstance(v, dict):
result.append('%s:%s:%s' % (
k, v['bind'], 'ro' if v.get('ro', False) else 'rw'
))
else:
result.append('%s:%s:rw' % (k, v))
return result
def parse_repository_tag(repo):
column_index = repo.rfind(':')
if column_index < 0:
return repo, None
tag = repo[column_index+1:]
slash_index = tag.find('/')
if slash_index < 0:
return repo[:column_index], tag
return repo, None


@@ -1 +0,0 @@
version = "0.3.2"


@@ -0,0 +1,27 @@
# dockerpty.
#
# Copyright 2014 Chris Corbyn <chris@w3style.co.uk>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .pty import PseudoTerminal
def start(client, container):
"""
Present the PTY of the container inside the current process.
This is just a wrapper for PseudoTerminal(client, container).start()
"""
PseudoTerminal(client, container).start()


@@ -0,0 +1,294 @@
# dockerpty: io.py
#
# Copyright 2014 Chris Corbyn <chris@w3style.co.uk>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import fcntl
import errno
import struct
import select as builtin_select
def set_blocking(fd, blocking=True):
"""
Set the given file-descriptor blocking or non-blocking.
Returns the original blocking status.
"""
old_flag = fcntl.fcntl(fd, fcntl.F_GETFL)
if blocking:
new_flag = old_flag &~ os.O_NONBLOCK
else:
new_flag = old_flag | os.O_NONBLOCK
fcntl.fcntl(fd, fcntl.F_SETFL, new_flag)
return not bool(old_flag & os.O_NONBLOCK)
def select(read_streams, timeout=0):
"""
Select the streams from `read_streams` that are ready for reading.
Uses `select.select()` internally but returns a flat list of streams.
"""
write_streams = []
exception_streams = []
try:
return builtin_select.select(
read_streams,
write_streams,
exception_streams,
timeout,
)[0]
except builtin_select.error as e:
# POSIX signals interrupt select()
if e[0] == errno.EINTR:
return []
else:
raise e
class Stream(object):
"""
Generic Stream class.
This is a file-like abstraction on top of os.read() and os.write(), which
adds consistency to the reading of sockets and files alike.
"""
"""
Recoverable IO/OS Errors.
"""
ERRNO_RECOVERABLE = [
errno.EINTR,
errno.EDEADLK,
errno.EWOULDBLOCK,
]
def __init__(self, fd):
"""
Initialize the Stream for the file descriptor `fd`.
The `fd` object must have a `fileno()` method.
"""
self.fd = fd
def fileno(self):
"""
Return the fileno() of the file descriptor.
"""
return self.fd.fileno()
def set_blocking(self, value):
if hasattr(self.fd, 'setblocking'):
self.fd.setblocking(value)
return True
else:
return set_blocking(self.fd, value)
def read(self, n=4096):
"""
Return `n` bytes of data from the Stream, or None at end of stream.
"""
try:
if hasattr(self.fd, 'recv'):
return self.fd.recv(n)
return os.read(self.fd.fileno(), n)
except EnvironmentError as e:
if e.errno not in Stream.ERRNO_RECOVERABLE:
raise e
def write(self, data):
"""
Write `data` to the Stream.
"""
if not data:
return None
while True:
try:
if hasattr(self.fd, 'send'):
self.fd.send(data)
return len(data)
os.write(self.fd.fileno(), data)
return len(data)
except EnvironmentError as e:
if e.errno not in Stream.ERRNO_RECOVERABLE:
raise e
def __repr__(self):
return "{cls}({fd})".format(cls=type(self).__name__, fd=self.fd)
class Demuxer(object):
"""
Wraps a multiplexed Stream to read in data demultiplexed.
Docker multiplexes streams together when there is no PTY attached, by
sending an 8-byte header, followed by a chunk of data.
The first 4 bytes of the header denote the stream from which the data came
(i.e. 0x01 = stdout, 0x02 = stderr). Only the first byte of these initial 4
bytes is used.
The next 4 bytes indicate the length of the following chunk of data as an
integer in big endian format. This much data must be consumed before the
next 8-byte header is read.
"""
def __init__(self, stream):
"""
Initialize a new Demuxer reading from `stream`.
"""
self.stream = stream
self.remain = 0
def fileno(self):
"""
Returns the fileno() of the underlying Stream.
This is useful for select() to work.
"""
return self.stream.fileno()
def set_blocking(self, value):
return self.stream.set_blocking(value)
def read(self, n=4096):
"""
Read up to `n` bytes of data from the Stream, after demuxing.
Less than `n` bytes of data may be returned depending on the available
payload, but the number of bytes returned will never exceed `n`.
Because demuxing involves scanning 8-byte headers, the actual amount of
data read from the underlying stream may be greater than `n`.
"""
size = self._next_packet_size(n)
if size <= 0:
return
else:
return self.stream.read(size)
def write(self, data):
"""
Delegates to the underlying Stream.
"""
return self.stream.write(data)
def _next_packet_size(self, n=0):
size = 0
if self.remain > 0:
size = min(n, self.remain)
self.remain -= size
else:
data = self.stream.read(8)
if data is None:
return 0
if len(data) == 8:
__, actual = struct.unpack('>BxxxL', data)
size = min(n, actual)
self.remain = actual - size
return size
def __repr__(self):
return "{cls}({stream})".format(cls=type(self).__name__,
stream=self.stream)
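Demuxer undoes Docker's attach multiplexing: an 8-byte header carrying the stream id and a big-endian payload length, followed by the payload itself. A quick sketch that builds two frames by hand and reads them back (same module-path assumption as above):

    import os
    import struct
    from fig.packages.dockerpty.io import Demuxer, Stream

    read_fd, write_fd = os.pipe()
    for stream_id, payload in ((1, b"to stdout\n"), (2, b"to stderr\n")):
        # 8-byte header: stream id, 3 padding bytes, big-endian 32-bit length.
        os.write(write_fd, struct.pack(">BxxxL", stream_id, len(payload)) + payload)
    os.close(write_fd)

    demuxed = Demuxer(Stream(os.fdopen(read_fd, "rb")))
    print(demuxed.read())   # b'to stdout\n'
    print(demuxed.read())   # b'to stderr\n'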
class Pump(object):
"""
Stream pump class.
A Pump wraps two Streams, reading from one and writing its data into
the other, much like a pipe but manually managed.
This abstraction is used to facilitate piping data between the file
descriptors associated with the tty and those associated with a container's
allocated pty.
Pumps are selectable based on the 'read' end of the pipe.
"""
def __init__(self, from_stream, to_stream):
"""
Initialize a Pump with a Stream to read from and another to write to.
"""
self.from_stream = from_stream
self.to_stream = to_stream
def fileno(self):
"""
Returns the `fileno()` of the reader end of the Pump.
This is useful to allow Pumps to function with `select()`.
"""
return self.from_stream.fileno()
def set_blocking(self, value):
return self.from_stream.set_blocking(value)
def flush(self, n=4096):
"""
Flush `n` bytes of data from the reader Stream to the writer Stream.
Returns the number of bytes that were actually flushed. A return value
of zero is not an error.
If EOF has been reached, `None` is returned.
"""
try:
return self.to_stream.write(self.from_stream.read(n))
except OSError as e:
if e.errno != errno.EPIPE:
raise e
def __repr__(self):
return "{cls}(from={from_stream}, to={to_stream})".format(
cls=type(self).__name__,
from_stream=self.from_stream,
to_stream=self.to_stream)
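A Pump simply moves whatever is readable on one Stream onto another; PseudoTerminal (next file) builds three of them to wire the local tty to the container's stdin, stdout and stderr. A toy sketch with two pipes (same module-path assumption as above):

    import os
    from fig.packages.dockerpty.io import Pump, Stream

    src_read, src_write = os.pipe()    # stands in for the local tty
    dst_read, dst_write = os.pipe()    # stands in for the container socket

    pump = Pump(Stream(os.fdopen(src_read, "rb")), Stream(os.fdopen(dst_write, "wb")))
    os.write(src_write, b"ping\n")
    print(pump.flush())                # 5 - bytes copied across
    print(os.read(dst_read, 4096))     # b'ping\n'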


@@ -0,0 +1,235 @@
# dockerpty: pty.py
#
# Copyright 2014 Chris Corbyn <chris@w3style.co.uk>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import signal
from ssl import SSLError
from . import io
from . import tty
class WINCHHandler(object):
"""
WINCH Signal handler to keep the PTY correctly sized.
"""
def __init__(self, pty):
"""
Initialize a new WINCH handler for the given PTY.
Initializing a handler has no immediate side-effects. The `start()`
method must be invoked for the signals to be trapped.
"""
self.pty = pty
self.original_handler = None
def __enter__(self):
"""
Invoked on entering a `with` block.
"""
self.start()
return self
def __exit__(self, *_):
"""
Invoked on exiting a `with` block.
"""
self.stop()
def start(self):
"""
Start trapping WINCH signals and resizing the PTY.
This method saves the previous WINCH handler so it can be restored on
`stop()`.
"""
def handle(signum, frame):
if signum == signal.SIGWINCH:
self.pty.resize()
self.original_handler = signal.signal(signal.SIGWINCH, handle)
def stop(self):
"""
Stop trapping WINCH signals and restore the previous WINCH handler.
"""
if self.original_handler is not None:
signal.signal(signal.SIGWINCH, self.original_handler)
class PseudoTerminal(object):
"""
Wraps the pseudo-TTY (PTY) allocated to a docker container.
The PTY is managed via the current process' TTY until it is closed.
Example:
import docker
from dockerpty import PseudoTerminal
client = docker.Client()
container = client.create_container(
image='busybox:latest',
stdin_open=True,
tty=True,
command='/bin/sh',
)
# hijacks the current tty until the pty is closed
PseudoTerminal(client, container).start()
Care is taken to ensure all file descriptors are restored on exit. For
example, you can attach to a running container from within a Python REPL
and when the container exits, the user will be returned to the Python REPL
without adverse effects.
"""
def __init__(self, client, container):
"""
Initialize the PTY using the docker.Client instance and container dict.
"""
self.client = client
self.container = container
self.raw = None
def start(self, **kwargs):
"""
Present the PTY of the container inside the current process.
This will take over the current process' TTY until the container's PTY
is closed.
"""
pty_stdin, pty_stdout, pty_stderr = self.sockets()
mappings = [
(io.Stream(sys.stdin), pty_stdin),
(pty_stdout, io.Stream(sys.stdout)),
(pty_stderr, io.Stream(sys.stderr)),
]
pumps = [io.Pump(a, b) for (a, b) in mappings if a and b]
if not self.container_info()['State']['Running']:
self.client.start(self.container, **kwargs)
flags = [p.set_blocking(False) for p in pumps]
try:
with WINCHHandler(self):
self._hijack_tty(pumps)
finally:
if flags:
for (pump, flag) in zip(pumps, flags):
io.set_blocking(pump, flag)
def israw(self):
"""
Returns True if the PTY should operate in raw mode.
If the container was not started with tty=True, this will return False.
"""
if self.raw is None:
info = self.container_info()
self.raw = sys.stdout.isatty() and info['Config']['Tty']
return self.raw
def sockets(self):
"""
Returns a tuple of sockets connected to the pty (stdin,stdout,stderr).
If any of the sockets are not attached in the container, `None` is
returned in the tuple.
"""
info = self.container_info()
def attach_socket(key):
if info['Config']['Attach{0}'.format(key.capitalize())]:
socket = self.client.attach_socket(
self.container,
{key: 1, 'stream': 1, 'logs': 1},
)
stream = io.Stream(socket)
if info['Config']['Tty']:
return stream
else:
return io.Demuxer(stream)
else:
return None
return map(attach_socket, ('stdin', 'stdout', 'stderr'))
def resize(self, size=None):
"""
Resize the container's PTY.
If `size` is not None, it must be a tuple of (height,width), otherwise
it will be determined by the size of the current TTY.
"""
if not self.israw():
return
size = size or tty.size(sys.stdout)
if size is not None:
rows, cols = size
try:
self.client.resize(self.container, height=rows, width=cols)
except IOError: # Container already exited
pass
def container_info(self):
"""
Thin wrapper around client.inspect_container().
"""
return self.client.inspect_container(self.container)
def _hijack_tty(self, pumps):
with tty.Terminal(sys.stdin, raw=self.israw()):
self.resize()
while True:
_ready = io.select(pumps, timeout=60)
try:
if all([p.flush() is None for p in pumps]):
break
except SSLError as e:
if 'The operation did not complete' not in e.strerror:
raise e
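
For context, a minimal usage sketch that goes through dockerpty's top-level start() helper, which is what the fig tests in this changeset patch (fig.packages.dockerpty.start). It assumes a reachable Docker daemon and the busybox image, and is equivalent to starting the container and calling PseudoTerminal(client, container).start() as in the docstring above.

import docker
import dockerpty

client = docker.Client()  # host/TLS settings depend on your environment
container = client.create_container(
    image='busybox:latest',
    stdin_open=True,
    tty=True,
    command='/bin/sh',
)
dockerpty.start(client, container)  # hijacks the current TTY until the shell exits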

View File

@@ -0,0 +1,130 @@
# dockerpty: tty.py
#
# Copyright 2014 Chris Corbyn <chris@w3style.co.uk>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
import os
import termios
import tty
import fcntl
import struct
def size(fd):
"""
Return a tuple (rows,cols) representing the size of the TTY `fd`.
The provided file descriptor should be the stdout stream of the TTY.
If the TTY size cannot be determined, returns None.
"""
if not os.isatty(fd.fileno()):
return None
try:
dims = struct.unpack('hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, 'hhhh'))
except:
try:
dims = (os.environ['LINES'], os.environ['COLUMNS'])
except:
return None
return dims
class Terminal(object):
"""
Terminal provides wrapper functionality to temporarily make the tty raw.
This is useful when streaming data from a pseudo-terminal into the tty.
Example:
with Terminal(sys.stdin, raw=True):
do_things_in_raw_mode()
"""
def __init__(self, fd, raw=True):
"""
Initialize a terminal for the tty with stdin attached to `fd`.
Initializing the Terminal has no immediate side effects. The `start()`
method must be invoked, or `with raw_terminal:` used before the
terminal is affected.
"""
self.fd = fd
self.raw = raw
self.original_attributes = None
def __enter__(self):
"""
Invoked when a `with` block is first entered.
"""
self.start()
return self
def __exit__(self, *_):
"""
Invoked when a `with` block is finished.
"""
self.stop()
def israw(self):
"""
Returns True if the TTY should operate in raw mode.
"""
return self.raw
def start(self):
"""
Saves the current terminal attributes and makes the tty raw.
This method returns None immediately.
"""
if os.isatty(self.fd.fileno()) and self.israw():
self.original_attributes = termios.tcgetattr(self.fd)
tty.setraw(self.fd)
def stop(self):
"""
Restores the terminal attributes back to before setting raw mode.
If the raw terminal was not started, does nothing.
"""
if self.original_attributes is not None:
termios.tcsetattr(
self.fd,
termios.TCSADRAIN,
self.original_attributes,
)
def __repr__(self):
return "{cls}({fd}, raw={raw})".format(
cls=type(self).__name__,
fd=self.fd,
raw=self.raw)
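
A short sketch combining the two helpers above, assuming stdin/stdout are attached to a real terminal and that the vendored module is importable as fig.packages.dockerpty.tty (an assumed path, matching the vendoring used elsewhere in this changeset): query the window size, then read a single keystroke while the tty is raw.

import os
import sys

from fig.packages.dockerpty import tty  # assumed vendored import path

dims = tty.size(sys.stdout)  # (rows, cols), or None when stdout is not a tty
with tty.Terminal(sys.stdin, raw=True):
    key = os.read(sys.stdin.fileno(), 1)  # a single keypress, no line buffering
print(dims, repr(key))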

View File

@@ -1,404 +0,0 @@
"""Utilities for writing code that runs on Python 2 and 3"""
# Copyright (c) 2010-2013 Benjamin Peterson
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
# the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import operator
import sys
import types
__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.3.0"
# True if we are running on Python 3.
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result)
# This is a bit ugly, but it avoids running this again.
delattr(tp, self.name)
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _MovedItems(types.ModuleType):
"""Lazy loading of moved objects"""
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
del attr
moves = sys.modules[__name__ + ".moves"] = _MovedItems("moves")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
_iterkeys = "keys"
_itervalues = "values"
_iteritems = "items"
_iterlists = "lists"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
_iterkeys = "iterkeys"
_itervalues = "itervalues"
_iteritems = "iteritems"
_iterlists = "iterlists"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
def iterkeys(d, **kw):
"""Return an iterator over the keys of a dictionary."""
return iter(getattr(d, _iterkeys)(**kw))
def itervalues(d, **kw):
"""Return an iterator over the values of a dictionary."""
return iter(getattr(d, _itervalues)(**kw))
def iteritems(d, **kw):
"""Return an iterator over the (key, value) pairs of a dictionary."""
return iter(getattr(d, _iteritems)(**kw))
def iterlists(d, **kw):
"""Return an iterator over the (key, [values]) pairs of a dictionary."""
return iter(getattr(d, _iterlists)(**kw))
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
def b(s):
return s
def u(s):
return unicode(s, "unicode_escape")
int2byte = chr
import StringIO
StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
if PY3:
import builtins
exec_ = getattr(builtins, "exec")
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
print_ = getattr(builtins, "print")
del builtins
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
def print_(*args, **kwargs):
"""The new-style print function."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
_add_doc(reraise, """Reraise an exception.""")
def with_metaclass(meta, base=object):
"""Create a base class with a metaclass."""
return meta("NewBase", (base,), {})

View File

@@ -1,9 +1,10 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import logging
from .service import Service
from .container import Container
from .packages.docker.errors import APIError
from docker.errors import APIError
log = logging.getLogger(__name__)
@@ -155,6 +156,10 @@ class Project(object):
for service in reversed(self.get_services(service_names)):
service.kill(**options)
def restart(self, service_names=None, **options):
for service in self.get_services(service_names):
service.restart(**options)
def build(self, service_names=None, no_cache=False):
for service in self.get_services(service_names):
if service.can_be_built():
@@ -175,16 +180,19 @@ class Project(object):
return running_containers
def pull(self, service_names=None, insecure_registry=False):
for service in self.get_services(service_names, include_links=True):
service.pull(insecure_registry=insecure_registry)
def remove_stopped(self, service_names=None, **options):
for service in self.get_services(service_names):
service.remove_stopped(**options)
def containers(self, service_names=None, *args, **kwargs):
l = []
for service in self.get_services(service_names):
for container in service.containers(*args, **kwargs):
l.append(container)
return l
def containers(self, service_names=None, stopped=False, one_off=False):
return [Container.from_ps(self.client, container)
for container in self.client.containers(all=stopped)
for service in self.get_services(service_names)
if service.has_container(container, one_off=one_off)]
def _inject_links(self, acc, service):
linked_names = service.get_linked_names()
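
A tiny usage sketch of the rewritten containers() signature above; the project object and service names are illustrative (for example, a project obtained via the CLI's TopLevelCommand.get_project(), as the updated tests later in this changeset do).

def list_project_containers(project):
    """Illustrative calls against the new Project.containers() keyword arguments."""
    running_web = project.containers(service_names=['web'])
    everything = project.containers(stopped=True)
    one_offs = project.containers(stopped=True, one_off=True)
    return running_web, everything, one_offs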

View File

@@ -1,10 +1,14 @@
from __future__ import unicode_literals
from __future__ import absolute_import
from .packages.docker.errors import APIError
from collections import namedtuple
import logging
import re
import os
from operator import attrgetter
import sys
from docker.errors import APIError
from .container import Container
from .progress_stream import stream_output, StreamOutputError
@@ -39,6 +43,12 @@ class ConfigError(ValueError):
pass
VolumeSpec = namedtuple('VolumeSpec', 'external internal mode')
ServiceName = namedtuple('ServiceName', 'project service number')
class Service(object):
def __init__(self, name, client=None, project='default', links=None, volumes_from=None, **options):
if not re.match('^%s+$' % VALID_NAME_CHARS, name):
@@ -65,15 +75,30 @@ class Service(object):
self.options = options
def containers(self, stopped=False, one_off=False):
l = []
for container in self.client.containers(all=stopped):
name = get_container_name(container)
if not name or not is_valid_name(name, one_off):
return [Container.from_ps(self.client, container)
for container in self.client.containers(all=stopped)
if self.has_container(container, one_off=one_off)]
def has_container(self, container, one_off=False):
"""Return True if `container` was created to fulfill this service."""
name = get_container_name(container)
if not name or not is_valid_name(name, one_off):
return False
project, name, _number = parse_name(name)
return project == self.project and name == self.name
def get_container(self, number=1):
"""Return a :class:`fig.container.Container` for this service. The
container must be active, and match `number`.
"""
for container in self.client.containers():
if not self.has_container(container):
continue
project, name, number = parse_name(name)
if project == self.project and name == self.name:
l.append(Container.from_ps(self.client, container))
return l
_, _, container_number = parse_name(get_container_name(container))
if container_number == number:
return Container.from_ps(self.client, container)
raise ValueError("No container found for %s_%s" % (self.name, number))
def start(self, **options):
for c in self.containers(stopped=True):
@@ -89,6 +114,11 @@ class Service(object):
log.info("Killing %s..." % c.name)
c.kill(**options)
def restart(self, **options):
for c in self.containers():
log.info("Restarting %s..." % c.name)
c.restart(**options)
def scale(self, desired_num):
"""
Adjusts the number of containers to the specified number and ensures they are running.
@@ -161,8 +191,8 @@ class Service(object):
"""
containers = self.containers(stopped=True)
if len(containers) == 0:
log.info("Creating %s..." % self.next_container_name())
if not containers:
log.info("Creating %s..." % self._next_container_name(containers))
container = self.create_container(**override_options)
self.start_container(container)
return [(None, container)]
@@ -176,6 +206,10 @@ class Service(object):
return tuples
def recreate_container(self, container, **override_options):
"""Recreate a container. An intermediate container is created so that
the new container has the same name, while still supporting
`volumes-from` the original container.
"""
try:
container.stop()
except APIError as e:
@@ -189,7 +223,7 @@ class Service(object):
intermediate_container = Container.create(
self.client,
image=container.image,
entrypoint=['echo'],
entrypoint=['/bin/echo'],
command=[],
)
intermediate_container.start(volumes_from=container.id)
@@ -212,37 +246,22 @@ class Service(object):
return self.start_container(container, **options)
def start_container(self, container=None, intermediate_container=None, **override_options):
if container is None:
container = self.create_container(**override_options)
container = container or self.create_container(**override_options)
options = dict(self.options, **override_options)
ports = dict(split_port(port) for port in options.get('ports') or [])
options = self.options.copy()
options.update(override_options)
port_bindings = {}
if options.get('ports', None) is not None:
for port in options['ports']:
internal_port, external_port = split_port(port)
port_bindings[internal_port] = external_port
volume_bindings = {}
if options.get('volumes', None) is not None:
for volume in options['volumes']:
if ':' in volume:
external_dir, internal_dir = volume.split(':')
volume_bindings[os.path.abspath(external_dir)] = {
'bind': internal_dir,
'ro': False,
}
volume_bindings = dict(
build_volume_binding(parse_volume_spec(volume))
for volume in options.get('volumes') or []
if ':' in volume)
privileged = options.get('privileged', False)
net = options.get('net', 'bridge')
dns = options.get('dns', None)
container.start(
links=self._get_links(link_to_self=override_options.get('one_off', False)),
port_bindings=port_bindings,
links=self._get_links(link_to_self=options.get('one_off', False)),
port_bindings=ports,
binds=volume_bindings,
volumes_from=self._get_volumes_from(intermediate_container),
privileged=privileged,
@@ -254,8 +273,8 @@ class Service(object):
def start_or_create_containers(self):
containers = self.containers(stopped=True)
if len(containers) == 0:
log.info("Creating %s..." % self.next_container_name())
if not containers:
log.info("Creating %s..." % self._next_container_name(containers))
new_container = self.create_container()
return [self.start_container(new_container)]
else:
@@ -264,42 +283,43 @@ class Service(object):
def get_linked_names(self):
return [s.name for (s, _) in self.links]
def next_container_name(self, one_off=False):
def _next_container_name(self, all_containers, one_off=False):
bits = [self.project, self.name]
if one_off:
bits.append('run')
return '_'.join(bits + [str(self.next_container_number(one_off=one_off))])
return '_'.join(bits + [str(self._next_container_number(all_containers))])
def next_container_number(self, one_off=False):
numbers = [parse_name(c.name)[2] for c in self.containers(stopped=True, one_off=one_off)]
if len(numbers) == 0:
return 1
else:
return max(numbers) + 1
def _next_container_number(self, all_containers):
numbers = [parse_name(c.name).number for c in all_containers]
return 1 if not numbers else max(numbers) + 1
def _get_links(self, link_to_self):
links = []
for service, link_name in self.links:
for container in service.containers():
if link_name:
links.append((container.name, link_name))
links.append((container.name, link_name or service.name))
links.append((container.name, container.name))
links.append((container.name, container.name_without_project))
if link_to_self:
for container in self.containers():
links.append((container.name, self.name))
links.append((container.name, container.name))
links.append((container.name, container.name_without_project))
return links
def _get_volumes_from(self, intermediate_container=None):
volumes_from = []
for v in self.volumes_from:
if isinstance(v, Service):
for container in v.containers(stopped=True):
volumes_from.append(container.id)
elif isinstance(v, Container):
volumes_from.append(v.id)
for volume_source in self.volumes_from:
if isinstance(volume_source, Service):
containers = volume_source.containers(stopped=True)
if not containers:
volumes_from.append(volume_source.create_container().id)
else:
volumes_from.extend(map(attrgetter('id'), containers))
elif isinstance(volume_source, Container):
volumes_from.append(volume_source.id)
if intermediate_container:
volumes_from.append(intermediate_container.id)
@@ -310,7 +330,9 @@ class Service(object):
container_options = dict((k, self.options[k]) for k in DOCKER_CONFIG_KEYS if k in self.options)
container_options.update(override_options)
container_options['name'] = self.next_container_name(one_off)
container_options['name'] = self._next_container_name(
self.containers(stopped=True, one_off=one_off),
one_off)
# If a qualified hostname was given, split it into an
# unqualified hostname and a domainname unless domainname
@@ -336,7 +358,9 @@ class Service(object):
container_options['ports'] = ports
if 'volumes' in container_options:
container_options['volumes'] = dict((split_volume(v)[1], {}) for v in container_options['volumes'])
container_options['volumes'] = dict(
(parse_volume_spec(v).internal, {})
for v in container_options['volumes'])
if 'environment' in container_options:
if isinstance(container_options['environment'], list):
@@ -399,6 +423,14 @@ class Service(object):
return False
return True
def pull(self, insecure_registry=False):
if 'image' in self.options:
log.info('Pulling %s (%s)...' % (self.name, self.options.get('image')))
self.client.pull(
self.options.get('image'),
insecure_registry=insecure_registry
)
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
@@ -413,10 +445,10 @@ def is_valid_name(name, one_off=False):
return match.group(3) is None
def parse_name(name, one_off=False):
def parse_name(name):
match = NAME_RE.match(name)
(project, service_name, _, suffix) = match.groups()
return (project, service_name, int(suffix))
return ServiceName(project, service_name, int(suffix))
def get_container_name(container):
@@ -431,32 +463,47 @@ def get_container_name(container):
return name[1:]
def split_volume(v):
"""
If v is of the format EXTERNAL:INTERNAL, returns (EXTERNAL, INTERNAL).
If v is of the format INTERNAL, returns (None, INTERNAL).
"""
if ':' in v:
return v.split(':', 1)
else:
return (None, v)
def parse_volume_spec(volume_config):
parts = volume_config.split(':')
if len(parts) > 3:
raise ConfigError("Volume %s has incorrect format, should be "
"external:internal[:mode]" % volume_config)
if len(parts) == 1:
return VolumeSpec(None, parts[0], 'rw')
if len(parts) == 2:
parts.append('rw')
external, internal, mode = parts
if mode not in ('rw', 'ro'):
raise ConfigError("Volume %s has invalid mode (%s), should be "
"one of: rw, ro." % (volume_config, mode))
return VolumeSpec(external, internal, mode)
def build_volume_binding(volume_spec):
internal = {'bind': volume_spec.internal, 'ro': volume_spec.mode == 'ro'}
external = os.path.expanduser(volume_spec.external)
return os.path.abspath(os.path.expandvars(external)), internal
def split_port(port):
port = str(port)
external_ip = None
if ':' in port:
external_port, internal_port = port.rsplit(':', 1)
if ':' in external_port:
external_ip, external_port = external_port.split(':', 1)
else:
external_port, internal_port = (None, port)
if external_ip:
if external_port:
external_port = (external_ip, external_port)
else:
external_port = (external_ip,)
return internal_port, external_port
parts = str(port).split(':')
if not 1 <= len(parts) <= 3:
raise ConfigError('Invalid port "%s", should be '
'[[remote_ip:]remote_port:]port[/protocol]' % port)
if len(parts) == 1:
internal_port, = parts
return internal_port, None
if len(parts) == 2:
external_port, internal_port = parts
return internal_port, external_port
external_ip, external_port, internal_port = parts
return internal_port, (external_ip, external_port or None)
def split_env(env):
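
A few worked examples of the new module-level helpers above (values illustrative; assumes they remain importable from fig.service as defined in this diff):

from fig.service import (
    parse_name, parse_volume_spec, build_volume_binding, split_port)

assert parse_name('figtest_db_1') == ('figtest', 'db', 1)  # a ServiceName namedtuple

spec = parse_volume_spec('~/data:/var/lib/data:ro')
# VolumeSpec(external='~/data', internal='/var/lib/data', mode='ro')
host_path, binding = build_volume_binding(spec)
# host_path is the expanded absolute path; binding == {'bind': '/var/lib/data', 'ro': True}

assert split_port('3000') == ('3000', None)
assert split_port('9999:3001') == ('3001', '9999')
assert split_port('127.0.0.1:8001:8000') == ('8000', ('127.0.0.1', '8001'))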

View File

@@ -1,5 +1,5 @@
mock==1.0.1
nose==1.3.0
pyinstaller==2.1
mock >= 1.0.1
nose
git+https://github.com/pyinstaller/pyinstaller.git@12e40471c77f588ea5be352f7219c873ddaae056#egg=pyinstaller
unittest2
flake8

View File

@@ -1,6 +1,7 @@
docopt==0.6.1
PyYAML==3.10
docker-py==0.5.3
docopt==0.6.1
requests==2.2.1
six==1.7.3
texttable==0.8.1
websocket-client==0.11.0
dockerpty==0.2.3

script/.validate Normal file
View File

@@ -0,0 +1,33 @@
#!/bin/bash
if [ -z "$VALIDATE_UPSTREAM" ]; then
# this is kind of an expensive check, so let's not do this twice if we
# are running more than one validate bundlescript
VALIDATE_REPO='https://github.com/docker/fig.git'
VALIDATE_BRANCH='master'
if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
VALIDATE_BRANCH="${TRAVIS_BRANCH}"
fi
VALIDATE_HEAD="$(git rev-parse --verify HEAD)"
git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"
VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
validate_diff() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git diff "$VALIDATE_COMMIT_DIFF" "$@"
fi
}
validate_log() {
if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
git log "$VALIDATE_COMMIT_LOG" "$@"
fi
}
fi

View File

@@ -3,5 +3,6 @@ set -ex
mkdir -p `pwd`/dist
chmod 777 `pwd`/dist
docker build -t fig .
docker run -v `pwd`/dist:/code/dist fig pyinstaller -F bin/fig
docker run -v `pwd`/dist:/code/dist fig dist/fig --version
docker run -u user -v `pwd`/dist:/code/dist fig pyinstaller -F bin/fig
mv dist/fig dist/fig-Linux-x86_64
docker run -u user -v `pwd`/dist:/code/dist fig dist/fig-Linux-x86_64 --version

View File

@@ -2,7 +2,9 @@
set -ex
rm -rf venv
virtualenv venv
venv/bin/pip install pyinstaller==2.1
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-dev.txt
venv/bin/pip install .
venv/bin/pyinstaller -F bin/fig
dist/fig --version
mv dist/fig dist/fig-Darwin-x86_64
dist/fig-Darwin-x86_64 --version

View File

@@ -13,7 +13,7 @@ if [ ! -d "$GIT_DIR" ]; then
fi
if !(git remote | grep origin); then
git remote add origin git@github.com:orchardup/fig.git
git remote add origin git@github.com:docker/fig.git
fi
git fetch origin

View File

@@ -1,4 +1,12 @@
#!/bin/sh
set -e
flake8 fig
PYTHONIOENCODING=ascii nosetests $@
set -ex
target="tests"
if [[ -n "$@" ]]; then
target="$@"
fi
docker build -t fig .
docker run -v /var/run/docker.sock:/var/run/docker.sock fig flake8 --exclude=packages fig
docker run -v /var/run/docker.sock:/var/run/docker.sock fig nosetests $target

View File

@@ -1,10 +0,0 @@
#!/bin/bash
set -ex
# Kill background processes on exit
trap 'kill -9 $(jobs -p)' SIGINT SIGTERM EXIT
export DOCKER_HOST=tcp://localhost:4243
orchard proxy -H $TRAVIS_JOB_ID $DOCKER_HOST &
sleep 2
nosetests -v

script/validate-dco Executable file
View File

@@ -0,0 +1,56 @@
#!/bin/bash
source "$(dirname "$BASH_SOURCE")/.validate"
adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }')
dels=$(validate_diff --numstat | awk '{ s += $2 } END { print s }')
notDocs="$(validate_diff --numstat | awk '$3 !~ /^docs\// { print $3 }')"
: ${adds:=0}
: ${dels:=0}
# "Username may only contain alphanumeric characters or dashes and cannot begin with a dash"
githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'
# https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work
dcoPrefix='Signed-off-by:'
dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"
check_dco() {
grep -qE "$dcoRegex"
}
if [ $adds -eq 0 -a $dels -eq 0 ]; then
echo '0 adds, 0 deletions; nothing to validate! :)'
elif [ -z "$notDocs" -a $adds -le 1 -a $dels -le 1 ]; then
echo 'Congratulations! DCO small-patch-exception material!'
else
commits=( $(validate_log --format='format:%H%n') )
badCommits=()
for commit in "${commits[@]}"; do
if [ -z "$(git log -1 --format='format:' --name-status "$commit")" ]; then
# no content (ie, Merge commit, etc)
continue
fi
if ! git log -1 --format='format:%B' "$commit" | check_dco; then
badCommits+=( "$commit" )
fi
done
if [ ${#badCommits[@]} -eq 0 ]; then
echo "Congratulations! All commits are properly signed with the DCO!"
else
{
echo "These commits do not have a proper '$dcoPrefix' marker:"
for commit in "${badCommits[@]}"; do
echo " - $commit"
done
echo
echo 'Please amend each commit to include a properly formatted DCO marker.'
echo
echo 'Visit the following URL for information about the Docker DCO:'
echo ' https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work'
echo
} >&2
false
fi
fi
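
For reference, a commit-message trailer of the following form (name and address hypothetical) satisfies dcoRegex above; the "(github: ...)" suffix and the legacy "Docker-DCO-1.1-" prefix are both optional, so a plain "Signed-off-by: Jane Doe <jane.doe@example.com>" passes as well.

Signed-off-by: Jane Doe <jane.doe@example.com> (github: janedoe)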

View File

@@ -3,9 +3,10 @@
from __future__ import unicode_literals
from __future__ import absolute_import
from setuptools import setup, find_packages
import re
import os
import codecs
import os
import re
import sys
def read(*parts):
@@ -22,11 +23,28 @@ def find_version(*file_paths):
return version_match.group(1)
raise RuntimeError("Unable to find version string.")
with open('requirements.txt') as f:
install_requires = f.read().splitlines()
with open('requirements-dev.txt') as f:
tests_require = f.read().splitlines()
install_requires = [
'docopt >= 0.6.1, < 0.7',
'PyYAML >= 3.10, < 4',
'requests >= 2.2.1, < 3',
'texttable >= 0.8.1, < 0.9',
'websocket-client >= 0.11.0, < 0.12',
'docker-py >= 0.5, < 0.6',
'six >= 1.3.0, < 2',
]
tests_require = [
'mock >= 1.0.1',
'nose',
'pyinstaller',
'flake8',
]
if sys.version_info < (2, 7):
tests_require.append('unittest2')
setup(
name='fig',
@@ -35,7 +53,7 @@ setup(
url='http://www.fig.sh/',
author='Docker, Inc.',
license='Apache License 2.0',
packages=find_packages(),
packages=find_packages(exclude=[ 'tests.*', 'tests' ]),
include_package_data=True,
test_suite='nose.collector',
install_requires=install_requires,

View File

@@ -0,0 +1,2 @@
FROM busybox:latest
ENTRYPOINT echo "From prebuilt entrypoint"

View File

@@ -0,0 +1,2 @@
service:
build: tests/fixtures/dockerfile_with_entrypoint
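
This fixture pairs with the entrypoint-override test added further down; on the command line the same scenario is, for example, "fig run --entrypoint /bin/echo service helloworld", which should print "helloworld" instead of running the image's baked-in ENTRYPOINT.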

View File

@@ -0,0 +1,7 @@
service:
image: busybox:latest
command: sleep 5
environment:
foo: bar
hello: world
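
This appears to be the fixture behind the new -e override test below: a run such as "fig run -e foo=notbar -e alpha=beta service" keeps hello=world from the file, replaces foo, and adds alpha; values containing '=' (such as allo=moto=bobo) are split only on the first '='.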

tests/fixtures/ports-figfile/fig.yml vendored Normal file
View File

@@ -0,0 +1,7 @@
simple:
image: busybox:latest
command: /bin/sleep 300
ports:
- '3000'
- '9999:3001'

View File

@@ -1,10 +1,13 @@
from __future__ import absolute_import
from .testcases import DockerClientTestCase
from mock import patch
from fig.cli.main import TopLevelCommand
from fig.packages.six import StringIO
import sys
from six import StringIO
from mock import patch
from .testcases import DockerClientTestCase
from fig.cli.main import TopLevelCommand
class CLITestCase(DockerClientTestCase):
def setUp(self):
super(CLITestCase, self).setUp()
@@ -15,12 +18,16 @@ class CLITestCase(DockerClientTestCase):
def tearDown(self):
sys.exit = self.old_sys_exit
self.command.project.kill()
self.command.project.remove_stopped()
self.project.kill()
self.project.remove_stopped()
@property
def project(self):
return self.command.get_project(self.command.get_config_path())
@patch('sys.stdout', new_callable=StringIO)
def test_ps(self, mock_stdout):
self.command.project.get_service('simple').create_container()
self.project.get_service('simple').create_container()
self.command.dispatch(['ps'], None)
self.assertIn('simplefigfile_simple_1', mock_stdout.getvalue())
@@ -46,6 +53,12 @@ class CLITestCase(DockerClientTestCase):
self.assertNotIn('multiplefigfiles_another_1', output)
self.assertIn('multiplefigfiles_yetanother_1', output)
@patch('fig.service.log')
def test_pull(self, mock_logging):
self.command.dispatch(['pull'], None)
mock_logging.info.assert_any_call('Pulling simple (busybox:latest)...')
mock_logging.info.assert_any_call('Pulling another (busybox:latest)...')
@patch('sys.stdout', new_callable=StringIO)
def test_build_no_cache(self, mock_stdout):
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
@@ -61,20 +74,19 @@ class CLITestCase(DockerClientTestCase):
self.command.dispatch(['build', '--no-cache', 'simple'], None)
output = mock_stdout.getvalue()
self.assertNotIn(cache_indicator, output)
def test_up(self):
self.command.dispatch(['up', '-d'], None)
service = self.command.project.get_service('simple')
another = self.command.project.get_service('another')
service = self.project.get_service('simple')
another = self.project.get_service('another')
self.assertEqual(len(service.containers()), 1)
self.assertEqual(len(another.containers()), 1)
def test_up_with_links(self):
self.command.base_dir = 'tests/fixtures/links-figfile'
self.command.dispatch(['up', '-d', 'web'], None)
web = self.command.project.get_service('web')
db = self.command.project.get_service('db')
console = self.command.project.get_service('console')
web = self.project.get_service('web')
db = self.project.get_service('db')
console = self.project.get_service('console')
self.assertEqual(len(web.containers()), 1)
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(console.containers()), 0)
@@ -82,16 +94,16 @@ class CLITestCase(DockerClientTestCase):
def test_up_with_no_deps(self):
self.command.base_dir = 'tests/fixtures/links-figfile'
self.command.dispatch(['up', '-d', '--no-deps', 'web'], None)
web = self.command.project.get_service('web')
db = self.command.project.get_service('db')
console = self.command.project.get_service('console')
web = self.project.get_service('web')
db = self.project.get_service('db')
console = self.project.get_service('console')
self.assertEqual(len(web.containers()), 1)
self.assertEqual(len(db.containers()), 0)
self.assertEqual(len(console.containers()), 0)
def test_up_with_recreate(self):
self.command.dispatch(['up', '-d'], None)
service = self.command.project.get_service('simple')
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
old_ids = [c.id for c in service.containers()]
@@ -105,7 +117,7 @@ class CLITestCase(DockerClientTestCase):
def test_up_with_keep_old(self):
self.command.dispatch(['up', '-d'], None)
service = self.command.project.get_service('simple')
service = self.project.get_service('simple')
self.assertEqual(len(service.containers()), 1)
old_ids = [c.id for c in service.containers()]
@@ -117,34 +129,33 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(old_ids, new_ids)
@patch('dockerpty.start')
@patch('fig.packages.dockerpty.start')
def test_run_service_without_links(self, mock_stdout):
self.command.base_dir = 'tests/fixtures/links-figfile'
self.command.dispatch(['run', 'console', '/bin/true'], None)
self.assertEqual(len(self.command.project.containers()), 0)
self.assertEqual(len(self.project.containers()), 0)
@patch('dockerpty.start')
@patch('fig.packages.dockerpty.start')
def test_run_service_with_links(self, __):
self.command.base_dir = 'tests/fixtures/links-figfile'
self.command.dispatch(['run', 'web', '/bin/true'], None)
db = self.command.project.get_service('db')
console = self.command.project.get_service('console')
db = self.project.get_service('db')
console = self.project.get_service('console')
self.assertEqual(len(db.containers()), 1)
self.assertEqual(len(console.containers()), 0)
@patch('dockerpty.start')
@patch('fig.packages.dockerpty.start')
def test_run_with_no_deps(self, __):
self.command.base_dir = 'tests/fixtures/links-figfile'
self.command.dispatch(['run', '--no-deps', 'web', '/bin/true'], None)
db = self.command.project.get_service('db')
db = self.project.get_service('db')
self.assertEqual(len(db.containers()), 0)
@patch('dockerpty.start')
@patch('fig.packages.dockerpty.start')
def test_run_does_not_recreate_linked_containers(self, __):
self.command.base_dir = 'tests/fixtures/links-figfile'
self.command.dispatch(['up', '-d', 'db'], None)
db = self.command.project.get_service('db')
db = self.project.get_service('db')
self.assertEqual(len(db.containers()), 1)
old_ids = [c.id for c in db.containers()]
@@ -156,16 +167,16 @@ class CLITestCase(DockerClientTestCase):
self.assertEqual(old_ids, new_ids)
@patch('dockerpty.start')
@patch('fig.packages.dockerpty.start')
def test_run_without_command(self, __):
self.command.base_dir = 'tests/fixtures/commands-figfile'
self.client.build('tests/fixtures/simple-dockerfile', tag='figtest_test')
self.check_build('tests/fixtures/simple-dockerfile', tag='figtest_test')
for c in self.command.project.containers(stopped=True, one_off=True):
for c in self.project.containers(stopped=True, one_off=True):
c.remove()
self.command.dispatch(['run', 'implicit'], None)
service = self.command.project.get_service('implicit')
service = self.project.get_service('implicit')
containers = service.containers(stopped=True, one_off=True)
self.assertEqual(
[c.human_readable_command for c in containers],
@@ -173,40 +184,104 @@ class CLITestCase(DockerClientTestCase):
)
self.command.dispatch(['run', 'explicit'], None)
service = self.command.project.get_service('explicit')
service = self.project.get_service('explicit')
containers = service.containers(stopped=True, one_off=True)
self.assertEqual(
[c.human_readable_command for c in containers],
[u'/bin/true'],
)
@patch('fig.packages.dockerpty.start')
def test_run_service_with_entrypoint_overridden(self, _):
self.command.base_dir = 'tests/fixtures/dockerfile_with_entrypoint'
name = 'service'
self.command.dispatch(
['run', '--entrypoint', '/bin/echo', name, 'helloworld'],
None
)
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
self.assertEqual(
container.human_readable_command,
u'/bin/echo helloworld'
)
@patch('fig.packages.dockerpty.start')
def test_run_service_with_environement_overridden(self, _):
name = 'service'
self.command.base_dir = 'tests/fixtures/environment-figfile'
self.command.dispatch(
['run', '-e', 'foo=notbar', '-e', 'allo=moto=bobo',
'-e', 'alpha=beta', name],
None
)
service = self.project.get_service(name)
container = service.containers(stopped=True, one_off=True)[0]
# env overridden
self.assertEqual('notbar', container.environment['foo'])
# environment value kept from the yaml file
self.assertEqual('world', container.environment['hello'])
# added option from command line
self.assertEqual('beta', container.environment['alpha'])
# make sure a value containing '=' doesn't break parsing
self.assertEqual('moto=bobo', container.environment['allo'])
def test_rm(self):
service = self.command.project.get_service('simple')
service = self.project.get_service('simple')
service.create_container()
service.kill()
self.assertEqual(len(service.containers(stopped=True)), 1)
self.command.dispatch(['rm', '--force'], None)
self.assertEqual(len(service.containers(stopped=True)), 0)
def test_scale(self):
project = self.command.project
def test_restart(self):
service = self.project.get_service('simple')
container = service.create_container()
service.start_container(container)
started_at = container.dictionary['State']['StartedAt']
self.command.dispatch(['restart'], None)
container.inspect()
self.assertNotEqual(
container.dictionary['State']['FinishedAt'],
'0001-01-01T00:00:00Z',
)
self.assertNotEqual(
container.dictionary['State']['StartedAt'],
started_at,
)
self.command.scale({'SERVICE=NUM': ['simple=1']})
def test_scale(self):
project = self.project
self.command.scale(project, {'SERVICE=NUM': ['simple=1']})
self.assertEqual(len(project.get_service('simple').containers()), 1)
self.command.scale({'SERVICE=NUM': ['simple=3', 'another=2']})
self.command.scale(project, {'SERVICE=NUM': ['simple=3', 'another=2']})
self.assertEqual(len(project.get_service('simple').containers()), 3)
self.assertEqual(len(project.get_service('another').containers()), 2)
self.command.scale({'SERVICE=NUM': ['simple=1', 'another=1']})
self.command.scale(project, {'SERVICE=NUM': ['simple=1', 'another=1']})
self.assertEqual(len(project.get_service('simple').containers()), 1)
self.assertEqual(len(project.get_service('another').containers()), 1)
self.command.scale({'SERVICE=NUM': ['simple=1', 'another=1']})
self.command.scale(project, {'SERVICE=NUM': ['simple=1', 'another=1']})
self.assertEqual(len(project.get_service('simple').containers()), 1)
self.assertEqual(len(project.get_service('another').containers()), 1)
self.command.scale({'SERVICE=NUM': ['simple=0', 'another=0']})
self.command.scale(project, {'SERVICE=NUM': ['simple=0', 'another=0']})
self.assertEqual(len(project.get_service('simple').containers()), 0)
self.assertEqual(len(project.get_service('another').containers()), 0)
def test_port(self):
self.command.base_dir = 'tests/fixtures/ports-figfile'
self.command.dispatch(['up', '-d'], None)
container = self.project.get_service('simple').get_container()
@patch('sys.stdout', new_callable=StringIO)
def get_port(number, mock_stdout):
self.command.dispatch(['port', 'simple', str(number)], None)
return mock_stdout.getvalue().rstrip()
self.assertEqual(get_port(3000), container.get_local_port(3000))
self.assertEqual(get_port(3001), "0.0.0.0:9999")
self.assertEqual(get_port(3002), "")
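
Given the ports fixture above ('3000' published to an ephemeral host port, '3001' published on 9999), the test_port assertions correspond to CLI behaviour along these lines: "fig port simple 3001" prints 0.0.0.0:9999, "fig port simple 3000" prints whatever host binding the daemon assigned, and a port that is not published prints nothing.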

View File

@@ -94,7 +94,7 @@ class ProjectTest(DockerClientTestCase):
def test_project_up_recreates_containers(self):
web = self.create_service('web')
db = self.create_service('db', volumes=['/var/db'])
db = self.create_service('db', volumes=['/etc'])
project = Project('figtest', [web, db], self.client)
project.start()
self.assertEqual(len(project.containers()), 0)
@@ -102,14 +102,14 @@ class ProjectTest(DockerClientTestCase):
project.up(['db'])
self.assertEqual(len(project.containers()), 1)
old_db_id = project.containers()[0].id
db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db']
db_volume_path = project.containers()[0].get('Volumes./etc')
project.up()
self.assertEqual(len(project.containers()), 2)
db_container = [c for c in project.containers() if 'db' in c.name][0]
self.assertNotEqual(c.id, old_db_id)
self.assertEqual(c.inspect()['Volumes']['/var/db'], db_volume_path)
self.assertNotEqual(db_container.id, old_db_id)
self.assertEqual(db_container.get('Volumes./etc'), db_volume_path)
project.kill()
project.remove_stopped()
@@ -130,8 +130,9 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(project.containers()), 2)
db_container = [c for c in project.containers() if 'db' in c.name][0]
self.assertEqual(c.id, old_db_id)
self.assertEqual(c.inspect()['Volumes']['/var/db'], db_volume_path)
self.assertEqual(db_container.id, old_db_id)
self.assertEqual(db_container.inspect()['Volumes']['/var/db'],
db_volume_path)
project.kill()
project.remove_stopped()
@@ -158,8 +159,9 @@ class ProjectTest(DockerClientTestCase):
self.assertEqual(len(new_containers), 2)
db_container = [c for c in new_containers if 'db' in c.name][0]
self.assertEqual(c.id, old_db_id)
self.assertEqual(c.inspect()['Volumes']['/var/db'], db_volume_path)
self.assertEqual(db_container.id, old_db_id)
self.assertEqual(db_container.inspect()['Volumes']['/var/db'],
db_volume_path)
project.kill()
project.remove_stopped()
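
The tests in this file and the service tests below move from chained inspect()['A']['B'] lookups to container.get('A.B'). The implementation of that helper is not part of the hunks shown here, so the following is only an assumed sketch of the behaviour these call sites rely on: split the key on '.', walk nested dictionaries, and return None when any segment is missing.

def dotted_get(dictionary, key):
    """Assumed behaviour of Container.get(): walk nested dicts by a dotted path."""
    current = dictionary
    for part in key.split('.'):
        if not isinstance(current, dict):
            return None
        current = current.get(part)
    return current

inspected = {
    'HostConfig': {'NetworkMode': 'bridge'},
    'Volumes': {'/etc': '/var/lib/docker/vfs/dir/abc123'},
}
assert dotted_get(inspected, 'HostConfig.NetworkMode') == 'bridge'
assert dotted_get(inspected, 'Volumes./etc') == '/var/lib/docker/vfs/dir/abc123'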

View File

@@ -1,11 +1,13 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import os
from fig import Service
from fig.service import CannotBeScaledError
from fig.container import Container
from fig.packages.docker.errors import APIError
from docker.errors import APIError
from .testcases import DockerClientTestCase
import os
class ServiceTest(DockerClientTestCase):
def test_containers(self):
@@ -105,14 +107,16 @@ class ServiceTest(DockerClientTestCase):
host_service = self.create_service('host', volumes_from=[volume_service, volume_container_2])
host_container = host_service.create_container()
host_service.start_container(host_container)
self.assertIn(volume_container_1.id, host_container.inspect()['HostConfig']['VolumesFrom'])
self.assertIn(volume_container_2.id, host_container.inspect()['HostConfig']['VolumesFrom'])
self.assertIn(volume_container_1.id,
host_container.get('HostConfig.VolumesFrom'))
self.assertIn(volume_container_2.id,
host_container.get('HostConfig.VolumesFrom'))
def test_recreate_containers(self):
service = self.create_service(
'db',
environment={'FOO': '1'},
volumes=['/var/db'],
volumes=['/etc'],
entrypoint=['sleep'],
command=['300']
)
@@ -122,7 +126,7 @@ class ServiceTest(DockerClientTestCase):
self.assertIn('FOO=1', old_container.dictionary['Config']['Env'])
self.assertEqual(old_container.name, 'figtest_db_1')
service.start_container(old_container)
volume_path = old_container.inspect()['Volumes']['/var/db']
volume_path = old_container.inspect()['Volumes']['/etc']
num_containers_before = len(self.client.containers(all=True))
@@ -132,18 +136,20 @@ class ServiceTest(DockerClientTestCase):
intermediate_container = tuples[0][0]
new_container = tuples[0][1]
self.assertEqual(intermediate_container.dictionary['Config']['Entrypoint'], ['echo'])
self.assertEqual(intermediate_container.dictionary['Config']['Entrypoint'], ['/bin/echo'])
self.assertEqual(new_container.dictionary['Config']['Entrypoint'], ['sleep'])
self.assertEqual(new_container.dictionary['Config']['Cmd'], ['300'])
self.assertIn('FOO=2', new_container.dictionary['Config']['Env'])
self.assertEqual(new_container.name, 'figtest_db_1')
self.assertEqual(new_container.inspect()['Volumes']['/var/db'], volume_path)
self.assertEqual(new_container.inspect()['Volumes']['/etc'], volume_path)
self.assertIn(intermediate_container.id, new_container.dictionary['HostConfig']['VolumesFrom'])
self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
self.assertNotEqual(old_container.id, new_container.id)
self.assertRaises(APIError, lambda: self.client.inspect_container(intermediate_container.id))
self.assertRaises(APIError,
self.client.inspect_container,
intermediate_container.id)
def test_recreate_containers_when_containers_are_stopped(self):
service = self.create_service(
@@ -171,29 +177,62 @@ class ServiceTest(DockerClientTestCase):
def test_start_container_creates_links(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, None)])
db.start_container()
db.start_container()
web.start_container()
self.assertIn('figtest_db_1', web.containers()[0].links())
self.assertIn('db_1', web.containers()[0].links())
self.assertEqual(
set(web.containers()[0].links()),
set([
'figtest_db_1', 'db_1',
'figtest_db_2', 'db_2',
'db',
]),
)
def test_start_container_creates_links_with_names(self):
db = self.create_service('db')
web = self.create_service('web', links=[(db, 'custom_link_name')])
db.start_container()
db.start_container()
web.start_container()
self.assertIn('custom_link_name', web.containers()[0].links())
self.assertEqual(
set(web.containers()[0].links()),
set([
'figtest_db_1', 'db_1',
'figtest_db_2', 'db_2',
'custom_link_name',
]),
)
def test_start_normal_container_does_not_create_links_to_its_own_service(self):
db = self.create_service('db')
c1 = db.start_container()
c2 = db.start_container()
self.assertNotIn(c1.name, c2.links())
db.start_container()
db.start_container()
c = db.start_container()
self.assertEqual(set(c.links()), set([]))
def test_start_one_off_container_creates_links_to_its_own_service(self):
db = self.create_service('db')
c1 = db.start_container()
c2 = db.start_container(one_off=True)
self.assertIn(c1.name, c2.links())
db.start_container()
db.start_container()
c = db.start_container(one_off=True)
self.assertEqual(
set(c.links()),
set([
'figtest_db_1', 'db_1',
'figtest_db_2', 'db_2',
'db',
]),
)
def test_start_container_builds_images(self):
service = Service(
@@ -303,18 +342,18 @@ class ServiceTest(DockerClientTestCase):
def test_network_mode_none(self):
service = self.create_service('web', net='none')
container = service.start_container().inspect()
self.assertEqual(container['HostConfig']['NetworkMode'], 'none')
container = service.start_container()
self.assertEqual(container.get('HostConfig.NetworkMode'), 'none')
def test_network_mode_bridged(self):
service = self.create_service('web', net='bridge')
container = service.start_container().inspect()
self.assertEqual(container['HostConfig']['NetworkMode'], 'bridge')
container = service.start_container()
self.assertEqual(container.get('HostConfig.NetworkMode'), 'bridge')
def test_network_mode_host(self):
service = self.create_service('web', net='host')
container = service.start_container().inspect()
self.assertEqual(container['HostConfig']['NetworkMode'], 'host')
container = service.start_container()
self.assertEqual(container.get('HostConfig.NetworkMode'), 'host')
def test_dns_single_value(self):
service = self.create_service('web', dns='8.8.8.8')

View File

@@ -1,16 +1,15 @@
from __future__ import unicode_literals
from __future__ import absolute_import
from fig.packages.docker import Client
from fig.service import Service
from fig.cli.utils import docker_url
from fig.cli.docker_client import docker_client
from fig.progress_stream import stream_output
from .. import unittest
class DockerClientTestCase(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.client = Client(docker_url())
cls.client.pull('busybox', tag='latest')
cls.client = docker_client()
def setUp(self):
for c in self.client.containers(all=True):
@@ -32,5 +31,6 @@ class DockerClientTestCase(unittest.TestCase):
**kwargs
)
def check_build(self, *args, **kwargs):
build_output = self.client.build(*args, **kwargs)
stream_output(build_output, open('/dev/null', 'w'))

View File

@@ -0,0 +1,30 @@
from __future__ import unicode_literals
from __future__ import absolute_import
from tests import unittest
from fig.cli import verbose_proxy
class VerboseProxy(unittest.TestCase):
def test_format_call(self):
expected = "(u'arg1', True, key=u'value')"
actual = verbose_proxy.format_call(
("arg1", True),
{'key': 'value'})
self.assertEqual(expected, actual)
def test_format_return_sequence(self):
expected = "(list with 10 items)"
actual = verbose_proxy.format_return(list(range(10)), 2)
self.assertEqual(expected, actual)
def test_format_return(self):
expected = "{u'Id': u'ok'}"
actual = verbose_proxy.format_return({'Id': 'ok'}, 2)
self.assertEqual(expected, actual)
def test_format_return_no_result(self):
actual = verbose_proxy.format_return(None, 2)
self.assertEqual(None, actual)

View File

@@ -4,9 +4,11 @@ import logging
import os
from .. import unittest
import mock
from fig.cli import main
from fig.cli.main import TopLevelCommand
from fig.packages.six import StringIO
from six import StringIO
class CLITestCase(unittest.TestCase):
@@ -16,24 +18,45 @@ class CLITestCase(unittest.TestCase):
try:
os.chdir('tests/fixtures/simple-figfile')
command = TopLevelCommand()
self.assertEquals('simplefigfile', command.project_name)
project_name = command.get_project_name(command.get_config_path())
self.assertEquals('simplefigfile', project_name)
finally:
os.chdir(cwd)
def test_project_name_with_explicit_base_dir(self):
command = TopLevelCommand()
command.base_dir = 'tests/fixtures/simple-figfile'
self.assertEquals('simplefigfile', command.project_name)
project_name = command.get_project_name(command.get_config_path())
self.assertEquals('simplefigfile', project_name)
def test_project_name_with_explicit_project_name(self):
command = TopLevelCommand()
command.explicit_project_name = 'explicit-project-name'
self.assertEquals('explicitprojectname', command.project_name)
name = 'explicit-project-name'
project_name = command.get_project_name(None, project_name=name)
self.assertEquals('explicitprojectname', project_name)
def test_project_name_from_environment(self):
command = TopLevelCommand()
name = 'namefromenv'
with mock.patch.dict(os.environ):
os.environ['FIG_PROJECT_NAME'] = name
project_name = command.get_project_name(None)
self.assertEquals(project_name, name)
def test_yaml_filename_check(self):
command = TopLevelCommand()
command.base_dir = 'tests/fixtures/longer-filename-figfile'
self.assertTrue(command.project.get_service('definedinyamlnotyml'))
with mock.patch('fig.cli.command.log', autospec=True) as mock_log:
self.assertTrue(command.get_config_path())
self.assertEqual(mock_log.warning.call_count, 2)
def test_get_project(self):
command = TopLevelCommand()
command.base_dir = 'tests/fixtures/longer-filename-figfile'
project = command.get_project(command.get_config_path())
self.assertEqual(project.name, 'longerfilenamefigfile')
self.assertTrue(project.client)
self.assertTrue(project.services)
def test_help(self):
command = TopLevelCommand()
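These CLI tests replace the old project_name property with an explicit get_project_name() call and fix the normalisation rules: 'simple-figfile' becomes 'simplefigfile', an explicit 'explicit-project-name' becomes 'explicitprojectname', and FIG_PROJECT_NAME is used when no explicit name is given. A sketch of that behaviour, assuming the name is simply lowercased and stripped of non-alphanumeric characters:

import os
import re

def normalize_name(name):
    # 'simple-figfile' -> 'simplefigfile'
    return re.sub(r'[^a-z0-9]', '', name.lower())

def get_project_name(config_path, project_name=None):
    # Precedence: explicit argument, then FIG_PROJECT_NAME, then the figfile's directory name.
    project_name = project_name or os.environ.get('FIG_PROJECT_NAME')
    if project_name is not None:
        return normalize_name(project_name)
    project = os.path.basename(os.path.dirname(os.path.abspath(config_path)))
    return normalize_name(project) or 'default'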

View File

@@ -1,20 +1,35 @@
from __future__ import unicode_literals
from .. import unittest
import mock
import docker
from fig.container import Container
class ContainerTest(unittest.TestCase):
def setUp(self):
self.container_dict = {
"Id": "abc",
"Image": "busybox:latest",
"Command": "sleep 300",
"Created": 1387384730,
"Status": "Up 8 seconds",
"Ports": None,
"SizeRw": 0,
"SizeRootFs": 0,
"Names": ["/figtest_db_1"],
"NetworkSettings": {
"Ports": {},
},
}
def test_from_ps(self):
container = Container.from_ps(None, {
"Id":"abc",
"Image":"busybox:latest",
"Command":"sleep 300",
"Created":1387384730,
"Status":"Up 8 seconds",
"Ports":None,
"SizeRw":0,
"SizeRootFs":0,
"Names":["/figtest_db_1"]
}, has_been_inspected=True)
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.dictionary, {
"Id": "abc",
"Image":"busybox:latest",
@@ -37,33 +52,68 @@ class ContainerTest(unittest.TestCase):
})
def test_number(self):
container = Container.from_ps(None, {
"Id":"abc",
"Image":"busybox:latest",
"Command":"sleep 300",
"Created":1387384730,
"Status":"Up 8 seconds",
"Ports":None,
"SizeRw":0,
"SizeRootFs":0,
"Names":["/figtest_db_1"]
}, has_been_inspected=True)
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.number, 1)
def test_name(self):
container = Container.from_ps(None, {
"Id":"abc",
"Image":"busybox:latest",
"Command":"sleep 300",
"Names":["/figtest_db_1"]
}, has_been_inspected=True)
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.name, "figtest_db_1")
def test_name_without_project(self):
container = Container.from_ps(None, {
"Id":"abc",
"Image":"busybox:latest",
"Command":"sleep 300",
"Names":["/figtest_db_1"]
}, has_been_inspected=True)
container = Container.from_ps(None,
self.container_dict,
has_been_inspected=True)
self.assertEqual(container.name_without_project, "db_1")
def test_inspect_if_not_inspected(self):
mock_client = mock.create_autospec(docker.Client)
container = Container(mock_client, dict(Id="the_id"))
container.inspect_if_not_inspected()
mock_client.inspect_container.assert_called_once_with("the_id")
self.assertEqual(container.dictionary,
mock_client.inspect_container.return_value)
self.assertTrue(container.has_been_inspected)
container.inspect_if_not_inspected()
self.assertEqual(mock_client.inspect_container.call_count, 1)
def test_human_readable_ports_none(self):
container = Container(None, self.container_dict, has_been_inspected=True)
self.assertEqual(container.human_readable_ports, '')
def test_human_readable_ports_public_and_private(self):
self.container_dict['NetworkSettings']['Ports'].update({
"45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ],
"45453/tcp": [],
})
container = Container(None, self.container_dict, has_been_inspected=True)
expected = "45453/tcp, 0.0.0.0:49197->45454/tcp"
self.assertEqual(container.human_readable_ports, expected)
def test_get_local_port(self):
self.container_dict['NetworkSettings']['Ports'].update({
"45454/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "49197" } ],
})
container = Container(None, self.container_dict, has_been_inspected=True)
self.assertEqual(
container.get_local_port(45454, protocol='tcp'),
'0.0.0.0:49197')
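The ports tests define the expected rendering: an exposed-only port shows as "45453/tcp", a published one as "0.0.0.0:49197->45454/tcp", and an empty mapping yields an empty string, while get_local_port returns the "HostIp:HostPort" side. A sketch of the rendering half, written as a free function over the NetworkSettings.Ports dict rather than fig's Container property:

def human_readable_ports(ports):
    # ports looks like {'45454/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '49197'}], '45453/tcp': []}
    result = []
    for private, bindings in sorted((ports or {}).items()):
        if not bindings:
            result.append(private)  # exposed but not published
            continue
        for binding in bindings:
            result.append('{HostIp}:{HostPort}->{private}'.format(private=private, **binding))
    return ', '.join(result)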
def test_get(self):
container = Container(None, {
"Status":"Up 8 seconds",
"HostConfig": {
"VolumesFrom": ["volume_id",]
},
}, has_been_inspected=True)
self.assertEqual(container.get('Status'), "Up 8 seconds")
self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id",])
self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None)
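test_get pins down Container.get with a dotted key: it walks nested dictionaries and returns None for any missing segment rather than raising. A minimal sketch of that lookup over a plain dict:

def get(dictionary, key):
    # get(d, 'HostConfig.VolumesFrom') walks d['HostConfig']['VolumesFrom']; missing segments give None.
    value = dictionary
    for part in key.split('.'):
        if not isinstance(value, dict):
            return None
        value = value.get(part)
        if value is None:
            return None
    return value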

View File

@@ -7,16 +7,28 @@ from .. import unittest
class LogPrinterTest(unittest.TestCase):
def test_single_container(self):
def get_default_output(self, monochrome=False):
def reader(*args, **kwargs):
yield "hello\nworld"
container = MockContainer(reader)
output = run_log_printer([container])
output = run_log_printer([container], monochrome=monochrome)
return output
def test_single_container(self):
output = self.get_default_output()
self.assertIn('hello', output)
self.assertIn('world', output)
def test_monochrome(self):
output = self.get_default_output(monochrome=True)
self.assertNotIn('\033[', output)
def test_polychrome(self):
output = self.get_default_output()
self.assertIn('\033[', output)
def test_unicode(self):
glyph = u'\u2022'.encode('utf-8')
@@ -29,10 +41,10 @@ class LogPrinterTest(unittest.TestCase):
self.assertIn(glyph, output)
def run_log_printer(containers):
def run_log_printer(containers, monochrome=False):
r, w = os.pipe()
reader, writer = os.fdopen(r, 'r'), os.fdopen(w, 'w')
printer = LogPrinter(containers, output=writer)
printer = LogPrinter(containers, output=writer, monochrome=monochrome)
printer.run()
writer.close()
return reader.read()
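The log printer gains a monochrome flag, and the tests check it purely by the presence or absence of ANSI escape sequences ('\033[') in the output. One plausible way to toggle the coloured prefixes, assuming a simple cycling colour helper (fig's real colour handling lives elsewhere in fig.cli):

from itertools import cycle

def ansi_color(code, text):
    return "\033[{0}m{1}\033[0m".format(code, text)

def make_prefix_formatters(container_names, monochrome=False):
    # One formatter per container; colours cycle unless monochrome is requested.
    colors = cycle([36, 33, 32, 35, 34])  # cyan, yellow, green, magenta, blue
    formatters = {}
    for name in container_names:
        if monochrome:
            formatters[name] = lambda text: text
        else:
            code = next(colors)
            formatters[name] = lambda text, code=code: ansi_color(code, text)
    return formatters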

View File

@@ -1,10 +1,27 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import os
from .. import unittest
import mock
import docker
from fig import Service
from fig.service import ConfigError, split_port
from fig.container import Container
from fig.service import (
ConfigError,
split_port,
parse_volume_spec,
build_volume_binding,
)
class ServiceTest(unittest.TestCase):
def setUp(self):
self.mock_client = mock.create_autospec(docker.Client)
def test_name_validations(self):
self.assertRaises(ConfigError, lambda: Service(name=''))
@@ -28,57 +45,174 @@ class ServiceTest(unittest.TestCase):
self.assertRaises(ConfigError, lambda: Service(name='foo', port=['8000']))
Service(name='foo', ports=['8000'])
def test_split_port(self):
def test_get_volumes_from_container(self):
container_id = 'aabbccddee'
service = Service(
'test',
volumes_from=[mock.Mock(id=container_id, spec=Container)])
self.assertEqual(service._get_volumes_from(), [container_id])
def test_get_volumes_from_intermediate_container(self):
container_id = 'aabbccddee'
service = Service('test')
container = mock.Mock(id=container_id, spec=Container)
self.assertEqual(service._get_volumes_from(container), [container_id])
def test_get_volumes_from_service_container_exists(self):
container_ids = ['aabbccddee', '12345']
from_service = mock.create_autospec(Service)
from_service.containers.return_value = [
mock.Mock(id=container_id, spec=Container)
for container_id in container_ids
]
service = Service('test', volumes_from=[from_service])
self.assertEqual(service._get_volumes_from(), container_ids)
def test_get_volumes_from_service_no_container(self):
container_id = 'abababab'
from_service = mock.create_autospec(Service)
from_service.containers.return_value = []
from_service.create_container.return_value = mock.Mock(
id=container_id,
spec=Container)
service = Service('test', volumes_from=[from_service])
self.assertEqual(service._get_volumes_from(), [container_id])
from_service.create_container.assert_called_once_with()
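Taken together, these four tests spell out the resolution order for volumes_from: a Container entry contributes its id directly, a Service entry contributes the ids of its existing containers, a Service with none gets a container created for it, and an explicitly passed intermediate container is included as well. A sketch of that logic under those assumptions (names mirror the tests, not necessarily fig's internals):

def get_volumes_from(volumes_from, intermediate_container=None):
    # Collect container ids for docker's volumes_from option.
    ids = []
    for source in volumes_from:
        if hasattr(source, 'containers'):  # a Service
            containers = source.containers()
            if containers:
                ids.extend(c.id for c in containers)
            else:
                ids.append(source.create_container().id)
        else:  # a Container
            ids.append(source.id)
    if intermediate_container is not None:
        ids.append(intermediate_container.id)
    return ids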
def test_split_port_with_host_ip(self):
internal_port, external_port = split_port("127.0.0.1:1000:2000")
self.assertEqual(internal_port, "2000")
self.assertEqual(external_port, ("127.0.0.1", "1000"))
def test_split_port_with_protocol(self):
internal_port, external_port = split_port("127.0.0.1:1000:2000/udp")
self.assertEqual(internal_port, "2000/udp")
self.assertEqual(external_port, ("127.0.0.1", "1000"))
def test_split_port_with_host_ip_no_port(self):
internal_port, external_port = split_port("127.0.0.1::2000")
self.assertEqual(internal_port, "2000")
self.assertEqual(external_port, ("127.0.0.1",))
self.assertEqual(external_port, ("127.0.0.1", None))
def test_split_port_with_host_port(self):
internal_port, external_port = split_port("1000:2000")
self.assertEqual(internal_port, "2000")
self.assertEqual(external_port, "1000")
def test_split_port_no_host_port(self):
internal_port, external_port = split_port("2000")
self.assertEqual(internal_port, "2000")
self.assertEqual(external_port, None)
def test_split_port_invalid(self):
with self.assertRaises(ConfigError):
split_port("0.0.0.0:1000:2000:tcp")
def test_split_domainname_none(self):
service = Service('foo',
hostname = 'name',
)
service.next_container_name = lambda x: 'foo'
service = Service('foo', hostname='name', client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({})
self.assertEqual(opts['hostname'], 'name', 'hostname')
self.assertFalse('domainname' in opts, 'domainname')
def test_split_domainname_fqdn(self):
service = Service('foo',
hostname = 'name.domain.tld',
)
service.next_container_name = lambda x: 'foo'
hostname='name.domain.tld',
client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({})
self.assertEqual(opts['hostname'], 'name', 'hostname')
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
def test_split_domainname_both(self):
service = Service('foo',
hostname = 'name',
domainname = 'domain.tld',
)
service.next_container_name = lambda x: 'foo'
hostname='name',
domainname='domain.tld',
client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({})
self.assertEqual(opts['hostname'], 'name', 'hostname')
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
def test_split_domainname_weird(self):
service = Service('foo',
hostname = 'name.sub',
domainname = 'domain.tld',
)
service.next_container_name = lambda x: 'foo'
hostname='name.sub',
domainname='domain.tld',
client=self.mock_client)
self.mock_client.containers.return_value = []
opts = service._get_container_create_options({})
self.assertEqual(opts['hostname'], 'name.sub', 'hostname')
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
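The domainname tests establish the rule: when only hostname is given and it contains a dot, it is split at the first dot into hostname and domainname; when both are given explicitly they pass through untouched, even if the hostname itself contains a dot. That rule in isolation, as a hypothetical helper:

def split_domainname(hostname, domainname=None):
    # 'name.domain.tld' -> ('name', 'domain.tld'); explicit pairs are left alone.
    if domainname is None and '.' in hostname:
        head, _, tail = hostname.partition('.')
        return head, tail
    return hostname, domainname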
def test_get_container_not_found(self):
mock_client = mock.create_autospec(docker.Client)
mock_client.containers.return_value = []
service = Service('foo', client=mock_client)
self.assertRaises(ValueError, service.get_container)
@mock.patch('fig.service.Container', autospec=True)
def test_get_container(self, mock_container_class):
mock_client = mock.create_autospec(docker.Client)
container_dict = dict(Name='default_foo_2')
mock_client.containers.return_value = [container_dict]
service = Service('foo', client=mock_client)
container = service.get_container(number=2)
self.assertEqual(container, mock_container_class.from_ps.return_value)
mock_container_class.from_ps.assert_called_once_with(
mock_client, container_dict)
@mock.patch('fig.service.log', autospec=True)
def test_pull_image(self, mock_log):
service = Service('foo', client=self.mock_client, image='someimage:sometag')
service.pull(insecure_registry=True)
self.mock_client.pull.assert_called_once_with('someimage:sometag', insecure_registry=True)
mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')
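test_pull_image only requires that Service.pull logs a message and forwards the image and the insecure_registry flag to the client. A sketch of that behaviour, assuming docker-py's pull() accepts insecure_registry at this version:

import logging

log = logging.getLogger(__name__)

def pull(client, service_name, image, insecure_registry=False):
    # Log first, then delegate to docker-py.
    log.info('Pulling %s (%s)...' % (service_name, image))
    client.pull(image, insecure_registry=insecure_registry)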
class ServiceVolumesTest(unittest.TestCase):
def test_parse_volume_spec_only_one_path(self):
spec = parse_volume_spec('/the/volume')
self.assertEqual(spec, (None, '/the/volume', 'rw'))
def test_parse_volume_spec_internal_and_external(self):
spec = parse_volume_spec('external:interval')
self.assertEqual(spec, ('external', 'interval', 'rw'))
def test_parse_volume_spec_with_mode(self):
spec = parse_volume_spec('external:interval:ro')
self.assertEqual(spec, ('external', 'interval', 'ro'))
def test_parse_volume_spec_too_many_parts(self):
with self.assertRaises(ConfigError):
parse_volume_spec('one:two:three:four')
def test_parse_volume_bad_mode(self):
with self.assertRaises(ConfigError):
parse_volume_spec('one:two:notrw')
def test_build_volume_binding(self):
binding = build_volume_binding(parse_volume_spec('/outside:/inside'))
self.assertEqual(
binding,
('/outside', dict(bind='/inside', ro=False)))
@mock.patch.dict(os.environ)
def test_build_volume_binding_with_environ(self):
os.environ['VOLUME_PATH'] = '/opt'
binding = build_volume_binding(parse_volume_spec('${VOLUME_PATH}:/opt'))
self.assertEqual(binding, ('/opt', dict(bind='/opt', ro=False)))
@mock.patch.dict(os.environ)
def test_building_volume_binding_with_home(self):
os.environ['HOME'] = '/home/user'
binding = build_volume_binding(parse_volume_spec('~:/home/user'))
self.assertEqual(
binding,
('/home/user', dict(bind='/home/user', ro=False)))
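The ServiceVolumesTest cases fix the volume grammar: a single path is an anonymous internal volume, two parts are external:internal, an optional third part must be 'rw' or 'ro', and anything longer (or a bad mode) raises ConfigError; build_volume_binding then expands environment variables and '~' in the host path and produces docker-py's binds format. A sketch consistent with those assertions (ConfigError is again a stand-in):

import os

class ConfigError(Exception):
    pass

def parse_volume_spec(volume_config):
    # 'external:internal:mode' -> (external, internal, mode), with sensible defaults.
    parts = volume_config.split(':')
    if len(parts) > 3:
        raise ConfigError('Volume %s has incorrect format, should be '
                          'external:internal[:mode]' % volume_config)
    if len(parts) == 1:
        return (None, parts[0], 'rw')
    if len(parts) == 2:
        parts.append('rw')
    external, internal, mode = parts
    if mode not in ('rw', 'ro'):
        raise ConfigError('Volume %s has invalid mode (%s), should be rw or ro'
                          % (volume_config, mode))
    return (external, internal, mode)

def build_volume_binding(volume_spec):
    # Expand env vars and '~' in the host path and emit docker-py's binds entry.
    external, internal, mode = volume_spec
    if external is not None:
        external = os.path.expanduser(os.path.expandvars(external))
    return (external, dict(bind=internal, ro=(mode == 'ro')))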

View File

@@ -2,6 +2,7 @@
envlist = py26,py27,py32,py33,pypy
[testenv]
usedevelop=True
deps =
-rrequirements.txt
-rrequirements-dev.txt
@@ -12,4 +13,3 @@ commands =
[flake8]
# ignore line-length for now
ignore = E501,E203
exclude = fig/packages/

wercker.yml (new file)
View File

@@ -0,0 +1,12 @@
box: wercker-labs/docker
build:
steps:
- script:
name: validate DCO
code: script/validate-dco
- script:
name: run tests
code: script/test
- script:
name: build binary
code: script/build-linux