Mirror of https://github.com/docker/compose.git (synced 2026-02-10 10:39:23 +08:00)
Compare commits
187 Commits
Commits in this comparison, by abbreviated SHA1:

847ec5b559, 09ffa101ed, 01e2b56405, 2f6c763703, 4caf90c581, 8fdeb46430, 07c47426ba,
a6324d6226, 939406ca9d, 50a24bc3bf, 0dc55fda45, dcd8e7863f, ed283fd3df, 393433b702,
7516b67a14, 5eac04d8d4, fec41d3567, c09734822e, 94887a28c7, 262efce43e, 99064d17dd,
ed80576236, 5131eaeba0, b559880a80, 7f06d46827, e1a0937a61, 59c976510c, f189e299fd,
d08720247a, a4df76dd3f, b2e3a91098, 47bbc35b74, a12f3b40d5, 3386927f9f, 12d75a74e6,
1a9c5e197d, 8fa85ecc05, 140ced6a3b, 779f4bda01, 0021a06468, 6ead40e14c, 5e2308d14a,
9ab8f358ca, d9c9b5e1f0, 730de9187a, bad0f45816, 190ea2bbd6, 91fe414522, 7fb43cc85f,
d6657ed16c, 48e7c86d66, e9c2f2c5fb, 197fd77b99, 36bef254ff, 8e265905d3, ef2fb77c1d,
dd5c2e8767, c255999fce, 89341013a0, f983110492, 9fd296f416, bb89f85984, b573b87a92,
036adb2de9, 36f4e30dba, 9f0cfbdfd2, e117a7822d, 5489465905, 4afcdbdb3c, 94d82d4acb,
d528f9f642, 99d7a474af, d1052ff666, 44a91e6ba8, 3996947024, b7afaba56a, 2ce3685e32,
699bbe9ca2, 4b890bffde, 789e1ba82b, 1a9614c35e, d83bdd5164, e1a3fc2536, 251aa7efb6,
2924b9997a, 2a9aef1332, 361294d20b, 9a825c5c35, 944e15fa65, d04b1724ec, e5916b2fae,
4f7cbc3812, 3c48884dbb, 7ec63afae9, 8c6b516aa0, 50c588176c, 3770aac1af, 256dccc554,
d0f65906ed, 95aa61cfe5, 247691ca44, 0fc9cc65d1, eb69225444, cafe68a92d, 723cccdae8,
6b8044e92c, 1e7e8202af, c0fdf7bd39, 034b66fedb, eed274c632, 5b10c4811f, 2bd6e3d0a5,
d0b5bcf26a, 262248d8a6, 9eb3697b40, c246897af1, cfcabce593, e517061010, feb8ad7b4c,
1b5bf6e12a, e953a32a82, f1390b3cb6, 6e485df084, 3a342fb25d, e71e82f8ac, da80eca28c,
1d1e23611b, 74e067c6e6, 85b9619799, ab1fbc96c3, a04143e2a7, 6c4299039a, 655d347ea2,
94a3164248, 18728a64b9, d8b0fa294e, a6c8319b5d, 5d92f12f8e, c0231bdb70, ac541e208f,
3d8ce448b8, 949df97726, 14cbe40543, 9dd53ecdaa, 6bfe5e049d, b672861ffd, b081077f2b,
13a296049b, 22c531dea7, dfc74e2a77, 0c12db06ec, edf6b56016, 8b4ed0c1a8, 1b5335f409,
3a2c9c1016, cf3eed2cda, 2ecd366905, d34dc45b78, 8394e84099, adda3a7f79, 52d0f4d9e7,
c1a38d787d, 7879dfd3fd, cd1c8b2f09, 7a9228ad75, 98ceb62202, b99bb64487, 580affa5f3,
d600b3498b, f0eaf84cb9, 257a171c0c, 3c5e818b49, c3c8395cef, 38c008e527, a3d8f7d113,
65a642097c, dff9aa6f0c, ab145b5365, 5878fe3834, 715e29d7ba, 983337401c, 8251dec587,
52f994cf04, 3d4b5cfbfe, 33b057bfaf, 629fe771df, 5e6f175b5f
.gitignore (vendored, 3 changed lines)
@@ -1,7 +1,8 @@
*.egg-info
*.pyc
.tox
/build
/dist
/docs/_site
/docs/.git-gh-pages
/venv
fig.spec
@@ -12,13 +12,14 @@ install:
- sudo curl -L -o /usr/local/bin/orchard https://github.com/orchardup/go-orchard/releases/download/2.0.5/linux
- sudo chmod +x /usr/local/bin/orchard
before_script:
- '[ "${TRAVIS_PULL_REQUEST}" = "false" ] && orchard hosts rm -f $TRAVIS_JOB_ID'
- '[ "${TRAVIS_PULL_REQUEST}" = "false" ] && orchard hosts create $TRAVIS_JOB_ID || false'
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then orchard hosts rm -f $TRAVIS_JOB_ID || true; fi'
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then orchard hosts create $TRAVIS_JOB_ID; fi'
script:
- nosetests tests/unit
- '[ "${TRAVIS_PULL_REQUEST}" = "false" ] && script/travis-integration || false'
- flake8 fig
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then script/travis-integration; fi'
after_script:
- '[ "${TRAVIS_PULL_REQUEST}" = "false" ] && orchard hosts rm -f $TRAVIS_JOB_ID || false'
- 'if [ "${TRAVIS_PULL_REQUEST}" = "false" ]; then orchard hosts rm -f $TRAVIS_JOB_ID; fi'
deploy:
  provider: pypi
  user: orchard
CHANGES.md (72 changed lines)
@@ -1,6 +1,78 @@
Change log
==========

0.5.2 (2014-07-28)
------------------

- Added a `--no-cache` option to `fig build`, which bypasses the cache just like `docker build --no-cache`.
- Fixed the `dns:` fig.yml option, which was causing fig to error out.
- Fixed a bug where fig couldn't start under Python 2.6.
- Fixed a log-streaming bug that occasionally caused fig to exit.

Thanks @dnephin and @marksteve!


0.5.1 (2014-07-11)
------------------

- If a service has a command defined, `fig run [service]` with no further arguments will run it.
- The project name now defaults to the directory containing fig.yml, not the current working directory (if they're different)
- `volumes_from` now works properly with containers as well as services
- Fixed a race condition when recreating containers in `fig up`

Thanks @ryanbrainard and @d11wtq!


0.5.0 (2014-07-11)
------------------

- Fig now starts links when you run `fig run` or `fig up`.

  For example, if you have a `web` service which depends on a `db` service, `fig run web ...` will start the `db` service.

- Environment variables can now be resolved from the environment that Fig is running in. Just specify it as a blank variable in your `fig.yml` and, if set, it'll be resolved:

  ```
  environment:
    RACK_ENV: development
    SESSION_SECRET:
  ```

- `volumes_from` is now supported in `fig.yml`. All of the volumes from the specified services and containers will be mounted:

  ```
  volumes_from:
    - service_name
    - container_name
  ```

- A host address can now be specified in `ports`:

  ```
  ports:
    - "0.0.0.0:8000:8000"
    - "127.0.0.1:8001:8001"
  ```

- The `net` and `workdir` options are now supported in `fig.yml`.
- The `hostname` option now works in the same way as the Docker CLI, splitting out into a `domainname` option.
- TTY behaviour is far more robust, and resizes are supported correctly.
- Load YAML files safely.

Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca, @mozz100 and @marksteve for their help with this release!


0.4.2 (2014-06-18)
------------------

- Fix various encoding errors when using `fig run`, `fig up` and `fig build`.

0.4.1 (2014-05-08)
------------------

- Add support for Docker 0.11.0. (Thanks @marksteve!)
- Make project name configurable. (Thanks @jefmathiot!)
- Return correct exit code from `fig run`.

0.4.0 (2014-04-29)
------------------
@@ -1,6 +1,8 @@
# Contributing to Fig

If you're looking contribute to [Fig](http://orchardup.github.io/fig/)
## Development environment

If you're looking contribute to [Fig](http://www.fig.sh/)
but you're new to the project or maybe even to Python, here are the steps
that should get you started.

@@ -8,7 +10,7 @@ that should get you started.
1. Clone your forked repository locally `git clone git@github.com:kvz/fig.git`.
1. Enter the local directory `cd fig`.
1. Set up a development environment `python setup.py develop`. That will install the dependencies and set up a symlink from your `fig` executable to the checkout of the repo. So from any of your fig projects, `fig` now refers to your development project. Time to start hacking : )
1. Works for you? Run the test suite via `./scripts/test` to verify it won't break other usecases.
1. Works for you? Run the test suite via `./script/test` to verify it won't break other usecases.
1. All good? Commit and push to GitHub, and submit a pull request.

## Running the test suite

@@ -27,4 +29,65 @@ OS X:

Note that this only works on Mountain Lion, not Mavericks, due to a [bug in PyInstaller](http://www.pyinstaller.org/ticket/807).

## Sign your work

The sign-off is a simple line at the end of the explanation for the
patch, which certifies that you wrote it or otherwise have the right to
pass it on as an open-source patch. The rules are pretty simple: if you
can certify the below (from [developercertificate.org](http://developercertificate.org/)):

    Developer's Certificate of Origin 1.1

    By making a contribution to this project, I certify that:

    (a) The contribution was created in whole or in part by me and I
        have the right to submit it under the open source license
        indicated in the file; or

    (b) The contribution is based upon previous work that, to the best
        of my knowledge, is covered under an appropriate open source
        license and I have the right under that license to submit that
        work with modifications, whether created in whole or in part
        by me, under the same open source license (unless I am
        permitted to submit under a different license), as indicated
        in the file; or

    (c) The contribution was provided directly to me by some other
        person who certified (a), (b) or (c) and I have not modified
        it.

    (d) I understand and agree that this project and the contribution
        are public and that a record of the contribution (including all
        personal information I submit with it, including my sign-off) is
        maintained indefinitely and may be redistributed consistent with
        this project or the open source license(s) involved.

then you just add a line saying

    Signed-off-by: Random J Developer <random@developer.example.org>

using your real name (sorry, no pseudonyms or anonymous contributions.)

The easiest way to do this is to use the `--signoff` flag when committing. E.g.:

    $ git commit --signoff

## Release process

1. Open pull request that:

   - Updates version in `fig/__init__.py`
   - Updates version in `docs/install.md`
   - Adds release notes to `CHANGES.md`

2. Create unpublished GitHub release with release notes

3. Build Linux version on any Docker host with `script/build-linux` and attach to release

4. Build OS X version on Mountain Lion with `script/build-osx` and attach to release

5. Publish GitHub release, creating tag

6. Update website with `script/deploy-docs`
@@ -5,6 +5,7 @@ RUN pip install -r requirements.txt
ADD requirements-dev.txt /code/
RUN pip install -r requirements-dev.txt
ADD . /code/
RUN python setup.py develop
RUN useradd -d /home/user -m -s /bin/bash user
RUN chown -R user /code/
USER user
LICENSE (215 changed lines)
@@ -1,24 +1,191 @@
Copyright (c) 2013, Orchard Laboratories Ltd.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.
* The names of its contributors may not be used to endorse or promote products
  derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

[Sections 1-9 of the standard Apache License 2.0 terms and conditions appear here unchanged.]

END OF TERMS AND CONDITIONS

Copyright 2014 Docker, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
MAINTAINERS (new file, 4 lines)
@@ -0,0 +1,4 @@
Aanand Prasad <aanand.prasad@gmail.com> (@aanand)
Ben Firshman <ben@firshman.co.uk> (@bfirsh)
Chris Corbyn <chris@w3style.co.uk> (@d11wtq)
Nathan LeClaire <nathan.leclaire@gmail.com> (@nathanleclaire)
@@ -32,7 +32,7 @@ db:

Then type `fig up`, and Fig will start and run your entire app:



There are commands to:

@@ -46,4 +46,4 @@ Fig is a project from [Orchard](https://orchardup.com), a Docker hosting service
Installation and documentation
------------------------------

Full documentation is available on [Fig's website](http://orchardup.github.io/fig/).
Full documentation is available on [Fig's website](http://www.fig.sh/).
docs/CNAME (new file, 1 line)
@@ -0,0 +1 @@
www.fig.sh
@@ -1,4 +1,4 @@
FROM stackbrew/ubuntu:13.10
FROM ubuntu:13.10
RUN apt-get -qq update && apt-get install -y ruby1.8 bundler python
RUN locale-gen en_US.UTF-8
ADD Gemfile /code/
@@ -7,6 +7,7 @@
<link href='http://fonts.googleapis.com/css?family=Lilita+One|Lato:300,400,700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" type="text/css" href="css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="css/fig.css?{{ site.time | date:'%Y%m%d%U%H%N%S' }}">
<link rel="canonical" href="http://www.fig.sh{% if page.url =="/index.html" %}/{% else %}{{ page.url }}{% endif %}">
</head>
<body>
<div class="container">
@@ -44,10 +45,12 @@
</ul>
<ul class="nav">
<li><a href="https://github.com/orchardup/fig">Fig on GitHub</a></li>
<li><a href="https://twitter.com/orchardup">Follow us on Twitter</a></li>
<li><a href="http://webchat.freenode.net/?channels=%23orchardup&uio=d4">#orchardup on Freenode</a></li>
</ul>

<p>Fig is a project from <a href="https://www.orchardup.com">Orchard</a>, a Docker hosting service.</p>
<p><a href="https://twitter.com/orchardup">Follow us on Twitter</a> to keep up to date with Fig and other Docker news.</p>

<div class="badges">
<iframe src="http://ghbtns.com/github-btn.html?user=orchardup&repo=fig&type=watch&count=true" allowtransparency="true" frameborder="0" scrolling="0" width="100" height="20"></iframe>
<a href="https://twitter.com/share" class="twitter-share-button" data-url="http://orchardup.github.io/fig/">Tweet</a>
docs/cli.md (10 changed lines)
@@ -45,7 +45,7 @@ For example:

    $ fig run web python manage.py shell

Note that this will not start any services that the command's service links to. So if, for example, your one-off command talks to your database, you will need to run `fig up -d db` first.
By default, linked services will be started, unless they are already running.

One-off commands are started in new containers with the same config as a normal container for that service, so volumes, links, etc will all be created as expected. The only thing different to a normal container is the command will be overridden with the one specified and no ports will be created in case they collide.

@@ -53,6 +53,10 @@ Links are also created between one-off commands and the other containers for tha

    $ fig run db /bin/sh -c "psql -h \$DB_1_PORT_5432_TCP_ADDR -U docker"

If you do not want linked containers to be started when running the one-off command, specify the `--no-deps` flag:

    $ fig run --no-deps web python manage.py shell

## scale

Set number of containers to run for a service.
@@ -74,8 +78,10 @@ Stop running containers without removing them. They can be started again with `f

Build, (re)create, start and attach to containers for a service.

Linked services will be started, unless they are already running.

By default, `fig up` will aggregate the output of each container, and when it exits, all containers will be stopped. If you run `fig up -d`, it'll start the containers in the background and leave them running.

If there are existing containers for a service, `fig up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `fig.yml` are picked up.
By default if there are existing containers for a service, `fig up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `fig.yml` are picked up. If you do no want containers to be stopped and recreated, use `fig up --no-recreate`. This will still start any stopped containers, if needed.

[volumes-from]: http://docs.docker.io/en/latest/use/working_with_volumes/
@@ -58,7 +58,7 @@ img {

.logo {
  font-family: 'Lilita One', sans-serif;
  font-size: 80px;
  font-size: 64px;
  margin: 20px 0 40px 0;
}

@@ -68,8 +68,8 @@ img {
}

.logo img {
  width: 80px;
  vertical-align: -17px;
  width: 60px;
  vertical-align: -8px;
}

.mobile-logo {
@@ -77,13 +77,18 @@ img {
}

.sidebar {
  font-size: 16px;
  font-size: 15px;
  color: #777;
}

.sidebar a {
  color: #a41211;
}

.sidebar p {
  margin: 10px 0;
}

@media (max-width: 767px) {
  .sidebar {
    text-align: center;
@@ -101,7 +106,8 @@ img {
}

.logo {
  margin-top: 40px;
  margin-top: 30px;
  margin-bottom: 30px;
}

.content h1 {
@@ -116,6 +122,7 @@ img {
  width: 280px;
  overflow-y: auto;
  padding-left: 40px;
  padding-right: 10px;
  border-right: 1px solid #ccc;
}

@@ -126,12 +133,12 @@ img {
}

.nav {
  margin: 20px 0;
  margin: 15px 0;
}

.nav li a {
  display: block;
  padding: 8px 0;
  padding: 5px 0;
  line-height: 1.2;
  text-decoration: none;
}
@@ -11,6 +11,7 @@ Let's use Fig to set up and run a Django/PostgreSQL app. Before starting, you'll
Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:

    FROM orchardup/python:2.7
    ENV PYTHONUNBUFFERED 1
    RUN apt-get update -qq && apt-get install -y python-psycopg2
    RUN mkdir /code
    WORKDIR /code
@@ -18,7 +19,7 @@ Let's set up the three files that'll get us started. First, our app is going to
    RUN pip install -r requirements.txt
    ADD . /code/

That'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
That'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

Second, we define our Python dependencies in a file called `requirements.txt`:

@@ -38,7 +39,7 @@ Simple enough. Finally, this is all tied together with a file called `fig.yml`.
    links:
     - db

See the [`fig.yml` reference](http://orchardup.github.io/fig/yml.html) for more information on how it works.
See the [`fig.yml` reference](yml.html) for more information on how it works.

We can now start a Django project using `fig run`:
@@ -30,7 +30,7 @@ db:

Then type `fig up`, and Fig will start and run your entire app:



There are commands to:

@@ -39,8 +39,6 @@ There are commands to:
- tail running services' log output
- run a one-off command on a service

Fig is a project from [Orchard](https://orchardup.com), a Docker hosting service. [Follow us on Twitter](https://twitter.com/orchardup) to keep up to date with Fig and other Docker news.


Quick start
-----------
@@ -87,7 +85,7 @@ Next, we want to create a Docker image containing all of our app's dependencies.
    WORKDIR /code
    RUN pip install -r requirements.txt

This tells Docker to install Python, our code and our Python dependencies inside a Docker image. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
This tells Docker to install Python, our code and our Python dependencies inside a Docker image. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

We then define a set of services using `fig.yml`:

@@ -115,8 +113,8 @@ Now if we run `fig up`, it'll pull a Redis image, build an image for our own cod
    Building web...
    Starting figtest_redis_1...
    Starting figtest_web_1...
    figtest_redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
    figtest_web_1 | * Running on http://0.0.0.0:5000/
    redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
    web_1 | * Running on http://0.0.0.0:5000/

Open up [http://localhost:5000](http://localhost:5000) in your browser (or [http://localdocker:5000](http://localdocker:5000) if you're using [docker-osx](https://github.com/noplay/docker-osx)) and you should see it running!
@@ -6,9 +6,9 @@ title: Installing Fig
Installing Fig
==============

First, install Docker version 0.10.0. If you're on OS X, you can use [docker-osx](https://github.com/noplay/docker-osx):
First, install Docker version 1.0 or greater. If you're on OS X, you can use [docker-osx](https://github.com/noplay/docker-osx):

    $ curl https://raw.github.com/noplay/docker-osx/0.10.0/docker-osx > /usr/local/bin/docker-osx
    $ curl https://raw.githubusercontent.com/noplay/docker-osx/1.1.1/docker-osx > /usr/local/bin/docker-osx
    $ chmod +x /usr/local/bin/docker-osx
    $ docker-osx shell

@@ -16,12 +16,12 @@ Docker has guides for [Ubuntu](http://docs.docker.io/en/latest/installation/ubun

Next, install Fig. On OS X:

    $ curl -L https://github.com/orchardup/fig/releases/download/0.3.2/darwin > /usr/local/bin/fig
    $ curl -L https://github.com/orchardup/fig/releases/download/0.5.2/darwin > /usr/local/bin/fig
    $ chmod +x /usr/local/bin/fig

On 64-bit Linux:

    $ curl -L https://github.com/orchardup/fig/releases/download/0.3.2/linux > /usr/local/bin/fig
    $ curl -L https://github.com/orchardup/fig/releases/download/0.5.2/linux > /usr/local/bin/fig
    $ chmod +x /usr/local/bin/fig

Fig is also available as a Python package if you're on another platform (or if you prefer that sort of thing):
@@ -18,7 +18,7 @@ Let's set up the three files that'll get us started. First, our app is going to
    RUN bundle install
    ADD . /myapp

That'll put our application code inside an image with Ruby, Bundler and all our dependencies. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
That'll put our application code inside an image with Ruby, Bundler and all our dependencies. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

Next, we have a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.
@@ -17,7 +17,7 @@ FROM orchardup/php5
ADD . /code
```

This instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
This instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

Next up, `fig.yml` starts our web service and a separate MySQL instance:
docs/yml.md (140 changed lines)
@@ -10,52 +10,140 @@ Each service defined in `fig.yml` must specify exactly one of `image` or `build`

As with `docker run`, options specified in the Dockerfile (e.g. `CMD`, `EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to specify them again in `fig.yml`.

```yaml
-- Tag or partial image ID. Can be local or remote - Fig will attempt to pull
-- if it doesn't exist locally.
###image

Tag or partial image ID. Can be local or remote - Fig will attempt to pull if it doesn't exist locally.

```
image: ubuntu
image: orchardup/postgresql
image: a4bc65fd
```

-- Path to a directory containing a Dockerfile. Fig will build and tag it with
-- a generated name, and use that image thereafter.
### build

Path to a directory containing a Dockerfile. Fig will build and tag it with a generated name, and use that image thereafter.

```
build: /path/to/build/dir
```

-- Override the default command.
### command

Override the default command.

```
command: bundle exec thin -p 3000
```

-- Link to containers in another service. Optionally specify an alternate name
-- for the link, which will determine how environment variables are prefixed,
-- e.g. "db" -> DB_1_PORT, "db:database" -> DATABASE_1_PORT
### links


Link to containers in another service. Optionally specify an alternate name for the link, which will determine how environment variables are prefixed, e.g. `db` -> `DB_1_PORT`, `db:database` -> `DATABASE_1_PORT`

```
links:
 - db
 - db:database
 - redis
```

-- Expose ports. Either specify both ports (HOST:CONTAINER), or just the
-- container port (a random host port will be chosen).
-- Note: When mapping ports in the HOST:CONTAINER format, you may experience
-- erroneous results when using a container port lower than 60, because YAML
-- will parse numbers in the format "xx:yy" as sexagesimal (base 60). For
-- this reason, we recommend always explicitly specifying your port mappings
-- as strings.
### ports

Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container port (a random host port will be chosen).

**Note:** When mapping ports in the `HOST:CONTAINER` format, you may experience erroneous results when using a container port lower than 60, because YAML will parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason, we recommend always explicitly specifying your port mappings as strings.

```
ports:
 - "3000"
 - "8000:8000"
 - "49100:22"
 - "127.0.0.1:8001:8001"
```
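The sexagesimal warning above is easy to reproduce directly with PyYAML, the YAML library Fig itself imports; this is only an illustration of the parser behaviour, not Fig code:

```python
import yaml

# YAML 1.1 treats colon-separated digit groups as a base-60 ("sexagesimal")
# integer, so an unquoted HOST:CONTAINER pair with a container port below 60
# collapses into a single number instead of a port-mapping string.
print(yaml.safe_load("ports:\n  - 49100:22"))    # {'ports': [2946022]}  (49100 * 60 + 22)
print(yaml.safe_load('ports:\n  - "49100:22"'))  # {'ports': ['49100:22']}
```

Quoting the mapping, as the examples above do, keeps it a plain string.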
-- Expose ports without publishing them to the host machine - they'll only be
-- accessible to linked services. Only the internal port can be specified.
### expose

Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.

```
expose:
 - "3000"
 - "8000"

-- Map volumes from the host machine (HOST:CONTAINER).
volumes:
 - cache/:/tmp/cache

-- Add environment variables.
environment:
  RACK_ENV: development
```

### volumes

Mount paths as volumes, optionally specifying a path on the host machine (`HOST:CONTAINER`).

Note: Mapping local volumes is currently unsupported on boot2docker. We recommend you use [docker-osx](https://github.com/noplay/docker-osx) if want to map local volumes.

```
volumes:
 - /var/lib/mysql
 - cache/:/tmp/cache
```

### volumes_from

Mount all of the volumes from another service or container.

```
volumes_from:
 - service_name
 - container_name
```

### environment

Add environment variables. You can use either an array or a dictionary.

Environment variables with only a key are resolved to their values on the machine Fig is running on, which can be helpful for secret or host-specific values.

```
environment:
  RACK_ENV: development
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SESSION_SECRET
```
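To make the "blank variable" behaviour above concrete, here is a minimal standalone sketch of that resolution step; it is illustrative only and not Fig's actual implementation:

```python
import os

def resolve_environment(env):
    # Entries whose value is None correspond to keys listed with no value in
    # fig.yml (e.g. "SESSION_SECRET:"); pick them up from the environment that
    # Fig itself is running in, falling back to an empty string if unset.
    return {
        key: os.environ.get(key, '') if value is None else value
        for key, value in env.items()
    }

# With SESSION_SECRET exported in the shell that runs fig:
resolve_environment({'RACK_ENV': 'development', 'SESSION_SECRET': None})
# -> {'RACK_ENV': 'development', 'SESSION_SECRET': '<value from the host>'}
```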
### net

Networking mode. Use the same values as the docker client `--net` parameter.

```
net: "bridge"
net: "none"
net: "container:[name or id]"
net: "host"
```

### dns

Custom DNS servers. Can be a single value or a list.

```
dns: 8.8.8.8
dns:
  - 8.8.8.8
  - 9.9.9.9
```

### working\_dir, entrypoint, user, hostname, domainname, mem\_limit, privileged

Each of these is a single value, analogous to its [docker run](https://docs.docker.com/reference/run/) counterpart.

```
working_dir: /code
entrypoint: /code/entrypoint.sh
user: postgresql

hostname: foo
domainname: foo.com

mem_limit: 1000000000
privileged: true
```
@@ -1,4 +1,4 @@
from __future__ import unicode_literals
from .service import Service
from .service import Service # noqa:flake8

__version__ = '0.4.0'
__version__ = '0.5.2'
@@ -8,7 +8,6 @@ import os
import re
import yaml
from ..packages import six
import sys

from ..project import Project
from ..service import ConfigError
@@ -19,11 +18,13 @@ from . import errors

log = logging.getLogger(__name__)


class Command(DocoptCommand):
    base_dir = '.'

    def __init__(self):
        self.yaml_path = os.environ.get('FIG_FILE', None)
        self._yaml_path = os.environ.get('FIG_FILE', None)
        self.explicit_project_name = None

    def dispatch(self, *args, **kwargs):
        try:
@@ -44,6 +45,8 @@ class Command(DocoptCommand):
    def perform_command(self, options, *args, **kwargs):
        if options['--file'] is not None:
            self.yaml_path = os.path.join(self.base_dir, options['--file'])
        if options['--project-name'] is not None:
            self.explicit_project_name = options['--project-name']
        return super(Command, self).perform_command(options, *args, **kwargs)

    @cached_property
@@ -53,10 +56,7 @@ class Command(DocoptCommand):
    @cached_property
    def project(self):
        try:
            yaml_path = self.yaml_path
            if yaml_path is None:
                yaml_path = self.check_yaml_filename()
            config = yaml.load(open(yaml_path))
            config = yaml.safe_load(open(self.yaml_path))
        except IOError as e:
            if e.errno == errno.ENOENT:
                raise errors.FigFileNotFound(os.path.basename(e.filename))
@@ -69,7 +69,9 @@ class Command(DocoptCommand):

    @cached_property
    def project_name(self):
        project = os.path.basename(os.getcwd())
        project = os.path.basename(os.path.dirname(os.path.abspath(self.yaml_path)))
        if self.explicit_project_name is not None:
            project = self.explicit_project_name
        project = re.sub(r'[^a-zA-Z0-9]', '', project)
        if not project:
            project = 'default'
@@ -79,8 +81,11 @@ class Command(DocoptCommand):
    def formatter(self):
        return Formatter()

    def check_yaml_filename(self):
        if os.path.exists(os.path.join(self.base_dir, 'fig.yaml')):
    @cached_property
    def yaml_path(self):
        if self._yaml_path is not None:
            return self._yaml_path
        elif os.path.exists(os.path.join(self.base_dir, 'fig.yaml')):

            log.warning("Fig just read the file 'fig.yaml' on startup, rather than 'fig.yml'")
            log.warning("Please be aware that fig.yml the expected extension in most cases, and using .yaml can cause compatibility issues in future")
@@ -88,3 +93,7 @@ class Command(DocoptCommand):
            return os.path.join(self.base_dir, 'fig.yaml')
        else:
            return os.path.join(self.base_dir, 'fig.yml')

    @yaml_path.setter
    def yaml_path(self, value):
        self._yaml_path = value
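The `project_name` change above (deriving the name from the directory that holds `fig.yml` rather than the current working directory, then stripping non-alphanumerics) can be seen in isolation with the same two expressions; the path below is hypothetical:

```python
import os
import re

yaml_path = "/home/someone/projects/my-web_app/fig.yml"  # hypothetical path

# Name of the directory containing fig.yml...
project = os.path.basename(os.path.dirname(os.path.abspath(yaml_path)))
# ...reduced to alphanumerics, falling back to 'default' if nothing is left.
project = re.sub(r'[^a-zA-Z0-9]', '', project) or 'default'
print(project)  # mywebapp
```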
@@ -10,16 +10,17 @@ from .utils import split_buffer


class LogPrinter(object):
    def __init__(self, containers, attach_params=None):
    def __init__(self, containers, attach_params=None, output=sys.stdout):
        self.containers = containers
        self.attach_params = attach_params or {}
        self.prefix_width = self._calculate_prefix_width(containers)
        self.generators = self._make_log_generators()
        self.output = output

    def run(self):
        mux = Multiplexer(self.generators)
        for line in mux.loop():
            sys.stdout.write(line.encode(sys.__stdout__.encoding or 'utf-8'))
            self.output.write(line)

    def _calculate_prefix_width(self, containers):
        """
@@ -45,12 +46,12 @@ class LogPrinter(object):
        return generators

    def _make_log_generator(self, container, color_fn):
        prefix = color_fn(self._generate_prefix(container))
        prefix = color_fn(self._generate_prefix(container)).encode('utf-8')
        # Attach to container before log printer starts running
        line_generator = split_buffer(self._attach(container), '\n')

        for line in line_generator:
            yield prefix + line.decode('utf-8')
            yield prefix + line

        exit_code = container.wait()
        yield color_fn("%s exited with code %s\n" % (container.name, exit_code))
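The new `output` argument is a small dependency-injection change: `LogPrinter` still writes to `sys.stdout` by default, but a test (or any caller) can pass its own stream. A stripped-down illustration of the idea, not the real class:

```python
import io
import sys

class TinyPrinter(object):
    """Stand-in for LogPrinter: the output stream is a constructor argument."""
    def __init__(self, output=sys.stdout):
        self.output = output

    def run(self, lines):
        for line in lines:
            self.output.write(line)

# In production the default stream is used; in a test we swap in a buffer.
buf = io.StringIO()
TinyPrinter(output=buf).run(["web_1  | started\n", "db_1   | ready\n"])
assert "db_1" in buf.getvalue()
```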
126
fig/cli/main.py
126
fig/cli/main.py
@@ -6,6 +6,7 @@ import re
|
||||
import signal
|
||||
|
||||
from inspect import getdoc
|
||||
import dockerpty
|
||||
|
||||
from .. import __version__
|
||||
from ..project import NoSuchService, ConfigurationError
|
||||
@@ -18,22 +19,12 @@ from .utils import yesno
|
||||
from ..packages.docker.errors import APIError
|
||||
from .errors import UserError
|
||||
from .docopt_command import NoSuchCommand
|
||||
from .socketclient import SocketClient
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def main():
|
||||
console_handler = logging.StreamHandler(stream=sys.stderr)
|
||||
console_handler.setFormatter(logging.Formatter())
|
||||
console_handler.setLevel(logging.INFO)
|
||||
root_logger = logging.getLogger()
|
||||
root_logger.addHandler(console_handler)
|
||||
root_logger.setLevel(logging.DEBUG)
|
||||
|
||||
# Disable requests logging
|
||||
logging.getLogger("requests").propagate = False
|
||||
|
||||
setup_logging()
|
||||
try:
|
||||
command = TopLevelCommand()
|
||||
command.sys_dispatch()
|
||||
@@ -56,6 +47,18 @@ def main():
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def setup_logging():
|
||||
console_handler = logging.StreamHandler(sys.stderr)
|
||||
console_handler.setFormatter(logging.Formatter())
|
||||
console_handler.setLevel(logging.INFO)
|
||||
root_logger = logging.getLogger()
|
||||
root_logger.addHandler(console_handler)
|
||||
root_logger.setLevel(logging.DEBUG)
|
||||
|
||||
# Disable requests logging
|
||||
logging.getLogger("requests").propagate = False
|
||||
|
||||
|
||||
# stolen from docopt master
|
||||
def parse_doc_section(name, source):
|
||||
pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)',
|
||||
@@ -71,9 +74,10 @@ class TopLevelCommand(Command):
|
||||
fig -h|--help
|
||||
|
||||
Options:
|
||||
--verbose Show more output
|
||||
--version Print version and exit
|
||||
-f, --file FILE Specify an alternate fig file (default: fig.yml)
|
||||
--verbose Show more output
|
||||
--version Print version and exit
|
||||
-f, --file FILE Specify an alternate fig file (default: fig.yml)
|
||||
-p, --project-name NAME Specify an alternate project name (default: directory name)
|
||||
|
||||
Commands:
|
||||
build Build or rebuild services
|
||||
@@ -102,9 +106,13 @@ class TopLevelCommand(Command):
|
||||
e.g. `figtest_db`. If you change a service's `Dockerfile` or the
|
||||
contents of its build directory, you can run `fig build` to rebuild it.
|
||||
|
||||
Usage: build [SERVICE...]
|
||||
Usage: build [options] [SERVICE...]
|
||||
|
||||
Options:
|
||||
--no-cache Do not use cache when building the image.
|
||||
"""
|
||||
self.project.build(service_names=options['SERVICE'])
|
||||
no_cache = bool(options.get('--no-cache', False))
|
||||
self.project.build(service_names=options['SERVICE'], no_cache=no_cache)
|
||||
|
||||
def help(self, options):
|
||||
"""
|
||||
@@ -201,27 +209,44 @@ class TopLevelCommand(Command):
|
||||
|
||||
$ fig run web python manage.py shell
|
||||
|
||||
Note that this will not start any services that the command's service
|
||||
links to. So if, for example, your one-off command talks to your
|
||||
database, you will need to run `fig up -d db` first.
|
||||
By default, linked services will be started, unless they are already
|
||||
running. If you do not want to start linked services, use
|
||||
`fig run --no-deps SERVICE COMMAND [ARGS...]`.
|
||||
|
||||
Usage: run [options] SERVICE COMMAND [ARGS...]
|
||||
Usage: run [options] SERVICE [COMMAND] [ARGS...]
|
||||
|
||||
Options:
|
||||
-d Detached mode: Run container in the background, print new
|
||||
container name
|
||||
-T Disable pseudo-tty allocation. By default `fig run`
|
||||
allocates a TTY.
|
||||
--rm Remove container after run. Ignored in detached mode.
|
||||
-d Detached mode: Run container in the background, print
|
||||
new container name.
|
||||
-T Disable pseudo-tty allocation. By default `fig run`
|
||||
allocates a TTY.
|
||||
--rm Remove container after run. Ignored in detached mode.
|
||||
--no-deps Don't start linked services.
|
||||
"""
|
||||
|
||||
service = self.project.get_service(options['SERVICE'])
|
||||
|
||||
if not options['--no-deps']:
|
||||
deps = service.get_linked_names()
|
||||
|
||||
if len(deps) > 0:
|
||||
self.project.up(
|
||||
service_names=deps,
|
||||
start_links=True,
|
||||
recreate=False,
|
||||
)
|
||||
|
||||
tty = True
|
||||
if options['-d'] or options['-T'] or not sys.stdin.isatty():
|
||||
tty = False
|
||||
|
||||
if options['COMMAND']:
|
||||
command = [options['COMMAND']] + options['ARGS']
|
||||
else:
|
||||
command = service.options.get('command')
|
||||
|
||||
container_options = {
|
||||
'command': [options['COMMAND']] + options['ARGS'],
|
||||
'command': command,
|
||||
'tty': tty,
|
||||
'stdin_open': not options['-d'],
|
||||
}
|
||||
@@ -230,13 +255,13 @@ class TopLevelCommand(Command):
|
||||
service.start_container(container, ports=None, one_off=True)
|
||||
print(container.name)
|
||||
else:
|
||||
with self._attach_to_container(container.id, raw=tty) as c:
|
||||
service.start_container(container, ports=None, one_off=True)
|
||||
c.run()
|
||||
service.start_container(container, ports=None, one_off=True)
|
||||
dockerpty.start(self.client, container.id)
|
||||
exit_code = container.wait()
|
||||
if options['--rm']:
|
||||
container.wait()
|
||||
log.info("Removing %s..." % container.name)
|
||||
self.client.remove_container(container.id)
|
||||
sys.exit(exit_code)
|
||||
|
||||
def scale(self, options):
"""
@@ -256,13 +281,13 @@ class TopLevelCommand(Command):
try:
num = int(num)
except ValueError:
raise UserError('Number of containers for service "%s" is not a number' % service)
raise UserError('Number of containers for service "%s" is not a '
'number' % service_name)
try:
self.project.get_service(service_name).scale(num)
except CannotBeScaledError:
raise UserError('Service "%s" cannot be scaled because it specifies a port on the host. If multiple containers for this service were created, the port would clash.\n\nRemove the ":" from the port definition in fig.yml so Docker can choose a random port for each container.' % service_name)


def start(self, options):
"""
Start existing containers.
@@ -291,17 +316,31 @@ class TopLevelCommand(Command):

If there are existing containers for a service, `fig up` will stop
and recreate them (preserving mounted volumes with volumes-from),
so that changes in `fig.yml` are picked up.
so that changes in `fig.yml` are picked up. If you do not want existing
containers to be recreated, `fig up --no-recreate` will re-use existing
containers.

Usage: up [options] [SERVICE...]

Options:
-d    Detached mode: Run containers in the background, print new
      container names
-d              Detached mode: Run containers in the background,
                print new container names.
--no-deps       Don't start linked services.
--no-recreate   If containers already exist, don't recreate them.
"""
detached = options['-d']

to_attach = self.project.up(service_names=options['SERVICE'])
start_links = not options['--no-deps']
recreate = not options['--no-recreate']
service_names = options['SERVICE']

self.project.up(
service_names=service_names,
start_links=start_links,
recreate=recreate
)

to_attach = [c for s in self.project.get_services(service_names) for c in s.containers()]

if not detached:
print("Attaching to", list_containers(to_attach))
@@ -311,24 +350,13 @@ class TopLevelCommand(Command):
log_printer.run()
finally:
def handler(signal, frame):
self.project.kill(service_names=options['SERVICE'])
self.project.kill(service_names=service_names)
sys.exit(0)
signal.signal(signal.SIGINT, handler)

print("Gracefully stopping... (press Ctrl+C again to force)")
self.project.stop(service_names=options['SERVICE'])
self.project.stop(service_names=service_names)

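Note (not part of the diff): the reworked `up` command threads the new `--no-deps` and `--no-recreate` flags through `Project.up()`. A short sketch, mirroring the integration tests further down, of running `up` twice so that the second invocation reuses the existing containers instead of recreating them; the fixture path is an assumption.

# Sketch, assuming the simple-figfile fixture used by the integration tests.
from fig.cli.main import TopLevelCommand

command = TopLevelCommand()
command.base_dir = 'tests/fixtures/simple-figfile'  # assumed fixture path
command.dispatch(['up', '-d'], None)
command.dispatch(['up', '-d', '--no-recreate'], None)  # existing containers keep their ids
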
def _attach_to_container(self, container_id, raw=False):
socket_in = self.client.attach_socket(container_id, params={'stdin': 1, 'stream': 1})
socket_out = self.client.attach_socket(container_id, params={'stdout': 1, 'logs': 1, 'stream': 1})
socket_err = self.client.attach_socket(container_id, params={'stderr': 1, 'logs': 1, 'stream': 1})

return SocketClient(
socket_in=socket_in,
socket_out=socket_out,
socket_err=socket_err,
raw=raw,
)

def list_containers(containers):
return ", ".join(c.name for c in containers)

@@ -1,126 +0,0 @@
|
||||
from __future__ import print_function
|
||||
# Adapted from https://github.com/benthor/remotty/blob/master/socketclient.py
|
||||
|
||||
import sys
|
||||
import tty
|
||||
import fcntl
|
||||
import os
|
||||
import termios
|
||||
import threading
|
||||
import errno
|
||||
|
||||
import logging
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SocketClient:
|
||||
def __init__(self,
|
||||
socket_in=None,
|
||||
socket_out=None,
|
||||
socket_err=None,
|
||||
raw=True,
|
||||
):
|
||||
self.socket_in = socket_in
|
||||
self.socket_out = socket_out
|
||||
self.socket_err = socket_err
|
||||
self.raw = raw
|
||||
|
||||
self.stdin_fileno = sys.stdin.fileno()
|
||||
|
||||
def __enter__(self):
|
||||
self.create()
|
||||
return self
|
||||
|
||||
def __exit__(self, type, value, trace):
|
||||
self.destroy()
|
||||
|
||||
def create(self):
|
||||
if os.isatty(sys.stdin.fileno()):
|
||||
self.settings = termios.tcgetattr(sys.stdin.fileno())
|
||||
else:
|
||||
self.settings = None
|
||||
|
||||
if self.socket_in is not None:
|
||||
self.set_blocking(sys.stdin, False)
|
||||
self.set_blocking(sys.stdout, True)
|
||||
self.set_blocking(sys.stderr, True)
|
||||
|
||||
if self.raw:
|
||||
tty.setraw(sys.stdin.fileno())
|
||||
|
||||
def set_blocking(self, file, blocking):
|
||||
fd = file.fileno()
|
||||
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
|
||||
flags = (flags & ~os.O_NONBLOCK) if blocking else (flags | os.O_NONBLOCK)
|
||||
fcntl.fcntl(fd, fcntl.F_SETFL, flags)
|
||||
|
||||
def run(self):
|
||||
if self.socket_in is not None:
|
||||
self.start_background_thread(target=self.send, args=(self.socket_in, sys.stdin))
|
||||
|
||||
recv_threads = []
|
||||
|
||||
if self.socket_out is not None:
|
||||
recv_threads.append(self.start_background_thread(target=self.recv, args=(self.socket_out, sys.stdout)))
|
||||
|
||||
if self.socket_err is not None:
|
||||
recv_threads.append(self.start_background_thread(target=self.recv, args=(self.socket_err, sys.stderr)))
|
||||
|
||||
for t in recv_threads:
|
||||
t.join()
|
||||
|
||||
def start_background_thread(self, **kwargs):
|
||||
thread = threading.Thread(**kwargs)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
return thread
|
||||
|
||||
def recv(self, socket, stream):
|
||||
try:
|
||||
while True:
|
||||
chunk = socket.recv(4096)
|
||||
|
||||
if chunk:
|
||||
stream.write(chunk.encode(stream.encoding or 'utf-8'))
|
||||
stream.flush()
|
||||
else:
|
||||
break
|
||||
except Exception as e:
|
||||
log.debug(e)
|
||||
|
||||
def send(self, socket, stream):
|
||||
while True:
|
||||
chunk = stream.read(1)
|
||||
|
||||
if chunk == '':
|
||||
socket.close()
|
||||
break
|
||||
else:
|
||||
try:
|
||||
socket.send(chunk)
|
||||
except Exception as e:
|
||||
if hasattr(e, 'errno') and e.errno == errno.EPIPE:
|
||||
break
|
||||
else:
|
||||
raise e
|
||||
|
||||
def destroy(self):
|
||||
if self.settings is not None:
|
||||
termios.tcsetattr(self.stdin_fileno, termios.TCSADRAIN, self.settings)
|
||||
|
||||
sys.stdout.flush()
|
||||
|
||||
if __name__ == '__main__':
|
||||
import websocket
|
||||
|
||||
if len(sys.argv) != 2:
|
||||
sys.stderr.write("Usage: python socketclient.py WEBSOCKET_URL\n")
|
||||
sys.exit(1)
|
||||
|
||||
url = sys.argv[1]
|
||||
socket = websocket.create_connection(url)
|
||||
|
||||
print("connected\r")
|
||||
|
||||
with SocketClient(socket, interactive=True) as client:
|
||||
client.run()
|
||||
@@ -65,11 +65,11 @@ def prettydate(d):
elif s < 120:
return '1 minute ago'
elif s < 3600:
return '{0} minutes ago'.format(s/60)
return '{0} minutes ago'.format(s / 60)
elif s < 7200:
return '1 hour ago'
else:
return '{0} hours ago'.format(s/3600)
return '{0} hours ago'.format(s / 3600)


def mkdir(path, permissions=0o700):
@@ -103,8 +103,8 @@ def split_buffer(reader, separator):
index = buffered.find(separator)
if index == -1:
break
yield buffered[:index+1]
buffered = buffered[index+1:]
yield buffered[:index + 1]
buffered = buffered[index + 1:]

if len(buffered) > 0:
yield buffered

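Note (not part of the diff): a sketch of how the re-spaced split_buffer generator behaves, assuming the signature from the hunk header (`split_buffer(reader, separator)`); the import path is an assumption.

# Sketch under the stated assumptions: the generator re-chunks a stream on a
# separator, yielding each piece including its trailing separator.
from fig.cli.utils import split_buffer  # assumed module path

chunks = iter(['foo\nba', 'r\nbaz'])
print(list(split_buffer(chunks, '\n')))
# expected: ['foo\n', 'bar\n', 'baz']
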
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
from __future__ import absolute_import


class Container(object):
"""
Represents a Docker container, constructed from the output of
@@ -17,7 +18,7 @@ class Container(object):
Construct a container object from the output of GET /containers/json.
"""
new_dictionary = {
'ID': dictionary['Id'],
'Id': dictionary['Id'],
'Image': dictionary['Image'],
}
for name in dictionary.get('Names', []):
@@ -36,7 +37,7 @@ class Container(object):

@property
def id(self):
return self.dictionary['ID']
return self.dictionary['Id']

@property
def image(self):
@@ -78,7 +79,7 @@ class Container(object):
def human_readable_state(self):
self.inspect_if_not_inspected()
if self.dictionary['State']['Running']:
if self.dictionary['State']['Ghost']:
if self.dictionary['State'].get('Ghost'):
return 'Ghost'
else:
return 'Up'

@@ -12,7 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from .version import version

__version__ = version
__title__ = 'docker-py'
__version__ = '0.3.0'

from .client import Client # flake8: noqa

@@ -16,6 +16,7 @@ import json
|
||||
import re
|
||||
import shlex
|
||||
import struct
|
||||
import warnings
|
||||
|
||||
import requests
|
||||
import requests.exceptions
|
||||
@@ -29,7 +30,7 @@ from . import errors
|
||||
if not six.PY3:
|
||||
import websocket
|
||||
|
||||
DEFAULT_DOCKER_API_VERSION = '1.9'
|
||||
DEFAULT_DOCKER_API_VERSION = '1.12'
|
||||
DEFAULT_TIMEOUT_SECONDS = 60
|
||||
STREAM_HEADER_SIZE_BYTES = 8
|
||||
|
||||
@@ -95,7 +96,8 @@ class Client(requests.Session):
|
||||
mem_limit=0, ports=None, environment=None, dns=None,
|
||||
volumes=None, volumes_from=None,
|
||||
network_disabled=False, entrypoint=None,
|
||||
cpu_shares=None, working_dir=None, domainname=None):
|
||||
cpu_shares=None, working_dir=None, domainname=None,
|
||||
memswap_limit=0):
|
||||
if isinstance(command, six.string_types):
|
||||
command = shlex.split(str(command))
|
||||
if isinstance(environment, dict):
|
||||
@@ -121,8 +123,12 @@ class Client(requests.Session):
|
||||
volumes_dict[vol] = {}
|
||||
volumes = volumes_dict
|
||||
|
||||
if volumes_from and not isinstance(volumes_from, six.string_types):
|
||||
volumes_from = ','.join(volumes_from)
|
||||
if volumes_from:
|
||||
if not isinstance(volumes_from, six.string_types):
|
||||
volumes_from = ','.join(volumes_from)
|
||||
else:
|
||||
# Force None, an empty list or dict causes client.start to fail
|
||||
volumes_from = None
|
||||
|
||||
attach_stdin = False
|
||||
attach_stdout = False
|
||||
@@ -137,6 +143,14 @@ class Client(requests.Session):
|
||||
attach_stdin = True
|
||||
stdin_once = True
|
||||
|
||||
if utils.compare_version('1.10', self._version) >= 0:
|
||||
message = ('{0!r} parameter has no effect on create_container().'
|
||||
' It has been moved to start()')
|
||||
if dns is not None:
|
||||
raise errors.DockerException(message.format('dns'))
|
||||
if volumes_from is not None:
|
||||
raise errors.DockerException(message.format('volumes_from'))
|
||||
|
||||
return {
|
||||
'Hostname': hostname,
|
||||
'Domainname': domainname,
|
||||
@@ -158,7 +172,8 @@ class Client(requests.Session):
|
||||
'NetworkDisabled': network_disabled,
|
||||
'Entrypoint': entrypoint,
|
||||
'CpuShares': cpu_shares,
|
||||
'WorkingDir': working_dir
|
||||
'WorkingDir': working_dir,
|
||||
'MemorySwap': memswap_limit
|
||||
}
|
||||
|
||||
def _post_json(self, url, data, **kwargs):
|
||||
@@ -235,7 +250,7 @@ class Client(requests.Session):
|
||||
start = walker + STREAM_HEADER_SIZE_BYTES
|
||||
end = start + length
|
||||
walker = end
|
||||
yield str(buf[start:end])
|
||||
yield buf[start:end]
|
||||
|
||||
def _multiplexed_socket_stream_helper(self, response):
|
||||
"""A generator of multiplexed data blocks coming from a response
|
||||
@@ -296,8 +311,10 @@ class Client(requests.Session):
|
||||
return stream_result() if stream else \
|
||||
self._result(response, binary=True)
|
||||
|
||||
sep = bytes() if six.PY3 else str()
|
||||
|
||||
return stream and self._multiplexed_socket_stream_helper(response) or \
|
||||
''.join([x for x in self._multiplexed_buffer_helper(response)])
|
||||
sep.join([x for x in self._multiplexed_buffer_helper(response)])
|
||||
|
||||
def attach_socket(self, container, params=None, ws=False):
|
||||
if params is None:
|
||||
@@ -318,14 +335,20 @@ class Client(requests.Session):
|
||||
u, None, params=self._attach_params(params), stream=True))
|
||||
|
||||
def build(self, path=None, tag=None, quiet=False, fileobj=None,
|
||||
nocache=False, rm=False, stream=False, timeout=None):
|
||||
nocache=False, rm=False, stream=False, timeout=None,
|
||||
custom_context=False, encoding=None):
|
||||
remote = context = headers = None
|
||||
if path is None and fileobj is None:
|
||||
raise TypeError("Either path or fileobj needs to be provided.")
|
||||
|
||||
if fileobj is not None:
|
||||
if custom_context:
|
||||
if not fileobj:
|
||||
raise TypeError("You must specify fileobj with custom_context")
|
||||
context = fileobj
|
||||
elif fileobj is not None:
|
||||
context = utils.mkbuildcontext(fileobj)
|
||||
elif path.startswith(('http://', 'https://', 'git://', 'github.com/')):
|
||||
elif path.startswith(('http://', 'https://',
|
||||
'git://', 'github.com/')):
|
||||
remote = path
|
||||
else:
|
||||
context = utils.tar(path)
|
||||
@@ -341,8 +364,11 @@ class Client(requests.Session):
|
||||
'nocache': nocache,
|
||||
'rm': rm
|
||||
}
|
||||
|
||||
if context is not None:
|
||||
headers = {'Content-Type': 'application/tar'}
|
||||
if encoding:
|
||||
headers['Content-Encoding'] = encoding
|
||||
|
||||
if utils.compare_version('1.9', self._version) >= 0:
|
||||
# If we don't have any auth data so far, try reloading the config
|
||||
@@ -393,10 +419,11 @@ class Client(requests.Session):
|
||||
json=True)
|
||||
|
||||
def containers(self, quiet=False, all=False, trunc=True, latest=False,
|
||||
since=None, before=None, limit=-1):
|
||||
since=None, before=None, limit=-1, size=False):
|
||||
params = {
|
||||
'limit': 1 if latest else limit,
|
||||
'all': 1 if all else 0,
|
||||
'size': 1 if size else 0,
|
||||
'trunc_cmd': 1 if trunc else 0,
|
||||
'since': since,
|
||||
'before': before
|
||||
@@ -424,12 +451,13 @@ class Client(requests.Session):
|
||||
mem_limit=0, ports=None, environment=None, dns=None,
|
||||
volumes=None, volumes_from=None,
|
||||
network_disabled=False, name=None, entrypoint=None,
|
||||
cpu_shares=None, working_dir=None, domainname=None):
|
||||
cpu_shares=None, working_dir=None, domainname=None,
|
||||
memswap_limit=0):
|
||||
|
||||
config = self._container_config(
|
||||
image, command, hostname, user, detach, stdin_open, tty, mem_limit,
|
||||
ports, environment, dns, volumes, volumes_from, network_disabled,
|
||||
entrypoint, cpu_shares, working_dir, domainname
|
||||
entrypoint, cpu_shares, working_dir, domainname, memswap_limit
|
||||
)
|
||||
return self.create_container_from_config(config, name)
|
||||
|
||||
@@ -458,6 +486,12 @@ class Client(requests.Session):
|
||||
self._raise_for_status(res)
|
||||
return res.raw
|
||||
|
||||
def get_image(self, image):
|
||||
res = self._get(self._url("/images/{0}/get".format(image)),
|
||||
stream=True)
|
||||
self._raise_for_status(res)
|
||||
return res.raw
|
||||
|
||||
def history(self, image):
|
||||
res = self._get(self._url("/images/{0}/history".format(image)))
|
||||
self._raise_for_status(res)
|
||||
@@ -513,6 +547,10 @@ class Client(requests.Session):
|
||||
True)
|
||||
|
||||
def insert(self, image, url, path):
|
||||
if utils.compare_version('1.12', self._version) >= 0:
|
||||
raise errors.DeprecatedMethod(
|
||||
'insert is not available for API version >=1.12'
|
||||
)
|
||||
api_url = self._url("/images/" + image + "/insert")
|
||||
params = {
|
||||
'url': url,
|
||||
@@ -544,6 +582,10 @@ class Client(requests.Session):
|
||||
|
||||
self._raise_for_status(res)
|
||||
|
||||
def load_image(self, data):
|
||||
res = self._post(self._url("/images/load"), data=data)
|
||||
self._raise_for_status(res)
|
||||
|
||||
def login(self, username, password=None, email=None, registry=None,
|
||||
reauth=False):
|
||||
# If we don't have any auth data so far, try reloading the config file
|
||||
@@ -572,7 +614,27 @@ class Client(requests.Session):
|
||||
self._auth_configs[registry] = req_data
|
||||
return self._result(response, json=True)
|
||||
|
||||
def logs(self, container, stdout=True, stderr=True, stream=False):
|
||||
def logs(self, container, stdout=True, stderr=True, stream=False,
|
||||
timestamps=False):
|
||||
if isinstance(container, dict):
|
||||
container = container.get('Id')
|
||||
if utils.compare_version('1.11', self._version) >= 0:
|
||||
params = {'stderr': stderr and 1 or 0,
|
||||
'stdout': stdout and 1 or 0,
|
||||
'timestamps': timestamps and 1 or 0,
|
||||
'follow': stream and 1 or 0}
|
||||
url = self._url("/containers/{0}/logs".format(container))
|
||||
res = self._get(url, params=params, stream=stream)
|
||||
if stream:
|
||||
return self._multiplexed_socket_stream_helper(res)
|
||||
elif six.PY3:
|
||||
return bytes().join(
|
||||
[x for x in self._multiplexed_buffer_helper(res)]
|
||||
)
|
||||
else:
|
||||
return str().join(
|
||||
[x for x in self._multiplexed_buffer_helper(res)]
|
||||
)
|
||||
return self.attach(
|
||||
container,
|
||||
stdout=stdout,
|
||||
@@ -581,6 +643,9 @@ class Client(requests.Session):
|
||||
logs=True
|
||||
)
|
||||
|
||||
def ping(self):
|
||||
return self._result(self._get(self._url('/_ping')))
|
||||
|
||||
def port(self, container, private_port):
|
||||
if isinstance(container, dict):
|
||||
container = container.get('Id')
|
||||
@@ -597,6 +662,8 @@ class Client(requests.Session):
|
||||
return h_ports
|
||||
|
||||
def pull(self, repository, tag=None, stream=False):
|
||||
if not tag:
|
||||
repository, tag = utils.parse_repository_tag(repository)
|
||||
registry, repo_name = auth.resolve_repository_name(repository)
|
||||
if repo_name.count(":") == 1:
|
||||
repository, tag = repository.rsplit(":", 1)
|
||||
@@ -653,16 +720,17 @@ class Client(requests.Session):
|
||||
return stream and self._stream_helper(response) \
|
||||
or self._result(response)
|
||||
|
||||
def remove_container(self, container, v=False, link=False):
|
||||
def remove_container(self, container, v=False, link=False, force=False):
|
||||
if isinstance(container, dict):
|
||||
container = container.get('Id')
|
||||
params = {'v': v, 'link': link}
|
||||
params = {'v': v, 'link': link, 'force': force}
|
||||
res = self._delete(self._url("/containers/" + container),
|
||||
params=params)
|
||||
self._raise_for_status(res)
|
||||
|
||||
def remove_image(self, image):
|
||||
res = self._delete(self._url("/images/" + image))
|
||||
def remove_image(self, image, force=False, noprune=False):
|
||||
params = {'force': force, 'noprune': noprune}
|
||||
res = self._delete(self._url("/images/" + image), params=params)
|
||||
self._raise_for_status(res)
|
||||
|
||||
def restart(self, container, timeout=10):
|
||||
@@ -678,8 +746,9 @@ class Client(requests.Session):
|
||||
params={'term': term}),
|
||||
True)
|
||||
|
||||
def start(self, container, binds=None, volumes_from=None, port_bindings=None,
|
||||
lxc_conf=None, publish_all_ports=False, links=None, privileged=False):
|
||||
def start(self, container, binds=None, port_bindings=None, lxc_conf=None,
|
||||
publish_all_ports=False, links=None, privileged=False,
|
||||
dns=None, dns_search=None, volumes_from=None, network_mode=None):
|
||||
if isinstance(container, dict):
|
||||
container = container.get('Id')
|
||||
|
||||
@@ -693,19 +762,7 @@ class Client(requests.Session):
|
||||
'LxcConf': lxc_conf
|
||||
}
|
||||
if binds:
|
||||
bind_pairs = [
|
||||
'%s:%s:%s' % (
|
||||
h, d['bind'],
|
||||
'ro' if 'ro' in d and d['ro'] else 'rw'
|
||||
) for h, d in binds.items()
|
||||
]
|
||||
|
||||
start_config['Binds'] = bind_pairs
|
||||
|
||||
if volumes_from and not isinstance(volumes_from, six.string_types):
|
||||
volumes_from = ','.join(volumes_from)
|
||||
|
||||
start_config['VolumesFrom'] = volumes_from
|
||||
start_config['Binds'] = utils.convert_volume_binds(binds)
|
||||
|
||||
if port_bindings:
|
||||
start_config['PortBindings'] = utils.convert_port_bindings(
|
||||
@@ -726,10 +783,44 @@ class Client(requests.Session):
|
||||
|
||||
start_config['Privileged'] = privileged
|
||||
|
||||
if utils.compare_version('1.10', self._version) >= 0:
|
||||
if dns is not None:
|
||||
start_config['Dns'] = dns
|
||||
if volumes_from is not None:
|
||||
if isinstance(volumes_from, six.string_types):
|
||||
volumes_from = volumes_from.split(',')
|
||||
start_config['VolumesFrom'] = volumes_from
|
||||
else:
|
||||
warning_message = ('{0!r} parameter is discarded. It is only'
|
||||
' available for API version greater or equal'
|
||||
' than 1.10')
|
||||
|
||||
if dns is not None:
|
||||
warnings.warn(warning_message.format('dns'),
|
||||
DeprecationWarning)
|
||||
if volumes_from is not None:
|
||||
warnings.warn(warning_message.format('volumes_from'),
|
||||
DeprecationWarning)
|
||||
|
||||
if dns_search:
|
||||
start_config['DnsSearch'] = dns_search
|
||||
|
||||
if network_mode:
|
||||
start_config['NetworkMode'] = network_mode
|
||||
|
||||
url = self._url("/containers/{0}/start".format(container))
|
||||
res = self._post_json(url, data=start_config)
|
||||
self._raise_for_status(res)
|
||||
|
||||
def resize(self, container, height, width):
|
||||
if isinstance(container, dict):
|
||||
container = container.get('Id')
|
||||
|
||||
params = {'h': height, 'w': width}
|
||||
url = self._url("/containers/{0}/resize".format(container))
|
||||
res = self._post(url, params=params)
|
||||
self._raise_for_status(res)
|
||||
|
||||
def stop(self, container, timeout=10):
|
||||
if isinstance(container, dict):
|
||||
container = container.get('Id')
|
||||
|
||||
@@ -59,3 +59,7 @@ class InvalidRepository(DockerException):

class InvalidConfigFile(DockerException):
pass


class DeprecatedMethod(DockerException):
pass

@@ -1,3 +1,4 @@
from .utils import (
compare_version, convert_port_bindings, mkbuildcontext, ping, tar, parse_repository_tag
compare_version, convert_port_bindings, convert_volume_binds,
mkbuildcontext, ping, tar, parse_repository_tag
) # flake8: noqa

@@ -92,6 +92,13 @@ def _convert_port_binding(binding):
result['HostIp'] = binding[0]
else:
result['HostPort'] = binding[0]
elif isinstance(binding, dict):
if 'HostPort' in binding:
result['HostPort'] = binding['HostPort']
if 'HostIp' in binding:
result['HostIp'] = binding['HostIp']
else:
raise ValueError(binding)
else:
result['HostPort'] = binding

@@ -116,13 +123,25 @@ def convert_port_bindings(port_bindings):
return result


def convert_volume_binds(binds):
result = []
for k, v in binds.items():
if isinstance(v, dict):
result.append('%s:%s:%s' % (
k, v['bind'], 'ro' if v.get('ro', False) else 'rw'
))
else:
result.append('%s:%s:rw' % (k, v))
return result


def parse_repository_tag(repo):
column_index = repo.rfind(':')
if column_index < 0:
return repo, ""
return repo, None
tag = repo[column_index+1:]
slash_index = tag.find('/')
if slash_index < 0:
return repo[:column_index], tag

return repo, ""
return repo, None

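Note (not part of the diff): expected behaviour of the new vendored helpers, based on the implementations in the hunk above; the import path into the vendored docker-py package is an assumption.

# Sketch under the stated assumptions.
from fig.packages.docker.utils import convert_volume_binds, parse_repository_tag

print(convert_volume_binds({'/host/data': {'bind': '/var/data', 'ro': True}}))
# expected: ['/host/data:/var/data:ro']
print(convert_volume_binds({'/host/logs': '/var/log/app'}))
# expected: ['/host/logs:/var/log/app:rw']

print(parse_repository_tag('busybox:latest'))       # expected: ('busybox', 'latest')
print(parse_repository_tag('localhost:5000/app'))   # expected: ('localhost:5000/app', None)
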
fig/packages/docker/version.py (new file, 1 line)
@@ -0,0 +1 @@
version = "0.3.2"

fig/progress_stream.py (new file, 83 lines)
@@ -0,0 +1,83 @@
import json
|
||||
import os
|
||||
import codecs
|
||||
|
||||
|
||||
class StreamOutputError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def stream_output(output, stream):
|
||||
is_terminal = hasattr(stream, 'fileno') and os.isatty(stream.fileno())
|
||||
stream = codecs.getwriter('utf-8')(stream)
|
||||
all_events = []
|
||||
lines = {}
|
||||
diff = 0
|
||||
|
||||
for chunk in output:
|
||||
event = json.loads(chunk)
|
||||
all_events.append(event)
|
||||
|
||||
if 'progress' in event or 'progressDetail' in event:
|
||||
image_id = event['id']
|
||||
|
||||
if image_id in lines:
|
||||
diff = len(lines) - lines[image_id]
|
||||
else:
|
||||
lines[image_id] = len(lines)
|
||||
stream.write("\n")
|
||||
diff = 0
|
||||
|
||||
if is_terminal:
|
||||
# move cursor up `diff` rows
|
||||
stream.write("%c[%dA" % (27, diff))
|
||||
|
||||
print_output_event(event, stream, is_terminal)
|
||||
|
||||
if 'id' in event and is_terminal:
|
||||
# move cursor back down
|
||||
stream.write("%c[%dB" % (27, diff))
|
||||
|
||||
stream.flush()
|
||||
|
||||
return all_events
|
||||
|
||||
|
||||
def print_output_event(event, stream, is_terminal):
|
||||
if 'errorDetail' in event:
|
||||
raise StreamOutputError(event['errorDetail']['message'])
|
||||
|
||||
terminator = ''
|
||||
|
||||
if is_terminal and 'stream' not in event:
|
||||
# erase current line
|
||||
stream.write("%c[2K\r" % 27)
|
||||
terminator = "\r"
|
||||
pass
|
||||
elif 'progressDetail' in event:
|
||||
return
|
||||
|
||||
if 'time' in event:
|
||||
stream.write("[%s] " % event['time'])
|
||||
|
||||
if 'id' in event:
|
||||
stream.write("%s: " % event['id'])
|
||||
|
||||
if 'from' in event:
|
||||
stream.write("(from %s) " % event['from'])
|
||||
|
||||
status = event.get('status', '')
|
||||
|
||||
if 'progress' in event:
|
||||
stream.write("%s %s%s" % (status, event['progress'], terminator))
|
||||
elif 'progressDetail' in event:
|
||||
detail = event['progressDetail']
|
||||
if 'current' in detail:
|
||||
percentage = float(detail['current']) / float(detail['total']) * 100
|
||||
stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
|
||||
else:
|
||||
stream.write('%s%s' % (status, terminator))
|
||||
elif 'stream' in event:
|
||||
stream.write("%s%s" % (event['stream'], terminator))
|
||||
else:
|
||||
stream.write("%s%s\n" % (status, terminator))
|
||||
120
fig/project.py
120
fig/project.py
@@ -2,6 +2,8 @@ from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
from .service import Service
|
||||
from .container import Container
|
||||
from .packages.docker.errors import APIError
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
@@ -18,11 +20,13 @@ def sort_service_dicts(services):
|
||||
if n['name'] in temporary_marked:
|
||||
if n['name'] in get_service_names(n.get('links', [])):
|
||||
raise DependencyError('A service can not link to itself: %s' % n['name'])
|
||||
if n['name'] in n.get('volumes_from', []):
|
||||
raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
|
||||
else:
|
||||
raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
|
||||
if n in unmarked:
|
||||
temporary_marked.add(n['name'])
|
||||
dependents = [m for m in services if n['name'] in get_service_names(m.get('links', []))]
|
||||
dependents = [m for m in services if (n['name'] in get_service_names(m.get('links', []))) or (n['name'] in m.get('volumes_from', []))]
|
||||
for m in dependents:
|
||||
visit(m)
|
||||
temporary_marked.remove(n['name'])
|
||||
@@ -34,6 +38,7 @@ def sort_service_dicts(services):
|
||||
|
||||
return sorted_services
|
||||
|
||||
|
||||
class Project(object):
|
||||
"""
|
||||
A collection of services.
|
||||
@@ -50,21 +55,10 @@ class Project(object):
|
||||
"""
|
||||
project = cls(name, [], client)
|
||||
for service_dict in sort_service_dicts(service_dicts):
|
||||
# Reference links by object
|
||||
links = []
|
||||
if 'links' in service_dict:
|
||||
for link in service_dict.get('links', []):
|
||||
if ':' in link:
|
||||
service_name, link_name = link.split(':', 1)
|
||||
else:
|
||||
service_name, link_name = link, None
|
||||
try:
|
||||
links.append((project.get_service(service_name), link_name))
|
||||
except NoSuchService:
|
||||
raise ConfigurationError('Service "%s" has a link to service "%s" which does not exist.' % (service_dict['name'], service_name))
|
||||
links = project.get_links(service_dict)
|
||||
volumes_from = project.get_volumes_from(service_dict)
|
||||
|
||||
del service_dict['links']
|
||||
project.services.append(Service(client=client, project=name, links=links, **service_dict))
|
||||
project.services.append(Service(client=client, project=name, links=links, volumes_from=volumes_from, **service_dict))
|
||||
return project
|
||||
|
||||
@classmethod
|
||||
@@ -88,22 +82,66 @@ class Project(object):
|
||||
|
||||
raise NoSuchService(name)
|
||||
|
||||
def get_services(self, service_names=None):
|
||||
def get_services(self, service_names=None, include_links=False):
|
||||
"""
|
||||
Returns a list of this project's services filtered
|
||||
by the provided list of names, or all services if
|
||||
service_names is None or [].
|
||||
by the provided list of names, or all services if service_names is None
|
||||
or [].
|
||||
|
||||
Preserves the original order of self.services.
|
||||
If include_links is specified, returns a list including the links for
|
||||
service_names, in order of dependency.
|
||||
|
||||
Raises NoSuchService if any of the named services
|
||||
do not exist.
|
||||
Preserves the original order of self.services where possible,
|
||||
reordering as needed to resolve links.
|
||||
|
||||
Raises NoSuchService if any of the named services do not exist.
|
||||
"""
|
||||
if service_names is None or len(service_names) == 0:
|
||||
return self.services
|
||||
return self.get_services(
|
||||
service_names=[s.name for s in self.services],
|
||||
include_links=include_links
|
||||
)
|
||||
else:
|
||||
unsorted = [self.get_service(name) for name in service_names]
|
||||
return [s for s in self.services if s in unsorted]
|
||||
services = [s for s in self.services if s in unsorted]
|
||||
|
||||
if include_links:
|
||||
services = reduce(self._inject_links, services, [])
|
||||
|
||||
uniques = []
|
||||
[uniques.append(s) for s in services if s not in uniques]
|
||||
return uniques
|
||||
|
||||
def get_links(self, service_dict):
|
||||
links = []
|
||||
if 'links' in service_dict:
|
||||
for link in service_dict.get('links', []):
|
||||
if ':' in link:
|
||||
service_name, link_name = link.split(':', 1)
|
||||
else:
|
||||
service_name, link_name = link, None
|
||||
try:
|
||||
links.append((self.get_service(service_name), link_name))
|
||||
except NoSuchService:
|
||||
raise ConfigurationError('Service "%s" has a link to service "%s" which does not exist.' % (service_dict['name'], service_name))
|
||||
del service_dict['links']
|
||||
return links
|
||||
|
||||
def get_volumes_from(self, service_dict):
|
||||
volumes_from = []
|
||||
if 'volumes_from' in service_dict:
|
||||
for volume_name in service_dict.get('volumes_from', []):
|
||||
try:
|
||||
service = self.get_service(volume_name)
|
||||
volumes_from.append(service)
|
||||
except NoSuchService:
|
||||
try:
|
||||
container = Container.from_id(self.client, volume_name)
|
||||
volumes_from.append(container)
|
||||
except APIError:
|
||||
raise ConfigurationError('Service "%s" mounts volumes from "%s", which is not the name of a service or container.' % (service_dict['name'], volume_name))
|
||||
del service_dict['volumes_from']
|
||||
return volumes_from
|
||||
|
||||
def start(self, service_names=None, **options):
|
||||
for service in self.get_services(service_names):
|
||||
@@ -117,21 +155,25 @@ class Project(object):
|
||||
for service in reversed(self.get_services(service_names)):
|
||||
service.kill(**options)
|
||||
|
||||
def build(self, service_names=None, **options):
|
||||
def build(self, service_names=None, no_cache=False):
|
||||
for service in self.get_services(service_names):
|
||||
if service.can_be_built():
|
||||
service.build(**options)
|
||||
service.build(no_cache)
|
||||
else:
|
||||
log.info('%s uses an image, skipping' % service.name)
|
||||
|
||||
def up(self, service_names=None):
|
||||
new_containers = []
|
||||
def up(self, service_names=None, start_links=True, recreate=True):
|
||||
running_containers = []
|
||||
|
||||
for service in self.get_services(service_names):
|
||||
for (_, new) in service.recreate_containers():
|
||||
new_containers.append(new)
|
||||
for service in self.get_services(service_names, include_links=start_links):
|
||||
if recreate:
|
||||
for (_, container) in service.recreate_containers():
|
||||
running_containers.append(container)
|
||||
else:
|
||||
for container in service.start_or_create_containers():
|
||||
running_containers.append(container)
|
||||
|
||||
return new_containers
|
||||
return running_containers
|
||||
|
||||
def remove_stopped(self, service_names=None, **options):
|
||||
for service in self.get_services(service_names):
|
||||
@@ -144,6 +186,20 @@ class Project(object):
|
||||
l.append(container)
|
||||
return l
|
||||
|
||||
def _inject_links(self, acc, service):
|
||||
linked_names = service.get_linked_names()
|
||||
|
||||
if len(linked_names) > 0:
|
||||
linked_services = self.get_services(
|
||||
service_names=linked_names,
|
||||
include_links=True
|
||||
)
|
||||
else:
|
||||
linked_services = []
|
||||
|
||||
linked_services.append(service)
|
||||
return acc + linked_services
|
||||
|
||||
|
||||
class NoSuchService(Exception):
|
||||
def __init__(self, name):
|
||||
@@ -161,6 +217,6 @@ class ConfigurationError(Exception):
|
||||
def __str__(self):
|
||||
return self.msg
|
||||
|
||||
|
||||
class DependencyError(ConfigurationError):
|
||||
pass
|
||||
|
||||
|
||||
227
fig/service.py
227
fig/service.py
@@ -5,13 +5,13 @@ import logging
|
||||
import re
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
from .container import Container
|
||||
from .progress_stream import stream_output, StreamOutputError
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
DOCKER_CONFIG_KEYS = ['image', 'command', 'hostname', 'user', 'detach', 'stdin_open', 'tty', 'mem_limit', 'ports', 'environment', 'dns', 'volumes', 'volumes_from', 'entrypoint', 'privileged']
|
||||
DOCKER_CONFIG_KEYS = ['image', 'command', 'hostname', 'domainname', 'user', 'detach', 'stdin_open', 'tty', 'mem_limit', 'ports', 'environment', 'dns', 'volumes', 'entrypoint', 'privileged', 'volumes_from', 'net', 'working_dir']
|
||||
DOCKER_CONFIG_HINTS = {
|
||||
'link' : 'links',
|
||||
'port' : 'ports',
|
||||
@@ -19,8 +19,11 @@ DOCKER_CONFIG_HINTS = {
|
||||
'priviliged': 'privileged',
|
||||
'privilige' : 'privileged',
|
||||
'volume' : 'volumes',
|
||||
'workdir' : 'working_dir',
|
||||
}
|
||||
|
||||
VALID_NAME_CHARS = '[a-zA-Z0-9]'
|
||||
|
||||
|
||||
class BuildError(Exception):
|
||||
def __init__(self, service, reason):
|
||||
@@ -37,11 +40,11 @@ class ConfigError(ValueError):
|
||||
|
||||
|
||||
class Service(object):
|
||||
def __init__(self, name, client=None, project='default', links=[], **options):
|
||||
if not re.match('^[a-zA-Z0-9]+$', name):
|
||||
raise ConfigError('Invalid name: %s' % name)
|
||||
if not re.match('^[a-zA-Z0-9]+$', project):
|
||||
raise ConfigError('Invalid project: %s' % project)
|
||||
def __init__(self, name, client=None, project='default', links=None, volumes_from=None, **options):
|
||||
if not re.match('^%s+$' % VALID_NAME_CHARS, name):
|
||||
raise ConfigError('Invalid service name "%s" - only %s are allowed' % (name, VALID_NAME_CHARS))
|
||||
if not re.match('^%s+$' % VALID_NAME_CHARS, project):
|
||||
raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
|
||||
if 'image' in options and 'build' in options:
|
||||
raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)
|
||||
|
||||
@@ -58,6 +61,7 @@ class Service(object):
|
||||
self.client = client
|
||||
self.project = project
|
||||
self.links = links or []
|
||||
self.volumes_from = volumes_from or []
|
||||
self.options = options
|
||||
|
||||
def containers(self, stopped=False, one_off=False):
|
||||
@@ -73,9 +77,7 @@ class Service(object):
|
||||
|
||||
def start(self, **options):
|
||||
for c in self.containers(stopped=True):
|
||||
if not c.is_running:
|
||||
log.info("Starting %s..." % c.name)
|
||||
self.start_container(c, **options)
|
||||
self.start_container_if_stopped(c, **options)
|
||||
|
||||
def stop(self, **options):
|
||||
for c in self.containers():
|
||||
@@ -130,7 +132,6 @@ class Service(object):
|
||||
|
||||
self.remove_stopped()
|
||||
|
||||
|
||||
def remove_stopped(self, **options):
|
||||
for c in self.containers(stopped=True):
|
||||
if not c.is_running:
|
||||
@@ -175,13 +176,19 @@ class Service(object):
|
||||
return tuples
|
||||
|
||||
def recreate_container(self, container, **override_options):
|
||||
if container.is_running:
|
||||
container.stop(timeout=1)
|
||||
try:
|
||||
container.stop()
|
||||
except APIError as e:
|
||||
if (e.response.status_code == 500
|
||||
and e.explanation
|
||||
and 'no such process' in str(e.explanation)):
|
||||
pass
|
||||
else:
|
||||
raise
|
||||
|
||||
intermediate_container = Container.create(
|
||||
self.client,
|
||||
image=container.image,
|
||||
volumes_from=container.id,
|
||||
entrypoint=['echo'],
|
||||
command=[],
|
||||
)
|
||||
@@ -190,15 +197,21 @@ class Service(object):
|
||||
container.remove()
|
||||
|
||||
options = dict(override_options)
|
||||
options['volumes_from'] = intermediate_container.id
|
||||
new_container = self.create_container(**options)
|
||||
self.start_container(new_container, volumes_from=intermediate_container.id)
|
||||
self.start_container(new_container, intermediate_container=intermediate_container)
|
||||
|
||||
intermediate_container.remove()
|
||||
|
||||
return (intermediate_container, new_container)
|
||||
|
||||
def start_container(self, container=None, volumes_from=None, **override_options):
|
||||
def start_container_if_stopped(self, container, **options):
|
||||
if container.is_running:
|
||||
return container
|
||||
else:
|
||||
log.info("Starting %s..." % container.name)
|
||||
return self.start_container(container, **options)
|
||||
|
||||
def start_container(self, container=None, intermediate_container=None, **override_options):
|
||||
if container is None:
|
||||
container = self.create_container(**override_options)
|
||||
|
||||
@@ -209,12 +222,7 @@ class Service(object):
|
||||
|
||||
if options.get('ports', None) is not None:
|
||||
for port in options['ports']:
|
||||
port = str(port)
|
||||
if ':' in port:
|
||||
external_port, internal_port = port.split(':', 1)
|
||||
else:
|
||||
external_port, internal_port = (None, port)
|
||||
|
||||
internal_port, external_port = split_port(port)
|
||||
port_bindings[internal_port] = external_port
|
||||
|
||||
volume_bindings = {}
|
||||
@@ -229,16 +237,33 @@ class Service(object):
|
||||
}
|
||||
|
||||
privileged = options.get('privileged', False)
|
||||
net = options.get('net', 'bridge')
|
||||
dns = options.get('dns', None)
|
||||
|
||||
container.start(
|
||||
links=self._get_links(link_to_self=override_options.get('one_off', False)),
|
||||
port_bindings=port_bindings,
|
||||
binds=volume_bindings,
|
||||
volumes_from=volumes_from,
|
||||
volumes_from=self._get_volumes_from(intermediate_container),
|
||||
privileged=privileged,
|
||||
network_mode=net,
|
||||
dns=dns,
|
||||
)
|
||||
return container
|
||||
|
||||
def start_or_create_containers(self):
|
||||
containers = self.containers(stopped=True)
|
||||
|
||||
if len(containers) == 0:
|
||||
log.info("Creating %s..." % self.next_container_name())
|
||||
new_container = self.create_container()
|
||||
return [self.start_container(new_container)]
|
||||
else:
|
||||
return [self.start_container_if_stopped(c) for c in containers]
|
||||
|
||||
def get_linked_names(self):
|
||||
return [s.name for (s, _) in self.links]
|
||||
|
||||
def next_container_name(self, one_off=False):
|
||||
bits = [self.project, self.name]
|
||||
if one_off:
|
||||
@@ -267,12 +292,37 @@ class Service(object):
|
||||
links.append((container.name, container.name_without_project))
|
||||
return links
|
||||
|
||||
def _get_volumes_from(self, intermediate_container=None):
|
||||
volumes_from = []
|
||||
for v in self.volumes_from:
|
||||
if isinstance(v, Service):
|
||||
for container in v.containers(stopped=True):
|
||||
volumes_from.append(container.id)
|
||||
elif isinstance(v, Container):
|
||||
volumes_from.append(v.id)
|
||||
|
||||
if intermediate_container:
|
||||
volumes_from.append(intermediate_container.id)
|
||||
|
||||
return volumes_from
|
||||
|
||||
def _get_container_create_options(self, override_options, one_off=False):
|
||||
container_options = dict((k, self.options[k]) for k in DOCKER_CONFIG_KEYS if k in self.options)
|
||||
container_options.update(override_options)
|
||||
|
||||
container_options['name'] = self.next_container_name(one_off)
|
||||
|
||||
# If a qualified hostname was given, split it into an
|
||||
# unqualified hostname and a domainname unless domainname
|
||||
# was also given explicitly. This matches the behavior of
|
||||
# the official Docker CLI in that scenario.
|
||||
if ('hostname' in container_options
|
||||
and 'domainname' not in container_options
|
||||
and '.' in container_options['hostname']):
|
||||
parts = container_options['hostname'].partition('.')
|
||||
container_options['hostname'] = parts[0]
|
||||
container_options['domainname'] = parts[2]
|
||||
|
||||
if 'ports' in container_options or 'expose' in self.options:
|
||||
ports = []
|
||||
all_ports = container_options.get('ports', []) + self.options.get('expose', [])
|
||||
@@ -288,24 +338,32 @@ class Service(object):
|
||||
if 'volumes' in container_options:
|
||||
container_options['volumes'] = dict((split_volume(v)[1], {}) for v in container_options['volumes'])
|
||||
|
||||
if 'environment' in container_options:
|
||||
if isinstance(container_options['environment'], list):
|
||||
container_options['environment'] = dict(split_env(e) for e in container_options['environment'])
|
||||
container_options['environment'] = dict(resolve_env(k, v) for k, v in container_options['environment'].iteritems())
|
||||
|
||||
if self.can_be_built():
|
||||
if len(self.client.images(name=self._build_tag_name())) == 0:
|
||||
self.build()
|
||||
container_options['image'] = self._build_tag_name()
|
||||
|
||||
# Priviliged is only required for starting containers, not for creating them
|
||||
if 'privileged' in container_options:
|
||||
del container_options['privileged']
|
||||
# Delete options which are only used when starting
|
||||
for key in ['privileged', 'net', 'dns']:
|
||||
if key in container_options:
|
||||
del container_options[key]
|
||||
|
||||
return container_options
|
||||
|
||||
def build(self):
|
||||
def build(self, no_cache=False):
|
||||
log.info('Building %s...' % self.name)
|
||||
|
||||
build_output = self.client.build(
|
||||
self.options['build'],
|
||||
tag=self._build_tag_name(),
|
||||
stream=True
|
||||
stream=True,
|
||||
rm=True,
|
||||
nocache=no_cache,
|
||||
)
|
||||
|
||||
try:
|
||||
@@ -342,84 +400,6 @@ class Service(object):
|
||||
return True
|
||||
|
||||
|
||||
class StreamOutputError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def stream_output(output, stream):
|
||||
is_terminal = hasattr(stream, 'fileno') and os.isatty(stream.fileno())
|
||||
all_events = []
|
||||
lines = {}
|
||||
diff = 0
|
||||
|
||||
for chunk in output:
|
||||
event = json.loads(chunk)
|
||||
all_events.append(event)
|
||||
|
||||
if 'progress' in event or 'progressDetail' in event:
|
||||
image_id = event['id']
|
||||
|
||||
if image_id in lines:
|
||||
diff = len(lines) - lines[image_id]
|
||||
else:
|
||||
lines[image_id] = len(lines)
|
||||
stream.write("\n")
|
||||
diff = 0
|
||||
|
||||
if is_terminal:
|
||||
# move cursor up `diff` rows
|
||||
stream.write("%c[%dA" % (27, diff))
|
||||
|
||||
print_output_event(event, stream, is_terminal)
|
||||
|
||||
if 'id' in event and is_terminal:
|
||||
# move cursor back down
|
||||
stream.write("%c[%dB" % (27, diff))
|
||||
|
||||
stream.flush()
|
||||
|
||||
return all_events
|
||||
|
||||
def print_output_event(event, stream, is_terminal):
|
||||
if 'errorDetail' in event:
|
||||
raise StreamOutputError(event['errorDetail']['message'])
|
||||
|
||||
terminator = ''
|
||||
|
||||
if is_terminal and 'stream' not in event:
|
||||
# erase current line
|
||||
stream.write("%c[2K\r" % 27)
|
||||
terminator = "\r"
|
||||
pass
|
||||
elif 'progressDetail' in event:
|
||||
return
|
||||
|
||||
if 'time' in event:
|
||||
stream.write("[%s] " % event['time'])
|
||||
|
||||
if 'id' in event:
|
||||
stream.write("%s: " % event['id'])
|
||||
|
||||
if 'from' in event:
|
||||
stream.write("(from %s) " % event['from'])
|
||||
|
||||
status = event.get('status', '')
|
||||
|
||||
if 'progress' in event:
|
||||
stream.write("%s %s%s" % (status, event['progress'], terminator))
|
||||
elif 'progressDetail' in event:
|
||||
detail = event['progressDetail']
|
||||
if 'current' in detail:
|
||||
percentage = float(detail['current']) / float(detail['total']) * 100
|
||||
stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
|
||||
else:
|
||||
stream.write('%s%s' % (status, terminator))
|
||||
elif 'stream' in event:
|
||||
stream.write("%s%s" % (event['stream'], terminator))
|
||||
else:
|
||||
stream.write("%s%s\n" % (status, terminator))
|
||||
|
||||
|
||||
NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')
|
||||
|
||||
|
||||
@@ -460,3 +440,36 @@ def split_volume(v):
return v.split(':', 1)
else:
return (None, v)


def split_port(port):
port = str(port)
external_ip = None
if ':' in port:
external_port, internal_port = port.rsplit(':', 1)
if ':' in external_port:
external_ip, external_port = external_port.split(':', 1)
else:
external_port, internal_port = (None, port)
if external_ip:
if external_port:
external_port = (external_ip, external_port)
else:
external_port = (external_ip,)
return internal_port, external_port


def split_env(env):
if '=' in env:
return env.split('=', 1)
else:
return env, None


def resolve_env(key, val):
if val is not None:
return key, val
elif key in os.environ:
return key, os.environ[key]
else:
return key, ''

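Note (not part of the diff): expected behaviour of the new service-level helpers, based on the code in the hunk above; the import path is an assumption.

# Sketch under the stated assumptions.
import os
from fig.service import split_port, resolve_env  # assumed module path

print(split_port(8000))                    # expected: ('8000', None)
print(split_port('8000:8000'))             # expected: ('8000', '8000')
print(split_port('127.0.0.1:8001:8000'))   # expected: ('8000', ('127.0.0.1', '8001'))

os.environ['API_KEY'] = 'secret'           # hypothetical variable for illustration
print(resolve_env('API_KEY', None))        # expected: ('API_KEY', 'secret')
print(resolve_env('DEBUG', '1'))           # expected: ('DEBUG', '1')
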
@@ -2,3 +2,4 @@ mock==1.0.1
nose==1.3.0
pyinstaller==2.1
unittest2
flake8

@@ -3,3 +3,4 @@ PyYAML==3.10
requests==2.2.1
texttable==0.8.1
websocket-client==0.11.0
dockerpty==0.2.3

@@ -1,5 +1,7 @@
#!/bin/sh
set -ex
mkdir -p `pwd`/dist
chmod 777 `pwd`/dist
docker build -t fig .
docker run -v `pwd`/dist:/code/dist fig pyinstaller -F bin/fig
docker run -v `pwd`/dist:/code/dist fig dist/fig --version

@@ -5,3 +5,4 @@ virtualenv venv
venv/bin/pip install pyinstaller==2.1
venv/bin/pip install .
venv/bin/pyinstaller -F bin/fig
dist/fig --version

@@ -1,3 +1,3 @@
#!/bin/sh
find . -type f -name '*.pyc' -delete

rm -rf docs/_site build dist fig.egg-info

@@ -21,8 +21,7 @@ git reset --soft origin/gh-pages

echo ".git-gh-pages" > .gitignore

git add -u
git add .
git add -A .

git commit -m "update" || echo "didn't commit"
git push origin master:gh-pages

@@ -1,2 +1,4 @@
#!/bin/sh
nosetests
set -e
flake8 fig
PYTHONIOENCODING=ascii nosetests $@

setup.py (7 lines changed)
@@ -32,10 +32,9 @@ setup(
name='fig',
version=find_version("fig", "__init__.py"),
description='Punctual, lightweight development environments using Docker',
url='http://orchardup.github.io/fig/',
author='Orchard Laboratories Ltd.',
author_email='hello@orchardup.com',
license='BSD',
url='http://www.fig.sh/',
author='Docker, Inc.',
license='Apache License 2.0',
packages=find_packages(),
include_package_data=True,
test_suite='nose.collector',

5
tests/fixtures/commands-figfile/fig.yml
vendored
Normal file
5
tests/fixtures/commands-figfile/fig.yml
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
implicit:
|
||||
image: figtest_test
|
||||
explicit:
|
||||
image: figtest_test
|
||||
command: [ "/bin/true" ]
|
||||
11
tests/fixtures/links-figfile/fig.yml
vendored
Normal file
11
tests/fixtures/links-figfile/fig.yml
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
db:
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
web:
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
links:
|
||||
- db:db
|
||||
console:
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
@@ -1,3 +1,3 @@
|
||||
definedinyamlnotyml:
|
||||
image: ubuntu
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
4
tests/fixtures/multiple-figfiles/fig.yml
vendored
4
tests/fixtures/multiple-figfiles/fig.yml
vendored
@@ -1,6 +1,6 @@
|
||||
simple:
|
||||
image: ubuntu
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
another:
|
||||
image: ubuntu
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
|
||||
2
tests/fixtures/multiple-figfiles/fig2.yml
vendored
2
tests/fixtures/multiple-figfiles/fig2.yml
vendored
@@ -1,3 +1,3 @@
|
||||
yetanother:
|
||||
image: ubuntu
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
|
||||
2
tests/fixtures/simple-dockerfile/Dockerfile
vendored
2
tests/fixtures/simple-dockerfile/Dockerfile
vendored
@@ -1,2 +1,2 @@
|
||||
FROM ubuntu
|
||||
FROM busybox:latest
|
||||
CMD echo "success"
|
||||
|
||||
2
tests/fixtures/simple-dockerfile/fig.yml
vendored
Normal file
2
tests/fixtures/simple-dockerfile/fig.yml
vendored
Normal file
@@ -0,0 +1,2 @@
|
||||
simple:
|
||||
build: tests/fixtures/simple-dockerfile
|
||||
4
tests/fixtures/simple-figfile/fig.yml
vendored
4
tests/fixtures/simple-figfile/fig.yml
vendored
@@ -1,6 +1,6 @@
|
||||
simple:
|
||||
image: ubuntu
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
another:
|
||||
image: ubuntu
|
||||
image: busybox:latest
|
||||
command: /bin/sleep 300
|
||||
|
||||
@@ -1,17 +1,20 @@
|
||||
from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
from .testcases import DockerClientTestCase
|
||||
from mock import patch
|
||||
from fig.cli.main import TopLevelCommand
|
||||
from fig.packages.six import StringIO
|
||||
import sys
|
||||
|
||||
class CLITestCase(DockerClientTestCase):
|
||||
def setUp(self):
|
||||
super(CLITestCase, self).setUp()
|
||||
self.old_sys_exit = sys.exit
|
||||
sys.exit = lambda code=0: None
|
||||
self.command = TopLevelCommand()
|
||||
self.command.base_dir = 'tests/fixtures/simple-figfile'
|
||||
|
||||
def tearDown(self):
|
||||
sys.exit = self.old_sys_exit
|
||||
self.command.project.kill()
|
||||
self.command.project.remove_stopped()
|
||||
|
||||
@@ -19,7 +22,7 @@ class CLITestCase(DockerClientTestCase):
|
||||
def test_ps(self, mock_stdout):
|
||||
self.command.project.get_service('simple').create_container()
|
||||
self.command.dispatch(['ps'], None)
|
||||
self.assertIn('fig_simple_1', mock_stdout.getvalue())
|
||||
self.assertIn('simplefigfile_simple_1', mock_stdout.getvalue())
|
||||
|
||||
@patch('sys.stdout', new_callable=StringIO)
|
||||
def test_ps_default_figfile(self, mock_stdout):
|
||||
@@ -28,9 +31,9 @@ class CLITestCase(DockerClientTestCase):
|
||||
self.command.dispatch(['ps'], None)
|
||||
|
||||
output = mock_stdout.getvalue()
|
||||
self.assertIn('fig_simple_1', output)
|
||||
self.assertIn('fig_another_1', output)
|
||||
self.assertNotIn('fig_yetanother_1', output)
|
||||
self.assertIn('multiplefigfiles_simple_1', output)
|
||||
self.assertIn('multiplefigfiles_another_1', output)
|
||||
self.assertNotIn('multiplefigfiles_yetanother_1', output)
|
||||
|
||||
@patch('sys.stdout', new_callable=StringIO)
|
||||
def test_ps_alternate_figfile(self, mock_stdout):
|
||||
@@ -39,9 +42,143 @@ class CLITestCase(DockerClientTestCase):
|
||||
self.command.dispatch(['-f', 'fig2.yml', 'ps'], None)
|
||||
|
||||
output = mock_stdout.getvalue()
|
||||
self.assertNotIn('fig_simple_1', output)
|
||||
self.assertNotIn('fig_another_1', output)
|
||||
self.assertIn('fig_yetanother_1', output)
|
||||
self.assertNotIn('multiplefigfiles_simple_1', output)
|
||||
self.assertNotIn('multiplefigfiles_another_1', output)
|
||||
self.assertIn('multiplefigfiles_yetanother_1', output)
|
||||
|
||||
@patch('sys.stdout', new_callable=StringIO)
|
||||
def test_build_no_cache(self, mock_stdout):
|
||||
self.command.base_dir = 'tests/fixtures/simple-dockerfile'
|
||||
self.command.dispatch(['build', 'simple'], None)
|
||||
|
||||
mock_stdout.truncate(0)
|
||||
cache_indicator = 'Using cache'
|
||||
self.command.dispatch(['build', 'simple'], None)
|
||||
output = mock_stdout.getvalue()
|
||||
self.assertIn(cache_indicator, output)
|
||||
|
||||
mock_stdout.truncate(0)
|
||||
self.command.dispatch(['build', '--no-cache', 'simple'], None)
|
||||
output = mock_stdout.getvalue()
|
||||
self.assertNotIn(cache_indicator, output)
|
||||
|
||||
def test_up(self):
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
service = self.command.project.get_service('simple')
|
||||
another = self.command.project.get_service('another')
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
self.assertEqual(len(another.containers()), 1)
|
||||
|
||||
def test_up_with_links(self):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['up', '-d', 'web'], None)
|
||||
web = self.command.project.get_service('web')
|
||||
db = self.command.project.get_service('db')
|
||||
console = self.command.project.get_service('console')
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
def test_up_with_no_deps(self):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['up', '-d', '--no-deps', 'web'], None)
|
||||
web = self.command.project.get_service('web')
|
||||
db = self.command.project.get_service('db')
|
||||
console = self.command.project.get_service('console')
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 0)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
def test_up_with_recreate(self):
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
service = self.command.project.get_service('simple')
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
old_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
new_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.assertNotEqual(old_ids, new_ids)
|
||||
|
||||
def test_up_with_keep_old(self):
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
service = self.command.project.get_service('simple')
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
old_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.command.dispatch(['up', '-d', '--no-recreate'], None)
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
new_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.assertEqual(old_ids, new_ids)
|
||||
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_service_without_links(self, mock_stdout):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['run', 'console', '/bin/true'], None)
|
||||
self.assertEqual(len(self.command.project.containers()), 0)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_service_with_links(self, __):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['run', 'web', '/bin/true'], None)
|
||||
db = self.command.project.get_service('db')
|
||||
console = self.command.project.get_service('console')
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_with_no_deps(self, __):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['run', '--no-deps', 'web', '/bin/true'], None)
|
||||
db = self.command.project.get_service('db')
|
||||
self.assertEqual(len(db.containers()), 0)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_does_not_recreate_linked_containers(self, __):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['up', '-d', 'db'], None)
|
||||
db = self.command.project.get_service('db')
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
|
||||
old_ids = [c.id for c in db.containers()]
|
||||
|
||||
self.command.dispatch(['run', 'web', '/bin/true'], None)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
|
||||
new_ids = [c.id for c in db.containers()]
|
||||
|
||||
self.assertEqual(old_ids, new_ids)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_without_command(self, __):
|
||||
self.command.base_dir = 'tests/fixtures/commands-figfile'
|
||||
self.client.build('tests/fixtures/simple-dockerfile', tag='figtest_test')
|
||||
|
||||
for c in self.command.project.containers(stopped=True, one_off=True):
|
||||
c.remove()
|
||||
|
||||
self.command.dispatch(['run', 'implicit'], None)
|
||||
service = self.command.project.get_service('implicit')
|
||||
containers = service.containers(stopped=True, one_off=True)
|
||||
self.assertEqual(
|
||||
[c.human_readable_command for c in containers],
|
||||
[u'/bin/sh -c echo "success"'],
|
||||
)
|
||||
|
||||
self.command.dispatch(['run', 'explicit'], None)
|
||||
service = self.command.project.get_service('explicit')
|
||||
containers = service.containers(stopped=True, one_off=True)
|
||||
self.assertEqual(
|
||||
[c.human_readable_command for c in containers],
|
||||
[u'/bin/true'],
|
||||
)
|
||||
|
||||
def test_rm(self):
|
||||
service = self.command.project.get_service('simple')
|
||||
@@ -72,3 +209,4 @@ class CLITestCase(DockerClientTestCase):
|
||||
self.command.scale({'SERVICE=NUM': ['simple=0', 'another=0']})
|
||||
self.assertEqual(len(project.get_service('simple').containers()), 0)
|
||||
self.assertEqual(len(project.get_service('another').containers()), 0)
|
||||
|
||||
|
||||
@@ -1,9 +1,49 @@
from __future__ import unicode_literals
from fig.project import Project, ConfigurationError
from fig.container import Container
from .testcases import DockerClientTestCase


class ProjectTest(DockerClientTestCase):
    def test_volumes_from_service(self):
        project = Project.from_config(
            name='figtest',
            config={
                'data': {
                    'image': 'busybox:latest',
                    'volumes': ['/var/data'],
                },
                'db': {
                    'image': 'busybox:latest',
                    'volumes_from': ['data'],
                },
            },
            client=self.client,
        )
        db = project.get_service('db')
        data = project.get_service('data')
        self.assertEqual(db.volumes_from, [data])

    def test_volumes_from_container(self):
        data_container = Container.create(
            self.client,
            image='busybox:latest',
            volumes=['/var/data'],
            name='figtest_data_container',
        )
        project = Project.from_config(
            name='figtest',
            config={
                'db': {
                    'image': 'busybox:latest',
                    'volumes_from': ['figtest_data_container'],
                },
            },
            client=self.client,
        )
        db = project.get_service('db')
        self.assertEqual(db.volumes_from, [data_container])

    def test_start_stop_kill_remove(self):
        web = self.create_service('web')
        db = self.create_service('db')

@@ -44,6 +84,21 @@ class ProjectTest(DockerClientTestCase):
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['db'])
        self.assertEqual(len(project.containers()), 1)
        self.assertEqual(len(db.containers()), 1)
        self.assertEqual(len(web.containers()), 0)

        project.kill()
        project.remove_stopped()

    def test_project_up_recreates_containers(self):
        web = self.create_service('web')
        db = self.create_service('db', volumes=['/var/db'])
        project = Project('figtest', [web, db], self.client)
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['db'])
        self.assertEqual(len(project.containers()), 1)
        old_db_id = project.containers()[0].id

@@ -59,6 +114,107 @@ class ProjectTest(DockerClientTestCase):
        project.kill()
        project.remove_stopped()

    def test_project_up_with_no_recreate_running(self):
        web = self.create_service('web')
        db = self.create_service('db', volumes=['/var/db'])
        project = Project('figtest', [web, db], self.client)
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['db'])
        self.assertEqual(len(project.containers()), 1)
        old_db_id = project.containers()[0].id
        db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db']

        project.up(recreate=False)
        self.assertEqual(len(project.containers()), 2)

        db_container = [c for c in project.containers() if 'db' in c.name][0]
        self.assertEqual(db_container.id, old_db_id)
        self.assertEqual(db_container.inspect()['Volumes']['/var/db'], db_volume_path)

        project.kill()
        project.remove_stopped()

    def test_project_up_with_no_recreate_stopped(self):
        web = self.create_service('web')
        db = self.create_service('db', volumes=['/var/db'])
        project = Project('figtest', [web, db], self.client)
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['db'])
        project.stop()

        old_containers = project.containers(stopped=True)

        self.assertEqual(len(old_containers), 1)
        old_db_id = old_containers[0].id
        db_volume_path = old_containers[0].inspect()['Volumes']['/var/db']

        project.up(recreate=False)

        new_containers = project.containers(stopped=True)
        self.assertEqual(len(new_containers), 2)

        db_container = [c for c in new_containers if 'db' in c.name][0]
        self.assertEqual(db_container.id, old_db_id)
        self.assertEqual(db_container.inspect()['Volumes']['/var/db'], db_volume_path)

        project.kill()
        project.remove_stopped()

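
The two tests above only pin down the `--no-recreate` behaviour from the outside. As a rough illustration of what they imply (this is not the actual fig.project.Project.up implementation, and the function name is made up), with recreate=False a service's existing containers are simply started again and new containers are only created for services that have none, so container IDs and volume paths survive:

def up_without_recreate(project, service_names=None):
    # Illustration only, using the Service helpers these tests already rely on
    # (containers(), create_container(), start_container()).
    for service in project.get_services(service_names):
        existing = service.containers(stopped=True)
        if existing:
            # reuse what is already there, keeping ids and volumes intact
            for container in existing:
                service.start_container(container)
        else:
            service.start_container(service.create_container())
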
    def test_project_up_without_all_services(self):
        console = self.create_service('console')
        db = self.create_service('db')
        project = Project('figtest', [console, db], self.client)
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up()
        self.assertEqual(len(project.containers()), 2)
        self.assertEqual(len(db.containers()), 1)
        self.assertEqual(len(console.containers()), 1)

        project.kill()
        project.remove_stopped()

    def test_project_up_starts_links(self):
        console = self.create_service('console')
        db = self.create_service('db', volumes=['/var/db'])
        web = self.create_service('web', links=[(db, 'db')])

        project = Project('figtest', [web, db, console], self.client)
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['web'])
        self.assertEqual(len(project.containers()), 2)
        self.assertEqual(len(web.containers()), 1)
        self.assertEqual(len(db.containers()), 1)
        self.assertEqual(len(console.containers()), 0)

        project.kill()
        project.remove_stopped()

    def test_project_up_with_no_deps(self):
        console = self.create_service('console')
        db = self.create_service('db', volumes=['/var/db'])
        web = self.create_service('web', links=[(db, 'db')])

        project = Project('figtest', [web, db, console], self.client)
        project.start()
        self.assertEqual(len(project.containers()), 0)

        project.up(['web'], start_links=False)
        self.assertEqual(len(project.containers()), 1)
        self.assertEqual(len(web.containers()), 1)
        self.assertEqual(len(db.containers()), 0)
        self.assertEqual(len(console.containers()), 0)

        project.kill()
        project.remove_stopped()

    def test_unscale_after_restart(self):
        web = self.create_service('web')
        project = Project('figtest', [web], self.client)

@@ -2,8 +2,10 @@ from __future__ import unicode_literals
from __future__ import absolute_import
from fig import Service
from fig.service import CannotBeScaledError
from fig.container import Container
from fig.packages.docker.errors import APIError
from .testcases import DockerClientTestCase
import os

class ServiceTest(DockerClientTestCase):
    def test_containers(self):

@@ -96,17 +98,27 @@ class ServiceTest(DockerClientTestCase):
        service.start_container(container)
        self.assertIn('/host-tmp', container.inspect()['Volumes'])

    def test_create_container_with_volumes_from(self):
        volume_service = self.create_service('data')
        volume_container_1 = volume_service.create_container()
        volume_container_2 = Container.create(self.client, image='busybox:latest', command=["/bin/sleep", "300"])
        host_service = self.create_service('host', volumes_from=[volume_service, volume_container_2])
        host_container = host_service.create_container()
        host_service.start_container(host_container)
        self.assertIn(volume_container_1.id, host_container.inspect()['HostConfig']['VolumesFrom'])
        self.assertIn(volume_container_2.id, host_container.inspect()['HostConfig']['VolumesFrom'])

    def test_recreate_containers(self):
        service = self.create_service(
            'db',
            environment={'FOO': '1'},
            volumes=['/var/db'],
            entrypoint=['ps'],
            command=['ax']
            entrypoint=['sleep'],
            command=['300']
        )
        old_container = service.create_container()
        self.assertEqual(old_container.dictionary['Config']['Entrypoint'], ['ps'])
        self.assertEqual(old_container.dictionary['Config']['Cmd'], ['ax'])
        self.assertEqual(old_container.dictionary['Config']['Entrypoint'], ['sleep'])
        self.assertEqual(old_container.dictionary['Config']['Cmd'], ['300'])
        self.assertIn('FOO=1', old_container.dictionary['Config']['Env'])
        self.assertEqual(old_container.name, 'figtest_db_1')
        service.start_container(old_container)

@@ -122,16 +134,30 @@ class ServiceTest(DockerClientTestCase):
        new_container = tuples[0][1]
        self.assertEqual(intermediate_container.dictionary['Config']['Entrypoint'], ['echo'])

        self.assertEqual(new_container.dictionary['Config']['Entrypoint'], ['ps'])
        self.assertEqual(new_container.dictionary['Config']['Cmd'], ['ax'])
        self.assertEqual(new_container.dictionary['Config']['Entrypoint'], ['sleep'])
        self.assertEqual(new_container.dictionary['Config']['Cmd'], ['300'])
        self.assertIn('FOO=2', new_container.dictionary['Config']['Env'])
        self.assertEqual(new_container.name, 'figtest_db_1')
        self.assertEqual(new_container.inspect()['Volumes']['/var/db'], volume_path)
        self.assertIn(intermediate_container.id, new_container.dictionary['HostConfig']['VolumesFrom'])

        self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
        self.assertNotEqual(old_container.id, new_container.id)
        self.assertRaises(APIError, lambda: self.client.inspect_container(intermediate_container.id))

    def test_recreate_containers_when_containers_are_stopped(self):
        service = self.create_service(
            'db',
            environment={'FOO': '1'},
            volumes=['/var/db'],
            entrypoint=['sleep'],
            command=['300']
        )
        old_container = service.create_container()
        self.assertEqual(len(service.containers(stopped=True)), 1)
        service.recreate_containers()
        self.assertEqual(len(service.containers(stopped=True)), 1)

    def test_start_container_passes_through_options(self):
        db = self.create_service('db')
        db.start_container(environment={'FOO': 'BAR'})

@@ -231,6 +257,27 @@ class ServiceTest(DockerClientTestCase):
        self.assertIn('8000/tcp', container['NetworkSettings']['Ports'])
        self.assertEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8001')

    def test_port_with_explicit_interface(self):
        service = self.create_service('web', ports=[
            '127.0.0.1:8001:8000',
            '0.0.0.0:9001:9000/udp',
        ])
        container = service.start_container().inspect()
        self.assertEqual(container['NetworkSettings']['Ports'], {
            '8000/tcp': [
                {
                    'HostIp': '127.0.0.1',
                    'HostPort': '8001',
                },
            ],
            '9000/udp': [
                {
                    'HostIp': '0.0.0.0',
                    'HostPort': '9001',
                },
            ],
        })

    def test_scale(self):
        service = self.create_service('web')
        service.scale(1)

@@ -253,3 +300,53 @@ class ServiceTest(DockerClientTestCase):
        self.assertEqual(len(containers), 2)
        for container in containers:
            self.assertEqual(list(container.inspect()['HostConfig']['PortBindings'].keys()), ['8000/tcp'])

    def test_network_mode_none(self):
        service = self.create_service('web', net='none')
        container = service.start_container().inspect()
        self.assertEqual(container['HostConfig']['NetworkMode'], 'none')

    def test_network_mode_bridged(self):
        service = self.create_service('web', net='bridge')
        container = service.start_container().inspect()
        self.assertEqual(container['HostConfig']['NetworkMode'], 'bridge')

    def test_network_mode_host(self):
        service = self.create_service('web', net='host')
        container = service.start_container().inspect()
        self.assertEqual(container['HostConfig']['NetworkMode'], 'host')

    def test_dns_single_value(self):
        service = self.create_service('web', dns='8.8.8.8')
        container = service.start_container().inspect()
        self.assertEqual(container['HostConfig']['Dns'], ['8.8.8.8'])

    def test_dns_list(self):
        service = self.create_service('web', dns=['8.8.8.8', '9.9.9.9'])
        container = service.start_container().inspect()
        self.assertEqual(container['HostConfig']['Dns'], ['8.8.8.8', '9.9.9.9'])

    def test_working_dir_param(self):
        service = self.create_service('container', working_dir='/working/dir/sample')
        container = service.create_container().inspect()
        self.assertEqual(container['Config']['WorkingDir'], '/working/dir/sample')

    def test_split_env(self):
        service = self.create_service('web', environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='])
        env = service.start_container().environment
        for k, v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.iteritems():
            self.assertEqual(env[k], v)

    def test_resolve_env(self):
        service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        try:
            env = service.start_container().environment
            for k, v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.iteritems():
                self.assertEqual(env[k], v)
        finally:
            del os.environ['FILE_DEF']
            del os.environ['FILE_DEF_EMPTY']
            del os.environ['ENV_DEF']

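
test_split_env and test_resolve_env above only exercise the behaviour from the outside. As a rough sketch of what they encode (the helper names below are made up for illustration, not fig's API): list entries are split on the first '=' only, and keys declared with a None value fall back to os.environ, defaulting to an empty string.

import os

def split_env_entry(entry):
    # 'CONTAINS_EQUALS=F=2' -> ('CONTAINS_EQUALS', 'F=2'); a bare 'KEY' -> ('KEY', None)
    if '=' in entry:
        key, value = entry.split('=', 1)
        return key, value
    return entry, None

def resolve_env_value(key, value):
    # explicit values win; None means "take it from the environment, or ''"
    if value is not None:
        return value
    return os.environ.get(key, '')
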
@@ -10,7 +10,7 @@ class DockerClientTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.client = Client(docker_url())
        cls.client.pull('ubuntu', tag='latest')
        cls.client.pull('busybox', tag='latest')

    def setUp(self):
        for c in self.client.containers(all=True):

@@ -28,7 +28,7 @@ class DockerClientTestCase(unittest.TestCase):
            project='figtest',
            name=name,
            client=self.client,
            image="ubuntu",
            image="busybox:latest",
            **kwargs
        )

@@ -1,10 +1,35 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import logging
import os
from .. import unittest

from fig.cli import main
from fig.cli.main import TopLevelCommand
from fig.packages.six import StringIO


class CLITestCase(unittest.TestCase):
    def test_default_project_name(self):
        cwd = os.getcwd()

        try:
            os.chdir('tests/fixtures/simple-figfile')
            command = TopLevelCommand()
            self.assertEquals('simplefigfile', command.project_name)
        finally:
            os.chdir(cwd)

    def test_project_name_with_explicit_base_dir(self):
        command = TopLevelCommand()
        command.base_dir = 'tests/fixtures/simple-figfile'
        self.assertEquals('simplefigfile', command.project_name)

    def test_project_name_with_explicit_project_name(self):
        command = TopLevelCommand()
        command.explicit_project_name = 'explicit-project-name'
        self.assertEquals('explicitprojectname', command.project_name)

    def test_yaml_filename_check(self):
        command = TopLevelCommand()
        command.base_dir = 'tests/fixtures/longer-filename-figfile'

@@ -14,3 +39,8 @@ class CLITestCase(unittest.TestCase):
        command = TopLevelCommand()
        with self.assertRaises(SystemExit):
            command.dispatch(['-h'], None)

    def test_setup_logging(self):
        main.setup_logging()
        self.assertEqual(logging.getLogger().level, logging.DEBUG)
        self.assertEqual(logging.getLogger('requests').propagate, False)

@@ -6,7 +6,7 @@ class ContainerTest(unittest.TestCase):
    def test_from_ps(self):
        container = Container.from_ps(None, {
            "Id":"abc",
            "Image":"ubuntu:12.04",
            "Image":"busybox:latest",
            "Command":"sleep 300",
            "Created":1387384730,
            "Status":"Up 8 seconds",

@@ -16,14 +16,14 @@ class ContainerTest(unittest.TestCase):
            "Names":["/figtest_db_1"]
        }, has_been_inspected=True)
        self.assertEqual(container.dictionary, {
            "ID": "abc",
            "Image":"ubuntu:12.04",
            "Id": "abc",
            "Image":"busybox:latest",
            "Name": "/figtest_db_1",
        })

    def test_environment(self):
        container = Container(None, {
            'ID': 'abc',
            'Id': 'abc',
            'Config': {
                'Env': [
                    'FOO=BAR',

@@ -39,7 +39,7 @@ class ContainerTest(unittest.TestCase):
    def test_number(self):
        container = Container.from_ps(None, {
            "Id":"abc",
            "Image":"ubuntu:12.04",
            "Image":"busybox:latest",
            "Command":"sleep 300",
            "Created":1387384730,
            "Status":"Up 8 seconds",

@@ -53,7 +53,7 @@ class ContainerTest(unittest.TestCase):
    def test_name(self):
        container = Container.from_ps(None, {
            "Id":"abc",
            "Image":"ubuntu:12.04",
            "Image":"busybox:latest",
            "Command":"sleep 300",
            "Names":["/figtest_db_1"]
        }, has_been_inspected=True)

@@ -62,7 +62,7 @@ class ContainerTest(unittest.TestCase):
    def test_name_without_project(self):
        container = Container.from_ps(None, {
            "Id":"abc",
            "Image":"ubuntu:12.04",
            "Image":"busybox:latest",
            "Command":"sleep 300",
            "Names":["/figtest_db_1"]
        }, has_been_inspected=True)

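
For orientation, the mapping that test_from_ps asserts can be summarised as a small standalone helper. This is only an illustration of the expected result, not the actual fig.container.Container.from_ps code, and the helper name is made up:

def inspect_dict_from_ps_entry(ps_entry):
    # reduce a `docker ps` style entry to the minimal inspect-style dictionary
    # the assertion above expects: Id, Image, and Name taken from Names[0]
    return {
        'Id': ps_entry['Id'],
        'Image': ps_entry['Image'],
        'Name': ps_entry['Names'][0],
    }
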
57  tests/unit/log_printer_test.py  Normal file
@@ -0,0 +1,57 @@
from __future__ import unicode_literals
from __future__ import absolute_import
import os

from fig.cli.log_printer import LogPrinter
from .. import unittest


class LogPrinterTest(unittest.TestCase):
    def test_single_container(self):
        def reader(*args, **kwargs):
            yield "hello\nworld"

        container = MockContainer(reader)
        output = run_log_printer([container])

        self.assertIn('hello', output)
        self.assertIn('world', output)

    def test_unicode(self):
        glyph = u'\u2022'.encode('utf-8')

        def reader(*args, **kwargs):
            yield glyph + b'\n'

        container = MockContainer(reader)
        output = run_log_printer([container])

        self.assertIn(glyph, output)


def run_log_printer(containers):
    r, w = os.pipe()
    reader, writer = os.fdopen(r, 'r'), os.fdopen(w, 'w')
    printer = LogPrinter(containers, output=writer)
    printer.run()
    writer.close()
    return reader.read()


class MockContainer(object):
    def __init__(self, reader):
        self._reader = reader

    @property
    def name(self):
        return 'myapp_web_1'

    @property
    def name_without_project(self):
        return 'web_1'

    def attach(self, *args, **kwargs):
        return self._reader()

    def wait(self, *args, **kwargs):
        return 0

@@ -8,54 +8,61 @@ class ProjectTest(unittest.TestCase):
        project = Project.from_dicts('figtest', [
            {
                'name': 'web',
                'image': 'ubuntu'
                'image': 'busybox:latest'
            },
            {
                'name': 'db',
                'image': 'ubuntu'
            }
                'image': 'busybox:latest'
            },
        ], None)
        self.assertEqual(len(project.services), 2)
        self.assertEqual(project.get_service('web').name, 'web')
        self.assertEqual(project.get_service('web').options['image'], 'ubuntu')
        self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
        self.assertEqual(project.get_service('db').name, 'db')
        self.assertEqual(project.get_service('db').options['image'], 'ubuntu')
        self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')

    def test_from_dict_sorts_in_dependency_order(self):
        project = Project.from_dicts('figtest', [
            {
                'name': 'web',
                'image': 'ubuntu',
                'image': 'busybox:latest',
                'links': ['db'],
            },
            {
                'name': 'db',
                'image': 'ubuntu'
                'image': 'busybox:latest',
                'volumes_from': ['volume']
            },
            {
                'name': 'volume',
                'image': 'busybox:latest',
                'volumes': ['/tmp'],
            }
        ], None)

        self.assertEqual(project.services[0].name, 'db')
        self.assertEqual(project.services[1].name, 'web')
        self.assertEqual(project.services[0].name, 'volume')
        self.assertEqual(project.services[1].name, 'db')
        self.assertEqual(project.services[2].name, 'web')

    def test_from_config(self):
        project = Project.from_config('figtest', {
            'web': {
                'image': 'ubuntu',
                'image': 'busybox:latest',
            },
            'db': {
                'image': 'ubuntu',
                'image': 'busybox:latest',
            },
        }, None)
        self.assertEqual(len(project.services), 2)
        self.assertEqual(project.get_service('web').name, 'web')
        self.assertEqual(project.get_service('web').options['image'], 'ubuntu')
        self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
        self.assertEqual(project.get_service('db').name, 'db')
        self.assertEqual(project.get_service('db').options['image'], 'ubuntu')
        self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')

    def test_from_config_throws_error_when_not_dict(self):
        with self.assertRaises(ConfigurationError):
            project = Project.from_config('figtest', {
                'web': 'ubuntu',
                'web': 'busybox:latest',
            }, None)

    def test_get_service(self):

@@ -63,7 +70,72 @@ class ProjectTest(unittest.TestCase):
            project='figtest',
            name='web',
            client=None,
            image="ubuntu",
            image="busybox:latest",
        )
        project = Project('test', [web], None)
        self.assertEqual(project.get_service('web'), web)

    def test_get_services_returns_all_services_without_args(self):
        web = Service(
            project='figtest',
            name='web',
        )
        console = Service(
            project='figtest',
            name='console',
        )
        project = Project('test', [web, console], None)
        self.assertEqual(project.get_services(), [web, console])

    def test_get_services_returns_listed_services_with_args(self):
        web = Service(
            project='figtest',
            name='web',
        )
        console = Service(
            project='figtest',
            name='console',
        )
        project = Project('test', [web, console], None)
        self.assertEqual(project.get_services(['console']), [console])

    def test_get_services_with_include_links(self):
        db = Service(
            project='figtest',
            name='db',
        )
        web = Service(
            project='figtest',
            name='web',
            links=[(db, 'database')]
        )
        cache = Service(
            project='figtest',
            name='cache'
        )
        console = Service(
            project='figtest',
            name='console',
            links=[(web, 'web')]
        )
        project = Project('test', [web, db, cache, console], None)
        self.assertEqual(
            project.get_services(['console'], include_links=True),
            [db, web, console]
        )

    def test_get_services_removes_duplicates_following_links(self):
        db = Service(
            project='figtest',
            name='db',
        )
        web = Service(
            project='figtest',
            name='web',
            links=[(db, 'database')]
        )
        project = Project('test', [web, db], None)
        self.assertEqual(
            project.get_services(['web', 'db'], include_links=True),
            [db, web]
        )

@@ -2,7 +2,7 @@ from __future__ import unicode_literals
from __future__ import absolute_import
from .. import unittest
from fig import Service
from fig.service import ConfigError
from fig.service import ConfigError, split_port

class ServiceTest(unittest.TestCase):
    def test_name_validations(self):

@@ -27,3 +27,58 @@ class ServiceTest(unittest.TestCase):
    def test_config_validation(self):
        self.assertRaises(ConfigError, lambda: Service(name='foo', port=['8000']))
        Service(name='foo', ports=['8000'])

    def test_split_port(self):
        internal_port, external_port = split_port("127.0.0.1:1000:2000")
        self.assertEqual(internal_port, "2000")
        self.assertEqual(external_port, ("127.0.0.1", "1000"))

        internal_port, external_port = split_port("127.0.0.1:1000:2000/udp")
        self.assertEqual(internal_port, "2000/udp")
        self.assertEqual(external_port, ("127.0.0.1", "1000"))

        internal_port, external_port = split_port("127.0.0.1::2000")
        self.assertEqual(internal_port, "2000")
        self.assertEqual(external_port, ("127.0.0.1",))

        internal_port, external_port = split_port("1000:2000")
        self.assertEqual(internal_port, "2000")
        self.assertEqual(external_port, "1000")

    def test_split_domainname_none(self):
        service = Service('foo',
            hostname='name',
        )
        service.next_container_name = lambda x: 'foo'
        opts = service._get_container_create_options({})
        self.assertEqual(opts['hostname'], 'name', 'hostname')
        self.assertFalse('domainname' in opts, 'domainname')

    def test_split_domainname_fqdn(self):
        service = Service('foo',
            hostname='name.domain.tld',
        )
        service.next_container_name = lambda x: 'foo'
        opts = service._get_container_create_options({})
        self.assertEqual(opts['hostname'], 'name', 'hostname')
        self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')

    def test_split_domainname_both(self):
        service = Service('foo',
            hostname='name',
            domainname='domain.tld',
        )
        service.next_container_name = lambda x: 'foo'
        opts = service._get_container_create_options({})
        self.assertEqual(opts['hostname'], 'name', 'hostname')
        self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')

    def test_split_domainname_weird(self):
        service = Service('foo',
            hostname='name.sub',
            domainname='domain.tld',
        )
        service.next_container_name = lambda x: 'foo'
        opts = service._get_container_create_options({})
        self.assertEqual(opts['hostname'], 'name.sub', 'hostname')
        self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')

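
The four test_split_port cases above fully determine the shape of split_port's return value for the specs they cover. A minimal standalone sketch that satisfies them (an illustration, not the real fig.service.split_port, and the single-element "8000" case is an assumption) looks like this:

def split_port_sketch(port_spec):
    parts = str(port_spec).split(':')
    if len(parts) == 1:                   # '8000' -> no external binding (assumed)
        return parts[0], None
    if len(parts) == 2:                   # '1000:2000'
        external, internal = parts
        return internal, external
    host, external, internal = parts      # '127.0.0.1:1000:2000[/udp]'
    if external:
        return internal, (host, external)
    return internal, (host,)              # '127.0.0.1::2000'
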
@@ -6,32 +6,47 @@ from .. import unittest
class SplitBufferTest(unittest.TestCase):
    def test_single_line_chunks(self):
        def reader():
            yield "abc\n"
            yield "def\n"
            yield "ghi\n"
            yield b'abc\n'
            yield b'def\n'
            yield b'ghi\n'

        self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "def\n", "ghi\n"])
        self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi\n'])

    def test_no_end_separator(self):
        def reader():
            yield "abc\n"
            yield "def\n"
            yield "ghi"
            yield b'abc\n'
            yield b'def\n'
            yield b'ghi'

        self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "def\n", "ghi"])
        self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi'])

    def test_multiple_line_chunk(self):
        def reader():
            yield "abc\ndef\nghi"
            yield b'abc\ndef\nghi'

        self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "def\n", "ghi"])
        self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi'])

    def test_chunked_line(self):
        def reader():
            yield "a"
            yield "b"
            yield "c"
            yield "\n"
            yield "d"
            yield b'a'
            yield b'b'
            yield b'c'
            yield b'\n'
            yield b'd'

        self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "d"])
        self.assert_produces(reader, [b'abc\n', b'd'])

    def test_preserves_unicode_sequences_within_lines(self):
        string = u"a\u2022c\n".encode('utf-8')

        def reader():
            yield string

        self.assert_produces(reader, [string])

    def assert_produces(self, reader, expectations):
        split = split_buffer(reader(), b'\n')

        for (actual, expected) in zip(split, expectations):
            self.assertEqual(type(actual), type(expected))
            self.assertEqual(actual, expected)

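
assert_produces drives split_buffer with byte chunks and checks both the value and the type of what comes back. A self-contained generator that satisfies all five tests, offered only as a sketch under those assumptions rather than fig's actual implementation, is:

def split_buffer_sketch(reader, separator=b'\n'):
    buffered = type(separator)()  # b'' when splitting byte streams
    for chunk in reader:
        buffered += chunk
        while True:
            index = buffered.find(separator)
            if index == -1:
                break
            # emit complete lines, separator included, preserving the input type
            yield buffered[:index + len(separator)]
            buffered = buffered[index + len(separator):]
    if len(buffered) > 0:
        yield buffered  # trailing data without a final separator
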
13  tox.ini
@@ -2,7 +2,14 @@
envlist = py26,py27,py32,py33,pypy

[testenv]
deps =
    -rrequirements.txt
    -rrequirements-dev.txt
commands =
    pip install -e {toxinidir}
    pip install -e {toxinidir}[test]
    python setup.py test
    nosetests {posargs}
    flake8 fig

[flake8]
# ignore line-length for now
ignore = E501,E203
exclude = fig/packages/