Mirror of https://github.com/docker/compose.git (synced 2026-02-10 02:29:25 +08:00)
Compare commits
100 Commits
SHA1: e117a7822d, 5489465905, 4afcdbdb3c, 94d82d4acb, d528f9f642, 99d7a474af, d1052ff666, 44a91e6ba8, 3996947024, b7afaba56a, 2ce3685e32, 699bbe9ca2, 4b890bffde, 789e1ba82b, 1a9614c35e, d83bdd5164, e1a3fc2536, 251aa7efb6, 2924b9997a, 2a9aef1332, 361294d20b, 9a825c5c35, 944e15fa65, d04b1724ec, e5916b2fae, 4f7cbc3812, 3c48884dbb, 7ec63afae9, 8c6b516aa0, 50c588176c, 3770aac1af, 256dccc554, d0f65906ed, 95aa61cfe5, 247691ca44, 0fc9cc65d1, eb69225444, cafe68a92d, 723cccdae8, 6b8044e92c, 1e7e8202af, c0fdf7bd39, 034b66fedb, eed274c632, 5b10c4811f, 2bd6e3d0a5, d0b5bcf26a, 262248d8a6, 9eb3697b40, c246897af1, cfcabce593, e517061010, feb8ad7b4c, 1b5bf6e12a, e953a32a82, f1390b3cb6, 6e485df084, 3a342fb25d, e71e82f8ac, da80eca28c, 1d1e23611b, 74e067c6e6, 85b9619799, ab1fbc96c3, a04143e2a7, 6c4299039a, 655d347ea2, 94a3164248, 18728a64b9, d8b0fa294e, a6c8319b5d, 5d92f12f8e, c0231bdb70, ac541e208f, 3d8ce448b8, 949df97726, 14cbe40543, 9dd53ecdaa, 6bfe5e049d, b672861ffd, b081077f2b, 13a296049b, 22c531dea7, dfc74e2a77, 0c12db06ec, edf6b56016, 8b4ed0c1a8, 1b5335f409, 3a2c9c1016, cf3eed2cda, 2ecd366905, d34dc45b78, 8394e84099, adda3a7f79, 52d0f4d9e7, c1a38d787d, 7879dfd3fd, cd1c8b2f09, 7a9228ad75, 98ceb62202
.gitignore — 1 changed line

```diff
@@ -3,4 +3,5 @@
 /build
 /dist
 /docs/_site
 /venv
 fig.spec
```
CHANGES.md — 39 changed lines

````diff
@@ -1,10 +1,47 @@
 Change log
 ==========
 
+0.5.0 (2014-07-11)
+------------------
+
+- Fig now starts links when you run `fig run` or `fig up`.
+
+  For example, if you have a `web` service which depends on a `db` service, `fig run web ...` will start the `db` service.
+
+- Environment variables can now be resolved from the environment that Fig is running in. Just specify it as a blank variable in your `fig.yml` and, if set, it'll be resolved:
+
+  ```
+  environment:
+    RACK_ENV: development
+    SESSION_SECRET:
+  ```
+
+- `volumes_from` is now supported in `fig.yml`. All of the volumes from the specified services and containers will be mounted:
+
+  ```
+  volumes_from:
+   - service_name
+   - container_name
+  ```
+
+- The `net` and `workdir` options are now supported in `fig.yml`.
+- The `hostname` option now works in the same way as the Docker CLI, splitting out into a `domainname` option.
+- TTY behaviour is far more robust, and resizes are supported correctly.
+- Load YAML files safely.
+
+Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca and @mozz100 for their help with this release!
+
+
+0.4.2 (2014-06-18)
+------------------
+
+- Fix various encoding errors when using `fig run`, `fig up` and `fig build`.
+
+
 0.4.1 (2014-05-08)
 ------------------
 
-- Add support for Docker 0.11.0.
+- Add support for Docker 0.11.0. (Thanks @marksteve!)
 - Make project name configurable. (Thanks @jefmathiot!)
 - Return correct exit code from `fig run`.
 
 0.4.0 (2014-04-29)
 ------------------
````
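The blank-variable behaviour described in the 0.5.0 notes above can be illustrated with a minimal sketch (illustrative only, not fig's actual implementation): a key listed with no value in `fig.yml` is filled in from the environment that fig itself runs in, if set.

```python
import os

def resolve_environment(env_from_yaml):
    # Keys with a value pass through unchanged; keys left blank (parsed as
    # None) are looked up in fig's own environment and dropped if unset.
    resolved = {}
    for key, value in env_from_yaml.items():
        if value is not None:
            resolved[key] = value
        elif key in os.environ:
            resolved[key] = os.environ[key]
    return resolved

# With SESSION_SECRET exported in the calling shell, both keys end up set;
# without it, only RACK_ENV does.
print(resolve_environment({'RACK_ENV': 'development', 'SESSION_SECRET': None}))
```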
CONTRIBUTING.md

```diff
@@ -1,5 +1,7 @@
 # Contributing to Fig
 
+## Development environment
+
 If you're looking contribute to [Fig](http://orchardup.github.io/fig/)
 but you're new to the project or maybe even to Python, here are the steps
 that should get you started.
@@ -8,7 +10,7 @@ that should get you started.
 1. Clone your forked repository locally `git clone git@github.com:kvz/fig.git`.
 1. Enter the local directory `cd fig`.
 1. Set up a development environment `python setup.py develop`. That will install the dependencies and set up a symlink from your `fig` executable to the checkout of the repo. So from any of your fig projects, `fig` now refers to your development project. Time to start hacking : )
-1. Works for you? Run the test suite via `./scripts/test` to verify it won't break other usecases.
+1. Works for you? Run the test suite via `./script/test` to verify it won't break other usecases.
 1. All good? Commit and push to GitHub, and submit a pull request.
 
 ## Running the test suite
@@ -27,4 +29,47 @@ OS X:
 
 Note that this only works on Mountain Lion, not Mavericks, due to a [bug in PyInstaller](http://www.pyinstaller.org/ticket/807).
+
+## Sign your work
+
+The sign-off is a simple line at the end of the explanation for the
+patch, which certifies that you wrote it or otherwise have the right to
+pass it on as an open-source patch. The rules are pretty simple: if you
+can certify the below (from [developercertificate.org](http://developercertificate.org/)):
+
+    Developer's Certificate of Origin 1.1
+
+    By making a contribution to this project, I certify that:
+
+    (a) The contribution was created in whole or in part by me and I
+        have the right to submit it under the open source license
+        indicated in the file; or
+
+    (b) The contribution is based upon previous work that, to the best
+        of my knowledge, is covered under an appropriate open source
+        license and I have the right under that license to submit that
+        work with modifications, whether created in whole or in part
+        by me, under the same open source license (unless I am
+        permitted to submit under a different license), as indicated
+        in the file; or
+
+    (c) The contribution was provided directly to me by some other
+        person who certified (a), (b) or (c) and I have not modified
+        it.
+
+    (d) I understand and agree that this project and the contribution
+        are public and that a record of the contribution (including all
+        personal information I submit with it, including my sign-off) is
+        maintained indefinitely and may be redistributed consistent with
+        this project or the open source license(s) involved.
+
+then you just add a line saying
+
+    Signed-off-by: Random J Developer <random@developer.example.org>
+
+using your real name (sorry, no pseudonyms or anonymous contributions.)
+
+The easiest way to do this is to use the `--signoff` flag when committing. E.g.:
+
+    $ git commit --signoff
```
LICENSE — 215 changed lines

```diff
@@ -1,24 +1,191 @@
-Copyright (c) 2013, Orchard Laboratories Ltd.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this
-  list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above copyright notice,
-  this list of conditions and the following disclaimer in the documentation
-  and/or other materials provided with the distribution.
-* The names of its contributors may not be used to endorse or promote products
-  derived from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
-ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
-ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+                              Apache License
+                        Version 2.0, January 2004
+                     http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+   "License" shall mean the terms and conditions for use, reproduction,
+   and distribution as defined by Sections 1 through 9 of this document.
+
+   "Licensor" shall mean the copyright owner or entity authorized by
+   the copyright owner that is granting the License.
+
+   "Legal Entity" shall mean the union of the acting entity and all
+   other entities that control, are controlled by, or are under common
+   control with that entity. For the purposes of this definition,
+   "control" means (i) the power, direct or indirect, to cause the
+   direction or management of such entity, whether by contract or
+   otherwise, or (ii) ownership of fifty percent (50%) or more of the
+   outstanding shares, or (iii) beneficial ownership of such entity.
+
+   "You" (or "Your") shall mean an individual or Legal Entity
+   exercising permissions granted by this License.
+
+   "Source" form shall mean the preferred form for making modifications,
+   including but not limited to software source code, documentation
+   source, and configuration files.
+
+   "Object" form shall mean any form resulting from mechanical
+   transformation or translation of a Source form, including but
+   not limited to compiled object code, generated documentation,
+   and conversions to other media types.
+
+   "Work" shall mean the work of authorship, whether in Source or
+   Object form, made available under the License, as indicated by a
+   copyright notice that is included in or attached to the work
+   (an example is provided in the Appendix below).
+
+   "Derivative Works" shall mean any work, whether in Source or Object
+   form, that is based on (or derived from) the Work and for which the
+   editorial revisions, annotations, elaborations, or other modifications
+   represent, as a whole, an original work of authorship. For the purposes
+   of this License, Derivative Works shall not include works that remain
+   separable from, or merely link (or bind by name) to the interfaces of,
+   the Work and Derivative Works thereof.
+
+   "Contribution" shall mean any work of authorship, including
+   the original version of the Work and any modifications or additions
+   to that Work or Derivative Works thereof, that is intentionally
+   submitted to Licensor for inclusion in the Work by the copyright owner
+   or by an individual or Legal Entity authorized to submit on behalf of
+   the copyright owner. For the purposes of this definition, "submitted"
+   means any form of electronic, verbal, or written communication sent
+   to the Licensor or its representatives, including but not limited to
+   communication on electronic mailing lists, source code control systems,
+   and issue tracking systems that are managed by, or on behalf of, the
+   Licensor for the purpose of discussing and improving the Work, but
+   excluding communication that is conspicuously marked or otherwise
+   designated in writing by the copyright owner as "Not a Contribution."
+
+   "Contributor" shall mean Licensor and any individual or Legal Entity
+   on behalf of whom a Contribution has been received by Licensor and
+   subsequently incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of
+   this License, each Contributor hereby grants to You a perpetual,
+   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+   copyright license to reproduce, prepare Derivative Works of,
+   publicly display, publicly perform, sublicense, and distribute the
+   Work and such Derivative Works in Source or Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of
+   this License, each Contributor hereby grants to You a perpetual,
+   worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+   (except as stated in this section) patent license to make, have made,
+   use, offer to sell, sell, import, and otherwise transfer the Work,
+   where such license applies only to those patent claims licensable
+   by such Contributor that are necessarily infringed by their
+   Contribution(s) alone or by combination of their Contribution(s)
+   with the Work to which such Contribution(s) was submitted. If You
+   institute patent litigation against any entity (including a
+   cross-claim or counterclaim in a lawsuit) alleging that the Work
+   or a Contribution incorporated within the Work constitutes direct
+   or contributory patent infringement, then any patent licenses
+   granted to You under this License for that Work shall terminate
+   as of the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the
+   Work or Derivative Works thereof in any medium, with or without
+   modifications, and in Source or Object form, provided that You
+   meet the following conditions:
+
+   (a) You must give any other recipients of the Work or
+       Derivative Works a copy of this License; and
+
+   (b) You must cause any modified files to carry prominent notices
+       stating that You changed the files; and
+
+   (c) You must retain, in the Source form of any Derivative Works
+       that You distribute, all copyright, patent, trademark, and
+       attribution notices from the Source form of the Work,
+       excluding those notices that do not pertain to any part of
+       the Derivative Works; and
+
+   (d) If the Work includes a "NOTICE" text file as part of its
+       distribution, then any Derivative Works that You distribute must
+       include a readable copy of the attribution notices contained
+       within such NOTICE file, excluding those notices that do not
+       pertain to any part of the Derivative Works, in at least one
+       of the following places: within a NOTICE text file distributed
+       as part of the Derivative Works; within the Source form or
+       documentation, if provided along with the Derivative Works; or,
+       within a display generated by the Derivative Works, if and
+       wherever such third-party notices normally appear. The contents
+       of the NOTICE file are for informational purposes only and
+       do not modify the License. You may add Your own attribution
+       notices within Derivative Works that You distribute, alongside
+       or as an addendum to the NOTICE text from the Work, provided
+       that such additional attribution notices cannot be construed
+       as modifying the License.
+
+   You may add Your own copyright statement to Your modifications and
+   may provide additional or different license terms and conditions
+   for use, reproduction, or distribution of Your modifications, or
+   for any such Derivative Works as a whole, provided Your use,
+   reproduction, and distribution of the Work otherwise complies with
+   the conditions stated in this License.
+
+5. Submission of Contributions. Unless You explicitly state otherwise,
+   any Contribution intentionally submitted for inclusion in the Work
+   by You to the Licensor shall be under the terms and conditions of
+   this License, without any additional terms or conditions.
+   Notwithstanding the above, nothing herein shall supersede or modify
+   the terms of any separate license agreement you may have executed
+   with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade
+   names, trademarks, service marks, or product names of the Licensor,
+   except as required for reasonable and customary use in describing the
+   origin of the Work and reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or
+   agreed to in writing, Licensor provides the Work (and each
+   Contributor provides its Contributions) on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+   implied, including, without limitation, any warranties or conditions
+   of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+   PARTICULAR PURPOSE. You are solely responsible for determining the
+   appropriateness of using or redistributing the Work and assume any
+   risks associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory,
+   whether in tort (including negligence), contract, or otherwise,
+   unless required by applicable law (such as deliberate and grossly
+   negligent acts) or agreed to in writing, shall any Contributor be
+   liable to You for damages, including any direct, indirect, special,
+   incidental, or consequential damages of any character arising as a
+   result of this License or out of the use or inability to use the
+   Work (including but not limited to damages for loss of goodwill,
+   work stoppage, computer failure or malfunction, or any and all
+   other commercial damages or losses), even if such Contributor
+   has been advised of the possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing
+   the Work or Derivative Works thereof, You may choose to offer,
+   and charge a fee for, acceptance of support, warranty, indemnity,
+   or other liability obligations and/or rights consistent with this
+   License. However, in accepting such obligations, You may act only
+   on Your own behalf and on Your sole responsibility, not on behalf
+   of any other Contributor, and only if You agree to indemnify,
+   defend, and hold each Contributor harmless for any liability
+   incurred by, or claims asserted against, such Contributor by reason
+   of your accepting any such warranty or additional liability.
+
+END OF TERMS AND CONDITIONS
+
+Copyright 2014 Orchard Laboratories Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
```
```diff
@@ -1,4 +1,4 @@
-FROM stackbrew/ubuntu:13.10
+FROM ubuntu:13.10
 RUN apt-get -qq update && apt-get install -y ruby1.8 bundler python
 RUN locale-gen en_US.UTF-8
 ADD Gemfile /code/
```
```diff
@@ -44,10 +44,12 @@
   </ul>
   <ul class="nav">
     <li><a href="https://github.com/orchardup/fig">Fig on GitHub</a></li>
     <li><a href="https://twitter.com/orchardup">Follow us on Twitter</a></li>
     <li><a href="http://webchat.freenode.net/?channels=%23orchardup&uio=d4">#orchardup on Freenode</a></li>
   </ul>
 
   <p>Fig is a project from <a href="https://www.orchardup.com">Orchard</a>, a Docker hosting service.</p>
   <p><a href="https://twitter.com/orchardup">Follow us on Twitter</a> to keep up to date with Fig and other Docker news.</p>
 
   <div class="badges">
     <iframe src="http://ghbtns.com/github-btn.html?user=orchardup&repo=fig&type=watch&count=true" allowtransparency="true" frameborder="0" scrolling="0" width="100" height="20"></iframe>
     <a href="https://twitter.com/share" class="twitter-share-button" data-url="http://orchardup.github.io/fig/">Tweet</a>
```
docs/cli.md — 10 changed lines

```diff
@@ -45,7 +45,7 @@ For example:
 
     $ fig run web python manage.py shell
 
-Note that this will not start any services that the command's service links to. So if, for example, your one-off command talks to your database, you will need to run `fig up -d db` first.
+By default, linked services will be started, unless they are already running.
 
 One-off commands are started in new containers with the same config as a normal container for that service, so volumes, links, etc will all be created as expected. The only thing different to a normal container is the command will be overridden with the one specified and no ports will be created in case they collide.
 
@@ -53,6 +53,10 @@ Links are also created between one-off commands and the other containers for tha
 
     $ fig run db /bin/sh -c "psql -h \$DB_1_PORT_5432_TCP_ADDR -U docker"
 
+If you do not want linked containers to be started when running the one-off command, specify the `--no-deps` flag:
+
+    $ fig run --no-deps web python manage.py shell
+
 ## scale
 
 Set number of containers to run for a service.
 
@@ -74,8 +78,10 @@ Stop running containers without removing them. They can be started again with `f
 
 Build, (re)create, start and attach to containers for a service.
 
+Linked services will be started, unless they are already running.
+
 By default, `fig up` will aggregate the output of each container, and when it exits, all containers will be stopped. If you run `fig up -d`, it'll start the containers in the background and leave them running.
 
-If there are existing containers for a service, `fig up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `fig.yml` are picked up.
+By default if there are existing containers for a service, `fig up` will stop and recreate them (preserving mounted volumes with [volumes-from]), so that changes in `fig.yml` are picked up. If you do no want containers to be stopped and recreated, use `fig up --no-recreate`. This will still start any stopped containers, if needed.
 
 [volumes-from]: http://docs.docker.io/en/latest/use/working_with_volumes/
```
```diff
@@ -58,7 +58,7 @@ img {
 
 .logo {
   font-family: 'Lilita One', sans-serif;
-  font-size: 80px;
+  font-size: 64px;
   margin: 20px 0 40px 0;
 }
 
@@ -68,8 +68,8 @@ img {
 }
 
 .logo img {
-  width: 80px;
-  vertical-align: -17px;
+  width: 60px;
+  vertical-align: -8px;
 }
 
 .mobile-logo {
@@ -77,13 +77,18 @@ img {
 }
 
 .sidebar {
-  font-size: 16px;
+  font-size: 15px;
   color: #777;
 }
 
 .sidebar a {
   color: #a41211;
 }
 
 .sidebar p {
   margin: 10px 0;
 }
 
 @media (max-width: 767px) {
   .sidebar {
     text-align: center;
@@ -101,7 +106,8 @@ img {
 }
 
 .logo {
-  margin-top: 40px;
+  margin-top: 30px;
+  margin-bottom: 30px;
 }
 
 .content h1 {
@@ -116,6 +122,7 @@ img {
   width: 280px;
   overflow-y: auto;
   padding-left: 40px;
+  padding-right: 10px;
   border-right: 1px solid #ccc;
 }
 
@@ -126,12 +133,12 @@ img {
 }
 
 .nav {
-  margin: 20px 0;
+  margin: 15px 0;
 }
 
 .nav li a {
   display: block;
-  padding: 8px 0;
+  padding: 5px 0;
   line-height: 1.2;
   text-decoration: none;
 }
```
```diff
@@ -11,6 +11,7 @@ Let's use Fig to set up and run a Django/PostgreSQL app. Before starting, you'll
 Let's set up the three files that'll get us started. First, our app is going to be running inside a Docker container which contains all of its dependencies. We can define what goes inside that Docker container using a file called `Dockerfile`. It'll contain this to start with:
 
     FROM orchardup/python:2.7
+    ENV PYTHONUNBUFFERED 1
     RUN apt-get update -qq && apt-get install -y python-psycopg2
     RUN mkdir /code
     WORKDIR /code
@@ -18,7 +19,7 @@ Let's set up the three files that'll get us started. First, our app is going to
     RUN pip install -r requirements.txt
     ADD . /code/
 
-That'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
+That'll install our application inside an image with Python installed alongside all of our Python dependencies. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
 
 Second, we define our Python dependencies in a file called `requirements.txt`:
```
```diff
@@ -39,8 +39,6 @@ There are commands to:
 - tail running services' log output
 - run a one-off command on a service
 
-Fig is a project from [Orchard](https://orchardup.com), a Docker hosting service. [Follow us on Twitter](https://twitter.com/orchardup) to keep up to date with Fig and other Docker news.
-
 Quick start
 -----------
@@ -87,7 +85,7 @@ Next, we want to create a Docker image containing all of our app's dependencies.
     WORKDIR /code
     RUN pip install -r requirements.txt
 
-This tells Docker to install Python, our code and our Python dependencies inside a Docker image. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
+This tells Docker to install Python, our code and our Python dependencies inside a Docker image. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
 
 We then define a set of services using `fig.yml`:
@@ -115,8 +113,8 @@ Now if we run `fig up`, it'll pull a Redis image, build an image for our own cod
     Building web...
     Starting figtest_redis_1...
     Starting figtest_web_1...
-    figtest_redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
-    figtest_web_1 | * Running on http://0.0.0.0:5000/
+    redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
+    web_1 | * Running on http://0.0.0.0:5000/
 
 Open up [http://localhost:5000](http://localhost:5000) in your browser (or [http://localdocker:5000](http://localdocker:5000) if you're using [docker-osx](https://github.com/noplay/docker-osx)) and you should see it running!
```
```diff
@@ -6,9 +6,9 @@ title: Installing Fig
 Installing Fig
 ==============
 
-First, install Docker version 0.11.0. If you're on OS X, you can use [docker-osx](https://github.com/noplay/docker-osx):
+First, install Docker version 1.0.0. If you're on OS X, you can use [docker-osx](https://github.com/noplay/docker-osx):
 
-    $ curl https://raw.githubusercontent.com/noplay/docker-osx/0.11.0/docker-osx > /usr/local/bin/docker-osx
+    $ curl https://raw.githubusercontent.com/noplay/docker-osx/1.0.0/docker-osx > /usr/local/bin/docker-osx
     $ chmod +x /usr/local/bin/docker-osx
     $ docker-osx shell
 
@@ -16,12 +16,12 @@ Docker has guides for [Ubuntu](http://docs.docker.io/en/latest/installation/ubun
 
 Next, install Fig. On OS X:
 
-    $ curl -L https://github.com/orchardup/fig/releases/download/0.4.0/darwin > /usr/local/bin/fig
+    $ curl -L https://github.com/orchardup/fig/releases/download/0.4.2/darwin > /usr/local/bin/fig
     $ chmod +x /usr/local/bin/fig
 
 On 64-bit Linux:
 
-    $ curl -L https://github.com/orchardup/fig/releases/download/0.4.0/linux > /usr/local/bin/fig
+    $ curl -L https://github.com/orchardup/fig/releases/download/0.4.2/linux > /usr/local/bin/fig
     $ chmod +x /usr/local/bin/fig
 
 Fig is also available as a Python package if you're on another platform (or if you prefer that sort of thing):
```
```diff
@@ -18,7 +18,7 @@ Let's set up the three files that'll get us started. First, our app is going to
     RUN bundle install
     ADD . /myapp
 
-That'll put our application code inside an image with Ruby, Bundler and all our dependencies. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
+That'll put our application code inside an image with Ruby, Bundler and all our dependencies. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
 
 Next, we have a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.
```
````diff
@@ -17,7 +17,7 @@ FROM orchardup/php5
 ADD . /code
 ```
 
-This instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the [Dockerfile tutorial](https://www.docker.io/learn/dockerfile/) and the [Dockerfile reference](http://docs.docker.io/en/latest/reference/builder/).
+This instructs Docker on how to build an image that contains PHP and Wordpress. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/).
 
 Next up, `fig.yml` starts our web service and a separate MySQL instance:
````
docs/yml.md — 10 changed lines

````diff
@@ -54,8 +54,18 @@ expose:
 volumes:
  - cache/:/tmp/cache
 
+-- Mount all of the volumes from another service or container
+volumes_from:
+ - service_name
+ - container_name
+
 -- Add environment variables.
+-- Environment variables with only a key are resolved to values on the host
+-- machine, which can be helpful for secret or host-specific values.
 environment:
   RACK_ENV: development
   SESSION_SECRET:
 ```
+
+-- Networking mode. Use the same values as the docker client --net parameter
+net: "host"
````
fig/__init__.py

```diff
@@ -1,4 +1,4 @@
 from __future__ import unicode_literals
 from .service import Service
 
-__version__ = '0.4.1'
+__version__ = '0.5.0'
```
```diff
@@ -59,7 +59,7 @@ class Command(DocoptCommand):
             yaml_path = self.yaml_path
             if yaml_path is None:
                 yaml_path = self.check_yaml_filename()
-            config = yaml.load(open(yaml_path))
+            config = yaml.safe_load(open(yaml_path))
         except IOError as e:
             if e.errno == errno.ENOENT:
                 raise errors.FigFileNotFound(os.path.basename(e.filename))
```
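The `yaml.load` → `yaml.safe_load` switch above is the "Load YAML files safely" item from the changelog. A standalone sketch of the difference (not fig code): `safe_load` only builds plain Python types, while `yaml.load` with the default loader will construct arbitrary Python objects from tags embedded in a hostile `fig.yml`.

```python
import yaml

document = """
web:
  image: orchardup/flask
  links:
   - db
db:
  image: orchardup/postgresql
"""

# safe_load restricts the parser to standard YAML types (dict, list, str,
# int, ...), so a crafted !!python/object tag is rejected instead of being
# instantiated.
config = yaml.safe_load(document)
print(sorted(config))          # ['db', 'web']
print(config['web']['links'])  # ['db']
```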
```diff
@@ -10,16 +10,17 @@ from .utils import split_buffer
 
 
 class LogPrinter(object):
-    def __init__(self, containers, attach_params=None):
+    def __init__(self, containers, attach_params=None, output=sys.stdout):
         self.containers = containers
         self.attach_params = attach_params or {}
         self.prefix_width = self._calculate_prefix_width(containers)
         self.generators = self._make_log_generators()
+        self.output = output
 
     def run(self):
         mux = Multiplexer(self.generators)
         for line in mux.loop():
-            sys.stdout.write(line.encode(sys.__stdout__.encoding or 'utf-8'))
+            self.output.write(line)
 
     def _calculate_prefix_width(self, containers):
         """
@@ -45,12 +46,12 @@ class LogPrinter(object):
         return generators
 
     def _make_log_generator(self, container, color_fn):
-        prefix = color_fn(self._generate_prefix(container))
+        prefix = color_fn(self._generate_prefix(container)).encode('utf-8')
         # Attach to container before log printer starts running
         line_generator = split_buffer(self._attach(container), '\n')
 
         for line in line_generator:
-            yield prefix + line.decode('utf-8')
+            yield prefix + line
 
         exit_code = container.wait()
         yield color_fn("%s exited with code %s\n" % (container.name, exit_code))
```
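The new `output=sys.stdout` parameter makes the destination stream injectable, so the printer can be pointed at an in-memory buffer under test. A small standalone sketch of the same pattern (hypothetical `Printer`, not fig's `LogPrinter` or its test suite):

```python
import sys
from io import StringIO

class Printer(object):
    # Same idea as LogPrinter above: write to an injected stream instead of
    # hard-coding sys.stdout, so callers and tests can capture the output.
    def __init__(self, output=sys.stdout):
        self.output = output

    def run(self, lines):
        for line in lines:
            self.output.write(line)

buf = StringIO()
Printer(output=buf).run(["web_1 | hello\n", "db_1 | ready\n"])
assert buf.getvalue().startswith("web_1 | hello")
```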
```diff
@@ -6,6 +6,7 @@ import re
 import signal
 
 from inspect import getdoc
+import dockerpty
 
 from .. import __version__
 from ..project import NoSuchService, ConfigurationError
@@ -18,7 +19,6 @@ from .utils import yesno
 from ..packages.docker.errors import APIError
 from .errors import UserError
 from .docopt_command import NoSuchCommand
-from .socketclient import SocketClient
 
 log = logging.getLogger(__name__)
 
@@ -202,21 +202,33 @@ class TopLevelCommand(Command):
 
             $ fig run web python manage.py shell
 
-        Note that this will not start any services that the command's service
-        links to. So if, for example, your one-off command talks to your
-        database, you will need to run `fig up -d db` first.
+        By default, linked services will be started, unless they are already
+        running. If you do not want to start linked services, use
+        `fig run --no-deps SERVICE COMMAND [ARGS...]`.
 
         Usage: run [options] SERVICE COMMAND [ARGS...]
 
         Options:
-            -d        Detached mode: Run container in the background, print new
-                      container name
-            -T        Disable pseudo-tty allocation. By default `fig run`
-                      allocates a TTY.
-            --rm      Remove container after run. Ignored in detached mode.
+            -d            Detached mode: Run container in the background, print
+                          new container name.
+            -T            Disable pseudo-tty allocation. By default `fig run`
+                          allocates a TTY.
+            --rm          Remove container after run. Ignored in detached mode.
+            --no-deps     Don't start linked services.
         """
 
         service = self.project.get_service(options['SERVICE'])
 
+        if not options['--no-deps']:
+            deps = service.get_linked_names()
+
+            if len(deps) > 0:
+                self.project.up(
+                    service_names=deps,
+                    start_links=True,
+                    recreate=False,
+                )
+
         tty = True
         if options['-d'] or options['-T'] or not sys.stdin.isatty():
             tty = False
@@ -231,9 +243,8 @@ class TopLevelCommand(Command):
             service.start_container(container, ports=None, one_off=True)
             print(container.name)
         else:
-            with self._attach_to_container(container.id, raw=tty) as c:
-                service.start_container(container, ports=None, one_off=True)
-                c.run()
+            service.start_container(container, ports=None, one_off=True)
+            dockerpty.start(self.client, container.id)
             exit_code = container.wait()
             if options['--rm']:
                 log.info("Removing %s..." % container.name)
@@ -293,17 +304,31 @@ class TopLevelCommand(Command):
 
         If there are existing containers for a service, `fig up` will stop
         and recreate them (preserving mounted volumes with volumes-from),
-        so that changes in `fig.yml` are picked up.
+        so that changes in `fig.yml` are picked up. If you do not want existing
+        containers to be recreated, `fig up --no-recreate` will re-use existing
+        containers.
 
         Usage: up [options] [SERVICE...]
 
         Options:
-            -d    Detached mode: Run containers in the background, print new
-                  container names
+            -d            Detached mode: Run containers in the background,
+                          print new container names.
+            --no-deps     Don't start linked services.
+            --no-recreate If containers already exist, don't recreate them.
         """
         detached = options['-d']
 
-        to_attach = self.project.up(service_names=options['SERVICE'])
+        start_links = not options['--no-deps']
+        recreate = not options['--no-recreate']
+        service_names = options['SERVICE']
+
+        self.project.up(
+            service_names=service_names,
+            start_links=start_links,
+            recreate=recreate
+        )
+
+        to_attach = [c for s in self.project.get_services(service_names) for c in s.containers()]
 
         if not detached:
             print("Attaching to", list_containers(to_attach))
@@ -313,24 +338,12 @@ class TopLevelCommand(Command):
                 log_printer.run()
             finally:
                 def handler(signal, frame):
-                    self.project.kill(service_names=options['SERVICE'])
+                    self.project.kill(service_names=service_names)
                     sys.exit(0)
                 signal.signal(signal.SIGINT, handler)
 
                 print("Gracefully stopping... (press Ctrl+C again to force)")
-                self.project.stop(service_names=options['SERVICE'])
-
-    def _attach_to_container(self, container_id, raw=False):
-        socket_in = self.client.attach_socket(container_id, params={'stdin': 1, 'stream': 1})
-        socket_out = self.client.attach_socket(container_id, params={'stdout': 1, 'logs': 1, 'stream': 1})
-        socket_err = self.client.attach_socket(container_id, params={'stderr': 1, 'logs': 1, 'stream': 1})
-
-        return SocketClient(
-            socket_in=socket_in,
-            socket_out=socket_out,
-            socket_err=socket_err,
-            raw=raw,
-        )
+                self.project.stop(service_names=service_names)
 
 def list_containers(containers):
     return ", ".join(c.name for c in containers)
```
fig/cli/socketclient.py (deleted)

```diff
@@ -1,126 +0,0 @@
-from __future__ import print_function
-# Adapted from https://github.com/benthor/remotty/blob/master/socketclient.py
-
-import sys
-import tty
-import fcntl
-import os
-import termios
-import threading
-import errno
-
-import logging
-log = logging.getLogger(__name__)
-
-
-class SocketClient:
-    def __init__(self,
-                 socket_in=None,
-                 socket_out=None,
-                 socket_err=None,
-                 raw=True,
-                 ):
-        self.socket_in = socket_in
-        self.socket_out = socket_out
-        self.socket_err = socket_err
-        self.raw = raw
-
-        self.stdin_fileno = sys.stdin.fileno()
-
-    def __enter__(self):
-        self.create()
-        return self
-
-    def __exit__(self, type, value, trace):
-        self.destroy()
-
-    def create(self):
-        if os.isatty(sys.stdin.fileno()):
-            self.settings = termios.tcgetattr(sys.stdin.fileno())
-        else:
-            self.settings = None
-
-        if self.socket_in is not None:
-            self.set_blocking(sys.stdin, False)
-
-        self.set_blocking(sys.stdout, True)
-        self.set_blocking(sys.stderr, True)
-
-        if self.raw:
-            tty.setraw(sys.stdin.fileno())
-
-    def set_blocking(self, file, blocking):
-        fd = file.fileno()
-        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
-        flags = (flags & ~os.O_NONBLOCK) if blocking else (flags | os.O_NONBLOCK)
-        fcntl.fcntl(fd, fcntl.F_SETFL, flags)
-
-    def run(self):
-        if self.socket_in is not None:
-            self.start_background_thread(target=self.send, args=(self.socket_in, sys.stdin))
-
-        recv_threads = []
-
-        if self.socket_out is not None:
-            recv_threads.append(self.start_background_thread(target=self.recv, args=(self.socket_out, sys.stdout)))
-
-        if self.socket_err is not None:
-            recv_threads.append(self.start_background_thread(target=self.recv, args=(self.socket_err, sys.stderr)))
-
-        for t in recv_threads:
-            t.join()
-
-    def start_background_thread(self, **kwargs):
-        thread = threading.Thread(**kwargs)
-        thread.daemon = True
-        thread.start()
-        return thread
-
-    def recv(self, socket, stream):
-        try:
-            while True:
-                chunk = socket.recv(4096)
-
-                if chunk:
-                    stream.write(chunk.encode(stream.encoding or 'utf-8'))
-                    stream.flush()
-                else:
-                    break
-        except Exception as e:
-            log.debug(e)
-
-    def send(self, socket, stream):
-        while True:
-            chunk = stream.read(1)
-
-            if chunk == '':
-                socket.close()
-                break
-            else:
-                try:
-                    socket.send(chunk)
-                except Exception as e:
-                    if hasattr(e, 'errno') and e.errno == errno.EPIPE:
-                        break
-                    else:
-                        raise e
-
-    def destroy(self):
-        if self.settings is not None:
-            termios.tcsetattr(self.stdin_fileno, termios.TCSADRAIN, self.settings)
-
-        sys.stdout.flush()
-
-
-if __name__ == '__main__':
-    import websocket
-
-    if len(sys.argv) != 2:
-        sys.stderr.write("Usage: python socketclient.py WEBSOCKET_URL\n")
-        sys.exit(1)
-
-    url = sys.argv[1]
-    socket = websocket.create_connection(url)
-
-    print("connected\r")
-
-    with SocketClient(socket, interactive=True) as client:
-        client.run()
```
```diff
@@ -17,7 +17,7 @@ class Container(object):
         Construct a container object from the output of GET /containers/json.
         """
         new_dictionary = {
-            'ID': dictionary['Id'],
+            'Id': dictionary['Id'],
             'Image': dictionary['Image'],
         }
         for name in dictionary.get('Names', []):
@@ -36,7 +36,7 @@ class Container(object):
 
     @property
     def id(self):
-        return self.dictionary['ID']
+        return self.dictionary['Id']
 
     @property
     def image(self):
```
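The `'ID'` → `'Id'` change keeps `Container.from_ps` using the key exactly as Docker returns it from `GET /containers/json`, which is what the `id` property reads back. A reduced sketch of that normalization (assumed shape; the real class in `fig/container.py` carries the client and many more properties):

```python
class Container(object):
    def __init__(self, dictionary):
        self.dictionary = dictionary

    @classmethod
    def from_ps(cls, dictionary):
        # Keep Docker's own key casing ('Id'), so later lookups agree with
        # responses from other API endpoints.
        return cls({'Id': dictionary['Id'], 'Image': dictionary['Image']})

    @property
    def id(self):
        return self.dictionary['Id']

ps_entry = {'Id': 'abc123def456', 'Image': 'orchardup/redis', 'Names': ['/figtest_redis_1']}
print(Container.from_ps(ps_entry).id)  # abc123def456
```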
```diff
@@ -12,7 +12,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from .version import version
+
+__version__ = version
 __title__ = 'docker-py'
-__version__ = '0.3.0'
 
 from .client import Client  # flake8: noqa
```
```diff
@@ -16,6 +16,7 @@ import json
 import re
 import shlex
 import struct
+import warnings
 
 import requests
 import requests.exceptions
@@ -29,7 +30,7 @@ from . import errors
 if not six.PY3:
     import websocket
 
-DEFAULT_DOCKER_API_VERSION = '1.9'
+DEFAULT_DOCKER_API_VERSION = '1.12'
 DEFAULT_TIMEOUT_SECONDS = 60
 STREAM_HEADER_SIZE_BYTES = 8
 
@@ -95,7 +96,8 @@ class Client(requests.Session):
                           mem_limit=0, ports=None, environment=None, dns=None,
                           volumes=None, volumes_from=None,
                           network_disabled=False, entrypoint=None,
-                          cpu_shares=None, working_dir=None, domainname=None):
+                          cpu_shares=None, working_dir=None, domainname=None,
+                          memswap_limit=0):
         if isinstance(command, six.string_types):
             command = shlex.split(str(command))
         if isinstance(environment, dict):
@@ -121,8 +123,12 @@ class Client(requests.Session):
                 volumes_dict[vol] = {}
             volumes = volumes_dict
 
-        if volumes_from and not isinstance(volumes_from, six.string_types):
-            volumes_from = ','.join(volumes_from)
+        if volumes_from:
+            if not isinstance(volumes_from, six.string_types):
+                volumes_from = ','.join(volumes_from)
+        else:
+            # Force None, an empty list or dict causes client.start to fail
+            volumes_from = None
 
         attach_stdin = False
         attach_stdout = False
@@ -137,6 +143,14 @@ class Client(requests.Session):
             attach_stdin = True
             stdin_once = True
 
+        if utils.compare_version('1.10', self._version) >= 0:
+            message = ('{0!r} parameter has no effect on create_container().'
+                       ' It has been moved to start()')
+            if dns is not None:
+                raise errors.DockerException(message.format('dns'))
+            if volumes_from is not None:
+                raise errors.DockerException(message.format('volumes_from'))
+
         return {
             'Hostname': hostname,
             'Domainname': domainname,
@@ -158,7 +172,8 @@ class Client(requests.Session):
             'NetworkDisabled': network_disabled,
             'Entrypoint': entrypoint,
             'CpuShares': cpu_shares,
-            'WorkingDir': working_dir
+            'WorkingDir': working_dir,
+            'MemorySwap': memswap_limit
         }
 
     def _post_json(self, url, data, **kwargs):
@@ -235,7 +250,7 @@ class Client(requests.Session):
             start = walker + STREAM_HEADER_SIZE_BYTES
             end = start + length
             walker = end
-            yield str(buf[start:end])
+            yield buf[start:end]
 
     def _multiplexed_socket_stream_helper(self, response):
         """A generator of multiplexed data blocks coming from a response
@@ -296,8 +311,10 @@ class Client(requests.Session):
             return stream_result() if stream else \
                 self._result(response, binary=True)
 
+        sep = bytes() if six.PY3 else str()
+
         return stream and self._multiplexed_socket_stream_helper(response) or \
-            ''.join([x for x in self._multiplexed_buffer_helper(response)])
+            sep.join([x for x in self._multiplexed_buffer_helper(response)])
 
     def attach_socket(self, container, params=None, ws=False):
         if params is None:
@@ -318,14 +335,20 @@ class Client(requests.Session):
             u, None, params=self._attach_params(params), stream=True))
 
     def build(self, path=None, tag=None, quiet=False, fileobj=None,
-              nocache=False, rm=False, stream=False, timeout=None):
+              nocache=False, rm=False, stream=False, timeout=None,
+              custom_context=False, encoding=None):
         remote = context = headers = None
         if path is None and fileobj is None:
             raise TypeError("Either path or fileobj needs to be provided.")
 
-        if fileobj is not None:
+        if custom_context:
+            if not fileobj:
+                raise TypeError("You must specify fileobj with custom_context")
+            context = fileobj
+        elif fileobj is not None:
             context = utils.mkbuildcontext(fileobj)
-        elif path.startswith(('http://', 'https://', 'git://', 'github.com/')):
+        elif path.startswith(('http://', 'https://',
+                              'git://', 'github.com/')):
             remote = path
         else:
             context = utils.tar(path)
@@ -341,8 +364,11 @@ class Client(requests.Session):
            'nocache': nocache,
            'rm': rm
        }
+
        if context is not None:
            headers = {'Content-Type': 'application/tar'}
+            if encoding:
+                headers['Content-Encoding'] = encoding
 
        if utils.compare_version('1.9', self._version) >= 0:
            # If we don't have any auth data so far, try reloading the config
@@ -393,10 +419,11 @@ class Client(requests.Session):
                             json=True)
 
     def containers(self, quiet=False, all=False, trunc=True, latest=False,
-                   since=None, before=None, limit=-1):
+                   since=None, before=None, limit=-1, size=False):
         params = {
             'limit': 1 if latest else limit,
             'all': 1 if all else 0,
+            'size': 1 if size else 0,
             'trunc_cmd': 1 if trunc else 0,
             'since': since,
             'before': before
@@ -424,12 +451,13 @@ class Client(requests.Session):
                          mem_limit=0, ports=None, environment=None, dns=None,
                          volumes=None, volumes_from=None,
                          network_disabled=False, name=None, entrypoint=None,
-                         cpu_shares=None, working_dir=None, domainname=None):
+                         cpu_shares=None, working_dir=None, domainname=None,
+                         memswap_limit=0):
 
         config = self._container_config(
             image, command, hostname, user, detach, stdin_open, tty, mem_limit,
             ports, environment, dns, volumes, volumes_from, network_disabled,
-            entrypoint, cpu_shares, working_dir, domainname
+            entrypoint, cpu_shares, working_dir, domainname, memswap_limit
         )
         return self.create_container_from_config(config, name)
 
@@ -458,6 +486,12 @@ class Client(requests.Session):
         self._raise_for_status(res)
         return res.raw
 
+    def get_image(self, image):
+        res = self._get(self._url("/images/{0}/get".format(image)),
+                        stream=True)
+        self._raise_for_status(res)
+        return res.raw
+
     def history(self, image):
         res = self._get(self._url("/images/{0}/history".format(image)))
         self._raise_for_status(res)
@@ -513,6 +547,10 @@ class Client(requests.Session):
                              True)
 
     def insert(self, image, url, path):
+        if utils.compare_version('1.12', self._version) >= 0:
+            raise errors.DeprecatedMethod(
+                'insert is not available for API version >=1.12'
+            )
         api_url = self._url("/images/" + image + "/insert")
         params = {
             'url': url,
@@ -544,6 +582,10 @@ class Client(requests.Session):
 
         self._raise_for_status(res)
 
+    def load_image(self, data):
+        res = self._post(self._url("/images/load"), data=data)
+        self._raise_for_status(res)
+
     def login(self, username, password=None, email=None, registry=None,
               reauth=False):
         # If we don't have any auth data so far, try reloading the config file
@@ -572,7 +614,27 @@ class Client(requests.Session):
             self._auth_configs[registry] = req_data
         return self._result(response, json=True)
 
-    def logs(self, container, stdout=True, stderr=True, stream=False):
+    def logs(self, container, stdout=True, stderr=True, stream=False,
+             timestamps=False):
         if isinstance(container, dict):
             container = container.get('Id')
+        if utils.compare_version('1.11', self._version) >= 0:
+            params = {'stderr': stderr and 1 or 0,
+                      'stdout': stdout and 1 or 0,
+                      'timestamps': timestamps and 1 or 0,
+                      'follow': stream and 1 or 0}
+            url = self._url("/containers/{0}/logs".format(container))
+            res = self._get(url, params=params, stream=stream)
+            if stream:
+                return self._multiplexed_socket_stream_helper(res)
+            elif six.PY3:
+                return bytes().join(
+                    [x for x in self._multiplexed_buffer_helper(res)]
+                )
+            else:
+                return str().join(
+                    [x for x in self._multiplexed_buffer_helper(res)]
+                )
         return self.attach(
             container,
             stdout=stdout,
@@ -581,6 +643,9 @@ class Client(requests.Session):
             logs=True
         )
 
+    def ping(self):
+        return self._result(self._get(self._url('/_ping')))
+
     def port(self, container, private_port):
         if isinstance(container, dict):
             container = container.get('Id')
@@ -597,6 +662,8 @@ class Client(requests.Session):
         return h_ports
 
     def pull(self, repository, tag=None, stream=False):
+        if not tag:
+            repository, tag = utils.parse_repository_tag(repository)
         registry, repo_name = auth.resolve_repository_name(repository)
         if repo_name.count(":") == 1:
             repository, tag = repository.rsplit(":", 1)
@@ -653,16 +720,17 @@ class Client(requests.Session):
         return stream and self._stream_helper(response) \
             or self._result(response)
 
-    def remove_container(self, container, v=False, link=False):
+    def remove_container(self, container, v=False, link=False, force=False):
         if isinstance(container, dict):
             container = container.get('Id')
-        params = {'v': v, 'link': link}
+        params = {'v': v, 'link': link, 'force': force}
         res = self._delete(self._url("/containers/" + container),
                            params=params)
         self._raise_for_status(res)
 
-    def remove_image(self, image):
-        res = self._delete(self._url("/images/" + image))
+    def remove_image(self, image, force=False, noprune=False):
+        params = {'force': force, 'noprune': noprune}
+        res = self._delete(self._url("/images/" + image), params=params)
         self._raise_for_status(res)
 
     def restart(self, container, timeout=10):
@@ -678,8 +746,9 @@ class Client(requests.Session):
                             params={'term': term}),
                             True)
 
-    def start(self, container, binds=None, volumes_from=None, port_bindings=None,
-              lxc_conf=None, publish_all_ports=False, links=None, privileged=False):
+    def start(self, container, binds=None, port_bindings=None, lxc_conf=None,
+              publish_all_ports=False, links=None, privileged=False,
+              dns=None, dns_search=None, volumes_from=None, network_mode=None):
         if isinstance(container, dict):
             container = container.get('Id')
 
@@ -693,19 +762,7 @@ class Client(requests.Session):
             'LxcConf': lxc_conf
         }
         if binds:
-            bind_pairs = [
-                '%s:%s:%s' % (
-                    h, d['bind'],
-                    'ro' if 'ro' in d and d['ro'] else 'rw'
-                ) for h, d in binds.items()
-            ]
-
-            start_config['Binds'] = bind_pairs
-
-        if volumes_from and not isinstance(volumes_from, six.string_types):
-            volumes_from = ','.join(volumes_from)
-
-        start_config['VolumesFrom'] = volumes_from
+            start_config['Binds'] = utils.convert_volume_binds(binds)
 
         if port_bindings:
             start_config['PortBindings'] = utils.convert_port_bindings(
@@ -726,10 +783,44 @@ class Client(requests.Session):
 
         start_config['Privileged'] = privileged
 
+        if utils.compare_version('1.10', self._version) >= 0:
+            if dns is not None:
+                start_config['Dns'] = dns
+            if volumes_from is not None:
+                if isinstance(volumes_from, six.string_types):
+                    volumes_from = volumes_from.split(',')
+                start_config['VolumesFrom'] = volumes_from
+        else:
+            warning_message = ('{0!r} parameter is discarded. It is only'
+                               ' available for API version greater or equal'
+                               ' than 1.10')
+
+            if dns is not None:
+                warnings.warn(warning_message.format('dns'),
+                              DeprecationWarning)
+            if volumes_from is not None:
+                warnings.warn(warning_message.format('volumes_from'),
+                              DeprecationWarning)
+
+        if dns_search:
+            start_config['DnsSearch'] = dns_search
+
+        if network_mode:
+            start_config['NetworkMode'] = network_mode
+
         url = self._url("/containers/{0}/start".format(container))
         res = self._post_json(url, data=start_config)
         self._raise_for_status(res)
 
+    def resize(self, container, height, width):
+        if isinstance(container, dict):
+            container = container.get('Id')
+
+        params = {'h': height, 'w': width}
+        url = self._url("/containers/{0}/resize".format(container))
+        res = self._post(url, params=params)
+        self._raise_for_status(res)
+
     def stop(self, container, timeout=10):
         if isinstance(container, dict):
             container = container.get('Id')
```
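Several hunks above add optional parameters to existing docker-py calls — `containers(size=...)`, `logs(timestamps=...)`, `remove_container(force=...)`, `remove_image(force=..., noprune=...)`. A brief usage sketch against the vendored client (the socket URL and container/image names are assumptions):

```python
from fig.packages.docker import Client

# Assumes a Docker daemon listening on the default local socket.
client = Client(base_url='unix://var/run/docker.sock')

# List all containers, including their on-disk size.
for entry in client.containers(all=True, size=True):
    print(entry['Id'][:12], entry.get('SizeRw'))

container_id = 'abc123'                 # hypothetical container ID
print(client.logs(container_id, timestamps=True))
client.remove_container(container_id, force=True)

client.remove_image('orchardup/oldimage', force=True, noprune=False)
```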
```diff
@@ -59,3 +59,7 @@ class InvalidRepository(DockerException):
 
 class InvalidConfigFile(DockerException):
     pass
+
+
+class DeprecatedMethod(DockerException):
+    pass
```
@@ -1,3 +1,4 @@
from .utils import (
    compare_version, convert_port_bindings, mkbuildcontext, ping, tar, parse_repository_tag
    compare_version, convert_port_bindings, convert_volume_binds,
    mkbuildcontext, ping, tar, parse_repository_tag
) # flake8: noqa

@@ -92,6 +92,13 @@ def _convert_port_binding(binding):
            result['HostIp'] = binding[0]
        else:
            result['HostPort'] = binding[0]
    elif isinstance(binding, dict):
        if 'HostPort' in binding:
            result['HostPort'] = binding['HostPort']
            if 'HostIp' in binding:
                result['HostIp'] = binding['HostIp']
        else:
            raise ValueError(binding)
    else:
        result['HostPort'] = binding

@@ -116,13 +123,25 @@ def convert_port_bindings(port_bindings):
    return result


def convert_volume_binds(binds):
    result = []
    for k, v in binds.items():
        if isinstance(v, dict):
            result.append('%s:%s:%s' % (
                k, v['bind'], 'ro' if v.get('ro', False) else 'rw'
            ))
        else:
            result.append('%s:%s:rw' % (k, v))
    return result

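A quick sketch of what the new `convert_volume_binds()` returns for the two supported value shapes (a dict carrying `bind`/`ro`, or a plain container path), assuming the helper is imported from the vendored utils package:

```
from fig.packages.docker.utils import convert_volume_binds

binds = {
    '/home/user/data': {'bind': '/data', 'ro': True},
    '/var/log/app': '/logs',
}
# Each host path becomes a "host:container:mode" string; plain string values
# default to read-write. Dict iteration order may vary.
print(convert_volume_binds(binds))
# ['/home/user/data:/data:ro', '/var/log/app:/logs:rw']
```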
def parse_repository_tag(repo):
    column_index = repo.rfind(':')
    if column_index < 0:
        return repo, ""
        return repo, None
    tag = repo[column_index+1:]
    slash_index = tag.find('/')
    if slash_index < 0:
        return repo[:column_index], tag

    return repo, ""
    return repo, None

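The change above makes `parse_repository_tag()` return `None` rather than an empty string when no tag is present, so callers can tell "no tag was given" apart from an empty value and fall back to a default. Illustrative calls, assuming the vendored utils import:

```
from fig.packages.docker.utils import parse_repository_tag

print(parse_repository_tag('busybox'))                 # ('busybox', None)
print(parse_repository_tag('busybox:latest'))          # ('busybox', 'latest')
print(parse_repository_tag('localhost:5000/busybox'))  # ('localhost:5000/busybox', None)
```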
fig/packages/docker/version.py (new file, 1 line)
@@ -0,0 +1 @@
version = "0.3.2"
fig/progress_stream.py (new file, 83 lines)
@@ -0,0 +1,83 @@
import json
import os
import codecs


class StreamOutputError(Exception):
    pass


def stream_output(output, stream):
    is_terminal = hasattr(stream, 'fileno') and os.isatty(stream.fileno())
    stream = codecs.getwriter('utf-8')(stream)
    all_events = []
    lines = {}
    diff = 0

    for chunk in output:
        event = json.loads(chunk)
        all_events.append(event)

        if 'progress' in event or 'progressDetail' in event:
            image_id = event['id']

            if image_id in lines:
                diff = len(lines) - lines[image_id]
            else:
                lines[image_id] = len(lines)
                stream.write("\n")
                diff = 0

            if is_terminal:
                # move cursor up `diff` rows
                stream.write("%c[%dA" % (27, diff))

        print_output_event(event, stream, is_terminal)

        if 'id' in event and is_terminal:
            # move cursor back down
            stream.write("%c[%dB" % (27, diff))

        stream.flush()

    return all_events


def print_output_event(event, stream, is_terminal):
    if 'errorDetail' in event:
        raise StreamOutputError(event['errorDetail']['message'])

    terminator = ''

    if is_terminal and 'stream' not in event:
        # erase current line
        stream.write("%c[2K\r" % 27)
        terminator = "\r"
        pass
    elif 'progressDetail' in event:
        return

    if 'time' in event:
        stream.write("[%s] " % event['time'])

    if 'id' in event:
        stream.write("%s: " % event['id'])

    if 'from' in event:
        stream.write("(from %s) " % event['from'])

    status = event.get('status', '')

    if 'progress' in event:
        stream.write("%s %s%s" % (status, event['progress'], terminator))
    elif 'progressDetail' in event:
        detail = event['progressDetail']
        if 'current' in detail:
            percentage = float(detail['current']) / float(detail['total']) * 100
            stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
        else:
            stream.write('%s%s' % (status, terminator))
    elif 'stream' in event:
        stream.write("%s%s" % (event['stream'], terminator))
    else:
        stream.write("%s%s\n" % (status, terminator))
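The cursor handling above is plain ANSI escape sequences: character 27 is ESC, so `stream_output()` writes `ESC[<n>A` to move up n rows, `ESC[<n>B` to move back down, and `ESC[2K` to erase the current line before redrawing a progress bar. A tiny demonstration of the exact strings it emits, no terminal needed:

```
# chr(27) is ESC; these are the control sequences written by stream_output().
print(repr("%c[%dA" % (27, 3)))  # '\x1b[3A'   -> cursor up 3 rows
print(repr("%c[%dB" % (27, 3)))  # '\x1b[3B'   -> cursor down 3 rows
print(repr("%c[2K\r" % 27))      # '\x1b[2K\r' -> erase line, return to column 0
```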
fig/project.py (113 lines changed)
@@ -2,6 +2,8 @@ from __future__ import unicode_literals
from __future__ import absolute_import
import logging
from .service import Service
from .container import Container
from .packages.docker.errors import APIError

log = logging.getLogger(__name__)

@@ -18,11 +20,13 @@ def sort_service_dicts(services):
        if n['name'] in temporary_marked:
            if n['name'] in get_service_names(n.get('links', [])):
                raise DependencyError('A service can not link to itself: %s' % n['name'])
            if n['name'] in n.get('volumes_from', []):
                raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
            else:
                raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
        if n in unmarked:
            temporary_marked.add(n['name'])
            dependents = [m for m in services if n['name'] in get_service_names(m.get('links', []))]
            dependents = [m for m in services if (n['name'] in get_service_names(m.get('links', []))) or (n['name'] in m.get('volumes_from', []))]
            for m in dependents:
                visit(m)
            temporary_marked.remove(n['name'])

@@ -50,21 +54,10 @@ class Project(object):
        """
        project = cls(name, [], client)
        for service_dict in sort_service_dicts(service_dicts):
            # Reference links by object
            links = []
            if 'links' in service_dict:
                for link in service_dict.get('links', []):
                    if ':' in link:
                        service_name, link_name = link.split(':', 1)
                    else:
                        service_name, link_name = link, None
                    try:
                        links.append((project.get_service(service_name), link_name))
                    except NoSuchService:
                        raise ConfigurationError('Service "%s" has a link to service "%s" which does not exist.' % (service_dict['name'], service_name))
            links = project.get_links(service_dict)
            volumes_from = project.get_volumes_from(service_dict)

            del service_dict['links']
            project.services.append(Service(client=client, project=name, links=links, **service_dict))
            project.services.append(Service(client=client, project=name, links=links, volumes_from=volumes_from, **service_dict))
        return project

    @classmethod

@@ -88,22 +81,66 @@ class Project(object):

        raise NoSuchService(name)

    def get_services(self, service_names=None):
    def get_services(self, service_names=None, include_links=False):
        """
        Returns a list of this project's services filtered
        by the provided list of names, or all services if
        service_names is None or [].
        by the provided list of names, or all services if service_names is None
        or [].

        Preserves the original order of self.services.
        If include_links is specified, returns a list including the links for
        service_names, in order of dependency.

        Raises NoSuchService if any of the named services
        do not exist.
        Preserves the original order of self.services where possible,
        reordering as needed to resolve links.

        Raises NoSuchService if any of the named services do not exist.
        """
        if service_names is None or len(service_names) == 0:
            return self.services
            return self.get_services(
                service_names=[s.name for s in self.services],
                include_links=include_links
            )
        else:
            unsorted = [self.get_service(name) for name in service_names]
            return [s for s in self.services if s in unsorted]
            services = [s for s in self.services if s in unsorted]

            if include_links:
                services = reduce(self._inject_links, services, [])

            uniques = []
            [uniques.append(s) for s in services if s not in uniques]
            return uniques

    def get_links(self, service_dict):
        links = []
        if 'links' in service_dict:
            for link in service_dict.get('links', []):
                if ':' in link:
                    service_name, link_name = link.split(':', 1)
                else:
                    service_name, link_name = link, None
                try:
                    links.append((self.get_service(service_name), link_name))
                except NoSuchService:
                    raise ConfigurationError('Service "%s" has a link to service "%s" which does not exist.' % (service_dict['name'], service_name))
            del service_dict['links']
        return links

    def get_volumes_from(self, service_dict):
        volumes_from = []
        if 'volumes_from' in service_dict:
            for volume_name in service_dict.get('volumes_from', []):
                try:
                    service = self.get_service(volume_name)
                    volumes_from.append(service)
                except NoSuchService:
                    try:
                        container = Container.from_id(client, volume_name)
                        volumes_from.append(Container.from_id(client, volume_name))
                    except APIError:
                        raise ConfigurationError('Service "%s" mounts volumes from "%s", which is not the name of a service or container.' % (service_dict['name'], volume_name))
            del service_dict['volumes_from']
        return volumes_from

    def start(self, service_names=None, **options):
        for service in self.get_services(service_names):

@@ -124,14 +161,18 @@ class Project(object):
        else:
            log.info('%s uses an image, skipping' % service.name)

    def up(self, service_names=None):
        new_containers = []
    def up(self, service_names=None, start_links=True, recreate=True):
        running_containers = []

        for service in self.get_services(service_names):
            for (_, new) in service.recreate_containers():
                new_containers.append(new)
        for service in self.get_services(service_names, include_links=start_links):
            if recreate:
                for (_, container) in service.recreate_containers():
                    running_containers.append(container)
            else:
                for container in service.start_or_create_containers():
                    running_containers.append(container)

        return new_containers
        return running_containers

    def remove_stopped(self, service_names=None, **options):
        for service in self.get_services(service_names):

@@ -144,6 +185,20 @@ class Project(object):
            l.append(container)
        return l

    def _inject_links(self, acc, service):
        linked_names = service.get_linked_names()

        if len(linked_names) > 0:
            linked_services = self.get_services(
                service_names=linked_names,
                include_links=True
            )
        else:
            linked_services = []

        linked_services.append(service)
        return acc + linked_services


class NoSuchService(Exception):
    def __init__(self, name):

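With `include_links=True`, `get_services()` folds each service's links in front of it via `_inject_links()` and then de-duplicates, which is how `fig up web` now starts `db` first. A small sketch mirroring the unit tests later in this diff (no Docker daemon is needed just to compute the ordering, so the client can be None):

```
from fig.project import Project
from fig.service import Service

db = Service(project='figtest', name='db')
web = Service(project='figtest', name='web', links=[(db, 'db')])
project = Project('figtest', [web, db], None)

# The linked service is injected ahead of the service that depends on it.
print([s.name for s in project.get_services(['web'], include_links=True)])
# ['db', 'web']
```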
fig/service.py (205 lines changed)
@@ -5,13 +5,13 @@ import logging
import re
import os
import sys
import json
from .container import Container
from .progress_stream import stream_output, StreamOutputError

log = logging.getLogger(__name__)


DOCKER_CONFIG_KEYS = ['image', 'command', 'hostname', 'user', 'detach', 'stdin_open', 'tty', 'mem_limit', 'ports', 'environment', 'dns', 'volumes', 'volumes_from', 'entrypoint', 'privileged']
DOCKER_CONFIG_KEYS = ['image', 'command', 'hostname', 'domainname', 'user', 'detach', 'stdin_open', 'tty', 'mem_limit', 'ports', 'environment', 'dns', 'volumes', 'entrypoint', 'privileged', 'volumes_from', 'net', 'working_dir']
DOCKER_CONFIG_HINTS = {
    'link' : 'links',
    'port' : 'ports',
@@ -19,8 +19,11 @@ DOCKER_CONFIG_HINTS = {
    'priviliged': 'privileged',
    'privilige' : 'privileged',
    'volume' : 'volumes',
    'workdir' : 'working_dir',
}

VALID_NAME_CHARS = '[a-zA-Z0-9]'


class BuildError(Exception):
    def __init__(self, service, reason):

@@ -37,11 +40,11 @@ class ConfigError(ValueError):


class Service(object):
    def __init__(self, name, client=None, project='default', links=[], **options):
        if not re.match('^[a-zA-Z0-9]+$', name):
            raise ConfigError('Invalid name: %s' % name)
        if not re.match('^[a-zA-Z0-9]+$', project):
            raise ConfigError('Invalid project: %s' % project)
    def __init__(self, name, client=None, project='default', links=[], volumes_from=[], **options):
        if not re.match('^%s+$' % VALID_NAME_CHARS, name):
            raise ConfigError('Invalid service name "%s" - only %s are allowed' % (name, VALID_NAME_CHARS))
        if not re.match('^%s+$' % VALID_NAME_CHARS, project):
            raise ConfigError('Invalid project name "%s" - only %s are allowed' % (project, VALID_NAME_CHARS))
        if 'image' in options and 'build' in options:
            raise ConfigError('Service %s has both an image and build path specified. A service can either be built to image or use an existing image, not both.' % name)

@@ -58,6 +61,7 @@ class Service(object):
        self.client = client
        self.project = project
        self.links = links or []
        self.volumes_from = volumes_from or []
        self.options = options

    def containers(self, stopped=False, one_off=False):

@@ -73,9 +77,7 @@ class Service(object):

    def start(self, **options):
        for c in self.containers(stopped=True):
            if not c.is_running:
                log.info("Starting %s..." % c.name)
                self.start_container(c, **options)
            self.start_container_if_stopped(c, **options)

    def stop(self, **options):
        for c in self.containers():

@@ -181,7 +183,6 @@ class Service(object):
        intermediate_container = Container.create(
            self.client,
            image=container.image,
            volumes_from=container.id,
            entrypoint=['echo'],
            command=[],
        )

@@ -190,15 +191,21 @@ class Service(object):
        container.remove()

        options = dict(override_options)
        options['volumes_from'] = intermediate_container.id
        new_container = self.create_container(**options)
        self.start_container(new_container, volumes_from=intermediate_container.id)
        self.start_container(new_container, intermediate_container=intermediate_container)

        intermediate_container.remove()

        return (intermediate_container, new_container)

    def start_container(self, container=None, volumes_from=None, **override_options):
    def start_container_if_stopped(self, container, **options):
        if container.is_running:
            return container
        else:
            log.info("Starting %s..." % container.name)
            return self.start_container(container, **options)

    def start_container(self, container=None, intermediate_container=None,**override_options):
        if container is None:
            container = self.create_container(**override_options)

@@ -209,12 +216,7 @@ class Service(object):

        if options.get('ports', None) is not None:
            for port in options['ports']:
                port = str(port)
                if ':' in port:
                    external_port, internal_port = port.split(':', 1)
                else:
                    external_port, internal_port = (None, port)

                internal_port, external_port = split_port(port)
                port_bindings[internal_port] = external_port

        volume_bindings = {}

@@ -229,16 +231,31 @@ class Service(object):
        }

        privileged = options.get('privileged', False)
        net = options.get('net', 'bridge')

        container.start(
            links=self._get_links(link_to_self=override_options.get('one_off', False)),
            port_bindings=port_bindings,
            binds=volume_bindings,
            volumes_from=volumes_from,
            volumes_from=self._get_volumes_from(intermediate_container),
            privileged=privileged,
            network_mode=net,
        )
        return container

    def start_or_create_containers(self):
        containers = self.containers(stopped=True)

        if len(containers) == 0:
            log.info("Creating %s..." % self.next_container_name())
            new_container = self.create_container()
            return [self.start_container(new_container)]
        else:
            return [self.start_container_if_stopped(c) for c in containers]

    def get_linked_names(self):
        return [s.name for (s, _) in self.links]

    def next_container_name(self, one_off=False):
        bits = [self.project, self.name]
        if one_off:

@@ -267,12 +284,37 @@ class Service(object):
            links.append((container.name, container.name_without_project))
        return links

    def _get_volumes_from(self, intermediate_container=None):
        volumes_from = []
        for v in self.volumes_from:
            if isinstance(v, Service):
                for container in v.containers(stopped=True):
                    volumes_from.append(container.id)
            elif isinstance(v, Container):
                volumes_from.append(v.id)

        if intermediate_container:
            volumes_from.append(intermediate_container.id)

        return volumes_from

    def _get_container_create_options(self, override_options, one_off=False):
        container_options = dict((k, self.options[k]) for k in DOCKER_CONFIG_KEYS if k in self.options)
        container_options.update(override_options)

        container_options['name'] = self.next_container_name(one_off)

        # If a qualified hostname was given, split it into an
        # unqualified hostname and a domainname unless domainname
        # was also given explicitly. This matches the behavior of
        # the official Docker CLI in that scenario.
        if ('hostname' in container_options
                and 'domainname' not in container_options
                and '.' in container_options['hostname']):
            parts = container_options['hostname'].partition('.')
            container_options['hostname'] = parts[0]
            container_options['domainname'] = parts[2]

        if 'ports' in container_options or 'expose' in self.options:
            ports = []
            all_ports = container_options.get('ports', []) + self.options.get('expose', [])

@@ -288,6 +330,11 @@ class Service(object):
        if 'volumes' in container_options:
            container_options['volumes'] = dict((split_volume(v)[1], {}) for v in container_options['volumes'])

        if 'environment' in container_options:
            if isinstance(container_options['environment'], list):
                container_options['environment'] = dict(split_env(e) for e in container_options['environment'])
            container_options['environment'] = dict(resolve_env(k,v) for k,v in container_options['environment'].iteritems())

        if self.can_be_built():
            if len(self.client.images(name=self._build_tag_name())) == 0:
                self.build()

@@ -297,6 +344,10 @@ class Service(object):
        if 'privileged' in container_options:
            del container_options['privileged']

        # net is only required for starting containers, not for creating them
        if 'net' in container_options:
            del container_options['net']

        return container_options

    def build(self):

@@ -305,7 +356,8 @@ class Service(object):
        build_output = self.client.build(
            self.options['build'],
            tag=self._build_tag_name(),
            stream=True
            stream=True,
            rm=True
        )

        try:

@@ -342,84 +394,6 @@ class Service(object):
        return True


class StreamOutputError(Exception):
    pass


def stream_output(output, stream):
    is_terminal = hasattr(stream, 'fileno') and os.isatty(stream.fileno())
    all_events = []
    lines = {}
    diff = 0

    for chunk in output:
        event = json.loads(chunk)
        all_events.append(event)

        if 'progress' in event or 'progressDetail' in event:
            image_id = event['id']

            if image_id in lines:
                diff = len(lines) - lines[image_id]
            else:
                lines[image_id] = len(lines)
                stream.write("\n")
                diff = 0

            if is_terminal:
                # move cursor up `diff` rows
                stream.write("%c[%dA" % (27, diff))

        print_output_event(event, stream, is_terminal)

        if 'id' in event and is_terminal:
            # move cursor back down
            stream.write("%c[%dB" % (27, diff))

        stream.flush()

    return all_events

def print_output_event(event, stream, is_terminal):
    if 'errorDetail' in event:
        raise StreamOutputError(event['errorDetail']['message'])

    terminator = ''

    if is_terminal and 'stream' not in event:
        # erase current line
        stream.write("%c[2K\r" % 27)
        terminator = "\r"
        pass
    elif 'progressDetail' in event:
        return

    if 'time' in event:
        stream.write("[%s] " % event['time'])

    if 'id' in event:
        stream.write("%s: " % event['id'])

    if 'from' in event:
        stream.write("(from %s) " % event['from'])

    status = event.get('status', '')

    if 'progress' in event:
        stream.write("%s %s%s" % (status, event['progress'], terminator))
    elif 'progressDetail' in event:
        detail = event['progressDetail']
        if 'current' in detail:
            percentage = float(detail['current']) / float(detail['total']) * 100
            stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
        else:
            stream.write('%s%s' % (status, terminator))
    elif 'stream' in event:
        stream.write("%s%s" % (event['stream'], terminator))
    else:
        stream.write("%s%s\n" % (status, terminator))


NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$')


@@ -460,3 +434,34 @@ def split_volume(v):
        return v.split(':', 1)
    else:
        return (None, v)


def split_port(port):
    port = str(port)
    external_ip = None
    if ':' in port:
        external_port, internal_port = port.rsplit(':', 1)
        if ':' in external_port:
            external_ip, external_port = external_port.split(':', 1)
    else:
        external_port, internal_port = (None, port)
    if external_ip:
        if external_port:
            external_port = (external_ip, external_port)
        else:
            external_port = (external_ip,)
    return internal_port, external_port

def split_env(env):
    if '=' in env:
        return env.split('=', 1)
    else:
        return env, None

def resolve_env(key,val):
    if val is not None:
        return key, val
    elif key in os.environ:
        return key, os.environ[key]
    else:
        return key, ''

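The helpers added at the bottom of fig/service.py are small enough to exercise directly; the expected values below match the unit tests further down in this diff (assuming `fig.service` is importable):

```
import os
from fig.service import split_port, split_env, resolve_env

print(split_port('8001:8000'))            # ('8000', '8001')
print(split_port('127.0.0.1:8001:8000'))  # ('8000', ('127.0.0.1', '8001'))
print(split_port('127.0.0.1::8000'))      # ('8000', ('127.0.0.1',))

print(split_env('RACK_ENV=development'))  # ['RACK_ENV', 'development']
print(split_env('SESSION_SECRET'))        # ('SESSION_SECRET', None)

# A blank value in fig.yml resolves from the environment fig runs in.
os.environ['SESSION_SECRET'] = 's3cret'
print(resolve_env('SESSION_SECRET', None))     # ('SESSION_SECRET', 's3cret')
print(resolve_env('RACK_ENV', 'development'))  # ('RACK_ENV', 'development')
```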
@@ -3,3 +3,4 @@ PyYAML==3.10
requests==2.2.1
texttable==0.8.1
websocket-client==0.11.0
dockerpty==0.2.1

@@ -21,8 +21,7 @@ git reset --soft origin/gh-pages

echo ".git-gh-pages" > .gitignore

git add -u
git add .
git add -A .

git commit -m "update" || echo "didn't commit"
git push origin master:gh-pages

@@ -1,2 +1,2 @@
#!/bin/sh
nosetests
PYTHONIOENCODING=ascii nosetests $@

setup.py (2 lines changed)
@@ -35,7 +35,7 @@ setup(
    url='http://orchardup.github.io/fig/',
    author='Orchard Laboratories Ltd.',
    author_email='hello@orchardup.com',
    license='BSD',
    license='Apache License 2.0',
    packages=find_packages(),
    include_package_data=True,
    test_suite='nose.collector',

tests/fixtures/links-figfile/fig.yml (vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
db:
  image: busybox:latest
  command: /bin/sleep 300
web:
  image: busybox:latest
  command: /bin/sleep 300
  links:
    - db:db
console:
  image: busybox:latest
  command: /bin/sleep 300
@@ -1,3 +1,3 @@
definedinyamlnotyml:
  image: ubuntu
  image: busybox:latest
  command: /bin/sleep 300
tests/fixtures/multiple-figfiles/fig.yml (vendored, 4 lines changed)
@@ -1,6 +1,6 @@
simple:
  image: ubuntu
  image: busybox:latest
  command: /bin/sleep 300
another:
  image: ubuntu
  image: busybox:latest
  command: /bin/sleep 300

tests/fixtures/multiple-figfiles/fig2.yml (vendored, 2 lines changed)
@@ -1,3 +1,3 @@
yetanother:
  image: ubuntu
  image: busybox:latest
  command: /bin/sleep 300

tests/fixtures/simple-dockerfile/Dockerfile (vendored, 2 lines changed)
@@ -1,2 +1,2 @@
FROM ubuntu
FROM busybox:latest
CMD echo "success"

tests/fixtures/simple-figfile/fig.yml (vendored, 4 lines changed)
@@ -1,6 +1,6 @@
simple:
  image: ubuntu
  image: busybox:latest
  command: /bin/sleep 300
another:
  image: ubuntu
  image: busybox:latest
  command: /bin/sleep 300

@@ -1,17 +1,20 @@
|
||||
from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
from .testcases import DockerClientTestCase
|
||||
from mock import patch
|
||||
from fig.cli.main import TopLevelCommand
|
||||
from fig.packages.six import StringIO
|
||||
import sys
|
||||
|
||||
class CLITestCase(DockerClientTestCase):
|
||||
def setUp(self):
|
||||
super(CLITestCase, self).setUp()
|
||||
self.old_sys_exit = sys.exit
|
||||
sys.exit = lambda code=0: None
|
||||
self.command = TopLevelCommand()
|
||||
self.command.base_dir = 'tests/fixtures/simple-figfile'
|
||||
|
||||
def tearDown(self):
|
||||
sys.exit = self.old_sys_exit
|
||||
self.command.project.kill()
|
||||
self.command.project.remove_stopped()
|
||||
|
||||
@@ -43,6 +46,104 @@ class CLITestCase(DockerClientTestCase):
|
||||
self.assertNotIn('fig_another_1', output)
|
||||
self.assertIn('fig_yetanother_1', output)
|
||||
|
||||
def test_up(self):
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
service = self.command.project.get_service('simple')
|
||||
another = self.command.project.get_service('another')
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
self.assertEqual(len(another.containers()), 1)
|
||||
|
||||
def test_up_with_links(self):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['up', '-d', 'web'], None)
|
||||
web = self.command.project.get_service('web')
|
||||
db = self.command.project.get_service('db')
|
||||
console = self.command.project.get_service('console')
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
def test_up_with_no_deps(self):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['up', '-d', '--no-deps', 'web'], None)
|
||||
web = self.command.project.get_service('web')
|
||||
db = self.command.project.get_service('db')
|
||||
console = self.command.project.get_service('console')
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 0)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
def test_up_with_recreate(self):
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
service = self.command.project.get_service('simple')
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
old_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
new_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.assertNotEqual(old_ids, new_ids)
|
||||
|
||||
def test_up_with_keep_old(self):
|
||||
self.command.dispatch(['up', '-d'], None)
|
||||
service = self.command.project.get_service('simple')
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
old_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.command.dispatch(['up', '-d', '--no-recreate'], None)
|
||||
self.assertEqual(len(service.containers()), 1)
|
||||
|
||||
new_ids = [c.id for c in service.containers()]
|
||||
|
||||
self.assertEqual(old_ids, new_ids)
|
||||
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_service_without_links(self, mock_stdout):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['run', 'console', '/bin/true'], None)
|
||||
self.assertEqual(len(self.command.project.containers()), 0)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_service_with_links(self, mock_stdout):
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['run', 'web', '/bin/true'], None)
|
||||
db = self.command.project.get_service('db')
|
||||
console = self.command.project.get_service('console')
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_with_no_deps(self, mock_stdout):
|
||||
mock_stdout.fileno = lambda: 1
|
||||
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['run', '--no-deps', 'web', '/bin/true'], None)
|
||||
db = self.command.project.get_service('db')
|
||||
self.assertEqual(len(db.containers()), 0)
|
||||
|
||||
@patch('dockerpty.start')
|
||||
def test_run_does_not_recreate_linked_containers(self, mock_stdout):
|
||||
mock_stdout.fileno = lambda: 1
|
||||
|
||||
self.command.base_dir = 'tests/fixtures/links-figfile'
|
||||
self.command.dispatch(['up', '-d', 'db'], None)
|
||||
db = self.command.project.get_service('db')
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
|
||||
old_ids = [c.id for c in db.containers()]
|
||||
|
||||
self.command.dispatch(['run', 'web', '/bin/true'], None)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
|
||||
new_ids = [c.id for c in db.containers()]
|
||||
|
||||
self.assertEqual(old_ids, new_ids)
|
||||
|
||||
def test_rm(self):
|
||||
service = self.command.project.get_service('simple')
|
||||
service.create_container()
|
||||
|
||||
@@ -44,6 +44,21 @@ class ProjectTest(DockerClientTestCase):
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up(['db'])
|
||||
self.assertEqual(len(project.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(web.containers()), 0)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_recreates_containers(self):
|
||||
web = self.create_service('web')
|
||||
db = self.create_service('db', volumes=['/var/db'])
|
||||
project = Project('figtest', [web, db], self.client)
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up(['db'])
|
||||
self.assertEqual(len(project.containers()), 1)
|
||||
old_db_id = project.containers()[0].id
|
||||
@@ -59,6 +74,107 @@ class ProjectTest(DockerClientTestCase):
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_with_no_recreate_running(self):
|
||||
web = self.create_service('web')
|
||||
db = self.create_service('db', volumes=['/var/db'])
|
||||
project = Project('figtest', [web, db], self.client)
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up(['db'])
|
||||
self.assertEqual(len(project.containers()), 1)
|
||||
old_db_id = project.containers()[0].id
|
||||
db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db']
|
||||
|
||||
project.up(recreate=False)
|
||||
self.assertEqual(len(project.containers()), 2)
|
||||
|
||||
db_container = [c for c in project.containers() if 'db' in c.name][0]
|
||||
self.assertEqual(c.id, old_db_id)
|
||||
self.assertEqual(c.inspect()['Volumes']['/var/db'], db_volume_path)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_with_no_recreate_stopped(self):
|
||||
web = self.create_service('web')
|
||||
db = self.create_service('db', volumes=['/var/db'])
|
||||
project = Project('figtest', [web, db], self.client)
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up(['db'])
|
||||
project.stop()
|
||||
|
||||
old_containers = project.containers(stopped=True)
|
||||
|
||||
self.assertEqual(len(old_containers), 1)
|
||||
old_db_id = old_containers[0].id
|
||||
db_volume_path = old_containers[0].inspect()['Volumes']['/var/db']
|
||||
|
||||
project.up(recreate=False)
|
||||
|
||||
new_containers = project.containers(stopped=True)
|
||||
self.assertEqual(len(new_containers), 2)
|
||||
|
||||
db_container = [c for c in new_containers if 'db' in c.name][0]
|
||||
self.assertEqual(c.id, old_db_id)
|
||||
self.assertEqual(c.inspect()['Volumes']['/var/db'], db_volume_path)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_without_all_services(self):
|
||||
console = self.create_service('console')
|
||||
db = self.create_service('db')
|
||||
project = Project('figtest', [console, db], self.client)
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up()
|
||||
self.assertEqual(len(project.containers()), 2)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(console.containers()), 1)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_starts_links(self):
|
||||
console = self.create_service('console')
|
||||
db = self.create_service('db', volumes=['/var/db'])
|
||||
web = self.create_service('web', links=[(db, 'db')])
|
||||
|
||||
project = Project('figtest', [web, db, console], self.client)
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up(['web'])
|
||||
self.assertEqual(len(project.containers()), 2)
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 1)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_project_up_with_no_deps(self):
|
||||
console = self.create_service('console')
|
||||
db = self.create_service('db', volumes=['/var/db'])
|
||||
web = self.create_service('web', links=[(db, 'db')])
|
||||
|
||||
project = Project('figtest', [web, db, console], self.client)
|
||||
project.start()
|
||||
self.assertEqual(len(project.containers()), 0)
|
||||
|
||||
project.up(['web'], start_links=False)
|
||||
self.assertEqual(len(project.containers()), 1)
|
||||
self.assertEqual(len(web.containers()), 1)
|
||||
self.assertEqual(len(db.containers()), 0)
|
||||
self.assertEqual(len(console.containers()), 0)
|
||||
|
||||
project.kill()
|
||||
project.remove_stopped()
|
||||
|
||||
def test_unscale_after_restart(self):
|
||||
web = self.create_service('web')
|
||||
project = Project('figtest', [web], self.client)
|
||||
|
||||
@@ -2,8 +2,10 @@ from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
from fig import Service
|
||||
from fig.service import CannotBeScaledError
|
||||
from fig.container import Container
|
||||
from fig.packages.docker.errors import APIError
|
||||
from .testcases import DockerClientTestCase
|
||||
import os
|
||||
|
||||
class ServiceTest(DockerClientTestCase):
|
||||
def test_containers(self):
|
||||
@@ -96,6 +98,16 @@ class ServiceTest(DockerClientTestCase):
|
||||
service.start_container(container)
|
||||
self.assertIn('/host-tmp', container.inspect()['Volumes'])
|
||||
|
||||
def test_create_container_with_volumes_from(self):
|
||||
volume_service = self.create_service('data')
|
||||
volume_container_1 = volume_service.create_container()
|
||||
volume_container_2 = Container.create(self.client, image='busybox:latest', command=["/bin/sleep", "300"])
|
||||
host_service = self.create_service('host', volumes_from=[volume_service, volume_container_2])
|
||||
host_container = host_service.create_container()
|
||||
host_service.start_container(host_container)
|
||||
self.assertIn(volume_container_1.id, host_container.inspect()['HostConfig']['VolumesFrom'])
|
||||
self.assertIn(volume_container_2.id, host_container.inspect()['HostConfig']['VolumesFrom'])
|
||||
|
||||
def test_recreate_containers(self):
|
||||
service = self.create_service(
|
||||
'db',
|
||||
@@ -127,6 +139,7 @@ class ServiceTest(DockerClientTestCase):
|
||||
self.assertIn('FOO=2', new_container.dictionary['Config']['Env'])
|
||||
self.assertEqual(new_container.name, 'figtest_db_1')
|
||||
self.assertEqual(new_container.inspect()['Volumes']['/var/db'], volume_path)
|
||||
self.assertIn(intermediate_container.id, new_container.dictionary['HostConfig']['VolumesFrom'])
|
||||
|
||||
self.assertEqual(len(self.client.containers(all=True)), num_containers_before)
|
||||
self.assertNotEqual(old_container.id, new_container.id)
|
||||
@@ -231,6 +244,27 @@ class ServiceTest(DockerClientTestCase):
|
||||
self.assertIn('8000/tcp', container['NetworkSettings']['Ports'])
|
||||
self.assertEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8001')
|
||||
|
||||
def test_port_with_explicit_interface(self):
|
||||
service = self.create_service('web', ports=[
|
||||
'127.0.0.1:8001:8000',
|
||||
'0.0.0.0:9001:9000',
|
||||
])
|
||||
container = service.start_container().inspect()
|
||||
self.assertEqual(container['NetworkSettings']['Ports'], {
|
||||
'8000/tcp': [
|
||||
{
|
||||
'HostIp': '127.0.0.1',
|
||||
'HostPort': '8001',
|
||||
},
|
||||
],
|
||||
'9000/tcp': [
|
||||
{
|
||||
'HostIp': '0.0.0.0',
|
||||
'HostPort': '9001',
|
||||
},
|
||||
],
|
||||
})
|
||||
|
||||
def test_scale(self):
|
||||
service = self.create_service('web')
|
||||
service.scale(1)
|
||||
@@ -253,3 +287,43 @@ class ServiceTest(DockerClientTestCase):
|
||||
self.assertEqual(len(containers), 2)
|
||||
for container in containers:
|
||||
self.assertEqual(list(container.inspect()['HostConfig']['PortBindings'].keys()), ['8000/tcp'])
|
||||
|
||||
def test_network_mode_none(self):
|
||||
service = self.create_service('web', net='none')
|
||||
container = service.start_container().inspect()
|
||||
self.assertEqual(container['HostConfig']['NetworkMode'], 'none')
|
||||
|
||||
def test_network_mode_bridged(self):
|
||||
service = self.create_service('web', net='bridge')
|
||||
container = service.start_container().inspect()
|
||||
self.assertEqual(container['HostConfig']['NetworkMode'], 'bridge')
|
||||
|
||||
def test_network_mode_host(self):
|
||||
service = self.create_service('web', net='host')
|
||||
container = service.start_container().inspect()
|
||||
self.assertEqual(container['HostConfig']['NetworkMode'], 'host')
|
||||
|
||||
def test_working_dir_param(self):
|
||||
service = self.create_service('container', working_dir='/working/dir/sample')
|
||||
container = service.create_container().inspect()
|
||||
self.assertEqual(container['Config']['WorkingDir'], '/working/dir/sample')
|
||||
|
||||
def test_split_env(self):
|
||||
service = self.create_service('web', environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='])
|
||||
env = service.start_container().environment
|
||||
for k,v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.iteritems():
|
||||
self.assertEqual(env[k], v)
|
||||
|
||||
def test_resolve_env(self):
|
||||
service = self.create_service('web', environment={'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None})
|
||||
os.environ['FILE_DEF'] = 'E1'
|
||||
os.environ['FILE_DEF_EMPTY'] = 'E2'
|
||||
os.environ['ENV_DEF'] = 'E3'
|
||||
try:
|
||||
env = service.start_container().environment
|
||||
for k,v in {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''}.iteritems():
|
||||
self.assertEqual(env[k], v)
|
||||
finally:
|
||||
del os.environ['FILE_DEF']
|
||||
del os.environ['FILE_DEF_EMPTY']
|
||||
del os.environ['ENV_DEF']
|
||||
|
||||
@@ -10,7 +10,7 @@ class DockerClientTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.client = Client(docker_url())
        cls.client.pull('ubuntu', tag='latest')
        cls.client.pull('busybox', tag='latest')

    def setUp(self):
        for c in self.client.containers(all=True):

@@ -28,7 +28,7 @@ class DockerClientTestCase(unittest.TestCase):
            project='figtest',
            name=name,
            client=self.client,
            image="ubuntu",
            image="busybox:latest",
            **kwargs
        )

@@ -6,7 +6,7 @@ class ContainerTest(unittest.TestCase):
|
||||
def test_from_ps(self):
|
||||
container = Container.from_ps(None, {
|
||||
"Id":"abc",
|
||||
"Image":"ubuntu:12.04",
|
||||
"Image":"busybox:latest",
|
||||
"Command":"sleep 300",
|
||||
"Created":1387384730,
|
||||
"Status":"Up 8 seconds",
|
||||
@@ -16,14 +16,14 @@ class ContainerTest(unittest.TestCase):
|
||||
"Names":["/figtest_db_1"]
|
||||
}, has_been_inspected=True)
|
||||
self.assertEqual(container.dictionary, {
|
||||
"ID": "abc",
|
||||
"Image":"ubuntu:12.04",
|
||||
"Id": "abc",
|
||||
"Image":"busybox:latest",
|
||||
"Name": "/figtest_db_1",
|
||||
})
|
||||
|
||||
def test_environment(self):
|
||||
container = Container(None, {
|
||||
'ID': 'abc',
|
||||
'Id': 'abc',
|
||||
'Config': {
|
||||
'Env': [
|
||||
'FOO=BAR',
|
||||
@@ -39,7 +39,7 @@ class ContainerTest(unittest.TestCase):
|
||||
def test_number(self):
|
||||
container = Container.from_ps(None, {
|
||||
"Id":"abc",
|
||||
"Image":"ubuntu:12.04",
|
||||
"Image":"busybox:latest",
|
||||
"Command":"sleep 300",
|
||||
"Created":1387384730,
|
||||
"Status":"Up 8 seconds",
|
||||
@@ -53,7 +53,7 @@ class ContainerTest(unittest.TestCase):
|
||||
def test_name(self):
|
||||
container = Container.from_ps(None, {
|
||||
"Id":"abc",
|
||||
"Image":"ubuntu:12.04",
|
||||
"Image":"busybox:latest",
|
||||
"Command":"sleep 300",
|
||||
"Names":["/figtest_db_1"]
|
||||
}, has_been_inspected=True)
|
||||
@@ -62,7 +62,7 @@ class ContainerTest(unittest.TestCase):
|
||||
def test_name_without_project(self):
|
||||
container = Container.from_ps(None, {
|
||||
"Id":"abc",
|
||||
"Image":"ubuntu:12.04",
|
||||
"Image":"busybox:latest",
|
||||
"Command":"sleep 300",
|
||||
"Names":["/figtest_db_1"]
|
||||
}, has_been_inspected=True)
|
||||
|
||||
tests/unit/log_printer_test.py (new file, 57 lines)
@@ -0,0 +1,57 @@
|
||||
from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
import os
|
||||
|
||||
from fig.cli.log_printer import LogPrinter
|
||||
from .. import unittest
|
||||
|
||||
|
||||
class LogPrinterTest(unittest.TestCase):
|
||||
def test_single_container(self):
|
||||
def reader(*args, **kwargs):
|
||||
yield "hello\nworld"
|
||||
|
||||
container = MockContainer(reader)
|
||||
output = run_log_printer([container])
|
||||
|
||||
self.assertIn('hello', output)
|
||||
self.assertIn('world', output)
|
||||
|
||||
def test_unicode(self):
|
||||
glyph = u'\u2022'.encode('utf-8')
|
||||
|
||||
def reader(*args, **kwargs):
|
||||
yield glyph + b'\n'
|
||||
|
||||
container = MockContainer(reader)
|
||||
output = run_log_printer([container])
|
||||
|
||||
self.assertIn(glyph, output)
|
||||
|
||||
|
||||
def run_log_printer(containers):
|
||||
r, w = os.pipe()
|
||||
reader, writer = os.fdopen(r, 'r'), os.fdopen(w, 'w')
|
||||
printer = LogPrinter(containers, output=writer)
|
||||
printer.run()
|
||||
writer.close()
|
||||
return reader.read()
|
||||
|
||||
|
||||
class MockContainer(object):
|
||||
def __init__(self, reader):
|
||||
self._reader = reader
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
return 'myapp_web_1'
|
||||
|
||||
@property
|
||||
def name_without_project(self):
|
||||
return 'web_1'
|
||||
|
||||
def attach(self, *args, **kwargs):
|
||||
return self._reader()
|
||||
|
||||
def wait(self, *args, **kwargs):
|
||||
return 0
|
||||
@@ -8,54 +8,61 @@ class ProjectTest(unittest.TestCase):
|
||||
project = Project.from_dicts('figtest', [
|
||||
{
|
||||
'name': 'web',
|
||||
'image': 'ubuntu'
|
||||
'image': 'busybox:latest'
|
||||
},
|
||||
{
|
||||
'name': 'db',
|
||||
'image': 'ubuntu'
|
||||
}
|
||||
'image': 'busybox:latest'
|
||||
},
|
||||
], None)
|
||||
self.assertEqual(len(project.services), 2)
|
||||
self.assertEqual(project.get_service('web').name, 'web')
|
||||
self.assertEqual(project.get_service('web').options['image'], 'ubuntu')
|
||||
self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
|
||||
self.assertEqual(project.get_service('db').name, 'db')
|
||||
self.assertEqual(project.get_service('db').options['image'], 'ubuntu')
|
||||
self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')
|
||||
|
||||
def test_from_dict_sorts_in_dependency_order(self):
|
||||
project = Project.from_dicts('figtest', [
|
||||
{
|
||||
'name': 'web',
|
||||
'image': 'ubuntu',
|
||||
'image': 'busybox:latest',
|
||||
'links': ['db'],
|
||||
},
|
||||
{
|
||||
'name': 'db',
|
||||
'image': 'ubuntu'
|
||||
'image': 'busybox:latest',
|
||||
'volumes_from': ['volume']
|
||||
},
|
||||
{
|
||||
'name': 'volume',
|
||||
'image': 'busybox:latest',
|
||||
'volumes': ['/tmp'],
|
||||
}
|
||||
], None)
|
||||
|
||||
self.assertEqual(project.services[0].name, 'db')
|
||||
self.assertEqual(project.services[1].name, 'web')
|
||||
self.assertEqual(project.services[0].name, 'volume')
|
||||
self.assertEqual(project.services[1].name, 'db')
|
||||
self.assertEqual(project.services[2].name, 'web')
|
||||
|
||||
def test_from_config(self):
|
||||
project = Project.from_config('figtest', {
|
||||
'web': {
|
||||
'image': 'ubuntu',
|
||||
'image': 'busybox:latest',
|
||||
},
|
||||
'db': {
|
||||
'image': 'ubuntu',
|
||||
'image': 'busybox:latest',
|
||||
},
|
||||
}, None)
|
||||
self.assertEqual(len(project.services), 2)
|
||||
self.assertEqual(project.get_service('web').name, 'web')
|
||||
self.assertEqual(project.get_service('web').options['image'], 'ubuntu')
|
||||
self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
|
||||
self.assertEqual(project.get_service('db').name, 'db')
|
||||
self.assertEqual(project.get_service('db').options['image'], 'ubuntu')
|
||||
self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')
|
||||
|
||||
def test_from_config_throws_error_when_not_dict(self):
|
||||
with self.assertRaises(ConfigurationError):
|
||||
project = Project.from_config('figtest', {
|
||||
'web': 'ubuntu',
|
||||
'web': 'busybox:latest',
|
||||
}, None)
|
||||
|
||||
def test_get_service(self):
|
||||
@@ -63,7 +70,72 @@ class ProjectTest(unittest.TestCase):
|
||||
project='figtest',
|
||||
name='web',
|
||||
client=None,
|
||||
image="ubuntu",
|
||||
image="busybox:latest",
|
||||
)
|
||||
project = Project('test', [web], None)
|
||||
self.assertEqual(project.get_service('web'), web)
|
||||
|
||||
def test_get_services_returns_all_services_without_args(self):
|
||||
web = Service(
|
||||
project='figtest',
|
||||
name='web',
|
||||
)
|
||||
console = Service(
|
||||
project='figtest',
|
||||
name='console',
|
||||
)
|
||||
project = Project('test', [web, console], None)
|
||||
self.assertEqual(project.get_services(), [web, console])
|
||||
|
||||
def test_get_services_returns_listed_services_with_args(self):
|
||||
web = Service(
|
||||
project='figtest',
|
||||
name='web',
|
||||
)
|
||||
console = Service(
|
||||
project='figtest',
|
||||
name='console',
|
||||
)
|
||||
project = Project('test', [web, console], None)
|
||||
self.assertEqual(project.get_services(['console']), [console])
|
||||
|
||||
def test_get_services_with_include_links(self):
|
||||
db = Service(
|
||||
project='figtest',
|
||||
name='db',
|
||||
)
|
||||
web = Service(
|
||||
project='figtest',
|
||||
name='web',
|
||||
links=[(db, 'database')]
|
||||
)
|
||||
cache = Service(
|
||||
project='figtest',
|
||||
name='cache'
|
||||
)
|
||||
console = Service(
|
||||
project='figtest',
|
||||
name='console',
|
||||
links=[(web, 'web')]
|
||||
)
|
||||
project = Project('test', [web, db, cache, console], None)
|
||||
self.assertEqual(
|
||||
project.get_services(['console'], include_links=True),
|
||||
[db, web, console]
|
||||
)
|
||||
|
||||
def test_get_services_removes_duplicates_following_links(self):
|
||||
db = Service(
|
||||
project='figtest',
|
||||
name='db',
|
||||
)
|
||||
web = Service(
|
||||
project='figtest',
|
||||
name='web',
|
||||
links=[(db, 'database')]
|
||||
)
|
||||
project = Project('test', [web, db], None)
|
||||
self.assertEqual(
|
||||
project.get_services(['web', 'db'], include_links=True),
|
||||
[db, web]
|
||||
)
|
||||
|
||||
@@ -2,7 +2,7 @@ from __future__ import unicode_literals
|
||||
from __future__ import absolute_import
|
||||
from .. import unittest
|
||||
from fig import Service
|
||||
from fig.service import ConfigError
|
||||
from fig.service import ConfigError, split_port
|
||||
|
||||
class ServiceTest(unittest.TestCase):
|
||||
def test_name_validations(self):
|
||||
@@ -27,3 +27,54 @@ class ServiceTest(unittest.TestCase):
|
||||
def test_config_validation(self):
|
||||
self.assertRaises(ConfigError, lambda: Service(name='foo', port=['8000']))
|
||||
Service(name='foo', ports=['8000'])
|
||||
|
||||
def test_split_port(self):
|
||||
internal_port, external_port = split_port("127.0.0.1:1000:2000")
|
||||
self.assertEqual(internal_port, "2000")
|
||||
self.assertEqual(external_port, ("127.0.0.1", "1000"))
|
||||
|
||||
internal_port, external_port = split_port("127.0.0.1::2000")
|
||||
self.assertEqual(internal_port, "2000")
|
||||
self.assertEqual(external_port, ("127.0.0.1",))
|
||||
|
||||
internal_port, external_port = split_port("1000:2000")
|
||||
self.assertEqual(internal_port, "2000")
|
||||
self.assertEqual(external_port, "1000")
|
||||
|
||||
def test_split_domainname_none(self):
|
||||
service = Service('foo',
|
||||
hostname = 'name',
|
||||
)
|
||||
service.next_container_name = lambda x: 'foo'
|
||||
opts = service._get_container_create_options({})
|
||||
self.assertEqual(opts['hostname'], 'name', 'hostname')
|
||||
self.assertFalse('domainname' in opts, 'domainname')
|
||||
|
||||
def test_split_domainname_fqdn(self):
|
||||
service = Service('foo',
|
||||
hostname = 'name.domain.tld',
|
||||
)
|
||||
service.next_container_name = lambda x: 'foo'
|
||||
opts = service._get_container_create_options({})
|
||||
self.assertEqual(opts['hostname'], 'name', 'hostname')
|
||||
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
|
||||
|
||||
def test_split_domainname_both(self):
|
||||
service = Service('foo',
|
||||
hostname = 'name',
|
||||
domainname = 'domain.tld',
|
||||
)
|
||||
service.next_container_name = lambda x: 'foo'
|
||||
opts = service._get_container_create_options({})
|
||||
self.assertEqual(opts['hostname'], 'name', 'hostname')
|
||||
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
|
||||
|
||||
def test_split_domainname_weird(self):
|
||||
service = Service('foo',
|
||||
hostname = 'name.sub',
|
||||
domainname = 'domain.tld',
|
||||
)
|
||||
service.next_container_name = lambda x: 'foo'
|
||||
opts = service._get_container_create_options({})
|
||||
self.assertEqual(opts['hostname'], 'name.sub', 'hostname')
|
||||
self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
|
||||
|
||||
@@ -6,32 +6,47 @@ from .. import unittest
|
||||
class SplitBufferTest(unittest.TestCase):
|
||||
def test_single_line_chunks(self):
|
||||
def reader():
|
||||
yield "abc\n"
|
||||
yield "def\n"
|
||||
yield "ghi\n"
|
||||
yield b'abc\n'
|
||||
yield b'def\n'
|
||||
yield b'ghi\n'
|
||||
|
||||
self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "def\n", "ghi\n"])
|
||||
self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi\n'])
|
||||
|
||||
def test_no_end_separator(self):
|
||||
def reader():
|
||||
yield "abc\n"
|
||||
yield "def\n"
|
||||
yield "ghi"
|
||||
yield b'abc\n'
|
||||
yield b'def\n'
|
||||
yield b'ghi'
|
||||
|
||||
self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "def\n", "ghi"])
|
||||
self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi'])
|
||||
|
||||
def test_multiple_line_chunk(self):
|
||||
def reader():
|
||||
yield "abc\ndef\nghi"
|
||||
yield b'abc\ndef\nghi'
|
||||
|
||||
self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "def\n", "ghi"])
|
||||
self.assert_produces(reader, [b'abc\n', b'def\n', b'ghi'])
|
||||
|
||||
def test_chunked_line(self):
|
||||
def reader():
|
||||
yield "a"
|
||||
yield "b"
|
||||
yield "c"
|
||||
yield "\n"
|
||||
yield "d"
|
||||
yield b'a'
|
||||
yield b'b'
|
||||
yield b'c'
|
||||
yield b'\n'
|
||||
yield b'd'
|
||||
|
||||
self.assertEqual(list(split_buffer(reader(), '\n')), ["abc\n", "d"])
|
||||
self.assert_produces(reader, [b'abc\n', b'd'])
|
||||
|
||||
def test_preserves_unicode_sequences_within_lines(self):
|
||||
string = u"a\u2022c\n".encode('utf-8')
|
||||
|
||||
def reader():
|
||||
yield string
|
||||
|
||||
self.assert_produces(reader, [string])
|
||||
|
||||
def assert_produces(self, reader, expectations):
|
||||
split = split_buffer(reader(), b'\n')
|
||||
|
||||
for (actual, expected) in zip(split, expectations):
|
||||
self.assertEqual(type(actual), type(expected))
|
||||
self.assertEqual(actual, expected)
|
||||
|
||||