forked from extern/podman-compose
Compare commits
443 Commits
1  .codespellignore  (Normal file)
@@ -0,0 +1 @@
assertIn
4  .codespellrc  (Normal file)
@@ -0,0 +1,4 @@
[codespell]
skip = .git,*.pdf,*.svg,requirements.txt,test-requirements.txt
# poped - loved variable name
ignore-words-list = poped
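A minimal local run for the configuration above, assuming a reasonably recent codespell that reads `.codespellrc` and `.codespellignore` from the repository root (the invocation is a sketch, not taken from this diff):

```bash
# Install the checker and run it from the repository root;
# codespell picks up the [codespell] section of .codespellrc automatically.
pip install codespell
codespell
```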
2  .coveragerc  (Normal file)
@@ -0,0 +1,2 @@
[run]
parallel=True
19  .editorconfig  (Normal file)
@@ -0,0 +1,19 @@
root = true

[*]
indent_style = space
indent_size = tab
tab_width = 4
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
max_line_length = 100

[*.{yml,yaml}]
indent_style = space
indent_size = 2

[*.py]
indent_style = space
2  .github/ISSUE_TEMPLATE/bug_report.md  (vendored)
@@ -35,7 +35,7 @@ What is the behavior you actually got and that should not happen.
|
||||
```
|
||||
$ podman-compose version
|
||||
using podman version: 3.4.0
|
||||
podman-composer version 0.1.7dev
|
||||
podman-compose version 0.1.7dev
|
||||
podman --version
|
||||
podman version 3.4.0
|
||||
|
||||
|
10  .github/PULL_REQUEST_TEMPLATE.md  (vendored, Normal file)
@@ -0,0 +1,10 @@

## Contributor Checklist:

If this PR adds a new feature that improves compatibility with docker-compose, please add a link
to the exact part of compose spec that the PR touches.

For any user-visible change please add a release note to newsfragments directory, e.g.
newsfragments/my_feature.feature. See newsfragments/README.md for more details.

All changes require additional unit tests.
6  .github/dependabot.yml  (vendored, Normal file)
@@ -0,0 +1,6 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
22  .github/workflows/codespell.yml  (vendored, Normal file)
@@ -0,0 +1,22 @@
|
||||
---
|
||||
name: Codespell
|
||||
|
||||
on:
|
||||
push:
|
||||
pull_request:
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
codespell:
|
||||
name: Check for spelling errors
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v4
|
||||
- name: Codespell
|
||||
uses: codespell-project/actions-codespell@v2
|
||||
with:
|
||||
ignore_words_file: .codespellignore
|
25  .github/workflows/static-checks.yml  (vendored, Normal file)
@@ -0,0 +1,25 @@
|
||||
name: Static checks
|
||||
|
||||
on:
|
||||
- push
|
||||
- pull_request
|
||||
|
||||
jobs:
|
||||
static-checks:
|
||||
runs-on: ubuntu-latest
|
||||
container:
|
||||
image: docker.io/library/python:3.11-bookworm
|
||||
# cgroupns needed to address the following error:
|
||||
# write /sys/fs/cgroup/cgroup.subtree_control: operation not supported
|
||||
options: --privileged --cgroupns=host
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- name: Analysing the code with ruff
|
||||
run: |
|
||||
set -e
|
||||
pip install -r test-requirements.txt
|
||||
ruff format --check
|
||||
ruff check
|
||||
- name: Analysing the code with pylint
|
||||
run: |
|
||||
pylint podman_compose.py
|
40  .github/workflows/test.yml  (vendored, Normal file)
@@ -0,0 +1,40 @@
|
||||
name: Tests
|
||||
|
||||
on:
|
||||
push:
|
||||
pull_request:
|
||||
|
||||
jobs:
|
||||
test:
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
python-version: [ '3.8', '3.9', '3.10', '3.11', '3.12' ]
|
||||
|
||||
runs-on: ubuntu-latest
|
||||
container:
|
||||
image: "docker.io/library/python:${{ matrix.python-version }}-bookworm"
|
||||
# cgroupns needed to address the following error:
|
||||
# write /sys/fs/cgroup/cgroup.subtree_control: operation not supported
|
||||
options: --privileged --cgroupns=host
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- name: Install dependencies
|
||||
run: |
|
||||
set -e
|
||||
apt update && apt install -y podman
|
||||
python -m pip install --upgrade pip
|
||||
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
|
||||
if [ -f test-requirements.txt ]; then pip install -r test-requirements.txt; fi
|
||||
- name: Run tests in tests/
|
||||
run: |
|
||||
python -m unittest -v tests/*.py
|
||||
env:
|
||||
TESTS_DEBUG: 1
|
||||
- name: Run tests in pytests/
|
||||
run: |
|
||||
coverage run --source podman_compose -m unittest pytests/*.py
|
||||
- name: Report coverage
|
||||
run: |
|
||||
coverage combine
|
||||
coverage report --format=markdown | tee -a $GITHUB_STEP_SUMMARY
|
5  .gitignore  (vendored)
@@ -47,6 +47,8 @@ coverage.xml
|
||||
*.cover
|
||||
.hypothesis/
|
||||
.pytest_cache/
|
||||
test-compose.yaml
|
||||
test-compose-?.yaml
|
||||
|
||||
# Translations
|
||||
*.mo
|
||||
@@ -103,3 +105,6 @@ venv.bak/
|
||||
|
||||
# mypy
|
||||
.mypy_cache/
|
||||
|
||||
|
||||
.vscode
|
||||
|
36  .pre-commit-config.yaml  (Normal file)
@@ -0,0 +1,36 @@
|
||||
repos:
|
||||
- repo: https://github.com/psf/black
|
||||
rev: 23.3.0
|
||||
hooks:
|
||||
- id: black
|
||||
# It is recommended to specify the latest version of Python
|
||||
# supported by your project here, or alternatively use
|
||||
# pre-commit's default_language_version, see
|
||||
# https://pre-commit.com/#top_level-default_language_version
|
||||
language_version: python3.10
|
||||
types: [python]
|
||||
args: [
|
||||
"--check", # Don't apply changes automatically
|
||||
]
|
||||
- repo: https://github.com/pycqa/flake8
|
||||
rev: 6.0.0
|
||||
hooks:
|
||||
- id: flake8
|
||||
types: [python]
|
||||
- repo: local
|
||||
hooks:
|
||||
- id: pylint
|
||||
name: pylint
|
||||
entry: pylint
|
||||
language: system
|
||||
types: [python]
|
||||
args:
|
||||
[
|
||||
"-rn", # Only display messages
|
||||
"-sn", # Don't display the score
|
||||
"--rcfile=.pylintrc", # Link to your config file
|
||||
]
|
||||
- repo: https://github.com/codespell-project/codespell
|
||||
rev: v2.2.5
|
||||
hooks:
|
||||
- id: codespell
|
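A sketch of how this configuration is typically exercised locally, using standard pre-commit commands (nothing here is specific to this repository beyond the hooks named in the file above):

```bash
# One-time setup: install pre-commit and register the git hooks.
pip install pre-commit
pre-commit install

# Run every configured hook (black --check, flake8, pylint, codespell) against the whole tree.
pre-commit run --all-files
```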
11  .pylintrc
@@ -1,13 +1,18 @@
|
||||
[MESSAGES CONTROL]
|
||||
disable=W0614,C0410,C0321,C0111,I0011,C0103
|
||||
# C0111 missing-docstring: missing-class-docstring, missing-function-docstring, missing-method-docstring, missing-module-docstring
|
||||
# consider-using-with: we need it for color formatter pipe
|
||||
disable=too-many-lines,too-many-branches,too-many-locals,too-many-statements,too-many-arguments,too-many-instance-attributes,fixme,multiple-statements,missing-docstring,line-too-long,consider-using-f-string,consider-using-with,unnecessary-lambda-assignment
|
||||
# allow _ for ignored variables
|
||||
# allow generic names like a,b,c and i,j,k,l,m,n and x,y,z
|
||||
# allow k,v for key/value
|
||||
# allow e for exceptions, it for iterator
|
||||
# allow e for exceptions, it for iterator, ix for index
|
||||
# allow ip for ip address
|
||||
# allow w,h for width, height
|
||||
# allow op for operation/operator/opcode
|
||||
# allow t, t0, t1, t2, and t3 for time
|
||||
# allow dt for delta time
|
||||
# allow db for database
|
||||
# allow ls for list
|
||||
good-names=_,a,b,c,dt,db,e,f,fn,fd,i,j,k,v,kv,kw,l,m,n,ls,t,t0,t1,t2,t3,w,h,x,y,z,it,op
|
||||
# allow p for pipe
|
||||
# allow ex for examples, exists ..etc
|
||||
good-names=_,a,b,c,dt,db,e,f,fn,fd,i,j,k,v,kv,kw,l,m,n,ls,t,t0,t1,t2,t3,w,h,x,y,z,it,ix,ip,op,p,ex
|
||||
|
143  CONTRIBUTING.md
@@ -1,74 +1,141 @@
|
||||
# Contributing to podman-compose
|
||||
|
||||
## Who can contribute?
|
||||
|
||||
- Users that found a bug,
|
||||
- Users that want to propose new functionalities or enhancements,
|
||||
- Users that want to help other users to troubleshoot their environments,
|
||||
- Developers that want to fix bugs,
|
||||
- Developers that want to implement new functionalities or enhancements.
|
||||
|
||||
## Branches
|
||||
|
||||
Please request your pull request to be merged into the `devel` branch.
|
||||
Changes to the `stable` branch are managed by the repository maintainers.
|
||||
|
||||
## Development environment setup
|
||||
|
||||
Note: Some steps are OPTIONAL but all are RECOMMENDED.
|
||||
|
||||
1. Fork the project repository and clone it:
|
||||
|
||||
```shell
|
||||
$ git clone https://github.com/USERNAME/podman-compose.git
|
||||
$ cd podman-compose
|
||||
```
|
||||
|
||||
2. (OPTIONAL) Create a Python virtual environment. Example using
|
||||
[virtualenv wrapper](https://virtualenvwrapper.readthedocs.io/en/latest/):
|
||||
|
||||
```shell
|
||||
$ mkvirtualenv podman-compose
|
||||
```
|
||||
|
||||
3. Install the project runtime and development requirements:
|
||||
|
||||
```shell
|
||||
$ pip install '.[devel]'
|
||||
```
|
||||
|
||||
4. (OPTIONAL) Install `pre-commit` git hook scripts
|
||||
(https://pre-commit.com/#3-install-the-git-hook-scripts):
|
||||
|
||||
```shell
|
||||
$ pre-commit install
|
||||
```
|
||||
|
||||
5. Create a new branch, develop and add tests when possible.
|
||||
6. Run linting and testing before committing code. Ensure all the hooks are passing.
|
||||
|
||||
```shell
|
||||
$ pre-commit run --all-files
|
||||
```
|
||||
|
||||
7. Run code coverage:
|
||||
|
||||
```shell
|
||||
$ coverage run --source podman_compose -m unittest pytests/*.py
|
||||
$ python -m unittest tests/*.py
|
||||
$ coverage combine
|
||||
$ coverage report
|
||||
$ coverage html
|
||||
```
|
||||
|
||||
8. Commit your code to your fork's branch.
|
||||
- Make sure you include a `Signed-off-by` message in your commits.
|
||||
Read [this guide](https://github.com/containers/common/blob/main/CONTRIBUTING.md#sign-your-prs)
|
||||
to learn how to sign your commits.
|
||||
- In the commit message, reference the Issue ID that your code fixes and a brief description of
|
||||
the changes.
|
||||
Example: `Fixes #516: Allow empty network`
|
||||
9. Open a pull request to `containers/podman-compose:devel` and wait for a maintainer to review your
|
||||
work.
|
||||
|
||||
## Adding new commands
|
||||
|
||||
To add a command you need to add a function that is decorated
|
||||
with `@cmd_run` passing the compose instance, command name and
|
||||
description. the wrapped function should accept two arguments
|
||||
the compose instance and the command-specific arguments (resulted
|
||||
from python's `argparse` package) inside that command you can
|
||||
run PodMan like this `compose.podman.run(['inspect', 'something'])`
|
||||
and inside that function you can access `compose.pods`
|
||||
and `compose.containers` ...etc.
|
||||
Here is an example
|
||||
To add a command, you need to add a function that is decorated with `@cmd_run`.
|
||||
|
||||
```
|
||||
The decorated function must be declared `async` and should accept two arguments: The compose
|
||||
instance and the command-specific arguments (resulted from the Python's `argparse` package).
|
||||
|
||||
In this function, you can run Podman (e.g. `await compose.podman.run(['inspect', 'something'])`),
|
||||
access `compose.pods`, `compose.containers` etc.
|
||||
|
||||
Here is an example:
|
||||
|
||||
```python
|
||||
@cmd_run(podman_compose, 'build', 'build images defined in the stack')
|
||||
def compose_build(compose, args):
|
||||
compose.podman.run(['build', 'something'])
|
||||
async def compose_build(compose, args):
|
||||
await compose.podman.run(['build', 'something'])
|
||||
```
|
||||
|
||||
## Command arguments parsing
|
||||
|
||||
Add a function that accept `parser` which is an instance from `argparse`.
|
||||
In side that function you can call `parser.add_argument()`.
|
||||
The function decorated with `@cmd_parse` accepting the compose instance,
|
||||
and command names (as a list or as a string).
|
||||
You can do this multiple times.
|
||||
To add arguments to be parsed by a command, you need to add a function that is decorated with
|
||||
`@cmd_parse` which accepts the compose instance and the command's name (as a string list or as a
|
||||
single string).
|
||||
|
||||
Here is an example
|
||||
The decorated function should accept a single argument: An instance of `argparse`.
|
||||
|
||||
```
|
||||
In this function, you can call `parser.add_argument()` to add a new argument to the command.
|
||||
|
||||
Note you can add such a function multiple times.
|
||||
|
||||
Here is an example:
|
||||
|
||||
```python
|
||||
@cmd_parse(podman_compose, 'build')
|
||||
def compose_build_parse(parser):
|
||||
parser.add_argument("--pull",
|
||||
help="attempt to pull a newer version of the image", action='store_true')
|
||||
parser.add_argument("--pull-always",
|
||||
help="attempt to pull a newer version of the image, Raise an error even if the image is present locally.", action='store_true')
|
||||
help="Attempt to pull a newer version of the image, "
|
||||
"raise an error even if the image is present locally.",
|
||||
action='store_true')
|
||||
```
|
||||
|
||||
NOTE: `@cmd_parse` should be after `@cmd_run`
|
||||
NOTE: `@cmd_parse` should be after `@cmd_run`.
|
||||
|
||||
## Calling a command from inside another
|
||||
## Calling a command from another one
|
||||
|
||||
If you need to call `podman-compose down` from inside `podman-compose up`
|
||||
do something like:
|
||||
If you need to call `podman-compose down` from `podman-compose up`, do something like:
|
||||
|
||||
```
|
||||
```python
|
||||
@cmd_run(podman_compose, 'up', 'up desc')
|
||||
def compose_up(compose, args):
|
||||
compose.commands['down'](compose, args)
|
||||
async def compose_up(compose, args):
|
||||
await compose.commands['down'](compose, args)
|
||||
# or
|
||||
compose.commands['down'](argparse.Namespace(foo=123))
|
||||
await compose.commands['down'](argparse.Namespace(foo=123))
|
||||
```
|
||||
|
||||
|
||||
## Missing Commands (help needed)
|
||||
|
||||
```
|
||||
bundle Generate a Docker bundle from the Compose file
|
||||
config Validate and view the Compose file
|
||||
create Create services
|
||||
events Receive real time events from containers
|
||||
images List images
|
||||
kill Kill containers
|
||||
logs View output from containers
|
||||
pause Pause services
|
||||
port Print the public port for a port binding
|
||||
ps List containers
|
||||
rm Remove stopped containers
|
||||
run Run a one-off command
|
||||
scale Set number of containers for a service
|
||||
top Display the running processes
|
||||
unpause Unpause services
|
||||
version Show the Docker-Compose version information
|
||||
```
|
||||
|
64  README.md
@@ -1,19 +1,25 @@
|
||||
# Podman Compose
|
||||
## [Tests](https://github.com/containers/podman-compose/actions/workflows/test.yml)
|
||||
|
||||
An implementation of [Compose Spec](https://compose-spec.io/) with [Podman](https://podman.io/) backend.
|
||||
This project focus on:
|
||||
This project focuses on:
|
||||
|
||||
* rootless
|
||||
* daemon-less process model, we directly execute podman, no running daemon.
|
||||
|
||||
This project only depend on:
|
||||
This project only depends on:
|
||||
|
||||
* `podman`
|
||||
* [podman dnsname plugin](https://github.com/containers/dnsname): It is usually found in
|
||||
the `podman-plugins` or `podman-dnsname` distro packages, those packages are not pulled
|
||||
by default and you need to install them. This allows containers to be able to resolve
|
||||
each other if they are on the same CNI network. This is not necessary when podman is using
|
||||
netavark as a network backend.
|
||||
* Python3
|
||||
* [PyYAML](https://pyyaml.org/)
|
||||
* [python-dotenv](https://pypi.org/project/python-dotenv/)
|
||||
|
||||
And it's formed as a single python file script that you can drop into your PATH and run.
|
||||
And it's formed as a single Python file script that you can drop into your PATH and run.
|
||||
|
||||
## References:
|
||||
|
||||
@@ -35,16 +41,22 @@ OpenShift/Kubernetes distribution like [OKD](https://www.okd.io/).
|
||||
|
||||
## Versions
|
||||
|
||||
If you have legacy version of `podman` (before 3.x) you might need to stick with legacy `podman-compose` `0.1.x` branch.
|
||||
If you have legacy version of `podman` (before 3.1.0) you might need to stick with legacy `podman-compose` `0.1.x` branch.
|
||||
The legacy branch 0.1.x uses mappings and workarounds to compensate for rootless limitations.
|
||||
|
||||
Modern podman versions (>=3.4) do not have those limitations and thus you can use latest and stable 1.x branch.
|
||||
Modern podman versions (>=3.4) do not have those limitations, and thus you can use latest and stable 1.x branch.
|
||||
|
||||
If you are upgrading from `podman-compose` version `0.1.x` then we no longer have global option `-t` to set mapping type
|
||||
like `hostnet`. If you desire that behavior, pass it the standard way like `network_mode: host` in the YAML.
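A minimal sketch of the standard-Compose replacement for the old `-t hostnet` behavior (the service and image names are placeholders):

```yml
version: "3"
services:
  web:
    image: busybox
    command: ["/bin/busybox", "httpd", "-f", "-p", "8080"]
    network_mode: host
```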
|
||||
|
||||
|
||||
## Installation
|
||||
|
||||
Install latest stable version from PyPI:
|
||||
### Pip
|
||||
|
||||
```
|
||||
Install the latest stable version from PyPI:
|
||||
|
||||
```bash
|
||||
pip3 install podman-compose
|
||||
```
|
||||
|
||||
@@ -52,37 +64,44 @@ pass `--user` to install inside regular user home without being root.
|
||||
|
||||
Or latest development version from GitHub:
|
||||
|
||||
```
|
||||
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
|
||||
```bash
|
||||
pip3 install https://github.com/containers/podman-compose/archive/main.tar.gz
|
||||
```
|
||||
|
||||
or
|
||||
### Homebrew
|
||||
|
||||
```bash
|
||||
brew install podman-compose
|
||||
```
|
||||
curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/devel/podman_compose.py
|
||||
|
||||
### Manual
|
||||
|
||||
```bash
|
||||
curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/main/podman_compose.py
|
||||
chmod +x /usr/local/bin/podman-compose
|
||||
```
|
||||
|
||||
or inside your home
|
||||
|
||||
```
|
||||
curl -o ~/.local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/devel/podman_compose.py
|
||||
```bash
|
||||
curl -o ~/.local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/main/podman_compose.py
|
||||
chmod +x ~/.local/bin/podman-compose
|
||||
```
|
||||
|
||||
or install from Fedora (starting from f31) repositories:
|
||||
|
||||
```
|
||||
```bash
|
||||
sudo dnf install podman-compose
|
||||
```
|
||||
|
||||
## Basic Usage
|
||||
|
||||
We have included fully functional sample stacks inside `examples/` directory.
|
||||
You can get more examples from [awesome-compose](https://github.com/docker/awesome-compose).
|
||||
|
||||
A quick example would be
|
||||
|
||||
```
|
||||
```bash
|
||||
cd examples/busybox
|
||||
podman-compose --help
|
||||
podman-compose up --help
|
||||
@@ -99,12 +118,21 @@ which have
|
||||
- a django tasks
|
||||
|
||||
|
||||
When testing the `AWX3` example, if you got errors just wait for db migrations to end.
|
||||
|
||||
When testing the `AWX3` example, if you got errors, just wait for db migrations to end.
|
||||
There is also AWX 17.1.0
|
||||
|
||||
## Tests
|
||||
|
||||
Inside `tests/` directory we have many useless docker-compose stacks
|
||||
that are meant to test as much cases as we can to make sure we are compatible
|
||||
that are meant to test as many cases as we can to make sure we are compatible
|
||||
|
||||
### Unit tests with unittest
|
||||
run a unittest with following command
|
||||
|
||||
```shell
|
||||
python -m unittest pytests/*.py
|
||||
```
|
||||
|
||||
# Contributing guide
|
||||
|
||||
If you are a user or a developer and want to contribute please check the [CONTRIBUTING](CONTRIBUTING.md) section
|
||||
|
411  completion/bash/podman-compose  (Normal file)
@@ -0,0 +1,411 @@
|
||||
# Naming convention:
|
||||
# * _camelCase for function names
|
||||
# * snake_case for variable names
|
||||
|
||||
# all functions will return 0 if they successfully complete the argument
|
||||
# (or establish there is no need or no way to complete), and something
|
||||
# other than 0 if that's not the case
|
||||
|
||||
# complete arguments to global options
|
||||
_completeGlobalOptArgs() {
|
||||
# arguments to options that take paths as arguments: complete paths
|
||||
for el in ${path_arg_global_opts}; do
|
||||
if [[ ${prev} == ${el} ]]; then
|
||||
COMPREPLY=( $(compgen -f -- ${cur}) )
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
|
||||
# arguments to options that take generic arguments: don't complete
|
||||
for el in ${generic_arg_global_opts}; do
|
||||
if [[ ${prev} == ${el} ]]; then
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
return 1
|
||||
}
|
||||
|
||||
# complete root subcommands and options
|
||||
_completeRoot() {
|
||||
# if we're completing an option
|
||||
if [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${global_opts}" -- ${cur}) )
|
||||
return 0
|
||||
fi
|
||||
# complete root commands
|
||||
COMPREPLY=( $(compgen -W "${root_commands}" -- ${cur}) )
|
||||
return 0
|
||||
}
|
||||
|
||||
# complete names of Compose services
|
||||
_completeServiceNames() {
|
||||
# ideally we should complete service names,
|
||||
# but parsing the compose spec file in the
|
||||
# completion script is quite complex
|
||||
return 0
|
||||
}
|
||||
|
||||
# complete commands to run inside containers
|
||||
_completeCommand() {
|
||||
# we would need to complete commands to run inside
|
||||
# a container
|
||||
return 0
|
||||
}
|
||||
|
||||
|
||||
# complete the arguments for `podman-compose up` and return 0
|
||||
_completeUpArgs() {
|
||||
up_opts="${help_opts} -d --detach --no-color --quiet-pull --no-deps --force-recreate --always-recreate-deps --no-recreate --no-build --no-start --build --abort-on-container-exit -t --timeout -V --renew-anon-volumes --remove-orphans --scale --exit-code-from --pull --pull-always --build-arg --no-cache"
|
||||
if [[ ${prev} == "--scale" || ${prev} == "-t" || ${prev} == "--timeout" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${up_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose exec` and return 0
|
||||
_completeExecArgs() {
|
||||
exec_opts="${help_opts} -d --detach --privileged -u --user -T --index -e --env -w --workdir"
|
||||
if [[ ${prev} == "-u" || ${prev} == "--user" || ${prev} == "--index" || ${prev} == "-e" || ${prev} == "--env" || ${prev} == "-w" || ${prev} == "--workdir" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${exec_opts}" -- ${cur}) )
|
||||
return 0
|
||||
elif [[ ${comp_cword_adj} -eq 2 ]]; then
|
||||
# complete service name
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
elif [[ ${comp_cword_adj} -eq 3 ]]; then
|
||||
_completeCommand
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
}
|
||||
|
||||
|
||||
# complete the arguments for `podman-compose down` and return 0
|
||||
_completeDownArgs() {
|
||||
down_opts="${help_opts} -v --volumes -t --timeout --remove-orphans"
|
||||
if [[ ${prev} == "-t" || ${prev} == "--timeout" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${down_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
}
|
||||
|
||||
|
||||
# complete the arguments for `podman-compose build` and return 0
|
||||
_completeBuildArgs() {
|
||||
build_opts="${help_opts} --pull --pull-always --build-arg --no-cache"
|
||||
if [[ ${prev} == "--build-arg" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${build_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose logs` and return 0
|
||||
_completeLogsArgs() {
|
||||
logs_opts="${help_opts} -f --follow -l --latest -n --names --since -t --timestamps --tail --until"
|
||||
if [[ ${prev} == "--since" || ${prev} == "--tail" || ${prev} == "--until" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${logs_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose ps` and return 0
|
||||
_completePsArgs() {
|
||||
ps_opts="${help_opts} -q --quiet"
|
||||
if [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${ps_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose pull` and return 0
|
||||
_completePullArgs() {
|
||||
pull_opts="${help_opts} --force-local"
|
||||
if [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${pull_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose push` and return 0
|
||||
_completePushArgs() {
|
||||
push_opts="${help_opts} --ignore-push-failures"
|
||||
if [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${push_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose restart` and return 0
|
||||
_completeRestartArgs() {
|
||||
restart_opts="${help_opts} -t --timeout"
|
||||
if [[ ${prev} == "-t" || ${prev} == "--timeout" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${restart_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose stop` and return 0
|
||||
_completeStopArgs() {
|
||||
stop_opts="${help_opts} -t --timeout"
|
||||
if [[ ${prev} == "-t" || ${prev} == "--timeout" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${stop_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose start` and return 0
|
||||
_completeStartArgs() {
|
||||
start_opts="${help_opts}"
|
||||
if [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${start_opts}" -- ${cur}) )
|
||||
return 0
|
||||
else
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
}
|
||||
|
||||
# complete the arguments for `podman-compose run` and return 0
|
||||
_completeRunArgs() {
|
||||
run_opts="${help_opts} -d --detach --privileged -u --user -T --index -e --env -w --workdir"
|
||||
if [[ ${prev} == "-u" || ${prev} == "--user" || ${prev} == "--index" || ${prev} == "-e" || ${prev} == "--env" || ${prev} == "-w" || ${prev} == "--workdir" ]]; then
|
||||
return 0
|
||||
elif [[ ${cur} == -* ]]; then
|
||||
COMPREPLY=( $(compgen -W "${run_opts}" -- ${cur}) )
|
||||
return 0
|
||||
elif [[ ${comp_cword_adj} -eq 2 ]]; then
|
||||
# complete service name
|
||||
_completeServiceNames
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
elif [[ ${comp_cword_adj} -eq 3 ]]; then
|
||||
_completeCommand
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
|
||||
_podmanCompose() {
|
||||
cur="${COMP_WORDS[COMP_CWORD]}"
|
||||
prev="${COMP_WORDS[COMP_CWORD-1]}"
|
||||
root_commands="help version pull push build up down ps run exec start stop restart logs"
|
||||
|
||||
# options to output help text (used as global and subcommand options)
|
||||
help_opts="-h --help"
|
||||
|
||||
# global options that don't take additional arguments
|
||||
basic_global_opts="${help_opts} -v --no-ansi --no-cleanup --dry-run"
|
||||
|
||||
# global options that take paths as arguments
|
||||
path_arg_global_opts="-f --file --podman-path"
|
||||
path_arg_global_opts_array=($arg_global_opts)
|
||||
|
||||
# global options that take arguments that are not files
|
||||
generic_arg_global_opts="-p --project-name --podman-path --podman-args --podman-pull-args --podman-push-args --podman-build-args --podman-inspect-args --podman-run-args --podman-start-args --podman-stop-args --podman-rm-args --podman-volume-args"
|
||||
generic_arg_global_opts_array=($generic_arg_global_opts)
|
||||
|
||||
# all global options that take arguments
|
||||
arg_global_opts="${path_arg_global_opts} ${generic_arg_global_opts}"
|
||||
arg_global_opts_array=($arg_global_opts)
|
||||
|
||||
# all global options
|
||||
global_opts="${basic_global_opts} ${arg_global_opts}"
|
||||
|
||||
chosen_root_command=""
|
||||
|
||||
|
||||
_completeGlobalOptArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
|
||||
# computing comp_cword_adj, which truthfully tells us how deep in the subcommands tree we are
|
||||
# additionally, set the chosen_root_command if possible
|
||||
comp_cword_adj=${COMP_CWORD}
|
||||
if [[ ${COMP_CWORD} -ge 2 ]]; then
|
||||
skip_next="no"
|
||||
for el in ${COMP_WORDS[@]}; do
|
||||
# if the user has asked for help text there's no need to complete further
|
||||
if [[ ${el} == "-h" || ${el} == "--help" ]]; then
|
||||
return 0
|
||||
fi
|
||||
if [[ ${skip_next} == "yes" ]]; then
|
||||
let "comp_cword_adj--"
|
||||
skip_next="no"
|
||||
continue
|
||||
fi
|
||||
if [[ ${el} == -* && ${el} != ${cur} ]]; then
|
||||
let "comp_cword_adj--"
|
||||
|
||||
for opt in ${arg_global_opts_array[@]}; do
|
||||
if [[ ${el} == ${opt} ]]; then
|
||||
skip_next="yes"
|
||||
fi
|
||||
done
|
||||
elif [[ ${el} != ${cur} && ${el} != ${COMP_WORDS[0]} && ${chosen_root_command} == "" ]]; then
|
||||
chosen_root_command=${el}
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
if [[ ${comp_cword_adj} -eq 1 ]]; then
|
||||
_completeRoot
|
||||
|
||||
# Given that we check the value of comp_cword_adj outside
|
||||
# of it, at the moment _completeRoot should always return
|
||||
# 0, this is just here in case changes are made. The same
|
||||
# will apply to similar functions below
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
case $chosen_root_command in
|
||||
up)
|
||||
_completeUpArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
down)
|
||||
_completeDownArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
exec)
|
||||
_completeExecArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
build)
|
||||
_completeBuildArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
logs)
|
||||
_completeLogsArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
ps)
|
||||
_completePsArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
pull)
|
||||
_completePullArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
push)
|
||||
_completePushArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
restart)
|
||||
_completeRestartArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
start)
|
||||
_completeStartArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
stop)
|
||||
_completeStopArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
run)
|
||||
_completeRunArgs
|
||||
if [[ $? -eq 0 ]]; then
|
||||
return 0
|
||||
fi
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
complete -F _podmanCompose podman-compose
|
33  docs/Changelog-1.1.0.md  (Normal file)
@@ -0,0 +1,33 @@
|
||||
Version v1.1.0 (2024-04-17)
|
||||
===========================
|
||||
|
||||
Bug fixes
|
||||
---------
|
||||
|
||||
- Fixed support for values with equals sign in `-e` argument of `run` and `exec` commands.
|
||||
- Fixed duplicate arguments being emitted in `stop` and `restart` commands.
|
||||
- Removed extraneous debug output. `--verbose` flag has been added to preserve verbose output.
|
||||
- Links aliases are now added to service aliases.
|
||||
- Fixed image build process to use defined environmental variables.
|
||||
- Empty list is now allowed to be `COMMAND` and `ENTRYPOINT`.
|
||||
- Environment files are now resolved relative to current working directory.
|
||||
- Exit code of container build is now preserved as return code of `build` command.
|
||||
|
||||
New features
|
||||
------------
|
||||
|
||||
- Added support for `uidmap`, `gidmap`, `http_proxy` and `runtime` service configuration keys.
|
||||
- Added support for `enable_ipv6` network configuration key.
|
||||
- Added `--parallel` option to support parallel pulling and building of images.
|
||||
- Implemented support for maps in `sysctls` container configuration key.
|
||||
- Implemented `stats` command.
|
||||
- Added `--no-normalize` flag to `config` command.
|
||||
- Added support for `include` global configuration key.
|
||||
- Added support for `build` command.
|
||||
- Added support to start containers with multiple networks.
|
||||
- Added support for `profile` argument.
|
||||
- Added support for starting podman in existing network namespace.
|
||||
- Added IPAM driver support.
|
||||
- Added support for file secrets being passed to `podman build` via `--secret` argument.
|
||||
- Added support for multiple networks with separately specified IP and MAC address.
|
||||
- Added support for `service.build.ulimits` when building image.
|
40  docs/Changelog-1.2.0.md  (Normal file)
@@ -0,0 +1,40 @@
|
||||
Version v1.2.0 (2024-06-26)
|
||||
===========================
|
||||
|
||||
Bug fixes
|
||||
---------
|
||||
|
||||
- Fixed handling of `--in-pod` argument. Previously it was hard to provide false value to it.
|
||||
- podman-compose no longer creates pods when registering systemd unit.
|
||||
- Fixed warning `RuntimeWarning: coroutine 'create_pods' was never awaited`
|
||||
- Fixed error when setting up IPAM network with default driver.
|
||||
- Fixed support for having list and dictionary `depends_on` sections in related compose files.
|
||||
- Fixed logging of failed build message.
|
||||
- Fixed support for multiple entries in `include` section.
|
||||
- Fixed environment variable precedence order.
|
||||
|
||||
Changes
|
||||
-------
|
||||
|
||||
- `x-podman` dictionary in container root has been migrated to `x-podman.*` fields in container root.
|
||||
|
||||
New features
|
||||
------------
|
||||
|
||||
- Added support for `--publish` in `podman-compose run`.
|
||||
- Added support for Podman external root filesystem management (`--rootfs` option).
|
||||
- Added support for `podman-compose images` command.
|
||||
- Added support for `env_file` being configured via dictionaries.
|
||||
- Added support for enabling GPU access.
|
||||
- Added support for selinux in verbose mount specification.
|
||||
- Added support for `additional_contexts` section.
|
||||
- Added support for multi-line environment files.
|
||||
- Added support for passing contents of `podman-compose.yml` via stdin.
|
||||
- Added support for specifying the value for `--in-pod` setting in `podman-compose.yml` file.
|
||||
- Added support for environmental secrets.
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
- Added instructions on how to install podman-compose on Homebrew.
|
||||
- Added explanation that netavark is an alternative to dnsname plugin
|
111  docs/Extensions.md  (Normal file)
@@ -0,0 +1,111 @@
|
||||
# Podman specific extensions to the docker-compose format
|
||||
|
||||
Podman-compose supports the following extension to the docker-compose format. These extensions
|
||||
are generally specified under fields with "x-podman" prefix in the compose file.
|
||||
|
||||
## Container management
|
||||
|
||||
The following extension keys are available under container configuration:
|
||||
|
||||
* `x-podman.uidmap` - Run the container in a new user namespace using the supplied UID mapping.
|
||||
|
||||
* `x-podman.gidmap` - Run the container in a new user namespace using the supplied GID mapping.
|
||||
|
||||
* `x-podman.rootfs` - Run the container without requiring any image management; the rootfs of the
|
||||
container is assumed to be managed externally.
|
||||
|
||||
For example, the following docker-compose.yml allows running a podman container with externally managed rootfs.
|
||||
```yml
|
||||
version: "3"
|
||||
services:
|
||||
my_service:
|
||||
command: ["/bin/busybox"]
|
||||
x-podman.rootfs: "/path/to/rootfs"
|
||||
```
|
||||
|
||||
For explanations of these extensions, please refer to the [Podman Documentation](https://docs.podman.io/).
|
||||
|
||||
|
||||
## Per-network MAC-addresses
|
||||
|
||||
Generic docker-compose files support specification of the MAC address on the container level. If the
|
||||
container has multiple network interfaces, the specified MAC address is applied to the first
|
||||
specified network.
|
||||
|
||||
Podman-compose in addition supports the specification of MAC addresses on a per-network basis. This
|
||||
is done by adding a `x-podman.mac_address` key to the network configuration in the container. The
|
||||
value of the `x-podman.mac_address` key is the MAC address to be used for the network interface.
|
||||
|
||||
Specifying a MAC address for the container and for individual networks at the same time is not
|
||||
supported.
|
||||
|
||||
Example:
|
||||
|
||||
```yaml
|
||||
---
|
||||
version: "3"
|
||||
|
||||
networks:
|
||||
net0:
|
||||
driver: "bridge"
|
||||
ipam:
|
||||
config:
|
||||
- subnet: "192.168.0.0/24"
|
||||
net1:
|
||||
driver: "bridge"
|
||||
ipam:
|
||||
config:
|
||||
- subnet: "192.168.1.0/24"
|
||||
|
||||
services:
|
||||
webserver:
|
||||
image: "busybox"
|
||||
command: ["/bin/busybox", "httpd", "-f", "-h", "/etc", "-p", "8001"]
|
||||
networks:
|
||||
net0:
|
||||
ipv4_address: "192.168.0.10"
|
||||
x-podman.mac_address: "02:aa:aa:aa:aa:aa"
|
||||
net1:
|
||||
ipv4_address: "192.168.1.10"
|
||||
x-podman.mac_address: "02:bb:bb:bb:bb:bb"
|
||||
```
|
||||
|
||||
## Podman-specific network modes
|
||||
|
||||
Generic docker-compose supports the following values for `network-mode` for a container:
|
||||
|
||||
- `bridge`
|
||||
- `host`
|
||||
- `none`
|
||||
- `service`
|
||||
- `container`
|
||||
|
||||
In addition, podman-compose supports the following podman-specific values for `network-mode`:
|
||||
|
||||
- `slirp4netns[:<options>,...]`
|
||||
- `ns:<options>`
|
||||
- `pasta[:<options>,...]`
|
||||
- `private`
|
||||
|
||||
The options to the network modes are passed to the `--network` option of the `podman create` command
|
||||
as-is.
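A minimal sketch of one of these modes in a compose file; the `port_handler` value is only an illustrative slirp4netns option, not something podman-compose requires:

```yml
version: "3"
services:
  cont:
    image: busybox
    command: ["/bin/busybox", "httpd", "-f", "-p", "8080"]
    # passed through as `podman create --network slirp4netns:port_handler=slirp4netns`
    network_mode: "slirp4netns:port_handler=slirp4netns"
```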
|
||||
|
||||
|
||||
## Custom pods management
|
||||
|
||||
Podman-compose can have containers in pods. This can be controlled by extension key x-podman in_pod.
|
||||
It allows providing custom value for --in-pod and is especially relevant when --userns has to be set.
|
||||
|
||||
For example, the following docker-compose.yml allows using userns_mode by overriding the default
|
||||
value of --in-pod (unless it was specifically provided by "--in-pod=True" in command line interface).
|
||||
```yml
|
||||
version: "3"
|
||||
services:
|
||||
cont:
|
||||
image: nopush/podman-compose-test
|
||||
userns_mode: keep-id:uid=1000
|
||||
command: ["dumb-init", "/bin/busybox", "httpd", "-f", "-p", "8080"]
|
||||
|
||||
x-podman:
|
||||
in_pod: false
|
||||
```
|
37  examples/awx17/README.md  (Normal file)
@@ -0,0 +1,37 @@
|
||||
# AWX Compose
|
||||
|
||||
the directory roles is taken from [here](https://github.com/ansible/awx/tree/17.1.0/installer/roles/local_docker)
|
||||
|
||||
also look at https://github.com/ansible/awx/tree/17.1.0/tools/docker-compose
|
||||
|
||||
```
|
||||
mkdir deploy awx17
|
||||
ansible localhost \
|
||||
-e host_port=8080 \
|
||||
-e awx_secret_key='awx,secret.123' \
|
||||
-e secret_key='awx,secret.123' \
|
||||
-e admin_user='admin' \
|
||||
-e admin_password='admin' \
|
||||
-e pg_password='awx,123.' \
|
||||
-e pg_username='awx' \
|
||||
-e pg_database='awx' \
|
||||
-e pg_port='5432' \
|
||||
-e redis_image="docker.io/library/redis:6-alpine" \
|
||||
-e postgres_data_dir="./data/pg" \
|
||||
-e compose_start_containers=false \
|
||||
-e dockerhub_base='docker.io/ansible' \
|
||||
-e awx_image='docker.io/ansible/awx' \
|
||||
-e awx_version='17.1.0' \
|
||||
-e dockerhub_version='17.1.0' \
|
||||
-e docker_deploy_base_path=$PWD/deploy \
|
||||
-e docker_compose_dir=$PWD/awx17 \
|
||||
-e awx_task_hostname=awx \
|
||||
-e awx_web_hostname=awxweb \
|
||||
-m include_role -a name=local_docker
|
||||
cp awx17/docker-compose.yml awx17/docker-compose.yml.orig
|
||||
sed -i -re "s#- \"$PWD/awx17/(.*):/#- \"./\1:/#" awx17/docker-compose.yml
|
||||
cd awx17
|
||||
podman-compose run --rm --service-ports task awx-manage migrate --no-input
|
||||
podman-compose up -d
|
||||
```
|
||||
|
11  examples/awx17/roles/local_docker/defaults/main.yml  (Normal file)
@@ -0,0 +1,11 @@
|
||||
---
|
||||
dockerhub_version: "{{ lookup('file', playbook_dir + '/../VERSION') }}"
|
||||
|
||||
awx_image: "awx"
|
||||
redis_image: "redis"
|
||||
|
||||
postgresql_version: "12"
|
||||
postgresql_image: "postgres:{{postgresql_version}}"
|
||||
|
||||
compose_start_containers: true
|
||||
upgrade_postgres: false
|
74  examples/awx17/roles/local_docker/tasks/compose.yml  (Normal file)
@@ -0,0 +1,74 @@
|
||||
---
|
||||
- name: Create {{ docker_compose_dir }} directory
|
||||
file:
|
||||
path: "{{ docker_compose_dir }}"
|
||||
state: directory
|
||||
|
||||
- name: Create Redis socket directory
|
||||
file:
|
||||
path: "{{ docker_compose_dir }}/redis_socket"
|
||||
state: directory
|
||||
mode: 0777
|
||||
|
||||
- name: Create Docker Compose Configuration
|
||||
template:
|
||||
src: "{{ item.file }}.j2"
|
||||
dest: "{{ docker_compose_dir }}/{{ item.file }}"
|
||||
mode: "{{ item.mode }}"
|
||||
loop:
|
||||
- file: environment.sh
|
||||
mode: "0600"
|
||||
- file: credentials.py
|
||||
mode: "0600"
|
||||
- file: docker-compose.yml
|
||||
mode: "0600"
|
||||
- file: nginx.conf
|
||||
mode: "0600"
|
||||
- file: redis.conf
|
||||
mode: "0664"
|
||||
register: awx_compose_config
|
||||
|
||||
- name: Render SECRET_KEY file
|
||||
copy:
|
||||
content: "{{ secret_key }}"
|
||||
dest: "{{ docker_compose_dir }}/SECRET_KEY"
|
||||
mode: 0600
|
||||
register: awx_secret_key
|
||||
|
||||
- block:
|
||||
- name: Remove AWX containers before migrating postgres so that the old postgres container does not get used
|
||||
docker_compose:
|
||||
project_src: "{{ docker_compose_dir }}"
|
||||
state: absent
|
||||
ignore_errors: true
|
||||
|
||||
- name: Run migrations in task container
|
||||
shell: docker-compose run --rm --service-ports task awx-manage migrate --no-input
|
||||
args:
|
||||
chdir: "{{ docker_compose_dir }}"
|
||||
|
||||
- name: Start the containers
|
||||
docker_compose:
|
||||
project_src: "{{ docker_compose_dir }}"
|
||||
restarted: "{{ awx_compose_config is changed or awx_secret_key is changed }}"
|
||||
register: awx_compose_start
|
||||
|
||||
- name: Update CA trust in awx_web container
|
||||
command: docker exec awx_web '/usr/bin/update-ca-trust'
|
||||
when: awx_compose_config.changed or awx_compose_start.changed
|
||||
|
||||
- name: Update CA trust in awx_task container
|
||||
command: docker exec awx_task '/usr/bin/update-ca-trust'
|
||||
when: awx_compose_config.changed or awx_compose_start.changed
|
||||
|
||||
- name: Wait for launch script to create user
|
||||
wait_for:
|
||||
timeout: 10
|
||||
delegate_to: localhost
|
||||
|
||||
- name: Create Preload data
|
||||
command: docker exec awx_task bash -c "/usr/bin/awx-manage create_preload_data"
|
||||
when: create_preload_data|bool
|
||||
register: cdo
|
||||
changed_when: "'added' in cdo.stdout"
|
||||
when: compose_start_containers|bool
|
15  examples/awx17/roles/local_docker/tasks/main.yml  (Normal file)
@@ -0,0 +1,15 @@
|
||||
---
|
||||
- name: Generate broadcast websocket secret
|
||||
set_fact:
|
||||
broadcast_websocket_secret: "{{ lookup('password', '/dev/null length=128') }}"
|
||||
run_once: true
|
||||
no_log: true
|
||||
when: broadcast_websocket_secret is not defined
|
||||
|
||||
- import_tasks: upgrade_postgres.yml
|
||||
when:
|
||||
- postgres_data_dir is defined
|
||||
- pg_hostname is not defined
|
||||
|
||||
- import_tasks: set_image.yml
|
||||
- import_tasks: compose.yml
|
46  examples/awx17/roles/local_docker/tasks/set_image.yml  (Normal file)
@@ -0,0 +1,46 @@
|
||||
---
|
||||
- name: Manage AWX Container Images
|
||||
block:
|
||||
- name: Export Docker awx image if it isn't local and there isn't a registry defined
|
||||
docker_image:
|
||||
name: "{{ awx_image }}"
|
||||
tag: "{{ awx_version }}"
|
||||
archive_path: "{{ awx_local_base_config_path|default('/tmp') }}/{{ awx_image }}_{{ awx_version }}.tar"
|
||||
when: inventory_hostname != "localhost" and docker_registry is not defined
|
||||
delegate_to: localhost
|
||||
|
||||
- name: Set docker base path
|
||||
set_fact:
|
||||
docker_deploy_base_path: "{{ awx_base_path|default('/tmp') }}/docker_deploy"
|
||||
when: ansible_connection != "local" and docker_registry is not defined
|
||||
|
||||
- name: Ensure directory exists
|
||||
file:
|
||||
path: "{{ docker_deploy_base_path }}"
|
||||
state: directory
|
||||
when: ansible_connection != "local" and docker_registry is not defined
|
||||
|
||||
- name: Copy awx image to docker execution
|
||||
copy:
|
||||
src: "{{ awx_local_base_config_path|default('/tmp') }}/{{ awx_image }}_{{ awx_version }}.tar"
|
||||
dest: "{{ docker_deploy_base_path }}/{{ awx_image }}_{{ awx_version }}.tar"
|
||||
when: ansible_connection != "local" and docker_registry is not defined
|
||||
|
||||
- name: Load awx image
|
||||
docker_image:
|
||||
name: "{{ awx_image }}"
|
||||
tag: "{{ awx_version }}"
|
||||
load_path: "{{ docker_deploy_base_path }}/{{ awx_image }}_{{ awx_version }}.tar"
|
||||
timeout: 300
|
||||
when: ansible_connection != "local" and docker_registry is not defined
|
||||
|
||||
- name: Set full image path for local install
|
||||
set_fact:
|
||||
awx_docker_actual_image: "{{ awx_image }}:{{ awx_version }}"
|
||||
when: docker_registry is not defined
|
||||
when: dockerhub_base is not defined
|
||||
|
||||
- name: Set DockerHub Image Paths
|
||||
set_fact:
|
||||
awx_docker_actual_image: "{{ dockerhub_base }}/awx:{{ dockerhub_version }}"
|
||||
when: dockerhub_base is defined
|
64  examples/awx17/roles/local_docker/tasks/upgrade_postgres.yml  (Normal file)
@@ -0,0 +1,64 @@
|
||||
---
|
||||
|
||||
- name: Create {{ postgres_data_dir }} directory
|
||||
file:
|
||||
path: "{{ postgres_data_dir }}"
|
||||
state: directory
|
||||
|
||||
- name: Get full path of postgres data dir
|
||||
shell: "echo {{ postgres_data_dir }}"
|
||||
register: fq_postgres_data_dir
|
||||
|
||||
- name: Register temporary docker container
|
||||
set_fact:
|
||||
container_command: "docker run --rm -v '{{ fq_postgres_data_dir.stdout }}:/var/lib/postgresql' centos:8 bash -c "
|
||||
|
||||
- name: Check for existing Postgres data (run from inside the container for access to file)
|
||||
shell:
|
||||
cmd: |
|
||||
{{ container_command }} "[[ -f /var/lib/postgresql/10/data/PG_VERSION ]] && echo 'exists'"
|
||||
register: pg_version_file
|
||||
ignore_errors: true
|
||||
|
||||
- name: Record Postgres version
|
||||
shell: |
|
||||
{{ container_command }} "cat /var/lib/postgresql/10/data/PG_VERSION"
|
||||
register: old_pg_version
|
||||
when: pg_version_file is defined and pg_version_file.stdout == 'exists'
|
||||
|
||||
- name: Determine whether to upgrade postgres
|
||||
set_fact:
|
||||
upgrade_postgres: "{{ old_pg_version.stdout == '10' }}"
|
||||
when: old_pg_version.changed
|
||||
|
||||
- name: Set up new postgres paths pre-upgrade
|
||||
shell: |
|
||||
{{ container_command }} "mkdir -p /var/lib/postgresql/12/data/"
|
||||
when: upgrade_postgres | bool
|
||||
|
||||
- name: Stop AWX before upgrading postgres
|
||||
docker_compose:
|
||||
project_src: "{{ docker_compose_dir }}"
|
||||
stopped: true
|
||||
when: upgrade_postgres | bool
|
||||
|
||||
- name: Upgrade Postgres
|
||||
shell: |
|
||||
docker run --rm \
|
||||
-v {{ postgres_data_dir }}/10/data:/var/lib/postgresql/10/data \
|
||||
-v {{ postgres_data_dir }}/12/data:/var/lib/postgresql/12/data \
|
||||
-e PGUSER={{ pg_username }} -e POSTGRES_INITDB_ARGS="-U {{ pg_username }}" \
|
||||
tianon/postgres-upgrade:10-to-12 --username={{ pg_username }}
|
||||
when: upgrade_postgres | bool
|
||||
|
||||
- name: Copy old pg_hba.conf
|
||||
shell: |
|
||||
{{ container_command }} "cp /var/lib/postgresql/10/data/pg_hba.conf /var/lib/postgresql/12/data/pg_hba.conf"
|
||||
when: upgrade_postgres | bool
|
||||
|
||||
- name: Remove old data directory
|
||||
shell: |
|
||||
{{ container_command }} "rm -rf /var/lib/postgresql/10/data"
|
||||
when:
|
||||
- upgrade_postgres | bool
|
||||
- compose_start_containers|bool
|
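The upgrade decision above hinges on the `PG_VERSION` marker inside the old data directory. A minimal sketch of that check, assuming the data directory is readable locally rather than through a throwaway container (the path in the example call is made up):

```
from pathlib import Path

def needs_pg_upgrade(postgres_data_dir: str) -> bool:
    version_file = Path(postgres_data_dir) / "10" / "data" / "PG_VERSION"
    if not version_file.is_file():
        return False  # no old cluster found, nothing to migrate
    # PG_VERSION holds the major version of the cluster that created the data dir
    return version_file.read_text().strip() == "10"

print(needs_pg_upgrade("/var/lib/awx/pgdocker"))  # illustrative path
```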
@@ -0,0 +1,13 @@
DATABASES = {
    'default': {
        'ATOMIC_REQUESTS': True,
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': "{{ pg_database }}",
        'USER': "{{ pg_username }}",
        'PASSWORD': "{{ pg_password }}",
        'HOST': "{{ pg_hostname | default('postgres') }}",
        'PORT': "{{ pg_port }}",
    }
}

BROADCAST_WEBSOCKET_SECRET = "{{ broadcast_websocket_secret | b64encode }}"
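This template renders into Django `DATABASES` settings at install time. A small rendering sketch with jinja2, using placeholder values rather than real credentials, shows how the `default('postgres')` filter fills in the host when `pg_hostname` is unset:

```
from jinja2 import Template

tmpl = Template("HOST: \"{{ pg_hostname | default('postgres') }}\", PORT: \"{{ pg_port }}\"")
print(tmpl.render(pg_port=5432))
# HOST: "postgres", PORT: "5432"
print(tmpl.render(pg_hostname="db.example.internal", pg_port=5432))  # placeholder hostname
# HOST: "db.example.internal", PORT: "5432"
```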
@@ -0,0 +1,208 @@
|
||||
#jinja2: lstrip_blocks: True
|
||||
version: '2'
|
||||
services:
|
||||
|
||||
web:
|
||||
image: {{ awx_docker_actual_image }}
|
||||
container_name: awx_web
|
||||
depends_on:
|
||||
- redis
|
||||
{% if pg_hostname is not defined %}
|
||||
- postgres
|
||||
{% endif %}
|
||||
{% if (host_port is defined) or (host_port_ssl is defined) %}
|
||||
ports:
|
||||
{% if (host_port_ssl is defined) and (ssl_certificate is defined) %}
|
||||
- "{{ host_port_ssl }}:8053"
|
||||
{% endif %}
|
||||
{% if host_port is defined %}
|
||||
- "{{ host_port }}:8052"
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
hostname: {{ awx_web_hostname }}
|
||||
user: root
|
||||
restart: unless-stopped
|
||||
{% if (awx_web_container_labels is defined) and (',' in awx_web_container_labels) %}
|
||||
{% set awx_web_container_labels_list = awx_web_container_labels.split(',') %}
|
||||
labels:
|
||||
{% for awx_web_container_label in awx_web_container_labels_list %}
|
||||
- {{ awx_web_container_label }}
|
||||
{% endfor %}
|
||||
{% elif awx_web_container_labels is defined %}
|
||||
labels:
|
||||
- {{ awx_web_container_labels }}
|
||||
{% endif %}
|
||||
volumes:
|
||||
- supervisor-socket:/var/run/supervisor
|
||||
- rsyslog-socket:/var/run/awx-rsyslog/
|
||||
- rsyslog-config:/var/lib/awx/rsyslog/
|
||||
- "{{ docker_compose_dir }}/SECRET_KEY:/etc/tower/SECRET_KEY"
|
||||
- "{{ docker_compose_dir }}/environment.sh:/etc/tower/conf.d/environment.sh"
|
||||
- "{{ docker_compose_dir }}/credentials.py:/etc/tower/conf.d/credentials.py"
|
||||
- "{{ docker_compose_dir }}/nginx.conf:/etc/nginx/nginx.conf:ro"
|
||||
- "{{ docker_compose_dir }}/redis_socket:/var/run/redis/:rw"
|
||||
{% if project_data_dir is defined %}
|
||||
- "{{ project_data_dir +':/var/lib/awx/projects:rw' }}"
|
||||
{% endif %}
|
||||
{% if custom_venv_dir is defined %}
|
||||
- "{{ custom_venv_dir +':'+ custom_venv_dir +':rw' }}"
|
||||
{% endif %}
|
||||
{% if ca_trust_dir is defined %}
|
||||
- "{{ ca_trust_dir +':/etc/pki/ca-trust/source/anchors:ro' }}"
|
||||
{% endif %}
|
||||
{% if (ssl_certificate is defined) and (ssl_certificate_key is defined) %}
|
||||
- "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
|
||||
- "{{ ssl_certificate_key +':/etc/nginx/awxweb_key.pem:ro' }}"
|
||||
{% elif (ssl_certificate is defined) and (ssl_certificate_key is not defined) %}
|
||||
- "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
|
||||
{% endif %}
|
||||
{% if (awx_container_search_domains is defined) and (',' in awx_container_search_domains) %}
|
||||
{% set awx_container_search_domains_list = awx_container_search_domains.split(',') %}
|
||||
dns_search:
|
||||
{% for awx_container_search_domain in awx_container_search_domains_list %}
|
||||
- {{ awx_container_search_domain }}
|
||||
{% endfor %}
|
||||
{% elif awx_container_search_domains is defined %}
|
||||
dns_search: "{{ awx_container_search_domains }}"
|
||||
{% endif %}
|
||||
{% if (awx_alternate_dns_servers is defined) and (',' in awx_alternate_dns_servers) %}
|
||||
{% set awx_alternate_dns_servers_list = awx_alternate_dns_servers.split(',') %}
|
||||
dns:
|
||||
{% for awx_alternate_dns_server in awx_alternate_dns_servers_list %}
|
||||
- {{ awx_alternate_dns_server }}
|
||||
{% endfor %}
|
||||
{% elif awx_alternate_dns_servers is defined %}
|
||||
dns: "{{ awx_alternate_dns_servers }}"
|
||||
{% endif %}
|
||||
{% if (docker_compose_extra_hosts is defined) and (':' in docker_compose_extra_hosts) %}
|
||||
{% set docker_compose_extra_hosts_list = docker_compose_extra_hosts.split(',') %}
|
||||
extra_hosts:
|
||||
{% for docker_compose_extra_host in docker_compose_extra_hosts_list %}
|
||||
- "{{ docker_compose_extra_host }}"
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
environment:
|
||||
http_proxy: {{ http_proxy | default('') }}
|
||||
https_proxy: {{ https_proxy | default('') }}
|
||||
no_proxy: {{ no_proxy | default('') }}
|
||||
{% if docker_logger is defined %}
|
||||
logging:
|
||||
driver: {{ docker_logger }}
|
||||
{% endif %}
|
||||
|
||||
task:
|
||||
image: {{ awx_docker_actual_image }}
|
||||
container_name: awx_task
|
||||
depends_on:
|
||||
- redis
|
||||
- web
|
||||
{% if pg_hostname is not defined %}
|
||||
- postgres
|
||||
{% endif %}
|
||||
command: /usr/bin/launch_awx_task.sh
|
||||
hostname: {{ awx_task_hostname }}
|
||||
user: root
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- supervisor-socket:/var/run/supervisor
|
||||
- rsyslog-socket:/var/run/awx-rsyslog/
|
||||
- rsyslog-config:/var/lib/awx/rsyslog/
|
||||
- "{{ docker_compose_dir }}/SECRET_KEY:/etc/tower/SECRET_KEY"
|
||||
- "{{ docker_compose_dir }}/environment.sh:/etc/tower/conf.d/environment.sh"
|
||||
- "{{ docker_compose_dir }}/credentials.py:/etc/tower/conf.d/credentials.py"
|
||||
- "{{ docker_compose_dir }}/redis_socket:/var/run/redis/:rw"
|
||||
{% if project_data_dir is defined %}
|
||||
- "{{ project_data_dir +':/var/lib/awx/projects:rw' }}"
|
||||
{% endif %}
|
||||
{% if custom_venv_dir is defined %}
|
||||
- "{{ custom_venv_dir +':'+ custom_venv_dir +':rw' }}"
|
||||
{% endif %}
|
||||
{% if ca_trust_dir is defined %}
|
||||
- "{{ ca_trust_dir +':/etc/pki/ca-trust/source/anchors:ro' }}"
|
||||
{% endif %}
|
||||
{% if ssl_certificate is defined %}
|
||||
- "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
|
||||
{% endif %}
|
||||
{% if (awx_container_search_domains is defined) and (',' in awx_container_search_domains) %}
|
||||
{% set awx_container_search_domains_list = awx_container_search_domains.split(',') %}
|
||||
dns_search:
|
||||
{% for awx_container_search_domain in awx_container_search_domains_list %}
|
||||
- {{ awx_container_search_domain }}
|
||||
{% endfor %}
|
||||
{% elif awx_container_search_domains is defined %}
|
||||
dns_search: "{{ awx_container_search_domains }}"
|
||||
{% endif %}
|
||||
{% if (awx_alternate_dns_servers is defined) and (',' in awx_alternate_dns_servers) %}
|
||||
{% set awx_alternate_dns_servers_list = awx_alternate_dns_servers.split(',') %}
|
||||
dns:
|
||||
{% for awx_alternate_dns_server in awx_alternate_dns_servers_list %}
|
||||
- {{ awx_alternate_dns_server }}
|
||||
{% endfor %}
|
||||
{% elif awx_alternate_dns_servers is defined %}
|
||||
dns: "{{ awx_alternate_dns_servers }}"
|
||||
{% endif %}
|
||||
{% if (docker_compose_extra_hosts is defined) and (':' in docker_compose_extra_hosts) %}
|
||||
{% set docker_compose_extra_hosts_list = docker_compose_extra_hosts.split(',') %}
|
||||
extra_hosts:
|
||||
{% for docker_compose_extra_host in docker_compose_extra_hosts_list %}
|
||||
- "{{ docker_compose_extra_host }}"
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
environment:
|
||||
AWX_SKIP_MIGRATIONS: "1"
|
||||
http_proxy: {{ http_proxy | default('') }}
|
||||
https_proxy: {{ https_proxy | default('') }}
|
||||
no_proxy: {{ no_proxy | default('') }}
|
||||
SUPERVISOR_WEB_CONFIG_PATH: '/etc/supervisord.conf'
|
||||
|
||||
redis:
|
||||
image: {{ redis_image }}
|
||||
container_name: awx_redis
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
http_proxy: {{ http_proxy | default('') }}
|
||||
https_proxy: {{ https_proxy | default('') }}
|
||||
no_proxy: {{ no_proxy | default('') }}
|
||||
command: ["/usr/local/etc/redis/redis.conf"]
|
||||
volumes:
|
||||
- "{{ docker_compose_dir }}/redis.conf:/usr/local/etc/redis/redis.conf:ro"
|
||||
- "{{ docker_compose_dir }}/redis_socket:/var/run/redis/:rw"
|
||||
{% if docker_logger is defined %}
|
||||
logging:
|
||||
driver: {{ docker_logger }}
|
||||
{% endif %}
|
||||
|
||||
{% if pg_hostname is not defined %}
|
||||
postgres:
|
||||
image: {{ postgresql_image }}
|
||||
container_name: awx_postgres
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- "{{ postgres_data_dir }}/12/data/:/var/lib/postgresql/data:Z"
|
||||
environment:
|
||||
POSTGRES_USER: {{ pg_username }}
|
||||
POSTGRES_PASSWORD: {{ pg_password }}
|
||||
POSTGRES_DB: {{ pg_database }}
|
||||
http_proxy: {{ http_proxy | default('') }}
|
||||
https_proxy: {{ https_proxy | default('') }}
|
||||
no_proxy: {{ no_proxy | default('') }}
|
||||
{% if docker_logger is defined %}
|
||||
logging:
|
||||
driver: {{ docker_logger }}
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
|
||||
{% if docker_compose_subnet is defined %}
|
||||
networks:
|
||||
default:
|
||||
driver: bridge
|
||||
ipam:
|
||||
driver: default
|
||||
config:
|
||||
- subnet: {{ docker_compose_subnet }}
|
||||
{% endif %}
|
||||
|
||||
volumes:
|
||||
supervisor-socket:
|
||||
rsyslog-socket:
|
||||
rsyslog-config:
|
@@ -0,0 +1,10 @@
DATABASE_USER={{ pg_username|quote }}
DATABASE_NAME={{ pg_database|quote }}
DATABASE_HOST={{ pg_hostname|default('postgres')|quote }}
DATABASE_PORT={{ pg_port|default('5432')|quote }}
DATABASE_PASSWORD={{ pg_password|default('awxpass')|quote }}
{% if pg_admin_password is defined %}
DATABASE_ADMIN_PASSWORD={{ pg_admin_password|quote }}
{% endif %}
AWX_ADMIN_USER={{ admin_user|quote }}
AWX_ADMIN_PASSWORD={{ admin_password|quote }}
122 examples/awx17/roles/local_docker/templates/nginx.conf.j2 Normal file
@@ -0,0 +1,122 @@
|
||||
#user awx;
|
||||
|
||||
worker_processes 1;
|
||||
|
||||
pid /tmp/nginx.pid;
|
||||
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
|
||||
http {
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
server_tokens off;
|
||||
|
||||
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for"';
|
||||
|
||||
access_log /dev/stdout main;
|
||||
|
||||
map $http_upgrade $connection_upgrade {
|
||||
default upgrade;
|
||||
'' close;
|
||||
}
|
||||
|
||||
sendfile on;
|
||||
#tcp_nopush on;
|
||||
#gzip on;
|
||||
|
||||
upstream uwsgi {
|
||||
server 127.0.0.1:8050;
|
||||
}
|
||||
|
||||
upstream daphne {
|
||||
server 127.0.0.1:8051;
|
||||
}
|
||||
|
||||
{% if ssl_certificate is defined %}
|
||||
server {
|
||||
listen 8052 default_server;
|
||||
server_name _;
|
||||
|
||||
# Redirect all HTTP links to the matching HTTPS page
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
{%endif %}
|
||||
|
||||
server {
|
||||
{% if (ssl_certificate is defined) and (ssl_certificate_key is defined) %}
|
||||
listen 8053 ssl;
|
||||
|
||||
ssl_certificate /etc/nginx/awxweb.pem;
|
||||
ssl_certificate_key /etc/nginx/awxweb_key.pem;
|
||||
{% elif (ssl_certificate is defined) and (ssl_certificate_key is not defined) %}
|
||||
listen 8053 ssl;
|
||||
|
||||
ssl_certificate /etc/nginx/awxweb.pem;
|
||||
ssl_certificate_key /etc/nginx/awxweb.pem;
|
||||
{% else %}
|
||||
listen 8052 default_server;
|
||||
{% endif %}
|
||||
|
||||
# If you have a domain name, this is where to add it
|
||||
server_name _;
|
||||
keepalive_timeout 65;
|
||||
|
||||
# HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
|
||||
add_header Strict-Transport-Security max-age=15768000;
|
||||
|
||||
# Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)
|
||||
add_header X-Frame-Options "DENY";
|
||||
|
||||
location /nginx_status {
|
||||
stub_status on;
|
||||
access_log off;
|
||||
allow 127.0.0.1;
|
||||
deny all;
|
||||
}
|
||||
|
||||
location /static/ {
|
||||
alias /var/lib/awx/public/static/;
|
||||
}
|
||||
|
||||
location /favicon.ico { alias /var/lib/awx/public/static/favicon.ico; }
|
||||
|
||||
location /websocket {
|
||||
# Pass request to the upstream alias
|
||||
proxy_pass http://daphne;
|
||||
# Require http version 1.1 to allow for upgrade requests
|
||||
proxy_http_version 1.1;
|
||||
# We want proxy_buffering off for proxying to websockets.
|
||||
proxy_buffering off;
|
||||
# http://en.wikipedia.org/wiki/X-Forwarded-For
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
# enable this if you use HTTPS:
|
||||
proxy_set_header X-Forwarded-Proto https;
|
||||
# pass the Host: header from the client for the sake of redirects
|
||||
proxy_set_header Host $http_host;
|
||||
# We've set the Host header, so we don't need Nginx to muddle
|
||||
# about with redirects
|
||||
proxy_redirect off;
|
||||
# Depending on the request value, set the Upgrade and
|
||||
# connection headers
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $connection_upgrade;
|
||||
}
|
||||
|
||||
location / {
|
||||
# Add trailing / if missing
|
||||
rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;
|
||||
uwsgi_read_timeout 120s;
|
||||
uwsgi_pass uwsgi;
|
||||
include /etc/nginx/uwsgi_params;
|
||||
{%- if extra_nginx_include is defined %}
|
||||
include {{ extra_nginx_include }};
|
||||
{%- endif %}
|
||||
proxy_set_header X-Forwarded-Port 443;
|
||||
uwsgi_param HTTP_X_FORWARDED_PORT 443;
|
||||
}
|
||||
}
|
||||
}
|
@@ -0,0 +1,4 @@
unixsocket /var/run/redis/redis.sock
unixsocketperm 660
port 0
bind 127.0.0.1
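With `port 0`, redis only listens on the unix socket that the web and task containers share through the `redis_socket` volume. A quick connectivity check from inside one of those containers might look like this (redis-py is assumed to be available; the socket path matches the config above):

```
import redis

r = redis.Redis(unix_socket_path="/var/run/redis/redis.sock")
print(r.ping())  # True if the socket is reachable
```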
17 examples/azure-vote/README.md Normal file
@@ -0,0 +1,17 @@
# Azure Vote Example

This example has two containers:

* backend: `redis` used as storage
* frontend: supervisord, nginx, and uwsgi/python

```
echo "HOST_PORT=8080" > .env
podman-compose up
```

After running the commands above, open your browser on the host port you picked, e.g.
[http://localhost:8080/](http://localhost:8080/)
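A tiny smoke test for the stack, assuming `HOST_PORT=8080` as in the README:

```
from urllib.request import urlopen

with urlopen("http://localhost:8080/", timeout=5) as resp:
    print(resp.status)  # expect 200 once the frontend is up
```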
16 examples/azure-vote/docker-compose.yaml Normal file
@@ -0,0 +1,16 @@
---
# from https://github.com/Azure-Samples/azure-voting-app-redis/blob/master/docker-compose.yaml
version: '3'
services:
  azure-vote-back:
    image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
    container_name: azure-vote-back
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"
  azure-vote-front:
    image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
    environment:
      REDIS: azure-vote-back
    ports:
      - "${HOST_PORT:-8080}:80"
31 examples/echo/README.md Normal file
@@ -0,0 +1,31 @@
# Echo Service example

```
podman-compose up
```

Test the service with `curl` like this:

```
$ curl -X POST -d "foobar" http://localhost:8080/; echo

CLIENT VALUES:
client_address=10.89.31.2
command=POST
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
content-length=6
content-type=application/x-www-form-urlencoded
host=localhost:8080
user-agent=curl/7.76.1
BODY:
foobar
```
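The same request can also be made from Python's standard library, which is handy for scripting the check:

```
from urllib.request import Request, urlopen

req = Request("http://localhost:8080/", data=b"foobar", method="POST")
with urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # echoes back the request details and body
```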
8 examples/echo/docker-compose.yaml Normal file
@@ -0,0 +1,8 @@
---
version: '3'
services:
  web:
    image: k8s.gcr.io/echoserver:1.4
    ports:
      - "${HOST_PORT:-8080}:8080"
12 examples/hello-app-redis/README.md Normal file
@@ -0,0 +1,12 @@
# GCR Hello App Redis

A 6-node redis cluster using [Bitnami](https://github.com/bitnami/bitnami-docker-redis-cluster)
with a [simple hit counter](https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/tree/main/hello-app-redis) that persists on that redis cluster.

```
podman-compose up
```

Then open your browser at [http://localhost:8080/](http://localhost:8080/)
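To see the counter persist across requests, hit the endpoint a few times (port 8080 is the compose default):

```
from urllib.request import urlopen

for _ in range(3):
    with urlopen("http://localhost:8080/", timeout=5) as resp:
        print(resp.read().decode().strip())  # the reported count should keep increasing
```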
67 examples/hello-app-redis/docker-compose.yaml Normal file
@@ -0,0 +1,67 @@
|
||||
---
|
||||
version: '3'
|
||||
volumes:
|
||||
redis-node1-data:
|
||||
redis-node2-data:
|
||||
redis-node3-data:
|
||||
redis-node4-data:
|
||||
redis-node5-data:
|
||||
redis-data:
|
||||
services:
|
||||
web:
|
||||
image: gcr.io/google-samples/hello-app-redis:1.0
|
||||
depends_on:
|
||||
- redis-cluster
|
||||
ports:
|
||||
- "${HOST_PORT:-8080}:8080"
|
||||
redis-node1:
|
||||
image: docker.io/bitnami/redis-cluster:6.2
|
||||
volumes:
|
||||
- redis-node1-data:/bitnami/redis/data
|
||||
environment:
|
||||
- ALLOW_EMPTY_PASSWORD=yes
|
||||
- REDIS_NODES=redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-cluster
|
||||
redis-node2:
|
||||
image: docker.io/bitnami/redis-cluster:6.2
|
||||
volumes:
|
||||
- redis-node2-data:/bitnami/redis/data
|
||||
environment:
|
||||
- ALLOW_EMPTY_PASSWORD=yes
|
||||
- REDIS_NODES=redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-cluster
|
||||
redis-node3:
|
||||
image: docker.io/bitnami/redis-cluster:6.2
|
||||
volumes:
|
||||
- redis-node3-data:/bitnami/redis/data
|
||||
environment:
|
||||
- ALLOW_EMPTY_PASSWORD=yes
|
||||
- REDIS_NODES=redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-cluster
|
||||
redis-node4:
|
||||
image: docker.io/bitnami/redis-cluster:6.2
|
||||
volumes:
|
||||
- redis-node4-data:/bitnami/redis/data
|
||||
environment:
|
||||
- ALLOW_EMPTY_PASSWORD=yes
|
||||
- REDIS_NODES=redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-cluster
|
||||
redis-node5:
|
||||
image: docker.io/bitnami/redis-cluster:6.2
|
||||
volumes:
|
||||
- redis-node5-data:/bitnami/redis/data
|
||||
environment:
|
||||
- ALLOW_EMPTY_PASSWORD=yes
|
||||
- REDIS_NODES=redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-cluster
|
||||
|
||||
redis-cluster:
|
||||
image: docker.io/bitnami/redis-cluster:6.2
|
||||
volumes:
|
||||
- redis-data:/bitnami/redis/data
|
||||
depends_on:
|
||||
- redis-node1
|
||||
- redis-node2
|
||||
- redis-node3
|
||||
- redis-node4
|
||||
- redis-node5
|
||||
environment:
|
||||
- ALLOW_EMPTY_PASSWORD=yes
|
||||
- REDIS_NODES=redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-cluster
|
||||
- REDIS_CLUSTER_CREATOR=yes
|
||||
|
10 examples/hello-app/README.md Normal file
@@ -0,0 +1,10 @@
# GCR Hello App

A small (~2 MB) image. To run it, type:

```
podman-compose up
```

Then open your browser at [http://localhost:8080/](http://localhost:8080/)
8 examples/hello-app/docker-compose.yaml Normal file
@@ -0,0 +1,8 @@
---
version: '3'
services:
  web:
    image: gcr.io/google-samples/hello-app:1.0
    ports:
      - "${HOST_PORT:-8080}:8080"
12 examples/hello-python/Dockerfile Normal file
@@ -0,0 +1,12 @@
FROM python:3.9-alpine

WORKDIR /usr/src/app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD [ "python", "-m", "app.web" ]
EXPOSE 8080
8 examples/hello-python/README.md Normal file
@@ -0,0 +1,8 @@
# Simple Python Demo
## A Redis counter

```
podman-compose up -d
curl localhost:8080/
curl localhost:8080/hello.json
```
0 examples/hello-python/app/__init__.py Normal file
39 examples/hello-python/app/web.py Normal file
@@ -0,0 +1,39 @@
# pylint: disable=import-error
# pylint: disable=unused-import
import asyncio  # noqa: F401
import os

import aioredis
from aiohttp import web

REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))
REDIS_DB = int(os.environ.get("REDIS_DB", "0"))

redis = aioredis.from_url(f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_DB}")
app = web.Application()
routes = web.RouteTableDef()


@routes.get("/")
async def hello(request):  # pylint: disable=unused-argument
    counter = await redis.incr("mycounter")
    return web.Response(text=f"counter={counter}")


@routes.get("/hello.json")
async def hello_json(request):  # pylint: disable=unused-argument
    counter = await redis.incr("mycounter")
    data = {"counter": counter}
    return web.json_response(data)


app.add_routes(routes)


def main():
    web.run_app(app, port=8080)


if __name__ == "__main__":
    main()
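A minimal smoke test for the two routes above, run after `podman-compose up -d` (port 8080 comes from the compose file in this example):

```
import json
from urllib.request import urlopen

with urlopen("http://localhost:8080/", timeout=5) as resp:
    print(resp.read().decode())        # e.g. counter=1

with urlopen("http://localhost:8080/hello.json", timeout=5) as resp:
    print(json.load(resp)["counter"])  # same counter, JSON-encoded
```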
21 examples/hello-python/docker-compose.yaml Normal file
@@ -0,0 +1,21 @@
---
version: '3'
volumes:
  redis:
services:
  redis:
    read_only: true
    image: docker.io/redis:alpine
    command: ["redis-server", "--appendonly", "yes", "--notify-keyspace-events", "Ex"]
    volumes:
      - redis:/data
  web:
    read_only: true
    build:
      context: .
    image: hello-py-aioweb
    ports:
      - 8080:8080
    environment:
      REDIS_HOST: redis
3 examples/hello-python/requirements.txt Normal file
@@ -0,0 +1,3 @@
aiohttp
aioredis
# aioredis[hiredis]
71 examples/nodeproj/.eslintrc.json Normal file
@@ -0,0 +1,71 @@
|
||||
{
|
||||
"env": {
|
||||
"node": true,
|
||||
"es6": true
|
||||
},
|
||||
"settings": {
|
||||
"import/resolver": {
|
||||
"node": {
|
||||
"extensions": [".js", ".mjs", ".ts", ".cjs"]
|
||||
}
|
||||
}
|
||||
},
|
||||
"parser": "@typescript-eslint/parser",
|
||||
"parserOptions": {
|
||||
"ecmaVersion": 2020,
|
||||
"sourceType": "module",
|
||||
"allowImportExportEverywhere": true
|
||||
},
|
||||
"extends": [
|
||||
"eslint:recommended",
|
||||
"plugin:import/errors",
|
||||
"plugin:import/warnings",
|
||||
"plugin:import/typescript",
|
||||
"plugin:promise/recommended",
|
||||
"google",
|
||||
"plugin:security/recommended"
|
||||
],
|
||||
"plugins": ["promise", "security", "import"],
|
||||
"overrides": [
|
||||
{
|
||||
"files": "public/**/*.min.js",
|
||||
"env": {
|
||||
"browser": true,
|
||||
"node": false,
|
||||
"es6": false
|
||||
},
|
||||
"parserOptions": {
|
||||
"sourceType": "script"
|
||||
},
|
||||
"extends": ["plugin:compat/recommended"],
|
||||
"plugins": [],
|
||||
"rules": {
|
||||
"no-var": ["off"]
|
||||
}
|
||||
}
|
||||
],
|
||||
"rules": {
|
||||
"security/detect-non-literal-fs-filename":["off"],
|
||||
"security/detect-object-injection":["off"],
|
||||
"camelcase": ["off"],
|
||||
"no-console": ["off"],
|
||||
"require-jsdoc": ["off"],
|
||||
"one-var": ["off"],
|
||||
"guard-for-in": ["off"],
|
||||
"max-len": [
|
||||
"warn",
|
||||
{
|
||||
"ignoreComments": true,
|
||||
"ignoreTrailingComments": true,
|
||||
"ignoreUrls": true,
|
||||
"code": 200
|
||||
}
|
||||
],
|
||||
"indent": ["warn", 4],
|
||||
"no-unused-vars": ["warn"],
|
||||
"no-extra-semi": ["warn"],
|
||||
"linebreak-style": ["error", "unix"],
|
||||
"quotes": ["warn", "double"],
|
||||
"semi": ["error", "always"]
|
||||
}
|
||||
}
|
5 examples/nodeproj/.gitignore (vendored) Normal file
@@ -0,0 +1,5 @@
|
||||
local.env
|
||||
.env
|
||||
*.pid
|
||||
node_modules
|
||||
|
1 examples/nodeproj/.home/.gitignore (vendored) Normal file
@@ -0,0 +1 @@
|
||||
*
|
16 examples/nodeproj/README.md Normal file
@@ -0,0 +1,16 @@
|
||||
# How to run example
|
||||
|
||||
|
||||
|
||||
```
|
||||
cp example.local.env local.env
|
||||
cp example.env .env
|
||||
cat local.env
|
||||
cat .env
|
||||
echo "UID=$UID" >> .env
|
||||
cat .env
|
||||
podman-compose build
|
||||
podman-compose run --rm --no-deps init
|
||||
podman-compose up
|
||||
```
|
||||
|
12 examples/nodeproj/containers/node16-runtime/Dockerfile Normal file
@@ -0,0 +1,12 @@
|
||||
FROM registry.fedoraproject.org/fedora-minimal:35
|
||||
ARG NODE_VER=16
|
||||
# microdnf -y module enable nodejs:${NODE_VER}
|
||||
RUN \
|
||||
echo -e "[nodejs]\nname=nodejs\nstream=${NODE_VER}\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
|
||||
microdnf -y install shadow-utils nodejs zopfli findutils busybox && \
|
||||
microdnf clean all
|
||||
RUN adduser -d /app app && mkdir -p /app/code/.home && chown app:app -R /app/code && chmod 711 /app /app/code/.home && usermod -d /app/code/.home app
|
||||
ENV XDG_CONFIG_HOME=/app/code/.home
|
||||
ENV HOME=/app/code/.home
|
||||
WORKDIR /app/code
|
||||
|
48 examples/nodeproj/docker-compose.yml Normal file
@@ -0,0 +1,48 @@
|
||||
version: '3'
|
||||
volumes:
|
||||
redis:
|
||||
services:
|
||||
redis:
|
||||
read_only: true
|
||||
image: docker.io/redis:alpine
|
||||
command: ["redis-server", "--appendonly", "yes", "--notify-keyspace-events", "Ex"]
|
||||
volumes:
|
||||
- redis:/data
|
||||
tmpfs:
|
||||
- /tmp
|
||||
- /var/run
|
||||
- /run
|
||||
init:
|
||||
read_only: true
|
||||
#userns_mode: keep-id
|
||||
user: ${UID:-1000}
|
||||
build:
|
||||
context: ./containers/${NODE_IMG:-node16-runtime}
|
||||
image: ${NODE_IMG:-node16-runtime}
|
||||
env_file:
|
||||
- local.env
|
||||
volumes:
|
||||
- .:/app/code
|
||||
command: ["/bin/sh", "-c", "mkdir -p ~/; [ -d ./node_modules ] && echo '** node_modules exists' || npm install"]
|
||||
tmpfs:
|
||||
- /tmp
|
||||
- /run
|
||||
task:
|
||||
extends:
|
||||
service: init
|
||||
command: ["npm", "run", "cli", "--", "task"]
|
||||
links:
|
||||
- redis
|
||||
depends_on:
|
||||
- redis
|
||||
web:
|
||||
extends:
|
||||
service: init
|
||||
command: ["npm", "run", "cli", "--", "web"]
|
||||
ports:
|
||||
- ${WEB_LISTEN_PORT:-3000}:3000
|
||||
depends_on:
|
||||
- redis
|
||||
links:
|
||||
- mongo
|
||||
|
3 examples/nodeproj/example.env Normal file
@@ -0,0 +1,3 @@
|
||||
WEB_LISTEN_PORT=3000
|
||||
# pass UID= your IDE user
|
||||
|
2 examples/nodeproj/example.local.env Normal file
@@ -0,0 +1,2 @@
|
||||
REDIS_HOST=redis
|
||||
|
6 examples/nodeproj/index.js Normal file
@@ -0,0 +1,6 @@
|
||||
#! /usr/bin/env node
|
||||
"use strict";
|
||||
import {start} from "./lib";
|
||||
|
||||
start();
|
||||
|
14 examples/nodeproj/jsconfig.json Normal file
@@ -0,0 +1,14 @@
|
||||
{
|
||||
"compilerOptions": {
|
||||
"target": "es2020",
|
||||
"module": "es2020",
|
||||
"moduleResolution": "node",
|
||||
"allowSyntheticDefaultImports": true
|
||||
},
|
||||
"files": [
|
||||
"index.js"
|
||||
],
|
||||
"include": [
|
||||
"lib/**/*.js"
|
||||
]
|
||||
}
|
31 examples/nodeproj/lib/commands/task.js Normal file
@@ -0,0 +1,31 @@
|
||||
"use strict";
|
||||
import {proj} from "../proj";
|
||||
|
||||
async function loop() {
|
||||
const poped = await proj.predis.blpop("queue", 5);
|
||||
const task_desc_s = poped[1];
|
||||
let task_desc;
|
||||
try {
|
||||
task_desc = JSON.parse(task_desc_s);
|
||||
} catch (e) {
|
||||
console.exception(e);
|
||||
}
|
||||
console.info("got task "+task_desc.func);
|
||||
const func = task_desc.func;
|
||||
const args = task_desc.args;
|
||||
if (typeof(proj.tasks[func])!="function") {
|
||||
console.log(`task ${func} not found`);
|
||||
process.exit(-1)
|
||||
}
|
||||
try {
|
||||
await ((this.tasks[func])(...args));
|
||||
} catch (e) {
|
||||
console.exception(e);
|
||||
}
|
||||
}
|
||||
|
||||
export async function start() {
|
||||
while(true) {
|
||||
loop();
|
||||
}
|
||||
}
|
21 examples/nodeproj/lib/commands/web.js Normal file
@@ -0,0 +1,21 @@
|
||||
"use strict";
|
||||
import {proj} from "../proj";
|
||||
|
||||
import http from "http";
|
||||
import express from "express";
|
||||
|
||||
|
||||
export async function start() {
|
||||
const app = express();
|
||||
const server = http.createServer(app);
|
||||
|
||||
// Routing
|
||||
app.use(express.static(proj.config.basedir + "/public"));
|
||||
app.get("/healthz", function(req, res) {
|
||||
res.send("ok@"+Date.now());
|
||||
});
|
||||
|
||||
server.listen(proj.config.LISTEN_PORT, proj.config.LISTEN_HOST, function() {
|
||||
console.warn(`listening at port ${proj.config.LISTEN_PORT}`);
|
||||
});
|
||||
}
|
24 examples/nodeproj/package.json Normal file
@@ -0,0 +1,24 @@
|
||||
{
|
||||
"name": "nodeproj",
|
||||
"version": "0.0.1",
|
||||
"description": "nodejs example project",
|
||||
"exports": {
|
||||
".": "./index.js",
|
||||
"./lib": "./lib"
|
||||
},
|
||||
"main": "index.js",
|
||||
"type": "module",
|
||||
"scripts": {
|
||||
"cli": "nodemon -w lib -w index.js --es-module-specifier-resolution=node ./index.js"
|
||||
},
|
||||
"dependencies": {
|
||||
"express": "~4.16.4",
|
||||
"redis": "^3.1.2"
|
||||
},
|
||||
"private": true,
|
||||
"author": "",
|
||||
"license": "proprietary",
|
||||
"devDependencies": {
|
||||
"nodemon": "^2.0.14"
|
||||
}
|
||||
}
|
18 examples/nodeproj/public/index.html Normal file
@@ -0,0 +1,18 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Vote</title>
|
||||
<link rel="stylesheet" href="https://unpkg.com/browse/normalize.css@8.0.1/normalize.css">
|
||||
<link rel="stylesheet" href="styles.css">
|
||||
</head>
|
||||
<body>
|
||||
<h1>This is a Heading</h1>
|
||||
<p>This is a paragraph.</p>
|
||||
</body>
|
||||
<script type="text/javascript" src="main.css"></script>
|
||||
<script type="text/javascript">
|
||||
//<![CDATA[
|
||||
console.log("loaded");
|
||||
//]]>
|
||||
</script>
|
||||
</html>
|
11 examples/nvidia-smi/docker-compose.yaml Normal file
@@ -0,0 +1,11 @@
services:
  test:
    image: nvidia/cuda:12.3.1-base-ubuntu20.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
24 examples/wordpress/docker-compose.yaml Normal file
@@ -0,0 +1,24 @@
---
volumes:
  db_data:
services:
  wordpress:
    image: docker.io/library/wordpress:latest
    ports:
      - 8080:80
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_NAME=wordpress
  db:
    image: docker.io/library/mariadb:10.6.4-focal
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
13 newsfragments/README.txt Normal file
@@ -0,0 +1,13 @@
This is the directory for news fragments used by towncrier: https://github.com/hawkowl/towncrier

You create a news fragment in this directory when you make a change, and the file gets removed from
this directory when the news is published.

towncrier has a few standard types of news fragments, signified by the file extension. These are:

.feature: Signifying a new feature.
.bugfix: Signifying a bug fix.
.doc: Signifying a documentation improvement.
.removal: Signifying a deprecation or removal of public API.
.change: Signifying a change of behavior.
.misc: Signifying a miscellaneous change.
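As a sketch of the convention, a fragment is just a small text file named `<issue>.<type>` dropped into this directory (the issue number and message below are invented):

```
from pathlib import Path

VALID_TYPES = {"feature", "bugfix", "doc", "removal", "change", "misc"}

def add_fragment(issue: int, frag_type: str, text: str, directory: str = "newsfragments") -> Path:
    if frag_type not in VALID_TYPES:
        raise ValueError(f"unknown fragment type: {frag_type}")
    path = Path(directory) / f"{issue}.{frag_type}"  # e.g. newsfragments/123.bugfix
    path.write_text(text + "\n", encoding="utf-8")
    return path

add_fragment(123, "bugfix", "Fix port publishing for rootless containers.")
```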
3439 podman_compose.py (file diff suppressed because it is too large)
15 pyproject.toml Normal file
@@ -0,0 +1,15 @@
[tool.ruff]
line-length = 100
target-version = "py38"

[tool.ruff.lint]
select = ["W", "E", "F", "I"]
ignore = [
]

[tool.ruff.lint.isort]
force-single-line = true

[tool.ruff.format]
preview = true # needed for quote-style
quote-style = "preserve"
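To illustrate the effect of these settings, `force-single-line` keeps imports one per line, which matches the import style used throughout the test files below:

```
# combined form, not what the isort config produces:
#   from podman_compose import PodmanCompose, container_to_args
# single-line form, what it produces instead:
from podman_compose import PodmanCompose
from podman_compose import container_to_args
```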
0 pytests/__init__.py Normal file
168 pytests/test_can_merge_build.py Normal file
@@ -0,0 +1,168 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import copy
|
||||
import os
|
||||
import unittest
|
||||
|
||||
import yaml
|
||||
from parameterized import parameterized
|
||||
|
||||
from podman_compose import PodmanCompose
|
||||
|
||||
|
||||
class TestCanMergeBuild(unittest.TestCase):
|
||||
@parameterized.expand([
|
||||
({}, {}, {}),
|
||||
({}, {"test": "test"}, {"test": "test"}),
|
||||
({"test": "test"}, {}, {"test": "test"}),
|
||||
({"test": "test-1"}, {"test": "test-2"}, {"test": "test-2"}),
|
||||
({}, {"build": "."}, {"build": {"context": "."}}),
|
||||
({"build": "."}, {}, {"build": {"context": "."}}),
|
||||
({"build": "./dir-1"}, {"build": "./dir-2"}, {"build": {"context": "./dir-2"}}),
|
||||
({}, {"build": {"context": "./dir-1"}}, {"build": {"context": "./dir-1"}}),
|
||||
({"build": {"context": "./dir-1"}}, {}, {"build": {"context": "./dir-1"}}),
|
||||
(
|
||||
{"build": {"context": "./dir-1"}},
|
||||
{"build": {"context": "./dir-2"}},
|
||||
{"build": {"context": "./dir-2"}},
|
||||
),
|
||||
(
|
||||
{},
|
||||
{"build": {"dockerfile": "dockerfile-1"}},
|
||||
{"build": {"dockerfile": "dockerfile-1"}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "dockerfile-1"}},
|
||||
{},
|
||||
{"build": {"dockerfile": "dockerfile-1"}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "./dockerfile-1"}},
|
||||
{"build": {"dockerfile": "./dockerfile-2"}},
|
||||
{"build": {"dockerfile": "./dockerfile-2"}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "./dockerfile-1"}},
|
||||
{"build": {"context": "./dir-2"}},
|
||||
{"build": {"dockerfile": "./dockerfile-1", "context": "./dir-2"}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "./dockerfile-1", "context": "./dir-1"}},
|
||||
{"build": {"dockerfile": "./dockerfile-2", "context": "./dir-2"}},
|
||||
{"build": {"dockerfile": "./dockerfile-2", "context": "./dir-2"}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "./dockerfile-1"}},
|
||||
{"build": {"dockerfile": "./dockerfile-2", "args": ["ENV1=1"]}},
|
||||
{"build": {"dockerfile": "./dockerfile-2", "args": ["ENV1=1"]}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "./dockerfile-2", "args": ["ENV1=1"]}},
|
||||
{"build": {"dockerfile": "./dockerfile-1"}},
|
||||
{"build": {"dockerfile": "./dockerfile-1", "args": ["ENV1=1"]}},
|
||||
),
|
||||
(
|
||||
{"build": {"dockerfile": "./dockerfile-2", "args": ["ENV1=1"]}},
|
||||
{"build": {"dockerfile": "./dockerfile-1", "args": ["ENV2=2"]}},
|
||||
{"build": {"dockerfile": "./dockerfile-1", "args": ["ENV1=1", "ENV2=2"]}},
|
||||
),
|
||||
])
|
||||
def test_parse_compose_file_when_multiple_composes(self, input, override, expected):
|
||||
compose_test_1 = {"services": {"test-service": input}}
|
||||
compose_test_2 = {"services": {"test-service": override}}
|
||||
dump_yaml(compose_test_1, "test-compose-1.yaml")
|
||||
dump_yaml(compose_test_2, "test-compose-2.yaml")
|
||||
|
||||
podman_compose = PodmanCompose()
|
||||
set_args(podman_compose, ["test-compose-1.yaml", "test-compose-2.yaml"])
|
||||
|
||||
podman_compose._parse_compose_file() # pylint: disable=protected-access
|
||||
|
||||
actual_compose = {}
|
||||
if podman_compose.services:
|
||||
podman_compose.services["test-service"].pop("_deps")
|
||||
actual_compose = podman_compose.services["test-service"]
|
||||
self.assertEqual(actual_compose, expected)
|
||||
|
||||
# $$$ is a placeholder for either command or entrypoint
|
||||
@parameterized.expand([
|
||||
({}, {"$$$": []}, {"$$$": []}),
|
||||
({"$$$": []}, {}, {"$$$": []}),
|
||||
({"$$$": []}, {"$$$": "sh-2"}, {"$$$": ["sh-2"]}),
|
||||
({"$$$": "sh-2"}, {"$$$": []}, {"$$$": []}),
|
||||
({}, {"$$$": "sh"}, {"$$$": ["sh"]}),
|
||||
({"$$$": "sh"}, {}, {"$$$": ["sh"]}),
|
||||
({"$$$": "sh-1"}, {"$$$": "sh-2"}, {"$$$": ["sh-2"]}),
|
||||
({"$$$": ["sh-1"]}, {"$$$": "sh-2"}, {"$$$": ["sh-2"]}),
|
||||
({"$$$": "sh-1"}, {"$$$": ["sh-2"]}, {"$$$": ["sh-2"]}),
|
||||
({"$$$": "sh-1"}, {"$$$": ["sh-2", "sh-3"]}, {"$$$": ["sh-2", "sh-3"]}),
|
||||
({"$$$": ["sh-1"]}, {"$$$": ["sh-2", "sh-3"]}, {"$$$": ["sh-2", "sh-3"]}),
|
||||
({"$$$": ["sh-1", "sh-2"]}, {"$$$": ["sh-3", "sh-4"]}, {"$$$": ["sh-3", "sh-4"]}),
|
||||
({}, {"$$$": ["sh-3", "sh 4"]}, {"$$$": ["sh-3", "sh 4"]}),
|
||||
({"$$$": "sleep infinity"}, {"$$$": "sh"}, {"$$$": ["sh"]}),
|
||||
({"$$$": "sh"}, {"$$$": "sleep infinity"}, {"$$$": ["sleep", "infinity"]}),
|
||||
(
|
||||
{},
|
||||
{"$$$": "bash -c 'sleep infinity'"},
|
||||
{"$$$": ["bash", "-c", "sleep infinity"]},
|
||||
),
|
||||
])
|
||||
def test_parse_compose_file_when_multiple_composes_keys_command_entrypoint(
|
||||
self, base_template, override_template, expected_template
|
||||
):
|
||||
for key in ['command', 'entrypoint']:
|
||||
base, override, expected = template_to_expression(
|
||||
base_template, override_template, expected_template, key
|
||||
)
|
||||
compose_test_1 = {"services": {"test-service": base}}
|
||||
compose_test_2 = {"services": {"test-service": override}}
|
||||
dump_yaml(compose_test_1, "test-compose-1.yaml")
|
||||
dump_yaml(compose_test_2, "test-compose-2.yaml")
|
||||
|
||||
podman_compose = PodmanCompose()
|
||||
set_args(podman_compose, ["test-compose-1.yaml", "test-compose-2.yaml"])
|
||||
|
||||
podman_compose._parse_compose_file() # pylint: disable=protected-access
|
||||
|
||||
actual = {}
|
||||
if podman_compose.services:
|
||||
podman_compose.services["test-service"].pop("_deps")
|
||||
actual = podman_compose.services["test-service"]
|
||||
self.assertEqual(actual, expected)
|
||||
|
||||
|
||||
def set_args(podman_compose: PodmanCompose, file_names: list[str]) -> None:
|
||||
podman_compose.global_args = argparse.Namespace()
|
||||
podman_compose.global_args.file = file_names
|
||||
podman_compose.global_args.project_name = None
|
||||
podman_compose.global_args.env_file = None
|
||||
podman_compose.global_args.profile = []
|
||||
podman_compose.global_args.in_pod_bool = True
|
||||
podman_compose.global_args.no_normalize = True
|
||||
|
||||
|
||||
def dump_yaml(compose: dict, name: str) -> None:
|
||||
with open(name, "w", encoding="utf-8") as outfile:
|
||||
yaml.safe_dump(compose, outfile, default_flow_style=False)
|
||||
|
||||
|
||||
def template_to_expression(base, override, expected, key):
|
||||
base_copy = copy.deepcopy(base)
|
||||
override_copy = copy.deepcopy(override)
|
||||
expected_copy = copy.deepcopy(expected)
|
||||
|
||||
expected_copy[key] = expected_copy.pop("$$$")
|
||||
if "$$$" in base:
|
||||
base_copy[key] = base_copy.pop("$$$")
|
||||
if "$$$" in override:
|
||||
override_copy[key] = override_copy.pop("$$$")
|
||||
return base_copy, override_copy, expected_copy
|
||||
|
||||
|
||||
def test_clean_test_yamls() -> None:
|
||||
test_files = ["test-compose-1.yaml", "test-compose-2.yaml"]
|
||||
for file in test_files:
|
||||
if os.path.exists(file):
|
||||
os.remove(file)
|
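The expectations encoded above can be summarised as: the later compose file wins key-by-key, a string `build:` is normalised to `{"context": ...}`, and `build.args` accumulate across files. A simplified, stand-alone restatement of those rules (not podman-compose's actual merge code):

```
def merge_service(base: dict, override: dict) -> dict:
    def norm(svc: dict) -> dict:
        build = svc.get("build")
        if isinstance(build, str):  # "build: ./dir" becomes {"context": "./dir"}
            svc = {**svc, "build": {"context": build}}
        return svc

    base, override = norm(dict(base)), norm(dict(override))
    merged = {**base, **override}  # later file wins key-by-key
    if "build" in base and "build" in override:
        build = {**base["build"], **override["build"]}
        args = base["build"].get("args", []) + override["build"].get("args", [])
        if args:
            build["args"] = args  # build args accumulate across files
        merged["build"] = build
    return merged

print(merge_service(
    {"build": {"dockerfile": "./dockerfile-2", "args": ["ENV1=1"]}},
    {"build": {"dockerfile": "./dockerfile-1", "args": ["ENV2=2"]}},
))
# {'build': {'dockerfile': './dockerfile-1', 'args': ['ENV1=1', 'ENV2=2']}}
```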
46 pytests/test_compose_exec_args.py Normal file
@@ -0,0 +1,46 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
import argparse
|
||||
import unittest
|
||||
|
||||
from podman_compose import compose_exec_args
|
||||
|
||||
|
||||
class TestComposeExecArgs(unittest.TestCase):
|
||||
def test_minimal(self):
|
||||
cnt = get_minimal_container()
|
||||
args = get_minimal_args()
|
||||
|
||||
result = compose_exec_args(cnt, "container_name", args)
|
||||
expected = ["--interactive", "--tty", "container_name"]
|
||||
self.assertEqual(result, expected)
|
||||
|
||||
def test_additional_env_value_equals(self):
|
||||
cnt = get_minimal_container()
|
||||
args = get_minimal_args()
|
||||
args.env = ["key=valuepart1=valuepart2"]
|
||||
|
||||
result = compose_exec_args(cnt, "container_name", args)
|
||||
expected = [
|
||||
"--interactive",
|
||||
"--tty",
|
||||
"--env",
|
||||
"key=valuepart1=valuepart2",
|
||||
"container_name",
|
||||
]
|
||||
self.assertEqual(result, expected)
|
||||
|
||||
|
||||
def get_minimal_container():
|
||||
return {}
|
||||
|
||||
|
||||
def get_minimal_args():
|
||||
return argparse.Namespace(
|
||||
T=None,
|
||||
cnt_command=None,
|
||||
env=None,
|
||||
privileged=None,
|
||||
user=None,
|
||||
workdir=None,
|
||||
)
|
76 pytests/test_compose_run_update_container_from_args.py Normal file
@@ -0,0 +1,76 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
import argparse
|
||||
import unittest
|
||||
|
||||
from podman_compose import PodmanCompose
|
||||
from podman_compose import compose_run_update_container_from_args
|
||||
|
||||
|
||||
class TestComposeRunUpdateContainerFromArgs(unittest.TestCase):
|
||||
def test_minimal(self):
|
||||
cnt = get_minimal_container()
|
||||
compose = get_minimal_compose()
|
||||
args = get_minimal_args()
|
||||
|
||||
compose_run_update_container_from_args(compose, cnt, args)
|
||||
|
||||
expected_cnt = {"name": "default_name", "tty": True}
|
||||
self.assertEqual(cnt, expected_cnt)
|
||||
|
||||
def test_additional_env_value_equals(self):
|
||||
cnt = get_minimal_container()
|
||||
compose = get_minimal_compose()
|
||||
args = get_minimal_args()
|
||||
args.env = ["key=valuepart1=valuepart2"]
|
||||
|
||||
compose_run_update_container_from_args(compose, cnt, args)
|
||||
|
||||
expected_cnt = {
|
||||
"environment": {
|
||||
"key": "valuepart1=valuepart2",
|
||||
},
|
||||
"name": "default_name",
|
||||
"tty": True,
|
||||
}
|
||||
self.assertEqual(cnt, expected_cnt)
|
||||
|
||||
def test_publish_ports(self):
|
||||
cnt = get_minimal_container()
|
||||
compose = get_minimal_compose()
|
||||
args = get_minimal_args()
|
||||
args.publish = ["1111", "2222:2222"]
|
||||
|
||||
compose_run_update_container_from_args(compose, cnt, args)
|
||||
|
||||
expected_cnt = {
|
||||
"name": "default_name",
|
||||
"ports": ["1111", "2222:2222"],
|
||||
"tty": True,
|
||||
}
|
||||
self.assertEqual(cnt, expected_cnt)
|
||||
|
||||
|
||||
def get_minimal_container():
|
||||
return {}
|
||||
|
||||
|
||||
def get_minimal_compose():
|
||||
return PodmanCompose()
|
||||
|
||||
|
||||
def get_minimal_args():
|
||||
return argparse.Namespace(
|
||||
T=None,
|
||||
cnt_command=None,
|
||||
entrypoint=None,
|
||||
env=None,
|
||||
name="default_name",
|
||||
rm=None,
|
||||
service=None,
|
||||
publish=None,
|
||||
service_ports=None,
|
||||
user=None,
|
||||
volume=None,
|
||||
workdir=None,
|
||||
)
|
558 pytests/test_container_to_args.py Normal file
@@ -0,0 +1,558 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
|
||||
import unittest
|
||||
from os import path
|
||||
from unittest import mock
|
||||
|
||||
from parameterized import parameterized
|
||||
|
||||
from podman_compose import container_to_args
|
||||
|
||||
|
||||
def create_compose_mock(project_name="test_project_name"):
|
||||
compose = mock.Mock()
|
||||
compose.project_name = project_name
|
||||
compose.dirname = "test_dirname"
|
||||
compose.container_names_by_service.get = mock.Mock(return_value=None)
|
||||
compose.prefer_volume_over_mount = False
|
||||
compose.default_net = None
|
||||
compose.networks = {}
|
||||
return compose
|
||||
|
||||
|
||||
def get_minimal_container():
|
||||
return {
|
||||
"name": "project_name_service_name1",
|
||||
"service_name": "service_name",
|
||||
"image": "busybox",
|
||||
}
|
||||
|
||||
|
||||
class TestContainerToArgs(unittest.IsolatedAsyncioTestCase):
|
||||
async def test_minimal(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_runtime(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["runtime"] = "runsc"
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--runtime",
|
||||
"runsc",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_sysctl_list(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["sysctls"] = [
|
||||
"net.core.somaxconn=1024",
|
||||
"net.ipv4.tcp_syncookies=0",
|
||||
]
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--sysctl",
|
||||
"net.core.somaxconn=1024",
|
||||
"--sysctl",
|
||||
"net.ipv4.tcp_syncookies=0",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_sysctl_map(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["sysctls"] = {
|
||||
"net.core.somaxconn": 1024,
|
||||
"net.ipv4.tcp_syncookies": 0,
|
||||
}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--sysctl",
|
||||
"net.core.somaxconn=1024",
|
||||
"--sysctl",
|
||||
"net.ipv4.tcp_syncookies=0",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_sysctl_wrong_type(self):
|
||||
c = create_compose_mock()
|
||||
cnt = get_minimal_container()
|
||||
|
||||
# check whether wrong types are correctly rejected
|
||||
for wrong_type in [True, 0, 0.0, "wrong", ()]:
|
||||
with self.assertRaises(TypeError):
|
||||
cnt["sysctls"] = wrong_type
|
||||
await container_to_args(c, cnt)
|
||||
|
||||
async def test_pid(self):
|
||||
c = create_compose_mock()
|
||||
cnt = get_minimal_container()
|
||||
|
||||
cnt["pid"] = "host"
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--pid",
|
||||
"host",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_http_proxy(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["http_proxy"] = False
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--http-proxy=false",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_uidmaps_extension_old_path(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt['x-podman'] = {'uidmaps': ['1000:1000:1']}
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
await container_to_args(c, cnt)
|
||||
|
||||
async def test_uidmaps_extension(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt['x-podman.uidmaps'] = ['1000:1000:1', '1001:1001:2']
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
'--uidmap',
|
||||
'1000:1000:1',
|
||||
'--uidmap',
|
||||
'1001:1001:2',
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_gidmaps_extension(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt['x-podman.gidmaps'] = ['1000:1000:1', '1001:1001:2']
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
'--gidmap',
|
||||
'1000:1000:1',
|
||||
'--gidmap',
|
||||
'1001:1001:2',
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_rootfs_extension(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
del cnt["image"]
|
||||
cnt["x-podman.rootfs"] = "/path/to/rootfs"
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--rootfs",
|
||||
"/path/to/rootfs",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_env_file_str(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
env_file = path.realpath('tests/env-file-tests/env-files/project-1.env')
|
||||
cnt['env_file'] = env_file
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"-e",
|
||||
"ZZVAR1=podman-rocks-123",
|
||||
"-e",
|
||||
"ZZVAR2=podman-rocks-124",
|
||||
"-e",
|
||||
"ZZVAR3=podman-rocks-125",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_env_file_str_not_exists(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt['env_file'] = 'notexists'
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
await container_to_args(c, cnt)
|
||||
|
||||
async def test_env_file_str_array_one_path(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
env_file = path.realpath('tests/env-file-tests/env-files/project-1.env')
|
||||
cnt['env_file'] = [env_file]
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"-e",
|
||||
"ZZVAR1=podman-rocks-123",
|
||||
"-e",
|
||||
"ZZVAR2=podman-rocks-124",
|
||||
"-e",
|
||||
"ZZVAR3=podman-rocks-125",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_env_file_str_array_two_paths(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
env_file = path.realpath('tests/env-file-tests/env-files/project-1.env')
|
||||
env_file_2 = path.realpath('tests/env-file-tests/env-files/project-2.env')
|
||||
cnt['env_file'] = [env_file, env_file_2]
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"-e",
|
||||
"ZZVAR1=podman-rocks-123",
|
||||
"-e",
|
||||
"ZZVAR2=podman-rocks-124",
|
||||
"-e",
|
||||
"ZZVAR3=podman-rocks-125",
|
||||
"-e",
|
||||
"ZZVAR1=podman-rocks-223",
|
||||
"-e",
|
||||
"ZZVAR2=podman-rocks-224",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_env_file_obj_required(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
env_file = path.realpath('tests/env-file-tests/env-files/project-1.env')
|
||||
cnt['env_file'] = {'path': env_file, 'required': True}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"-e",
|
||||
"ZZVAR1=podman-rocks-123",
|
||||
"-e",
|
||||
"ZZVAR2=podman-rocks-124",
|
||||
"-e",
|
||||
"ZZVAR3=podman-rocks-125",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_env_file_obj_required_non_existent_path(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt['env_file'] = {'path': 'not-exists', 'required': True}
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
await container_to_args(c, cnt)
|
||||
|
||||
async def test_env_file_obj_optional(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt['env_file'] = {'path': 'not-exists', 'required': False}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_gpu_count_all(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["command"] = ["nvidia-smi"]
|
||||
cnt["deploy"] = {"resources": {"reservations": {"devices": [{}]}}}
|
||||
|
||||
cnt["deploy"]["resources"]["reservations"]["devices"][0] = {
|
||||
"driver": "nvidia",
|
||||
"count": "all",
|
||||
"capabilities": ["gpu"],
|
||||
}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--device",
|
||||
"nvidia.com/gpu=all",
|
||||
"--security-opt=label=disable",
|
||||
"busybox",
|
||||
"nvidia-smi",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_gpu_count_specific(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["command"] = ["nvidia-smi"]
|
||||
cnt["deploy"] = {
|
||||
"resources": {
|
||||
"reservations": {
|
||||
"devices": [
|
||||
{
|
||||
"driver": "nvidia",
|
||||
"count": 2,
|
||||
"capabilities": ["gpu"],
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--device",
|
||||
"nvidia.com/gpu=0",
|
||||
"--device",
|
||||
"nvidia.com/gpu=1",
|
||||
"--security-opt=label=disable",
|
||||
"busybox",
|
||||
"nvidia-smi",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_gpu_device_ids_all(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["command"] = ["nvidia-smi"]
|
||||
cnt["deploy"] = {
|
||||
"resources": {
|
||||
"reservations": {
|
||||
"devices": [
|
||||
{
|
||||
"driver": "nvidia",
|
||||
"device_ids": "all",
|
||||
"capabilities": ["gpu"],
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--device",
|
||||
"nvidia.com/gpu=all",
|
||||
"--security-opt=label=disable",
|
||||
"busybox",
|
||||
"nvidia-smi",
|
||||
],
|
||||
)
|
||||
|
||||
async def test_gpu_device_ids_specific(self):
|
||||
c = create_compose_mock()
|
||||
|
||||
cnt = get_minimal_container()
|
||||
cnt["command"] = ["nvidia-smi"]
|
||||
cnt["deploy"] = {
|
||||
"resources": {
|
||||
"reservations": {
|
||||
"devices": [
|
||||
{
|
||||
"driver": "nvidia",
|
||||
"device_ids": [1, 3],
|
||||
"capabilities": ["gpu"],
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"--device",
|
||||
"nvidia.com/gpu=1",
|
||||
"--device",
|
||||
"nvidia.com/gpu=3",
|
||||
"--security-opt=label=disable",
|
||||
"busybox",
|
||||
"nvidia-smi",
|
||||
],
|
||||
)
|
||||
|
||||
@parameterized.expand([
|
||||
(False, "z", ["--mount", "type=bind,source=./foo,destination=/mnt,z"]),
|
||||
(False, "Z", ["--mount", "type=bind,source=./foo,destination=/mnt,Z"]),
|
||||
(True, "z", ["-v", "./foo:/mnt:z"]),
|
||||
(True, "Z", ["-v", "./foo:/mnt:Z"]),
|
||||
])
|
||||
async def test_selinux_volume(self, prefer_volume, selinux_type, expected_additional_args):
|
||||
c = create_compose_mock()
|
||||
c.prefer_volume_over_mount = prefer_volume
|
||||
|
||||
cnt = get_minimal_container()
|
||||
|
||||
# This is supposed to happen during `_parse_compose_file`
|
||||
# but that is probably getting skipped during testing
|
||||
cnt["_service"] = cnt["service_name"]
|
||||
|
||||
cnt["volumes"] = [
|
||||
{
|
||||
"type": "bind",
|
||||
"source": "./foo",
|
||||
"target": "/mnt",
|
||||
"bind": {
|
||||
"selinux": selinux_type,
|
||||
},
|
||||
}
|
||||
]
|
||||
|
||||
args = await container_to_args(c, cnt)
|
||||
self.assertEqual(
|
||||
args,
|
||||
[
|
||||
"--name=project_name_service_name1",
|
||||
"-d",
|
||||
*expected_additional_args,
|
||||
"--network=bridge",
|
||||
"--network-alias=service_name",
|
||||
"busybox",
|
||||
],
|
||||
)
|
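The env-file, GPU, and SELinux cases above all go through `container_to_args`, which is a coroutine. For eyeballing the generated `podman run` arguments outside the unittest harness, a minimal sketch along these lines should work; it reuses the mock helpers defined earlier in this test module, and the direct `pytests.test_container_to_args` import path is an assumption about how the package is laid out on `sys.path`.

```python
# Illustrative sketch only: print the podman arguments produced by
# container_to_args() for a bare-bones container definition.
# Assumes the pytests package (and its helpers) is importable.
import asyncio

from podman_compose import container_to_args

from pytests.test_container_to_args import create_compose_mock, get_minimal_container


def main():
    compose = create_compose_mock()
    cnt = get_minimal_container()
    # container_to_args is async, so drive it with an event loop.
    args = asyncio.run(container_to_args(compose, cnt))
    # Per the tests above this prints something like:
    # ['--name=project_name_service_name1', '-d', '--network=bridge',
    #  '--network-alias=service_name', 'busybox']
    print(args)


if __name__ == "__main__":
    main()
```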

pytests/test_container_to_args_secrets.py (new file)
@@ -0,0 +1,91 @@
# SPDX-License-Identifier: GPL-2.0

import unittest

from podman_compose import container_to_args

from .test_container_to_args import create_compose_mock
from .test_container_to_args import get_minimal_container


class TestContainerToArgsSecrets(unittest.IsolatedAsyncioTestCase):
    async def test_pass_secret_as_env_variable(self):
        c = create_compose_mock()
        c.declared_secrets = {
            "my_secret": {"external": "true"}  # must have external or name value
        }

        cnt = get_minimal_container()
        cnt["secrets"] = [
            {
                "source": "my_secret",
                "target": "ENV_SECRET",
                "type": "env",
            },
        ]

        args = await container_to_args(c, cnt)
        self.assertEqual(
            args,
            [
                "--name=project_name_service_name1",
                "-d",
                "--network=bridge",
                "--network-alias=service_name",
                "--secret",
                "my_secret,type=env,target=ENV_SECRET",
                "busybox",
            ],
        )

    async def test_secret_as_env_external_true_has_no_name(self):
        c = create_compose_mock()
        c.declared_secrets = {
            "my_secret": {
                "name": "my_secret",  # must have external or name value
            }
        }

        cnt = get_minimal_container()
        cnt["_service"] = "test-service"
        cnt["secrets"] = [
            {
                "source": "my_secret",
                "target": "ENV_SECRET",
                "type": "env",
            }
        ]

        args = await container_to_args(c, cnt)
        self.assertEqual(
            args,
            [
                "--name=project_name_service_name1",
                "-d",
                "--network=bridge",
                "--network-alias=service_name",
                "--secret",
                "my_secret,type=env,target=ENV_SECRET",
                "busybox",
            ],
        )

    async def test_pass_secret_as_env_variable_no_external(self):
        c = create_compose_mock()
        c.declared_secrets = {
            "my_secret": {}  # must have external or name value
        }

        cnt = get_minimal_container()
        cnt["_service"] = "test-service"
        cnt["secrets"] = [
            {
                "source": "my_secret",
                "target": "ENV_SECRET",
                "type": "env",
            }
        ]

        with self.assertRaises(ValueError) as context:
            await container_to_args(c, cnt)
        self.assertIn('ERROR: unparsable secret: ', str(context.exception))
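The three cases above show that an env-type secret is forwarded as `--secret <source>,type=env,target=<target>` only when the declared secret carries an `external` or `name` entry, and otherwise raises `ValueError`. A minimal sketch of the passing case, reusing the same mocks; as before, the direct import path is an assumption.

```python
# Illustrative sketch: env-type secret forwarded as a podman --secret flag.
# Assumes the helpers are importable from pytests.test_container_to_args.
import asyncio

from podman_compose import container_to_args

from pytests.test_container_to_args import create_compose_mock, get_minimal_container

compose = create_compose_mock()
compose.declared_secrets = {"my_secret": {"external": "true"}}

cnt = get_minimal_container()
cnt["secrets"] = [{"source": "my_secret", "target": "ENV_SECRET", "type": "env"}]

args = asyncio.run(container_to_args(compose, cnt))
# Expected to contain: "--secret", "my_secret,type=env,target=ENV_SECRET"
print(args)
```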

pytests/test_get_net_args.py (new file)
@@ -0,0 +1,298 @@
import unittest

from parameterized import parameterized

from podman_compose import get_net_args

from .test_container_to_args import create_compose_mock

PROJECT_NAME = "test_project_name"
SERVICE_NAME = "service_name"
CONTAINER_NAME = f"{PROJECT_NAME}_{SERVICE_NAME}_1"


def get_networked_compose(num_networks=1):
    compose = create_compose_mock(PROJECT_NAME)
    for network in range(num_networks):
        compose.networks[f"net{network}"] = {
            "driver": "bridge",
            "ipam": {
                "config": [
                    {"subnet": f"192.168.{network}.0/24"},
                    {"subnet": f"fd00:{network}::/64"},
                ]
            },
            "enable_ipv6": True,
        }

    return compose


def get_minimal_container():
    return {
        "name": CONTAINER_NAME,
        "service_name": SERVICE_NAME,
        "image": "busybox",
    }


class TestGetNetArgs(unittest.TestCase):
    def test_minimal(self):
        compose = get_networked_compose()
        container = get_minimal_container()

        expected_args = [
            "--network=bridge",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_one_net(self):
        compose = get_networked_compose()
        container = get_minimal_container()
        container["networks"] = {"net0": {}}

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_alias(self):
        compose = get_networked_compose()
        container = get_minimal_container()
        container["networks"] = {"net0": {}}
        container["_aliases"] = ["alias1", "alias2"]

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--network-alias={SERVICE_NAME}",
            "--network-alias=alias1",
            "--network-alias=alias2",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_one_ipv4(self):
        ip = "192.168.0.42"
        compose = get_networked_compose()
        container = get_minimal_container()
        container["networks"] = {"net0": {"ipv4_address": ip}}

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--ip={ip}",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertEqual(expected_args, args)

    def test_one_ipv6(self):
        ipv6_address = "fd00:0::42"
        compose = get_networked_compose()
        container = get_minimal_container()
        container["networks"] = {"net0": {"ipv6_address": ipv6_address}}

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--ip6={ipv6_address}",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_one_mac(self):
        mac = "00:11:22:33:44:55"
        compose = get_networked_compose()
        container = get_minimal_container()
        container["networks"] = {"net0": {}}
        container["mac_address"] = mac

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--mac-address={mac}",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_one_mac_two_nets(self):
        mac = "00:11:22:33:44:55"
        compose = get_networked_compose(num_networks=6)
        container = get_minimal_container()
        container["networks"] = {"net0": {}, "net1": {}}
        container["mac_address"] = mac

        expected_args = [
            f"--network={PROJECT_NAME}_net0:mac={mac}",
            f"--network={PROJECT_NAME}_net1",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_two_nets_as_dict(self):
        compose = get_networked_compose(num_networks=2)
        container = get_minimal_container()
        container["networks"] = {"net0": {}, "net1": {}}

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--network={PROJECT_NAME}_net1",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_two_nets_as_list(self):
        compose = get_networked_compose(num_networks=2)
        container = get_minimal_container()
        container["networks"] = ["net0", "net1"]

        expected_args = [
            f"--network={PROJECT_NAME}_net0",
            f"--network={PROJECT_NAME}_net1",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_two_ipv4(self):
        ip0 = "192.168.0.42"
        ip1 = "192.168.1.42"
        compose = get_networked_compose(num_networks=2)
        container = get_minimal_container()
        container["networks"] = {"net0": {"ipv4_address": ip0}, "net1": {"ipv4_address": ip1}}

        expected_args = [
            f"--network={PROJECT_NAME}_net0:ip={ip0}",
            f"--network={PROJECT_NAME}_net1:ip={ip1}",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_two_ipv6(self):
        ip0 = "fd00:0::42"
        ip1 = "fd00:1::42"
        compose = get_networked_compose(num_networks=2)
        container = get_minimal_container()
        container["networks"] = {"net0": {"ipv6_address": ip0}, "net1": {"ipv6_address": ip1}}

        expected_args = [
            f"--network={PROJECT_NAME}_net0:ip={ip0}",
            f"--network={PROJECT_NAME}_net1:ip={ip1}",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    # custom extension; not supported by docker-compose
    def test_two_mac(self):
        mac0 = "00:00:00:00:00:01"
        mac1 = "00:00:00:00:00:02"
        compose = get_networked_compose(num_networks=2)
        container = get_minimal_container()
        container["networks"] = {
            "net0": {"x-podman.mac_address": mac0},
            "net1": {"x-podman.mac_address": mac1},
        }

        expected_args = [
            f"--network={PROJECT_NAME}_net0:mac={mac0}",
            f"--network={PROJECT_NAME}_net1:mac={mac1}",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_mixed_mac(self):
        ip4_0 = "192.168.0.42"
        ip4_1 = "192.168.1.42"
        ip4_2 = "192.168.2.42"
        mac_0 = "00:00:00:00:00:01"
        mac_1 = "00:00:00:00:00:02"

        compose = get_networked_compose(num_networks=3)
        container = get_minimal_container()
        container["networks"] = {
            "net0": {"ipv4_address": ip4_0},
            "net1": {"ipv4_address": ip4_1, "x-podman.mac_address": mac_0},
            "net2": {"ipv4_address": ip4_2},
        }
        container["mac_address"] = mac_1

        expected_exception = (
            r"specifying mac_address on both container and network level " r"is not supported"
        )
        self.assertRaisesRegex(RuntimeError, expected_exception, get_net_args, compose, container)

    def test_mixed_config(self):
        ip4_0 = "192.168.0.42"
        ip4_1 = "192.168.1.42"
        ip6_0 = "fd00:0::42"
        ip6_2 = "fd00:2::42"
        mac = "00:11:22:33:44:55"
        compose = get_networked_compose(num_networks=4)
        container = get_minimal_container()
        container["networks"] = {
            "net0": {"ipv4_address": ip4_0, "ipv6_address": ip6_0},
            "net1": {"ipv4_address": ip4_1},
            "net2": {"ipv6_address": ip6_2},
            "net3": {},
        }
        container["mac_address"] = mac

        expected_args = [
            f"--network={PROJECT_NAME}_net0:ip={ip4_0},ip={ip6_0},mac={mac}",
            f"--network={PROJECT_NAME}_net1:ip={ip4_1}",
            f"--network={PROJECT_NAME}_net2:ip={ip6_2}",
            f"--network={PROJECT_NAME}_net3",
            f"--network-alias={SERVICE_NAME}",
        ]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    @parameterized.expand([
        ("bridge", ["--network=bridge", f"--network-alias={SERVICE_NAME}"]),
        ("host", ["--network=host"]),
        ("none", []),
        ("slirp4netns", ["--network=slirp4netns"]),
        ("slirp4netns:cidr=10.42.0.0/24", ["--network=slirp4netns:cidr=10.42.0.0/24"]),
        ("private", ["--network=private"]),
        ("pasta", ["--network=pasta"]),
        ("pasta:--ipv4-only,-a,10.0.2.0", ["--network=pasta:--ipv4-only,-a,10.0.2.0"]),
        ("ns:my_namespace", ["--network=ns:my_namespace"]),
        ("container:my_container", ["--network=container:my_container"]),
    ])
    def test_network_modes(self, network_mode, expected_args):
        compose = get_networked_compose()
        container = get_minimal_container()
        container["network_mode"] = network_mode

        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)

    def test_network_mode_invalid(self):
        compose = get_networked_compose()
        container = get_minimal_container()
        container["network_mode"] = "invalid_mode"

        with self.assertRaises(SystemExit):
            get_net_args(compose, container)

    def test_network__mode_service(self):
        compose = get_networked_compose()
        compose.container_names_by_service = {
            "service_1": ["container_1"],
            "service_2": ["container_2"],
        }

        container = get_minimal_container()
        container["network_mode"] = "service:service_2"

        expected_args = ["--network=container:container_2"]
        args = get_net_args(compose, container)
        self.assertListEqual(expected_args, args)
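Taken together, these cases document the flag layout `get_net_args` produces: a single network with a static address yields separate `--ip`/`--ip6`/`--mac-address` flags, while multiple networks fold the per-network settings into `--network=<project>_<net>:ip=…,mac=…` suffixes. A small sketch of the multi-network case, reusing the helpers above (the import path is an assumption):

```python
# Illustrative sketch: network flags for a container on two project networks.
# Assumes the helpers are importable from pytests.test_get_net_args.
from podman_compose import get_net_args

from pytests.test_get_net_args import get_minimal_container, get_networked_compose

compose = get_networked_compose(num_networks=2)
container = get_minimal_container()
container["networks"] = {"net0": {"ipv4_address": "192.168.0.42"}, "net1": {}}

# Per the tests above, the expected shape is:
#   --network=test_project_name_net0:ip=192.168.0.42
#   --network=test_project_name_net1
#   --network-alias=service_name
print(get_net_args(compose, container))
```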

pytests/test_get_network_create_args.py (new file)
@@ -0,0 +1,203 @@
import unittest

from podman_compose import get_network_create_args


class TestGetNetworkCreateArgs(unittest.TestCase):
    def test_minimal(self):
        net_desc = {
            "labels": [],
            "internal": False,
            "driver": None,
            "driver_opts": {},
            "ipam": {"config": []},
            "enable_ipv6": False,
        }
        proj_name = "test_project"
        net_name = "test_network"
        expected_args = [
            "create",
            "--label",
            f"io.podman.compose.project={proj_name}",
            "--label",
            f"com.docker.compose.project={proj_name}",
            net_name,
        ]
        args = get_network_create_args(net_desc, proj_name, net_name)
        self.assertEqual(args, expected_args)

    def test_ipv6(self):
        net_desc = {
            "labels": [],
            "internal": False,
            "driver": None,
            "driver_opts": {},
            "ipam": {"config": []},
            "enable_ipv6": True,
        }
        proj_name = "test_project"
        net_name = "test_network"
        expected_args = [
            "create",
            "--label",
            f"io.podman.compose.project={proj_name}",
            "--label",
            f"com.docker.compose.project={proj_name}",
            "--ipv6",
            net_name,
        ]
        args = get_network_create_args(net_desc, proj_name, net_name)
        self.assertEqual(args, expected_args)

    def test_bridge(self):
        net_desc = {
            "labels": [],
            "internal": False,
            "driver": "bridge",
            "driver_opts": {"opt1": "value1", "opt2": "value2"},
            "ipam": {"config": []},
            "enable_ipv6": False,
        }
        proj_name = "test_project"
        net_name = "test_network"
        expected_args = [
            "create",
            "--label",
            f"io.podman.compose.project={proj_name}",
            "--label",
            f"com.docker.compose.project={proj_name}",
            "--driver",
            "bridge",
            "--opt",
            "opt1=value1",
            "--opt",
            "opt2=value2",
            net_name,
        ]
        args = get_network_create_args(net_desc, proj_name, net_name)
        self.assertEqual(args, expected_args)

    def test_ipam_driver_default(self):
        net_desc = {
            "labels": [],
            "internal": False,
            "driver": None,
            "driver_opts": {},
            "ipam": {
                "driver": "default",
                "config": [
                    {
                        "subnet": "192.168.0.0/24",
                        "ip_range": "192.168.0.2/24",
                        "gateway": "192.168.0.1",
                    }
                ],
            },
        }
        proj_name = "test_project"
        net_name = "test_network"
        expected_args = [
            "create",
            "--label",
            f"io.podman.compose.project={proj_name}",
            "--label",
            f"com.docker.compose.project={proj_name}",
            "--subnet",
            "192.168.0.0/24",
            "--ip-range",
            "192.168.0.2/24",
            "--gateway",
            "192.168.0.1",
            net_name,
        ]
        args = get_network_create_args(net_desc, proj_name, net_name)
        self.assertEqual(args, expected_args)

    def test_ipam_driver(self):
        net_desc = {
            "labels": [],
            "internal": False,
            "driver": None,
            "driver_opts": {},
            "ipam": {
                "driver": "someipamdriver",
                "config": [
                    {
                        "subnet": "192.168.0.0/24",
                        "ip_range": "192.168.0.2/24",
                        "gateway": "192.168.0.1",
                    }
                ],
            },
        }
        proj_name = "test_project"
        net_name = "test_network"
        expected_args = [
            "create",
            "--label",
            f"io.podman.compose.project={proj_name}",
            "--label",
            f"com.docker.compose.project={proj_name}",
            "--ipam-driver",
            "someipamdriver",
            "--subnet",
            "192.168.0.0/24",
            "--ip-range",
            "192.168.0.2/24",
            "--gateway",
            "192.168.0.1",
            net_name,
        ]
        args = get_network_create_args(net_desc, proj_name, net_name)
        self.assertEqual(args, expected_args)

    def test_complete(self):
        net_desc = {
            "labels": ["label1", "label2"],
            "internal": True,
            "driver": "bridge",
            "driver_opts": {"opt1": "value1", "opt2": "value2"},
            "ipam": {
                "driver": "someipamdriver",
                "config": [
                    {
                        "subnet": "192.168.0.0/24",
                        "ip_range": "192.168.0.2/24",
                        "gateway": "192.168.0.1",
                    }
                ],
            },
            "enable_ipv6": True,
        }
        proj_name = "test_project"
        net_name = "test_network"
        expected_args = [
            "create",
            "--label",
            f"io.podman.compose.project={proj_name}",
            "--label",
            f"com.docker.compose.project={proj_name}",
            "--label",
            "label1",
            "--label",
            "label2",
            "--internal",
            "--driver",
            "bridge",
            "--opt",
            "opt1=value1",
            "--opt",
            "opt2=value2",
            "--ipam-driver",
            "someipamdriver",
            "--ipv6",
            "--subnet",
            "192.168.0.0/24",
            "--ip-range",
            "192.168.0.2/24",
            "--gateway",
            "192.168.0.1",
            net_name,
        ]
        args = get_network_create_args(net_desc, proj_name, net_name)
        self.assertEqual(args, expected_args)
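These cases pin down how a compose network description is translated into a `podman network create` invocation: the project labels come first, then driver and driver options, the IPAM driver (when it is not `default`), `--ipv6` if enabled, and finally one `--subnet`/`--ip-range`/`--gateway` group per IPAM config entry. A compact sketch of that mapping; the network description values are made-up illustrations shaped like the tested ones.

```python
# Illustrative sketch: compose network description -> `podman network create` args.
from podman_compose import get_network_create_args

net_desc = {
    "labels": [],
    "internal": False,
    "driver": "bridge",
    "driver_opts": {"mtu": "1400"},
    "ipam": {
        "driver": "default",
        "config": [
            {
                "subnet": "10.89.0.0/24",
                "ip_range": "10.89.0.128/25",
                "gateway": "10.89.0.1",
            }
        ],
    },
    "enable_ipv6": False,
}

args = get_network_create_args(net_desc, "demo_project", "demo_project_default")
# Expected shape, following the tests above:
# ['create',
#  '--label', 'io.podman.compose.project=demo_project',
#  '--label', 'com.docker.compose.project=demo_project',
#  '--driver', 'bridge', '--opt', 'mtu=1400',
#  '--subnet', '10.89.0.0/24', '--ip-range', '10.89.0.128/25', '--gateway', '10.89.0.1',
#  'demo_project_default']
print(args)
```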

pytests/test_normalize_depends_on.py (new file)
@@ -0,0 +1,43 @@
import copy

from podman_compose import normalize_service

test_cases_simple = [
    (
        {"depends_on": "my_service"},
        {"depends_on": {"my_service": {"condition": "service_started"}}},
    ),
    (
        {"depends_on": ["my_service"]},
        {"depends_on": {"my_service": {"condition": "service_started"}}},
    ),
    (
        {"depends_on": ["my_service1", "my_service2"]},
        {
            "depends_on": {
                "my_service1": {"condition": "service_started"},
                "my_service2": {"condition": "service_started"},
            },
        },
    ),
    (
        {"depends_on": {"my_service": {"condition": "service_started"}}},
        {"depends_on": {"my_service": {"condition": "service_started"}}},
    ),
    (
        {"depends_on": {"my_service": {"condition": "service_healthy"}}},
        {"depends_on": {"my_service": {"condition": "service_healthy"}}},
    ),
]


def test_normalize_service_simple():
    for test_case, expected in copy.deepcopy(test_cases_simple):
        test_original = copy.deepcopy(test_case)
        test_case = normalize_service(test_case)
        test_result = expected == test_case
        if not test_result:
            print("test: ", test_original)
            print("expected: ", expected)
            print("actual: ", test_case)
        assert test_result
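The table above captures the normalization rule: any short-hand `depends_on` (a plain string or a list) is rewritten into the mapping form with an explicit `service_started` condition, while mappings are left untouched. The equivalent interactive check, illustrative only:

```python
# Illustrative check of the depends_on normalization shown above.
from podman_compose import normalize_service

print(normalize_service({"depends_on": "my_service"}))
# -> {'depends_on': {'my_service': {'condition': 'service_started'}}}

print(normalize_service({"depends_on": ["my_service1", "my_service2"]}))
# -> {'depends_on': {'my_service1': {'condition': 'service_started'},
#                    'my_service2': {'condition': 'service_started'}}}
```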

pytests/test_normalize_final_build.py (new file)
@@ -0,0 +1,271 @@
# SPDX-License-Identifier: GPL-2.0
# pylint: disable=protected-access
from __future__ import annotations

import argparse
import os
import unittest

import yaml
from parameterized import parameterized

from podman_compose import PodmanCompose
from podman_compose import normalize_final
from podman_compose import normalize_service_final

cwd = os.path.abspath(".")


class TestNormalizeFinalBuild(unittest.TestCase):
    cases_simple_normalization = [
        ({"image": "test-image"}, {"image": "test-image"}),
        (
            {"build": "."},
            {
                "build": {"context": cwd, "dockerfile": "Dockerfile"},
            },
        ),
        (
            {"build": "../relative"},
            {
                "build": {
                    "context": os.path.normpath(os.path.join(cwd, "../relative")),
                    "dockerfile": "Dockerfile",
                },
            },
        ),
        (
            {"build": "./relative"},
            {
                "build": {
                    "context": os.path.normpath(os.path.join(cwd, "./relative")),
                    "dockerfile": "Dockerfile",
                },
            },
        ),
        (
            {"build": "/workspace/absolute"},
            {
                "build": {
                    "context": "/workspace/absolute",
                    "dockerfile": "Dockerfile",
                },
            },
        ),
        (
            {
                "build": {
                    "dockerfile": "Dockerfile",
                },
            },
            {
                "build": {
                    "context": cwd,
                    "dockerfile": "Dockerfile",
                },
            },
        ),
        (
            {
                "build": {
                    "context": ".",
                },
            },
            {
                "build": {
                    "context": cwd,
                    "dockerfile": "Dockerfile",
                },
            },
        ),
        (
            {
                "build": {"context": "../", "dockerfile": "test-dockerfile"},
            },
            {
                "build": {
                    "context": os.path.normpath(os.path.join(cwd, "../")),
                    "dockerfile": "test-dockerfile",
                },
            },
        ),
        (
            {
                "build": {"context": ".", "dockerfile": "./dev/test-dockerfile"},
            },
            {
                "build": {
                    "context": cwd,
                    "dockerfile": "./dev/test-dockerfile",
                },
            },
        ),
    ]

    @parameterized.expand(cases_simple_normalization)
    def test_normalize_service_final_returns_absolute_path_in_context(self, input, expected):
        # Tests that [service.build] is normalized after merges
        project_dir = cwd
        self.assertEqual(normalize_service_final(input, project_dir), expected)

    @parameterized.expand(cases_simple_normalization)
    def test_normalize_returns_absolute_path_in_context(self, input, expected):
        project_dir = cwd
        compose_test = {"services": {"test-service": input}}
        compose_expected = {"services": {"test-service": expected}}
        self.assertEqual(normalize_final(compose_test, project_dir), compose_expected)

    @parameterized.expand(cases_simple_normalization)
    def test_parse_compose_file_when_single_compose(self, input, expected):
        compose_test = {"services": {"test-service": input}}
        dump_yaml(compose_test, "test-compose.yaml")

        podman_compose = PodmanCompose()
        set_args(podman_compose, ["test-compose.yaml"], no_normalize=None)

        podman_compose._parse_compose_file()

        actual_compose = {}
        if podman_compose.services:
            podman_compose.services["test-service"].pop("_deps")
            actual_compose = podman_compose.services["test-service"]
        self.assertEqual(actual_compose, expected)

    @parameterized.expand([
        (
            {},
            {"build": "."},
            {"build": {"context": cwd, "dockerfile": "Dockerfile"}},
        ),
        (
            {"build": "."},
            {},
            {"build": {"context": cwd, "dockerfile": "Dockerfile"}},
        ),
        (
            {"build": "/workspace/absolute"},
            {"build": "./relative"},
            {
                "build": {
                    "context": os.path.normpath(os.path.join(cwd, "./relative")),
                    "dockerfile": "Dockerfile",
                }
            },
        ),
        (
            {"build": "./relative"},
            {"build": "/workspace/absolute"},
            {"build": {"context": "/workspace/absolute", "dockerfile": "Dockerfile"}},
        ),
        (
            {"build": "./relative"},
            {"build": "/workspace/absolute"},
            {"build": {"context": "/workspace/absolute", "dockerfile": "Dockerfile"}},
        ),
        (
            {"build": {"dockerfile": "test-dockerfile"}},
            {},
            {"build": {"context": cwd, "dockerfile": "test-dockerfile"}},
        ),
        (
            {},
            {"build": {"dockerfile": "test-dockerfile"}},
            {"build": {"context": cwd, "dockerfile": "test-dockerfile"}},
        ),
        (
            {},
            {"build": {"dockerfile": "test-dockerfile"}},
            {"build": {"context": cwd, "dockerfile": "test-dockerfile"}},
        ),
        (
            {"build": {"dockerfile": "test-dockerfile-1"}},
            {"build": {"dockerfile": "test-dockerfile-2"}},
            {"build": {"context": cwd, "dockerfile": "test-dockerfile-2"}},
        ),
        (
            {"build": "/workspace/absolute"},
            {"build": {"dockerfile": "test-dockerfile"}},
            {"build": {"context": "/workspace/absolute", "dockerfile": "test-dockerfile"}},
        ),
        (
            {"build": {"dockerfile": "test-dockerfile"}},
            {"build": "/workspace/absolute"},
            {"build": {"context": "/workspace/absolute", "dockerfile": "test-dockerfile"}},
        ),
        (
            {"build": {"dockerfile": "./test-dockerfile-1"}},
            {"build": {"dockerfile": "./test-dockerfile-2", "args": ["ENV1=1"]}},
            {
                "build": {
                    "context": cwd,
                    "dockerfile": "./test-dockerfile-2",
                    "args": ["ENV1=1"],
                }
            },
        ),
        (
            {"build": {"dockerfile": "./test-dockerfile-1", "args": ["ENV1=1"]}},
            {"build": {"dockerfile": "./test-dockerfile-2"}},
            {
                "build": {
                    "context": cwd,
                    "dockerfile": "./test-dockerfile-2",
                    "args": ["ENV1=1"],
                }
            },
        ),
        (
            {"build": {"dockerfile": "./test-dockerfile-1", "args": ["ENV1=1"]}},
            {"build": {"dockerfile": "./test-dockerfile-2", "args": ["ENV2=2"]}},
            {
                "build": {
                    "context": cwd,
                    "dockerfile": "./test-dockerfile-2",
                    "args": ["ENV1=1", "ENV2=2"],
                }
            },
        ),
    ])
    def test_parse_when_multiple_composes(self, input, override, expected):
        compose_test_1 = {"services": {"test-service": input}}
        compose_test_2 = {"services": {"test-service": override}}
        dump_yaml(compose_test_1, "test-compose-1.yaml")
        dump_yaml(compose_test_2, "test-compose-2.yaml")

        podman_compose = PodmanCompose()
        set_args(
            podman_compose,
            ["test-compose-1.yaml", "test-compose-2.yaml"],
            no_normalize=None,
        )

        podman_compose._parse_compose_file()

        actual_compose = {}
        if podman_compose.services:
            podman_compose.services["test-service"].pop("_deps")
            actual_compose = podman_compose.services["test-service"]
        self.assertEqual(actual_compose, expected)


def set_args(podman_compose: PodmanCompose, file_names: list[str], no_normalize: bool) -> None:
    podman_compose.global_args = argparse.Namespace()
    podman_compose.global_args.file = file_names
    podman_compose.global_args.project_name = None
    podman_compose.global_args.env_file = None
    podman_compose.global_args.profile = []
    podman_compose.global_args.in_pod_bool = True
    podman_compose.global_args.no_normalize = no_normalize


def dump_yaml(compose: dict, name: str) -> None:
    # Path(Path.cwd()/"subdirectory").mkdir(parents=True, exist_ok=True)
    with open(name, "w", encoding="utf-8") as outfile:
        yaml.safe_dump(compose, outfile, default_flow_style=False)


def test_clean_test_yamls() -> None:
    test_files = ["test-compose-1.yaml", "test-compose-2.yaml", "test-compose.yaml"]
    for file in test_files:
        if os.path.exists(file):
            os.remove(file)
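The two parameterized tables above exercise the same rule from two angles: `normalize_service_final` resolves a service's `build` into an absolute `context` plus an explicit `dockerfile`, and `normalize_final` applies that to every service in a merged compose mapping. A minimal sketch of the first form, illustrative only and using the current directory as the project directory exactly as the tests do:

```python
# Illustrative sketch: resolve a short-hand build into context + dockerfile.
import os

from podman_compose import normalize_service_final

project_dir = os.path.abspath(".")
service = {"build": "."}

print(normalize_service_final(service, project_dir))
# Per the first case above, this yields:
# {'build': {'context': <project_dir>, 'dockerfile': 'Dockerfile'}}
```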

pytests/test_normalize_service.py (new file)
@@ -0,0 +1,70 @@
# SPDX-License-Identifier: GPL-2.0
import unittest

from parameterized import parameterized

from podman_compose import normalize_service


class TestNormalizeService(unittest.TestCase):
    @parameterized.expand([
        ({"test": "test"}, {"test": "test"}),
        ({"build": "."}, {"build": {"context": "."}}),
        ({"build": "./dir-1"}, {"build": {"context": "./dir-1"}}),
        ({"build": {"context": "./dir-1"}}, {"build": {"context": "./dir-1"}}),
        (
            {"build": {"dockerfile": "dockerfile-1"}},
            {"build": {"dockerfile": "dockerfile-1"}},
        ),
        (
            {"build": {"context": "./dir-1", "dockerfile": "dockerfile-1"}},
            {"build": {"context": "./dir-1", "dockerfile": "dockerfile-1"}},
        ),
        (
            {"build": {"additional_contexts": ["ctx=../ctx", "ctx2=../ctx2"]}},
            {"build": {"additional_contexts": ["ctx=../ctx", "ctx2=../ctx2"]}},
        ),
        (
            {"build": {"additional_contexts": {"ctx": "../ctx", "ctx2": "../ctx2"}}},
            {"build": {"additional_contexts": ["ctx=../ctx", "ctx2=../ctx2"]}},
        ),
    ])
    def test_simple(self, input, expected):
        self.assertEqual(normalize_service(input), expected)

    @parameterized.expand([
        ({"test": "test"}, {"test": "test"}),
        ({"build": "."}, {"build": {"context": "./sub_dir/."}}),
        ({"build": "./dir-1"}, {"build": {"context": "./sub_dir/dir-1"}}),
        ({"build": {"context": "./dir-1"}}, {"build": {"context": "./sub_dir/dir-1"}}),
        (
            {"build": {"dockerfile": "dockerfile-1"}},
            {"build": {"context": "./sub_dir", "dockerfile": "dockerfile-1"}},
        ),
        (
            {"build": {"context": "./dir-1", "dockerfile": "dockerfile-1"}},
            {"build": {"context": "./sub_dir/dir-1", "dockerfile": "dockerfile-1"}},
        ),
    ])
    def test_normalize_service_with_sub_dir(self, input, expected):
        self.assertEqual(normalize_service(input, sub_dir="./sub_dir"), expected)

    @parameterized.expand([
        ([], []),
        (["sh"], ["sh"]),
        (["sh", "-c", "date"], ["sh", "-c", "date"]),
        ("sh", ["sh"]),
        ("sleep infinity", ["sleep", "infinity"]),
        (
            "bash -c 'sleep infinity'",
            ["bash", "-c", "sleep infinity"],
        ),
    ])
    def test_command_like(self, input, expected):
        for key in ['command', 'entrypoint']:
            input_service = {}
            input_service[key] = input

            expected_service = {}
            expected_service[key] = expected
            self.assertEqual(normalize_service(input_service), expected_service)

pytests/test_volumes.py (new file)
@@ -0,0 +1,20 @@
# SPDX-License-Identifier: GPL-2.0
# pylint: disable=redefined-outer-name
import unittest

from podman_compose import parse_short_mount


class ParseShortMountTests(unittest.TestCase):
    def test_multi_propagation(self):
        self.assertEqual(
            parse_short_mount("/foo/bar:/baz:U,Z", "/"),
            {
                "type": "bind",
                "source": "/foo/bar",
                "target": "/baz",
                "bind": {
                    "propagation": "U,Z",
                },
            },
        )

@@ -3,3 +3,7 @@ universal = 1

[metadata]
version = attr: podman_compose.__version__

[flake8]
# The GitHub editor is 127 chars wide
max-line-length=127

setup.py
@@ -1,49 +1,49 @@
# SPDX-License-Identifier: GPL-2.0

import os

from setuptools import setup

try:
    readme = open(os.path.join(os.path.dirname(__file__), 'README.md')).read()
except:
    readme = ''
    README = open(os.path.join(os.path.dirname(__file__), "README.md"), encoding="utf-8").read()
except:  # noqa: E722 # pylint: disable=bare-except
    README = ""

setup(
    name='podman-compose',
    name="podman-compose",
    description="A script to run docker-compose.yml using podman",
    long_description=readme,
    long_description_content_type='text/markdown',
    long_description=README,
    long_description_content_type="text/markdown",
    classifiers=[
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.5",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Intended Audience :: Developers",
        "Operating System :: OS Independent",
        "Development Status :: 3 - Alpha",
        "Topic :: Software Development :: Build Tools",
        "License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
    ],
    keywords='podman, podman-compose',
    author='Muayyad Alsadi',
    author_email='alsadi@gmail.com',
    url='https://github.com/containers/podman-compose',
    py_modules=['podman_compose'],
    entry_points={
        'console_scripts': [
            'podman-compose = podman_compose:main'
        ]
    },
    keywords="podman, podman-compose",
    author="Muayyad Alsadi",
    author_email="alsadi@gmail.com",
    url="https://github.com/containers/podman-compose",
    py_modules=["podman_compose"],
    entry_points={"console_scripts": ["podman-compose = podman_compose:main"]},
    include_package_data=True,
    license='GPL-2.0-only',
    license="GPL-2.0-only",
    install_requires=[
        'pyyaml',
        'python-dotenv',
        "pyyaml",
        "python-dotenv",
    ],
    extras_require={"devel": ["ruff", "pre-commit", "coverage", "parameterized"]},
    # test_suite='tests',
    # tests_require=[
    #     'coverage',
    #     'pytest-cov',
    #     'pytest',
    #     'tox',
    # ]
)

@@ -1,8 +1,33 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
-e .
coverage==7.4.3
parameterized==0.9.0
pytest==8.0.2
tox==4.13.0
ruff==0.3.1
pylint==3.1.0

coverage
pytest-cov
pytest
tox
# The packages below are transitive dependencies of the packages above and are included here
# to make testing reproducible.
# To refresh, create a new virtualenv and do:
# pip install -r requirements.txt -r test-requirements.txt
# pip freeze > test-requirements.txt
# and edit test-requirements.txt to add this comment

astroid==3.1.0
cachetools==5.3.3
chardet==5.2.0
colorama==0.4.6
dill==0.3.8
distlib==0.3.8
filelock==3.13.1
iniconfig==2.0.0
isort==5.13.2
mccabe==0.7.0
packaging==23.2
platformdirs==4.2.0
pluggy==1.4.0
pyproject-api==1.6.1
python-dotenv==1.0.1
PyYAML==6.0.1
tomlkit==0.12.4
virtualenv==20.25.1

tests/__init__.py (new file)
@@ -0,0 +1,12 @@
import os
import subprocess


def create_base_test_image():
    subprocess.check_call(
        ['podman', 'build', '-t', 'nopush/podman-compose-test', '.'],
        cwd=os.path.join(os.path.dirname(__file__), "base_image"),
    )


create_base_test_image()

tests/additional_contexts/README.md (new file)
@@ -0,0 +1,14 @@
# Test podman-compose with build.additional_contexts

```
podman-compose build
podman-compose up
podman-compose down
```

expected output would be

```
[dict] | Data for dict
[list] | Data for list
```

tests/additional_contexts/data_for_dict/data.txt (new file)
@@ -0,0 +1 @@
Data for dict

tests/additional_contexts/data_for_list/data.txt (new file)
@@ -0,0 +1 @@
Data for list

tests/additional_contexts/project/Dockerfile (new file)
@@ -0,0 +1,3 @@
FROM busybox
COPY --from=data data.txt /data/data.txt
CMD ["busybox", "cat", "/data/data.txt"]

tests/additional_contexts/project/docker-compose.yml (new file)
@@ -0,0 +1,12 @@
version: "3.7"
services:
  dict:
    build:
      context: .
      additional_contexts:
        data: ../data_for_dict
  list:
    build:
      context: .
      additional_contexts:
        - data=../data_for_list

tests/base_image/Dockerfile (new file)
@@ -0,0 +1,6 @@
FROM docker.io/library/debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y \
    dumb-init \
    busybox \
    wget

tests/build_fail/README.md (new file)
@@ -0,0 +1,22 @@
# Test podman-compose with build (fail scenario)

```shell
podman-compose build || echo $?
```

expected output would be something like

```
STEP 1/3: FROM busybox
STEP 2/3: RUN this_command_does_not_exist
/bin/sh: this_command_does_not_exist: not found
Error: building at STEP "RUN this_command_does_not_exist": while running runtime: exit status 127

exit code: 127
```

Expected `podman-compose` exit code:
```shell
echo $?
127
```

tests/build_fail/context/Dockerfile (new file)
@@ -0,0 +1,3 @@
FROM busybox
RUN this_command_does_not_exist
CMD ["sh"]

tests/build_fail/docker-compose.yml (new file)
@@ -0,0 +1,5 @@
version: "3"
services:
  test:
    build: ./context
    image: build-fail-img

tests/build_secrets/Dockerfile (new file)
@@ -0,0 +1,9 @@
FROM busybox

RUN --mount=type=secret,required=true,id=build_secret \
    ls -l /run/secrets/ && cat /run/secrets/build_secret

RUN --mount=type=secret,required=true,id=build_secret,target=/tmp/secret \
    ls -l /run/secrets/ /tmp/ && cat /tmp/secret

CMD [ 'echo', 'nothing here' ]

tests/build_secrets/docker-compose.yaml (new file)
@@ -0,0 +1,22 @@
version: "3.8"

services:
  test:
    image: test
    secrets:
      - run_secret # implicitly mount to /run/secrets/run_secret
      - source: run_secret
        target: /tmp/run_secret2 # explicit mount point

    build:
      context: .
      secrets:
        - build_secret # can be mounted in Dockerfile with "RUN --mount=type=secret,id=build_secret"
        - source: build_secret
          target: build_secret2 # rename to build_secret2

secrets:
  build_secret:
    file: ./my_secret
  run_secret:
    file: ./my_secret

tests/build_secrets/docker-compose.yaml.invalid (new file)
@@ -0,0 +1,18 @@
version: "3.8"

services:
  test:
    image: test
    build:
      context: .
      secrets:
        # invalid target argument
        #
        # According to https://github.com/compose-spec/compose-spec/blob/master/build.md, target is
        # supposed to be the "name of a *file* to be mounted in /run/secrets/". Not a path.
        - source: build_secret
          target: /build_secret

secrets:
  build_secret:
    file: ./my_secret

tests/build_secrets/my_secret (new file)
@@ -0,0 +1 @@
important-secret-is-important

@@ -1,4 +1,4 @@

```
podman-compose run --rm sleep /bin/sh -c 'wget -O - http://localhost:8000/hosts'
podman-compose run --rm sleep /bin/sh -c 'wget -O - http://web:8000/hosts'
```

@@ -1,21 +1,22 @@
version: "3.7"
services:
  web:
    image: busybox
    command: ["/bin/busybox", "httpd", "-f", "-h", "/etc/", "-p", "8000"]
    image: nopush/podman-compose-test
    command: ["dumb-init", "/bin/busybox", "httpd", "-f", "-h", "/etc/", "-p", "8000"]
    tmpfs:
      - /run
      - /tmp
  sleep:
    image: busybox
    command: ["/bin/busybox", "sh", "-c", "sleep 3600"]
    depends_on: "web"
    image: nopush/podman-compose-test
    command: ["dumb-init", "/bin/busybox", "sh", "-c", "sleep 3600"]
    depends_on:
      - "web"
    tmpfs:
      - /run
      - /tmp
  sleep2:
    image: busybox
    command: ["/bin/busybox", "sh", "-c", "sleep 3600"]
    image: nopush/podman-compose-test
    command: ["dumb-init", "/bin/busybox", "sh", "-c", "sleep 3600"]
    depends_on:
      - sleep
    tmpfs:

tests/env-file-tests/.env (new file)
@@ -0,0 +1,2 @@
ZZVAR1='This value is overwritten by env-file-tests/.env'
ZZVAR3='This value is loaded from env-file-tests/.env'

tests/env-file-tests/.gitignore (new file, vendored)
@@ -0,0 +1,4 @@
# This overrides the repository root .gitignore (ignoring all .env).
# The .env files in this directory are important for the test cases.
!.env
!project/.env

tests/env-file-tests/README.md (new file)
@@ -0,0 +1,37 @@
Running the following commands should always give podman-rocks-123

```
podman-compose -f project/container-compose.yaml --env-file env-files/project-1.env up
```

```
podman-compose -f $(pwd)/project/container-compose.yaml --env-file $(pwd)/env-files/project-1.env up
```

```
podman-compose -f $(pwd)/project/container-compose.env-file-flat.yaml up
```

```
podman-compose -f $(pwd)/project/container-compose.env-file-obj.yaml up
```

```
podman-compose -f $(pwd)/project/container-compose.env-file-obj-optional.yaml up
```

Based on environment variable precedence, this command should give podman-rocks-321

```
ZZVAR1=podman-rocks-321 podman-compose -f $(pwd)/project/container-compose.yaml --env-file $(pwd)/env-files/project-1.env up
```

_The test below should print three environment variables_

```
podman-compose -f $(pwd)/project/container-compose.load-.env-in-project.yaml run --rm app

ZZVAR1=This value is overwritten by env-file-tests/.env
ZZVAR2=This value is loaded from .env in project/ directory
ZZVAR3=This value is loaded from env-file-tests/.env
```

tests/env-file-tests/env-files/project-1.env (new file)
@@ -0,0 +1,3 @@
ZZVAR1=podman-rocks-123
ZZVAR2=podman-rocks-124
ZZVAR3=podman-rocks-125

tests/env-file-tests/env-files/project-2.env (new file)
@@ -0,0 +1,2 @@
ZZVAR1=podman-rocks-223
ZZVAR2=podman-rocks-224

tests/env-file-tests/project/.env (new file)
@@ -0,0 +1,2 @@
ZZVAR1='This value is loaded but should be overwritten'
ZZVAR2='This value is loaded from .env in project/ directory'
Some files were not shown because too many files have changed in this diff.