28 Commits

Author SHA1 Message Date
24ec539932 release 1.0.3 2021-12-21 23:15:52 +02:00
2803046ac3 add awx 17 example 2021-12-21 22:57:45 +02:00
d1768c1d9d FIXES #377: down -v 2021-12-21 22:57:45 +02:00
820ea012c5 FIXES #: U mount propagation option 2021-12-21 22:57:45 +02:00
5ba96a1082 #365: 'Namespace' object has no attribute 'volumes' 2021-12-21 22:57:45 +02:00
49fe6e7e0f Update README.md 2021-12-18 23:34:32 +02:00
6c1ccfcefa Add missing arguments to the log (latest, names, since, until) 2021-12-14 11:35:30 +02:00
724d2fd18c Support viewing all logs 2021-12-14 11:35:30 +02:00
3e940579d9 Support for starting/stopping/restarting all services
Reverse services when stopping or restarting
2021-12-14 11:35:30 +02:00
af1697e9bf FIXES #288: extenal as dict 2021-12-13 03:25:17 +02:00
e62f1a54af FIXES #288: extenal as dict 2021-12-13 01:21:34 +02:00
179f9ab0e3 FIXES #288: do not create external network 2021-12-13 00:24:23 +02:00
dd6b1ee88c FIXES #288: do not create external network 2021-12-13 00:21:53 +02:00
9a8dc4ca17 release 1.0.2 2021-12-11 02:06:10 +02:00
6b5f62d693 Fixes #199: seccomp:unconfined 2021-12-11 01:50:40 +02:00
3782b4ab84 FIXES #371: respect COMPOSE_FILE env 2021-12-10 23:26:13 +02:00
95e07e27f0 FIXES #185: creates dirs 2021-12-10 22:46:22 +02:00
a3123ce480 #222: normalize basedir using os.path.realpath 2021-12-10 22:27:00 +02:00
02f78dc3d7 FIXES #333: when volumes are merged, remove duplicates 2021-12-10 02:06:43 +02:00
8cd97682d0 FIXES #370: bug-for-bug hanlding of .env 2021-12-10 01:01:45 +02:00
85244272ff FIXES #368: parse depends_on of type dict 2021-12-09 16:18:52 +02:00
30cfe2317c set version 2021-12-09 16:12:59 +02:00
7fda1cc835 fix AttributeError when running a one-off command
Without this, I get errors when running "podman-compose -p podname run".
2021-12-09 16:11:04 +02:00
5f40f4df31 Remove named volumes during "down -v"
Fixes containers#105

Signed-off-by: Luiz Carvalho <lucarval@redhat.com>
2021-12-09 16:09:59 +02:00
d38aeaa713 update README 2021-12-09 15:59:34 +02:00
17f9ca61bd test fixes for SELinux (Fedora) 2021-11-24 18:06:18 +02:00
80a47a13d5 add network-alias 2021-11-21 12:35:13 +02:00
872404c3a7 initial work on CNI podman network create 2021-11-21 01:23:29 +02:00
33 changed files with 976 additions and 255 deletions

@@ -1,28 +1,44 @@
 # Podman Compose
-An implementation of `docker-compose` with [Podman](https://podman.io/) backend.
+An implementation of [Compose Spec](https://compose-spec.io/) with [Podman](https://podman.io/) backend.
-The main objective of this project is to be able to run `docker-compose.yml` unmodified and rootless.
-This project is aimed to provide drop-in replacement for `docker-compose`,
-and it's very useful for certain cases because:
-- can run rootless
-- only depend on `podman` and Python3 and [PyYAML](https://pyyaml.org/)
-- no daemon, no setup.
-- can be used by developers to run single-machine containerized stacks using single familiar YAML file
+This project focuses on:
+* rootless
+* daemon-less process model: we directly execute podman, no running daemon.
+This project only depends on:
+* `podman`
+* Python3
+* [PyYAML](https://pyyaml.org/)
+* [python-dotenv](https://pypi.org/project/python-dotenv/)
+And it's a single Python file script that you can drop into your PATH and run.
+## References
+* [spec.md](https://github.com/compose-spec/compose-spec/blob/master/spec.md)
+* [docker-compose compose-file-v3](https://docs.docker.com/compose/compose-file/compose-file-v3/)
+* [docker-compose compose-file-v2](https://docs.docker.com/compose/compose-file/compose-file-v2/)
+## Alternatives
+As in [this article](https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/), you can set up a `podman.socket` and use unmodified `docker-compose` that talks to that socket, but in this case you lose the process model (e.g. `docker-compose build` will send a possibly large context tarball to the daemon).
 For production-like single-machine containerized environment consider
 - [k3s](https://k3s.io) | [k3s github](https://github.com/rancher/k3s)
 - [MiniKube](https://minikube.sigs.k8s.io/)
-- [MiniShift](https://www.okd.io/minishift/)
 For the real thing (multi-node clusters) check any production
-OpenShift/Kubernetes distribution like [OKD](https://www.okd.io/minishift/).
+OpenShift/Kubernetes distribution like [OKD](https://www.okd.io/).
-## NOTE
-This project is still under development.
+## Versions
+If you have a legacy version of `podman` (before 3.1.0) you might need to stick with the legacy `podman-compose` `0.1.x` branch.
+The legacy branch 0.1.x uses mappings and workarounds to compensate for rootless limitations.
+Modern podman versions (>=3.4) do not have those limitations, and thus you can use the latest and stable 1.x branch.
 ## Installation
@@ -47,7 +63,7 @@ curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containe
 chmod +x /usr/local/bin/podman-compose
 ```
-or
+or inside your home
 ```
 curl -o ~/.local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/devel/podman_compose.py
@@ -84,18 +100,11 @@ which have
 When testing the `AWX3` example, if you got errors just wait for db migrations to end.
+There is also AWX 17.1.0.
 ## Tests
 Inside `tests/` directory we have many useless docker-compose stacks
 that are meant to test as much cases as we can to make sure we are compatible
-## How it works
-The default mapping `1podfw` creates a single pod and attach all containers to
-its network namespace so that all containers talk via localhost.
-For more information see [docs/Mappings.md](docs/Mappings.md).
-If you are running as root, you might use identity mapping.
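In sketch form, the `1podfw` mapping described in the removed section above put every container in one pod and hoisted the port forwards up to the pod, so services reach each other over localhost. This is a simplified illustration of the removed `tr_1pod`/`move_port_fw` helpers (visible further down this diff), not the actual podman-compose code:

```python
def tr_1podfw_sketch(project_name, given_containers):
    """Merge all containers into one pod and move their port
    forwards onto the pod, so containers talk via localhost."""
    pod = {"name": project_name, "ports": []}
    containers = []
    for cnt in given_containers:
        cnt = dict(cnt, pod=project_name)
        # Port publishing happens once, on the pod, not per container.
        pod["ports"].extend(cnt.pop("ports", []))
        containers.append(cnt)
    return [pod], containers

pods, cnts = tr_1podfw_sketch("proj", [
    {"name": "web", "ports": ["8080:80"]},
    {"name": "db"},
])
```

The 1.x branch drops these mappings in favor of real per-project podman networks.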

examples/awx17/README.md Normal file

@@ -0,0 +1,37 @@
# AWX Compose
The `roles` directory is taken from [here](https://github.com/ansible/awx/tree/17.1.0/installer/roles/local_docker).
Also look at https://github.com/ansible/awx/tree/17.1.0/tools/docker-compose.
```
mkdir deploy awx17
ansible localhost \
-e host_port=8080 \
-e awx_secret_key='awx,secret.123' \
-e secret_key='awx,secret.123' \
-e admin_user='admin' \
-e admin_password='admin' \
-e pg_password='awx,123.' \
-e pg_username='awx' \
-e pg_database='awx' \
-e pg_port='5432' \
-e redis_image="docker.io/library/redis:6-alpine" \
-e postgres_data_dir="./data/pg" \
-e compose_start_containers=false \
-e dockerhub_base='docker.io/ansible' \
-e awx_image='docker.io/ansible/awx' \
-e awx_version='17.1.0' \
-e dockerhub_version='17.1.0' \
-e docker_deploy_base_path=$PWD/deploy \
-e docker_compose_dir=$PWD/awx17 \
-e awx_task_hostname=awx \
-e awx_web_hostname=awxweb \
-m include_role -a name=local_docker
cp awx17/docker-compose.yml awx17/docker-compose.yml.orig
sed -i -re "s#- \"$PWD/awx17/(.*):/#- \"./\1:/#" awx17/docker-compose.yml
cd awx17
podman-compose run --rm --service-ports task awx-manage migrate --no-input
podman-compose up -d
```
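The `sed` call above rewrites the absolute bind-mount paths that the installer writes into `docker-compose.yml` into paths relative to the compose file. A rough Python equivalent of that substitution (the `/home/me` prefix is just an illustrative `$PWD`):

```python
import re

def relativize(line: str, pwd: str) -> str:
    # Turn  - "<pwd>/awx17/<path>:/...  into  - "./<path>:/...
    return re.sub(r'- "' + re.escape(pwd) + r'/awx17/(.*):/',
                  r'- "./\1:/', line)

line = '      - "/home/me/awx17/redis_socket:/var/run/redis:rw"'
print(relativize(line, "/home/me"))
```

Making the mounts relative keeps the copied compose file portable when the `awx17` directory is moved.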


@@ -0,0 +1,11 @@
---
dockerhub_version: "{{ lookup('file', playbook_dir + '/../VERSION') }}"
awx_image: "awx"
redis_image: "redis"
postgresql_version: "12"
postgresql_image: "postgres:{{postgresql_version}}"
compose_start_containers: true
upgrade_postgres: false


@@ -0,0 +1,74 @@
---
- name: Create {{ docker_compose_dir }} directory
  file:
    path: "{{ docker_compose_dir }}"
    state: directory
- name: Create Redis socket directory
  file:
    path: "{{ docker_compose_dir }}/redis_socket"
    state: directory
    mode: 0777
- name: Create Docker Compose Configuration
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ docker_compose_dir }}/{{ item.file }}"
    mode: "{{ item.mode }}"
  loop:
    - file: environment.sh
      mode: "0600"
    - file: credentials.py
      mode: "0600"
    - file: docker-compose.yml
      mode: "0600"
    - file: nginx.conf
      mode: "0600"
    - file: redis.conf
      mode: "0664"
  register: awx_compose_config
- name: Render SECRET_KEY file
  copy:
    content: "{{ secret_key }}"
    dest: "{{ docker_compose_dir }}/SECRET_KEY"
    mode: 0600
  register: awx_secret_key
- block:
    - name: Remove AWX containers before migrating postgres so that the old postgres container does not get used
      docker_compose:
        project_src: "{{ docker_compose_dir }}"
        state: absent
      ignore_errors: true
    - name: Run migrations in task container
      shell: docker-compose run --rm --service-ports task awx-manage migrate --no-input
      args:
        chdir: "{{ docker_compose_dir }}"
    - name: Start the containers
      docker_compose:
        project_src: "{{ docker_compose_dir }}"
        restarted: "{{ awx_compose_config is changed or awx_secret_key is changed }}"
      register: awx_compose_start
    - name: Update CA trust in awx_web container
      command: docker exec awx_web '/usr/bin/update-ca-trust'
      when: awx_compose_config.changed or awx_compose_start.changed
    - name: Update CA trust in awx_task container
      command: docker exec awx_task '/usr/bin/update-ca-trust'
      when: awx_compose_config.changed or awx_compose_start.changed
    - name: Wait for launch script to create user
      wait_for:
        timeout: 10
      delegate_to: localhost
    - name: Create Preload data
      command: docker exec awx_task bash -c "/usr/bin/awx-manage create_preload_data"
      when: create_preload_data|bool
      register: cdo
      changed_when: "'added' in cdo.stdout"
  when: compose_start_containers|bool


@@ -0,0 +1,15 @@
---
- name: Generate broadcast websocket secret
  set_fact:
    broadcast_websocket_secret: "{{ lookup('password', '/dev/null length=128') }}"
  run_once: true
  no_log: true
  when: broadcast_websocket_secret is not defined
- import_tasks: upgrade_postgres.yml
  when:
    - postgres_data_dir is defined
    - pg_hostname is not defined
- import_tasks: set_image.yml
- import_tasks: compose.yml


@@ -0,0 +1,46 @@
---
- name: Manage AWX Container Images
  block:
    - name: Export Docker awx image if it isn't local and there isn't a registry defined
      docker_image:
        name: "{{ awx_image }}"
        tag: "{{ awx_version }}"
        archive_path: "{{ awx_local_base_config_path|default('/tmp') }}/{{ awx_image }}_{{ awx_version }}.tar"
      when: inventory_hostname != "localhost" and docker_registry is not defined
      delegate_to: localhost
    - name: Set docker base path
      set_fact:
        docker_deploy_base_path: "{{ awx_base_path|default('/tmp') }}/docker_deploy"
      when: ansible_connection != "local" and docker_registry is not defined
    - name: Ensure directory exists
      file:
        path: "{{ docker_deploy_base_path }}"
        state: directory
      when: ansible_connection != "local" and docker_registry is not defined
    - name: Copy awx image to docker execution
      copy:
        src: "{{ awx_local_base_config_path|default('/tmp') }}/{{ awx_image }}_{{ awx_version }}.tar"
        dest: "{{ docker_deploy_base_path }}/{{ awx_image }}_{{ awx_version }}.tar"
      when: ansible_connection != "local" and docker_registry is not defined
    - name: Load awx image
      docker_image:
        name: "{{ awx_image }}"
        tag: "{{ awx_version }}"
        load_path: "{{ docker_deploy_base_path }}/{{ awx_image }}_{{ awx_version }}.tar"
        timeout: 300
      when: ansible_connection != "local" and docker_registry is not defined
    - name: Set full image path for local install
      set_fact:
        awx_docker_actual_image: "{{ awx_image }}:{{ awx_version }}"
      when: docker_registry is not defined
  when: dockerhub_base is not defined
- name: Set DockerHub Image Paths
  set_fact:
    awx_docker_actual_image: "{{ dockerhub_base }}/awx:{{ dockerhub_version }}"
  when: dockerhub_base is defined


@@ -0,0 +1,64 @@
---
- name: Create {{ postgres_data_dir }} directory
  file:
    path: "{{ postgres_data_dir }}"
    state: directory
- name: Get full path of postgres data dir
  shell: "echo {{ postgres_data_dir }}"
  register: fq_postgres_data_dir
- name: Register temporary docker container
  set_fact:
    container_command: "docker run --rm -v '{{ fq_postgres_data_dir.stdout }}:/var/lib/postgresql' centos:8 bash -c "
- name: Check for existing Postgres data (run from inside the container for access to file)
  shell:
    cmd: |
      {{ container_command }} "[[ -f /var/lib/postgresql/10/data/PG_VERSION ]] && echo 'exists'"
  register: pg_version_file
  ignore_errors: true
- name: Record Postgres version
  shell: |
    {{ container_command }} "cat /var/lib/postgresql/10/data/PG_VERSION"
  register: old_pg_version
  when: pg_version_file is defined and pg_version_file.stdout == 'exists'
- name: Determine whether to upgrade postgres
  set_fact:
    upgrade_postgres: "{{ old_pg_version.stdout == '10' }}"
  when: old_pg_version.changed
- name: Set up new postgres paths pre-upgrade
  shell: |
    {{ container_command }} "mkdir -p /var/lib/postgresql/12/data/"
  when: upgrade_postgres | bool
- name: Stop AWX before upgrading postgres
  docker_compose:
    project_src: "{{ docker_compose_dir }}"
    stopped: true
  when: upgrade_postgres | bool
- name: Upgrade Postgres
  shell: |
    docker run --rm \
      -v {{ postgres_data_dir }}/10/data:/var/lib/postgresql/10/data \
      -v {{ postgres_data_dir }}/12/data:/var/lib/postgresql/12/data \
      -e PGUSER={{ pg_username }} -e POSTGRES_INITDB_ARGS="-U {{ pg_username }}" \
      tianon/postgres-upgrade:10-to-12 --username={{ pg_username }}
  when: upgrade_postgres | bool
- name: Copy old pg_hba.conf
  shell: |
    {{ container_command }} "cp /var/lib/postgresql/10/data/pg_hba.conf /var/lib/postgresql/12/data/pg_hba.conf"
  when: upgrade_postgres | bool
- name: Remove old data directory
  shell: |
    {{ container_command }} "rm -rf /var/lib/postgresql/10/data"
  when:
    - upgrade_postgres | bool
    - compose_start_containers|bool


@@ -0,0 +1,13 @@
DATABASES = {
    'default': {
        'ATOMIC_REQUESTS': True,
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': "{{ pg_database }}",
        'USER': "{{ pg_username }}",
        'PASSWORD': "{{ pg_password }}",
        'HOST': "{{ pg_hostname | default('postgres') }}",
        'PORT': "{{ pg_port }}",
    }
}
BROADCAST_WEBSOCKET_SECRET = "{{ broadcast_websocket_secret | b64encode }}"
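The `b64encode` filter on the last line base64-encodes the generated websocket secret before it lands in the Django settings; the equivalent in plain Python (the secret string here is only a stand-in for the generated 128-character value):

```python
import base64

secret = "secret"  # stand-in for the generated broadcast_websocket_secret
encoded = base64.b64encode(secret.encode()).decode()
print(encoded)  # c2VjcmV0
```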


@@ -0,0 +1,208 @@
#jinja2: lstrip_blocks: True
version: '2'
services:
  web:
    image: {{ awx_docker_actual_image }}
    container_name: awx_web
    depends_on:
      - redis
    {% if pg_hostname is not defined %}
      - postgres
    {% endif %}
    {% if (host_port is defined) or (host_port_ssl is defined) %}
    ports:
      {% if (host_port_ssl is defined) and (ssl_certificate is defined) %}
      - "{{ host_port_ssl }}:8053"
      {% endif %}
      {% if host_port is defined %}
      - "{{ host_port }}:8052"
      {% endif %}
    {% endif %}
    hostname: {{ awx_web_hostname }}
    user: root
    restart: unless-stopped
    {% if (awx_web_container_labels is defined) and (',' in awx_web_container_labels) %}
    {% set awx_web_container_labels_list = awx_web_container_labels.split(',') %}
    labels:
      {% for awx_web_container_label in awx_web_container_labels_list %}
      - {{ awx_web_container_label }}
      {% endfor %}
    {% elif awx_web_container_labels is defined %}
    labels:
      - {{ awx_web_container_labels }}
    {% endif %}
    volumes:
      - supervisor-socket:/var/run/supervisor
      - rsyslog-socket:/var/run/awx-rsyslog/
      - rsyslog-config:/var/lib/awx/rsyslog/
      - "{{ docker_compose_dir }}/SECRET_KEY:/etc/tower/SECRET_KEY"
      - "{{ docker_compose_dir }}/environment.sh:/etc/tower/conf.d/environment.sh"
      - "{{ docker_compose_dir }}/credentials.py:/etc/tower/conf.d/credentials.py"
      - "{{ docker_compose_dir }}/nginx.conf:/etc/nginx/nginx.conf:ro"
      - "{{ docker_compose_dir }}/redis_socket:/var/run/redis/:rw"
      {% if project_data_dir is defined %}
      - "{{ project_data_dir +':/var/lib/awx/projects:rw' }}"
      {% endif %}
      {% if custom_venv_dir is defined %}
      - "{{ custom_venv_dir +':'+ custom_venv_dir +':rw' }}"
      {% endif %}
      {% if ca_trust_dir is defined %}
      - "{{ ca_trust_dir +':/etc/pki/ca-trust/source/anchors:ro' }}"
      {% endif %}
      {% if (ssl_certificate is defined) and (ssl_certificate_key is defined) %}
      - "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
      - "{{ ssl_certificate_key +':/etc/nginx/awxweb_key.pem:ro' }}"
      {% elif (ssl_certificate is defined) and (ssl_certificate_key is not defined) %}
      - "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
      {% endif %}
    {% if (awx_container_search_domains is defined) and (',' in awx_container_search_domains) %}
    {% set awx_container_search_domains_list = awx_container_search_domains.split(',') %}
    dns_search:
      {% for awx_container_search_domain in awx_container_search_domains_list %}
      - {{ awx_container_search_domain }}
      {% endfor %}
    {% elif awx_container_search_domains is defined %}
    dns_search: "{{ awx_container_search_domains }}"
    {% endif %}
    {% if (awx_alternate_dns_servers is defined) and (',' in awx_alternate_dns_servers) %}
    {% set awx_alternate_dns_servers_list = awx_alternate_dns_servers.split(',') %}
    dns:
      {% for awx_alternate_dns_server in awx_alternate_dns_servers_list %}
      - {{ awx_alternate_dns_server }}
      {% endfor %}
    {% elif awx_alternate_dns_servers is defined %}
    dns: "{{ awx_alternate_dns_servers }}"
    {% endif %}
    {% if (docker_compose_extra_hosts is defined) and (':' in docker_compose_extra_hosts) %}
    {% set docker_compose_extra_hosts_list = docker_compose_extra_hosts.split(',') %}
    extra_hosts:
      {% for docker_compose_extra_host in docker_compose_extra_hosts_list %}
      - "{{ docker_compose_extra_host }}"
      {% endfor %}
    {% endif %}
    environment:
      http_proxy: {{ http_proxy | default('') }}
      https_proxy: {{ https_proxy | default('') }}
      no_proxy: {{ no_proxy | default('') }}
    {% if docker_logger is defined %}
    logging:
      driver: {{ docker_logger }}
    {% endif %}
  task:
    image: {{ awx_docker_actual_image }}
    container_name: awx_task
    depends_on:
      - redis
      - web
    {% if pg_hostname is not defined %}
      - postgres
    {% endif %}
    command: /usr/bin/launch_awx_task.sh
    hostname: {{ awx_task_hostname }}
    user: root
    restart: unless-stopped
    volumes:
      - supervisor-socket:/var/run/supervisor
      - rsyslog-socket:/var/run/awx-rsyslog/
      - rsyslog-config:/var/lib/awx/rsyslog/
      - "{{ docker_compose_dir }}/SECRET_KEY:/etc/tower/SECRET_KEY"
      - "{{ docker_compose_dir }}/environment.sh:/etc/tower/conf.d/environment.sh"
      - "{{ docker_compose_dir }}/credentials.py:/etc/tower/conf.d/credentials.py"
      - "{{ docker_compose_dir }}/redis_socket:/var/run/redis/:rw"
      {% if project_data_dir is defined %}
      - "{{ project_data_dir +':/var/lib/awx/projects:rw' }}"
      {% endif %}
      {% if custom_venv_dir is defined %}
      - "{{ custom_venv_dir +':'+ custom_venv_dir +':rw' }}"
      {% endif %}
      {% if ca_trust_dir is defined %}
      - "{{ ca_trust_dir +':/etc/pki/ca-trust/source/anchors:ro' }}"
      {% endif %}
      {% if ssl_certificate is defined %}
      - "{{ ssl_certificate +':/etc/nginx/awxweb.pem:ro' }}"
      {% endif %}
    {% if (awx_container_search_domains is defined) and (',' in awx_container_search_domains) %}
    {% set awx_container_search_domains_list = awx_container_search_domains.split(',') %}
    dns_search:
      {% for awx_container_search_domain in awx_container_search_domains_list %}
      - {{ awx_container_search_domain }}
      {% endfor %}
    {% elif awx_container_search_domains is defined %}
    dns_search: "{{ awx_container_search_domains }}"
    {% endif %}
    {% if (awx_alternate_dns_servers is defined) and (',' in awx_alternate_dns_servers) %}
    {% set awx_alternate_dns_servers_list = awx_alternate_dns_servers.split(',') %}
    dns:
      {% for awx_alternate_dns_server in awx_alternate_dns_servers_list %}
      - {{ awx_alternate_dns_server }}
      {% endfor %}
    {% elif awx_alternate_dns_servers is defined %}
    dns: "{{ awx_alternate_dns_servers }}"
    {% endif %}
    {% if (docker_compose_extra_hosts is defined) and (':' in docker_compose_extra_hosts) %}
    {% set docker_compose_extra_hosts_list = docker_compose_extra_hosts.split(',') %}
    extra_hosts:
      {% for docker_compose_extra_host in docker_compose_extra_hosts_list %}
      - "{{ docker_compose_extra_host }}"
      {% endfor %}
    {% endif %}
    environment:
      AWX_SKIP_MIGRATIONS: "1"
      http_proxy: {{ http_proxy | default('') }}
      https_proxy: {{ https_proxy | default('') }}
      no_proxy: {{ no_proxy | default('') }}
      SUPERVISOR_WEB_CONFIG_PATH: '/etc/supervisord.conf'
  redis:
    image: {{ redis_image }}
    container_name: awx_redis
    restart: unless-stopped
    environment:
      http_proxy: {{ http_proxy | default('') }}
      https_proxy: {{ https_proxy | default('') }}
      no_proxy: {{ no_proxy | default('') }}
    command: ["/usr/local/etc/redis/redis.conf"]
    volumes:
      - "{{ docker_compose_dir }}/redis.conf:/usr/local/etc/redis/redis.conf:ro"
      - "{{ docker_compose_dir }}/redis_socket:/var/run/redis/:rw"
    {% if docker_logger is defined %}
    logging:
      driver: {{ docker_logger }}
    {% endif %}
  {% if pg_hostname is not defined %}
  postgres:
    image: {{ postgresql_image }}
    container_name: awx_postgres
    restart: unless-stopped
    volumes:
      - "{{ postgres_data_dir }}/12/data/:/var/lib/postgresql/data:Z"
    environment:
      POSTGRES_USER: {{ pg_username }}
      POSTGRES_PASSWORD: {{ pg_password }}
      POSTGRES_DB: {{ pg_database }}
      http_proxy: {{ http_proxy | default('') }}
      https_proxy: {{ https_proxy | default('') }}
      no_proxy: {{ no_proxy | default('') }}
    {% if docker_logger is defined %}
    logging:
      driver: {{ docker_logger }}
    {% endif %}
  {% endif %}
{% if docker_compose_subnet is defined %}
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: {{ docker_compose_subnet }}
{% endif %}
volumes:
  supervisor-socket:
  rsyslog-socket:
  rsyslog-config:


@@ -0,0 +1,10 @@
DATABASE_USER={{ pg_username|quote }}
DATABASE_NAME={{ pg_database|quote }}
DATABASE_HOST={{ pg_hostname|default('postgres')|quote }}
DATABASE_PORT={{ pg_port|default('5432')|quote }}
DATABASE_PASSWORD={{ pg_password|default('awxpass')|quote }}
{% if pg_admin_password is defined %}
DATABASE_ADMIN_PASSWORD={{ pg_admin_password|quote }}
{% endif %}
AWX_ADMIN_USER={{ admin_user|quote }}
AWX_ADMIN_PASSWORD={{ admin_password|quote }}


@@ -0,0 +1,122 @@
#user awx;
worker_processes 1;
pid /tmp/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stdout main;
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    sendfile on;
    #tcp_nopush on;
    #gzip on;
    upstream uwsgi {
        server 127.0.0.1:8050;
    }
    upstream daphne {
        server 127.0.0.1:8051;
    }
    {% if ssl_certificate is defined %}
    server {
        listen 8052 default_server;
        server_name _;
        # Redirect all HTTP links to the matching HTTPS page
        return 301 https://$host$request_uri;
    }
    {% endif %}
    server {
        {% if (ssl_certificate is defined) and (ssl_certificate_key is defined) %}
        listen 8053 ssl;
        ssl_certificate /etc/nginx/awxweb.pem;
        ssl_certificate_key /etc/nginx/awxweb_key.pem;
        {% elif (ssl_certificate is defined) and (ssl_certificate_key is not defined) %}
        listen 8053 ssl;
        ssl_certificate /etc/nginx/awxweb.pem;
        ssl_certificate_key /etc/nginx/awxweb.pem;
        {% else %}
        listen 8052 default_server;
        {% endif %}
        # If you have a domain name, this is where to add it
        server_name _;
        keepalive_timeout 65;
        # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
        add_header Strict-Transport-Security max-age=15768000;
        # Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)
        add_header X-Frame-Options "DENY";
        location /nginx_status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }
        location /static/ {
            alias /var/lib/awx/public/static/;
        }
        location /favicon.ico { alias /var/lib/awx/public/static/favicon.ico; }
        location /websocket {
            # Pass request to the upstream alias
            proxy_pass http://daphne;
            # Require http version 1.1 to allow for upgrade requests
            proxy_http_version 1.1;
            # We want proxy_buffering off for proxying to websockets.
            proxy_buffering off;
            # http://en.wikipedia.org/wiki/X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # enable this if you use HTTPS:
            proxy_set_header X-Forwarded-Proto https;
            # pass the Host: header from the client for the sake of redirects
            proxy_set_header Host $http_host;
            # We've set the Host header, so we don't need Nginx to muddle
            # about with redirects
            proxy_redirect off;
            # Depending on the request value, set the Upgrade and
            # connection headers
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
        location / {
            # Add trailing / if missing
            rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;
            uwsgi_read_timeout 120s;
            uwsgi_pass uwsgi;
            include /etc/nginx/uwsgi_params;
            {%- if extra_nginx_include is defined %}
            include {{ extra_nginx_include }};
            {%- endif %}
            proxy_set_header X-Forwarded-Port 443;
            uwsgi_param HTTP_X_FORWARDED_PORT 443;
        }
    }
}


@@ -0,0 +1,4 @@
unixsocket /var/run/redis/redis.sock
unixsocketperm 660
port 0
bind 127.0.0.1


@@ -32,10 +32,10 @@ except ImportError:
 # import fnmatch
 # fnmatch.fnmatchcase(env, "*_HOST")
+import json
 import yaml
+from dotenv import dotenv_values
-__version__ = '0.1.8'
+__version__ = '1.0.3'
 PY3 = sys.version_info[0] == 3
 if PY3:
@@ -68,7 +68,7 @@ def try_float(i, fallback=None):
     return fallback
 dir_re = re.compile("^[~/\.]")
-propagation_re = re.compile("^(?:z|Z|r?shared|r?slave|r?private)$")
+propagation_re = re.compile("^(?:z|Z|O|U|r?shared|r?slave|r?private|r?unbindable|r?bind|(?:no)?(?:exec|dev|suid))$")
 norm_re = re.compile('[^-_a-z0-9]')
 num_split_re = re.compile(r'(\d+|\D+)')
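The widened `propagation_re` above (from the "U mount propagation option" commit) now also accepts podman-specific short-mount options such as `U` (chown contents to the container user), `O` (overlay), the `unbindable`/`bind` propagation variants, and the exec/dev/suid families. A quick check of the new pattern:

```python
import re

# The widened pattern from the diff above.
propagation_re = re.compile(
    "^(?:z|Z|O|U|r?shared|r?slave|r?private|r?unbindable|r?bind|(?:no)?(?:exec|dev|suid))$")

for opt in ["U", "O", "rshared", "unbindable", "noexec"]:
    assert propagation_re.match(opt), opt
assert not propagation_re.match("ro")  # rw/ro are handled separately
```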
@@ -112,8 +112,7 @@ def parse_short_mount(mount_str, basedir):
         # User-relative path
         # - ~/configs:/etc/configs/:ro
         mount_type = "bind"
-        # TODO: should we use os.path.realpath(basedir)?
-        mount_src = os.path.join(basedir, os.path.expanduser(mount_src))
+        mount_src = os.path.realpath(os.path.join(basedir, os.path.expanduser(mount_src)))
     else:
         # Named volume
         # - datavolume:/var/lib/mysql
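This change matches commit a3123ce480 above ("normalize basedir using os.path.realpath"): bind-mount sources are now canonicalized, so `~` and `..` segments collapse to one absolute path (the `/opt/proj` basedir below is just an illustration):

```python
import os.path

def resolve_mount_src(mount_src: str, basedir: str) -> str:
    # Same normalization as the new parse_short_mount code path.
    return os.path.realpath(os.path.join(basedir, os.path.expanduser(mount_src)))

src = resolve_mount_src("./data/../config", "/opt/proj")
print(src)
```

Without `realpath`, two spellings of the same host directory would look like different mounts to podman.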
@@ -257,153 +256,37 @@
     # if int or string return as is
     return inner_value
-#def tr_identity(project_name, given_containers):
-#    pod_name = f'pod_{project_name}'
-#    pod = dict(name=pod_name)
-#    containers = []
-#    for cnt in given_containers:
-#        containers.append(dict(cnt, pod=pod_name))
-#    return [pod], containers
-# transformation helpers
-def adj_hosts(services, cnt, dst="127.0.0.1"):
-    """
-    adjust container cnt in-place to add hosts pointing to dst for services
-    """
-    common_extra_hosts = []
-    for srv, cnts in services.items():
-        common_extra_hosts.append("{}:{}".format(srv, dst))
-        for cnt0 in cnts:
-            common_extra_hosts.append("{}:{}".format(cnt0, dst))
-    extra_hosts = list(cnt.get("extra_hosts", []))
-    extra_hosts.extend(common_extra_hosts)
-    # link aliases
-    for link in cnt.get("links", []):
-        a = link.strip().split(':', 1)
-        if len(a) == 2:
-            alias = a[1].strip()
-            extra_hosts.append("{}:{}".format(alias, dst))
-    cnt["extra_hosts"] = extra_hosts
-def move_list(dst, containers, key):
-    """
-    move key (like port forwarding) from containers to dst (a pod or a infra container)
-    """
-    a = set(dst.get(key, None) or [])
-    for cnt in containers:
-        a0 = cnt.get(key, None)
-        if a0:
-            a.update(a0)
-            del cnt[key]
-    if a:
-        dst[key] = list(a)
-def move_port_fw(dst, containers):
-    """
-    move port forwarding from containers to dst (a pod or a infra container)
-    """
-    move_list(dst, containers, "ports")
-def move_extra_hosts(dst, containers):
-    """
-    move port forwarding from containers to dst (a pod or a infra container)
-    """
-    move_list(dst, containers, "extra_hosts")
-# transformations
-transformations = {}
-def trans(func):
-    transformations[func.__name__.replace("tr_", "")] = func
-    return func
-@trans
-def tr_identity(project_name, services, given_containers):
+def tr_identity(project_name, given_containers):
     containers = []
     for cnt in given_containers:
         containers.append(dict(cnt))
     return [], containers
-@trans
-def tr_publishall(project_name, services, given_containers):
-    containers = []
-    for cnt0 in given_containers:
-        cnt = dict(cnt0, publishall=True)
-        # adjust hosts to point to the gateway, TODO: adjust host env
-        adj_hosts(services, cnt, '10.0.2.2')
-        containers.append(cnt)
-    return [], containers
-@trans
-def tr_hostnet(project_name, services, given_containers):
-    containers = []
-    for cnt0 in given_containers:
-        cnt = dict(cnt0, network_mode="host")
-        # adjust hosts to point to localhost, TODO: adjust host env
-        adj_hosts(services, cnt, '127.0.0.1')
-        containers.append(cnt)
-    return [], containers
-@trans
-def tr_cntnet(project_name, services, given_containers):
-    containers = []
-    infra_name = project_name + "_infra"
-    infra = dict(
-        name=infra_name,
-        image="k8s.gcr.io/pause:3.1",
-    )
-    for cnt0 in given_containers:
-        cnt = dict(cnt0, network_mode="container:"+infra_name)
-        deps = cnt.get("depends_on", None) or []
-        deps.append(infra_name)
-        cnt["depends_on"] = deps
-        # adjust hosts to point to localhost, TODO: adjust host env
-        adj_hosts(services, cnt, '127.0.0.1')
-        if "hostname" in cnt:
-            del cnt["hostname"]
-        containers.append(cnt)
-    move_port_fw(infra, containers)
-    move_extra_hosts(infra, containers)
-    containers.insert(0, infra)
-    return [], containers
-@trans
-def tr_1pod(project_name, services, given_containers):
-    """
-    project_name:
-    services: {service_name: ["container_name1", "..."]}, currently only one is supported
-    given_containers: [{}, ...]
-    """
-    pod = dict(name=project_name)
-    containers = []
-    for cnt0 in given_containers:
-        cnt = dict(cnt0, pod=project_name)
-        # services can be accessed as localhost because they are on one pod
-        # adjust hosts to point to localhost, TODO: adjust host env
-        adj_hosts(services, cnt, '127.0.0.1')
-        containers.append(cnt)
-    return [pod], containers
-@trans
-def tr_1podfw(project_name, services, given_containers):
-    pods, containers = tr_1pod(project_name, services, given_containers)
-    pod = pods[0]
-    move_port_fw(pod, containers)
-    return pods, containers
 def assert_volume(compose, mount_dict):
     """
     inspect volume to get directory
     create volume if needed
     """
     vol = mount_dict.get("_vol", None)
+    if mount_dict["type"] == "bind":
+        basedir = os.path.realpath(compose.dirname)
+        mount_src = mount_dict["source"]
+        mount_src = os.path.realpath(os.path.join(basedir, os.path.expanduser(mount_src)))
+        if not os.path.exists(mount_src):
+            try:
+                os.makedirs(mount_src, exist_ok=True)
+            except OSError:
+                pass
+        return
     if mount_dict["type"] != "volume" or not vol or vol.get("external", None) or not vol.get("name", None): return
     proj_name = compose.project_name
     vol_name = vol["name"]
@@ -495,11 +378,13 @@ def mount_desc_to_volume_args(compose, mount_desc, srv_name, cnt_name):
     # --volume, -v[=[[SOURCE-VOLUME|HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]
     # [rw|ro]
     # [z|Z]
-    # [[r]shared|[r]slave|[r]private]
+    # [[r]shared|[r]slave|[r]private]|[r]unbindable
     # [[r]bind]
     # [noexec|exec]
     # [nodev|dev]
     # [nosuid|suid]
+    # [O]
+    # [U]
     read_only = mount_desc.get("read_only", None)
     if read_only is not None:
         opts.append('ro' if read_only else 'rw')
@ -661,6 +546,56 @@ def norm_ports(ports_in):
ports_out.append(port) ports_out.append(port)
return ports_out return ports_out
def assert_cnt_nets(compose, cnt):
"""
create missing networks
"""
proj_name = compose.project_name
nets = compose.networks
default_net = compose.default_net
cnt_nets = norm_as_list(cnt.get("networks", None) or default_net)
for net in cnt_nets:
net_desc = nets[net] or {}
is_ext = net_desc.get("external", None)
ext_desc = is_ext if is_dict(is_ext) else {}
default_net_name = net if is_ext else f"{proj_name}_{net}"
net_name = ext_desc.get("name", None) or net_desc.get("name", None) or default_net_name
try: compose.podman.output([], "network", ["exists", net_name])
except subprocess.CalledProcessError:
if is_ext:
raise RuntimeError(f"External network [{net_name}] does not exists")
args = [
"create",
"--label", "io.podman.compose.project={}".format(proj_name),
"--label", "com.docker.compose.project={}".format(proj_name),
]
# TODO: add more options here, like driver, internal, ..etc
labels = net_desc.get("labels", None) or []
for item in norm_as_list(labels):
args.extend(["--label", item])
if net_desc.get("internal", None):
args.append("--internal")
args.append(net_name)
compose.podman.output([], "network", args)
compose.podman.output([], "network", ["exists", net_name])
def get_net_args(compose, cnt):
service_name = cnt["service_name"]
proj_name = compose.project_name
default_net = compose.default_net
nets = compose.networks
cnt_nets = norm_as_list(cnt.get("networks", None) or default_net)
net_names = set()
for net in cnt_nets:
net_desc = nets[net] or {}
is_ext = net_desc.get("external", None)
ext_desc = is_ext if is_dict(is_ext) else {}
default_net_name = net if is_ext else f"{proj_name}_{net}"
net_name = ext_desc.get("name", None) or net_desc.get("name", None) or default_net_name
net_names.add(net_name)
net_names_str = ",".join(net_names)
return ["--net", net_names_str, "--network-alias", service_name]
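Both assert_cnt_nets and get_net_args above resolve a compose network key to a podman network name with the same rule. A standalone sketch of that rule (resolve_net_name is a hypothetical helper, not part of the patch):

```python
def resolve_net_name(proj_name, net, net_desc):
    """Map a compose network key to a podman network name (sketch):
    an explicit `name:` wins, external networks keep their own key,
    and project-local networks get a `<project>_` prefix."""
    net_desc = net_desc or {}
    is_ext = net_desc.get("external", None)
    # the external key may itself be a dict carrying a `name:` override
    ext_desc = is_ext if isinstance(is_ext, dict) else {}
    default_net_name = net if is_ext else f"{proj_name}_{net}"
    return ext_desc.get("name", None) or net_desc.get("name", None) or default_net_name
```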
def container_to_args(compose, cnt, detached=True):
# TODO: double check -e , --add-host, -v, --read-only
dirname = compose.dirname
@@ -706,8 +641,9 @@ def container_to_args(compose, cnt, detached=True):
for i in tmpfs_ls:
podman_args.extend(['--tmpfs', i])
for volume in cnt.get('volumes', []):
# TODO: should we make it os.path.realpath(os.path.join(, i))?
podman_args.extend(get_mount_args(compose, cnt, volume))
assert_cnt_nets(compose, cnt)
podman_args.extend(get_net_args(compose, cnt))
log = cnt.get('logging')
if log is not None:
podman_args.append(f'--log-driver={log.get("driver", "k8s-file")}')
@@ -730,7 +666,7 @@ def container_to_args(compose, cnt, detached=True):
elif not isinstance(port, str):
raise TypeError("port should be either string or dict")
podman_args.extend(['-p', port])
user = cnt.get('user', None)
if user is not None:
podman_args.extend(['-u', user])
@@ -855,7 +791,8 @@ def flat_deps(services, with_extends=False):
if ext != name: deps.add(ext)
continue
deps_ls = srv.get("depends_on", None) or []
if is_str(deps_ls): deps_ls=[deps_ls]
elif is_dict(deps_ls): deps_ls=list(deps_ls.keys())
deps.update(deps_ls)
# parse link to get service name and remove alias
links_ls = srv.get("links", None) or []
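The depends_on handling above now accepts all three compose forms (string, list, and the long dict syntax from #368). Isolated as a sketch (norm_deps is a hypothetical standalone name):

```python
def norm_deps(deps):
    # depends_on may be a plain string, a list of names, or (long syntax)
    # a dict of {service: {"condition": ...}}; reduce all three to a list
    if isinstance(deps, str):
        return [deps]
    if isinstance(deps, dict):
        return list(deps.keys())
    return list(deps)
```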
@@ -903,10 +840,26 @@ class Podman:
time.sleep(sleep)
return p
def volume_inspect_all(self):
output = self.output([], "volume", ["inspect", "--all"]).decode('utf-8')
return json.loads(output)
def volume_inspect_proj(self, proj=None):
if not proj:
proj = self.compose.project_name
volumes = [(vol.get("Labels", {}), vol) for vol in self.volume_inspect_all()]
volumes = [(labels.get("io.podman.compose.project", None), vol) for labels, vol in volumes]
return [vol for vol_proj, vol in volumes if vol_proj==proj]
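volume_inspect_proj above filters the inspected volumes by the io.podman.compose.project label so that `down -v` only removes this project's volumes. The filter on its own (project_volumes is a hypothetical name):

```python
def project_volumes(volumes, proj):
    # keep only the volumes whose labels mark them as owned by this
    # compose project, mirroring the label filter above
    return [vol for vol in volumes
            if vol.get("Labels", {}).get("io.podman.compose.project", None) == proj]
```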
def normalize_service(service):
for key in ("env_file", "security_opt", "volumes"):
if key not in service: continue
if is_str(service[key]): service[key]=[service[key]]
if "security_opt" in service:
sec_ls = service["security_opt"]
for ix, item in enumerate(sec_ls):
if item=="seccomp:unconfined" or item=="apparmor:unconfined":
sec_ls[ix] = item.replace(":", "=")
for key in ("environment", "labels"):
if key not in service: continue
service[key] = norm_as_dict(service[key])
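The security_opt rewrite above (for #199) translates docker-compose's colon form into the equals form podman expects. A standalone sketch (normalize_security_opt is a hypothetical name):

```python
def normalize_security_opt(sec_ls):
    # docker-compose files commonly write `seccomp:unconfined`, but podman
    # expects `seccomp=unconfined`; rewrite the two known colon forms and
    # leave every other entry untouched
    fixed = list(sec_ls)
    for ix, item in enumerate(fixed):
        if item in ("seccomp:unconfined", "apparmor:unconfined"):
            fixed[ix] = item.replace(":", "=")
    return fixed
```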
@@ -942,7 +895,15 @@ def rec_merge_one(target, source):
if type(value2)!=type(value):
raise ValueError("can't merge value of {} of type {} and {}".format(key, type(value), type(value2)))
if is_list(value2):
if key == 'volumes':
# clean duplicate mount targets
pts = set([ v.split(':', 1)[1] for v in value2 if ":" in v ])
del_ls = [ ix for (ix, v) in enumerate(value) if ":" in v and v.split(':', 1)[1] in pts ]
for ix in reversed(del_ls):
del value[ix]
value.extend(value2)
else:
value.extend(value2)
elif is_dict(value2):
rec_merge_one(value, value2)
else:
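The duplicate-target cleanup in the volumes branch (for #333) drops a base entry when an override file mounts the same container target, keyed on everything after the first ':' (options included). A hypothetical standalone version:

```python
def merge_volumes(base, override):
    """Merge two compose volume lists the way rec_merge_one does (sketch):
    an override entry replaces any base entry that mounts the same
    container target (the part after the first ':')."""
    # targets claimed by the override list
    pts = set(v.split(':', 1)[1] for v in override if ':' in v)
    # keep base entries that don't collide with an override target
    kept = [v for v in base if ':' not in v or v.split(':', 1)[1] not in pts]
    return kept + list(override)
```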
@@ -983,6 +944,27 @@ def resolve_extends(services, service_names, environ):
new_service = rec_merge({}, from_service, service)
services[name] = new_service
def dotenv_to_dict(dotenv_path):
if not os.path.isfile(dotenv_path):
return {}
return dotenv_values(dotenv_path)
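dotenv_to_dict above delegates the parsing to python-dotenv's dotenv_values (per the "bug-for-bug handling of .env" commit). For illustration only, a stdlib-only approximation of the simple KEY=VALUE case; parse_env_text is hypothetical and ignores dotenv's quoting and interpolation rules:

```python
def parse_env_text(text):
    # minimal KEY=VALUE parser: skip blanks and comments, split on the
    # first '=', keep the rest of the line verbatim
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, val = line.partition('=')
        env[key.strip()] = val.strip()
    return env
```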
COMPOSE_DEFAULT_LS = [
"compose.yaml",
"compose.yml",
"compose.override.yaml",
"compose.override.yml",
"podman-compose.yaml",
"podman-compose.yml",
"docker-compose.yml",
"docker-compose.yaml",
"docker-compose.override.yml",
"docker-compose.override.yaml",
"container-compose.yml",
"container-compose.yaml",
"container-compose.override.yml",
"container-compose.override.yaml",
]
class PodmanCompose:
def __init__(self):
@@ -995,6 +977,8 @@ class PodmanCompose:
self.pods = None
self.containers = None
self.vols = None
self.networks = {}
self.default_net = "default"
self.declared_secrets = None
self.container_names_by_service = None
self.container_by_name = None
@@ -1042,23 +1026,14 @@ class PodmanCompose:
def _parse_compose_file(self):
args = self.global_args
cmd = args.command
pathsep = os.environ.get("COMPOSE_PATH_SEPARATOR", None) or os.pathsep
if not args.file:
default_str = os.environ.get("COMPOSE_FILE", None)
if default_str:
default_ls = default_str.split(pathsep)
else:
default_ls = COMPOSE_DEFAULT_LS
args.file = list(filter(os.path.exists, default_ls))
files = args.file
if not files:
print("no compose.yaml, docker-compose.yml or container-compose.yml file found, pass files with -f")
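The lookup above (for #371) lets an explicit COMPOSE_FILE, split on COMPOSE_PATH_SEPARATOR, take precedence over the built-in filename list. A self-contained sketch of that resolution (candidate_compose_files is a hypothetical name):

```python
import os

def candidate_compose_files(environ, default_ls):
    # COMPOSE_FILE, split on COMPOSE_PATH_SEPARATOR (falling back to
    # os.pathsep), wins over the built-in COMPOSE_DEFAULT_LS-style list
    pathsep = environ.get("COMPOSE_PATH_SEPARATOR", None) or os.pathsep
    default_str = environ.get("COMPOSE_FILE", None)
    return default_str.split(pathsep) if default_str else list(default_ls)
```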
@@ -1076,9 +1051,8 @@ class PodmanCompose:
no_ansi = args.no_ansi
no_cleanup = args.no_cleanup
dry_run = args.dry_run
transform_policy = args.transform_policy
host_env = None
dirname = os.path.realpath(os.path.dirname(filename))
dir_basename = os.path.basename(dirname)
self.dirname = dirname
# TODO: remove next line
@@ -1095,17 +1069,13 @@ class PodmanCompose:
dotenv_path = os.path.join(dirname, ".env")
self.environ = dict(os.environ)
self.environ.update(dotenv_to_dict(dotenv_path))
# TODO: should read and respect those env variables
# see: https://docs.docker.com/compose/reference/envvars/
# see: https://docs.docker.com/compose/env-file/
self.environ.update({
"COMPOSE_FILE": os.path.basename(filename),
"COMPOSE_PROJECT_NAME": self.project_name,
"COMPOSE_PATH_SEPARATOR": pathsep,
})
compose = {'_dirname': dirname}
for filename in files:
@@ -1136,6 +1106,26 @@ class PodmanCompose:
flat_deps(services)
service_names = sorted([ (len(srv["_deps"]), name) for name, srv in services.items() ])
service_names = [ name for _, name in service_names]
nets = compose.get("networks", None) or {}
if not nets:
nets["default"] = None
self.networks = nets
if len(self.networks)==1:
self.default_net = list(nets.keys())[0]
elif "default" in nets:
self.default_net = "default"
else:
self.default_net = None
default_net = self.default_net
allnets = set()
for name, srv in services.items():
srv_nets = norm_as_list(srv.get("networks", None) or default_net)
allnets.update(srv_nets)
given_nets = set(nets.keys())
missing_nets = given_nets - allnets
if len(missing_nets):
missing_nets_str= ",".join(missing_nets)
raise RuntimeError(f"missing networks: {missing_nets_str}")
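The default-network selection above is small but easy to get wrong; isolated as a sketch (pick_default_net is a hypothetical name):

```python
def pick_default_net(nets):
    # which declared network a service joins when it lists none:
    # a single declared network wins, else an explicit 'default',
    # else there is no default at all
    if len(nets) == 1:
        return next(iter(nets))
    if "default" in nets:
        return "default"
    return None
```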
# volumes: [...]
self.vols = compose.get('volumes', {})
podman_compose_labels = [
@@ -1192,14 +1182,11 @@ class PodmanCompose:
given_containers = list(container_by_name.values())
given_containers.sort(key=lambda c: len(c.get('_deps', None) or []))
#print("sorted:", [c["name"] for c in given_containers])
pods, containers = tr_identity(project_name, given_containers)
self.pods = pods
self.containers = containers
self.container_by_name = dict([ (c["name"], c) for c in containers])
def _parse_args(self):
parser = argparse.ArgumentParser(
formatter_class=argparse.RawTextHelpFormatter
@@ -1244,17 +1231,6 @@ class PodmanCompose:
help="Do not stop and remove existing pod & containers", action='store_true')
parser.add_argument("--dry-run",
help="No action; perform a simulation of commands", action='store_true')
parser.add_argument("-t", "--transform_policy",
help=textwrap.dedent("""\
how to translate docker compose to podman (default: 1podfw)
1podfw - create all containers in one pod (inter-container communication is done via localhost), doing port mapping in that pod
1pod - create all containers in one pod, doing port mapping in each container (does not work)
identity - no mapping
hostnet - use host network, and inter-container communication is done via host gateway and published ports
cntnet - create a container and use it via --network container:name (inter-container communication via localhost)
publishall - publish all ports to host (using -P) and communicate via gateway
"""),
choices=['1pod', '1podfw', 'hostnet', 'cntnet', 'publishall', 'identity'], default='1podfw')
podman_compose = PodmanCompose()
@@ -1383,10 +1359,9 @@ def create_pods(compose, args):
podman_args = [
"create",
"--name={}".format(pod["name"]),
"--share", "net",
]
#if compose.podman_version and not strverscmp_lt(compose.podman_version, "3.4.0"):
#    podman_args.append("--infra-name={}_infra".format(pod["name"]))
ports = pod.get("ports", None) or []
if isinstance(ports, str):
ports = [ports]
@@ -1426,7 +1401,8 @@ def compose_up(compose, args):
# TODO: implement check hash label for change
if args.force_recreate:
down_args = argparse.Namespace(**dict(args.__dict__, volumes=False))
compose.commands['down'](compose, down_args)
# args.no_recreate disables check for changes (which is not implemented)
podman_command = 'run' if args.detach and not args.no_start else 'create'
@@ -1491,6 +1467,10 @@ def compose_down(compose, args):
return
for pod in compose.pods:
compose.podman.run([], "pod", ["rm", pod["name"]], sleep=0)
if args.volumes:
volume_names = [vol["Name"] for vol in compose.podman.volume_inspect_proj()]
for volume_name in volume_names:
compose.podman.run([], "volume", ["rm", volume_name])
@cmd_run(podman_compose, 'ps', 'show status of containers')
def compose_ps(compose, args):
@@ -1505,13 +1485,13 @@ def compose_run(compose, args):
create_pods(compose, args)
container_names=compose.container_names_by_service[args.service]
container_name=container_names[0]
cnt = dict(compose.container_by_name[container_name])
deps = cnt["_deps"]
if not args.no_deps:
up_args = argparse.Namespace(**dict(args.__dict__,
detach=True, services=deps,
# defaults
no_build=False, build=True, force_recreate=False, no_start=False, no_cache=False, build_arg=[],
)
)
compose.commands['up'](compose, up_args)
@@ -1536,6 +1516,9 @@ def compose_run(compose, args):
cnt['tty']=False if args.T else True
if args.cnt_command is not None and len(args.cnt_command) > 0:
cnt['command']=args.cnt_command
# can't restart and --rm
if args.rm and 'restart' in cnt:
del cnt['restart']
# run podman
podman_args = container_to_args(compose, cnt, args.detach)
if not args.detach:
@@ -1569,11 +1552,15 @@ def compose_exec(compose, args):
def transfer_service_status(compose, args, action):
# TODO: handle dependencies, handle creations
container_names_by_service = compose.container_names_by_service
if not args.services:
args.services = container_names_by_service.keys()
targets = []
for service in args.services:
if service not in container_names_by_service:
raise ValueError("unknown service: " + service)
targets.extend(container_names_by_service[service])
if action in ['stop', 'restart']:
targets = list(reversed(targets))
podman_args=[]
timeout=getattr(args, 'timeout', None)
if timeout is not None:
@@ -1596,20 +1583,33 @@ def compose_restart(compose, args):
@cmd_run(podman_compose, 'logs', 'show logs from services')
def compose_logs(compose, args):
container_names_by_service = compose.container_names_by_service
if not args.services and not args.latest:
args.services = container_names_by_service.keys()
targets = []
for service in args.services:
if service not in container_names_by_service:
raise ValueError("unknown service: " + service)
targets.extend(container_names_by_service[service])
podman_args = []
if args.follow:
podman_args.append('-f')
if args.latest:
podman_args.append("-l")
if args.names:
podman_args.append('-n')
if args.since:
podman_args.extend(['--since', args.since])
# the default value is to print all logs which is in podman = 0 and not
# needed to be passed
if args.tail and args.tail != 'all':
podman_args.extend(['--tail', args.tail])
if args.timestamps:
podman_args.append('-t')
if args.until:
podman_args.extend(['--until', args.until])
for target in targets:
podman_args.append(target)
compose.podman.run([], 'logs', podman_args)
###################
# command arguments parsing
@@ -1650,6 +1650,12 @@ def compose_up_parse(parser):
parser.add_argument("--exit-code-from", metavar='SERVICE', type=str, default=None,
help="Return the exit code of the selected service container. Implies --abort-on-container-exit.")
@cmd_parse(podman_compose, 'down')
def compose_down_parse(parser):
parser.add_argument("-v", "--volumes", action='store_true', default=False,
help="Remove named volumes declared in the `volumes` section of the Compose file and "
"anonymous volumes attached to containers.")
@cmd_parse(podman_compose, 'run')
def compose_run_parse(parser):
parser.add_argument("-d", "--detach", action='store_true',
@@ -1711,23 +1717,26 @@ def compose_parse_timeout(parser):
help="Specify a shutdown timeout in seconds. ",
type=int, default=10)
@cmd_parse(podman_compose, ['start', 'stop', 'restart'])
def compose_parse_services(parser):
parser.add_argument('services', metavar='services', nargs='+',
help='affected services')
@cmd_parse(podman_compose, ['logs'])
def compose_logs_parse(parser):
parser.add_argument("-f", "--follow", action='store_true',
help="Follow log output. The default is false")
parser.add_argument("-l", "--latest", action='store_true',
help="Act on the latest container podman is aware of")
parser.add_argument("-n", "--names", action='store_true',
help="Output the container name in the log")
parser.add_argument("--since", help="Show logs since TIMESTAMP",
type=str, default=None)
parser.add_argument("-t", "--timestamps", action='store_true',
help="Show timestamps.")
parser.add_argument("--tail",
help="Number of lines to show from the end of the logs for each "
"container.",
type=str, default="all")
parser.add_argument("--until", help="Show logs until TIMESTAMP",
type=str, default=None)
parser.add_argument('services', metavar='services', nargs='*', default=None,
help='service names')
@cmd_parse(podman_compose, 'pull')
def compose_pull_parse(parser):
@@ -1757,7 +1766,7 @@ def compose_build_parse(parser):
parser.add_argument("--no-cache",
help="Do not use cache when building the image.", action='store_true')
@cmd_parse(podman_compose, ['build', 'up', 'down', 'start', 'stop', 'restart'])
def compose_build_parse(parser):
parser.add_argument('services', metavar='services', nargs='*',default=None,
help='affected services')

View File

@@ -3,3 +3,5 @@
# process, which may cause wedges in the gate later.
pyyaml
python-dotenv

View File

@@ -36,7 +36,8 @@ setup(
include_package_data=True,
license='GPL-2.0-only',
install_requires=[
'pyyaml',
'python-dotenv',
],
# test_suite='tests',
# tests_require=[

View File

@@ -4,7 +4,7 @@ services:
image: busybox
command: busybox httpd -h /var/www/html/ -f -p 8001
volumes:
- ./1.env:/var/www/html/index.txt:z
env_file: ./1.env
labels:
l1: v1

View File

@@ -1,10 +1,11 @@
version: '3'
services:
web1:
image: busybox
env_file: ./12.env
labels:
- l1=v2
- l2=v2
environment:
mykey1: myval2
mykey2: myval2
@@ -13,6 +14,6 @@ services:
image: busybox
command: busybox httpd -h /var/www/html/ -f -p 8002
volumes:
- ./2.env:/var/www/html/index.txt:z
env_file: ./2.env

View File

@@ -0,0 +1,21 @@
version: "3"
services:
web1:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8002:8001
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@@ -0,0 +1 @@
test1

View File

@@ -0,0 +1 @@
test2

View File

@@ -0,0 +1,23 @@
version: "3"
networks:
mystack:
services:
web1:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
hostname: web2
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8002:8001
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@@ -0,0 +1 @@
test1

View File

@@ -0,0 +1 @@
test2

View File

@@ -0,0 +1,31 @@
version: "3"
networks:
net1:
net2:
services:
web1:
image: busybox
#container_name: web1
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
networks:
- net1
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
#container_name: web2
hostname: web2
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
networks:
- net1
- net2
ports:
- 8002:8001
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@@ -0,0 +1 @@
test1

View File

@@ -0,0 +1 @@
test2

View File

@@ -2,32 +2,34 @@ version: "3"
services:
web1:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
hostname: web2
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8002"]
working_dir: /var/www/html
ports:
- 8002:8002
- target: 8003
host_ip: 127.0.0.1
published: 8003
protocol: udp
- target: 8004
host_ip: 127.0.0.1
published: 8004
protocol: tcp
- target: 8005
published: 8005
- target: 8006
protocol: udp
- target: 8007
host_ip: 127.0.0.1
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@@ -0,0 +1,12 @@
version: "3"
services:
web1:
image: busybox
command: httpd -f -p 80 -h /var/www/html
volumes:
- ./docker-compose.yml:/var/www/html/index.html
ports:
- "8080:80"
security_opt:
- seccomp:unconfined

View File

@@ -8,7 +8,7 @@ services:
- /run
- /tmp
volumes:
- ./print_secrets.sh:/tmp/print_secrets.sh:z
secrets:
- my_secret
- my_secret_2

View File

View File

@@ -4,7 +4,7 @@ services:
image: redis:alpine
command: ["redis-server", "--appendonly yes", "--notify-keyspace-events", "Ex"]
volumes:
- ./data/redis:/data:z
tmpfs: /run1
ports:
- "6379"
@@ -25,16 +25,16 @@ services:
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
volumes:
- ./data/web:/var/www/html:ro,z
web2:
image: busybox
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8002"]
working_dir: /var/www/html
volumes:
- ~/Downloads/www:/var/www/html:ro,z
web3:
image: busybox
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8003"]
working_dir: /var/www/html
volumes:
- /var/www/html:/var/www/html:ro,z

View File

@@ -14,7 +14,7 @@ services:
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
volumes:
- myvol1:/var/www/html:ro,z
web2:
image: busybox
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8002"]