15 Commits

SHA1 Message Date
9a8dc4ca17 release 1.0.2 2021-12-11 02:06:10 +02:00
6b5f62d693 Fixes #199: seccomp:unconfined 2021-12-11 01:50:40 +02:00
3782b4ab84 FIXES #371: respect COMPOSE_FILE env 2021-12-10 23:26:13 +02:00
95e07e27f0 FIXES #185: creates dirs 2021-12-10 22:46:22 +02:00
a3123ce480 #222: normalize basedir using os.path.realpath 2021-12-10 22:27:00 +02:00
02f78dc3d7 FIXES #333: when volumes are merged, remove duplicates 2021-12-10 02:06:43 +02:00
8cd97682d0 FIXES #370: bug-for-bug handling of .env 2021-12-10 01:01:45 +02:00
85244272ff FIXES #368: parse depends_on of type dict 2021-12-09 16:18:52 +02:00
30cfe2317c set version 2021-12-09 16:12:59 +02:00
7fda1cc835 fix AttributeError when running a one-off command
Without this, I get errors when running "podman-compose -p podname run".
2021-12-09 16:11:04 +02:00
5f40f4df31 Remove named volumes during "down -v"
Fixes containers#105

Signed-off-by: Luiz Carvalho <lucarval@redhat.com>
2021-12-09 16:09:59 +02:00
d38aeaa713 update README 2021-12-09 15:59:34 +02:00
17f9ca61bd test fixes for SELinux (Fedora) 2021-11-24 18:06:18 +02:00
80a47a13d5 add network-alias 2021-11-21 12:35:13 +02:00
872404c3a7 initial work on CNI podman network create 2021-11-21 01:23:29 +02:00
22 changed files with 316 additions and 236 deletions

View File

@ -1,28 +1,44 @@
# Podman Compose
An implementation of `docker-compose` with [Podman](https://podman.io/) backend.
The main objective of this project is to be able to run `docker-compose.yml` unmodified and rootless.
This project is aimed to provide drop-in replacement for `docker-compose`,
and it's very useful for certain cases because:
An implementation of [Compose Spec](https://compose-spec.io/) with [Podman](https://podman.io/) backend.
This project focuses on:
- can run rootless
- only depend on `podman` and Python3 and [PyYAML](https://pyyaml.org/)
- no daemon, no setup.
- can be used by developers to run single-machine containerized stacks using single familiar YAML file
* rootless
* daemon-less process model: we execute podman directly, no running daemon.
This project only depends on:
* `podman`
* Python3
* [PyYAML](https://pyyaml.org/)
* [python-dotenv](https://pypi.org/project/python-dotenv/)
It is a single Python script that you can drop into your PATH and run.
## References:
* [spec.md](https://github.com/compose-spec/compose-spec/blob/master/spec.md)
* [docker-compose compose-file-v3](https://docs.docker.com/compose/compose-file/compose-file-v3/)
* [docker-compose compose-file-v2](https://docs.docker.com/compose/compose-file/compose-file-v2/)
## Alternatives
As described in [this article](https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/), you can set up a `podman.socket` and use unmodified `docker-compose` against that socket, but in that case you lose the process model (e.g. `docker-compose build` will send a possibly large context tarball to the daemon).
For a production-like single-machine containerized environment, consider
- [k3s](https://k3s.io) | [k3s github](https://github.com/rancher/k3s)
- [MiniKube](https://minikube.sigs.k8s.io/)
- [MiniShift](https://www.okd.io/minishift/)
For the real thing (multi-node clusters) check any production
OpenShift/Kubernetes distribution like [OKD](https://www.okd.io/minishift/).
OpenShift/Kubernetes distribution like [OKD](https://www.okd.io/).
## NOTE
## Versions
This project is still under development.
If you have a legacy version of `podman` (before 3.x) you might need to stick with the legacy `podman-compose` `0.1.x` branch.
The legacy branch 0.1.x uses mappings and workarounds to compensate for rootless limitations.
Modern podman versions (>=3.4) do not have those limitations, so you can use the latest stable 1.x branch.
## Installation
@ -47,7 +63,7 @@ curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containe
chmod +x /usr/local/bin/podman-compose
```
or
or into your home directory
```
curl -o ~/.local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/devel/podman_compose.py
@ -91,11 +107,4 @@ When testing the `AWX3` example, if you got errors just wait for db migrations t
Inside the `tests/` directory we have many small docker-compose stacks
that are meant to cover as many cases as we can to make sure we stay compatible.
## How it works
The default mapping `1podfw` creates a single pod and attaches all containers to
its network namespace so that all containers talk to each other via localhost.
For more information see [docs/Mappings.md](docs/Mappings.md).
If you are running as root, you might use identity mapping.
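
For illustration only, here is a minimal sketch (not taken from the repository; all names are made up) of what the `1podfw` mapping described above amounts to: every container is attached to one shared pod and its published ports are hoisted onto that pod.

```
# Sketch of the 1podfw idea: one pod per project, port forwards moved to the pod.
def one_pod_fw(project_name, containers):
    pod = {"name": project_name, "ports": []}
    out = []
    for cnt in containers:
        cnt = dict(cnt, pod=project_name)            # attach container to the shared pod
        pod["ports"].extend(cnt.pop("ports", []))    # publish on the pod, not the container
        out.append(cnt)
    return [pod], out

pods, containers = one_pod_fw("proj", [
    {"name": "proj_web_1", "ports": ["8080:80"]},
    {"name": "proj_db_1"},
])
print(pods)        # [{'name': 'proj', 'ports': ['8080:80']}]
print(containers)  # both containers now carry pod='proj' and no ports of their own
```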

View File

@ -32,10 +32,10 @@ except ImportError:
# import fnmatch
# fnmatch.fnmatchcase(env, "*_HOST")
import json
import yaml
from dotenv import dotenv_values
__version__ = '0.1.8'
__version__ = '1.0.2'
PY3 = sys.version_info[0] == 3
if PY3:
@ -112,8 +112,7 @@ def parse_short_mount(mount_str, basedir):
# User-relative path
# - ~/configs:/etc/configs/:ro
mount_type = "bind"
# TODO: should we use os.path.realpath(basedir)?
mount_src = os.path.join(basedir, os.path.expanduser(mount_src))
mount_src = os.path.realpath(os.path.join(basedir, os.path.expanduser(mount_src)))
else:
# Named volume
# - datavolume:/var/lib/mysql
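
As a side note, a minimal sketch (hypothetical helper name, not the project's API) of the normalization now applied to host-relative bind-mount sources:

```
import os

def normalize_mount_src(mount_src, basedir):
    # Mirror the change above: expand ~, join with the compose file's directory,
    # then resolve symlinks and '..' components with realpath.
    return os.path.realpath(os.path.join(basedir, os.path.expanduser(mount_src)))

print(normalize_mount_src("./data", "/home/me/proj"))    # /home/me/proj/data (when no symlinks are involved)
print(normalize_mount_src("~/configs", "/home/me/proj")) # the absolute path under your home
```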
@ -257,153 +256,37 @@ def norm_ulimit(inner_value):
# if int or string return as is
return inner_value
#def tr_identity(project_name, given_containers):
# pod_name = f'pod_{project_name}'
# pod = dict(name=pod_name)
# containers = []
# for cnt in given_containers:
# containers.append(dict(cnt, pod=pod_name))
# return [pod], containers
# transformation helpers
def adj_hosts(services, cnt, dst="127.0.0.1"):
"""
adjust container cnt in-place to add hosts pointing to dst for services
"""
common_extra_hosts = []
for srv, cnts in services.items():
common_extra_hosts.append("{}:{}".format(srv, dst))
for cnt0 in cnts:
common_extra_hosts.append("{}:{}".format(cnt0, dst))
extra_hosts = list(cnt.get("extra_hosts", []))
extra_hosts.extend(common_extra_hosts)
# link aliases
for link in cnt.get("links", []):
a = link.strip().split(':', 1)
if len(a) == 2:
alias = a[1].strip()
extra_hosts.append("{}:{}".format(alias, dst))
cnt["extra_hosts"] = extra_hosts
def move_list(dst, containers, key):
"""
move key (like port forwarding) from containers to dst (a pod or an infra container)
"""
a = set(dst.get(key, None) or [])
for cnt in containers:
a0 = cnt.get(key, None)
if a0:
a.update(a0)
del cnt[key]
if a:
dst[key] = list(a)
def move_port_fw(dst, containers):
"""
move port forwarding from containers to dst (a pod or an infra container)
"""
move_list(dst, containers, "ports")
def move_extra_hosts(dst, containers):
"""
move extra_hosts from containers to dst (a pod or an infra container)
"""
move_list(dst, containers, "extra_hosts")
# transformations
transformations = {}
def trans(func):
transformations[func.__name__.replace("tr_", "")] = func
return func
@trans
def tr_identity(project_name, services, given_containers):
def tr_identity(project_name, given_containers):
containers = []
for cnt in given_containers:
containers.append(dict(cnt))
return [], containers
@trans
def tr_publishall(project_name, services, given_containers):
containers = []
for cnt0 in given_containers:
cnt = dict(cnt0, publishall=True)
# adjust hosts to point to the gateway, TODO: adjust host env
adj_hosts(services, cnt, '10.0.2.2')
containers.append(cnt)
return [], containers
@trans
def tr_hostnet(project_name, services, given_containers):
containers = []
for cnt0 in given_containers:
cnt = dict(cnt0, network_mode="host")
# adjust hosts to point to localhost, TODO: adjust host env
adj_hosts(services, cnt, '127.0.0.1')
containers.append(cnt)
return [], containers
@trans
def tr_cntnet(project_name, services, given_containers):
containers = []
infra_name = project_name + "_infra"
infra = dict(
name=infra_name,
image="k8s.gcr.io/pause:3.1",
)
for cnt0 in given_containers:
cnt = dict(cnt0, network_mode="container:"+infra_name)
deps = cnt.get("depends_on", None) or []
deps.append(infra_name)
cnt["depends_on"] = deps
# adjust hosts to point to localhost, TODO: adjust host env
adj_hosts(services, cnt, '127.0.0.1')
if "hostname" in cnt:
del cnt["hostname"]
containers.append(cnt)
move_port_fw(infra, containers)
move_extra_hosts(infra, containers)
containers.insert(0, infra)
return [], containers
@trans
def tr_1pod(project_name, services, given_containers):
"""
project_name:
services: {service_name: ["container_name1", "..."]}, currently only one is supported
given_containers: [{}, ...]
"""
pod = dict(name=project_name)
containers = []
for cnt0 in given_containers:
cnt = dict(cnt0, pod=project_name)
# services can be accessed as localhost because they are on one pod
# adjust hosts to point to localhost, TODO: adjust host env
adj_hosts(services, cnt, '127.0.0.1')
containers.append(cnt)
return [pod], containers
@trans
def tr_1podfw(project_name, services, given_containers):
pods, containers = tr_1pod(project_name, services, given_containers)
pod = pods[0]
move_port_fw(pod, containers)
return pods, containers
def assert_volume(compose, mount_dict):
"""
inspect volume to get directory
create volume if needed
"""
vol = mount_dict.get("_vol", None)
if mount_dict["type"] == "bind":
basedir = os.path.realpath(compose.dirname)
mount_src = mount_dict["source"]
mount_src = os.path.realpath(os.path.join(basedir, os.path.expanduser(mount_src)))
if not os.path.exists(mount_src):
try:
os.makedirs(mount_src, exist_ok=True)
except OSError:
pass
return
if mount_dict["type"] != "volume" or not vol or vol.get("external", None) or not vol.get("name", None): return
proj_name = compose.project_name
vol_name = vol["name"]
@ -661,6 +544,47 @@ def norm_ports(ports_in):
ports_out.append(port)
return ports_out
def assert_cnt_nets(compose, cnt):
"""
create missing networks
"""
proj_name = compose.project_name
nets = compose.networks
default_net = compose.default_net
cnt_nets = norm_as_list(cnt.get("networks", None) or default_net)
for net in cnt_nets:
net_desc = nets[net] or {}
net_name = net_desc.get("name", None) or f"{proj_name}_{net}"
print(f"podman network exists '{net_name}' || podman network create '{net_name}'")
try: compose.podman.output([], "network", ["exists", net_name])
except subprocess.CalledProcessError:
args = [
"create",
"--label", "io.podman.compose.project={}".format(proj_name),
"--label", "com.docker.compose.project={}".format(proj_name),
]
# TODO: add more options here, like driver, internal, ..etc
labels = net_desc.get("labels", None) or []
for item in norm_as_list(labels):
args.extend(["--label", item])
args.append(net_name)
compose.podman.output([], "network", args)
compose.podman.output([], "network", ["exists", net_name])
def get_net_args(compose, cnt):
service_name = cnt["service_name"]
project_name = compose.project_name
default_net = compose.default_net
nets = compose.networks
cnt_nets = norm_as_list(cnt.get("networks", None) or default_net)
net_names = set()
for net in cnt_nets:
net_desc = nets[net] or {}
net_name = net_desc.get("name", None) or f"{project_name}_{net}"
net_names.add(net_name)
net_names_str = ",".join(net_names)
return ["--net", net_names_str, "--network-alias", service_name]
def container_to_args(compose, cnt, detached=True):
# TODO: double check -e , --add-host, -v, --read-only
dirname = compose.dirname
@ -706,8 +630,9 @@ def container_to_args(compose, cnt, detached=True):
for i in tmpfs_ls:
podman_args.extend(['--tmpfs', i])
for volume in cnt.get('volumes', []):
# TODO: should we make it os.path.realpath(os.path.join(, i))?
podman_args.extend(get_mount_args(compose, cnt, volume))
assert_cnt_nets(compose, cnt)
podman_args.extend(get_net_args(compose, cnt))
log = cnt.get('logging')
if log is not None:
podman_args.append(f'--log-driver={log.get("driver", "k8s-file")}')
@ -730,7 +655,7 @@ def container_to_args(compose, cnt, detached=True):
elif not isinstance(port, str):
raise TypeError("port should be either string or dict")
podman_args.extend(['-p', port])
user = cnt.get('user', None)
if user is not None:
podman_args.extend(['-u', user])
@ -855,7 +780,8 @@ def flat_deps(services, with_extends=False):
if ext != name: deps.add(ext)
continue
deps_ls = srv.get("depends_on", None) or []
if not is_list(deps_ls): deps_ls=[deps_ls]
if is_str(deps_ls): deps_ls=[deps_ls]
elif is_dict(deps_ls): deps_ls=list(deps_ls.keys())
deps.update(deps_ls)
# parse link to get service name and remove alias
links_ls = srv.get("links", None) or []
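
For #368, `depends_on` may be a plain string, a list, or a dict keyed by service name (with conditions as values); all three forms reduce to a list of service names. A self-contained sketch of that normalization (the helper name is made up):

```
def norm_depends_on(deps):
    # string -> single-item list; dict {service: {condition: ...}} -> its keys
    if isinstance(deps, str):
        return [deps]
    if isinstance(deps, dict):
        return list(deps.keys())
    return list(deps or [])

print(norm_depends_on("db"))                                      # ['db']
print(norm_depends_on({"db": {"condition": "service_started"}}))  # ['db']
print(norm_depends_on(["db", "redis"]))                           # ['db', 'redis']
```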
@ -903,10 +829,22 @@ class Podman:
time.sleep(sleep)
return p
def volume_inspect_all(self):
output = self.output(["volume", "inspect", "--all"]).decode('utf-8')
return json.loads(output)
def volume_rm(self, name):
return self.run(["volume", "rm", name])
def normalize_service(service):
for key in ("env_file", "security_opt"):
for key in ("env_file", "security_opt", "volumes"):
if key not in service: continue
if is_str(service[key]): service[key]=[service[key]]
if "security_opt" in service:
sec_ls = service["security_opt"]
for ix, item in enumerate(sec_ls):
if item=="seccomp:unconfined" or item=="apparmor:unconfined":
sec_ls[ix] = item.replace(":", "=")
for key in ("environment", "labels"):
if key not in service: continue
service[key] = norm_as_dict(service[key])
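
The #199 fix in a nutshell: compose files commonly use the legacy `seccomp:unconfined` spelling while podman expects `seccomp=unconfined`. A standalone sketch of that rewrite (function name invented for illustration):

```
def norm_security_opt(security_opt):
    # Accept a bare string or a list; rewrite the two legacy colon forms.
    opts = [security_opt] if isinstance(security_opt, str) else list(security_opt)
    return [o.replace(":", "=", 1) if o in ("seccomp:unconfined", "apparmor:unconfined") else o
            for o in opts]

print(norm_security_opt("seccomp:unconfined"))                      # ['seccomp=unconfined']
print(norm_security_opt(["label=disable", "apparmor:unconfined"]))  # ['label=disable', 'apparmor=unconfined']
```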
@ -942,7 +880,15 @@ def rec_merge_one(target, source):
if type(value2)!=type(value):
raise ValueError("can't merge value of {} of type {} and {}".format(key, type(value), type(value2)))
if is_list(value2):
value.extend(value2)
if key == 'volumes':
# clean duplicate mount targets
pts = set([ v.split(':', 1)[1] for v in value2 if ":" in v ])
del_ls = [ ix for (ix, v) in enumerate(value) if ":" in v and v.split(':', 1)[1] in pts ]
for ix in reversed(del_ls):
del value[ix]
value.extend(value2)
else:
value.extend(value2)
elif is_dict(value2):
rec_merge_one(value, value2)
else:
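
The volume-merge rule above (for #333) keeps only the later file's entry when two short-form volumes point at the same container target. A self-contained sketch of that deduplication, with made-up data:

```
def merge_volumes(base, override):
    # A volume's target is the part after the first ':'; entries whose target
    # is redefined by the override list are dropped before appending it.
    targets = {v.split(":", 1)[1] for v in override if ":" in v}
    kept = [v for v in base if ":" not in v or v.split(":", 1)[1] not in targets]
    return kept + list(override)

print(merge_volumes(["./1.env:/var/www/html/index.txt", "data:/var/lib/db"],
                    ["./2.env:/var/www/html/index.txt"]))
# ['data:/var/lib/db', './2.env:/var/www/html/index.txt']
```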
@ -983,6 +929,27 @@ def resolve_extends(services, service_names, environ):
new_service = rec_merge({}, from_service, service)
services[name] = new_service
def dotenv_to_dict(dotenv_path):
if not os.path.isfile(dotenv_path):
return {}
return dotenv_values(dotenv_path)
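
Per #370 the `.env` file is now parsed with python-dotenv, so quoting, comments and empty values behave like docker-compose ("bug-for-bug") rather than a naive `KEY=VALUE` split. A quick usage sketch (the file written here is demo data only):

```
import os
from dotenv import dotenv_values

# Write a tiny .env just for demonstration.
with open("demo.env", "w") as f:
    f.write('GREETING="hello world"\n# a comment\nEMPTY=\n')

env = dotenv_values("demo.env")
print(dict(env))  # {'GREETING': 'hello world', 'EMPTY': ''}

# podman-compose then layers these values on top of the process environment:
environ = dict(os.environ)
environ.update(env)
```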
COMPOSE_DEFAULT_LS = [
"compose.yaml",
"compose.yml",
"compose.override.yaml",
"compose.override.yml",
"podman-compose.yaml",
"podman-compose.yml",
"docker-compose.yml",
"docker-compose.yaml",
"docker-compose.override.yml",
"docker-compose.override.yaml",
"container-compose.yml",
"container-compose.yaml",
"container-compose.override.yml",
"container-compose.override.yaml",
]
class PodmanCompose:
def __init__(self):
@ -995,6 +962,8 @@ class PodmanCompose:
self.pods = None
self.containers = None
self.vols = None
self.networks = {}
self.default_net = "default"
self.declared_secrets = None
self.container_names_by_service = None
self.container_by_name = None
@ -1042,23 +1011,14 @@ class PodmanCompose:
def _parse_compose_file(self):
args = self.global_args
cmd = args.command
pathsep = os.environ.get("COMPOSE_PATH_SEPARATOR", None) or os.pathsep
if not args.file:
args.file = list(filter(os.path.exists, [
"compose.yaml",
"compose.yml",
"compose.override.yaml",
"compose.override.yml",
"podman-compose.yaml",
"podman-compose.yml",
"docker-compose.yml",
"docker-compose.yaml",
"docker-compose.override.yml",
"docker-compose.override.yaml",
"container-compose.yml",
"container-compose.yaml",
"container-compose.override.yml",
"container-compose.override.yaml"
]))
default_str = os.environ.get("COMPOSE_FILE", None)
if default_str:
default_ls = default_str.split(pathsep)
else:
default_ls = COMPOSE_DEFAULT_LS
args.file = list(filter(os.path.exists, default_ls))
files = args.file
if not files:
print("no compose.yaml, docker-compose.yml or container-compose.yml file found, pass files with -f")
@ -1076,9 +1036,8 @@ class PodmanCompose:
no_ansi = args.no_ansi
no_cleanup = args.no_cleanup
dry_run = args.dry_run
transform_policy = args.transform_policy
host_env = None
dirname = os.path.dirname(filename)
dirname = os.path.realpath(os.path.dirname(filename))
dir_basename = os.path.basename(dirname)
self.dirname = dirname
# TODO: remove next line
@ -1095,17 +1054,13 @@ class PodmanCompose:
dotenv_path = os.path.join(dirname, ".env")
self.environ = dict(os.environ)
if os.path.isfile(dotenv_path):
with open(dotenv_path, 'r') as f:
dotenv_ls = [l.strip() for l in f if l.strip() and not l.startswith('#')]
self.environ.update(dict([l.split("=", 1) for l in dotenv_ls if "=" in l]))
# TODO: should read and respect those env variables
self.environ.update(dotenv_to_dict(dotenv_path))
# see: https://docs.docker.com/compose/reference/envvars/
# see: https://docs.docker.com/compose/env-file/
self.environ.update({
"COMPOSE_FILE": os.path.basename(filename),
"COMPOSE_PROJECT_NAME": self.project_name,
"COMPOSE_PATH_SEPARATOR": ":",
"COMPOSE_PATH_SEPARATOR": pathsep,
})
compose = {'_dirname': dirname}
for filename in files:
@ -1136,6 +1091,26 @@ class PodmanCompose:
flat_deps(services)
service_names = sorted([ (len(srv["_deps"]), name) for name, srv in services.items() ])
service_names = [ name for _, name in service_names]
nets = compose.get("networks", None) or {}
if not nets:
nets["default"] = None
self.networks = nets
if len(self.networks)==1:
self.default_net = list(nets.keys())[0]
elif "default" in nets:
self.default_net = "default"
else:
self.default_net = None
default_net = self.default_net
allnets = set()
for name, srv in services.items():
srv_nets = norm_as_list(srv.get("networks", None) or default_net)
allnets.update(srv_nets)
given_nets = set(nets.keys())
missing_nets = allnets - given_nets
if len(missing_nets):
missing_nets_str= ",".join(missing_nets)
raise RuntimeError(f"missing networks: {missing_nets_str}")
# volumes: [...]
self.vols = compose.get('volumes', {})
podman_compose_labels = [
@ -1192,14 +1167,11 @@ class PodmanCompose:
given_containers = list(container_by_name.values())
given_containers.sort(key=lambda c: len(c.get('_deps', None) or []))
#print("sorted:", [c["name"] for c in given_containers])
tr = transformations[transform_policy]
pods, containers = tr(
project_name, container_names_by_service, given_containers)
pods, containers = tr_identity(project_name, given_containers)
self.pods = pods
self.containers = containers
self.container_by_name = dict([ (c["name"], c) for c in containers])
def _parse_args(self):
parser = argparse.ArgumentParser(
formatter_class=argparse.RawTextHelpFormatter
@ -1244,17 +1216,6 @@ class PodmanCompose:
help="Do not stop and remove existing pod & containers", action='store_true')
parser.add_argument("--dry-run",
help="No action; perform a simulation of commands", action='store_true')
parser.add_argument("-t", "--transform_policy",
help=textwrap.dedent("""\
how to translate docker compose to podman (default: 1podfw)
1podfw - create all containers in one pod (inter-container communication is done via localhost), doing port mapping in that pod
1pod - create all containers in one pod, doing port mapping in each container (does not work)
identity - no mapping
hostnet - use host network, and inter-container communication is done via host gateway and published ports
cntnet - create a container and use it via --network container:name (inter-container communication via localhost)
publishall - publish all ports to host (using -P) and communicate via gateway
"""),
choices=['1pod', '1podfw', 'hostnet', 'cntnet', 'publishall', 'identity'], default='1podfw')
podman_compose = PodmanCompose()
@ -1383,10 +1344,9 @@ def create_pods(compose, args):
podman_args = [
"create",
"--name={}".format(pod["name"]),
"--share", "net",
]
if compose.podman_version and not strverscmp_lt(compose.podman_version, "3.4.0"):
podman_args.append("--infra-name={}_infra".format(pod["name"]))
#if compose.podman_version and not strverscmp_lt(compose.podman_version, "3.4.0"):
# podman_args.append("--infra-name={}_infra".format(pod["name"]))
ports = pod.get("ports", None) or []
if isinstance(ports, str):
ports = [ports]
@ -1491,6 +1451,12 @@ def compose_down(compose, args):
return
for pod in compose.pods:
compose.podman.run([], "pod", ["rm", pod["name"]], sleep=0)
if args.volumes:
volumes = compose.podman.volume_inspect_all()
for volume in volumes:
project = volume.get("Labels", {}).get("io.podman.compose.project")
if project == compose.project_name:
compose.podman.volume_rm(volume["Name"])
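
The `down -v` path filters the output of `podman volume inspect --all` by the `io.podman.compose.project` label before removing anything. A standalone sketch over made-up inspect data:

```
def volumes_to_remove(inspect_output, project_name):
    # Keep only volumes labelled as belonging to this compose project.
    return [v["Name"] for v in inspect_output
            if (v.get("Labels") or {}).get("io.podman.compose.project") == project_name]

volumes = [
    {"Name": "proj_data", "Labels": {"io.podman.compose.project": "proj"}},
    {"Name": "other_vol", "Labels": {}},
]
print(volumes_to_remove(volumes, "proj"))  # ['proj_data']
# for each matching name, the real command is: podman volume rm <name>
```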
@cmd_run(podman_compose, 'ps', 'show status of containers')
def compose_ps(compose, args):
@ -1511,7 +1477,7 @@ def compose_run(compose, args):
up_args = argparse.Namespace(**dict(args.__dict__,
detach=True, services=deps,
# defaults
no_build=False, build=True, force_recreate=False, no_start=False,
no_build=False, build=True, force_recreate=False, no_start=False, no_cache=False, build_arg=[],
)
)
compose.commands['up'](compose, up_args)
@ -1650,6 +1616,12 @@ def compose_up_parse(parser):
parser.add_argument("--exit-code-from", metavar='SERVICE', type=str, default=None,
help="Return the exit code of the selected service container. Implies --abort-on-container-exit.")
@cmd_parse(podman_compose, 'down')
def compose_down_parse(parser):
parser.add_argument("-v", "--volumes", action='store_true', default=False,
help="Remove named volumes declared in the `volumes` section of the Compose file and "
"anonymous volumes attached to containers.")
@cmd_parse(podman_compose, 'run')
def compose_run_parse(parser):
parser.add_argument("-d", "--detach", action='store_true',

View File

@ -3,3 +3,5 @@
# process, which may cause wedges in the gate later.
pyyaml
python-dotenv

View File

@ -36,7 +36,8 @@ setup(
include_package_data=True,
license='GPL-2.0-only',
install_requires=[
'pyyaml'
'pyyaml',
'python-dotenv',
],
# test_suite='tests',
# tests_require=[

View File

@ -4,7 +4,7 @@ services:
image: busybox
command: busybox httpd -h /var/www/html/ -f -p 8001
volumes:
- ./1.env:/var/www/html/index.txt
- ./1.env:/var/www/html/index.txt:z
env_file: ./1.env
labels:
l1: v1

View File

@ -1,10 +1,11 @@
version: '3'
services:
web1:
image: busybox
env_file: ./12.env
labels:
- l1=v2
- l2=v2
- l1=v2
- l2=v2
environment:
mykey1: myval2
mykey2: myval2
@ -13,6 +14,6 @@ services:
image: busybox
command: busybox httpd -h /var/www/html/ -f -p 8002
volumes:
- ./2.env:/var/www/html/index.txt
- ./2.env:/var/www/html/index.txt:z
env_file: ./2.env

View File

@ -0,0 +1,21 @@
version: "3"
services:
web1:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8002:8001
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@ -0,0 +1 @@
test1

View File

@ -0,0 +1 @@
test2

View File

@ -0,0 +1,23 @@
version: "3"
networks:
mystack:
services:
web1:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
hostname: web2
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8002:8001
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@ -0,0 +1 @@
test1

View File

@ -0,0 +1 @@
test2

View File

@ -0,0 +1,31 @@
version: "3"
networks:
net1:
net2:
services:
web1:
image: busybox
#container_name: web1
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
networks:
- net1
ports:
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
#container_name: web2
hostname: web2
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
networks:
- net1
- net2
ports:
- 8002:8001
volumes:
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@ -0,0 +1 @@
test1

View File

@ -0,0 +1 @@
test2

View File

@ -2,32 +2,34 @@ version: "3"
services:
web1:
image: busybox
hostname: web1
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
ports:
- 8001:8001
- 8001:8001
volumes:
- ./test1.txt:/var/www/html/index.txt:ro
- ./test1.txt:/var/www/html/index.txt:ro,z
web2:
image: busybox
hostname: web2
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8002"]
working_dir: /var/www/html
ports:
- 8002:8002
- target: 8003
host_ip: 127.0.0.1
published: 8003
protocol: udp
- target: 8004
host_ip: 127.0.0.1
published: 8004
protocol: tcp
- target: 8005
published: 8005
- target: 8006
protocol: udp
- target: 8007
host_ip: 127.0.0.1
- 8002:8002
- target: 8003
host_ip: 127.0.0.1
published: 8003
protocol: udp
- target: 8004
host_ip: 127.0.0.1
published: 8004
protocol: tcp
- target: 8005
published: 8005
- target: 8006
protocol: udp
- target: 8007
host_ip: 127.0.0.1
volumes:
- ./test2.txt:/var/www/html/index.txt:ro
- ./test2.txt:/var/www/html/index.txt:ro,z

View File

@ -0,0 +1,12 @@
version: "3"
services:
web1:
image: busybox
command: httpd -f -p 80 -h /var/www/html
volumes:
- ./docker-compose.yml:/var/www/html/index.html
ports:
- "8080:80"
security_opt:
- seccomp:unconfined

View File

@ -8,7 +8,7 @@ services:
- /run
- /tmp
volumes:
- ./print_secrets.sh:/tmp/print_secrets.sh
- ./print_secrets.sh:/tmp/print_secrets.sh:z
secrets:
- my_secret
- my_secret_2

View File

View File

View File

@ -4,7 +4,7 @@ services:
image: redis:alpine
command: ["redis-server", "--appendonly yes", "--notify-keyspace-events", "Ex"]
volumes:
- ./data/redis:/data
- ./data/redis:/data:z
tmpfs: /run1
ports:
- "6379"
@ -25,16 +25,16 @@ services:
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
volumes:
- ./data/web:/var/www/html:ro
- ./data/web:/var/www/html:ro,z
web2:
image: busybox
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8002"]
working_dir: /var/www/html
volumes:
- ~/Downloads/www:/var/www/html:ro
- ~/Downloads/www:/var/www/html:ro,z
web3:
image: busybox
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8003"]
working_dir: /var/www/html
volumes:
- /var/www/html:/var/www/html:ro
- /var/www/html:/var/www/html:ro,z

View File

@ -14,7 +14,7 @@ services:
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8001"]
working_dir: /var/www/html
volumes:
- myvol1:/var/www/html:ro
- myvol1:/var/www/html:ro,z
web2:
image: busybox
command: ["/bin/busybox", "httpd", "-f", "-h", "/var/www/html", "-p", "8002"]