<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"
"http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd">
<article>
<!--$Id$-->
<articleinfo>
<title>Shorewall and OpenVZ</title>
<authorgroup>
<author>
<firstname>Tom</firstname>
<surname>Eastep</surname>
</author>
</authorgroup>
<pubdate><?dbtimestamp format="Y/m/d"?></pubdate>
<copyright>
<year>2009</year>
<holder>Thomas M. Eastep</holder>
</copyright>
<legalnotice>
<para>Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version
1.2 or any later version published by the Free Software Foundation; with
no Invariant Sections, with no Front-Cover, and with no Back-Cover
Texts. A copy of the license is included in the section entitled
<quote><ulink url="GnuCopyright.htm">GNU Free Documentation
License</ulink></quote>.</para>
</legalnotice>
</articleinfo>
<section>
<title>Introduction</title>
<para><ulink url="http://wiki.openvz.org/">Open Virtuozzo (OpenVZ)</ulink>
is an open-source, kernel-based virtualization solution from
<trademark><ulink
url="http://www.parallels.com">Parallels</ulink></trademark> (formerly
<trademark>SWsoft</trademark>). Virtual servers take the form of
<firstterm>containers</firstterm> (the OpenVZ documentation calls these
<firstterm>Virtual Environments</firstterm> or <firstterm>VEs</firstterm>)
which are created via <firstterm>templates</firstterm>. Templates are
available for a wide variety of distributions and architectures.</para>
<para>OpenVZ requires a patched kernel. Beginning with Lenny,
<trademark>Debian</trademark> supplies OpenVZ kernels through the standard
stable repository.</para>
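<para>For reference, a minimal installation sketch on Debian Lenny
(amd64 assumed; package names differ on other architectures) looks
something like this:</para>
<programlisting>gateway:~# <command>apt-get install linux-image-openvz-amd64 vzctl vzquota</command>
gateway:~# <command>reboot</command>        # boot into the OpenVZ kernel</programlisting>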
</section>
<section>
<title>Shorewall on an OpenVZ Host</title>
<para>As with any Shorewall installation involving other software, we
suggest that you first install OpenVZ and get it working before attempting
to add Shorewall. Alternatively, execute <command>shorewall
clear</command> while <ulink
url="http://wiki.openvz.org/Installation_on_Debian">installing and
configuring OpenVZ</ulink>.</para>
<section>
<title>Networking</title>
<para>The default OpenVZ networking configuration uses Proxy ARP. You
assign each container an IP address taken from the network of one of the
host's interfaces, and you are expected to set the proxy_arp flag on that
interface
(<filename>/proc/sys/net/ipv4/conf/<replaceable>interface</replaceable>/proxy_arp</filename>).</para>
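<para>For example, assuming the external interface is <filename
class="devicefile">eth0</filename>, the flag can be set by hand as shown
below; the <option>proxyarp=1</option> interface option used later in
this article has the same effect.</para>
<programlisting>gateway:~# <command>echo 1 &gt; /proc/sys/net/ipv4/conf/eth0/proxy_arp</command>
gateway:~# <command>sysctl net.ipv4.conf.eth0.proxy_arp=1</command>      # equivalent</programlisting>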
<para>OpenVZ creates a point-to-point virtual interface in the host with
a rather odd configuration.</para>
<para>Example (Single VE with IP address 206.124.146.178):</para>
<programlisting>gateway:~# <command>ip addr ls dev venet0</command>
10: venet0: &lt;BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UNKNOWN
    link/void
gateway:~# <command>ip route ls dev venet0</command>
206.124.146.178 scope link
gateway:~# </programlisting>
<para>The interface has no IP configuration, yet it has a route to
206.124.146.178!</para>
<para>From within the VE with IP address 206.124.146.178, we have the
following:</para>
<programlisting>server:~ # <command>ip addr ls</command>
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: venet0: &lt;BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UNKNOWN
    link/void
    inet 127.0.0.1/32 scope host venet0
    inet 206.124.146.178/32 scope global venet0:0
server:~ # <command>ip route ls</command>
192.0.2.0/24 dev venet0 scope link
127.0.0.0/8 dev lo scope link
default via 192.0.2.1 dev venet0
server:~ # </programlisting>
<para>This configuration has several unusual features:</para>
<itemizedlist>
<listitem>
<para>127.0.0.1/32 is configured on venet0 although the main routing
table routes loopback traffic through the <filename
class="devicefile">lo</filename> interface as normal.</para>
</listitem>
<listitem>
<para>There is a route to 192.0.2.0/24 through venet0 even though
the interface has no IP address in that network. Note: 192.0.2.0/24
is reserved for use in documentation and for testing.</para>
</listitem>
<listitem>
<para>The default route is via 192.0.2.1 yet there is no interface
on the host with that IP address.</para>
</listitem>
</itemizedlist>
<para>None of this really affects the Shorewall configuration, but it
is interesting nonetheless.</para>
</section>
<section>
<title>Shorewall Configuration</title>
<para>We recommend handling the strange OpenVZ configuration in
Shorewall as follows:</para>
<para><filename>/etc/shorewall/zones</filename>:</para>
<programlisting>###############################################################################
#ZONE   TYPE    OPTIONS         IN_OPTIONS      OUT_OPTIONS
net     ipv4
vz      ipv4</programlisting>
<para><filename>/etc/shorewall/interfaces</filename>:</para>
<programlisting>###############################################################################
#ZONE   INTERFACE       OPTIONS
net     eth0            proxyarp=1
vz      venet0          <emphasis role="bold">routeback,arp_filter=0</emphasis></programlisting>
</section>
<section>
<title>Multi-ISP</title>
<para>If you run Shorewall Multi-ISP support on the host, you should
arrange for traffic to your containers to use the main routing table. In
the configuration shown here, this entry in /etc/shorewall/rtrules is
appropriate:</para>
<programlisting>#SOURCE         DEST                    PROVIDER        PRIORITY
-               206.124.146.178         main            1000</programlisting>
</section>
<section>
<title>RFC 1918 Addresses in a Container</title>
<para>You can assign an RFC 1918 address to a VE and use masquerade/SNAT
to provide Internet access to the container. This is just a normal,
simple Shorewall configuration as shown in the <ulink
url="two-interface.htm">Two-interface Quick Start Guide</ulink>. In this
configuration, the firewall's internal interface is <filename
class="devicefile">venet0</filename>. Be sure to include the options
shown above.</para>
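<para>As a sketch, assuming the VE has been assigned 10.0.0.1 and the
host's external interface is <filename
class="devicefile">eth0</filename> (both are assumptions; substitute
your own values), the SNAT/masquerade part reduces to a single entry in
<filename>/etc/shorewall/masq</filename>:</para>
<programlisting>#INTERFACE      SOURCE          ADDRESS
eth0            10.0.0.1</programlisting>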
</section>
</section>
<section>
<title>Shorewall in an OpenVZ Virtual Environment</title>
<para>If you have obtained an OpenVZ VE from a hosting service provider,
you may find it difficult to configure any type of firewall within your
VE. There are two VE parameters that control iptables behavior within the
container:</para>
<variablelist>
<varlistentry>
<term>--iptables <replaceable>name </replaceable></term>
<listitem>
<para>Restricts access to iptables modules inside a container (the
OpenVZ documentation claims that, by default, all iptables modules
loaded in the host system are accessible inside a container; I haven't
tried that).</para>
<para>You can use the following values for
<replaceable>name</replaceable>: <option>iptable_filter</option>,
<option>iptable_mangle</option>, <option>ipt_limit</option>,
<option>ipt_multiport</option>, <option>ipt_tos</option>,
<option>ipt_TOS</option>, <option>ipt_REJECT</option>,
<option>ipt_TCPMSS</option>, <option>ipt_tcpmss</option>,
<option>ipt_ttl</option>, <option>ipt_LOG</option>,
<option>ipt_length</option>, <option>ip_conntrack</option>,
<option>ip_conntrack_ftp</option>,
<option>ip_conntrack_irc</option>, <option>ipt_conntrack</option>,
<option>ipt_state</option>, <option>ipt_helper</option>,
<option>iptable_nat</option>, <option>ip_nat_ftp</option>,
<option>ip_nat_irc</option>, <option>ipt_REDIRECT</option>,
<option>xt_mac</option>, <option>ipt_owner</option>.</para>
<para>If your provider is using this option, you may be in deep
trouble trying to use Shorewall in your container. Look at the
output of <command>shorewall show capabilities</command> and weep.
Then try to get your provider to remove this restriction on your
container.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>--numiptent <replaceable>num</replaceable></term>
<listitem>
<para>This parameter limits the number of iptables rules that are
allowed within the container. The default is 100, which is too small
for a Shorewall configuration; we recommend setting it to at least 200
(see the example below).</para>
</listitem>
</varlistentry>
</variablelist>
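<para>Both parameters are set on the host with
<command>vzctl</command>. A sketch, using VE 101 and an abbreviated
module list purely for illustration:</para>
<programlisting>gateway:~# <command>vzctl set 101 --numiptent 200 --save</command>
gateway:~# <command>vzctl set 101 --iptables "iptable_filter iptable_mangle ipt_REJECT ipt_LOG ip_conntrack ipt_state iptable_nat" --save</command></programlisting>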
<para>If you see annoying error messages like those shown below during
start/restart, remove the module-init-tools package from the VE.</para>
<programlisting>server:/etc/shorewall # shorewall restart
Compiling...
Compiling /etc/shorewall/zones...
Compiling /etc/shorewall/interfaces...
Determining Hosts in Zones...
Preprocessing Action Files...
Pre-processing /usr/share/shorewall/action.Drop...
Pre-processing /usr/share/shorewall/action.Reject...
Compiling /etc/shorewall/policy...
Adding Anti-smurf Rules
Adding rules for DHCP
Compiling TCP Flags filtering...
Compiling Kernel Route Filtering...
Compiling Martian Logging...
Compiling MAC Filtration -- Phase 1...
Compiling /etc/shorewall/rules...
Generating Transitive Closure of Used-action List...
Processing /usr/share/shorewall/action.Reject for chain Reject...
Processing /usr/share/shorewall/action.Drop for chain Drop...
Compiling MAC Filtration -- Phase 2...
Applying Policies...
Generating Rule Matrix...
Creating iptables-restore input...
Compiling iptables-restore input for chain mangle:...
Compiling /etc/shorewall/routestopped...
Shorewall configuration compiled to /var/lib/shorewall/.restart
Restarting Shorewall....
Initializing...
Processing /etc/shorewall/init ...
Processing /etc/shorewall/tcclear ...
Setting up Route Filtering...
Setting up Martian Logging...
Setting up Proxy ARP...
Setting up Traffic Control...
Preparing iptables-restore input...
Running /usr/sbin/iptables-restore...
<emphasis role="bold">FATAL: Could not load /lib/modules/2.6.26-2-openvz-amd64/modules.dep: No such file or directory
FATAL: Could not load /lib/modules/2.6.26-2-openvz-amd64/modules.dep: No such file or directory
FATAL: Could not load /lib/modules/2.6.26-2-openvz-amd64/modules.dep: No such file or directory
FATAL: Could not load /lib/modules/2.6.26-2-openvz-amd64/modules.dep: No such file or directory</emphasis>
IPv4 Forwarding Enabled
Processing /etc/shorewall/start ...
Processing /etc/shorewall/started ...
done.
</programlisting>
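<para>Assuming a Debian-based VE, the package can be removed from within
the container, or from the host using <command>vzctl exec</command> (VE
102 is used here for illustration):</para>
<programlisting>server:~ # <command>apt-get remove --purge module-init-tools</command>
gateway:~# <command>vzctl exec 102 apt-get -y remove module-init-tools</command></programlisting>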
</section>
<section>
<title>Working Example</title>
<para>This section presents a working example: the configuration at
shorewall.net during the summer of 2009.</para>
<para>The network diagram is shown below.</para>
<graphic fileref="images/Network2009c.png"/>
<para>The two systems shown in the green box are OpenVZ Virtual
Environments (containers).</para>
<section>
<title>OpenVZ Configuration</title>
<para>In the files below, items in <emphasis role="bold">bold
font</emphasis> are relevant to the networking/Shorewall
configuration.</para>
<para><filename>/etc/vz/vz.conf</filename> (long lines folded for
clarity):</para>
<programlisting>## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000
## Logging parameters
LOGGING=yes
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0
## Disk quota parameters
DISK_QUOTA=no
VZFASTBOOT=no
# The name of the device whose ip address will be used as source ip for VE.
# By default automatically assigned.
<emphasis role="bold">VE_ROUTE_SRC_DEV="eth3"</emphasis>
# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=detect
## Template parameters
TEMPLATE=/var/lib/vz/template
## Defaults for VEs
VE_ROOT=/home/vz/root/$VEID
VE_PRIVATE=/home/vz/private/$VEID
CONFIGFILE="vps.basic"
#DEF_OSTEMPLATE="fedora-core-4"
DEF_OSTEMPLATE="debian"
## Load vzwdog module
VZWDOG="no"
## IPv4 iptables kernel modules
<emphasis role="bold">IPTABLES="iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos
ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length
ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack
ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc ipt_REDIRECT
xt_mac ipt_owner"
</emphasis>
## Enable IPv6
IPV6="no"</programlisting>
<para><filename>/etc/vz/conf/101.conf</filename>:</para>
<programlisting>ONBOOT="yes"
# UBC parameters (in form of barrier:limit)
KMEMSIZE="574890800:589781600"
LOCKEDPAGES="256:256"
PRIVVMPAGES="1073741824:2137483648"
SHMPAGES="21504:21504"
NUMPROC="240:240"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="262144:9223372036854775807"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="188:206"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="1720320:2703360"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="1126080:2097152"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="360:360"
DCACHESIZE="3409920:3624960"
NUMFILE="9312:9312"
AVNUMPROC="180:180"
<emphasis role="bold">NUMIPTENT="200:200"</emphasis>
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153024"
DISKINODES="200000:220000"
QUOTATIME="0"
# CPU fair sheduler parameter
CPUUNITS="1000"
VE_ROOT="/home/vz/root/$VEID"
VE_PRIVATE="/home/vz/private/$VEID"
OSTEMPLATE="suse-11.1-x86_64"
ORIGIN_SAMPLE="vps.basic"
<emphasis role="bold">HOSTNAME="lists.shorewall.net"
IP_ADDRESS="206.124.146.177"
NAMESERVER="127.0.0.1"
NAME="lists"
SEARCHDOMAIN="shorewall.net"</emphasis></programlisting>
<para>This VE is the main server at shorewall.net. Note that some of the
memory parameters are set ridiculously large -- I got tired of
out-of-memory issues.</para>
<para><filename>/etc/vz/conf/102.conf</filename> (nearly default
configuration on Debian):</para>
<programlisting>ONBOOT="yes"
# UBC parameters (in form of barrier:limit)
KMEMSIZE="14372700:14790164"
LOCKEDPAGES="256:256"
PRIVVMPAGES="65536:69632"
SHMPAGES="21504:21504"
NUMPROC="240:240"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="33792:9223372036854775807"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="188:206"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="1720320:2703360"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="1126080:2097152"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="360:360"
DCACHESIZE="3409920:3624960"
NUMFILE="9312:9312"
AVNUMPROC="180:180"
<emphasis role="bold">NUMIPTENT="200:200"</emphasis>
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153024"
DISKINODES="200000:220000"
QUOTATIME="0"
# CPU fair sheduler parameter
CPUUNITS="1000"
VE_ROOT="/home/vz/root/$VEID"
VE_PRIVATE="/home/vz/private/$VEID"
OSTEMPLATE="debian-5.0-amd64-minimal"
ORIGIN_SAMPLE="vps.basic"
<emphasis role="bold">HOSTNAME="server.shorewall.net"
IP_ADDRESS="206.124.146.178"
NAMESERVER="206.124.146.177"
NAME="server"</emphasis></programlisting>
<para>I really don't use this server for anything currently, but I'm
planning to eventually split the services between the two VEs.</para>
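<para>For completeness, per-VE configuration files like the one above
are generated by <command>vzctl</command>. A sketch of how a container
such as 102 might have been created (template name and addresses taken
from the listing above):</para>
<programlisting>gateway:~# <command>vzctl create 102 --ostemplate debian-5.0-amd64-minimal --config vps.basic</command>
gateway:~# <command>vzctl set 102 --hostname server.shorewall.net --ipadd 206.124.146.178 \
           --nameserver 206.124.146.177 --name server --onboot yes --save</command>
gateway:~# <command>vzctl start 102</command></programlisting>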
</section>
<section>
<title>Shorewall Configuration on the Host</title>
<para>Below are excerpts from the configuration files as they pertain to
the OpenVZ environment.</para>
<para><filename>/etc/shorewall/zones</filename>:</para>
<programlisting>#ZONE   TYPE            OPTIONS         IN_OPTIONS      OUT_OPTIONS
fw      firewall
net     ipv4            #Internet
loc     ipv4            #Local wired Zone
<emphasis role="bold">dmz     ipv4            #DMZ</emphasis>
...</programlisting>
<para><filename>/etc/shorewall/params</filename>:</para>
<programlisting>NET_IF=eth3
INT_IF=eth1
<emphasis role="bold">VPS_IF=venet0</emphasis>
...</programlisting>
<para><filename>/etc/shorewall/interfaces</filename>:<programlisting>#ZONE   INTERFACE       OPTIONS
net     $NET_IF         dhcp,blacklist,tcpflags,optional,routefilter=0,nosmurfs,logmartians=0,<emphasis
role="bold">proxyarp=1</emphasis>
loc     $INT_IF         dhcp,logmartians=1,routefilter=1,nets=(172.20.1.0/24),tcpflags
<emphasis role="bold">dmz     $VPS_IF         logmartians=0,routefilter=0,nets=(206.124.146.177,206.124.146.178),routeback</emphasis>
...</programlisting>This is a multi-ISP configuration, so entries are required
in <filename>/etc/shorewall/rtrules</filename>:</para>
<programlisting>#SOURCE         DEST                    PROVIDER        PRIORITY
-               172.20.0.0/24           main            1000
<emphasis role="bold">-               206.124.146.177         main            1001
-               206.124.146.178         main            1001</emphasis></programlisting>
</section>
<section>
<title>Shorewall Configuration on Server</title>
<warning>
<para>If you are running Debian Squeeze, Shorewall will not work in an
OpenVZ container. This is a Debian OpenVZ issue and not a Shorewall
issue.</para>
</warning>
<para>I have set up Shorewall on Server (206.124.146.178) just to have
an environment to test with. It is a quite vanilla one-interface
configuration.</para>
<para><filename>/etc/shorewall/zones</filename>:</para>
<programlisting>#ZONE   TYPE            OPTIONS         IN_OPTIONS      OUT_OPTIONS
fw      firewall
net     ipv4</programlisting>
<para><filename>/etc/shorewall/interfaces</filename>:</para>
<programlisting>#ZONE   INTERFACE       BROADCAST       OPTIONS
net     <emphasis role="bold">venet0</emphasis>          detect          dhcp,tcpflags,logmartians,nosmurfs</programlisting>
</section>
</section>
<section>
<title>Working Example Using a Bridge</title>
<para>This is the configuration at shorewall.net during the spring of
2010. Rather than using the venet0 configuration shown above, this
configuration uses a bridge in preparation for adding IPv6 support in the
DMZ. The eth0 interface in each of the containers is statically configured
using the distributions' configuration tools
(<filename>/etc/network/interfaces</filename> on Debian and Yast on
OpenSuSE).</para>
<para>The network diagram is shown below.</para>
<graphic fileref="images/Network2010.png"/>
<para>The two systems shown in the green box are OpenVZ Virtual
Environments (containers).</para>
<section>
<title>Bridge Configuration</title>
<para>The following stanza in /etc/network/interfaces on the host
configures the bridge.</para>
<programlisting>auto vzbr0
iface vzbr0 inet static
        pre-up /usr/sbin/brctl addbr vzbr0
        address 206.124.146.176
        network 206.124.146.176
        broadcast 206.124.146.176
        netmask 255.255.255.255
        post-down /usr/sbin/brctl delbr vzbr0
</programlisting>
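<para>For testing, roughly the same bridge can be brought up by hand
with <command>brctl</command> and <command>ip</command>:</para>
<programlisting>gateway:~# <command>brctl addbr vzbr0</command>
gateway:~# <command>ip addr add 206.124.146.176/32 dev vzbr0</command>
gateway:~# <command>ip link set vzbr0 up</command></programlisting>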
</section>
<section>
<title>OpenVZ Configuration</title>
<para>In the files below, items in <emphasis role="bold">bold
font</emphasis> show the changes from the preceding example.</para>
<para><filename>/etc/vz/vz.conf</filename> (long lines folded for
clarity):</para>
<programlisting>## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000
## Logging parameters
LOGGING=yes
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0
## Disk quota parameters
DISK_QUOTA=no
VZFASTBOOT=no
# The name of the device whose ip address will be used as source ip for VE.
# By default automatically assigned.
VE_ROUTE_SRC_DEV="eth3"
# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=detect
## Template parameters
TEMPLATE=/var/lib/vz/template
## Defaults for VEs
VE_ROOT=/home/vz/root/$VEID
VE_PRIVATE=/home/vz/private/$VEID
CONFIGFILE="vps.basic"
#DEF_OSTEMPLATE="fedora-core-4"
DEF_OSTEMPLATE="debian"
## Load vzwdog module
VZWDOG="no"
## IPv4 iptables kernel modules
IPTABLES="iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos
ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length
ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack
ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc ipt_REDIRECT
xt_mac ipt_owner"
## Enable IPv6
IPV6="no"</programlisting>
<para><filename>/etc/vz/conf/101.conf</filename>:</para>
<programlisting>ONBOOT="yes"
# UBC parameters (in form of barrier:limit)
KMEMSIZE="574890800:589781600"
LOCKEDPAGES="256:256"
PRIVVMPAGES="1073741824:2137483648"
SHMPAGES="21504:21504"
NUMPROC="240:240"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="262144:9223372036854775807"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="188:206"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="1720320:2703360"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="1126080:2097152"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="360:360"
DCACHESIZE="3409920:3624960"
NUMFILE="9312:9312"
AVNUMPROC="180:180"
NUMIPTENT="200:200"
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153024"
DISKINODES="200000:220000"
QUOTATIME="0"
# CPU fair sheduler parameter
CPUUNITS="1000"
VE_ROOT="/home/vz/root/$VEID"
VE_PRIVATE="/home/vz/private/$VEID"
OSTEMPLATE="suse-11.1-x86_64"
ORIGIN_SAMPLE="vps.basic"
HOSTNAME="lists.shorewall.net"
NAMESERVER="127.0.0.1"
NAME="lists"
SEARCHDOMAIN="shorewall.net"
<emphasis role="bold">NETIF="ifname=eth0,mac=00:18:51:22:24:81,host_ifname=veth101.0,host_mac=00:18:51:B6:1A:F1"</emphasis></programlisting>
<para>This VE is the mail server at shorewall.net (MX and IMAP). Note
that some of the memory parameters are set ridiculously large -- I got
tired of out-of-memory issues.</para>
<para><filename>/etc/vz/conf/102.conf</filename> (nearly default
configuration on Debian):</para>
<programlisting>ONBOOT="yes"
# UBC parameters (in form of barrier:limit)
KMEMSIZE="14372700:14790164"
LOCKEDPAGES="256:256"
PRIVVMPAGES="65536:69632"
SHMPAGES="21504:21504"
NUMPROC="240:240"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="33792:9223372036854775807"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="188:206"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="1720320:2703360"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="1126080:2097152"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="360:360"
DCACHESIZE="3409920:3624960"
NUMFILE="9312:9312"
AVNUMPROC="180:180"
NUMIPTENT="200:200"
# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="1048576:1153024"
DISKINODES="200000:220000"
QUOTATIME="0"
# CPU fair sheduler parameter
CPUUNITS="1000"
VE_ROOT="/home/vz/root/$VEID"
VE_PRIVATE="/home/vz/private/$VEID"
OSTEMPLATE="debian-5.0-amd64-minimal"
ORIGIN_SAMPLE="vps.basic"
HOSTNAME="server.shorewall.net"
NAMESERVER="206.124.146.177"
NAME="server"
<emphasis role="bold">NETIF="ifname=eth0,mac=00:18:51:22:24:80,host_ifname=veth102.0,host_mac=00:18:51:B6:1A:F0"</emphasis></programlisting>
<para>This server runs the rest of the services for shorewall.net (web
server, ftp server, rsyncd, etc.).</para>
<para>With a bridged configuration, the VIF for a VE must be added to
the bridge when the VE starts. That is accomplished using
<firstterm>mount</firstterm> files.</para>
<para><filename>/etc/vz/conf/101.mount:</filename></para>
<programlisting><emphasis role="bold">#!/bin/bash
# This script source VPS configuration files in the same order as vzctl does
# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# source both files. Note the order, it is important
. /etc/vz/vz.conf
. $VE_CONFFILE
# Add the VIF to the bridge after VE has started
{
BRIDGE=vzbr0
DEV=veth101.0
while sleep 1; do
/sbin/ifconfig $DEV 0 &gt;/dev/null 2&gt;&amp;1
if [ $? -eq 0 ]; then
/usr/sbin/brctl addif $BRIDGE $DEV
break
fi
done
} &amp;</emphasis></programlisting>
<para><filename>/etc/vz/conf/102.mount:</filename></para>
<programlisting><emphasis role="bold">#!/bin/bash
# This script source VPS configuration files in the same order as vzctl does
# if one of these files does not exist then something is really broken
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1
# source both files. Note the order, it is important
. /etc/vz/vz.conf
. $VE_CONFFILE
# Add VIF to bridge after VE has started
{
BRIDGE=vzbr0
DEV=veth102.0
while sleep 1; do
/sbin/ifconfig $DEV 0 &gt;/dev/null 2&gt;&amp;1
if [ $? -eq 0 ]; then
/usr/sbin/brctl addif $BRIDGE $DEV
break
fi
done
} &amp;</emphasis></programlisting>
</section>
<section>
<title>Shorewall Configuration on the Host</title>
<para>Below are excerpts from the configuration files as they pertain to
the OpenVZ environment. Again, bold font indicates change from the prior
configuration.</para>
<para><filename>/etc/shorewall/zones</filename>:</para>
<programlisting>#ZONE   TYPE            OPTIONS         IN_OPTIONS      OUT_OPTIONS
fw      firewall
net     ipv4            #Internet
loc     ipv4            #Local wired Zone
dmz     ipv4            #DMZ
...</programlisting>
<para><filename>/etc/shorewall/params</filename>:</para>
<programlisting>NET_IF=eth3
INT_IF=eth1
<emphasis role="bold">VPS_IF=vzbr0</emphasis>
...</programlisting>
<para><filename>/etc/shorewall/interfaces</filename>:<programlisting>#ZONE   INTERFACE       OPTIONS
net     $NET_IF         dhcp,blacklist,tcpflags,optional,routefilter=0,nosmurfs,logmartians=0
loc     $INT_IF         dhcp,logmartians=1,routefilter=1,nets=(172.20.1.0/24),tcpflags
dmz     $VPS_IF         logmartians=0,routefilter=0,nets=(206.124.146.177,206.124.146.178),routeback
...</programlisting></para>
<para><filename>/etc/shorewall/proxyarp:</filename></para>
<programlisting>#ADDRESS                INTERFACE       EXTERNAL        HAVEROUTE       PERSISTENT
<emphasis role="bold">206.124.146.177         DMZ_IF          eth2            no              yes
206.124.146.178         DMZ_IF          eth2            no              yes</emphasis></programlisting>
<para>This is a multi-ISP configuration, so entries are required in
<filename>/etc/shorewall/rtrules</filename>:</para>
<programlisting>#SOURCE         DEST                    PROVIDER        PRIORITY
-               172.20.0.0/24           main            1000
-               206.124.146.177         main            1001
-               206.124.146.178         main            1001</programlisting>
</section>
<section>
<title>Shorewall Configuration on Server</title>
<para>I have set up Shorewall on VE 102 (206.124.146.178) just to have
an environment to test with. It is a quite vanilla one-interface
configuration.</para>
<para><filename>/etc/shorewall/zones:</filename></para>
<programlisting>#ZONE   TYPE            OPTIONS         IN_OPTIONS      OUT_OPTIONS
fw      firewall
net     ipv4</programlisting>
<para><filename>/etc/shorewall/interfaces:</filename></para>
<programlisting>#ZONE   INTERFACE       OPTIONS
net     <emphasis role="bold">eth0</emphasis>            dhcp,tcpflags,logmartians,nosmurfs</programlisting>
</section>
</section>
</article>