Home Network Setup, Part 3

This is the third part of a multi-part series as I go through the process of setting up a home network. If you’ve just hit this article I’d recommend going through Parts 1 & 2 first. The first thing we should do is run through what we’ve achieved against the requirements we defined in the first of these articles:

  • Reliable shared internet access with either automatic or manual failover to an alternate means of connectivity (à la iBurst Wireless or, ick, dialup!).
  • A method of handing out IP addresses to all “dynamic” clients on the network. That is to say, we’re looking for a DHCP server.
  • Optimisation of the possible bottlenecks associated with a home-based broadband connection. DNS & HTTP caching come to mind.

HTTP caching is something we’ll worry about shortly, but first I think it’s necessary to begin setting up Tethys. 🙂

We cover the following topics in this article:

  • Secondary DNS Server
  • Local DNS Zone
  • Local Zone Slave DNS Setup
  • Centralised File sharing
  • Transparent/HTTP Caching Proxy Server

Secondary DNS Server

We’ve already got a caching DNS server set up on the main gateway machine (Dione). Now we do the same thing on Tethys so that we have multiple DNS servers.

[root@tethys ~]# yum -y install caching-nameserver
Setting up Install Process
Setting up repositories
update 100% |=========================| 951 B 00:00
base 100% |=========================| 1.1 kB 00:00
addons 100% |=========================| 951 B 00:00
extras 100% |=========================| 1.1 kB 00:00
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for caching-nameserver to pack into transaction set.
caching-nameserver-7.3-3. 100% |=========================| 6.8 kB 00:00
---> Package caching-nameserver.noarch 0:7.3-3 set to be updated
--> Running transaction check
--> Processing Dependency: bind for package: caching-nameserver
--> Processing Dependency: bind >= 9.1.3-0.rc2.3 for package: caching-nameserver
--> Restarting Dependency Resolution with new changes.
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for bind to pack into transaction set.
bind-9.2.4-2.i386.rpm 100% |=========================| 33 kB 00:01
---> Package bind.i386 20:9.2.4-2 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Installing:
caching-nameserver noarch 7.3-3 base 22 k
Installing for dependencies:
bind i386 20:9.2.4-2 base 462 k

Transaction Summary
=============================================================================
Install 2 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 484 k
Downloading Packages:
(1/2): caching-nameserver 100% |=========================| 22 kB 00:00
(2/2): bind-9.2.4-2.i386. 100% |=========================| 462 kB 00:06
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: bind ######################### [1/2]
Installing: caching-nameserver ######################### [2/2]

Installed: caching-nameserver.noarch 0:7.3-3
Dependency Installed: bind.i386 20:9.2.4-2
Complete!
[root@tethys ~]# chkconfig --level 345 named on
[root@tethys ~]# service named start
Starting named: [ OK ]
[root@tethys ~]#

Given that we’ve already done this on Dione, I’m going to assume I don’t need to run you through the process of installation & testing again. 🙂

Local DNS Zone

Now that we have a working caching DNS server on each machine we can look at setting up a local DNS zone. As I’ve previously explained, setting up a local DNS zone means you can refer to your servers by name and not always have to remember their IP addresses.

So we begin by writing a DNS zone for our domain name (in this case it’s Seekbrain.com, since I use that both at home & publicly). I won’t bother running you through the specific components of writing a DNS zone, since I feel it has been explained numerous times in other articles.

So, as a brief rundown, I’ve made the following changes on Tethys:

/var/named/seekbrain.com.hosts:

$ttl 38400
seekbrain.com. IN SOA tethys.seekbrain.com. stuart.seekbrain.com. (
1136486031
10800
3600
604800
38400 )
seekbrain.com. IN NS tethys.seekbrain.com.
seekbrain.com. IN NS dione.seekbrain.com.
seekbrain.com. IN A 202.60.73.6
www.seekbrain.com. IN CNAME seekbrain.com.
au.seekbrain.com. IN CNAME www
levity.seekbrain.com. IN A 192.168.50.1
mailserver.seekbrain.com. IN A 192.168.128.3
recipes.seekbrain.com. IN CNAME seekbrain.com.
tethys.seekbrain.com. IN A 192.168.128.4
dione.seekbrain.com. IN A 192.168.128.1
telesto.seekbrain.com. IN A 192.168.128.5
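A quick aside on the serial field in the SOA record: the values in these zones are just Unix timestamps. Whenever you edit a zone you must bump the serial, otherwise the slaves will never notice the change, and a timestamp makes that trivial. A minimal sketch:

```shell
# Generate a fresh SOA serial as a Unix timestamp (the format used
# in the zones above). Any edit to a zone should come with a new,
# strictly larger serial so slaves pick up the change.
serial=$(date +%s)
echo "new serial: $serial"
```

Any monotonically increasing scheme works (YYYYMMDDnn is the other popular one); the only rule is that it must go up with every edit.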

/var/named/192.168.50.rev:

$ttl 38400
50.168.192.in-addr.arpa. IN SOA tethys.seekbrain.com. stuart.seekbrain.com. (
1136486588
10800
3600
604800
38400 )
50.168.192.in-addr.arpa. IN NS dione.seekbrain.com.
50.168.192.in-addr.arpa. IN NS tethys.seekbrain.com.
1.50.168.192.in-addr.arpa. IN PTR levity.seekbrain.com.

/var/named/192.168.128.rev:

$ttl 38400
128.168.192.in-addr.arpa. IN SOA tethys.seekbrain.com. stuart.seekbrain.com. (
1136486693
10800
3600
604800
38400 )
128.168.192.in-addr.arpa. IN NS dione.seekbrain.com.
128.168.192.in-addr.arpa. IN NS tethys.seekbrain.com.
1.128.168.192.in-addr.arpa. IN PTR dione.seekbrain.com.
2.128.168.192.in-addr.arpa. IN PTR levity.seekbrain.com.
3.128.168.192.in-addr.arpa. IN PTR mailserver.seekbrain.com.
4.128.168.192.in-addr.arpa. IN PTR tethys.seekbrain.com.
5.128.168.192.in-addr.arpa. IN PTR telesto.seekbrain.com.
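If reversing the octets by hand feels error-prone, a one-liner can double-check the in-addr.arpa name for any address (the awk incantation is just one way to do it):

```shell
# Build the in-addr.arpa name for an IPv4 address by reversing
# its octets, e.g. for Tethys:
ip=192.168.128.4
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$rev"   # prints 4.128.168.192.in-addr.arpa.
```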

Now that we’ve set up our forward and reverse DNS zones we can add them to named.conf:

/etc/named.conf:

// ... existing configuration ...

zone "seekbrain.com" IN {
        type master;
        file "seekbrain.com.hosts";
        allow-update { none; };
        allow-transfer { 192.168.128.1; };
};

zone "50.168.192.in-addr.arpa" IN {
        type master;
        file "192.168.50.rev";
        allow-transfer { 192.168.128.1; };
        allow-update { none; };
};

zone "128.168.192.in-addr.arpa" IN {
        type master;
        file "192.168.128.rev";
        allow-transfer { 192.168.128.1; };
        allow-update { none; };
};

This basically says that Tethys is the master DNS server for the three zones, but that we allow zone transfers to Dione.

A quick restart later and we should be able to do a local DNS lookup for a server which only exists in our local zone (ie. our NON-internet zone):

[root@tethys named]# service named restart
Stopping named: [ OK ]
Starting named: [ OK ]
[root@tethys named]# host tethys.seekbrain.com localhost
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:

tethys.seekbrain.com has address 192.168.128.4
[root@tethys named]#
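With the local zone resolving, client machines can point at both servers. As a sketch (the mount of hand-holding you need here depends on your distro), a client’s /etc/resolv.conf might look like this, using the addresses from the zone above:

```
# /etc/resolv.conf on a LAN client. Listing both nameservers means
# lookups fail over to the second box if the first is down.
search seekbrain.com
nameserver 192.168.128.1
nameserver 192.168.128.4
```

The search line also lets you type short names (e.g. just “tethys”) and have them expand to the full seekbrain.com name.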

Local Zone Slave DNS Setup

Now we have to set up Dione to pull the zones from Tethys. Fortunately for us this is fairly easy, since they’re only slave zones and the master zones we set up above are already available. I simply added the following lines to Dione’s /etc/named.conf:

zone "seekbrain.com" IN {
        type slave;
        file "slaves/seekbrain.com";
        masters { 192.168.128.4; };
        allow-transfer { none; };
};

zone "50.168.192.in-addr.arpa" IN {
        type slave;
        file "slaves/192.168.50.rev";
        masters { 192.168.128.4; };
        allow-transfer { none; };
};

zone "128.168.192.in-addr.arpa" IN {
        type slave;
        file "slaves/192.168.128.rev";
        masters { 192.168.128.4; };
        allow-transfer { none; };
};

After restarting named it’s fairly easy to confirm the transfers succeeded by tailing /var/log/messages:

Feb 6 05:52:54 dione named[4905]: zone 50.168.192.in-addr.arpa/IN: transferred serial 1136486588
Feb 6 05:52:54 dione named[4905]: transfer of '50.168.192.in-addr.arpa/IN' from 192.168.128.4#53: end of transfer
Feb 6 05:52:55 dione named[4905]: zone 50.168.192.in-addr.arpa/IN: sending notifies (serial 1136486588)
Feb 6 05:52:55 dione named[4905]: received notify for zone 'seekbrain.com'
Feb 6 05:52:55 dione named[4905]: zone seekbrain.com/IN: sending notifies (serial 1136486031)
Feb 6 05:52:56 dione named[4905]: zone 128.168.192.in-addr.arpa/IN: transferred serial 1136486693
Feb 6 05:52:56 dione named[4905]: transfer of '128.168.192.in-addr.arpa/IN' from 192.168.128.4#53: end of transfer

So that’s it! We now have two DNS servers in a master/slave arrangement, with caching.

Centralised File sharing

As I previously explained, another requirement is a centralised file server. Personally I use NFS, partially because of its minimal overhead but also because there are zero Windows machines in my house. For those who want to use Samba, there are numerous guides available for setting up those shares. I’ve added the following to Tethys’s /etc/exports file:

/share/spool0 192.168.128.0/24(rw,no_root_squash) 192.168.50.0/24(rw,no_root_squash)
/share/spool1 192.168.128.0/24(rw,no_root_squash) 192.168.50.0/24(rw,no_root_squash)

I then set NFS to start at boot (chkconfig --level 345 nfs on) and started NFS (service nfs start).
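On the client side, mounting is a one-liner. A sketch of the /etc/fstab entries a client might use (the mount points are up to you; the hostname resolves via the local zone we set up earlier):

```
# /etc/fstab on a client -- mount Tethys's shares at boot.
tethys.seekbrain.com:/share/spool0  /share/spool0  nfs  rw,hard,intr  0 0
tethys.seekbrain.com:/share/spool1  /share/spool1  nfs  rw,hard,intr  0 0
```

The hard,intr combination means a dead server hangs I/O rather than silently corrupting it, but lets you interrupt the hung process, which is a reasonable default for a home network.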

Security-wise, this isn’t GREAT; in particular, rw combined with no_root_squash means root on any client machine gets full root access to the shares. I wouldn’t recommend this setup for anything OTHER than a home network. Realistically, Sally is the only other person who is regularly on the network, the server itself is behind a gateway, and the data on the server is somewhat recoverable (ie. it’d hurt, but it wouldn’t be the end of my PHP RPMs). 😉

Transparent/HTTP Caching Proxy Server

So now that we have DNS & central file storage organised, we can set up an HTTP proxy. Since I like to use transparent proxying (ie. proxying that doesn’t require any modification on the client side), the most appropriate place to put it is on the NAT system (ie. Dione).

Consequently I ran the following commands on Dione:

[root@dione ~]# yum -y install squid
Setting up Install Process
Setting up repositories
update 100% |=========================| 951 B 00:00
base 100% |=========================| 1.1 kB 00:00
addons 100% |=========================| 951 B 00:00
extras 100% |=========================| 1.1 kB 00:00
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for squid to pack into transaction set.
squid-2.5.STABLE6-3.4E.11 100% |=========================| 125 kB 00:02
---> Package squid.i386 7:2.5.STABLE6-3.4E.11 set to be updated
--> Running transaction check

Dependencies Resolved

=============================================================================
Package Arch Version Repository Size
=============================================================================
Installing:
squid i386 7:2.5.STABLE6-3.4E.11 base 1.1 M

Transaction Summary
=============================================================================
Install 1 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 1.1 M
Downloading Packages:
(1/1): squid-2.5.STABLE6- 100% |=========================| 1.1 MB 00:04
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing: squid ######################### [1/1]

Installed: squid.i386 7:2.5.STABLE6-3.4E.11
Complete!
[root@dione ~]#

Initialising Squid’s cache directories is pretty easy as well:

[root@dione ~]# squid -z
2006/02/06 06:36:28| Creating Swap Directories
[root@dione ~]#

Generally speaking it’s fairly safe to accept most of the defaults, except those that are required for the successful operation of the proxy itself. I made the following changes/additions to the /etc/squid/squid.conf file:

httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

acl localnet src 192.168.128.0/24 192.168.50.0/24
http_access allow localnet

The httpd_accel_* lines are purely for our transparent proxying requirements, while the acl & http_access lines allow our local network to use Squid without creating an open proxy server to be exploited.
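One habit worth keeping with Squid ACLs: http_access rules are evaluated top to bottom, so make sure your allow line sits above an explicit final deny (the stock squid.conf ships with one). The relevant section ends up looking something like this:

```
# Allow the two LAN subnets, then refuse everything else.
# Order matters: Squid uses the first http_access rule that matches.
acl localnet src 192.168.128.0/24 192.168.50.0/24
http_access allow localnet
http_access deny all
```

If the allow line accidentally lands below the deny all, the LAN gets refused and it can take a while to work out why.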

Then I restarted (or started, as the case may be) Squid:

[root@dione ~]# service squid restart
Stopping squid: [FAILED]
Starting squid: . [ OK ]
[root@dione ~]# chkconfig --level 345 squid on

Forcing transparent proxying is then as easy as adding a single iptables rule:

[root@dione ~]# iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
[root@dione ~]# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
[root@dione ~]#

So now we have a transparent proxy server. Cool eh? 🙂

Conclusion

Well, that’ll do for this article. In the next article I’ll run through how to set up centralised NIS authentication (master/slave) and dig into the meat of setting up our local mail server.

Have fun! 🙂

Stuart