Using Apache for pass-through authentication and DAV-SVN

The goal: How do you configure Apache to authenticate a user and grant them access to web pages based on their Unix permissions on the filesystem? Is that even possible?

The problem we are facing here is that the Apache daemon runs under a single account and therefore cannot access protected areas on NFS (or even on a local filesystem).

IIS can solve this using so-called pass-through authentication, but with plain Apache you have no such luck out of the box. Still – if you want to use Apache, you have two options:

1. Apache accessing NFS via Kerberos

A quite nice option for NFS-based filesystems is to use Kerberos authentication in Apache (via the mod_auth_kerb module, or the more modern mod_auth_gssapi) and store the user’s delegated Kerberos credentials on the web server (typically in /tmp).

The web server can then use these user credentials to perform an authenticated mount (if you have the automounter configured) against the NFS server. You are effectively impersonating the authenticated user’s identity behind the account the Apache daemon is running under.
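As a sketch of how the delegation side might look with mod_auth_gssapi (the location, realm name and keytab path are assumptions for illustration; GssapiDelegCcacheDir is the directive that stores the delegated user credentials where rpc.gssd can later find them):

```apacheconf
<Location /protected>
    AuthType GSSAPI
    AuthName "Kerberos Login"
    # Keytab holding the HTTP/ service principal (path is an assumption)
    GssapiCredStore keytab:/etc/httpd/http.keytab
    # Store the user's delegated credentials on disk so they can be
    # picked up for the Kerberized NFS mount
    GssapiDelegCcacheDir /tmp
    Require valid-user
</Location>
```

Note that the client must actually delegate its credentials for this to work, which usually requires the HTTP/ service account to be marked as trusted for delegation (ok-as-delegate) in the KDC.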

Example:

Apache is running as user ‘apache’. Remote user ‘ondrejv’ authenticates on your web page, storing his credentials in /tmp/krb5cc_apache. This file is owned by user ‘apache’ but holds Ondrej’s credentials. The web server then tries to access the NFS share; rpc.gssd grabs the Kerberos cache, and since it contains Ondrej’s credentials, the kernel creates a GSS context with the remote NFS server using Ondrej’s account.

This approach is nice, but it does not work well with multiple simultaneous users trying to access different shares – for a given system user (apache), the kernel establishes one GSS context tied to one remote authenticated user. This context then remains valid for some time, so subsequent authentications by other users are effectively ignored.

2. Accessing NFS shares using Apache-ITK

Another interesting option is to use Apache-ITK (the mpm-itk module), because since version 2.4 there is a promising parameter:

AssignUserIDExpr %{reqenv:REMOTE_USER}

which allows Apache to spawn a helper process under a dynamically specified user. Unfortunately, you cannot use REMOTE_USER here directly, because at the time Apache spawns the helper process, the user is not yet authenticated :-(.

To deal with this problem, we have to run two Apache servers. The first one (facing the LAN/Internet) performs the authentication and forwards the REMOTE_USER identity, hidden in a specific HTTP header, to the second (worker) Apache server.

3. Adding DAV_SVN into the loop

Theoretically we have it solved. The only remaining problem is SVN: certain SVN operations can only be performed by an authenticated user – but since the worker server does not do any authentication, we are out of luck…

… well, not exactly: we can use anonymous Basic authentication here to ‘forward’ the user identity from the frontend server and have the worker server authenticated, too – just to make SVN happy.

The complete configuration then looks like this. Frontend server:

<VirtualHost *:443>
 <Location />
  AuthType Kerberos
  AuthName "SVN Login"
  KrbVerifyKDC off
  KrbAuthRealms MYDOMAIN.COM
  KrbServiceName HTTP
  Require valid-user

  # Construct a fake authentication header for the worker ITK behind us:
  # put in the username and a dummy password
  AuthBasicFake %{REMOTE_USER} password

  # We have to pass the username in an HTTP header so that the ITK behind us
  # knows which username to use; unfortunately it can't use %{REMOTE_USER} directly
  RequestHeader set Proxy-User %{REMOTE_USER}s
  # strip the Kerberos realm string
  RequestHeader edit Proxy-User @MYDOMAIN.COM ""

  ProxyPass http://localhost/
  ProxyPassReverse http://localhost/
 </Location>
</VirtualHost>

We construct a fake authentication header here so that the worker ITK can be authenticated as well, and we also define the Proxy-User header so that ITK knows which username to use for the forked helper process.

The ITK server is (for security reasons) only listening on localhost; HTTPS is therefore not needed here:

<VirtualHost localhost:80>
 AssignUserIDExpr %{HTTP:Proxy-User}

 # As mentioned, we can use ITK to serve static HTML pages, too,
 # but since ITK runs as root, make sure no_root_squash is set on the NFS export
 # ...no worries, filesystem permissions are still honoured
 <Directory />
  Options Indexes FollowSymLinks
  AllowOverride None
  Require all granted
 </Directory>

 # DAV SVN uses anonymous authentication here; this seems to be the only way
 # to make dav_svn believe we are authenticated
 <Location /my_svn_project>
  DAV svn
  AuthName "anonymous"
  AuthType Basic
  AuthBasicProvider anon
  Anonymous "*"
  Require valid-user

  SVNParentPath /var/svnroot
  SVNListParentPath On
 </Location>
</VirtualHost>

A small disadvantage is that the root account still needs to be able to browse the filesystem (so the no_root_squash export option is needed on the NFS server). This is because ITK does not mask core_map_to_storage(). This limitation only affects Apache’s ability to serve static HTML pages, not dav_svn / php / cgi.

Enjoy!


Enable Kerberized NFS with SSSD and Active Directory

Once we have Linux computers joined to the AD domain and running, we can also enable Kerberized NFS. Let’s assume the AD domain ‘EXAMPLE.COM’:

  • On all computers, enable ‘secure NFS’ – on RHEL 6 and older we do so in the config file /etc/sysconfig/nfs (set ‘SECURE_NFS=yes’); on RHEL 7 and newer, enable the nfs-client target (systemctl enable nfs-client.target)
  • Make sure the clock is in sync with the Windows DC, and also make sure the Kerberos library is properly configured – but since we are running SSSD, this should already be done anyway…
  • On the server, additionally configure the exports, i.e.:
/export  *(rw,sec=sys:krb5:krb5i:krb5p)
    • The server needs an NFS keytab (“net ads keytab add nfs”) – this populates the servicePrincipalName (SPN) attribute of the computer object in AD
    • Additionally, start the NFS server (“service nfs start”, resp. “systemctl enable --now nfs-server”)
    • Servers running RHEL 6 or older can suffer from a nasty problem affecting users who are members of too many groups
  • Verify that all daemons are running (i.e. rpc.gssd on the client and rpc.svcgssd or gssproxy on the server)
  • If your NFS server is a NetApp NAS, configure Kerberos simply by running the “nfs setup” wizard and selecting option 2 (use Microsoft KDC)
  • If using NFSv4, make sure you have properly configured the ID mapper (see the file /etc/idmapd.conf)
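For the ID mapper, the setting that usually matters is the NFSv4 domain in /etc/idmapd.conf – a minimal sketch, assuming our EXAMPLE.COM domain (the values here are illustrative; the domain must match on client and server):

```ini
[General]
# NFSv4 domain – must be the same on all clients and servers
Domain = example.com

[Translation]
# Resolve names/IDs through NSS, i.e. via SSSD in our setup
Method = nsswitch
```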

Now that you have everything configured, you should be able to mount the share:

 mount -o sec=krb5 server_name:/ /mnt

Important note: on the NFS client, you actually need two Kerberos principals:

  1. Machine principal – that’s the one stored in the system keytab (usually /etc/krb5.keytab). In this keytab, we are interested in the principal of the form `hostname -s`$ – or, using AD syntax, the sAMAccountName attribute of the client machine object in AD. This principal is needed to perform the mount of the remote filesystem.
  2. User principal – usually stored in the user’s Kerberos credential cache – which is either a small file in /tmp (see the output of the klist command) or the kernel keyring. This ticket is the actual proof of the user’s identity – it enables the user to access the mounted filesystem. Without a valid ticket, the user is usually denied access to the mounted filesystem.
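For illustration, the machine principal in the keytab typically follows the sAMAccountName convention: the upper-cased short hostname with a trailing ‘$’. A sketch with hypothetical values (a client named ‘client01’ in realm EXAMPLE.COM; on a real client, use `hostname -s` and your own realm):

```shell
# Hypothetical values for illustration
host=client01
realm=EXAMPLE.COM

# sAMAccountName-style machine principal, as used by rpc.gssd / kinit -k
principal="$(echo "$host" | tr '[:lower:]' '[:upper:]')\$@${realm}"
echo "$principal"   # → CLIENT01$@EXAMPLE.COM
```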

Troubleshooting

If you are unable to mount the share, then:

  • make sure it really is a Kerberos problem, i.e. that you can mount with plain sys authentication just fine
  • make sure rpc.gssd has a working principal to work with – the easiest test is to obtain a TGT using the machine credentials (this should return with no errors):
# kinit -k `hostname -s`$
  • Make sure that your NFS server name is resolvable in DNS – including the reverse DNS lookup. Note that newer systems running nfs-utils version 1.2.8 and newer (like RHEL 7) are not so picky regarding reverse DNS. If this is your case and you do not want to rely on reverse DNS, make sure you use the FQDN in your mount command, i.e. “mount server.example.com:/vol/vol0 /mnt”
  • In all cases, the NFS server’s FQDN must be present in its servicePrincipalName (SPN) attribute (see above), i.e. “nfs/server.example.com”
    • If the FQDN of your Linux client or server box does not match the AD domain (i.e. the server’s FQDN is, for example, “filer.unix.example.com”, not “filer.example.com”), you are not lost – you only have to add the correct SPN to the server manually, using (for example) ADSI Edit – so in this example you would add “nfs/filer.unix.example.com” to the SPN list
  • If you have reverse DNS configured, make sure just one PTR record is returned for the server. If DNS resolves multiple PTR records for your NFS server, rpc.gssd might fail miserably on the client.

It is also useful to enable debugging (run rpc.gssd with the -vvv argument).

Myths

The DNS/rDNS records of the client machine are not so important – just make sure all the records (see above) for the server are OK. Also, nowadays you do not have to enable allow_weak_crypto in the krb5 library: with a modern kernel, Kerberized NFS will work even with strong ciphers.


Add automount rules to Active Directory and access them with SSSD

Centralizing automount rules in an identity store such as FreeIPA is usually a better choice for your environment than copying the automount map files around – the administrator has one place to edit the automount rules and the rule set is always up to date. Replication mitigates most of the single-point-of-failure woes, and by using modern clients like SSSD, the rules can also be cached on the client side, making the client resilient against network outages.

What if your identity store is Active Directory though? In this post, I’ll show you how to load automount maps to an AD server and how to configure SSSD to retrieve and cache the rules. A prerequisite is a running AD instance and a Linux client enrolled to the AD instance using tools like realmd or adcli. In this post, I’ll use dc=DOMAINNAME,dc=LOCAL as the Windows domain name.

SSSD (as well as the automounter’s LDAP backend) by default expects the RFC 2307bis schema on the LDAP server. Unfortunately, AD (as of Windows Server 2008) is not fully compatible with the RFC 2307bis schema, so we have two options:

  • Use the (older) RFC 2307 recommendation to store the maps – more SSSD configuration is needed
  • Extend the AD schema to fully meet RFC 2307bis and use SSSD with its default configuration

As extending the AD schema is an irreversible operation that can be potentially dangerous – and not every Linux admin has the rights to do so (Schema Admins membership is needed) – in this article we will describe the first option.

As the first step, we need to create an LDAP container that will store the automount maps. It is not a good idea to mix automounter rules into the same OU that already stores other objects, like users – a separate OU makes management easier and allows setting more fine-grained permissions. You can create the automount OU in “ADSI Edit” quite easily by right-clicking the top-level container (dc=DOMAINNAME,dc=LOCAL) and selecting “New->Object”. In the dialog that opens, select “organizationalUnit”, click “Next” and finally name the new OU “automount”. Note that ldap_autofs_search_base defaults to the RootDSE, so we have to tell SSSD about the autofs maps location in sssd.conf.
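If you prefer the command line over ADSI Edit, the same OU could be created from an LDIF file – a sketch, assuming our example domain (apply it with a tool such as ldapadd or ldifde against your DC):

```ldif
# Container for all automount maps (names match the ADSI Edit steps above)
dn: ou=automount,dc=DOMAINNAME,dc=LOCAL
objectClass: organizationalUnit
ou: automount
```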

We also need to re-map the automounter attributes to the NIS-friendly format – this is also done in sssd.conf. The final configuration snippet will look like this:

autofs_provider = ldap
ldap_autofs_entry_key = cn
ldap_autofs_entry_object_class = nisObject
ldap_autofs_entry_value = nisMapEntry
ldap_autofs_map_name = nisMapName
ldap_autofs_map_object_class = nisMap
ldap_autofs_search_base = ou=automount,dc=DOMAINNAME,dc=LOCAL

Note:

As of SSSD version 1.13.3, the ad provider can be used to feed the automounter directly – you can simply set “autofs_provider = ad” and omit the mapping part; the ad provider does it automatically for you.

You may have noticed that we specified an ldap (not ad) provider for the autofs backend even though the server is AD. This is a bit confusing, but it has to be done this way due to a current limitation in SSSD. Fortunately, no other ldap settings (authentication, credentials, etc.) are necessary – SSSD takes the missing bits from the AD provider, which has already been configured using the adcli or realmd tools.

Now, let’s add the maps themselves. First we need to define the auto.master map that will reference all the other indirect maps (we expect indirect maps here, but direct autofs maps can be configured similarly).

In my test, I used “ADSI Edit” again. Just right-click the automount container, select “New->Object”, and you should see nisMap in the list of object classes. You will be asked for the name (CN) and the nisMapName attribute value, so enter “auto.master” for both. Similarly, create an additional nisMap called, for example, auto.home – this one, in our example, will hold the maps for user home directories.

Now we need to put a reference to the auto.home map we just created into the main auto.master. Right-click the “auto.master” map we just created, select “New->Object” and pick “nisObject”. You will be asked for the name (CN) – enter “/home”, for nisMapName – enter “auto.master”, and for nisMapEntry – enter “auto.home”.

As the last step, let’s define keys for particular users in our auto.home map. Right-click the “auto.home” map, select “New->Object” and pick “nisObject”. You will be asked for the name (CN) – enter “johndoe”, for nisMapName – enter “auto.home”, and for nisMapEntry – enter, for example, “-fstype=nfs4 -sec=krb5p Netapp:/vol/vol1/users/johndoe” to reflect a valid path on the NFS server.
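For reference, the objects created in the three steps above correspond roughly to the following LDIF (a sketch using our example names; AD may add further attributes such as objectClass: top on its own):

```ldif
# The auto.master map
dn: cn=auto.master,ou=automount,dc=DOMAINNAME,dc=LOCAL
objectClass: nisMap
cn: auto.master
nisMapName: auto.master

# The auto.home map for user home directories
dn: cn=auto.home,ou=automount,dc=DOMAINNAME,dc=LOCAL
objectClass: nisMap
cn: auto.home
nisMapName: auto.home

# Reference from auto.master: mount point /home is served by auto.home
dn: cn=/home,cn=auto.master,ou=automount,dc=DOMAINNAME,dc=LOCAL
objectClass: nisObject
cn: /home
nisMapName: auto.master
nisMapEntry: auto.home

# Key for user johndoe inside auto.home
dn: cn=johndoe,cn=auto.home,ou=automount,dc=DOMAINNAME,dc=LOCAL
objectClass: nisObject
cn: johndoe
nisMapName: auto.home
nisMapEntry: -fstype=nfs4 -sec=krb5p Netapp:/vol/vol1/users/johndoe
```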

The client configuration involves minor modifications to two configuration files. First, edit /etc/nsswitch.conf and append ‘sss’ to the ‘automount:’ database configuration:

automount: files sss

If the automount database was not present in nsswitch.conf at all, just add the line as above. This modification allows the automounter to communicate with SSSD through the libsss_autofs library.
Finally, open the /etc/sssd/sssd.conf file and edit the [sssd] section to include the autofs service:

services = nss, pam, autofs

Then just restart SSSD and the setup is done! For testing, run:

automount -m

You should see something like this in the output:

autofs dump map information
===========================

global options: none configured

Mount point: /home

source(s):

  instance type(s): sss
  map: auto.home

  victim | -fstype=nfs4 -sec=krb5p polaris1:/vol/vol1/users/victim

That’s it! Now you can use your AD server as a centralized automount map store, and with SSSD the maps are cached and available offline.
