Everyone needs to start somewhere - and it is only by trying that you can really learn how to do these things. That said, just because a server can serve your site doesn't mean it should - most security flaws tend not to be evident until they are exploited. Also remember that if your server is compromised, it doesn't only affect you - it has an adverse effect on all the other users of the Internet.
I'd highly recommend starting with a virtual machine (e.g. using VirtualBox), so that you can experiment without risk. I realize it isn't as exciting as putting together a 'live' server, but it's like having someone around that you can call to bail you out when things go wrong.
Before proceeding, I'd recommend reading tips for securing a LAMP server.
There isn't always just one right way, so here are some points to get you started:
The path that Apache serves from is defined in your `httpd.conf` - look for the `DocumentRoot` directive, which defines the filesystem path that maps to the root of a website. You will usually find a matching `Directory` block that defines specific options, permissions, and behaviours for matching files. I would recommend the 'Perfect Server' articles on HowToForge as a good starting point (although the more recent ones have significant portions of the setup configured by ISPConfig - which is less than ideal for learning).
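To make that concrete, a minimal per-site virtual host might look something like the following (a sketch only - the domain, paths, and log names are placeholders for whatever your layout actually uses):

```apache
# Hypothetical virtual host for one site; adjust names/paths to your setup
<VirtualHost *:80>
    ServerName domain1.com
    DocumentRoot /var/www/domain1.com/public_html

    # Restrict what Apache will do under the document root
    <Directory /var/www/domain1.com/public_html>
        Options -Indexes +FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog /var/log/apache2/domain1.com-error.log
    CustomLog /var/log/apache2/domain1.com-access.log combined
</VirtualHost>
```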
That said, using `/var/www` is fairly common practice (even if it doesn't quite fit with the normal Filesystem Hierarchy Standard, which might favour `/srv/www`). You want restrictive permissions - and having your containing directory owned by root limits the damage that can occur if an account/site is compromised. I'd suggest simply adding each site under `/var/www`. For instance, a possible layout might be:
- /var/www/domain1.com - the home directory of a restricted user (owned by that user)
  - cgi-bin - directory for CGI scripts (possibly including FastCGI wrappers)
  - log - symlink to the site-specific log file under /var/log/
  - tmp - temporary directory (e.g. for uploads, sessions, etc.) as needed
  - public_html - document root for the website
Keep your permissions at 644 for files and 755 for directories (use more restrictive permissions for configuration files that may contain passwords). (You may also add the webserver user to the site's group, depending on your setup.)
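For example, setting up one such site might look like this (a sketch under assumptions - the `site1` user, the `www-data` group, and the log path are placeholders for whatever your distribution and layout use):

```sh
# Hypothetical setup for one site; all names and paths are illustrative
useradd -d /var/www/domain1.com -m site1
mkdir -p /var/www/domain1.com/{cgi-bin,tmp,public_html}
ln -s /var/log/apache2/domain1.com-error.log /var/www/domain1.com/log

# Owned by the site user, readable by the webserver's group (www-data here)
chown -R site1:www-data /var/www/domain1.com

# 644 for files, 755 for directories
find /var/www/domain1.com -type f -exec chmod 644 {} +
find /var/www/domain1.com -type d -exec chmod 755 {} +
```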
You mention multiple sites (WordPress and Drupal) - it is preferable to separate these to the extent possible, and having them run as different users is a good start. The less a particular user can do, the less damage can be done if a particular site is compromised (that said, trying to 'fix' things after such a compromise is probably not a good idea - it's best to start over at that point - recommended reading). suExec is a good idea, but you may be able to get away without it if you use PHP-FPM (although you can still use suExec with PHP-FPM).
mod_php is faster than FastCGI - but it doesn't handle load well: every Apache process carries a dedicated PHP interpreter, so you get speed, but huge memory usage. Therefore, for all but the most trivial of applications, you should go the FastCGI route. I would recommend PHP-FPM - it is a FastCGI process manager that has good performance and greatly simplifies the implementation of many essential features (like suExec). It is readily available for PHP versions above 5.3.3 (earlier versions need to be compiled with the php-fpm patch). On the Apache side of things, if you use PHP-FPM, you would use `mod_fastcgi` and pass requests to the running PHP-FPM daemon with `FastCgiExternalServer`.
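That wiring typically looks something like the following (a sketch under assumed paths - the handler alias and socket location are placeholders, not a canonical configuration):

```apache
# Load mod_fastcgi and hand .php requests to an external PHP-FPM daemon
LoadModule fastcgi_module modules/mod_fastcgi.so

AddHandler php-fcgi .php
Action php-fcgi /php-fcgi-handler
Alias /php-fcgi-handler /usr/lib/cgi-bin/php-fcgi

# The file path acts as a key matching the Alias above; requests are
# forwarded to the PHP-FPM socket rather than executed from disk
FastCgiExternalServer /usr/lib/cgi-bin/php-fcgi \
    -socket /var/run/php-fpm/domain1.sock -pass-header Authorization
```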
Apache uses processes and threads to respond to requests. A multi-threaded approach is more efficient, but not all modules are thread-safe. Launching more processes requires more memory overall, but can be more stable in certain circumstances. Either way, every request is handled by a single thread.
- mpm_prefork is the older process manager - every process has only one thread, so each concurrent request needs its own process - this requires more memory.
- mpm_worker is the newer process manager - it launches a small number of processes, each with multiple threads. This should scale better and consume fewer resources.
There are a couple of possible reasons why your memory usage may have dropped when making the switch. Firstly, you probably restarted Apache - this terminates any existing processes and launches new ones; processes grow in size over time as they handle requests, so fresh processes use less memory. Secondly, the values you have set up in httpd.conf likely define different starting points for each process manager (for instance, if mpm_worker processes each have more threads, the same number of running processes will consume more memory - however, they will also be able to handle more requests). (The meaning of each process manager directive is explained in httpd.conf - or you can see this answer.)
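As an illustration of those 'starting points', the relevant httpd.conf blocks look something like this (the numbers are illustrative placeholders, not tuned recommendations - size MaxClients to your available memory):

```apache
# Illustrative values only - tune to your hardware
<IfModule mpm_prefork_module>
    StartServers          4
    MinSpareServers       2
    MaxSpareServers       6
    MaxClients           50
    MaxRequestsPerChild 3000
</IfModule>

<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxClients          150
    MaxRequestsPerChild 3000
</IfModule>
```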
Finally, if you are running on a low-memory VPS, I would recommend looking into Nginx instead of Apache. While Apache is more widely used, Nginx is easier to set up, uses fewer resources, and usually offers better performance (especially 'out of the box').
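For comparison, a minimal Nginx server block talking to the same assumed PHP-FPM socket as above might be (again, a sketch - names and paths are placeholders):

```nginx
# Hypothetical minimal server block; adjust names/paths to your layout
server {
    listen 80;
    server_name domain1.com;
    root /var/www/domain1.com/public_html;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm/domain1.sock;
    }
}
```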
Whatever route you take, consider the Internet to be a hostile environment - keep your packages up to date (use your package manager, and avoid compiling to the extent possible) and always keep security in mind (e.g. avoid FTP, use SCP instead) - hope for the best, but expect the worst.