Customer support forums for Atomic Protector (formerly Atomic Secured Linux). There is no such thing as a bad question here as long as it pertains to using Atomic Protector. Newbies feel free to get help getting started or asking questions that may be obvious. Regular users are asked to be gentle.
Nginx
Varnish
PHP-FPM (PHP v5.4)
Percona 5.6
ASL
Test Application: Wordpress
Server config:
1 x Intel Xeon E5450
32 GB RAM
1 x 1TB 7200RPM HDD
I'm using Loader.io for load testing with 300 users over 1 minute.
Without T-WAF enabled I get an average response time of 50 ms, with the server barely registering the load in htop. Below is the screenshot of loader.io for this test.
With T-WAF enabled the average response time jumps to ~3200 ms, with all cores hitting 100% CPU usage. The bulk of the load is taken by tortixd threads.
I've tried playing with the prefork MPM section of tortixd.conf at /var/asl/httpd/conf; the number of threads increases, but it does nothing to reduce the server load.
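For reference, the prefork directives being tuned look something like this (values are illustrative only, not from the shipped tortixd.conf -- tune against your own load test):

```apache
# Illustrative prefork MPM section for /var/asl/httpd/conf/tortixd.conf
# (example values, not recommendations)
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    ServerLimit         256
    MaxClients          256
    MaxRequestsPerChild 4000
</IfModule>
```

Note that raising these adds concurrency, not capacity: if each request is CPU-bound inside the inspection engine, more workers just contend for the same cores, which is consistent with the behavior described above.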
Any ideas?
The website that needs to run here does 50+ million monthly pageviews.
It's apache based, so it's already at a disadvantage here against varnish. So let me start with an example of something we put together for a mobile gaming company (about 100,000,000 users) that worked out really well.
load balancer layer (2 servers)
- varnish (port 80 traffic)
- nginx (port 443 traffic), the plan is to replace this with pound and redirect into varnish
web server layer (4 servers) <- WAF lives here
apache -> DSO WAF -> DSO php -> redis (redis is used as an object cache for PHP)
database layer (3 servers)
- Galera & mariadb
Note that the WAF lives after the load balancer/content cache layer and in front of the PHP object cache. This is a key part of the design: it reduces the workload on the web servers by taking static content off their hands, and the redis object store cuts down on database traffic and CPU overhead. This setup could easily scale past the current 100M user base, and it puts all the scaling into the web server layer. It's a pretty modest config when you look at it, and it demonstrates some different performance strategies. The original design started on AWS instances and would scale up to hundreds of nodes; it was terribly inefficient and expensive.
There is more than one way to do it here, depending on your environment. If I were to collapse the above into a single system, a place to start is:
varnish on port 80
nginx on port 443 (you could also use pound here)
apache on port 8080 <- WAF lives here. This is regular apache & ASL, not the T-WAF
php-fpm on 9001
redis <- the performance impact here can be impressive, depending on the application
percona
varnish and nginx are configured as reverse proxies for apache; nginx does not communicate with php-fpm, and both are configured to serve static content directly
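As a sketch of that wiring on the nginx side (server name, certificate paths, and document root are placeholders, not values from ASL):

```nginx
# Hypothetical /etc/nginx/conf.d/ssl-proxy.conf
# nginx terminates TLS on 443 and proxies dynamic requests to apache on 8080;
# static files are served directly from disk, bypassing apache entirely.
server {
    listen 443 ssl;
    server_name example.com;                              # placeholder
    ssl_certificate     /etc/pki/tls/certs/example.crt;   # placeholder
    ssl_certificate_key /etc/pki/tls/private/example.key; # placeholder

    # serve static content directly
    location ~* \.(css|js|png|jpe?g|gif|ico|woff2?)$ {
        root /var/www/html;   # placeholder docroot
    }

    # everything else goes through apache/ASL on 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The X-Forwarded-For header here is what mod_rpaf (mentioned below) reads to restore the real client IP at the apache layer.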
apache is configured to use PHP, and the application is configured to make use of redis. Additionally, use mod_rpaf in the web server to keep your client IPs intact
percona: nothing different here, except that the redis layer is cutting down on DB traffic and PHP processing
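The mod_rpaf piece on the apache side would look roughly like this (the module path and proxy IP are assumptions; list whichever hosts varnish/nginx actually run on):

```apache
# Hypothetical /etc/httpd/conf.d/rpaf.conf
LoadModule rpaf_module modules/mod_rpaf-2.0.so   # path depends on your build
RPAFenable       On
RPAFsethostname  On
# the addresses of your varnish/nginx front ends
RPAFproxy_ips    127.0.0.1
RPAFheader       X-Forwarded-For
```

Without this, apache (and the WAF, and your application logs) would see every request as coming from the proxy's address rather than the real client.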
So another spin on this would use pound/nginx as the application server:
<load balancer/content cache layer>
varnish on 80
pound on 443 -> redirects to varnish on 80
This is a transparent proxy to your nginx server, running on, say, port 8080
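A minimal pound stanza for that redirect might look like this (the certificate path is a placeholder):

```
# Hypothetical /etc/pound/pound.cfg fragment:
# terminate SSL on 443 and hand the decrypted traffic to varnish on 80
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/example.pem"   # placeholder
    Service
        BackEnd
            Address 127.0.0.1
            Port    80                 # varnish
        End
    End
End
```

This keeps varnish in front of HTTPS traffic as well, so cached objects are shared between the 80 and 443 paths.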
<web layer>
nginx/fpm <- T-WAF here (frankly, I don't think this will perform as well as the first approach, but you could give it a shot and report your findings. I'd be curious to see what you get. On the plus side, it's closer to your current config)
<database layer>
percona
Note that in all configurations we're leveraging varnish, pound, or nginx to answer the original request. Varnish alone is going to outperform anything out there at serving cached content, so you'll always want it to be the first thing in the chain.
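For completeness, the varnish-in-front piece is mostly just a backend definition pointing at whatever answers on 8080 (Varnish 3.x VCL syntax assumed; backend name and static-asset pattern are illustrative):

```vcl
# Hypothetical /etc/varnish/default.vcl
backend web {
    .host = "127.0.0.1";
    .port = "8080";   # apache or nginx, depending on which layout you chose
}

sub vcl_recv {
    set req.backend = web;
    # strip cookies on static assets so varnish can cache them
    if (req.url ~ "\.(css|js|png|jpe?g|gif|ico)$") {
        unset req.http.Cookie;
    }
}
```

Stripping cookies on static requests is what lets varnish actually cache them; with cookies present it will pass most traffic straight through to the backend.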