I have an API that serves OAuth 2.0 via Passport. It was working fine at low traffic volume, but since we ramped up traffic it has buckled. We have a dockerized application on an EC2 machine with 8 CPUs / 16 GB RAM / SSD. All the API does is check email/password/client_id/client_secret, write the token into the DB, and write the user data into a Redis store (2 reads / 2 writes / 1 Redis write). It returns a JWT access_token and a refresh_token (plus the TTL).
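For reference, the hot path boils down to something like this. This is only a rough sketch of the flow, not our actual code; the table names follow Passport's defaults but the columns and helpers here are illustrative:

```php
<?php
// Sketch of the token endpoint's hot path: 2 DB reads, 2 DB writes, 1 Redis write.
// Illustrative only -- names and columns are assumptions, not our real schema.

Route::post('/oauth/token', function (Illuminate\Http\Request $request) {
    // Read 1: look up the user by email
    $user = DB::table('users')
        ->where('email', $request->input('email'))
        ->first();

    // Read 2: validate the client credentials
    $client = DB::table('oauth_clients')
        ->where('id', $request->input('client_id'))
        ->where('secret', $request->input('client_secret'))
        ->first();

    if (! $user || ! $client || ! Hash::check($request->input('password'), $user->password)) {
        return response()->json(['error' => 'invalid_grant'], 401);
    }

    // Writes 1 & 2: persist the access token and refresh token
    $tokenId = Illuminate\Support\Str::random(40);
    DB::table('oauth_access_tokens')->insert([
        'id'         => $tokenId,
        'user_id'    => $user->id,
        'client_id'  => $client->id,
        'expires_at' => now()->addHour(),
    ]);
    DB::table('oauth_refresh_tokens')->insert([
        'access_token_id' => $tokenId,
        'expires_at'      => now()->addDays(30),
    ]);

    // Redis write: cache the user payload
    Redis::set('user:'.$user->id, json_encode($user));

    return response()->json([
        'access_token'  => $jwt ?? '...',
        'refresh_token' => '...',
        'expires_in'    => 3600,
    ]);
});
```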
The Laravel application is dockerized with an Ubuntu image + nginx + PHP-FPM 7.0 with the necessary extensions (it's the same setup Chris Fidao created in his Shipping Docker series).
I ran a locust.io server (on another EC2 machine on AWS) testing at 200 users / 10 hatch/sec and saw 100% CPU utilization on all cores. If I increase it to around 255, I start getting some 502 Bad Gateway errors. Checking the PHP log, php-fpm is unable to service any more calls. I made a tweak to the php-fpm.conf file: pm = ondemand with 200 children and 1000 max requests. With that I can reach 300 users at 15 hatch/sec at 100% CPU utilization. As another test, I took the raw DB read query + raw insert query, removed the Redis write, wrote it directly in a routes/api.php closure, and let it rip. I managed to get up to roughly 500 users at 134 requests/second (no controllers).
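For anyone wanting to reproduce the pool tweak, it amounts to something like this in the FPM pool config (the pm values are the ones from my test above; the file path and the idle timeout are assumptions on my part, since ondemand also wants an idle timeout set):

```ini
; e.g. /etc/php/7.0/fpm/pool.d/www.conf (path varies by image)
pm = ondemand
pm.max_children = 200
pm.max_requests = 1000
; ondemand kills idle workers after this; 10s is a guess, tune as needed
pm.process_idle_timeout = 10s
```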
My question is: do these metrics seem wrong to anyone? I have 8 cores / 16 GB and I'm getting low throughput? After finding Taylor Otwell's blog post about how he managed to hit 500+ requests/sec on a 2 GB RAM Digital Ocean machine, is there a setting I should be checking? I just want to confirm that these are my final benchmark numbers and that, to effectively handle more requests, I should add another server behind a load balancer.