I recently set up an AWS server for Survey Solutions for a project, using a PostgreSQL instance (db.t4g.micro) with 20 GiB of General Purpose SSD storage and an EC2 instance (t3.large). Our survey has 416 participants, each with one or two image uploads, and we have interviewed around 50 so far. The CPU usage on the server has been consistently low, which is reassuring.
However, I’m concerned about potential issues that might come up in the future. Specifically, I’m wondering if the email queue could become a bottleneck and how to address any potential problems related to it.
Apart from CPU and database storage, are there specific metrics I should monitor to ensure the server remains stable and performs efficiently as the number of participants grows?
Any advice or best practices for maintaining server health and performance would be greatly appreciated. Below are my server metrics. Thanks in advance!
Your DB instance is very small. The default PostgreSQL connection limit for this instance class (~112) could be reached very quickly once more clients are using your application simultaneously.
Consider setting up alarms for DB connections as well (less important than CPU in your case).
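If you want to automate that alarm, here is a minimal boto3 sketch. The instance identifier and the threshold of 90 (~80% of the ~112 limit) are placeholders/assumptions, not values from your setup, and running the final call requires AWS credentials with CloudWatch permissions:

```python
# Sketch: a CloudWatch alarm on the RDS DatabaseConnections metric.
# "my-survey-db" and the threshold of 90 are placeholder assumptions.

def connection_alarm_params(db_instance_id: str, threshold: int) -> dict:
    """Build the parameter set for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"{db_instance_id}-high-db-connections",
        "Namespace": "AWS/RDS",
        "MetricName": "DatabaseConnections",
        "Dimensions": [
            {"Name": "DBInstanceIdentifier", "Value": db_instance_id},
        ],
        "Statistic": "Average",
        "Period": 300,               # 5-minute windows
        "EvaluationPeriods": 2,      # alarm after 10 minutes above threshold
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

if __name__ == "__main__":
    import boto3  # requires configured AWS credentials
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(**connection_alarm_params("my-survey-db", 90))
```

You can attach an SNS topic via the `AlarmActions` parameter if you want email/SMS notifications when it fires.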
Depending on your projection of the number of simultaneous clients, your connection string might need to be updated to expand the client-side connection pool limit.
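As an illustration: Survey Solutions is a .NET application and connects through Npgsql, whose connection strings accept a `Maximum Pool Size` keyword (default 100). A small helper sketch for setting it (the host, database, and credentials below are placeholders):

```python
# Sketch: set the "Maximum Pool Size" keyword in an Npgsql-style
# connection string. Host/database/credentials are placeholders.

def with_pool_limit(conn_string: str, max_pool_size: int) -> str:
    """Append or replace the Maximum Pool Size keyword."""
    parts = [p for p in conn_string.split(";")
             if p and not p.strip().lower().startswith("maximum pool size")]
    parts.append(f"Maximum Pool Size={max_pool_size}")
    return ";".join(parts)

base = ("Server=my-db.example.rds.amazonaws.com;Port=5432;"
        "Database=SurveySolutions;User Id=postgres;Password=secret")
print(with_pool_limit(base, 50))
```

Keep the pool size (times the number of application processes) comfortably below the server's `max_connections`, leaving headroom for admin sessions.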
If all 400+ participants decided to visit your application at the same time, there could (theoretically) be an issue and someone would have to wait a bit, but if you expect visits to be distributed over time, you’ll be OK.
Thank you @vitalii. The maximum number of users is 12 per day, distributed throughout the day. Is the email queue a potential issue, and how do I address it?
No worries here, you’ll be OK. For every completed interview, a record is created to send a notification if notifications were configured in the settings.
You’ll see a correlation between the number of completed interviews and this queue size, but that’s totally OK.