What could explain this big drop in performance in an Azure SQL DB after moving the app from a hosted VPS to an Azure App Service?
Here's a typical Query Store "High Variation" chart over the past two weeks. The red arrow indicates when I moved the production app from the other hosting provider to an Azure App Service. Before the move I experienced zero timeouts. Now, against the same Azure SQL DB, timeouts are occurring frequently on longish queries (though nothing especially arduous).
The only other change I made was to the user principal in the connection string. This user only has SELECT, INSERT, UPDATE, DELETE, and EXECUTE permissions.
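For context, the setup is essentially the usual contained-database-user pattern for Azure SQL DB; this is only a sketch, and the user name, password, and schema below are placeholders rather than the real ones:

```sql
-- Contained database user referenced in the new connection string
-- (user name, password, and schema are placeholders)
CREATE USER [app_user] WITH PASSWORD = '<strong-password>';

-- Only DML and EXECUTE on the application schema; no DDL, no elevated roles
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [app_user];
GRANT EXECUTE ON SCHEMA::dbo TO [app_user];
```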
My theories are:

- Something to do with networking between the app and the DB. Resiliency? But I have a SQL exec plan specified.
- Something wrong with the user I set up?
- A bad plan regression (I have now enabled automatic FORCE PLAN tuning; see the snippet after this list).
- A problem caused by Hangfire running on two servers simultaneously (now mitigated by moving the Hangfire tables to a new DB).
- Something is triggering some kind of throttling that I cannot figure out.
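Regarding the plan-regression bullet: by "auto FORCE PLAN tuning" I mean the FORCE_LAST_GOOD_PLAN automatic tuning option. In T-SQL it amounts to roughly this, with a sanity check that it actually took effect:

```sql
-- Turn on automatic plan correction (the "force plan" automatic tuning option)
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Sanity check: desired and actual states should agree
SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;
```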
Here is a chart of timeouts from Log Analytics:
All help appreciated. Note: this site has had almost identical traffic over the past 30 days.
In fact, take a look at this from the SQL DB metrics over the past week:
And here is some Wait info - last 6 hours:
Blue = PARALLELISM, Orange = BUFFERIO
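Those categories come from Query Store's wait stats, so they can be mapped back to specific queries with something along these lines (a rough sketch using the standard Query Store catalog views):

```sql
-- Which queries are accumulating the Parallelism / Buffer IO wait time?
-- (join sys.query_store_query and sys.query_store_query_text on query_id
--  afterwards to see the actual statement text)
SELECT TOP (20)
    p.query_id,
    ws.wait_category_desc,
    SUM(ws.total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats AS ws
JOIN sys.query_store_plan       AS p ON p.plan_id = ws.plan_id
WHERE ws.wait_category_desc IN ('Parallelism', 'Buffer IO')
GROUP BY p.query_id, ws.wait_category_desc
ORDER BY total_wait_ms DESC;
```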