Performance testing on SQL Server view resulted in surprising results. Why?

Recently, we made some changes to some SQL Server views that had the potential to negatively affect performance. We decided to run some performance tests on these views to see how we had affected them. The results were surprising.

The graph below shows the results of the performance tests that we ran. Here is what the graph represents:

  • The blue line is the view before any changes were made.
  • The red line shows the view after the changes were made.
  • The x-axis represents iterations in a loop.
    • On each iteration, a thousand new records (which the view will return) are inserted.
    • On each iteration, we run several selects against the view under test and average the results.
  • The y-axis represents the time the view takes to return its results.
  • The performance-tested select statement had a where clause limiting it to 100 records each time (100 records were inserted for each name during the test).

The results show that we definitely did take a performance hit, but the thing that baffles us is the huge increase in performance once we hit around 40,000 records in the database. We have run this test on several different servers and get similar results every time.

I am wondering if anyone can give insight into why this is happening. Why do we get a huge performance gain when we breach the 40,000 record level? Has anyone seen anything like this before? I have tried searching for some reason for this, but have come up empty handed.

We have tried tweaking the view, messing with indexes, rebuilding and reorganizing indexes, analyzing the execution plan, and various other things, but so far we have not found anything that would cause this.
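For investigations like the ones listed above, the index and statistics state of the underlying tables can be inspected directly. This is a hedged sketch; `dbo.MyTable` is a placeholder for one of the view's base tables, not a name from the original post:

```sql
-- Fragmentation of every index on a base table (placeholder name: dbo.MyTable)
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

-- When were statistics last updated? Stale or freshly auto-updated
-- statistics can change the query plan as row counts grow.
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');
```

Comparing the statistics update times against the iteration at which the behavior changed can show whether a plan recompile coincided with the jump.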

[Graph: View Performance]

Any help or insight would be much appreciated. Thanks.

Jacob Adams, asked Sep 06 '25
1 Answer

You should approach this just like any other performance investigation: use a sound methodology and measure. The Waits and Queues methodology will, again, be priceless for identifying the bottlenecks. Once you have identified and measured the relevant metrics, the question of what is happening can actually be answered.

Right now you have only measured response time, without any data on how that time is spent. Without a single actual data point presented (collected metrics, a test specification others could reproduce, etc.), any explanation can be ventured with an equal chance of being right: client code, locking contention, file growth, log growth, index statistics, query plan changes, human error, gremlins, moon rays and, of course, my favorite: fragmentation.
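As a starting point for the Waits and Queues approach, the server-wide wait statistics can be snapshotted around a test run and diffed. A minimal sketch (in practice you would also filter out benign background wait types):

```sql
-- Snapshot wait stats into a temp table, run the test, then diff.
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #waits_before
FROM sys.dm_os_wait_stats;

-- ... run the performance test here ...

SELECT w.wait_type,
       w.wait_time_ms        - b.wait_time_ms        AS wait_time_ms_delta,
       w.signal_wait_time_ms - b.signal_wait_time_ms AS signal_wait_ms_delta
FROM sys.dm_os_wait_stats AS w
JOIN #waits_before AS b
  ON b.wait_type = w.wait_type
WHERE w.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_time_ms_delta DESC;
```

The wait types at the top of the delta tell you where the time went (I/O, locking, CPU scheduling, etc.), which turns the response-time graph into something diagnosable.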

So either do the proper analysis and investigation and collect the relevant metrics, or post the exact test (repro scripts, methodology) so we can measure it ourselves.

Remus Rusanu, answered Sep 07 '25