I am testing the speed of the drives on Windows Azure VMs. The speed difference between the temporary drive D and the attached data drives seems huge!
The test is iometer with these settings:

- Maximum disk size: 20 GB
- Outstanding I/Os: 16
- Access pattern: 4 KB, 100% read, 0% random
- Run time: 60 seconds

Results (total I/Os per second):

- Temporary drive D: 60,978.94
- Drive E (one 30 GB drive): 910.51
- Drive F (four 30 GB drives striped together): 899.6
Is this normal?
The reason I really notice the difference is SQL Server. I tried to migrate from my old physical server running SQL Server 2000 with 2 GB of RAM and SCSI drives, and that machine is faster than a Windows Azure Large instance: it runs queries about twice as fast.
I turned off disk caching on the OS drive in the OS.
Can someone explain what is going on? Am I comparing apples to oranges? Thanks!
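For anyone who wants to sanity-check numbers like these without installing iometer, here is a minimal Python sketch of a 4 KB sequential-read test. It is not equivalent to the iometer run above (one outstanding I/O instead of 16, a small test file, and the OS page cache is not bypassed, so results will be inflated), but it shows the shape of the measurement:

```python
import os
import tempfile
import time

BLOCK = 4096                   # 4 KiB blocks, matching the iometer test
FILE_SIZE = 64 * 1024 * 1024   # 64 MiB test file (far smaller than the 20 GB iometer target)
DURATION = 2                   # seconds to run

# Create a test file on the drive under test (here: the temp directory).
path = os.path.join(tempfile.gettempdir(), "iops_test.bin")
with open(path, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

# Sequential 4 KiB reads for a fixed duration, counting completed I/Os.
ios = 0
deadline = time.time() + DURATION
with open(path, "rb", buffering=0) as f:
    while time.time() < deadline:
        data = f.read(BLOCK)
        if data:
            ios += 1
        else:
            f.seek(0)  # wrap around at end of file

os.remove(path)
print(f"{ios / DURATION:.0f} IOPS (sequential 4 KiB reads, cached)")
```

To test a specific drive, point `path` at that drive instead of the temp directory.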
Yes, this is normal. The temporary disk is a physical disk on the node (local disk I/O only), while the E/F/... disks are persisted disks. They are actually page blobs in blob storage, so you need to take network I/O into account as well.

To improve IOPS and throughput you might consider disabling the cache for those disks (this incurs more transaction costs). Read more about this on the Windows Azure Storage Blog: Exploring Windows Azure Drives, Disks, and Images.
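As a sketch, changing the host caching setting on a data disk can be done with the classic (service-management) Azure PowerShell cmdlets; the service name, VM name, and LUN below are placeholders you would replace with your own:

```
# Disable host caching on the data disk at LUN 0 (names are placeholders)
Get-AzureVM -ServiceName "myservice" -Name "myvm" |
    Set-AzureDataDisk -LUN 0 -HostCaching None |
    Update-AzureVM
```

`HostCaching` accepts `None`, `ReadOnly`, or `ReadWrite`; the VM picks up the change after the update completes.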