
Slot time consumed in BigQuery

I ran a query which produced the following stats.

Elapsed time: 12.1 sec

Slot time consumed: 14 hr 12 min

total_slot_ms: 51147110 (which is 14 hr 12 min)

We are on an on-demand pricing plan, so the maximum should be 2000 slots. If I had used 2000 slots for the whole 12.1-second span, I should end up with a total_slot_ms of 24200000 (2000 × 12.1 × 1000). However, the total_slot_ms is 51147110. The average number of slots used is 51147110 / 12100 ≈ 4227, which is way above 2000. Can someone explain how I ended up using more than 2000 slots?
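
A minimal sketch of how such numbers can be pulled from INFORMATION_SCHEMA (the region-us qualifier and the one-day lookback are assumptions; adapt them to your project):

    -- Average slots ≈ total slot-milliseconds / elapsed milliseconds.
    SELECT
      job_id,
      total_slot_ms,
      TIMESTAMP_DIFF(end_time, start_time, MILLISECOND) AS elapsed_ms,
      SAFE_DIVIDE(
        total_slot_ms,
        TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
      AND job_type = 'QUERY'
    ORDER BY creation_time DESC;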

asked Oct 28 '25 by Bob


2 Answers

In one of Google's training courses, there is an example where a query shows an elapsed time of 13 seconds and a slot time consumed of 50 minutes. They say:

Hey, across all of our workers, we did essentially 50 minutes of work massively in parallel, 50 minutes so that your query could be returned back in 13 seconds. Best of all for you, you don't need to worry about spinning up those workers, moving data in-between them, making sure they're sharing all their results between their aggregations. All you care about is writing the SQL, finding the insights, and then running that query in a very fast turnaround. But there is abstracted from you a lot of distributed parallel processing that's happening.
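
Applying the same arithmetic as in the question: 50 minutes of slot time is 3000000 slot-ms and the 13-second elapsed time is 13000 ms, so the implied average parallelism in that example is 3000000 / 13000 ≈ 231 slots.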

answered Oct 29 '25 by Daniel Sepulveda


BigQuery on-demand supports limited bursting above the default 2000-slot cap, which is why the average can exceed 2000: https://cloud.google.com/bigquery/docs/release-notes#December_10_2019
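
You can see the bursting directly in the per-second job timeline. A minimal sketch (the region-us qualifier and the job ID are placeholders; substitute your own):

    -- period_slot_ms is the slot-milliseconds consumed in each one-second
    -- window, so dividing by 1000 gives the average slots in that second.
    SELECT
      period_start,
      period_slot_ms / 1000 AS avg_slots_in_second
    FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
    WHERE job_id = 'bquxjob_example'  -- hypothetical job ID
    ORDER BY period_start;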

answered Oct 29 '25 by Bob