I need to use temporary slices in a function that will be called many times. Their capacity never needs to exceed 10, but their lengths vary: for example, maybe 80% of them have a length of 1, 10% have a length of 3, and 10% have a length of 10.
I can think of an example function like the following:
func getDataFromDb(s []string) []string {
    tmpSlice := make([]string, 0, 10)
    for _, v := range s {
        if check(v) {
            tmpSlice = append(tmpSlice, v)
        }
    }
    // ...
    return searchDb(tmpSlice)
}
So should I use var tmpSlice []string, tmpSlice := make([]string, 0, 0), tmpSlice := make([]string, 0, 5), or tmpSlice := make([]string, 0, 10)? Or do you have any other suggestions?
The fastest option is code that does not allocate on the heap at all.
Create variables that are allocated on the stack and do not escape (pass them by value; otherwise they will escape).
You can check what escapes by building with go build -gcflags "-m -l".
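As an illustration, here is a minimal sketch of my own (the function names are hypothetical, not from the original code): build it with go build -gcflags "-m -l" and the escape analysis output should flag only the pointer-returning version.

package main

// Returned by value: the array can stay on the stack, so escape
// analysis should not report it as escaping.
func byValue() [10]string {
    var a [10]string
    a[0] = "x"
    return a
}

// Returning a pointer to a local forces the array onto the heap;
// escape analysis should report the variable as moved to the heap.
func byPointer() *[10]string {
    var a [10]string
    a[0] = "x"
    return &a
}

func main() {
    _ = byValue()
    _ = byPointer()
}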
Here is an example showing that if we substitute the slice with an array and pass it by value, we get fast code with no heap allocation.
package main

import "testing"

func BenchmarkAllocation(b *testing.B) {
    b.Run("Slice", func(b2 *testing.B) {
        for i := 0; i < b2.N; i++ {
            _ = getDataFromDbSlice([]string{"one", "two"})
        }
    })
    b.Run("Array", func(b2 *testing.B) {
        for i := 0; i < b2.N; i++ {
            _ = getDataFromDbArray([]string{"one", "two"})
        }
    })
}

type DbQuery [10]string
type DbQueryResult [10]string

// Array version: fixed-size values are passed and returned by value,
// so nothing escapes to the heap.
func getDataFromDbArray(s []string) DbQueryResult {
    q := DbQuery{}
    return processQueryArray(q)
}

func processQueryArray(q DbQuery) DbQueryResult {
    return DbQueryResult(q)
}

// Slice version: the slice created by make escapes through the return
// value, so it is allocated on the heap.
func getDataFromDbSlice(s []string) []string {
    tmpSlice := make([]string, 0, 10)
    return processQuerySlice(tmpSlice)
}

func processQuerySlice(q []string) []string {
    return q
}
Running the benchmark with -benchmem gives these results:
BenchmarkAllocation/Slice-6 30000000 51.8 ns/op 160 B/op 1 allocs/op
BenchmarkAllocation/Array-6 100000000 15.7 ns/op 0 B/op 0 allocs/op
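Applying the same idea back to the original getDataFromDb, a rough sketch could look like the following. It reuses the DbQuery/DbQueryResult array types from the benchmark above; check and searchDbArray are placeholder stand-ins for the real check and searchDb from the question.

// Placeholder stand-ins for the question's check and searchDb.
func check(v string) bool { return v != "" }

func searchDbArray(q DbQuery, n int) DbQueryResult {
    var r DbQueryResult
    copy(r[:], q[:n]) // only the first n entries are meaningful
    return r
}

// Sketch: the temporary slice is replaced by a fixed-size array plus a count.
// The array is passed by value, so it does not escape to the heap.
func getDataFromDb(s []string) DbQueryResult {
    var q DbQuery
    n := 0
    for _, v := range s {
        if n < len(q) && check(v) {
            q[n] = v
            n++
        }
    }
    return searchDbArray(q, n)
}

The trade-off is that the callee now has to accept the fixed-size array plus a count instead of a plain slice, so this only pays off if the function is genuinely hot.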