I have the following basic HTTP server in Go. For every incoming request it makes 5 outgoing HTTP requests. Each of them takes roughly 3-5 seconds. I am not able to achieve more than 200 requests/second on an 8 GB RAM, quad-core machine.
package main
import (
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"sync"
	"time"
)
// Job holds the attributes needed to perform a unit of work.
type Job struct {
	Name  string
	Delay time.Duration
}

func requestHandler(w http.ResponseWriter, r *http.Request) {
	// Make sure we can only be called with an HTTP POST request.
	fmt.Println("in request handler")
	if r.Method != "POST" {
		w.Header().Set("Allow", "POST")
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	// Set name and validate value.
	name := r.FormValue("name")
	if name == "" {
		http.Error(w, "You must specify a name.", http.StatusBadRequest)
		return
	}
	delay := time.Second * 0
	// Create Job and push the work onto the jobQueue.
	job := Job{Name: name, Delay: delay}
	//jobQueue <- job
	fmt.Println("creating worker")
	result := naiveWorker(name, job)
	fmt.Fprintf(w, "your task %s has been completed, here are the results: %s", job.Name, result)
}
func naiveWorker(id string, job Job) string {
	var wg sync.WaitGroup
	// mu guards responseCounter and totalBodies, which are written to
	// from multiple goroutines.
	var mu sync.Mutex
	responseCounter := 0
	totalBodies := ""
	fmt.Printf("worker%s: started %s\n", id, job.Name)
	urls := []string{
		"https://someurl1",
		"https://someurl2",
		"https://someurl3",
		"https://someurl4",
		"https://someurl5",
	}
	for _, url := range urls {
		// Increment the WaitGroup counter.
		wg.Add(1)
		// Launch a goroutine to fetch the URL.
		go func(url string) {
			// Decrement the counter when the goroutine completes.
			defer wg.Done()
			// Fetch the URL.
			resp, err := http.Get(url)
			if err != nil {
				fmt.Printf("got an error: %v\n", err)
			} else {
				defer resp.Body.Close()
				body, readErr := ioutil.ReadAll(resp.Body)
				if readErr == nil {
					mu.Lock()
					totalBodies += string(body)
					mu.Unlock()
				}
			}
			mu.Lock()
			responseCounter++
			mu.Unlock()
		}(url)
	}
	wg.Wait()
	fmt.Printf("worker%s: completed %s with %d calls\n", id, job.Name, responseCounter)
	return totalBodies
}
func main() {
	var (
		port = flag.String("port", "8181", "The server port")
	)
	flag.Parse()
	// Register the HTTP handler and start the server.
	http.HandleFunc("/work", requestHandler)
	log.Fatal(http.ListenAndServe(":"+*port, nil))
}
I have the following questions:
The HTTP connections get reset when the number of concurrent goroutines goes above 1000. Is this acceptable/intended behaviour?
If I write go requestHandler(w, r) instead of requestHandler(w, r), I get "http: multiple response.WriteHeader calls".
An HTTP handler is expected to run synchronously, because the return of the handler function signals the end of the request. Accessing the http.Request and http.ResponseWriter after the handler returns is not valid, so there is no reason to dispatch the handler in a goroutine.
As the comments have noted, you can't open more file descriptors than the process's ulimit allows. Besides increasing the ulimit appropriately, you should also place a limit on the number of concurrent requests that can be dispatched at once.
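One common way to cap concurrency is a counting semaphore built from a buffered channel. Here is a minimal sketch; the fetchLimited name and the limit of 100 are illustrative, not part of your code:

package main

import (
	"io/ioutil"
	"net/http"
)

// sem is a counting semaphore: the channel's capacity caps how many
// outgoing requests may be in flight at once across all handlers.
var sem = make(chan struct{}, 100)

func fetchLimited(url string) (string, error) {
	sem <- struct{}{}        // acquire a slot; blocks when the limit is reached
	defer func() { <-sem }() // release the slot when done

	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

Each goroutine in naiveWorker could call fetchLimited instead of http.Get, so no more than 100 fetches run at once no matter how many handler goroutines are active.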
If you're making many connections to the same hosts, you should also adjust your http.Transport accordingly. The default number of idle connections per host is only 2, so if you need more than 2 concurrent connections to a host, the extra connections won't be reused. See Go http.Get, concurrency, and "Connection reset by peer"
If you connect to many different hosts, setting Transport.IdleConnTimeout is a good idea to get rid of unused connections.
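As a sketch, both settings can be applied through a shared http.Client; the specific numbers below are only examples to tune for your workload:

package main

import (
	"net/http"
	"time"
)

// client reuses connections more aggressively than http.DefaultClient.
var client = &http.Client{
	Transport: &http.Transport{
		MaxIdleConns:        200,              // total idle connections across all hosts
		MaxIdleConnsPerHost: 100,              // raise the per-host default of 2
		IdleConnTimeout:     90 * time.Second, // close connections idle this long
	},
}

Using client.Get(url) in place of http.Get(url) in the worker lets all requests share this transport and its connection pool.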
And as always, on a long-running service you will want to make sure that timeouts are set for everything, so that slow or broken connections don't hold unnecessary resources.
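For example (a minimal sketch; the durations are placeholders, not recommendations):

package main

import (
	"log"
	"net/http"
	"time"
)

// Client side: bound the total time of each outgoing request,
// including connection setup and reading the response body.
var client = &http.Client{Timeout: 10 * time.Second}

func main() {
	// Server side: bound how long slow or broken clients can hold
	// a connection open.
	srv := &http.Server{
		Addr:         ":8181",
		ReadTimeout:  5 * time.Second,
		WriteTimeout: 30 * time.Second,
		IdleTimeout:  120 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}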