
Fix for #18161 AsyncHttpServer Issue: Too many open files, against devel branch. #18205

Status: Closed · wants to merge 1 commit

Conversation

@mrhdias (Contributor) commented Jun 6, 2021

If the client's "connection" header equals "keep-alive", keep-alive is disabled whenever the number of open file descriptors exceeds the specified maxFDs or the keep-alive timeout is exceeded.
If the "connection" header equals "close", the connection is closed; if it is "keep-alive", the connection stays open, so the number of open file descriptors keeps growing and the server eventually dies.
To avoid this, "keep-alive" is disabled temporarily until the number of file descriptors goes back down; once it does, connections can remain open again. This also handles the case where the "connection" header equals "upgrade".
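The gating described above can be sketched as a small decision function. This is a Python simulation of the idea only; the function name, `max_fds`, and the return convention are illustrative, not part of the actual patch:

```python
def connection_response_header(client_connection: str, open_fds: int, max_fds: int) -> str:
    """Decide which Connection header the server should send back.

    When the process is near its file-descriptor limit, refuse keep-alive
    even if the client asked for it; once the fd count drops below the
    limit, keep-alive is honoured again.
    """
    if client_connection.lower() == "close":
        return "close"        # client asked to close: always close
    if open_fds >= max_fds:
        return "close"        # too many open fds: temporarily disable keep-alive
    return "keep-alive"       # safe to keep the connection open

# Under the limit keep-alive is honoured; over it, it is disabled.
print(connection_response_header("keep-alive", 100, 1024))
print(connection_response_header("keep-alive", 2048, 1024))
```

The point of the sketch is that the decision is re-evaluated per request, so keep-alive comes back automatically as soon as descriptors are released.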

@dom96 (Contributor) left a comment

Please see my comment in #18161 (comment) on what the approach should be.

@mrhdias (Contributor, Author) commented Jun 6, 2021

> Please see my comment in #18161 (comment) on what the approach should be.

I took a look at the asyncdispatch file and I think the activeDescriptors function doesn't read the real number of open file descriptors!
The following function gives the actual number of open file descriptors for the process, but it's slow!

import std/[os, posix, strutils]

proc getFdForProcess(pid: int): int =
  ## Count the open file descriptors of `pid` by listing /proc/<pid>/fd.
  for path in walkPattern("/proc/$1/fd/[0-9]*" % $pid):
    inc result

let pid = getpid().int   # getpid() returns a Pid (int32); convert to int
echo getFdForProcess(pid)
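For comparison, the same count can be obtained from Python without any third-party package. This is only a sketch assuming a Linux-style /proc filesystem; it will raise on systems without one:

```python
import os

def fd_count(pid: int = 0) -> int:
    """Count open file descriptors by listing /proc/<pid>/fd (Linux only)."""
    pid = pid or os.getpid()
    return len(os.listdir(f"/proc/{pid}/fd"))

print(fd_count())  # stdin, stdout and stderr alone normally account for three
```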

My only goal is to make the http server stay up and running and not die like it does now.

@dom96 (Contributor) commented Jun 6, 2021

Yes, and you can do that by following what I suggest.

@timotheecour (Member) commented

> The following function gives the actual number of open file descriptors for the process, but it's slow!

did you measure? for me it's 3 s per 1M calls, i.e. 3 µs per call; compared to typical network latencies (on the order of 10 ms or 100 ms), this is nothing; furthermore, it could be cached for a small duration if extra performance is needed.

benchmark:
nim r -d:release main
(3.557653119, 4000000)

import std/[posix, strutils, os, times]

proc getFdForProcess(pid: int): int =
  ## Count open file descriptors by listing /proc/<pid>/fd.
  for path in walkPattern("/proc/$1/fd/[0-9]*" % $pid):
    inc result

proc main =
  let pid = getpid().int   # getpid() returns a Pid (int32); convert to int
  let n = 1_000_000
  var c = 0
  let t = cpuTime()
  for i in 0 ..< n:
    c += getFdForProcess(pid)
  let t2 = cpuTime()
  echo (t2 - t, c)   # e.g. (3.557653119, 4000000) from the run above

main()
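The caching suggestion above can be sketched like this (a Python illustration; the `ttl` window, the cache layout, and the function names are assumptions for the sketch, not part of the patch):

```python
import os
import time

def fd_count() -> int:
    """The 'slow' primitive: count open fds by listing /proc/self/fd (Linux only)."""
    return len(os.listdir("/proc/self/fd"))

# Module-level cache: last value and when it was taken.
_cached = {"value": -1, "stamp": float("-inf")}

def cached_fd_count(ttl: float = 0.1) -> int:
    """Return fd_count(), re-reading /proc at most once per `ttl` seconds."""
    now = time.monotonic()
    if now - _cached["stamp"] >= ttl:
        _cached["value"] = fd_count()
        _cached["stamp"] = now
    return _cached["value"]

print(cached_fd_count())
```

A short ttl keeps the count fresh enough for an admission-control check while amortising the /proc scan across many requests.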

@@ -85,9 +89,10 @@ proc getPort*(self: AsyncHttpServer): Port {.since: (1, 5, 1).} =
   result = getLocalAddr(self.socket.getFd, AF_INET)[1]

 proc newAsyncHttpServer*(reuseAddr = true, reusePort = false,
-                         maxBody = 8388608): AsyncHttpServer =
+                         maxBody = 8388608, keepaliveTimeout = 3600): AsyncHttpServer =

see also: nim-lang/RFCs#383

@mrhdias (Contributor, Author) commented Jun 11, 2021

> The following function gives the actual number of open file descriptors for the process, but it's slow!
>
> did you measure? for me it's 3 s per 1M calls, i.e. 3 µs per call; compared to typical network latencies (on the order of 10 ms or 100 ms), this is nothing; furthermore, it could be cached for a small duration if extra performance is needed.

I took a look at the Python "psutil" package; it has a num_fds function that gives the number of file descriptors currently open for the process, on every OS it supports.

#!/usr/bin/python

# https://psutil.readthedocs.io/en/latest/#psutil.Process.num_fds
# https://github.com/giampaolo/psutil/blob/master/psutil/_pslinux.py#L2135

import psutil

p = psutil.Process()   # the current process
print(p.num_fds())     # number of file descriptors currently open

@stale stale bot commented Jun 12, 2022

This pull request has been automatically marked as stale because it has not had recent activity. If you think it is still a valid PR, please rebase it on the latest devel; otherwise it will be closed. Thank you for your contributions.

@stale stale bot added the stale Staled PR/issues; remove the label after fixing them label Jun 12, 2022
@stale stale bot closed this Jul 13, 2022