Error: "socket already registered" when upgrading from 0.1.11 to 0.1.13 #774
Comments
Interesting... Do you have a repro? thoughts @stjepang?
Hi @carllerche, I made a simple "proof-of-concept" for this error: https://github.com/guydunigo/tokio_test
@guydunigo Thanks for the example! I can confirm the issue is reproducible and something between …
Ok, I've found the problem - it's in `mio`.

**TL;DR:** Cloning a `TcpStream` in mio also clones its `selector_id`, so the clone is treated as already registered with the original's selector.

**Introduction**

Take a look at `TcpStream` in tokio:

```rust
pub struct TcpStream {
    io: PollEvented<mio::net::TcpStream>,
}

pub struct PollEvented<E: Evented> {
    io: Option<E>,
    inner: Inner,
}

struct Inner {
    registration: Registration,

    /// Currently visible read readiness
    read_readiness: AtomicUsize,

    /// Currently visible write readiness
    write_readiness: AtomicUsize,
}
```

The `registration` field is a `tokio_reactor::Registration`:

```rust
pub struct Registration {
    /// Stores the handle. Once set, the value is not changed.
    ///
    /// Setting this requires acquiring the lock from state.
    inner: UnsafeCell<Option<Inner>>,

    /// Tracks the state of the registration.
    ///
    /// The least significant 2 bits are used to track the lifecycle of the
    /// registration. The rest of the `state` variable is a pointer to tasks
    /// that must be notified once the lock is released.
    state: AtomicUsize,
}
```

If we go down the rabbit hole and see what is this `Inner`, we'll find that it holds a handle to the reactor the registration is bound to. Once a `Registration` is registered with a reactor, it stays bound to that reactor.

**The problem**

Suppose we have an instance of `TcpStream` and clone it:

```rust
pub fn try_clone(&self) -> io::Result<TcpStream> {
    let io = self.io.get_ref().try_clone()?;
    Ok(TcpStream::new(io))
}
```

Now let's see what goes on in `try_clone` on the mio side:

```rust
pub fn try_clone(&self) -> io::Result<TcpStream> {
    self.sys.try_clone().map(|s| {
        TcpStream {
            sys: s,
            selector_id: self.selector_id.clone(),
        }
    })
}
```

Now, `TcpStream` in mio looks like this:

```rust
pub struct TcpStream {
    sys: sys::TcpStream,
    selector_id: SelectorId,
}
```

The `selector_id` identifies the selector the socket is registered with.

So the crux of the problem is this: When cloning a `TcpStream`, its `selector_id` is cloned along with it. This means the new stream is considered to be already registered with the original's selector, so any attempt to register it with a different reactor fails with "socket already registered".

**Solution**

I tried a very simple change in `mio`:

```rust
pub fn try_clone(&self) -> io::Result<TcpStream> {
    self.sys.try_clone().map(|s| {
        TcpStream {
            sys: s,
            // selector_id: self.selector_id.clone(), // Buggy.
            selector_id: SelectorId::new(), // Yay, fixed!
        }
    })
}
```

This seems to fix the reported issue, but I'm not sure whether it's the correct solution because I don't know if a clone of a `TcpStream` is supposed to share the original's registration. If this is correct, should we apply the same fix in other places, e.g. the `try_clone` methods on `TcpListener` and `UdpSocket`?
@stjepang I feel like we discussed this in Gitter. Do you remember the conclusion? IIRC, we came to the conclusion that sockets could not be cloned due to how things worked. In this case, the solution would be to deprecate `try_clone`.

@guydunigo could you explain your use case for `try_clone`?
We should probably follow up with better splitting on sockets.
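For reference, a sketch of the splitting facility that already exists (assuming tokio-io 0.1's `AsyncRead::split`; the helper name is hypothetical):

```rust
extern crate tokio;
extern crate tokio_io;

use tokio::net::TcpStream;
use tokio_io::io::{ReadHalf, WriteHalf};
use tokio_io::AsyncRead;

// Hypothetical helper: split one connected stream into read and write
// halves. Both halves share the single underlying reactor registration,
// so no second registration (and no "socket already registered") occurs.
fn split_stream(stream: TcpStream) -> (ReadHalf<TcpStream>, WriteHalf<TcpStream>) {
    stream.split()
}
```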
How do I send a `Shutdown(Shutdown::Write)` to a socket that has been split? The use case is: split a stream, spawn two futures (read and write), and when the write is completed, shut down the write side so that the remote peer shuts down and the read side terminates.
I tried the `shutdown` method, but it is a no-op for `&TcpStream`:

```rust
impl<'a> AsyncWrite for &'a TcpStream {
    fn shutdown(&mut self) -> Poll<(), io::Error> {
        Ok(().into())
    }
}
```

Is that a missing-functionality bug?
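One way to do this, sketched on the assumption that tokio 0.1's `TcpStream` exposes an inherent `shutdown(Shutdown::Write)` that delegates to the OS-level `shutdown(2)` (distinct from the no-op `AsyncWrite::shutdown` quoted above; the helper name is hypothetical):

```rust
extern crate tokio;

use std::io;
use std::net::Shutdown;
use tokio::net::TcpStream;

// Hypothetical helper: once the writing future completes, shut down the
// write half so the peer sees EOF and the reading future can finish.
fn finish_writing(stream: &TcpStream) -> io::Result<()> {
    stream.shutdown(Shutdown::Write)
}
```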
Removing the …

I guess that … Maybe I'm also doing something wrong and there's an alternative way to do this?
@carllerche What if we just wrap the stream in a mutex? The only problem is that we pay a performance penalty due to locking, but maybe it's not too bad?
@stjepang I'm not sure how that would solve the problem. IIRC, to handle it Tokio would have to keep track of all outstanding clones and fan out notifications to all of them.

@artemii235 re: your example, you should be able to do this without cloning. Instead, just drop the stream on idle.
Version

Working version (0.1.11):

```
└── tokio v0.1.11
    ├── tokio-codec v0.1.1
    │   └── tokio-io v0.1.10
    ├── tokio-current-thread v0.1.4
    │   └── tokio-executor v0.1.5
    ├── tokio-executor v0.1.5 (*)
    ├── tokio-fs v0.1.4
    │   ├── tokio-io v0.1.10 (*)
    │   └── tokio-threadpool v0.1.9
    │       ├── tokio-executor v0.1.5 (*)
    │       └── tokio-io v0.1.10 (*)
    ├── tokio-io v0.1.10 (*)
    ├── tokio-reactor v0.1.7
    │   ├── tokio-executor v0.1.5 (*)
    │   └── tokio-io v0.1.10 (*)
    ├── tokio-tcp v0.1.2
    │   ├── tokio-io v0.1.10 (*)
    │   └── tokio-reactor v0.1.7 (*)
    ├── tokio-threadpool v0.1.9 (*)
    ├── tokio-timer v0.2.8
    │   └── tokio-executor v0.1.5 (*)
    ├── tokio-udp v0.1.3
    │   ├── tokio-codec v0.1.1 (*)
    │   ├── tokio-io v0.1.10 (*)
    │   └── tokio-reactor v0.1.7 (*)
    └── tokio-uds v0.2.4
        ├── tokio-codec v0.1.1 (*)
        ├── tokio-io v0.1.10 (*)
        └── tokio-reactor v0.1.7 (*)
```
Failing version (0.1.13):

```
└── tokio v0.1.13
    ├── tokio-codec v0.1.1
    │   └── tokio-io v0.1.10
    ├── tokio-current-thread v0.1.3
    │   └── tokio-executor v0.1.5
    ├── tokio-executor v0.1.5 (*)
    ├── tokio-fs v0.1.4
    │   ├── tokio-io v0.1.10 (*)
    │   └── tokio-threadpool v0.1.8
    │       ├── tokio-executor v0.1.5 (*)
    │       └── tokio-io v0.1.10 (*)
    ├── tokio-io v0.1.10 (*)
    ├── tokio-reactor v0.1.6
    │   ├── tokio-executor v0.1.5 (*)
    │   └── tokio-io v0.1.10 (*)
    ├── tokio-tcp v0.1.2
    │   ├── tokio-io v0.1.10 (*)
    │   └── tokio-reactor v0.1.6 (*)
    ├── tokio-threadpool v0.1.8 (*)
    ├── tokio-timer v0.2.8
    │   └── tokio-executor v0.1.5 (*)
    ├── tokio-udp v0.1.2
    │   ├── tokio-codec v0.1.1 (*)
    │   ├── tokio-io v0.1.10 (*)
    │   └── tokio-reactor v0.1.6 (*)
    └── tokio-uds v0.2.3
        ├── tokio-io v0.1.10 (*)
        └── tokio-reactor v0.1.6 (*)
```
Platform

```
Linux Moi-arch 4.19.2-arch1-1-ARCH #1 SMP PREEMPT Tue Nov 13 21:16:19 UTC 2018 x86_64 GNU/Linux
```
Description

When updating from tokio `0.1.11` to `0.1.13`, an io error appeared: `socket already registered`.

This appears when creating a new `Frame` on a `TcpStream` that already has a `Frame` attached to it. For instance: I have a frame that listens to arriving packets, and I want to create one to send messages.

I think this error comes from #660: the new frame is registered in another reactor than the first one, thus creating the `already registered` error.
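A hypothetical reduction of the reported scenario (an editorial sketch, not the linked repro; it assumes tokio 0.1 / futures 0.1 / tokio-codec 0.1 APIs, including the `TcpStream::try_clone` quoted in the analysis above, and uses `LinesCodec` and a placeholder address purely for illustration):

```rust
extern crate futures;
extern crate tokio;
extern crate tokio_codec;

use futures::{Future, Stream};
use tokio::net::TcpStream;
use tokio_codec::{Framed, LinesCodec};

fn main() {
    let addr = "127.0.0.1:12345".parse().unwrap(); // placeholder address

    let task = TcpStream::connect(&addr)
        .and_then(|stream| {
            // A second handle to the same socket, so a second codec can be
            // attached. Per the analysis above, the clone inherits the
            // original's selector_id.
            let clone = stream.try_clone()?;
            Ok((stream, clone))
        })
        .and_then(|(reader, writer)| {
            // First frame: listens for arriving packets.
            let incoming = Framed::new(reader, LinesCodec::new())
                .for_each(|line| {
                    println!("received: {}", line);
                    Ok(())
                });

            // Second frame on the same socket, intended for sending. On
            // tokio 0.1.13 it ends up registered with a different reactor,
            // which mio rejects.
            let _outgoing = Framed::new(writer, LinesCodec::new());

            incoming
        })
        // On 0.1.13 this reports: "socket already registered".
        .map_err(|e| eprintln!("io error: {}", e));

    tokio::run(task);
}
```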