Avoid buffering large amounts of rustc output. #7838
@@ -0,0 +1,86 @@
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};
use std::time::{Duration, Instant};

/// A simple, thread-safe queue of items of type `T`.
///
/// This is a sort of channel where any thread can push to a queue and any
/// thread can pop from a queue.
///
/// This supports both bounded and unbounded operations. `push` will never block,
/// and allows the queue to grow without bounds. `push_bounded` will block if the
/// queue is over capacity, and will resume once there is enough capacity.
pub struct Queue<T> {
    state: Mutex<State<T>>,
    popper_cv: Condvar,
    bounded_cv: Condvar,
    bound: usize,
}

struct State<T> {
    items: VecDeque<T>,
}

impl<T> Queue<T> {
    pub fn new(bound: usize) -> Queue<T> {
        Queue {
            state: Mutex::new(State {
                items: VecDeque::new(),
            }),
            popper_cv: Condvar::new(),
            bounded_cv: Condvar::new(),
            bound,
        }
    }

    pub fn push(&self, item: T) {
        self.state.lock().unwrap().items.push_back(item);
        self.popper_cv.notify_one();
    }

    /// Pushes an item onto the queue, blocking if the queue is full.
    pub fn push_bounded(&self, item: T) {
        let mut state = self.state.lock().unwrap();
        loop {
[Review comment] This might be able to make use of the nifty:

    let state = self.bounded_cv.wait_until(state, |s| s.items.len() < self.bound).unwrap();

[Reply] Didn't know that existed!
            if state.items.len() >= self.bound {
                state = self.bounded_cv.wait(state).unwrap();
            } else {
                state.items.push_back(item);
                self.popper_cv.notify_one();
                break;
            }
        }
    }

    pub fn pop(&self, timeout: Duration) -> Option<T> {
        let mut state = self.state.lock().unwrap();
        let now = Instant::now();
        while state.items.is_empty() {
            let elapsed = now.elapsed();
            if elapsed >= timeout {
                break;
            }
            let (lock, result) = self
                .popper_cv
                .wait_timeout(state, timeout - elapsed)
                .unwrap();
            state = lock;
            if result.timed_out() {
                break;
            }
        }
        let value = state.items.pop_front()?;
        if state.items.len() < self.bound {
            // Assumes threads cannot be canceled.
            self.bounded_cv.notify_one();
        }
        Some(value)
[Review comment] This might actually also get cleaned up a good amount with:

    let (mut state, result) = self.popper_cv.wait_timeout_until(
        self.state.lock().unwrap(),
        timeout,
        |s| s.items.len() > 0,
    ).unwrap();
    if result.timed_out() {
        None
    } else {
        // conditionally notify `bounded_cv`
        state.items.pop_front()
    }

[Reply] Hm, after thinking about it some more, this subtly changes the semantics. If there are multiple poppers, and both are awoken, then one will get a value and the other won't. We don't use multiple poppers, but for the ... In general, it probably doesn't matter, but I would prefer to keep the current semantics with the loop that "retries" after the thread is awakened.

[Reply] Hm, I'm not sure I follow, because if the closure returns true then that lock is persisted and returned, so we can't have two poppers simultaneously exit the wait-timeout loop, I believe? I think this is the same for the push case as well, where when we get a lock back after ...

[Reply] Ah. Somehow it didn't click that it was atomically locked. Pushed a commit with the change. Since it is unstable until 1.42, it will need to wait until Thursday.

[Reply] Oh oops, sorry about that, I totally didn't realize it was still unstable... In any case, at least Thursday isn't that far off!
    }

    pub fn try_pop_all(&self) -> Vec<T> {
        let mut state = self.state.lock().unwrap();
        let result = state.items.drain(..).collect();
        self.bounded_cv.notify_all();
        result
    }
}
[Review comment] I think this change may no longer be necessary, but did you want to include it anyway here?
[Reply] It is necessary, otherwise the cached message playback would deadlock if there were more than 100 messages. The playback shouldn't happen on the main thread, otherwise there is nothing to drain messages while they are added to the queue.
[Reply] Ah right, yeah, forgot about that!

[Reply] I added a test for message caching to check for deadlock.