[RLlib] Issue 21991: Fix SampleBatch slicing for SampleBatch.INFOS in RNN cases (#22050)
@@ -936,26 +936,30 @@ def _slice(self, slice_: slice) -> "SampleBatch":
         # Build our slice-map, if not done already.
         if not self._slice_map:
             sum_ = 0
-            for i, l in enumerate(self[SampleBatch.SEQ_LENS]):
-                for _ in range(l):
-                    self._slice_map.append((i, sum_))
-                sum_ += l
Comment on lines -939 to -942:
Python's built-in `int` type is immutable, so `+=` rebinds the name to a new object (note the changing `id`):

    >>> x = 1000
    >>> x
    1000
    >>> id(x)
    140102686402576
    >>> x += 1
    >>> x
    1001
    >>> id(x)
    140102686403152

Here, by contrast, once `sum_` has become a torch tensor, `+=` adds in place and the object's `id` no longer changes:

    >>> import torch
    >>> sum_ = 0
    >>> l = torch.ones((), dtype=torch.int32)
    >>> sum_
    0
    >>> id(sum_)
    94230645960128
    >>> sum_ += l
    >>> sum_
    tensor(1, dtype=torch.int32)
    >>> id(sum_)
    140100675095104
    >>> sum_ += l
    >>> sum_
    tensor(2, dtype=torch.int32)
    >>> id(sum_)
    140100675095104
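To make the reviewer's point concrete, here is a minimal sketch (not part of the PR) of how the in-place tensor add could corrupt the slice map, assuming the sequence lengths arrive as a torch tensor:

```python
import torch

# Hypothetical sequence lengths as a torch int tensor (an assumption for
# illustration; SEQ_LENS may also be a list or numpy array in practice).
seq_lens = torch.tensor([2, 3], dtype=torch.int32)

# Old pattern: after the first iteration `sum_` becomes a tensor, so `+=`
# mutates it in place and every tuple appended afterwards shares that object.
slice_map = []
sum_ = 0
for i, l in enumerate(seq_lens):
    for _ in range(l):
        slice_map.append((i, sum_))
    sum_ += l
print(slice_map)
# [(0, 0), (0, 0), (1, tensor(5, ...)), (1, tensor(5, ...)), (1, tensor(5, ...))]
# The offsets for sequence 1 should be 2, but they all read the final sum of 5.

# New pattern from this PR: cast the lengths to int and rebind instead of
# mutating, so every tuple stores an immutable Python int.
slice_map = []
sum_ = 0
for i, l in enumerate(map(int, seq_lens)):
    slice_map.extend([(i, sum_)] * l)
    sum_ = sum_ + l
print(slice_map)
# [(0, 0), (0, 0), (1, 2), (1, 2), (1, 2)]
```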
+            for i, l in enumerate(map(int, self[SampleBatch.SEQ_LENS])):
+                self._slice_map.extend([(i, sum_)] * l)
+                sum_ = sum_ + l
             # In case `stop` points to the very end (lengths of this
             # batch), return the last sequence (the -1 here makes sure we
             # never go beyond it; would result in an index error below).
             self._slice_map.append((len(self[SampleBatch.SEQ_LENS]), sum_))
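For orientation, a small sketch (not RLlib code) of what `_slice_map` encodes, using hypothetical sequence lengths [2, 3, 1]: entry `t` holds the index of the sequence containing timestep `t` and the unpadded offset of that sequence's first timestep, plus a final sentinel so a `stop` equal to the batch length resolves cleanly:

```python
seq_lens = [2, 3, 1]  # hypothetical sequence lengths

slice_map = []
sum_ = 0
for i, l in enumerate(map(int, seq_lens)):
    slice_map.extend([(i, sum_)] * l)
    sum_ = sum_ + l
# Sentinel entry: one past the last sequence, total number of timesteps.
slice_map.append((len(seq_lens), sum_))

print(slice_map)
# [(0, 0), (0, 0), (1, 2), (1, 2), (1, 2), (2, 5), (3, 6)]

# A timestep slice such as [2:6] then resolves to sequence boundaries:
start_seq_len, start_unpadded = slice_map[2]  # (1, 2)
stop_seq_len, stop_unpadded = slice_map[6]    # (3, 6)
```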
-        start_seq_len, start = self._slice_map[start]
-        stop_seq_len, stop = self._slice_map[stop]
+        start_seq_len, start_unpadded = self._slice_map[start]
+        stop_seq_len, stop_unpadded = self._slice_map[stop]
+        start_padded = start_unpadded
+        stop_padded = stop_unpadded
         if self.zero_padded:
-            start = start_seq_len * self.max_seq_len
-            stop = stop_seq_len * self.max_seq_len
+            start_padded = start_seq_len * self.max_seq_len
+            stop_padded = stop_seq_len * self.max_seq_len
         def map_(path, value):
             if path[0] != SampleBatch.SEQ_LENS and not path[0].startswith(
                 "state_in_"
             ):
-                return value[start:stop]
+                if path[0] != SampleBatch.INFOS:
+                    return value[start_padded:stop_padded]
+                else:
+                    return value[start_unpadded:stop_unpadded]
             else:
                 return value[start_seq_len:stop_seq_len]
Comment on lines 955 to 964:

Special case for `SampleBatch.INFOS`.

Nice! Thanks for this fix.
Nice!
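To illustrate the `SampleBatch.INFOS` special case, here is a sketch under assumed shapes (not RLlib code): zero-padded columns store each sequence in a fixed block of `max_seq_len` rows and are sliced with the padded offsets, whereas infos keep one entry per real timestep and therefore need the unpadded offsets:

```python
max_seq_len = 4    # assumed padding length
seq_lens = [2, 3]  # hypothetical sequence lengths

# Zero-padded column: len(seq_lens) * max_seq_len rows, one fixed-size
# block per sequence (row indices stand in for observations).
obs_padded = list(range(len(seq_lens) * max_seq_len))  # 8 rows
# INFOS: one dict per real timestep, never zero-padded.
infos = [{"t": t} for t in range(sum(seq_lens))]       # 5 entries

# Slicing out the second sequence (sequence index 1 up to 2):
start_seq_len, stop_seq_len = 1, 2
start_padded, stop_padded = start_seq_len * max_seq_len, stop_seq_len * max_seq_len
start_unpadded, stop_unpadded = sum(seq_lens[:1]), sum(seq_lens[:2])

print(obs_padded[start_padded:stop_padded])  # rows 4..7 (3 real steps + 1 pad)
print(infos[start_unpadded:stop_unpadded])   # [{'t': 2}, {'t': 3}, {'t': 4}]
```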