[BE] Do not use unicode quotes #99446

Closed · wants to merge 1 commit
6 changes: 3 additions & 3 deletions torch/_dynamo/variables/builder.py
@@ -1136,18 +1136,18 @@ def wrap_to_fake_tensor_and_record(
curr_sizes = None
if name not in tx.output.frame_state:
# If there is no entry for this source, add the tensor to frame state with its current static size.
-# E.g., {} -> {“x”: [2, 4]}
+# E.g., {} -> {"x": [2, 4]}
curr_sizes = list(e.size())
else:
curr_sizes = tx.output.frame_state[name]
if curr_sizes is not None:
if e.ndim != len(curr_sizes):
# If there is already an entry, and the dim mismatches, replace the frame state entry with None.
-# E.g. {“x”: [2, 3, 4]} -> {“x”: None}
+# E.g. {"x": [2, 3, 4]} -> {"x": None}
curr_sizes = None
else:
# If there is already an entry, and the dim matches, for every size in the frame state which
-# disagrees with the current static size, replace it with None. E.g., {“x”: [2, 3]} -> {“x”: [2, None]}
+# disagrees with the current static size, replace it with None. E.g., {"x": [2, 3]} -> {"x": [2, None]}
for i, dim in enumerate(curr_sizes):
if e.size()[i] != dim:
curr_sizes[i] = None
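The comments touched in this hunk describe how Dynamo tracks per-input static sizes across frames. A minimal sketch of that bookkeeping, assuming `frame_state` is a plain dict keyed by source name (illustrative only, not the actual builder code):

```python
# Illustrative sketch of the size tracking described in the comments above;
# `frame_state` stands in for tx.output.frame_state and is just a dict here.
def update_frame_state(frame_state, name, tensor):
    if name not in frame_state:
        # No entry yet: record the tensor's current static sizes.
        # E.g., {} -> {"x": [2, 4]}
        frame_state[name] = list(tensor.size())
        return
    curr_sizes = frame_state[name]
    if curr_sizes is None:
        return  # already marked fully dynamic
    if tensor.ndim != len(curr_sizes):
        # Rank changed between frames: mark the whole entry as dynamic.
        # E.g., {"x": [2, 3, 4]} -> {"x": None}
        frame_state[name] = None
    else:
        # Same rank: mark only the dims that disagree as dynamic.
        # E.g., {"x": [2, 3]} -> {"x": [2, None]}
        for i, dim in enumerate(curr_sizes):
            if tensor.size(i) != dim:
                curr_sizes[i] = None
```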
2 changes: 1 addition & 1 deletion torch/_functorch/autograd_function.py
@@ -500,7 +500,7 @@ def get_tangents_in_dims(input_dims, tangents):
# def backward_no_context(gy):
# return gy.expand([B, 4])
#
-# gx = vmap(backward_no_context, dims)(gy: Tensor[B])
+# gx = vmap(backward_no_context, dims)(gy: "Tensor[B]")
#
# This gives us the wrong result (gx has shape [B, B, 4], but it should
# have shape [4]). Performing vmap over setup_context means the shape
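The failure mode this comment describes can be reproduced outside of autograd.Function with a plain vmap call. A small sketch, assuming torch.func.vmap and an arbitrary B (shapes are illustrative):

```python
import torch
from torch.func import vmap

B = 3
gy = torch.randn(B)  # batched grad output: one element per batch entry

def backward_no_context(gy_i):
    # Under vmap, gy_i is a single (0-dim) element; expanding it to [B, 4]
    # bakes the batch size into the per-example computation.
    return gy_i.expand([B, 4])

gx = vmap(backward_no_context)(gy)
print(gx.shape)  # torch.Size([3, 3, 4]), i.e. [B, B, 4] rather than [B, 4]
```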
8 changes: 4 additions & 4 deletions torch/ao/quantization/fx/README.md
@@ -202,10 +202,10 @@ The overall logic to insert QDQStub1 and QDQStub2 inplace is the following:
# node_name_to_target_dtype_info =
# {
# # this is placeholder node in FX Graph
-# input : {input_activation: torch.float32, output_activation: torch.float32},
-# qat_linear_relu: {input_activation: torch.quint8, output_activation: torch.quint8, weight: ...}
+# "input" : {"input_activation": torch.float32, "output_activation": torch.float32},
+# "qat_linear_relu": {"input_activation": torch.quint8, "output_activation": torch.quint8, "weight": ...}
# # this is the return node in FX Graph
-# output: {input_activation: torch.float32, output_activation: torch.float32}
+# "output": {"input_activation": torch.float32, "output_activation": torch.float32}
# }
```
Note: this map is generated before we insert qdqstub to graph1, and will not change in the process.
@@ -259,7 +259,7 @@ Let’s say the output of `qat_linear_relu` Node is configured as float32, both
}
```

-What we’ll do here is when we are trying to insert output QDQStub for `qat_linear_relu`, we look at the target output dtype for this node (node_name_to_target_dtype_info[qat_linear_relu”][“output_activation], and find that it is float, which is not a quantized dtype, so
+What we’ll do here is when we are trying to insert output QDQStub for `qat_linear_relu`, we look at the target output dtype for this node (node_name_to_target_dtype_info["qat_linear_relu"]["output_activation"], and find that it is float, which is not a quantized dtype, so
will do nothing here.
Note that this does not prevent other operators following `qat_linear_relu` to insert a QDQStub at the output of `qat_linear_relu`, since we are dealing with an `edge` of the graph here, and an `edge` is connected to two nodes, which means
the output of `qat_linear_relu` will also be the input of a node following `qat_linear_relu`.
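In code form, the decision described in this paragraph reduces to a dtype lookup. A hypothetical sketch (the dict and helper below are illustrative; the real prepare pass in torch/ao/quantization/fx is more involved):

```python
import torch

# Illustrative target-dtype map, mirroring the README example above.
node_name_to_target_dtype_info = {
    "qat_linear_relu": {
        "input_activation": torch.quint8,
        "output_activation": torch.float32,
    },
}

def needs_output_qdqstub(node_name):
    out_dtype = node_name_to_target_dtype_info[node_name]["output_activation"]
    # Insert a QDQStub on the output edge only if the target dtype is quantized.
    return out_dtype in (torch.quint8, torch.qint8)

print(needs_output_qdqstub("qat_linear_relu"))  # False -> do nothing here
```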