Issues: google-ai-edge/ai-edge-torch
#265: Unable to resolve runtime symbol: `xla_mark_tensor'. | Labels: status:awaiting user response, status:more data needed, type:bug | Opened Sep 27, 2024 by Clod98
#264: Problems with quantization | Labels: status:awaiting ai-edge-developer, type:bug | Opened Sep 27, 2024 by spacycoder
#258: Accuracy is not matching for generated tiny_llama model | Labels: status:awaiting user response, type:bug | Opened Sep 25, 2024 by akshatshah17
#254: Trace model in model-explorer | Labels: status:awaiting user response, type:support | Opened Sep 24, 2024 by nigelzzzzzzz
#246: Replace Int64 with Int32 for edge | Labels: type:support | Opened Sep 21, 2024 by rfechtner
#237: data_ptr_value % kDefaultTensorAlignment == 0 was not true. | Labels: status:awaiting ai-edge-developer, status:awaiting review, status:contribution welcome, type:bug | Opened Sep 18, 2024 by nigelzzzzzzz
#236: Different with gemma2 / gemma | Labels: status:awaiting user response, type:bug | Opened Sep 18, 2024 by nigelzzzzzzz
#235: Encountered 'Redefinition of symbol: gelu_decomp_27' issue while converting Qwen2 model to TFLite | Labels: status:awaiting user response, status:stale, type:bug | Opened Sep 18, 2024 by tilfdev
#221: Conversion fails on model loaded via torch.load or torch.jit.load | Labels: status:awaiting user response, type:support | Opened Sep 13, 2024 by saseptim
#218: Error when importing 0.3.0 | Labels: status:awaiting user response, status:stale, type:build/install, type:support | Opened Sep 12, 2024 by scarlettekk
#192: OOM Error in Gemini 2 2B TFLite Conversion with Quantization on 80GB RAM | Labels: status:awaiting ai-edge-developer, type:memory | Opened Sep 5, 2024 by KennethanCeyer
#179: PT2E conversion creates Transpose op for each conv2d weight set | Labels: status:awaiting user response, type:feature, type:performance | Opened Aug 29, 2024 by edupuis-psee
#175: Tiny-llama Encountered unresolved custom op: odml.update_kv_cache | Labels: type:bug | Opened Aug 28, 2024 by vignesh-spericorn
#150: int8 tflite conversion crashes | Labels: status:awaiting ai-edge-developer, type:feature | Opened Aug 15, 2024 by codewarrior26
#137: Tensor Shape Mismatch During TFLite Quantization Conversion | Labels: type:bug | Opened Aug 8, 2024 by spacycoder
#136: quant_config Dtype INT16 support request | Labels: type:feature | Opened Aug 8, 2024 by ZORO-Q
#109: text_generator_main.cc using tinyllama model to inference can show Garbled characters | Labels: status:awaiting ai-edge-developer, type:bug | Opened Jul 26, 2024 by nigelzzz
#81: Converting Torch modules that use the max function | Labels: status:awaiting review, type:bug | Opened Jul 5, 2024 by hbellafkir
#19: Unable to Convert _sa_block Method of torch.nn.TransformerEncoderLayer | Opened May 29, 2024 by bisnu-sarkar-inverseai