Not able to load Intel/bge-base-en-v1.5-rag-int8-static model #731

Open
dilip467 opened this issue May 28, 2024 · 4 comments
dilip467 commented May 28, 2024

```python
from optimum.intel import IPEXModel

model_path = "/data/notebook/Dilip/models/bge-base-en-v1.5-rag-int8-static"
model = IPEXModel.from_pretrained(model_path)
```

Error:

Passing the argument `library_name` to `get_supported_tasks_for_model_type` is required, but got library_name=None. Defaulting to `transformers`. An error will be raised in a future version of Optimum if `library_name` is not provided.
/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/torch/amp/autocast_mode.py:267: UserWarning: In CPU autocast, but the target dtype is not supported. Disabling autocast.
CPU Autocast only supports dtype of torch.bfloat16, torch.float16 currently.
  warnings.warn(error_message)
Traceback (most recent call last):
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/intel/ipex/modeling_base.py", line 317, in _call_model
    out = self.model(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):

      graph(%add_a, %add_b, %alpha, %shape:int[], %w, %b, %eps:float, %cudnn_enable:bool):
        %r = ipex::add_layernorm(%add_a, %add_b, %alpha, %shape, %w, %b, %eps, %cudnn_enable)
             ~~~~ <--- HERE
        return (%r) 
RuntimeError: could not create a primitive descriptor for a layer normalization forward propagation primitive


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/modeling_base.py", line 402, in from_pretrained
    return from_pretrained_method(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/intel/ipex/modeling_base.py", line 258, in _from_pretrained
    return cls(model, config=config, model_save_dir=model_save_dir, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/intel/ipex/modeling_base.py", line 152, in __init__
    self._init_warmup()
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/intel/ipex/modeling_base.py", line 332, in _init_warmup
    self(**dummy_inputs)
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/modeling_base.py", line 92, in __call__
    return self.forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/intel/ipex/modeling_base.py", line 279, in forward
    outputs = self._call_model(**inputs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/optimum/intel/ipex/modeling_base.py", line 319, in _call_model
    out = self.model(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/data/notebook/Dilip/virtualenv/intel-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: could not create a primitive descriptor for a softmax forward propagation primitive
@IlyasMoutawwakil (Member)

Could you add more details, please, like hardware and software specifics? A minimal reproducible example would be good too.
Is the model you're referencing one that you created yourself, or one you downloaded from the Hub?
IMO this looks like your CPU might not support the features you're trying to run.
I've seen similar errors when trying to run on AMD (it doesn't support AMX).


dilip467 commented Jun 10, 2024

(base) [SIBUDDY@nvmbd2s03dvv010 datasets]$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping:              4
CPU MHz:               2394.375
BogoMIPS:              4788.75
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              28160K
NUMA node0 CPU(s):     0-7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt arat

```python
from optimum.intel import IPEXModel

model = IPEXModel.from_pretrained("Intel/bge-base-en-v1.5-rag-int8-static")
```

I downloaded the model from the Hub: https://huggingface.co/Intel/bge-base-en-v1.5-rag-int8-static
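As a side note, before attempting the IPEX load it can help to confirm the CPU actually advertises the ISA extensions that Intel's int8/bf16 oneDNN kernels typically rely on. A minimal sketch, assuming the relevant flags are the AMX/AVX-512 ones mentioned in this thread (the model's exact requirements are an assumption on my part); `check_isa_flags` is a hypothetical helper, not part of optimum:

```python
# Sketch (assumption): parse /proc/cpuinfo or lscpu output and report
# whether the ISA features that Intel's int8/bf16 oneDNN kernels
# typically need are present. Flag names are taken from the lscpu
# dumps in this thread.
def check_isa_flags(cpuinfo_text,
                    required=("avx512f", "avx512_vnni", "avx512_bf16", "amx_int8")):
    """Return a dict flag -> bool parsed from lscpu or /proc/cpuinfo text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        # lscpu prints "Flags:", /proc/cpuinfo prints "flags\t\t:"
        if key.strip().lower() == "flags":
            flags.update(value.split())
    return {f: f in flags for f in required}

# Example with a flags line excerpted from the failing VM above:
failing_vm = "Flags: fpu vme sse sse2 avx avx2 fma f16c aes xsave rdseed adx"
print(check_isa_flags(failing_vm))
# All four flags come back False on this Xeon Gold 6148 guest.
```

If all four come back `False`, the `could not create a primitive descriptor` errors above would be consistent with the hardware simply lacking the required instructions.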

@echarlaix (Collaborator)

cc @jiqing-feng who might have some insight on this


jiqing-feng commented Jun 11, 2024

> Can you add more details please, like hardware and software details. A minimal reproducible code would be good too. The model you're referencing is one that you created yourself ? or downloaded from the hub ? IMO this seems like your CPU might not support the features you're trying to run. I've seen similar errors when trying to run on AMD (it doesn't support AMX).

Agree!

I can run it on an SPR (Sapphire Rapids) node with the following CPU info:

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              224
On-line CPU(s) list: 0-223
Thread(s) per core:  2
Core(s) per socket:  56
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               143
Model name:          Intel(R) Xeon(R) Platinum 8480L
Stepping:            7
CPU MHz:             2000.000
CPU max MHz:         3800.0000
CPU min MHz:         800.0000
BogoMIPS:            4000.00
L1d cache:           48K
L1i cache:           32K
L2 cache:            2048K
L3 cache:            107520K
NUMA node0 CPU(s):   0-55,112-167
NUMA node1 CPU(s):   56-111,168-223
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
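The gap between the two machines can be made concrete by diffing their flag lists. A small sketch using excerpts (not the full lists) of the two lscpu dumps in this thread:

```python
# Excerpted CPU flags from the two lscpu dumps in this thread:
# the failing Xeon Gold 6148 VM and the working SPR node.
failing_vm = set("fpu sse sse2 ssse3 sse4_1 sse4_2 avx avx2 fma f16c aes".split())
spr_node = set(
    "fpu sse sse2 ssse3 sse4_1 sse4_2 avx avx2 fma f16c aes "
    "avx512f avx512_vnni avx512_bf16 amx_tile amx_int8 amx_bf16".split()
)

# AMX/AVX-512 features present on the working node but absent on the VM:
missing = sorted(f for f in spr_node - failing_vm if f.startswith(("amx", "avx512")))
print(missing)
# ['amx_bf16', 'amx_int8', 'amx_tile', 'avx512_bf16', 'avx512_vnni', 'avx512f']
```

The Gold 6148 guest tops out at AVX2, while the int8 static model's oneDNN primitives presumably target the VNNI/AMX paths available on the SPR node, which would explain why the same code works on one machine and fails on the other.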

