
What effect on qwen1.5 will be if i use self-extend trick? #30

Open
WeixuanXiong opened this issue Mar 28, 2024 · 4 comments

Comments

@WeixuanXiong

Thanks for your contribution accommodating Qwen in SelfExtend.
Qwen1.5 already has a 32k context length. I'm wondering whether I can use SelfExtend to push it to about 100K?
Have you tested the effect of SelfExtend on Qwen1.5?
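For a rough sense of scale: SelfExtend maps distant positions into coarser groups, so the group size needed is at least the target length divided by the pretraining window. A quick back-of-the-envelope check (the lengths here are illustrative, not anything measured in this repo):

```python
import math

# Hypothetical numbers: 32K pretraining window (Qwen1.5), ~100K target.
pretrain_len = 32 * 1024
target_len = 100 * 1024

# Minimum group size, ignoring the neighbor window for simplicity.
group_size = math.ceil(target_len / pretrain_len)
print(group_size)  # → 4
```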

@Mooler0410
Collaborator

We believe how well SelfExtend works depends largely on how well the extended model performs within its original pretraining context window. This means that if Qwen1.5's 32k context window is not well trained, SelfExtend may not work; otherwise, it should work well. [Currently, we have no plans for a serious test, considering the massive computational resources required: from a 32k window, 8x → 256k and 4x → 128k. We may do serious benchmarking of Qwen1.5 when we have enough resources.]
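As a sketch of why those extension factors roughly multiply the window: SelfExtend keeps exact relative positions inside a neighbor window and maps distant positions onto coarser groups via floor division, so a model pretrained on L positions can cover about (L - w) * G + w tokens. A minimal illustration (the function names and the group/window values are assumptions for this sketch, not the repository's actual API or defaults):

```python
def self_extend_rel_pos(q_pos: int, k_pos: int, group: int, window: int) -> int:
    """Relative position a query at q_pos assigns to a key at k_pos."""
    rel = q_pos - k_pos
    if rel < window:
        return rel  # nearby tokens keep their exact relative positions
    # Distant tokens fall back to grouped (floor-divided) positions,
    # shifted so they continue seamlessly after the neighbor window.
    return (q_pos // group) - (k_pos // group) + window - window // group

def max_context(pretrain_len: int, group: int, window: int) -> int:
    # Largest sequence whose remapped relative positions stay within
    # the pretraining window.
    return (pretrain_len - window) * group + window

# With a 32k pretraining window, group size 4, neighbor window 1024,
# the covered length lands near the 4x → 128k figure above.
print(max_context(32 * 1024, 4, 1024))  # → 128000
```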

@WeixuanXiong
Author


Ah, if I test it in future work, I'll share the results with you. Thanks for your reply!

@WeixuanXiong
Author

WeixuanXiong commented May 7, 2024

[Screenshot of evaluation results.]
Results at 128k characters of input (around 70k tokens with the Qwen tokenizer). It seems to work!

@233function

What is the scale base set to?
