
Training speed is relatively slow #2

Open
zhifengqi opened this issue Jan 24, 2019 · 12 comments

Comments

@zhifengqi

The training speed is much slower.

@liuhu-bigeye
Owner

liuhu-bigeye commented Jan 24, 2019

Because it is implemented in PyTorch, it is indeed much slower than the C implementation of warp_ctc. For the Synth90k results in the paper, training took about a week.

@zhifengqi
Author

Could this be done by modifying the original CTC implementation?

@liuhu-bigeye
Owner

The author has graduated and moved to industry, so there is no plan to port this to a C implementation for now. The EnCTC supplementary material provides the backward formulas; adapting warp_ctc by reference to them should not be very hard. Discussion is welcome.

@allen4747

allen4747 commented Jul 23, 2019

> The author has graduated and moved to industry, so there is no plan to port this to a C implementation for now. The EnCTC supplementary material provides the backward formulas; adapting warp_ctc by reference to them should not be very hard. Discussion is welcome.

@liuhu-bigeye Hi, I would like to ask: if I want to add your method to Pytorch_warpctc, where in Pytorch_warpctc do I need to make changes? Thanks!

@liuhu-bigeye
Owner

liuhu-bigeye commented Jul 24, 2019

@Allenhu47
Comparing the two, you can see that EnCTC's recursive computation is very similar to CTC's.

I suggest reading the warp-ctc implementation carefully: https://github.com/SeanNaren/warp-ctc/blob/pytorch_bindings/include/detail/cpu_ctc.h#L189

Then reimplement this part of the Python code in C in the same style (replacing the matrix operations with point-wise computation): https://github.com/liuhu-bigeye/enctc.crnn/blob/master/pytorch_ctc/ctc_ent.py#L82

Finally, implement the backward pass in C based on the EnCTC backward formulas in the supplementary material.
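To make the porting steps above concrete, here is a minimal pure-Python sketch of the standard CTC alpha (forward) recursion in log-space, the same recursion pattern that warp-ctc's `cpu_ctc.h` implements point-wise; this illustrates the plain CTC case only, not the EnCTC entropy terms, and the function name and shapes are illustrative:

```python
import math

def logsumexp2(a, b):
    # Numerically stable log(exp(a) + exp(b)).
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def ctc_forward(log_probs, labels, blank=0):
    """Standard CTC alpha recursion in log-space.
    log_probs: T x V table of per-frame log-probabilities.
    labels: target label sequence (without blanks).
    Returns the log-likelihood of the labeling."""
    # Extended sequence: blank before, between, and after labels.
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    T, S = len(log_probs), len(ext)
    NEG = -math.inf
    alpha = [[NEG] * S for _ in range(T)]
    alpha[0][0] = log_probs[0][blank]
    alpha[0][1] = log_probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]
            if s > 0:
                a = logsumexp2(a, alpha[t - 1][s - 1])
            # Skip transition: allowed when the current symbol is not a
            # blank and differs from the symbol two positions back.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logsumexp2(a, alpha[t - 1][s - 2])
            alpha[t][s] = a + log_probs[t][ext[s]]
    # A valid path ends on the last label or the trailing blank.
    return logsumexp2(alpha[T - 1][S - 1], alpha[T - 1][S - 2])
```

For example, with two uniform frames over a vocabulary {blank, 1} and target [1], the valid paths are (blank,1), (1,blank), (1,1), so the likelihood is 3 × 0.25 = 0.75. A C port replaces the inner loop body with the same per-cell arithmetic.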

@allen4747

@liuhu-bigeye Thanks! I will look into it.

@chengmengli06

@allen4747 Did you implement it? Could you share it?

@chengmengli06

@liuhu-bigeye I cannot find the supplementary material. Could you send it?

@jin-s13
Collaborator

jin-s13 commented Oct 27, 2019

@chengmengli06 You can find the 'Supplemental' download link at:
https://papers.nips.cc/paper/7363-connectionist-temporal-classification-with-maximum-entropy-regularization

@luvwinnie

Has anyone adapted warp-ctc for this?

@Ann3S

Ann3S commented Feb 28, 2023

Hello, I am using PyTorch's built-in CTCLoss, which can be constructed without arguments. But when using CUDA, entCTC requires that the loss be given its four arguments. Is there any way around this?

    criterion = CTCLoss()
    if opt.cuda:
        crnn.cuda()
        crnn = torch.nn.DataParallel(crnn, device_ids=range(opt.ngpu))
        image = image.cuda()
        criterion = criterion.cuda()

@liuhu-bigeye Thank you!
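For reference, PyTorch's built-in `torch.nn.CTCLoss` constructor takes no length arguments; the four tensors mentioned above are passed when the loss is *called*. A minimal sketch with toy shapes and random data (assuming a recent PyTorch; the dimension names are illustrative):

```python
import torch
import torch.nn as nn

# Toy dimensions: T frames, batch N, C classes (class 0 is the blank), max target length S.
T, N, C, S = 50, 4, 20, 10

log_probs = torch.randn(T, N, C).log_softmax(2)          # (T, N, C) frame log-probabilities
targets = torch.randint(1, C, (N, S), dtype=torch.long)  # (N, S), labels only, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)    # all T frames valid per sample
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

criterion = nn.CTCLoss(blank=0)  # constructor needs no lengths
# The four required arguments go to the call, not the constructor:
loss = criterion(log_probs, targets, input_lengths, target_lengths)
```

The same four-argument call works on CUDA tensors once the model and inputs are moved with `.cuda()`.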

@My-captain

> The author has graduated and moved to industry, so there is no plan to port this to a C implementation for now. The EnCTC supplementary material provides the backward formulas; adapting warp_ctc by reference to them should not be very hard. Discussion is welcome.

> @liuhu-bigeye Hi, I would like to ask: if I want to add your method to Pytorch_warpctc, where in Pytorch_warpctc do I need to make changes? Thanks!

Has there been any progress on this since?
