Are there any tests that show that statsforecast gives exactly the same results as the R packages? I see a difference in parameter selection and values for the AutoCES.
Hi @ncooder, there are some slight variations from the R results, mainly due to the optimization method used. I observed similar discrepancies when working on TBATS. In our case, we use the minimize function from scipy.optimize, while the forecast R package uses optim from stats. The results are identical to R's right up until the likelihood is optimized; at that point, small variations appear. So the differences you've seen in parameter selection and values for AutoCES are to be expected.
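To illustrate the point (this is a toy sketch, not statsforecast code): even within scipy, two optimizers minimizing the same likelihood from the same starting point typically stop at slightly different parameter values. The same effect is expected between scipy's minimize and R's optim.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=200)

def neg_log_likelihood(params):
    """Negative Gaussian log-likelihood for a toy location/scale model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # log-parameterization keeps sigma > 0
    return 0.5 * np.sum(((y - mu) / sigma) ** 2) + y.size * log_sigma

x0 = np.array([0.0, 0.0])
res_nm = minimize(neg_log_likelihood, x0, method="Nelder-Mead")
res_bfgs = minimize(neg_log_likelihood, x0, method="BFGS")

# Both reach essentially the same optimum, but the estimates differ in the
# last decimals -- the same kind of gap seen between scipy and R's optim.
print(res_nm.x, res_bfgs.x)
print(np.abs(res_nm.x - res_bfgs.x))
```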
If you think the differences are too significant, please provide us with a reproducible example, and we will take a closer look at it.
@MMenchero I will try to put together an example demonstrating the difference between statsforecast and the smooth package for the AutoCES model. You are right that the problem could be related to optimization. In any case, the complex smoothing parameters I get from statsforecast and from smooth in R are significantly different.
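For the reproducible example, one simple way to quantify "significantly different" is to compare the fitted complex parameters from both packages under a tolerance. The values below are hypothetical placeholders, standing in for the real outputs of statsforecast's AutoCES and R's smooth on the same series:

```python
import numpy as np

# Hypothetical fitted values -- replace with the actual outputs of
# statsforecast's AutoCES and smooth's auto.ces on the same series.
params_statsforecast = {"alpha": 1.08 + 1.12j, "beta": 0.97 + 0.005j}
params_smooth_r = {"alpha": 1.05 + 1.09j, "beta": 0.98 + 0.004j}

def max_abs_diff(a, b):
    """Largest absolute difference across matching complex parameters."""
    return max(abs(a[k] - b[k]) for k in a)

diff = max_abs_diff(params_statsforecast, params_smooth_r)
print(f"max |delta param| = {diff:.4f}")
# Differences on the order of optimizer tolerance (~1e-2) are likely benign;
# order-of-magnitude gaps would point to a genuine discrepancy.
```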