
Issue with forecasted/predictive state variables #115

Open
kevin-vanvaerenbergh opened this issue Oct 12, 2022 · 3 comments

Comments

@kevin-vanvaerenbergh
[screenshot: boptestGymEnv code that checks whether the observations are among the predictive variables]

There you check whether the observations are part of the predictive variables (which are generated by running the forecast call against the BOPTEST framework).
When I inspect the observation keys, they do not seem to match the keys returned by a forecast call.
[screenshot: comparison of the observation keys with the keys returned by a forecast call]

The keys of these two calls are not the same, so the loop in the code above never runs, since none of the observation names are found among the forecast keys.

@javiarrobas
Contributor

@kevin-vanvaerenbergh thanks for raising the issue! I think there is a misunderstanding about what measurements, observations, and predictive_vars mean in the framework. Measurements are not intended to be predictive, because they never can be. Predictive variables are the selection from all available forecast variables that the user includes in the observation space. Please note the following part of the code that is run at initialization:

# Check if agent uses predictions in state and parse predictive variables
self.is_predictive = False
self.predictive_vars = []
if any([obs in self.all_predictive_vars for obs in observations]):
    self.is_predictive = True
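The behavior of this check can be sketched with a minimal, self-contained example. The variable names here are hypothetical (the actual forecast keys depend on the BOPTEST test case):

```python
# Minimal sketch of the initialization check above, outside the class.
# 'TDryBul' and 'reaTZon_y' are hypothetical names for illustration.
all_predictive_vars = ['time', 'TDryBul', 'HGloHor']  # available forecast vars
observations = ['reaTZon_y', 'TDryBul']               # user-chosen observation space

is_predictive = False
predictive_vars = []
if any(obs in all_predictive_vars for obs in observations):
    is_predictive = True
    # keep only the observations that are forecast variables
    predictive_vars = [obs for obs in observations if obs in all_predictive_vars]

print(is_predictive)    # True: 'TDryBul' is a forecast variable
print(predictive_vars)  # ['TDryBul']
```

In other words, the user opts into predictions simply by naming forecast variables in the observation space; there is no automatic mapping from measurement names to forecast names.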

@kevin-vanvaerenbergh
Author

kevin-vanvaerenbergh commented Oct 25, 2022

Thanks @JavierArroyoBastida for your clarification. I understand how you use it now. However, the check always passes when you are using the 'time' variable, since that appears in both predictive_vars and observations. So perhaps the 'time' variable could be removed, so that an error is raised when the framework is misused.
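The 'time' problem can be demonstrated in isolation (again with hypothetical names, since the real forecast keys depend on the test case):

```python
# Sketch of the 'time' issue: because 'time' appears in both lists,
# the any() check passes even when no real forecast variable is observed.
all_predictive_vars = ['time', 'TDryBul']  # hypothetical forecast keys
observations = ['time', 'reaTZon_y']       # only measurements, plus 'time'

is_predictive = any(obs in all_predictive_vars for obs in observations)
print(is_predictive)  # True, solely because of 'time'
```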

I added an alternative where you do not need to define the forecasted values explicitly: they are automatically derived as the forecasted variant of the corresponding measurement when you define a predictive horizon. For this I use the following update:

# Parse predictive vars
# self.predictive_vars = [obs for obs in observations if
#                         (obs in self.all_predictive_vars and obs != 'time')]
self.predictive_vars = [var for var in self.all_predictive_vars.keys()
                        if (var in '\t'.join(observations) and var != 'time')]

Since measurements and forecasted values share the same name apart from a prefix, I simply search for the forecasted values that are linked to the measurements in the observations.
I am still running experiments to verify that it works, and the end result is probably the same as your approach. But perhaps this could become extra functionality: when forecasted variables are not defined as observations at creation time, they are found automatically based on the observations, i.e., the forecasted version of each measurement is added to the observation dictionary automatically.
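The proposed matching can be sketched as follows. Note that it is a substring match: a forecast variable is picked up whenever its name appears inside any observation name. All names here are hypothetical:

```python
# Sketch of the proposed substring matching: the forecast name 'TDryBul'
# is found inside the measurement-style observation name
# 'weaSta_reaWeaTDryBul_y', so it is selected automatically.
all_predictive_vars = {'time': {}, 'TDryBul': {}, 'HGloHor': {}}
observations = ['weaSta_reaWeaTDryBul_y', 'reaTZon_y']

predictive_vars = [var for var in all_predictive_vars
                   if var in '\t'.join(observations) and var != 'time']
print(predictive_vars)  # ['TDryBul']
```

A design caveat of this approach: because it is a plain substring check, a forecast key whose name happens to occur inside an unrelated observation name would also match.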

@javiarrobas
Contributor

That's a good catch, thanks! I think what would make the most sense is to remove 'time' when defining self.all_predictive_vars, since time is never a variable to be predicted.
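The suggested fix amounts to filtering 'time' out of the forecast keys at initialization. A minimal sketch, with a hypothetical forecast response:

```python
# Sketch of the suggested fix: drop 'time' when building the set of
# predictive variables, since time is never forecast.
forecast_response = {'time': [0, 3600], 'TDryBul': [275.0, 276.1]}  # hypothetical
all_predictive_vars = [key for key in forecast_response if key != 'time']
print(all_predictive_vars)  # ['TDryBul']
```

With 'time' excluded, the `any()` check at initialization can no longer pass by accident, so misuse of the observation space surfaces immediately.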
