Adding measurement components back to a measurement after iteratively solving for a value #108
Thanks for the interesting question. If I understand it correctly, the …
Thanks for your advice! And yep! Basically, the act of solving for … So I have to ask, what kind of deep magic is the …?

For anyone else with this kind of problem, where you can't get the components but can get the overall error: my alternative "brute force" solution is to simply solve the calculation again with every variable's uncertainty except one set to zero. This way, the variance formula simplifies to only that component, and the resulting error is simply the square root of that variable's contribution. For example, starting with the variable x for some function f(x, y, z), the result is sigma_f = sqrt(sigma_x^2 * (df/dx)^2); divide this value squared by the total variance (computed using all variables) to get x's fractional contribution.
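A minimal sketch of this brute-force approach, using an illustrative toy function (none of these names come from the thread):

```julia
using Measurements

# Toy function and inputs (illustrative only).
f(x, y, z) = x * y + sin(z)
x = 2.0 ± 0.1
y = 3.0 ± 0.2
z = 1.0 ± 0.05

# Full propagation with all uncertainties.
full = f(x, y, z)

# Re-propagate with every uncertainty except x's zeroed out; the result's
# error is then sqrt(sigma_x^2 * (df/dx)^2), i.e. x's component alone.
only_x = f(x, measurement(y.val, 0), measurement(z.val, 0))

# Fractional contribution of x to the total variance.
contribution_x = only_x.err^2 / full.err^2
```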
In …
I cannot believe I missed this. Thank you Mosè! Here's an MWE demonstrating this working, just in case anyone else sees this in the future:

```julia
using Optim
using Measurements

f(x, y, z) = -x * y + z * x + z * x * y + sin(y)

# Say we know f, x, y but not z.
f_known = 1 ± 0.2
x_known = 23 ± 2.4
y_known = 2 ± 0.01

function solve_for_z(x_meas, y_meas, f_meas)
    f_error(z) = (f(x_meas, y_meas, z[1]) - f_meas)^2
    z_solved = Optim.optimize(f_error, [1.0], GradientDescent(); autodiff = :forward)
    return Optim.minimizer(z_solved)[1]
end

# Note that running this directly throws an error, as Optim currently
# cannot handle Measurement types:
# solve_for_z(x_known, y_known, f_known)
z = @uncertain solve_for_z(x_known, y_known, f_known)

x_key = (x_known.val, x_known.err, x_known.tag)
y_key = (y_known.val, y_known.err, y_known.tag)
f_key = (f_known.val, f_known.err, f_known.tag)

components = uncertainty_components(z)
contributions = (components[x_key], components[y_key], components[f_key]) .^ 2 ./ z.err^2
println(contributions)
println(sum(contributions))  # if we accounted for all error sources, this sums to 1
```
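As a cross-check on the MWE above (assuming it has been run), the individual partials recorded by `@uncertain` can also be read back with the documented `Measurements.derivative` function; each component |dz/dx_i| * sigma_i should add in quadrature to `z.err`:

```julia
# Partial derivatives as propagated through @uncertain.
dzdx = Measurements.derivative(z, x_known)
dzdy = Measurements.derivative(z, y_known)
dzdf = Measurements.derivative(z, f_known)

total = sqrt((dzdx * x_known.err)^2 + (dzdy * y_known.err)^2 + (dzdf * f_known.err)^2)
isapprox(total, z.err)  # true when all error sources are accounted for
```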
The only caveat is that I'm using …
Thanks for the tip! I forgot to ask one quick question: will the finite differencing used to calculate the partials through `@uncertain` affect the base calculations at all? I know I can forward-diff through the `@uncertain` macro without the typical issues seen before, but I was unsure whether there is any additional performance degradation in the actual values themselves.
The value is computed normally; see the relevant implementation in Measurements.jl (lines 150 to 170 at commit ba56443).
Are you talking about speed performance (it'll be slower) or "goodness" performance (the value shouldn't be any different; only the error part would be slightly different compared to not using `@uncertain`)?
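A quick illustration of that second point (a sketch, not from the thread): the central value comes out identical either way, and only the uncertainty can differ slightly, because `@uncertain` estimates derivatives numerically rather than using the analytic propagation rules:

```julia
using Measurements

x = 5.0 ± 0.3

a = sin(x)              # analytic error propagation
b = @uncertain sin(x)   # numerical differentiation via @uncertain

a.val == b.val          # true: the central value is computed normally
isapprox(a.err, b.err)  # approximately, but not necessarily exactly, equal
```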
Hey! I've been using Measurements since I started using Julia, and I've finally run into a project where I need to be able to get components back out via `uncertainty_components`. I've considered a number of ways to recover the actual partial derivatives, but none are too attractive in terms of effort or in how they would apply generically to any problem. I'm wondering if, upon calculating the iterative solution, I could go ahead and calculate the partial derivatives of the solution, either with finite differences or some other method, and then add them back into the `der` field of the new measurement I construct. Is this doable?

To reiterate my problem a bit more concretely, here's what I'm doing now and also why it isn't working: … `c`, but the problem is that in the function `f`, I solve `a` iteratively, then later go back and analytically solve for the uncertainty of `a` by applying the variance formula to whatever direct expression was originally available. The result is that I had to define a "new" measurement for `a`, since we originally found it by guessing (through some optimizer), which means it has no partial derivatives / history of its pathway. If I could populate the `der` field with appropriate `(val, err, tag)` keys, then I could send `a` on its merry way and it would be business as usual... or so I think.

Does this problem make sense, and is there an obvious way around it in the Measurements library? Or am I missing another, better solution for finding error contributions?

For clarity, I am already able to get the overall uncertainty and have experimentally confirmed that these uncertainties are correct. The issue is in finding what contributes most to the uncertainty.
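One generic way to approach this without touching the internal `der` field, sketched here under the assumption that the MWE's `solve_for_z` and its inputs are in scope: finite-difference the plain-float solver yourself and compare each |dz/dx_i| * sigma_i against the total error (essentially what `@uncertain` automates):

```julia
# Central-difference estimate of the solver's partial derivative with
# respect to its i-th (plain-float) argument.
function fd_partial(solver, args::Vector{Float64}, i::Int; h = 1e-6)
    hi = copy(args); hi[i] += h
    lo = copy(args); lo[i] -= h
    return (solver(hi...) - solver(lo...)) / (2h)
end

vals = [x_known.val, y_known.val, f_known.val]
errs = [x_known.err, y_known.err, f_known.err]

# |dz/dx_i| * sigma_i for each input; these add in quadrature.
comps = [abs(fd_partial(solve_for_z, vals, i)) * errs[i] for i in 1:3]
sqrt(sum(abs2, comps))  # compare against z.err from @uncertain
```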