Major/core rewrite #2

Open: wants to merge 84 commits into base: master

Changes from all commits (84 commits)
71eec3b
WIP on bugfix/avoid_test_case_overwrites
jagerber48 Jul 14, 2024
6515420
some updates
jagerber48 Jul 15, 2024
8299f58
cleanup
jagerber48 Jul 15, 2024
f24ea0f
tests
jagerber48 Jul 15, 2024
6946067
some nan handling
jagerber48 Jul 15, 2024
7d3ec32
comment
jagerber48 Jul 15, 2024
c3dc427
revert test_uncertainties changes
jagerber48 Jul 15, 2024
d2388c2
operation type hints and cleanup
jagerber48 Jul 16, 2024
08b22dd
docstring
jagerber48 Jul 16, 2024
b1b534a
documentation
jagerber48 Jul 16, 2024
deaf4f9
Cache standard deviation calculation
jagerber48 Jul 16, 2024
2ceb966
Cleanup and documentation
jagerber48 Jul 16, 2024
7a53bbc
comments
jagerber48 Jul 16, 2024
1c121a7
raise on negative uncertainty
jagerber48 Jul 17, 2024
a602f84
new version of umath
jagerber48 Jul 17, 2024
d7ee99b
loop through args and kwargs instead of using inspect.signature
jagerber48 Jul 17, 2024
b841984
analytical and partial derivatives test
jagerber48 Jul 18, 2024
b277ab4
position only arguments
jagerber48 Jul 18, 2024
51c0f61
whitespace
jagerber48 Jul 18, 2024
71ec975
add value and uncertainty properties
jagerber48 Jul 18, 2024
18ae733
add asinh and hypot, function to add ufuncs
jagerber48 Jul 18, 2024
7636d9a
add UArray
jagerber48 Jul 18, 2024
22e6a07
return NotImplemented when we don't get a UFloat in a something conve…
jagerber48 Jul 18, 2024
3147eb0
a type hint
jagerber48 Jul 18, 2024
f536a4a
a test
jagerber48 Jul 18, 2024
6074bdf
hack to fix mean
jagerber48 Jul 18, 2024
2b5aa7d
fixed mean
jagerber48 Jul 18, 2024
37f8e7f
remove old comment
jagerber48 Jul 18, 2024
48cc179
refactor into new subpackage
jagerber48 Jul 19, 2024
7a83977
update test
jagerber48 Jul 19, 2024
25cc58a
import/cleanup
jagerber48 Jul 19, 2024
a6ef02f
inject function
jagerber48 Jul 19, 2024
21f2b92
refactor to add func_conversion
jagerber48 Jul 19, 2024
a09fad2
dataclass for uncertainty linear combination, hashes and immutability
jagerber48 Jul 19, 2024
084968c
ucombo file and some typing
jagerber48 Jul 19, 2024
6aa8e73
change import structure
jagerber48 Jul 19, 2024
9e2ce85
move docstring
jagerber48 Jul 19, 2024
d46526a
__slots__
jagerber48 Jul 19, 2024
316af60
re organize new/umath.py
jagerber48 Jul 21, 2024
de0a05d
Pull all numeric dummy method type stubs into numeric_base file
jagerber48 Jul 21, 2024
2b547a2
comment
jagerber48 Jul 21, 2024
1786ff2
cast weights to float
jagerber48 Jul 21, 2024
45fc4b5
undo reorder umath.py
jagerber48 Jul 21, 2024
32a2411
bind typevar to NumericBase
jagerber48 Jul 21, 2024
44709aa
Self typevar in UFloat
jagerber48 Jul 21, 2024
178e07d
hash type annotation
jagerber48 Jul 21, 2024
8cd237f
add formatting
jagerber48 Jul 21, 2024
8aff054
some type annotation
jagerber48 Jul 21, 2024
e467ebd
repr returns str for now. More readable in arrays.
jagerber48 Jul 21, 2024
d64126b
add correlated_values and covariance_matrix functions
jagerber48 Jul 21, 2024
2dd15f1
refactor ToUFunc to not require the ufloat_params input
jagerber48 Jul 21, 2024
91da3cb
some reorganization and comments
jagerber48 Jul 22, 2024
1f8fe7d
get rid of ufloat for now
jagerber48 Jul 22, 2024
400545c
no repr monkey patch
jagerber48 Jul 22, 2024
3ac5f02
refactor UAtom and UCombo, UCombo supports + and * now
jagerber48 Jul 23, 2024
7311885
move NotImplemented call out of ToUFunc wrapper and into the reflexiv…
jagerber48 Jul 24, 2024
cd10953
to_uarray_func
jagerber48 Jul 24, 2024
53f962c
rename
jagerber48 Jul 24, 2024
2dcebdb
updates, working on numpy
jagerber48 Jul 24, 2024
257e1aa
ExpandedUCombo dict functions
jagerber48 Jul 24, 2024
c541517
more accessors for expanded uncertainty
jagerber48 Jul 24, 2024
dde5156
UAtom __str__
jagerber48 Jul 24, 2024
b9e254c
some UArray tests
jagerber48 Jul 24, 2024
52af895
incorporate changes from other branch
jagerber48 Jul 25, 2024
ddcd285
tag and strip
jagerber48 Jul 25, 2024
9ca6f1a
begin modifying tests
jagerber48 Jul 25, 2024
3b26d4a
copy test --- basically reverse behavior of some tests
jagerber48 Jul 25, 2024
6db3a54
slots test
jagerber48 Jul 25, 2024
7f98ec4
test comparison ops
jagerber48 Jul 25, 2024
b6d2fed
bug call out
jagerber48 Jul 25, 2024
2152c26
type hint
jagerber48 Jul 25, 2024
7208615
test_wrapped_func_no_args_no_kwargs
jagerber48 Jul 25, 2024
ad94860
support None function again
jagerber48 Jul 25, 2024
5188151
another wrap test
jagerber48 Jul 26, 2024
7658770
merge
jagerber48 Aug 16, 2024
6d54393
Various tests, including covariance tests
jagerber48 Aug 17, 2024
56c4d3c
more tests
jagerber48 Aug 17, 2024
f17825f
wrap tests
jagerber48 Aug 17, 2024
ee9776d
all test_uncertainties tests passing
jagerber48 Aug 17, 2024
74ad6ab
single input function derivative comparisons
jagerber48 Aug 17, 2024
64b144a
double input tests
jagerber48 Aug 17, 2024
baa676a
test double inputs
jagerber48 Aug 17, 2024
b6795c9
tests
jagerber48 Aug 17, 2024
9d61883
some tests
jagerber48 Aug 20, 2024
tests/data/double_inputs.json (86 additions, 0 deletions)
@@ -0,0 +1,86 @@
{
"real": [
[
61.948769256033415,
70.5550954777836
],
[
46.850712521157874,
-65.85302740822756
],
[
-94.01945784686339,
53.2050384231714
],
[
-1.8714783718018992,
-40.62274499568404
],
[
55.48271966281476,
-59.66716850518983
],
[
85.57625158786456,
18.923183930053142
],
[
-25.543794819578864,
28.458901597667307
],
[
-58.22572652851596,
-55.149994961379136
],
[
19.7847891211907,
13.045151558451337
],
[
-78.72099898638444,
29.78965840724598
]
],
"positive": [
[
39.62120657307086,
38.607310262495766
],
[
54.33981543838278,
28.062652449376692
],
[
17.743737673925896,
1.7777965570752063
],
[
85.60124763154259,
50.30480800764012
],
[
16.00414707453346,
7.824807251032462
],
[
20.264307051880557,
8.596303854680299
],
[
41.032040066580464,
52.56582126667049
],
[
3.880748834752179,
68.13725191777147
],
[
7.612002572953491,
59.60418726181359
],
[
90.97870010384057,
77.57227867030959
]
]
}
tests/data/gen_math_input.py (35 additions, 0 deletions)
@@ -0,0 +1,35 @@
import json
import random


valid_inputs_dict = {}


def main():
    num_reps = 10

    single_inputs_dict = {
        "real": [random.uniform(-100, 100) for _ in range(num_reps)],
        "positive": [random.uniform(0, 100) for _ in range(num_reps)],
        "minus_one_to_plus_one": [random.uniform(-1, +1) for _ in range(num_reps)],
        "greater_than_one": [random.uniform(+1, 100) for _ in range(num_reps)],
    }
    with open("single_inputs.json", "w+") as f:
        json.dump(single_inputs_dict, f, indent=True)

    double_inputs_dict = {
        "real": [
            [random.uniform(-100, 100), random.uniform(-100, +100)]
            for _ in range(num_reps)
        ],
        "positive": [
            [random.uniform(0, 100), random.uniform(0, +100)]
            for _ in range(num_reps)
        ],
    }
    with open("double_inputs.json", "w+") as f:
        json.dump(double_inputs_dict, f, indent=True)


if __name__ == "__main__":
    main()
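For context on how these generated JSON files are meant to be used: tests can load them and feed each input category to functions whose domain that category satisfies. The sketch below is illustrative only and is not taken from this PR; in particular, the category-to-function mapping and the path relative to the repository root are assumptions.

import json
import math

# Illustrative mapping of input categories to plain-math functions whose
# domains they satisfy; the PR's real tests exercise the wrapped UFloat
# versions instead.
domain_funcs = {
    "real": [math.sin, math.atan],
    "positive": [math.log, math.sqrt],
    "minus_one_to_plus_one": [math.asin, math.atanh],
    "greater_than_one": [math.acosh],
}

with open("tests/data/single_inputs.json") as f:
    single_inputs = json.load(f)

for category, funcs in domain_funcs.items():
    for x in single_inputs[category]:
        for func in funcs:
            # Every pre-generated input lies inside the function's domain,
            # so the result is always a finite float.
            assert math.isfinite(func(x))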
tests/data/single_inputs.json (50 additions, 0 deletions)
@@ -0,0 +1,50 @@
{
"real": [
-36.60911685237882,
-54.04795785278229,
35.226273393828535,
16.722777334693546,
-55.5181324887325,
-62.49825495233803,
50.512955649849545,
-47.63011552984197,
90.78699381342642,
-72.40556773563449
],
"positive": [
5.07297531068115,
64.62026513266443,
21.952976518206846,
90.61088865462847,
59.81071602890059,
46.17566226725043,
30.982963044884336,
5.6489912218142475,
97.59743784477794,
28.722096237490323
],
"minus_one_to_plus_one": [
-0.9412686734683116,
0.3670101257679639,
-0.11887039329301285,
0.2205312239720758,
-0.9996974354519661,
0.9117325174017104,
-0.813521041155469,
0.8869249308007081,
0.9985145705229643,
-0.9749926023995483
],
"greater_than_one": [
96.9308840672482,
31.44674643194246,
70.78595897202372,
7.181134117830289,
83.0726592694887,
73.19779748965216,
96.57319519176947,
32.9817245997553,
31.64207124559558,
71.43971257472222
]
}
tests/helpers.py (27 additions, 46 deletions)
@@ -1,10 +1,20 @@
 import random
 from math import isnan, isinf
 
+from uncertainties.new import UFloat
 import uncertainties.core as uncert_core
 from uncertainties.core import ufloat, AffineScalarFunc
 
 
+def get_single_uatom_and_weight(uval: UFloat):
+    error_components = uval.error_components
+    if len(error_components) != 1:
+        raise ValueError("uval does not have exactly 1 UAtom.")
+    uatom = next(iter(error_components))
+    weight = error_components[uatom]
+    return uatom, weight
+
+
 def power_all_cases(op):
     """
     Checks all cases for the value and derivatives of power-like
@@ -163,7 +173,7 @@ def power_wrt_ref(op, ref_op):
 # Utilities for unit testing
 
 
-def numbers_close(x, y, tolerance=1e-6):
+def numbers_close(x, y, tolerance=1e-6, fractional=False):
     """
     Returns True if the given floats are close enough.
 
@@ -177,18 +187,19 @@ def numbers_close(x, y, tolerance=1e-6):
 
     # Instead of using a try and ZeroDivisionError, we do a test,
     # NaN could appear silently:
-
-    if x != 0 and y != 0:
-        if isinf(x):
-            return isinf(y)
-        elif isnan(x):
-            return isnan(y)
-        else:
-            # Symmetric form of the test:
-            return 2 * abs(x - y) / (abs(x) + abs(y)) < tolerance
-
-    else:  # Either x or y is zero
-        return abs(x or y) < tolerance
+    if isnan(x):
+        return isnan(y)
+    elif isinf(x):
+        return isinf(y) and (y > 0) is (x > 0)
+    elif x == 0:
+        return abs(y) < tolerance
+    elif y == 0:
+        return abs(x) < tolerance
+    else:
+        diff = abs(x - y)
+        if fractional:
+            diff = 2 * diff / (abs(x + y))
+        return diff < tolerance
 
 
 def ufloats_close(x, y, tolerance=1e-6):
@@ -200,11 +211,8 @@ def ufloats_close(x, y, tolerance=1e-6):
     The tolerance is applied to both the nominal value and the
     standard deviation of the difference between the numbers.
     """
-
     diff = x - y
-    return numbers_close(diff.nominal_value, 0, tolerance) and numbers_close(
-        diff.std_dev, 0, tolerance
-    )
+    return numbers_close(diff.n, 0) and numbers_close(diff.s, 0)
 
 
 class DerivativesDiffer(Exception):
@@ -360,34 +368,7 @@ def compare_derivatives(func, numerical_derivatives, num_args_list=None):
 else:
 
     def uarrays_close(m1, m2, precision=1e-4):
-        """
-        Returns True iff m1 and m2 are almost equal, where elements
-        can be either floats or AffineScalarFunc objects.
-
-        Two independent AffineScalarFunc objects are deemed equal if
-        both their nominal value and uncertainty are equal (up to the
-        given precision).
-
-        m1, m2 -- NumPy arrays.
-
-        precision -- precision passed through to
-        uncertainties.test_uncertainties.numbers_close().
-        """
-
-        # ! numpy.allclose() is similar to this function, but does not
-        # work on arrays that contain numbers with uncertainties, because
-        # of the isinf() function.
-
-        for elmt1, elmt2 in zip(m1.flat, m2.flat):
-            # For a simpler comparison, both elements are
-            # converted to AffineScalarFunc objects:
-            elmt1 = uncert_core.to_affine_scalar(elmt1)
-            elmt2 = uncert_core.to_affine_scalar(elmt2)
-
-            if not numbers_close(elmt1.nominal_value, elmt2.nominal_value, precision):
+        for v1, v2 in zip(m1, m2):
+            if not ufloats_close(v1, v2, tolerance=precision):
                 return False
-
-            if not numbers_close(elmt1.std_dev, elmt2.std_dev, precision):
-                return False
-
         return True
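As a quick illustration of the revised numbers_close semantics introduced above (NaN only matches NaN, infinities must share a sign, zeros are compared absolutely, and fractional=True switches to a relative tolerance), here is a standalone copy with a few spot checks. It duplicates the helper so the snippet runs without importing the test package; it is not an additional change in the diff.

from math import inf, isinf, isnan, nan


def numbers_close(x, y, tolerance=1e-6, fractional=False):
    # NaN only matches NaN, infinities must share a sign, zeros are
    # compared absolutely, and otherwise the difference (optionally
    # scaled by the mean magnitude) must fall below the tolerance.
    if isnan(x):
        return isnan(y)
    elif isinf(x):
        return isinf(y) and (y > 0) is (x > 0)
    elif x == 0:
        return abs(y) < tolerance
    elif y == 0:
        return abs(x) < tolerance
    else:
        diff = abs(x - y)
        if fractional:
            diff = 2 * diff / (abs(x + y))
        return diff < tolerance


assert numbers_close(nan, nan)
assert not numbers_close(inf, -inf)
assert numbers_close(0.0, 5e-7)
assert numbers_close(100.0, 100.00001, fractional=True)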
tests/new/data_gen.py (24 additions, 0 deletions)
@@ -0,0 +1,24 @@
import random


from uncertainties.new.umath import float_funcs_dict


no_other_list = [
    "__abs__",
    "__pos__",
    "__neg__",
    "__trunc__",
]

for func in float_funcs_dict:
    vals = []
    first = random.uniform(-2, +2)
    vals.append(first)
    if func not in no_other_list:
        second = random.uniform(-2, +2)
        vals.append(second)
    vals = tuple(vals)
    unc = random.uniform(-1, 1)

    print(f"(\"{func}\", {vals}, {unc}),")
tests/new/numpy/test_covariance.py (17 additions, 0 deletions)
@@ -0,0 +1,17 @@
import numpy as np

from uncertainties.new import correlated_values, covariance_matrix, UArray


mean_vals = [1, 2, 3]
cov = np.array([
    [1, 0.2, 0.3],
    [0.2, 2, 0.2],
    [0.3, 0.2, 4],
])


def test_covariance():
    ufloats = correlated_values(mean_vals, cov)
    np.testing.assert_array_equal(UArray(ufloats).nominal_value, np.array(mean_vals))
    np.testing.assert_array_almost_equal(covariance_matrix(ufloats), cov)
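For readers new to the round trip this test exercises: constructors in the style of correlated_values typically factor the covariance matrix (for example via Cholesky) and use the factor's rows as weights on independent unit-variance atoms, so recombining the weights reproduces the covariance. That is an assumption about the implementation strategy, not something stated in this diff; below is a numpy-only sketch of the idea.

import numpy as np

cov = np.array([
    [1.0, 0.2, 0.3],
    [0.2, 2.0, 0.2],
    [0.3, 0.2, 4.0],
])

# Factor the (positive-definite) covariance matrix: cov = L @ L.T.
L = np.linalg.cholesky(cov)

# Row i of L holds variable i's weights on independent unit-variance atoms,
# so recombining the rows recovers the pairwise covariances.
np.testing.assert_array_almost_equal(L @ L.T, cov)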