"Numpy-isation" of irradiance decomposition functions #1456

Open
wants to merge 16 commits into main

Conversation


@alexisstgc commented May 11, 2022

  • Closes "Numpy-isation" of irradiance decomposition functions #1455
  • I am familiar with the contributing guidelines
  • Tests added
  • Updates entries in docs/sphinx/source/reference for API changes.
  • Adds description and name entries in the appropriate "what's new" file in docs/sphinx/source/whatsnew for all changes. Includes link to the GitHub Issue with :issue:`num` or this Pull Request with :pull:`num`. Includes contributor name and/or GitHub username (link with :ghuser:`user`).
  • New code is fully documented. Includes numpydoc compliant docstrings, examples, and comments where necessary.
  • Pull request is nearly complete and ready for detailed review.
  • Maintainer: Appropriate GitHub Labels and Milestone are assigned to the Pull Request and linked Issue.

@alexisstgc changed the title from "add a todo in disc function" to "Numpy-isation" of irradiance decomposition functions on May 11, 2022
# this is the I0 calculation from the reference
# SSC uses solar constant = 1367.0 (checked 2018 08 15)
I0 = get_extra_radiation(datetime_or_doy, 1370., 'spencer')

# Considering the extra radiation is only time dependent, broadcast it to ghi's dimensions
I0 = np.broadcast_to(I0, ghi.shape).astype(np.float32)
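Editorial aside: a minimal sketch (not from the PR) of what the broadcast above does; the array shapes are illustrative only. I0 is computed per timestamp, so it is 1-D, and broadcasting it against the trailing time axis of a (lat, lon, time) GHI cube replicates it across the map without copying data.

import numpy as np

I0 = np.linspace(1320.0, 1410.0, 24)             # shape (24,): one value per timestep
ghi = np.zeros((50, 60, 24), dtype=np.float32)   # hypothetical (lat, lon, time) cube
I0_grid = np.broadcast_to(I0, ghi.shape)         # read-only view with shape (50, 60, 24)
print(I0_grid.shape, I0_grid.strides)            # map axes get stride 0: no data is copied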
Member

I don't believe that all pvlib users would prefer to use float32 - this seems to be a specific requirement of your use case

Member

Is there any downside to using float32 here? Is I0 known with sufficient accuracy to justify float64?

Member

@wholmgren left a comment

Thanks for the PR @alexisstgc. This seems like a good start. Have you checked out the test failures?


@mikofski
Member

Is there any downside to using float32 here? Is I0 known with sufficient accuracy to justify float64?

Probably not, because the range of I0 is well known, but these compromises sometimes come back to bite you, which is why I always use the maximum precision. Some possible downsides:

  1. if any calculation with I0 involves division, multiplication, logs, powers, or exponents with really big or small numbers, then there will be floating point errors
  2. if anyone comes along later and tries to use disc in an algorithm that requires gradient or Jacobian calculations in which I0 has weak coupling, then the problem could become badly conditioned and any determinants could be singular
  3. in general, floating point errors will increase, which might require updating tests, although it's difficult to know in advance whether the test values will be affected beyond the precision those tests already use, e.g. 1e-8 might be fine or it might not be enough

Some upsides:

  1. faster (doubles the use of SIMD and AVX registers)
  2. less memory (fits 2x more values into L2 cache)
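Editorial aside: a small NumPy check (not part of the PR) that makes the precision trade-off above concrete; the array shape is illustrative only.

import numpy as np

# float32 carries ~7 significant decimal digits, so a value on the order of
# I0 (~1361 W/m^2) is only represented to within roughly 1e-4 W/m^2.
I0_64 = np.float64(1361.123456789)
I0_32 = np.float32(I0_64)
print(np.finfo(np.float32).eps, np.finfo(np.float64).eps)  # ~1.2e-07 vs ~2.2e-16
print(abs(float(I0_32) - float(I0_64)))                    # float32 round-off, below 1e-4 W/m^2

# The memory upside: a hypothetical (600, 600, 24) irradiance cube halves in size.
cube32 = np.zeros((600, 600, 24), dtype=np.float32)
cube64 = np.zeros((600, 600, 24), dtype=np.float64)
print(cube32.nbytes, cube64.nbytes)  # ~34.6 MB vs ~69.1 MB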

@alexisstgc
Author

@mikofski I added the option of specifying the dtype of these variables as an environment variable 'PVLIB_IRR_DTYPE', which defaults to np.float64. This way, it is transparent for users who do not need to reduce the precision.
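Editorial aside: a minimal sketch, assuming an os.environ lookup, of how such a dtype switch could be read. Only the variable name PVLIB_IRR_DTYPE comes from the comment above; the helper name and default handling are hypothetical, not the PR's actual code.

import os
import numpy as np

def _irradiance_dtype():
    # Hypothetical helper: return the dtype named in PVLIB_IRR_DTYPE,
    # falling back to float64 so existing users see no change.
    return np.dtype(os.environ.get('PVLIB_IRR_DTYPE', 'float64'))

# A user who wants single precision would export PVLIB_IRR_DTYPE=float32
I0 = np.asarray([1367.0, 1361.5], dtype=_irradiance_dtype())
print(I0.dtype)  # float64 unless the environment variable says otherwise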

@wholmgren
Member

@alexisstgc can you provide a simplified example of how you're calling this function? I think we can avoid the explicit casting with some care for the input dtypes.
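Editorial aside: a short illustration (not pvlib code) of the dtype-propagation point above. NumPy keeps an array's float32 dtype when it is combined with Python float scalars, so float32 inputs flow through to float32 outputs without explicit astype() calls; the array shape here is hypothetical.

import numpy as np

ghi = np.full((4, 4, 8), 500.0, dtype=np.float32)  # hypothetical (lat, lon, time) input
kt = ghi / 1370.0              # dividing by a Python float does not upcast the array
kt = 0.9 * kt + 0.1            # still float32 after further scalar arithmetic
print(kt.dtype)                # float32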

@alexisstgc
Author

@wholmgren Below is my example. I compute a series of GHI maps, contained in a 3D np.array named ghi with shape (latitude, longitude, time). Each lat/lon map corresponds to a timestep in the pd.DatetimeIndex named times. I also have, for the same timesteps, series of maps of solar zenith, clear-sky GHI and clear-sky DNI data. All the input values are np.float32 and, in the state in which I posted my code here, the returned DNI is float32 as well. However, if I comment out the parts of the code where I explicitly cast to float32 (namely lines 1405, 1612, 1617 and 1676), the output is float64.

dni = dirindex(ghi=ghi, ghi_clearsky=self.csky_ghi, dni_clearsky=self.csky_bni, solar_zenith=solar_zenith, times=self.times)
bhi = dni * np.cos(np.radians(solar_zenith))
dhi = ghi - bhi
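# Note (editorial, per the description above): ghi, solar_zenith and the
# clear-sky maps are (lat, lon, time) float32 arrays and self.times is the
# matching DatetimeIndex; bhi is the beam horizontal component, so the
# closure GHI = DHI + DNI * cos(zenith) yields dhi by subtraction.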
