
[i will merge] Upgrade python version to 3.9 #488

Merged: 7 commits merged from msaroufim/310 into main on Jul 9, 2024
Conversation

msaroufim (Member) commented on Jul 9, 2024

We're trying to upgrade from 3.8 to a more modern Python version to take advantage of newer language features.

Python 3.8 is reaching EOL imminently, so we must upgrade to at least 3.9: https://devguide.python.org/versions/

If we were to do what PyTorch does, we'd simply upgrade to 3.9 and try to avoid using 3.10+ features (pytorch/rfcs#65).
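
For concreteness, the boundary that policy draws looks roughly like this (illustrative snippets, not code from this repo):

```python
# Illustrative only: what "target 3.9, avoid 3.10+" means in practice.
# Fine under a 3.9 floor: PEP 585 builtin generics and PEP 584 dict union.
def merge_configs(base: dict[str, int], override: dict[str, int]) -> dict[str, int]:
    return base | override

# Avoided under that policy: PEP 604 unions (`int | None`) and
# `match`/`case` statements, both of which require Python 3.10+.
```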

Upgrading to 3.9
This also works and is probably the least controversial thing to merge

Upgrading to 3.10
This just works: CI is green, so we could merge this PR as is.

Upgrading to 3.11
This made the DoRA optimizer crash - cc @jeromeku in case you're interested:

  FAILED test/dora/test_dora_fusion.py::test_dora_column_norm[4096x4096x16-False-True-True-True-torch.float32] - AttributeError: 'CudaDriver' object has no attribute 'active'
  FAILED test/dora/test_dora_fusion.py::test_dora_column_norm[4096x4096x16-False-True-True-True-torch.float16] - AttributeError: 'CudaDriver' object has no attribute 'active'
  FAILED test/dora/test_dora_fusion.py::test_dora_column_norm[4096x4096x16-False-True-True-True-torch.bfloat16] - AttributeError: 'CudaDriver' object has no attribute 'active'
  FAILED test/dora/test_dora_fusion.py::test_dora_matmul[512x4096x4096-torch.float32-True-True] - triton.compiler.errors.CompilationError: at 69:29:            _0 = tl.zeros((1, 1), dtype=C.dtype.element_ty)
              a = tl.load(A, mask=rk[None, :] < k_remaining, other=_0)
              b = tl.load(B, mask=rk[:, None] < k_remaining, other=_0)
          if AB_DTYPE is not None:
              a = a.to(AB_DTYPE)
              b = b.to(AB_DTYPE)
          if fp8_fast_accum:
              acc = tl.dot(
                  a, b, acc, out_dtype=acc_dtype, input_precision=input_precision
              )
          else:
              acc += tl.dot(a, b, out_dtype=acc_dtype, input_precision=input_precision)
                               ^
  TypeError("dot() got an unexpected keyword argument 'input_precision'")
  FAILED test/dora/test_dora_fusion.py::test_dora_matmul[512x4096x4096-torch.float16-True-True] - triton.compiler.errors.CompilationError
  FAILED test/dora/test_dora_fusion.py::test_dora_matmul[512x4096x4096-torch.bfloat16-True-True] - triton.compiler.errors.CompilationError

(The float16 and bfloat16 cases fail with the same input_precision TypeError shown above for float32.)

pytorch-bot (bot) commented on Jul 9, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/488

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 7e2c6bc with merge base 12ac498:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label (authors need to sign the CLA before a PR can be reviewed) on Jul 9, 2024
msaroufim changed the title from "msaroufim/310" to "Upgrade pytorch version to 3.11" on Jul 9, 2024
msaroufim changed the title from "Upgrade pytorch version to 3.11" to "Upgrade pytorch version to 3.10" on Jul 9, 2024
msaroufim changed the title from "Upgrade pytorch version to 3.10" to "[i will merge] Upgrade pytorch version to 3.10" on Jul 9, 2024
msaroufim changed the title from "[i will merge] Upgrade pytorch version to 3.10" to "[i will merge] Upgrade pytorch version to 3.9" on Jul 9, 2024
msaroufim (Member, Author) commented:

FYI @ebsmothers - for now I just decided to drop 3.8. I think we will end up being conservative with upgrading Python versions, but we will revisit with the rest of the ao team if this ends up being a controversial decision.
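
Dropping 3.8 usually lands as a one-line metadata bump; a sketch of the kind of setup.py change this implies (field values are assumptions, not copied from the actual diff):

```python
# Illustrative setup.py fragment, assuming a standard setuptools layout.
from setuptools import find_packages, setup

setup(
    name="torchao",
    packages=find_packages(),
    python_requires=">=3.9",  # raised from ">=3.8" (assumed prior value)
    classifiers=[
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
)
```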

msaroufim merged commit 05038a1 into main on Jul 9, 2024
13 checks passed
msaroufim deleted the msaroufim/310 branch on Jul 9, 2024 at 04:57
msaroufim changed the title from "[i will merge] Upgrade pytorch version to 3.9" to "[i will merge] Upgrade python version to 3.9" on Jul 9, 2024
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request on Jul 31, 2024:
* push

* Upgrade to python 3.11

* push

* push

* push

* Update README.md

* Update regression_test.yml