
Conversation

@justinchuby (Collaborator) commented Apr 16, 2025

The numbers seem closer-ish for fft_c2r, but don't really match.

@justinchuby added the `module: torchlib` label (Related to the torch/aten function lib in development) on Apr 16, 2025
Copilot AI (Contributor) left a comment

Copilot reviewed 1 out of 1 changed files in this pull request and generated no comments.

Comments suppressed due to low confidence (1)

onnxscript/function_libs/torch_lib/ops/fft.py:121

  • Verify that op.Shape returns a scalar value when using start and end to extract the dimension. If op.Shape returns a tensor, explicitly extract its value to ensure proper normalization scaling.
scale = (op.CastLike(last_dim_size, self)) / op.CastLike(
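Some context for that suggestion: in ONNX, Shape with the start/end attributes returns a 1-D tensor of length one, not a scalar; broadcasting usually makes the division behave anyway, but squeezing makes the scalar intent explicit. A minimal eager-mode sketch with illustrative inputs (not the PR's actual code):

```python
import numpy as np
from onnxscript import opset18 as op

x = np.zeros((3, 5), dtype=np.float32)  # hypothetical stand-in for the torchlib input
dim_size = op.Shape(x, start=-1)        # 1-D tensor: array([5]), not a scalar
scalar = op.Squeeze(dim_size)           # rank-0 tensor: array(5)
scale = op.CastLike(scalar, x)          # float32 scalar, ready for Mul/Div scaling
```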

Check notice — Code scanning / CodeQL: Unused import

Import of 'ir' is not used.

 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir

Copilot Autofix (AI, 9 months ago)

To fix the issue, remove the unused import of ir from the onnxscript module. This cleans up the code and eliminates the unnecessary dependency; the change belongs on line 17, where the import statement is defined.

Suggested changeset 1: onnxscript/function_libs/torch_lib/ops/fft.py

Autofix patch. Run the following command in your local git repository to apply it:
cat << 'EOF' | git apply
diff --git a/onnxscript/function_libs/torch_lib/ops/fft.py b/onnxscript/function_libs/torch_lib/ops/fft.py
--- a/onnxscript/function_libs/torch_lib/ops/fft.py
+++ b/onnxscript/function_libs/torch_lib/ops/fft.py
@@ -16,3 +16,3 @@
 
-from onnxscript import INT64, ir
+from onnxscript import INT64
 from onnxscript.function_libs.torch_lib.registration import torch_op
EOF
codecov bot commented Apr 16, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.34%. Comparing base (a3e9cbe) to head (7e6182b).
⚠️ Report is 349 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2209      +/-   ##
==========================================
- Coverage   74.36%   74.34%   -0.02%     
==========================================
  Files         230      230              
  Lines       29687    29688       +1     
  Branches     3450     3450              
==========================================
- Hits        22076    22073       -3     
- Misses       6442     6447       +5     
+ Partials     1169     1168       -1     


Check warning — Code scanning / lintrunner: PYLINT/W0611

Unused ir imported from onnxscript (unused-import). See unused-import; to disable, use # pylint: disable=unused-import.

 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir
Check warning — Code scanning / lintrunner: RUFF/F401

onnxscript.ir imported but unused. See https://docs.astral.sh/ruff/rules/unused-import

 from typing import Optional, Sequence

-from onnxscript import INT64
+from onnxscript import INT64, ir
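Both warnings flag the same import, and the PR resolves them by removing it (see the autofix above). Purely for reference, if an unused import ever had to stay, both tools could be silenced inline, e.g.:

```python
from onnxscript import INT64, ir  # noqa: F401  # pylint: disable=unused-import
```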
Code context (onnxscript/function_libs/torch_lib/ops/fft.py):

* scale
)
transformed = _fftn_onnx_normalization(
transformed,

@bmehta001 (Contributor) commented Apr 17, 2025:
Perhaps replace line 137 (op.Shape...) with op.CastLike(last_dim_size, self) and then remove scale? Would that yield the same/better results?

@justinchuby (Collaborator, Author) replied:

I thought last_dim_size was op.Shape(transformed, start=dimension, end=dimension + 1)? Let me try.

@bmehta001 (Contributor) replied:

Oh, nvm, sorry, you're completely right: op.Shape(transformed, start=dimension, end=dimension + 1) is different between lines 122 and 137. But your code made me realize that, without modifying anything else, line 137 could simply be replaced with last_dim_size to save a call to op.Shape.
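A sketch of that before/after, using hypothetical stand-ins for the thread's names (`transformed`, `dimension`, the scale denominator) rather than the PR's actual code:

```python
import numpy as np
from onnxscript import opset18 as op

transformed = np.zeros((4, 7), dtype=np.float32)  # hypothetical stand-in
dimension = 1
denominator = np.array(7, dtype=np.int64)         # hypothetical scale denominator

# Computed once earlier in the function (the "line 122" value):
last_dim_size = op.Shape(transformed, start=dimension, end=dimension + 1)

# Before: a second, redundant Shape call feeds the scale (the "line 137" spot).
scale_before = op.Div(
    op.CastLike(op.Shape(transformed, start=dimension, end=dimension + 1), transformed),
    op.CastLike(denominator, transformed),
)

# After: reuse last_dim_size, saving one op.Shape node in the graph.
scale_after = op.Div(
    op.CastLike(last_dim_size, transformed),
    op.CastLike(denominator, transformed),
)
```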

@bmehta001 (Contributor) left a review comment:

Perhaps simplify the normalization by passing the dimension size you want directly to the normalization function. I think there is an underlying issue with my c2r implementation, though, because it should be able to support multiple axes. I can try to see if something like what is done in onnx/onnx#6016 will help.
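A minimal sketch of that first suggestion, assuming ATen's encoding of the normalization argument (0 = none, 1 = divide by sqrt(n), 2 = divide by n); the name and signature here are illustrative, not the PR's actual helper:

```python
from onnxscript import opset18 as op

def _normalization_sketch(transformed, signal_size, normalization: int):
    # The caller passes the signal size it wants directly, so this helper
    # no longer recomputes shapes and works the same for one or many axes.
    n = op.CastLike(signal_size, transformed)
    if normalization == 1:     # "ortho": scale by 1/sqrt(n)
        return op.Div(transformed, op.Sqrt(n))
    if normalization == 2:     # scale by 1/n
        return op.Div(transformed, n)
    return transformed         # 0: no scaling
```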

github-project-automation bot moved this from Todo to In Progress in ONNX Script Review Board on Apr 17, 2025
@justinchuby (Collaborator, Author) commented:

> Perhaps simplify the normalization by passing the dimension size you want directly to the normalization function. I think there is an underlying issue with my c2r implementation, though, because it should be able to support multiple axes. I can try to see if something like what is done in onnx/onnx#6016 will help.

👍 looks like they recreated the conjugate part. It's like doubling the work that was meant to be saved, but I guess that's the best option right now?
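For readers following the reference: the onnx/onnx#6016 approach rebuilds the conjugate-symmetric tail of the one-sided spectrum and then runs a full complex inverse FFT, which is exactly the redundancy irfft exists to avoid. A conceptual NumPy sketch (illustrative, not the PR's code):

```python
import numpy as np

def c2r_via_full_ifft(half_spectrum: np.ndarray, n: int) -> np.ndarray:
    # half_spectrum is the one-sided rfft output, length n // 2 + 1.
    # Mirror the interior bins with conjugation to rebuild the full
    # spectrum, then take the real part of a full-length ifft.
    tail = np.conj(half_spectrum[1 : n - n // 2][::-1])
    full = np.concatenate([half_spectrum, tail])
    return np.fft.ifft(full).real

x = np.random.randn(8)
np.testing.assert_allclose(c2r_via_full_ifft(np.fft.rfft(x), n=8), x, atol=1e-12)
```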

@justinchuby (Collaborator, Author) commented:

Fixed differently


Labels: module: torchlib (Related to the torch/aten function lib in development)

3 participants