
Introduce XPU scope profiler extending existing XPU profiler plugin#1174

Open
moksiuc wants to merge 109 commits into pytorch:main from moksiuc:moksiuci_6674_scope_profiler

Conversation

@moksiuc
Contributor

@moksiuc moksiuc commented Nov 13, 2025

Summary:

Now that XPU is a built-in PyTorch device, profiler support is an indispensable part of functional completeness. This PR introduces the XPU scope profiler by extending the existing XPU profiler plugin. The scope profiler is built on the Intel PTI toolkit (https://github.com/intel/pti-gpu) and the underlying SYCL runtime, and allows gathering XPU hardware metrics. The LIBKINETO_NOXPUPTI option enables or disables the whole XPU profiler plugin at kineto build time.

Changes:

  • Added a new ActivityType, XPU_SCOPE_PROFILER, enabling the new scope profiler
  • Output XPU hardware metrics from the new scope profiler in Perfetto counters display mode ("C")
  • Added a gtest
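For context on the second bullet: Chrome/Perfetto trace events with phase "C" are rendered as counter tracks. A minimal sketch of what such a counter sample could look like in the exported trace JSON (the metric name and values are made up for illustration; they are not the actual fields this PR emits):

```python
import json

# Hypothetical counter event in the Chrome trace event JSON format.
# Phase "C" tells Perfetto / chrome://tracing to render the values in
# "args" as a counter track over time.
counter_event = {
    "ph": "C",                 # counter phase
    "name": "XPU metric",      # counter track name (illustrative)
    "pid": 0,                  # process the track is attached to
    "ts": 1234,                # timestamp in microseconds
    "args": {"xpu_metric_value": 87.5},  # metric name/value (made up)
}
print(json.dumps(counter_event))
```

Each sample at a new `ts` for the same `name`/`pid` pair adds a point to the counter track.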

@meta-cla meta-cla bot added the cla signed label Nov 13, 2025
@moksiuc moksiuc changed the title scope profiler squashed Introduce XPU scope profiler extending existing XPU profiler plugin Nov 13, 2025
@moksiuc
Contributor Author

moksiuc commented Nov 13, 2025

@EikanWang, @gujinghui

Comment thread libkineto/src/plugin/xpupti/FindSYCLToolkit.cmake Outdated
@gujinghui

@moksiuc It's great that we are going to update our PTI integration code and introduce a new profiler path.
Could you help address the questions below?

  1. Is the RangeProfiler for CUDA the counterpart of the ScopeProfiler? It looks like the RangeProfiler is not enabled in PyTorch by default so far. Do you know why?
  2. This PR is too large to review. Can we split it into several PRs? For example, one PR for code refactoring or cleanup of the kineto or PTI changes, one or two PRs for the ScopeProfiler, one PR for the ChromeTraceLogger enhancement, plus test cases for each PR.
  3. BTW, CUDA provides the CUDA_DRIVER activity to trace driver actions. We should provide L0 actions as the counterpart, right? I remember PTI should be able to do that. Do we have a plan to cover it?
    {"cuda_driver", ActivityType::CUDA_DRIVER},

@moksiuc
Contributor Author

moksiuc commented Nov 14, 2025

  1. Is the RangeProfiler for CUDA the counterpart of the ScopeProfiler? It looks like the RangeProfiler is not enabled in PyTorch by default so far. Do you know why?

    It is enabled by providing experimental_config=_ExperimentalConfig(...). I don't know why it is this way, but we are enabling our profiler the same way. One reason may be that the Range/Scope profiler requires parameters, such as HW metric names, that are passed through _ExperimentalConfig.
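For reference, a minimal sketch of the CUDA-side pattern referred to above (the metric name is illustrative, and the XPU-side parameters introduced by this PR may differ):

```python
from torch._C._profiler import _ExperimentalConfig

# The range/scope-style profiler is opted into via the experimental
# config; HW metric names are passed through it because the default
# profiler path exposes no other channel for them.
cfg = _ExperimentalConfig(
    profiler_metrics=["kineto__tensor_core_insts"],  # illustrative metric
    profiler_measure_per_kernel=True,
)
print(type(cfg).__name__)
```

The object is then passed to torch.profiler.profile(..., experimental_config=cfg); per the reply above, the XPU scope profiler is enabled through the same mechanism.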

@moksiuc
Contributor Author

moksiuc commented Nov 14, 2025

  3. BTW, CUDA provides the CUDA_DRIVER activity to trace driver actions. We should provide L0 actions as the counterpart, right? I remember PTI should be able to do that. Do we have a plan to cover it?
    {"cuda_driver", ActivityType::CUDA_DRIVER},

For sure not in this PR. I'll add this to our list of tasks.

@moksiuc
Contributor Author

moksiuc commented Nov 17, 2025

  2. This PR is too large to review. Can we split it into several PRs? For example, one PR for code refactoring or cleanup of the kineto or PTI changes, one or two PRs for the ScopeProfiler, one PR for the ChromeTraceLogger enhancement, plus test cases for each PR.

Extracted the cleanup and the scope profiler config into separate PRs.
This one should be much smaller afterwards.
Currently I don't see further areas to extract into separate PRs, as what would remain is the full scope profiler implementation with tests, and we would prefer not to introduce half an implementation that is not functional.

- removed rangeEnabled
- fix test to align to this removal
- erase used kernelActivity from map
- place of config initialization
- removal of passing unused C compiler flag into test cmake file

@gujinghui gujinghui left a comment


This PR is split to #1177, #1180, and more.

@moksiuc moksiuc marked this pull request as ready for review November 24, 2025 10:01
@gujinghui

@moksiuc Let's close this PR.

@moksiuc
Contributor Author

moksiuc commented Dec 3, 2025

@gujinghui this is the core of the scope profiler. Once the two smaller parts are merged, only the core profiler will remain in this one.

This reverts commit f55b81c.
@gujinghui

@divyanshk This PR is ready now. Could you help review? Thanks.

Contributor

@divyanshk divyanshk left a comment


Left some comments. The important one is in ActivityType.h.

I need to look again at output_json.cpp.

Comment thread libkineto/include/ActivityType.h Outdated
Comment thread libkineto/test/xpupti/XpuptiTestUtilities.cpp Outdated
Comment thread libkineto/src/output_json.cpp Outdated
Comment thread libkineto/src/output_json.cpp Outdated
Comment thread libkineto/include/ActivityType.h Outdated
Comment thread libkineto/libkineto_defs.bzl Outdated
Comment thread libkineto/src/plugin/xpupti/XpuptiActivityApi.cpp
Comment thread libkineto/src/plugin/xpupti/XpuptiScopeProfilerApi.cpp Outdated
@gujinghui

@divyanshk @scotts all comments have been addressed. Could you help review again? Thanks.

@moksiuc moksiuc requested review from divyanshk and scotts April 10, 2026 08:03

Projects

Status: Aged Pending Review


7 participants