
feat(pack): support output.entryRootExport #2550

Merged
fireairforce merged 1 commit into next from support-expose-entry-exports on Feb 2, 2026

Conversation

@fireairforce
Contributor

@fireairforce fireairforce commented Jan 28, 2026

Summary

closes: #2532

ref pr: utooland/next.js#108

This adds a new configuration option:

{
  "output": {
    "entryRootExport": true
  }
}

When enabled, utoopack structures its build output following the webpack library export format, matching the output format that qiankun requires.
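A minimal sketch of what the enabled output reduces to. All identifier names here (the `my-sub-app` library name and the lifecycle hooks) are illustrative assumptions, not taken from this PR; the actual generated code lives in the runtime changes under review.

```javascript
// Hypothetical illustration of the webpack-library-style output that
// qiankun consumes. The entry module's exports are gathered into one
// namespace object; for a qiankun sub-app these are the lifecycle hooks.
const entryExports = {
  bootstrap: async () => { /* qiankun lifecycle: one-time setup */ },
  mount: async () => { /* qiankun lifecycle: render the sub-app */ },
  unmount: async () => { /* qiankun lifecycle: tear down */ },
};

// Expose the whole namespace as a single library object on the global
// scope (window in the browser, globalThis otherwise), rather than
// spilling individual exports onto the global namespace.
const globalScope = typeof window !== "undefined" ? window : globalThis;
globalScope["my-sub-app"] = entryExports;
```

qiankun then looks up the sub-app's global by name and calls the exported lifecycle hooks, which is why a single well-known library object is the required shape.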

Test Plan

Added snapshot tests.

@gemini-code-assist
Contributor

Summary of Changes

Hello @fireairforce, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the pack bundler by introducing a new configuration option, output.exposeEntryExports. This feature provides a mechanism to automatically expose the named exports of an entry module to the global JavaScript scope, such as window or globalThis. The implementation involves modifications to the core configuration, schema definition, and client-side runtime to correctly process and apply this setting, ensuring that the module's public interface is accessible globally when desired. A new example and associated snapshot tests have been added to validate the functionality and provide a clear demonstration of its use.

Highlights

  • New Configuration Option: Introduced output.exposeEntryExports to the OutputConfig in pack-core, allowing entry module exports to be exposed to the global scope.
  • Runtime Implementation: Added logic within the client chunking context to read and apply the exposeEntryExports setting, enabling the global exposure functionality.
  • Schema Updates: Updated the pack-schema to include the new exposeEntryExports field, ensuring configuration validation and documentation.
  • New Example and Tests: Included a new example (examples/expose-entry-exports) demonstrating the usage of exposeEntryExports, along with corresponding snapshot tests to verify its behavior.
  • Dependency Updates: Modified package.json and package-lock.json to incorporate the new example workspace and adjust peer dependency flags for several packages.

@fireairforce fireairforce changed the title from feat(pack): support output.exposeEntryExports to [WIP] feat(pack): support output.exposeEntryExports on Jan 28, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new output.exposeEntryExports configuration option. When enabled, this feature exposes the exports of the entry module to the global scope, making them accessible via a global variable. The changes include adding the configuration option, updating the client chunking context, and adding a new example and snapshot tests to validate the functionality.

The implementation appears to be correct for its purpose. My main feedback is to improve the clarity of the documentation for the new exposeEntryExports option in both pack-core and pack-schema. The current description could be misinterpreted. I've left specific suggestions to make it more precise about how the exports are exposed (as a single library object on the global scope, rather than polluting the global namespace with individual exports).

@fireairforce fireairforce force-pushed the support-expose-entry-exports branch 2 times, most recently from 4fff6b2 to 6bc956b on January 30, 2026 07:54
@fireairforce fireairforce changed the title from [WIP] feat(pack): support output.exposeEntryExports to feat(pack): support output.exposeEntryExports on Jan 30, 2026
@fireairforce fireairforce force-pushed the support-expose-entry-exports branch from 6bc956b to ceb8e0c on January 30, 2026 09:29
@fireairforce fireairforce changed the title from feat(pack): support output.exposeEntryExports to feat(pack): support output.entryRootExport on Jan 30, 2026
@fireairforce fireairforce force-pushed the support-expose-entry-exports branch from ceb8e0c to 1d163bc on February 2, 2026 03:10
@github-actions

github-actions bot commented Feb 2, 2026

📊 Performance Benchmark Report (with-antd)

🚀 Utoopack Performance Report: Async Task Scheduling Overhead Analysis

Report ID: utoopack_performance_report_20260202_031845
Generated: 2026-02-02 03:18:45
Trace File: trace_antd.json (1.5GB, 8.00M events)
Test Project: Unknown Project


📊 Executive Summary

This report analyzes the performance of Utoopack/Turbopack, covering the full spectrum of the Performance Analysis Protocol (P0-P4).

Key Findings

Metric Value Assessment
Total Wall Time 10,040.3 ms Baseline
Total Thread Work 87,211.7 ms ~8.7x parallelism
Thread Utilization 62.0% 🆗 Average
turbo_tasks::function Invocations 3,880,023 Total count
Meaningful Tasks (≥ 10µs) 1,520,622 (39.2% of total)
Tracing Noise (< 10µs) 2,359,401 (60.8% of total)

Workload Distribution by Tier

Category Tasks Total Time (ms) % of Work
P0: Runtime/Resolution 1,040,707 52,473.5 60.2%
P1: I/O & Heavy Tasks 37,375 3,636.5 4.2%
P3: Asset Pipeline 27,809 4,491.6 5.2%
P4: Bridge/Interop 0 0.0 0.0%
Other 414,731 19,241.2 22.1%

⚡ Parallelization Analysis (P0-P2)

Thread Utilization

Metric Value
Number of Threads 14
Total Thread Work 87,211.7 ms
Avg Work per Thread 6,229.4 ms
Achieved Parallelism 8.69x
Thread Utilization 62.0%

Assessment: With 14 threads available, achieving 8.7x parallelism indicates significant loss of potential parallelism.
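The parallelism and utilization figures above follow directly from the wall-time and thread-work numbers reported earlier; a quick check of the arithmetic:

```javascript
// Numbers taken from the report's Executive Summary table.
const wallTimeMs = 10040.3;   // Total Wall Time
const threadWorkMs = 87211.7; // Total Thread Work summed over all threads
const threadCount = 14;

// Parallelism: how many threads' worth of work ran per wall-clock ms.
const parallelism = threadWorkMs / wallTimeMs; // ~8.69x

// Utilization: achieved parallelism relative to the available threads.
const utilization = parallelism / threadCount; // ~0.62, i.e. 62.0%
```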


📈 Top 20 Tasks (Global)

These are the most significant tasks by total duration:

Total (ms) Count Avg (µs) % Work Task Name
43,713.3 865,320 50.5 50.1% turbo_tasks::function
8,161.8 124,062 65.8 9.4% task execution completed
6,258.1 81,479 76.8 7.2% turbo_tasks::resolve_call
3,046.7 32,448 93.9 3.5% analyze ecmascript module
2,129.8 66,945 31.8 2.4% precompute code generation
2,009.8 68,077 29.5 2.3% resolving
1,805.3 35,753 50.5 2.1% module
1,769.7 20,471 86.5 2.0% effects processing
1,462.9 11,583 126.3 1.7% process parse result
1,200.3 6,491 184.9 1.4% parse ecmascript
1,025.0 31,800 32.2 1.2% process module
977.7 35,878 27.2 1.1% internal resolving
759.6 28,389 26.8 0.9% resolve_relative_request
666.0 1,916 347.6 0.8% analyze variable values
491.7 1,939 253.6 0.6% swc_parse
453.6 15,449 29.4 0.5% resolve_module_request
445.6 19,569 22.8 0.5% handle_after_resolve_plugins
421.6 10,884 38.7 0.5% code generation
394.9 17,598 22.4 0.5% resolved
348.4 4,253 81.9 0.4% read file

🔍 Deep Dive by Tier

🔴 Tier 1: Runtime & Resolution (P0)

Focus: Task scheduling and dependency resolution.

Metric Value Status
Total Scheduling Time 52,473.5 ms ⚠️ High
Resolution Hotspots 9 tasks 🔍 Check Top Tasks

Potential P0 Issues:

  • Low thread utilization (62.0%) suggests critical path serialization or lock contention.
  • 2,359,401 tasks < 10µs (60.8%) contribute to scheduler pressure.

🟠 Tier 2: Physical & Resource Barriers (P1)

Focus: Hardware utilization, I/O, and heavy monoliths.

Metric Value Status
I/O Work (Estimated) 3,636.5 ms ✅ Healthy
Large Tasks (> 100ms) 16 🚨 Critical

Potential P1 Issues:

  • 16 tasks exceed 100ms. These "Heavy Monoliths" are prime candidates for splitting.

🟡 Tier 3: Architecture & Asset Pipeline (P2-P3)

Focus: Global state and transformation pipeline.

Metric Value Status
Asset Processing (P3) 4,491.6 ms 5.2% of work
Bridge Overhead (P4) 0.0 ms ✅ Low

💡 Recommendations (Prioritized P0-P2)

🚨 Critical: (P0) Improvement

Problem: 62.0% thread utilization.
Action:

  1. Profile lock contention if utilization < 60%.
  2. Convert sequential await chains to try_join.
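The codebase this report targets is Rust, where "convert sequential await chains" means macros such as `futures::try_join!` or `tokio::try_join!`. The same idea expressed in JavaScript terms, with hypothetical task names standing in for independent turbo_tasks invocations:

```javascript
// Hypothetical independent async tasks; names are illustrative only.
const loadConfig = async () => ({ mode: "production" });
const loadManifest = async () => ({ entries: ["main"] });

// Sequential awaits serialize the critical path:
//   const config = await loadConfig();
//   const manifest = await loadManifest();
// Joining independent tasks lets them overlap, which is the pattern
// the recommendation above refers to:
async function build() {
  const [config, manifest] = await Promise.all([
    loadConfig(),
    loadManifest(),
  ]);
  return { config, manifest };
}
```

The transformation only helps when the joined tasks are genuinely independent; if one task's input depends on another's output, the chain is inherently sequential.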

⚠️ High Priority: (P1) Optimization

Problem: 16 heavy tasks detected.
Action:

  1. Identify module-level bottlenecks (e.g., barrel files).
  2. Optimize I/O batching for metadata.

⚠️ Medium Priority: (P3) Pipeline Efficiency

Action:

  1. Review transformation logic for frequently changed assets.
  2. Minimize cross-language serialization (P4) if overhead exceeds 10%.

📐 Diagnostic Signal Summary

Signal Status Finding
Tracing Noise (P0) ⚠️ Significant 60.8% of tasks < 10µs
Thread Utilization (P0) 🆗 Average 62.0% utilization
Heavy Monoliths (P1) ⚠️ Detected 16 tasks > 100ms
Asset Pipeline (P3) 🔍 Review 4,491.6 ms total
Bridge/Interop (P4) ✅ Low 0.0 ms total

🎯 Action Items (Comprehensive P0-P4)

  1. [P0] Profile lock contention to address 38% lost parallelism
  2. [P1] Breakdown heavy monolith tasks (>100ms) to improve granularity
  3. [P1] Review I/O patterns for potential batching opportunities
  4. [P3] Optimize asset transformation pipeline hot-spots
  5. [P4] Reduce "chatty" bridge operations if interop overhead is significant

Report generated by Utoopack Performance Analysis Agent on 2026-02-02
Following: Utoopack Performance Analysis Agent Protocol

@xusd320
Contributor

xusd320 commented Feb 2, 2026

This option shouldn't be allowed under library, right? Add a validation check.

@fireairforce
Contributor Author

fireairforce commented Feb 2, 2026

This option shouldn't be allowed under library, right? Add a validation check.

If it's configured there, it simply has no effect. There are many options like this now; do they all need validation? At the moment this utoopack option exists only to adapt qiankun applications; it's not meant for users to enable on its own...

Nowadays the library format is basically never used on its own:

#2467 library builds still need many other runtime adaptations, and the utoopack used by father is still pinned to a fairly old version that can't be upgraded.

@xusd320
Contributor

xusd320 commented Feb 2, 2026

Maybe put this under an experimental config instead; this probably won't be a mainstream usage going forward.

If it's configured there, it simply has no effect. There are many options like this now

Which other options have no effect when configured?

@fireairforce fireairforce merged commit 628efb0 into next Feb 2, 2026
16 checks passed
@fireairforce fireairforce deleted the support-expose-entry-exports branch February 2, 2026 07:49

Development

Successfully merging this pull request may close these issues.

[Utoopack] Adapt umi qiankun sub-applications

2 participants