
feat: add cached PTCs to the state #9047

Closed
nflaig wants to merge 20 commits into unstable from nflaig/cached-ptcs

Conversation

@nflaig
Member

@nflaig nflaig commented Mar 16, 2026

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request optimizes the handling of Payload Timeliness Committees (PTCs) for the Gloas fork by implementing a direct caching mechanism within the BeaconState. This change aims to enhance performance and simplify the access patterns for PTCs by storing currentPtc and previousPtc directly in the state, rather than relying solely on EpochCache. The state transition logic has been updated to manage the rotation of these cached committees, ensuring their availability and correctness throughout slot processing.

Highlights

  • Payload Timeliness Committee (PTC) Caching: Introduced direct caching of currentPtc and previousPtc within the BeaconState for the Gloas fork, moving away from EpochCache for immediate access.
  • Refactored PTC Logic: Moved PTC computation and retrieval functions from EpochCache into dedicated utility functions (packages/state-transition/src/util/gloas.ts and packages/state-transition/src/util/seed.ts) to centralize and streamline their usage.
  • State Transition Integration: Updated the state transition and upgrade processes to correctly initialize and rotate the cached PTCs at each slot and epoch boundary for the Gloas fork.
  • Specification Alignment: Modified specrefs to reflect the new caching mechanism, function locations, and updated specifications for get_ptc, compute_ptc, and get_indexed_payload_attestation.
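The slot-by-slot rotation described in the highlights can be sketched as follows. This is a minimal hypothetical model, not the actual Lodestar implementation: plain arrays stand in for the SSZ committee views, and computePtc is a placeholder for the spec's compute_ptc.

```typescript
// Hypothetical, simplified model of the cached-PTC rotation described above.
// Real Lodestar code operates on SSZ views of the BeaconState; plain arrays
// stand in for the committee vectors and computePtc for the spec's compute_ptc.

interface StateWithPtc {
  slot: number;
  previousPtc: number[]; // PTC of the slot just processed
  currentPtc: number[]; // PTC of state.slot
}

// Placeholder committee computation: deterministic indices derived from slot.
function computePtc(slot: number): number[] {
  return [slot * 2, slot * 2 + 1];
}

// On each slot advance, the committee that was current becomes previousPtc
// and a fresh committee is computed for the new slot.
function rotatePtcs(state: StateWithPtc): void {
  state.previousPtc = state.currentPtc;
  state.currentPtc = computePtc(state.slot);
}

const state: StateWithPtc = {slot: 0, previousPtc: [], currentPtc: computePtc(0)};
state.slot = 1; // advance one slot
rotatePtcs(state);
console.log(state.previousPtc, state.currentPtc);
```

This keeps exactly two committees live at any time, which is all the fork-choice and attestation-validation paths need per the summary above.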


Changelog
  • packages/beacon-node/src/chain/validation/payloadAttestationMessage.ts
    • Imported getPayloadTimelinessCommittee from @lodestar/state-transition.
    • Updated validatePayloadAttestationMessage to use the new getPayloadTimelinessCommittee utility function.
  • packages/state-transition/src/block/processPayloadAttestation.ts
    • Imported getIndexedPayloadAttestation from ../util/gloas.js.
    • Updated processPayloadAttestation to use the new getIndexedPayloadAttestation utility function.
  • packages/state-transition/src/cache/epochCache.ts
    • Removed the gloas import.
    • Removed previousPayloadTimelinessCommittees property and its associated logic.
    • Simplified the getPayloadTimelinessCommittee method to only return the current epoch's PTC.
    • Removed the getIndexedPayloadAttestation method.
  • packages/state-transition/src/slot/upgradeStateToGloas.ts
    • Imported initializePayloadTimelinessCommittee from ../util/gloas.js.
    • Called initializePayloadTimelinessCommittee during the upgradeStateToGloas function.
  • packages/state-transition/src/stateTransition.ts
    • Imported ForkSeq, CachedBeaconStateGloas, and rotatePayloadTimelinessCommittees.
    • Added calls to rotatePayloadTimelinessCommittees within processSlotsWithTransientCache for Gloas fork slot and epoch transitions.
  • packages/state-transition/src/stateView/beaconStateView.ts
    • Updated the JSDoc comment for validatorPTCCommitteeIndex.
  • packages/state-transition/src/util/genesis.ts
    • Imported CachedBeaconStateGloas and initializePayloadTimelinessCommittee.
    • Called initializePayloadTimelinessCommittee when initializing the BeaconState from Eth1 for the Gloas fork.
  • packages/state-transition/src/util/gloas.ts
    • Imported Slot, ssz, and computePayloadTimelinessCommitteeAtSlot.
    • Added computePayloadTimelinessCommittee to calculate the PTC for the current slot.
    • Added initializePayloadTimelinessCommittee to set the initial currentPtc in the state.
    • Added rotatePayloadTimelinessCommittees to shift currentPtc to previousPtc and compute the new currentPtc.
    • Added getPayloadTimelinessCommittee to retrieve the cached PTC for the current or previous slot.
    • Added getIndexedPayloadAttestation to compute indexed payload attestations using the new getPayloadTimelinessCommittee.
  • packages/state-transition/src/util/seed.ts
    • Imported Slot.
    • Added computePayloadTimelinessCommitteeAtSlot function for single slot PTC computation.
  • packages/state-transition/test/unit/upgradeState.test.ts
    • Added new imports for Gloas types, state transition, and utility functions.
    • Added test cases for upgradeStateToGloas and processSlots related to PTC rotation.
    • Added helper functions getInitializedGloasState, generateFuluState, generateGloasState, and seedPayloadTimelinessInputs for testing.
  • packages/types/package.json
    • Added an export mapping for the new ./gloas path.
  • packages/types/src/gloas/sszTypes.ts
    • Imported VectorBasicType.
    • Defined PayloadTimelinessCommittee as a VectorBasicType of ValidatorIndex.
    • Added previousPtc and currentPtc fields to the BeaconState SSZ container.
  • packages/types/src/gloas/types.ts
    • Exported the new PayloadTimelinessCommittee type.
  • specrefs/.ethspecify.yml
    • Added compute_ptc#gloas to the list of exceptions.
    • Removed get_ptc_assignment#gloas from the list of exceptions.
  • specrefs/functions.yml
    • Updated the sources and spec for get_indexed_payload_attestation#gloas.
    • Updated the sources and spec for get_ptc#gloas.
    • Added a new entry for compute_ptc#gloas.
    • Removed the get_ptc_assignment#gloas entry.
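For reviewers unfamiliar with the new SSZ fields, here is a plain-TypeScript sketch of the state shape the sszTypes.ts changelog entries describe. PTC_SIZE = 4 is a placeholder, not the spec constant, and the real definition uses VectorBasicType from @chainsafe/ssz rather than this hand-rolled length check.

```typescript
// Plain-TypeScript sketch of the new state shape described in the changelog.
// PTC_SIZE = 4 is a placeholder (not the spec constant); the real definition
// uses a VectorBasicType of ValidatorIndex from @chainsafe/ssz.

const PTC_SIZE = 4;

type PayloadTimelinessCommittee = number[]; // fixed length PTC_SIZE

// An SSZ vector type rejects values of the wrong length; model that check.
function assertPtcLength(ptc: PayloadTimelinessCommittee): PayloadTimelinessCommittee {
  if (ptc.length !== PTC_SIZE) {
    throw new Error(`PTC must have exactly ${PTC_SIZE} members, got ${ptc.length}`);
  }
  return ptc;
}

interface BeaconStateGloas {
  previousPtc: PayloadTimelinessCommittee;
  currentPtc: PayloadTimelinessCommittee;
}

const gloasState: BeaconStateGloas = {
  previousPtc: assertPtcLength([0, 1, 2, 3]),
  currentPtc: assertPtcLength([4, 5, 6, 7]),
};
console.log(gloasState.currentPtc.length);
```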

/** TODO: Indexed SyncCommitteeCache */
nextSyncCommitteeIndexed: SyncCommitteeCache;

// TODO GLOAS: See if we need to cache PTC for next epoch
Member Author


removed since PTC assignments are not stable for the next epoch, see ethereum/beacon-APIs#586


// TODO GLOAS: See if we need to cache PTC for next epoch
// PTC for previous epoch, required for slot N block validating slot N-1 attestations
previousPayloadTimelinessCommittees: Uint32Array[];
Member Author


we no longer need to cache this since we only need to access the last slot of the previous epoch, which is now cached in the state via state.previous_ptc
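A minimal sketch of the lookup this enables (hypothetical names and shapes; the real accessor is getPayloadTimelinessCommittee in packages/state-transition/src/util/gloas.ts): a block at slot N carries payload attestations for slot N-1, so only the current and the immediately previous committee need to be reachable from the state.

```typescript
// Hypothetical sketch: only two committees are ever needed, because a block
// at slot N validates payload attestations for slot N-1 at most.

interface PtcState {
  slot: number;
  previousPtc: number[]; // PTC for state.slot - 1 (state.previous_ptc)
  currentPtc: number[]; // PTC for state.slot (state.current_ptc)
}

function getPtc(state: PtcState, slot: number): number[] {
  if (slot === state.slot) return state.currentPtc;
  if (slot === state.slot - 1) return state.previousPtc;
  throw new Error(`PTC not cached for slot ${slot} (state is at slot ${state.slot})`);
}

const ptcState: PtcState = {slot: 10, previousPtc: [1, 2], currentPtc: [3, 4]};
console.log(getPtc(ptcState, 9)); // previous slot's committee
```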

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces cached Payload Timeliness Committees (PTCs) to the state, enhancing the efficiency of payload attestation validation. It involves modifications across several files, including adding a new function for computing PTCs, initializing and rotating PTCs within the state, and updating relevant data structures and processes. The changes aim to optimize performance by caching PTC data for faster access during validation.

@github-actions
Contributor

github-actions bot commented Mar 16, 2026

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 60eea41 Previous: 26ed5ad Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 998.04 us/op 1.1351 ms/op 0.88
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 52.789 us/op 37.966 us/op 1.39
BLS verify - blst 1.2881 ms/op 1.1954 ms/op 1.08
BLS verifyMultipleSignatures 3 - blst 1.3564 ms/op 1.3485 ms/op 1.01
BLS verifyMultipleSignatures 8 - blst 2.3129 ms/op 1.8736 ms/op 1.23
BLS verifyMultipleSignatures 32 - blst 6.6278 ms/op 7.2235 ms/op 0.92
BLS verifyMultipleSignatures 64 - blst 10.162 ms/op 10.976 ms/op 0.93
BLS verifyMultipleSignatures 128 - blst 16.925 ms/op 17.831 ms/op 0.95
BLS deserializing 10000 signatures 648.04 ms/op 718.02 ms/op 0.90
BLS deserializing 100000 signatures 6.7266 s/op 7.0954 s/op 0.95
BLS verifyMultipleSignatures - same message - 3 - blst 1.3166 ms/op 926.60 us/op 1.42
BLS verifyMultipleSignatures - same message - 8 - blst 1.6345 ms/op 1.0663 ms/op 1.53
BLS verifyMultipleSignatures - same message - 32 - blst 1.8945 ms/op 1.7879 ms/op 1.06
BLS verifyMultipleSignatures - same message - 64 - blst 2.6545 ms/op 2.6761 ms/op 0.99
BLS verifyMultipleSignatures - same message - 128 - blst 4.2991 ms/op 4.8461 ms/op 0.89
BLS aggregatePubkeys 32 - blst 19.163 us/op 19.765 us/op 0.97
BLS aggregatePubkeys 128 - blst 68.250 us/op 70.589 us/op 0.97
getSlashingsAndExits - default max 65.287 us/op 76.542 us/op 0.85
getSlashingsAndExits - 2k 324.65 us/op 338.00 us/op 0.96
isKnown best case - 1 super set check 202.00 ns/op 214.00 ns/op 0.94
isKnown normal case - 2 super set checks 194.00 ns/op 205.00 ns/op 0.95
isKnown worse case - 16 super set checks 195.00 ns/op 208.00 ns/op 0.94
validate api signedAggregateAndProof - struct 2.5878 ms/op 2.5742 ms/op 1.01
validate gossip signedAggregateAndProof - struct 2.5803 ms/op 2.5680 ms/op 1.00
batch validate gossip attestation - vc 640000 - chunk 32 111.49 us/op 120.16 us/op 0.93
batch validate gossip attestation - vc 640000 - chunk 64 100.56 us/op 107.18 us/op 0.94
batch validate gossip attestation - vc 640000 - chunk 128 118.06 us/op 99.843 us/op 1.18
batch validate gossip attestation - vc 640000 - chunk 256 91.163 us/op 95.724 us/op 0.95
bytes32 toHexString 344.00 ns/op 373.00 ns/op 0.92
bytes32 Buffer.toString(hex) 234.00 ns/op 246.00 ns/op 0.95
bytes32 Buffer.toString(hex) from Uint8Array 423.00 ns/op 364.00 ns/op 1.16
bytes32 Buffer.toString(hex) + 0x 233.00 ns/op 253.00 ns/op 0.92
Return object 10000 times 0.22340 ns/op 0.23680 ns/op 0.94
Throw Error 10000 times 4.0464 us/op 4.3828 us/op 0.92
toHex 139.46 ns/op 137.34 ns/op 1.02
Buffer.from 121.31 ns/op 128.60 ns/op 0.94
shared Buffer 75.468 ns/op 77.939 ns/op 0.97
fastMsgIdFn sha256 / 200 bytes 1.8020 us/op 1.8970 us/op 0.95
fastMsgIdFn h32 xxhash / 200 bytes 191.00 ns/op 205.00 ns/op 0.93
fastMsgIdFn h64 xxhash / 200 bytes 323.00 ns/op 331.00 ns/op 0.98
fastMsgIdFn sha256 / 1000 bytes 5.7160 us/op 6.2930 us/op 0.91
fastMsgIdFn h32 xxhash / 1000 bytes 352.00 ns/op 300.00 ns/op 1.17
fastMsgIdFn h64 xxhash / 1000 bytes 309.00 ns/op 314.00 ns/op 0.98
fastMsgIdFn sha256 / 10000 bytes 51.108 us/op 53.917 us/op 0.95
fastMsgIdFn h32 xxhash / 10000 bytes 1.3740 us/op 1.4140 us/op 0.97
fastMsgIdFn h64 xxhash / 10000 bytes 1.3470 us/op 916.00 ns/op 1.47
send data - 1000 256B messages 4.4995 ms/op 4.8643 ms/op 0.93
send data - 1000 512B messages 4.7703 ms/op 8.5206 ms/op 0.56
send data - 1000 1024B messages 4.7491 ms/op 4.8767 ms/op 0.97
send data - 1000 1200B messages 5.2270 ms/op 5.7448 ms/op 0.91
send data - 1000 2048B messages 5.8917 ms/op 6.4191 ms/op 0.92
send data - 1000 4096B messages 7.5451 ms/op 7.1353 ms/op 1.06
send data - 1000 16384B messages 28.671 ms/op 54.632 ms/op 0.52
send data - 1000 65536B messages 139.83 ms/op 107.58 ms/op 1.30
enrSubnets - fastDeserialize 64 bits 853.00 ns/op 914.00 ns/op 0.93
enrSubnets - ssz BitVector 64 bits 459.00 ns/op 344.00 ns/op 1.33
enrSubnets - fastDeserialize 4 bits 130.00 ns/op 164.00 ns/op 0.79
enrSubnets - ssz BitVector 4 bits 336.00 ns/op 342.00 ns/op 0.98
prioritizePeers score -10:0 att 32-0.1 sync 2-0 266.90 us/op 252.93 us/op 1.06
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 250.41 us/op 266.16 us/op 0.94
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 359.43 us/op 379.99 us/op 0.95
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 672.47 us/op 709.76 us/op 0.95
prioritizePeers score 0:0 att 64-1 sync 4-1 809.52 us/op 851.22 us/op 0.95
array of 16000 items push then shift 1.5935 us/op 1.6429 us/op 0.97
LinkedList of 16000 items push then shift 7.1250 ns/op 7.5550 ns/op 0.94
array of 16000 items push then pop 74.676 ns/op 77.664 ns/op 0.96
LinkedList of 16000 items push then pop 6.9550 ns/op 7.3570 ns/op 0.95
array of 24000 items push then shift 2.2916 us/op 2.4931 us/op 0.92
LinkedList of 24000 items push then shift 7.2390 ns/op 7.5940 ns/op 0.95
array of 24000 items push then pop 102.20 ns/op 107.42 ns/op 0.95
LinkedList of 24000 items push then pop 6.9410 ns/op 7.4090 ns/op 0.94
intersect bitArray bitLen 8 5.5130 ns/op 5.8780 ns/op 0.94
intersect array and set length 8 32.111 ns/op 34.421 ns/op 0.93
intersect bitArray bitLen 128 27.563 ns/op 29.498 ns/op 0.93
intersect array and set length 128 528.46 ns/op 568.25 ns/op 0.93
bitArray.getTrueBitIndexes() bitLen 128 934.00 ns/op 1.0240 us/op 0.91
bitArray.getTrueBitIndexes() bitLen 248 1.6550 us/op 1.7750 us/op 0.93
bitArray.getTrueBitIndexes() bitLen 512 3.4190 us/op 3.6580 us/op 0.93
Full columns - reconstruct all 6 blobs 252.28 us/op 205.42 us/op 1.23
Full columns - reconstruct half of the blobs out of 6 96.708 us/op 100.16 us/op 0.97
Full columns - reconstruct single blob out of 6 31.643 us/op 31.486 us/op 1.00
Half columns - reconstruct all 6 blobs 260.40 ms/op 275.48 ms/op 0.95
Half columns - reconstruct half of the blobs out of 6 130.14 ms/op 138.37 ms/op 0.94
Half columns - reconstruct single blob out of 6 48.727 ms/op 50.292 ms/op 0.97
Full columns - reconstruct all 10 blobs 342.21 us/op 288.27 us/op 1.19
Full columns - reconstruct half of the blobs out of 10 147.99 us/op 162.57 us/op 0.91
Full columns - reconstruct single blob out of 10 29.821 us/op 30.486 us/op 0.98
Half columns - reconstruct all 10 blobs 465.22 ms/op 457.92 ms/op 1.02
Half columns - reconstruct half of the blobs out of 10 273.06 ms/op 230.80 ms/op 1.18
Half columns - reconstruct single blob out of 10 48.017 ms/op 51.104 ms/op 0.94
Full columns - reconstruct all 20 blobs 524.82 us/op 543.38 us/op 0.97
Full columns - reconstruct half of the blobs out of 20 285.51 us/op 275.71 us/op 1.04
Full columns - reconstruct single blob out of 20 31.082 us/op 32.018 us/op 0.97
Half columns - reconstruct all 20 blobs 851.38 ms/op 912.11 ms/op 0.93
Half columns - reconstruct half of the blobs out of 20 429.39 ms/op 460.22 ms/op 0.93
Half columns - reconstruct single blob out of 20 47.872 ms/op 51.461 ms/op 0.93
Set add up to 64 items then delete first 1.9114 us/op 2.0805 us/op 0.92
OrderedSet add up to 64 items then delete first 3.6708 us/op 3.0961 us/op 1.19
Set add up to 64 items then delete last 2.2564 us/op 2.4267 us/op 0.93
OrderedSet add up to 64 items then delete last 3.2166 us/op 3.6058 us/op 0.89
Set add up to 64 items then delete middle 2.2609 us/op 2.4178 us/op 0.94
OrderedSet add up to 64 items then delete middle 4.7706 us/op 5.2184 us/op 0.91
Set add up to 128 items then delete first 4.6705 us/op 4.8871 us/op 0.96
OrderedSet add up to 128 items then delete first 6.9880 us/op 7.0654 us/op 0.99
Set add up to 128 items then delete last 4.5031 us/op 4.8785 us/op 0.92
OrderedSet add up to 128 items then delete last 6.5030 us/op 7.2053 us/op 0.90
Set add up to 128 items then delete middle 4.4372 us/op 4.7282 us/op 0.94
OrderedSet add up to 128 items then delete middle 12.784 us/op 13.673 us/op 0.94
Set add up to 256 items then delete first 9.5889 us/op 10.198 us/op 0.94
OrderedSet add up to 256 items then delete first 15.095 us/op 14.759 us/op 1.02
Set add up to 256 items then delete last 9.0492 us/op 9.7779 us/op 0.93
OrderedSet add up to 256 items then delete last 13.557 us/op 14.828 us/op 0.91
Set add up to 256 items then delete middle 8.9511 us/op 9.6093 us/op 0.93
OrderedSet add up to 256 items then delete middle 39.513 us/op 42.114 us/op 0.94
pass gossip attestations to forkchoice per slot 462.81 us/op 513.75 us/op 0.90
computeDeltas 1400000 validators 0% inactive 13.248 ms/op 14.720 ms/op 0.90
computeDeltas 1400000 validators 10% inactive 12.396 ms/op 13.794 ms/op 0.90
computeDeltas 1400000 validators 20% inactive 11.537 ms/op 12.898 ms/op 0.89
computeDeltas 1400000 validators 50% inactive 8.9865 ms/op 10.105 ms/op 0.89
computeDeltas 2100000 validators 0% inactive 19.949 ms/op 22.310 ms/op 0.89
computeDeltas 2100000 validators 10% inactive 18.645 ms/op 20.409 ms/op 0.91
computeDeltas 2100000 validators 20% inactive 17.316 ms/op 26.037 ms/op 0.67
computeDeltas 2100000 validators 50% inactive 13.002 ms/op 15.171 ms/op 0.86
altair processAttestation - setStatus - 1/6 committees join 530.00 ns/op 566.00 ns/op 0.94
altair processAttestation - setStatus - 1/3 committees join 885.00 ns/op 938.00 ns/op 0.94
altair processAttestation - setStatus - 1/2 committees join 1.2320 us/op 1.3350 us/op 0.92
altair processAttestation - setStatus - 2/3 committees join 1.4360 us/op 1.5740 us/op 0.91
altair processAttestation - setStatus - 4/5 committees join 1.6340 us/op 1.8160 us/op 0.90
altair processAttestation - setStatus - 100% committees join 1.9160 us/op 2.1280 us/op 0.90
phase0 processBlock - 250000 vs - 7PWei normalcase 1.6446 ms/op 1.8151 ms/op 0.91
phase0 processBlock - 250000 vs - 7PWei worstcase 33.869 ms/op 21.219 ms/op 1.60
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 9.3140 us/op 5.8690 us/op 1.59
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 33.154 us/op 37.466 us/op 0.88
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 16.188 us/op 10.444 us/op 1.55
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 6.5180 us/op 6.7820 us/op 0.96
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 231.82 us/op 165.33 us/op 1.40
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 2.6832 ms/op 2.1904 ms/op 1.22
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.9541 ms/op 2.9148 ms/op 0.67
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.2077 ms/op 2.3829 ms/op 0.93
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 4.3331 ms/op 4.8224 ms/op 0.90
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.4436 ms/op 2.8608 ms/op 0.85
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.7163 ms/op 5.1172 ms/op 0.92
Tree 40 250000 create 378.73 ms/op 440.54 ms/op 0.86
Tree 40 250000 get(125000) 125.80 ns/op 135.58 ns/op 0.93
Tree 40 250000 set(125000) 1.1968 us/op 1.3034 us/op 0.92
Tree 40 250000 toArray() 12.273 ms/op 16.832 ms/op 0.73
Tree 40 250000 iterate all - toArray() + loop 12.425 ms/op 16.176 ms/op 0.77
Tree 40 250000 iterate all - get(i) 42.219 ms/op 50.690 ms/op 0.83
Array 250000 create 2.3919 ms/op 2.5858 ms/op 0.93
Array 250000 clone - spread 780.44 us/op 841.46 us/op 0.93
Array 250000 get(125000) 0.33500 ns/op 0.36400 ns/op 0.92
Array 250000 set(125000) 0.35000 ns/op 0.36600 ns/op 0.96
Array 250000 iterate all - loop 59.606 us/op 63.358 us/op 0.94
phase0 afterProcessEpoch - 250000 vs - 7PWei 39.992 ms/op 44.369 ms/op 0.90
Array.fill - length 1000000 2.8872 ms/op 3.1515 ms/op 0.92
Array push - length 1000000 10.195 ms/op 11.111 ms/op 0.92
Array.get 0.20397 ns/op 0.22661 ns/op 0.90
Uint8Array.get 0.21769 ns/op 0.22973 ns/op 0.95
phase0 beforeProcessEpoch - 250000 vs - 7PWei 15.075 ms/op 15.814 ms/op 0.95
altair processEpoch - mainnet_e81889 327.70 ms/op 321.41 ms/op 1.02
mainnet_e81889 - altair beforeProcessEpoch 39.453 ms/op 23.392 ms/op 1.69
mainnet_e81889 - altair processJustificationAndFinalization 6.6350 us/op 7.6750 us/op 0.86
mainnet_e81889 - altair processInactivityUpdates 3.8183 ms/op 4.1262 ms/op 0.93
mainnet_e81889 - altair processRewardsAndPenalties 26.966 ms/op 24.192 ms/op 1.11
mainnet_e81889 - altair processRegistryUpdates 613.00 ns/op 710.00 ns/op 0.86
mainnet_e81889 - altair processSlashings 199.00 ns/op 178.00 ns/op 1.12
mainnet_e81889 - altair processEth1DataReset 153.00 ns/op 181.00 ns/op 0.85
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.6063 ms/op 2.4311 ms/op 0.66
mainnet_e81889 - altair processSlashingsReset 943.00 ns/op 934.00 ns/op 1.01
mainnet_e81889 - altair processRandaoMixesReset 1.1430 us/op 1.6690 us/op 0.68
mainnet_e81889 - altair processHistoricalRootsUpdate 155.00 ns/op 178.00 ns/op 0.87
mainnet_e81889 - altair processParticipationFlagUpdates 603.00 ns/op 522.00 ns/op 1.16
mainnet_e81889 - altair processSyncCommitteeUpdates 124.00 ns/op 142.00 ns/op 0.87
mainnet_e81889 - altair afterProcessEpoch 43.355 ms/op 46.446 ms/op 0.93
capella processEpoch - mainnet_e217614 864.99 ms/op
mainnet_e217614 - capella beforeProcessEpoch 54.100 ms/op
mainnet_e217614 - capella processJustificationAndFinalization 5.7720 us/op
mainnet_e217614 - capella processInactivityUpdates 16.596 ms/op
mainnet_e217614 - capella processRewardsAndPenalties 110.78 ms/op
mainnet_e217614 - capella processRegistryUpdates 5.7970 us/op
mainnet_e217614 - capella processSlashings 157.00 ns/op
mainnet_e217614 - capella processEth1DataReset 158.00 ns/op
mainnet_e217614 - capella processEffectiveBalanceUpdates 11.938 ms/op
mainnet_e217614 - capella processSlashingsReset 827.00 ns/op
mainnet_e217614 - capella processRandaoMixesReset 1.4940 us/op
mainnet_e217614 - capella processHistoricalRootsUpdate 199.00 ns/op
mainnet_e217614 - capella processParticipationFlagUpdates 475.00 ns/op
mainnet_e217614 - capella afterProcessEpoch 112.97 ms/op
phase0 processEpoch - mainnet_e58758 228.53 ms/op 245.66 ms/op 0.93
mainnet_e58758 - phase0 beforeProcessEpoch 43.477 ms/op 60.552 ms/op 0.72
mainnet_e58758 - phase0 processJustificationAndFinalization 5.1090 us/op 7.0410 us/op 0.73
mainnet_e58758 - phase0 processRewardsAndPenalties 20.756 ms/op 17.647 ms/op 1.18
mainnet_e58758 - phase0 processRegistryUpdates 2.6400 us/op 3.2950 us/op 0.80
mainnet_e58758 - phase0 processSlashings 217.00 ns/op 210.00 ns/op 1.03
mainnet_e58758 - phase0 processEth1DataReset 165.00 ns/op 213.00 ns/op 0.77
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.0517 ms/op 1.1410 ms/op 0.92
mainnet_e58758 - phase0 processSlashingsReset 881.00 ns/op 1.0490 us/op 0.84
mainnet_e58758 - phase0 processRandaoMixesReset 1.0540 us/op 1.2470 us/op 0.85
mainnet_e58758 - phase0 processHistoricalRootsUpdate 240.00 ns/op 203.00 ns/op 1.18
mainnet_e58758 - phase0 processParticipationRecordUpdates 813.00 ns/op 1.0220 us/op 0.80
mainnet_e58758 - phase0 afterProcessEpoch 34.814 ms/op 38.482 ms/op 0.90
phase0 processEffectiveBalanceUpdates - 250000 normalcase 2.7097 ms/op 1.9192 ms/op 1.41
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.9525 ms/op 1.7949 ms/op 1.09
altair processInactivityUpdates - 250000 normalcase 88.963 us/op 77.269 us/op 1.15
altair processInactivityUpdates - 250000 worstcase 100.82 us/op 74.542 us/op 1.35
phase0 processRegistryUpdates - 250000 normalcase 9.0690 us/op 5.4830 us/op 1.65
phase0 processRegistryUpdates - 250000 badcase_full_deposits 426.65 us/op 318.31 us/op 1.34
phase0 processRegistryUpdates - 250000 worstcase 0.5 112.84 ms/op 81.200 ms/op 1.39
altair processRewardsAndPenalties - 250000 normalcase 125.73 us/op 111.99 us/op 1.12
altair processRewardsAndPenalties - 250000 worstcase 121.38 us/op 105.98 us/op 1.15
phase0 getAttestationDeltas - 250000 normalcase 7.0642 ms/op 7.7903 ms/op 0.91
phase0 getAttestationDeltas - 250000 worstcase 7.0297 ms/op 7.8200 ms/op 0.90
phase0 processSlashings - 250000 worstcase 116.28 us/op 94.019 us/op 1.24
altair processSyncCommitteeUpdates - 250000 8.4848 ms/op 9.7282 ms/op 0.87
BeaconState.hashTreeRoot - No change 230.00 ns/op 257.00 ns/op 0.89
BeaconState.hashTreeRoot - 1 full validator 89.918 us/op 105.29 us/op 0.85
BeaconState.hashTreeRoot - 32 full validator 955.93 us/op 1.2070 ms/op 0.79
BeaconState.hashTreeRoot - 512 full validator 7.4879 ms/op 9.8670 ms/op 0.76
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 99.304 us/op 140.37 us/op 0.71
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.7289 ms/op 2.6124 ms/op 0.66
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 16.518 ms/op 26.849 ms/op 0.62
BeaconState.hashTreeRoot - 1 balances 86.139 us/op 92.693 us/op 0.93
BeaconState.hashTreeRoot - 32 balances 911.74 us/op 1.3339 ms/op 0.68
BeaconState.hashTreeRoot - 512 balances 6.2408 ms/op 8.5937 ms/op 0.73
BeaconState.hashTreeRoot - 250000 balances 159.47 ms/op 182.95 ms/op 0.87
aggregationBits - 2048 els - zipIndexesInBitList 20.928 us/op 23.880 us/op 0.88
regular array get 100000 times 24.557 us/op 28.237 us/op 0.87
wrappedArray get 100000 times 24.650 us/op 28.404 us/op 0.87
arrayWithProxy get 100000 times 15.043 ms/op 15.674 ms/op 0.96
ssz.Root.equals 23.667 ns/op 27.122 ns/op 0.87
byteArrayEquals 23.231 ns/op 27.135 ns/op 0.86
Buffer.compare 10.018 ns/op 11.771 ns/op 0.85
processSlot - 1 slots 9.9730 us/op 13.012 us/op 0.77
processSlot - 32 slots 2.4103 ms/op 3.0607 ms/op 0.79
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 2.7331 ms/op 5.3861 ms/op 0.51
getCommitteeAssignments - req 1 vs - 250000 vc 1.9094 ms/op 2.3003 ms/op 0.83
getCommitteeAssignments - req 100 vs - 250000 vc 3.7791 ms/op 4.5036 ms/op 0.84
getCommitteeAssignments - req 1000 vs - 250000 vc 4.0270 ms/op 4.9461 ms/op 0.81
findModifiedValidators - 10000 modified validators 416.94 ms/op 775.05 ms/op 0.54
findModifiedValidators - 1000 modified validators 411.39 ms/op 783.30 ms/op 0.53
findModifiedValidators - 100 modified validators 313.88 ms/op 365.64 ms/op 0.86
findModifiedValidators - 10 modified validators 192.27 ms/op 251.80 ms/op 0.76
findModifiedValidators - 1 modified validators 170.78 ms/op 206.28 ms/op 0.83
findModifiedValidators - no difference 161.04 ms/op 202.15 ms/op 0.80
migrate state 1500000 validators, 3400 modified, 2000 new 378.18 ms/op 405.13 ms/op 0.93
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.1200 ns/op 5.1300 ns/op 0.80
state getBlockRootAtSlot - 250000 vs - 7PWei 554.94 ns/op 607.23 ns/op 0.91
computeProposerIndex 100000 validators 1.5416 ms/op 1.9445 ms/op 0.79
getNextSyncCommitteeIndices 1000 validators 3.3272 ms/op 3.9654 ms/op 0.84
getNextSyncCommitteeIndices 10000 validators 3.3302 ms/op 4.4297 ms/op 0.75
getNextSyncCommitteeIndices 100000 validators 3.3211 ms/op 4.0518 ms/op 0.82
computeProposers - vc 250000 606.93 us/op 693.43 us/op 0.88
computeEpochShuffling - vc 250000 40.706 ms/op 46.860 ms/op 0.87
getNextSyncCommittee - vc 250000 10.322 ms/op 12.486 ms/op 0.83
nodejs block root to RootHex using toHex 139.30 ns/op 169.51 ns/op 0.82
nodejs block root to RootHex using toRootHex 84.733 ns/op 106.37 ns/op 0.80
nodejs fromHex(blob) 481.45 us/op 501.72 us/op 0.96
nodejs fromHexInto(blob) 686.86 us/op 835.35 us/op 0.82
nodejs block root to RootHex using the deprecated toHexString 564.79 ns/op 564.60 ns/op 1.00
nodejs byteArrayEquals 32 bytes (block root) 28.083 ns/op 32.254 ns/op 0.87
nodejs byteArrayEquals 48 bytes (pubkey) 40.203 ns/op 47.933 ns/op 0.84
nodejs byteArrayEquals 96 bytes (signature) 48.777 ns/op 47.023 ns/op 1.04
nodejs byteArrayEquals 1024 bytes 56.269 ns/op 55.118 ns/op 1.02
nodejs byteArrayEquals 131072 bytes (blob) 1.8563 us/op 2.1829 us/op 0.85
browser block root to RootHex using toHex 277.87 ns/op 278.48 ns/op 1.00
browser block root to RootHex using toRootHex 151.20 ns/op 176.41 ns/op 0.86
browser fromHex(blob) 1.2374 ms/op 1.1836 ms/op 1.05
browser fromHexInto(blob) 679.57 us/op 771.18 us/op 0.88
browser block root to RootHex using the deprecated toHexString 553.70 ns/op 519.21 ns/op 1.07
browser byteArrayEquals 32 bytes (block root) 30.359 ns/op 33.715 ns/op 0.90
browser byteArrayEquals 48 bytes (pubkey) 42.353 ns/op 46.725 ns/op 0.91
browser byteArrayEquals 96 bytes (signature) 82.952 ns/op 92.855 ns/op 0.89
browser byteArrayEquals 1024 bytes 777.60 ns/op 869.18 ns/op 0.89
browser byteArrayEquals 131072 bytes (blob) 98.398 us/op 110.35 us/op 0.89

by benchmarkbot/action

@nflaig nflaig marked this pull request as ready for review March 16, 2026 19:00
@nflaig nflaig requested a review from a team as a code owner March 16, 2026 19:00
@nflaig nflaig changed the title feat: add cached PTCs to the state [wip] feat: add cached PTCs to the state Mar 16, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ced9835441

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

export function initializePayloadTimelinessCommittee(state: CachedBeaconStateGloas): void {
state.currentPtc = ssz.gloas.PayloadTimelinessCommittee.toViewDU(
// TODO: Array.from shouldn't be required here
Array.from(computePayloadTimelinessCommittee(state))
Member Author


The Array.from here does an unnecessary copy just to make the types happy; we could do ChainSafe/ssz#512 to resolve this.
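To illustrate the copy being discussed, here is a self-contained sketch with simplified stand-ins (these are not the real ssz API; the actual call is ssz.gloas.PayloadTimelinessCommittee.toViewDU):

```typescript
// Simplified stand-ins to show the extra copy: the committee is computed as
// a Uint32Array, but the toViewDU input is typed as a plain number[], so
// Array.from allocates a second full copy of the indices just to bridge types.

function computeCommittee(): Uint32Array {
  return Uint32Array.from([5, 9, 13]);
}

// Stand-in for the SSZ type's toViewDU, whose input is typed as number[].
function toViewDU(indices: number[]): number[] {
  return indices;
}

const committee = computeCommittee();
// Array.from materializes a new number[] even though the data already lives
// in a contiguous typed array; this is the copy the comment refers to.
const copied = Array.from(committee);
console.log(copied, toViewDU(copied).length);
```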

@nflaig
Member Author

nflaig commented Mar 17, 2026

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e3463cefbc


Comment on lines +217 to +238
- const attestingIndices = payloadAttestation.aggregationBits.intersectValues(ptc);
+ const attestingIndices: number[] = [];
+
+ for (let i = 0; i < ptc.length; i++) {
+   if (payloadAttestation.aggregationBits.get(i)) {
+     attestingIndices.push(ptc.get(i));
+   }
+ }
Member Author


Wondering if the previous approach using intersectValues() was better here, but having to call getAll() on the PTC cached in the state isn't great either.
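A minimal sketch of the trade-off, with plain arrays standing in for the ssz BitArray and the tree-backed committee view (all names and types here are illustrative, not the actual Lodestar API):

```typescript
// Loop approach (current): read each committee index straight from the view,
// pushing only the indices whose aggregation bit is set.
function attestingIndicesLoop(aggregationBits: boolean[], ptc: number[]): number[] {
  const out: number[] = [];
  for (let i = 0; i < ptc.length; i++) {
    if (aggregationBits[i]) out.push(ptc[i]);
  }
  return out;
}

// intersectValues-style approach (previous): materialize the full committee
// first (the getAll() cost mentioned above), then filter by the bits.
function attestingIndicesIntersect(aggregationBits: boolean[], ptc: number[]): number[] {
  const all = [...ptc]; // stands in for ptc.getAll(), a full copy of the committee
  return all.filter((_, i) => aggregationBits[i]);
}

const bits = [true, false, true, false];
const committee = [10, 11, 12, 13];
console.log(attestingIndicesLoop(bits, committee)); // [ 10, 12 ]
console.log(attestingIndicesIntersect(bits, committee)); // [ 10, 12 ]
```

Both approaches produce the same indices; the open question above is whether intersectValues' optimized bit handling outweighs the cost of the full getAll() copy.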

@nflaig
Member Author

nflaig commented Mar 17, 2026

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3f9a4fb81d


@nflaig
Member Author

nflaig commented Mar 17, 2026

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 76809448da


return headState;
}

async getHeadStateAtSlot(slot: Slot, regenCaller: RegenCaller): Promise<CachedBeaconStateAllForks> {
Contributor


This method should be bound to currentSlot only, to prevent a DoS attack.

);
}

export function getPayloadTimelinessCommittee(state: CachedBeaconStateGloas, slot: Slot): PtcCommitteeView {
Contributor


Suggested change
- export function getPayloadTimelinessCommittee(state: CachedBeaconStateGloas, slot: Slot): PtcCommitteeView {
+ export function getPayloadTimelinessCommittee(state: CachedBeaconStateGloas, slot: Slot): Uint32Array {

we can get from state.epochCtx.payloadTimelinessCommittees[slot % SLOTS_PER_EPOCH] to avoid traversing the state tree

Member Author


We can for the current epoch, but not for the previous epoch. I actually explicitly wanted to use the PTCs in the state here to minimize the risk of the cache and the state PTCs getting out of sync for some reason; it shouldn't happen, but this minimizes potential bugs.

If we want to use state.epochCtx, we could add a separate item to the epoch cache for only the first slot of the previous epoch. My thinking here though was that we'd rather avoid that if we can use info we already have in the state anyway.
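The state-based lookup being defended here can be sketched as follows, assuming one committee is cached per epoch under hypothetical currentPtc/previousPtc fields (a simplified model, not the actual CachedBeaconStateGloas):

```typescript
// Simplified model: resolve a slot's PTC from the two committees cached in the
// state. SLOTS_PER_EPOCH, the field names, and the epoch-level granularity are
// assumptions for illustration, not the real Lodestar types.
const SLOTS_PER_EPOCH = 32;

type PtcCache = {
  currentEpoch: number;
  currentPtc: number[];
  previousPtc: number[];
};

function getPtcFromState(cache: PtcCache, slot: number): number[] {
  const epoch = Math.floor(slot / SLOTS_PER_EPOCH);
  if (epoch === cache.currentEpoch) return cache.currentPtc;
  if (epoch === cache.currentEpoch - 1) return cache.previousPtc;
  throw new Error(`PTC not cached for slot ${slot}`);
}

const cache: PtcCache = {currentEpoch: 10, currentPtc: [1, 2], previousPtc: [3, 4]};
console.log(getPtcFromState(cache, 10 * SLOTS_PER_EPOCH)); // current-epoch committee
console.log(getPtcFromState(cache, 9 * SLOTS_PER_EPOCH)); // previous-epoch committee
```

Because both committees live in the state itself, the lookup cannot drift from what the state transition produced, which is the sync-safety argument made above.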

});
}

const state = chain.getHeadState() as CachedBeaconStateGloas;
Contributor


We can just use the state at the same epoch and query epochCtx from there? Then reusing chain.getHeadStateAtEpoch should be enough.

Contributor


The reason is that getHeadStateAtSlot is not as cheap as it used to be.

Member Author


we can just use the state at the same epoch and query epochCtx from there?

If we add the previous PTC to the epochCtx this might be fine. There was a concern that the state is not dialed forward correctly, since PTCs change across slots even if empty; we need to make sure this code is safe and handles epoch transitions correctly.

validatorCommitteeIndex: number;
};

export async function validateApiPayloadAttestationMessage(
Contributor


An API-submitted PayloadAttestationMessage should have higher priority for signature verification, similar to a regular attestation.

something like:

const isValid = await chain.bls.verifySignatureSets([signatureSet], {batchable: true, priority: prioritizeBls});

}

if (fork >= ForkSeq.gloas) {
rotatePayloadTimelinessCommittees(postState as CachedBeaconStateGloas);
Contributor


add a comment saying that payloadTimelinessCommittees is computed in finalProcessEpoch above
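The rotate step plus the comment being requested could look roughly like this (plain arrays instead of ssz views; rotatePtcs, the field names, and the epoch-level granularity are hypothetical):

```typescript
// Minimal model of the boundary rotation: the committee that was "current"
// becomes "previous", and a freshly computed committee (in the real code,
// produced during finalProcessEpoch, per the review comment above) becomes
// "current".
type PtcState = {
  currentPtc: number[];
  previousPtc: number[];
};

function rotatePtcs(state: PtcState, nextPtc: number[]): void {
  // nextPtc is assumed to have been computed earlier in epoch processing
  state.previousPtc = state.currentPtc;
  state.currentPtc = nextPtc;
}

const state: PtcState = {currentPtc: [1, 2, 3], previousPtc: [0, 0, 0]};
rotatePtcs(state, [4, 5, 6]);
console.log(state.previousPtc); // [ 1, 2, 3 ]
console.log(state.currentPtc); // [ 4, 5, 6 ]
```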

Contributor

@twoeths twoeths left a comment


I think this PTC validation is a great candidate for batch validation, to save computeSigningRoot() time plus use verifySignatureSetsSameMessage, but it's too much to start with; we can explore it in later devnets.

@nflaig
Member Author

nflaig commented Mar 18, 2026

Putting this as draft; I will propose a different approach in the spec. Let's revisit after the spec PR is merged.

@nflaig nflaig marked this pull request as draft March 18, 2026 21:48
@nflaig
Member Author

nflaig commented Mar 27, 2026

ethereum/consensus-specs#4979 was merged on the spec side

@nflaig nflaig closed this Mar 27, 2026
@codecov

codecov bot commented Mar 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.32%. Comparing base (21d4a81) to head (47cf55f).
⚠️ Report is 29 commits behind head on unstable.

Additional details and impacted files
@@            Coverage Diff            @@
##           unstable    #9047   +/-   ##
=========================================
  Coverage     52.32%   52.32%           
=========================================
  Files           848      848           
  Lines         62472    62470    -2     
  Branches       4597     4597           
=========================================
- Hits          32691    32690    -1     
+ Misses        29716    29715    -1     
  Partials         65       65           
