diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..aeb9f98 --- /dev/null +++ b/.gitignore @@ -0,0 +1,3 @@ +/tests/ss/ +.env +.vscode/ \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000..5245c26 --- /dev/null +++ b/README.md @@ -0,0 +1,153 @@ +# State Consumer for DeSo Blockchain + +The **State Consumer** is a Golang interface and framework designed to extract, decode, and process on-chain state from the DeSo (Decentralized Social) blockchain. The repository's purpose is to provide a flexible, datastore-agnostic mechanism that reads a binary state change file (produced by a DeSo node), decodes the state change entries, and then applies those changes (insert, update, or delete) via a configurable handler. + +> **Note:** A working DeSo node must be configured to generate state change files. This file serves as the source of truth for the consumer, while the state handler (for example, a Postgres data handler) applies those changes into the target datastore. + +--- + +## Key Concepts + +### 1. DeSo Node +A **DeSo Node** is a running instance on the DeSo blockchain. It: +- **Syncs the blockchain:** Initializes by syncing from other nodes. +- **Processes on-chain transactions:** Continuously updates its state as new transactions are mined. +- **Generates State Change Files:** Writes a binary log of state changes (state change file) that the state consumer uses as input. + +*Tip:* Make sure your node is configured to create and expose these state change files (usually via a shared directory or volume). + +### 2. State Consumer Interface +The state consumer interface is a Go interface that encapsulates the following: +- **Reading the State Change File:** It can read binary state change files produced by a DeSo node. +- **Decoding State Change Entries:** It decodes the individual state change entries (which include operations such as insert, update, or delete). 
+- **Calling Handler Functions:** It provides a hook to call a handler function for processing those decoded entries. + +This interface is purposely designed to be **datastore agnostic**, which means you can implement a handler to store the on-chain state in your database or any other system. + +### 3. Data Handler Service +A **Data Handler Service** is a concrete implementation of the state consumer interface. For example, the [Postgres Data Handler](https://github.com/deso-protocol/postgres-data-handler) is one such implementation that: +- **Processes entries in batches:** For high performance and consistency. +- **Handles transaction management:** Using transaction savepoints, batching, and optional mempool syncing. +- **Applies state changes into a datastore:** In this case, the service performs insert, update, and delete operations on a PostgreSQL database. + +Since the core consumer is datastore independent, you can create your own handler if you wish to persist the state in an alternative datastore (for example, a key–value store, another SQL database, or an in-memory cache). + +### 4. StateChangeEntry Encoder + +The **StateChangeEntry encoder** is a critical component of the syncing process. The type `StateChangeEntry` defines the structure of each state operation recorded by the DeSo node. Below is a detailed explanation of each property: + +- **OperationType (StateSyncerOperationType):** + - Specifies the type of operation (insert, update, or delete) that should be performed on the datastore. + +- **KeyBytes ([]byte):** + - Represents the key that identifies the record in the core Badger DB. This could relate to various entities like posts or profiles. + +- **Encoder (DeSoEncoder):** + - Holds the encoder instance responsible for serializing and deserializing the entity's data. + - This encoder abstracts the logic needed to translate structured data into a binary format. 
+ +- **EncoderBytes ([]byte):** + - Contains a raw byte representation of the encoder. + - During operations such as hypersync, rather than re-encoding the data, the raw bytes are stored directly for improved performance. + +- **AncestralRecord (DeSoEncoder):** + - Stores the previous state (ancestral record) of the data that can be used to revert changes. + - This is especially crucial for mempool transactions where state changes might need to be reverted upon block confirmation. + +- **AncestralRecordBytes ([]byte):** + - A raw byte representation of the ancestral record. + - Used for efficiently restoring an earlier state without re-encoding. + +- **EncoderType (EncoderType):** + - Indicates which type of encoder is used. Different encoder types correspond to different on-chain data formats (e.g., posts, profiles, transactions). + - This enables the consumer to properly decode the binary data. + +- **FlushId (uuid.UUID):** + - Uniquely identifies the flush batch that this state change belongs to. + - It helps group multiple state changes that occur during the same flush operation. + +- **BlockHeight (uint64):** + - Denotes the block height at which the state change occurred. + - Acts as a temporal marker to ensure the correct ordering and consistency of state updates. + +- **Block (*MsgDeSoBlock):** + - For UTXO-based operations or when block-related data is relevant, this field contains the block information associated with the state change. + - It is only applicable for specific operation types. + +- **IsReverted (bool):** + - A flag indicating whether the state change has been reverted. + - Particularly useful in mempool scenarios where previously applied entries might need to be undone. + +By encapsulating all of these fields, the StateChangeEntry encoder provides a robust mechanism for serializing the complete on-chain state, ensuring that each update can be accurately applied or reversed during the syncing process. + +--- + +## How It Works + +1. 
**DeSo Node Setup:** + Your DeSo node must be configured to write a state change file (and optionally an index/state progress file) to a directory (e.g., `/state-changes`). + +2. **Running the State Consumer:** + The state consumer reads from the state change file, decodes the on-chain state change entries, and then calls the respective handler functions. These functions then process each entry based on its operation type (insert/update/delete) and the specific encoder type (e.g., posts, profiles, likes). + +3. **Implement/Customize the Data Handler:** + - For a complete, working example of a data handler, please refer to the [Postgres Data Handler](https://github.com/deso-protocol/postgres-data-handler) repository. + - To create your own data handler, implement the methods defined by the `StateSyncerDataHandler` interface as described in [`consumer/interfaces.go`](./consumer/interfaces.go). + +4. **Extensibility:** + Because the state consumer interface is designed to be datastore agnostic, you have the flexibility to write handlers for any back-end storage or processing system without modifying the core consumer logic. + +--- + +## Getting Started + +### Prerequisites +- A working DeSo node configured to emit the state change file. +- Go (Golang) installed. +- Familiarity with building and running Docker containers if you plan to deploy across multiple services. + +### Quick Setup Guide + +1. **Clone this Repository:** + ```bash + git clone https://github.com/deso-protocol/state-consumer.git + cd state-consumer + ``` + +2. **Configure Environment Variables:** + Ensure that your environment has the following variables (or equivalent configuration): + - `STATE_CHANGE_DIR`: Path to the state change file directory. + - `CONSUMER_PROGRESS_DIR`: Directory to store consumer progress files. + - Additional settings for batching (e.g., `BATCH_BYTES`, `THREAD_LIMIT`) if needed. + +3. 
**Implement/Customize the Data Handler:** + - For a complete, working example of a data handler, please refer to the [Postgres Data Handler](https://github.com/deso-protocol/postgres-data-handler) repository. + - To create your own data handler, implement the methods defined by the `StateSyncerDataHandler` interface as described in [`consumer/interfaces.go`](./consumer/interfaces.go). + + +### Deployment Recommendations + +- **Containerization:** + When deploying in production, consider a multi-container setup: + 1. **DeSo Node Container:** Generates the state change file. + 2. **State Consumer Container:** Reads the file and applies changes via your custom data handler. + 3. **Target Data Store Container:** (e.g., Postgres, Elasticsearch, etc.) which stores the on-chain state. + +- **Networking and Volumes:** + Make sure the state consumer container has access to the state change file directory (using shared volumes or network file shares). + +- **Monitoring and Logging:** + Use proper logging (e.g., via glog) and monitoring tools to track syncing progress and detect errors. + +--- + +## Contributing + +Contributions, bug fixes, and feature requests are welcome. Please feel free to open an issue or submit a pull request. + +## Have more questions? + +DeepWiki (powered by Devin AI) provides up-to-date documentation you can talk to for this repo, click the button below to try it out. 
+
+[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/deso-protocol/state-consumer)
+
diff --git a/consumer/batching.go b/consumer/batching.go
index c8d6c2d..290d3c5 100644
--- a/consumer/batching.go
+++ b/consumer/batching.go
@@ -1,79 +1,295 @@
 package consumer
 
 import (
+	"bytes"
 	"fmt"
 	"github.com/deso-protocol/core/lib"
+	"github.com/golang/glog"
+	lru "github.com/hashicorp/golang-lru/v2"
+	"github.com/pkg/errors"
+	"math"
+	"time"
 )
 
-type BatchedEntry struct {
-	Encoder *lib.DeSoEncoder
-	Key     []byte
+const (
+	retryLimit = 10
+)
+
+// BatchIndexInfo stores index information for both the batch itself and the entries within the batch.
+// Batches are formed by grouping entries of the same encoder type and db operation type together.
+// They are only formed when these groupings occur naturally in the order of the state change file; the ordering
+// of entries within the file is never changed in order to form batches.
+// They have a max batch size defined by the `batchBytes` parameter in the config.
+type BatchIndexInfo struct {
+	// MinEntryIndex is the index of the first entry in the batch within the state change entry file.
+	// For example, with batches containing 100 entries, the first batch would have a MinEntryIndex of 0, the second
+	// would have a MinEntryIndex of 100, etc.
+	MinEntryIndex uint64
+	// Index tracks which batch this is relative to all other executed batches.
+	// For example, with batches containing 100 entries, the first batch would have an Index of 0, the second would
+	// have an Index of 1, etc.
+	Index uint64
 }
 
-// BatchedEntries is a struct that stores a list of encoders that are of the same type. This is used to do bulk
-// inserts into the database.
-type BatchedEntries struct {
-	EncoderType   lib.EncoderType
-	OperationType lib.StateSyncerOperationType
-	// The interface here will be one of our bun PG models.
- Entries []*BatchedEntry +// manageBatchedEntries calls the data handler to process a batch of entries, and calculates & logs the current batch progress. +func (consumer *StateSyncerConsumer) manageBatchedEntries(batchedEntries []*lib.StateChangeEntry, isBatchMempool bool, entryCount uint64, batchCount uint64) { + // Call the data handler to process the batch. We do this with retries, in case the data handler fails. + err := consumer.callHandlerWithRetries(batchedEntries, 0, isBatchMempool) + if err != nil { + glog.Fatalf("consumer.manageBatchedEntries: %v", err) + } + + // Prevent multiple threads from accessing the batch index slice at the same time. + // Use an inner function to unlock the mutex with a defer statement. + func() { + consumer.ThreadMutex.Lock() + defer consumer.ThreadMutex.Unlock() + // Upon success, add the batch index info to the batch index slice. + batchInfo := &BatchIndexInfo{ + MinEntryIndex: entryCount + uint64(len(batchedEntries)), + Index: batchCount, + } + batchIndexes := consumer.BatchIndexes + consumer.BatchIndexes = insertBatchIndexInOrder(batchIndexes, batchInfo) + + // Log progress. + fmt.Printf("Handled batch %d\n", batchCount) + + // If the number of batches is greater than the thread limit, remove the first batch index from the slice. + // We know that anything outside the bounds of the thread limit must have already been processed successfully. + if len(consumer.BatchIndexes) > consumer.ThreadLimit { + consumer.BatchIndexes = consumer.BatchIndexes[1:] + } + + lastConsecutiveBatchEntryIndex := consumer.findLastConsecutiveBatchEntryIndex() + + if !isBatchMempool { + // Save the last consecutive batch entry index to file. This is used to resume from a failed state. + if err = consumer.saveConsumerProgressToFile(lastConsecutiveBatchEntryIndex); err != nil { + glog.Errorf("consumer.manageBatchedEntries: %v", err) + } + } + }() + + // Remove a value from the blocking channel to allow the next batch to be processed. 
+ <-consumer.DBBlockingChannel + // Decrement the blocking wait group. This is used at the very end of hypersync to wait for all batches to be processed. + consumer.DBBlockingWG.Done() } -// TODO: Make this function handle inserts AND deletes. -// HandleEntryOperationBatch handles the logic for batching entries to be inserted into the database. -func (consumer *StateSyncerConsumer) HandleEntryOperationBatch(key []byte, encoder *lib.DeSoEncoder, encoderType lib.EncoderType, dbOperationType lib.StateSyncerOperationType) error { +// findLastConsecutiveBatchEntryIndex finds the last consecutive batch entry index. Because batches are processed +// asynchronously, there may be gaps in batches that have been successfully processed (e.g. batches 1, 2, 4, and 7 may +// have been processed, but batches 3, 5, and 6 may not have been processed yet). When resuming from a failed state, +// rather than tracking every unprocessed batch, we just track the last consecutive batch entry index that was processed +// successfully (2 in the example above) and start processing from there. +func (consumer *StateSyncerConsumer) findLastConsecutiveBatchEntryIndex() uint64 { + // Starting from batch 0, the last index that was processed successfully. + var lastConsecutiveBatchEntryIndex uint64 + for ii, batchIndex := range consumer.BatchIndexes { + if ii == 0 { + // We know every batch prior to the first batch index in the slice has been processed successfully. + lastConsecutiveBatchEntryIndex = batchIndex.MinEntryIndex + // Continue to avoid index out of range error below. + continue + } + + // If the current batch index is not consecutive with the prior batch index, break. + lastBatchIndex := consumer.BatchIndexes[ii-1].Index + if batchIndex.Index != lastBatchIndex+1 { + break + } + // Otherwise, set the last consecutive batch entry index to the current batch index's min entry index. 
+ lastConsecutiveBatchEntryIndex = batchIndex.MinEntryIndex + } + return lastConsecutiveBatchEntryIndex +} + +// insertBatchIndexInOrder inserts a batch index info into a slice of batch index infos in ascending Index order. +func insertBatchIndexInOrder(batchIndexes []*BatchIndexInfo, newIndexInfo *BatchIndexInfo) []*BatchIndexInfo { + // Find the position to insert the newIndexInfo + position := -1 + for ii := len(batchIndexes) - 1; ii >= 0; ii-- { + if batchIndexes[ii].Index < newIndexInfo.Index { + position = ii + 1 + break + } + } + + // Insert the newIndexInfo at the found position + batchIndexes = append(batchIndexes, nil) // Extend the capacity of the slice + if position == -1 { + copy(batchIndexes[1:], batchIndexes) + batchIndexes[0] = newIndexInfo + } else { + copy(batchIndexes[position+1:], batchIndexes[position:]) + batchIndexes[position] = newIndexInfo + } + + return batchIndexes +} + +// callHandlerWithRetries calls the data handler to process a batch of entries. If the call fails, it will retry +// with a smaller batch size until it succeeds or hits the max number of retries. These failures can happen due to +// an overloaded database or a duplicate key error. +func (consumer *StateSyncerConsumer) callHandlerWithRetries(batchedEntries []*lib.StateChangeEntry, retries int, isMempool bool) error { + // Attempt to process the batch. + if err := consumer.DataHandler.HandleEntryBatch(batchedEntries, isMempool); err != nil { + batchSize := len(batchedEntries) + + // Make sure the batch isn't empty. This should never happen. + if batchSize == 0 { + return errors.New("consumer.callHandlerWithRetries: batch size is 0") + } + + fmt.Printf("Received error with batch of size %d: %v\n", batchSize, err) + + // If an insert is being performed, try performing an upsert instead. + // This is useful for when the database runs into duplicate key errors. 
+		operationType := batchedEntries[0].OperationType
+		if operationType == lib.DbOperationTypeInsert {
+			operationType = lib.DbOperationTypeUpsert
+		}
+
+		// Exponential backoff for retries.
+		waitTime := 5 * time.Duration(math.Pow(2, float64(retries))) * time.Second
+
+		// If we've hit the max number of retries, return the error.
+		if retries > retryLimit {
+			return errors.Wrapf(err, "consumer.callHandlerWithRetries: tried %d times to process batch", retries)
+		} else if batchSize == 1 {
+			time.Sleep(waitTime)
+			// Set the operation type.
+			batchedEntries[0].OperationType = operationType
+			// Increment the retry count so that the retry limit is eventually hit.
+			err = consumer.callHandlerWithRetries(batchedEntries, retries+1, isMempool)
+			if err != nil {
+				return errors.Wrapf(err, "consumer.callHandlerWithRetries: ")
+			}
+		} else {
+			// If we failed to process a batch, try processing the batch in halves. This can be useful if the reason
+			// for failure was a db timeout.
+			batch1 := batchedEntries[:batchSize/2]
+			batch2 := batchedEntries[batchSize/2:]
+			// Set the operation type.
+			batch1[0].OperationType = operationType
+			batch2[0].OperationType = operationType
+			time.Sleep(waitTime)
+			err = consumer.callHandlerWithRetries(batch1, retries+1, isMempool)
+			if err != nil {
+				return errors.Wrapf(err, "consumer.callHandlerWithRetries: ")
+			}
+			time.Sleep(waitTime)
+			err = consumer.callHandlerWithRetries(batch2, retries+1, isMempool)
+			if err != nil {
+				return errors.Wrapf(err, "consumer.callHandlerWithRetries: ")
+			}
+		}
+	}
+	return nil
+}
+
+// QueueBatch takes a slice of state change entries and adds them to the appropriate channel if we are hypersyncing.
+// If we are not hypersyncing, it calls the data handler directly.
+func (consumer *StateSyncerConsumer) QueueBatch(batchedEntries []*lib.StateChangeEntry, isBatchMempool bool) error {
+	if consumer.IsHypersyncing {
+		// Add a bool to the blocking channel so that we block the next batch from being processed if the channel is at capacity.
+		consumer.DBBlockingChannel <- true
+		consumer.DBBlockingWG.Add(1)
+		// Handle the batched entries in a non-blocking way.
+		go consumer.manageBatchedEntries(batchedEntries, isBatchMempool, consumer.EntryCount, consumer.BatchCount)
+		// Only increment counts for non-mempool entries.
+		if !isBatchMempool {
+			consumer.BatchCount++
+			consumer.EntryCount += uint64(len(batchedEntries))
+		}
+	} else {
+		// When not in hypersync, just call the data handler directly.
+		// We don't process transactions concurrently, as transactions may be dependent on each other.
+		if err := consumer.callHandlerWithRetries(batchedEntries, 0, isBatchMempool); err != nil {
+			return errors.Wrapf(err, "consumer.QueueBatch: Error calling batch with retries")
+		}
+		// Skip incrementing the entry count and saving the consumer progress to file if this is a mempool entry.
+		if isBatchMempool {
+			return nil
+		}
+		// If the batch was successfully processed, increment the entry count.
+		consumer.EntryCount += uint64(len(batchedEntries))
+		// Save the consumer progress to file, since this isn't a mempool entry.
+		if err := consumer.saveConsumerProgressToFile(consumer.EntryCount); err != nil {
+			return errors.Wrapf(err, "consumer.QueueBatch: Error saving consumer progress to file")
+		}
+	}
+	return nil
+}
+
+// handleStateChangeEntry handles a single state change entry. It batches entries of the same encoder type and db
+// operation together, and calls the data handler when the batch is full or when the encoder type or db
+// operation changes.
+func (consumer *StateSyncerConsumer) handleStateChangeEntry(stateChangeEntry *lib.StateChangeEntry, isMempool bool) error {
+	batchSize := consumer.BytesInBatch + uint64(len(stateChangeEntry.EncoderBytes))
+
 	// If the batched entries has been set, isn't empty, and matches the current encoder type and db operation,
 	// and the entry batch isn't past the limit, add to the batch and return.
-	if consumer.BatchedEntries != nil &&
-		len(consumer.BatchedEntries.Entries) > 0 &&
-		consumer.BatchedEntries.OperationType == dbOperationType &&
-		encoderType == consumer.BatchedEntries.EncoderType &&
-		len(consumer.BatchedEntries.Entries) < consumer.MaxBatchSize {
-		consumer.BatchedEntries.Entries = append(consumer.BatchedEntries.Entries, &BatchedEntry{
-			Encoder: encoder,
-			Key:     key,
-		})
+	if len(consumer.BatchedEntries) > 0 &&
+		consumer.BatchedEntries[0].OperationType == stateChangeEntry.OperationType &&
+		stateChangeEntry.EncoderType == consumer.BatchedEntries[0].EncoderType &&
+		batchSize < consumer.MaxBatchBytes {
+		consumer.BatchedEntries = append(consumer.BatchedEntries, stateChangeEntry)
+		consumer.IsBatchMempool = isMempool
+		consumer.BytesInBatch = batchSize
 		return nil
-	} else if consumer.BatchedEntries != nil && len(consumer.BatchedEntries.Entries) > 0 {
+	} else if len(consumer.BatchedEntries) > 0 {
 		// If the batched entries do exist, but the batched encoder type and db operation don't match, or the max
 		// batched size has been reached, then do the insert/upsert/delete.
- err := consumer.DataHandler.HandleEntryBatch(consumer.BatchedEntries) - if err != nil { - return err - } - fmt.Printf("Handled batch %d\n", consumer.BatchCount) - handledEntries := len(UniqueEntries(consumer.BatchedEntries.Entries)) - err = consumer.saveConsumerProgressToFile(consumer.LastScannedIndex + uint32(handledEntries)) - if err != nil { - return err + + if err := consumer.executeBatch(); err != nil { + return errors.Wrapf(err, "consumer.HandleEntryOperationBatch: Problem executing batch") } - consumer.BatchCount = consumer.BatchCount + 1 } - // Since this is either a brand new batched encoder instance, or the batched entries were just inserted, replace - // the batch with the current encoder. - consumer.BatchedEntries = &BatchedEntries{ - EncoderType: encoderType, - OperationType: dbOperationType, - Entries: []*BatchedEntry{ - &BatchedEntry{ - Encoder: encoder, - Key: key, - }, - }, + // This is either a brand new batched encoder instance, or the batched entries were just handled. Replace + // the batch with an array containing the passed StateChangeEntry param. + consumer.BatchedEntries = []*lib.StateChangeEntry{ + stateChangeEntry, + } + consumer.IsBatchMempool = isMempool + consumer.BytesInBatch = uint64(len(stateChangeEntry.EncoderBytes)) + return nil +} + +// executeBatch executes the batched entries and saves the consumer progress to file. +func (consumer *StateSyncerConsumer) executeBatch() error { + if consumer.BatchedEntries == nil || len(consumer.BatchedEntries) == 0 { + return nil + } + // This queues the batch to be handled asynchronously, so that multiple batches can be processed at once. + if err := consumer.QueueBatch(consumer.BatchedEntries, consumer.IsBatchMempool); err != nil { + return errors.Wrapf(err, "consumer.HandleEntryOperationBatch: Problem queuing batch") } + + // Reset the batched entries to an empty array after executing them. 
+ consumer.BatchedEntries = []*lib.StateChangeEntry{} + consumer.BytesInBatch = 0 return nil } -func UniqueEntries(entries []*BatchedEntry) []*BatchedEntry { +// UniqueEntries takes a slice of state change entries and returns a slice of unique entries. +// It de-duplicates based on the key bytes. +func UniqueEntries(entries []*lib.StateChangeEntry) []*lib.StateChangeEntry { uniqueEntryMap := make(map[string]bool) - uniqueEntries := make([]*BatchedEntry, 0) + uniqueEntries := make([]*lib.StateChangeEntry, 0) // Loop through the encoders, and only add the unique ones to the return array. - for i := len(entries) - 1; i >= 0; i-- { - entry := entries[i] - keyString := string(entry.Key) + // Loop through them in reverse so that in the case of duplicates, the most recent entry is kept. + for ii := len(entries) - 1; ii >= 0; ii-- { + entry := entries[ii] + keyString := string(entry.KeyBytes) if _, exists := uniqueEntryMap[keyString]; exists { continue } else { @@ -84,10 +300,25 @@ func UniqueEntries(entries []*BatchedEntry) []*BatchedEntry { return uniqueEntries } -func KeysToDelete(entries []*BatchedEntry) [][]byte { +// FilterCachedEntries takes a slice of entries and a map of cached entries, and returns a slice of entries that are not +// in the cached entries map. +func FilterCachedEntries(entries []*lib.StateChangeEntry, cachedEntries *lru.Cache[string, []byte]) []*lib.StateChangeEntry { + filteredEntries := make([]*lib.StateChangeEntry, 0) + + for _, entry := range entries { + if cachedEntry, exists := cachedEntries.Get(string(entry.KeyBytes)); !exists || !bytes.Equal(cachedEntry, entry.EncoderBytes) { + filteredEntries = append(filteredEntries, entry) + } + } + return filteredEntries +} + +// KeysToDelete takes a slice of state change entries and returns a slice of key bytes. This helper can be used by +// the data handler to construct a slice of IDs to delete given a slice of StateChangeEntries. 
+func KeysToDelete(entries []*lib.StateChangeEntry) [][]byte { keysToDelete := make([][]byte, len(entries)) for i, entry := range entries { - keysToDelete[i] = entry.Key + keysToDelete[i] = entry.KeyBytes } return keysToDelete } diff --git a/consumer/consumer.go b/consumer/consumer.go index 4d889d1..0bcf387 100644 --- a/consumer/consumer.go +++ b/consumer/consumer.go @@ -1,336 +1,839 @@ package consumer import ( + "bufio" "encoding/binary" "fmt" + "io" + "os" + "path/filepath" + "sync" + "time" + "github.com/deso-protocol/core/lib" - "github.com/fsnotify/fsnotify" "github.com/golang/glog" - "log" - "os" + "github.com/google/uuid" + "github.com/pkg/errors" ) +const ( + ConsumerProgressFilename = "consumer-progress.bin" + // MaxMempoolErrors is the maximum number of consecutive mempool errors before returning an error + MaxMempoolErrors = 100 +) + +// StateSyncerConsumer is a struct that contains the persisted state that is needed to consume state changes from a file. +// This includes file readers, statuses, batch caches, and channels to facilitate multi-threaded processing. type StateSyncerConsumer struct { // File that contains the state changes. - StateChangeFile *os.File - StateChangeFileName string + StateChangeFile *os.File + StateChangeFileReader *bufio.Reader + + StateChangeMempoolFile *os.File + StateChangeMempoolFirstEntryFile *os.File + StateChangeMempoolFileReader *bufio.Reader + + // An ordered slice containing every mempool entry that has been applied to the database. + AppliedMempoolEntries []*lib.StateChangeEntry + + CurrentConfirmedEntryFlushId uuid.UUID + CurrentMempoolEntryFlushId uuid.UUID + // File that contains the byte indexes of the state change file that corresponds to db operations. - StateChangeIndexFile *os.File - StateChangeIndexFileName string + StateChangeIndexFile *os.File // Index of the entry in the state change file that the consumer should start parsing at. 
- LastScannedIndex uint32 + LastScannedIndex uint64 // File that contains the entry index of the last saved state change. - ConsumerProgressFile *os.File - ConsumerProgressFileName string + ConsumerProgressFile *os.File + ConsumerProgressDir string + // The data handler that will be used to process the state changes that the consumer parses. - DataHandler StateSyncerDataHandler - ProcessEntriesInBatches bool + DataHandler StateSyncerDataHandler + // An object that contains the state changes that have been parsed but not yet processed. Used for batching. - BatchedEntries *BatchedEntries - // The maximum number of entries to batch before inserting into the database. - MaxBatchSize int + BatchedEntries []*lib.StateChangeEntry + // Whether the batched entries are from a committed block or are from mempool transactions. + IsBatchMempool bool + BytesInBatch uint64 - // Track whether we're actively consuming or not. - IsScanning bool - // Track whether we're currently hypersyncing + // The maximum number of bytes to batch before inserting into the database. + MaxBatchBytes uint64 + ThreadLimit int + ThreadMutex sync.Mutex + + // Track whether we're currently hypersyncing. IsHypersyncing bool + + // Whether to wrap each batch in a db transaction. + ExecuteTransactions bool + + // Track whether we're currently syncing from the beginning. + SyncingFromBeginning bool + + // Whether to sync mempool entries, or only committed entries. + SyncMempoolEntires bool + // A counter to keep track of how many batches have been inserted. - BatchCount int - EntryCount uint32 + BatchCount uint64 + EntryCount uint64 + + // Channel to enforce a max thread limit on the listener. + DBBlockingChannel chan bool + DBBlockingWG sync.WaitGroup + + // Indexes to track asynchronous batch handling progress during hypersync. + BatchIndexes []*BatchIndexInfo + + // Whether to stop the consumer. + StopConsumer bool + + // Track consecutive mempool errors across function calls. 
+	ConsecutiveMempoolErrors int
 }
 
-func (consumer *StateSyncerConsumer) InitializeAndRun(stateChangeFileName string, stateChangeIndexFileName string, consumerProgressFilename string, processInBatches bool, batchSize int, handler StateSyncerDataHandler) error {
+func (consumer *StateSyncerConsumer) InitializeAndRun(
+	stateChangeDir string, consumerProgressFilename string, batchBytes uint64,
+	threadLimit int, syncMempool bool, handler StateSyncerDataHandler) error {
 	// initialize the consumer
-	err := consumer.initialize(stateChangeFileName, stateChangeIndexFileName, consumerProgressFilename, processInBatches, batchSize, handler)
+	err := consumer.initialize(stateChangeDir, consumerProgressFilename, batchBytes, threadLimit, syncMempool, handler)
 	if err != nil && err.Error() != "EOF" {
-		return err
+		return errors.Wrapf(err, "consumer.InitializeAndRun: Error initializing consumer")
 	}
 
-	// If there are entries to read, run an initial scan of the index file.
-	if err.Error() != "EOF" {
-		err = consumer.run()
+	// If there are entries to read, run an initial scan of the state change file.
+	if err == nil || err.Error() != "EOF" {
+		if _, _, err = consumer.processNewEntriesInFile(false); err != nil {
+			return errors.Wrapf(err, "consumer.InitializeAndRun: Error running consumer")
+		}
 	}
 
-	// Create a watcher to handle any new writes to the state change file.
-	err = consumer.watchFileAndScanOnWrite()
-	if err != nil {
-		return err
+	// After we've done an initial scan, create a watcher to handle any new writes to the state change file.
+	if err = consumer.watchFileAndScanOnWrite(); err != nil {
+		return errors.Wrapf(err, "consumer.InitializeAndRun: Error watching file")
 	}
 	return nil
 }
 
 // Open the state change file and the index file, and determine the byte index that the state syncer should start
 // parsing at.
-func (consumer *StateSyncerConsumer) initialize(stateChangeFileName string, stateChangeIndexFileName string, consumerProgressFilename string, processInBatches bool, batchSize int, handler StateSyncerDataHandler) error { +func (consumer *StateSyncerConsumer) initialize(stateChangeDir string, consumerProgressDir string, batchBytes uint64, threadLimit int, syncMempool bool, handler StateSyncerDataHandler) error { // Set up the data handler initial values. - consumer.IsScanning = true consumer.IsHypersyncing = false - consumer.ProcessEntriesInBatches = processInBatches + consumer.ExecuteTransactions = false + consumer.SyncMempoolEntires = syncMempool consumer.BatchCount = 0 consumer.EntryCount = 0 - consumer.MaxBatchSize = batchSize + consumer.MaxBatchBytes = batchBytes + consumer.ThreadLimit = threadLimit + consumer.DataHandler = handler + lib.GlobalDeSoParams = *handler.GetParams() + consumer.DBBlockingChannel = make(chan bool, threadLimit) + consumer.AppliedMempoolEntries = make([]*lib.StateChangeEntry, 0) + consumer.CurrentMempoolEntryFlushId = uuid.Nil + consumer.CurrentConfirmedEntryFlushId = uuid.Nil + consumer.ConsecutiveMempoolErrors = 0 + + stateChangeFilePath := filepath.Join(stateChangeDir, lib.StateChangeFileName) + stateChangeIndexFilePath := filepath.Join(stateChangeDir, lib.StateChangeIndexFileName) + stateChangeMempoolFilePath := filepath.Join(stateChangeDir, lib.StateChangeMempoolFileName) + + // Wait for the state changes file to be created. Once it has been created, open it. + consumer.waitForStateChangesFile(stateChangeFilePath) + + // Create a new reader for the state change file. + consumer.StateChangeFileReader = bufio.NewReader(consumer.StateChangeFile) + + // Create a new reader for the mempool file. 
+ if stateChangeMempoolFile, err := os.Open(stateChangeMempoolFilePath); err == nil { + consumer.StateChangeMempoolFile = stateChangeMempoolFile + consumer.StateChangeMempoolFileReader = bufio.NewReader(consumer.StateChangeMempoolFile) + } else { + return errors.Wrapf(err, "consumer.initialize: Error opening mempool state change file") + } - // Open the state changes file - consumer.StateChangeFileName = stateChangeFileName - stateChangeFile, err := os.Open(stateChangeFileName) - if err != nil { - return fmt.Errorf("Error opening stateChangeFile: %w", err) + if stateChangeMempoolFile, err := os.Open(stateChangeMempoolFilePath); err == nil { + consumer.StateChangeMempoolFirstEntryFile = stateChangeMempoolFile + } else { + return errors.Wrapf(err, "consumer.initialize: Error opening mempool state change file") } - consumer.StateChangeFile = stateChangeFile // Open the file that contains byte indexes for each entry in the state changes file. - consumer.StateChangeIndexFileName = stateChangeIndexFileName - indexFile, err := os.Open(stateChangeIndexFileName) + indexFile, err := os.Open(stateChangeIndexFilePath) if err != nil { - return fmt.Errorf("Error opening indexFile: %w", err) + return errors.Wrapf(err, "consumer.initialize: Error opening indexFile") } consumer.StateChangeIndexFile = indexFile // Open the file that contains the entry index of the last saved state change. - consumer.ConsumerProgressFileName = consumerProgressFilename - startEntryIndexFile, err := os.Open(consumerProgressFilename) - if err == nil { - consumer.ConsumerProgressFile = startEntryIndexFile + consumer.ConsumerProgressDir = consumerProgressDir + consumerProgressFilePath := filepath.Join(consumerProgressDir, ConsumerProgressFilename) + + if consumerProgressFile, err := os.Open(consumerProgressFilePath); err == nil { + consumer.ConsumerProgressFile = consumerProgressFile } - stateChangeFileByteIndex, err := consumer.retrieveFileIndexForDbOperation() + // Get last entry index that was synced. 
+	lastEntrySyncedIdx, err := consumer.retrieveLastSyncedStateChangeEntryIndex()
 	if err != nil {
-		if err.Error() == "EOF" {
-			consumer.end()
-			return err
+		return errors.Wrapf(err, "consumer.initialize: Error retrieving last synced state change entry index")
+	}
+
+	// If the last entry synced index is not 0, we are resuming a previous sync.
+	// Revert the mempool transactions that were applied during the previous sync.
+	if lastEntrySyncedIdx != 0 {
+		err = consumer.revertStoredMempoolTransactions()
+		if err != nil {
+			return errors.Wrapf(err, "consumer.initialize: Error reverting mempool transactions")
 		}
-		return err
 	}
-	consumer.StateChangeFile.Seek(int64(stateChangeFileByteIndex), 0)
+	// Discover where we should start parsing the state change file.
+	stateChangeFileByteIndex, err := consumer.retrieveFileIndexForDbOperation(lastEntrySyncedIdx)
+	if err != nil {
+		return errors.Wrapf(err, "consumer.initialize: Error retrieving file index for db operation")
+	}

-	consumer.DataHandler = handler
+	// Set the batch count to the current batch on resume.
+	currentBatch := stateChangeFileByteIndex / batchBytes
+	consumer.BatchCount = currentBatch
+
+	// Seek to the byte index that we should start parsing at.
+	if _, err = consumer.StateChangeFile.Seek(int64(stateChangeFileByteIndex), 0); err != nil {
+		return errors.Wrapf(err, "consumer.initialize: Error seeking to byte index")
+	}

 	// If the byte index is 0, we are starting a fresh sync.
 	if stateChangeFileByteIndex == 0 {
-		consumer.DataHandler.HandleSyncEvent(SyncEventStart)
+		consumer.SyncingFromBeginning = true
+		if err = consumer.DataHandler.HandleSyncEvent(SyncEventStart); err != nil {
+			return errors.Wrapf(err, "consumer.initialize: Error handling sync start event")
+		}
+	}
+
+	// Check if we are starting a block sync, emit an event if so.
+ err = consumer.checkBlockSyncStart() + if err != nil { + return errors.Wrapf(err, "consumer.initialize: Error checking block sync start") } return nil } -func (consumer *StateSyncerConsumer) watchFileAndScanOnWrite() error { - // Create new watcher. - watcher, err := fsnotify.NewWatcher() - if err != nil { - log.Fatal(err) - } - defer watcher.Close() +// processNewEntriesInFile reads the state change file and passes each entry to the data handler. +func (consumer *StateSyncerConsumer) processNewEntriesInFile(isMempool bool) (bool, bool, error) { + revertTriggered := false + entriesProcessed := false - // Start listening for events. - go func() { - for { - select { - case event, ok := <-watcher.Events: - if !ok { - return - } - log.Println("event:", event) - if event.Op&fsnotify.Write == fsnotify.Write { - fmt.Printf("File modified. Scanning for state changes.\n") - consumer.StateChangeFile, _ = os.Open(consumer.StateChangeFileName) - // Don't start scanning if we're already scanning. - if !consumer.IsScanning { - consumer.run() - } - } - case err, ok := <-watcher.Errors: - if !ok { - return + fileEOF := false + // Read from the state change file until we reach the end. + for !fileEOF { + var err error + var stateChangeEntry *lib.StateChangeEntry + // Get the next state change entry from the state change file. + stateChangeEntry, fileEOF, err = consumer.retrieveNextEntry(isMempool) + if err != nil { + // If the error is from the mempool file, don't kill the process, just log the error. 
+ if isMempool { + consumer.ConsecutiveMempoolErrors++ + + // If we've exceeded the maximum number of mempool errors, return the error + if consumer.ConsecutiveMempoolErrors >= MaxMempoolErrors { + return revertTriggered, entriesProcessed, errors.Wrapf(err, "consumer.processNewEntriesInFile: Maximum mempool errors (%d) exceeded", MaxMempoolErrors) } - glog.Fatalf("Error watching file: %v", err) + glog.Errorf("consumer.processNewEntriesInFile: Error reading next mempool entry from file (error %d/%d): %s", consumer.ConsecutiveMempoolErrors, MaxMempoolErrors, err.Error()) + break } + return revertTriggered, entriesProcessed, errors.Wrapf(err, "consumer.processNewEntriesInFile: Error reading next entry from file") } - }() - // Add a path. - err = watcher.Add(consumer.StateChangeFileName) - if err != nil { - return err + // Reset consecutive mempool error count on successful retrieval + if isMempool { + consumer.ConsecutiveMempoolErrors = 0 + } + if fileEOF { + break + } + entriesProcessed = true + var entryRevertTriggered bool + if !isMempool { + entryRevertTriggered, err = consumer.SyncCommittedEntry(stateChangeEntry) + } else { + entryRevertTriggered, err = consumer.SyncMempoolEntry(stateChangeEntry) + } + + // Update the overall revertTriggered flag if this entry triggered a revert + revertTriggered = revertTriggered || entryRevertTriggered + if err != nil { + return revertTriggered, entriesProcessed, errors.Wrapf(err, "consumer.processNewEntriesInFile: Error syncing committed entry") + } } - // Block main goroutine forever. - <-make(chan struct{}) - fmt.Printf("Done watching file.\n") - return nil + // Once we've reached the file EOF, process any remaining batched entries and cleanup. + if err := consumer.cleanup(); err != nil { + return revertTriggered, entriesProcessed, errors.Wrapf(err, "consumer.processNewEntriesInFile: Error cleaning up") + } + + // If we are syncing from the beginning, emit a sync end event. 
+	if consumer.SyncingFromBeginning && !isMempool && !consumer.IsHypersyncing {
+		consumer.SyncingFromBeginning = false
+		if err := consumer.DataHandler.HandleSyncEvent(SyncEventComplete); err != nil {
+			return revertTriggered, entriesProcessed, errors.Wrapf(err, "consumer.processNewEntriesInFile: Error handling sync end event")
+		}
+	}
+	return revertTriggered, entriesProcessed, nil
 }

-// ReadNextEntryFromFile reads the next entry from the state change file and runs the appropriate data handler function.
-// The format of the file is:
-// [operation type (1 byte)][encoder type (2 bytes)][key length (2 bytes)][key bytes][value length (2 bytes)][value bytes]
-func (consumer *StateSyncerConsumer) readNextEntryFromFile() (bool, error) {
-	// Extract the operation type.
-	// Operation type is 0 for insert, 1 for delete, 2 for update, and 3 for upsert.
-	operationTypeInt, err := getUint8FromFile(consumer.StateChangeFile)
-	operationType := lib.StateSyncerOperationType(operationTypeInt)
-	if err != nil {
-		return true, nil
-		//return false, err
+func (consumer *StateSyncerConsumer) SyncCommittedEntry(stateChangeEntry *lib.StateChangeEntry) (bool, error) {
+	revertTriggered := false
+	// If the entry is from a new flush (i.e. a new block), revert the current mempool entries before applying.
+	if stateChangeEntry.FlushId != consumer.CurrentConfirmedEntryFlushId {
+		if err := consumer.RevertMempoolEntries(); err != nil {
+			return false, errors.Wrapf(err, "consumer.SyncCommittedEntry: Error reverting mempool entries")
+		}
+		revertTriggered = true
+		// Update the current block sync flush ID.
+		consumer.CurrentConfirmedEntryFlushId = stateChangeEntry.FlushId
+		if !consumer.IsHypersyncing {
+			// Log the handling of the flush.
+			fmt.Println("Now handling flush ", stateChangeEntry.FlushId.String())
+		}
 	}
+	// Detect if this entry represents a sync state change and emit an event if so.
+	if err := consumer.detectAndHandleSyncEvent(stateChangeEntry); err != nil {
+		return revertTriggered, errors.Wrapf(err, "consumer.SyncCommittedEntry: Error detecting sync event")
+	}
+	// Handle the state change entry.
+	if err := consumer.handleStateChangeEntry(stateChangeEntry, false); err != nil {
+		return revertTriggered, errors.Wrapf(err, "consumer.SyncCommittedEntry: Error handling state change entry")
+	}
+	return revertTriggered, nil
+}

-	// If the operation type is an insert, we must be hypersyncing.
-	if operationType == lib.DbOperationTypeInsert && !consumer.IsHypersyncing {
-		consumer.IsHypersyncing = true
-	} else if operationType != lib.DbOperationTypeInsert && consumer.IsHypersyncing {
-		// If the operation type is not an insert, we must have finished hypersyncing.
-		consumer.IsHypersyncing = false
-		if err = consumer.DataHandler.HandleSyncEvent(SyncEventHypersyncComplete); err != nil {
-			return false, err
+func (consumer *StateSyncerConsumer) SyncMempoolEntry(stateChangeEntry *lib.StateChangeEntry) (bool, error) {
+	revertTriggered := false
+
+	// If the entry is from a new flush (i.e. a new block), revert the current mempool entries before applying.
+	if stateChangeEntry.FlushId != consumer.CurrentMempoolEntryFlushId {
+		if err := consumer.RevertMempoolEntries(); err != nil {
+			return false, errors.Wrapf(err, "consumer.SyncMempoolEntry: Error reverting mempool entries")
 		}
+		revertTriggered = true
+		consumer.CurrentMempoolEntryFlushId = stateChangeEntry.FlushId
 	}
-	//fmt.Printf("\nHere is the operation type: %v", operationType)
+	// Handle the state change entry.
+ if err := consumer.handleStateChangeEntry(stateChangeEntry, true); err != nil { + return false, errors.Wrapf(err, "consumer.processNewEntriesInFile: Error handling state change entry") + } - // Extract which encoder the entry is encoded with. - encoderType, err := getUint16FromFile(consumer.StateChangeFile) - if err != nil { - return false, err + // Add this entry to the list of applied mempool entries. + consumer.AppliedMempoolEntries = append(consumer.AppliedMempoolEntries, stateChangeEntry) + + // Add this entry to the file log of applied mempool entries. + consumer.saveMempoolProgressToFile(stateChangeEntry) + return revertTriggered, nil +} + +func (consumer *StateSyncerConsumer) RevertMempoolEntry(stateChangeEntry *lib.StateChangeEntry) error { + // Create a copy of the stateChangeEntry. + revertEntry := *stateChangeEntry + + // If the ancestral record is nil, we need to delete the entry. + if revertEntry.AncestralRecord == nil { + revertEntry.OperationType = lib.DbOperationTypeDelete + revertEntry.Encoder = nil + } else { + // If the ancestral record exists, update the db record to that value. + revertEntry.OperationType = lib.DbOperationTypeUpsert + revertEntry.Encoder = revertEntry.AncestralRecord + revertEntry.EncoderBytes = revertEntry.AncestralRecordBytes } - //fmt.Printf("\nHere is the encoder type: %v", encoderType) - // Determine how large the key is, in bytes. - keyByteSize, err := getUint16FromFile(consumer.StateChangeFile) - if err != nil { - return false, err + // Handle the reverted state change entry. + if err := consumer.handleStateChangeEntry(&revertEntry, true); err != nil { + return errors.Wrapf(err, "consumer.processNewEntriesInFile: Error handling state change entry") } - //fmt.Printf("\nHere is the keyByteSize: %v", keyByteSize) - // Read the contents of the first uint16 from the stateChangeFile into a byte slice. 
- keyBytes, err := getBytesFromFile(int(keyByteSize), consumer.StateChangeFile) - if err != nil { - return false, err + if len(consumer.AppliedMempoolEntries) == 0 { + return nil } - //fmt.Printf("\nHere is the keyBytes: %v", keyBytes) - // Get encoder for the key. - isEncoder, encoder := lib.StateKeyToDeSoEncoder(keyBytes) - if !isEncoder || encoder == nil { - return false, fmt.Errorf("No encoder found for encoder type: %d", encoderType) + // Remove this entry from the list of applied mempool entries. + consumer.AppliedMempoolEntries = consumer.AppliedMempoolEntries[:len(consumer.AppliedMempoolEntries)-1] + return nil +} + +func (consumer *StateSyncerConsumer) RevertMempoolEntries() error { + // Execute any remaining batched transactions before executing the revert. + if err := consumer.executeBatch(); err != nil { + return errors.Wrapf(err, "consumer.revertMempoolEntries: Error executing batch") } - // Determine how large the entry is, in bytes. - entryByteSize, err := getUint32FromFile(consumer.StateChangeFile) + // Revert all applied mempool entries in reverse order. + for ii := len(consumer.AppliedMempoolEntries) - 1; ii >= 0; ii-- { + if err := consumer.RevertMempoolEntry(consumer.AppliedMempoolEntries[ii]); err != nil { + return errors.Wrapf(err, "consumer.revertMempoolEntries: Error reverting mempool entry") + } + } + // Execute any remaining batched transactions before finalizing the revert. + if err := consumer.executeBatch(); err != nil { + return errors.Wrapf(err, "consumer.revertMempoolEntries: Error executing batch") + } + return nil +} + +// readAndDecodeNextEntry reads the next state change entry from the state change file and decodes it as a deso encoder. 
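The revert rule used by `RevertMempoolEntry` — delete the record when no ancestral record exists, otherwise upsert the ancestral value back — can be shown with simplified stand-in types. `Entry`, `OperationType`, and `revertOf` below are hypothetical, not the `lib` package's real definitions:

```go
package main

import "fmt"

// Simplified stand-ins for lib.StateSyncerOperationType and lib.StateChangeEntry,
// just enough to illustrate the revert rule.
type OperationType int

const (
	OpInsert OperationType = iota
	OpDelete
	OpUpsert
)

type Entry struct {
	OperationType  OperationType
	Value          []byte
	AncestralValue []byte // value before the mempool change; nil if none existed
}

// revertOf builds the entry that undoes a mempool entry: if no ancestral record
// exists, the key must be deleted; otherwise it is upserted back to the old value.
func revertOf(e Entry) Entry {
	if e.AncestralValue == nil {
		return Entry{OperationType: OpDelete}
	}
	return Entry{OperationType: OpUpsert, Value: e.AncestralValue}
}

func main() {
	// A brand-new mempool insert has no ancestral record: revert by deleting.
	fresh := Entry{OperationType: OpInsert, Value: []byte("new")}
	fmt.Println(revertOf(fresh).OperationType == OpDelete)

	// An update to an existing record reverts to the ancestral value.
	updated := Entry{OperationType: OpUpsert, Value: []byte("new"), AncestralValue: []byte("old")}
	reverted := revertOf(updated)
	fmt.Println(reverted.OperationType == OpUpsert && string(reverted.Value) == "old")
}
```

This is why the node records the ancestral value alongside each mempool state change: without it, an unconfirmed update could not be rolled back when a new block arrives.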
+func (consumer *StateSyncerConsumer) readAndDecodeNextEntry(reader *bufio.Reader, file *os.File) (sce *lib.StateChangeEntry, eof bool, err error) { + // Get the current position in the file + currentPos, err := file.Seek(0, io.SeekCurrent) if err != nil { - return false, err + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error getting current position in file") + } + // Get the size of the next state change entry. + entryByteSize, err := lib.ReadUvarint(reader) + if err != nil && (errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF)) { + // If it's an unexpected EOF, log it and return true to signify EOF. + glog.V(2).Infof("consumer.readAndDecodeNextEntry: Error reading from state change file: %v", err) + + // Reset the reader to the position before the unexpected EOF. + if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") + } + return nil, true, nil + } else if err != nil { + if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") + } + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error reading from state change file") } - //fmt.Printf("\nHere is the entryByteSize: %v", entryByteSize) - if entryByteSize > 0 { - // Read the contents of the first uint16 from the stateChangeFile into a byte slice. - entryBytes, err := getBytesFromFile(int(entryByteSize), consumer.StateChangeFile) - if err != nil { - return false, err + if err = CheckSliceSize(int(entryByteSize)); err != nil { + // Reset the reader. 
+ if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") } - //fmt.Printf("\nHere is the entryBytes: %v", entryBytes) + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error checking slice size") + } - // Decode value to DeSo Encoder. - DecodeEntry(encoder, entryBytes) + // Create a buffer to hold the entry. + buffer := make([]byte, entryByteSize) + bytesRead, err := io.ReadFull(reader, buffer) + // If there are no bytes to read, return true to signify EOF. + if bytesRead == 0 { + if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") + } + return nil, true, nil + } else if err != nil && (errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF)) { + // If it's an unexpected EOF, log it and return true to signify EOF. + glog.V(2).Infof("consumer.readAndDecodeNextEntry: Error reading from state change file: %v", err) + // Reset the reader to the position before the unexpected EOF. + if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") + } + return nil, true, nil + } else if err != nil { + // Reset the reader to the position before the unexpected EOF. + if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") + } + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error reading from state change file") + } else if bytesRead < int(entryByteSize) { + // Reset the reader to the position before the unexpected EOF. 
+ if _, err = file.Seek(currentPos, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error seeking to current position in file") + } + return nil, false, fmt.Errorf("consumer.readAndDecodeNextEntry: Not enough bytes read from state change file. Expected %d, got %d", entryByteSize, bytesRead) } - consumer.EntryCount += 1 - //fmt.Printf("\nHere is the entryCounter: %v", consumer.EntryCounter) - //fmt.Printf("\nHere is the encoder: %+v", encoder) + // Decode the state change entry. + stateChangeEntry := &lib.StateChangeEntry{} - // Pass the parsed values to the appropriate data handler function. - if consumer.ProcessEntriesInBatches { - return false, consumer.HandleEntryOperationBatch(keyBytes, &encoder, lib.EncoderType(encoderType), operationType) - } else { - return false, consumer.DataHandler.HandleEntry(keyBytes, encoder, lib.EncoderType(encoderType), operationType) + // Create deferred function to handle any panics that occur during decoding. + defer func() { + if r := recover(); r != nil { + file.Seek(currentPos, io.SeekStart) + err = fmt.Errorf("consumer.readAndDecodeNextEntry: Panic decoding entry: %v", r) + eof = false + sce = nil + } + + }() + if err = DecodeEntry(stateChangeEntry, buffer); err != nil { + file.Seek(currentPos, io.SeekStart) + return nil, false, errors.Wrapf(err, "consumer.readAndDecodeNextEntry: Error decoding entry") } + + return stateChangeEntry, false, err } -func (consumer *StateSyncerConsumer) run() error { - fileEOF := false - for !fileEOF { - var err error - fileEOF, err = consumer.readNextEntryFromFile() - if err != nil { - return err +// retrieveNextEntry reads the next StateChangeEntry bytes from the state change file and decode them. 
+func (consumer *StateSyncerConsumer) retrieveNextEntry(isMempool bool) (*lib.StateChangeEntry, bool, error) { + var reader *bufio.Reader + var file *os.File + if isMempool { + reader = consumer.StateChangeMempoolFileReader + file = consumer.StateChangeMempoolFile + } else { + reader = consumer.StateChangeFileReader + file = consumer.StateChangeFile + } + + // If mempool, check first entry to see if the flush ID has changed. + if isMempool { + // Scan the first entry in the mempool file to see if the flush ID has changed. + if _, err := consumer.StateChangeMempoolFirstEntryFile.Seek(0, io.SeekStart); err != nil { + return nil, false, errors.Wrapf(err, "consumer.retrieveNextEntry: Error seeking to start of mempool file") + } + // Read the first mempool entry to see if the flush ID has changed. + firstEntryReader := bufio.NewReader(consumer.StateChangeMempoolFirstEntryFile) + + mempoolFirstEntry, eof, err := consumer.readAndDecodeNextEntry(firstEntryReader, consumer.StateChangeMempoolFirstEntryFile) + if eof { + return nil, true, nil + } else if err != nil { + return nil, false, errors.Wrapf(err, "consumer.retrieveNextEntry: Error reading and decoding first mempool entry") } + + // If the flush ID has changed, revert the current mempool entries and reset the mempool reader. + if mempoolFirstEntry.FlushId != consumer.CurrentMempoolEntryFlushId { + if err = consumer.RevertMempoolEntries(); err != nil { + return nil, false, errors.Wrapf(err, "consumer.retrieveNextEntry: Error reverting mempool entries") + } + // Set the flush ID to the new flush ID. + consumer.CurrentMempoolEntryFlushId = mempoolFirstEntry.FlushId + // Reset the mempool reader, so that the next entry read will be the first entry in the new flush. + consumer.StateChangeMempoolFile.Seek(0, io.SeekStart) + consumer.StateChangeMempoolFileReader = bufio.NewReader(consumer.StateChangeMempoolFile) + // Set the reader to the newly reset mempool file reader. 
+ reader = consumer.StateChangeMempoolFileReader + } + } + stateChangeEntry, eof, err := consumer.readAndDecodeNextEntry(reader, file) + if eof { + return nil, true, nil + } else if err != nil { + return nil, false, errors.Wrapf(err, "consumer.retrieveNextEntry: Error reading and decoding entry") } - consumer.IsScanning = false - return consumer.end() + + return stateChangeEntry, false, nil } -func (consumer *StateSyncerConsumer) end() error { - consumer.cleanup() - if err := consumer.StateChangeFile.Close(); err != nil { - return err +// detectAndHandleSyncEvent determines if the state change entry represents a sync event and emits it to the data handler. +func (consumer *StateSyncerConsumer) detectAndHandleSyncEvent(stateChangeEntry *lib.StateChangeEntry) error { + // Determine if hypersync is beginning or ending. + if stateChangeEntry.OperationType == lib.DbOperationTypeInsert && !consumer.IsHypersyncing { + consumer.IsHypersyncing = true + if err := consumer.DataHandler.HandleSyncEvent(SyncEventHypersyncStart); err != nil { + return errors.Wrapf(err, "consumer.detectAndHandleSyncEvent: Error handling hypersync start event") + } + } else if stateChangeEntry.OperationType != lib.DbOperationTypeInsert && consumer.IsHypersyncing { + // If the operation type is not an insert, we must have finished hypersyncing. + // First, wait for any remaining batch threads to finish. + consumer.DBBlockingWG.Wait() + // Set the hypersyncing flag to false and close the channels. 
+		consumer.IsHypersyncing = false
+		consumer.ExecuteTransactions = true
+		close(consumer.DBBlockingChannel)
+		if err := consumer.DataHandler.HandleSyncEvent(SyncEventHypersyncComplete); err != nil {
+			return errors.Wrapf(err, "consumer.detectAndHandleSyncEvent: Error handling hypersync complete event")
+		}
+		if err := consumer.DataHandler.HandleSyncEvent(SyncEventBlocksyncStart); err != nil {
+			return errors.Wrapf(err, "consumer.detectAndHandleSyncEvent: Error handling blocksync start event")
+		}
+	} else if consumer.LastScannedIndex == 0 && stateChangeEntry.OperationType != lib.DbOperationTypeInsert {
+		consumer.ExecuteTransactions = true
+		if err := consumer.DataHandler.HandleSyncEvent(SyncEventHypersyncComplete); err != nil {
+			return errors.Wrapf(err, "consumer.detectAndHandleSyncEvent: Error handling hypersync complete event")
+		}
+		if err := consumer.DataHandler.HandleSyncEvent(SyncEventBlocksyncStart); err != nil {
+			return errors.Wrapf(err, "consumer.detectAndHandleSyncEvent: Error handling blocksync start event")
+		}
 	}
-	if err := consumer.StateChangeIndexFile.Close(); err != nil {
-		return err
+
+	// Determine if we've reached a new transaction type during hypersync, log it if so.
+	if consumer.IsHypersyncing && len(consumer.BatchedEntries) > 0 && stateChangeEntry.EncoderType != consumer.BatchedEntries[0].EncoderType {
+		fmt.Printf("Now hypersyncing encoder type %d\n", stateChangeEntry.EncoderType)
 	}
-	if consumer.ConsumerProgressFile != nil {
-		if err := consumer.ConsumerProgressFile.Close(); err != nil {
-			return err
+
+	return nil
+}
+
+// watchFileAndScanOnWrite continually triggers a new scan of the consumer. If there are any new changes that have been
+// written, they will be captured by the scan; otherwise the scan will exit.
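The hypersync detection rule in `detectAndHandleSyncEvent` — a run of insert operations means hypersync is in progress, and the first non-insert flips the consumer into blocksync — is a small state machine. Here is a minimal sketch with stand-in event names (the real code emits `SyncEventHypersyncStart`, `SyncEventHypersyncComplete`, and `SyncEventBlocksyncStart` through the data handler):

```go
package main

import "fmt"

// Stand-in sync event names; the real consumer emits SyncEventHypersyncStart,
// SyncEventHypersyncComplete, and SyncEventBlocksyncStart via the data handler.
type SyncEvent string

const (
	HypersyncStart    SyncEvent = "hypersync-start"
	HypersyncComplete SyncEvent = "hypersync-complete"
	BlocksyncStart    SyncEvent = "blocksync-start"
)

// syncDetector mirrors the consumer's rule: a stream of insert operations means
// hypersync is in progress; the first non-insert marks the switch to blocksync.
type syncDetector struct {
	hypersyncing bool
}

// observe returns the sync events implied by the next operation type.
func (d *syncDetector) observe(isInsert bool) []SyncEvent {
	switch {
	case isInsert && !d.hypersyncing:
		d.hypersyncing = true
		return []SyncEvent{HypersyncStart}
	case !isInsert && d.hypersyncing:
		d.hypersyncing = false
		return []SyncEvent{HypersyncComplete, BlocksyncStart}
	}
	return nil
}

func main() {
	detector := &syncDetector{}
	// Three hypersync inserts followed by two blocksync operations.
	for _, isInsert := range []bool{true, true, true, false, false} {
		for _, event := range detector.observe(isInsert) {
			fmt.Println(event)
		}
	}
}
```

The transition is one-way per run: once the first non-insert arrives, the consumer flushes its batch workers, enables transactional execution, and stays in blocksync mode.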
+func (consumer *StateSyncerConsumer) watchFileAndScanOnWrite() (err error) {
+	for !consumer.StopConsumer {
+		err = func() error {
+			// Short sleep to prevent busy-waiting.
+			time.Sleep(25 * time.Millisecond)
+
+			// If we are executing transactions, initiate a new transaction.
+			// This should occur after hypersync is complete.
+			if consumer.ExecuteTransactions {
+				err = consumer.DataHandler.InitiateTransaction()
+				if err != nil {
+					return errors.Wrapf(err, "consumer.watchFileAndScanOnWrite: Error initiating transaction")
+				}
+				defer func() {
+					// Call CommitTransaction and handle any potential error.
+					if commitErr := consumer.DataHandler.CommitTransaction(); commitErr != nil {
+						// If there's an error, wrap it with additional context and assign it to the named return variable.
+						err = fmt.Errorf("consumer.watchFileAndScanOnWrite: error committing transaction: %w", commitErr)
+					}
+				}()
+			}
+			// Process any new committed entries.
+			revertTriggered, _, err := consumer.processNewEntriesInFile(false)
+			if err != nil {
+				return errors.Wrapf(err, "consumer.watchFileAndScanOnWrite: Error scanning committed entries")
+			}
+
+			// Process any new mempool entries.
+			if consumer.SyncMempoolEntires {
+				// If a revert was triggered during the committed entries, and we are executing transactions,
+				// we need to process mempool entries UNTIL we see new entries get applied.
+				// This is to ensure that any entries that were reverted but not included in the new block are
+				// re-applied before the transaction is committed.
+ if revertTriggered && consumer.ExecuteTransactions { + for { + _, entriesProcessed, err := consumer.processNewEntriesInFile(true) + if err != nil { + return errors.Wrapf(err, "consumer.watchFileAndScanOnWrite: Error scanning mempool entries") + } + // Break if we processed entries, since we've now synced the new mempool state + if entriesProcessed { + break + } + // Small sleep to prevent busy waiting + time.Sleep(25 * time.Millisecond) + } + } else { + // Just process once if no revert or not executing transactions + _, _, err := consumer.processNewEntriesInFile(true) + if err != nil { + return errors.Wrapf(err, "consumer.watchFileAndScanOnWrite: Error scanning mempool entries") + } + } + } + + return nil + }() + if err != nil { + return errors.Wrapf(err, "consumer.watchFileAndScanOnWrite: Error processing new entries") } } return nil } -// RetrieveFileIndexForDbOperation retrieves the byte index in the state change file for the next db operation. -// It does this by reading the last saved entry index from the entry index file and multiplying it by 4 to get the -// byte index in the state change index file. -func (consumer *StateSyncerConsumer) retrieveFileIndexForDbOperation() (uint32, error) { - startEntryIndex := uint32(0) - var err error - if consumer.ConsumerProgressFile != nil { - startEntryIndex, err = getUint32FromFile(consumer.ConsumerProgressFile) - if err != nil { - return 0, err +// waitForStateChangesFile blocks execution until the state changes file is created, and then assigns it to the consumer. +// It blocks until the file is non-empty. This prevents the consumer from starting before the state changes file has been +// fully initialized, causing an EOF read error. +func (consumer *StateSyncerConsumer) waitForStateChangesFile(stateChangeFileName string) { + for { + // Attempt to open the state changes file. If it doesn't exist, wait 5 seconds and try again. 
+		if stateChangeFile, err := os.Open(stateChangeFileName); err == nil {
+			consumer.StateChangeFile = stateChangeFile
+			// Once the file is successfully open, check if it is empty. If it is, wait 5 seconds and try again.
+			stateChangeFileInfo, err := stateChangeFile.Stat()
+			if err == nil {
+				// If the file is non-empty, break out of the loop and stop blocking the thread.
+				if stateChangeFileInfo.Size() > 0 {
+					break
+				}
+			}
+		}
+		fmt.Println("Waiting for state changes file to be created...")
+		time.Sleep(5 * time.Second)
+	}
+}
+
+// retrieveLastSyncedStateChangeEntryIndex looks up the last synced state change entry index from the consumer progress file.
+// This is used to determine where to start scanning the state changes file from after a restart.
+func (consumer *StateSyncerConsumer) retrieveLastSyncedStateChangeEntryIndex() (uint64, error) {
+	// Attempt to open the consumer progress file. If it exists, it should have a single uint64 representing the
+	// last StateChangeEntry index that was processed.
+	if consumer.ConsumerProgressFile != nil {
+		return getUint64FromFile(consumer.ConsumerProgressFile)
+	}
+	return 0, nil
+}
+
+// retrieveFileIndexForDbOperation retrieves the byte index in the state change file for the next db operation.
+// It does this by reading the last saved entry index from the entry index file and multiplying it by 8 to get the
+// byte index in the state change index file.
+func (consumer *StateSyncerConsumer) retrieveFileIndexForDbOperation(startEntryIndex uint64) (uint64, error) {
+	consumer.EntryCount = startEntryIndex
 	consumer.LastScannedIndex = startEntryIndex
-	fmt.Printf("Last scanned index: %d\n", startEntryIndex)

-	// Each entry byte index is represented as a uint32. This means the entry byte index exists at it'consumer
-	// index * 4.
-	entryIndexBytes := make([]byte, 4)
-	fileBytesPosition := int64(startEntryIndex * 4)
+	// Find the byte index in the state change file for the next db operation. Each entry byte index is represented
+	// in the index file as a uint64. This means the entry byte index exists at its consumer progress index * 8.
+	entryIndexBytes := make([]byte, 8)
+	fileBytesPosition := int64(startEntryIndex * 8)
 	bytesRead, err := consumer.StateChangeIndexFile.ReadAt(entryIndexBytes, fileBytesPosition)
-	if err != nil {
-		return 0, err
+	if bytesRead == 0 {
+		return consumer.retrieveFileIndexForDbOperation(startEntryIndex - 1)
+	} else if err != nil {
+		return 0, errors.Wrapf(err, "consumer.retrieveFileIndexForDbOperation: Error reading from state change index file")
 	}

-	// If we read no bytes, we're at EOF.
-	if bytesRead == 0 {
-		return 0, fmt.Errorf("EOF reached")
-	}
-	// If we read a weird number of bytes, something is wrong.
-	if bytesRead < 4 {
-		return 0, fmt.Errorf("Too few bytes read")
+	// If we read a non-uint64 number of bytes, something is wrong.
+	if bytesRead < 8 {
+		return 0, errors.New("consumer.retrieveFileIndexForDbOperation: Too few bytes read")
 	}
-	// Use binary package to read a uint32 index from the byte slice representing the index of the db operation.
-	dbIndex := binary.LittleEndian.Uint32(entryIndexBytes)
+	// Use binary package to read a uint64 index from the byte slice representing the index of the db operation.
+	dbIndex := binary.LittleEndian.Uint64(entryIndexBytes)
 	return dbIndex, nil
 }

-func (consumer *StateSyncerConsumer) saveConsumerProgressToFile(entryIndex uint32) error {
-	file, err := os.Create(consumer.ConsumerProgressFileName)
+// peekNextStateChangeEntry reads the next entry from the state change file without advancing the file pointer.
+func (consumer *StateSyncerConsumer) peekNextStateChangeEntry(reader *bufio.Reader, file *os.File) (*lib.StateChangeEntry, error) {
+	// Get the current byte position in the state change file.
+	currentPos, err := file.Seek(0, io.SeekCurrent)
+	if err != nil {
+		return nil, errors.Wrapf(err, "consumer.peekNextStateChangeEntry: Error getting current file position")
+	}
+
+	// Read the next entry from the state change file.
+	stateChangeEntry, _, err := consumer.readAndDecodeNextEntry(reader, file)
+	if err != nil {
+		return nil, errors.Wrapf(err, "consumer.peekNextStateChangeEntry: Error reading next entry")
+	}
+
+	// Seek back to the original position in the file.
+	if _, err = file.Seek(currentPos, io.SeekStart); err != nil {
+		return nil, errors.Wrapf(err, "consumer.peekNextStateChangeEntry: Error seeking back to the original position in the file")
+	}
+
+	return stateChangeEntry, nil
+}
+
+// checkBlockSyncStart checks if the next entry in the state change file is a blocksync event. If it is, emit a
+// SyncEventBlocksyncStart event to the data handler.
+func (consumer *StateSyncerConsumer) checkBlockSyncStart() error {
+	// Peek at the next state change entry.
+	nextStateChangeEntry, err := consumer.peekNextStateChangeEntry(consumer.StateChangeFileReader, consumer.StateChangeFile)
+	if err != nil {
+		return errors.Wrapf(err, "consumer.checkBlockSyncStart: Error peeking at next state change entry")
+	}
+	if nextStateChangeEntry == nil {
+		return nil
+	}
+	if nextStateChangeEntry.OperationType != lib.DbOperationTypeInsert {
+		consumer.ExecuteTransactions = true
+		if err = consumer.DataHandler.HandleSyncEvent(SyncEventBlocksyncStart); err != nil {
+			return errors.Wrapf(err, "consumer.checkBlockSyncStart: Error handling blocksync start event")
+		}
+	}
+	return nil
+}
+
+// saveConsumerProgressToFile saves the last StateChangeEntry index that was processed to the consumer progress file.
+// This is represented as a single uint64 encoded to bytes.
+func (consumer *StateSyncerConsumer) saveConsumerProgressToFile(entryIndex uint64) error {
+	consumerProgressFilepath := filepath.Join(consumer.ConsumerProgressDir, ConsumerProgressFilename)
+	// Create the file if it doesn't exist.
+	file, err := createDirAndFile(consumerProgressFilepath)
 	if err != nil {
-		return err
+		return errors.Wrapf(err, "consumer.saveConsumerProgressToFile: Error creating consumer progress file: %s", consumer.ConsumerProgressDir)
 	}
 	defer file.Close()
+	// Write the entry index to the file.
 	err = binary.Write(file, binary.LittleEndian, entryIndex)
 	if err != nil {
-		return err
+		return errors.Wrapf(err, "consumer.saveConsumerProgressToFile: Error writing entry index to consumer progress file: %s", consumer.ConsumerProgressDir)
 	}
 	consumer.LastScannedIndex = entryIndex
 	return nil
 }
 
+// saveMempoolProgressToFile appends the last applied mempool entry to the mempool progress file.
+func (consumer *StateSyncerConsumer) saveMempoolProgressToFile(mempoolStateChangeEntry *lib.StateChangeEntry) error {
+	mempoolStatusFilepath := filepath.Join(consumer.ConsumerProgressDir, lib.StateChangeMempoolFileName)
+
+	// Create the directory if needed, and open the file in append mode so that previously applied
+	// mempool entries are preserved.
+	if err := os.MkdirAll(filepath.Dir(mempoolStatusFilepath), 0755); err != nil {
+		return errors.Wrapf(err, "consumer.saveMempoolProgressToFile: Error creating consumer progress directory: %s", consumer.ConsumerProgressDir)
+	}
+	file, err := os.OpenFile(mempoolStatusFilepath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if err != nil {
+		return errors.Wrapf(err, "consumer.saveMempoolProgressToFile: Error creating applied mempool entries file: %s", consumer.ConsumerProgressDir)
+	}
+	defer file.Close()
+
+	mempoolEntryBytes := lib.EncodeByteArray(lib.EncodeToBytes(mempoolStateChangeEntry.BlockHeight, mempoolStateChangeEntry))
+
+	if _, err := file.Write(mempoolEntryBytes); err != nil {
+		return errors.Wrapf(err, "consumer.saveMempoolProgressToFile: Error writing to applied mempool entries: %s", consumer.ConsumerProgressDir)
+	}
+	return nil
+}
+
+// revertStoredMempoolTransactions extracts all applied mempool entries from the mempool progress file and reverts them.
+// This is used when restarting the state syncer, so that the database is able to revert back to the last known chain-state.
+func (consumer *StateSyncerConsumer) revertStoredMempoolTransactions() error {
+	mempoolStatusFilepath := filepath.Join(consumer.ConsumerProgressDir, lib.StateChangeMempoolFileName)
+	// Attempt to open the mempool progress file.
+	file, err := os.Open(mempoolStatusFilepath)
+	if os.IsNotExist(err) {
+		// If the file doesn't exist, we can assume there were no mempool transactions to revert.
+		return nil
+	} else if err != nil {
+		return errors.Wrapf(err, "consumer.revertStoredMempoolTransactions: Error opening applied mempool entries file: %s", consumer.ConsumerProgressDir)
+	}
+	defer file.Close()
+
+	var mempoolEntries []*lib.StateChangeEntry
+	fileEof := false
+
+	reader := bufio.NewReader(file)
+
+	for !fileEof {
+		var mempoolEntry *lib.StateChangeEntry
+		mempoolEntry, fileEof, err = consumer.readAndDecodeNextEntry(reader, file)
+		if fileEof {
+			break
+		} else if err != nil {
+			return errors.Wrapf(err, "consumer.revertStoredMempoolTransactions: Error reading from applied mempool entries file: %s", consumer.ConsumerProgressDir)
+		}
+		mempoolEntries = append(mempoolEntries, mempoolEntry)
+	}
+
+	// Revert the mempool entries in reverse order.
+	for ii := len(mempoolEntries) - 1; ii >= 0; ii-- {
+		mempoolEntry := mempoolEntries[ii]
+		if err := consumer.RevertMempoolEntry(mempoolEntry); err != nil {
+			return errors.Wrapf(err, "consumer.revertStoredMempoolTransactions: Error reverting mempool entry: %s", consumer.ConsumerProgressDir)
+		}
+	}
+	return nil
+}
+
+// truncateMempoolProgressFile truncates the mempool progress file to 0 bytes.
+func (consumer *StateSyncerConsumer) truncateMempoolProgressFile() error {
+	mempoolStatusFilepath := filepath.Join(consumer.ConsumerProgressDir, lib.StateChangeMempoolFileName)
+	// Create the file if it doesn't exist.
+	file, err := createDirAndFile(mempoolStatusFilepath)
+	if os.IsNotExist(err) {
+		// If the file doesn't exist, there's nothing to truncate.
+ return nil + } else if err != nil { + return errors.Wrapf(err, "consumer.truncateMempoolProgressFile: Error creating applied mempool entries file: %s", consumer.ConsumerProgressDir) + } + defer file.Close() + + if err := file.Truncate(0); err != nil { + return errors.Wrapf(err, "consumer.truncateMempoolProgressFile: Error truncating applied mempool entries file: %s", consumer.ConsumerProgressDir) + } + return nil +} + +// cleanup performs any final operations before the consumer exits. This mainly consists of handling any remaining +// batched entries that haven't been processed yet. func (consumer *StateSyncerConsumer) cleanup() error { // If there are still bulk operations to perform, perform them now. - if consumer.ProcessEntriesInBatches && consumer.BatchedEntries != nil && len(consumer.BatchedEntries.Entries) > 0 { - err := consumer.DataHandler.HandleEntryBatch(consumer.BatchedEntries) - if err != nil { - return err - } - handledEntries := len(UniqueEntries(consumer.BatchedEntries.Entries)) - return consumer.saveConsumerProgressToFile(consumer.LastScannedIndex + uint32(handledEntries)) + if err := consumer.executeBatch(); err != nil { + return errors.Wrapf(err, "consumer.cleanup: Error executing final batch") } return nil } + +func (consumer *StateSyncerConsumer) Stop() { + consumer.StopConsumer = true +} diff --git a/consumer/helpers.go b/consumer/helpers.go index 7309d61..7e8d880 100644 --- a/consumer/helpers.go +++ b/consumer/helpers.go @@ -5,41 +5,83 @@ import ( "encoding/binary" "encoding/hex" "encoding/json" - "errors" "fmt" - "github.com/deso-protocol/core/lib" "os" + "path/filepath" "reflect" "time" + + "github.com/btcsuite/btcd/btcec/v2" + "github.com/deso-protocol/backend/routes" + "github.com/deso-protocol/core/lib" + "github.com/deso-protocol/uint256" + "github.com/golang/glog" + "github.com/pkg/errors" + "github.com/uptrace/bun/extra/bunbig" ) // CopyStruct takes 2 structs and copies values from fields of the same name from the source struct to 
the destination struct.
-// This is used to copy values from a deso entry struct to a protobuf entry struct.
+// This helper can be used by the data handler to easily copy values between the deso encoder and whichever struct type
+// the handler needs in order to perform its db operations.
+// This function also handles decoding fields that need to be decoded in some way. These fields are marked with a
+// "decode_function" tag in the destination struct.
+// The "decode_src_field_name" tag is used to specify the name of the source struct field that contains the data to be decoded.
+// The "decode_body_field_name" tag is used to specify the name of the destination struct field that the decoded data should be copied to.
+// The "decode_function" tag can be one of the following: "blockhash", "group_key_name", "pkid", "bytehash", "uint256",
+// "deso_body_schema", "string_bytes", "nested_value", "base_58_check", "extra_data", and "timestamp".
 func CopyStruct(src interface{}, dst interface{}) error {
 	srcValue := reflect.ValueOf(src).Elem()
 	dstValue := reflect.ValueOf(dst).Elem()
 
 	if srcValue.Kind() != reflect.Struct || dstValue.Kind() != reflect.Struct {
-		return fmt.Errorf("both srcValue and dst must be structs")
+		return errors.New("both src and dst must be structs")
 	}
 
 	// Loop through all the fields in the destination struct, and copy values over from the source struct
 	// if the source struct contains a field of the same name and type.
-	for i := 0; i < dstValue.NumField(); i++ {
+	for ii := 0; ii < dstValue.NumField(); ii++ {
 		// Get properties of the destination field.
- dstFieldName := dstValue.Type().Field(i).Name - dstFieldType := dstValue.Type().Field(i).Type - dstFieldDecodeFunction := dstValue.Type().Field(i).Tag.Get("decode_function") - dstFieldDecodeSrcField := dstValue.Type().Field(i).Tag.Get("decode_src_field_name") + dstFieldName := dstValue.Type().Field(ii).Name + dstFieldType := dstValue.Type().Field(ii).Type + dstFieldDecodeFunction := dstValue.Type().Field(ii).Tag.Get("decode_function") + dstFieldDecodeSrcField := dstValue.Type().Field(ii).Tag.Get("decode_src_field_name") srcField := srcValue.FieldByName(dstFieldName) dstField := dstValue.FieldByName(dstFieldName) + // TODO: Break each of these out into their own functions. + // TODO: Create comprehensive documentation of the various decoder functions. + // TODO: all these functions that convert from bytes to hex strings should be consolidated. // If the field needs to be decoded in some way, handle that here. if dstFieldDecodeFunction == "blockhash" { - if srcValue.FieldByName(dstFieldDecodeSrcField).IsValid() && srcValue.FieldByName(dstFieldDecodeSrcField).Elem().IsValid() { - postHashBytes := srcValue.FieldByName(dstFieldDecodeSrcField).Elem().Slice(0, lib.HashSizeBytes).Bytes() + fieldValue := srcValue.FieldByName(dstFieldDecodeSrcField) + if fieldValue.IsValid() && fieldValue.Elem().IsValid() { + postHashBytes := fieldValue.Elem().Slice(0, lib.HashSizeBytes).Bytes() + dstValue.FieldByName(dstFieldName).SetString(hex.EncodeToString(postHashBytes)) + } + } else if dstFieldDecodeFunction == "group_key_name" { + fieldValue := srcValue.FieldByName(dstFieldDecodeSrcField) + if fieldValue.IsValid() && fieldValue.Elem().IsValid() { + groupKeyNameBytes := fieldValue.Elem().Slice(0, lib.MaxAccessGroupKeyNameCharacters).Bytes() + dstValue.FieldByName(dstFieldName).SetString(hex.EncodeToString(groupKeyNameBytes)) + } + } else if dstFieldDecodeFunction == "pkid" { + fieldValue := srcValue.FieldByName(dstFieldDecodeSrcField) + if fieldValue.IsValid() && 
fieldValue.Elem().IsValid() { + pkidBytes := fieldValue.Elem().Slice(0, lib.PublicKeyLenCompressed).Bytes() + dstValue.FieldByName(dstFieldName).Set(reflect.ValueOf(pkidBytes)) + } + } else if dstFieldDecodeFunction == "bytehash" { + fieldValue := srcValue.FieldByName(dstFieldDecodeSrcField) + if fieldValue.IsValid() && fieldValue.Len() > 0 { + postHashBytes := fieldValue.Slice(0, lib.HashSizeBytes).Bytes() dstValue.FieldByName(dstFieldName).SetString(hex.EncodeToString(postHashBytes)) } + } else if dstFieldDecodeFunction == "uint256" { + srcInt, ok := srcField.Interface().(uint256.Int) + if !ok { + return errors.New("could not convert src field to uint256.Int") + } + dstField.Set(reflect.ValueOf(bunbig.FromMathBig(srcInt.ToBig()))) } else if dstFieldDecodeFunction == "deso_body_schema" { bodyField := srcValue.FieldByName(dstFieldDecodeSrcField) bodyBytes := bodyField.Bytes() @@ -49,13 +91,23 @@ func CopyStruct(src interface{}, dst interface{}) error { return err } - dstValue.FieldByName(dstValue.Type().Field(i).Tag.Get("decode_body_field_name")).SetString(body.Body) - dstValue.FieldByName(dstValue.Type().Field(i).Tag.Get("decode_image_urls_field_name")).Set(reflect.ValueOf(body.ImageURLs)) - dstValue.FieldByName(dstValue.Type().Field(i).Tag.Get("decode_video_urls_field_name")).Set(reflect.ValueOf(body.VideoURLs)) + dstValue.FieldByName(dstValue.Type().Field(ii).Tag.Get("decode_body_field_name")).SetString(body.Body) + dstValue.FieldByName(dstValue.Type().Field(ii).Tag.Get("decode_image_urls_field_name")).Set(reflect.ValueOf(body.ImageURLs)) + dstValue.FieldByName(dstValue.Type().Field(ii).Tag.Get("decode_video_urls_field_name")).Set(reflect.ValueOf(body.VideoURLs)) + } else if dstFieldDecodeFunction == "string_bytes" { + stringField := srcValue.FieldByName(dstFieldDecodeSrcField) + stringBytes := stringField.Bytes() + dstValue.FieldByName(dstFieldName).SetString(string(stringBytes)) + } else if dstFieldDecodeFunction == "nested_value" { + structField := 
srcValue.FieldByName(dstFieldDecodeSrcField)
+			if structField.IsValid() {
+				dstValue.FieldByName(dstFieldName).Set(structField.FieldByName(dstValue.Type().Field(ii).Tag.Get("nested_field_name")))
+			}
 		} else if dstFieldDecodeFunction == "base_58_check" {
-			if srcValue.FieldByName(dstFieldDecodeSrcField).IsValid() {
+			fieldValue := srcValue.FieldByName(dstFieldDecodeSrcField)
+			if fieldValue.IsValid() {
 				// If syncing against testnet, these params should be changed.
-				pkString := lib.PkToString(srcValue.FieldByName(dstFieldDecodeSrcField).Bytes(), &lib.DeSoMainnetParams)
+				pkString := lib.PkToString(fieldValue.Bytes(), &lib.DeSoMainnetParams)
 				dstValue.FieldByName(dstFieldName).SetString(pkString)
 			}
 		} else if dstFieldDecodeFunction == "extra_data" {
@@ -93,6 +145,95 @@ func CopyStruct(src interface{}, dst interface{}) error {
 	return nil
 }
 
+func createDirAndFile(filePath string) (*os.File, error) {
+	dir := filepath.Dir(filePath)
+	if err := os.MkdirAll(dir, 0755); err != nil {
+		return nil, errors.Wrapf(err, "Error creating directory: %s", dir)
+	}
+	return os.Create(filePath)
+}
+
+// Convert timestamp nanos to time.Time.
+func UnixNanoToTime(unixNano uint64) time.Time {
+	return time.Unix(0, int64(unixNano))
+}
+
+// Convert public key bytes to base58check string.
+func PublicKeyBytesToBase58Check(publicKey []byte, params *lib.DeSoParams) string {
+	// If running against testnet data, a different set of params should be used.
+	return lib.PkToString(publicKey, params)
+}
+
+// Convert a royalty map keyed by PKID structs to a map keyed by PKID strings.
+func ConvertRoyaltyMapToByteStrings(royaltyMap map[lib.PKID]uint64) map[string]uint64 { + newMap := make(map[string]uint64) + for key, value := range royaltyMap { + newMap[key.ToString()] = value + } + return newMap +} + +func DecodeDesoBodySchema(bodyBytes []byte) (*lib.DeSoBodySchema, error) { + var body lib.DeSoBodySchema + err := json.Unmarshal(bodyBytes, &body) + if err != nil { + return nil, err + } + return &body, nil +} + +var extraDataCustomEncodings = map[string]func([]byte, *lib.DeSoParams, *lib.UtxoView) string{ + lib.RepostedPostHash: routes.DecodeHexString, + lib.IsQuotedRepostKey: routes.DecodeBoolString, + lib.IsFrozenKey: routes.DecodeBoolString, + + lib.USDCentsPerBitcoinKey: routes.Decode64BitUintString, + lib.MinNetworkFeeNanosPerKBKey: routes.Decode64BitUintString, + lib.CreateProfileFeeNanosKey: routes.Decode64BitUintString, + lib.CreateNFTFeeNanosKey: routes.Decode64BitUintString, + lib.MaxCopiesPerNFTKey: routes.Decode64BitUintString, + + lib.ForbiddenBlockSignaturePubKeyKey: routes.DecodePkToString, + + lib.DiamondLevelKey: routes.Decode64BitIntString, + lib.DiamondPostHashKey: routes.DecodeHexString, + + lib.DerivedPublicKey: routes.DecodePkToString, + + lib.MessagingPublicKey: routes.DecodePkToString, + lib.SenderMessagingPublicKey: routes.DecodePkToString, + lib.SenderMessagingGroupKeyName: routes.DecodeString, + lib.RecipientMessagingPublicKey: routes.DecodePkToString, + lib.RecipientMessagingGroupKeyName: routes.DecodeString, + + lib.BuyNowPriceKey: routes.Decode64BitUintString, + + lib.DESORoyaltiesMapKey: routes.DecodePubKeyToUint64MapString, + lib.CoinRoyaltiesMapKey: routes.DecodePubKeyToUint64MapString, + lib.TokenTradingFeesByPkidMapKey: routes.DecodePubKeyToUint64MapString, + + lib.MessagesVersionString: routes.Decode64BitUintString, + + lib.NodeSourceMapKey: routes.Decode64BitUintString, + + lib.DerivedKeyMemoKey: routes.DecodeDerivedKeyMemo, + + lib.TransactionSpendingLimitKey: routes.DecodeString, // This differs from 
backend since UtxoView is nil.
+}
+
+func ExtraDataBytesToString(extraData map[string][]byte, params *lib.DeSoParams) map[string]string {
+	newMap := make(map[string]string)
+	for key, value := range extraData {
+		if encoderFunc, exists := extraDataCustomEncodings[key]; exists {
+			newMap[key] = encoderFunc(value, params, nil)
+			continue
+		}
+		newMap[key] = string(value)
+	}
+	return newMap
+}
+
+// DecodeEntry decodes the given bytes into the provided deso encoder.
 func DecodeEntry(encoder lib.DeSoEncoder, entryBytes []byte) error {
 	if encoder == nil {
 		return errors.New("Error getting encoder")
 	}
@@ -100,63 +241,1343 @@
 	rr := bytes.NewReader(entryBytes)
-	if exists, err := lib.DecodeFromBytes(encoder, rr); exists && err == nil {
-		return nil
-	} else {
-		return errors.New("Error decoding entry")
+	if _, err := lib.DecodeFromBytes(encoder, rr); err != nil {
+		return errors.Wrapf(err, "Error decoding entry")
 	}
+	return nil
 }
 
-func getUint32FromFile(file *os.File) (uint32, error) {
+// getUint64FromFile reads the next 8 bytes from the stateChangeFile and returns a uint64.
+func getUint64FromFile(file *os.File) (uint64, error) {
 	// Read the contents of the next 8 bytes from the stateChangeFile into a byte slice.
-	uint32Bytes, err := getBytesFromFile(4, file)
+	uint64Bytes, err := getBytesFromFile(8, file)
 	if err != nil {
 		return 0, err
 	}
 
 	// Use the binary package to read a uint64 value from the byte slice.
-	value := binary.LittleEndian.Uint32(uint32Bytes)
+	value := binary.LittleEndian.Uint64(uint64Bytes)
 	return value, nil
 }
 
-func getUint16FromFile(file *os.File) (uint16, error) {
-	// Read the contents of the next 2 bytes from the stateChangeFile into a byte slice.
-	uint16Bytes, err := getBytesFromFile(2, file)
+// getBytesFromFile reads the next entryByteSize bytes from the stateChangeFile and returns a byte slice.
+func getBytesFromFile(entryByteSize int, file *os.File) ([]byte, error) { + // Read the contents of the entry from the stateChangeFile into a byte slice + structBytes := make([]byte, entryByteSize) + bytesRead, err := file.Read(structBytes) if err != nil { - return 0, err + return nil, err } + if bytesRead < entryByteSize { + return nil, errors.New("Too few bytes read") + } + return structBytes, nil +} - // Use binary package to read a uint16 from the byte slice. - value := binary.LittleEndian.Uint16(uint16Bytes) - return value, nil +func GetPKIDBytesFromKey(key []byte) []byte { + if len(key) < len(lib.Prefixes.PrefixPKIDToProfileEntry) { + return nil + } + prefixLen := len(lib.Prefixes.PrefixPKIDToProfileEntry) + return key[prefixLen:] } -func getUint8FromFile(file *os.File) (uint8, error) { - // Read the contents of the next byte from the stateChangeFile into a byte slice - uint8Bytes, err := getBytesFromFile(1, file) - if err != nil { - return 0, err +func GetAccessGroupMemberFieldsFromKey(key []byte) (accessGroupMemberPublicKey []byte, accessGroupOwnerPublicKey []byte, accessGroupKeyName []byte, err error) { + + prefixLen := len(lib.Prefixes.PrefixAccessGroupMembershipIndex) + totalKeyLen := prefixLen + lib.PublicKeyLenCompressed*2 + lib.MaxAccessGroupKeyNameCharacters + + if len(key) < totalKeyLen { + return nil, nil, nil, errors.New("key length is less than expected") } - // Use binary.Read to read a uint8 value from the byte slice - var value uint8 - err = binary.Read(bytes.NewReader(uint8Bytes), binary.LittleEndian, &value) - if err != nil { - return 0, err + accessGroupMemberPublicKey = key[prefixLen : prefixLen+lib.PublicKeyLenCompressed] + accessGroupOwnerPublicKey = key[prefixLen+lib.PublicKeyLenCompressed : prefixLen+lib.PublicKeyLenCompressed*2] + accessGroupKeyName = key[prefixLen+lib.PublicKeyLenCompressed*2 : totalKeyLen] + + return accessGroupMemberPublicKey, accessGroupOwnerPublicKey, accessGroupKeyName, nil +} + +func 
GetBlockHashBytesFromKey(key []byte) []byte { + if len(key) < len(lib.Prefixes.PrefixBlockHashToBlock) { + return nil } + prefixLen := len(lib.Prefixes.PrefixBlockHashToBlock) + return key[prefixLen:] +} - return value, nil +// getDisconnectOperationTypeForPrevEntry returns the operation type for a given utxoOp entry in order to perform a mempool disconnect. +// If the encoder is nil, the operation type is delete. Otherwise, it is upsert. +func getDisconnectOperationTypeForPrevEntry(prevEntry lib.DeSoEncoder) lib.StateSyncerOperationType { + // Use reflection to determine if the previous entry is nil. + val := reflect.ValueOf(prevEntry) + if val.Kind() == reflect.Ptr && val.IsNil() { + // If the previous entry is nil, we should delete the current entry to revert it to its previous state. + return lib.DbOperationTypeDelete + } else { + // If the previous entry isn't nil, an upsert will bring it back to its previous state. + return lib.DbOperationTypeUpsert + } } -func getBytesFromFile(entryByteSize int, file *os.File) ([]byte, error) { - // Read the contents of the entry from the stateChangeFile into a byte slice - structBytes := make([]byte, entryByteSize) - bytesRead, err := file.Read(structBytes) +// TransactionExtraMetadata tracks additional metadata for a transaction that isn't stored in txindex. +type TransactionExtraMetadata interface{} + +// Additional fields needed for SubmitPost transactions. +type PostTransactionExtraMetadata struct { + PosterPublicKeyBase58Check string + RelatedPublicKeyBase58Check string +} + +type FollowTransactionExtraMetadata struct { + FollowedPublicKeyBase58Check string +} + +// A combined view of the existing txindex metadata struct and the additional metadata fields. +type ConsumerTxIndexMetadata struct { + lib.DeSoEncoder + TransactionExtraMetadata +} + +// Combine the JSON data from the DeSoEncoder and the TransactionExtraMetadata into a single JSON object. 
+func (c ConsumerTxIndexMetadata) MarshalJSON() ([]byte, error) { + // Create temporary maps to hold each interface's JSON data + encoderData, err := json.Marshal(c.DeSoEncoder) if err != nil { return nil, err } - if bytesRead < entryByteSize { - return nil, fmt.Errorf("Too few bytes read") + + metadataData, err := json.Marshal(c.TransactionExtraMetadata) + if err != nil { + return nil, err } - return structBytes, nil + + // Unmarshal the JSON data into separate maps + var encoderMap map[string]interface{} + var metadataMap map[string]interface{} + if err := json.Unmarshal(encoderData, &encoderMap); err != nil { + return nil, err + } + if err := json.Unmarshal(metadataData, &metadataMap); err != nil { + return nil, err + } + + // Combine both maps + for k, v := range metadataMap { + encoderMap[k] = v + } + + // Marshal the combined map back into JSON + return json.Marshal(encoderMap) +} + +func ComputeTransactionMetadata(txn *lib.MsgDeSoTxn, blockHashHex string, params *lib.DeSoParams, + fees uint64, txnIndexInBlock uint64, utxoOps []*lib.UtxoOperation) (*lib.TransactionMetadata, *TransactionExtraMetadata, error) { + + var err error + var transactionExtraMetadata TransactionExtraMetadata + txnMeta := &lib.TransactionMetadata{ + TxnIndexInBlock: txnIndexInBlock, + TxnType: txn.TxnMeta.GetTxnType().String(), + + // This may be overwritten later on, for example if we're dealing with a + // BitcoinExchange txn which doesn't set the txn.PublicKey + TransactorPublicKeyBase58Check: lib.PkToString(txn.PublicKey, params), + + // General transaction metadata + BasicTransferTxindexMetadata: &lib.BasicTransferTxindexMetadata{ + // TODO: compute fees for pre-balance model txns. + FeeNanos: fees, + // TODO: This doesn't add much value, and it makes output hard to read because + // it's so long so I'm commenting it out for now. + //UtxoOpsDump: spew.Sdump(utxoOps), + + // We need to include the utxoOps because it allows us to compute implicit + // outputs. 
+ UtxoOps: utxoOps, + }, + + TxnOutputs: txn.TxOutputs, + } + + if blockHashHex != "" { + txnMeta.BlockHashHex = blockHashHex + } + + extraData := txn.ExtraData + + // Set the affected public keys for the basic transfer. + for _, output := range txn.TxOutputs { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(output.PublicKey, params), + Metadata: "BasicTransferOutput", + }) + } + + switch txn.TxnMeta.GetTxnType() { + case lib.TxnTypeCreatorCoin: + // Get the txn metadata + realTxMeta := txn.TxnMeta.(*lib.CreatorCoinMetadataa) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeCreatorCoin) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing creator coin utxo op error: %v", txn.Hash().String()) + } + + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.CreatorCoinStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing creator coin state change metadata error: %v", txn.Hash().String()) + } + + // Rosetta needs to know the change in DESOLockedNanos so it can model the change in + // total deso locked in the creator coin. 
Calculate this by comparing the current CreatorCoinEntry + // to the previous CreatorCoinEntry + prevCoinEntry := utxoOp.PrevCoinEntry + + desoLockedNanosDiff := int64(0) + if prevCoinEntry == nil { + glog.Errorf("Update TxIndex: missing DESOLockedNanosDiff error: %v", txn.Hash().String()) + } else { + desoLockedNanosDiff = int64(stateChangeMetadata.ProfileDeSoLockedNanos - prevCoinEntry.DeSoLockedNanos) + } + + // Set the amount of the buy/sell/add + txnMeta.CreatorCoinTxindexMetadata = &lib.CreatorCoinTxindexMetadata{ + DeSoToSellNanos: realTxMeta.DeSoToSellNanos, + CreatorCoinToSellNanos: realTxMeta.CreatorCoinToSellNanos, + DeSoToAddNanos: realTxMeta.DeSoToAddNanos, + DESOLockedNanosDiff: desoLockedNanosDiff, + } + + // Set the type of the operation. + if realTxMeta.OperationType == lib.CreatorCoinOperationTypeBuy { + txnMeta.CreatorCoinTxindexMetadata.OperationType = "buy" + } else if realTxMeta.OperationType == lib.CreatorCoinOperationTypeSell { + txnMeta.CreatorCoinTxindexMetadata.OperationType = "sell" + } else { + txnMeta.CreatorCoinTxindexMetadata.OperationType = "add" + } + + // Set the affected public key to the owner of the creator coin so that they + // get notified. 
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.ProfilePublicKey, params), + Metadata: "CreatorPublicKey", + }) + case lib.TxnTypeCreatorCoinTransfer: + realTxMeta := txn.TxnMeta.(*lib.CreatorCoinTransferMetadataa) + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeCreatorCoinTransfer) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing cc transfer utxo op error: %v", txn.Hash().String()) + } + + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.CCTransferStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing cc transfer state change metadata error: %v", txn.Hash().String()) + } + + txnMeta.CreatorCoinTransferTxindexMetadata = &lib.CreatorCoinTransferTxindexMetadata{ + CreatorUsername: string(stateChangeMetadata.CreatorProfileEntry.Username), + CreatorCoinToTransferNanos: realTxMeta.CreatorCoinToTransferNanos, + } + + diamondLevelBytes, hasDiamondLevel := txn.ExtraData[lib.DiamondLevelKey] + diamondPostHash, hasDiamondPostHash := txn.ExtraData[lib.DiamondPostHashKey] + if hasDiamondLevel && hasDiamondPostHash { + diamondLevel, bytesRead := lib.Varint(diamondLevelBytes) + if bytesRead <= 0 { + glog.Errorf("Update TxIndex: Error reading diamond level for txn: %v", txn.Hash().String()) + } else { + txnMeta.CreatorCoinTransferTxindexMetadata.DiamondLevel = diamondLevel + txnMeta.CreatorCoinTransferTxindexMetadata.PostHashHex = hex.EncodeToString(diamondPostHash) + } + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.ReceiverPublicKey, params), + Metadata: "ReceiverPublicKey", + }) + case lib.TxnTypeUpdateProfile: + realTxMeta := txn.TxnMeta.(*lib.UpdateProfileMetadata) + + txnMeta.UpdateProfileTxindexMetadata = &lib.UpdateProfileTxindexMetadata{} + 
if len(realTxMeta.ProfilePublicKey) == btcec.PubKeyBytesLenCompressed { + txnMeta.UpdateProfileTxindexMetadata.ProfilePublicKeyBase58Check = + lib.PkToString(realTxMeta.ProfilePublicKey, params) + } + txnMeta.UpdateProfileTxindexMetadata.NewUsername = string(realTxMeta.NewUsername) + txnMeta.UpdateProfileTxindexMetadata.NewDescription = string(realTxMeta.NewDescription) + txnMeta.UpdateProfileTxindexMetadata.NewProfilePic = string(realTxMeta.NewProfilePic) + txnMeta.UpdateProfileTxindexMetadata.NewCreatorBasisPoints = realTxMeta.NewCreatorBasisPoints + txnMeta.UpdateProfileTxindexMetadata.NewStakeMultipleBasisPoints = realTxMeta.NewStakeMultipleBasisPoints + txnMeta.UpdateProfileTxindexMetadata.IsHidden = realTxMeta.IsHidden + + // Add the ProfilePublicKey to the AffectedPublicKeys + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.ProfilePublicKey, params), + Metadata: "ProfilePublicKeyBase58Check", + }) + case lib.TxnTypeSubmitPost: + realTxMeta := txn.TxnMeta.(*lib.SubmitPostMetadata) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeSubmitPost) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing submit post utxo op error: %v", txn.Hash().String()) + } + + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.SubmitPostStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing submit post state change metadata error: %v", txn.Hash().String()) + } + + txnMeta.SubmitPostTxindexMetadata = &lib.SubmitPostTxindexMetadata{} + if len(realTxMeta.PostHashToModify) == lib.HashSizeBytes { + txnMeta.SubmitPostTxindexMetadata.PostHashBeingModifiedHex = hex.EncodeToString( + realTxMeta.PostHashToModify) + } + if len(realTxMeta.ParentStakeID) == lib.HashSizeBytes { + txnMeta.SubmitPostTxindexMetadata.ParentPostHashHex = hex.EncodeToString( + 
realTxMeta.ParentStakeID) + } + // If a post hash didn't get set then the hash of the transaction itself will + // end up being used as the post hash so set that here. + if txnMeta.SubmitPostTxindexMetadata.PostHashBeingModifiedHex == "" { + txnMeta.SubmitPostTxindexMetadata.PostHashBeingModifiedHex = + hex.EncodeToString(txn.Hash()[:]) + } + + // If ParentPostHashHex is set then get the parent posts public key and + // mark it as affected. We only check this if PostHashToModify is not set + // so we only generate a notification the first time someone comments on your post. + // ParentPosterPublicKeyBase58Check is in AffectedPublicKeys + if len(realTxMeta.PostHashToModify) == 0 && len(realTxMeta.ParentStakeID) == lib.HashSizeBytes { + postHash := &lib.BlockHash{} + copy(postHash[:], realTxMeta.ParentStakeID) + postEntry := stateChangeMetadata.ParentPostEntry + if postEntry == nil { + glog.V(2).Infof( + "UpdateTxindex: Error creating SubmitPostTxindexMetadata; "+ + "missing parent post for hash %v: %v", postHash, err) + } else { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(postEntry.PosterPublicKey, params), + Metadata: "ParentPosterPublicKeyBase58Check", + }) + transactionExtraMetadata = &PostTransactionExtraMetadata{ + PosterPublicKeyBase58Check: lib.PkToString(postEntry.PosterPublicKey, params), + } + } + } + + // The profiles that are mentioned are in the AffectedPublicKeys + // MentionedPublicKeyBase58Check in AffectedPublicKeys. We need to + // parse them out of the post and then look up their public keys. + // + // Start by trying to parse the body JSON + bodyObj := &lib.DeSoBodySchema{} + if err = json.Unmarshal(realTxMeta.Body, &bodyObj); err != nil { + // Don't worry about bad posts unless we're debugging with high verbosity. 
+
+ glog.V(2).Infof("UpdateTxindex: Error parsing post body for @ mentions: "+
+ "%v %v", string(realTxMeta.Body), err)
+ } else {
+ for _, mentionedProfile := range stateChangeMetadata.ProfilesMentioned {
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(mentionedProfile.PublicKey, params),
+ Metadata: "MentionedPublicKeyBase58Check",
+ })
+ }
+
+ // Additionally, we need to check if this post is a repost and
+ // fetch the original poster.
+ if repostedPostHash, isRepost := extraData[lib.RepostedPostHash]; isRepost {
+ repostedBlockHash := &lib.BlockHash{}
+ copy(repostedBlockHash[:], repostedPostHash)
+ repostPost := stateChangeMetadata.RepostPostEntry
+ if repostPost != nil {
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(repostPost.PosterPublicKey, params),
+ Metadata: "RepostedPublicKeyBase58Check",
+ })
+ transactionExtraMetadata = &PostTransactionExtraMetadata{
+ RelatedPublicKeyBase58Check: lib.PkToString(repostPost.PosterPublicKey, params),
+ }
+ }
+ }
+ }
+ case lib.TxnTypeLike:
+ realTxMeta := txn.TxnMeta.(*lib.LikeMetadata)
+
+ utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeLike)
+ if utxoOp == nil || utxoOp.StateChangeMetadata == nil {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing like utxo op error: %v", txn.Hash().String())
+ }
+
+ stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.LikeStateChangeMetadata)
+ if !ok {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing like state change metadata error: %v", txn.Hash().String())
+ }
+
+ txnMeta.LikeTxindexMetadata = &lib.LikeTxindexMetadata{
+ IsUnlike: realTxMeta.IsUnlike,
+ PostHashHex: hex.EncodeToString(realTxMeta.LikedPostHash[:]),
+ }
+
+ // Get the public key of the poster and set it as having been affected
+ // by this like.
+
+ //
+ // PosterPublicKeyBase58Check in AffectedPublicKeys
+ postHash := &lib.BlockHash{}
+ copy(postHash[:], realTxMeta.LikedPostHash[:])
+ postEntry := stateChangeMetadata.LikedPostEntry
+ if postEntry == nil {
+ glog.V(2).Infof(
+ "UpdateTxindex: Error creating LikeTxindexMetadata; "+
+ "missing post for hash %v", postHash)
+ } else {
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(postEntry.PosterPublicKey, params),
+ Metadata: "PosterPublicKeyBase58Check",
+ })
+ }
+ case lib.TxnTypeFollow:
+ realTxMeta := txn.TxnMeta.(*lib.FollowMetadata)
+
+ txnMeta.FollowTxindexMetadata = &lib.FollowTxindexMetadata{
+ IsUnfollow: realTxMeta.IsUnfollow,
+ }
+
+ // FollowedPublicKeyBase58Check in AffectedPublicKeys
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(realTxMeta.FollowedPublicKey, params),
+ Metadata: "FollowedPublicKeyBase58Check",
+ })
+
+ transactionExtraMetadata = &FollowTransactionExtraMetadata{
+ FollowedPublicKeyBase58Check: lib.PkToString(realTxMeta.FollowedPublicKey, params),
+ }
+ case lib.TxnTypePrivateMessage:
+ realTxMeta := txn.TxnMeta.(*lib.PrivateMessageMetadata)
+
+ txnMeta.PrivateMessageTxindexMetadata = &lib.PrivateMessageTxindexMetadata{
+ TimestampNanos: realTxMeta.TimestampNanos,
+ }
+
+ // RecipientPublicKeyBase58Check in AffectedPublicKeys
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(realTxMeta.RecipientPublicKey, params),
+ Metadata: "RecipientPublicKeyBase58Check",
+ })
+ case lib.TxnTypeSwapIdentity:
+ realTxMeta := txn.TxnMeta.(*lib.SwapIdentityMetadataa)
+
+ utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeSwapIdentity)
+ if utxoOp == nil || utxoOp.StateChangeMetadata == nil {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing swap
identity utxo op error: %v", txn.Hash().String()) + } + + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.SwapIdentityStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing swap identity state change metadata error: %v", txn.Hash().String()) + } + + // Rosetta needs to know the current locked deso in each profile so it can model the swap of + // the creator coins. Rosetta models a swap identity as two INPUTs and two OUTPUTs effectively + // swapping the balances of total deso locked. If no profile exists, from/to is zero. + fromNanos := uint64(0) + fromProfile := stateChangeMetadata.FromProfile + if fromProfile != nil { + fromNanos = fromProfile.CreatorCoinEntry.DeSoLockedNanos + } + + toNanos := uint64(0) + toProfile := stateChangeMetadata.ToProfile + if toProfile != nil { + toNanos = toProfile.CreatorCoinEntry.DeSoLockedNanos + } + + txnMeta.SwapIdentityTxindexMetadata = &lib.SwapIdentityTxindexMetadata{ + FromPublicKeyBase58Check: lib.PkToString(realTxMeta.FromPublicKey, params), + ToPublicKeyBase58Check: lib.PkToString(realTxMeta.ToPublicKey, params), + FromDeSoLockedNanos: fromNanos, + ToDeSoLockedNanos: toNanos, + } + + // The to and from public keys are affected by this. 
+ + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.FromPublicKey, params), + Metadata: "FromPublicKeyBase58Check", + }) + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.ToPublicKey, params), + Metadata: "ToPublicKeyBase58Check", + }) + case lib.TxnTypeNFTBid: + realTxMeta := txn.TxnMeta.(*lib.NFTBidMetadata) + + isBuyNow := false + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeNFTBid) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing nft bid utxo op error: %v", txn.Hash().String()) + } + var nftRoyaltiesMetadata lib.NFTRoyaltiesMetadata + var ownerPublicKeyBase58Check string + var creatorPublicKeyBase58Check string + // We don't send notifications for standing offers. + if realTxMeta.SerialNumber != 0 { + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.NFTBidStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing nft bid state change metadata error: %v", txn.Hash().String()) + } + postEntry := stateChangeMetadata.PostEntry + + creatorPublicKeyBase58Check = lib.PkToString(postEntry.PosterPublicKey, params) + + if utxoOp.PrevNFTEntry != nil && utxoOp.PrevNFTEntry.IsBuyNow { + isBuyNow = true + } + + ownerPublicKeyBase58Check = stateChangeMetadata.OwnerPublicKeyBase58Check + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: ownerPublicKeyBase58Check, + Metadata: "NFTOwnerPublicKeyBase58Check", + }) + + if isBuyNow { + nftRoyaltiesMetadata = lib.NFTRoyaltiesMetadata{ + CreatorCoinRoyaltyNanos: utxoOp.NFTBidCreatorRoyaltyNanos, + CreatorRoyaltyNanos: utxoOp.NFTBidCreatorDESORoyaltyNanos, + CreatorPublicKeyBase58Check: creatorPublicKeyBase58Check, + AdditionalCoinRoyaltiesMap: 
lib.PubKeyRoyaltyPairToBase58CheckToRoyaltyNanosMap( + utxoOp.NFTBidAdditionalCoinRoyalties, params), + AdditionalDESORoyaltiesMap: lib.PubKeyRoyaltyPairToBase58CheckToRoyaltyNanosMap( + utxoOp.NFTBidAdditionalDESORoyalties, params), + } + } + } + + txnMeta.NFTBidTxindexMetadata = &lib.NFTBidTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + SerialNumber: realTxMeta.SerialNumber, + BidAmountNanos: realTxMeta.BidAmountNanos, + IsBuyNowBid: isBuyNow, + NFTRoyaltiesMetadata: &nftRoyaltiesMetadata, + OwnerPublicKeyBase58Check: ownerPublicKeyBase58Check, + } + + if isBuyNow { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: creatorPublicKeyBase58Check, + Metadata: "NFTCreatorPublicKeyBase58Check", + }) + + for pubKeyIter, amountNanos := range txnMeta.NFTBidTxindexMetadata.NFTRoyaltiesMetadata.AdditionalCoinRoyaltiesMap { + pubKey := pubKeyIter + // Skip affected pub key if no royalty received + if amountNanos == 0 { + continue + } + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCreatorPublicKeyBase58Check", + }) + } + + for pubKeyIter, amountNanos := range txnMeta.NFTBidTxindexMetadata.NFTRoyaltiesMetadata.AdditionalDESORoyaltiesMap { + pubKey := pubKeyIter + // Skip affected pub key if no royalty received + if amountNanos == 0 { + continue + } + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCoinPublicKeyBase58Check", + }) + } + } + case lib.TxnTypeAcceptNFTBid: + realTxMeta := txn.TxnMeta.(*lib.AcceptNFTBidMetadata) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeAcceptNFTBid) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing accept bid utxo op error: %v", 
txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.AcceptNFTBidStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing accept bid state change metadata error: %v", txn.Hash().String()) + } + + creatorPublicKeyBase58Check := lib.PkToString(utxoOp.PrevPostEntry.PosterPublicKey, params) + + txnMeta.AcceptNFTBidTxindexMetadata = &lib.AcceptNFTBidTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + SerialNumber: realTxMeta.SerialNumber, + BidAmountNanos: realTxMeta.BidAmountNanos, + NFTRoyaltiesMetadata: &lib.NFTRoyaltiesMetadata{ + CreatorCoinRoyaltyNanos: utxoOp.AcceptNFTBidCreatorRoyaltyNanos, + CreatorRoyaltyNanos: utxoOp.AcceptNFTBidCreatorDESORoyaltyNanos, + CreatorPublicKeyBase58Check: creatorPublicKeyBase58Check, + AdditionalCoinRoyaltiesMap: lib.PubKeyRoyaltyPairToBase58CheckToRoyaltyNanosMap( + utxoOp.AcceptNFTBidAdditionalCoinRoyalties, params), + AdditionalDESORoyaltiesMap: lib.PubKeyRoyaltyPairToBase58CheckToRoyaltyNanosMap( + utxoOp.AcceptNFTBidAdditionalDESORoyalties, params), + }, + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: stateChangeMetadata.BidderPublicKeyBase58Check, + Metadata: "NFTBidderPublicKeyBase58Check", + }) + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: creatorPublicKeyBase58Check, + Metadata: "NFTCreatorPublicKeyBase58Check", + }) + + for pubKeyIter, amountNanos := range txnMeta.AcceptNFTBidTxindexMetadata.NFTRoyaltiesMetadata.AdditionalCoinRoyaltiesMap { + pubKey := pubKeyIter + // Skip affected pub key if no royalty received + if amountNanos == 0 { + continue + } + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCreatorPublicKeyBase58Check", + }) + } + + for pubKeyIter, amountNanos 
:= range txnMeta.AcceptNFTBidTxindexMetadata.NFTRoyaltiesMetadata.AdditionalDESORoyaltiesMap { + pubKey := pubKeyIter + // Skip affected pub key if no royalty received + if amountNanos == 0 { + continue + } + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCoinPublicKeyBase58Check", + }) + } + case lib.TxnTypeCreateNFT: + realTxMeta := txn.TxnMeta.(*lib.CreateNFTMetadata) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeCreateNFT) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing create nft utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.CreateNFTStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing create nft state change metadata error: %v", txn.Hash().String()) + } + + additionalDESORoyaltiesMap := stateChangeMetadata.AdditionalDESORoyaltiesMap + additionalCoinRoyaltiesMap := stateChangeMetadata.AdditionalCoinRoyaltiesMap + txnMeta.CreateNFTTxindexMetadata = &lib.CreateNFTTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + AdditionalDESORoyaltiesMap: additionalDESORoyaltiesMap, + AdditionalCoinRoyaltiesMap: additionalCoinRoyaltiesMap, + } + for pubKeyIter := range additionalDESORoyaltiesMap { + pubKey := pubKeyIter + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCreatorPublicKeyBase58Check", + }) + } + for pubKeyIter := range additionalCoinRoyaltiesMap { + pubKey := pubKeyIter + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCoinPublicKeyBase58Check", + }) + } + case lib.TxnTypeUpdateNFT: + realTxMeta := 
txn.TxnMeta.(*lib.UpdateNFTMetadata) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeUpdateNFT) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing update nft utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.UpdateNFTStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing update nft state change metadata error: %v", txn.Hash().String()) + } + + postEntry := stateChangeMetadata.NFTPostEntry + + additionalDESORoyaltiesMap := stateChangeMetadata.AdditionalDESORoyaltiesMap + additionalCoinRoyaltiesMap := stateChangeMetadata.AdditionalCoinRoyaltiesMap + txnMeta.UpdateNFTTxindexMetadata = &lib.UpdateNFTTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + IsForSale: realTxMeta.IsForSale, + } + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(postEntry.PosterPublicKey, params), + Metadata: "NFTCreatorPublicKeyBase58Check", + }) + for pubKeyIter := range additionalDESORoyaltiesMap { + pubKey := pubKeyIter + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCreatorPublicKeyBase58Check", + }) + } + for pubKeyIter := range additionalCoinRoyaltiesMap { + pubKey := pubKeyIter + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: pubKey, + Metadata: "AdditionalNFTRoyaltyToCoinPublicKeyBase58Check", + }) + } + case lib.TxnTypeNFTTransfer: + realTxMeta := txn.TxnMeta.(*lib.NFTTransferMetadata) + + txnMeta.NFTTransferTxindexMetadata = &lib.NFTTransferTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + SerialNumber: realTxMeta.SerialNumber, + } + + txnMeta.AffectedPublicKeys = 
append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.ReceiverPublicKey, params), + Metadata: "NFTTransferRecipientPublicKeyBase58Check", + }) + case lib.TxnTypeAcceptNFTTransfer: + realTxMeta := txn.TxnMeta.(*lib.AcceptNFTTransferMetadata) + + txnMeta.AcceptNFTTransferTxindexMetadata = &lib.AcceptNFTTransferTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + SerialNumber: realTxMeta.SerialNumber, + } + case lib.TxnTypeBurnNFT: + realTxMeta := txn.TxnMeta.(*lib.BurnNFTMetadata) + + txnMeta.BurnNFTTxindexMetadata = &lib.BurnNFTTxindexMetadata{ + NFTPostHashHex: hex.EncodeToString(realTxMeta.NFTPostHash[:]), + SerialNumber: realTxMeta.SerialNumber, + } + case lib.TxnTypeBasicTransfer: + // Add the public key of the receiver to the affected public keys. + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeAddBalance) + if utxoOp != nil { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(utxoOp.BalancePublicKey, params), + Metadata: "BasicTransferAddBalancePublicKeyBase58Check", + }) + } + diamondLevelBytes, hasDiamondLevel := txn.ExtraData[lib.DiamondLevelKey] + diamondPostHash, hasDiamondPostHash := txn.ExtraData[lib.DiamondPostHashKey] + if hasDiamondLevel && hasDiamondPostHash { + diamondLevel, bytesRead := lib.Varint(diamondLevelBytes) + if bytesRead <= 0 { + glog.Errorf("Update TxIndex: Error reading diamond level for txn: %v", txn.Hash().String()) + } else { + txnMeta.BasicTransferTxindexMetadata.DiamondLevel = diamondLevel + txnMeta.BasicTransferTxindexMetadata.PostHashHex = hex.EncodeToString(diamondPostHash) + } + } + case lib.TxnTypeDAOCoin: + realTxMeta := txn.TxnMeta.(*lib.DAOCoinMetadata) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeDAOCoin) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: 
missing dao coin utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.DAOCoinStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing dao coin state change metadata error: %v", txn.Hash().String()) + } + + creatorProfileEntry := stateChangeMetadata.CreatorProfileEntry + + var metadata string + var operationString string + switch realTxMeta.OperationType { + case lib.DAOCoinOperationTypeMint: + metadata = "DAOCoinMintPublicKeyBase58Check" + operationString = "mint" + case lib.DAOCoinOperationTypeBurn: + metadata = "DAOCoinBurnPublicKeyBase58Check" + operationString = "burn" + case lib.DAOCoinOperationTypeDisableMinting: + metadata = "DAOCoinDisableMintingPublicKeyBase58Check" + operationString = "disable_minting" + case lib.DAOCoinOperationTypeUpdateTransferRestrictionStatus: + metadata = "DAOCoinUpdateTransferRestrictionStatus" + operationString = "update_transfer_restriction_status" + } + + txnMeta.DAOCoinTxindexMetadata = &lib.DAOCoinTxindexMetadata{ + CreatorUsername: string(creatorProfileEntry.Username), + OperationType: operationString, + CoinsToMintNanos: &realTxMeta.CoinsToMintNanos, + CoinsToBurnNanos: &realTxMeta.CoinsToBurnNanos, + TransferRestrictionStatus: realTxMeta.TransferRestrictionStatus.String(), + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(creatorProfileEntry.PublicKey, params), + Metadata: metadata, + }) + case lib.TxnTypeDAOCoinTransfer: + realTxMeta := txn.TxnMeta.(*lib.DAOCoinTransferMetadata) + + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeDAOCoinTransfer) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing dao coin transfer utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.DAOCoinTransferStateChangeMetadata) + if 
!ok {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing dao coin transfer state change metadata error: %v", txn.Hash().String())
+ }
+
+ creatorProfileEntry := stateChangeMetadata.CreatorProfileEntry
+ txnMeta.DAOCoinTransferTxindexMetadata = &lib.DAOCoinTransferTxindexMetadata{
+ CreatorUsername: string(creatorProfileEntry.Username),
+ DAOCoinToTransferNanos: realTxMeta.DAOCoinToTransferNanos,
+ }
+
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(realTxMeta.ReceiverPublicKey, params),
+ Metadata: "ReceiverPublicKey",
+ })
+ case lib.TxnTypeDAOCoinLimitOrder:
+ realTxMeta := txn.TxnMeta.(*lib.DAOCoinLimitOrderMetadata)
+
+ utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeDAOCoinLimitOrder)
+ if utxoOp == nil || utxoOp.StateChangeMetadata == nil {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing dao coin limit order utxo op error: %v", txn.Hash().String())
+ }
+
+ stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.DAOCoinLimitOrderStateChangeMetadata)
+ if !ok {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing dao coin limit order state change metadata error: %v", txn.Hash().String())
+ }
+
+ // We only update the mempool if the transactor submitted a new order,
+ // not if the transactor cancelled an existing one.
+ if realTxMeta.CancelOrderID != nil { + break + } + + if !realTxMeta.BuyingDAOCoinCreatorPublicKey.IsZeroPublicKey() { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.BuyingDAOCoinCreatorPublicKey.ToBytes(), params), + Metadata: "BuyingDAOCoinCreatorPublicKey", + }) + } + + if !realTxMeta.SellingDAOCoinCreatorPublicKey.IsZeroPublicKey() { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.SellingDAOCoinCreatorPublicKey.ToBytes(), params), + Metadata: "SellingDAOCoinCreatorPublicKey", + }) + } + + uniquePublicKeyMap := make(map[string]bool) + fulfilledOrderMetadata := []*lib.FilledDAOCoinLimitOrderMetadata{} + for _, filledOrder := range stateChangeMetadata.FilledDAOCoinLimitOrdersMetadata { + uniquePublicKeyMap[filledOrder.TransactorPublicKeyBase58Check] = true + fulfilledOrderMetadata = append(fulfilledOrderMetadata, &lib.FilledDAOCoinLimitOrderMetadata{ + TransactorPublicKeyBase58Check: filledOrder.TransactorPublicKeyBase58Check, + BuyingDAOCoinCreatorPublicKey: filledOrder.BuyingDAOCoinCreatorPublicKey, + SellingDAOCoinCreatorPublicKey: filledOrder.SellingDAOCoinCreatorPublicKey, + CoinQuantityInBaseUnitsBought: filledOrder.CoinQuantityInBaseUnitsBought, + CoinQuantityInBaseUnitsSold: filledOrder.CoinQuantityInBaseUnitsSold, + IsFulfilled: filledOrder.IsFulfilled, + }) + } + + for uniquePublicKey := range uniquePublicKeyMap { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: uniquePublicKey, + Metadata: "FilledOrderPublicKey", + }) + } + + txnMeta.DAOCoinLimitOrderTxindexMetadata = &lib.DAOCoinLimitOrderTxindexMetadata{ + FilledDAOCoinLimitOrdersMetadata: fulfilledOrderMetadata, + BuyingDAOCoinCreatorPublicKey: lib.PkToString( + realTxMeta.BuyingDAOCoinCreatorPublicKey.ToBytes(), params), + 
SellingDAOCoinCreatorPublicKey: lib.PkToString( + realTxMeta.SellingDAOCoinCreatorPublicKey.ToBytes(), params), + ScaledExchangeRateCoinsToSellPerCoinToBuy: realTxMeta.ScaledExchangeRateCoinsToSellPerCoinToBuy, + QuantityToFillInBaseUnits: realTxMeta.QuantityToFillInBaseUnits, + } + + case lib.TxnTypeCreateUserAssociation: + realTxMeta := txn.TxnMeta.(*lib.CreateUserAssociationMetadata) + targetUserPublicKeyBase58Check := lib.PkToString(realTxMeta.TargetUserPublicKey.ToBytes(), params) + appPublicKeyBase58Check := lib.PkToString(realTxMeta.AppPublicKey.ToBytes(), params) + + txnMeta.CreateUserAssociationTxindexMetadata = &lib.CreateUserAssociationTxindexMetadata{ + TargetUserPublicKeyBase58Check: targetUserPublicKeyBase58Check, + AppPublicKeyBase58Check: appPublicKeyBase58Check, + AssociationType: string(realTxMeta.AssociationType), + AssociationValue: string(realTxMeta.AssociationValue), + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: targetUserPublicKeyBase58Check, + Metadata: "AssociationTargetUserPublicKeyBase58Check", + }) + + case lib.TxnTypeDeleteUserAssociation: + realTxMeta := txn.TxnMeta.(*lib.DeleteUserAssociationMetadata) + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeDeleteUserAssociation) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing delete user association utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.DeleteUserAssociationStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing delete user association state change metadata error: %v", txn.Hash().String()) + } + + prevAssociationEntry := &lib.UserAssociationEntry{} + targetUserPublicKeyBase58Check := "" + appPublicKeyKeyBase58Check := "" + if utxoOps[len(utxoOps)-1].PrevUserAssociationEntry != nil { + prevAssociationEntry = 
utxoOps[len(utxoOps)-1].PrevUserAssociationEntry + targetUserPublicKeyBase58Check = stateChangeMetadata.TargetUserPublicKeyBase58Check + appPublicKeyKeyBase58Check = stateChangeMetadata.AppPublicKeyBase58Check + } + + txnMeta.DeleteUserAssociationTxindexMetadata = &lib.DeleteUserAssociationTxindexMetadata{ + AssociationIDHex: hex.EncodeToString(realTxMeta.AssociationID.ToBytes()), + TargetUserPublicKeyBase58Check: targetUserPublicKeyBase58Check, + AppPublicKeyBase58Check: appPublicKeyKeyBase58Check, + AssociationType: string(prevAssociationEntry.AssociationType), + AssociationValue: string(prevAssociationEntry.AssociationValue), + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: targetUserPublicKeyBase58Check, + Metadata: "AssociationTargetUserPublicKeyBase58Check", + }) + + case lib.TxnTypeCreatePostAssociation: + realTxMeta := txn.TxnMeta.(*lib.CreatePostAssociationMetadata) + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeCreatePostAssociation) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing create post association utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.CreatePostAssociationStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing create post association state change metadata error: %v", txn.Hash().String()) + } + + appPublicKeyBase58Check := lib.PkToString(realTxMeta.AppPublicKey.ToBytes(), params) + + txnMeta.CreatePostAssociationTxindexMetadata = &lib.CreatePostAssociationTxindexMetadata{ + PostHashHex: hex.EncodeToString(realTxMeta.PostHash.ToBytes()), + AppPublicKeyBase58Check: appPublicKeyBase58Check, + AssociationType: string(realTxMeta.AssociationType), + AssociationValue: string(realTxMeta.AssociationValue), + } + + postEntry := stateChangeMetadata.PostEntry + + transactionExtraMetadata = 
&PostTransactionExtraMetadata{PosterPublicKeyBase58Check: lib.PkToString(postEntry.PosterPublicKey, params)} + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(postEntry.PosterPublicKey, params), + Metadata: "AssociationTargetPostCreatorPublicKeyBase58Check", + }) + + case lib.TxnTypeDeletePostAssociation: + realTxMeta := txn.TxnMeta.(*lib.DeletePostAssociationMetadata) + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeDeletePostAssociation) + if utxoOp == nil || utxoOp.StateChangeMetadata == nil { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing delete post association utxo op error: %v", txn.Hash().String()) + } + stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.DeletePostAssociationStateChangeMetadata) + if !ok { + return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing delete post association state change metadata error: %v", txn.Hash().String()) + } + + prevAssociationEntry := &lib.PostAssociationEntry{} + postHashHex := "" + appPublicKeyKeyBase58Check := "" + postAuthorPublicKeyBase58Check := "" + if utxoOps[len(utxoOps)-1].PrevPostAssociationEntry != nil { + prevAssociationEntry = utxoOps[len(utxoOps)-1].PrevPostAssociationEntry + postHashHex = hex.EncodeToString(prevAssociationEntry.PostHash.ToBytes()) + appPublicKeyKeyBase58Check = stateChangeMetadata.AppPublicKeyBase58Check + postAuthorPublicKeyBase58Check = lib.PkToString(stateChangeMetadata.PostEntry.PosterPublicKey, params) + } + + txnMeta.DeletePostAssociationTxindexMetadata = &lib.DeletePostAssociationTxindexMetadata{ + AssociationIDHex: hex.EncodeToString(realTxMeta.AssociationID.ToBytes()), + PostHashHex: postHashHex, + AppPublicKeyBase58Check: appPublicKeyKeyBase58Check, + AssociationType: string(prevAssociationEntry.AssociationType), + AssociationValue: string(prevAssociationEntry.AssociationValue), + } + + txnMeta.AffectedPublicKeys = 
append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: postAuthorPublicKeyBase58Check, + Metadata: "AssociationTargetPostCreatorPublicKeyBase58Check", + }) + + case lib.TxnTypeAccessGroup: + realTxMeta := txn.TxnMeta.(*lib.AccessGroupMetadata) + txnMeta.AccessGroupTxindexMetadata = &lib.AccessGroupTxindexMetadata{ + AccessGroupOwnerPublicKey: *lib.NewPublicKey(realTxMeta.AccessGroupOwnerPublicKey), + AccessGroupPublicKey: *lib.NewPublicKey(realTxMeta.AccessGroupPublicKey), + AccessGroupKeyName: *lib.NewGroupKeyName(realTxMeta.AccessGroupKeyName), + AccessGroupOperationType: realTxMeta.AccessGroupOperationType, + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.AccessGroupOwnerPublicKey, params), + Metadata: "AccessGroupCreateOwnerPublicKeyBase58Check", + }) + + case lib.TxnTypeAccessGroupMembers: + realTxMeta := txn.TxnMeta.(*lib.AccessGroupMembersMetadata) + txnMeta.AccessGroupMembersTxindexMetadata = &lib.AccessGroupMembersTxindexMetadata{ + AccessGroupOwnerPublicKey: *lib.NewPublicKey(realTxMeta.AccessGroupOwnerPublicKey), + AccessGroupKeyName: *lib.NewGroupKeyName(realTxMeta.AccessGroupKeyName), + AccessGroupMembersList: realTxMeta.AccessGroupMembersList, + AccessGroupMemberOperationType: realTxMeta.AccessGroupMemberOperationType, + } + + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(realTxMeta.AccessGroupOwnerPublicKey, params), + Metadata: "AccessGroupMembersOwnerPublicKeyBase58Check", + }) + + case lib.TxnTypeNewMessage: + realTxMeta := txn.TxnMeta.(*lib.NewMessageMetadata) + txnMeta.NewMessageTxindexMetadata = &lib.NewMessageTxindexMetadata{ + SenderAccessGroupOwnerPublicKey: realTxMeta.SenderAccessGroupOwnerPublicKey, + SenderAccessGroupKeyName: realTxMeta.SenderAccessGroupKeyName, + RecipientAccessGroupOwnerPublicKey: 
realTxMeta.RecipientAccessGroupOwnerPublicKey,
+ RecipientAccessGroupKeyName: realTxMeta.RecipientAccessGroupKeyName,
+ TimestampNanos: realTxMeta.TimestampNanos,
+ NewMessageType: realTxMeta.NewMessageType,
+ NewMessageOperation: realTxMeta.NewMessageOperation,
+ }
+
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(realTxMeta.SenderAccessGroupOwnerPublicKey.ToBytes(), params),
+ Metadata: "NewMessageSenderAccessGroupOwnerPublicKey",
+ })
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: lib.PkToString(realTxMeta.RecipientAccessGroupOwnerPublicKey.ToBytes(), params),
+ Metadata: "NewMessageRecipientAccessGroupOwnerPublicKey",
+ })
+ case lib.TxnTypeRegisterAsValidator:
+ realTxMeta := txn.TxnMeta.(*lib.RegisterAsValidatorMetadata)
+
+ validatorPublicKeyBase58Check := lib.PkToString(txn.PublicKey, params)
+
+ // Convert domains from []byte to string.
+ var domains []string
+ for _, domain := range realTxMeta.Domains {
+ domains = append(domains, string(domain))
+ }
+
+ // Construct TxindexMetadata.
+ txnMeta.RegisterAsValidatorTxindexMetadata = &lib.RegisterAsValidatorTxindexMetadata{
+ ValidatorPublicKeyBase58Check: validatorPublicKeyBase58Check,
+ Domains: domains,
+ DisableDelegatedStake: realTxMeta.DisableDelegatedStake,
+ VotingPublicKey: realTxMeta.VotingPublicKey.ToString(),
+ VotingAuthorization: realTxMeta.VotingAuthorization.ToString(),
+ }
+
+ // Construct AffectedPublicKeys.
+
+ txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{
+ PublicKeyBase58Check: validatorPublicKeyBase58Check,
+ Metadata: "RegisteredValidatorPublicKeyBase58Check",
+ })
+ case lib.TxnTypeUnregisterAsValidator:
+ validatorPublicKeyBase58Check := lib.PkToString(txn.PublicKey, params)
+
+ utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeUnregisterAsValidator)
+
+ if utxoOp == nil || utxoOp.StateChangeMetadata == nil {
+ return nil, nil, fmt.Errorf("ComputeTransactionMetadata: missing unregister as validator utxo op error: %v", txn.Hash().String())
+ }
+ stateChangeMetadata, ok := utxoOp.StateChangeMetadata.(*lib.UnregisterAsValidatorStateChangeMetadata)
+ if !ok {
+ return nil, nil, fmt.Errorf(
+ "ComputeTransactionMetadata: missing unregister as validator state change metadata error: %v",
+ txn.Hash().String())
+ }
+ var unstakedStakers []*lib.UnstakedStakerTxindexMetadata
+ for _, stakeEntry := range utxoOp.PrevStakeEntries {
+ // Look up the staker's public key from the state change metadata.
+ stakerPublicKeyBase58Check := stateChangeMetadata.
+ StakerPKIDToPublicKeyBase58CheckMap[*stakeEntry.StakerPKID]
+ unstakedStakers = append(unstakedStakers, &lib.UnstakedStakerTxindexMetadata{
+ StakerPublicKeyBase58Check: stakerPublicKeyBase58Check,
+ UnstakeAmountNanos: stakeEntry.StakeAmountNanos,
+ })
+ }
+
+ // Construct TxindexMetadata.
+ txnMeta.UnregisterAsValidatorTxindexMetadata = &lib.UnregisterAsValidatorTxindexMetadata{ + ValidatorPublicKeyBase58Check: validatorPublicKeyBase58Check, + UnstakedStakers: unstakedStakers, + } + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: validatorPublicKeyBase58Check, + Metadata: "UnregisteredValidatorPublicKeyBase58Check", + }) + for _, unstakedStaker := range unstakedStakers { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: unstakedStaker.StakerPublicKeyBase58Check, + Metadata: "UnstakedStakerPublicKeyBase58Check", + }) + } + case lib.TxnTypeStake: + realTxMeta := txn.TxnMeta.(*lib.StakeMetadata) + + stakerPublicKeyBase58Check := lib.PkToString(txn.PublicKey, params) + + // Convert ValidatorPublicKey to ValidatorPublicKeyBase58Check. + validatorPublicKeyBase58Check := lib.PkToString(realTxMeta.ValidatorPublicKey.ToBytes(), params) + + // Construct TxindexMetadata. + txnMeta.StakeTxindexMetadata = &lib.StakeTxindexMetadata{ + StakerPublicKeyBase58Check: stakerPublicKeyBase58Check, + ValidatorPublicKeyBase58Check: validatorPublicKeyBase58Check, + StakeAmountNanos: realTxMeta.StakeAmountNanos, + } + + // Construct AffectedPublicKeys. + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, []*lib.AffectedPublicKey{ + { + PublicKeyBase58Check: stakerPublicKeyBase58Check, + Metadata: "StakerPublicKeyBase58Check", + }, + { + PublicKeyBase58Check: validatorPublicKeyBase58Check, + Metadata: "ValidatorStakedToPublicKeyBase58Check", + }, + }...) + case lib.TxnTypeUnstake: + realTxMeta := txn.TxnMeta.(*lib.UnstakeMetadata) + // Convert TransactorPublicKeyBytes to StakerPublicKeyBase58Check. + stakerPublicKeyBase58Check := lib.PkToString(txn.PublicKey, params) + + // Convert ValidatorPublicKey to ValidatorPublicKeyBase58Check. 
+ validatorPublicKeyBase58Check := lib.PkToString(realTxMeta.ValidatorPublicKey.ToBytes(), params) + + // Construct TxindexMetadata. + txnMeta.UnstakeTxindexMetadata = &lib.UnstakeTxindexMetadata{ + StakerPublicKeyBase58Check: stakerPublicKeyBase58Check, + ValidatorPublicKeyBase58Check: validatorPublicKeyBase58Check, + UnstakeAmountNanos: realTxMeta.UnstakeAmountNanos, + } + + // Construct AffectedPublicKeys. + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, []*lib.AffectedPublicKey{ + { + PublicKeyBase58Check: stakerPublicKeyBase58Check, + Metadata: "UnstakerPublicKeyBase58Check", + }, + { + PublicKeyBase58Check: validatorPublicKeyBase58Check, + Metadata: "ValidatorUnstakedFromPublicKeyBase58Check", + }, + }...) + case lib.TxnTypeUnlockStake: + realTxMeta := txn.TxnMeta.(*lib.UnlockStakeMetadata) + + // Convert TransactorPublicKeyBytes to StakerPublicKeyBase58Check. + stakerPublicKeyBase58Check := lib.PkToString(txn.PublicKey, params) + + // Convert ValidatorPublicKey to ValidatorPublicKeyBase58Check. + validatorPublicKeyBase58Check := lib.PkToString(realTxMeta.ValidatorPublicKey.ToBytes(), params) + + // Calculate TotalUnlockedAmountNanos. + totalUnlockedAmountNanos := uint256.NewInt(0) + utxoOp := GetUtxoOpByOperationType(utxoOps, lib.OperationTypeUnlockStake) + var err error + for _, prevLockedStakeEntry := range utxoOp.PrevLockedStakeEntries { + totalUnlockedAmountNanos, err = lib.SafeUint256().Add( + totalUnlockedAmountNanos, prevLockedStakeEntry.LockedAmountNanos, + ) + if err != nil { + glog.Errorf("ComputeTransactionMetadata: error calculating TotalUnlockedAmountNanos: %v", err) + totalUnlockedAmountNanos = uint256.NewInt(0) + break + } + } + + // Construct TxindexMetadata. 
+ txnMeta.UnlockStakeTxindexMetadata = &lib.UnlockStakeTxindexMetadata{ + StakerPublicKeyBase58Check: stakerPublicKeyBase58Check, + ValidatorPublicKeyBase58Check: validatorPublicKeyBase58Check, + StartEpochNumber: realTxMeta.StartEpochNumber, + EndEpochNumber: realTxMeta.EndEpochNumber, + TotalUnlockedAmountNanos: totalUnlockedAmountNanos, + } + + // Construct AffectedPublicKeys. + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: stakerPublicKeyBase58Check, + Metadata: "UnlockedStakerPublicKeyBase58Check", + }) + case lib.TxnTypeUnjailValidator: + // Convert TransactorPublicKeyBytes to ValidatorPublicKeyBase58Check. + validatorPublicKeyBase58Check := lib.PkToString(txn.PublicKey, params) + + txnMeta.UnjailValidatorTxindexMetadata = &lib.UnjailValidatorTxindexMetadata{} + // Construct AffectedPublicKeys. + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: validatorPublicKeyBase58Check, + Metadata: "UnjailedValidatorPublicKeyBase58Check", + }) + case lib.TxnTypeCoinLockup: + realTxMeta := txn.TxnMeta.(*lib.CoinLockupMetadata) + profilePublicKey := realTxMeta.ProfilePublicKey.ToBytes() + recipientPublicKey := realTxMeta.RecipientPublicKey.ToBytes() + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(profilePublicKey, params), + Metadata: "CoinLockupProfilePublicKeyBase58Check", + }) + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(recipientPublicKey, params), + Metadata: "CoinLockupRecipientPublicKeyBase58Check", + }) + case lib.TxnTypeUpdateCoinLockupParams: + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(txn.PublicKey, params), + Metadata: "UpdateCoinLockupParamsPublicKeyBase58Check", + }) + case 
lib.TxnTypeCoinLockupTransfer: + realTxMeta := txn.TxnMeta.(*lib.CoinLockupTransferMetadata) + profilePublicKey := realTxMeta.ProfilePublicKey.ToBytes() + recipientPublicKey := realTxMeta.RecipientPublicKey.ToBytes() + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(profilePublicKey, params), + Metadata: "CoinLockupTransferProfilePublicKeyBase58Check", + }) + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(recipientPublicKey, params), + Metadata: "CoinLockupTransferRecipientPublicKeyBase58Check", + }) + case lib.TxnTypeCoinUnlock: + realTxMeta := txn.TxnMeta.(*lib.CoinUnlockMetadata) + profilePublicKey := realTxMeta.ProfilePublicKey.ToBytes() + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: lib.PkToString(profilePublicKey, params), + Metadata: "CoinUnlockProfilePublicKeyBase58Check", + }) + case lib.TxnTypeAtomicTxnsWrapper: + realTxMeta := txn.TxnMeta.(*lib.AtomicTxnsWrapperMetadata) + txnMeta.AtomicTxnsWrapperTxindexMetadata = &lib.AtomicTxnsWrapperTxindexMetadata{} + txnMeta.AtomicTxnsWrapperTxindexMetadata.InnerTxnsTransactionMetadata = []*lib.TransactionMetadata{} + // Find the utxo op for the atomic txn wrapper + var atomicWrapperUtxoOp *lib.UtxoOperation + for _, utxoOp := range utxoOps { + if utxoOp.Type == lib.OperationTypeAtomicTxnsWrapper { + atomicWrapperUtxoOp = utxoOp + } + } + // This should never happen. 
+ if atomicWrapperUtxoOp == nil { + return nil, nil, errors.New("ComputeTransactionMetadata: Could not find utxo op for atomic txn wrapper") + } + innerUtxoOps := atomicWrapperUtxoOp.AtomicTxnsInnerUtxoOps + if len(innerUtxoOps) != len(realTxMeta.Txns) { + return nil, nil, errors.New("ComputeTransactionMetadata: Number of inner utxo ops does not match number of inner txns") + } + for ii, innerTxn := range realTxMeta.Txns { + // Compute the transaction metadata for each inner transaction. + var innerTxnsTxnMetadata *lib.TransactionMetadata + innerTxnsTxnMetadata, _, err = ComputeTransactionMetadata( + innerTxn, + blockHashHex, + params, + innerTxn.TxnFeeNanos, + txnIndexInBlock, + innerUtxoOps[ii], + ) + if err != nil { + return nil, nil, errors.Wrapf(err, "ComputeTransactionMetadata: Error computing inner transaction metadata") + } + txnMeta.AtomicTxnsWrapperTxindexMetadata.InnerTxnsTransactionMetadata = append( + txnMeta.AtomicTxnsWrapperTxindexMetadata.InnerTxnsTransactionMetadata, innerTxnsTxnMetadata) + + // Create a global list of all affected public keys from each inner transaction. + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, innerTxnsTxnMetadata.AffectedPublicKeys...) + } + } + // Check if the transactor is an affected public key. If not, add them. + // We skip this for atomic transactions as their transactor is the ZeroPublicKey. 
+ if txnMeta.TransactorPublicKeyBase58Check != "" && + !bytes.Equal(txn.PublicKey, lib.ZeroPublicKey.ToBytes()) && + txn.TxnMeta.GetTxnType() != lib.TxnTypeAtomicTxnsWrapper { + transactorPublicKeyFound := false + for _, affectedPublicKey := range txnMeta.AffectedPublicKeys { + if affectedPublicKey.PublicKeyBase58Check == txnMeta.TransactorPublicKeyBase58Check { + transactorPublicKeyFound = true + break + } + } + if !transactorPublicKeyFound { + txnMeta.AffectedPublicKeys = append(txnMeta.AffectedPublicKeys, &lib.AffectedPublicKey{ + PublicKeyBase58Check: txnMeta.TransactorPublicKeyBase58Check, + Metadata: "TransactorPublicKeyBase58Check", + }) + } + } + return txnMeta, &transactionExtraMetadata, nil +} + +func GetUtxoOpByOperationType(utxoOps []*lib.UtxoOperation, operationType lib.OperationType) *lib.UtxoOperation { + for _, utxoOp := range utxoOps { + if utxoOp.Type == operationType { + return utxoOp + } + } + return nil +} + +func FilterEntriesByPrefix(entries []*lib.StateChangeEntry, prefix []byte) []*lib.StateChangeEntry { + filteredEntries := make([]*lib.StateChangeEntry, 0) + for _, entry := range entries { + if isPrefixMatch(entry.KeyBytes, prefix) { + filteredEntries = append(filteredEntries, entry) + } + } + return filteredEntries +} + +func FilterKeysByPrefix(keys [][]byte, prefix []byte) [][]byte { + filteredKeys := make([][]byte, 0) + for _, key := range keys { + if isPrefixMatch(key, prefix) { + filteredKeys = append(filteredKeys, key) + } + } + return filteredKeys +} + +func isPrefixMatch(key []byte, prefix []byte) bool { + if len(key) < len(prefix) { + return false + } + return bytes.Equal(key[:len(prefix)], prefix) +} + +// CheckSliceSize checks if the requested slice size is within safe limits. 
+func CheckSliceSize(length int) error { + const maxInt = int(^uint(0) >> 1) // platform-dependent maximum int value + + if length < 0 { + return errors.New("length or capacity cannot be negative") + } + if length > maxInt { + return errors.New("requested slice size exceeds maximum allowed size") + } + return nil } diff --git a/consumer/helpers_test.go b/consumer/helpers_test.go index 566d073..f8118c2 100644 --- a/consumer/helpers_test.go +++ b/consumer/helpers_test.go @@ -5,20 +5,79 @@ import ( "encoding/json" "fmt" "github.com/deso-protocol/core/lib" + "github.com/deso-protocol/uint256" + "github.com/stretchr/testify/require" + "github.com/uptrace/bun/extra/bunbig" "testing" "time" ) -type testResponse struct { +type testPostResponse struct { PosterPublicKey string `decode_function:"base_58_check" decode_src_field_name:"PosterPublicKey"` PostHash string `decode_function:"blockhash" decode_src_field_name:"PostHash"` + ParentPostHash string `decode_function:"bytehash" decode_src_field_name:"ParentStakeID"` Body string `decode_function:"deso_body_schema" decode_src_field_name:"Body" decode_body_field_name:"Body" decode_image_urls_field_name:"ImageUrls" decode_video_urls_field_name:"VideoUrls"` ImageUrls []string VideoUrls []string Timestamp time.Time `decode_function:"timestamp" decode_src_field_name:"TimestampNanos"` } -func TestCopyStruct(t *testing.T) { +type testProfileResponse struct { + Username string `decode_function:"string_bytes" decode_src_field_name:"Username"` + Description string `decode_function:"string_bytes" decode_src_field_name:"Description"` + ExtraData map[string]string `decode_function:"extra_data" decode_src_field_name:"ExtraData"` + DaoCoinMintingDisabled bool `decode_function:"nested_value" decode_src_field_name:"DAOCoinEntry" nested_field_name:"MintingDisabled"` + DaoCoinTransferRestrictionStatus lib.TransferRestrictionStatus `decode_function:"nested_value" decode_src_field_name:"DAOCoinEntry" nested_field_name:"TransferRestrictionStatus"` 
+} + +type testFollowResponse struct { + FollowerPkid []byte `pg:",use_zero" decode_function:"pkid" decode_src_field_name:"FollowerPKID"` + FollowedPkid []byte `pg:",use_zero" decode_function:"pkid" decode_src_field_name:"FollowedPKID"` +} + +func TestCopyFollowStruct(t *testing.T) { + followEntry := &lib.FollowEntry{ + FollowerPKID: lib.NewPKID([]byte{2, 57, 123, 26, 128, 235, 160, 166, 6, 68, 101, 10, 241, 60, 42, 111, 253, 251, 191, 56, 131, 12, 175, 195, 73, 55, 167, 93, 221, 68, 184, 206, 82}), + FollowedPKID: lib.NewPKID([]byte{2, 57, 123, 26, 128, 235, 160, 166, 6, 68, 101, 10, 241, 60, 42, 111, 253, 251, 191, 56, 131, 12, 175, 195, 73, 55, 167, 93, 221, 68, 184, 206, 82}), + } + responseStruct := &testFollowResponse{} + err := CopyStruct(followEntry, responseStruct) + require.NoError(t, err) + fmt.Printf("Response: %+v", responseStruct) +} + +func TestCopyProfileStruct(t *testing.T) { + usernameBytes := []byte("test_username") + descriptionBytes := []byte("test_description") + extraData := map[string][]byte{"test_key": []byte("test_value")} + profileEntry := &lib.ProfileEntry{ + Username: usernameBytes, + Description: descriptionBytes, + ExtraData: extraData, + DAOCoinEntry: lib.CoinEntry{ + MintingDisabled: true, + TransferRestrictionStatus: lib.TransferRestrictionStatusUnrestricted, + }, + } + responseStruct := &testProfileResponse{} + err := CopyStruct(profileEntry, responseStruct) + require.NoError(t, err) + require.Equal(t, "test_username", responseStruct.Username) + require.Equal(t, "test_description", responseStruct.Description) + require.Equal(t, "test_value", responseStruct.ExtraData["test_key"]) + require.Equal(t, true, responseStruct.DaoCoinMintingDisabled) + require.Equal(t, lib.TransferRestrictionStatusUnrestricted, responseStruct.DaoCoinTransferRestrictionStatus) + + profileEntry.Description = []byte{} + + err = CopyStruct(profileEntry, responseStruct) + require.NoError(t, err) + require.Equal(t, "test_username", responseStruct.Username) + 
require.Equal(t, "", responseStruct.Description) + require.Equal(t, "test_value", responseStruct.ExtraData["test_key"]) +} + +func TestCopyPostStruct(t *testing.T) { postBytesHex := "13a546bba07e9cd96e29cea659b3bb6de1b5144a50bf2a0c94d05701861d8254" byteArray, err := hex.DecodeString(postBytesHex) if err != nil { @@ -28,6 +87,7 @@ func TestCopyStruct(t *testing.T) { blockHash := lib.NewBlockHash(byteArray) + blockHash.ToBytes() postBody := &lib.DeSoBodySchema{ Body: "Test string", ImageURLs: []string{"https://test.com/image1.jpg", "https://test.com/image2.jpg"}, @@ -36,15 +96,59 @@ func TestCopyStruct(t *testing.T) { bodyBytes, err := json.Marshal(postBody) + currentTimeNanos := time.Now() + struct1 := &lib.PostEntry{ - TimestampNanos: uint64(time.Now().UnixNano()), + TimestampNanos: uint64(currentTimeNanos.UnixNano()), PostHash: blockHash, + ParentStakeID: blockHash.ToBytes(), Body: bodyBytes, PosterPublicKey: []byte{2, 57, 123, 26, 128, 235, 160, 166, 6, 68, 101, 10, 241, 60, 42, 111, 253, 251, 191, 56, 131, 12, 175, 195, 73, 55, 167, 93, 221, 68, 184, 206, 82}, } - struct2 := &testResponse{} + struct2 := &testPostResponse{} err = CopyStruct(struct1, struct2) + + require.NoError(t, err) fmt.Printf("struct2: %+v\n", struct2) + require.Equal(t, currentTimeNanos.UnixNano(), struct2.Timestamp.UnixNano()) + struct2.Timestamp = time.Time{} + require.Equal(t, &testPostResponse{ + PosterPublicKey: "BC1YLg7Bk5sq9iNY17bAwoAYiChLYpmWEi6nY6q5gnA1UQV6xixHjfV", + PostHash: "13a546bba07e9cd96e29cea659b3bb6de1b5144a50bf2a0c94d05701861d8254", + ParentPostHash: "13a546bba07e9cd96e29cea659b3bb6de1b5144a50bf2a0c94d05701861d8254", + Body: "Test string", + ImageUrls: []string{"https://test.com/image1.jpg", "https://test.com/image2.jpg"}, + VideoUrls: []string{"https://test.com/video1.mp4", "https://test.com/video2.mp4"}, + Timestamp: time.Time{}, + }, struct2) +} + +type testBalanceResponse struct { + BalanceNanos *bunbig.Int `decode_function:"uint256" 
decode_src_field_name:"BalanceNanos"` +} + +func TestConvertUint256ToBigInt(t *testing.T) { + balanceUint256, err := uint256.FromHex("0x3ADE68B1") + require.NoError(t, err) + balanceEntry := &lib.BalanceEntry{ + BalanceNanos: *balanceUint256, + } + + responseStruct := &testBalanceResponse{} + err = CopyStruct(balanceEntry, responseStruct) + require.NoError(t, err) + + require.Equal(t, uint64(987654321), responseStruct.BalanceNanos.ToUInt64()) +} + +func TestGetDisconnectOperationTypeForPrevEntry(t *testing.T) { + prevPostEntry := &lib.PostEntry{} + prevPostEntry = nil + operationType := getDisconnectOperationTypeForPrevEntry(prevPostEntry) + require.Equal(t, lib.DbOperationTypeDelete, operationType) + prevPostEntry = &lib.PostEntry{} + operationType = getDisconnectOperationTypeForPrevEntry(prevPostEntry) + require.Equal(t, lib.DbOperationTypeUpsert, operationType) } diff --git a/consumer/interfaces.go b/consumer/interfaces.go index 04f1653..5ce9863 100644 --- a/consumer/interfaces.go +++ b/consumer/interfaces.go @@ -4,17 +4,24 @@ import "github.com/deso-protocol/core/lib" type SyncEvent uint8 +// SyncEvent is an enum that represents the different sync events that can occur while consuming DeSo state. const ( // We intentionally skip zero as otherwise that would be the default value. SyncEventStart SyncEvent = 0 - SyncEventHypersyncComplete SyncEvent = 1 - SyncEventComplete SyncEvent = 2 + SyncEventHypersyncStart SyncEvent = 1 + SyncEventHypersyncComplete SyncEvent = 2 + // TODO: implement this. Should fire when the consumer has caught up to the tip. + SyncEventComplete SyncEvent = 3 + SyncEventBlocksyncStart SyncEvent = 4 ) // The StateSyncerDataHandler interface is implemented by the data handler implementation. It is used by the // consumer to get the relevant encoder for a given prefix id. 
type StateSyncerDataHandler interface { - HandleEntry(key []byte, encoder lib.DeSoEncoder, encoderType lib.EncoderType, dbOprationType lib.StateSyncerOperationType) error - HandleEntryBatch(batchedEntries *BatchedEntries) error + HandleEntryBatch(batchedEntries []*lib.StateChangeEntry, isMempool bool) error HandleSyncEvent(syncEvent SyncEvent) error + InitiateTransaction() error + CommitTransaction() error + RollbackTransaction() error + GetParams() *lib.DeSoParams } diff --git a/go.mod b/go.mod index eae264a..4fc8445 100644 --- a/go.mod +++ b/go.mod @@ -1,85 +1,245 @@ -module consumer +module github.com/deso-protocol/state-consumer -go 1.18 +go 1.24.0 + +toolchain go1.24.1 replace github.com/deso-protocol/core => ../core/ replace github.com/deso-protocol/backend => ../backend/ +replace github.com/deso-protocol/postgres-data-handler => ../postgres-data-handler/ + require ( - github.com/deso-protocol/core v0.0.0-00010101000000-000000000000 - github.com/golang/glog v1.0.0 - github.com/spf13/viper v1.7.1 - github.com/uptrace/bun v1.1.12 - github.com/uptrace/bun/dialect/pgdialect v1.1.12 - github.com/uptrace/bun/driver/pgdriver v1.1.12 - github.com/uptrace/bun/extra/bundebug v1.1.12 + github.com/btcsuite/btcd/btcec/v2 v2.3.4 + github.com/deso-protocol/backend v1.2.9 + github.com/deso-protocol/core v1.2.9 + github.com/deso-protocol/postgres-data-handler v0.0.0-00010101000000-000000000000 + github.com/deso-protocol/uint256 v1.3.2 + github.com/golang/glog v1.2.5 + github.com/google/uuid v1.6.0 + github.com/hashicorp/golang-lru/v2 v2.0.7 + github.com/pkg/errors v0.9.1 + github.com/stretchr/testify v1.10.0 + github.com/uptrace/bun/extra/bunbig v1.2.3 ) require ( - github.com/DataDog/datadog-go v4.5.0+incompatible // indirect - github.com/DataDog/zstd v1.4.8 // indirect - github.com/Microsoft/go-winio v0.4.16 // indirect - github.com/btcsuite/btcd v0.21.0-beta // indirect - github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f // indirect - 
github.com/btcsuite/btcutil v1.0.2 // indirect - github.com/bwesterb/go-ristretto v1.2.0 // indirect + cel.dev/expr v0.23.1 // indirect + cloud.google.com/go v0.121.0 // indirect + cloud.google.com/go/auth v0.16.1 // indirect + cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect + cloud.google.com/go/compute/metadata v0.6.0 // indirect + cloud.google.com/go/iam v1.5.2 // indirect + cloud.google.com/go/monitoring v1.24.2 // indirect + cloud.google.com/go/storage v1.54.0 // indirect + dario.cat/mergo v1.0.1 // indirect + github.com/AlecAivazis/survey/v2 v2.3.7 // indirect + github.com/DataDog/appsec-internal-go v1.11.2 // indirect + github.com/DataDog/datadog-agent/comp/core/tagger/origindetection v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/obfuscate v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/proto v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/remoteconfig/state v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/trace v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/util/log v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/util/scrubber v0.64.3 // indirect + github.com/DataDog/datadog-agent/pkg/version v0.64.3 // indirect + github.com/DataDog/datadog-go/v5 v5.6.0 // indirect + github.com/DataDog/go-libddwaf/v3 v3.5.4 // indirect + github.com/DataDog/go-runtime-metrics-internal v0.0.4-0.20241206090539-a14610dc22b6 // indirect + github.com/DataDog/go-sqllexer v0.1.6 // indirect + github.com/DataDog/go-tuf v1.1.0-0.5.2 // indirect + github.com/DataDog/gostackparse v0.7.0 // indirect + github.com/DataDog/opentelemetry-mapping-go/pkg/otlp/attributes v0.27.0 // indirect + github.com/DataDog/sketches-go v1.4.7 // indirect + github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 // indirect + github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 // indirect + github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping 
v0.51.0 // indirect + github.com/Masterminds/goutils v1.1.1 // indirect + github.com/Masterminds/semver/v3 v3.3.1 // indirect + github.com/Masterminds/sprig/v3 v3.3.0 // indirect + github.com/Microsoft/go-winio v0.6.2 // indirect + github.com/andygrunwald/go-jira v1.16.0 // indirect + github.com/btcsuite/btcd v0.24.2 // indirect + github.com/btcsuite/btcd/btcutil v1.1.6 // indirect + github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0 // indirect + github.com/btcsuite/btclog v0.0.0-20241017175713-3428138b75c7 // indirect + github.com/bwesterb/go-ristretto v1.2.3 // indirect github.com/cespare/xxhash v1.1.0 // indirect - github.com/cespare/xxhash/v2 v2.1.1 // indirect - github.com/cloudflare/circl v1.1.0 // indirect - github.com/davecgh/go-spew v1.1.1 // indirect - github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect - github.com/decred/dcrd/lru v1.1.1 // indirect - github.com/deso-protocol/go-deadlock v1.0.0 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575 // indirect + github.com/cloudflare/circl v1.6.1 // indirect + github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f // indirect + github.com/coreos/go-semver v0.3.1 // indirect + github.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect + github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect + github.com/decred/dcrd/crypto/blake256 v1.1.0 // indirect + github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect + github.com/deso-protocol/go-deadlock v1.0.1 // indirect github.com/deso-protocol/go-merkle-tree v1.0.0 // indirect - github.com/dgraph-io/badger/v3 v3.2103.0 // indirect - github.com/dgraph-io/ristretto v0.1.0 // indirect - github.com/dustin/go-humanize v1.0.0 // indirect - github.com/ethereum/go-ethereum v1.9.25 // indirect - github.com/fatih/color v1.14.1 // indirect - github.com/fsnotify/fsnotify v1.4.9 // indirect + github.com/dgraph-io/badger/v3 v3.2103.5 // indirect + 
github.com/dgraph-io/ristretto v0.2.0 // indirect + github.com/dustin/go-humanize v1.0.1 // indirect + github.com/eapache/queue/v2 v2.0.0-20230407133247-75960ed334e4 // indirect + github.com/ebitengine/purego v0.8.2 // indirect + github.com/emirpasic/gods v1.18.1 // indirect + github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect + github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect + github.com/ethereum/go-ethereum v1.15.11 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/fatih/structs v1.1.0 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect + github.com/fsnotify/fsnotify v1.9.0 // indirect github.com/gernest/mention v2.0.0+incompatible // indirect - github.com/go-pg/pg/v10 v10.10.0 // indirect + github.com/git-chglog/git-chglog v0.15.4 // indirect + github.com/go-jose/go-jose/v4 v4.1.0 // indirect + github.com/go-logr/logr v1.4.2 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-ole/go-ole v1.3.0 // indirect + github.com/go-pg/pg/v10 v10.14.0 // indirect github.com/go-pg/zerochecker v0.2.0 // indirect + github.com/go-viper/mapstructure/v2 v2.2.1 // indirect + github.com/gofrs/uuid v4.4.0+incompatible // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect - github.com/golang/protobuf v1.5.2 // indirect - github.com/golang/snappy v0.0.3 // indirect - github.com/google/flatbuffers v2.0.0+incompatible // indirect - github.com/google/uuid v1.2.0 // indirect - github.com/hashicorp/hcl v1.0.0 // indirect - github.com/holiman/uint256 v1.1.1 // indirect + github.com/golang-jwt/jwt/v4 v4.5.2 // indirect + github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect + github.com/golang/protobuf v1.5.4 // indirect + github.com/golang/snappy v1.0.0 // indirect + github.com/google/flatbuffers v25.2.10+incompatible // indirect + github.com/google/go-querystring v1.1.0 // indirect + github.com/google/pprof 
v0.0.0-20250423184734-337e5dd93bb4 // indirect + github.com/google/s2a-go v0.1.9 // indirect + github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect + github.com/googleapis/gax-go/v2 v2.14.1 // indirect + github.com/gorilla/mux v1.8.1 // indirect + github.com/h2non/bimg v1.1.9 // indirect + github.com/hashicorp/go-secure-stdlib/parseutil v0.2.0 // indirect + github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect + github.com/hashicorp/go-sockaddr v1.0.7 // indirect + github.com/hashicorp/go-version v1.7.0 // indirect + github.com/holiman/uint256 v1.3.2 // indirect + github.com/huandu/xstrings v1.5.0 // indirect + github.com/imdario/mergo v0.3.16 // indirect + github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/jinzhu/inflection v1.0.0 // indirect - github.com/magiconair/properties v1.8.1 // indirect - github.com/mattn/go-colorable v0.1.13 // indirect - github.com/mattn/go-isatty v0.0.17 // indirect - github.com/mitchellh/mapstructure v1.1.2 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect + github.com/kevinburke/go-types v0.0.0-20240719050749-165e75e768f7 // indirect + github.com/kevinburke/rest v0.0.0-20240617045629-3ed0ad3487f0 // indirect + github.com/kevinburke/twilio-go v0.0.0-20240716172313-813590983ccc // indirect + github.com/klauspost/compress v1.18.0 // indirect + github.com/kyokomi/emoji/v2 v2.2.13 // indirect + github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 // indirect + github.com/mattn/go-colorable v0.1.14 // indirect + github.com/mattn/go-isatty v0.0.20 // indirect + github.com/mattn/goveralls v0.0.12 // indirect + github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect + github.com/mitchellh/copystructure v1.2.0 // indirect + github.com/mitchellh/go-homedir v1.1.0 // indirect + github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c // indirect + 
github.com/mitchellh/reflectwalk v1.0.2 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/montanaflynn/stats v0.7.1 // indirect + github.com/nyaruka/phonenumbers v1.6.1 // indirect github.com/oleiade/lane v1.0.1 // indirect - github.com/pelletier/go-toml v1.7.0 // indirect - github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 // indirect - github.com/pkg/errors v0.9.1 // indirect - github.com/pmezard/go-difflib v1.0.0 // indirect + github.com/onflow/crypto v0.25.3 // indirect + github.com/onsi/ginkgo v1.16.5 // indirect + github.com/onsi/gomega v1.19.0 // indirect + github.com/outcaste-io/ristretto v0.2.3 // indirect + github.com/pelletier/go-toml/v2 v2.2.4 // indirect + github.com/petermattis/goid v0.0.0-20250319124200-ccd6737f222a // indirect + github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c // indirect + github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect + github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect + github.com/puzpuzpuz/xsync/v3 v3.4.0 // indirect + github.com/richardartoul/molecule v1.0.1-0.20240531184615-7ca0df43c0b3 // indirect + github.com/robinjoseph08/go-pg-migrations/v3 v3.1.0 // indirect + github.com/russross/blackfriday/v2 v2.1.0 // indirect + github.com/ryanuber/go-glob v1.0.0 // indirect + github.com/sagikazarmark/locafero v0.9.0 // indirect + github.com/secure-systems-lab/go-securesystemslib v0.9.0 // indirect + github.com/sendgrid/rest v2.6.9+incompatible // indirect + github.com/sendgrid/sendgrid-go v3.16.0+incompatible // indirect github.com/shibukawa/configdir v0.0.0-20170330084843-e180dbdc8da0 // indirect - github.com/spf13/afero v1.1.2 // indirect - github.com/spf13/cast v1.3.0 // indirect - github.com/spf13/jwalterweatherman v1.0.0 // indirect - 
github.com/spf13/pflag v1.0.5 // indirect - github.com/subosito/gotenv v1.2.0 // indirect + github.com/shirou/gopsutil/v4 v4.25.3 // indirect + github.com/shopspring/decimal v1.4.0 // indirect + github.com/sourcegraph/conc v0.3.0 // indirect + github.com/spaolacci/murmur3 v1.1.0 // indirect + github.com/spf13/afero v1.14.0 // indirect + github.com/spf13/cast v1.7.1 // indirect + github.com/spf13/cobra v1.9.1 // indirect + github.com/spf13/pflag v1.0.6 // indirect + github.com/spf13/viper v1.20.1 // indirect + github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect + github.com/subosito/gotenv v1.6.0 // indirect + github.com/tinylib/msgp v1.2.5 // indirect + github.com/tklauser/go-sysconf v0.3.15 // indirect + github.com/tklauser/numcpus v0.10.0 // indirect github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc // indirect + github.com/trivago/tgo v1.0.7 // indirect + github.com/tsuyoshiwada/go-gitcmd v0.0.0-20180205145712-5f1f5f9475df // indirect + github.com/ttacon/builder v0.0.0-20170518171403-c099f663e1c2 // indirect + github.com/ttacon/libphonenumber v1.2.1 // indirect github.com/tyler-smith/go-bip39 v1.1.0 // indirect - github.com/unrolled/secure v1.0.8 // indirect + github.com/unrolled/secure v1.17.0 // indirect + github.com/uptrace/bun v1.2.3 // indirect + github.com/uptrace/bun/dialect/pgdialect v1.2.3 // indirect + github.com/uptrace/bun/driver/pgdriver v1.2.3 // indirect + github.com/uptrace/bun/extra/bundebug v1.2.3 // indirect + github.com/urfave/cli/v2 v2.27.6 // indirect github.com/vmihailenco/bufpool v0.1.11 // indirect - github.com/vmihailenco/msgpack/v5 v5.3.5 // indirect + github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect github.com/vmihailenco/tagparser v0.1.2 // indirect github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect - go.opencensus.io v0.23.0 // indirect - golang.org/x/crypto v0.6.0 // indirect - golang.org/x/net v0.6.0 // indirect - golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect - golang.org/x/sys v0.5.0 // 
indirect - golang.org/x/text v0.7.0 // indirect - google.golang.org/protobuf v1.26.0 // indirect - gopkg.in/ini.v1 v1.51.0 // indirect + github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect + github.com/yusufpapurcu/wmi v1.2.4 // indirect + github.com/zeebo/errs v1.4.0 // indirect + go.opencensus.io v0.24.0 // indirect + go.opentelemetry.io/auto/sdk v1.1.0 // indirect + go.opentelemetry.io/collector/component v1.30.0 // indirect + go.opentelemetry.io/collector/featuregate v1.30.0 // indirect + go.opentelemetry.io/collector/internal/telemetry v0.124.0 // indirect + go.opentelemetry.io/collector/pdata v1.30.0 // indirect + go.opentelemetry.io/collector/pdata/pprofile v0.124.0 // indirect + go.opentelemetry.io/collector/semconv v0.124.0 // indirect + go.opentelemetry.io/contrib/bridges/otelzap v0.10.0 // indirect + go.opentelemetry.io/contrib/detectors/gcp v1.35.0 // indirect + go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 // indirect + go.opentelemetry.io/otel v1.35.0 // indirect + go.opentelemetry.io/otel/log v0.11.0 // indirect + go.opentelemetry.io/otel/metric v1.35.0 // indirect + go.opentelemetry.io/otel/sdk v1.35.0 // indirect + go.opentelemetry.io/otel/sdk/metric v1.35.0 // indirect + go.opentelemetry.io/otel/trace v1.35.0 // indirect + go.uber.org/atomic v1.11.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + go.uber.org/zap v1.27.0 // indirect + golang.org/x/crypto v0.38.0 // indirect + golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 // indirect + golang.org/x/image v0.27.0 // indirect + golang.org/x/mod v0.24.0 // indirect + golang.org/x/net v0.39.0 // indirect + golang.org/x/oauth2 v0.30.0 // indirect + golang.org/x/sync v0.14.0 // indirect + golang.org/x/sys v0.33.0 // indirect + golang.org/x/term v0.32.0 // indirect + golang.org/x/text v0.25.0 // indirect + golang.org/x/time v0.11.0 // indirect + 
golang.org/x/tools v0.32.0 // indirect + golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect + gonum.org/v1/gonum v0.16.0 // indirect + google.golang.org/api v0.232.0 // indirect + google.golang.org/genproto v0.0.0-20250428153025-10db94c68c34 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20250505200425-f936aa4a68b2 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20250505200425-f936aa4a68b2 // indirect + google.golang.org/grpc v1.72.0 // indirect + google.golang.org/protobuf v1.36.6 // indirect + gopkg.in/DataDog/dd-trace-go.v1 v1.72.2 // indirect + gopkg.in/ini.v1 v1.67.0 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect - mellium.im/sasl v0.3.1 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + mellium.im/sasl v0.3.2 // indirect ) diff --git a/go.sum b/go.sum new file mode 100644 index 0000000..df0a24c --- /dev/null +++ b/go.sum @@ -0,0 +1,872 @@ +cel.dev/expr v0.23.1 h1:K4KOtPCJQjVggkARsjG9RWXP6O4R73aHeJMa/dmCQQg= +cel.dev/expr v0.23.1/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw= +cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= +cloud.google.com/go v0.121.0 h1:pgfwva8nGw7vivjZiRfrmglGWiCJBP+0OmDpenG/Fwg= +cloud.google.com/go v0.121.0/go.mod h1:rS7Kytwheu/y9buoDmu5EIpMMCI4Mb8ND4aeN4Vwj7Q= +cloud.google.com/go/auth v0.16.1 h1:XrXauHMd30LhQYVRHLGvJiYeczweKQXZxsTbV9TiguU= +cloud.google.com/go/auth v0.16.1/go.mod h1:1howDHJ5IETh/LwYs3ZxvlkXF48aSqqJUM+5o02dNOI= +cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc= +cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c= +cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I= +cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg= +cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8= +cloud.google.com/go/iam v1.5.2/go.mod 
h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE= +cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc= +cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA= +cloud.google.com/go/longrunning v0.6.7 h1:IGtfDWHhQCgCjwQjV9iiLnUta9LBCo8R9QmAFsS/PrE= +cloud.google.com/go/longrunning v0.6.7/go.mod h1:EAFV3IZAKmM56TyiE6VAP3VoTzhZzySwI/YI1s/nRsY= +cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM= +cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U= +cloud.google.com/go/storage v1.53.0/go.mod h1:7/eO2a/srr9ImZW9k5uufcNahT2+fPb8w5it1i5boaA= +cloud.google.com/go/storage v1.54.0 h1:Du3XEyliAiftfyW0bwfdppm2MMLdpVAfiIg4T2nAI+0= +cloud.google.com/go/storage v1.54.0/go.mod h1:hIi9Boe8cHxTyaeqh7KMMwKg088VblFK46C2x/BWaZE= +cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4= +cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI= +dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s= +dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= +github.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkkhIiSjQ= +github.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo= +github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= +github.com/DataDog/appsec-internal-go v1.11.2 h1:Q00pPMQzqMIw7jT2ObaORIxBzSly+deS0Ely9OZ/Bj0= +github.com/DataDog/appsec-internal-go v1.11.2/go.mod h1:9YppRCpElfGX+emXOKruShFYsdPq7WEPq/Fen4tYYpk= +github.com/DataDog/datadog-agent/comp/core/tagger/origindetection v0.64.3 h1:FNklG6UnKVA6uSWoMcNPsJJMfhlbp6iJxMFj7zZZYuM= +github.com/DataDog/datadog-agent/comp/core/tagger/origindetection v0.64.3/go.mod h1:lzCtnMSGZm/3RMk5RBRW/6IuK1TNbDXx1ttHTxN5Ykc= +github.com/DataDog/datadog-agent/pkg/obfuscate v0.64.3 
h1:hgf4Yp2MRnNnlstRh+He2pixCOwLKGQmrWLRHwM7LGI= +github.com/DataDog/datadog-agent/pkg/obfuscate v0.64.3/go.mod h1:izbemZjqzBn9upkZj8SyT9igSGPMALaQYgswJ0408vY= +github.com/DataDog/datadog-agent/pkg/proto v0.64.3 h1:ox1oqM50cXfVEK2CpVvMyodyF5PTTemt1PkJGxkG2d8= +github.com/DataDog/datadog-agent/pkg/proto v0.64.3/go.mod h1:q324yHcBN5hIeCU8eoinM7lP9c7MOA2FTj7oeWAl3Pc= +github.com/DataDog/datadog-agent/pkg/remoteconfig/state v0.64.3 h1:0Y7YoUzDbXjA0RnrH11QwOn3LFD+oW1/L5et0qEvyHY= +github.com/DataDog/datadog-agent/pkg/remoteconfig/state v0.64.3/go.mod h1:1AAhFoEuoXs8jfpj7EiGW6lsqvCYgQc0B0pRpYAPEW4= +github.com/DataDog/datadog-agent/pkg/trace v0.64.3 h1:zroFsJasdLhltdHKkr4MP2gzgpBVpffevlcYfDDIZnI= +github.com/DataDog/datadog-agent/pkg/trace v0.64.3/go.mod h1:RIHqRISquXGlmIGROSWp3pzwPETBIvhQkmj2BlCj4qY= +github.com/DataDog/datadog-agent/pkg/util/log v0.64.3 h1:JpT79bpG8nKNWSPsTWLxAZ2I8an7Dav3ArfH1p2yi+Q= +github.com/DataDog/datadog-agent/pkg/util/log v0.64.3/go.mod h1:T6qw2pFyCopAFWrk6Vp0QmCTSusCbU42NlVPSUIZkv8= +github.com/DataDog/datadog-agent/pkg/util/scrubber v0.64.3 h1:v9iVXXvHamynoEUoFeeXS/CvqnItQe7ZgpwjIFxLeV4= +github.com/DataDog/datadog-agent/pkg/util/scrubber v0.64.3/go.mod h1:W0q1265hKJowCAMm4mMd3XTXtzL0V7aLBAqY64sZeec= +github.com/DataDog/datadog-agent/pkg/version v0.64.3 h1:mgzZpyGLyaiiid2hvha1/Vlj0LfSJn4vXC8+YaYGc9E= +github.com/DataDog/datadog-agent/pkg/version v0.64.3/go.mod h1:DgOVsfSRaNV4GZNl/qgoZjG3hJjoYUNWPPhbfTfTqtY= +github.com/DataDog/datadog-go/v5 v5.6.0 h1:2oCLxjF/4htd55piM75baflj/KoE6VYS7alEUqFvRDw= +github.com/DataDog/datadog-go/v5 v5.6.0/go.mod h1:K9kcYBlxkcPP8tvvjZZKs/m1edNAUFzBbdpTUKfCsuw= +github.com/DataDog/go-libddwaf/v3 v3.5.4 h1:cLV5lmGhrUBnHG50EUXdqPQAlJdVCp9n3aQ5bDWJEAg= +github.com/DataDog/go-libddwaf/v3 v3.5.4/go.mod h1:HoLUHdj0NybsPBth/UppTcg8/DKA4g+AXuk8cZ6nuoo= +github.com/DataDog/go-runtime-metrics-internal v0.0.4-0.20241206090539-a14610dc22b6 h1:bpitH5JbjBhfcTG+H2RkkiUXpYa8xSuIPnyNtTaSPog= 
+github.com/DataDog/go-runtime-metrics-internal v0.0.4-0.20241206090539-a14610dc22b6/go.mod h1:quaQJ+wPN41xEC458FCpTwyROZm3MzmTZ8q8XOXQiPs= +github.com/DataDog/go-sqllexer v0.1.6 h1:skEXpWEVCpeZFIiydoIa2f2rf+ymNpjiIMqpW4w3YAk= +github.com/DataDog/go-sqllexer v0.1.6/go.mod h1:GGpo1h9/BVSN+6NJKaEcJ9Jn44Hqc63Rakeb+24Mjgo= +github.com/DataDog/go-tuf v1.1.0-0.5.2 h1:4CagiIekonLSfL8GMHRHcHudo1fQnxELS9g4tiAupQ4= +github.com/DataDog/go-tuf v1.1.0-0.5.2/go.mod h1:zBcq6f654iVqmkk8n2Cx81E1JnNTMOAx1UEO/wZR+P0= +github.com/DataDog/gostackparse v0.7.0 h1:i7dLkXHvYzHV308hnkvVGDL3BR4FWl7IsXNPz/IGQh4= +github.com/DataDog/gostackparse v0.7.0/go.mod h1:lTfqcJKqS9KnXQGnyQMCugq3u1FP6UZMfWR0aitKFMM= +github.com/DataDog/opentelemetry-mapping-go/pkg/otlp/attributes v0.27.0 h1:5US5SqqhfkZkg/E64uvn7YmeTwnudJHtlPEH/LOT99w= +github.com/DataDog/opentelemetry-mapping-go/pkg/otlp/attributes v0.27.0/go.mod h1:VRo4D6rj92AExpVBlq3Gcuol9Nm1bber12KyxRjKGWw= +github.com/DataDog/sketches-go v1.4.7 h1:eHs5/0i2Sdf20Zkj0udVFWuCrXGRFig2Dcfm5rtcTxc= +github.com/DataDog/sketches-go v1.4.7/go.mod h1:eAmQ/EBmtSO+nQp7IZMZVRPT4BQTmIc5RZQ+deGlTPM= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 h1:ErKg/3iS1AKcTkf3yixlZ54f9U1rljCkQyEXWUnIUxc= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0/go.mod h1:yAZHSGnqScoU556rBOVkwLze6WP5N+U11RHuWaGVxwY= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 h1:fYE9p3esPxA/C0rQ0AHhP0drtPXDRhaWiwg1DPqO7IU= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0/go.mod h1:BnBReJLvVYx2CS/UHOgVz2BXKXD9wsQPxZug20nZhd0= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0 h1:OqVGm6Ei3x5+yZmSJG1Mh2NwHvpVmZ08CB5qJhT9Nuk= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0/go.mod h1:SZiPHWGOOk3bl8tkevxkoiwPgsIl6CwrWcbwjfHZpdM= 
+github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 h1:6/0iUd0xrnX7qt+mLNRwg5c0PGv8wpE8K90ryANQwMI= +github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0/go.mod h1:otE2jQekW/PqXk1Awf5lmfokJx4uwuqcj1ab5SpGeW0= +github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= +github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= +github.com/Masterminds/semver/v3 v3.3.1 h1:QtNSWtVZ3nBfk8mAOu/B6v7FMJ+NHTIgUPi7rj+4nv4= +github.com/Masterminds/semver/v3 v3.3.1/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM= +github.com/Masterminds/sprig/v3 v3.3.0 h1:mQh0Yrg1XPo6vjYXgtf5OtijNAKJRNcTdOOGZe3tPhs= +github.com/Masterminds/sprig/v3 v3.3.0/go.mod h1:Zy1iXRYNqNLUolqCpL4uhk6SHUMAOSCzdgBfDb35Lz0= +github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84= +github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY= +github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= +github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s= +github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w= +github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE= +github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= +github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII= +github.com/andygrunwald/go-jira v1.16.0 h1:PU7C7Fkk5L96JvPc6vDVIrd99vdPnYudHu4ju2c2ikQ= +github.com/andygrunwald/go-jira v1.16.0/go.mod h1:UQH4IBVxIYWbgagc0LF/k9FRs9xjIiQ8hIcC6HfLwFU= +github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= +github.com/brianvoe/gofakeit v3.18.0+incompatible 
h1:wDOmHc9DLG4nRjUVVaxA+CEglKOW72Y5+4WNxUIkjM8= +github.com/brianvoe/gofakeit v3.18.0+incompatible/go.mod h1:kfwdRA90vvNhPutZWfH7WPaDzUjz+CZFqG+rPkOjGOc= +github.com/btcsuite/btcd v0.20.1-beta/go.mod h1:wVuoA8VJLEcwgqHBwHmzLRazpKxTv13Px/pDuV7OomQ= +github.com/btcsuite/btcd v0.22.0-beta.0.20220111032746-97732e52810c/go.mod h1:tjmYdS6MLJ5/s0Fj4DbLgSbDHbEqLJrtnHecBFkdz5M= +github.com/btcsuite/btcd v0.23.5-0.20231215221805-96c9fd8078fd/go.mod h1:nm3Bko6zh6bWP60UxwoT5LzdGJsQJaPo6HjduXq9p6A= +github.com/btcsuite/btcd v0.24.2 h1:aLmxPguqxza+4ag8R1I2nnJjSu2iFn/kqtHTIImswcY= +github.com/btcsuite/btcd v0.24.2/go.mod h1:5C8ChTkl5ejr3WHj8tkQSCmydiMEPB0ZhQhehpq7Dgg= +github.com/btcsuite/btcd/btcec/v2 v2.1.0/go.mod h1:2VzYrv4Gm4apmbVVsSq5bqf1Ec8v56E48Vt0Y/umPgA= +github.com/btcsuite/btcd/btcec/v2 v2.1.3/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE= +github.com/btcsuite/btcd/btcec/v2 v2.3.4 h1:3EJjcN70HCu/mwqlUsGK8GcNVyLVxFDlWurTXGPFfiQ= +github.com/btcsuite/btcd/btcec/v2 v2.3.4/go.mod h1:zYzJ8etWJQIv1Ogk7OzpWjowwOdXY1W/17j2MW85J04= +github.com/btcsuite/btcd/btcutil v1.0.0/go.mod h1:Uoxwv0pqYWhD//tfTiipkxNfdhG9UrLwaeswfjfdF0A= +github.com/btcsuite/btcd/btcutil v1.1.0/go.mod h1:5OapHB7A2hBBWLm48mmw4MOHNJCcUBTwmWH/0Jn8VHE= +github.com/btcsuite/btcd/btcutil v1.1.5/go.mod h1:PSZZ4UitpLBWzxGd5VGOrLnmOjtPP/a6HaFo12zMs00= +github.com/btcsuite/btcd/btcutil v1.1.6 h1:zFL2+c3Lb9gEgqKNzowKUPQNb8jV7v5Oaodi/AYFd6c= +github.com/btcsuite/btcd/btcutil v1.1.6/go.mod h1:9dFymx8HpuLqBnsPELrImQeTQfKBQqzqGbbV3jK55aE= +github.com/btcsuite/btcd/chaincfg/chainhash v1.0.0/go.mod h1:7SFka0XMvUgj3hfZtydOrQY2mwhPclbT2snogU7SQQc= +github.com/btcsuite/btcd/chaincfg/chainhash v1.0.1/go.mod h1:7SFka0XMvUgj3hfZtydOrQY2mwhPclbT2snogU7SQQc= +github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0 h1:59Kx4K6lzOW5w6nFlA0v5+lk/6sjybR934QNHSJZPTQ= +github.com/btcsuite/btcd/chaincfg/chainhash v1.1.0/go.mod h1:7SFka0XMvUgj3hfZtydOrQY2mwhPclbT2snogU7SQQc= +github.com/btcsuite/btclog 
v0.0.0-20170628155309-84c8d2346e9f/go.mod h1:TdznJufoqS23FtqVCzL0ZqgP5MqXbb4fg/WgDys70nA= +github.com/btcsuite/btclog v0.0.0-20241017175713-3428138b75c7 h1:Sy/7AwD/XuTsfXHMvcmjF8ZvAX0qR2TMcDbBANuMTR4= +github.com/btcsuite/btclog v0.0.0-20241017175713-3428138b75c7/go.mod h1:w7xnGOhwT3lmrS4H3b/D1XAXxvh+tbhUm8xeHN2y3TQ= +github.com/btcsuite/btcutil v0.0.0-20190425235716-9e5f4b9a998d/go.mod h1:+5NJ2+qvTyV9exUAL/rxXi3DcLg2Ts+ymUAY5y4NvMg= +github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd/go.mod h1:HHNXQzUsZCxOoE+CPiyCTO6x34Zs86zZUiwtpXoGdtg= +github.com/btcsuite/goleveldb v0.0.0-20160330041536-7834afc9e8cd/go.mod h1:F+uVaaLLH7j4eDXPRvw78tMflu7Ie2bzYOH4Y8rRKBY= +github.com/btcsuite/goleveldb v1.0.0/go.mod h1:QiK9vBlgftBg6rWQIj6wFzbPfRjiykIEhBH4obrXJ/I= +github.com/btcsuite/snappy-go v0.0.0-20151229074030-0bdef8d06723/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc= +github.com/btcsuite/snappy-go v1.0.0/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc= +github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792/go.mod h1:ghJtEyQwv5/p4Mg4C0fgbePVuGr935/5ddU9Z3TmDRY= +github.com/btcsuite/winsvc v1.0.0/go.mod h1:jsenWakMcC0zFBFurPLEAyrnc/teJEM1O46fmI40EZs= +github.com/bwesterb/go-ristretto v1.2.3 h1:1w53tCkGhCQ5djbat3+MH0BAQ5Kfgbt56UZQ/JMzngw= +github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= +github.com/bxcodec/faker v2.0.1+incompatible h1:P0KUpUw5w6WJXwrPfv35oc91i4d8nf40Nwln+M/+faA= +github.com/bxcodec/faker v2.0.1+incompatible/go.mod h1:BNzfpVdTwnFJ6GtfYTcQu6l6rHShT+veBxNCnjCx5XM= +github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= +github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= +github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= +github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/cespare/xxhash/v2 v2.3.0 
h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575 h1:kHaBemcxl8o/pQ5VM1c8PVE1PubbNx3mjUr09OqWGCs= +github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575/go.mod h1:9d6lWj8KzO/fd/NrVaLscBKmPigpZpn5YawRPw+e3Yo= +github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= +github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0= +github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs= +github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= +github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f h1:C5bqEmzEPLsHm9Mv73lSE9e9bKV23aB1vxOsmZrkl3k= +github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8= +github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= +github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk= +github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= +github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4= +github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec= +github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE= +github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo= +github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g= +github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI= +github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4= 
+github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= +github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/decred/dcrd/crypto/blake256 v1.0.0/go.mod h1:sQl2p6Y26YV+ZOcSTP6thNdn47hh8kt6rqSlvmrXFAc= +github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8= +github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo= +github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs= +github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc= +github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40= +github.com/decred/dcrd/lru v1.0.0/go.mod h1:mxKOwFd7lFjN2GZYsiz/ecgqR6kkYAl+0pz0tEMk218= +github.com/deso-protocol/go-deadlock v1.0.1 h1:lRziC+SaEU0X0fjoprwDlPWxVHCxRIy5LgT8EcD1av8= +github.com/deso-protocol/go-deadlock v1.0.1/go.mod h1:7TistpkoQ9QHKLRMQSuX2wQk9PoB/BEj2jVrhov1HK0= +github.com/deso-protocol/go-merkle-tree v1.0.0 h1:9zkI5dQsITYy77s4kbTGPQmZnhQ+LsH/kRdL5l/Yzvg= +github.com/deso-protocol/go-merkle-tree v1.0.0/go.mod h1:V/vbg/maaNv6G7zf9VVs645nLFx/jsO2L/awFB/S/ZU= +github.com/deso-protocol/uint256 v1.3.2 h1:nHwqfdCKgWimWLJbiN/9DV95qDJ5lZcf8n5cAHbdG6o= +github.com/deso-protocol/uint256 v1.3.2/go.mod h1:Wq2bibbApz3TsiL+VPUnzr+UkhG4eBeQ0DpbQcjQYcA= +github.com/dgraph-io/badger/v3 v3.2103.5 h1:ylPa6qzbjYRQMU6jokoj4wzcaweHylt//CH0AKt0akg= +github.com/dgraph-io/badger/v3 v3.2103.5/go.mod h1:4MPiseMeDQ3FNCYwRbbcBOGJLf5jsE0PPFzRiKjtcdw= 
+github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA= +github.com/dgraph-io/ristretto v0.2.0 h1:XAfl+7cmoUDWW/2Lx8TGZQjjxIQ2Ley9DSf52dru4WE= +github.com/dgraph-io/ristretto v0.2.0/go.mod h1:8uBHCU/PBV4Ag0CJrP47b9Ofby5dqWNh4FicAdoqFNU= +github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= +github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa512G+w+Pxci9hJPB8oMnkcP3iZF38= +github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw= +github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= +github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY= +github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto= +github.com/eapache/queue/v2 v2.0.0-20230407133247-75960ed334e4 h1:8EXxF+tCLqaVk8AOC29zl2mnhQjwyLxxOTuhUazWRsg= +github.com/eapache/queue/v2 v2.0.0-20230407133247-75960ed334e4/go.mod h1:I5sHm0Y0T1u5YjlyqC5GVArM7aNZRUYtTjmJ8mPJFds= +github.com/ebitengine/purego v0.8.2 h1:jPPGWs2sZ1UgOSgD2bClL0MJIqu58nOmIcBuXr62z1I= +github.com/ebitengine/purego v0.8.2/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ= +github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc= +github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ= +github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= +github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= +github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= +github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M= +github.com/envoyproxy/go-control-plane v0.13.4/go.mod 
h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA= +github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A= +github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw= +github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI= +github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4= +github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= +github.com/envoyproxy/protoc-gen-validate v1.2.1 h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8= +github.com/envoyproxy/protoc-gen-validate v1.2.1/go.mod h1:d/C80l/jxXLdfEIhX1W2TmLfsJ31lvEjwamM4DxlWXU= +github.com/ethereum/go-ethereum v1.15.11 h1:JK73WKeu0WC0O1eyX+mdQAVHUV+UR1a9VB/domDngBU= +github.com/ethereum/go-ethereum v1.15.11/go.mod h1:mf8YiHIb0GR4x4TipcvBUPxJLw1mFdmxzoDi11sDRoI= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo= +github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= +github.com/fergusstrange/embedded-postgres v1.19.0 h1:NqDufJHeA03U7biULlPHZ0pZ10/mDOMKPILEpT50Fyk= +github.com/fergusstrange/embedded-postgres v1.19.0/go.mod h1:0B+3bPsMvcNgR9nN+bdM2x9YaNYDnf3ksUqYp1OAub0= +github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8= +github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0= +github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= +github.com/fsnotify/fsnotify 
v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= +github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k= +github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0= +github.com/gernest/mention v2.0.0+incompatible h1:pTXnujBC6tqlw5awDkLojq92TXbt0F+4+8FBlQC+di8= +github.com/gernest/mention v2.0.0+incompatible/go.mod h1:/z3Hb+4gaPF+vL8og/lj6Au5j8hh5EfU7/EknmDUuO4= +github.com/git-chglog/git-chglog v0.15.4 h1:BwPDj7AghQTfpXO+UxG4mZM5MUTe9wfDuenF3jpyNf0= +github.com/git-chglog/git-chglog v0.15.4/go.mod h1:BmWdTpqBVzPjKNrBTZGcQCrQV9zq6gFKurhWNnJbYDA= +github.com/go-jose/go-jose/v4 v4.1.0 h1:cYSYxd3pw5zd2FSXk2vGdn9igQU2PS8MuxrCOCl0FdY= +github.com/go-jose/go-jose/v4 v4.1.0/go.mod h1:GG/vqmYm3Von2nYiB2vGTXzdoNKE5tix5tuc6iAd+sw= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= +github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= +github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE= +github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78= +github.com/go-pg/pg/v10 v10.14.0 h1:giXuPsJaWjzwzFJTxy39eBgGE44jpqH1jwv0uI3kBUU= +github.com/go-pg/pg/v10 v10.14.0/go.mod h1:6kizZh54FveJxw9XZdNg07x7DDBWNsQrSiJS04MLwO8= +github.com/go-pg/zerochecker v0.2.0 h1:pp7f72c3DobMWOb2ErtZsnrPaSvHd2W4o9//8HtF4mU= +github.com/go-pg/zerochecker v0.2.0/go.mod h1:NJZ4wKL0NmTtz0GKCoJ8kym6Xn/EQzXRl2OnAe7MmDo= +github.com/go-stack/stack v1.8.1/go.mod h1:dcoOX6HbPZSZptuspn9bctJ+N/CnF5gGygcUP3XYfe4= +github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod 
h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE= +github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIxtHqx8aGss= +github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM= +github.com/gofrs/uuid v4.4.0+incompatible h1:3qXRTX8/NbyulANqlc0lchS1gqAVxRgsuW1YrTJupqA= +github.com/gofrs/uuid v4.4.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI= +github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= +github.com/golang/glog v1.2.5 h1:DrW6hGnjIhtvhOIiAKT6Psh/Kd/ldepEa81DKeiRJ5I= +github.com/golang/glog v1.2.5/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w= +github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ= +github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw= +github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= +github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs= +github.com/golang/mock v1.7.0-rc.1 h1:YojYx61/OLFsiv6Rw1Z96LpldJIy31o+UHmwAUMJ6/U= +github.com/golang/mock v1.7.0-rc.1/go.mod h1:s42URUywIqd+OcERslBJvOjepvNymP31m3q8d/GkuRs= +github.com/golang/protobuf 
v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= +github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= +github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= +github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= +github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= +github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= +github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= +github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= +github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= +github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs= +github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/google/flatbuffers v1.12.1/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8= +github.com/google/flatbuffers v25.2.10+incompatible h1:F3vclr7C3HpB1k9mxCGRMXq6FdUalZ6H/pNX4FP1v0Q= +github.com/google/flatbuffers v25.2.10+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8= +github.com/google/go-cmp v0.2.0/go.mod 
h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= +github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= +github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8= +github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU= +github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= +github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc= +github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0= +github.com/google/pprof v0.0.0-20250423184734-337e5dd93bb4 h1:gD0vax+4I+mAj+jEChEf25Ia07Jq7kYOFO5PPhAxFl4= +github.com/google/pprof v0.0.0-20250423184734-337e5dd93bb4/go.mod h1:5hDyRhoBCxViHszMt12TnOpEI4VVi+U8Gm9iphldiMA= +github.com/google/s2a-go v0.1.9 
h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0= +github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM= +github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4= +github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA= +github.com/googleapis/gax-go/v2 v2.14.1 h1:hb0FFeiPaQskmvakKu5EbCbpntQn48jyHuvrkurSS/Q= +github.com/googleapis/gax-go/v2 v2.14.1/go.mod h1:Hb/NubMaVM88SrNkvl8X/o8XWwDJEPqouaLeN2IUxoA= +github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY= +github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= +github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= +github.com/h2non/bimg v1.1.9 h1:WH20Nxko9l/HFm4kZCA3Phbgu2cbHvYzxwxn9YROEGg= +github.com/h2non/bimg v1.1.9/go.mod h1:R3+UiYwkK4rQl6KVFTOFJHitgLbZXBZNFh2cv3AEbp8= +github.com/hashicorp/go-secure-stdlib/parseutil v0.2.0 h1:U+kC2dOhMFQctRfhK0gRctKAPTloZdMU5ZJxaesJ/VM= +github.com/hashicorp/go-secure-stdlib/parseutil v0.2.0/go.mod h1:Ll013mhdmsVDuoIXVfBtvgGJsXDYkTw1kooNcoCXuE0= +github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 h1:kes8mmyCpxJsI7FTwtzRqEy9CdjCtrXrXGuOpxEA7Ts= +github.com/hashicorp/go-secure-stdlib/strutil v0.1.2/go.mod h1:Gou2R9+il93BqX25LAKCLuM+y9U2T4hlwvT1yprcna4= +github.com/hashicorp/go-sockaddr v1.0.7 h1:G+pTkSO01HpR5qCxg7lxfsFEZaG+C0VssTy/9dbT+Fw= +github.com/hashicorp/go-sockaddr v1.0.7/go.mod h1:FZQbEYa1pxkQ7WLpyXJ6cbjpT8q0YgQaK/JakXqGyWw= +github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY= +github.com/hashicorp/go-version v1.7.0/go.mod 
h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k= +github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM= +github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ= +github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec h1:qv2VnGeEQHchGaZ/u7lxST/RaJw+cv273q79D81Xbog= +github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68= +github.com/holiman/uint256 v1.3.2 h1:a9EgMPSC1AAaj1SZL5zIQD3WbwTuHrMGOerLjGmM/TA= +github.com/holiman/uint256 v1.3.2/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E= +github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= +github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI= +github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE= +github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4= +github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= +github.com/inconshreveable/log15 v3.0.0-testing.5+incompatible/go.mod h1:cOaXtrgN4ScfRrD9Bre7U1thNq5RtJ8ZoP4iXVGRj6o= +github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8= +github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= +github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= +github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= +github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI= +github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E= +github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc= +github.com/jrick/logrotate v1.0.0/go.mod 
h1:LNinyqDIJnpAur+b8yyulnQw/wDuN1+BYKlTRt3OuAQ= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs= +github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8= +github.com/kevinburke/go-types v0.0.0-20240719050749-165e75e768f7 h1:36PMhfw/I1YYAjOOuA66ll5X7NJ8v3cJEqsAxiMv7bE= +github.com/kevinburke/go-types v0.0.0-20240719050749-165e75e768f7/go.mod h1:8tQOif9eUJLpDnvfDcGtesfv6VpL2UvDbW4l8kXnSDE= +github.com/kevinburke/rest v0.0.0-20240617045629-3ed0ad3487f0 h1:qksAIHu0d4vkA0rIePBn+K9eO33RxkUMiceFn3T7lO4= +github.com/kevinburke/rest v0.0.0-20240617045629-3ed0ad3487f0/go.mod h1:dcLMT8KO9krIMJQ4578Lex1Su6ewuJUqEDeQ1nTORug= +github.com/kevinburke/twilio-go v0.0.0-20240716172313-813590983ccc h1:cDRzcR6IuXvxkrXA1GY1RGR7bfUzDjvI9DC1xs+V1eI= +github.com/kevinburke/twilio-go v0.0.0-20240716172313-813590983ccc/go.mod h1:G52lJ9gSqbkLzwqB9e3sBJ/nhvYswVIqwZcTw4NiXdY= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4= +github.com/klauspost/compress v1.12.3/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg= +github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo= +github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= 
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= +github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kyokomi/emoji/v2 v2.2.13 h1:GhTfQa67venUUvmleTNFnb+bi7S3aocF7ZCXU9fSO7U= +github.com/kyokomi/emoji/v2 v2.2.13/go.mod h1:JUcn42DTdsXJo1SWanHh4HKDEyPaR5CqkmoirZZP9qE= +github.com/lib/pq v1.10.4 h1:SO9z7FRPzA03QhHKJrH5BXA6HU1rS4V2nIVrrNC1iYk= +github.com/lib/pq v1.10.4/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= +github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 h1:PpXWgLPs+Fqr325bN2FD2ISlRRztXibcX6e8f5FR5Dc= +github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35/go.mod h1:autxFIvghDt3jPTLoqZ9OZ7s9qTGNAWmYCjVFWPX/zg= +github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= +github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE= +github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg= +github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE= +github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8= +github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= +github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM= +github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= +github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= +github.com/mattn/goveralls v0.0.12 h1:PEEeF0k1SsTjOBQ8FOmrOAoCu4ytuMaWCnWe94zxbCg= +github.com/mattn/goveralls v0.0.12/go.mod h1:44ImGEUfmqH8bBtaMrYKsM65LXfNLWmwaxFGjZwgMSQ= +github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= 
+github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI= +github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= +github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw= +github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s= +github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= +github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= +github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= +github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c h1:cqn374mizHuIWj+OSJCajGr/phAmuMug9qIX3l9CflE= +github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= +github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ= +github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE= +github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow= +github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A= +github.com/nxadm/tail v1.4.8 
h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE= +github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU= +github.com/nyaruka/phonenumbers v1.6.1 h1:XAJcTdYow16VrVKfglznMpJZz8KMJoMjx/91sX+K940= +github.com/nyaruka/phonenumbers v1.6.1/go.mod h1:7gjs+Lchqm49adhAKB5cdcng5ZXgt6x7Jgvi0ZorUtU= +github.com/oleiade/lane v1.0.1 h1:hXofkn7GEOubzTwNpeL9MaNy8WxolCYb9cInAIeqShU= +github.com/oleiade/lane v1.0.1/go.mod h1:IyTkraa4maLfjq/GmHR+Dxb4kCMtEGeb+qmhlrQ5Mk4= +github.com/onflow/crypto v0.25.3 h1:XQ3HtLsw8h1+pBN+NQ1JYM9mS2mVXTyg55OldaAIF7U= +github.com/onflow/crypto v0.25.3/go.mod h1:+1igaXiK6Tjm9wQOBD1EGwW7bYWMUGKtwKJ/2QL/OWs= +github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= +github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= +github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk= +github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY= +github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE= +github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU= +github.com/onsi/gomega v1.4.1/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA= +github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= +github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY= +github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo= +github.com/onsi/gomega v1.19.0 h1:4ieX6qQjPP/BfC3mpsAtIGGlxTWPeA3Inl/7DtXw1tw= +github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro= +github.com/open-telemetry/opentelemetry-collector-contrib/pkg/sampling v0.120.1 h1:lK/3zr73guK9apbXTcnDnYrC0YCQ25V3CIULYz3k2xU= +github.com/open-telemetry/opentelemetry-collector-contrib/pkg/sampling v0.120.1/go.mod h1:01TvyaK8x640crO2iFwW/6CFCZgNsOvOGH3B5J239m0= 
+github.com/open-telemetry/opentelemetry-collector-contrib/processor/probabilisticsamplerprocessor v0.120.1 h1:TCyOus9tym82PD1VYtthLKMVMlVyRwtDI4ck4SR2+Ok= +github.com/open-telemetry/opentelemetry-collector-contrib/processor/probabilisticsamplerprocessor v0.120.1/go.mod h1:Z/S1brD5gU2Ntht/bHxBVnGxXKTvZDr0dNv/riUzPmY= +github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs= +github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc= +github.com/outcaste-io/ristretto v0.2.3 h1:AK4zt/fJ76kjlYObOeNwh4T3asEuaCmp26pOvUOL9w0= +github.com/outcaste-io/ristretto v0.2.3/go.mod h1:W8HywhmtlopSB1jeMg3JtdIhf+DYkLAr0VN/s4+MHac= +github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4= +github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY= +github.com/petermattis/goid v0.0.0-20240813172612-4fcff4a6cae7/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4= +github.com/petermattis/goid v0.0.0-20250319124200-ccd6737f222a h1:S+AGcmAESQ0pXCUNnRH7V+bOUIgkSX5qVt2cNKCrm0Q= +github.com/petermattis/goid v0.0.0-20250319124200-ccd6737f222a/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4= +github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c h1:dAMKvw0MlJT1GshSTtih8C2gDs04w8dReiOGXrGLNoY= +github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo= +github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8= +github.com/pmezard/go-difflib 
v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= +github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU= +github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE= +github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= +github.com/puzpuzpuz/xsync/v3 v3.4.0 h1:DuVBAdXuGFHv8adVXjWWZ63pJq+NRXOWVXlKDBZ+mJ4= +github.com/puzpuzpuz/xsync/v3 v3.4.0/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA= +github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE= +github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo= +github.com/richardartoul/molecule v1.0.1-0.20240531184615-7ca0df43c0b3 h1:4+LEVOB87y175cLJC/mbsgKmoDOjrBldtXvioEy96WY= +github.com/richardartoul/molecule v1.0.1-0.20240531184615-7ca0df43c0b3/go.mod h1:vl5+MqJ1nBINuSsUI2mGgH79UweUT/B5Fy8857PqyyI= +github.com/robinjoseph08/go-pg-migrations/v3 v3.1.0 h1:EjexnDlSIZoK/gMfQmKIqB7tYsI+SS5hqxmXd63RLb4= +github.com/robinjoseph08/go-pg-migrations/v3 v3.1.0/go.mod h1:9yEG60N97UVFGD/UKQUXoGVZh/t8KXx3JxEpxhKFlKY= +github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII= +github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o= +github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g= +github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk= +github.com/russross/blackfriday/v2 v2.1.0/go.mod 
h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= +github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk= +github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc= +github.com/sagikazarmark/locafero v0.9.0 h1:GbgQGNtTrEmddYDSAH9QLRyfAHY12md+8YFTqyMTC9k= +github.com/sagikazarmark/locafero v0.9.0/go.mod h1:UBUyz37V+EdMS3hDF3QWIiVr/2dPrx49OMO0Bn0hJqk= +github.com/secure-systems-lab/go-securesystemslib v0.9.0 h1:rf1HIbL64nUpEIZnjLZ3mcNEL9NBPB0iuVjyxvq3LZc= +github.com/secure-systems-lab/go-securesystemslib v0.9.0/go.mod h1:DVHKMcZ+V4/woA/peqr+L0joiRXbPpQ042GgJckkFgw= +github.com/sendgrid/rest v2.6.9+incompatible h1:1EyIcsNdn9KIisLW50MKwmSRSK+ekueiEMJ7NEoxJo0= +github.com/sendgrid/rest v2.6.9+incompatible/go.mod h1:kXX7q3jZtJXK5c5qK83bSGMdV6tsOE70KbHoqJls4lE= +github.com/sendgrid/sendgrid-go v3.16.0+incompatible h1:i8eE6IMkiCy7vusSdacHHSBUpXyTcTXy/Rl9N9aZ/Qw= +github.com/sendgrid/sendgrid-go v3.16.0+incompatible/go.mod h1:QRQt+LX/NmgVEvmdRw0VT/QgUn499+iza2FnDca9fg8= +github.com/shibukawa/configdir v0.0.0-20170330084843-e180dbdc8da0 h1:Xuk8ma/ibJ1fOy4Ee11vHhUFHQNpHhrBneOCNHVXS5w= +github.com/shibukawa/configdir v0.0.0-20170330084843-e180dbdc8da0/go.mod h1:7AwjWCpdPhkSmNAgUv5C7EJ4AbmjEB3r047r3DXWu3Y= +github.com/shirou/gopsutil/v4 v4.25.3 h1:SeA68lsu8gLggyMbmCn8cmp97V1TI9ld9sVzAUcKcKE= +github.com/shirou/gopsutil/v4 v4.25.3/go.mod h1:xbuxyoZj+UsgnZrENu3lQivsngRR5BdjbJwf2fv4szA= +github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k= +github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME= +github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo= 
+github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0= +github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= +github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI= +github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA= +github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ= +github.com/spf13/afero v1.14.0 h1:9tH6MapGnn/j0eb0yIXiLjERO8RB6xIVZRDCX7PtqWA= +github.com/spf13/afero v1.14.0/go.mod h1:acJQ8t0ohCGuMN3O+Pv0V0hgMxNYDlvdk+VTfyZmbYo= +github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE= +github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y= +github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo= +github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU= +github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo= +github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0= +github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o= +github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= +github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s= +github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4= +github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4= +github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8WS0hE= +github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g= +github.com/stretchr/objx v0.1.0/go.mod 
h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= +github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= +github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= +github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= +github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= +github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= +github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA= +github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8= +github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU= +github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc= +github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d h1:vfofYNRScrDdvS342BElfbETmL1Aiz3i2t0zfRj16Hs= +github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d/go.mod 
h1:RRCYJbIwD5jmqPI9XoAFR0OcDxqUctll6zUj/+B4S48= +github.com/tinylib/msgp v1.2.5 h1:WeQg1whrXRFiZusidTQqzETkRpGjFjcIhW6uqWH09po= +github.com/tinylib/msgp v1.2.5/go.mod h1:ykjzy2wzgrlvpDCRc4LA8UXy6D8bzMSuAF3WD57Gok0= +github.com/tklauser/go-sysconf v0.3.15 h1:VE89k0criAymJ/Os65CSn1IXaol+1wrsFHEB8Ol49K4= +github.com/tklauser/go-sysconf v0.3.15/go.mod h1:Dmjwr6tYFIseJw7a3dRLJfsHAMXZ3nEnL/aZY+0IuI4= +github.com/tklauser/numcpus v0.10.0 h1:18njr6LDBk1zuna922MgdjQuJFjrdppsZG60sHGfjso= +github.com/tklauser/numcpus v0.10.0/go.mod h1:BiTKazU708GQTYF4mB+cmlpT2Is1gLk7XVuEeem8LsQ= +github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc h1:9lRDQMhESg+zvGYmW5DyG0UqvY96Bu5QYsTLvCHdrgo= +github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc/go.mod h1:bciPuU6GHm1iF1pBvUfxfsH0Wmnc2VbpgvbI9ZWuIRs= +github.com/trivago/tgo v1.0.7 h1:uaWH/XIy9aWYWpjm2CU3RpcqZXmX2ysQ9/Go+d9gyrM= +github.com/trivago/tgo v1.0.7/go.mod h1:w4dpD+3tzNIIiIfkWWa85w5/B77tlvdZckQ+6PkFnhc= +github.com/tsuyoshiwada/go-gitcmd v0.0.0-20180205145712-5f1f5f9475df h1:Y2l28Jr3vOEeYtxfVbMtVfOdAwuUqWaP9fvNKiBVeXY= +github.com/tsuyoshiwada/go-gitcmd v0.0.0-20180205145712-5f1f5f9475df/go.mod h1:pnyouUty/nBr/zm3GYwTIt+qFTLWbdjeLjZmJdzJOu8= +github.com/ttacon/builder v0.0.0-20170518171403-c099f663e1c2 h1:5u+EJUQiosu3JFX0XS0qTf5FznsMOzTjGqavBGuCbo0= +github.com/ttacon/builder v0.0.0-20170518171403-c099f663e1c2/go.mod h1:4kyMkleCiLkgY6z8gK5BkI01ChBtxR0ro3I1ZDcGM3w= +github.com/ttacon/libphonenumber v1.2.1 h1:fzOfY5zUADkCkbIafAed11gL1sW+bJ26p6zWLBMElR4= +github.com/ttacon/libphonenumber v1.2.1/go.mod h1:E0TpmdVMq5dyVlQ7oenAkhsLu86OkUl+yR4OAxyEg/M= +github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8= +github.com/tyler-smith/go-bip39 v1.1.0/go.mod h1:gUYDtqQw1JS3ZJ8UWVcGTGqqr6YIN3CWg+kkNaLt55U= +github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= +github.com/unrolled/secure v1.17.0 
h1:Io7ifFgo99Bnh0J7+Q+qcMzWM6kaDPCA5FroFZEdbWU= +github.com/unrolled/secure v1.17.0/go.mod h1:BmF5hyM6tXczk3MpQkFf1hpKSRqCyhqcbiQtiAF7+40= +github.com/uptrace/bun v1.2.3 h1:6KDc6YiNlXde38j9ATKufb8o7MS8zllhAOeIyELKrk0= +github.com/uptrace/bun v1.2.3/go.mod h1:8frYFHrO/Zol3I4FEjoXam0HoNk+t5k7aJRl3FXp0mk= +github.com/uptrace/bun/dialect/pgdialect v1.2.3 h1:YyCxxqeL0lgFWRZzKCOt6mnxUsjqITcxSo0mLqgwMUA= +github.com/uptrace/bun/dialect/pgdialect v1.2.3/go.mod h1:Vx9TscyEq1iN4tnirn6yYGwEflz0KG3rBZTBCLpKAjc= +github.com/uptrace/bun/driver/pgdriver v1.2.3 h1:VA5TKB0XW7EtreQq2R8Qu/vCAUX2ECaprxGKI9iDuDE= +github.com/uptrace/bun/driver/pgdriver v1.2.3/go.mod h1:yDiYTZYd4FfXFtV01m4I/RkI33IGj9N254jLStaeJLs= +github.com/uptrace/bun/extra/bunbig v1.2.3 h1:S0Nd2u/tNk1Nax8GNyF43vJOCtLpeWDpdp74ufe4IYk= +github.com/uptrace/bun/extra/bunbig v1.2.3/go.mod h1:1+LVar7Ras4JMvULZ0tLO8TNx1W/5LxrK9cS6g57F20= +github.com/uptrace/bun/extra/bundebug v1.2.3 h1:2QBykz9/u4SkN9dnraImDcbrMk2fUhuq2gL6hkh9qSc= +github.com/uptrace/bun/extra/bundebug v1.2.3/go.mod h1:bihsYJxXxWZXwc1R3qALTHvp+npE0ElgaCvcjzyPPdw= +github.com/urfave/cli/v2 v2.27.6 h1:VdRdS98FNhKZ8/Az8B7MTyGQmpIr36O1EHybx/LaZ4g= +github.com/urfave/cli/v2 v2.27.6/go.mod h1:3Sevf16NykTbInEnD0yKkjDAeZDS0A6bzhBH5hrMvTQ= +github.com/vmihailenco/bufpool v0.1.11 h1:gOq2WmBrq0i2yW5QJ16ykccQ4wH9UyEsgLm6czKAd94= +github.com/vmihailenco/bufpool v0.1.11/go.mod h1:AFf/MOy3l2CFTKbxwt0mp2MwnqjNEs5H/UxrkA5jxTQ= +github.com/vmihailenco/msgpack/v4 v4.3.13 h1:A2wsiTbvp63ilDaWmsk2wjx6xZdxQOvpiNlKBGKKXKI= +github.com/vmihailenco/msgpack/v4 v4.3.13/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4= +github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IUPn0Bjt8= +github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok= +github.com/vmihailenco/tagparser v0.1.2 h1:gnjoVuB/kljJ5wICEEOpx98oXMWPLj22G67Vbd1qPqc= +github.com/vmihailenco/tagparser v0.1.2/go.mod 
h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI= +github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g= +github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds= +github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 h1:nIPpBwaJSVYIxUFsDv3M8ofmx9yWTog9BfvIu0q41lo= +github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8/go.mod h1:HUYIGzjTL3rfEspMxjDjgmT5uz5wzYJKVo23qUhYTos= +github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= +github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 h1:gEOO8jv9F4OT7lGCjxCBTO/36wtF6j2nSip77qHd4x4= +github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= +github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0= +github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0= +github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM= +github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4= +go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk= +go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= +go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= +go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA= +go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A= +go.opentelemetry.io/collector/component v1.30.0 
h1:HXjqBHaQ47/EEuWdnkjr4Y3kRWvmyWIDvqa1Q262Fls= +go.opentelemetry.io/collector/component v1.30.0/go.mod h1:vfM9kN+BM6oHBXWibquiprz8CVawxd4/aYy3nbhme3E= +go.opentelemetry.io/collector/component/componentstatus v0.120.0 h1:hzKjI9+AIl8A/saAARb47JqabWsge0kMp8NSPNiCNOQ= +go.opentelemetry.io/collector/component/componentstatus v0.120.0/go.mod h1:kbuAEddxvcyjGLXGmys3nckAj4jTGC0IqDIEXAOr3Ag= +go.opentelemetry.io/collector/component/componenttest v0.123.0 h1:h0B/kBj0URKq+i9iMbMuLhc6/dZ2GWL0y9L6tqHRNuA= +go.opentelemetry.io/collector/component/componenttest v0.123.0/go.mod h1:4Y6EMvsgE9fUNM98G0eW5+LFXfcxdhTHQDhaOxJrgN8= +go.opentelemetry.io/collector/consumer v1.26.0 h1:0MwuzkWFLOm13qJvwW85QkoavnGpR4ZObqCs9g1XAvk= +go.opentelemetry.io/collector/consumer v1.26.0/go.mod h1:I/ZwlWM0sbFLhbStpDOeimjtMbWpMFSoGdVmzYxLGDg= +go.opentelemetry.io/collector/consumer/consumertest v0.120.0 h1:iPFmXygDsDOjqwdQ6YZcTmpiJeQDJX+nHvrjTPsUuv4= +go.opentelemetry.io/collector/consumer/consumertest v0.120.0/go.mod h1:HeSnmPfAEBnjsRR5UY1fDTLlSrYsMsUjufg1ihgnFJ0= +go.opentelemetry.io/collector/consumer/xconsumer v0.120.0 h1:dzM/3KkFfMBIvad+NVXDV+mA+qUpHyu5c70TFOjDg68= +go.opentelemetry.io/collector/consumer/xconsumer v0.120.0/go.mod h1:eOf7RX9CYC7bTZQFg0z2GHdATpQDxI0DP36F9gsvXOQ= +go.opentelemetry.io/collector/featuregate v1.30.0 h1:mx7+iP/FQnY7KO8qw/xE3Qd1MQkWcU8VgcqLNrJ8EU8= +go.opentelemetry.io/collector/featuregate v1.30.0/go.mod h1:Y/KsHbvREENKvvN9RlpiWk/IGBK+CATBYzIIpU7nccc= +go.opentelemetry.io/collector/internal/telemetry v0.124.0 h1:kzd1/ZYhLj4bt2pDB529mL4rIRrRacemXodFNxfhdWk= +go.opentelemetry.io/collector/internal/telemetry v0.124.0/go.mod h1:ZjXjqV0dJ+6D4XGhTOxg/WHjnhdmXsmwmUSgALea66Y= +go.opentelemetry.io/collector/pdata v1.30.0 h1:j3jyq9um436r6WzWySzexP2nLnFdmL5uVBYAlyr9nDM= +go.opentelemetry.io/collector/pdata v1.30.0/go.mod h1:0Bxu1ktuj4wE7PIASNSvd0SdBscQ1PLtYasymJ13/Cs= +go.opentelemetry.io/collector/pdata/pprofile v0.124.0 h1:ZjL9wKqzP4BHj0/F1jfGxs1Va8B7xmYayipZeNVoWJE= 
+go.opentelemetry.io/collector/pdata/pprofile v0.124.0/go.mod h1:1EN3Gw5LSI4fSVma/Yfv/6nqeuYgRTm1/kmG5nE5Oyo= +go.opentelemetry.io/collector/pdata/testdata v0.120.0 h1:Zp0LBOv3yzv/lbWHK1oht41OZ4WNbaXb70ENqRY7HnE= +go.opentelemetry.io/collector/pdata/testdata v0.120.0/go.mod h1:PfezW5Rzd13CWwrElTZRrjRTSgMGUOOGLfHeBjj+LwY= +go.opentelemetry.io/collector/pipeline v0.124.0 h1:hKvhDyH2GPnNO8LGL34ugf36sY7EOXPjBvlrvBhsOdw= +go.opentelemetry.io/collector/pipeline v0.124.0/go.mod h1:TO02zju/K6E+oFIOdi372Wk0MXd+Szy72zcTsFQwXl4= +go.opentelemetry.io/collector/processor v0.120.0 h1:No+I65ybBLVy4jc7CxcsfduiBrm7Z6kGfTnekW3hx1A= +go.opentelemetry.io/collector/processor v0.120.0/go.mod h1:4zaJGLZCK8XKChkwlGC/gn0Dj4Yke04gQCu4LGbJGro= +go.opentelemetry.io/collector/processor/processortest v0.120.0 h1:R+VSVSU59W0/mPAcyt8/h1d0PfWN6JI2KY5KeMICXvo= +go.opentelemetry.io/collector/processor/processortest v0.120.0/go.mod h1:me+IVxPsj4IgK99I0pgKLX34XnJtcLwqtgTuVLhhYDI= +go.opentelemetry.io/collector/processor/xprocessor v0.120.0 h1:mBznj/1MtNqmu6UpcoXz6a63tU0931oWH2pVAt2+hzo= +go.opentelemetry.io/collector/processor/xprocessor v0.120.0/go.mod h1:Nsp0sDR3gE+GAhi9d0KbN0RhOP+BK8CGjBRn8+9d/SY= +go.opentelemetry.io/collector/semconv v0.124.0 h1:YTdo3UFwNyDQCh9DiSm2rbzAgBuwn/9dNZ0rv454goA= +go.opentelemetry.io/collector/semconv v0.124.0/go.mod h1:te6VQ4zZJO5Lp8dM2XIhDxDiL45mwX0YAQQWRQ0Qr9U= +go.opentelemetry.io/contrib/bridges/otelzap v0.10.0 h1:ojdSRDvjrnm30beHOmwsSvLpoRF40MlwNCA+Oo93kXU= +go.opentelemetry.io/contrib/bridges/otelzap v0.10.0/go.mod h1:oTTm4g7NEtHSV2i/0FeVdPaPgUIZPfQkFbq0vbzqnv0= +go.opentelemetry.io/contrib/detectors/gcp v1.35.0 h1:bGvFt68+KTiAKFlacHW6AhA56GF2rS0bdD3aJYEnmzA= +go.opentelemetry.io/contrib/detectors/gcp v1.35.0/go.mod h1:qGWP8/+ILwMRIUf9uIVLloR1uo5ZYAslM4O6OqUi1DA= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 h1:x7wzEgXfnzJcHDwStJT+mxOz4etr2EcexjqhBvmoakw= 
+go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0/go.mod h1:rg+RlpR5dKwaS95IyyZqj5Wd4E13lk/msnTS0Xl9lJM= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 h1:sbiXRNDSWJOTobXh5HyQKjq6wUC5tNybqjIqDpAY4CU= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0/go.mod h1:69uWxva0WgAA/4bu2Yy70SLDBwZXuQ6PbBpbsa5iZrQ= +go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ= +go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.35.0 h1:PB3Zrjs1sG1GBX51SXyTSoOTqcDglmsk7nT6tkKPb/k= +go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.35.0/go.mod h1:U2R3XyVPzn0WX7wOIypPuptulsMcPDPs/oiSVOMVnHY= +go.opentelemetry.io/otel/log v0.11.0 h1:c24Hrlk5WJ8JWcwbQxdBqxZdOK7PcP/LFtOtwpDTe3Y= +go.opentelemetry.io/otel/log v0.11.0/go.mod h1:U/sxQ83FPmT29trrifhQg+Zj2lo1/IPN1PF6RTFqdwc= +go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M= +go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE= +go.opentelemetry.io/otel/sdk v1.35.0 h1:iPctf8iprVySXSKJffSS79eOjl9pvxV9ZqOWT0QejKY= +go.opentelemetry.io/otel/sdk v1.35.0/go.mod h1:+ga1bZliga3DxJ3CQGg3updiaAJoNECOgJREo9KHGQg= +go.opentelemetry.io/otel/sdk/metric v1.35.0 h1:1RriWBmCKgkeHEhM7a2uMjMUfP7MsOF5JpUCaEqEI9o= +go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w= +go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs= +go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc= +go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= +go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE= +go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= +go.uber.org/goleak v1.3.0 
h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= +go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= +golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= +golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8= +golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw= +golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= +golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 h1:R84qjqJb5nVJMxqWYb3np9L5ZsaDtB+a39EqjV0JSUM= +golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0/go.mod h1:S9Xr4PYopiDyqSyp5NjCrhFrqg6A5zA2E/iPHPhqnS8= +golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w= +golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g= +golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= +golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= +golang.org/x/lint 
v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= +golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU= +golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww= +golang.org/x/net v0.0.0-20180719180050-a680a1efc54d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= +golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= +golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= +golang.org/x/net 
v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= +golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= +golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns= +golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY= +golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E= +golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= +golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8= +golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI= +golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU= +golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync 
v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ= +golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= +golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys 
v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220330033206-e17cdc41300f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220627191245-f75cf1eec38b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20221010170243-090e33056c14/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys 
v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw= +golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= +golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= +golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY= +golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= +golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg= +golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= +golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4= +golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA= +golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0= +golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools 
v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= +golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= +golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= +golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4= +golang.org/x/tools v0.32.0 h1:Q7N1vhpkQv7ybVzLFtTjvQya2ewbwNDZzUgfXGqtMWU= +golang.org/x/tools v0.32.0/go.mod h1:ZxrU41P/wAbZD8EDa6dDCa6XfpkhJ7HFMjHJXfBDu8s= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da h1:noIWHXmPHxILtqtCOPIhSt0ABwskkZKjD3bXGnZGpNY= +golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da/go.mod h1:NDW/Ps6MPRej6fsCIbMTohpP40sJ/P/vI1MoTEGwX90= +gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk= +gonum.org/v1/gonum 
v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E= +google.golang.org/api v0.231.0/go.mod h1:H52180fPI/QQlUc0F4xWfGZILdv09GCWKt2bcsn164A= +google.golang.org/api v0.232.0 h1:qGnmaIMf7KcuwHOlF3mERVzChloDYwRfOJOrHt8YC3I= +google.golang.org/api v0.232.0/go.mod h1:p9QCfBWZk1IJETUdbTKloR5ToFdKbYh2fkjsUL6vNoY= +google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= +google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= +google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM= +google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds= +google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= +google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= +google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= +google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= +google.golang.org/genproto v0.0.0-20250428153025-10db94c68c34 h1:oklGWmm0ZiCw4efmdYZo5MF9t6nRvGzM5+0klSjOmGM= +google.golang.org/genproto v0.0.0-20250428153025-10db94c68c34/go.mod h1:hiH/EqX5GBdTyIpkqMqDGUHDiBniln8b4FCw+NzPxQY= +google.golang.org/genproto/googleapis/api v0.0.0-20250428153025-10db94c68c34/go.mod h1:0awUlEkap+Pb1UMeJwJQQAdJQrt3moU7J2moTy69irI= +google.golang.org/genproto/googleapis/api v0.0.0-20250505200425-f936aa4a68b2 h1:vPV0tzlsK6EzEDHNNH5sa7Hs9bd7iXR7B1tSiPepkV0= +google.golang.org/genproto/googleapis/api v0.0.0-20250505200425-f936aa4a68b2/go.mod h1:pKLAc5OolXC3ViWGI62vvC0n10CpwAtRcTNCFwTKBEw= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250428153025-10db94c68c34/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250505200425-f936aa4a68b2 
h1:IqsN8hx+lWLqlN+Sc3DoMy/watjofWiU8sRFgQ8fhKM= +google.golang.org/genproto/googleapis/rpc v0.0.0-20250505200425-f936aa4a68b2/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A= +google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= +google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= +google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= +google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= +google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= +google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= +google.golang.org/grpc v1.72.0 h1:S7UkcVa60b5AAQTaO6ZKamFp1zMZSU0fGDK2WZLbBnM= +google.golang.org/grpc v1.72.0/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM= +google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= +google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= +google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= +google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= +google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= +google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= +google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= +google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= +google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= +google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= +google.golang.org/protobuf v1.33.0/go.mod 
h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY= +google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY= +gopkg.in/DataDog/dd-trace-go.v1 v1.72.2 h1:SLcih9LB+I1l76Wd7aUSpzISemewzjq6djntMnBnzkA= +gopkg.in/DataDog/dd-trace-go.v1 v1.72.2/go.mod h1:XqDhDqsLpThFnJc4z0FvAEItISIAUka+RHwmQ6EfN1U= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= +gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA= +gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= +gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ= +gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= +gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 
h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw= +honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= +honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= +k8s.io/apimachinery v0.31.4 h1:8xjE2C4CzhYVm9DGf60yohpNUh5AEBnPxCryPBECmlM= +k8s.io/apimachinery v0.31.4/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= +lukechampine.com/uint128 v1.3.0 h1:cDdUVfRwDUDovz610ABgFD17nXD4/uDgVHl2sC3+sbo= +lukechampine.com/uint128 v1.3.0/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk= +mellium.im/sasl v0.3.2 h1:PT6Xp7ccn9XaXAnJ03FcEjmAn7kK1x7aoXV6F+Vmrl0= +mellium.im/sasl v0.3.2/go.mod h1:NKXDi1zkr+BlMHLQjY3ofYuU4KSPFxknb8mfEu6SveY= +modernc.org/cc/v3 v3.41.0 h1:QoR1Sn3YWlmA1T4vLaKZfawdVtSiGx8H+cEojbC7v1Q= +modernc.org/cc/v3 v3.41.0/go.mod h1:Ni4zjJYJ04CDOhG7dn640WGfwBzfE0ecX8TyMB0Fv0Y= +modernc.org/ccgo/v3 v3.16.15 h1:KbDR3ZAVU+wiLyMESPtbtE/Add4elztFyfsWoNTgxS0= +modernc.org/ccgo/v3 v3.16.15/go.mod h1:yT7B+/E2m43tmMOT51GMoM98/MtHIcQQSleGnddkUNI= +modernc.org/libc v1.37.6 h1:orZH3c5wmhIQFTXF+Nt+eeauyd+ZIt2BX6ARe+kD+aw= +modernc.org/libc v1.37.6/go.mod h1:YAXkAZ8ktnkCKaN9sw/UDeUVkGYJ/YquGO4FTi5nmHE= +modernc.org/mathutil v1.6.0 h1:fRe9+AmYlaej+64JsEEhoWuAYBkOtQiMEU7n/XgfYi4= +modernc.org/mathutil v1.6.0/go.mod h1:Ui5Q9q1TR2gFm0AQRqQUaBWFLAhQpCwNcuhBOSedWPo= +modernc.org/memory v1.7.2 h1:Klh90S215mmH8c9gO98QxQFsY+W451E8AnzjoE2ee1E= +modernc.org/memory v1.7.2/go.mod h1:NO4NVCQy0N7ln+T9ngWqOQfi7ley4vpwvARR+Hjw95E= +modernc.org/opt v0.1.3 h1:3XOZf2yznlhC+ibLltsDGzABUGVx8J6pnFMS3E4dcq4= +modernc.org/opt v0.1.3/go.mod h1:WdSiB5evDcignE70guQKxYUl14mgWtbClRi5wmkkTX0= +modernc.org/sqlite v1.28.0 h1:Zx+LyDDmXczNnEQdvPuEfcFVA2ZPyaD7UCZDjef3BHQ= +modernc.org/sqlite v1.28.0/go.mod 
h1:Qxpazz0zH8Z1xCFyi5GSL3FzbtZ3fvbjmywNogldEW0= +modernc.org/strutil v1.2.0 h1:agBi9dp1I+eOnxXeiZawM8F4LawKv4NzGWSaLfyeNZA= +modernc.org/strutil v1.2.0/go.mod h1:/mdcBmfOibveCTBxUl5B5l6W+TTH1FXPLHZE6bTosX0= +modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y= +modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM= +pgregory.net/rapid v0.4.7 h1:MTNRktPuv5FNqOO151TM9mDTa+XHcX6ypYeISDVD14g= +pgregory.net/rapid v0.4.7/go.mod h1:UYpPVyjFHzYBGHIxLFoupi8vwk6rXNzRY9OMvVxFIOU= diff --git a/test.Dockerfile b/test.Dockerfile new file mode 100644 index 0000000..03bb70d --- /dev/null +++ b/test.Dockerfile @@ -0,0 +1,61 @@ +FROM alpine:latest AS core + +RUN apk update +RUN apk upgrade +RUN apk add --update bash cmake g++ gcc git make vips vips-dev + +COPY --from=golang:1.24-alpine /usr/local/go/ /usr/local/go/ +ENV PATH="/usr/local/go/bin:${PATH}" + +WORKDIR /state-consumer/src + +COPY backend/go.mod backend/ +COPY backend/go.sum backend/ +COPY core/go.mod core/ +COPY core/go.sum core/ +COPY postgres-data-handler/go.mod postgres-data-handler/ +COPY postgres-data-handler/go.sum postgres-data-handler/ +COPY state-consumer/go.mod state-consumer/ +COPY state-consumer/go.sum state-consumer/ + +WORKDIR /state-consumer/src/state-consumer + +RUN go mod download + +# include backend src +COPY backend/apis ../backend/apis +COPY backend/config ../backend/config +COPY backend/cmd ../backend/cmd +COPY backend/miner ../backend/miner +COPY backend/routes ../backend/routes +COPY backend/countries ../backend/countries +COPY backend/scripts ../backend/scripts + +## include core src +COPY core/desohash ../core/desohash +COPY core/cmd ../core/cmd +COPY core/lib ../core/lib +COPY core/migrate ../core/migrate +COPY core/bls ../core/bls +COPY core/collections ../core/collections +COPY core/consensus ../core/consensus + +## include postgres-data-handler src +COPY postgres-data-handler/handler ../postgres-data-handler/handler
+COPY postgres-data-handler/entries ../postgres-data-handler/entries +COPY postgres-data-handler/main.go ../postgres-data-handler/main.go +COPY postgres-data-handler/tests ../postgres-data-handler/tests +COPY postgres-data-handler/migrations ../postgres-data-handler/migrations + +## include state-consumer src +COPY state-consumer/consumer consumer +COPY state-consumer/tests tests + +RUN go mod tidy + +# No need to build since we're just running tests +ENTRYPOINT ["go", "test", "./tests", "-v", "-failfast", "-p", "1"] diff --git a/tests/consumer.go b/tests/consumer.go new file mode 100644 index 0000000..2ed4962 --- /dev/null +++ b/tests/consumer.go @@ -0,0 +1,654 @@ +package tests + +import ( + "bytes" + "context" + "fmt" + "os" + "testing" + + "github.com/deso-protocol/backend/config" + "github.com/deso-protocol/backend/routes" + coreCmd "github.com/deso-protocol/core/cmd" + "github.com/deso-protocol/core/lib" + "github.com/deso-protocol/postgres-data-handler/entries" + pdh_tests "github.com/deso-protocol/postgres-data-handler/tests" + "github.com/deso-protocol/state-consumer/consumer" + "github.com/google/uuid" + "github.com/pkg/errors" + "github.com/stretchr/testify/require" +) + +const ( + globalStateSharedSecret = "abcdef" +) + +// Variables to be used in the tests. These are defined here so we can take pointers to them in the tests. +var ( + trueValue = true + falseValue = false + consumerEventBatch = ConsumerEventBatch + consumerEventTransaction = ConsumerEventTransaction + consumerEventSyncEvent = ConsumerEventSyncEvent +) + +type TestHandler struct { + // Params is a struct containing the current blockchain parameters. + // It is used to determine which prefix to use for public keys. + Params *lib.DeSoParams + + // ConsumerEventChan is a channel that receives StateConsumerEvents.
+ ConsumerEventChan chan *StateConsumerEvent + + // ConsumedEvents is a list of all the events that have been consumed. + ConsumedEvents []*StateConsumerEvent + + LastTransactionEvent TransactionEvent +} + +func NewTestHandler(params *lib.DeSoParams) *TestHandler { + th := &TestHandler{} + th.Params = params + + th.ConsumerEventChan = make(chan *StateConsumerEvent) + th.ConsumedEvents = []*StateConsumerEvent{} + th.LastTransactionEvent = TransactionEventUndefined + + return th +} + +type ConsumerEvent uint16 + +// Consumer event types. These values identify the different kinds of events that the TestHandler emits on its ConsumerEventChan. +const ( + ConsumerEventBatch ConsumerEvent = 0 + ConsumerEventTransaction ConsumerEvent = 1 + ConsumerEventSyncEvent ConsumerEvent = 2 + ConsumerEventUndefined ConsumerEvent = 3 +) + +type TransactionEvent uint16 + +const ( + TransactionEventInitiate TransactionEvent = 0 + TransactionEventCommit TransactionEvent = 1 + TransactionEventRollback TransactionEvent = 2 + TransactionEventUndefined TransactionEvent = 3 +) + +type StateConsumerEvent struct { + BatchedEntries []*lib.StateChangeEntry + IsMempool bool + EventType ConsumerEvent + TransactionEvent TransactionEvent +} + +func (th *TestHandler) HandleEntryBatch(batchedEntries []*lib.StateChangeEntry, isMempool bool) error { + // Add the batched entries to the channel.
+ th.ConsumerEventChan <- &StateConsumerEvent{ + BatchedEntries: batchedEntries, + IsMempool: isMempool, + EventType: ConsumerEventBatch, + } + + return nil +} + +func (th *TestHandler) HandleSyncEvent(syncEvent consumer.SyncEvent) error { + //th.SyncEventChan <- syncEvent + return nil +} + +func (th *TestHandler) InitiateTransaction() error { + th.ConsumerEventChan <- &StateConsumerEvent{ + TransactionEvent: TransactionEventInitiate, + EventType: ConsumerEventTransaction, + } + return nil +} + +func (th *TestHandler) CommitTransaction() error { + th.ConsumerEventChan <- &StateConsumerEvent{ + TransactionEvent: TransactionEventCommit, + EventType: ConsumerEventTransaction, + } + return nil +} + +func (th *TestHandler) RollbackTransaction() error { + th.ConsumerEventChan <- &StateConsumerEvent{ + TransactionEvent: TransactionEventRollback, + EventType: ConsumerEventTransaction, + } + return nil +} + +func (th *TestHandler) GetParams() *lib.DeSoParams { + return th.Params +} + +func CleanupConsumerTestEnvironment(apiServer *routes.APIServer, nodeServer *lib.Server, cancelFunc context.CancelFunc) { + cancelFunc() + nodeServer.Stop() +} + +func SetupConsumerTestEnvironment(t *testing.T, testUserCount int, entropyStr string, params *lib.DeSoParams) (*pdh_tests.TestConfig, *TestHandler, *routes.APIServer, *lib.Server, *consumer.StateSyncerConsumer, func()) { + pdh_tests.SetupFlags("../.env") + starterAccountSeed := "verb find card ship another until version devote guilt strong lemon six" + starterUser, _, err := pdh_tests.CreateTestUser(starterAccountSeed, "", 0, params, nil) + // Check the error before starterUser is used below. + require.NoError(t, err) + + stateDirPostFix := pdh_tests.RandString(10) + + stateChangeDir := fmt.Sprintf("./ss/state-changes-%s-%s", t.Name(), stateDirPostFix) + consumerProgressDir := fmt.Sprintf("./ss/consumer-progress-%s-%s", t.Name(), stateDirPostFix) + + apiServer, nodeServer := newTestApiServer(t, starterUser, 17001, stateChangeDir) + + // Start the api server in a non-blocking way.
+ go func() { + apiServer.Start() + }() + + testConfig, err := pdh_tests.SetupTestEnvironment(testUserCount, entropyStr, false) + require.NoError(t, err) + + testHandler := NewTestHandler(params) + + stateSyncerConsumer := &consumer.StateSyncerConsumer{} + + // Initialize and run the state syncer consumer in a non-blocking thread. + // Create a context with cancel to control the goroutine + _, cancel := context.WithCancel(context.Background()) + + // Start consumer in goroutine + go func() { + err := stateSyncerConsumer.InitializeAndRun( + stateChangeDir, + consumerProgressDir, + 500000, + 1, + true, + testHandler, + ) + if err != nil && !errors.Is(err, context.Canceled) { + require.NoError(t, err) + } + }() + + // Create cleanup function to return + cleanupFunc := func() { + // Cancel context to stop goroutine + cancel() + + // Delete state change directory + if err := os.RemoveAll(stateChangeDir); err != nil { + fmt.Printf("Error removing state change dir: %v\n", err) + } + + // Delete consumer progress directory + if err := os.RemoveAll(consumerProgressDir); err != nil { + fmt.Printf("Error removing consumer progress dir: %v\n", err) + } + } + + return testConfig, testHandler, apiServer, nodeServer, stateSyncerConsumer, cleanupFunc +} + +// TODO: Make sure that state change dir gets cleaned up. +func newTestApiServer(t *testing.T, starterUser *pdh_tests.TestUser, apiPort uint16, stateChangeDir string) (*routes.APIServer, *lib.Server) { + // Create a badger db instance. + badgerDB, badgerDir := routes.GetTestBadgerDb(t) + + // Set core node's config. 
+ coreConfig := coreCmd.LoadConfig() + coreConfig.Params = &lib.DeSoTestnetParams + coreConfig.DataDirectory = badgerDir + coreConfig.Regtest = true + coreConfig.RegtestAccelerated = true + coreConfig.TXIndex = false + coreConfig.DisableNetworking = true + coreConfig.MinerPublicKeys = []string{starterUser.PublicKeyBase58} + coreConfig.BlockProducerSeed = starterUser.SeedPhrase + coreConfig.PosValidatorSeed = starterUser.SeedPhrase + coreConfig.NumMiningThreads = 1 + coreConfig.HyperSync = false + coreConfig.MinFeerate = 2000 + coreConfig.LogDirectory = "" + coreConfig.StateChangeDir = stateChangeDir + coreConfig.GlogV = 0 + coreConfig.NoLogToStdErr = true + + // Create a core node. + shutdownListener := make(chan struct{}) + node := coreCmd.NewNode(coreConfig) + node.Start(&shutdownListener) + + // Set api server's config. + apiConfig := config.LoadConfig(coreConfig) + // - STARTER_DESO_SEED=road congress client market couple bid risk escape artwork rookie artwork food + //apiConfig.StarterDESOSeed = starterUser.SeedPhrase + apiConfig.APIPort = apiPort + apiConfig.GlobalStateRemoteNode = "" + apiConfig.GlobalStateRemoteSecret = globalStateSharedSecret + apiConfig.RunHotFeedRoutine = false + apiConfig.RunSupplyMonitoringRoutine = false + apiConfig.AdminPublicKeys = []string{starterUser.PublicKeyBase58} + apiConfig.SuperAdminPublicKeys = []string{starterUser.PublicKeyBase58} + + // Create an api server. + apiServer, err := routes.NewAPIServer( + node.Server, + node.Server.GetMempool(), + node.Server.GetBlockchain(), + node.Server.GetBlockProducer(), + node.TXIndex, + node.Params, + apiConfig, + node.Config.MinFeerate, + badgerDB, + nil, + node.Config.BlockCypherAPIKey, + ) + require.NoError(t, err) + + // Initialize api server. + apiServer.MinFeeRateNanosPerKB = node.Config.MinFeerate + + return apiServer, node.Server +} + +// Function to decode state change entries with a generic type EncoderType. 
+func DecodeStateChangeEntries[EncoderType lib.DeSoEncoder](entryBatch []*lib.StateChangeEntry) ([]*EncoderType, []*EncoderType, error) { + var decodedEntries []*EncoderType + var decodedAncestralRecords []*EncoderType + + for _, entry := range entryBatch { + decodedEntry, decodedAncestralRecord, err := DecodeStateChangeEntryEncoders[EncoderType](entry) + if err != nil { + return nil, nil, errors.Wrapf(err, "DecodeStateChangeEntries: Problem decoding entry") + } + decodedEntries = append(decodedEntries, decodedEntry) + decodedAncestralRecords = append(decodedAncestralRecords, decodedAncestralRecord) + } + + return decodedEntries, decodedAncestralRecords, nil +} + +// Function to decode a single state change entry with a generic type EncoderType. +func DecodeStateChangeEntryEncoders[EncoderType lib.DeSoEncoder](entry *lib.StateChangeEntry) (*EncoderType, *EncoderType, error) { + decodedEntry := entry.EncoderType.New() + var decodedAncestralRecord lib.DeSoEncoder + + // You need to pass a pointer to the value of decodedEntry. + err := consumer.DecodeEntry(decodedEntry, entry.EncoderBytes) + if err != nil { + return nil, nil, errors.Wrapf(err, "DecodeStateChangeEntry: Problem decoding entry") + } + + // Cast the decoded entry to the EncoderType. + typedDecodedEntry := decodedEntry.(EncoderType) + + if entry.AncestralRecordBytes != nil && (len(entry.AncestralRecordBytes) > 0 && entry.AncestralRecordBytes[0] != 0) { + decodedAncestralRecord = entry.EncoderType.New() + // You need to pass a pointer to the value of decodedAncestralRecord. 
+ err = consumer.DecodeEntry(decodedAncestralRecord, entry.AncestralRecordBytes) + if err != nil { + return nil, nil, errors.Wrapf(err, "DecodeStateChangeEntry: Problem decoding ancestral record") + } + typedAncestralRecord := decodedAncestralRecord.(EncoderType) + return &typedDecodedEntry, &typedAncestralRecord, nil + } else { + return &typedDecodedEntry, nil, nil + } +} + +// EntryScanResult is a struct that contains the results of a search for a transaction or entry in the ConsumerEventChan. +type EntryScanResult struct { + EventsScanned int + EntryBatch []*lib.StateChangeEntry + ConsumedEvents []*StateConsumerEvent + RemainingConsumedEvents []*StateConsumerEvent + IsMempool bool + IsReverted bool + EncoderType lib.EncoderType + OperationType lib.StateSyncerOperationType + Txn *entries.PGTransactionEntry + RemainingTxns []*entries.PGTransactionEntry + FlushId uuid.UUID + TransactionCommits int + TransactionInitiates int + LastTransactionEvent TransactionEvent +} + +// GetNextBatch returns the next batch of entries from the ConsumerEventChan. +func (th *TestHandler) GetNextBatch() *EntryScanResult { + batchEvent := <-th.ConsumerEventChan + batchedEntries := batchEvent.BatchedEntries + return &EntryScanResult{ + EntryBatch: batchedEntries, + IsMempool: batchEvent.IsMempool, + IsReverted: batchedEntries[0].IsReverted, + EventsScanned: 1, + FlushId: batchedEntries[0].FlushId, + EncoderType: batchedEntries[0].EncoderType, + OperationType: batchedEntries[0].OperationType, + } +} + +// ConsumeAllEvents drains any pending events from the ConsumerEventChan. Because the channel is unbuffered, len() always reports zero, so a select with a default case is used to receive until no sender is ready. +func (th *TestHandler) ConsumeAllEvents() { + for { + select { + case <-th.ConsumerEventChan: + default: + return + } + } +} + +// Struct to use as input for WaitForMatchingEntryBatch.
+type ConsumerEventSearch struct { + precedingEvents []*StateConsumerEvent + targetConsumerEvent *ConsumerEvent + targetEncoderTypes []lib.EncoderType + targetOpType *lib.StateSyncerOperationType + targetTransactionHash *string + currentFlushId *uuid.UUID + targetIsMempool *bool + targetIsReverted *bool + targetTxnEvent *TransactionEvent + exitWhenEmpty bool + targetBadgerKeyBytes *[]byte +} + +// WaitForMatchingEntryBatch waits for an entry batch with the given encoder type, operation type, or flush id to appear in the ConsumerEventChan. +func (th *TestHandler) WaitForMatchingEntryBatch(searchCriteria *ConsumerEventSearch) (*EntryScanResult, error) { + // Track the number of batches we've scanned. + eventsScanned := 0 + + // Track the # of transaction initiate and commit events that have occurred. + transactionCommits := 0 + transactionInitiates := 0 + lastTransactionEvent := th.LastTransactionEvent + + // First check the preceding flush events to see if the event is there. + for ii, precedingEvent := range searchCriteria.precedingEvents { + if precedingEvent.EventType == ConsumerEventTransaction { + if precedingEvent.TransactionEvent == TransactionEventCommit { + transactionCommits += 1 + } else if precedingEvent.TransactionEvent == TransactionEventInitiate { + transactionInitiates += 1 + } + lastTransactionEvent = precedingEvent.TransactionEvent + th.LastTransactionEvent = lastTransactionEvent + } + eventsScanned += 1 + + if res, err := th.BatchMatchesSearch(precedingEvent, searchCriteria); err != nil { + return nil, errors.Wrapf(err, "WaitForMatchingEntryBatch: Problem checking for matching entry batch") + } else if res != nil { + res.EventsScanned = eventsScanned + res.TransactionCommits = transactionCommits + res.TransactionInitiates = transactionInitiates + res.LastTransactionEvent = lastTransactionEvent + // Return the events that were parsed. 
+ res.ConsumedEvents = searchCriteria.precedingEvents[:ii+1] + // Return only the preceding events that weren't parsed. + res.RemainingConsumedEvents = searchCriteria.precedingEvents[ii+1:] + // Return the result and the remaining preceding flush events. + return res, nil + } + } + + consumedFlushEvents := []*StateConsumerEvent{} + + // Continue retrieving entries from the ConsumerEventChan until we find the transaction hash. + for consumerEvent := range th.ConsumerEventChan { + consumedFlushEvents = append(consumedFlushEvents, consumerEvent) + th.ConsumedEvents = append(th.ConsumedEvents, consumerEvent) + eventsScanned += 1 + + if consumerEvent.EventType == ConsumerEventTransaction { + if consumerEvent.TransactionEvent == TransactionEventCommit { + transactionCommits += 1 + } else if consumerEvent.TransactionEvent == TransactionEventInitiate { + transactionInitiates += 1 + } + lastTransactionEvent = consumerEvent.TransactionEvent + th.LastTransactionEvent = lastTransactionEvent + } + + if res, err := th.BatchMatchesSearch(consumerEvent, searchCriteria); err != nil { + return nil, errors.Wrapf(err, "WaitForMatchingEntryBatch: Problem checking for matching entry batch") + } else if res != nil { + res.EventsScanned = eventsScanned + res.ConsumedEvents = consumedFlushEvents + res.TransactionCommits = transactionCommits + res.TransactionInitiates = transactionInitiates + res.LastTransactionEvent = lastTransactionEvent + return res, nil + } + if searchCriteria.exitWhenEmpty && len(th.ConsumerEventChan) == 0 { + return &EntryScanResult{ + EventsScanned: eventsScanned, + ConsumedEvents: consumedFlushEvents, + TransactionCommits: transactionCommits, + TransactionInitiates: transactionInitiates, + LastTransactionEvent: lastTransactionEvent, + }, errors.New("WaitForMatchingEntryBatch: Entry not found in entry batch") + } + } + return nil, errors.New("WaitForMatchingEntryBatch: Entry not found in entry batch") +} + +// Check to see if the batch event matches the search 
criteria. +func (th *TestHandler) BatchMatchesSearch( + consumerEvent *StateConsumerEvent, + searchCriteria *ConsumerEventSearch, +) (*EntryScanResult, error) { + + if consumerEvent.EventType == ConsumerEventTransaction { + if searchCriteria.targetTxnEvent != nil && consumerEvent.TransactionEvent == *searchCriteria.targetTxnEvent { + return &EntryScanResult{ + EntryBatch: consumerEvent.BatchedEntries, + IsMempool: consumerEvent.IsMempool, + }, nil + } + return nil, nil + } + + nextEntryBatch := consumerEvent.BatchedEntries + + encoderType := nextEntryBatch[0].EncoderType + operationType := nextEntryBatch[0].OperationType + flushId := nextEntryBatch[0].FlushId + isMempool := consumerEvent.IsMempool + isReverted := nextEntryBatch[0].IsReverted + + transactionMatch := true + var err error + + if searchCriteria.targetTransactionHash != nil { + transactionMatch, err = th.TransactionInEntryBatch(*searchCriteria.targetTransactionHash, nextEntryBatch) + if err != nil { + return nil, errors.Wrapf(err, "BatchMatchesSearch: Problem checking for transaction in entry batch") + } + } + + badgerKeyMatch := true + if searchCriteria.targetBadgerKeyBytes != nil { + badgerKeyMatch, err = th.BadgerKeyInEntryBatch(*searchCriteria.targetBadgerKeyBytes, nextEntryBatch) + if err != nil { + return nil, errors.Wrapf(err, "BatchMatchesSearch: Problem checking for badger key in entry batch") + } + } + + if EncoderTypeMatchesTargets(encoderType, searchCriteria.targetEncoderTypes) && + (searchCriteria.targetConsumerEvent == nil || consumerEvent.EventType == *searchCriteria.targetConsumerEvent) && + (searchCriteria.targetOpType == nil || operationType == *searchCriteria.targetOpType) && + (searchCriteria.currentFlushId == nil || *searchCriteria.currentFlushId != flushId) && + (searchCriteria.targetIsMempool == nil || *searchCriteria.targetIsMempool == isMempool) && + (searchCriteria.targetIsReverted == nil || *searchCriteria.targetIsReverted == isReverted) && + (badgerKeyMatch) && + 
(transactionMatch) {
+
+ return &EntryScanResult{
+ EntryBatch: nextEntryBatch,
+ IsMempool: consumerEvent.IsMempool,
+ IsReverted: nextEntryBatch[0].IsReverted,
+ FlushId: nextEntryBatch[0].FlushId,
+ EncoderType: encoderType,
+ OperationType: operationType,
+ }, nil
+ }
+ return nil, nil
+}
+
+func EncoderTypeMatchesTargets(encoderType lib.EncoderType, targetEncoderTypes []lib.EncoderType) bool {
+ if targetEncoderTypes == nil {
+ return true
+ }
+ for _, targetEncoderType := range targetEncoderTypes {
+ if encoderType == targetEncoderType {
+ return true
+ }
+ }
+ return false
+}
+
+func (th *TestHandler) TransactionInEntryBatch(txnHash string, entryBatch []*lib.StateChangeEntry) (bool, error) {
+ txns, err := ParseTransactionsFromEntryBatch(entryBatch, th.Params)
+ if err != nil {
+ return false, errors.Wrapf(err, "TransactionInEntryBatch: Problem parsing transactions from entry batch")
+ }
+
+ for _, txn := range txns {
+ if txn.TransactionHash == txnHash {
+ return true, nil
+ }
+ }
+ return false, nil
+}
+
+func (th *TestHandler) BadgerKeyInEntryBatch(badgerKey []byte, entryBatch []*lib.StateChangeEntry) (bool, error) {
+ for _, entry := range entryBatch {
+ if bytes.Equal(entry.KeyBytes, badgerKey) {
+ return true, nil
+ }
+ }
+ return false, nil
+}
+
+// TODO: Delete this function.
+// WaitForTxnHash waits for a transaction with the given hash to appear in the ConsumerEventChan.
+func (th *TestHandler) WaitForTxnHash(txnHash string, isMempool *bool) (*EntryScanResult, error) {
+ // Track the number of batches we've scanned.
+ batchesScanned := 0
+
+ precedingBatchesInFlush := []*StateConsumerEvent{}
+ currentFlushId := uuid.Nil
+
+ // Continue retrieving entries from the ConsumerEventChan until we find the transaction hash.
+ for batchEvent := range th.ConsumerEventChan { + batchesScanned += 1 + + if currentFlushId == batchEvent.BatchedEntries[0].FlushId { + precedingBatchesInFlush = append(precedingBatchesInFlush, batchEvent) + } else { + precedingBatchesInFlush = []*StateConsumerEvent{batchEvent} + currentFlushId = batchEvent.BatchedEntries[0].FlushId + } + + nextEntryBatch := batchEvent.BatchedEntries + + if isMempool != nil && *isMempool != batchEvent.IsMempool { + continue + } + + txns, err := ParseTransactionsFromEntryBatch(nextEntryBatch, th.Params) + if err != nil { + return nil, errors.Wrapf(err, "WaitForTxnHash: Problem parsing transactions from entry batch") + } + + for ii, txn := range txns { + if txn.TransactionHash == txnHash { + // Return the transaction, and all the following transactions. + return &EntryScanResult{ + EntryBatch: nextEntryBatch, + EventsScanned: batchesScanned, + ConsumedEvents: precedingBatchesInFlush[:len(precedingBatchesInFlush)-1], + IsMempool: batchEvent.IsMempool, + IsReverted: nextEntryBatch[0].IsReverted, + FlushId: nextEntryBatch[0].FlushId, + EncoderType: nextEntryBatch[0].EncoderType, + OperationType: nextEntryBatch[0].OperationType, + Txn: txn, + RemainingTxns: txns[ii+1:], + }, nil + } + } + } + return nil, errors.New("WaitForTxnHash: Transaction not found in entry batch") +} + +func ParseTransactionsFromEntryBatch(entryBatch []*lib.StateChangeEntry, params *lib.DeSoParams) ([]*entries.PGTransactionEntry, error) { + encoderType := entryBatch[0].EncoderType + operationType := entryBatch[0].OperationType + + txns := []*entries.PGTransactionEntry{} + + if operationType == lib.DbOperationTypeDelete { + return txns, nil + } + + if encoderType == lib.EncoderTypeTxn { + transformedTxns, err := entries.TransformTransactionEntry(entryBatch, params) + if err != nil { + return nil, errors.Wrapf(err, "ParseTransactionsFromEntryBatch: Problem converting transaction entries") + } + return transformedTxns, nil + } + + for _, entry := range entryBatch { + 
if encoderType == lib.EncoderTypeBlock {
+ blockTxns, err := BlockToTransactionEntries(entry.Encoder.(*lib.MsgDeSoBlock), entry.KeyBytes, params)
+ if err != nil {
+ return nil, errors.Wrapf(err, "ParseTransactionsFromEntryBatch: Problem converting block entry to transaction entries")
+ }
+ txns = append(txns, blockTxns...)
+ } else if entry.Block != nil {
+ blockTxns, err := BlockToTransactionEntries(entry.Block, entry.KeyBytes, params)
+ if err != nil {
+ return nil, errors.Wrapf(err, "ParseTransactionsFromEntryBatch: Problem converting block property to transaction entries")
+ }
+ txns = append(txns, blockTxns...)
+ }
+ }
+
+ return txns, nil
+}
+
+func BlockToTransactionEntries(block *lib.MsgDeSoBlock, keyBytes []byte, params *lib.DeSoParams) ([]*entries.PGTransactionEntry, error) {
+ blockEntry, err := entries.BlockEncoderToPGStruct(block, keyBytes, params)
+ if err != nil {
+ return nil, errors.Wrapf(err, "BlockToTransactionEntries: Problem converting block to PG struct")
+ }
+ txns := []*entries.PGTransactionEntry{}
+ for ii, txn := range block.Txns {
+ indexInBlock := uint64(ii)
+ pgTxn, err := entries.TransactionEncoderToPGStruct(
+ txn,
+ &indexInBlock,
+ blockEntry.BlockHash,
+ blockEntry.Height,
+ blockEntry.Timestamp,
+ nil,
+ nil,
+ params,
+ )
+ if err != nil {
+ return txns, errors.Wrapf(
+ err,
+ "BlockToTransactionEntries: Problem converting transaction to PG struct",
+ )
+ }
+ txns = append(txns, pgTxn)
+ }
+ return txns, nil
+}
diff --git a/tests/consumer_test.go b/tests/consumer_test.go
new file mode 100644
index 0000000..2bbc0ed
--- /dev/null
+++ b/tests/consumer_test.go
@@ -0,0 +1,813 @@
+package tests
+
+import (
+ "bytes"
+ "encoding/hex"
+ "fmt"
+ "testing"
+ "time"
+
+ "github.com/deso-protocol/backend/routes"
+ "github.com/deso-protocol/core/lib"
+ pdh_tests "github.com/deso-protocol/postgres-data-handler/tests"
+ "github.com/deso-protocol/state-consumer/consumer"
+ "github.com/deso-protocol/uint256"
+ "github.com/stretchr/testify/require"
+)
+
+// VerifyTransactionStateChanges is a helper function that verifies the state changes for a
transaction.
+// It waits for the transaction to be consumed by the state consumer, and then verifies that the
+// state changes for the entries related to the transaction are applied and reverted correctly.
+func VerifyTransactionStateChanges(
+ t *testing.T,
+ testHandler *TestHandler,
+ txnHash string,
+ encoderType lib.EncoderType,
+ targetBadgerKeyBytes *[]byte,
+ previousConsumedEvents []*StateConsumerEvent,
+ validateAppliedEntry func(*EntryScanResult, *[]byte) error,
+ validateRevertedEntry func(*EntryScanResult, *[]byte) error,
+) {
+
+ // We should first see a transaction event for the transaction being initiated.
+ // Wait for the transaction to be consumed by the state consumer.
+ txnRes, err := testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+ precedingEvents: previousConsumedEvents,
+ targetConsumerEvent: &consumerEventBatch,
+ targetTransactionHash: &txnHash,
+ exitWhenEmpty: false,
+ })
+ require.NoError(t, err)
+ require.Equal(t, true, txnRes.IsMempool)
+ requiredTxnInitiateCount := txnRes.TransactionInitiates
+ requiredTxnCommitCount := txnRes.TransactionCommits
+ // Add the events consumed from the transaction search to the previous consumed events.
+ consumedEvents := append(txnRes.ConsumedEvents, txnRes.RemainingConsumedEvents...)
+ flushId := txnRes.FlushId
+ transactionConfirmed := false
+ flushesBeforeConfirmation := 0
+ var entryRes *EntryScanResult
+
+ // Keep waiting for entry operations of the target encoder type until the transaction is confirmed.
+ for !transactionConfirmed {
+ // TODO: Confirm that the transaction and the entry operation occur in the same initiate/commit event.
+ // Wait for the next entry operation.
+ entryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + precedingEvents: consumedEvents, + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{encoderType}, + targetBadgerKeyBytes: targetBadgerKeyBytes, + exitWhenEmpty: false, + }) + + require.NoError(t, err) + + err = validateAppliedEntry(entryRes, targetBadgerKeyBytes) + require.NoError(t, err) + + require.Equal(t, requiredTxnInitiateCount, entryRes.TransactionInitiates) + require.Equal(t, requiredTxnCommitCount, entryRes.TransactionCommits) + require.Equal(t, TransactionEventInitiate, txnRes.LastTransactionEvent) + // Clear the batch events that were consumed to avoid matching the same batch again. + // Only include the remaining events that weren't consumed. + consumedEvents = entryRes.RemainingConsumedEvents + + // TODO: It seems like the mempool entries disappearing is being caused by an entry being removed from the mempool, but not being included in committed state for another two blocks. + // We need to find a way to either ensure that mempool state includes these entries, or not revert them for another two blocks. + + if entryRes.IsMempool { + require.Equal(t, entryRes.IsMempool, true) + + // On the first flush, it should have the same flush ID as the transaction. + // After that, it should have a different flush ID, until it's mined. + if flushesBeforeConfirmation == 0 { + require.Equal(t, entryRes.FlushId, flushId) + } else { + require.NotEqual(t, entryRes.FlushId, flushId) + } + + flushesBeforeConfirmation++ + } else { + // Once the transaction is no longer in the mempool, it should be confirmed. 
+ require.Equal(t, entryRes.FlushId, flushId)
+ transactionConfirmed = true
+ require.Equal(t, entryRes.IsMempool, false)
+ }
+
+ flushId = entryRes.FlushId
+
+ require.Equal(t, entryRes.EncoderType, encoderType)
+ require.Equal(t, entryRes.OperationType, lib.DbOperationTypeUpsert)
+ require.Equal(t, entryRes.IsReverted, false)
+
+ err = validateAppliedEntry(entryRes, targetBadgerKeyBytes)
+ require.NoError(t, err)
+
+ if !transactionConfirmed {
+ // Wait for the next entry operation - it should be a delete operation.
+ entryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+ precedingEvents: consumedEvents,
+ targetConsumerEvent: &consumerEventBatch,
+ targetEncoderTypes: []lib.EncoderType{encoderType},
+ targetBadgerKeyBytes: targetBadgerKeyBytes,
+ exitWhenEmpty: false,
+ })
+ require.NoError(t, err)
+ require.Equal(t, TransactionEventInitiate, entryRes.LastTransactionEvent)
+ // After a revert, the next mempool entry re-apply should occur within the same transaction.
+ requiredTxnInitiateCount = 0
+ requiredTxnCommitCount = 0
+ require.Equal(t, encoderType, entryRes.EncoderType)
+ require.Equal(t, false, entryRes.IsReverted)
+ require.Equal(t, !transactionConfirmed, entryRes.IsMempool)
+
+ // Validate the reverted entry.
+ err = validateRevertedEntry(entryRes, targetBadgerKeyBytes)
+ require.NoError(t, err)
+ consumedEvents = entryRes.RemainingConsumedEvents
+ }
+
+ }
+
+ require.Less(t, flushesBeforeConfirmation, 4, "Flushes before confirmation should be less than 4, was %d", flushesBeforeConfirmation)
+ require.Greater(t, flushesBeforeConfirmation, 0, "Flushes before confirmation should be greater than 0, was %d", flushesBeforeConfirmation)
+ require.True(t, transactionConfirmed)
+
+ // Search for the transaction associated with the committed entry.
+ confirmedTxnRes, err := testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+ precedingEvents: append(entryRes.ConsumedEvents, entryRes.RemainingConsumedEvents...),
+ targetConsumerEvent: &consumerEventBatch,
+ targetTransactionHash: &txnHash,
+ exitWhenEmpty: false,
+ })
+ require.NoError(t, err)
+ // Make sure that the transaction is confirmed, and was confirmed during the same flush as the entry.
+ require.Equal(t, flushId, confirmedTxnRes.FlushId)
+ require.Equal(t, confirmedTxnRes.IsMempool, false)
+ require.Equal(t, confirmedTxnRes.TransactionInitiates, requiredTxnInitiateCount)
+ require.Equal(t, confirmedTxnRes.TransactionCommits, requiredTxnCommitCount)
+ require.Equal(t, TransactionEventInitiate, confirmedTxnRes.LastTransactionEvent)
+
+ // Feed the final check the remaining events after the previous entry or the previous confirmed txn, whichever is smaller.
+ finalEntryPrecedingEvents := entryRes.RemainingConsumedEvents
+ if len(confirmedTxnRes.RemainingConsumedEvents) < len(entryRes.RemainingConsumedEvents) {
+ finalEntryPrecedingEvents = confirmedTxnRes.RemainingConsumedEvents
+ }
+ // That should be the last entry operation in the queue.
+ _, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+ targetConsumerEvent: &consumerEventBatch,
+ precedingEvents: finalEntryPrecedingEvents,
+ targetEncoderTypes: []lib.EncoderType{encoderType},
+ targetBadgerKeyBytes: targetBadgerKeyBytes,
+ exitWhenEmpty: true,
+ })
+ require.Error(t, err)
+ require.Contains(t, err.Error(), "Entry not found in entry batch")
+}
+
+func TestConsumer(t *testing.T) {
+
+ desoParams := &lib.DeSoTestnetParams
+ // TODO: Cleanup the consumer test environment fn to remove consumer-specific logic.
+ testConfig, testHandler, _, _, _, cleanupFunc := SetupConsumerTestEnvironment(t, 3, pdh_tests.RandString(10), desoParams) + defer cleanupFunc() + + nodeClient := testConfig.NodeClient + coinUser := testConfig.TestUsers[0] + + // Mint some DAO coins for the coin user. + mintDaoCoinReq := &routes.DAOCoinRequest{ + UpdaterPublicKeyBase58Check: coinUser.PublicKeyBase58, + ProfilePublicKeyBase58CheckOrUsername: coinUser.PublicKeyBase58, + OperationType: routes.DAOCoinOperationStringMint, + CoinsToMintNanos: *uint256.NewInt(0).SetUint64(123212312324), + TransferRestrictionStatus: routes.TransferRestrictionStatusStringUnrestricted, + MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB, + TransactionFees: nil, + } + + _, txnRes, err := nodeClient.DAOCoins(mintDaoCoinReq, coinUser.PrivateKey, false, true) + require.NoError(t, err) + + txnHash := txnRes.TxnHashHex + + // Verify the state changes for the mint transaction apply as expected. + VerifyTransactionStateChanges(t, testHandler, txnHash, lib.EncoderTypeBalanceEntry, nil, nil, + func(entryRes *EntryScanResult, targetBadgerKeyBytes *[]byte) error { + // TODO: Move these to a generic function param. 
+ balanceEntries, balanceAncestralEntries, err := DecodeStateChangeEntries[*lib.BalanceEntry](entryRes.EntryBatch) + if err != nil { + return err + } + require.Len(t, balanceEntries, 1) + require.Len(t, balanceAncestralEntries, 1) + require.NotNil(t, balanceEntries[0]) + balanceEntry := *balanceEntries[0] + require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos)) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], testHandler.Params), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], testHandler.Params), mintDaoCoinReq.UpdaterPublicKeyBase58Check) + ancestralBalanceEntry := balanceAncestralEntries[0] + require.Nil(t, ancestralBalanceEntry) + return nil + }, + func(entryRes *EntryScanResult, targetBadgerKeyBytes *[]byte) error { + require.Equal(t, entryRes.OperationType, lib.DbOperationTypeDelete) + balanceEntries, balanceAncestralEntries, err := DecodeStateChangeEntries[*lib.BalanceEntry](entryRes.EntryBatch) + if err != nil { + return err + } + require.Len(t, balanceEntries, 1) + require.Len(t, balanceAncestralEntries, 1) + require.NotNil(t, balanceEntries[0]) + balanceEntry := *balanceEntries[0] + require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos)) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], testHandler.Params), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], testHandler.Params), mintDaoCoinReq.UpdaterPublicKeyBase58Check) + ancestralBalanceEntry := balanceAncestralEntries[0] + require.Nil(t, ancestralBalanceEntry) + return nil + }, + ) + + // Lock coins for the coin user. 
+ lockCoinsReq := &routes.CoinLockupRequest{ + TransactorPublicKeyBase58Check: coinUser.PublicKeyBase58, + ProfilePublicKeyBase58Check: coinUser.PublicKeyBase58, + RecipientPublicKeyBase58Check: coinUser.PublicKeyBase58, + UnlockTimestampNanoSecs: time.Now().UnixNano() + 1000000000000, + VestingEndTimestampNanoSecs: time.Now().UnixNano() + 1000000000000, + LockupAmountBaseUnits: uint256.NewInt(0).SetUint64(1e9), + ExtraData: nil, + MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB, + TransactionFees: nil, + } + + _, txnRes, err = nodeClient.LockCoins(lockCoinsReq, coinUser.PrivateKey, false, true) + require.NoError(t, err) + + txnHash = txnRes.TxnHashHex + + // Verify the state changes for the lock transaction apply as expected. + VerifyTransactionStateChanges(t, testHandler, txnHash, lib.EncoderTypeLockedBalanceEntry, nil, nil, + func(entryRes *EntryScanResult, targetBadgerKeyBytes *[]byte) error { + lockedBalanceEntries, lockedBalanceAncestralEntries, err := DecodeStateChangeEntries[*lib.LockedBalanceEntry](entryRes.EntryBatch) + if err != nil { + return err + } + require.Len(t, lockedBalanceEntries, 1) + require.Len(t, lockedBalanceAncestralEntries, 1) + require.NotNil(t, lockedBalanceEntries[0]) + lockedBalanceEntry := *lockedBalanceEntries[0] + require.True(t, lockedBalanceEntry.BalanceBaseUnits.Eq(lockCoinsReq.LockupAmountBaseUnits)) + require.Equal(t, lockCoinsReq.TransactorPublicKeyBase58Check, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.ProfilePKID[:], desoParams)) + require.Equal(t, lockCoinsReq.ProfilePublicKeyBase58Check, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.HODLerPKID[:], desoParams)) + require.Equal(t, lockCoinsReq.UnlockTimestampNanoSecs, lockedBalanceEntry.UnlockTimestampNanoSecs) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.HODLerPKID[:], desoParams), lockCoinsReq.RecipientPublicKeyBase58Check) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.ProfilePKID[:], 
desoParams), lockCoinsReq.ProfilePublicKeyBase58Check) + ancestralLockedBalanceEntry := lockedBalanceAncestralEntries[0] + require.Nil(t, ancestralLockedBalanceEntry) + return nil + + }, + func(entryRes *EntryScanResult, targetBadgerKeyBytes *[]byte) error { + require.Equal(t, entryRes.OperationType, lib.DbOperationTypeDelete) + lockedBalanceEntries, lockedBalanceAncestralEntries, err := DecodeStateChangeEntries[*lib.LockedBalanceEntry](entryRes.EntryBatch) + if err != nil { + return err + } + require.Len(t, lockedBalanceEntries, 1) + require.Len(t, lockedBalanceAncestralEntries, 1) + require.NotNil(t, lockedBalanceEntries[0]) + lockedBalanceEntry := *lockedBalanceEntries[0] + require.True(t, lockedBalanceEntry.BalanceBaseUnits.Eq(lockCoinsReq.LockupAmountBaseUnits)) + require.Equal(t, lockCoinsReq.TransactorPublicKeyBase58Check, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.ProfilePKID[:], desoParams)) + require.Equal(t, lockCoinsReq.ProfilePublicKeyBase58Check, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.HODLerPKID[:], desoParams)) + require.Equal(t, lockCoinsReq.UnlockTimestampNanoSecs, lockedBalanceEntry.UnlockTimestampNanoSecs) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.HODLerPKID[:], desoParams), lockCoinsReq.RecipientPublicKeyBase58Check) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(lockedBalanceEntry.ProfilePKID[:], desoParams), lockCoinsReq.ProfilePublicKeyBase58Check) + ancestralLockedBalanceEntry := lockedBalanceAncestralEntries[0] + require.Nil(t, ancestralLockedBalanceEntry) + return nil + }, + ) +} + +func TestConsumerBulk(t *testing.T) { + + desoParams := &lib.DeSoTestnetParams + testConfig, testHandler, _, _, _, cleanupFunc := SetupConsumerTestEnvironment(t, 3, pdh_tests.RandString(10), desoParams) + defer cleanupFunc() + + parallelism := 60 + + nodeClient := testConfig.NodeClient + postUser := testConfig.TestUsers[0] + + createPostReq := &routes.SubmitPostRequest{ + 
UpdaterPublicKeyBase58Check: postUser.PublicKeyBase58,
+ BodyObj: &lib.DeSoBodySchema{
+ Body: "Test Post",
+ },
+ MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB,
+ }
+
+ submitPostRes, txnRes, err := nodeClient.SubmitPost(createPostReq, postUser.PrivateKey, false, true)
+ require.NoError(t, err)
+ postHash := txnRes.PostEntryResponse.PostHashHex
+
+ // Create a slice to store the random post associations.
+ randomPostAssocs := make([]string, parallelism)
+
+ txnHashes := make([]string, parallelism)
+
+ fmt.Printf("Post hash: %s\n", txnRes.PostEntryResponse.PostHashHex)
+ fmt.Printf("Submit post response: %+v\n", submitPostRes)
+
+ // Generate the post associations.
+ for ii := 0; ii < parallelism; ii++ {
+ randomPostAssoc := pdh_tests.RandString(10)
+
+ // Try to create the post association up to 10 times, in case it fails.
+ for jj := 0; jj < 10; jj++ {
+ createPostAssocReq := &routes.CreatePostAssociationRequest{
+ TransactorPublicKeyBase58Check: postUser.PublicKeyBase58,
+ AppPublicKeyBase58Check: postUser.PublicKeyBase58,
+ PostHashHex: postHash,
+ AssociationType: randomPostAssoc,
+ AssociationValue: randomPostAssoc,
+ MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB,
+ }
+
+ _, txnRes, err = nodeClient.CreatePostAssociation(createPostAssocReq, postUser.PrivateKey, false, true)
+ if err == nil {
+ break
+ }
+ }
+ require.NoError(t, err)
+
+ txnHash := txnRes.TxnHashHex
+
+ txnHashes[ii] = txnHash
+ randomPostAssocs[ii] = randomPostAssoc
+ }
+
+ // Verify each transaction in sequence.
+ for ii := 0; ii < parallelism; ii++ {
+ txnHash := txnHashes[ii]
+ randomPostAssoc := randomPostAssocs[ii]
+ txnHashBytes, err := hex.DecodeString(txnHash)
+ require.NoError(t, err)
+
+ dbKeyBytes := lib.DBKeyForPostAssociationByID(&lib.PostAssociationEntry{
+ AssociationID: lib.NewBlockHash(txnHashBytes),
+ })
+
+ // Verify the state changes for the post association transaction apply as expected.
+ VerifyTransactionStateChanges(t, testHandler, txnHash, lib.EncoderTypePostAssociationEntry, &dbKeyBytes, testHandler.ConsumedEvents,
+ func(entryRes *EntryScanResult, targetBadgerKeyBytes *[]byte) error {
+
+ require.Greater(t, len(entryRes.EntryBatch), 0)
+
+ stateChangeEntry := entryRes.EntryBatch[0]
+ found := false
+ // If we have target badger key bytes, we need to find the corresponding state change entry in the batch.
+ if targetBadgerKeyBytes != nil {
+ for _, entry := range entryRes.EntryBatch {
+ if bytes.Equal(entry.KeyBytes, *targetBadgerKeyBytes) {
+ stateChangeEntry = entry
+ found = true
+ break
+ }
+ }
+ require.Truef(t, found, "Failed to find state change entry in batch for target key bytes %s", hex.EncodeToString(*targetBadgerKeyBytes))
+ }
+
+ postAssocEntries, postAssocAncestralEntries, err := DecodeStateChangeEntries[*lib.PostAssociationEntry]([]*lib.StateChangeEntry{stateChangeEntry})
+ if err != nil {
+ return err
+ }
+
+ require.Len(t, postAssocEntries, 1)
+ require.Len(t, postAssocAncestralEntries, 1)
+ require.NotNil(t, postAssocEntries[0])
+ postAssocEntry := *postAssocEntries[0]
+
+ require.Equal(t, postUser.PublicKeyBase58, consumer.PublicKeyBytesToBase58Check(postAssocEntry.AppPKID[:], desoParams))
+ require.Equal(t, postUser.PublicKeyBase58, consumer.PublicKeyBytesToBase58Check(postAssocEntry.TransactorPKID[:], desoParams))
+ require.Equal(t, postHash, hex.EncodeToString(postAssocEntry.PostHash[:]))
+ require.Equal(t, randomPostAssoc, string(postAssocEntry.AssociationType[:]))
+ require.Equal(t, randomPostAssoc, string(postAssocEntry.AssociationValue[:]))
+ ancestralEntry := postAssocAncestralEntries[0]
+ require.Nil(t, ancestralEntry)
+ return nil
+ },
+ func(entryRes *EntryScanResult, targetBadgerKeyBytes *[]byte) error {
+ // require.Equal(t, entryRes.OperationType, lib.DbOperationTypeDelete)
+
+ require.Greater(t, len(entryRes.EntryBatch), 0)
+
+ stateChangeEntry := entryRes.EntryBatch[0]
+ found := false
+ // If we have target badger key bytes, we need to find the corresponding state change entry in the batch.
+ if targetBadgerKeyBytes != nil { + for _, entry := range entryRes.EntryBatch { + if bytes.Equal(entry.KeyBytes, *targetBadgerKeyBytes) { + stateChangeEntry = entry + found = true + break + } + } + require.Truef(t, found, "Failed to find state change entry in batch for target key bytes %s", hex.EncodeToString(*targetBadgerKeyBytes)) + } + + postAssocEntries, postAssocAncestralEntries, err := DecodeStateChangeEntries[*lib.PostAssociationEntry]([]*lib.StateChangeEntry{stateChangeEntry}) + if err != nil { + return err + } + require.Len(t, postAssocEntries, 1) + require.Len(t, postAssocAncestralEntries, 1) + require.NotNil(t, postAssocEntries[0]) + postAssocEntry := *postAssocEntries[0] + require.Equal(t, postUser.PublicKeyBase58, consumer.PublicKeyBytesToBase58Check(postAssocEntry.AppPKID[:], desoParams)) + require.Equal(t, postUser.PublicKeyBase58, consumer.PublicKeyBytesToBase58Check(postAssocEntry.TransactorPKID[:], desoParams)) + require.Equal(t, postHash, hex.EncodeToString(postAssocEntry.PostHash[:])) + require.Equal(t, randomPostAssoc, string(postAssocEntry.AssociationType[:])) + require.Equal(t, randomPostAssoc, string(postAssocEntry.AssociationValue[:])) + ancestralEntry := postAssocAncestralEntries[0] + require.Nil(t, ancestralEntry) + return nil + }, + ) + } + +} + +func TestRemoveTransaction(t *testing.T) { + + desoParams := &lib.DeSoTestnetParams + testConfig, testHandler, _, nodeServer, _, cleanupFunc := SetupConsumerTestEnvironment(t, 3, pdh_tests.RandString(10), desoParams) + defer cleanupFunc() + + nodeClient := testConfig.NodeClient + coinUser := testConfig.TestUsers[0] + + // Mint some DAO coins for the coin user. 
+ mintDaoCoinReq := &routes.DAOCoinRequest{ + UpdaterPublicKeyBase58Check: coinUser.PublicKeyBase58, + ProfilePublicKeyBase58CheckOrUsername: coinUser.PublicKeyBase58, + OperationType: routes.DAOCoinOperationStringMint, + CoinsToMintNanos: *uint256.NewInt(0).SetUint64(123212312324), + TransferRestrictionStatus: routes.TransferRestrictionStatusStringUnrestricted, + MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB, + TransactionFees: nil, + } + + _, txnRes, err := nodeClient.DAOCoins(mintDaoCoinReq, coinUser.PrivateKey, false, true) + require.NoError(t, err) + + txnHash := txnRes.TxnHashHex + + // Wait for the transaction to be consumed by the state consumer. + mintCoinTxnRes, err := testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetTransactionHash: &txnHash, + exitWhenEmpty: false, + }) + require.NoError(t, err) + + preceedingBatchEvents := mintCoinTxnRes.ConsumedEvents + var balanceEntryRes *EntryScanResult + + err = nodeServer.GetMempool().RemoveTransaction(txnRes.Transaction.Hash()) + require.NoError(t, err) + flushId := mintCoinTxnRes.FlushId + + // Wait for the next balance entry operation. 
+ balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + precedingEvents: preceedingBatchEvents, + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + exitWhenEmpty: false, + }) + require.NoError(t, err) + require.Equal(t, balanceEntryRes.IsMempool, true) + require.Equal(t, balanceEntryRes.FlushId, flushId) + + require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry) + require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeUpsert) + require.Equal(t, balanceEntryRes.IsReverted, false) + balanceEntries, balanceAncestralEntries, err := DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch) + require.NoError(t, err) + require.Len(t, balanceEntries, 1) + require.Len(t, balanceAncestralEntries, 1) + require.NotNil(t, balanceEntries[0]) + balanceEntry := *balanceEntries[0] + require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos)) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check) + ancestralBalanceEntry := balanceAncestralEntries[0] + require.Nil(t, ancestralBalanceEntry) + + // This entry should be the revert of the previous entry. 
+ balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + exitWhenEmpty: false, + }) + require.NoError(t, err) + require.Equal(t, balanceEntryRes.IsMempool, true) + require.Equal(t, balanceEntryRes.FlushId, flushId) + + require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry) + require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeUpsert) + require.Equal(t, balanceEntryRes.IsReverted, true) + balanceEntries, balanceAncestralEntries, err = DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch) + require.NoError(t, err) + require.Len(t, balanceEntries, 1) + require.Len(t, balanceAncestralEntries, 1) + require.NotNil(t, balanceEntries[0]) + balanceEntry = *balanceEntries[0] + require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos)) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check) + ancestralBalanceEntry = balanceAncestralEntries[0] + require.Nil(t, ancestralBalanceEntry) + + // This entry should be the revert of the revert. 
+ balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + exitWhenEmpty: false, + }) + require.NoError(t, err) + require.Equal(t, balanceEntryRes.IsMempool, true) + require.Equal(t, balanceEntryRes.FlushId, flushId) + + require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry) + require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeDelete) + require.Equal(t, balanceEntryRes.IsReverted, true) + balanceEntries, balanceAncestralEntries, err = DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch) + require.NoError(t, err) + require.Len(t, balanceEntries, 1) + require.Len(t, balanceAncestralEntries, 1) + require.NotNil(t, balanceEntries[0]) + balanceEntry = *balanceEntries[0] + require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos)) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check) + ancestralBalanceEntry = balanceAncestralEntries[0] + require.Nil(t, ancestralBalanceEntry) + + // This entry should be the revert of the original. 
+ balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + exitWhenEmpty: false, + }) + require.NoError(t, err) + require.Equal(t, balanceEntryRes.IsMempool, true) + require.Equal(t, balanceEntryRes.FlushId, flushId) + + require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry) + require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeDelete) + require.Equal(t, balanceEntryRes.IsReverted, false) + balanceEntries, balanceAncestralEntries, err = DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch) + require.NoError(t, err) + require.Len(t, balanceEntries, 1) + require.Len(t, balanceAncestralEntries, 1) + require.NotNil(t, balanceEntries[0]) + balanceEntry = *balanceEntries[0] + require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos)) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername) + require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check) + ancestralBalanceEntry = balanceAncestralEntries[0] + require.Nil(t, ancestralBalanceEntry) + + // The transaction shouldn't be confirmed. + balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + precedingEvents: balanceEntryRes.ConsumedEvents, + targetConsumerEvent: &consumerEventBatch, + targetTransactionHash: &txnHash, + targetIsMempool: &falseValue, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + exitWhenEmpty: true, + }) + require.Error(t, err) + + // That should be the last balance entry operation in the queue. 
+ _, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + precedingEvents: balanceEntryRes.ConsumedEvents, + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + exitWhenEmpty: true, + }) + require.Error(t, err) +} + +func TestRemoveTransactionWithAncRecord(t *testing.T) { + + desoParams := &lib.DeSoTestnetParams + testConfig, testHandler, _, nodeServer, _, cleanupFunc := SetupConsumerTestEnvironment(t, 3, pdh_tests.RandString(10), desoParams) + defer cleanupFunc() + + nodeClient := testConfig.NodeClient + coinUser := testConfig.TestUsers[0] + + // Mint some DAO coins for the coin user. + mintDaoCoinReq := &routes.DAOCoinRequest{ + UpdaterPublicKeyBase58Check: coinUser.PublicKeyBase58, + ProfilePublicKeyBase58CheckOrUsername: coinUser.PublicKeyBase58, + OperationType: routes.DAOCoinOperationStringMint, + CoinsToMintNanos: *uint256.NewInt(0).SetUint64(1e9 + 1), + TransferRestrictionStatus: routes.TransferRestrictionStatusStringUnrestricted, + MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB, + TransactionFees: nil, + } + + _, txnRes, err := nodeClient.DAOCoins(mintDaoCoinReq, coinUser.PrivateKey, false, true) + require.NoError(t, err) + + txnHash := txnRes.TxnHashHex + + // Wait for the transaction to be consumed by the state consumer and mined. 
+ _, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetTransactionHash: &txnHash, + targetIsMempool: &falseValue, + exitWhenEmpty: false, + }) + require.NoError(t, err) + + _, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + targetIsMempool: &falseValue, + exitWhenEmpty: false, + }) + require.NoError(t, err) + + _, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{ + targetConsumerEvent: &consumerEventBatch, + targetEncoderTypes: []lib.EncoderType{lib.EncoderTypeBalanceEntry}, + targetIsMempool: &falseValue, + exitWhenEmpty: false, + }) + require.NoError(t, err) + + // Clear out the rest of the event channel. + testHandler.ConsumeAllEvents() + + // Mint some DAO coins for the coin user again. This will create a new balance entry, with an ancestral record being the old balance entry. + mintDaoCoinReq2 := &routes.DAOCoinRequest{ + UpdaterPublicKeyBase58Check: coinUser.PublicKeyBase58, + ProfilePublicKeyBase58CheckOrUsername: coinUser.PublicKeyBase58, + OperationType: routes.DAOCoinOperationStringMint, + CoinsToMintNanos: *uint256.NewInt(0).SetUint64(1234982123), + TransferRestrictionStatus: routes.TransferRestrictionStatusStringUnrestricted, + MinFeeRateNanosPerKB: pdh_tests.FeeRateNanosPerKB, + TransactionFees: nil, + } + + _, txnRes2, err := nodeClient.DAOCoins(mintDaoCoinReq2, coinUser.PrivateKey, false, true) + require.NoError(t, err) + + txnHash2 := txnRes2.TxnHashHex + + // Wait for the transaction to be consumed by the state consumer. 
+	searchRes, err := testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+		targetConsumerEvent:   &consumerEventBatch,
+		targetTransactionHash: &txnHash2,
+		exitWhenEmpty:         false,
+	})
+	require.NoError(t, err)
+
+	err = nodeServer.GetMempool().RemoveTransaction(txnRes2.Transaction.Hash())
+	require.NoError(t, err)
+
+	// We expect to see the balance entry operation coming from the mempool.
+	balanceEntryRes, err := testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+		precedingEvents:     searchRes.ConsumedEvents,
+		targetConsumerEvent: &consumerEventBatch,
+		targetEncoderTypes:  []lib.EncoderType{lib.EncoderTypeBalanceEntry},
+		exitWhenEmpty:       false,
+	})
+	require.NoError(t, err)
+	require.Equal(t, balanceEntryRes.IsMempool, true)
+	require.Equal(t, balanceEntryRes.FlushId, searchRes.FlushId)
+
+	require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry)
+	require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeUpsert)
+	require.Equal(t, balanceEntryRes.IsReverted, false)
+	balanceEntries, balanceAncestralEntries, err := DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch)
+	require.NoError(t, err)
+	require.Len(t, balanceEntries, 1)
+	require.Len(t, balanceAncestralEntries, 1)
+	require.NotNil(t, balanceEntries[0])
+	balanceEntry := *balanceEntries[0]
+	totalBalance := uint256.NewInt(0).Add(&mintDaoCoinReq.CoinsToMintNanos, &mintDaoCoinReq2.CoinsToMintNanos)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq2.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq2.UpdaterPublicKeyBase58Check)
+	require.True(t, balanceEntry.BalanceNanos.Eq(totalBalance), "Balance: %s, Total balance: %s", balanceEntry.BalanceNanos.String(), totalBalance.String())
+	require.NotNil(t, balanceAncestralEntries[0])
+	ancestralBalanceEntry := *balanceAncestralEntries[0]
+	require.True(t, ancestralBalanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos))
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check)
+
+	// We expect the next entry to be a revert of the previous entry.
+	balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+		targetConsumerEvent: &consumerEventBatch,
+		targetEncoderTypes:  []lib.EncoderType{lib.EncoderTypeBalanceEntry},
+		exitWhenEmpty:       false,
+	})
+	require.NoError(t, err)
+	require.Equal(t, balanceEntryRes.IsMempool, true)
+	require.Equal(t, balanceEntryRes.FlushId, searchRes.FlushId)
+
+	require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry)
+	require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeUpsert)
+	require.Equal(t, balanceEntryRes.IsReverted, true)
+	balanceEntries, balanceAncestralEntries, err = DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch)
+	require.NoError(t, err)
+	require.Len(t, balanceEntries, 1)
+	require.Len(t, balanceAncestralEntries, 1)
+	require.NotNil(t, balanceEntries[0])
+	balanceEntry = *balanceEntries[0]
+	totalBalance = uint256.NewInt(0).Add(&mintDaoCoinReq.CoinsToMintNanos, &mintDaoCoinReq2.CoinsToMintNanos)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq2.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq2.UpdaterPublicKeyBase58Check)
+	require.True(t, balanceEntry.BalanceNanos.Eq(totalBalance), "Balance: %s, Total balance: %s", balanceEntry.BalanceNanos.String(), totalBalance.String())
+	require.NotNil(t, balanceAncestralEntries[0])
+	ancestralBalanceEntry = *balanceAncestralEntries[0]
+	require.True(t, ancestralBalanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos))
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check)
+
+	// We expect the next entry to be a revert of the previous entry.
+	balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+		targetConsumerEvent: &consumerEventBatch,
+		targetEncoderTypes:  []lib.EncoderType{lib.EncoderTypeBalanceEntry},
+		exitWhenEmpty:       false,
+	})
+	require.NoError(t, err)
+	require.Equal(t, balanceEntryRes.IsMempool, true)
+	require.Equal(t, balanceEntryRes.FlushId, searchRes.FlushId)
+
+	require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry)
+	require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeUpsert)
+	require.Equal(t, balanceEntryRes.IsReverted, true)
+	balanceEntries, balanceAncestralEntries, err = DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch)
+	require.NoError(t, err)
+	require.Len(t, balanceEntries, 1)
+	require.Len(t, balanceAncestralEntries, 1)
+	require.NotNil(t, balanceEntries[0])
+	balanceEntry = *balanceEntries[0]
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq2.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq2.UpdaterPublicKeyBase58Check)
+
+	require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos), "Balance: %s, Expected balance: %s", balanceEntry.BalanceNanos.String(), mintDaoCoinReq.CoinsToMintNanos.String())
+	require.NotNil(t, balanceAncestralEntries[0])
+	ancestralBalanceEntry = *balanceAncestralEntries[0]
+	require.True(t, ancestralBalanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos))
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check)
+
+	// This should be a revert of the revert.
+	balanceEntryRes, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+		targetConsumerEvent: &consumerEventBatch,
+		targetEncoderTypes:  []lib.EncoderType{lib.EncoderTypeBalanceEntry},
+		exitWhenEmpty:       false,
+	})
+	require.NoError(t, err)
+	require.Equal(t, balanceEntryRes.IsMempool, true)
+	require.Equal(t, balanceEntryRes.FlushId, searchRes.FlushId)
+
+	require.Equal(t, balanceEntryRes.EncoderType, lib.EncoderTypeBalanceEntry)
+	require.Equal(t, balanceEntryRes.OperationType, lib.DbOperationTypeUpsert)
+	require.Equal(t, balanceEntryRes.IsReverted, false)
+	balanceEntries, balanceAncestralEntries, err = DecodeStateChangeEntries[*lib.BalanceEntry](balanceEntryRes.EntryBatch)
+	require.NoError(t, err)
+	require.Len(t, balanceEntries, 1)
+	require.Len(t, balanceAncestralEntries, 1)
+	require.NotNil(t, balanceEntries[0])
+	balanceEntry = *balanceEntries[0]
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq2.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(balanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq2.UpdaterPublicKeyBase58Check)
+	require.True(t, balanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos), "Balance: %s, Expected balance: %s", balanceEntry.BalanceNanos.String(), mintDaoCoinReq.CoinsToMintNanos.String())
+	require.NotNil(t, balanceAncestralEntries[0])
+	ancestralBalanceEntry = *balanceAncestralEntries[0]
+	require.True(t, ancestralBalanceEntry.BalanceNanos.Eq(&mintDaoCoinReq.CoinsToMintNanos))
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.HODLerPKID[:], desoParams), mintDaoCoinReq.ProfilePublicKeyBase58CheckOrUsername)
+	require.Equal(t, consumer.PublicKeyBytesToBase58Check(ancestralBalanceEntry.CreatorPKID[:], desoParams), mintDaoCoinReq.UpdaterPublicKeyBase58Check)
+
+	// The transaction shouldn't be confirmed.
+	_, err = testHandler.WaitForMatchingEntryBatch(&ConsumerEventSearch{
+		precedingEvents:       balanceEntryRes.ConsumedEvents,
+		targetConsumerEvent:   &consumerEventBatch,
+		targetTransactionHash: &txnHash2,
+		targetIsMempool:       &falseValue,
+		exitWhenEmpty:         true,
+	})
+	require.Error(t, err)
+}