Replies: 3 comments
I am hosting my own instance on an Oracle Free Tier (ARM) server with 4 vCPUs and 24 GB RAM. I am still running v1.7.0, and that instance has been up for close to one year non-stop.

I checked the commits between v1.7.0 and v1.8.1 and couldn't find any code changes that directly influence the speed at which certificates are processed. Only one thing came to mind, and that's the configurable buffer sizes (see this commit and #53). Did you configure those settings?

My server is barely under load, and certstream is not limited by the CPU either. I connected with a new client, randomly picked multiple certs, and compared their indices with the current tree sizes (e.g. Google xenon2026h1: 2222529674). Most of the certs were pretty recent, and the indices were all pretty close to the tree_size, except for the TrustAsia log, where there was a gap of about 743 million certs.

From my experience, this sadly depends heavily on the CT log provider / the CA. I have barely ever had issues with Google logs, while Sectigo and TrustAsia logs were unreliable.

In short: I don't observe that behavior, at least not on v1.7.0. Can you share whether the issue affected all logs the same? Also, did you set up a Grafana dashboard for certstream? Can you see a decline in processing rates for certain logs?

Lastly, one more thing comes to mind: the CT log operator might limit the bandwidth to your IP. If you're running two instances in parallel, the bandwidth might be limited in a way that you can't process all the certificates twice.
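On the buffer-size point: certstream-style servers typically fan certificates out to consumers over buffered channels, and an undersized buffer either drops entries or stalls the whole pipeline. This is a minimal sketch of that failure mode, not the project's actual code; `trySend` is a hypothetical helper illustrating a non-blocking broadcast:

```go
package main

import "fmt"

// trySend mimics a non-blocking send to a consumer channel: if the
// consumer's buffer is full, the value is dropped (in a blocking
// design, the producer would stall here instead).
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	small := make(chan int, 2) // deliberately undersized buffer
	sent, dropped := 0, 0
	// No consumer is reading, so only the buffer capacity is accepted.
	for i := 0; i < 5; i++ {
		if trySend(small, i) {
			sent++
		} else {
			dropped++
		}
	}
	fmt.Printf("sent=%d dropped=%d\n", sent, dropped) // prints sent=2 dropped=3
}
```

The same logic explains why raising the configured buffer sizes can change effective processing speed even though no fetching code changed.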
Thank you for your response. Yes, the largest delays we observe are on TrustAsia and argon2026h1; other logs are not affected. We did not configure buffer_size. To check the delay for all logs, I used this code.
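The snippet itself isn't included in the thread, but a minimal sketch of such a delay check could look like this. It assumes the standard RFC 6962 `get-sth` endpoint (`<log URL>/ct/v1/get-sth`, which returns a JSON body with a `tree_size` field); the `backlog` helper and the canned response are illustrative, not the original code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// signedTreeHead mirrors the relevant fields of the JSON returned by
// a CT log's /ct/v1/get-sth endpoint (RFC 6962).
type signedTreeHead struct {
	TreeSize  uint64 `json:"tree_size"`
	Timestamp uint64 `json:"timestamp"`
}

// backlog returns how many entries the reader is behind the log's
// current tree size.
func backlog(treeSize, lastProcessedIndex uint64) uint64 {
	if lastProcessedIndex >= treeSize {
		return 0
	}
	return treeSize - lastProcessedIndex
}

func main() {
	// In a real check you would fetch this JSON per log with
	// http.Get(logURL + "/ct/v1/get-sth"); a canned response keeps
	// the example self-contained.
	raw := []byte(`{"tree_size": 2222529674, "timestamp": 1730000000000}`)

	var sth signedTreeHead
	if err := json.Unmarshal(raw, &sth); err != nil {
		panic(err)
	}

	// lastProcessedIndex would come from your own instance, e.g. the
	// index of the newest cert seen for that log.
	lastProcessedIndex := uint64(2202529674)
	fmt.Printf("backlog: %d entries\n", backlog(sth.TreeSize, lastProcessedIndex))
	// prints backlog: 20000000 entries
}
```

Running this per configured log and graphing the result over time is the same signal a Grafana dashboard would show.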
Hello, in the ct-watcher.go file, is it possible to make the two parameters BatchSize and ParallelFetch variable, and to specify them from the configuration file? Increasing these two parameters allows more certificates to be processed.
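A sketch of what that could look like, assuming a JSON config file; the file path, field names, and default values here are illustrative, not the project's actual config schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// fetcherConfig holds the two tuning knobs asked about above. The
// names mirror the parameters mentioned in the comment; the JSON keys
// are hypothetical.
type fetcherConfig struct {
	BatchSize     int `json:"batch_size"`
	ParallelFetch int `json:"parallel_fetch"`
}

// loadFetcherConfig reads the config file if present and falls back
// to the given defaults for any missing or non-positive field, so an
// existing deployment without the new keys keeps its old behavior.
func loadFetcherConfig(path string, defaults fetcherConfig) fetcherConfig {
	cfg := defaults
	raw, err := os.ReadFile(path)
	if err != nil {
		return cfg // no config file: keep defaults
	}
	var fromFile fetcherConfig
	if err := json.Unmarshal(raw, &fromFile); err != nil {
		return cfg // unreadable config: keep defaults
	}
	if fromFile.BatchSize > 0 {
		cfg.BatchSize = fromFile.BatchSize
	}
	if fromFile.ParallelFetch > 0 {
		cfg.ParallelFetch = fromFile.ParallelFetch
	}
	return cfg
}

func main() {
	defaults := fetcherConfig{BatchSize: 1000, ParallelFetch: 1}
	cfg := loadFetcherConfig("config.json", defaults)
	fmt.Printf("BatchSize=%d ParallelFetch=%d\n", cfg.BatchSize, cfg.ParallelFetch)
	// These values would then be passed into the fetcher options used
	// in ct-watcher.go instead of hard-coded constants.
}
```

Keeping the hard-coded values as defaults means the change is backwards compatible for anyone not setting the new keys.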

In parallel with our tests on version v1.9.0-beta, we have an instance deployed with version 1.8.1.
The server used is in Europe and has the following characteristics:
We do not add any additional CT logs.
The server is not over-used in terms of RAM, CPU, or I/O.
We observe a read delay whose severity varies depending on the CT log.

Steps:

For example, after 72h on argon2026h1, we are around 20M certificates behind. In short, within a day we are able to read fewer certificates than the CT log adds.
Do you observe the same behavior?
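For scale, those numbers imply a sustained per-log shortfall of roughly 77 certificates per second. A rough calculation, assuming the 20M / 72h figures are exact:

```go
package main

import "fmt"

// shortfall converts an accumulated backlog and the window in which
// it built up into a certificates-per-second deficit.
func shortfall(behind, hours float64) float64 {
	return behind / (hours * 3600)
}

func main() {
	// 20M certs behind after 72h, as reported above.
	fmt.Printf("shortfall: %.1f certs/s\n", shortfall(20_000_000, 72))
	// prints shortfall: 77.2 certs/s
}
```

That deficit is the minimum extra sustained read rate needed just to stop falling further behind, before any catch-up of the existing backlog.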