
feat: Improve HTTP client caching, memory configuration and extension enhancements #117

Merged

lcomplete merged 3 commits into main from dev on Mar 7, 2026
Conversation

@lcomplete (Owner)

Summary

This PR includes several improvements across the server and browser extension:

Server Improvements

  • HTTP Client Caching: Enhanced HttpUtils with caching and client management for feed requests
  • Memory Configuration: Increased JAVA_ARGS max heap size to 1024m in Dockerfile
  • Connection Timeout: Ensured connection timeout is properly set in application.yml

Browser Extension Improvements

  • Context Menu: Added DEV flag to context menu titles in development mode
  • Article Preview: Simplified title rendering logic by removing isXTwitterSite function

Other Changes

  • Added .agents directory to .gitignore


augmentcode bot commented Mar 7, 2026

🤖 Augment PR Summary

Summary: This PR improves server-side feed fetching behavior and runtime configuration.

Changes:

  • Reuses an `OkHttpClient` (keyed by proxy + timeout) and shares a single disk `Cache` for feed requests to reduce repeated client/cache construction.
  • Normalizes RSS feed fetch/parse failure handling by wrapping `IOException`/`FeedException` in `ConnectorFetchException`.
  • Closes Lucene `DirectoryReader` instances via try-with-resources in both indexing and searching paths.
  • Adjusts runtime settings: bumps JVM max heap (`-Xmx`) to 1024m and sets Hikari `connection-timeout` to 60s.
  • Adds `.agents/` to `.gitignore`.
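The try-with-resources change for Lucene readers listed above can be sketched as follows; this is a minimal illustration only, where `StubReader` is a stand-in for Lucene's `DirectoryReader` (which implements `Closeable`), not the project's actual code:

```java
// Hypothetical sketch: StubReader stands in for Lucene's DirectoryReader.
public class ReaderCloseSketch {
    public static class StubReader implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static boolean searchAndClose(StubReader reader) {
        // try-with-resources guarantees close() runs even if the body throws,
        // which is the leak the PR fixes in the indexing and searching paths.
        try (reader) {
            // ... run the index/search work against the reader here ...
        }
        return reader.closed;
    }

    public static void main(String[] args) {
        System.out.println(searchAndClose(new StubReader())); // true
    }
}
```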

Technical Notes: Feed client caching is now process-wide (static), so behavior depends on the proxy/timeout key and shared cache directory.
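The process-wide, keyed client caching described above can be sketched with a `ConcurrentHashMap`; `FeedClient` and `ClientKey` here are hypothetical stand-ins for `OkHttpClient` and whatever key type the PR actually uses, shown only to illustrate the reuse pattern:

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a process-wide client cache keyed by proxy + timeout.
public class ClientCacheSketch {
    public record ClientKey(String proxyHostPort, Duration timeout) {}

    public static class FeedClient {
        final ClientKey key;
        FeedClient(ClientKey key) { this.key = key; }
    }

    private static final Map<ClientKey, FeedClient> CLIENTS = new ConcurrentHashMap<>();

    public static FeedClient getClient(String proxyHostPort, Duration timeout) {
        ClientKey key = new ClientKey(proxyHostPort, timeout);
        // computeIfAbsent builds each distinct (proxy, timeout) client once,
        // then reuses it for every later feed request with the same settings.
        return CLIENTS.computeIfAbsent(key, FeedClient::new);
    }

    public static void main(String[] args) {
        FeedClient a = getClient(null, Duration.ofSeconds(30));
        FeedClient b = getClient(null, Duration.ofSeconds(30));
        System.out.println(a == b); // true: same key, same cached client
    }
}
```

Because the cache is static, all feeds sharing a proxy/timeout combination share one client (and, per the note above, one disk cache directory).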

🤖 Was this summary useful? React with 👍 or 👎


@augmentcode bot left a comment

Review completed. 2 suggestions posted.

Comment `augment review` to trigger a new review at any time.

```java
.cache(FEED_CACHE)
.connectionSpecs(Arrays.asList(ConnectionSpec.MODERN_TLS, ConnectionSpec.COMPATIBLE_TLS, ConnectionSpec.CLEARTEXT))
.followRedirects(true);
if (proxySetting != null && StringUtils.isNotBlank(proxySetting.getHost())) {
```

The proxy guard only checks host, but proxySetting.getPort() is unboxed when constructing InetSocketAddress and will NPE if the port is null. Consider also guarding port here to avoid crashing feed fetch when proxy config is partially populated.

Severity: medium

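The suggested guard could look like the following; this is a hedged sketch only, where `ProxySetting` is a stand-in for the project's settings class (with a nullable `Integer` port) rather than its real definition:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

// Hypothetical sketch of the suggested proxy guard.
public class ProxyGuardSketch {
    public record ProxySetting(String host, Integer port) {}

    public static Proxy buildProxy(ProxySetting s) {
        // Guard both host AND port: unboxing a null Integer port when
        // constructing InetSocketAddress would throw a NullPointerException.
        if (s == null || s.host() == null || s.host().isBlank() || s.port() == null) {
            return Proxy.NO_PROXY;
        }
        return new Proxy(Proxy.Type.HTTP,
                InetSocketAddress.createUnresolved(s.host(), s.port()));
    }

    public static void main(String[] args) {
        // A partially populated proxy config no longer crashes feed fetch.
        System.out.println(buildProxy(new ProxySetting("proxy.example", null)));
        System.out.println(buildProxy(new ProxySetting("proxy.example", 8080)));
    }
}
```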

```java
var size = PageSizeUtils.getPageSize(searchQuery.getSize(), 100);
var maxPage = 10000;
int startIndex = (page - 1) * size;
TopScoreDocCollector collector = TopScoreDocCollector.create(page * size, maxPage);
```

If a request passes page <= 0, then startIndex = (page - 1) * size and TopScoreDocCollector.create(page * size, ...) can go negative/zero and Lucene may throw at query time. Consider clamping page to at least 1 before using it in these calculations.

Severity: medium

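The clamp suggested above can be sketched with plain arithmetic; `PageSizeUtils` and the Lucene collector are omitted here, so this is an illustration of the bounds fix rather than the project's actual method:

```java
// Hypothetical sketch of the suggested page clamp.
public class PageClampSketch {
    public static int clampPage(int page) {
        // Treat any page <= 0 as the first page, so neither the start
        // index nor the collector size can go negative or zero.
        return Math.max(page, 1);
    }

    public static int startIndex(int page, int size) {
        return (clampPage(page) - 1) * size;
    }

    public static void main(String[] args) {
        System.out.println(startIndex(0, 100)); // 0, not -100
        System.out.println(startIndex(3, 100)); // 200
    }
}
```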

@lcomplete lcomplete merged commit ebe6dd2 into main Mar 7, 2026
1 check passed
