
fix: memory, cpu usage & prisma connection limit#1114

Draft
chedieck wants to merge 17 commits into master from fix/reduce-memory-spent

Conversation

@chedieck
Collaborator

@chedieck chedieck commented Feb 25, 2026

WIP

Changes removed and PR marked as draft due to errors in prod.

Related to #

Description

We were getting errors in production related to Prisma having too many connections open. This PR puts a limit (20) on them so that further connection requests are queued.
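Prisma reads the pool size from a `connection_limit` query parameter on the datasource URL, and requests beyond the pool wait in an internal queue. A minimal sketch of appending it (the helper name and the hard-coded 20 are assumptions based on this description, not necessarily the PR's exact code):

```typescript
// Sketch: append a connection_limit to the datasource URL so Prisma caps
// its pool and queues requests beyond it. Helper name is illustrative.
const CONNECTION_LIMIT = 20

function withConnectionLimit (baseUrl: string): string {
  // Preserve any existing query string on the URL.
  const separator = baseUrl.includes('?') ? '&' : '?'
  return `${baseUrl}${separator}connection_limit=${CONNECTION_LIMIT}`
}
```

The resulting URL would then be handed to `PrismaClient` as its datasource URL.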

Also, in my local dev environment, initJobs was using a lot of CPU and failing from memory exhaustion, so I decreased the constants that control the initial sync's parallel batch sizes to reduce the load.

Finally, some refactoring of the function that fetches the non-synced txs, to avoid copying and storing data unnecessarily.
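One common way to avoid accumulating all fetched txs in memory is to yield them in fixed-size batches from an async generator; a sketch under that assumption (`emitInBatches` is a hypothetical name, not necessarily the PR's code; 500 mirrors the new TX_EMIT_BATCH_SIZE value):

```typescript
// Sketch: emit items in fixed-size batches instead of materializing
// the full list in memory before processing.
async function * emitInBatches<T> (
  source: AsyncIterable<T>,
  batchSize: number
): AsyncGenerator<T[]> {
  let batch: T[] = []
  for await (const item of source) {
    batch.push(item)
    if (batch.length >= batchSize) {
      yield batch
      batch = []
    }
  }
  if (batch.length > 0) {
    yield batch // final partial batch
  }
}
```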

Summary by CodeRabbit

  • Chores
    • Optimized batch processing and concurrency parameters for improved system efficiency
    • Added new configuration for payment expiration timeouts (7 days), price connection handling, and trigger logging

@coderabbitai
Contributor

coderabbitai bot commented Feb 25, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough


This pull request updates seven configuration constants in constants/index.ts. Three existing constants are modified with lower values, and four new constants are introduced for batch processing, connection handling, and payment expiration timing.

Changes

Cohort / File(s): Configuration Constants — constants/index.ts
Summary: Modified three existing constants: reduced INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY (128→5), TX_EMIT_BATCH_SIZE (2,000→500), and DB_COMMIT_BATCH_SIZE (2,000→500). Added four new constants: TRIGGER_LOG_BATCH_SIZE, PRICES_CONNECTION_BATCH_SIZE, PRICES_CONNECTION_TIMEOUT, and CLIENT_PAYMENT_EXPIRATION_TIME.
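For reference, the changed values amount to declarations of roughly this shape (only values stated in this PR are shown; the values of the three other new constants are not given here):

```typescript
// Values stated in this PR (prior values in comments); the real file
// exports these from constants/index.ts.
const INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY = 5 // was 128
const TX_EMIT_BATCH_SIZE = 500 // was 2000
const DB_COMMIT_BATCH_SIZE = 500 // was 2000
const CLIENT_PAYMENT_EXPIRATION_TIME = 7 * (24 * 60 * 60 * 1000) // 7 days
```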

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 Constants dance with values new and small,
Batches trim their sails through the digital hall,
Seven tweaks of config, a hop and a bound,
Seven days of payment, in constants we're found! ✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
  • Description check — ⚠️ Warning: the PR description is incomplete. It is marked as WIP, mentions production errors, provides no test plan, and references three changes only vaguely. Resolution: (1) remove the WIP status, (2) provide a test plan explaining how the changes were validated, (3) add specific details about the Prisma connection limit implementation, (4) clarify which constants were modified and their new values.

✅ Passed checks (1 passed)

  • Title check — ✅ Passed: the title accurately describes the main changes (reducing memory and CPU usage while adding a Prisma connection limit), which are the core objectives of this PR.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@Klakurka Klakurka requested a review from Fabcien February 25, 2026 22:57
@Klakurka Klakurka added the "enhancement (behind the scenes)" label Feb 25, 2026
@Klakurka Klakurka added this to the Phase 3 milestone Feb 25, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
services/chronikService.ts (2)

914-914: Nit: comment is in Portuguese.

// final DB flush (se sobrou menos que DB_COMMIT_BATCH_SIZE) — consider translating for consistency with the rest of the English codebase: // final DB flush (remaining < DB_COMMIT_BATCH_SIZE).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` at line 914, Translate the Portuguese inline
comment near the DB flush logic to English for consistency; replace the comment
"// final DB flush (se sobrou menos que DB_COMMIT_BATCH_SIZE)" with an English
equivalent such as "// final DB flush (remaining < DB_COMMIT_BATCH_SIZE)" in the
block where DB_COMMIT_BATCH_SIZE is referenced (the final DB commit/flush
section).

876-901: Consider extracting the duplicated commit-and-broadcast block into a helper.

The "commit batch → append to file → broadcast + triggers" logic at lines 876–901 is nearly identical to the final-flush block at lines 915–939. Extracting a private helper (e.g., commitBatch(pairs, productionAddressesIds, runTriggers)) would reduce duplication and make future changes less error-prone.

Also applies to: 915-939
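The suggested consolidation can be sketched as a single helper called from both sites (the `Pair` type, `commitBatch`, and `processAll` are simplified stand-ins, not the service's actual types or code):

```typescript
// Illustrative shape: one commitBatch helper used both inside the main
// loop and for the final flush, so the commit logic lives in one place.
interface Pair { row: string, raw: string }

async function commitBatch (pairs: Pair[]): Promise<number> {
  // Real code: createManyTransactions → appendTxsToFile → broadcast → triggers.
  return pairs.length
}

async function processAll (pairs: Pair[], batchSize: number): Promise<number> {
  let committed = 0
  let pending: Pair[] = []
  for (const pair of pairs) {
    pending.push(pair)
    if (pending.length >= batchSize) {
      committed += await commitBatch(pending) // batch commit inside the loop
      pending = []
    }
  }
  if (pending.length > 0) {
    committed += await commitBatch(pending) // final flush (remaining < batchSize)
  }
  return committed
}
```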

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 876 - 901, Extract the repeated
"commit batch → append to file → broadcast + triggers" logic into a private
helper (e.g., commitBatch) and replace both inlined blocks with calls to it: the
helper should accept the commitPairs (array of pair objects with .row and .raw),
productionAddressesIds, runTriggers and this.networkId/context (or take this as
bound via arrow/private method), call createManyTransactions(rows),
appendTxsToFile for produced transactions belonging to productionAddressesIds,
build rawByHash from commitPairs to map txids to raw, create broadcast data via
this.broadcastIncomingTx(createdTx.address.address, raw, createdTx), and if
runTriggers call executeTriggersBatch(triggerBatch, this.networkId); ensure the
helper returns or throws errors consistently and preserves the console log
currently using this.CHRONIK_MSG_PREFIX and createdTxs.length so both original
locations (the batch commit inside the loop and the final-flush block) can
simply await commitBatch(...) instead of duplicating code.
prisma-local/clientInstance.ts (1)

5-11: Consider making the connection limit configurable via environment variable.

The hard-coded CONNECTION_LIMIT = 20 works for the stated goal, but different deployment environments (staging, CI, local dev) may benefit from different pool sizes. A one-line change keeps the hard default while allowing overrides:

💡 Suggested improvement
-const CONNECTION_LIMIT = 20
+const CONNECTION_LIMIT = parseInt(process.env.PRISMA_CONNECTION_LIMIT ?? '20', 10)

Also, if DATABASE_URL is unset, baseUrl becomes '' and buildDatasourceUrl() returns "?connection_limit=20", which will make PrismaClient throw a confusing "invalid URL" error. A guard with a clear message would help debugging:

 function buildDatasourceUrl (): string {
   const baseUrl = process.env.DATABASE_URL ?? ''
+  if (baseUrl === '') {
+    throw new Error('DATABASE_URL environment variable is not set')
+  }
   const separator = baseUrl.includes('?') ? '&' : '?'
   return `${baseUrl}${separator}connection_limit=${CONNECTION_LIMIT}`
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prisma-local/clientInstance.ts` around lines 5 - 11, Replace the hard-coded
CONNECTION_LIMIT with a configurable env-backed default and validate
DATABASE_URL: read a numeric env var (e.g. process.env.CONNECTION_LIMIT or
PRISMA_CONNECTION_LIMIT) and fallback to 20 when parsing in
buildDatasourceUrl(), and add a guard in buildDatasourceUrl() that throws or
logs a clear error if process.env.DATABASE_URL is missing/empty instead of
returning "?connection_limit=..."; update references to CONNECTION_LIMIT and
buildDatasourceUrl() so the code uses the parsed env value and fails fast with a
descriptive message that will surface before PrismaClient is instantiated.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@services/chronikService.ts`:
- Around line 429-433: The current loop uses chronikTxs.push(...workerTxs) which
can hit V8's argument limit for very large workerTxs arrays; update the block in
the function iterating over results (the loop handling workerTxs and chronikTxs)
to avoid spreading into push — either push items in a simple for/of loop over
workerTxs, or replace the spread with a non-spread-safe operation such as
chronikTxs = chronikTxs.concat(workerTxs); ensure you update the logic that
previously relied on in-place mutation (e.g., any splice-based draining) so
behavior remains the same.
- Around line 341-348: fetchPage currently swallows errors and returns [],
causing partial-address syncs to be marked complete; change fetchPage (the
method that calls getPaginatedTxs with CHRONIK_FETCH_N_TXS_PER_PAGE) to not
silently return an empty array on error — either remove the try/catch or rethrow
the caught error after logging so the caller (the per-worker catch) can mark the
address as failed and add it to failedAddressesWithErrors; ensure callers that
previously relied on [] handle the thrown error (or a sentinel) and that
failedAddressesWithErrors, addressesSynced, setSyncingBatch and
updateManyLastSynced are updated accordingly so partially-fetched addresses are
not treated as fully synced.

---

Nitpick comments:
In `@prisma-local/clientInstance.ts`:
- Around line 5-11: Replace the hard-coded CONNECTION_LIMIT with a configurable
env-backed default and validate DATABASE_URL: read a numeric env var (e.g.
process.env.CONNECTION_LIMIT or PRISMA_CONNECTION_LIMIT) and fallback to 20 when
parsing in buildDatasourceUrl(), and add a guard in buildDatasourceUrl() that
throws or logs a clear error if process.env.DATABASE_URL is missing/empty
instead of returning "?connection_limit=..."; update references to
CONNECTION_LIMIT and buildDatasourceUrl() so the code uses the parsed env value
and fails fast with a descriptive message that will surface before PrismaClient
is instantiated.

In `@services/chronikService.ts`:
- Line 914: Translate the Portuguese inline comment near the DB flush logic to
English for consistency; replace the comment "// final DB flush (se sobrou menos
que DB_COMMIT_BATCH_SIZE)" with an English equivalent such as "// final DB flush
(remaining < DB_COMMIT_BATCH_SIZE)" in the block where DB_COMMIT_BATCH_SIZE is
referenced (the final DB commit/flush section).
- Around line 876-901: Extract the repeated "commit batch → append to file →
broadcast + triggers" logic into a private helper (e.g., commitBatch) and
replace both inlined blocks with calls to it: the helper should accept the
commitPairs (array of pair objects with .row and .raw), productionAddressesIds,
runTriggers and this.networkId/context (or take this as bound via arrow/private
method), call createManyTransactions(rows), appendTxsToFile for produced
transactions belonging to productionAddressesIds, build rawByHash from
commitPairs to map txids to raw, create broadcast data via
this.broadcastIncomingTx(createdTx.address.address, raw, createdTx), and if
runTriggers call executeTriggersBatch(triggerBatch, this.networkId); ensure the
helper returns or throws errors consistently and preserves the console log
currently using this.CHRONIK_MSG_PREFIX and createdTxs.length so both original
locations (the batch commit inside the loop and the final-flush block) can
simply await commitBatch(...) instead of duplicating code.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between beb6a65 and be4d8e3.

📒 Files selected for processing (3)
  • constants/index.ts
  • prisma-local/clientInstance.ts
  • services/chronikService.ts

Comment on lines 341 to 348
private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
  try {
    return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
  } catch (err: any) {
    console.warn(`${logPrefix} page=${pageIndex} failed: ${err.message as string}`)
    return []
  }
}

⚠️ Potential issue | 🟠 Major

Silent error swallowing in fetchPage can cause incomplete syncs to be marked as complete.

When a page fetch fails, fetchPage returns [], which causes the per-address pagination loop to break early (line 387: while (pageTxs.length > 0)). The address then appears in the addressesSynced yield (line 444), gets setSyncingBatch(…, false) (line 850), and later has its lastSynced timestamp updated (line 947) — because it never surfaces in failedAddressesWithErrors.

This means a partially-fetched address is treated as fully synced, potentially missing transactions permanently until a full re-sync is triggered.

Consider either:

  1. Re-throwing the error (or returning a sentinel) so the caller can track the address as failed, or
  2. At minimum, adding the address to a failed-set so updateManyLastSynced skips it.
Option 1: Let the error propagate to the per-worker catch (line 423)
  private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
-   try {
-     return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
-   } catch (err: any) {
-     console.warn(`${logPrefix} page=${pageIndex} failed: ${err.message as string}`)
-     return []
-   }
+   return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
  }

The per-worker .catch at line 423 already handles errors gracefully and returns [], so the address would contribute zero txs — but importantly, you could then track it as failed there instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 341 - 348, fetchPage currently
swallows errors and returns [], causing partial-address syncs to be marked
complete; change fetchPage (the method that calls getPaginatedTxs with
CHRONIK_FETCH_N_TXS_PER_PAGE) to not silently return an empty array on error —
either remove the try/catch or rethrow the caught error after logging so the
caller (the per-worker catch) can mark the address as failed and add it to
failedAddressesWithErrors; ensure callers that previously relied on [] handle
the thrown error (or a sentinel) and that failedAddressesWithErrors,
addressesSynced, setSyncingBatch and updateManyLastSynced are updated
accordingly so partially-fetched addresses are not treated as fully synced.

Comment on lines 429 to 433
for (const workerTxs of results) {
  if (workerTxs.length > 0) {
    chronikTxs.push(...workerTxs)
  }
}

⚠️ Potential issue | 🟡 Minor

push(...workerTxs) with spread can exceed the call-stack argument limit for prolific addresses.

Array.prototype.push(...arr) expands arr as individual arguments. If a single address has many pages (e.g., 500+ pages × 200 txs = 100k+ items), this can exceed V8's ~65k argument limit and throw a RangeError. Using a loop or Array.prototype.concat avoids this:

Safer alternative
      for (const workerTxs of results) {
        if (workerTxs.length > 0) {
-         chronikTxs.push(...workerTxs)
+         for (const item of workerTxs) {
+           chronikTxs.push(item)
+         }
        }
      }

Or more concisely: chronikTxs = chronikTxs.concat(workerTxs) — though that creates a new array (which splice-based draining already mitigates).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 429 - 433, The current loop uses
chronikTxs.push(...workerTxs) which can hit V8's argument limit for very large
workerTxs arrays; update the block in the function iterating over results (the
loop handling workerTxs and chronikTxs) to avoid spreading into push — either
push items in a simple for/of loop over workerTxs, or replace the spread with a
non-spread-safe operation such as chronikTxs = chronikTxs.concat(workerTxs);
ensure you update the logic that previously relied on in-place mutation (e.g.,
any splice-based draining) so behavior remains the same.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
prisma-local/clientInstance.ts (1)

8-12: Consider validating DATABASE_URL before building the datasource URL.

If DATABASE_URL is unset, baseUrl becomes an empty string, producing an invalid URL like ?connection_limit=50&pool_timeout=30. This could cause a confusing PrismaClient initialization error.

Proposed defensive check
 function buildDatasourceUrl (): string {
   const baseUrl = process.env.DATABASE_URL ?? ''
+  if (baseUrl === '') {
+    throw new Error('DATABASE_URL environment variable is not set')
+  }
   const separator = baseUrl.includes('?') ? '&' : '?'
   return `${baseUrl}${separator}connection_limit=${CONNECTION_LIMIT}&pool_timeout=${POOL_TIMEOUT}`
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prisma-local/clientInstance.ts` around lines 8 - 12, The buildDatasourceUrl
function currently uses process.env.DATABASE_URL without validation; add a
defensive check at the top of buildDatasourceUrl to validate that DATABASE_URL
is a non-empty string (and optionally a valid URL) and, if missing/invalid,
throw a clear error or return early instead of constructing a malformed query
string; update references to CONNECTION_LIMIT and POOL_TIMEOUT to only append
when DATABASE_URL is valid so PrismaClient receives a proper datasource URL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@prisma-local/clientInstance.ts`:
- Around line 5-6: The constant CONNECTION_LIMIT is set to 50 but the PR text
claims a hard limit of 20; update one to match the other: either change the
constant CONNECTION_LIMIT to 20 (adjust any tests/config that assume 50) or
update the PR description to state 50 as the hard limit, and ensure any related
code/comments referencing CONNECTION_LIMIT reflect the chosen value.

---

Nitpick comments:
In `@prisma-local/clientInstance.ts`:
- Around line 8-12: The buildDatasourceUrl function currently uses
process.env.DATABASE_URL without validation; add a defensive check at the top of
buildDatasourceUrl to validate that DATABASE_URL is a non-empty string (and
optionally a valid URL) and, if missing/invalid, throw a clear error or return
early instead of constructing a malformed query string; update references to
CONNECTION_LIMIT and POOL_TIMEOUT to only append when DATABASE_URL is valid so
PrismaClient receives a proper datasource URL.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between be4d8e3 and a2932df.

📒 Files selected for processing (2)
  • constants/index.ts
  • prisma-local/clientInstance.ts

Comment on lines 5 to 6
const CONNECTION_LIMIT = 50
const POOL_TIMEOUT = 30

⚠️ Potential issue | 🟡 Minor

Connection limit discrepancy with PR description.

The PR description states "Adds a hard limit of 20 Prisma connections" but the code sets CONNECTION_LIMIT = 50. Please clarify whether 50 is the intended value and update the PR description accordingly, or adjust the constant to match the documented intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@prisma-local/clientInstance.ts` around lines 5 - 6, The constant
CONNECTION_LIMIT is set to 50 but the PR text claims a hard limit of 20; update
one to match the other: either change the constant CONNECTION_LIMIT to 20
(adjust any tests/config that assume 50) or update the PR description to state
50 as the hard limit, and ensure any related code/comments referencing
CONNECTION_LIMIT reflect the chosen value.

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
services/chronikService.ts (2)

430-433: ⚠️ Potential issue | 🟡 Minor

Avoid spread-push for large worker batches.

chronikTxs.push(...workerTxs) can throw on large arrays due to engine argument limits. Use an item loop to keep behavior safe for high-volume addresses.

Safer in-place merge
 for (const workerTxs of results) {
   if (workerTxs.length > 0) {
-    chronikTxs.push(...workerTxs)
+    for (const tx of workerTxs) {
+      chronikTxs.push(tx)
+    }
   }
 }
#!/bin/bash
# Verify current spread-push usage in TS files (read-only).
# Expected: this location is reported; replace where arrays can be large.
rg -nP --type=ts -C2 '\bpush\s*\(\s*\.\.\.[^)]+\)'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 430 - 433, The current loop merges
workerTxs into chronikTxs using chronikTxs.push(...workerTxs), which can hit
engine argument limits for large arrays; locate the for (const workerTxs of
results) block in services/chronikService.ts and replace the spread-push with a
safe in-place item loop (e.g., iterate over workerTxs and push each element
individually into chronikTxs) to avoid large-argument calls while preserving
order and behavior.

342-349: ⚠️ Potential issue | 🟠 Major

Do not silently treat failed address fetches as successful syncs.

fetchPage returns [] on errors, and worker errors are also converted to []. This makes failed/partial address fetches indistinguishable from empty history, while the batch still proceeds with normal sync flow.

Please propagate or explicitly surface per-address failures so they can be excluded from successful sync bookkeeping and timestamp updates.

Suggested direction (explicit per-address failure tracking)
 interface FetchedTxsBatch {
   chronikTxs: ChronikTxWithAddress[]
   addressesSynced: string[]
+  failedAddresses: string[]
 }

 private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
   try {
     return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
   } catch (err: any) {
     console.warn(`${logPrefix} page=${pageIndex} failed: ${err.message as string}`)
-    return []
+    throw err
   }
 }

Then in the per-address worker aggregation, keep { address, txs, failed } metadata and yield failed addresses separately so syncAddresses can mark them in failedAddressesWithErrors and avoid treating them as successful.

Also applies to: 423-427, 374-374

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@services/chronikService.ts` around lines 342 - 349, fetchPage currently
swallows errors by returning [] which makes failed fetches indistinguishable
from empty histories; change fetchPage (and the other similar catch sites around
lines mentioned) to rethrow the error or return an explicit failure result (e.g.
an object/tuple carrying { failed: true, error } or throw so callers can catch)
instead of returning []; update the per-address worker aggregation to collect
and yield per-address metadata like { address, txs, failed, error } (or separate
success/failure streams) so syncAddresses can add failed addresses to
failedAddressesWithErrors and skip timestamp/ success bookkeeping for them;
ensure references to getPaginatedTxs and CHRONIK_FETCH_N_TXS_PER_PAGE remain and
preserve existing successful path returning Tx[].
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@services/chronikService.ts`:
- Around line 430-433: The current loop merges workerTxs into chronikTxs using
chronikTxs.push(...workerTxs), which can hit engine argument limits for large
arrays; locate the for (const workerTxs of results) block in
services/chronikService.ts and replace the spread-push with a safe in-place item
loop (e.g., iterate over workerTxs and push each element individually into
chronikTxs) to avoid large-argument calls while preserving order and behavior.
- Around line 342-349: fetchPage currently swallows errors by returning [] which
makes failed fetches indistinguishable from empty histories; change fetchPage
(and the other similar catch sites around lines mentioned) to rethrow the error
or return an explicit failure result (e.g. an object/tuple carrying { failed:
true, error } or throw so callers can catch) instead of returning []; update the
per-address worker aggregation to collect and yield per-address metadata like {
address, txs, failed, error } (or separate success/failure streams) so
syncAddresses can add failed addresses to failedAddressesWithErrors and skip
timestamp/ success bookkeeping for them; ensure references to getPaginatedTxs
and CHRONIK_FETCH_N_TXS_PER_PAGE remain and preserve existing successful path
returning Tx[].

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2932df and eff0e46.

📒 Files selected for processing (2)
  • constants/index.ts
  • services/chronikService.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • constants/index.ts

@chedieck chedieck force-pushed the fix/reduce-memory-spent branch from eff0e46 to 2b817c7 on February 26, 2026 00:54
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
constants/index.ts (2)

279-281: Consider making batch/concurrency constants configurable via environment variables.

The reductions align with the PR objective to reduce memory and CPU usage. However, the 96% reduction in INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY (128 → 5) will significantly slow initial address syncing.

For operational flexibility across different environments (e.g., higher values in production with more resources vs. lower values in local dev), consider allowing these to be overridden via environment variables:

export const INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY = parseInt(process.env.INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY ?? '5', 10)
export const TX_EMIT_BATCH_SIZE = parseInt(process.env.TX_EMIT_BATCH_SIZE ?? '500', 10)
export const DB_COMMIT_BATCH_SIZE = parseInt(process.env.DB_COMMIT_BATCH_SIZE ?? '500', 10)

This allows tuning without code changes while keeping conservative defaults.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@constants/index.ts` around lines 279 - 281, Make these batch/concurrency
constants configurable via environment variables by replacing the hard-coded
values for INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY, TX_EMIT_BATCH_SIZE, and
DB_COMMIT_BATCH_SIZE with logic that reads and parses corresponding env vars
(e.g., process.env.INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY,
process.env.TX_EMIT_BATCH_SIZE, process.env.DB_COMMIT_BATCH_SIZE) and falls back
to the current numeric defaults when the env var is absent or invalid; ensure
you use parseInt(..., 10) and a sensible default to preserve current behavior.

292-292: Minor: Unnecessary parentheses around the day count.

The parentheses around 7 are not needed. Consider simplifying for consistency with typical numeric expressions:

-export const CLIENT_PAYMENT_EXPIRATION_TIME = (7) * (24 * 60 * 60 * 1000) // (number of days) * (24 * 60 * 60 * 1000)
+export const CLIENT_PAYMENT_EXPIRATION_TIME = 7 * (24 * 60 * 60 * 1000) // (number of days) * (24 * 60 * 60 * 1000)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@constants/index.ts` at line 292, The constant CLIENT_PAYMENT_EXPIRATION_TIME
has unnecessary parentheses around the day count; remove the parentheses and
simplify the expression to use plain multiplication (e.g., 7 * 24 * 60 * 60 *
1000) so the value remains the same but the code is cleaner and consistent with
typical numeric expressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@constants/index.ts`:
- Around line 279-281: Make these batch/concurrency constants configurable via
environment variables by replacing the hard-coded values for
INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY, TX_EMIT_BATCH_SIZE, and
DB_COMMIT_BATCH_SIZE with logic that reads and parses corresponding env vars
(e.g., process.env.INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY,
process.env.TX_EMIT_BATCH_SIZE, process.env.DB_COMMIT_BATCH_SIZE) and falls back
to the current numeric defaults when the env var is absent or invalid; ensure
you use parseInt(..., 10) and a sensible default to preserve current behavior.
- Line 292: The constant CLIENT_PAYMENT_EXPIRATION_TIME has unnecessary
parentheses around the day count; remove the parentheses and simplify the
expression to use plain multiplication (e.g., 7 * 24 * 60 * 60 * 1000) so the
value remains the same but the code is cleaner and consistent with typical
numeric expressions.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eff0e46 and 2b817c7.

📒 Files selected for processing (1)
  • constants/index.ts

@Klakurka Klakurka requested review from Klakurka and removed request for Fabcien February 26, 2026 01:13
@Klakurka Klakurka marked this pull request as draft February 26, 2026 01:14
@Klakurka Klakurka removed their request for review February 26, 2026 01:14
@Fabcien
Copy link
Collaborator

Fabcien commented Feb 26, 2026

This is impossible to review as-is. There are 3 issues:

  • A Prisma error: can you actually paste the error message?
  • A memory error: how much memory is used? Is it only allocated during sync, or is it high all the time?
  • High CPU usage during sync: without a flame graph you can't really do any analysis of the root cause.

@chedieck chedieck force-pushed the fix/reduce-memory-spent branch from 40967a7 to 9133b71 on February 26, 2026 11:57

Labels

enhancement (behind the scenes) Stuff that users won't see


3 participants