fix: memory, cpu usage & prisma connection limit#1114
Conversation
Important: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI. You can disable this status message via settings. Use the checkbox below for a quick retry.
📝 Walkthrough
This pull request updates seven configuration constants.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🧹 Nitpick comments (3)
services/chronikService.ts (2)
914-914: Nit: comment is in Portuguese.
The comment `// final DB flush (se sobrou menos que DB_COMMIT_BATCH_SIZE)` could be translated for consistency with the rest of the English codebase: `// final DB flush (remaining < DB_COMMIT_BATCH_SIZE)`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/chronikService.ts` at line 914, Translate the Portuguese inline comment near the DB flush logic to English for consistency; replace the comment "// final DB flush (se sobrou menos que DB_COMMIT_BATCH_SIZE)" with an English equivalent such as "// final DB flush (remaining < DB_COMMIT_BATCH_SIZE)" in the block where DB_COMMIT_BATCH_SIZE is referenced (the final DB commit/flush section).
876-901: Consider extracting the duplicated commit-and-broadcast block into a helper.
The "commit batch → append to file → broadcast + triggers" logic at lines 876–901 is nearly identical to the final-flush block at lines 915–939. Extracting a private helper (e.g., `commitBatch(pairs, productionAddressesIds, runTriggers)`) would reduce duplication and make future changes less error-prone.
Also applies to: 915-939
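The batch-commit-plus-final-flush shape under discussion can be sketched generically. This is an illustrative TypeScript sketch, not the service's actual code: `commitAll`, its `commitBatch` callback, and the `BATCH_SIZE` constant are hypothetical stand-ins for `createManyTransactions`/`executeTriggersBatch` and `DB_COMMIT_BATCH_SIZE`.

```typescript
// Illustrative sketch only: a generic batch-commit helper with a final flush.
// BATCH_SIZE and commitBatch are hypothetical stand-ins for the real
// DB_COMMIT_BATCH_SIZE constant and DB-commit logic.
const BATCH_SIZE = 3

async function commitAll<T> (
  items: T[],
  commitBatch: (batch: T[]) => Promise<void>
): Promise<number> {
  let pending: T[] = []
  let commits = 0
  for (const item of items) {
    pending.push(item)
    if (pending.length >= BATCH_SIZE) {
      await commitBatch(pending)
      commits++
      pending = []
    }
  }
  // final DB flush (remaining < BATCH_SIZE)
  if (pending.length > 0) {
    await commitBatch(pending)
    commits++
  }
  return commits
}
```

Extracting the commit body into one callback like this is what lets the in-loop commit and the final flush share a single code path.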
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/chronikService.ts` around lines 876 - 901, Extract the repeated "commit batch → append to file → broadcast + triggers" logic into a private helper (e.g., commitBatch) and replace both inlined blocks with calls to it: the helper should accept the commitPairs (array of pair objects with .row and .raw), productionAddressesIds, runTriggers and this.networkId/context (or take this as bound via arrow/private method), call createManyTransactions(rows), appendTxsToFile for produced transactions belonging to productionAddressesIds, build rawByHash from commitPairs to map txids to raw, create broadcast data via this.broadcastIncomingTx(createdTx.address.address, raw, createdTx), and if runTriggers call executeTriggersBatch(triggerBatch, this.networkId); ensure the helper returns or throws errors consistently and preserves the console log currently using this.CHRONIK_MSG_PREFIX and createdTxs.length so both original locations (the batch commit inside the loop and the final-flush block) can simply await commitBatch(...) instead of duplicating code.

prisma-local/clientInstance.ts (1)
5-11: Consider making the connection limit configurable via environment variable.

The hard-coded `CONNECTION_LIMIT = 20` works for the stated goal, but different deployment environments (staging, CI, local dev) may benefit from different pool sizes. A one-line change keeps the hard default while allowing overrides:

💡 Suggested improvement
```diff
-const CONNECTION_LIMIT = 20
+const CONNECTION_LIMIT = parseInt(process.env.PRISMA_CONNECTION_LIMIT ?? '20', 10)
```

Also, if `DATABASE_URL` is unset, `baseUrl` becomes `''` and `buildDatasourceUrl()` returns `"?connection_limit=20"`, which will make `PrismaClient` throw a confusing "invalid URL" error. A guard with a clear message would help debugging:

```diff
 function buildDatasourceUrl (): string {
   const baseUrl = process.env.DATABASE_URL ?? ''
+  if (baseUrl === '') {
+    throw new Error('DATABASE_URL environment variable is not set')
+  }
   const separator = baseUrl.includes('?') ? '&' : '?'
   return `${baseUrl}${separator}connection_limit=${CONNECTION_LIMIT}`
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma-local/clientInstance.ts` around lines 5 - 11, Replace the hard-coded CONNECTION_LIMIT with a configurable env-backed default and validate DATABASE_URL: read a numeric env var (e.g. process.env.CONNECTION_LIMIT or PRISMA_CONNECTION_LIMIT) and fallback to 20 when parsing in buildDatasourceUrl(), and add a guard in buildDatasourceUrl() that throws or logs a clear error if process.env.DATABASE_URL is missing/empty instead of returning "?connection_limit=..."; update references to CONNECTION_LIMIT and buildDatasourceUrl() so the code uses the parsed env value and fails fast with a descriptive message that will surface before PrismaClient is instantiated.
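Taken together, the two suggestions above amount to a small pure function. The following is a hedged sketch under the reviewer's assumptions; the helper names and parameters are illustrative, not the project's actual API, and real code would source the values from env vars such as `process.env.PRISMA_CONNECTION_LIMIT`.

```typescript
// Sketch of an env-configurable datasource URL builder (names are assumed).
function parseEnvInt (raw: string | undefined, fallback: number): number {
  const n = parseInt(raw ?? '', 10)
  return Number.isNaN(n) ? fallback : n
}

function buildDatasourceUrl (baseUrl: string, connectionLimit: number, poolTimeout: number): string {
  if (baseUrl === '') {
    // Fail fast instead of handing PrismaClient a URL like "?connection_limit=20".
    throw new Error('DATABASE_URL environment variable is not set')
  }
  // Append pool parameters with the right separator for URLs that may
  // already carry a query string (e.g. "?schema=public").
  const separator = baseUrl.includes('?') ? '&' : '?'
  return `${baseUrl}${separator}connection_limit=${connectionLimit}&pool_timeout=${poolTimeout}`
}
```

Keeping the builder pure (base URL and numbers in, string out) also makes the separator and guard logic trivially unit-testable.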
services/chronikService.ts (outdated)
```typescript
private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
  try {
    return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
  } catch (err: any) {
    console.warn(`${logPrefix} page=${pageIndex} failed: ${err.message as string}`)
    return []
  }
}
```
Silent error swallowing in fetchPage can cause incomplete syncs to be marked as complete.
When a page fetch fails, fetchPage returns [], which causes the per-address pagination loop to break early (line 387: while (pageTxs.length > 0)). The address then appears in the addressesSynced yield (line 444), gets setSyncingBatch(…, false) (line 850), and later has its lastSynced timestamp updated (line 947) — because it never surfaces in failedAddressesWithErrors.
This means a partially-fetched address is treated as fully synced, potentially missing transactions permanently until a full re-sync is triggered.
Consider either:
- Re-throwing the error (or returning a sentinel) so the caller can track the address as failed, or
- At minimum, adding the address to a failed-set so `updateManyLastSynced` skips it.
Option 1: Let the error propagate to the per-worker catch (line 423)
```diff
 private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
-  try {
-    return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
-  } catch (err: any) {
-    console.warn(`${logPrefix} page=${pageIndex} failed: ${err.message as string}`)
-    return []
-  }
+  return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
 }
```
The per-worker `.catch` at line 423 already handles errors gracefully and returns `[]`, so the address would contribute zero txs — but importantly, you could then track it as failed there instead.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
  return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/chronikService.ts` around lines 341 - 348, fetchPage currently
swallows errors and returns [], causing partial-address syncs to be marked
complete; change fetchPage (the method that calls getPaginatedTxs with
CHRONIK_FETCH_N_TXS_PER_PAGE) to not silently return an empty array on error —
either remove the try/catch or rethrow the caught error after logging so the
caller (the per-worker catch) can mark the address as failed and add it to
failedAddressesWithErrors; ensure callers that previously relied on [] handle
the thrown error (or a sentinel) and that failedAddressesWithErrors,
addressesSynced, setSyncingBatch and updateManyLastSynced are updated
accordingly so partially-fetched addresses are not treated as fully synced.
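The per-address failure tracking suggested here can be sketched as follows. This is a hypothetical, simplified model: the `AddressFetchResult` shape, the `fetchAddress` helper, and the string-based transaction type are illustrative, not the service's real types.

```typescript
// Hypothetical sketch: pagination that surfaces failures instead of
// silently returning an empty history. Names are illustrative.
interface AddressFetchResult {
  address: string
  txs: string[]
  failed: boolean
  error?: Error
}

async function fetchAddress (
  address: string,
  fetchPage: (address: string, page: number) => Promise<string[]>
): Promise<AddressFetchResult> {
  const txs: string[] = []
  let page = 0
  try {
    while (true) {
      const pageTxs = await fetchPage(address, page)
      if (pageTxs.length === 0) break
      for (const tx of pageTxs) txs.push(tx)
      page++
    }
    return { address, txs, failed: false }
  } catch (err: any) {
    // The caller can now exclude this address from lastSynced bookkeeping
    // instead of mistaking a partial fetch for a complete, empty history.
    return { address, txs, failed: true, error: err }
  }
}
```

A caller that sees `failed: true` would add the address to a failed set and skip its `lastSynced` update, which is the behavior the comment asks for.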
services/chronikService.ts (outdated)
```typescript
for (const workerTxs of results) {
  if (workerTxs.length > 0) {
    chronikTxs.push(...workerTxs)
  }
}
```
`push(...workerTxs)` with spread can exceed the call-stack argument limit for prolific addresses.
`Array.prototype.push(...arr)` expands `arr` as individual arguments. If a single address has many pages (e.g., 500+ pages × 200 txs = 100k+ items), this can exceed V8's ~65k argument limit and throw a `RangeError`. Using a loop or `Array.prototype.concat` avoids this:
Safer alternative
```diff
 for (const workerTxs of results) {
   if (workerTxs.length > 0) {
-    chronikTxs.push(...workerTxs)
+    for (const item of workerTxs) {
+      chronikTxs.push(item)
+    }
   }
 }
```
Or more concisely: `chronikTxs = chronikTxs.concat(workerTxs)` — though that creates a new array (which splice-based draining already mitigates).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
for (const workerTxs of results) {
  if (workerTxs.length > 0) {
    for (const item of workerTxs) {
      chronikTxs.push(item)
    }
  }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/chronikService.ts` around lines 429 - 433, The current loop uses
chronikTxs.push(...workerTxs) which can hit V8's argument limit for very large
workerTxs arrays; update the block in the function iterating over results (the
loop handling workerTxs and chronikTxs) to avoid spreading into push — either
push items in a simple for/of loop over workerTxs, or replace the spread with a
non-spread-safe operation such as chronikTxs = chronikTxs.concat(workerTxs);
ensure you update the logic that previously relied on in-place mutation (e.g.,
any splice-based draining) so behavior remains the same.
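The argument-limit concern above is easy to demonstrate with a small, self-contained sketch; the helper name `appendInPlace` is illustrative, not from the codebase.

```typescript
// Appends src to target one item at a time, so no single call ever receives
// src.length arguments the way target.push(...src) would.
function appendInPlace<T> (target: T[], src: T[]): void {
  for (const item of src) {
    target.push(item)
  }
}

const merged: number[] = []
const big = Array.from({ length: 200000 }, (_, i) => i)
appendInPlace(merged, big)
// merged now holds all 200,000 items in order; spreading an array this
// large into push() risks a RangeError in V8.
```

The per-item loop keeps the in-place mutation semantics that spread-push had, so splice-based draining of the target array continues to work unchanged.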
Actionable comments posted: 1
🧹 Nitpick comments (1)
prisma-local/clientInstance.ts (1)
8-12: Consider validating `DATABASE_URL` before building the datasource URL.

If `DATABASE_URL` is unset, `baseUrl` becomes an empty string, producing an invalid URL like `?connection_limit=50&pool_timeout=30`. This could cause a confusing PrismaClient initialization error.

Proposed defensive check
```diff
 function buildDatasourceUrl (): string {
   const baseUrl = process.env.DATABASE_URL ?? ''
+  if (baseUrl === '') {
+    throw new Error('DATABASE_URL environment variable is not set')
+  }
   const separator = baseUrl.includes('?') ? '&' : '?'
   return `${baseUrl}${separator}connection_limit=${CONNECTION_LIMIT}&pool_timeout=${POOL_TIMEOUT}`
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prisma-local/clientInstance.ts` around lines 8 - 12, The buildDatasourceUrl function currently uses process.env.DATABASE_URL without validation; add a defensive check at the top of buildDatasourceUrl to validate that DATABASE_URL is a non-empty string (and optionally a valid URL) and, if missing/invalid, throw a clear error or return early instead of constructing a malformed query string; update references to CONNECTION_LIMIT and POOL_TIMEOUT to only append when DATABASE_URL is valid so PrismaClient receives a proper datasource URL.
prisma-local/clientInstance.ts (outdated)
```typescript
const CONNECTION_LIMIT = 50
const POOL_TIMEOUT = 30
```
Connection limit discrepancy with PR description.
The PR description states "Adds a hard limit of 20 Prisma connections" but the code sets CONNECTION_LIMIT = 50. Please clarify whether 50 is the intended value and update the PR description accordingly, or adjust the constant to match the documented intent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@prisma-local/clientInstance.ts` around lines 5 - 6, The constant
CONNECTION_LIMIT is set to 50 but the PR text claims a hard limit of 20; update
one to match the other: either change the constant CONNECTION_LIMIT to 20
(adjust any tests/config that assume 50) or update the PR description to state
50 as the hard limit, and ensure any related code/comments referencing
CONNECTION_LIMIT reflect the chosen value.
♻️ Duplicate comments (2)
services/chronikService.ts (2)
430-433: ⚠️ Potential issue | 🟡 Minor
Avoid spread-push for large worker batches.
`chronikTxs.push(...workerTxs)` can throw on large arrays due to engine argument limits. Use an item loop to keep behavior safe for high-volume addresses.

Safer in-place merge
```diff
 for (const workerTxs of results) {
   if (workerTxs.length > 0) {
-    chronikTxs.push(...workerTxs)
+    for (const tx of workerTxs) {
+      chronikTxs.push(tx)
+    }
   }
 }
```
```shell
#!/bin/bash
# Verify current spread-push usage in TS files (read-only).
# Expected: this location is reported; replace where arrays can be large.
rg -nP --type=ts -C2 '\bpush\s*\(\s*\.\.\.[^)]+\)'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/chronikService.ts` around lines 430 - 433, The current loop merges workerTxs into chronikTxs using chronikTxs.push(...workerTxs), which can hit engine argument limits for large arrays; locate the for (const workerTxs of results) block in services/chronikService.ts and replace the spread-push with a safe in-place item loop (e.g., iterate over workerTxs and push each element individually into chronikTxs) to avoid large-argument calls while preserving order and behavior.
342-349: ⚠️ Potential issue | 🟠 Major
Do not silently treat failed address fetches as successful syncs.
`fetchPage` returns `[]` on errors, and worker errors are also converted to `[]`. This makes failed/partial address fetches indistinguishable from empty history, while the batch still proceeds with normal sync flow.

Please propagate or explicitly surface per-address failures so they can be excluded from successful sync bookkeeping and timestamp updates.

Suggested direction (explicit per-address failure tracking)
```diff
 interface FetchedTxsBatch {
   chronikTxs: ChronikTxWithAddress[]
   addressesSynced: string[]
+  failedAddresses: string[]
 }

 private async fetchPage (addressString: string, pageIndex: number, logPrefix: string): Promise<Tx[]> {
   try {
     return await this.getPaginatedTxs(addressString, pageIndex, CHRONIK_FETCH_N_TXS_PER_PAGE)
   } catch (err: any) {
     console.warn(`${logPrefix} page=${pageIndex} failed: ${err.message as string}`)
-    return []
+    throw err
   }
 }
```
Then in the per-address worker aggregation, keep `{ address, txs, failed }` metadata and yield failed addresses separately so `syncAddresses` can mark them in `failedAddressesWithErrors` and avoid treating them as successful.
Also applies to: 423-427, 374-374
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/chronikService.ts` around lines 342 - 349, fetchPage currently swallows errors by returning [] which makes failed fetches indistinguishable from empty histories; change fetchPage (and the other similar catch sites around lines mentioned) to rethrow the error or return an explicit failure result (e.g. an object/tuple carrying { failed: true, error } or throw so callers can catch) instead of returning []; update the per-address worker aggregation to collect and yield per-address metadata like { address, txs, failed, error } (or separate success/failure streams) so syncAddresses can add failed addresses to failedAddressesWithErrors and skip timestamp/ success bookkeeping for them; ensure references to getPaginatedTxs and CHRONIK_FETCH_N_TXS_PER_PAGE remain and preserve existing successful path returning Tx[].
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- constants/index.ts
- services/chronikService.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- constants/index.ts
Force-pushed from eff0e46 to 2b817c7
🧹 Nitpick comments (2)
constants/index.ts (2)
279-281: Consider making batch/concurrency constants configurable via environment variables.

The reductions align with the PR objective to reduce memory and CPU usage. However, the 96% reduction in `INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY` (128 → 5) will significantly slow initial address syncing.

For operational flexibility across different environments (e.g., higher values in production with more resources vs. lower values in local dev), consider allowing these to be overridden via environment variables:

```typescript
export const INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY = parseInt(process.env.INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY ?? '5', 10)
export const TX_EMIT_BATCH_SIZE = parseInt(process.env.TX_EMIT_BATCH_SIZE ?? '500', 10)
export const DB_COMMIT_BATCH_SIZE = parseInt(process.env.DB_COMMIT_BATCH_SIZE ?? '500', 10)
```

This allows tuning without code changes while keeping conservative defaults.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@constants/index.ts` around lines 279 - 281, Make these batch/concurrency constants configurable via environment variables by replacing the hard-coded values for INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY, TX_EMIT_BATCH_SIZE, and DB_COMMIT_BATCH_SIZE with logic that reads and parses corresponding env vars (e.g., process.env.INITIAL_ADDRESS_SYNC_FETCH_CONCURRENTLY, process.env.TX_EMIT_BATCH_SIZE, process.env.DB_COMMIT_BATCH_SIZE) and falls back to the current numeric defaults when the env var is absent or invalid; ensure you use parseInt(..., 10) and a sensible default to preserve current behavior.
292: Minor: Unnecessary parentheses around the day count.

The parentheses around `7` are not needed. Consider simplifying for consistency with typical numeric expressions:

```diff
-export const CLIENT_PAYMENT_EXPIRATION_TIME = (7) * (24 * 60 * 60 * 1000) // (number of days) * (24 * 60 * 60 * 1000)
+export const CLIENT_PAYMENT_EXPIRATION_TIME = 7 * (24 * 60 * 60 * 1000) // (number of days) * (24 * 60 * 60 * 1000)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@constants/index.ts` at line 292, The constant CLIENT_PAYMENT_EXPIRATION_TIME has unnecessary parentheses around the day count; remove the parentheses and simplify the expression to use plain multiplication (e.g., 7 * 24 * 60 * 60 * 1000) so the value remains the same but the code is cleaner and consistent with typical numeric expressions.
This is impossible to review as-is. There are 3 issues:
Force-pushed from 40967a7 to 9133b71
WIP
Changes removed and PR marked as draft due to errors in prod.
Related to #
Description
We were getting some errors in production related to Prisma having too many connections open, so this PR puts a limit (20) on them so that further connections are queued.
Also, in my local dev `initJobs` was using a lot of CPU and failing due to memory overload, so I decreased the constants related to the initial sync's parallel batch sizes to reduce the load.
Finally, some refactoring of the function that fetches the non-synced txs to avoid copying/storing data unnecessarily.