
Bug: C++ shared memory write silently truncates data >255 bytes with no warning, corrupting downstream nodes #514

@GaneshPatil7517

Description


The C++ shared memory implementation in concore.hpp and concoredocker.hpp allocates a fixed 256-byte buffer for inter-node data exchange and silently truncates any payload exceeding 255 characters. There is no warning, no error, and no log output; downstream nodes receive incomplete data and produce wrong results.

This affects all C++ nodes using shared memory communication (the write_SM / read_SM path, activated when concore_oport/concore_iport receive the shared memory flag).

Root Cause

Three lines create the problem:

1. Allocation fixed at 256 bytes (concore.hpp line 248):

shmId_create = shmget(key, 256, IPC_CREAT | 0666);

2. Write silent truncation (concore.hpp lines 617, 642):

std::strncpy(sharedData_create, result.c_str(), 256 - 1);
// No check: if result.size() > 255, data is silently cut off

3. Read also capped at 256 (concore.hpp line 459):

std::string message(sharedData_get, strnlen(sharedData_get, 256));

The same pattern exists identically in concoredocker.hpp (lines 221, 386).

Impact

The wire format is [simtime, val1, val2, ..., valN]. Each double takes ~8–18 characters depending on precision (e.g., 3.141592653589793 = 17 chars + comma). With the simtime prefix and brackets:

| Values (doubles) | Approximate payload size | Fits in 256 bytes? |
|------------------|--------------------------|--------------------|
| 10               | ~120 bytes               | Yes                |
| 20               | ~220 bytes               | Borderline         |
| 25+              | ~280+ bytes              | No (silently truncated) |

For a neuromodulation control system with 32+ sensor channels (realistic for EEG or multi-electrode arrays), the C++ shared memory path silently drops the last channels. The downstream node parses the truncated string, obtains fewer values, and either crashes or produces wrong control signals with no indication that data was lost.

Cross-Language Mismatch

The Python write() function writes to files with no size limit:

# concore_base.py, line 418: no truncation
outfile.write(str(data_to_write))

This means:

  • A Python node sending 100 doubles → works perfectly via file I/O
  • A C++ node sending the same 100 doubles via shared memory → silently truncated to ~25 values
  • A C++ node receiving from a Python file-based node → works (reads file, no limit)
  • A C++ node receiving from another C++ shared memory node → truncated

Reproduction Steps

  1. Create a C++ node that writes a vector with 30+ doubles via shared memory:
concore cc;
cc.concore_oport(1, "shared_memory");  // triggers write_SM path
vector<double> large_data(50, 1.23456789);
cc.write(1, "test", large_data);
  2. Read from another node: only the first ~25 values arrive.
  3. No error is printed and no exception is thrown; the output silently contains wrong data.

Suggested Fix

Replace the fixed 256-byte buffer with a dynamically sized allocation, or at minimum add a size check with a clear error message:

Option A: Dynamic allocation (preferred):

size_t needed = result.size() + 1;
// Note: if a segment already exists for this key with a smaller size,
// shmget() fails with EINVAL, so the old segment must be removed and
// recreated, and the reader must know the new size.
shmId_create = shmget(key, std::max(needed, (size_t)256), IPC_CREAT | 0666);

Option B: Error on overflow (minimal fix):

if (result.size() >= 256) {
    std::cerr << "ERROR: write_SM payload (" << result.size() 
              << " bytes) exceeds 256-byte shared memory limit. Data truncated!" << std::endl;
}
std::strncpy(sharedData_create, result.c_str(), 256 - 1);

Affected Files

  • concore.hpp
  • concoredocker.hpp
  • sample/src/concore.hpp (copy of concore.hpp, same issue)

Environment

  • All platforms using Linux shared memory (__linux__ codepath)
  • Both local execution and Docker container workflows

Hello @pradeeban, happy to raise a PR for this.
