This repository contains the code for NeuroConText:
- NeuroConText paper, accepted at MICCAI'24.
- NeuroConText extended version at Imaging Neuroscience, MIT Press, 2026.
- NeuroConText Supplementary Material.
Follow these steps to set up the environment, download the data, and run the training pipeline.
We use uv for extremely fast and reproducible dependency management.
- Install uv (if not already installed):
curl -LsSf https://astral.sh/uv/install.sh | sh
- Initialize the environment: create the virtual environment and install all dependencies:
uv sync
- Activate the environment:
source .venv/bin/activate
We provide a high-performance parallel downloader to handle the ~8GB dataset from Zenodo. This script automates the download, extraction, and directory placement.
# Uses pycurl for parallel downloading; extracts to the data/ folder
uv run utils/download_data.py
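For intuition, here is a rough sketch of how a parallel chunked download can work: split the file into byte ranges, fetch them concurrently with HTTP Range requests, and reassemble in order. This is an illustration only, with hypothetical function names; the actual logic lives in `utils/download_data.py` and uses pycurl.

```python
# Hypothetical sketch of parallel chunked downloading (illustration only;
# the repository's downloader in utils/download_data.py uses pycurl).
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen


def split_ranges(total_size: int, n_chunks: int) -> list[tuple[int, int]]:
    """Split [0, total_size) into n_chunks inclusive byte ranges."""
    step = -(-total_size // n_chunks)  # ceiling division
    return [(start, min(start + step, total_size) - 1)
            for start in range(0, total_size, step)]


def fetch_range(url: str, start: int, end: int) -> bytes:
    """Download one byte range via an HTTP Range request."""
    req = Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:
        return resp.read()


def parallel_download(url: str, total_size: int, n_workers: int = 8) -> bytes:
    """Fetch all chunks concurrently, then reassemble them in order."""
    ranges = split_ranges(total_size, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda r: fetch_range(url, *r), ranges)
    return b"".join(parts)


print(split_ranges(100, 4))  # → [(0, 24), (25, 49), (50, 74), (75, 99)]
```

This only helps when the server supports Range requests; otherwise a single sequential download is the fallback.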
Once the environment is synced and the data is downloaded, execute the training pipeline:
uv run main.py
NeuroConText/
│
├── data/ # Populated by download_data.py
│ └── data_NeuroConText/
│ └── (Extracted .pkl files)
│
├── src/ # Core utilities
│ └── utils.py
│
├── utils/
│ └── download_data.py # Parallel downloader
│
├── layers.py # Model architectures
├── losses.py # Contrastive losses
├── main.py # Training entry point
├── metrics.py # Evaluation logic
├── plotting.py # Visualizations
├── training.py # Training loop
└── README.md
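As a rough illustration of the kind of objective implemented in `losses.py`, a symmetric contrastive (InfoNCE / CLIP-style) loss pulls matched text and brain-map embeddings together while pushing mismatched pairs apart. The sketch below is a generic NumPy version under that assumption, not the repository's actual code.

```python
# Illustrative symmetric InfoNCE loss (a sketch, NOT the repository's
# implementation in losses.py).
import numpy as np


def info_nce_loss(text_emb: np.ndarray, brain_emb: np.ndarray,
                  temperature: float = 0.07) -> float:
    """Symmetric cross-entropy over cosine-similarity logits.

    Matched (text, brain-map) pairs sit on the diagonal of the
    similarity matrix; all other entries act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    logits = t @ b.T / temperature  # shape (N, N)
    # log-softmax along each direction (text -> brain, brain -> text)
    log_p_tb = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_bt = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    diag = np.arange(len(logits))
    return float(-(log_p_tb[diag, diag].mean()
                   + log_p_bt[diag, diag].mean()) / 2)
```

Correctly aligned pairs yield a lower loss than shuffled ones, which is what drives the text and brain-map encoders toward a shared embedding space.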
For any issues or questions regarding the code, please contact fateme[dot]ghayem[at]gmail[dot]com.
This work is supported by the KARAIB AI chair (ANR-20-CHIA-0025-01), the ANR-22-PESN-0012 France 2030 program, and the HORIZON-INFRA-2022-SERV-B-01 EBRAINS 2.0 infrastructure project.
Thank you for using NeuroConText!