This repository was archived by the owner on Dec 24, 2025. It is now read-only.

[priority 3] Some API calls are not scalable as filtering is not done at query level #31

@ehsan6sha

Description

First, it is unclear how pool_id is encoded or structured in the storage key.

More importantly, I think we have a bad mechanism here, specifically in get_all_pool_users: we fetch up to 1000 records and then filter them client-side. What happens if we have 2000 users? We never reach the next 1000. With 200 users per pool and 10 pools, we easily hit 2000 users. The filtering should be done at the query-key level, but that does not seem to be working:
```rust
pub async fn get_all_pool_users(
    data: web::Data<AppState>,
    req: web::Json<GetAllPoolUsersInput>,
) -> error::Result<HttpResponse> {
    let api = &data.api;
    let mut result_array = Vec::new();

    let query_key = sugarfunge::storage().pool().users_root().to_root_bytes();
    // println!("query_key pool_root len: {}", query_key.len());

    // if let Some(account_value) = req.account.clone() {
    //     let account = AccountId32::try_from(&account_value).map_err(map_account_err)?;
    //     StorageMapKey::new(account, StorageHasher::Blake2_128Concat).to_bytes(&mut query_key);
    //     // println!("query_key class_id len: {}", query_key.len());
    // }

    let storage = api.storage().at_latest().await.map_err(map_subxt_err)?;

    let keys = storage
        .fetch_keys(&query_key, 200, None)
        .await
        .map_err(map_subxt_err)?;
```
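The core problem above is that a single `fetch_keys(&query_key, N, None)` call returns at most the first N keys under the prefix; everything past that page is silently dropped. Until filtering is pushed into the query key itself, the handler at least needs to page through all keys using the start-key cursor. Below is a minimal, self-contained sketch of that cursor loop; the `fetch_keys` here is an in-memory stand-in for the chain call, and all names are illustrative, not the repo's actual API:

```rust
// Stand-in for the chain's paged key lookup: return up to `count` keys
// under `prefix`, strictly after `start_key` (keys assumed sorted).
fn fetch_keys(
    all: &[Vec<u8>],
    prefix: &[u8],
    count: usize,
    start_key: Option<&[u8]>,
) -> Vec<Vec<u8>> {
    all.iter()
        .filter(|k| k.starts_with(prefix))
        .skip_while(|k| match start_key {
            Some(s) => k.as_slice() <= s,
            None => false,
        })
        .take(count)
        .cloned()
        .collect()
}

// Cursor loop: keep fetching pages, resuming from the last key seen,
// until a page comes back empty. This is what get_all_pool_users is
// missing today — it stops after the first page.
fn fetch_all_keys(all: &[Vec<u8>], prefix: &[u8], page: usize) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    let mut cursor: Option<Vec<u8>> = None;
    loop {
        let batch = fetch_keys(all, prefix, page, cursor.as_deref());
        if batch.is_empty() {
            break;
        }
        cursor = Some(batch.last().unwrap().clone());
        out.extend(batch);
    }
    out
}

fn main() {
    // 2500 keys under the prefix [0, 1]; a single fetch of 1000 (or 200)
    // would miss the rest, which is exactly the bug described above.
    let all: Vec<Vec<u8>> = (0u16..2500)
        .map(|i| {
            let mut k = vec![0u8, 1u8];
            k.extend_from_slice(&i.to_be_bytes());
            k
        })
        .collect();
    let got = fetch_all_keys(&all, &[0, 1], 200);
    assert_eq!(got.len(), 2500);
    println!("{}", got.len()); // 2500
}
```

The better long-term fix is the one hinted at in the commented-out code: append the Blake2_128Concat-hashed account (or pool id) to the root key so the node only returns matching entries, making the filter happen at the query-key level instead of in the handler.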

=> Mauricio: check the data storage structure and adjust so that calls are scalable

Metadata

Assignees: none
Labels: no labels
Type: no type
Projects: Status: Ready
Milestone: no milestone
Relationships: none yet
Development: no branches or pull requests