
fix: namespace package conflict of optimum and optimum-quanto #298

Merged
johannaSommer merged 12 commits into PrunaAI:main from ParagEkbote:fix-namespace-conflict
Aug 29, 2025

Conversation

@ParagEkbote
Contributor

@ParagEkbote ParagEkbote commented Aug 9, 2025

Description

As described in the issue, I have resolved the namespace conflict by using only the optimum-quanto package and removing optimum from the dependencies. Could you please review?
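For illustration, the dependency change amounts to something like the following pyproject.toml hunk (hypothetical: the actual section layout in Pruna's pyproject.toml may differ; the >=0.2.5 pin is taken from the pipdeptree output in this thread):

```diff
 [project]
 dependencies = [
-    "optimum",
     "optimum-quanto>=0.2.5",
 ]
```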

cc: @davidberenstein1957

Related Issue

Fixes #296

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Additional Notes

@johannaSommer johannaSommer self-requested a review August 14, 2025 15:23
Member

@johannaSommer johannaSommer left a comment


Hey @ParagEkbote, thanks a lot for your effort on fixing this! The changes look good to me content-wise; did you get the chance to test this locally with a tiny model?
Additionally, it would be important to keep the "import_algorithm_packages" function: it helps us isolate imports for better overall handling of all of our different packages. Otherwise, almost ready to merge, thanks again 😊🫡

@johannaSommer johannaSommer requested a review from simlang August 14, 2025 15:25
@ParagEkbote
Contributor Author

I've added back the function as requested, but resolving the namespace conflict remains a bit more difficult. Using pipdeptree, I found that whisper-s2t also depends on optimum.

Warning!!! Possibly conflicting dependencies found:
* whisper-s2t==1.3.1
 - optimum [required: Any, installed: ?]
------------------------------------------------------------------------
├── optimum-quanto [required: >=0.2.5, installed: 0.2.7]
│   ├── torch [required: >=2.6.0, installed: 2.7.0]
│   │   ├── filelock [required: Any, installed: 3.19.1]
│   │   ├── typing_extensions [required: >=4.10.0, installed: 4.14.1]
│   │   ├── setuptools [required: Any, installed: 80.9.0]
│   │   ├── sympy [required: >=1.13.3, installed: 1.14.0]
--
    ├── optimum [required: Any, installed: ?]
    ├── transformers [required: Any, installed: 4.52.4]
    │   ├── filelock [required: Any, installed: 3.19.1]
    │   ├── huggingface-hub [required: >=0.30.0,<1.0, installed: 0.34.4]
    │   │   ├── filelock [required: Any, installed: 3.19.1]
    │   │   ├── fsspec [required: >=2023.5.0, installed: 2024.12.0]
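One way to see which distribution actually wins the `optimum` name is to inspect the module spec. A minimal, stdlib-only sketch (illustrative: `email` stands in for a regular package, since the verifier environment may not have optimum installed):

```python
import importlib.util


def describe_package(name: str) -> str:
    """Report whether a module resolves to a regular or a namespace package."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "not installed"
    if spec.origin is None:
        # Namespace packages have no __init__.py, so several distributions
        # (here: optimum and optimum-quanto) can contribute to the same
        # name and silently shadow each other.
        return "namespace package: " + ", ".join(spec.submodule_search_locations)
    return "regular package: " + spec.origin


# With both optimum and optimum-quanto installed,
# describe_package("optimum") would show which site-packages entries
# contribute to the shared namespace.
print(describe_package("email"))
print(describe_package("no_such_distribution"))
```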

But for now, with my changes, only the optimum-quanto package is selected. The test code is as follows:

import numpy as np
import torch
import torchvision
from torchvision import transforms

from pruna import SmashConfig, smash


# Load a small Torch model (just for testing)
model = torchvision.models.resnet18(weights=None)
model.eval()

# Initialize SmashConfig
smash_config = SmashConfig()
smash_config["quantizer"] = "quanto"

# Smash the model
smashed_model = smash(
    model=model,
    smash_config=smash_config,
)

# Build a random test image (np.random.randint requires an integer dtype;
# ToTensor converts the uint8 HWC array to a float CHW tensor)
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
input_tensor = transforms.ToTensor()(image).unsqueeze(0)

# Run inference
with torch.no_grad():
    output = smashed_model(input_tensor)

print("Output shape:", output.shape)

WDYT?

Member

@johannaSommer johannaSommer left a comment


I think my initial comment wasn't super clear, sorry about that. 😕 We definitely have to keep the logic of the imported modules and access these items through the dict (e.g. no top-level optimum import).

After reviewing it, I think the only change we need is to remove it from pyproject.toml and adjust the import as follows:

    def import_algorithm_packages(self) -> Dict[str, Any]:
        """
        Provide the algorithm packages for the algorithm.

        Returns
        -------
        Dict[str, Any]
            The algorithm packages.
        """
        import optimum.quanto as quanto
        from optimum.quanto import Calibration, freeze, quantize

        return dict(Calibration=Calibration, freeze=freeze, quantize=quantize, quanto=quanto)

And then adjust the getattr calls to:

weights = getattr(imported_modules["quanto"], smash_config["weight_bits"])
activations = getattr(imported_modules["quanto"], smash_config["act_bits"]) if smash_config["act_bits"] is not None else None
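For illustration, the pattern being preserved here, imports deferred into the function and accessed through the returned dict, can be sketched with a stdlib stand-in (illustrative names: json replaces optimum.quanto):

```python
from typing import Any, Dict


def import_algorithm_packages() -> Dict[str, Any]:
    # The heavy/optional dependency is imported here, not at module top
    # level, so merely importing this module never pulls it in.
    import json  # stand-in for optimum.quanto

    return dict(json=json)


imported_modules = import_algorithm_packages()

# Callers resolve attributes by name via getattr, mirroring how the
# configured bit-width names are looked up on the quanto module above.
loads = getattr(imported_modules["json"], "loads")
print(loads('{"weight_bits": "qint8"}'))  # prints {'weight_bits': 'qint8'}
```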


  # limitations under the License.

- from typing import Any, Dict
+ from typing import Any, Dict, TYPE_CHECKING
Contributor Author


Note that this is only to keep mypy happy; it will not execute at runtime.
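The TYPE_CHECKING guard works like this minimal sketch (illustrative: decimal.Decimal stands in for an optimum.quanto type, and the function name is made up):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers such as mypy; at runtime
    # this branch is skipped, so the package need not be importable.
    from decimal import Decimal  # stand-in for an optional dependency's type


def double(x: "Decimal") -> "Decimal":
    # The string annotation defers evaluation, keeping mypy happy
    # without a runtime import.
    return x * 2
```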

@ParagEkbote
Contributor Author


Is the whisper-s2t dep going to be updated in future versions?

Member

@simlang simlang left a comment


Thank you for looking at this! I left some comments, which are probably some leftovers from adding imported_modules again.
What do you think?

Comment threads on src/pruna/algorithms/quantization/quanto.py (three marked outdated)

@ParagEkbote ParagEkbote requested a review from simlang August 26, 2025 17:42
Member

@simlang simlang left a comment


Looks good! LGTM! 🚀
Thanks!

Member

@johannaSommer johannaSommer left a comment


Thanks a lot for your patience and for tackling this!

@johannaSommer johannaSommer changed the title Fix namespace package conflict of optimum and optimum-quanto fix: namespace package conflict of optimum and optimum-quanto Aug 29, 2025
@johannaSommer johannaSommer merged commit f7b7342 into PrunaAI:main Aug 29, 2025
5 of 6 checks passed
@ParagEkbote ParagEkbote deleted the fix-namespace-conflict branch September 2, 2025 15:30

Development

Successfully merging this pull request may close these issues.

[BUG] Fix namespace package conflict of optimum and optimum-quanto

3 participants