fix: namespace package conflict of optimum and optimum-quanto #298

johannaSommer merged 12 commits into PrunaAI:main
Conversation
johannaSommer
left a comment
Hey @ParagEkbote thanks a lot for your effort on fixing this! The changes look good to me content wise, did you get the chance to test this locally for any tiny model?
Additionally, it would be important to keep the "import_algorithm_packages" function - it helps us isolate imports for better overall handling of all of our different packages. Otherwise almost ready to merge, thanks again 😊🫡
I've added back the function as requested, but fully resolving the namespace conflict remains a bit more difficult. For now, with my changes, only the optimum-quanto package is selected. The test code is as follows:

```python
import numpy as np
import torch
import torchvision
from torchvision import transforms

from pruna import SmashConfig, smash

# Load a small Torch model (just for testing)
model = torchvision.models.resnet18(weights=None)
model.eval()

# Initialize SmashConfig
smash_config = SmashConfig()
smash_config["quantizer"] = "quanto"

# Smash the model
smashed_model = smash(
    model=model,
    smash_config=smash_config,
)

# Build a random test image (uint8, since np.random.randint requires an integer dtype)
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
input_tensor = transforms.ToTensor()(image).unsqueeze(0)

# Run inference
with torch.no_grad():
    output = smashed_model(input_tensor)

print("Output shape:", output.shape)
```

WDYT?
johannaSommer
left a comment
I think my initial comment wasn't super clear - sorry about that. 😕 We definitely have to keep the logic of the imported modules and access these items through the dict (e.g. no top-level optimum import).
After reviewing it, I think the change we need is only to remove it from the pyproject.toml and adjust the import as follows:
```python
def import_algorithm_packages(self) -> Dict[str, Any]:
    """
    Provide the algorithm packages for the algorithm.

    Returns
    -------
    Dict[str, Any]
        The algorithm packages.
    """
    import optimum.quanto as quanto
    from optimum.quanto import Calibration, freeze, quantize

    return dict(Calibration=Calibration, freeze=freeze, quantize=quantize, quanto=quanto)
```

And then adjust the getattr to:

```python
weights = getattr(imported_modules["quanto"], smash_config["weight_bits"])
activations = getattr(imported_modules["quanto"], smash_config["act_bits"]) if smash_config["act_bits"] is not None else None
```
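For context, this lookup resolves the dtype name string from the config against the quanto module. A minimal self-contained sketch of the pattern, using a stand-in namespace instead of optimum.quanto (the dtype names and config values here are assumptions for illustration):

```python
from types import SimpleNamespace

# Stand-in for the dict returned by import_algorithm_packages;
# SimpleNamespace mimics a module exposing dtype objects as attributes.
imported_modules = {"quanto": SimpleNamespace(qint8="qint8-dtype", qint4="qint4-dtype")}

# Hypothetical smash_config values; real values depend on Pruna's config schema.
smash_config = {"weight_bits": "qint8", "act_bits": None}

# getattr resolves the configured name string to the attribute of the module
weights = getattr(imported_modules["quanto"], smash_config["weight_bits"])
activations = (
    getattr(imported_modules["quanto"], smash_config["act_bits"])
    if smash_config["act_bits"] is not None
    else None
)
print(weights, activations)  # qint8-dtype None
```

This keeps the optimum import confined to import_algorithm_packages while the rest of the code only touches the returned dict.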
```diff
 # limitations under the License.

-from typing import Any, Dict
+from typing import Any, Dict, TYPE_CHECKING
```
Note that this is only to keep mypy happy; it will not execute at runtime.
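For readers unfamiliar with the idiom, a minimal self-contained sketch of the TYPE_CHECKING guard (the guarded module name is hypothetical):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # This branch is seen only by static type checkers such as mypy.
    # At runtime TYPE_CHECKING is False, so the import never executes
    # and the optional dependency is not needed to run the module.
    import some_optional_heavy_dependency  # hypothetical name

print(TYPE_CHECKING)  # False at runtime
```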
Is the whisper-s2t dep going to be updated in future versions?
johannaSommer
left a comment
Thanks a lot for your patience and for tackling this!
Description
As described in the issue, I have removed the namespace conflict by only using the optimum-quanto package and removing optimum from deps. Could you please review?
cc: @davidberenstein1957
Related Issue
Fixes #296
Type of Change
How Has This Been Tested?
Checklist
Additional Notes