I'm trying to load a pretrained model with torch.load.
I get the following error:
ModuleNotFoundError: No module named 'utils'
I've checked that the path I am using is correct by opening it from the command line. What could be causing this?
Here's my code:
import torch
import sys
PATH = './gan.pth'
model = torch.load(PATH)
model.eval()
EDIT: Full error traceback:
Traceback (most recent call last):
File "load.py", line 6, in <module>
model = torch.load(PATH)
File "C:\Users\user\anaconda3\envs\pytorch-flask\lib\site-packages\torch\serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\user\anaconda3\envs\pytorch-flask\lib\site-packages\torch\serialization.py", line 774, in _legacy_load
result = unpickler.load()
ModuleNotFoundError: No module named 'utils'
I've had the exact same error and wondered what the problem was.
It turns out the object saved with torch.save() required the module utils.
Example:
import torch
from utils import some_function

model = some_function()
# torch.save needs the object and a file path; the pickle keeps a reference to 'utils'.
torch.save(model, './gan.pth')
When saving with torch.save() in the example above, pickle records that the module utils was used to build the saved object. Thus, when loading the '.pth' file, that same module utils must be importable, as shown in the sketch below.
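A minimal sketch of a fixed loading script under that assumption; './training_code' is a hypothetical placeholder for whichever directory actually contains utils.py:

import sys
import torch

# Make the directory containing utils.py importable before unpickling.
# './training_code' is a placeholder path, not from the original question.
sys.path.insert(0, './training_code')

PATH = './gan.pth'
# torch.load unpickles the saved object; pickle imports 'utils' on its own,
# so it only needs to be importable from sys.path.
model = torch.load(PATH)
model.eval()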
EDIT: this answer does not resolve the question above, but it addresses another issue in the given code.
Usually, the .pth file stores only the parameters (the state_dict) of a model, not the model itself. To load the model you need both the .pt/.pth file and the Python code of your model class. Then you can load it like this:
import torch
import torch.nn as nn

# your model class (the same code used when the weights were saved)
class YourModel(nn.Module):
    def __init__(self):
        super(YourModel, self).__init__()
        ...  # define your layers here

    def forward(self, x):
        ...  # define the forward pass here

# the pytorch save-file in which you stored your trained model
model_file = "<your path>"
model = YourModel()
# load_state_dict fills the parameters in place; don't reassign its return value
model.load_state_dict(torch.load(model_file))
model.eval()
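For completeness, a minimal save-side sketch matching the code above (the path string is just a placeholder): storing only the state_dict rather than the whole pickled model also sidesteps the ModuleNotFoundError, because no references to custom modules such as utils end up in the file.

import torch

model = YourModel()
# ... train the model here ...

# Save only the parameters (an OrderedDict of tensors), not the pickled class.
torch.save(model.state_dict(), "<your path>")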