If you are getting this error while following the code from this tutorial,
https://pixellib.readthedocs.io/en/latest/image_ade20k.html:
ValueError: You are trying to load a weight file containing 293 layers into a model with 147 layers
The issue can be solved by installing these versions:
!pip3 install tensorflow==2.6.0
!pip3 install keras==2.6.0
!pip3 install imgaug
!pip3 install pillow==8.2.0
!pip install pixellib==0.5.2
!pip install labelme2coco==0.1.2
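After the installs, a quick sanity check (a minimal sketch; it just prints the installed versions) confirms the pins were picked up:
import tensorflow as tf
import keras

# Both should print 2.6.0 if the pinned installs succeeded
print(tf.__version__)
print(keras.__version__)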
I got it working by changing some imports in pixellib/semantic/deeplab.py.
You can substitute all tensorflow.python.keras imports with tensorflow.keras,
except for from tensorflow.python.keras.utils.layer_utils import get_source_inputs.
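For illustration only (the exact import lines vary with your pixellib version), the substitution looks like this:
# Before: from tensorflow.python.keras.layers import Conv2D
# After:
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.models import Model

# Leave this import as-is (the one exception mentioned above)
from tensorflow.python.keras.utils.layer_utils import get_source_inputs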
I feel the problem is that loading this pretrained model requires a GPU, so as a temporary workaround you can run it in Google Colaboratory:
Google Colaboratory > Runtime > Change runtime type > Hardware accelerator > GPU
Then upload the .h5 file to the Colab notebook you are working in (this might take around 30 minutes, which I know is a bit long). One way to upload it is shown in the snippet below.
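A minimal sketch of uploading the weights file from inside the notebook, assuming you are running in Colab (you can also just drag the file into the Files pane):
from google.colab import files

# Opens a file picker in the notebook; select deeplabv3_xception65_ade20k.h5
uploaded = files.upload()
print(list(uploaded.keys()))  # confirm the uploaded file name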
After this, use this code:
import pixellib
from pixellib.semantic import semantic_segmentation

# Create the segmentation instance and load the ADE20K-trained weights
segment_image = semantic_segmentation()
segment_image.load_ade20k_model("deeplabv3_xception65_ade20k.h5")

# Run semantic segmentation on an input image and save the result
segment_image.segmentAsAde20k("path_to_image", output_image_name="path_to_output_image")
Run this code line by line. It worked for me, and it will probably work for others as well.