
High-resolution image classification

Pre-trained networks like VGG16 or Inception usually work with low-resolution inputs, under ~500 px.

Is it possible to add a high-resolution convolution layer (or two) before the very first layer of pre-trained VGG16 / Inception to make the network be able to consume high-resolution pictures?

As far as I know, the first layers are the hardest to train; training them took a lot of data and resources.

I wonder whether it would be possible to freeze the pre-trained network and train only the newly attached high-resolution layer, on an average GPU card and with about 3,000 examples. Could it be done in a couple of hours?
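The setup being asked about could be sketched in tf.keras roughly as follows. This is a minimal sketch under my own assumptions (a single stride-4 "adapter" conv, a 2-class head, and `weights=None` to keep it light; in practice you would pass `weights="imagenet"`); it is not from the question itself:

```python
import tensorflow as tf

# Frozen pre-trained base; use weights="imagenet" for real transfer learning.
base = tf.keras.applications.VGG16(include_top=False, weights=None)
base.trainable = False  # freeze the pre-trained network

inputs = tf.keras.Input(shape=(1024, 1024, 3))
# New high-resolution layer: stride 4 brings 1024 px down to the ~256 px
# range the base can consume, and outputs 3 channels so the base's first
# conv still sees an RGB-like input.
x = tf.keras.layers.Conv2D(3, 3, strides=4, padding="same")(inputs)
x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
# Only the adapter conv and the Dense head are trainable now.
```

With the base frozen, only the adapter's and head's weights receive gradients, which is what would make training on ~3,000 examples feasible at all.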

Also, if you know of any examples of using high-resolution images for image classification, please share a link.

P.S.

The problem with the usual downscaling approach is that in our case tiny details, like small cracks or dirt dots, are very important, and they are lost in lower-resolution images.
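A toy NumPy example of this effect (my own illustration, not from the post): average-pooling a single bright defect pixel by 4x dilutes its signal 16-fold, which is how small cracks vanish on downscaled images:

```python
import numpy as np

# Toy 8x8 "image" with a one-pixel crack at full brightness.
img = np.zeros((8, 8))
img[3, 3] = 1.0

# 4x downscale via average pooling: group into 4x4 blocks and average.
small = img.reshape(2, 4, 2, 4).mean(axis=(1, 3))
print(small.max())  # the defect is now 16x fainter
```

At a realistic 1024 → 256 downscale the same 4x pooling applies, so a one-pixel defect keeps only 1/16 of its contrast.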

Sergey Devitsyn asked Jan 21 '26 12:01

1 Answer

Unfortunately, it's unlikely you'll be able to freeze a pretrained network and just add extra layers at the start, since the initial layers require three-channel inputs and are designed to spot features at the image's original scale.

Instead, you could try modifying the network's architecture so that the initial layer takes in the 1024x1024 images directly, then downscales using pooling or striding.

For example, you could try adjusting the stride for the first conv layer in the Slim model definition of Inception V3 to be 8 instead of 2: https://github.com/tensorflow/models/blob/master/slim/nets/inception_v3.py

That would allow you to read in 4x-larger images while keeping the rest of the network the same. I expect you'll need to do a full retraining, though, unfortunately.
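The arithmetic behind that suggestion can be sketched as follows (my own illustration; for simplicity it uses SAME padding, where the output size along one axis is just ceil(n / stride)). Quadrupling both the input size and the first conv's stride leaves the feature-map size, and hence everything downstream, unchanged:

```python
import math

def conv_output_size(n: int, kernel: int, stride: int,
                     padding: str = "same") -> int:
    """Spatial output size along one axis of a 2-D convolution."""
    if padding == "same":
        return math.ceil(n / stride)
    return (n - kernel) // stride + 1  # "valid" padding

# Original: 256 px input, first conv stride 2.
print(conv_output_size(256, 3, 2))   # 128
# Modified: 1024 px input, first conv stride 8.
print(conv_output_size(1024, 3, 8))  # 128
```

Since the first layer emits the same spatial size in both cases, the rest of the architecture needs no changes, only retraining.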

Pete Warden answered Jan 24 '26 16:01
