Using Keras with Tensorflow as backend to train cifar10 using vgg16.py from keras
By : Ken_g6
Date : March 29 2020, 07:55 AM
What I found after trying different configurations is that the VGG16 architecture is too big for an image of size 32x32. I used VGG16 only up to block3_pool, then added a 512-unit fully connected layer followed by a softmax classifier for 10 classes. Below is the modified code: code :
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

nb_classes = 10

base_model = VGG16(weights=None, include_top=False,
                   input_shape=X_train.shape[1:], classes=nb_classes)
# Truncate the network at block3_pool and attach a small classifier head
x = base_model.get_layer('block3_pool').output
x = Flatten(name='flatten')(x)
x = Dense(512, activation='relu', name='fc1')(x)
predictions = Dense(nb_classes, activation='softmax')(x)
# Keras 2 uses the plural keyword arguments inputs/outputs
model = Model(inputs=base_model.input, outputs=predictions)
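A self-contained sketch of the approach above, assuming CIFAR-10-shaped input (32x32x3) and using `weights=None` so nothing is downloaded; the dummy forward pass at the end only confirms that the shapes line up:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

nb_classes = 10

# Convolutional base only; for a 32x32 input, block3_pool outputs 4x4x256.
base_model = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
x = base_model.get_layer('block3_pool').output
x = Flatten(name='flatten')(x)
x = Dense(512, activation='relu', name='fc1')(x)
predictions = Dense(nb_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Dummy forward pass on a random batch of two images.
batch = np.random.rand(2, 32, 32, 3).astype('float32')
print(model.predict(batch).shape)
```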

Keras CIFAR10 finetuning on VGG16: How can I preprocess the input data to fit the VGG16 network?
By : L.Athan
Date : March 29 2020, 07:55 AM

Still downloading even Keras has the VGG16 pretrained model in ./keras/models
By : Ilya Putilin
Date : March 29 2020, 07:55 AM
The default value of the include_top parameter in the VGG16 function is True. This means that if you want to use the full pretrained VGG network (including the fully connected layers), Keras needs to download the vgg16_weights_tf_dim_ordering_tf_kernels.h5 file, not vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5.
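A minimal sketch of the two variants, using `weights=None` so no weight file is downloaded (this only builds the graph; with `weights='imagenet'`, Keras would fetch the corresponding .h5 file for each case):

```python
from tensorflow.keras.applications import VGG16

# include_top=True (the default) builds the full network ending in the
# fully connected layers and a 1000-way softmax; it pairs with the
# "kernels" weight file.
full = VGG16(weights=None, include_top=True)
print(full.output_shape)   # (None, 1000)

# include_top=False stops at block5_pool; it pairs with the much
# smaller "notop" weight file.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
print(base.output_shape)   # (None, 7, 7, 512)
```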

You must feed value for placeholder *_sample_weights while training UNET from VGG16
By : user1638674
Date : March 29 2020, 07:55 AM
The problem was in DataGenerator.__getitem__(): ndarray.resize() does not return a new NumPy array; it resizes the original array in place and returns nothing. Therefore the __getitem__ method returned None, None. The Keras error message is misleading.
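A minimal NumPy illustration of the pitfall: the in-place method `ndarray.resize()` returns None, while the module-level function `np.resize()` returns a new array.

```python
import numpy as np

a = np.arange(6)
# In-place: modifies a and returns None.
# refcheck=False avoids a spurious reference-count error in
# interactive sessions; the return value is None either way.
result = a.resize((2, 3), refcheck=False)
print(result)    # None
print(a.shape)   # (2, 3)

b = np.arange(6)
c = np.resize(b, (2, 3))   # returns a new array; b is unchanged
print(c.shape)   # (2, 3)
```

Returning the result of `a.resize(...)` from `__getitem__` therefore yields None, which is exactly the failure described above.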

Keras VGG16 preprocess_input modes
By : Vijay Saravate
Date : March 29 2020, 07:55 AM

