Understanding CNNs by Visualizing Class Activations with Grad-CAM

data-mining keras cnn visualization heatmap interpretation
2022-02-28 17:37:00

I followed a blog post on understanding and visualizing class activations to see where a CNN looks when it makes a prediction. The example given there works fine.

I have developed a custom model that uses an autoencoder for image similarity. The model takes 2 images and predicts a similarity score. It has the following layers:


Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 256, 256, 3)  0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 256, 256, 3)  0                                            
__________________________________________________________________________________________________
encoder (Sequential)            (None, 7, 7, 256)    3752704     input_1[0][0]                    
                                                                 input_2[0][0]                    
__________________________________________________________________________________________________
Merged_feature_map (Concatenate (None, 7, 7, 512)    0           encoder[1][0]                    
                                                                 encoder[2][0]                    
__________________________________________________________________________________________________
mnet_conv1 (Conv2D)             (None, 7, 7, 1024)   2098176     Merged_feature_map[0][0]         
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 7, 7, 1024)   4096        mnet_conv1[0][0]                 
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 7, 7, 1024)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
mnet_pool1 (MaxPooling2D)       (None, 3, 3, 1024)   0           activation_1[0][0]               
__________________________________________________________________________________________________
mnet_conv2 (Conv2D)             (None, 3, 3, 2048)   8390656     mnet_pool1[0][0]                 
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 3, 3, 2048)   8192        mnet_conv2[0][0]                 
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 3, 3, 2048)   0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
mnet_pool2 (MaxPooling2D)       (None, 1, 1, 2048)   0           activation_2[0][0]               
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 1, 2048)      0           mnet_pool2[0][0]                 
__________________________________________________________________________________________________
fc1 (Dense)                     (None, 1, 256)       524544      reshape_1[0][0]                  
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 1, 256)       1024        fc1[0][0]                        
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 1, 256)       0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 1, 256)       0           activation_3[0][0]               
__________________________________________________________________________________________________
fc2 (Dense)                     (None, 1, 128)       32896       dropout_1[0][0]                  
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 1, 128)       512         fc2[0][0]                        
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 1, 128)       0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 1, 128)       0           activation_4[0][0]               
__________________________________________________________________________________________________
fc3 (Dense)                     (None, 1, 64)        8256        dropout_2[0][0]                  
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 1, 64)        256         fc3[0][0]                        
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 1, 64)        0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 1, 64)        0           activation_5[0][0]               
__________________________________________________________________________________________________
fc4 (Dense)                     (None, 1, 1)         65          dropout_3[0][0]                  
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 1, 1)         4           fc4[0][0]                        
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 1, 1)         0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
dropout_4 (Dropout)             (None, 1, 1)         0           activation_6[0][0]               
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 1)            0           dropout_4[0][0]                  
==================================================================================================

The encoder layer consists of the following layers:

conv2d_1
batch_normalization_1
activation_1
max_pooling2d_1
conv2d_2
batch_normalization_2
activation_2
max_pooling2d_2
conv2d_3
batch_normalization_3
activation_3
conv2d_4
batch_normalization_4
activation_4
conv2d_5
batch_normalization_5
activation_5
max_pooling2d_3

I want to change my custom network to accept a single input instead of two, using only the encoder part, and generate heatmaps to understand what the encoder part has learned.

The idea is that if the network predicts "not similar", I can generate a heatmap for each image in turn and compare them.
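The single-input rewrite described above could be sketched like this. Note the model below is a tiny hypothetical stand-in, not the real architecture (the real shared encoder maps 256×256×3 to 7×7×256); only the layer names `encoder`, `input_1`, `input_2`, and `Merged_feature_map` are taken from the summary above:

```python
from tensorflow.keras import layers, models

# Tiny stand-in for the trained two-input similarity model
# (this toy encoder maps 64x64x3 -> 8x8x4).
encoder = models.Sequential(name='encoder')
encoder.add(layers.Conv2D(4, 3, strides=8, padding='same',
                          activation='relu', input_shape=(64, 64, 3)))

a = layers.Input(shape=(64, 64, 3), name='input_1')
b = layers.Input(shape=(64, 64, 3), name='input_2')
merged = layers.Concatenate(name='Merged_feature_map')([encoder(a), encoder(b)])
score = layers.Dense(1, activation='sigmoid')(layers.Flatten()(merged))
model = models.Model([a, b], score)

# The actual idea: reuse ONLY the shared encoder behind a single input.
enc = model.get_layer('encoder')
x_in = layers.Input(shape=(64, 64, 3))
single_input_model = models.Model(x_in, enc(x_in))
print(single_input_model.output_shape)
```

Because the encoder is a shared `Sequential`, wrapping it in its own `Model` reuses the trained weights without touching the original two-input graph.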

What I have done so far is the following.

I passed the two images to the network and obtained the prediction as described in the blog:

preds = model.predict([x, y])
class_idx = np.argmax(preds[0])
class_output = model.output[:, class_idx]

Then I took the last convolutional layer and computed the gradient of the class output with respect to its feature map:

last_conv_layer = model.get_layer('encoder')
grads = K.gradients(class_output, last_conv_layer.get_output_at(-1))[0]

The output of `grads` is:

Tensor("gradients/Merged_feature_map/concat_grad/Slice_1:0", shape=(?, 7, 7, 256), dtype=float32)

Then I pooled the gradients as described in the blog:

pooled_grads = K.mean(grads, axis=(0, 1, 2))
iterate = K.function([input_img], [pooled_grads, last_conv_layer.get_output_at(-1)[0]])

At this point, inspecting the function's inputs and outputs shows the following:

iterate.inputs
[<tf.Tensor 'input_1:0' shape=(?, 256, 256, 3) dtype=float32>]

iterate.outputs
[<tf.Tensor 'Mean:0' shape=(256,) dtype=float32>, <tf.Tensor 'strided_slice_1:0' shape=(7, 7, 256) dtype=float32>]

But I now get an error at the following line:

pooled_grads_value, conv_layer_output_value = iterate([x])

The error is:

You must feed a value for placeholder tensor 'input_2' with dtype float and shape [?,256,256,3]
     [[{{node input_2}}]]

It seems to be asking for the second input image, but as shown above, `iterate.inputs` lists only one image.

Where am I going wrong? How can I restrict it to accept only one image? Or is there an easier way to accomplish this?
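Two things seem plausible here (both are sketches, not the blog's code). Since `K.function` is built from the full two-input graph, the session still needs a value for the `input_2` placeholder, so passing both input tensors, `iterate = K.function(model.inputs, [...])`, and calling `iterate([x, y])` should silence the error. Alternatively, the per-image heatmap can be computed on a single-input encoder wrapper with `tf.GradientTape` (TF 2.x eager), which sidesteps the extra placeholder entirely. The tiny model below is a hypothetical stand-in:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical stand-in encoder (64x64x3 -> 8x8x4 feature map).
encoder = models.Sequential(name='encoder')
encoder.add(layers.Conv2D(4, 3, strides=8, padding='same',
                          activation='relu', input_shape=(64, 64, 3)))

# Single-input model exposing both the feature map and the score.
inp = layers.Input(shape=(64, 64, 3))
feat = encoder(inp)
score = layers.Dense(1, activation='sigmoid')(
    layers.GlobalAveragePooling2D()(feat))
grad_model = models.Model(inp, [feat, score])

x = tf.random.uniform((1, 64, 64, 3))
with tf.GradientTape() as tape:
    conv_out, pred = grad_model(x)
    class_output = pred[:, 0]

# Gradient of the class output w.r.t. the feature map,
# then one pooled weight per channel (as in Grad-CAM).
grads = tape.gradient(class_output, conv_out)
pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

# Weight each channel by its pooled gradient, average, ReLU, normalize.
heatmap = tf.reduce_mean(conv_out[0] * pooled_grads, axis=-1)
heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
print(heatmap.shape)
```

Running this once per image gives the per-image heatmaps to compare when the network predicts "not similar".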
