I don't know why I'm getting such good-looking results:
Epoch 3/10
2937/2937 [==============================] - 12s 4ms/step - loss: 0.2836 - acc: 0.4679 - val_loss: 0.1937 - val_acc: 0.1980
Epoch 4/10
2937/2937 [==============================] - 12s 4ms/step - loss: 0.1355 - acc: 0.4679 - val_loss: 0.0866 - val_acc: 0.1980
Epoch 5/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.0580 - acc: 0.4679 - val_loss: 0.0342 - val_acc: 0.1980
Epoch 6/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.0223 - acc: 0.4679 - val_loss: 0.0120 - val_acc: 0.1980
Epoch 7/10
2937/2937 [==============================] - 14s 5ms/step - loss: 0.0082 - acc: 0.4679 - val_loss: 0.0040 - val_acc: 0.1980
My training data and labels are arrays of floats in the range [-0.05, 0.05], and I'm using a Keras Sequential model with an LSTM layer. Why is this happening? I previously ran into the opposite problem here: loss/val_loss were decreasing but the accuracy of the LSTM stayed the same, and I couldn't make sense of that either.
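For context, my setup looks roughly like the sketch below (a simplified reconstruction; the layer size, timestep count, and input shape are placeholders, not my real values):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Placeholder data: floats in [-0.05, 0.05], shaped (samples, timesteps, features)
X_train = np.random.uniform(-0.05, 0.05, size=(2937, 10, 1))
y_train = np.random.uniform(-0.05, 0.05, size=(2937, 1))
X_val = np.random.uniform(-0.05, 0.05, size=(735, 10, 1))
y_val = np.random.uniform(-0.05, 0.05, size=(735, 1))

model = Sequential()
model.add(LSTM(32, input_shape=(10, 1)))  # single LSTM layer
model.add(Dense(1))                       # one float output per sample

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val))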
Edit: I changed my code from:
model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['accuracy'])
to:
model.compile(optimizer = 'adam', loss = 'mean_absolute_error', metrics=['accuracy'])
But the results were the same.
Then I changed that line to:
model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['mean_squared_error'])
But that didn't help either; the results were as follows:
Train on 2937 samples, validate on 735 samples
Epoch 1/10
2937/2937 [==============================] - 90s 31ms/step - loss: 1.6645 - mean_squared_error: 0.0019 - val_loss: 0.7620 - val_mean_squared_error: 0.0010
Epoch 2/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.5503 - mean_squared_error: 0.0019 - val_loss: 0.3890 - val_mean_squared_error: 0.0010
Epoch 3/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.2837 - mean_squared_error: 0.0019 - val_loss: 0.1938 - val_mean_squared_error: 0.0010
Epoch 4/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.1355 - mean_squared_error: 0.0019 - val_loss: 0.0866 - val_mean_squared_error: 0.0010
Epoch 5/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.0580 - mean_squared_error: 0.0019 - val_loss: 0.0342 - val_mean_squared_error: 0.0010
Epoch 6/10
2937/2937 [==============================] - 13s 4ms/step - loss: 0.0223 - mean_squared_error: 0.0019 - val_loss: 0.0120 - val_mean_squared_error: 0.0010
Epoch 7/10
2937/2937 [==============================] - 13s 5ms/step - loss: 0.0082 - mean_squared_error: 0.0019 - val_loss: 0.0040 - val_mean_squared_error: 0.0010
Epoch 8/10
2937/2937 [==============================] - 14s 5ms/step - loss: 0.0035 - mean_squared_error: 0.0019 - val_loss: 0.0017 - val_mean_squared_error: 0.0010
Epoch 9/10
2937/2937 [==============================] - 13s 5ms/step - loss: 0.0022 - mean_squared_error: 0.0019 - val_loss: 0.0011 - val_mean_squared_error: 0.0010
Epoch 10/10
2937/2937 [==============================] - 13s 5ms/step - loss: 0.0019 - mean_squared_error: 0.0019 - val_loss: 0.0010 - val_mean_squared_error: 0.0010