I am learning neural networks and built a simple one in Keras to classify the iris dataset from the UCI Machine Learning Repository. I used a network with one hidden layer of 8 nodes, trained with the Adam optimizer at a learning rate of 0.0005 for 200 epochs. The output layer uses softmax, with categorical cross-entropy as the loss. I am getting the following learning curves.
As you can see, the accuracy learning curve has a lot of flat regions, and I don't understand why. The loss seems to decrease steadily, but the accuracy does not increase in the same manner. What do the flat regions in the accuracy curve imply? Why is the accuracy not increasing in those regions even though the loss is still decreasing?
Is this normal in training, or is it more likely that I am doing something wrong here? Here is my code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

# Load the iris data: four numeric features, class label in the last column
dataframe = pd.read_csv("iris.csv", header=None)
dataset = dataframe.values
X = dataset[:, 0:4].astype(float)
y = dataset[:, 4]
# Standardize the features to zero mean and unit variance
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Encode the string labels as integers, then one-hot encode them
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(y)
encoder = OneHotEncoder()
y = encoder.fit_transform(y.reshape(-1, 1)).toarray()
# create model
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(3, activation='softmax'))
# Compile model
adam = optimizers.Adam(lr=0.0005, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy',
              optimizer=adam,
              metrics=['accuracy'])
# Fit the model, holding out 20% of the data for validation
log = model.fit(X, y, epochs=200, batch_size=5, validation_split=0.2)
# Plot the loss and accuracy learning curves side by side
fig = plt.figure()
fig.suptitle("Adam, lr=0.0005, one hidden layer")
ax = fig.add_subplot(1, 2, 1)
ax.set_title('Cost')
ax.plot(log.history['loss'], label='Training')
ax.plot(log.history['val_loss'], label='Validation')
ax.legend()
ax = fig.add_subplot(1, 2, 2)
ax.set_title('Accuracy')
ax.plot(log.history['acc'], label='Training')
ax.plot(log.history['val_acc'], label='Validation')
ax.legend()
plt.show()
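
To make concrete what puzzles me, here is a toy calculation (separate from the model above, just my own sketch of how the two metrics behave): categorical cross-entropy keeps falling as the softmax output grows more confident, even when the predicted class, and therefore the accuracy, stays exactly the same.

import numpy as np

# One 3-class example, one-hot encoded; the true class is index 2
y_true = np.array([0.0, 0.0, 1.0])
# Two hypothetical predictions with the SAME argmax (class 2), so the
# accuracy is identical for both, but the second is more confident
p_early = np.array([0.30, 0.25, 0.45])
p_later = np.array([0.10, 0.10, 0.80])

def cross_entropy(y, p):
    # Categorical cross-entropy for a single example: -sum(y * log(p))
    return -np.sum(y * np.log(p))

print(cross_entropy(y_true, p_early))  # ~0.799
print(cross_entropy(y_true, p_later))  # ~0.223: loss falls, accuracy unchanged

If this is the effect at work, I can see how the loss could fall while the accuracy stays flat, but I am not sure it explains the long plateaus, especially since with validation_split=0.2 on the 150 iris samples the validation accuracy can only move in steps of 1/30.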