Wayne Cheng

Methodology for Tuning Artificial Neural Networks

Updated: Feb 15



Do you need help tuning your artificial neural network, but don't know where to start?


Tuning artificial neural networks is a matter of trial and error. The difficulty is knowing which parameter to tune, and how to systematically adjust each parameter while tracking the results.


The following describes the methods that I find effective for tuning artificial neural networks. I create my neural network models in a Jupyter notebook, running Keras version 2.3.1 and TensorFlow version 2.0.0.


Start with a Single Hidden Layer


In theory, a single hidden layer with enough nodes can approximate a wide range of functions (the universal approximation theorem), and in practice many machine learning problems can be solved well with a single hidden layer. So this is a good place to start.


In addition, shallow neural networks are less prone to instability caused by vanishing or exploding gradients. A deep network can represent some functions more efficiently than a shallow one, but it may also be harder to train reliably.


If the dataset is too large to experiment on quickly with a single hidden layer, it can be reduced to a smaller subset. Running experiments on a smaller model is not only faster, but can also provide helpful information for tackling a larger model.
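As a minimal sketch, assuming the training data is loaded as NumPy arrays named x_train and y_train (hypothetical names), a random subset can be drawn for quick tuning runs:

import numpy as np

# Assumed: x_train and y_train hold the full training set.
subset_size = 5000                                   # arbitrary example size
idx = np.random.choice(len(x_train), subset_size, replace=False)
x_small, y_small = x_train[idx], y_train[idx]        # smaller set for fast experiments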


Adjusting and training with a single hidden layer can be done using a for loop:

from keras.models import Sequential
from keras.layers import Dense

history = []
score = []

for i in range(5, 13):
    node = 2**i                          # 32, 64, ..., 4096 nodes
    model = Sequential()
    model.add(Dense(node, ...))          # fill in activation, input shape, etc.
    model.compile(...)                   # fill in loss, optimizer, metrics
    model.summary()
    history.append(model.fit(...))       # fill in training data and epochs
    score.append(model.evaluate(...))    # fill in test data

In this example, the number of nodes in the hidden layer doubles on each iteration, starting at 32 and ending at 4096. The training histories and evaluation scores are stored in lists, which can be reviewed later.
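One way to review the stored scores afterward (a sketch, assuming metrics=['accuracy'] was passed to model.compile, so each score is a [loss, accuracy] pair):

for i, s in enumerate(score):
    print("nodes = %4d, test loss = %.4f, test accuracy = %.4f" % (2**(i + 5), s[0], s[1]))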


A similar sweep can be made over the number of filters in a CNN.
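For instance, a sketch of the same loop applied to a small CNN (the 28x28 grayscale input shape and 10-class output are assumptions for illustration):

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

for i in range(3, 8):
    filters = 2**i                                   # 8, 16, ..., 128 filters
    model = Sequential()
    model.add(Conv2D(filters, kernel_size=3, activation='relu',
                     input_shape=(28, 28, 1)))       # assumed input shape
    model.add(Flatten())
    model.add(Dense(10, activation='softmax'))       # assumed 10-class output
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.summary()                                  # compare parameter counts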


Incrementally Add Additional Hidden Layers


After an adequate size for the first hidden layer is chosen, another hidden layer can be added. The size of the second layer can be adjusted in a similar fashion to the first layer.
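A sketch of that sweep, following the same placeholder convention as above and assuming a hypothetical first-layer size of 512 nodes (substitute the size chosen in the previous step):

from keras.models import Sequential
from keras.layers import Dense

history = []
score = []

for i in range(4, 10):
    node = 2**i                          # second layer: 16, 32, ..., 512 nodes
    model = Sequential()
    model.add(Dense(512, ...))           # first layer fixed at the chosen size
    model.add(Dense(node, ...))          # sweep the size of the second layer
    model.compile(...)
    model.summary()
    history.append(model.fit(...))
    score.append(model.evaluate(...))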


If possible, the type of the second hidden layer can also be explored. For example, would a second layer of LSTM nodes be better than a Dense layer? What if a pooling layer was added in between the hidden layers?
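As a minimal sketch of that kind of comparison for sequence data (the input shape and layer sizes here are assumptions for illustration, not values from a real experiment):

from keras.models import Sequential
from keras.layers import LSTM, Dense, MaxPooling1D

timesteps, features = 100, 16            # assumed input shape

# Variant A: second hidden layer is another LSTM.
model_a = Sequential()
model_a.add(LSTM(64, return_sequences=True, input_shape=(timesteps, features)))
model_a.add(LSTM(32))

# Variant B: second hidden layer is Dense.
model_b = Sequential()
model_b.add(LSTM(64, input_shape=(timesteps, features)))
model_b.add(Dense(32, activation='relu'))

# Variant C: pooling layer inserted between the two hidden layers.
model_c = Sequential()
model_c.add(LSTM(64, return_sequences=True, input_shape=(timesteps, features)))
model_c.add(MaxPooling1D(pool_size=2))
model_c.add(LSTM(32))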


Layers should only be added if they show significant improvement. Adding more layers may increase accuracy, at the expense of increasing instability.


Adjust Remaining Parameters


Once the number of nodes, the type of layers, and the number of layers have been determined, the remaining parameters can be adjusted. These include (a dropout-rate sweep is sketched after the list):


  • dropout rate

  • kernel / stride size (for CNN)

  • step size (for RNN)

  • batch size
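For example, a dropout-rate sweep can reuse the same loop pattern (the layer sizes here are hypothetical; the Dropout layer randomly zeroes the given fraction of its inputs during training):

from keras.models import Sequential
from keras.layers import Dense, Dropout

history = []
score = []

for rate in [0.1, 0.2, 0.3, 0.4, 0.5]:
    model = Sequential()
    model.add(Dense(512, ...))           # sizes chosen in the earlier steps
    model.add(Dropout(rate))             # only the dropout rate changes per run
    model.add(Dense(64, ...))
    model.compile(...)
    history.append(model.fit(...))
    score.append(model.evaluate(...))

Batch size can be swept the same way, by varying the batch_size argument passed to model.fit.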



Thank you for reading. I hope you find this guide helpful for tuning your artificial neural networks.


Questions or comments? You can reach me at info@audoir.com



Wayne Cheng is an A.I., machine learning, and deep learning developer at Audoir, LLC. His research involves the use of artificial neural networks to create music. Prior to starting Audoir, LLC, he worked as an engineer in various Silicon Valley startups. He has an M.S.E.E. degree from UC Davis, and a Music Technology degree from Foothill College.

Copyright © 2020 Audoir, LLC

All rights reserved