Does deep learning have deep flaws?


Deep learning refers to artificial neural networks (ANNs) with many layers. It is a growing area of machine learning because of positive results in applications where the target is complex and the data sets are large. However, deep learning also has flaws.

Deep learning is a class of machine learning algorithms that use a cascade of layers of nonlinear processing units for feature extraction and transformation. The algorithms may be supervised (the computer is given example inputs together with their desired outputs) or unsupervised (the computer is given unlabeled input and the learning system must find structure in it on its own). Applications include pattern analysis (unsupervised) and classification (supervised).

Deep learning is a process of learning multiple levels of features or representations of the data. Higher-level features are derived from lower-level features to form a hierarchical representation. Deep learning is also called hierarchical learning, deep structured learning, or deep machine learning.

Flaws of Deep Learning:

As with artificial neural networks (ANNs), many issues can arise with deep neural networks (DNNs) if they are trained blindly, without experience. Some common issues are as follows:

  1. Overfitting: Deep neural networks are prone to overfitting because the added layers of abstraction allow them to model rare dependencies in the training data. To prevent overfitting, regularization methods such as weight decay or sparsity constraints can be applied during training.

A regularization method applied to deep neural networks more recently is dropout: some units are randomly excluded from the hidden layers during training, which helps break the rare dependencies that can occur in the training data.
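As a rough illustration, here is a minimal sketch of "inverted" dropout in NumPy; the batch size, layer width, and drop probability are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: randomly zero units during training and rescale
    the survivors so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return activations  # at test time, all units are used
    mask = rng.random(activations.shape) >= p_drop  # keep with prob 1 - p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((4, 5))                   # a batch of hidden-layer activations
h_train = dropout(h, p_drop=0.5)      # some units zeroed, survivors scaled up
h_test = dropout(h, training=False)   # identity at test time
```

Because each unit's neighbors randomly disappear during training, no unit can rely on a specific rare co-occurrence of other units, which is the dependency-breaking effect described above.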

  2. Computation time: Training takes considerably longer when the network is deeper than two layers. A simple convolutional neural network (reaching roughly a 3% error rate) can be implemented in 2-4 hours, depending on how familiar you are with it and assuming you already have a working, flexible implementation of backpropagation; training it, however, can take much longer. For complicated configurations, faster libraries can be used. Separately, according to a recent study of neural networks, for every correctly classified image one can generate a visually near-identical adversarial image that the network misclassifies. This suggests there are deep flaws in neural networks.
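The adversarial-image effect can be sketched with a one-step gradient-sign perturbation on a toy linear (logistic regression) classifier standing in for a trained network; the weights, input size, and `eps` below are illustrative assumptions, not a real model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # stand-in weights for a "trained" classifier
b = 0.0

x = rng.normal(size=64)            # a flattened input "image"
p_clean = sigmoid(w @ x + b)       # model's confidence that the class is 1
y = 1.0 if p_clean >= 0.5 else 0.0 # the model's own predicted label

# Gradient-sign step: nudge every pixel by at most eps in the direction
# that increases the loss for the model's current prediction.
grad_x = (p_clean - y) * w         # d(cross-entropy)/dx for logistic regression
eps = 0.25
x_adv = x + eps * np.sign(grad_x)  # visually similar: each pixel moves <= eps

p_adv = sigmoid(w @ x_adv + b)     # confidence in the original label drops
```

Even though no pixel changes by more than `eps`, the small per-pixel changes add up across many dimensions, so the prediction moves sharply toward the opposite class.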
  3. Vanishing gradient problem: Artificial neural networks are difficult to train with gradient-based learning methods and backpropagation (used in conjunction with an optimization method such as gradient descent), because the error gradient shrinks as it is propagated back through many layers.
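The shrinking-gradient effect can be seen from the chain rule alone: each sigmoid layer multiplies the backpropagated gradient by a local derivative of at most 0.25, so the signal decays geometrically with depth. A small numerical sketch (the depth and pre-activation values are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
depth = 30
z = rng.normal(size=depth)                    # pre-activations, one per layer
local_grads = sigmoid(z) * (1 - sigmoid(z))   # sigma'(z) <= 0.25 at every layer

grad = 1.0              # gradient of the loss at the output layer
per_layer = []
for g in local_grads:
    grad *= g           # chain rule: multiply in each layer's local derivative
    per_layer.append(grad)

# per_layer[0] is the layer nearest the output; per_layer[-1] is the earliest
# layer, whose gradient has been multiplied by 30 factors of at most 0.25.
print(per_layer[0], per_layer[-1])
```

The earliest layers therefore receive an almost-zero training signal, which is why very deep sigmoid networks were hard to train with plain gradient descent.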
