A Not-So-Deep Introduction to Deep Learning

Deep learning introduction simplified

Sagun Raj Lage
4 min read · Oct 1, 2022

Some background first

Artificial intelligence is the field of computer science that focuses on building algorithms that enable machines to mimic the workings of the human brain, processing data and information so that they can make informed decisions. It is a broad field with various branches such as natural language processing, robotics, and expert systems. Machine learning is one such branch of AI that uses computer science and statistics to enable machines to learn from data and past experience, solving problems by identifying patterns and making predictions or decisions without being explicitly programmed.

Machine learning algorithms require human intervention to be taught which features should be extracted from the data and what the desired output is. However, there is a sub-field of machine learning, called deep learning, that can automatically extract useful features from the data in order to make predictions or decisions. While machine learning algorithms require more structured data to learn, deep learning algorithms are powerful enough to process raw, unstructured data and to determine the set of features that separates the data into various classes.

TensorFlow and PyTorch are some popular tools used for deep learning.
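To make that a little more concrete, here is a minimal sketch of what defining a deep learning model looks like in TensorFlow's Keras API. The layer sizes and the 10-feature input shape are made up purely for illustration, and the snippet assumes TensorFlow 2.x is installed.

```python
import tensorflow as tf

# A tiny fully connected network: 10 input features, one hidden layer,
# and a single sigmoid output for a yes/no style prediction.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # prints the layers and parameter counts
```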

Why Deep Learning?

Traditional machine learning algorithms require more structured data and human intervention to determine the set of features that lets them tell different inputs apart. For instance, if face detection is to be performed using traditional machine learning algorithms, the approach they take is to first identify the various parts of the face, such as the eyes, ears, mouth, and nose. To be able to identify those parts, the algorithm has to decompose each of them and study their low-level features, like the orientation and direction of their lines or pairs of lines, which is complex and time-consuming when done manually. This means that a human expert has to teach the algorithm about the different features that may be present in a face.

But this is different when it comes to deep learning. Deep learning algorithms are powerful enough to figure out the distinguishing features of a face automatically, without a human expert having to teach them. They do this by first detecting lines and edges, which leads to the detection of corners and facial parts like the eyes, ears, and nose. Finally, all of those facial parts are composed together to detect the structure of the face. In this way, deep learning delivers better performance with less human intervention, even when unstructured data like images are fed to the algorithms.
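This edges-to-parts-to-faces idea is exactly what stacked convolutional layers are meant to capture. The sketch below, again in TensorFlow/Keras with made-up layer sizes and an illustrative 64x64 input, shows how such a hierarchy is typically expressed in code; the comments describe the kind of pattern each stage tends to learn, not anything the code enforces.

```python
import tensorflow as tf

# A small convolutional network for a hypothetical "face / not a face" task.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),           # small RGB images (illustrative size)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),   # early layers: edges and simple lines
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # middle layers: corners and textures
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # deeper layers: parts like eyes or a nose
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # combine the parts into a face/no-face score
])

model.summary()
```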

Why is Deep Learning trending today?

Deep learning algorithms have been around for a long time. But they are gaining traction only now, for three major reasons:

  1. Abundance of Data: Deep learning algorithms require a large amount of data to work. In the past, it was very difficult to find such large amounts of data. These days, with the rise of internet-connected electronic devices in various fields, it has become much easier to collect data. There has also been significant development in storage technology, enabling large amounts of data to be stored. Thus, data is abundant, and deep learning algorithms are finally getting what they need in order to work: huge amounts of data.
  2. Availability of Powerful Hardware: Deep learning algorithms require high processing power to work. In the past, there was a lack of hardware powerful enough to run such algorithms. Today, with the development of powerful Graphics Processing Units (GPUs), computers can run these algorithms and deliver results by making use of the GPU's parallelization capability. Parallelization, or simply parallel computing, is a type of computation that breaks a large problem down into smaller ones and solves those smaller problems simultaneously. (A quick way to check whether a GPU is available is shown in the sketch after this list.)
  3. Availability of Software: Free and open-source software tools like TensorFlow and PyTorch have made it possible to build and deploy deep learning models in a short time. They enable users of various skill levels to use pre-trained models, or even to train their own, for solving a wide range of problems. After building the models, users can deploy them on their own on-premises infrastructure, on their devices, or in the cloud, so that the models can be used by a larger audience.
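As a small illustration of the last two points, here is a hedged sketch, assuming TensorFlow 2.x, that checks whether a GPU is visible to the framework and then loads a model pre-trained on ImageNet instead of training one from scratch.

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means everything runs on the CPU.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs detected: {len(gpus)}")

# Load MobileNetV2 with weights pre-trained on ImageNet, ready for inference
# (the weights are downloaded on first use).
model = tf.keras.applications.MobileNetV2(weights="imagenet")
model.summary()
```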

What are the advantages of Deep Learning?

The advantages of deep learning are described below:

  • Feature engineering without human intervention: Feature engineering is the act of extracting features from raw data to improve model accuracy. Deep learning is able to perform feature engineering on its own, unlike traditional machine learning, which requires human experts to do it.
  • Works well with unstructured data: Deep learning can also process unstructured data like text, images, audio, and video. Unstructured data do not have a defined data model, but deep learning can still derive training-relevant insights from them (a short text example follows this list).
  • Delivers results of consistent quality: Deep learning models can perform a large number of repetitive activities in a small amount of time without compromising the quality of the results. As long as the model receives relevant training data, it works efficiently and accurately.
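To show what "processing unstructured data" can look like in practice, here is a minimal, hypothetical sketch, again assuming TensorFlow 2.x, that turns raw text strings into integer token sequences a model could consume. The example sentences and vocabulary size are made up for illustration.

```python
import tensorflow as tf

# A couple of raw, unstructured text samples (made up for this example).
texts = ["deep learning is fun", "machine learning needs structured data"]

# Build a vocabulary from the raw strings and map words to integer ids.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vectorizer.adapt(texts)

# The resulting integer sequences could be fed into an embedding layer of a model.
print(vectorizer(["deep learning needs data"]))
```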

That’s all for now, folks! Deep learning, true to its name, goes really deep, and we have only scratched the surface. Let’s just keep digging!

By the way, here’s an awesome video to learn more about deep learning:

If you found this post useful and would like to support me, please “buy me a coffee.”

