Transfer Learning

What is Transfer Learning?

In machine learning, transfer learning is the reuse of a pre-trained model on a new problem. In transfer learning, a machine exploits the knowledge gained from a previous task to improve generalization on another. For example, if you train a classifier to predict whether an image contains food, you could reuse the knowledge it gained during training to help recognize drinks.

How does Transfer Learning work?

In computer vision, for example, neural networks typically detect edges in the earlier layers, shapes in the middle layers, and task-specific features in the later layers. In transfer learning, the early and middle layers are reused, and only the later layers are retrained. This lets you take advantage of the labeled data from the task the model was originally trained on.
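As a minimal sketch of this reuse pattern, assuming PyTorch and torchvision are available (the 10-class target task is hypothetical), freezing the early layers and retraining only the final one might look like this:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet (weights download on first use).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the early and middle layers so their learned edge/shape
# detectors are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final, task-specific layer; only this layer will be trained.
num_classes = 10  # hypothetical target task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer's parameters are passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Since only the new layer receives gradients, training converges quickly even with modest data and hardware.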

Let’s go back to the example of a model trained to identify a backpack in an image, which will now be used to identify sunglasses. In the earlier layers, the model has learned to recognize objects, so we will retrain only the later layers so it learns what separates sunglasses from other objects.

In transfer learning, we try to transfer as much knowledge as possible from the previous task to the new one. This knowledge can take different forms depending on the problem and the data. For example, it could be how models are composed, which allows us to more easily identify novel objects.

How is Transfer Learning used?

The use of pre-trained models significantly reduces the time required for feature engineering and training. The first step is to choose a source model, ideally one trained on a large dataset. Many research institutions release such models and datasets as open projects, so you don’t have to create your own.

Types of Transfer Learning:

Domain adaptation:

In this setting, the dataset on which the model was trained differs from the target dataset. A great example is a spam email filtering model. Say the model was trained to identify spam emails for user A. When the model is then used for user B, domain adaptation is needed, because user B receives different kinds of emails than user A.
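One simple way to act on this, sketched below under the assumption that scikit-learn is available (the email snippets are invented), is to keep training the source model on a small amount of target-domain data:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled emails: 1 = spam, 0 = not spam.
emails_a = ["win a free prize now", "meeting at noon", "cheap pills online"]
labels_a = [1, 0, 1]
emails_b = ["lottery winner claim here", "project status update"]
labels_b = [1, 0]

# A hashing vectorizer keeps the feature space fixed across users.
vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)

# Train on the source domain (user A) ...
clf = MultinomialNB()
clf.partial_fit(vectorizer.transform(emails_a), labels_a, classes=[0, 1])

# ... then adapt to the target domain (user B) with a little extra data.
clf.partial_fit(vectorizer.transform(emails_b), labels_b)
```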

Multi-task Learning:

This approach involves solving two or more tasks simultaneously to take advantage of their similarities and differences. It is based on the idea that a model trained on a related task can acquire skills that improve its performance on the new one.

Going back to our spam email filtering model, multi-task learning means the model learns what features to look for when identifying spam mail for user A and user B at the same time. Because the users are so different, the model needs to look for different features to identify each user’s spam mail.
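A minimal sketch of this idea, assuming PyTorch (the layer sizes and the two user heads are hypothetical), is a shared trunk with one output head per task:

```python
import torch.nn as nn

class SharedSpamModel(nn.Module):
    """One shared feature extractor, one classification head per user."""
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        # Shared layers learn features common to both users' email.
        self.trunk = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
        )
        # Task-specific heads learn what counts as spam for each user.
        self.head_user_a = nn.Linear(hidden, 2)
        self.head_user_b = nn.Linear(hidden, 2)

    def forward(self, x, user: str):
        shared = self.trunk(x)
        return self.head_user_a(shared) if user == "a" else self.head_user_b(shared)
```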

Zero-shot learning:

This technique involves a model attempting to solve a task it was never shown during training. For example, say we are training a model to identify animals in pictures. The model is taught to recognize two attributes: the color yellow and spots. It is then trained on many images of chicks, which it learns to identify because they are yellow and have no spots. Armed with those attributes, the model can then recognize a giraffe it has never seen before, because a giraffe is also yellow but does have spots.
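A toy sketch in plain Python makes the mechanism concrete (the attribute table is invented for illustration): classes are described by attributes, so a class with no training images can still be recognized:

```python
# Each class is described by attributes rather than by training images:
# (is_yellow, has_spots). The model only ever predicts attributes.
class_attributes = {
    "chick":   (1, 0),  # seen during training
    "giraffe": (1, 1),  # never seen: recognized from attributes alone
}

def classify(predicted_attributes):
    """Match predicted attributes to the closest known class description."""
    return min(
        class_attributes,
        key=lambda name: sum(
            abs(a - b) for a, b in zip(class_attributes[name], predicted_attributes)
        ),
    )

# Suppose the attribute predictor says: yellow and spotted.
print(classify((1, 1)))  # -> "giraffe", a class with no training images
```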

One-shot learning:

This approach requires a model to learn to classify an object after being exposed to it only once, or at most a few times. To do this, the model takes advantage of information it has already learned about other, related categories.
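One common way to realize this is nearest-neighbor matching in an embedding space. In the sketch below (assuming NumPy), `embed` is a stand-in for a pre-trained feature extractor, and each new class is represented by a single stored example:

```python
import numpy as np

def embed(image):
    # Placeholder: in practice this would be a pre-trained network's features.
    return np.asarray(image, dtype=float)

# One labeled example per new class ("one shot").
support = {"sunglasses": embed([0.9, 0.1]), "backpack": embed([0.1, 0.8])}

def classify(query_image):
    q = embed(query_image)
    # Assign the label of the closest single stored example.
    return min(support, key=lambda label: np.linalg.norm(support[label] - q))

print(classify([0.85, 0.2]))  # -> "sunglasses"
```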

Transfer Learning Models:

There are some pre-trained machine learning models that are quite popular. One of them is the Inception-V3 model, which was trained on ImageNet for the “Large Scale Visual Recognition Challenge.”
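For instance, loading the pre-trained Inception-V3 weights through torchvision takes only a couple of lines (a sketch; it assumes torchvision is installed, and the weights download on first use):

```python
from torchvision import models

# Inception-V3 pre-trained on ImageNet (ILSVRC).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.eval()  # inference mode for feature extraction or prediction
```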

Approaches to Transfer Learning:

Training a Model to Reuse It:

Imagine that you want to solve task A but don’t have enough data to train a deep neural network. One way around this is to find a related task B with an abundance of data. Train a deep neural network on task B, then use that model as a starting point for solving task A. Whether you need to reuse the whole model or just a few layers depends heavily on the problem you are solving. If both tasks have the same kind of input, reusing the full model and making predictions on your new input is an option. Alternatively, you can replace and retrain the task-specific output layers, as in the sketch below.
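A hedged sketch of this workflow in PyTorch (the architectures, file name, and class counts are all hypothetical): train on data-rich task B, save the weights, then copy over every layer whose shape still fits task A:

```python
import torch
import torch.nn as nn

# Suppose model_b was trained on data-rich task B (10 classes).
model_b = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),   # general-purpose layers
    nn.Linear(64, 10),               # task-B output layer
)
# ... train model_b on task B, then save it as a starting point ...
torch.save(model_b.state_dict(), "task_b.pt")

# For task A (3 classes), reuse everything but the output layer.
model_a = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Linear(64, 3),                # new task-A output layer
)
state = torch.load("task_b.pt")
# Copy only the weights whose shapes match (the shared layers).
model_a.load_state_dict(
    {k: v for k, v in state.items() if model_a.state_dict()[k].shape == v.shape},
    strict=False,
)
```

Passing `strict=False` lets the mismatched output layer keep its fresh random initialization while the shared layers inherit task B’s weights.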

Using a Pre-Trained Model:

The second approach is to use a pre-trained model. There are a lot of these models out there, so be sure to do some research. How many layers to reuse and how many to retrain depends on the problem.
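For example, with a torchvision ResNet you might reuse everything but unfreeze only the deepest block and the classifier; the choice of `layer4` below is purely an illustration of “how many layers to retrain”:

```python
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything, then selectively unfreeze the deepest layers.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))
```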

Feature Extraction:

Another approach is to use deep learning to find the best representation of your problem, which means finding the most important features. This approach is also known as representation learning, and it can result in far better performance than that achieved with hand-designed representations.
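A sketch of feature extraction, assuming PyTorch/torchvision and scikit-learn (the batch of random images stands in for real data): run inputs through a frozen pre-trained network and train a lightweight classifier on the resulting features:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Pre-trained network with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()  # outputs 512-dim feature vectors
backbone.eval()

# Hypothetical batch of 8 images (3x224x224) with binary labels.
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 0, 1, 1, 0, 1, 0]

with torch.no_grad():
    features = backbone(images).numpy()  # learned representation

# Train a lightweight classifier on top of the extracted features.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```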

Transfer Learning for Future Innovation

As machine learning and deep learning accelerate, transfer learning will enable tasks that were previously unimaginable, and it will help businesses put deep neural networks to work more efficiently.

