Results & testing

How to use the SentiSight.ai platform

In the menu, go to Explore models / predict and choose:

1. Predict → Select a model on the left and upload images (you can upload more than one file at a time) to use the model for making predictions on new images. You can check the remaining free predictions in your User profile.

2. View training statistics to explore model performance. You will find statistical indicators calculated separately for the training and validation sets. Since evaluating a model is a complex problem, the platform shows the model's accuracy along with several other metrics, and a definition is available for each of them. To see a definition, click the concept highlighted in blue or the question mark beside it. Training statistics measure how well the model has learned the features of your training data. For single-label classification, training accuracy is usually higher than 80%. If it is not, your case may be more complicated and you might need a custom project to solve it. For multi-label classification, the numbers may vary.
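To make the accuracy figure above concrete, here is a minimal sketch of how single-label classification accuracy is computed. This is plain Python for illustration only, not SentiSight.ai code, and the labels are invented:

```python
# Minimal sketch of single-label classification accuracy.
# Ground-truth and predicted labels below are made up for illustration.

def accuracy(true_labels, predicted_labels):
    """Fraction of predictions that match the ground-truth label."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

ground_truth = ["cat", "dog", "dog", "cat", "bird"]
predictions  = ["cat", "dog", "cat", "cat", "bird"]

print(accuracy(ground_truth, predictions))  # 4 of 5 correct -> 0.8
```

A model reaching the "usually higher than 80%" mark would classify at least four out of every five training images correctly, as in this toy example.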

Validation results show the performance of the model on unseen images. These results are much more important when judging the model's quality, because they show how well the model generalizes. The model might overfit, or, in other words, learn the training data "by heart" and make good predictions on it alone. In that case, only the validation data will show the realistic performance.
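The "learning by heart" effect can be demonstrated with a tiny sketch. The model below is a deliberately naive 1-nearest-neighbour classifier that memorizes its training points; the data and the noisy label are invented for illustration:

```python
# Sketch of overfitting: a model that memorizes training points exactly.

def memorizing_predict(train_x, train_y, x):
    """Predict by copying the label of the single nearest training point
    (1-nearest-neighbour) -- the textbook 'learning by heart' model."""
    nearest = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[nearest]

# Toy rule: the true class is 1 when x >= 5, else 0, but one training
# label is noisy, so memorization faithfully copies the noise.
train_x = [1, 2, 3, 6, 7, 4]
train_y = [0, 0, 0, 1, 1, 1]   # last label is noise (x=4 should be 0)
val_x   = [0, 4, 5, 8]
val_y   = [0, 0, 1, 1]

train_acc = sum(memorizing_predict(train_x, train_y, x) == y
                for x, y in zip(train_x, train_y)) / len(train_x)
val_acc = sum(memorizing_predict(train_x, train_y, x) == y
              for x, y in zip(val_x, val_y)) / len(val_x)

print(train_acc, val_acc)  # 1.0 on training data, only 0.75 on validation
```

The memorizing model scores a perfect 100% on the data it has seen, yet the validation set exposes its real, lower performance — exactly the gap the validation statistics are there to reveal.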

If the validation results are satisfactory, you can already try using the model on completely new images. Otherwise, you can try to improve the model, for example by adding more images to the training set or increasing the training time.

Learn more about the process

How to improve my model’s performance?

If you are not satisfied with the accuracy of your model, you can experiment with adding more training images and increasing the training time. If this doesn't help, you might also want to try modifying the other training parameters in the Advanced view of the training window (Section 3: Training the model). We recommend exploring the statistics to form better hypotheses about how to improve the model. To see more performance metrics, choose Advanced view in the statistics window: there you will find learning curves, confusion matrices and many more statistical indicators that may help you draw conclusions on how to make the model more effective.
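A confusion matrix like the one in the Advanced statistics view simply counts, for every pair of classes, how often images of one class were predicted as another. Here is a minimal Python sketch with invented labels (not SentiSight.ai code):

```python
# Sketch of a confusion matrix: rows = true class, columns = predicted class.
# The labels below are invented for illustration.
from collections import Counter

def confusion_matrix(true_labels, predicted_labels, classes):
    """Count how often each true class was predicted as each class."""
    counts = Counter(zip(true_labels, predicted_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

classes = ["cat", "dog"]
truth = ["cat", "cat", "dog", "dog", "dog"]
preds = ["cat", "dog", "dog", "dog", "cat"]

for cls, row in zip(classes, confusion_matrix(truth, preds, classes)):
    print(cls, row)
# cat [1, 1]  -> one cat correct, one cat mistaken for a dog
# dog [1, 2]  -> one dog mistaken for a cat, two dogs correct
```

Reading the off-diagonal cells tells you which classes the model confuses with each other, which is often a more actionable insight than the overall accuracy alone.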

You can also request a Custom project and our experts will step in to help. They will manage the process to meet your requirements, whether another algorithm or specific additional data is needed.

Training accuracy, validation accuracy – what’s this?!

Machine learning is a complicated subject, and there is much to learn. If you have trouble understanding the "Training" and "Validation" concepts, have a look at this analogy:

Suppose your teacher gave you a set of paintings to analyze, asking you to learn to recognize their style (Gothic, Baroque, Rococo, Classical, etc.). Your task is to find the differences and discover patterns in how these styles look. It will not be an easy task at first, but after some time you will start to generalize which features are common to a particular class. In most cases, you will learn to classify all of the pictures you've seen correctly. This is analogous to 100% training accuracy.

Now that you have advanced, you are given another set of paintings which you haven't seen before. This will validate whether you have learned something meaningful, or missed the point and simply learned everything by heart.

Finally, testing the model is analogous to a person who has learned everything well enough and uses this skill in life. Note that a person can continue learning if they see a wider variety of pictures later. The same can be done with the algorithm.

Sometimes an algorithm finds it easier to learn things by heart than to develop general patterns.

Bad validation results may indicate that you have trained your model on an insufficient variety of data. In that case, you should try to collect more data.

A high training accuracy is a good sign, but not a final indicator of the model's general performance. On the other hand, if your training accuracy is low, then you certainly don't have a good model. Sometimes this happens because the data is too specific or ambiguous, or because the model's structure is unsuitable.

Important: to perform quality testing, make sure to collect a sufficient number of testing images. A small testing data set might not be representative. For example, if your small testing set happens to contain some unusual images (such as a very uncommon side of the object, heavy occlusion, etc.), the testing accuracy will be much lower than expected.
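A rough way to see why a small testing set is unreliable is the binomial approximation of the uncertainty of a measured accuracy: the standard error shrinks only with the square root of the number of test images. A short sketch (plain Python, illustrative numbers only):

```python
# Sketch: the standard error of an accuracy estimate on n test images,
# using the binomial approximation sqrt(p * (1 - p) / n).
import math

def accuracy_std_error(accuracy, n_images):
    """Approximate standard error of an accuracy measured on n images."""
    return math.sqrt(accuracy * (1 - accuracy) / n_images)

for n in (10, 100, 1000):
    se = accuracy_std_error(0.9, n)
    print(f"{n} test images: 90% accuracy +/- {se:.1%}")
# 10 test images:   90% accuracy +/- 9.5%
# 100 test images:  90% accuracy +/- 3.0%
# 1000 test images: 90% accuracy +/- 0.9%
```

With only ten test images, a "90% accurate" model could plausibly be anywhere from about 80% to nearly 100%, so a few unusual images can easily swing the measured number.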