As described in my previous post, the brand names are apparently not saved in the model's H5/JSON files. To address that (for now), I added a method to update the VALUE field while populating the neuron-to-identity dictionary in the Models_toolStripComboBox_SelectedIndexChanged(object sender, EventArgs e) handler; a sketch of the idea is shown below.
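The dictionary contents and member names below are illustrative only and may not match the exact fields in the project; the essential point is the hard-coded neuron-index-to-brand-name mapping.

```csharp
using System.Collections.Generic;

public partial class LogoDetector
{
    // Illustrative only: hard-coded mapping from output-neuron index to brand name,
    // needed because the brand names are not stored in the model's H5/JSON files.
    private static readonly Dictionary<int, string> NeuronToBrand = new Dictionary<int, string>
    {
        { 0, "Mercedes" },
        { 1, "Android" },
        { 2, "Audi" }
        // ... one entry per output neuron / trained logo
    };

    // Called while populating the neuron-to-identity dictionary to replace a
    // generic VALUE such as "image8" with the real brand name.
    private string ResolveBrandName(int neuronIndex, string defaultValue)
    {
        return NeuronToBrand.TryGetValue(neuronIndex, out var name) ? name : defaultValue;
    }
}
```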
NOTE: Of course, this makes this tool useless for generalised use!
Now that I could "understand" what the tool/CNN was predicting, I found that on average:
* It correctly predicted the brand name when presented with a full logo, but was almost always wrong when presented with a partial image unique to that logo. For example, when shown just the four rings of the Audi logo, it always identified them as Android!
* Because about 33% of the images I presented were partial images, only 67% of the predictions were correct. (Of course, one could argue that the model is almost 100% accurate if it is only presented with full images of logos.)
BTW: I still have no idea why this is happening or how to improve the model. If and when I do, after more experimentation, I will share what I find.
If anybody has any suggestions, please share them with us.
Hello,
I agree with what you said about the prediction rate. To get 90% and above correct predictions, you also have to use the automatic object selection during the training process in the Snip_Image form, via the Extract Blobs button, and then train your model on the extracted objects one by one, as I did at minute 1:28 of my explanation video.
By that I mean the selective-search step that selects the objects (blobs) present in the image. You can use this option by clicking the Extract Blobs button in the Snip_Image form, just as I did at minute 1:28 in the explanation video.
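For illustration, here is a minimal sketch of the blob-extraction idea. It uses AForge.NET's connected-component BlobCounter as a stand-in, which is an assumption for the example and may differ from the selective-search routine actually used in the program; each returned rectangle would then be cropped out and labeled/trained on individually.

```csharp
using System.Collections.Generic;
using System.Drawing;
using AForge.Imaging;

public static class BlobExtractionSketch
{
    // Returns the bounding rectangles of candidate objects found in the image.
    public static List<Rectangle> ExtractBlobs(Bitmap image)
    {
        var counter = new BlobCounter
        {
            FilterBlobs = true,   // drop tiny noise regions
            MinWidth = 16,
            MinHeight = 16
        };

        counter.ProcessImage(image);

        var rectangles = new List<Rectangle>();
        foreach (Blob blob in counter.GetObjectsInformation())
            rectangles.Add(blob.Rectangle);

        return rectangles;
    }
}
```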
As described in the video/documentation, after you train the CNN, you must save/export the information in a model file (e.g., logo_model_3[13]). Later, when you restart the application, the various models are loaded into the LogoDetector form. If you then use a model to predict the brand names, the names (e.g., Mercedes, Android, etc.) are either lost or not properly loaded, and the "predicted" brand names are reduced to image8, image1, etc. So it is impossible to determine the actual accuracy of the model!
BTW: I am still trying to figure out how to deal with this.
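One direction that might work (a sketch only, not part of the downloaded code; it assumes Newtonsoft.Json, and the file naming is illustrative) is to save the neuron-index-to-brand-name mapping next to the exported model and reload it when the model is selected.

```csharp
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public static class BrandNameStore
{
    // Save the neuron-index -> brand-name mapping next to the exported model,
    // e.g. "logo_model_3.labels.json" alongside the model's H5/JSON files.
    public static void Save(string modelPath, Dictionary<int, string> brandNames)
    {
        File.WriteAllText(modelPath + ".labels.json",
                          JsonConvert.SerializeObject(brandNames, Formatting.Indented));
    }

    // Reload the mapping when the model is loaded; fall back to an empty map
    // (and hence generic names such as "image8") if the labels file is missing.
    public static Dictionary<int, string> Load(string modelPath)
    {
        string path = modelPath + ".labels.json";
        if (!File.Exists(path))
            return new Dictionary<int, string>();

        return JsonConvert.DeserializeObject<Dictionary<int, string>>(File.ReadAllText(path))
               ?? new Dictionary<int, string>();
    }
}
```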
Hello, and thanks for your comment.
During the training process, specifically in the Snip_Image tool or the Train_Model form, the image name doesn't play any role; what really matters is the ideal set value!
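To illustrate what the ideal set is: it is the target output vector the network is trained toward for each snipped image, typically one-hot. The sketch below is illustrative and the names may differ from the ones used in the program.

```csharp
public static class IdealSetSketch
{
    // Build a one-hot "ideal set" for a training image:
    // classIndex identifies which of the numClasses logos the image shows.
    public static double[] BuildIdealSet(int classIndex, int numClasses)
    {
        var ideal = new double[numClasses];   // all zeros by default
        ideal[classIndex] = 1.0;              // 1.0 only at the target class
        return ideal;
    }
}

// Example: the third of 12 logos -> [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
// double[] ideal = IdealSetSketch.BuildIdealSet(2, 12);
```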
Anyway, I'm currently working on a better-documented version of the article and will publish it in the next few days.
best regards
Ammar Albush
The code is poorly documented, does not follow any coding standards, etc., which I could have lived with if it generally worked. But it does NOT, and it clearly has not been tested by the author. Normally I do not like to say things like this, because I'd rather credit someone for the effort; this is perhaps the only time I have been so negative. REASON: I just want to warn others to be careful, as they could be wasting time the way I did. Alternatively, I may be too dumb to "get it"!
Hello, it's a pity that you got such a bad impression of the program. With it, I actually wanted to present the training model I developed specifically for logo recognition; the program itself was just a tool to test whether the model works well or not.
To help you better understand the code, I will document the program better in the near future.
I am open to further questions regarding the program
thank you for your understanding
best regards
Ammar Albush
I sort of understand what you are saying, but three things you state are at odds with what I see in your article/reply.
(1) "the program was just a tool to test whether the model is working well or not"; there is no explanation how you conclude it works well or not!
(2) when a model is selected and loaded, the original names such as "Mercedes" are either lost or not used; instead, they come up as "image8", "image10", etc.
(3) when the PREDICT function sees the Facebook logo, it thinks it is Mercedes, and with 0.9962 "accuracy" at that; the same happens in most other cases, too!
So, you can perhaps understand why I am more than a little confused.
Of course, one of the "joys" of the CNN recognizer is that one has no idea how it came to a particular "conclusion". Until you publish a well-documented version so that I can resolve the three points noted above, I will periodically dive into your code over the next several months to try to figure out how it works and, who knows, I may actually begin to understand how this C# implementation of a CNN works!
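As an aside, the 0.9962 shown by Predict is presumably just the largest value in the network's output vector, i.e. a confidence score for the winning class rather than a measured accuracy; it can be close to 1.0 even when the winning class is wrong. A minimal sketch of reading such a value off an output vector:

```csharp
public static class PredictionSketch
{
    // Returns the index of the strongest output neuron and reports its value,
    // which is the "confidence" figure typically displayed next to a prediction.
    public static int ArgMax(double[] output, out double confidence)
    {
        int best = 0;
        for (int i = 1; i < output.Length; i++)
            if (output[i] > output[best]) best = i;
        confidence = output[best];
        return best;
    }
}
```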
1) Using the same 12 logos, after tapping Start Learning, the accuracy went down:
...
Epoch=98,accuracy=1,loss=0.015946...
Epoch=99,accuracy=0.96,loss=0.16199...
Epoch=100,accuracy=0.92,loss=0.256728...
In your video, the accuracy kept increasing through to Epoch 100.
2) Using Predict, I got unexpected results for several logos:
Samsung logo: Network Say hpValue 1
Adidas logo: Network Say hpValue 0.94299...
I didn't see such odd predictions in your video.
...
Any suggestions as to the causes of this behavior?
In the Snip_Images form, your [Extract Blobs] click produced 108 'blobs'. Did you specify the Image Name for each one and specify the Ideal Set for each one?
FYI: SETTINGS.JSON is not included in the download package. As a result, the code throws a FileNotFoundException. To resolve this, click the Settings button (at the top right corner of the LogoDetector form) and, when the Settings form is displayed, tap the Save button at the bottom right corner.
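A more defensive alternative would be to create a default SETTINGS.JSON when the file is missing instead of throwing. The sketch below assumes Newtonsoft.Json and a hypothetical Settings class; the real class, fields, and path in the project will differ.

```csharp
using System.IO;
using Newtonsoft.Json;

public class Settings
{
    // Hypothetical setting; the real Settings class has its own members.
    public string ModelsFolder { get; set; } = "Models";
}

public static class SettingsLoader
{
    // Load SETTINGS.JSON if it exists; otherwise write it with default values
    // instead of throwing FileNotFoundException.
    public static Settings LoadOrCreate(string path = "SETTINGS.JSON")
    {
        if (!File.Exists(path))
        {
            var defaults = new Settings();
            File.WriteAllText(path, JsonConvert.SerializeObject(defaults, Formatting.Indented));
            return defaults;
        }

        return JsonConvert.DeserializeObject<Settings>(File.ReadAllText(path)) ?? new Settings();
    }
}
```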