I have a Core ML model that I generated using Create ML. If I drag and drop that model into Xcode, it automatically creates a class for me, which I can use to detect/classify an image. The generated class has a prediction function that returns the class label.
My question is: if I can classify the image using the automatically generated model class, why should I use the Vision framework to classify an image, or what benefits does the Vision framework provide over the auto-generated class method?
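For reference, this is roughly how I use the generated class today (the class name `FlowerClassifier` is just a placeholder; Xcode names the class after the .mlmodel file):

```swift
import CoreML
import CoreVideo

// Classify an image using only the Xcode-generated model class.
// The prediction method expects a CVPixelBuffer already sized to the
// model's input dimensions.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> String {
    let model = try FlowerClassifier(configuration: MLModelConfiguration())
    let output = try model.prediction(image: pixelBuffer)
    return output.classLabel
}
```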
Think of Vision as a higher-level abstraction that deals specifically with computer vision tasks, whereas Core ML can also handle non-vision models (text, audio, tabular data, etc.).
Vision makes it a little easier to work with images. For example, Vision can take a CGImage, CIImage, CVPixelBuffer, or even raw image data, while the Core ML interface first requires you to convert the image into a CVPixelBuffer of the model's expected size. Vision also has options for how you want to resize/crop the image before it is handed to Core ML.
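Here is a minimal sketch of running the same model through Vision instead of calling the generated class directly (again, `FlowerClassifier` is a placeholder for your generated class):

```swift
import CoreML
import Vision
import UIKit

// Classify a UIImage through Vision. Vision handles scaling, cropping,
// and pixel-format conversion before the data reaches Core ML.
func classifyWithVision(_ image: UIImage) throws {
    let mlModel = try FlowerClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    // Control how Vision fits the image to the model's input size.
    request.imageCropAndScaleOption = .centerCrop

    guard let cgImage = image.cgImage else { return }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

Note that the completion handler gives you `VNClassificationObservation` objects sorted by confidence, so you get the full ranked list of labels rather than just the top one.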
Using Vision also makes sense if you're performing multiple computer vision tasks on the same image, i.e. not just running a Core ML model but also some of the built-in tasks, such as face detection; see the sketch below.
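A rough sketch of that combined use, assuming `visionModel` is a `VNCoreMLModel` created as in the previous example:

```swift
import Vision
import CoreVideo

// Run a Core ML classification request and a built-in Vision task
// (face detection) on the same image with a single request handler.
func analyze(_ pixelBuffer: CVPixelBuffer, with visionModel: VNCoreMLModel) throws {
    let classify = VNCoreMLRequest(model: visionModel) { request, _ in
        let label = (request.results as? [VNClassificationObservation])?.first?.identifier
        print("Top label: \(label ?? "none")")
    }
    let detectFaces = VNDetectFaceRectanglesRequest { request, _ in
        let count = (request.results as? [VNFaceObservation])?.count ?? 0
        print("Faces found: \(count)")
    }
    // One handler prepares the image once and runs both requests on it.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([classify, detectFaces])
}
```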