When you want to test the capabilities of the Vision and CoreML frameworks, what better way to do it than building your personal Pokédex? 😎

We still remember the scenes where Ash would stumble upon some new creature and pull out his Pokédex. He would point it at the subject and have all the info about the Pokémon read out loud. It was definitely one of the coolest things on the show.

But back then smartphones didn't even exist, and machine learning paired with computer vision was mostly theory. It was hard to imagine that someday we'd have those things in our pockets, but 20 years later, it's reality.



The first step in this project was training and testing the ML model itself. I used CreateML, a GUI tool from Apple for easy model training. There are several templates to choose from, and this project relies on the image classifier.

The most time-consuming part of this task is preparing the training and testing images, as CreateML streamlines the rest of the process, making it really straightforward.
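For reference, here's a minimal sketch of what the training step can look like when scripted in a macOS playground instead of the GUI (the dataset paths and output file name are hypothetical; CreateML expects one folder per label):

```swift
import CreateML
import Foundation

// Training data is laid out as one folder per label,
// e.g. Training/Pikachu/*.jpg, Training/Bulbasaur/*.jpg, ...
// (paths below are placeholders)
let trainingDir = URL(fileURLWithPath: "/Users/me/PokemonDataset/Training")
let testingDir  = URL(fileURLWithPath: "/Users/me/PokemonDataset/Testing")

// Train the image classifier from the labeled directories
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on the held-out testing set
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Test error: \(evaluation.classificationError)")

// Export the trained model as an .mlmodel file for the app
try classifier.write(to: URL(fileURLWithPath: "/Users/me/PokedexClassifier.mlmodel"))
```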

Once the model was ready and validated, it was just a matter of plugging the camera output into the recognition model, and the Vision framework provides a convenient way to do this.
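Assuming a standard AVCaptureSession feeding frames through an AVCaptureVideoDataOutput, the wiring looks roughly like this (PokedexClassifier is a hypothetical name for the Xcode-generated model class, and the session setup is omitted for brevity):

```swift
import AVFoundation
import CoreML
import Vision

final class PokemonRecognizer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Wrap the CoreML model so Vision can drive it;
    // "PokedexClassifier" is a placeholder for the generated model class
    private lazy var visionModel: VNCoreMLModel = {
        let coreMLModel = try! PokedexClassifier(configuration: MLModelConfiguration()).model
        return try! VNCoreMLModel(for: coreMLModel)
    }()

    // Called for every camera frame; runs the classification request on it
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            // Only act on a confident top match to avoid misfires
            guard let top = (request.results as? [VNClassificationObservation])?.first,
                  top.confidence > 0.9 else { return }
            print("Recognized: \(top.identifier)")
        }

        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
            .perform([request])
    }
}
```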

When the subject is recognized, the app reads the basic info about the Pokémon out loud, much like it would appear in the anime. This is achieved using the native AVSpeechSynthesizer.
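The speech part is the simplest of the three. A minimal sketch, with a hypothetical speakEntry helper and made-up entry text:

```swift
import AVFoundation

// Keep the synthesizer alive; a deallocated one stops speaking mid-sentence
let synthesizer = AVSpeechSynthesizer()

// Reads a Pokédex-style entry out loud for the recognized Pokémon
func speakEntry(for name: String, description: String) {
    let utterance = AVSpeechUtterance(string: "\(name). \(description)")
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

speakEntry(for: "Pikachu",
           description: "An Electric type Pokémon. It stores electricity in its cheeks.")
```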

The rest of the app is pretty simple: a one-view UI that looks like a modern version of the real Pokédex, at least in my eyes.



Beyond its purely entertaining purpose, this app will also serve as the foundation for my other CoreML and Vision projects.