Using a Google Teachable Machine model inside a native app built with AppGyver

Dear AppGyver Community,

I am looking for a way to use a Google Teachable Machine model inside the AppGyver platform to build my native app. I have been trying to import such a model into the AppGyver platform for a long time, but in vain. Does anyone know how this works?

Hi, I'm not sure exactly what you aim to do with the Teachable Machine, but we don't allow importing anything into Composer like that. You will be able to import React Native plugins into your app once we have third-party plugin support, but I'm not sure if that's what you're looking for?

Hello Mevi, I’m very glad to read from you.
AppGyver was very good news for me, since I had an app idea but, not being an IT professional myself, couldn't find IT professionals to help me.
So I decided to study the AppGyver platform to build my idea.
The idea was to build an app that checks poses by comparing them to an optimal predefined one.
I discovered that Google Teachable Machine allows such a task, and even lets the created models be exported in several formats (for example TensorFlow.js, which can run anywhere JavaScript runs, among others) for use in building sites and apps.
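For context, a pose or image model exported from Teachable Machine as TensorFlow.js returns an array of per-class probabilities on each prediction. Here is a minimal sketch of consuming that output; the `topPrediction` helper and the class names are illustrative assumptions, not part of the export itself:

```javascript
// A Teachable Machine model exported as TensorFlow.js yields, on each
// predict() call, an array of {className, probability} objects.
// This hypothetical helper picks the most likely class.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best);
}

// Example: output of a two-class pose model (made-up class names)
const result = topPrediction([
  { className: "good posture", probability: 0.91 },
  { className: "slouching", probability: 0.09 },
]);
// result.className === "good posture"
```

The actual model loading happens through the TensorFlow.js / Teachable Machine libraries; only the shape of the prediction output is assumed here.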
My question is, since I'm not an IT professional (that's why I'm trying to use AppGyver to create my app):
1- what format should I use to export my Google Teachable Machine model, and
2- how do I use the exported model inside the AppGyver platform?
Thank you for your time


Unfortunately, I think it might be difficult to do this :thinking: But not impossible.

We don’t have the support needed for integrating Google Teachable Machine as of yet, but will hopefully get it during Q1/21. When we have the support, what you would need is a React Native plugin that would provide the features you need. React Native is JavaScript based, so it seems possible that there either exists such a plugin already, or it could be made.

Another option would be to request this feature on our tracker, and it might be possible that we integrate it ourselves at some point, but that is likely to take longer.

Hi Mevi,
How can I request this feature on your tracker? Even if it takes longer, I'd like to test its feasibility.
I might learn something along the way, too.

Then, when it becomes available during Q1/21, I will be able to use it too.

Hi! Go here and create a post with all the information of what you would need from this :slight_smile:

Hi Mevi,
I've been trying to create my post, but in vain.
How can I solve this?
Please see the attached screenshot.


That’s odd, can you try again? I tried just now and was able to submit a feature request – if it doesn’t work for you, just paste the text here and I’ll submit it for you.

Hi Mevi,
It still doesn’t work.
Please submit it for me.
Title: Allow direct REST API integration of Google Teachable Machine models
Details: Google Teachable Machine is a tool that recognizes images, sounds, and poses, making it fast and easy to create machine learning models for sites, apps, and more. Integrating such a tool with the AppGyver platform would enable powerful and amazing native apps.


Hi Mevi,
I just successfully submitted my request through another browser.
Now what is the next step?


Hi Elie,

After you have created (trained) a model, you can request a prediction for an image using the predict method. The predict method applies labels to your image based on the primary object of the image that your model predicts.

You can do this without needing the SDK for edge detection, though it is not useful if, for example, you want to work with video in real time. Sending a photo to the REST API for immediate analysis, however, should be no problem at all.

REST & command-line online (individual) prediction example:
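As a rough sketch of what such a REST prediction call could look like from JavaScript: the request-body shape below is modeled on Google's AutoML-style predict APIs, and the endpoint URL and token handling are placeholders to adapt to whichever service the model is deployed behind:

```javascript
// Build the JSON body for an image prediction request from a
// base64-encoded image string (AutoML-style field names assumed).
function buildPredictRequest(imageBase64) {
  return { payload: { image: { imageBytes: imageBase64 } } };
}

// In the app, the call itself would look roughly like this
// (PREDICT_URL and accessToken are placeholders):
//
//   const res = await fetch(PREDICT_URL, {
//     method: "POST",
//     headers: {
//       "Content-Type": "application/json",
//       Authorization: `Bearer ${accessToken}`,
//     },
//     body: JSON.stringify(buildPredictRequest(photoBase64)),
//   });
//   const prediction = await res.json();
```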

More resources:

Hi Tim,
Thank you for your comment, this will be helpful.
However, the main interest of Google Teachable Machine is checking a pose, image, or sound (for which a model has been previously created) in real time.
This enables applications such as real-time mask detection in front of shopping centers, for example.
I think this is its main advantage over other Google AI or ML services.
Besides that, the application I'm trying to design needs real-time decisions.


By real time, I just mean that frame-by-frame live video analysis wouldn't be feasible without running the model on the mobile device (which won't be possible until AppGyver supports arbitrary React Native components), for example:

However, for now you can send an image via REST and get a response (superior in accuracy to an on-device model) in a few hundred milliseconds or less.

I presume that would be satisfactory for your use case (which seems to be image classification), but you wouldn't want to use it to try to create something like a self-driving vehicle, for example.

Even allowing the user to record a short audio clip and send it to the server shouldn't be a problem; the delay would mainly be in how long it takes to send the sound clip to the server.
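To put a rough number on that upload delay, here is a back-of-the-envelope sketch; the clip size and uplink speed are illustrative assumptions, not measurements:

```javascript
// Estimate upload time (seconds) for a clip of a given size over a
// given uplink. All numbers below are illustrative assumptions.
function uploadSeconds(clipBytes, uplinkBitsPerSecond) {
  return (clipBytes * 8) / uplinkBitsPerSecond;
}

// e.g. a 3-second clip at ~32 kB/s of compressed audio is ~96 kB;
// on a 5 Mbit/s uplink that is well under a second:
const t = uploadSeconds(96 * 1024, 5000000);
// t ≈ 0.157 s
```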

Good luck!

It might take some time for us to process the feature request, as we are in the midst of fixing our 2.X runtime at the moment. And even after that, we will first be integrating general third-party plugin support for Composer before adding any additional plugins, to allow developers to bring their own plugins into the mix.

As such, it will take a while for anything to happen in regards to this, I’m afraid :confused: