I need help making a census app and pet tracker

Hi!
I really need help making an app. The app lets you take a picture of a street dog or cat and enter the location where you found it, and the report then gets shared with the right authorities at a proper pet care center so they can pick the animal up. I also need to prevent duplicate reporting (two or more people reporting the same animal), so I was thinking of adding face recognition or something similar to prevent that. I don’t know how, though, so if anybody could help me on that front, it would be great. I do know that Thunkable has an image recognizer, but I need it to do more than just recognize the object as “dog” or “cat”: I need it to identify the individual animal, the way Google Photos sorts your pictures into different people.
It would be great to have some help about this.
Thanks very much in advance!

I haven’t used the image recognition in Thunkable yet, but I’m pretty sure it’s not going to do what you want. You might be able to get it to recognize “dog,” or even “brown dog,” or if you’re lucky “small brown dog,” but I don’t think it’s going to recognize “Fifi” and be able to differentiate between “Fifi” and “Bruno.”

How are you planning to alert the authorities? I can imagine generating a text message but what number would you text? Is there even such a service set up to receive something like that? How about providing a phone number for the nearest humane society, etc.? Pretty sure you could do that with the Google Places API.
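For illustration, a Places API Nearby Search request is just a URL with a few parameters, which you could call from Thunkable’s Web API component. Here is a rough Python sketch of building that URL; the endpoint is real, but the API key, coordinates, and keyword are placeholders, and this is only one way to approach it:

```python
from urllib.parse import urlencode

def nearby_search_url(lat, lng, api_key, radius_m=5000, keyword="animal shelter"):
    """Build a Google Places Nearby Search request URL.

    The app could call this with the reporter's GPS fix to find the
    closest shelter, then use that shelter's phone number for the text.
    """
    base = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
    params = {
        "location": f"{lat},{lng}",  # "latitude,longitude"
        "radius": radius_m,          # search radius in meters
        "keyword": keyword,          # free-text filter
        "key": api_key,
    }
    return base + "?" + urlencode(params)

# Example (no request is actually made here; the key is a placeholder):
url = nearby_search_url(19.076, 72.8777, "YOUR_API_KEY")
```

The response is JSON containing nearby places; getting a place’s phone number takes a follow-up Place Details request, so treat this as a starting point rather than the whole flow.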

1 Like

There is an animal care centre nearby, and I have its number from the all-knowing Google. It is a text message, actually, to that number, but that means all reported animals get sent to the same place. I was thinking about routing each report to the nearest care center instead, but even if there is an API for that, I have absolutely no clue how to integrate it into my app.

About the technique for differentiating between “Fifi” and “Bruno”: you will know from your phone gallery or Google Photos that it separates all the pictures into different albums. If you have a picture of yourself, it goes into a folder under your name; if you have a couple of pictures of your cousin, though, they go into a folder under the cousin’s name. Every time somebody reports the same dog, the app should increase the number of times that dog has been reported and update its current location. The Google Places API is a really interesting choice, and if you can help me use it, that would be great.
Thanks,
@codeswept

What a great idea for an app!

People are a pretty good image recognition system. You might give the user taking the photo the opportunity to check their animal against others already reported. Here is the strategy I would start with:

  1. User adds an animal at a particular GPS coordinate
  2. User enters very basic information like species and colors
  3. Device checks a cloud database (Airtable, Firebase, etc.) for reports with the same species and at least one color match within X miles of the current location
  4. Device displays photos and locations of the other reports
  5. User indicates whether this is a new report or another sighting of the same animal.
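For anyone curious what step 3 looks like outside of blocks, here is a rough Python sketch of the matching logic. The field names and the 2-mile default radius are assumptions, not a fixed schema:

```python
import math

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes (haversine formula)."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def possible_matches(new_report, existing_reports, max_miles=2.0):
    """Steps 3-4: same species, at least one shared color, within X miles."""
    hits = []
    for rep in existing_reports:
        if rep["species"] != new_report["species"]:
            continue  # different species can't be the same animal
        if not set(rep["colors"]) & set(new_report["colors"]):
            continue  # require at least one color in common
        d = distance_miles(new_report["lat"], new_report["lng"],
                           rep["lat"], rep["lng"])
        if d <= max_miles:
            hits.append(rep)
    return hits
```

The candidates this returns are what you would show the user in step 4, leaving the final same-animal decision to a human in step 5.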

One advantage of this is that if/when you find an image recognition system that will work for your purpose, you can simply swap out the human system for the AI system.

Happy Thunking!

3 Likes

Hi! Thanks very much for the great advice, and I’m glad you like the idea.
I do agree with you that people are very good at recognizing images. But considering that these are mostly strays I am working with in this app, I’m not sure they would be easily recognizable or distinguishable from other strays of the same color.
The GPS coordinate is a great idea, but I’m not sure it would be easy for users to enter their coordinates, and I’m not sure how to capture them automatically, either. I was also thinking of the new-report-versus-repeat-sighting option (your point No. 5), so that the app could update the new location a dog has wandered off to. I have already created inputs for species and color. How would I create a proper Airtable for this sort of data? I’d need some help with that part. While I can’t make it as personal as the way Google Photos differentiates between people, I also can’t have it as basic as the image recognizer registering just ‘a dog’; then the user would be shown images of every dog that has ever been reported. I’ll try my best, and I’m thinking of ways to swap out the human system for AI.
Happy Thunking to you too!
Thanks,
@codeswept

Thunkable can provide the phone’s current GPS coordinates using the Location Sensor.

1 Like

For the GPS coordinates, you can use the invisible Location Sensor component to get the device’s coordinates.

I’m a big fan of the Firebase Realtime Database vs. Airtable. There are fewer issues with basic functions like adding/modifying/deleting rows. There are a variety of sorting strategies in the discussion group; search for “sort” and you will find them.

Setting up the database will be a critical step, and there is no one solution. I prefer JSON objects saved to the Realtime Database. Firebase takes care of all of the concurrency issues and is quite a bit faster than other Thunkable options. Firebase data can also be accessed via CLOUD variables, which make reading and writing cloud data as easy as referencing an APP or STORED variable.

It also handles hierarchical/relational data better than Google Sheets or Airtable. Here is an example of a JSON hierarchy.
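As a sketch, a reports tree for this app might look something like the following. Every key and field name here is illustrative, not a prescribed schema; in the Realtime Database each report would typically sit under a push-style ID:

```python
import json

# Hypothetical shape for the reports tree. Nesting species -> report ID
# keeps related records grouped, which flat tables handle less naturally.
reports = {
    "reports": {
        "dog": {
            "-report001": {
                "colors": ["brown", "white"],
                "lat": 19.0712,
                "lng": 72.8711,
                "photoUrl": "https://example.com/photos/report001.jpg",
                "timesReported": 3,
            }
        }
    }
}

print(json.dumps(reports, indent=2))
```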


If you do find a REST API for animal recognition that can work with Thunkable, please post it to the group!

Happy Thunking!

2 Likes

Hi @drted and @tatiang!
Firstly, thanks for the amazing suggestions. The Thunkable list of components is so long that I keep forgetting a couple of components, such as the Location Sensor, exist :grinning:
I can use this component to keep updating the authorities on the animal’s location. I’ve not worked with Airtable enough to be familiar with it, but I have used the Firebase Realtime Database quite a lot.
But I’m unsure whether it is possible to store the images themselves in Firebase, or just links to the images. An animal recognizer API would be the only answer, because there is almost zero chance that the same dog would be photographed from the same angle and position twice. Without such an API, there would be no way to sort the pictures into groups. I will surely post the REST API here if I manage to find one.
Happy Thunking to you too!
Thanks,
@codeswept

Has anyone checked this out?

https://www.google.com/amp/s/www.theverge.com/platform/amp/2016/11/30/13799582/amazon-rekognition-machine-learning-image-processing

2 Likes

Oh wow! This would be great!! Now I can let users search by tag, such as “Golden Retriever” or “Labrador”. Thanks so much for sharing it with the group, @jared.
Hopefully it is compatible with Thunkable :no_mouth:

1 Like

Why not try using Google AI services for image recognition?

1 Like

Hi! Which API are you suggesting?

I’ve used the Location Sensor and a map to show the user’s position, but the location isn’t showing on the map; it’s just blank. Any idea what might have happened?

Post a screenshot of the blocks you are using to retrieve the location and to display the location on the map.

Are you able to see the latitude and longitude values by assigning them to labels?

Yes, I’m able to see the latitude and longitude, but strangely, the location is showing as [object Object].
Here’s a screenshot:

That’s correct. The location block returns a JSON object containing the accuracy and altitude. You can see the blocks needed to view those values in the link I posted above to the location sensor documentation.

But that isn’t related to the map block so it doesn’t explain why your map is blank.
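To illustrate why [object Object] appears: the sensor’s result is structured data, and converting the whole object to text gives that placeholder string, so you have to pull out the individual fields before displaying them. A Python sketch of the same idea (the field names here are assumptions; check the Location Sensor documentation for the exact keys):

```python
import json

# A stand-in for what the sensor might return: a JSON object,
# not a plain string. Field names and values are illustrative.
raw = '{"latitude": 19.076, "longitude": 72.8777, "altitude": 14.0, "accuracy": 5.0}'

loc = json.loads(raw)  # parse the object so fields become accessible

# Build the text you'd actually show in a label or pass to a map:
label_text = f"{loc['latitude']}, {loc['longitude']}"
```

In Thunkable the equivalent is using the blocks from the documentation to get each property, rather than dropping the whole location object into a text block.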

I’m confused too. Do I have to create a separate JSON object?

Not too used to JSON :sweat_smile:

No, you can ignore the location block. If you want to use the data from it, just replicate what’s shown in the documentation.

Sure, thanks very much! I will do that.