Arduino + Thunkable + OpenCV = Android Application ~ Quick Question



Hello, fellow Thunkers!

I am new to OpenCV, but I have some experience with Android app development through Thunkable and some background in C++. I was wondering what platform I should download so I can integrate OpenCV logic with my Thunkable logic. I am looking for guidance on a project I am working on!

Here is my project and my problem/objective:
My app (solely created by Thunkable) currently accesses pictures from the phone’s DCIM folder and uses a Microsoft Image Recognition API to identify these images. At this point, the pictures in the phone’s DCIM folder are obtained from an external mini camera. I have to press a button on the camera for the picture to be transferred to the phone. For example, the app may access a picture of a watch (which was first taken by the camera), and the app will match the image to a tag, “watch” (using the API).

I want to eliminate the user pressing a button on the camera every time; instead, I need some sort of proximity conditional using an ultrasonic sensor and an Arduino microcontroller.

For example, through the Arduino (its code is in C), the ultrasonic sensor, facing an object, will continuously produce distances in centimeters. Simultaneously, the mini camera, also facing the same object, will be recording a video. Somehow, I need to transfer the video from the camera to the OpenCV platform (?), and I also need to transfer the distance from the Arduino ultrasonic sensor to the OpenCV platform (?). I am planning for my conditional to look something like "if the [live] distance (from the Arduino sensor) is greater than X cm, then set a timestamp for the video and push it back to a vector". I want to be able to access all the moments in the video where the distance is greater than X centimeters; that is why I have the timestamp "vector" or list. Now, I understand that OpenCV can produce the individual frames/images from the video. Ultimately, I need these essential frames/images to be transferred wirelessly to a folder on the Android phone. From there, my app will take over.
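To make the conditional concrete, here is a minimal Python sketch of the "timestamp vector" idea described above. It assumes the distance readings arrive as `(elapsed_seconds, distance_cm)` pairs; how they actually get there (serial, Bluetooth, etc.) is a separate problem, and the threshold value is just a placeholder for "X cm":

```python
# Sketch of the proximity conditional: collect video timestamps
# whenever the ultrasonic distance exceeds a threshold.
# The readings format and the threshold are assumptions, not part
# of any real Arduino or OpenCV API.

THRESHOLD_CM = 30.0  # hypothetical value for "X cm"

def collect_timestamps(readings, threshold_cm=THRESHOLD_CM):
    """Given (elapsed_seconds, distance_cm) pairs, return the video
    timestamps where the distance is greater than the threshold."""
    timestamps = []
    for elapsed_s, distance_cm in readings:
        if distance_cm > threshold_cm:
            timestamps.append(elapsed_s)  # the "push back to a vector" step
    return timestamps

# Example: readings sampled once per second
readings = [(0, 12.0), (1, 45.5), (2, 28.0), (3, 60.2)]
print(collect_timestamps(readings))  # -> [1, 3]
```

The resulting list is exactly the "moments in the video" to hand to OpenCV later for frame extraction.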

It would be greatly appreciated if you could help me in my endeavor and provide some guidance and suggestions!

Thank you very much!


Instead of using your phone's DCIM folder or anything related to storage on your PHONE: if you are using a "Raspberry Pi" board, you could upload the real-time images from your external camera to the cloud using OpenCV. Once these images are uploaded to the cloud, store their URLs in Firebase and call them back to your phone. Add a timer so the Firebase DB is polled every few seconds. This eliminates the user clicking a button and also keeps your app clutter-free, as no photos are stored locally.
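For the "store the URL in Firebase" step: Firebase's Realtime Database exposes a REST API where you write JSON to `https://<your-db-host>/<path>.json`. Here is a small sketch that only builds the request URL and body; the host name, path, and image URL are placeholders, and the actual network call is left as a comment:

```python
import json

# Sketch of storing an uploaded image's URL in Firebase via its
# Realtime Database REST API (write JSON to https://<db>/<path>.json).
# "my-project.firebaseio.com" and the "images/latest" path are
# placeholders, not real values from this project.

def firebase_put(db_host, path, payload):
    """Build the REST URL and JSON body for a Firebase PUT request."""
    url = "https://{}/{}.json".format(db_host, path.strip("/"))
    body = json.dumps(payload)
    return url, body

url, body = firebase_put(
    "my-project.firebaseio.com",
    "images/latest",
    {"url": "https://storage.example.com/frame_001.jpg"},
)
print(url)  # -> https://my-project.firebaseio.com/images/latest.json
# To actually send it, you could use urllib.request with method="PUT".
```

The Thunkable app's timer would then poll that same path every few seconds and load whatever image URL it finds.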

The ultrasonic data can be sent via Bluetooth, or similarly through Firebase over Wi-Fi.
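Whichever transport you pick, the receiving side has to parse the raw readings. A minimal sketch, assuming the Arduino prints one line per reading in the form `distance_cm:<value>` over the serial/Bluetooth link (the line format here is just an example, not a standard):

```python
# Sketch of parsing ultrasonic readings on the receiving side.
# Assumes the Arduino sketch prints lines like "distance_cm:41.7";
# the exact format is up to you.

def parse_reading(line):
    """Return the distance in cm from one serial line, or None if
    the line is malformed (partial reads are common over serial)."""
    line = line.strip()
    if not line.startswith("distance_cm:"):
        return None
    try:
        return float(line.split(":", 1)[1])
    except ValueError:
        return None

print(parse_reading("distance_cm:41.7"))  # -> 41.7
print(parse_reading("garbage"))           # -> None
```

Returning `None` on bad lines matters in practice, since Bluetooth serial links frequently deliver truncated lines.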

I believe you would require some knowledge about JSON, APIs, and storing URLs in Firebase. Check out the post by Domnhall to learn how to integrate an API with Thunkable.

Happy Thunking!


Hello Antony,

Thank you for your response!

I am not using a Raspberry Pi; I am using an Arduino Uno microcontroller. Great idea, but I have a few questions. How would I proceed with the Arduino? Where do I program OpenCV to upload the real-time images to the cloud? How do I connect it to the external camera? Are you still saying that I extract the images from the camera's video? Is there a built-in option for OpenCV on Thunkable?

Here is the link to the camera I am using:

I look forward to hearing from you!

Thanks again, Antony!


OpenCV is generally easier to set up on a Raspberry Pi than on an Arduino because of the many Python packages available; OpenCV programming in C on an Arduino is quite cumbersome. However, a quick Google search will turn up results on integrating Arduino and OpenCV (try searching for "IoT with Arduino and OpenCV"). OpenCV has something known as object detection, and I think the idea is: when the object is recognized, the camera takes an image through the OpenCV software, and you could then upload the image to the cloud.

There is NO BUILT-IN OPTION for OpenCV on Thunkable.

Here's a quick link to get you started with OpenCV and IoT using an Arduino.

Work on the uploading part first, and then we can figure out a way to download the data to your app. It would also be better to come up with a flow diagram of your data. This would help you organize things and give an easier overview of your project. Plus, you wouldn't have to explain your project to everyone; you could just post the data flow diagram.




I downloaded Microsoft Visual Studio 2015 (Community) and the latest OpenCV, version 3.4.1. Right now, I am running the OpenCV logic on my laptop, but my project ultimately needs to consist of only the Thunkable app, my wireless camera, and the Arduino! I believe that will be the case in the end, but let me continue with my question.

Just to be clear, the only reason I am using OpenCV is that I want to automate everything and my camera cannot take pictures automatically; thus, I am resorting to a live video feed.

How do I connect my OnReal Wi-Fi camera (the same one mentioned in the hyperlink) to OpenCV? The camera emits its own Wi-Fi signal, which a phone, such as an Android, can connect to. I need to be connected to the camera, which is always on, so I can extract the images from the video according to the distances from the Arduino.
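For the frame-extraction side, here is a sketch of pulling frames out of a video (or stream) at the timestamps collected from the ultrasonic readings. The `cv2` calls (`VideoCapture`, `CAP_PROP_POS_MSEC`, `imwrite`) are real OpenCV APIs, but the stream URL is a placeholder; you would need to find the actual address the OnReal camera serves on its Wi-Fi network:

```python
# Sketch: save the video frames nearest to each collected timestamp.
# The cv2 import is inside the function so the pure helper above it
# can run without OpenCV installed.

def timestamp_to_frame(ts_seconds, fps):
    """Map a video timestamp to the nearest frame index."""
    return int(round(ts_seconds * fps))

def save_frames(video_source, timestamps_s, out_pattern="frame_{:03d}.jpg"):
    import cv2
    cap = cv2.VideoCapture(video_source)
    for i, ts in enumerate(timestamps_s):
        cap.set(cv2.CAP_PROP_POS_MSEC, ts * 1000.0)  # seek to the timestamp
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(out_pattern.format(i), frame)
    cap.release()

print(timestamp_to_frame(2.5, 30))  # -> 75
# save_frames("http://192.168.1.1:8080/video", [1.0, 3.0])  # hypothetical URL
```

Note that seeking by `CAP_PROP_POS_MSEC` works reliably on recorded video files; on a live stream you would instead grab frames as they arrive and tag them with the current time.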

I have a few more questions, but I’ll save those for later; let me go step-by-step. I will try to research those questions individually too.



I was also using this for OpenCV installation.


If you have a wireless camera and want to send the images to your phone, what is the purpose of the Arduino other than reading the ultrasonic sensor? Since you are using a microcontroller, wouldn't it make more sense to connect the camera to the Arduino via a Wi-Fi shield or something like that?


If yes, then there are two methods:

  1. You connect the camera via a wire to the Arduino and then send commands via a Thunkable app to the Arduino to get your image. (This has a better chance of working.)

  2. You could TRY connecting directly to your camera via Wi-Fi and use an API (a JSON GET request) from your app to achieve your goal. (It is a moonshot, but worth a try.)


For option 1:

How do I make the Arduino recognize my camera? My microcontroller only has a micro-USB port, not a full-size one. Do I use my USB OTG cable on the microcontroller and then connect that to the camera using its micro-USB cable? I don't have a Wi-Fi shield, so a small cable connection between the Arduino and the camera would be fine.

Once the Arduino has recognized the non-Arduino-based OnReal camera (and obviously the ultrasonic sensor is connected), how would I transfer the frame from the camera's video to the Thunkable app? This is where I thought OpenCV comes in. I may be wrong; we may not need OpenCV at all!?

Searching online only yielded some information on Adafruit and Arduino-based cameras. However, my camera is not optimized or manufactured for Arduino (though I am trying to connect it to one).


Okay! So I did a bit of Googling, and I'm sorry to dash your hopes, but I don't think it is possible to connect your wireless camera via USB to the Arduino, as it is simply NOT OPTIMIZED for it. You must either change your Arduino Uno to something more powerful, like a Due, a Yún, or a Raspberry Pi, or change your camera to an Arduino camera module.


What if, instead of my camera communicating with the Arduino, my phone directly connects to the camera through the Wi-Fi it emits? From there, using my new application (perhaps with OpenCV?), could I configure my Thunkable app so it can directly extract the video feed from the camera and pull out the specific frames from the feed? The Arduino ultrasonic sensor could still be connected independently to the Thunkable app (maybe via a Bluetooth module).

This is what I am thinking now:

  • The app will be getting independent input from the mini camera (Wi-Fi) and the Arduino ultrasonic sensor (Bluetooth). The app (I don't know if I need to develop this with OpenCV + Android Studio or just Thunkable), based on the distances from the ultrasonic sensor, would maintain a list/vector which stores at what times in the video the distance is greater than X cm. Based on that list, the app could pull out those images (once again: I don't know how!) and send them to the cloud (I think you mentioned Firebase). From there, the logic I have written in Thunkable could take over.

What do you think?

Thank you.


Since your camera is wireless, there simply MUST BE a way to transfer those images wirelessly. If only I knew… Anyway, try the API link I sent you above to get an idea of what I'm referring to.