Been a long time here, although I put a huge amount of time into trying to get into Thunkable years back.
As an ease back into things, I wanted to do a small app and do some testing.
What I’d like to do is hopefully simple. I want to create an app, on Android only, that uses the camera as a motion sensor. I want this to trigger the volume on and off, so that a playing video becomes audible when someone walks past it and then mutes 1 minute later when the camera stops sensing motion.
I reckon it’s just a few blocks, if it’s possible at all. Can anyone put me on the right track to putting together the blocks necessary for this to work?
I’m excited to get back into the group and to help with anything I can (UX design, UI design, graphic design…)
Welcome back! What you’re describing is not possible with Thunkable blocks. The camera and video blocks are pretty limited.
You can do that with Scratch but you won’t be able to build a mobile app from that.
Thanks for the feedback and the welcome tatiang.
Whilst I was waiting for a reply, I asked ChatGPT what it thought, and it replied with the following:
To use a phone’s camera on Android as a motion detector to trigger volume up and down actions using Thunkable, you can follow these steps:
- Set up the UI:
  - Add a “Camera” component to your screen to access the phone’s camera.
  - Include any additional UI elements you want, such as buttons or labels, for user interaction.
- Capture Camera Frames:
  - Use the “Camera” component’s blocks to capture frames from the phone’s camera.
  - Set the camera to continuously capture frames at a specific interval using a timer or an event-based approach.
- Image Processing:
  - Use the available image processing blocks in Thunkable to analyze the captured frames.
  - Apply techniques like frame differencing, background subtraction, or optical flow to detect motion in consecutive frames.
  - Compare pixel values or calculate the difference between frames to identify changes.
- Motion Detection Algorithm:
  - Implement a motion detection algorithm using Thunkable’s blocks based on the processed frames.
  - Define the conditions for what constitutes motion based on pixel differences, threshold values, or other criteria.
- Volume Control:
  - Use Thunkable’s blocks to control the volume up and down actions.
  - Depending on the motion detected, trigger the appropriate volume control block to increase or decrease the volume.
- Implement additional features like adjusting camera settings, displaying notifications, or providing user feedback during the volume control process.
- Utilize Thunkable’s blocks for user interface interactions, audio playback, or integrating with external services.
Remember to refer to the Thunkable documentation and resources for detailed information on the available blocks and their usage. The specific blocks and functionalities may vary based on the Thunkable platform and version you are using.
It’s important to note that controlling system-level volume actions (such as volume up and down) may require special permissions on the Android device. Thunkable may have specific blocks or components to handle volume control, or you may need to explore additional extensions or APIs to achieve this functionality.
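For what it’s worth, the frame-differencing technique ChatGPT mentions is simple enough to sketch outside Thunkable. Here is a toy Python/NumPy version; `detect_motion` and the two threshold constants are illustrative names and values I picked, not Thunkable blocks or a real camera pipeline:

```python
import numpy as np

MOTION_THRESHOLD = 25    # per-pixel intensity change that counts as "changed"
MOTION_FRACTION = 0.02   # fraction of changed pixels that counts as "motion"

def detect_motion(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Frame differencing: compare two consecutive grayscale frames."""
    # Widen to int16 so subtracting uint8 values can't wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > MOTION_THRESHOLD)
    return changed / diff.size > MOTION_FRACTION

# Toy frames: a bright square appears in an otherwise black scene.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = np.zeros((120, 160), dtype=np.uint8)
curr[40:80, 60:100] = 255  # simulated moving object

print(detect_motion(prev, curr))  # → True
print(detect_motion(prev, prev))  # → False
```

A real implementation would also need background subtraction or smoothing to cope with lighting changes and camera noise, which is exactly the kind of per-frame processing Thunkable’s blocks don’t expose.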
I wanted the volume to go on and off whilst a video was playing, with the camera’s motion detection as the trigger.
The outcome would be, for example: a video on loop in mute mode; the camera senses motion and unmutes the video; then after 1 minute the video goes back into mute.
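The timing part of that behaviour (unmute on motion, re-mute after a minute of quiet) is straightforward regardless of platform. A minimal sketch in plain Python, assuming some timer fires `tick` periodically; `AutoMute` and `MUTE_DELAY` are made-up names for illustration:

```python
MUTE_DELAY = 60.0  # seconds without motion before re-muting

class AutoMute:
    """Unmute on motion; re-mute after MUTE_DELAY seconds of no motion."""

    def __init__(self) -> None:
        self.muted = True        # video starts looping silently
        self.last_motion = None  # timestamp of most recent motion

    def on_motion(self, now: float) -> None:
        # Called whenever the motion detector fires.
        self.last_motion = now
        self.muted = False       # video becomes audible

    def tick(self, now: float) -> None:
        # Called periodically (e.g. by a 1-second timer) to check for timeout.
        if not self.muted and now - self.last_motion >= MUTE_DELAY:
            self.muted = True    # back to silent looping

am = AutoMute()
am.on_motion(now=0.0)
am.tick(now=30.0)   # still within the window, stays unmuted
print(am.muted)     # → False
am.tick(now=61.0)   # over a minute with no motion, muted again
print(am.muted)     # → True
```

The hard part in Thunkable isn’t this logic (a timer block and a variable would do), it’s getting the motion signal out of the camera in the first place.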
ChatGPT suggested something quite complicated. And as you probably know, ChatGPT tends to hallucinate and depending on the license you are using, may not even have current knowledge of Thunkable. It suggested several things that are simply not possible with Thunkable. For example, try capturing frames at a rapid rate. I think you’ll find that doesn’t work well, especially if you need to upload them to the cloud, which you would need in this case.
Do I think there is a chance you could do what it suggests using Thunkable, either by sending individual frames or (more likely) a full video to an external server for processing? Yep, I think there’s a teeny, tiny chance you could.
I personally would not use Thunkable for this. What might take you hours with this tool and be full of headaches because of the limitations with the camera blocks would be possible in Scratch in minutes.
I am asking for paid help because you were unable to help and are the only contributor to my question. I believe it is unfair, to myself and to the freelancers who would reply, that you closed this post:
Video player that uses auto volume on and off with motion detection
That post refers to a different, more specific brief and only includes a link to this post, which was for unpaid help.
Please reopen the other post, as it does not break the rules of the forum, and I am looking for paid help of a slightly different nature there (asking for paid help in the Paid Help category should surely be an obvious escalation in the process of enquiry; otherwise why would the option exist?).
I prefer a ‘can do’ attitude. That’s how Thunkable came into existence in the first place.
I’m sorry, that was my mistake. We see quite a few duplicate posts and I didn’t notice it was a request in the Paid Help category.
It would have been helpful to state that you were looking to hire someone in the post itself but I should not have closed it.
I’ve re-opened that topic.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.