
I am basically working on a smart trash bin project. I want an ultrasonic sensor to sense that someone has put trash in; this will then trigger the Raspberry Pi camera to take a picture. The problem I am having is with the next part: how do I get the Raspberry Pi to send the picture to Google Cloud, and how do I receive the result back from Google Cloud (the return data should be "wet" or "dry")? Depending on what the return data is, I want the Raspberry Pi to move a servo motor accordingly. I am having a lot of trouble integrating the Raspberry Pi with Google Cloud. Lastly, I also want every picture taken to be automatically appended to my training data on Google Cloud, so that the model gets smarter with each use. I need help with:

  1. Connecting the Raspberry Pi to the Google Cloud AutoML/Vision API
  2. Sending the captured image to Google Cloud
  3. Receiving the return data ("wet" or "dry")
  4. Connecting the return data to a servo, i.e. moving the servo according to the data returned
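
For context, here is a rough sketch of the trigger-and-capture side described above, assuming an HC-SR04 ultrasonic sensor read through gpiozero and the picamera module (the pin numbers, the distance threshold and the image path are placeholders):

    from time import sleep
    from gpiozero import DistanceSensor   # hypothetical wiring: trigger on GPIO23, echo on GPIO24
    from picamera import PiCamera

    sensor = DistanceSensor(echo=24, trigger=23)
    camera = PiCamera()

    while True:
        # Something passing close to the sensor is taken to mean trash was dropped in
        if sensor.distance < 0.15:          # distance is in metres; the threshold is a placeholder
            image_path = "/home/pi/trash.jpg"
            camera.capture(image_path)
            # TODO: send image_path to the cloud model and act on the "wet"/"dry" result
            sleep(2)                        # crude debounce so one drop gives one picture
        sleep(0.1)
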
  • Please explain and help me connect the Raspberry Pi camera and the picture it takes with the cloud platform. The main things are basically sending the picture to the cloud, getting the result back (wet or dry waste), and lastly integrating this result with the Raspberry Pi so that, according to the result, the motor moves left or right. These are the main things I need help with. – kanishk Dec 23 '19 at 17:50

2 Answers


Mainly, it needs the following things:

  • A service account so the device can authenticate to the Vision API
  • A Python/Java/C++ client library to make the HTTP calls to the Vision API (a Python sketch follows below)
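
For example, with the service-account key downloaded to the Pi, a prediction request could look roughly like this. It is only a minimal sketch that uses the AutoML Vision prediction client (the custom "wet"/"dry" labels come from an AutoML model rather than the stock Vision API); the project ID, model ID and file paths are placeholders, and the exact call signature differs slightly between client-library versions:

    import os
    from google.cloud import automl_v1beta1 as automl   # pip install google-cloud-automl

    # Point the client library at the service-account key downloaded from the GCP console
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/pi/service-account.json"

    project_id = "my-project-id"     # placeholder
    model_id = "ICN1234567890"       # placeholder AutoML Vision model ID

    client = automl.PredictionServiceClient()
    model_full_id = client.model_path(project_id, "us-central1", model_id)

    with open("/home/pi/trash.jpg", "rb") as f:
        content = f.read()

    payload = {"image": {"image_bytes": content}}
    params = {"score_threshold": "0.5"}

    response = client.predict(model_full_id, payload, params)
    for result in response.payload:
        # display_name is the label the model was trained with, e.g. "wet" or "dry"
        print(result.display_name, result.classification.score)

The highest-scoring display_name in the response is the label to act on.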

For more details, refer to the links below:

  1. Use Google Cloud Vision On the Raspberry Pi and GoPiGo

  2. Cloud Doorbell

divyang4481

To get the wet/dry information, you can use AutoML Vision to train a model that can classify the images taken into one of those two categories. Furthermore, you can train it as an Edge model and then export it for offline use with Raspberry Pi.
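
Once exported, the model can run entirely on the Pi. Below is a minimal sketch, assuming the export produced the usual model.tflite and dict.txt files and that the tflite-runtime package is installed on the Pi (file names and paths are placeholders):

    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter   # pip install tflite-runtime

    MODEL_PATH = "model.tflite"     # files produced by the AutoML Vision Edge export
    LABELS_PATH = "dict.txt"
    IMAGE_PATH = "/home/pi/trash.jpg"

    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    _, height, width, _ = input_details[0]["shape"]

    # Edge exports are usually quantised (uint8 input); a float export would
    # need the pixel values normalised instead.
    image = Image.open(IMAGE_PATH).convert("RGB").resize((width, height))
    input_data = np.expand_dims(np.array(image, dtype=input_details[0]["dtype"]), axis=0)

    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]

    labels = [line.strip() for line in open(LABELS_PATH)]
    prediction = labels[int(np.argmax(scores))]   # expected to be "wet" or "dry"
    print(prediction)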

You can use this Quickstart as a reference to get you started with AutoML Vision Edge models. And you can find the information to export your model and use it with Raspberry Pi here.
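
Once you have the "wet"/"dry" label, driving the servo and saving the picture for a future training round could look roughly like the sketch below. The GPIO pin, the duty-cycle values and the bucket name are placeholders, and note that copying images into a Cloud Storage bucket does not retrain the model by itself; you still need to import the new images into your AutoML dataset and train a new model version.

    import time
    import RPi.GPIO as GPIO
    from google.cloud import storage        # pip install google-cloud-storage

    SERVO_PIN = 18                          # placeholder BCM pin for the servo signal line

    def move_servo(label):
        """Swing the servo one way for 'wet' and the other way for 'dry'."""
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(SERVO_PIN, GPIO.OUT)
        pwm = GPIO.PWM(SERVO_PIN, 50)               # standard 50 Hz hobby-servo signal
        pwm.start(7.5)                              # roughly the centre position
        duty = 5.0 if label == "wet" else 10.0      # placeholder duty cycles; calibrate for your servo
        pwm.ChangeDutyCycle(duty)
        time.sleep(1)
        pwm.stop()
        GPIO.cleanup()

    def archive_for_training(image_path, label):
        """Copy the picture into a Cloud Storage bucket, grouped by label,
        so it can later be imported into the AutoML dataset."""
        bucket = storage.Client().bucket("my-trashbin-training-data")   # placeholder bucket
        blob = bucket.blob("%s/%s.jpg" % (label, time.strftime("%Y%m%d-%H%M%S")))
        blob.upload_from_filename(image_path)

    label = prediction          # "wet" or "dry", from either of the approaches above
    move_servo(label)
    archive_for_training("/home/pi/trash.jpg", label)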

Tlaquetzal