I am happy to announce that we have successfully developed the world's first AI edge camera powered by the very latest Google Tensor Processing Unit (TPU) accelerator. This is the fourth time in the last two months that we have pushed edge computing to its limits. In December last year, we created two types of AI edge cameras powered by multiple Intel Myriad-X Vision Processing Units (VPUs): the first version was based on a Raspberry Pi and the second on an UP Squared board. The video below shows a quick overview.

[Video: quick overview of the two Myriad-X camera versions]

More recently, we upgraded the first version of the camera so it can run off a small solar panel. This removes most of the infrastructure requirements: the camera is completely plug-and-play, connects via 4G, and runs off solar energy or a rechargeable battery pack. The video below shows a quick demo from the initial tests.

[Video: initial tests of the solar-powered camera]

The first, Raspberry Pi based version of this camera, solar-powered or not, used two Intel Myriad-X VPUs, each running a different convolutional neural network based on MobileNet-SSD v2, in parallel at around 10 FPS. The same models run much faster, at around 40 FPS, on the UP Squared with a Myriad-X on a mini PCIe card (AI Core X from the UP Shop). To be fair, Intel clearly states that OpenVINO support for the Raspberry Pi is currently experimental, so I expect the performance to improve in the future. We are probably not going to see anything near 40 FPS, but I hope to see at least 15 FPS with future optimisations.
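For anyone curious what such a pipeline looks like in code, here is a minimal sketch of running a MobileNet-SSD style detector on a Myriad-X device through the current OpenVINO Python API. The model path, input size and threshold are assumptions for illustration, not our production code; with two sticks, each network can be compiled onto its own device.

```python
# Illustrative sketch only: one MobileNet-SSD style detector on a Myriad-X VPU
# via OpenVINO. Paths, input size (300x300) and threshold are assumptions.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("mobilenet_ssd_v2.xml")              # IR produced by the Model Optimizer
compiled = core.compile_model(model, device_name="MYRIAD")   # with two sticks, pin each model to its own device
output_layer = compiled.output(0)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # NCHW blob; SSD IRs typically take a fixed-size BGR input
    blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[output_layer]
    # Standard SSD output: [1, 1, N, 7] = (image_id, label, confidence, xmin, ymin, xmax, ymax)
    for _, label, conf, xmin, ymin, xmax, ymax in detections[0][0]:
        if conf > 0.5:
            print(f"class {int(label)} at {conf:.2f}")
```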

Google developed its own Tensor Processing Unit (TPU), an ASIC designed from the ground up for machine learning. Until now, these TPUs were mostly used to accelerate cloud-based machine learning. Recently, Coral announced the Edge TPU and provided two ways for developers to try it out: a full dev board, and a USB accelerator similar to the Intel Neural Compute Stick with the Myriad VPU. We ordered the USB accelerator and developed the world's first AI edge camera that uses the TPU. At this point the tools provided by Google are quite limited, so converting the model was a bit of a challenge, but other than that I am very impressed by the performance of this tiny, low-power device. To test it quickly, we converted our model for the detection of personal protective equipment (PPE) to TensorFlow Lite format and then uploaded it to an online service that compiles the Lite model for the TPU. The resulting 8-bit quantised model is only around 5 MB in size and runs on the Raspberry Pi at an impressive 25-30 FPS!
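As a rough illustration of that conversion step, below is a minimal sketch of post-training full-integer quantisation with the TensorFlow Lite converter. It assumes a SavedModel export of the PPE detector and a hypothetical load_sample_frames() helper that yields calibration images; the exact export and preprocessing we used may differ.

```python
# Illustrative sketch: full-integer (8-bit) post-training quantisation to TFLite.
# "ppe_detector_saved_model" and load_sample_frames() are placeholders.
import tensorflow as tf

def representative_dataset():
    # A few hundred preprocessed frames are usually enough for calibration
    for image in load_sample_frames():
        yield [image[None, ...].astype("float32")]

converter = tf.lite.TFLiteConverter.from_saved_model("ppe_detector_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("ppe_detector_quant.tflite", "wb") as f:
    f.write(converter.convert())
```

The quantised .tflite file then goes through the Edge TPU compiler (an online service at the time of writing), and the compiled model can be loaded on the Raspberry Pi with the TFLite runtime and the Edge TPU delegate, roughly like this:

```python
# Loading the compiled model on the USB accelerator via the Edge TPU delegate.
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="ppe_detector_quant_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
```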

[Image: Google TPU Accelerator]

Here are a few pictures showing the Raspberry Pi 3B+, the camera module, and the Google TPU. The camera can be powered either via a USB cable or by PoE. We use USB power for the solar version and PoE for sites with existing infrastructure.

If you have any questions, let me know and I will do my best to answer as soon as I can.