I am very excited to say that Cortexica Vision Systems has developed a super-fast AI Edge Camera that is capable of running multiple deep learning models in parallel, enabling novel applications that would previously have been out of reach.

In a short period of time, Cortexica became the first company to demonstrate the power of the very latest Intel Myriad-X VPU (Vision Processing Unit) at the recent IOTSWC conference in Barcelona. We presented our AI Safety solution running on our new product, developed in partnership with UP and AAEON. Check out this interview and this webinar for more details.

A couple of months have passed since IOTSWC, and I am super excited to share with you yet another innovation, which I believe is the first of its kind in the world. Cortexica is very much hardware and software agnostic, which allows us to spot emerging opportunities and act on them fast. Only a couple of days ago, Intel released OpenVINO R5, enabling Raspberry Pi boards to be used as a host for Intel Neural Compute Sticks (NCS) that leverage the power of the Myriad-X VPU. Only a couple of days after this release, Cortexica has managed to create the world's first AI Edge Camera powered by two of these super low-power, high-performance Myriad-X processors!
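To give a feel for what this release makes possible, here is a minimal sketch of loading a model onto a Myriad VPU through the Inference Engine C++ API. A caveat: this uses the InferenceEngine::Core interface from later OpenVINO releases (the 2018 R5 API is plugin-based), and the model file names are placeholders for whatever IR the Model Optimizer produced.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Read an Intermediate Representation (IR) model; the file names are
    // placeholders for the .xml/.bin pair produced by the Model Optimizer.
    auto network = core.ReadNetwork("model.xml", "model.bin");

    // "MYRIAD" targets the Myriad VPU inside an attached Neural Compute Stick.
    auto executable = core.LoadNetwork(network, "MYRIAD");

    // Create a request and run a single (blocking) inference.
    // Filling the input blobs is omitted for brevity.
    auto request = executable.CreateInferRequest();
    request.Infer();
    return 0;
}
```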

I have used the following hardware:

- Raspberry Pi board
- Raspberry Pi PoE (Power over Ethernet) HAT
- 2× Intel Neural Compute Stick 2 (NCS2)
- 1× Intel Neural Compute Stick (NCS1)
- Raspberry Pi camera module
- Dummy security camera housing
- Mini heatsinks
- Ethernet cable and a Cisco router with PoE ports

The software I've used:

- Intel OpenVINO toolkit, R5 release
- My own C++ inference and streaming application (described below)


The first thing I did was connect the PoE HAT to the Raspberry Pi board.


Then I ran a simple test to see if the PoE HAT works fine, as the original batch of these HATs had over-current issues on the USB ports. I believe that all faulty units have been recalled and that Amazon only sells the revised model. I plugged in two NCS2 sticks and one NCS1 without its cover/heatsink. I had also bought some mini heatsinks, as I was originally planning to remove the covers/heatsinks from the NCS2 sticks, but there was enough space, so I did not need to. I then plugged an Ethernet cable from a Cisco router PoE port into the Raspberry Pi, and everything booted up nicely without any issues.
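As a sanity check that the HAT delivers enough power and that all of the sticks enumerate correctly, a device listing along these lines can be used (again a sketch assuming the newer InferenceEngine::Core API); each attached stick should show up as a separate MYRIAD device.

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;

    // Each Neural Compute Stick appears as its own MYRIAD device
    // (e.g. "MYRIAD.1.1-ma2480"), so three sticks should yield three entries.
    for (const auto& device : core.GetAvailableDevices())
        std::cout << "Found device: " << device << std::endl;

    return 0;
}
```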


I removed everything from the camera housing that would otherwise be used to move the camera when motion is detected.

Then I put all the hardware in. The camera module is simply placed behind one of the dummy camera lenses.

Then I placed the top cover on, screwed everything together, and hung it on the wall for testing.

I have created a C++ application that runs inference on two different models in parallel: in this case, face detection and PPE (personal protective equipment) detection. The application also provides an endpoint that returns a dynamically generated index.html containing the web interface. In the video below you can see how it looks; there isn't much yet other than an image placeholder whose source address points to the second endpoint, /videostream, which streams the detection results. Please send me a message if you need some help with the code, as I am not going to publish it here.
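Since I am not publishing the actual code, here is only a rough sketch of the general pattern, not the real implementation: each network loaded onto its own MYRIAD device, each driven by an asynchronous infer request so the two sticks work in parallel on every frame. The device names, model files, and per-frame handling are all illustrative assumptions.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Load each model onto its own stick so the two inferences truly run in
    // parallel. "MYRIAD.1.1" / "MYRIAD.1.2" are illustrative device names;
    // the real names come from core.GetAvailableDevices().
    auto faceNet = core.LoadNetwork(
        core.ReadNetwork("face-detection.xml", "face-detection.bin"),
        "MYRIAD.1.1");
    auto ppeNet = core.LoadNetwork(
        core.ReadNetwork("ppe-detection.xml", "ppe-detection.bin"),
        "MYRIAD.1.2");

    auto faceRequest = faceNet.CreateInferRequest();
    auto ppeRequest = ppeNet.CreateInferRequest();

    // Per frame: kick off both requests asynchronously, then wait for both.
    // Filling the input blobs from the camera frame is omitted for brevity.
    faceRequest.StartAsync();
    ppeRequest.StartAsync();
    faceRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);
    ppeRequest.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);

    // Both sets of detections can now be drawn on the frame and pushed out
    // to the /videostream endpoint.
    return 0;
}
```

The /videostream endpoint itself is typically just a long-lived HTTP response with a multipart/x-mixed-replace content type, sending each annotated frame as a JPEG part, which is what lets a plain image placeholder in index.html display a live stream.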

Here is a screenshot of the camera's web interface accessed from my phone's browser.