Disclaimer: everything written here is my own personal opinion and does not represent Cambridge Consultants in any way.
|Cambridge Consultants||Software and Electronics Lead||Python, MATLAB|
This was the other major project I got to work on while interning at Cambridge Consultants (the other was DelivAir). Smarter Recycling is a prototype that can distinguish between specific pre-selected items of recycling, using computer vision and machine learning. Once an item is recognised, one of four rings lights up, indicating where the user should place it.
Item recognition used two cameras mounted in the wall of the detection area, angled so that the bin's surroundings were blocked by the wall, which made recognition significantly easier. The camera images were read by the computer inside the bin, and the recognition neural networks were fed a grey-scale version of each image together with its HSV histogram breakdown, returning which item it was. The network was also trained to recognise when there was no item in the detection area, and was given a reasonable amount of training data on other possible objects.
The actual recognition was done by a supervised four-layer classification neural network that I developed. Training was done on an image set recorded with the actual bin, using a separate piece of software that immediately saved the images in the correct raw data format, with the class correctly indexed alongside the data. The network was trained in MATLAB, as it proved faster than the Python/NumPy implementation; however, the camera capture, processing and network evaluation ran in Python, using OpenCV, NumPy and pySerial.
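The Python evaluation side can be sketched as a plain NumPy forward pass over weights exported from the MATLAB training. The layer sizes and the ReLU/softmax activations below are assumptions for illustration, not the network's actual architecture:

```python
import numpy as np

def evaluate(features, weights, biases):
    """Forward pass of a small fully connected classifier.
    `weights`/`biases` hold one entry per layer, e.g. exported from MATLAB."""
    a = features
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ W + b)  # hidden layers (ReLU is an assumption)
    # Output layer: softmax over the item classes (plus an "empty" class)
    z = a @ weights[-1] + biases[-1]
    z -= z.max()                        # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()                  # class probabilities
```

Doing only evaluation in Python keeps the live loop simple: no training code ships with the bin, just the learned weight matrices.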
Besides the two cameras, an Arduino Uno was connected to the computer as the interface to the hardware. It controlled the LEDs and the detectors that sensed when an item was dropped into a hole, and communicated with the computer via USB.
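Talking to such an Arduino from Python with pySerial might look like the following sketch. The port name, baud rate and one-line text protocol are illustrative assumptions, not the prototype's actual firmware interface:

```python
def ring_command(ring_index):
    """Build the one-line command telling the Arduino which ring to light.
    The text protocol here is an illustrative assumption."""
    if not 0 <= ring_index < 4:
        raise ValueError("the prototype had four rings")
    return f"RING {ring_index}\n".encode("ascii")

def light_ring(port, ring_index):
    """Send the command over the Arduino's USB serial link and read its reply."""
    import serial  # pySerial; imported here so the helper above has no dependency
    with serial.Serial(port, 9600, timeout=5) as link:
        link.write(ring_command(ring_index))
        # The Arduino can answer on the same link, e.g. with a drop-detector event
        return link.readline().decode().strip()
```

A simple newline-delimited text protocol like this is easy to debug from the Arduino IDE's serial monitor, which matters when you are also the one wiring the hardware.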
The computer also hosted a local Flask server, which served a webpage acting as the accompanying app, showing images depending on the disposed item and whether the user put it in the right receptacle. The webpage polled the server for the current image once per second using jQuery, and the recognition software pushed updates about the current state to the server.
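A minimal version of that server can be sketched with Flask. The endpoint names and the JSON state shape are my assumptions, not the original implementation: one route for the page to poll, one for the recognition software to push updates to.

```python
from flask import Flask, jsonify

app = Flask(__name__)
state = {"item": None, "correct": None}  # latest recognition result

@app.route("/state")
def current_state():
    # The webpage polls this endpoint once per second
    # (the original used jQuery on the client side).
    return jsonify(state)

@app.route("/update/<item>/<int:correct>")
def update(item, correct):
    # Called by the recognition software when a new item is disposed of.
    state["item"], state["correct"] = item, bool(correct)
    return "ok"
```

Polling once per second is crude compared to WebSockets, but for a demo prototype it keeps both the client and the server trivially simple.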
I also worked on the electronics, connecting up all the sensors and LEDs and wiring the internals of the prototype.
Just days after the prototype was finished, it was shipped to Munich, Germany, for the drinktec expo. The project's press release was picked up by tech sites like Engadget and International Business Times as well as local news sites, and the project was nominated for ‘Circular Economy Technology of the Year’ by Business Green and for an ‘edie Sustainability Leaders’ award.
This project taught me a lot about neural networks and computer vision and their practical applications, and about electronics and hardware interfacing. It also taught me a lot about working in a professional environment, alongside great designers and industry experts, to create a product that both looks and works great. Finally, here is an awkward blurry picture of me and the bin: