I have been learning and working on machine learning for the entire semester now, and I am glad to say I've come a good distance. I have already completed the online certified course and am now working on the more practical aspects of ML. To that end, I've taken up a project under Prof. Debdoot Sheet which involves applying ML to assistive robotic surgery.
I had also spent the past few days installing CUDA (Compute Unified Device Architecture) on my laptop (it seems to have trouble installing on hybrid-graphics laptops). I got it running eventually, with no crashes evident now.
I started learning Torch a week or two back, and I now have an okay-ish grasp of the library (and of Lua, a completely new language for me). With the Durga Puja vacations under way, I decided to work on this more, and as a consequence I have successfully completed my first image classification task. I first trained on the MNIST dataset using the tutorial in the Torch repository, and along the way learnt about CNNs and their uses. Then I moved on to the laparoscopy dataset I had been told to work with, and classified cropped images into Tool 1, Tool 2, and Negatives (background). I know the current code is pretty basic and the model is just a four-layer CNN, but it feels great to actually work on the aspects that matter in the robotics field, rather than what I'd been working on before.
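The heart of any such CNN is the convolution operation itself. My actual model is written in Lua with Torch, but here is a minimal, illustrative sketch in plain NumPy of what a single convolutional filter computes (the function name and the synthetic image are just for illustration, not from my project code):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel --
    the core operation inside a CNN's convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is the sum of an elementwise product
            # between the kernel and the window it sits over.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 vertical-edge filter: responds where intensity changes left-to-right
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Synthetic 5x6 image: bright left half, dark right half
img = np.zeros((5, 6))
img[:, :3] = 1.0

response = conv2d_valid(img, edge_filter)
print(response.shape)  # (3, 4); the strong responses sit on the edge
```

A real convolutional layer learns many such filters at once (and stacks them with non-linearities and pooling), but this is the operation each one performs.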
I simply built upon the face-vs-background recognition demo from Torch and added a few modifications for this learning task. I wrote my own code for parsing and image manipulation (using Python and OpenCV). The entire code (along with the dataset I edited) is up on the repository at https://github.com/thesidjway/Assistive-surgery-project
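To give a flavour of that preprocessing step, here is a hedged sketch of the kind of cropping and annotation-parsing utility involved. The label names, annotation format, and function names here are hypothetical stand-ins, not my actual repository code; in practice the image array would come from `cv2.imread()`:

```python
import numpy as np

# Hypothetical mapping for the three classes mentioned above
LABELS = {"tool1": 0, "tool2": 1, "negative": 2}

def crop_patch(image, x, y, width, height):
    """Return a (height, width) patch from an HxW(xC) image array."""
    return image[y:y + height, x:x + width]

def parse_annotation(line):
    """Parse a hypothetical 'filename x y w h label' annotation line."""
    name, x, y, w, h, label = line.split()
    return name, int(x), int(y), int(w), int(h), LABELS[label]

# Example on a synthetic 6x8 grayscale image
img = np.arange(48).reshape(6, 8)
patch = crop_patch(img, 2, 1, 3, 2)
print(patch.shape)  # (2, 3)

record = parse_annotation("frame01.png 10 20 64 64 tool1")
print(record)  # ('frame01.png', 10, 20, 64, 64, 0)
```

The idea is simply to turn raw laparoscopy frames plus bounding boxes into fixed-size labelled patches the classifier can train on.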
I was getting really good accuracy values, but I suspected this was partly because most images in such datasets share a similar orientation. So I rotated the images randomly using Phatch, and fortunately the accuracy didn't drop much.
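I did the rotation with Phatch, a batch photo tool, but the same augmentation idea can be sketched in a few lines of NumPy. This toy version only handles multiples of 90 degrees (arbitrary angles would need something like `cv2.warpAffine`), and the function name is my own invention:

```python
import numpy as np

def random_rotate(image, rng):
    """Rotate an image by a random multiple of 90 degrees.

    A toy stand-in for the Phatch batch rotation used in the post;
    it tests whether the classifier relies on a fixed orientation.
    """
    k = rng.integers(0, 4)  # 0, 90, 180 or 270 degrees, counter-clockwise
    return np.rot90(image, k)

rng = np.random.default_rng(0)
img = np.arange(12).reshape(3, 4)
rotated = random_rotate(img, rng)
print(rotated.shape)  # (3, 4) or (4, 3), depending on the draw
```

If accuracy holds up on randomly rotated inputs, the model is likely learning tool appearance rather than a fixed orientation.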
I now have to move on to tasks more advanced than simple image classification, and hence need to work even harder. I am also building a dataset at AGV for detecting bumpers in KGP; if that works out, it would be a real success.
It has surely been a good time, and I hope it continues.