Latest update: Instead of SSD, I show you how to use RetinaNet, which is better and more modern. I show you both how to use a pretrained model and how to train one yourself with a custom dataset on Google Colab.
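For a rough sense of what "using a pretrained model" looks like in practice, here is a minimal sketch using torchvision's pretrained RetinaNet. This is only an illustration under my own assumptions: the library choice and the image path "street.jpg" are placeholders, not necessarily what the course itself uses.

```python
# Minimal sketch: inference with a pretrained RetinaNet via torchvision.
# Assumes torchvision >= 0.13 (older versions use pretrained=True instead of weights=).
# "street.jpg" is a placeholder path.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode

img = Image.open("street.jpg").convert("RGB")
x = transforms.ToTensor()(img)  # (C, H, W) tensor with values in [0, 1]

with torch.no_grad():
    pred = model([x])[0]  # the model takes a list of images and returns a list of dicts

keep = pred["scores"] > 0.5   # keep only confident detections
print(pred["boxes"][keep])    # (N, 4) boxes as (x1, y1, x2, y2)
print(pred["labels"][keep])   # COCO class indices
```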
This is one of the most exciting courses I’ve done and it really shows how fast and how far deep learning has come over the years.
When I first started my deep learning series, I never imagined that I'd make two courses on convolutional neural networks.
I think you'll find that this course is so different from the previous one that you'll be impressed by just how much material we have to cover.
Let me give you a quick rundown of what this course is all about:
We're going to bridge the gap between the basic CNN architecture you already know and love and modern, novel architectures such as VGG, ResNet, and Inception (named after the movie, which, by the way, is also great!).
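As a taste of how accessible these architectures are, here is a minimal sketch that loads ImageNet-pretrained versions of them in Keras. This is my own illustration, assuming TensorFlow 2.x; "some_image.jpg" is a placeholder path.

```python
# Minimal sketch: loading ImageNet-pretrained VGG16, ResNet50, and InceptionV3 in Keras.
import numpy as np
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

vgg = VGG16(weights="imagenet")               # expects 224x224 inputs
resnet = ResNet50(weights="imagenet")         # expects 224x224 inputs
inception = InceptionV3(weights="imagenet")   # expects 299x299 inputs

# Classify one image with ResNet50
img = image.load_img("some_image.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = resnet.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(class_id, class_name, probability), ...]
```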
We’re going to apply these to images of blood cells, and create a system that is a better medical expert than either you or I. This brings up a fascinating idea: that the doctors of the future are not humans, but robots.
In this course, you'll see how we can turn a CNN into an object detection system that not only classifies images but also locates each object in an image and predicts its label.
You can imagine that such a task is a basic prerequisite for self-driving vehicles. (It must be able to detect cars, pedestrians, bicycles, traffic lights, etc., in real time.)
We'll be looking at a state-of-the-art algorithm called SSD (and, per the update above, its more modern successor, RetinaNet), which is both faster and more accurate than its predecessors.
Another very popular computer vision task that makes use of CNNs is called neural style transfer.
This is where you take one image, called the content image, and another image, called the style image, and combine them to produce an entirely new image, as if you had hired a painter to paint the content of the first image in the style of the second. Unlike a human painter, the algorithm can do this in a matter of seconds.
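Under the hood, this is usually framed as an optimization over the pixels of the generated image, balancing a content loss against a style loss built from Gram matrices of CNN feature maps. A rough sketch of those two ingredients, assuming TensorFlow (the function names here are my own illustrative choices):

```python
# Minimal sketch of the two losses typically used in neural style transfer.
# Feature maps would come from intermediate layers of a pretrained CNN (e.g., VGG).
import tensorflow as tf

def gram_matrix(features):
    # features: (height, width, channels) feature map from one CNN layer
    f = tf.reshape(features, (-1, tf.shape(features)[-1]))  # (H*W, C)
    n = tf.cast(tf.shape(f)[0], tf.float32)
    return tf.matmul(f, f, transpose_a=True) / n  # (C, C) channel correlations

def content_loss(content_features, generated_features):
    # Match the raw activations -> preserves *what* is in the image
    return tf.reduce_mean(tf.square(generated_features - content_features))

def style_loss(style_features, generated_features):
    # Match channel correlations (Gram matrices) -> preserves textures and brush strokes
    return tf.reduce_mean(tf.square(gram_matrix(generated_features) - gram_matrix(style_features)))

# The total loss is a weighted sum, and the generated image's pixels are the variables being optimized:
# loss = alpha * content_loss(...) + beta * sum of style_loss(...) over several layers
```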
I will also introduce you to the now-famous GAN architecture (Generative Adversarial Networks), where you will learn some of the technology behind how neural networks are used to generate state-of-the-art, photo-realistic images.
Currently, we also implement object localization, which is an essential first step toward a full object detection system.
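One common way to frame localization is to bolt a small regression head onto a CNN, so the network predicts four bounding-box numbers alongside the class label. A minimal Keras sketch of that idea (the layer sizes, input size, and 10-class output are illustrative assumptions, not the course's exact model):

```python
# Minimal sketch: a CNN with two heads, one for the class label and
# one for the bounding box (x, y, width, height) scaled to [0, 1].
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)

class_out = layers.Dense(10, activation="softmax", name="class")(x)  # what is it?
bbox_out = layers.Dense(4, activation="sigmoid", name="bbox")(x)     # where is it?

model = Model(inputs, [class_out, bbox_out])
model.compile(
    optimizer="adam",
    loss={"class": "sparse_categorical_crossentropy", "bbox": "mse"},
    loss_weights={"class": 1.0, "bbox": 1.0},
)
model.summary()
```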
I hope you're excited to learn about these advanced applications of CNNs. I'll see you in class!
AWESOME FACTS:
One of the major themes of this course is that we’re moving away from the CNN itself, to systems involving CNNs.
Instead of focusing on the detailed inner workings of CNNs (which we've already done), we'll focus on high-level building blocks. The result? Almost zero math.
Another result? No complicated low-level code such as that written in TensorFlow, Theano, or PyTorch (although some optional exercises may use them, for the very advanced students). Most of the course will be in Keras, which means a lot of the tedious, repetitive stuff is written for you.
"If you can't implement it, you don't understand it"
Or as the great physicist Richard Feynman said: "What I cannot create, I do not understand".
My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch.
Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code?
After doing the same thing with 10 datasets, you realize you didn't learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times...
Suggested Prerequisites:
Know how to build, train, and use a CNN using some library (preferably in Python)
Understand basic theoretical concepts behind convolution and neural networks
Decent Python coding skills, preferably in data science and the Numpy Stack
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?: