These are my personal notes from the fast.ai course. They will continue to be updated and improved as I review the course and study it in more depth. Thanks for reading and happy learning!
Topics
Move from single object to multi-object detection.
Main focus is on the single shot multibox detector (SSD).
Multi-object detection by using a loss function that can combine losses from multiple objects, across both localization and classification.
Custom architecture that takes advantage of the different receptive fields of different layers of a CNN.
YOLO CVPR 2016 talk -- the idea of using grid cells and treating detection as a regression problem is focused on in more detail.
YOLOv2 talk -- there is some good information in this talk, although some drawn explanations are omitted from the video. What I found interesting was the bit on learning anchor boxes from the dataset. There's also the crossover with NLP at the end.
A classifier is anything whose dependent variable is categorical or binomial, as opposed to regression, which is anything whose dependent variable is continuous. The naming is a little confusing but will be sorted out in the future. Here, continuous is True because our dependent variable is the coordinates of the bounding box — hence this is actually regression data.
As you can see, the image gets rotated and the lighting varies, but the bounding box is not moving and is in the wrong spot [00:06:17]. This is the problem with data augmentation when your dependent variable is pixel values or is in some way connected to the independent variable — they need to be augmented together.
The dependent variable needs to go through all the same geometric transformations as the independent variable.
To do this [00:07:10], every transformation has an optional tfm_y parameter:
TfmType.COORD indicates that the y value represents coordinates. This needs to be added to all the augmentations as well as tfms_from_model which is responsible for cropping, zooming, resizing, padding, etc.
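For reference, the transforms look roughly like this (a sketch from memory of the notebook, so treat the exact augmentation list as an assumption):

augs = [RandomFlip(tfm_y=TfmType.COORD),
        RandomRotate(30, tfm_y=TfmType.COORD),
        RandomLighting(0.1, 0.1, tfm_y=TfmType.COORD)]
tfms = tfms_from_model(f_model, sz, crop_type=CropType.NO, tfm_y=TfmType.COORD, aug_tfms=augs)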
idx = 3
fig, axes = plt.subplots(3, 3, figsize=(9, 9))
for i, ax in enumerate(axes.flat):
    x, y = next(iter(md.aug_dl))
    ima = md.val_ds.denorm(to_np(x))[idx]
    b = bb_hw(to_np(y[idx]))
    print(b)
    show_img(ima, ax=ax)
    draw_rect(ax, b)
learn.summary() runs a small batch of data through the model and prints out the size of the tensors at every layer. As you can see, right before the Flatten layer, the tensor has shape 512 by 7 by 7. So if it were a rank 1 tensor (i.e. a single vector), its length would be 25088 (512 × 7 × 7), and that is why our custom head's input size is 25088. The output size is 4 since it is the bounding box coordinates.
Single Object Detection
We combine the two to create something that can classify and localize the largest object in each image.
There are 3 things that we need to do to train a neural network:
Data
Architecture
Loss function
1. Data
We need a ModelData object whose independent variable is the images, and dependent variable is a tuple of bounding box coordinates and class label.
There are several ways to do this, but here's a particularly 'lazy' and convenient one: create two ModelData objects representing the two different dependent variables we want: 1. bounding box coordinates 2. class
BB_CSV is the CSV file for bounding boxes of the largest object. This is simply a regression with 4 outputs (predicted values). So we can use a CSV with multiple 'labels'.
CSV is the CSV file for large object classification. It contains the CSV data of image filename and class of the largest object (from annotations JSON).
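Roughly, the two ModelData objects are created like this (a sketch following the lesson notebook; md holds the bounding box regression data, md2 the classification data, and get_cv_idxs just makes a validation split):

val_idxs = get_cv_idxs(len(trn_fns))
md = ImageClassifierData.from_csv(PATH, JPEGS, BB_CSV, tfms=tfms, bs=bs, continuous=True, val_idxs=val_idxs)
md2 = ImageClassifierData.from_csv(PATH, JPEGS, CSV, tfms=tfms_from_model(f_model, sz), val_idxs=val_idxs)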
A dataset can be anything with __len__ and __getitem__. Here's a dataset that adds a second label to an existing dataset:
class ConcatLblDataset(Dataset):
    """ A dataset that adds a second label to an existing dataset. """
    def __init__(self, ds, y2):
        """
        ds: contains both independent and dependent variables
        y2: contains the additional dependent variables
        """
        self.ds, self.y2 = ds, y2

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, i):
        x, y = self.ds[i]
        # return the independent variable and the combination of two dependent variables
        return (x, (y, self.y2[i]))
We'll use it to add the classes to the bounding box labels.
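Something like this (assuming md is the bounding box ModelData and md2 the classification ModelData from above):

trn_ds2 = ConcatLblDataset(md.trn_ds, md2.trn_y)
val_ds2 = ConcatLblDataset(md.val_ds, md2.val_y)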
# Grab the two 'labels' (bounding box & class) from a record in the validation dataset.
val_ds2[0][1]  # record at index 0; labels at index 1, input image (x) at index 0 (we are not grabbing this)
(array([ 0., 1., 223., 178.], dtype=float32),14)
We can replace the dataloaders' datasets with these new ones.
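That is, roughly:

md.trn_dl.dataset = trn_ds2
md.val_dl.dataset = val_ds2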
We have to denormalize the images from the dataloader before they can be plotted.
idx = 9
x, y = next(iter(md.val_dl))  # x is the image array, y is the labels
ima = md.val_ds.ds.denorm(to_np(x))[idx]  # reverse the normalization done to a batch of images
b = bb_hw(to_np(y[0][idx]))
b
print(f'type of y: {type(y)}, y length: {len(y)}')
print(y[0].size())  # bounding box top-left coord & bottom-right coord values
print(y[1].size())  # object category (class)
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
type of y: <class 'list'>, y length: 2
torch.Size([64, 4])
torch.Size([64])
# y[0] returns 64 sets of bounding boxes (labels).
# Here we only grab the first 2 images' bounding boxes. The returned data type is a PyTorch FloatTensor on the GPU.
print(y[0][:2])
# Grab the first 2 images' object classes. The returned data type is a PyTorch LongTensor on the GPU.
print(y[1][:2])
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
   0     1   223   178
   7   123   186   194
[torch.cuda.FloatTensor of size 2x4 (GPU 0)]
 14
  3
[torch.cuda.LongTensor of size 2 (GPU 0)]
Inspect the x variable (the data is on the GPU):
x.size()  # batch of 64 images, each image with 3 channels and size of 224x224
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
torch.Size([64, 3, 224, 224])
The architecture will be the same as the one we used for the classifier and bounding box regression, but we will just combine them. In other words, if we have c classes, then the number of activations we need in the final layer is 4 plus c: 4 for the bounding box coordinates and c probabilities (one per class).
We'll use an extra linear layer this time, plus some dropout, to help us train a more flexible model. In general, we want our custom head to be capable of solving the problem on its own if the pre-trained backbone it is connected to is appropriate. So in this case, we are trying to do quite a bit — classifier and bounding box regression, so just the single linear layer does not seem enough.
If you were wondering why there is no BatchNorm1d after the first ReLU, ResNet backbone already has BatchNorm1d as its final layer.
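So the custom head looks roughly like this (a sketch from memory of the notebook; the 25088 comes from the 512 × 7 × 7 backbone output and 4 + len(cats) is the combined output):

head_reg4 = nn.Sequential(
    Flatten(),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(25088, 256),
    nn.ReLU(),
    nn.BatchNorm1d(256),
    nn.Dropout(0.5),
    nn.Linear(256, 4 + len(cats)),
)
models = ConvnetBuilder(f_model, 0, 0, 0, custom_head=head_reg4)
learn = ConvLearner(md, models)
learn.opt_fn = optim.Adam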
The loss function needs to look at these 4 + len(cats) activations and decide if they are good — whether these numbers accurately reflect the position and class of the largest object in the image. We know how to do this. For the first 4 activations, we will use L1Loss just like we did before (L1Loss is like a Mean Squared Error — instead of the sum of squared errors, it uses the sum of absolute values). For the rest of the activations, we can use cross entropy loss.
def detn_loss(input, target):
    """ Loss function for the position and class of the largest object in the image. """
    bb_t, c_t = target
    # bb_i: the 4 values for the bbox
    # c_i: the 20 classes `len(cats)`
    bb_i, c_i = input[:, :4], input[:, 4:]
    bb_i = F.sigmoid(bb_i) * 224  # scale bbox values to stay between 0 and 224 (224 is the max img width or height)
    bb_l = F.l1_loss(bb_i, bb_t)  # bbox loss
    clas_l = F.cross_entropy(c_i, c_t)  # object class loss
    # I looked at these quantities separately first then picked a multiplier
    # to make them approximately equal
    return bb_l + clas_l * 20

def detn_l1(input, target):
    """
    L1 loss for the first 4 activations.
    L1Loss is like a Mean Squared Error — instead of sum of squared errors, it uses sum of absolute values.
    """
    bb_t, _ = target
    bb_i = input[:, :4]
    bb_i = F.sigmoid(bb_i) * 224
    return F.l1_loss(V(bb_i), V(bb_t)).data

def detn_acc(input, target):
    """ Accuracy """
    _, c_t = target
    c_i = input[:, 4:]
    return accuracy(c_i, c_t)
input : activations.
target : ground truth.
bb_t, c_t = target : our custom dataset returns a tuple containing bounding box coordinates and classes. This assignment will destructure them.
bb_i, c_i = input[:, :4], input[:, 4:] : the first : is for the batch dimension. e.g.: 64 (for 64 images).
bb_i = F.sigmoid(bb_i) * 224 : we know our image is 224 by 224. Sigmoid forces the values to be between 0 and 1, and multiplying by 224 helps our neural net be in the range it has to be.
:question: Question: As a general rule, is it better to put BatchNorm before or after ReLU [00:18:02]?
Jeremy would suggest putting it after the ReLU, because BatchNorm is meant to move towards zero mean and unit standard deviation. If you put ReLU right after it, you are truncating at zero, so there is no way to create negative numbers. But if you do ReLU then BatchNorm, it does have that ability and gives slightly better results. Having said that, it is not too big of a deal either way. During this part of the course, most of the time Jeremy does ReLU then BatchNorm, but sometimes he does the opposite when he wants to be consistent with the paper.
:question: Question: What is the intuition behind using dropout after a BatchNorm? Doesn't BatchNorm already do a good job of regularizing [00:19:12]?
BatchNorm does an okay job of regularizing, but if you think back to part 1, we discussed a list of things we do to avoid overfitting, and adding BatchNorm is one of them, as is data augmentation. It's perfectly possible that you'll still be overfitting. One nice thing about dropout is that it has a parameter saying how much to drop out. Parameters are great — specifically parameters that decide how much to regularize — because they let you build a nice big over-parameterized model and then decide how much to regularize it. Jeremy tends to always put in dropout starting with p=0, and then as he adds regularization he can just change the dropout parameter, without worrying about saved models: if he saved a model and wants to load it back, but it had dropout layers in one version and not in another, it will not load anymore. So this way, it stays consistent.
Now that we have our inputs and targets, we can calculate the L1 loss and add the cross entropy [00:20:39]:
This is our loss function. Cross entropy and L1 loss may be of wildly different scales — in which case in the loss function, the larger one is going to dominate. In this case, Jeremy printed out the values and found out that if we multiply cross entropy by 20, that makes them about the same scale.
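We then attach the loss and metrics to the learner, roughly:

learn.crit = detn_loss
learn.metrics = [detn_acc, detn_l1]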
The detection accuracy is in the low 80s, which is the same as it was before. This is not surprising because ResNet was designed to do classification, so we wouldn't expect to be able to improve things in such a simple way. It certainly wasn't designed to do bounding box regression. It was actually explicitly designed in such a way as to not care about geometry — it takes the last 7 by 7 grid of activations and averages them all together, throwing away all the information about where everything came from.
Interestingly, when we do accuracy (classification) and bounding box at the same time, the L1 seems a little bit better than when we just do bounding box regression [00:22:46].
:memo: If that is counterintuitive to you, then this would be one of the main things to think about after this lesson since it is a really important idea.
The big idea is this — figuring out what the main object in an image is, is kind of the hard part. Then figuring out exactly where the bounding box is and what class it is is the easy part in a way. So when you have a single network that's both saying what is the object and where is the object, it's going to share all the computation about finding the object. And all that shared computation is very efficient. When we back propagate the errors in the class and in the place, that's all the information that is going to help the computation around finding the biggest object. So anytime you have multiple tasks which share some concept of what those tasks would need to do to complete their work, it is very likely they should share at least some layers of the network together. Later, we will look at a model where most of the layers are shared except for the last one.
Here are the results [00:24:34]. As before, it does a good job when there is a single major object in the image.
We want to keep building models that are slightly more complex than the last model so that if something stops working, we know exactly where it broke.
Setup
Global scope variables:
PATH = Path('data/pascal')
trn_j = json.load((PATH / 'pascal_train2007.json').open())
IMAGES, ANNOTATIONS, CATEGORIES = ['images', 'annotations', 'categories']
FILE_NAME, ID, IMG_ID, CAT_ID, BBOX = 'file_name', 'id', 'image_id', 'category_id', 'bbox'

cats = dict((o[ID], o['name']) for o in trn_j[CATEGORIES])
trn_fns = dict((o[ID], o[FILE_NAME]) for o in trn_j[IMAGES])
trn_ids = [o[ID] for o in trn_j[IMAGES]]

JPEGS = 'VOCdevkit/VOC2007/JPEGImages'
IMG_PATH = PATH / JPEGS
Define common functions.
They are very similar to the ones in the first Pascal notebook (single object detection).
def hw_bb(bb):
    # Example, bb = [155, 96, 196, 174]
    return np.array([bb[1], bb[0], bb[3] + bb[1] - 1, bb[2] + bb[0] - 1])

def get_trn_anno():
    trn_anno = collections.defaultdict(lambda: [])
    for o in trn_j[ANNOTATIONS]:
        if not o['ignore']:
            bb = o[BBOX]  # one bbox. looks like '[155, 96, 196, 174]'
            bb = hw_bb(bb)
            trn_anno[o[IMG_ID]].append((bb, o[CAT_ID]))
    return trn_anno

trn_anno = get_trn_anno()

def show_img(im, figsize=None, ax=None):
    if not ax:
        fig, ax = plt.subplots(figsize=figsize)
    ax.imshow(im)
    ax.set_xticks(np.linspace(0, 224, 8))
    ax.set_yticks(np.linspace(0, 224, 8))
    ax.grid()
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    return ax

def draw_outline(o, lw):
    o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'),
                        patheffects.Normal()])

def draw_rect(ax, b, color='white'):
    patch = ax.add_patch(patches.Rectangle(b[:2], *b[-2:], fill=False, edgecolor=color, lw=2))
    draw_outline(patch, 4)

def draw_text(ax, xy, txt, sz=14, color='white'):
    text = ax.text(*xy, txt, verticalalignment='top', color=color, fontsize=sz, weight='bold')
    draw_outline(text, 1)

def bb_hw(a):
    return np.array([a[1], a[0], a[3] - a[1] + 1, a[2] - a[0] + 1])

def draw_im(im, ann):
    # im is image, ann is annotations
    ax = show_img(im, figsize=(16, 8))
    for b, c in ann:
        # b is bbox, c is class id
        b = bb_hw(b)
        draw_rect(ax, b)
        draw_text(ax, b[:2], cats[c], sz=16)

def draw_idx(i):
    # i is image id
    im_a = trn_anno[i]  # training annotations
    im = open_image(IMG_PATH / trn_fns[i])  # trn_fns is training image file names
    draw_im(im, im_a)  # im_a is an element of annotation
Multi class
Setup.
MC_CSV = PATH / 'tmp/mc.csv'

trn_anno[12]
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
[(array([ 96, 155, 269, 350]), 7)]

mc = [set([cats[p[1]] for p in trn_anno[o]]) for o in trn_ids]
mcs = [' '.join(str(p) for p in o) for o in mc]  # stringify mc

print('mc:', mc[1])
print('mcs:', mcs[1])
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
mc: {'horse', 'person'}
mcs: horse person

df = pd.DataFrame({'fn': [trn_fns[o] for o in trn_ids], 'clas': mcs}, columns=['fn', 'clas'])
df.to_csv(MC_CSV, index=False)
:memo: One of the students pointed out that by using Pandas, we can do things much simpler than using collections.defaultdict and shared this gist. The more you get to know Pandas, the more often you realize it is a good way to solve lots of different problems.
Model
Setup ResNet model and train.
f_model = resnet34
sz = 224
bs = 64

tfms = tfms_from_model(f_model, sz, crop_type=CropType.NO)
md = ImageClassifierData.from_csv(PATH, JPEGS, MC_CSV, tfms=tfms, bs=bs)

learn = ConvLearner.pretrained(f_model, md)
learn.opt_fn = optim.Adam

lr = 2e-2
learn.fit(lr, 1, cycle_len=3, use_clr=(32, 5))
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
epoch      trn_loss   val_loss   <lambda>
    0      0.319539   0.139347   0.9535
    1      0.172275   0.080689   0.9724
    2      0.116136   0.075965   0.975
[array([0.07597]), 0.9750000004768371]

# Define learning rates to search
lrs = np.array([lr/100, lr/10, lr])

# Freeze the model till the last 2 layers as before
learn.freeze_to(-2)

# Refit the model
learn.fit(lrs/10, 1, cycle_len=5, use_clr=(32, 5))
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
epoch      trn_loss   val_loss   <lambda>
    0      0.071997   0.078266   0.9734
    1      0.055321   0.082668   0.9737
    2      0.040407   0.077682   0.9757
    3      0.027939   0.07651    0.9756
    4      0.019983   0.07676    0.9763
[array([0.07676]), 0.9763000016212463]

# Save the model
learn.save('mclas')
learn.load('mclas')
Evaluate the model
y = learn.predict()
x, _ = next(iter(md.val_dl))
x = to_np(x)

fig, axes = plt.subplots(3, 4, figsize=(12, 8))
for i, ax in enumerate(axes.flat):
    ima = md.val_ds.denorm(x)[i]
    ya = np.nonzero(y[i] > 0.4)[0]
    b = '\n'.join(md.classes[o] for o in ya)
    ax = show_img(ima, ax=ax)
    draw_text(ax, (0, 0), b)
plt.tight_layout()
Multi-class classification is pretty straightforward [00:28:28]. One minor tweak is the use of set in this line so that each object type appears once:
mc = [set([cats[p[1]] for p in trn_anno[o]]) for o in trn_ids]
Next up, finding multiple objects in an image.
SSD and YOLO
We have an input image that goes through a conv net which outputs a vector of size 4 + c where c = len(cats) . This gives us an object detector for a single largest object. Let's now create one that finds 16 objects. The obvious way to do this would be to take the last linear layer and rather than having 4 + c outputs, we could have 16 x (4+c) outputs. This gives us 16 sets of class probabilities and 16 sets of bounding box coordinates. Then we would just need a loss function that will check whether those 16 sets of bounding boxes correctly represented the up to 16 objects in the image (we will go into the loss function later).
The second way to do this is rather than using nn.Linear, what if instead we took the output of our ResNet convolutional backbone and added an nn.Conv2d with stride 2 [00:31:32]? This would give us a 4 x 4 x [# of filters] tensor — here let's make it 4 x 4 x (4 + c) so that we get a tensor where the number of elements is exactly equal to the number of elements we wanted. Now if we created a loss function that took a 4 x 4 x (4 + c) tensor and mapped it to the 16 objects in the image and checked whether each one was correctly represented by its 4 + c activations, this would work as well. It turns out both of these approaches are actually used [00:33:48]. The approach where the output is one big long vector from a fully connected linear layer is used by a class of models known as YOLO (You Only Look Once), whereas the approach of keeping the convolutional activations is used by models which started with something called SSD (Single Shot Detector). Since these things came out at very similar times in late 2015, things have very much moved towards SSD — to the point where, this morning, YOLO version 3 came out and it is now doing it the SSD way. So that's what we are going to do, and we will also learn about why it makes more sense.
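To make the contrast concrete, here is a hypothetical sketch of the two head styles (illustrative only, not the lesson's code; c = len(cats)):

# YOLO-style head: flatten the backbone output and use one big linear layer
yolo_head = nn.Sequential(
    Flatten(),                           # 512 x 7 x 7 backbone activations -> 25088
    nn.Linear(25088, 16 * (4 + c)),      # 16 sets of (4 bbox coords + c class scores)
)

# SSD-style head: keep the convolutional geometry and stride down to a 4x4 grid
ssd_head = nn.Sequential(
    nn.ReLU(),
    nn.Conv2d(512, 4 + c, 3, stride=2, padding=1),  # 7x7 -> 4x4, with (4 + c) activations per cell
)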
Anchor Boxes
SSD Approach
Let's imagine that we had another Conv2d(stride=2) then we would have 2 x 2 x (4 + c) tensor. Basically, it is creating a grid that looks something like this:
This is the geometry of the activations of the second extra stride 2 convolutional layer.
What might we do here [00:36:09]? We want each of these grid cells (conv quadrants) to be responsible for finding the largest object in that part of the image.
Receptive Field
Why do we want each convolutional grid cell (quadrant) to be responsible for finding things that are in the corresponding part of the image? The reason is because of something called the receptive field of that convolutional grid cell. The basic idea is that throughout your convolutional layers, every piece of those tensors has a receptive field which means which part of the input image was responsible for calculating that cell. Like all things in life, the easiest way to see this is with Excel [00:38:01].
Take a single activation (in this case in the maxpool layer) and let's see where it came from [00:38:45]. In Excel you can do Formulas :arrow_right: Trace Precedents. Tracing all the way back to the input layer, you can see that it came from this 6 x 6 portion of the image (as well as filters).
Example:
If we trace one of the maxpool activation backwards:
Tracing back even farther until we get back to the source image:
What's more, the middle portion has lots of weights (or connections) coming out, whereas cells on the outside (edges) have only one weight (not many) coming out. In other words, the center of the box has more dependencies. So we call these 6 x 6 cells the receptive field of the one activation we picked.
Note that the receptive field is not just saying it's this box but also that the center of the box has more dependencies [00:40:27]. This is a critically important concept when it comes to understanding architectures and understanding why conv nets work the way they do.
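A rough sketch of the receptive-field arithmetic (my own illustration, assuming the spreadsheet stacks two 3x3 stride-1 convolutions followed by a 2x2 stride-2 maxpool):

def grow(rf, jump, k, s):
    # rf: receptive field size so far; jump: spacing (in input pixels) between adjacent activations
    return rf + (k - 1) * jump, jump * s

rf, jump = 1, 1                       # start from a single input pixel
rf, jump = grow(rf, jump, k=3, s=1)   # 3x3 conv, stride 1 -> rf = 3
rf, jump = grow(rf, jump, k=3, s=1)   # 3x3 conv, stride 1 -> rf = 5
rf, jump = grow(rf, jump, k=2, s=2)   # 2x2 maxpool, stride 2 -> rf = 6
print(rf)  # 6 -> a 6x6 patch of the input feeds this one activation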
Make a model to predict what shows up in a 4x4 grid
We're going to make a simple first model that simply predicts what object is located in each cell of a 4x4 grid. Later on we can try to improve this.
Architecture
The architecture is, we will have a ResNet backbone followed by one or more 2D convolutions (one for now) which is going to give us a 4x4 grid.
# Build a simple convolutional model
class StdConv(nn.Module):
    """ A combination block of Conv2d, BatchNorm, Dropout """
    def __init__(self, nin, nout, stride=2, drop=0.1):
        super().__init__()
        self.conv = nn.Conv2d(nin, nout, 3, stride=stride, padding=1)
        self.bn = nn.BatchNorm2d(nout)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        return self.drop(self.bn(F.relu(self.conv(x))))

def flatten_conv(x, k):
    bs, nf, gx, gy = x.size()
    x = x.permute(0, 2, 3, 1).contiguous()
    return x.view(bs, -1, nf // k)

# This is an output convolutional model with 2 `Conv2d` layers.
class OutConv(nn.Module):
    """
    Two stride 1 `Conv2d` layers: one outputs the class activations, the other the 4 bbox
    activations, so together we output `4 + C` (plus background) per anchor box.
    """
    def __init__(self, k, nin, bias):
        super().__init__()
        self.k = k
        self.oconv1 = nn.Conv2d(nin, (len(id2cat) + 1) * k, 3, padding=1)  # +1 adds one more class for background
        self.oconv2 = nn.Conv2d(nin, 4 * k, 3, padding=1)
        self.oconv1.bias.data.zero_().add_(bias)

    def forward(self, x):
        return [flatten_conv(self.oconv1(x), self.k),
                flatten_conv(self.oconv2(x), self.k)]
The SSD Model
class SSD_Head(nn.Module):
    def __init__(self, k, bias):
        super().__init__()
        self.drop = nn.Dropout(0.25)
        # Stride 1 conv doesn't change the dimension size, but gives us a mini neural network
        self.sconv0 = StdConv(512, 256, stride=1)
        self.sconv2 = StdConv(256, 256)
        self.out = OutConv(k, 256, bias)

    def forward(self, x):
        x = self.drop(F.relu(x))
        x = self.sconv0(x)
        x = self.sconv2(x)
        return self.out(x)

head_reg4 = SSD_Head(k, -3.)
models = ConvnetBuilder(f_model, 0, 0, 0, custom_head=head_reg4)
learn = ConvLearner(md, models)
learn.opt_fn = optim.Adam
SSD_Head:
We start with ReLU and dropout.
Then stride 1 convolution.
The reason we start with a stride 1 convolution is that it does not change the geometry at all — it just lets us add an extra layer of calculation. It lets us create not just a linear layer but a little mini neural network in our custom head. StdConv is defined above — it does convolution, ReLU, BatchNorm, and dropout. Most research code you see won't define a class like this; instead they write the entire thing again and again. Don't be like that. Duplicate code leads to errors and poor understanding.
Stride 2 convolution [00:44:56].
At the end, the output of step 3 is 4x4 which gets passed to OutConv.
OutConv has two separate convolutional layers, each of which is stride 1, so it does not change the geometry of the input. One of them outputs the number of classes (ignore k for now; the +1 is for "background" — i.e. no object was detected), and the other outputs 4 (the bounding box coordinates).
Rather than having a single conv layer that outputs 4 + c, let's have two conv layers and return their outputs in a list.
This allows these layers to specialize just a little bit. We talked about this idea that when you have multiple tasks, they can share layers, but they do not have to share all the layers.
In this case, our two tasks of creating a classifier and creating bounding box regression share every single layers except the very last one.
At the end, we flatten out the convolutions, because Jeremy wrote the loss function to expect a flattened-out tensor, but we could totally rewrite it to not do that.
This is very heavily oriented towards the idea of expository programming, which is the idea that programming code should be something you can use to explain an idea, ideally as readily as mathematical notation, to somebody who understands your coding method.
How do we write a loss function for this?
The loss function needs to look at each of these 16 sets of activations, each of which has 4 bounding box coordinates and c + 1 class probabilities, and decide if those activations are close or far away from the object closest to this grid cell in the image — and, if nothing is there, whether it is predicting background correctly. That turns out to be very hard to do.
Matching Problem
The loss function actually needs to take each object in the image and match them to a convolutional grid cell.
The loss function needs to take each of the objects in the image and match them to one of these convolutional grid cells to say "this grid cell is responsible for this particular object" so then it can go ahead and say "okay, how close are the 4 coordinates and how close are the class probabilities".
Here's our goal:
Our dependent variable looks like the one on the left, and our final convolutional layer is going to be 4 x 4 x (c + 1), in this case c = 20. We then flatten that out into a vector. Our goal is to come up with a function which takes a dependent variable and some particular set of activations that came out of the model and returns a higher number if these activations are not a good reflection of the ground truth bounding boxes, or a lower number if they are a good reflection.
batch = learn.model(x)
anchors = anchors.cuda()
grid_sizes = grid_sizes.cuda()
anchor_cnr = anchor_cnr.cuda()

ssd_loss(batch, y, True)
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
 0.4062
 0.2180
 0.1307
 0.5762
 0.1524
 0.4794
[torch.cuda.FloatTensor of size 6 (GPU 0)]

 0.1128
[torch.cuda.FloatTensor of size 1 (GPU 0)]

loc: 10.360502243041992, clas: 73.66346740722656

Variable containing:
 84.0240
[torch.cuda.FloatTensor of size 1 (GPU 0)]
x, y = next(iter(md.val_dl))  # grab a single batch
x, y = V(x), V(y)  # turn into Variables
learn.model.eval()  # set model to eval mode (trained in the previous block)
batch = learn.model(x)
b_clas, b_bb = batch  # destructure the class and bounding box activations
Note that the bounding box coordinates have been scaled to between 0 and 1.
def torch_gt(ax, ima, bbox, clas, prs=None, thresh=0.4):
    """
    We already have a `show_ground_truth` function. This one simply converts tensors
    into numpy arrays. (gt stands for ground truth)
    """
    return show_ground_truth(ax, ima, to_np((bbox * 224).long()),
                             to_np(clas), to_np(prs) if prs is not None else None, thresh)
Each of these square boxes, different papers call them different things. The three terms you'll hear are: anchor boxes, prior boxes, or default boxes. We will stick with the term anchor boxes.
What we are going to do for this loss function is we are going to go through a matching problem where we are going to take every one of these 16 boxes and see which one of these three ground truth objects has the highest amount of overlap with a given square.
To do this, we have to have some way of measuring amount of overlap and a standard function for this is called Jaccard index (IoU).
IoU = area of overlap / area of union
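The Jaccard helpers in the notebook look roughly like this (boxes in top-left/bottom-right corner format; box_a are the ground truth boxes, box_b the anchor corners):

def intersect(box_a, box_b):
    # pairwise intersection areas between every box in box_a and every box in box_b
    max_xy = torch.min(box_a[:, None, 2:], box_b[None, :, 2:])
    min_xy = torch.max(box_a[:, None, :2], box_b[None, :, :2])
    inter = torch.clamp((max_xy - min_xy), min=0)
    return inter[:, :, 0] * inter[:, :, 1]

def box_sz(b):
    return (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])

def jaccard(box_a, box_b):
    inter = intersect(box_a, box_b)
    union = box_sz(box_a).unsqueeze(1) + box_sz(box_b).unsqueeze(0) - inter
    return inter / union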
We are going to go through and find the Jaccard overlap for each one of the three objects versus each of the 16 anchor boxes [00:57:11]. That is going to give us a 3x16 matrix.
Here are the coordinates of all of our anchor boxes (center x, center y, height, width):
Here are the amounts of overlap between the 3 ground truth objects and the 16 anchor boxes:
Get the activations.
# a_ic: activations image corners
a_ic = actn_to_bb(b_bboxi, anchors)
fig, ax = plt.subplots(figsize=(7, 7))
# b_clasi.max(1)[1] -> object class id
# b_clasi.max(1)[0].sigmoid() -> scale class probs using sigmoid
torch_gt(ax, ima, a_ic, b_clasi.max(1)[1], b_clasi.max(1)[0].sigmoid(), thresh=0.0)
Calculate Jaccard index (all objects x all grid cells)
We are going to go through and find the Jaccard overlap for each one of the 3 ground truth objects versus each of the 16 anchor boxes. That is going to give us a 3x16 matrix.
What we could do now is take the max over dimension (axis) 1 (row-wise), which tells us, for each ground truth object, the maximum amount it overlaps with any grid cell, as well as the index of that cell:
# For each object, we can find the highest overlap with any grid cell.
# Returns maximum amount and the corresponding cell index.
overlaps.max(1) # axis 1 -> horizontal (left-to-right)
# -----------------------------------------------------------------------------
# Output
# -----------------------------------------------------------------------------
(
0.3985
0.4538
0.1897
[torch.cuda.FloatTensor of size 3 (GPU 0)],
14
13
11
[torch.cuda.LongTensor of size 3 (GPU 0)])
We are also going to look at the max over dimension (axis) 0 (column-wise), which tells us the maximum amount of overlap for each grid cell across all of the ground truth objects:
Here, it tells us for every grid cell what is the index of the ground truth object which overlaps with it the most.
Basically what map_to_ground_truth does is it combines these two sets of overlaps in a way described in the SSD paper to assign every anchor box to a ground truth object.
The way it assigns them is: each of the three row-wise maxes gets assigned as is. The rest of the anchor boxes get assigned to anything with which they have an overlap of at least 0.5 (column-wise). If neither applies, the cell is considered to contain background.
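Roughly, map_to_ground_truth looks like this (a sketch of the notebook's version):

def map_to_ground_truth(overlaps, print_it=False):
    # best anchor for each ground truth object (row-wise max)
    prior_overlap, prior_idx = overlaps.max(1)
    if print_it: print(prior_overlap)
    # best ground truth object for each anchor (column-wise max)
    gt_overlap, gt_idx = overlaps.max(0)
    # force-assign each object's best anchor by giving it an overlap > 1
    gt_overlap[prior_idx] = 1.99
    for i, o in enumerate(prior_idx): gt_idx[o] = i
    return gt_overlap, gt_idx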
Anywhere that has gt_overlap < 0.5 gets assigned background. The three row-wise max anchor boxes are given an artificially high overlap value (1.99 in the sketch above) to force the assignment. Now we can combine these values to classes:
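Something like the following (a sketch; this follows the old PyTorch 0.3-era style used in the lesson, where pos is a ByteTensor mask):

gt_overlap, gt_idx = map_to_ground_truth(overlaps)
gt_clas = clas[gt_idx]            # a class for every anchor box
thresh = 0.5
pos = gt_overlap > thresh         # anchors matched to an object
pos_idx = torch.nonzero(pos)[:, 0]
gt_clas[1 - pos] = len(id2cat)    # everything else becomes the 'background' class
gt_bbox = bbox[gt_idx]            # the target bbox for each anchor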
We will end up with 16 predicted bounding boxes, most of them will be background. If you are wondering what it predicts in terms of bounding box of background, the answer is it totally ignores it.
# Plot a few pictures
fig, axes = plt.subplots(3, 4, figsize=(16, 12))
for idx, ax in enumerate(axes.flat):
    # loop through each of the 12 images
    ima = md.val_ds.ds.denorm(to_np(x))[idx]
    bbox, clas = get_y(y[0][idx], y[1][idx])
    a_ic = actn_to_bb(b_bb[idx], anchors)
    torch_gt(ax, ima, a_ic, b_clas[idx].max(1)[1], b_clas[idx].max(1)[0].sigmoid(), 0.01)
plt.tight_layout()
In practice, we want to remove the background and also add some threshold for probabilities, but it is on the right track. For the potted plant image, the result is not surprising, as all of our anchor boxes were small (a 4x4 grid).
How can we improve?
To go from here to something that is going to be more accurate, all we are going to do is to create way more anchor boxes.
Tweak 1. How do we interpret the activations
We have to convert the activations into a scaling. We grab the activations and stick them through tanh, which forces them to be in the range -1 to 1.
We then grab the actual position of the anchor boxes, and we will move them around according to the value of the activations divided by two. In other words, each predicted bounding box can be moved by up to 50% of a grid size from where its default position is.
def actn_to_bb(actn, anchors):
# e.g. of actn tensor of shape (16, 4): [[0.2744 0.2912 -0.3941 -0.7735], [...]]
# normalize actn values between 1 and -1 (tanh func)
actn_bbs = torch.tanh(actn)
# actn_bbs[:, :2] grab the first 2 columns (obj bbox top-left coords) from the tensor & scale back the coords to grid sizes
# anchors[:, :2] grab the first 2 columns (center point coords)
actn_centers = (actn_bbs[:, :2] / 2 * grid_sizes) + anchors[:, :2]
# same as above but this time for bbox area (height/width)
actn_hw = (actn_bbs[:, 2:] / 2 + 1) * anchors[:, 2:]
return hw2corners(actn_centers, actn_hw)
Tweak 2. We actually use binary cross entropy loss instead of cross entropy.
Binary cross entropy is what we normally use for multi-label classification.
If it has multiple things in it, you cannot use softmax because softmax really encourages just one thing to have a high number. In our case, each anchor box can only have one object associated with it, so it is not for that reason that we are avoiding softmax. It is something else — it is possible for an anchor box to have nothing associated with it. There are two ways to handle this idea of "background": one would be to say background is just a class, so let's use softmax and treat background as one of the classes that the softmax could predict. A lot of people have done it this way. But that is a really hard thing to ask a neural network to do — it is basically asking "does this grid cell not have any of the 20 objects that I am interested in with a Jaccard overlap of more than 0.5?" It is a really hard thing to put into a single computation. On the other hand, what if we just asked for each class: "is it a motorbike?", "is it a bus?", etc., and if all the answers are no, consider it background? That is the way we do it here. It is not that we can have multiple true labels, but we can have zero.
class BCE_Loss(nn.Module):
"""
Binomial Cross Entropy Loss.
Each anchor box can only have one object associated with it. It's possible for an anchor box to have NOTHING in it.
We could:
1. treat background as a class - difficult, because it's asking the NN to say 'does this square NOT have any of the 20 things'
2. BCE loss, which checks by process of elimination - if none of the 20 objects is detected, then it's background (0 positives)
"""
def __init__(self, num_classes):
super().__init__()
self.num_classes = num_classes
def forward(self, pred, targ):
# take the one hot embedding of the target (at this stage, we do have the idea of background)
t = one_hot_embedding(targ, self.num_classes + 1)
# remove the background column (the last one) which results in a vector either of all zeros or one one
t = V(t[:, :-1].contiguous())#.cpu()
x = pred[:, :-1]
w = self.get_weight(x, t)
# use binary cross-entropy predictions
return F.binary_cross_entropy_with_logits(x, t, w, size_average=False) / self.num_classes
def get_weight(self, x, t):
return None
This is a minor tweak, but it is the kind of minor tweak that Jeremy wants you to think about and understand because it makes a really big difference to your training and when there is some increment over a previous paper, it would be something like this [01:08:25]. It is important to understand what this is doing and more importantly why.
The ssd_loss function, which is what we set as the criterion, loops through each image in the mini-batch and calls the ssd_1_loss function (i.e. SSD loss for one image).
ssd_1_loss is where it is all happening. It begins by de-structuring bbox and clas.
A lot of code you find on the Internet does not work with mini-batches; it only does one thing at a time, which we don't want. In this case, all these functions (get_y, actn_to_bb, map_to_ground_truth) are working on, not exactly a mini-batch at a time, but a whole bunch of ground truth objects at a time. The data loader feeds in a mini-batch at a time to do the convolutional layers.
Because we can have different numbers of ground truth objects in each image, but a tensor has to be a strict rectangular shape, fastai automatically pads it with zeros (any target values that are shorter). This was added recently and is super handy, but it does mean you then have to make sure you get rid of those zeros. So get_y gets rid of any of the bounding boxes that are just padding.
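A sketch of how these pieces fit together, approximately following the lesson notebook (get_y, actn_to_bb, jaccard, map_to_ground_truth, and a loss_f such as BCE_Loss(len(id2cat)) are assumed from elsewhere in the notebook):

def get_y(bbox, clas):
    # drop the zero-padded (fake) bounding boxes and scale coords to 0-1
    bbox = bbox.view(-1, 4) / sz
    bb_keep = ((bbox[:, 2] - bbox[:, 0]) > 0).nonzero()[:, 0]
    return bbox[bb_keep], clas[bb_keep]

def ssd_1_loss(b_c, b_bb, bbox, clas, print_it=False):
    # loss for a single image: match anchors to ground truth, then L1 + BCE
    bbox, clas = get_y(bbox, clas)
    a_ic = actn_to_bb(b_bb, anchors)
    overlaps = jaccard(bbox.data, anchor_cnr.data)
    gt_overlap, gt_idx = map_to_ground_truth(overlaps, print_it)
    gt_clas = clas[gt_idx]
    pos = gt_overlap > 0.4
    pos_idx = torch.nonzero(pos)[:, 0]
    gt_clas[1 - pos] = len(id2cat)       # unmatched anchors -> background
    gt_bbox = bbox[gt_idx]
    loc_loss = ((a_ic[pos_idx] - gt_bbox[pos_idx]).abs()).mean()
    clas_loss = loss_f(b_c, gt_clas)
    return loc_loss, clas_loss

def ssd_loss(pred, targ, print_it=False):
    # loop through each image in the mini-batch and sum the per-image losses
    lcs, lls = 0., 0.
    for b_c, b_bb, bbox, clas in zip(*pred, *targ):
        loc_loss, clas_loss = ssd_1_loss(b_c, b_bb, bbox, clas, print_it)
        lls += loc_loss
        lcs += clas_loss
    if print_it: print(f'loc: {lls.data[0]}, clas: {lcs.data[0]}')
    return lls + lcs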
More anchors!
There are 3 ways to do this:
Create anchor boxes of different sizes (zoom).
Create anchor boxes of different aspect ratios.
Use more convolutional layers as sources of anchor boxes (in the picture, the boxes are randomly jittered so that we can see the ones that overlap).
Combining these approaches, you can create lots of anchor boxes.
Create anchors
anc_grids = [4, 2, 1]
anc_zooms = [0.7, 1., 1.3]
anc_ratios = [(1., 1.), (1., 0.5), (0.5, 1.)]
anchor_scales = [(anz * i, anz * j) for anz in anc_zooms for (i, j) in anc_ratios]
k = len(anchor_scales)
anc_offsets = [1 / (o * 2) for o in anc_grids]
Make the corners:
anc_x = np.concatenate([np.repeat(np.linspace(ao, 1 - ao, ag), ag)
for ao, ag in zip(anc_offsets, anc_grids)])
anc_y = np.concatenate([np.tile(np.linspace(ao, 1 - ao, ag), ag)
for ao, ag in zip(anc_offsets, anc_grids)])
anc_ctrs = np.repeat(np.stack([anc_x, anc_y], axis=1), k, axis=0)
Make the dimensions:
anc_sizes = np.concatenate([np.array([[o / ag, p / ag] for i in range(ag * ag) for o, p in anchor_scales])
for ag in anc_grids])
grid_sizes = V(np.concatenate([np.array([1 / ag for i in range(ag * ag) for o, p in anchor_scales])
for ag in anc_grids]), requires_grad=False).unsqueeze(1)
anchors = V(np.concatenate([anc_ctrs, anc_sizes], axis=1), requires_grad=False).float()
anchor_cnr = hw2corners(anchors[:, :2], anchors[:, 2:])
anchors : center x, center y, height, width
anchor_cnr : top-left and bottom-right corners
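hw2corners is just the conversion between those two formats, roughly:

def hw2corners(ctr, hw):
    # convert (center, height/width) boxes to (top-left, bottom-right) corners
    return torch.cat([ctr - hw / 2, ctr + hw / 2], dim=1)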
Model Architecture
We will change our architecture, so it spits out enough activations.
Try to make the activations closely represent the bounding boxes.
Now we can have multiple anchor boxes per grid cell.
For every object, we have to figure out which anchor box is closest to it.
For each anchor box, we have to find which object it is responsible for.
We don't need to necessarily change the number of conv. filters. We will get these for free.
The model is nearly identical to what we had before. But we have a number of stride 2 convolutions which is going to take us through to 4x4, 2x2, and 1x1 (each stride 2 convolution halves our grid size in both directions).
After we do our first convolution to get to 4x4, we will grab a set of outputs from that because we want to save away the 4x4 anchors.
Once we get to 2x2, we grab another set of now 2x2 anchors.
Then finally we get to 1x1.
We then concatenate them all together, which gives us the correct number of activations (one activation for every anchor box).
drop = 0.4
class SSD_MultiHead(nn.Module):
def __init__(self, k, bias):
"""
k: Number of zooms x number of aspect ratios. Grids will be for free.
"""
super().__init__()
self.drop = nn.Dropout(drop)
self.sconv0 = StdConv(512, 256, stride=1, drop=drop)
self.sconv1 = StdConv(256, 256, drop=drop)
self.sconv2 = StdConv(256, 256, drop=drop)
self.sconv3 = StdConv(256, 256, drop=drop)
# Note the number of OutConv. There's many more outputs this time around.
self.out0 = OutConv(k, 256, bias)
self.out1 = OutConv(k, 256, bias)
self.out2 = OutConv(k, 256, bias)
self.out3 = OutConv(k, 256, bias)
def forward(self, x):
x = self.drop(F.relu(x))
x = self.sconv0(x)
x = self.sconv1(x)
o1c, o1l = self.out1(x)
x = self.sconv2(x)
o2c, o2l = self.out2(x)
x = self.sconv3(x)
o3c, o3l = self.out3(x)
return [torch.cat([o1c, o2c, o3c], dim=1),
torch.cat([o1l, o2l, o3l], dim=1)]
head_reg4 = SSD_MultiHead(k, -4.)
models = ConvnetBuilder(f_model, 0, 0, 0, custom_head=head_reg4)
learn = ConvLearner(md, models)
learn.opt_fn = optim.Adam
The actual contribution of this paper is to add (1 − pt)^γ to the start of the equation [01:45:06] which sounds like nothing but actually people have been trying to figure out this problem for years. When you come across a paper like this which is game-changing, you shouldn't assume you are going to have to write thousands of lines of code. Very often it is one line of code, or the change of a single constant, or adding log to a single place.
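For reference, ordinary cross entropy and the paper's focal loss are:

CE(pt) = −log(pt)
FL(pt) = −αt (1 − pt)^γ log(pt)

The extra (1 − pt)^γ factor down-weights the easy, confident examples (pt close to 1), and αt balances the foreground and background classes.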
Implementing Focal Loss
When we defined the binomial cross entropy loss, you may have noticed that there was a weight which by default was None:
class BCE_Loss(nn.Module):
"""
Binomial Cross Entropy Loss.
Each anchor box can only have one object associated with it. It's possible for an anchor box to have NOTHING in it.
We could:
1. treat background as a class - difficult, because it's asking the NN to say 'does this square NOT have any of the 20 things'
2. BCE loss, which checks by process of elimination - if none of the 20 objects is detected, then it's background (0 positives)
"""
def __init__(self, num_classes):
super().__init__()
self.num_classes = num_classes
def forward(self, pred, targ):
# take the one hot embedding of the target (at this stage, we do have the idea of background)
t = one_hot_embedding(targ, self.num_classes + 1)
# remove the background column (the last one) which results in a vector either of all zeros or one one
t = V(t[:, :-1].contiguous())#.cpu()
x = pred[:, :-1]
w = self.get_weight(x, t)
# use binary cross-entropy predictions
return F.binary_cross_entropy_with_logits(x, t, w, size_average=False) / self.num_classes
def get_weight(self, x, t):
return None
When you call F.binary_cross_entropy_with_logits, you can pass in the weight. Since we just wanted to multiply a cross entropy by something, we can just define get_weight.
Here is the entirety of focal loss:
class FocalLoss(BCE_Loss):
def get_weight(self, x, t):
alpha, gamma = 0.25, 2. # in the original code, the gamma value is 1. In paper is 2.0. Why?
p = x.sigmoid()
pt = p * t + (1 - p) * (1 - t)
w = alpha * t + (1 - alpha) * (1 - t)
return w * (1 - pt).pow(gamma)
If you were wondering why alpha and gamma are 0.25 and 2 — here is another excellent thing about this paper: they tried lots of different values and found that these work well.
So our last step, for now, is to basically figure out how to pull out just the interesting ones.
Non Maximum Suppression (NMS)
All we are going to do is go through every pair of these bounding boxes, and if they overlap by more than some amount (say 0.5, using Jaccard) and they are both predicting the same class, we will assume they are the same thing and pick the one with the higher p value.
def nms(boxes, scores, overlap=0.5, top_k=100):
keep = scores.new(scores.size(0)).zero_().long()
if boxes.numel() == 0:
return keep
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
area = torch.mul(x2 - x1, y2 - y1)
v, idx = scores.sort(0) # sort in ascending order
idx = idx[-top_k:] # indices of the top-k largest vals
xx1 = boxes.new()
yy1 = boxes.new()
xx2 = boxes.new()
yy2 = boxes.new()
w = boxes.new()
h = boxes.new()
count = 0
while idx.numel() > 0:
i = idx[-1] # index of current largest val
keep[count] = i
count += 1
if idx.size(0) == 1:
break
idx = idx[:-1] # remove kept element from view
# load bboxes of next highest vals
torch.index_select(x1, 0, idx, out=xx1)
torch.index_select(y1, 0, idx, out=yy1)
torch.index_select(x2, 0, idx, out=xx2)
torch.index_select(y2, 0, idx, out=yy2)
# store element-wise max with next highest score
xx1 = torch.clamp(xx1, min=x1[i])
yy1 = torch.clamp(yy1, min=y1[i])
xx2 = torch.clamp(xx2, max=x2[i])
yy2 = torch.clamp(yy2, max=y2[i])
w.resize_as_(xx2)
h.resize_as_(yy2)
w = xx2 - xx1
h = yy2 - yy1
# check sizes of xx1 and xx2.. after each iteration
w = torch.clamp(w, min=0.0)
h = torch.clamp(h, min=0.0)
inter = w * h
# IoU = i / (area(a) + area(b) - i)
rem_areas = torch.index_select(area, 0, idx) # load remaining areas
union = (rem_areas - inter) + area[i]
IoU = inter / union # store result in iou
# keep only elements with an IoU <= overlap
idx = idx[IoU.le(overlap)]
return keep, count
def show_nmf(idx):
ima = md.val_ds.ds.denorm(x)[idx]
bbox, clas = get_y(y[0][idx], y[1][idx])
a_ic = actn_to_bb(b_bb[idx], anchors)
clas_pr, clas_ids = b_clas[idx].max(1)
clas_pr = clas_pr.sigmoid()
conf_scores = b_clas[idx].sigmoid().t().data
out1, out2, cc = [], [], []
for cl in range(0, len(conf_scores) - 1):
c_mask = conf_scores[cl] > 0.25
if c_mask.sum() == 0:
continue
scores = conf_scores[cl][c_mask]
l_mask = c_mask.unsqueeze(1).expand_as(a_ic)
boxes = a_ic[l_mask].view(-1, 4)
ids, count = nms(boxes.data, scores, 0.4, 50)
ids = ids[:count]
out1.append(scores[ids])
out2.append(boxes.data[ids])
cc.append([cl] * count)
cc = T(np.concatenate(cc))
out1 = torch.cat(out1)
out2 = torch.cat(out2)
fig, ax = plt.subplots(figsize=(8, 8))
torch_gt(ax, ima, out2, cc, out1, 0.1)
for i in range(12):
show_nmf(i)
There are some things still to fix here. The trick will be to use something called feature pyramid. That is what we are going to do in lesson 14.
Talking a little more about SSD paper [01:54:03]
When this paper came out, Jeremy was excited because this and YOLO were the first kind of single-pass, good quality object detection methods to come along. There has been this continuous repetition of history in the deep learning world: things that involve multiple passes over multiple different pieces, particularly where they involve some non-deep-learning pieces (like R-CNN did), over time always get turned into a single end-to-end deep learning model. So Jeremy tends to ignore them until that happens, because that's the point where people have figured out how to express it as a deep learning model, and as soon as they do that, they generally end up with something much faster and much more accurate. So SSD and YOLO were really important.
The model is 4 paragraphs. Papers are really concise which means you need to read them pretty carefully. Partly, though, you need to know which bits to read carefully. The bits where they say “here we are going to prove the error bounds on this model,” you could ignore that because you don't care about proving error bounds. But the bit which says here is what the model is, you need to read real carefully.
Jeremy reads section 2.1 Model [01:56:37]
If you jump straight in and read a paper like this, these 4 paragraphs would probably make no sense. But now that we've gone through it, you read those and hopefully think "oh, that's just what Jeremy said, only they said it better than Jeremy and in fewer words" [02:00:37]. If you start to read a paper and go "what the heck?", the trick is to then start reading back over the citations.
Jeremy reads Matching strategy and Training objective (a.k.a. Loss function) [02:01:44]
Closing
This week, go through the code and go through the paper and see what is going on. Remember what Jeremy did to make it easier for you: he took that loss function, copied it into a cell, and split it up so that each bit was in a separate cell. Then after every cell, he printed or plotted that value. Hopefully this is a good starting point.