library(keras)
set_random_seed(321L, disable_gpu = FALSE) # Already sets R's random seed.

rotate = function(x){ t(apply(x, 2, rev)) }

imgPlot = function(img, title = ""){
  col = grey.colors(255)
  image(rotate(img), col = col, xlab = "", ylab = "", axes = FALSE,
        main = paste0("Label: ", as.character(title)))
}
10 Convolutional Neural Networks (CNN)
The main purpose of convolutional neural networks is image recognition. (Sound can be understood as an image as well!) In a convolutional neural network, we have at least one convolutional layer in addition to the normal, fully connected layers of a deep neural network.
Neurons in a convolutional layer are connected only to a small, spatially contiguous area of the input layer (the receptive field). We use this structure to scan the entire input (e.g. a picture). Think of the kernel (or filter) as a sliding window of weighted pixels that is moved over the image. As the name already indicates, this operation is a convolution in the mathematical sense. The kernel weights are optimized during training, but the same weights are used across the entire input (shared weights).
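The size of the scanned output follows directly from the input size, the kernel size, and the stride. A small sketch in base R (the helper name conv_out_size is hypothetical, introduced here only for illustration):

```r
# Output size of a convolution along one dimension (no padding):
# out = floor((in - kernel) / stride) + 1
conv_out_size = function(input, kernel, stride = 1) {
  floor((input - kernel) / stride) + 1
}

conv_out_size(28, 2)             # 28 pixels, 2x2 kernel, stride 1 -> 27
conv_out_size(27, 2, stride = 2) # 2x2 window, stride 2 (pooling) -> 13
```

The same formula explains the layer shapes we will see in the model summaries below.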
The resulting (hidden) convolutional layer after training is called a feature map. You can think of the feature map as a map that shows you where the “shapes” expressed by the kernel appear in the input. One kernel / feature map will not be enough, we typically have many shapes that we want to recognize. Thus, the input layer is typically connected to several feature maps, which can be aggregated and followed by a second layer of feature maps, and so on.
You get one convolution map/layer for each kernel of one convolutional layer.
10.1 Example MNIST
We will show the use of convolutional neural networks with the MNIST data set. This data set is maybe one of the most famous image data sets. It consists of 60,000 training and 10,000 test images of handwritten digits from 0-9.
For plotting the digits, we use the helper functions defined at the beginning of this chapter.
The MNIST data set is so famous that there is an automatic download function in Keras:
data = dataset_mnist()
train = data$train
test = data$test
Let’s visualize a few digits:
oldpar = par(mfrow = c(1, 3))
.n = sapply(1:3, function(x) imgPlot(train$x[x,,], train$y[x]))
par(oldpar)
Similar to the normal machine learning workflow, we have to scale the pixels (from 0-255) to the range of \([0, 1]\) and one hot encode the response. For scaling the pixels, we will use arrays instead of matrices. Arrays are called tensors in mathematics and a 2D array/tensor is typically called a matrix.
train_x = array(train$x/255, c(dim(train$x), 1))
test_x = array(test$x/255, c(dim(test$x), 1))
train_y = to_categorical(train$y, 10)
test_y = to_categorical(test$y, 10)
The last dimension denotes the number of channels in the image. In our case we have only one channel because the images are black and white.
Most times, we would have at least 3 color channels, for example RGB (red, green, blue) or HSV (hue, saturation, value), sometimes with several additional dimensions like transparency.
To build our convolutional model, we have to specify a kernel. In our case, we will use 16 convolutional kernels (filters) of size \(2\times2\). These are 2D kernels because our images are 2D. For movies for example, one would use 3D kernels (the third dimension would correspond to time and not to the color channels).
model = keras_model_sequential()
model %>%
  layer_conv_2d(input_shape = c(28L, 28L, 1L), filters = 16L,
                kernel_size = c(2L, 2L), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_conv_2d(filters = 16L, kernel_size = c(3L, 3L), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_flatten() %>%
  layer_dense(100L, activation = "relu") %>%
  layer_dense(10L, activation = "softmax")
summary(model)
Model: "sequential"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
conv2d_1 (Conv2D) (None, 27, 27, 16) 80
max_pooling2d_1 (MaxPooling2D) (None, 13, 13, 16) 0
conv2d (Conv2D) (None, 11, 11, 16) 2320
max_pooling2d (MaxPooling2D) (None, 5, 5, 16) 0
flatten (Flatten) (None, 400) 0
dense_1 (Dense) (None, 100) 40100
dense (Dense) (None, 10) 1010
================================================================================
Total params: 43,510
Trainable params: 43,510
Non-trainable params: 0
________________________________________________________________________________
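The parameter counts printed in the summary can be verified by hand (a sketch, based on the layer shapes shown above):

```r
# Conv2D: filters * (kernel_height * kernel_width * input_channels) + biases
16 * (2 * 2 * 1) + 16    # first conv layer: 80
16 * (3 * 3 * 16) + 16   # second conv layer: 2320

# Dense: inputs * outputs + biases
400 * 100 + 100          # first dense layer: 40100
100 * 10 + 10            # output layer: 1010
```

Note how cheap the convolutional layers are compared to the dense layers, thanks to the shared weights.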
Prepare/download data:
library(torch)
library(torchvision)
torch_manual_seed(321L)
set.seed(123)
train_ds = mnist_dataset(
  ".",
  download = TRUE,
  train = TRUE,
  transform = transform_to_tensor
)

test_ds = mnist_dataset(
  ".",
  download = TRUE,
  train = FALSE,
  transform = transform_to_tensor
)
Build dataloader:
train_dl = dataloader(train_ds, batch_size = 32, shuffle = TRUE)
test_dl = dataloader(test_ds, batch_size = 32)
first_batch = train_dl$.iter()
df = first_batch$.next()
df$x$size()
[1] 32 1 28 28
Build the convolutional neural network. In Torch, we have to calculate the shapes of our layers ourselves.
We start with our input of shape (batch_size, 1, 28, 28):
sample = df$x
sample$size()
[1] 32 1 28 28
The first convolutional layer has shape (input channels = 1, number of feature maps = 16, kernel size = 2):
conv1 = nn_conv2d(1, 16L, 2L, stride = 1L)
(sample %>% conv1)$size()
[1] 32 16 27 27
Output: batch_size = 32, number of feature maps = 16, dimensions of each feature map = \((27, 27)\). With a kernel size of 2 and stride = 1 we lose one pixel in each dimension. Questions:
- What happens if we increase the stride?
- What happens if we increase the kernel size?
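A quick way to explore both questions is to apply differently configured layers to the same input (a sketch, assuming the tensor sample from above is still available; the layer names conv_s2 and conv_k5 are introduced here only for illustration):

```r
# Larger stride: the kernel moves in bigger steps, so the feature map
# shrinks faster: floor((28 - 2) / 2) + 1 = 14.
conv_s2 = nn_conv2d(1, 16L, 2L, stride = 2L)
(sample %>% conv_s2)$size()  # 32 16 14 14

# Larger kernel: more pixels are lost at the borders:
# (28 - 5) / 1 + 1 = 24.
conv_k5 = nn_conv2d(1, 16L, 5L, stride = 1L)
(sample %>% conv_k5)$size()  # 32 16 24 24
```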
The pooling layer summarizes each feature map:
(sample %>% conv1 %>% nnf_max_pool2d(kernel_size = 2L, stride = 2L))$size()
[1] 32 16 13 13
kernel_size = 2L and stride = 2L halves the pixel dimensions of our image.
Fully connected layer
Now we have to flatten the final output of the convolutional part of the model to use a normal fully connected layer. To do so, we first have to calculate the number of inputs for the fully connected layer:
dims = (sample %>% conv1 %>%
          nnf_max_pool2d(kernel_size = 2L, stride = 2L))$size()
# Without the batch size of course.
final = prod(dims[-1])
print(final)
[1] 2704
fc = nn_linear(final, 10L)
(sample %>% conv1 %>% nnf_max_pool2d(kernel_size = 2L, stride = 2L) %>%
   torch_flatten(start_dim = 2L) %>% fc)$size()
[1] 32 10
Build the network:
net = nn_module(
  "mnist",
  initialize = function(){
    self$conv1 = nn_conv2d(1, 16L, 2L)
    self$conv2 = nn_conv2d(16L, 16L, 3L)
    self$fc1 = nn_linear(400L, 100L)
    self$fc2 = nn_linear(100L, 10L)
  },
  forward = function(x){
    x %>%
      self$conv1() %>%
      nnf_relu() %>%
      nnf_max_pool2d(2) %>%
      self$conv2() %>%
      nnf_relu() %>%
      nnf_max_pool2d(2) %>%
      torch_flatten(start_dim = 2) %>%
      self$fc1() %>%
      nnf_relu() %>%
      self$fc2()
  }
)
We additionally used a pooling layer for downsizing the resulting feature maps. Without further specification, a \(2\times2\) pooling window is used by default. Pooling layers divide the input feature map into (in our case) non-overlapping parts of size \(2\times2\) and then execute the respective pooling operation. For every input map, you get one (downsized) output map.
As we are using max pooling (there are several other methods, such as mean pooling), only the maximum value of each of these 4 values is taken and forwarded. Example input:
1 2 | 5 8 | 3 6
6 5 | 2 4 | 8 1
------------------------------
9 4 | 3 7 | 2 5
0 3 | 2 7 | 4 9
We use max pooling for every field:
max(1, 2, 6, 5) | max(5, 8, 2, 4) | max(3, 6, 8, 1)
-----------------------------------------------------------
max(9, 4, 0, 3) | max(3, 7, 2, 7) | max(2, 5, 4, 9)
So the resulting pooled information is:
6 | 8 | 8
------------------
9 | 7 | 9
In this example, a \(4\times6\) layer was transformed into a \(2\times3\) layer and thus downsized. This is similar to the biological process called lateral inhibition, where active neurons inhibit the activity of neighboring neurons. Pooling loses information, but it is often very useful for aggregating information and preventing overfitting.
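The worked example above can be reproduced in a few lines of base R (a sketch; max_pool2x2 is a hypothetical helper written for this illustration, not part of Keras or Torch):

```r
# 2x2 max pooling with stride 2 on a plain matrix.
max_pool2x2 = function(m) {
  rows = seq(1, nrow(m) - 1, by = 2)
  cols = seq(1, ncol(m) - 1, by = 2)
  # For every 2x2 block, keep only the maximum value.
  sapply(cols, function(j)
    sapply(rows, function(i) max(m[i:(i + 1), j:(j + 1)])))
}

# The 4x6 example input from above (matrix() fills column by column).
m = matrix(c(1, 6, 9, 0,
             2, 5, 4, 3,
             5, 2, 3, 2,
             8, 4, 7, 7,
             3, 8, 2, 4,
             6, 1, 5, 9), nrow = 4)
max_pool2x2(m)  # 2x3 result: rows 6 8 8 and 9 7 9
```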
After another convolution and pooling layer, we flatten the output. This means that the final feature maps (2D matrices) are transformed into a single 1D vector, so that the following dense layer is connected to all the weights of the last feature maps. After flattening, we can simply use our typical output layer.
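Flattening itself has no trainable parameters; it only reshapes. A base R sketch:

```r
# A stack of 16 feature maps of size 5x5 (as in the summary above) ...
feature_maps = array(rnorm(5 * 5 * 16), dim = c(5, 5, 16))

# ... becomes a plain vector of length 400, the input size of the
# following dense layer.
flat = as.vector(feature_maps)
length(flat)  # 400
```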
The rest is as usual:
First we compile the model:
model %>%
  keras::compile(
    optimizer = keras::optimizer_adamax(0.01),
    loss = loss_categorical_crossentropy
  )
summary(model)
Model: "sequential"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
conv2d_1 (Conv2D) (None, 27, 27, 16) 80
max_pooling2d_1 (MaxPooling2D) (None, 13, 13, 16) 0
conv2d (Conv2D) (None, 11, 11, 16) 2320
max_pooling2d (MaxPooling2D) (None, 5, 5, 16) 0
flatten (Flatten) (None, 400) 0
dense_1 (Dense) (None, 100) 40100
dense (Dense) (None, 10) 1010
================================================================================
Total params: 43,510
Trainable params: 43,510
Non-trainable params: 0
________________________________________________________________________________
Then, we train the model:
library(tensorflow)
library(keras)
set_random_seed(321L, disable_gpu = FALSE) # Already sets R's random seed.
epochs = 5L
batch_size = 32L
model %>%
  fit(
    x = train_x,
    y = train_y,
    epochs = epochs,
    batch_size = batch_size,
    shuffle = TRUE,
    validation_split = 0.2
  )
Train model:
library(torch)
torch_manual_seed(321L)
set.seed(123)
model_torch = net()
opt = optim_adam(params = model_torch$parameters, lr = 0.01)

for(e in 1:3){
  losses = c()
  coro::loop(
    for(batch in train_dl){
      opt$zero_grad()
      pred = model_torch(batch[[1]])
      loss = nnf_cross_entropy(pred, batch[[2]], reduction = "mean")
      loss$backward()
      opt$step()
      losses = c(losses, loss$item())
    }
  )
  cat(sprintf("Loss at epoch %d: %3f\n", e, mean(losses)))
}
Evaluation:
model_torch$eval()

test_losses = c()
total = 0
correct = 0

coro::loop(
  for(batch in test_dl){
    output = model_torch(batch[[1]])
    labels = batch[[2]]
    loss = nnf_cross_entropy(output, labels)
    test_losses = c(test_losses, loss$item())
    predicted = torch_max(output$data(), dim = 2)[[2]]
    total = total + labels$size(1)
    correct = correct + (predicted == labels)$sum()$item()
  }
)

mean(test_losses)
test_accuracy = correct/total
test_accuracy
10.2 Example CIFAR
CIFAR10 is another famous image classification data set. It consists of colored images from ten classes (see https://www.cs.toronto.edu/~kriz/cifar.html).
library(keras)
data = keras::dataset_cifar10()
train = data$train
test = data$test

image = train$x[1,,,]
image %>%
  image_to_array() %>%
  `/`(., 255) %>%
  as.raster() %>%
  plot()

## Normalize pixels to 0-1.
train_x = array(train$x/255, c(dim(train$x)))
test_x = array(test$x/255, c(dim(test$x)))
train_y = to_categorical(train$y, 10)
test_y = to_categorical(test$y, 10)

model = keras_model_sequential()
model %>%
  layer_conv_2d(input_shape = c(32L, 32L, 3L), filters = 16L,
                kernel_size = c(2L, 2L), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_dropout(0.3) %>%
  layer_conv_2d(filters = 16L, kernel_size = c(3L, 3L), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_flatten() %>%
  layer_dense(10L, activation = "softmax")
summary(model)
model %>%
  compile(
    optimizer = optimizer_adamax(),
    loss = loss_categorical_crossentropy
  )
early = callback_early_stopping(patience = 5L)
epochs = 1L
batch_size = 20L
model %>% fit(
  x = train_x,
  y = train_y,
  epochs = epochs,
  batch_size = batch_size,
  shuffle = TRUE,
  validation_split = 0.2,
  callbacks = c(early)
)
10.3 Exercise
10.4 Advanced Training Techniques
10.4.1 Data Augmentation
Having to train a convolutional neural network using very little data is a common problem. Data augmentation helps to artificially increase the number of images.
The idea is that a convolutional neural network learns specific structures, such as edges, from images. Rotating, adding noise, and zooming in and out preserve the key structures we are interested in, but the model sees new images and has to search for those key structures once again.
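A horizontal flip, for example, is just a reversal of the pixel columns. A base R sketch on a toy "image":

```r
# Toy 3x3 "image" with pixel values 1..9.
img = matrix(1:9, nrow = 3)

# Horizontal flip: reverse the column order.
flipped = img[, ncol(img):1]

# The overall structure (e.g. an edge) is preserved, but the
# model is confronted with a new image.
flipped
```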
Luckily, it is very easy to use data augmentation in Keras.
To show this, we will use our flower data set. We have to define a generator object (a specific object which infinitely draws samples from our data set). In the generator we can turn on the data augmentation.
library(tensorflow)
library(keras)
set_random_seed(321L, disable_gpu = FALSE) # Already sets R's random seed.
data = EcoData::dataset_flower()
train = data$train/255
labels = data$labels
model = keras_model_sequential()
model %>%
  layer_conv_2d(filter = 16L, kernel_size = c(5L, 5L),
                input_shape = c(80L, 80L, 3L), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_conv_2d(filter = 32L, kernel_size = c(3L, 3L),
                activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_conv_2d(filter = 64L, kernel_size = c(3L, 3L),
                strides = c(2L, 2L), activation = "relu") %>%
  layer_max_pooling_2d() %>%
  layer_flatten() %>%
  layer_dropout(0.5) %>%
  layer_dense(units = 5L, activation = "softmax")
# Data augmentation.
aug = image_data_generator(rotation_range = 90,
                           zoom_range = c(0.3),
                           horizontal_flip = TRUE,
                           vertical_flip = TRUE)

# Data preparation / splitting.
indices = sample.int(nrow(train), 0.1 * nrow(train))
generator = flow_images_from_data(train[-indices,,,],
                                  k_one_hot(labels[-indices], num_classes = 5L),
                                  generator = aug,
                                  batch_size = 25L,
                                  shuffle = TRUE)
test = train[indices,,,]
test
## Training loop with early stopping:
# As we use an iterator (the generator), validation loss is not applicable.
# An available metric is the normal loss.
early = keras::callback_early_stopping(patience = 2L, monitor = "loss")

model %>%
  keras::compile(loss = loss_categorical_crossentropy,
                 optimizer = keras::optimizer_adamax(learning_rate = 0.01))
model %>%
  fit(generator, epochs = 20L, batch_size = 25L,
      shuffle = TRUE, callbacks = c(early))

# Predictions on the training set:
pred = predict(model, data$train[-indices,,,]) %>% apply(1, which.max) - 1
Metrics::accuracy(pred, labels[-indices])
table(pred)

# Predictions on the holdout / test set:
pred = predict(model, test) %>% apply(1, which.max) - 1
Metrics::accuracy(pred, labels[indices])
table(pred)

# If you want to predict on the holdout for submission, use:
pred = predict(model, EcoData::dataset_flower()$test/255) %>%
  apply(1, which.max) - 1
table(pred)
In Torch, we have to change the transform function to use data augmentation (but only for the train dataloader):
library(torch)
torch_manual_seed(321L)
set.seed(123)
train_transforms = function(img){
  img %>%
    transform_to_tensor() %>%
    transform_random_horizontal_flip(p = 0.3) %>%
    transform_random_resized_crop(size = c(28L, 28L)) %>%
    transform_random_vertical_flip(0.3)
}

train_ds = mnist_dataset(".", download = TRUE, train = TRUE,
                         transform = train_transforms)
test_ds = mnist_dataset(".", download = TRUE, train = FALSE,
                        transform = transform_to_tensor)

train_dl = dataloader(train_ds, batch_size = 100L, shuffle = TRUE)
test_dl = dataloader(test_ds, batch_size = 100L)
model_torch = net()
opt = optim_adam(params = model_torch$parameters, lr = 0.01)

for(e in 1:1){
  losses = c()
  coro::loop(
    for(batch in train_dl){
      opt$zero_grad()
      pred = model_torch(batch[[1]])
      loss = nnf_cross_entropy(pred, batch[[2]], reduction = "mean")
      loss$backward()
      opt$step()
      losses = c(losses, loss$item())
    }
  )
  cat(sprintf("Loss at epoch %d: %3f\n", e, mean(losses)))
}
model_torch$eval()

test_losses = c()
total = 0
correct = 0

coro::loop(
  for(batch in test_dl){
    output = model_torch(batch[[1]])
    labels = batch[[2]]
    loss = nnf_cross_entropy(output, labels)
    test_losses = c(test_losses, loss$item())
    predicted = torch_max(output$data(), dim = 2)[[2]]
    total = total + labels$size(1)
    correct = correct + (predicted == labels)$sum()$item()
  }
)

test_accuracy = correct/total
print(test_accuracy)
10.4.2 Transfer Learning
Another approach to reduce the necessary number of images or to speed up convergence of the models is the use of transfer learning.
The main idea of transfer learning is that the convolutional layers mainly have one task: learning to identify highly correlated neighboring features, i.e. structures such as edges. This knowledge can be reused for new tasks. Only the top (dense) layer is the actual classifier of the convolutional neural network for a specific task. Thus, we can keep the pre-trained convolutional layers and train only the top layer as classifier: it is confronted with sets of different edges/structures and has to decide the label based on them.
Again, this sounds very complicated but it is again quite easy with Keras and Torch.
We will do this now with the CIFAR10 data set, so we have to prepare the data:
library(tensorflow)
library(keras)
set_random_seed(321L, disable_gpu = FALSE) # Already sets R's random seed.
data = keras::dataset_cifar10()
train = data$train
test = data$test
rm(data)

image = train$x[5,,,]
image %>%
  image_to_array() %>%
  `/`(., 255) %>%
  as.raster() %>%
  plot()

train_x = array(train$x/255, c(dim(train$x)))
test_x = array(test$x/255, c(dim(test$x)))
train_y = to_categorical(train$y, 10)
test_y = to_categorical(test$y, 10)

rm(train, test)
Keras provides download functions for all famous architectures/convolutional neural network models that are already trained on the ImageNet data set (another famous data set). These trained networks come without their top layer, so we have to set include_top = FALSE and change the input shape.
densenet = application_densenet201(include_top = FALSE,
                                   input_shape = c(32L, 32L, 3L))
Now, we will not use a sequential model but a general "keras_model", where we can specify inputs and outputs explicitly. The output is our own top layer, and the inputs are the densenet's inputs, as these are already pre-trained.
model = keras::keras_model(
  inputs = densenet$input,
  outputs = layer_flatten(
    layer_dense(densenet$output, units = 10L, activation = "softmax")
  )
)
# Notice that this snippet just creates one (!) new layer.
# The densenet's inputs are connected with the model's inputs.
# The densenet's outputs are connected with our own layer (with 10 nodes).
# This layer is also the output layer of the model.
In the next step we want to freeze all layers except for our own last layer. Freezing means that these layers are not trained: we do not want to train the complete model, only the last layer. You can check the number of trainable weights via summary(model).
model %>% freeze_weights(to = length(model$layers) - 1)
summary(model)
Model: "model"
________________________________________________________________________________
Layer (type) Output Shape Param # Connected to Trainable
================================================================================
input_1 (InputLayer) [(None, 32, 3 0 [] N
2, 3)]
zero_padding2d (Zero (None, 38, 38 0 ['input_1[0][0]'] N
Padding2D) , 3)
conv1/conv (Conv2D) (None, 16, 16 9408 ['zero_padding2d[0][0 N
, 64) ]']
conv1/bn (BatchNorma (None, 16, 16 256 ['conv1/conv[0][0]'] N
lization) , 64)
conv1/relu (Activati (None, 16, 16 0 ['conv1/bn[0][0]'] N
on) , 64)
zero_padding2d_1 (Ze (None, 18, 18 0 ['conv1/relu[0][0]'] N
roPadding2D) , 64)
pool1 (MaxPooling2D) (None, 8, 8, 0 ['zero_padding2d_1[0] N
64) [0]']
conv2_block1_0_bn (B (None, 8, 8, 256 ['pool1[0][0]'] N
atchNormalization) 64)
conv2_block1_0_relu (None, 8, 8, 0 ['conv2_block1_0_bn[0 N
(Activation) 64) ][0]']
conv2_block1_1_conv (None, 8, 8, 8192 ['conv2_block1_0_relu N
(Conv2D) 128) [0][0]']
conv2_block1_1_bn (B (None, 8, 8, 512 ['conv2_block1_1_conv N
atchNormalization) 128) [0][0]']
conv2_block1_1_relu (None, 8, 8, 0 ['conv2_block1_1_bn[0 N
(Activation) 128) ][0]']
conv2_block1_2_conv (None, 8, 8, 36864 ['conv2_block1_1_relu N
(Conv2D) 32) [0][0]']
conv2_block1_concat (None, 8, 8, 0 ['pool1[0][0]', N
(Concatenate) 96) 'conv2_block1_2_conv
[0][0]']
conv2_block2_0_bn (B (None, 8, 8, 384 ['conv2_block1_concat N
atchNormalization) 96) [0][0]']
conv2_block2_0_relu (None, 8, 8, 0 ['conv2_block2_0_bn[0 N
(Activation) 96) ][0]']
conv2_block2_1_conv (None, 8, 8, 12288 ['conv2_block2_0_relu N
(Conv2D) 128) [0][0]']
conv2_block2_1_bn (B (None, 8, 8, 512 ['conv2_block2_1_conv N
atchNormalization) 128) [0][0]']
conv2_block2_1_relu (None, 8, 8, 0 ['conv2_block2_1_bn[0 N
(Activation) 128) ][0]']
conv2_block2_2_conv (None, 8, 8, 36864 ['conv2_block2_1_relu N
(Conv2D) 32) [0][0]']
conv2_block2_concat (None, 8, 8, 0 ['conv2_block1_concat N
(Concatenate) 128) [0][0]',
'conv2_block2_2_conv
[0][0]']
conv2_block3_0_bn (B (None, 8, 8, 512 ['conv2_block2_concat N
atchNormalization) 128) [0][0]']
conv2_block3_0_relu (None, 8, 8, 0 ['conv2_block3_0_bn[0 N
(Activation) 128) ][0]']
conv2_block3_1_conv (None, 8, 8, 16384 ['conv2_block3_0_relu N
(Conv2D) 128) [0][0]']
conv2_block3_1_bn (B (None, 8, 8, 512 ['conv2_block3_1_conv N
atchNormalization) 128) [0][0]']
conv2_block3_1_relu (None, 8, 8, 0 ['conv2_block3_1_bn[0 N
(Activation) 128) ][0]']
conv2_block3_2_conv (None, 8, 8, 36864 ['conv2_block3_1_relu N
(Conv2D) 32) [0][0]']
conv2_block3_concat (None, 8, 8, 0 ['conv2_block2_concat N
(Concatenate) 160) [0][0]',
'conv2_block3_2_conv
[0][0]']
conv2_block4_0_bn (B (None, 8, 8, 640 ['conv2_block3_concat N
atchNormalization) 160) [0][0]']
conv2_block4_0_relu (None, 8, 8, 0 ['conv2_block4_0_bn[0 N
(Activation) 160) ][0]']
conv2_block4_1_conv (None, 8, 8, 20480 ['conv2_block4_0_relu N
(Conv2D) 128) [0][0]']
conv2_block4_1_bn (B (None, 8, 8, 512 ['conv2_block4_1_conv N
atchNormalization) 128) [0][0]']
conv2_block4_1_relu (None, 8, 8, 0 ['conv2_block4_1_bn[0 N
(Activation) 128) ][0]']
conv2_block4_2_conv (None, 8, 8, 36864 ['conv2_block4_1_relu N
(Conv2D) 32) [0][0]']
conv2_block4_concat (None, 8, 8, 0 ['conv2_block3_concat N
(Concatenate) 192) [0][0]',
'conv2_block4_2_conv
[0][0]']
conv2_block5_0_bn (B (None, 8, 8, 768 ['conv2_block4_concat N
atchNormalization) 192) [0][0]']
conv2_block5_0_relu (None, 8, 8, 0 ['conv2_block5_0_bn[0 N
(Activation) 192) ][0]']
conv2_block5_1_conv (None, 8, 8, 24576 ['conv2_block5_0_relu N
(Conv2D) 128) [0][0]']
conv2_block5_1_bn (B (None, 8, 8, 512 ['conv2_block5_1_conv N
atchNormalization) 128) [0][0]']
conv2_block5_1_relu (None, 8, 8, 0 ['conv2_block5_1_bn[0 N
(Activation) 128) ][0]']
conv2_block5_2_conv (None, 8, 8, 36864 ['conv2_block5_1_relu N
(Conv2D) 32) [0][0]']
conv2_block5_concat (None, 8, 8, 0 ['conv2_block4_concat N
(Concatenate) 224) [0][0]',
'conv2_block5_2_conv
[0][0]']
conv2_block6_0_bn (B (None, 8, 8, 896 ['conv2_block5_concat N
atchNormalization) 224) [0][0]']
conv2_block6_0_relu (None, 8, 8, 0 ['conv2_block6_0_bn[0 N
(Activation) 224) ][0]']
conv2_block6_1_conv (None, 8, 8, 28672 ['conv2_block6_0_relu N
(Conv2D) 128) [0][0]']
conv2_block6_1_bn (B (None, 8, 8, 512 ['conv2_block6_1_conv N
atchNormalization) 128) [0][0]']
conv2_block6_1_relu (None, 8, 8, 0 ['conv2_block6_1_bn[0 N
(Activation) 128) ][0]']
conv2_block6_2_conv (None, 8, 8, 36864 ['conv2_block6_1_relu N
(Conv2D) 32) [0][0]']
conv2_block6_concat (None, 8, 8, 0 ['conv2_block5_concat N
(Concatenate) 256) [0][0]',
'conv2_block6_2_conv
[0][0]']
pool2_bn (BatchNorma (None, 8, 8, 1024 ['conv2_block6_concat N
lization) 256) [0][0]']
pool2_relu (Activati (None, 8, 8, 0 ['pool2_bn[0][0]'] N
on) 256)
pool2_conv (Conv2D) (None, 8, 8, 32768 ['pool2_relu[0][0]'] N
128)
pool2_pool (AverageP (None, 4, 4, 0 ['pool2_conv[0][0]'] N
ooling2D) 128)
conv3_block1_0_bn (B (None, 4, 4, 512 ['pool2_pool[0][0]'] N
atchNormalization) 128)
conv3_block1_0_relu (None, 4, 4, 0 ['conv3_block1_0_bn[0 N
(Activation) 128) ][0]']
conv3_block1_1_conv (None, 4, 4, 16384 ['conv3_block1_0_relu N
(Conv2D) 128) [0][0]']
conv3_block1_1_bn (B (None, 4, 4, 512 ['conv3_block1_1_conv N
atchNormalization) 128) [0][0]']
conv3_block1_1_relu (None, 4, 4, 0 ['conv3_block1_1_bn[0 N
(Activation) 128) ][0]']
conv3_block1_2_conv (None, 4, 4, 36864 ['conv3_block1_1_relu N
(Conv2D) 32) [0][0]']
conv3_block1_concat (None, 4, 4, 0 ['pool2_pool[0][0]', N
(Concatenate) 160) 'conv3_block1_2_conv
[0][0]']
conv3_block2_0_bn (B (None, 4, 4, 640 ['conv3_block1_concat N
atchNormalization) 160) [0][0]']
conv3_block2_0_relu (None, 4, 4, 0 ['conv3_block2_0_bn[0 N
(Activation) 160) ][0]']
conv3_block2_1_conv (None, 4, 4, 20480 ['conv3_block2_0_relu N
(Conv2D) 128) [0][0]']
conv3_block2_1_bn (B (None, 4, 4, 512 ['conv3_block2_1_conv N
atchNormalization) 128) [0][0]']
conv3_block2_1_relu (None, 4, 4, 0 ['conv3_block2_1_bn[0 N
(Activation) 128) ][0]']
conv3_block2_2_conv (None, 4, 4, 36864 ['conv3_block2_1_relu N
(Conv2D) 32) [0][0]']
conv3_block2_concat (None, 4, 4, 0 ['conv3_block1_concat N
(Concatenate) 192) [0][0]',
'conv3_block2_2_conv
[0][0]']
conv3_block3_0_bn (B (None, 4, 4, 768 ['conv3_block2_concat N
atchNormalization) 192) [0][0]']
conv3_block3_0_relu (None, 4, 4, 0 ['conv3_block3_0_bn[0 N
(Activation) 192) ][0]']
conv3_block3_1_conv (None, 4, 4, 24576 ['conv3_block3_0_relu N
(Conv2D) 128) [0][0]']
conv3_block3_1_bn (B (None, 4, 4, 512 ['conv3_block3_1_conv N
atchNormalization) 128) [0][0]']
conv3_block3_1_relu (None, 4, 4, 0 ['conv3_block3_1_bn[0 N
(Activation) 128) ][0]']
conv3_block3_2_conv (None, 4, 4, 36864 ['conv3_block3_1_relu N
(Conv2D) 32) [0][0]']
conv3_block3_concat (None, 4, 4, 0 ['conv3_block2_concat N
(Concatenate) 224) [0][0]',
'conv3_block3_2_conv
[0][0]']
conv3_block4_0_bn (B (None, 4, 4, 896 ['conv3_block3_concat N
atchNormalization) 224) [0][0]']
conv3_block4_0_relu (None, 4, 4, 0 ['conv3_block4_0_bn[0 N
(Activation) 224) ][0]']
conv3_block4_1_conv (None, 4, 4, 28672 ['conv3_block4_0_relu N
(Conv2D) 128) [0][0]']
conv3_block4_1_bn (B (None, 4, 4, 512 ['conv3_block4_1_conv N
atchNormalization) 128) [0][0]']
conv3_block4_1_relu (None, 4, 4, 0 ['conv3_block4_1_bn[0 N
(Activation) 128) ][0]']
conv3_block4_2_conv (None, 4, 4, 36864 ['conv3_block4_1_relu N
(Conv2D) 32) [0][0]']
conv3_block4_concat (None, 4, 4, 0 ['conv3_block3_concat N
(Concatenate) 256) [0][0]',
'conv3_block4_2_conv
[0][0]']
conv3_block5_0_bn (B (None, 4, 4, 1024 ['conv3_block4_concat N
atchNormalization) 256) [0][0]']
conv3_block5_0_relu (None, 4, 4, 0 ['conv3_block5_0_bn[0 N
(Activation) 256) ][0]']
conv3_block5_1_conv (None, 4, 4, 32768 ['conv3_block5_0_relu N
(Conv2D) 128) [0][0]']
conv3_block5_1_bn (B (None, 4, 4, 512 ['conv3_block5_1_conv N
atchNormalization) 128) [0][0]']
conv3_block5_1_relu (None, 4, 4, 0 ['conv3_block5_1_bn[0 N
(Activation) 128) ][0]']
conv3_block5_2_conv (None, 4, 4, 36864 ['conv3_block5_1_relu N
(Conv2D) 32) [0][0]']
conv3_block5_concat (None, 4, 4, 0 ['conv3_block4_concat N
(Concatenate) 288) [0][0]',
'conv3_block5_2_conv
[0][0]']
conv3_block6_0_bn (B (None, 4, 4, 1152 ['conv3_block5_concat N
atchNormalization) 288) [0][0]']
conv3_block6_0_relu (None, 4, 4, 0 ['conv3_block6_0_bn[0 N
(Activation) 288) ][0]']
conv3_block6_1_conv (None, 4, 4, 36864 ['conv3_block6_0_relu N
(Conv2D) 128) [0][0]']
conv3_block6_1_bn (B (None, 4, 4, 512 ['conv3_block6_1_conv N
atchNormalization) 128) [0][0]']
conv3_block6_1_relu (None, 4, 4, 0 ['conv3_block6_1_bn[0 N
(Activation) 128) ][0]']
conv3_block6_2_conv (None, 4, 4, 36864 ['conv3_block6_1_relu N
(Conv2D) 32) [0][0]']
conv3_block6_concat (None, 4, 4, 0 ['conv3_block5_concat N
(Concatenate) 320) [0][0]',
'conv3_block6_2_conv
[0][0]']
conv3_block7_0_bn (B (None, 4, 4, 1280 ['conv3_block6_concat N
atchNormalization) 320) [0][0]']
conv3_block7_0_relu (None, 4, 4, 0 ['conv3_block7_0_bn[0 N
(Activation) 320) ][0]']
conv3_block7_1_conv (None, 4, 4, 40960 ['conv3_block7_0_relu N
(Conv2D) 128) [0][0]']
conv3_block7_1_bn (B (None, 4, 4, 512 ['conv3_block7_1_conv N
atchNormalization) 128) [0][0]']
conv3_block7_1_relu (None, 4, 4, 0 ['conv3_block7_1_bn[0 N
(Activation) 128) ][0]']
conv3_block7_2_conv (None, 4, 4, 36864 ['conv3_block7_1_relu N
(Conv2D) 32) [0][0]']
conv3_block7_concat (None, 4, 4, 0 ['conv3_block6_concat N
(Concatenate) 352) [0][0]',
'conv3_block7_2_conv
[0][0]']
conv3_block8_0_bn (B (None, 4, 4, 1408 ['conv3_block7_concat N
atchNormalization) 352) [0][0]']
conv3_block8_0_relu (None, 4, 4, 0 ['conv3_block8_0_bn[0 N
(Activation) 352) ][0]']
conv3_block8_1_conv (None, 4, 4, 45056 ['conv3_block8_0_relu N
(Conv2D) 128) [0][0]']
conv3_block8_1_bn (B (None, 4, 4, 512 ['conv3_block8_1_conv N
atchNormalization) 128) [0][0]']
conv3_block8_1_relu (None, 4, 4, 0 ['conv3_block8_1_bn[0 N
(Activation) 128) ][0]']
conv3_block8_2_conv (None, 4, 4, 36864 ['conv3_block8_1_relu N
(Conv2D) 32) [0][0]']
conv3_block8_concat (None, 4, 4, 0 ['conv3_block7_concat N
(Concatenate) 384) [0][0]',
'conv3_block8_2_conv
[0][0]']
conv3_block9_0_bn (B (None, 4, 4, 1536 ['conv3_block8_concat N
atchNormalization) 384) [0][0]']
conv3_block9_0_relu (None, 4, 4, 0 ['conv3_block9_0_bn[0 N
(Activation) 384) ][0]']
conv3_block9_1_conv (None, 4, 4, 49152 ['conv3_block9_0_relu N
(Conv2D) 128) [0][0]']
conv3_block9_1_bn (B (None, 4, 4, 512 ['conv3_block9_1_conv N
atchNormalization) 128) [0][0]']
conv3_block9_1_relu (None, 4, 4, 0 ['conv3_block9_1_bn[0 N
(Activation) 128) ][0]']
conv3_block9_2_conv (None, 4, 4, 36864 ['conv3_block9_1_relu N
(Conv2D) 32) [0][0]']
conv3_block9_concat (None, 4, 4, 0 ['conv3_block8_concat N
(Concatenate) 416) [0][0]',
'conv3_block9_2_conv
[0][0]']
conv3_block10_0_bn ( (None, 4, 4, 1664 ['conv3_block9_concat N
BatchNormalization) 416) [0][0]']
conv3_block10_0_relu (None, 4, 4, 0 ['conv3_block10_0_bn[ N
(Activation) 416) 0][0]']
conv3_block10_1_conv (None, 4, 4, 53248 ['conv3_block10_0_rel N
(Conv2D) 128) u[0][0]']
conv3_block10_1_bn ( (None, 4, 4, 512 ['conv3_block10_1_con N
BatchNormalization) 128) v[0][0]']
conv3_block10_1_relu (None, 4, 4, 0 ['conv3_block10_1_bn[ N
(Activation) 128) 0][0]']
[Model summary truncated: dense blocks 3-5 of the DenseNet backbone. Each block repeats the
 same pattern of BatchNormalization -> ReLU -> 1x1 Conv2D (128 filters) -> BatchNormalization
 -> ReLU -> 3x3 Conv2D (32 filters) -> Concatenate, so every block adds 32 channels to the
 running concatenation (growth rate 32). The transition layers halve the spatial resolution:
 pool3 (BatchNormalization, ReLU, 1x1 Conv2D, 2x2 AveragePooling2D) reduces
 (None, 4, 4, 512) to (None, 2, 2, 256), and pool4 reduces (None, 2, 2, 1792) to
 (None, 1, 1, 896). Dense block 4 consists of 48 such blocks (conv4_block1 through
 conv4_block48), growing the channels from 256 to 1792; dense block 5 (conv5_block1,
 conv5_block2, ...) continues at spatial size 1x1.]
atchNormalization) 128) [0][0]']
conv5_block4_1_relu (None, 1, 1, 0 ['conv5_block4_1_bn[0 N
(Activation) 128) ][0]']
conv5_block4_2_conv (None, 1, 1, 36864 ['conv5_block4_1_relu N
(Conv2D) 32) [0][0]']
conv5_block4_concat (None, 1, 1, 0 ['conv5_block3_concat N
(Concatenate) 1024) [0][0]',
'conv5_block4_2_conv
[0][0]']
conv5_block5_0_bn (B (None, 1, 1, 4096 ['conv5_block4_concat N
atchNormalization) 1024) [0][0]']
conv5_block5_0_relu (None, 1, 1, 0 ['conv5_block5_0_bn[0 N
(Activation) 1024) ][0]']
conv5_block5_1_conv (None, 1, 1, 131072 ['conv5_block5_0_relu N
(Conv2D) 128) [0][0]']
conv5_block5_1_bn (B (None, 1, 1, 512 ['conv5_block5_1_conv N
atchNormalization) 128) [0][0]']
conv5_block5_1_relu (None, 1, 1, 0 ['conv5_block5_1_bn[0 N
(Activation) 128) ][0]']
conv5_block5_2_conv (None, 1, 1, 36864 ['conv5_block5_1_relu N
(Conv2D) 32) [0][0]']
conv5_block5_concat (None, 1, 1, 0 ['conv5_block4_concat N
(Concatenate) 1056) [0][0]',
'conv5_block5_2_conv
[0][0]']
conv5_block6_0_bn (B (None, 1, 1, 4224 ['conv5_block5_concat N
atchNormalization) 1056) [0][0]']
conv5_block6_0_relu (None, 1, 1, 0 ['conv5_block6_0_bn[0 N
(Activation) 1056) ][0]']
conv5_block6_1_conv (None, 1, 1, 135168 ['conv5_block6_0_relu N
(Conv2D) 128) [0][0]']
conv5_block6_1_bn (B (None, 1, 1, 512 ['conv5_block6_1_conv N
atchNormalization) 128) [0][0]']
conv5_block6_1_relu (None, 1, 1, 0 ['conv5_block6_1_bn[0 N
(Activation) 128) ][0]']
conv5_block6_2_conv (None, 1, 1, 36864 ['conv5_block6_1_relu N
(Conv2D) 32) [0][0]']
conv5_block6_concat (None, 1, 1, 0 ['conv5_block5_concat N
(Concatenate) 1088) [0][0]',
'conv5_block6_2_conv
[0][0]']
conv5_block7_0_bn (B (None, 1, 1, 4352 ['conv5_block6_concat N
atchNormalization) 1088) [0][0]']
conv5_block7_0_relu (None, 1, 1, 0 ['conv5_block7_0_bn[0 N
(Activation) 1088) ][0]']
conv5_block7_1_conv (None, 1, 1, 139264 ['conv5_block7_0_relu N
(Conv2D) 128) [0][0]']
conv5_block7_1_bn (B (None, 1, 1, 512 ['conv5_block7_1_conv N
atchNormalization) 128) [0][0]']
conv5_block7_1_relu (None, 1, 1, 0 ['conv5_block7_1_bn[0 N
(Activation) 128) ][0]']
conv5_block7_2_conv (None, 1, 1, 36864 ['conv5_block7_1_relu N
(Conv2D) 32) [0][0]']
conv5_block7_concat (None, 1, 1, 0 ['conv5_block6_concat N
(Concatenate) 1120) [0][0]',
'conv5_block7_2_conv
[0][0]']
conv5_block8_0_bn (B (None, 1, 1, 4480 ['conv5_block7_concat N
atchNormalization) 1120) [0][0]']
conv5_block8_0_relu (None, 1, 1, 0 ['conv5_block8_0_bn[0 N
(Activation) 1120) ][0]']
conv5_block8_1_conv (None, 1, 1, 143360 ['conv5_block8_0_relu N
(Conv2D) 128) [0][0]']
conv5_block8_1_bn (B (None, 1, 1, 512 ['conv5_block8_1_conv N
atchNormalization) 128) [0][0]']
conv5_block8_1_relu (None, 1, 1, 0 ['conv5_block8_1_bn[0 N
(Activation) 128) ][0]']
conv5_block8_2_conv (None, 1, 1, 36864 ['conv5_block8_1_relu N
(Conv2D) 32) [0][0]']
conv5_block8_concat (None, 1, 1, 0 ['conv5_block7_concat N
(Concatenate) 1152) [0][0]',
'conv5_block8_2_conv
[0][0]']
conv5_block9_0_bn (B (None, 1, 1, 4608 ['conv5_block8_concat N
atchNormalization) 1152) [0][0]']
conv5_block9_0_relu (None, 1, 1, 0 ['conv5_block9_0_bn[0 N
(Activation) 1152) ][0]']
conv5_block9_1_conv (None, 1, 1, 147456 ['conv5_block9_0_relu N
(Conv2D) 128) [0][0]']
conv5_block9_1_bn (B (None, 1, 1, 512 ['conv5_block9_1_conv N
atchNormalization) 128) [0][0]']
conv5_block9_1_relu (None, 1, 1, 0 ['conv5_block9_1_bn[0 N
(Activation) 128) ][0]']
conv5_block9_2_conv (None, 1, 1, 36864 ['conv5_block9_1_relu N
(Conv2D) 32) [0][0]']
conv5_block9_concat (None, 1, 1, 0 ['conv5_block8_concat N
(Concatenate) 1184) [0][0]',
'conv5_block9_2_conv
[0][0]']
conv5_block10_0_bn ( (None, 1, 1, 4736 ['conv5_block9_concat N
BatchNormalization) 1184) [0][0]']
conv5_block10_0_relu (None, 1, 1, 0 ['conv5_block10_0_bn[ N
(Activation) 1184) 0][0]']
conv5_block10_1_conv (None, 1, 1, 151552 ['conv5_block10_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block10_1_bn ( (None, 1, 1, 512 ['conv5_block10_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block10_1_relu (None, 1, 1, 0 ['conv5_block10_1_bn[ N
(Activation) 128) 0][0]']
conv5_block10_2_conv (None, 1, 1, 36864 ['conv5_block10_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block10_concat (None, 1, 1, 0 ['conv5_block9_concat N
(Concatenate) 1216) [0][0]',
'conv5_block10_2_con
v[0][0]']
conv5_block11_0_bn ( (None, 1, 1, 4864 ['conv5_block10_conca N
BatchNormalization) 1216) t[0][0]']
conv5_block11_0_relu (None, 1, 1, 0 ['conv5_block11_0_bn[ N
(Activation) 1216) 0][0]']
conv5_block11_1_conv (None, 1, 1, 155648 ['conv5_block11_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block11_1_bn ( (None, 1, 1, 512 ['conv5_block11_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block11_1_relu (None, 1, 1, 0 ['conv5_block11_1_bn[ N
(Activation) 128) 0][0]']
conv5_block11_2_conv (None, 1, 1, 36864 ['conv5_block11_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block11_concat (None, 1, 1, 0 ['conv5_block10_conca N
(Concatenate) 1248) t[0][0]',
'conv5_block11_2_con
v[0][0]']
conv5_block12_0_bn ( (None, 1, 1, 4992 ['conv5_block11_conca N
BatchNormalization) 1248) t[0][0]']
conv5_block12_0_relu (None, 1, 1, 0 ['conv5_block12_0_bn[ N
(Activation) 1248) 0][0]']
conv5_block12_1_conv (None, 1, 1, 159744 ['conv5_block12_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block12_1_bn ( (None, 1, 1, 512 ['conv5_block12_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block12_1_relu (None, 1, 1, 0 ['conv5_block12_1_bn[ N
(Activation) 128) 0][0]']
conv5_block12_2_conv (None, 1, 1, 36864 ['conv5_block12_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block12_concat (None, 1, 1, 0 ['conv5_block11_conca N
(Concatenate) 1280) t[0][0]',
'conv5_block12_2_con
v[0][0]']
conv5_block13_0_bn ( (None, 1, 1, 5120 ['conv5_block12_conca N
BatchNormalization) 1280) t[0][0]']
conv5_block13_0_relu (None, 1, 1, 0 ['conv5_block13_0_bn[ N
(Activation) 1280) 0][0]']
conv5_block13_1_conv (None, 1, 1, 163840 ['conv5_block13_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block13_1_bn ( (None, 1, 1, 512 ['conv5_block13_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block13_1_relu (None, 1, 1, 0 ['conv5_block13_1_bn[ N
(Activation) 128) 0][0]']
conv5_block13_2_conv (None, 1, 1, 36864 ['conv5_block13_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block13_concat (None, 1, 1, 0 ['conv5_block12_conca N
(Concatenate) 1312) t[0][0]',
'conv5_block13_2_con
v[0][0]']
conv5_block14_0_bn ( (None, 1, 1, 5248 ['conv5_block13_conca N
BatchNormalization) 1312) t[0][0]']
conv5_block14_0_relu (None, 1, 1, 0 ['conv5_block14_0_bn[ N
(Activation) 1312) 0][0]']
conv5_block14_1_conv (None, 1, 1, 167936 ['conv5_block14_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block14_1_bn ( (None, 1, 1, 512 ['conv5_block14_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block14_1_relu (None, 1, 1, 0 ['conv5_block14_1_bn[ N
(Activation) 128) 0][0]']
conv5_block14_2_conv (None, 1, 1, 36864 ['conv5_block14_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block14_concat (None, 1, 1, 0 ['conv5_block13_conca N
(Concatenate) 1344) t[0][0]',
'conv5_block14_2_con
v[0][0]']
conv5_block15_0_bn ( (None, 1, 1, 5376 ['conv5_block14_conca N
BatchNormalization) 1344) t[0][0]']
conv5_block15_0_relu (None, 1, 1, 0 ['conv5_block15_0_bn[ N
(Activation) 1344) 0][0]']
conv5_block15_1_conv (None, 1, 1, 172032 ['conv5_block15_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block15_1_bn ( (None, 1, 1, 512 ['conv5_block15_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block15_1_relu (None, 1, 1, 0 ['conv5_block15_1_bn[ N
(Activation) 128) 0][0]']
conv5_block15_2_conv (None, 1, 1, 36864 ['conv5_block15_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block15_concat (None, 1, 1, 0 ['conv5_block14_conca N
(Concatenate) 1376) t[0][0]',
'conv5_block15_2_con
v[0][0]']
conv5_block16_0_bn ( (None, 1, 1, 5504 ['conv5_block15_conca N
BatchNormalization) 1376) t[0][0]']
conv5_block16_0_relu (None, 1, 1, 0 ['conv5_block16_0_bn[ N
(Activation) 1376) 0][0]']
conv5_block16_1_conv (None, 1, 1, 176128 ['conv5_block16_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block16_1_bn ( (None, 1, 1, 512 ['conv5_block16_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block16_1_relu (None, 1, 1, 0 ['conv5_block16_1_bn[ N
(Activation) 128) 0][0]']
conv5_block16_2_conv (None, 1, 1, 36864 ['conv5_block16_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block16_concat (None, 1, 1, 0 ['conv5_block15_conca N
(Concatenate) 1408) t[0][0]',
'conv5_block16_2_con
v[0][0]']
conv5_block17_0_bn ( (None, 1, 1, 5632 ['conv5_block16_conca N
BatchNormalization) 1408) t[0][0]']
conv5_block17_0_relu (None, 1, 1, 0 ['conv5_block17_0_bn[ N
(Activation) 1408) 0][0]']
conv5_block17_1_conv (None, 1, 1, 180224 ['conv5_block17_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block17_1_bn ( (None, 1, 1, 512 ['conv5_block17_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block17_1_relu (None, 1, 1, 0 ['conv5_block17_1_bn[ N
(Activation) 128) 0][0]']
conv5_block17_2_conv (None, 1, 1, 36864 ['conv5_block17_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block17_concat (None, 1, 1, 0 ['conv5_block16_conca N
(Concatenate) 1440) t[0][0]',
'conv5_block17_2_con
v[0][0]']
conv5_block18_0_bn ( (None, 1, 1, 5760 ['conv5_block17_conca N
BatchNormalization) 1440) t[0][0]']
conv5_block18_0_relu (None, 1, 1, 0 ['conv5_block18_0_bn[ N
(Activation) 1440) 0][0]']
conv5_block18_1_conv (None, 1, 1, 184320 ['conv5_block18_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block18_1_bn ( (None, 1, 1, 512 ['conv5_block18_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block18_1_relu (None, 1, 1, 0 ['conv5_block18_1_bn[ N
(Activation) 128) 0][0]']
conv5_block18_2_conv (None, 1, 1, 36864 ['conv5_block18_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block18_concat (None, 1, 1, 0 ['conv5_block17_conca N
(Concatenate) 1472) t[0][0]',
'conv5_block18_2_con
v[0][0]']
conv5_block19_0_bn ( (None, 1, 1, 5888 ['conv5_block18_conca N
BatchNormalization) 1472) t[0][0]']
conv5_block19_0_relu (None, 1, 1, 0 ['conv5_block19_0_bn[ N
(Activation) 1472) 0][0]']
conv5_block19_1_conv (None, 1, 1, 188416 ['conv5_block19_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block19_1_bn ( (None, 1, 1, 512 ['conv5_block19_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block19_1_relu (None, 1, 1, 0 ['conv5_block19_1_bn[ N
(Activation) 128) 0][0]']
conv5_block19_2_conv (None, 1, 1, 36864 ['conv5_block19_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block19_concat (None, 1, 1, 0 ['conv5_block18_conca N
(Concatenate) 1504) t[0][0]',
'conv5_block19_2_con
v[0][0]']
conv5_block20_0_bn ( (None, 1, 1, 6016 ['conv5_block19_conca N
BatchNormalization) 1504) t[0][0]']
conv5_block20_0_relu (None, 1, 1, 0 ['conv5_block20_0_bn[ N
(Activation) 1504) 0][0]']
conv5_block20_1_conv (None, 1, 1, 192512 ['conv5_block20_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block20_1_bn ( (None, 1, 1, 512 ['conv5_block20_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block20_1_relu (None, 1, 1, 0 ['conv5_block20_1_bn[ N
(Activation) 128) 0][0]']
conv5_block20_2_conv (None, 1, 1, 36864 ['conv5_block20_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block20_concat (None, 1, 1, 0 ['conv5_block19_conca N
(Concatenate) 1536) t[0][0]',
'conv5_block20_2_con
v[0][0]']
conv5_block21_0_bn ( (None, 1, 1, 6144 ['conv5_block20_conca N
BatchNormalization) 1536) t[0][0]']
conv5_block21_0_relu (None, 1, 1, 0 ['conv5_block21_0_bn[ N
(Activation) 1536) 0][0]']
conv5_block21_1_conv (None, 1, 1, 196608 ['conv5_block21_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block21_1_bn ( (None, 1, 1, 512 ['conv5_block21_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block21_1_relu (None, 1, 1, 0 ['conv5_block21_1_bn[ N
(Activation) 128) 0][0]']
conv5_block21_2_conv (None, 1, 1, 36864 ['conv5_block21_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block21_concat (None, 1, 1, 0 ['conv5_block20_conca N
(Concatenate) 1568) t[0][0]',
'conv5_block21_2_con
v[0][0]']
conv5_block22_0_bn ( (None, 1, 1, 6272 ['conv5_block21_conca N
BatchNormalization) 1568) t[0][0]']
conv5_block22_0_relu (None, 1, 1, 0 ['conv5_block22_0_bn[ N
(Activation) 1568) 0][0]']
conv5_block22_1_conv (None, 1, 1, 200704 ['conv5_block22_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block22_1_bn ( (None, 1, 1, 512 ['conv5_block22_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block22_1_relu (None, 1, 1, 0 ['conv5_block22_1_bn[ N
(Activation) 128) 0][0]']
conv5_block22_2_conv (None, 1, 1, 36864 ['conv5_block22_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block22_concat (None, 1, 1, 0 ['conv5_block21_conca N
(Concatenate) 1600) t[0][0]',
'conv5_block22_2_con
v[0][0]']
conv5_block23_0_bn ( (None, 1, 1, 6400 ['conv5_block22_conca N
BatchNormalization) 1600) t[0][0]']
conv5_block23_0_relu (None, 1, 1, 0 ['conv5_block23_0_bn[ N
(Activation) 1600) 0][0]']
conv5_block23_1_conv (None, 1, 1, 204800 ['conv5_block23_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block23_1_bn ( (None, 1, 1, 512 ['conv5_block23_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block23_1_relu (None, 1, 1, 0 ['conv5_block23_1_bn[ N
(Activation) 128) 0][0]']
conv5_block23_2_conv (None, 1, 1, 36864 ['conv5_block23_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block23_concat (None, 1, 1, 0 ['conv5_block22_conca N
(Concatenate) 1632) t[0][0]',
'conv5_block23_2_con
v[0][0]']
conv5_block24_0_bn ( (None, 1, 1, 6528 ['conv5_block23_conca N
BatchNormalization) 1632) t[0][0]']
conv5_block24_0_relu (None, 1, 1, 0 ['conv5_block24_0_bn[ N
(Activation) 1632) 0][0]']
conv5_block24_1_conv (None, 1, 1, 208896 ['conv5_block24_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block24_1_bn ( (None, 1, 1, 512 ['conv5_block24_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block24_1_relu (None, 1, 1, 0 ['conv5_block24_1_bn[ N
(Activation) 128) 0][0]']
conv5_block24_2_conv (None, 1, 1, 36864 ['conv5_block24_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block24_concat (None, 1, 1, 0 ['conv5_block23_conca N
(Concatenate) 1664) t[0][0]',
'conv5_block24_2_con
v[0][0]']
conv5_block25_0_bn ( (None, 1, 1, 6656 ['conv5_block24_conca N
BatchNormalization) 1664) t[0][0]']
conv5_block25_0_relu (None, 1, 1, 0 ['conv5_block25_0_bn[ N
(Activation) 1664) 0][0]']
conv5_block25_1_conv (None, 1, 1, 212992 ['conv5_block25_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block25_1_bn ( (None, 1, 1, 512 ['conv5_block25_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block25_1_relu (None, 1, 1, 0 ['conv5_block25_1_bn[ N
(Activation) 128) 0][0]']
conv5_block25_2_conv (None, 1, 1, 36864 ['conv5_block25_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block25_concat (None, 1, 1, 0 ['conv5_block24_conca N
(Concatenate) 1696) t[0][0]',
'conv5_block25_2_con
v[0][0]']
conv5_block26_0_bn ( (None, 1, 1, 6784 ['conv5_block25_conca N
BatchNormalization) 1696) t[0][0]']
conv5_block26_0_relu (None, 1, 1, 0 ['conv5_block26_0_bn[ N
(Activation) 1696) 0][0]']
conv5_block26_1_conv (None, 1, 1, 217088 ['conv5_block26_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block26_1_bn ( (None, 1, 1, 512 ['conv5_block26_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block26_1_relu (None, 1, 1, 0 ['conv5_block26_1_bn[ N
(Activation) 128) 0][0]']
conv5_block26_2_conv (None, 1, 1, 36864 ['conv5_block26_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block26_concat (None, 1, 1, 0 ['conv5_block25_conca N
(Concatenate) 1728) t[0][0]',
'conv5_block26_2_con
v[0][0]']
conv5_block27_0_bn ( (None, 1, 1, 6912 ['conv5_block26_conca N
BatchNormalization) 1728) t[0][0]']
conv5_block27_0_relu (None, 1, 1, 0 ['conv5_block27_0_bn[ N
(Activation) 1728) 0][0]']
conv5_block27_1_conv (None, 1, 1, 221184 ['conv5_block27_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block27_1_bn ( (None, 1, 1, 512 ['conv5_block27_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block27_1_relu (None, 1, 1, 0 ['conv5_block27_1_bn[ N
(Activation) 128) 0][0]']
conv5_block27_2_conv (None, 1, 1, 36864 ['conv5_block27_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block27_concat (None, 1, 1, 0 ['conv5_block26_conca N
(Concatenate) 1760) t[0][0]',
'conv5_block27_2_con
v[0][0]']
conv5_block28_0_bn ( (None, 1, 1, 7040 ['conv5_block27_conca N
BatchNormalization) 1760) t[0][0]']
conv5_block28_0_relu (None, 1, 1, 0 ['conv5_block28_0_bn[ N
(Activation) 1760) 0][0]']
conv5_block28_1_conv (None, 1, 1, 225280 ['conv5_block28_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block28_1_bn ( (None, 1, 1, 512 ['conv5_block28_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block28_1_relu (None, 1, 1, 0 ['conv5_block28_1_bn[ N
(Activation) 128) 0][0]']
conv5_block28_2_conv (None, 1, 1, 36864 ['conv5_block28_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block28_concat (None, 1, 1, 0 ['conv5_block27_conca N
(Concatenate) 1792) t[0][0]',
'conv5_block28_2_con
v[0][0]']
conv5_block29_0_bn ( (None, 1, 1, 7168 ['conv5_block28_conca N
BatchNormalization) 1792) t[0][0]']
conv5_block29_0_relu (None, 1, 1, 0 ['conv5_block29_0_bn[ N
(Activation) 1792) 0][0]']
conv5_block29_1_conv (None, 1, 1, 229376 ['conv5_block29_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block29_1_bn ( (None, 1, 1, 512 ['conv5_block29_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block29_1_relu (None, 1, 1, 0 ['conv5_block29_1_bn[ N
(Activation) 128) 0][0]']
conv5_block29_2_conv (None, 1, 1, 36864 ['conv5_block29_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block29_concat (None, 1, 1, 0 ['conv5_block28_conca N
(Concatenate) 1824) t[0][0]',
'conv5_block29_2_con
v[0][0]']
conv5_block30_0_bn ( (None, 1, 1, 7296 ['conv5_block29_conca N
BatchNormalization) 1824) t[0][0]']
conv5_block30_0_relu (None, 1, 1, 0 ['conv5_block30_0_bn[ N
(Activation) 1824) 0][0]']
conv5_block30_1_conv (None, 1, 1, 233472 ['conv5_block30_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block30_1_bn ( (None, 1, 1, 512 ['conv5_block30_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block30_1_relu (None, 1, 1, 0 ['conv5_block30_1_bn[ N
(Activation) 128) 0][0]']
conv5_block30_2_conv (None, 1, 1, 36864 ['conv5_block30_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block30_concat (None, 1, 1, 0 ['conv5_block29_conca N
(Concatenate) 1856) t[0][0]',
'conv5_block30_2_con
v[0][0]']
conv5_block31_0_bn ( (None, 1, 1, 7424 ['conv5_block30_conca N
BatchNormalization) 1856) t[0][0]']
conv5_block31_0_relu (None, 1, 1, 0 ['conv5_block31_0_bn[ N
(Activation) 1856) 0][0]']
conv5_block31_1_conv (None, 1, 1, 237568 ['conv5_block31_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block31_1_bn ( (None, 1, 1, 512 ['conv5_block31_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block31_1_relu (None, 1, 1, 0 ['conv5_block31_1_bn[ N
(Activation) 128) 0][0]']
conv5_block31_2_conv (None, 1, 1, 36864 ['conv5_block31_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block31_concat (None, 1, 1, 0 ['conv5_block30_conca N
(Concatenate) 1888) t[0][0]',
'conv5_block31_2_con
v[0][0]']
conv5_block32_0_bn ( (None, 1, 1, 7552 ['conv5_block31_conca N
BatchNormalization) 1888) t[0][0]']
conv5_block32_0_relu (None, 1, 1, 0 ['conv5_block32_0_bn[ N
(Activation) 1888) 0][0]']
conv5_block32_1_conv (None, 1, 1, 241664 ['conv5_block32_0_rel N
(Conv2D) 128) u[0][0]']
conv5_block32_1_bn ( (None, 1, 1, 512 ['conv5_block32_1_con N
BatchNormalization) 128) v[0][0]']
conv5_block32_1_relu (None, 1, 1, 0 ['conv5_block32_1_bn[ N
(Activation) 128) 0][0]']
conv5_block32_2_conv (None, 1, 1, 36864 ['conv5_block32_1_rel N
(Conv2D) 32) u[0][0]']
conv5_block32_concat (None, 1, 1, 0 ['conv5_block31_conca N
(Concatenate) 1920) t[0][0]',
'conv5_block32_2_con
v[0][0]']
bn (BatchNormalizati (None, 1, 1, 7680 ['conv5_block32_conca N
on) 1920) t[0][0]']
relu (Activation) (None, 1, 1, 0 ['bn[0][0]'] N
1920)
dense_3 (Dense) (None, 1, 1, 19210 ['relu[0][0]'] N
10)
flatten_2 (Flatten) (None, 10) 0 ['dense_3[0][0]'] Y
================================================================================
Total params: 18,341,194
Trainable params: 0
Non-trainable params: 18,341,194
________________________________________________________________________________
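A pattern worth noting in the summary: every Concatenate layer increases the channel dimension by exactly 32, the growth rate of this DenseNet backbone, because each block appends its 32 new feature maps to its input. A small Python sketch of that bookkeeping (the function name is ours, not a Keras API):

```python
def dense_stage_channels(in_channels, growth_rate, num_blocks):
    """Channel count after each concatenation in one DenseNet stage."""
    channels = []
    c = in_channels
    for _ in range(num_blocks):
        c += growth_rate  # each block appends `growth_rate` new feature maps
        channels.append(c)
    return channels

# The last stage starts at 896 channels (output of pool4_pool) and has 32 blocks:
print(dense_stage_channels(896, 32, 32)[:3])  # [928, 960, 992], as in the summary
print(dense_stage_channels(896, 32, 32)[-1])  # 1920, the input size of the final bn layer
```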
And then the usual training:
library(tensorflow)
library(keras)
set_random_seed(321L, disable_gpu = FALSE) # Already sets R's random seed.

model %>%
  keras::compile(loss = loss_categorical_crossentropy,
                 optimizer = optimizer_adamax())

model %>%
  fit(
    x = train_x,
    y = train_y,
    epochs = 1L,
    batch_size = 32L,
    shuffle = TRUE,
    validation_split = 0.2
  )
We have seen that transfer learning can easily be done using Keras. The same workflow is also possible in torch:
library(torchvision)
library(torch)
torch_manual_seed(321L)
set.seed(123)

train_ds = cifar10_dataset(".", download = TRUE, train = TRUE,
                           transform = transform_to_tensor)
test_ds = cifar10_dataset(".", download = TRUE, train = FALSE,
                          transform = transform_to_tensor)

train_dl = dataloader(train_ds, batch_size = 100L, shuffle = TRUE)
test_dl = dataloader(test_ds, batch_size = 100L)

model_torch = model_resnet18(pretrained = TRUE)
# We will set all model parameters to constant values (freeze the weights):
model_torch$parameters %>%
  purrr::walk(function(param) param$requires_grad_(FALSE))

# Let's replace the last layer (named 'fc') with our own layer:
inFeat = model_torch$fc$in_features
model_torch$fc = nn_linear(inFeat, out_features = 10L)

opt = optim_adam(params = model_torch$parameters, lr = 0.01)
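Freezing via `requires_grad_(FALSE)` simply means that gradient updates skip those parameters; only the newly attached `fc` layer is trained. A minimal, language-agnostic sketch in Python of a gradient-descent step that respects such a freeze mask (all names are illustrative, not torch API):

```python
def sgd_step(params, grads, frozen, lr):
    """One gradient-descent update; frozen parameters are returned unchanged."""
    return [p if f else p - lr * g
            for p, g, f in zip(params, grads, frozen)]

# The first parameter is frozen and keeps its value; the second is updated.
print(sgd_step([1.0, 2.0], [0.5, 0.5], frozen=[True, False], lr=0.5))  # [1.0, 1.75]
```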
for(e in 1:1){
  losses = c()
  coro::loop(
    for(batch in train_dl){
      opt$zero_grad()
      pred = model_torch(batch[[1]])
      loss = nnf_cross_entropy(pred, batch[[2]], reduction = "mean")
      loss$backward()
      opt$step()
      losses = c(losses, loss$item())
    }
  )
  cat(sprintf("Loss at epoch %d: %3f\n", e, mean(losses)))
}
model_torch$eval()

test_losses = c()
total = 0
correct = 0

coro::loop(
  for(batch in test_dl){
    output = model_torch(batch[[1]])
    labels = batch[[2]]
    loss = nnf_cross_entropy(output, labels)
    test_losses = c(test_losses, loss$item())
    predicted = torch_max(output$data(), dim = 2)[[2]]
    total = total + labels$size(1)
    correct = correct + (predicted == labels)$sum()$item()
  }
)

test_accuracy = correct/total
print(test_accuracy)
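The evaluation loop accumulates correct predictions and sample counts batch by batch and divides at the end. The same bookkeeping as a small Python sketch (with toy prediction/label batches):

```python
def batch_accuracy(batches):
    """Accumulate correct predictions and sample counts over batches,
    mirroring the `correct`/`total` bookkeeping of the evaluation loop."""
    correct = 0
    total = 0
    for preds, labels in batches:
        total += len(labels)
        correct += sum(p == l for p, l in zip(preds, labels))
    return correct / total

# Two toy batches: 3 of 5 predictions are correct overall.
print(batch_accuracy([([1, 2, 3], [1, 2, 0]), ([0, 0], [0, 1])]))  # 0.6
```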
10.4.3 Example: Flower dataset
Let’s do that with our flower data set:
library(keras)
library(tensorflow)

data = EcoData::dataset_flower()

train = data$train/127.5 - 1
test = data$test/127.5 - 1
labels = data$labels

# Transfer learning: the weights were pre-trained on ImageNet.
pretrained_model = keras::application_efficientnet_b1(include_top = FALSE,
                                                      input_shape = c(80L, 80L, 3L))
# pretrained_model
keras::freeze_weights(pretrained_model)
pretrained_model

# Build the model:
dnn = pretrained_model$output %>%
  layer_flatten() %>%
  layer_dropout(0.2) %>%
  layer_dense(units = 5L, activation = "softmax")
dnn

model = keras_model(inputs = pretrained_model$input,
                    outputs = dnn)

model %>%
  keras::compile(loss = loss_categorical_crossentropy,
                 optimizer = keras::optimizer_rmsprop(learning_rate = 0.0005))

model %>%
  fit(x = train, y = k_one_hot(labels, 5L), validation_split = 0.2, epochs = 5L)
# Data augmentation

# Transfer learning: the weights were pre-trained on ImageNet.
pretrained_model = keras::application_efficientnet_b1(include_top = FALSE,
                                                      input_shape = c(80L, 80L, 3L))
# pretrained_model
keras::freeze_weights(pretrained_model)
pretrained_model

# Build the model:
dnn = pretrained_model$output %>%
  layer_flatten() %>%
  layer_dropout(0.2) %>%
  layer_dense(units = 5L, activation = "softmax")
dnn

model = keras_model(inputs = pretrained_model$input,
                    outputs = dnn)

### Set up augmentation:
aug = image_data_generator(rotation_range = 180, zoom_range = 0.4,
                           width_shift_range = 0.2, height_shift_range = 0.2,
                           vertical_flip = TRUE, horizontal_flip = TRUE)

### Set up the data:
indices = sample.int(nrow(train), 0.1 * nrow(train)) # for validation
generator = flow_images_from_data(x = train[-indices,,,],
                                  y = k_one_hot(labels[-indices], 5L),
                                  generator = aug)
generator

model %>%
  keras::compile(loss = loss_categorical_crossentropy,
                 optimizer = keras::optimizer_rmsprop(learning_rate = 0.0005))

steps_per_epoch = nrow(train[-indices,,,]) / 45
steps_per_epoch = floor(steps_per_epoch)

model %>%
  fit(generator, epochs = 5L, batch_size = 45L, steps_per_epoch = steps_per_epoch,
      validation_data = list(train[indices,,,], k_one_hot(labels[indices], 5L)))
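Since the augmentation generator yields batches indefinitely, `fit()` must be told how many batches constitute one epoch; the code above computes `floor(n_training_images / 45)`. The same arithmetic in Python (the sample count below is made up for illustration):

```python
import math

def steps_per_epoch(n_samples, batch_size):
    """Number of full batches needed to pass once over the training set
    (a final partial batch is dropped by the floor)."""
    return math.floor(n_samples / batch_size)

print(steps_per_epoch(2727, 45))  # 60
```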
pred = predict(model, test)
pred = apply(pred, 1, which.max) - 1
pred
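`apply(pred, 1, which.max) - 1` takes the row-wise argmax of the softmax outputs; the `- 1` converts R's 1-based `which.max` index into 0-based class labels. The same operation sketched in Python:

```python
def to_class_labels(prob_rows):
    """Row-wise argmax over softmax outputs, returning 0-based class labels."""
    return [max(range(len(row)), key=row.__getitem__) for row in prob_rows]

print(to_class_labels([[0.1, 0.7, 0.2], [0.6, 0.3, 0.1]]))  # [1, 0]
```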