Docstrings

NumNN.ActivationType
Activation(actFun)

Arguments

  • actFun::Symbol := the activation function of this layer

Summary

mutable struct Activation <: Layer


Fields

  • actFun::Symbol := the activation function of this layer

  • channels::Integer := is the number of nodes or channels in the layer

  • inputS::Tuple := input size of the layer

  • outputS::Tuple := output size of the layer

  • forwCount::Integer := forward propagation counter

  • backCount::Integer := backward propagation counter

  • updateCount::Integer := update parameters counter

  • nextLayers::Array{Layer,1} := An array of the next layer(s)

  • prevLayer::Array{Layer,1} := An array of the previous layer(s) to be added


Supertype Hierarchy

Activation <: Layer <: Any

Examples

X_Input = Input(X_train)
X = FCLayer(10, :noAct)(X_Input)
X = Activation(:relu)(X)
source
NumNN.AddLayerType
AddLayer(; [channels = 0])

Layer that performs an addition of multiple previous layers

Arguments

  • channels := (Integer) number of channels/nodes of this layer, which equals that of the previous layer(s)

Summary

mutable struct AddLayer <: MILayer

Fields

  • channels::Integer := is the number of nodes or channels in the layer

  • inputS::Tuple := input size of the layer

  • outputS::Tuple := output size of the layer

  • forwCount::Integer := forward propagation counter

  • backCount::Integer := backward propagation counter

  • updateCount::Integer := update parameters counter

  • nextLayers::Array{Layer,1} := An array of the next layer(s)

  • prevLayer::Array{Layer,1} := An array of the previous layer(s) to be added


Supertype Hierarchy

AddLayer <: MILayer <: Layer <: Any

Examples

XIn1 = Input(X_train)
X1 = FCLayer(10, :relu)(XIn1)
XIn2 = Input(X_train)
X2 = FCLayer(10, :tanh)(XIn2)

Xa = AddLayer()([X1,X2])
source
NumNN.AveragePool1DType
AveragePool1D(
    f::Integer=2;
    prevLayer=nothing,
    strides::Integer=f,
    padding::Symbol=:valid,
)

Summary

mutable struct AveragePool1D <: AveragePoolLayer

Fields

channels    :: Integer
f           :: Integer
s           :: Integer
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

AveragePool1D <: AveragePoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.AveragePool2DType
AveragePool2D(
    f::Tuple{Integer,Integer}=(2,2);
    prevLayer=nothing,
    strides::Tuple{Integer,Integer}=f,
    padding::Symbol=:valid,
)

Summary

mutable struct AveragePool2D <: AveragePoolLayer

Fields

channels    :: Integer
f           :: Tuple{Integer,Integer}
s           :: Tuple{Integer,Integer}
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

AveragePool2D <: AveragePoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
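
Examples

A minimal usage sketch (the constructor signature is shown above; the input shape is an illustrative assumption):

X_Input = Input(X_train) #X_train e.g. of shape 14×14×3×m
X = Conv2D(10, (3,3))(X_Input)
X = AveragePool2D((2,2))(X)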
source
NumNN.AveragePool3DType
AveragePool3D(
    f::Tuple{Integer,Integer,Integer}=(2,2,2);
    prevLayer=nothing,
    strides::Tuple{Integer,Integer,Integer}=f,
    padding::Symbol=:valid,
)

Summary

mutable struct AveragePool3D <: AveragePoolLayer

Fields

channels    :: Integer
f           :: Tuple{Integer,Integer,Integer}
s           :: Tuple{Integer,Integer,Integer}
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

AveragePool3D <: AveragePoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.AveragePoolLayerType

Summary

abstract type AveragePoolLayer <: PoolLayer

Subtypes

AveragePool1D
AveragePool2D
AveragePool3D

Supertype Hierarchy

AveragePoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.BatchNormType
BatchNorm(;dim=1, ϵ=1e-10)

Batch Normalization Layer that is used to normalize across the dimension specified by the argument dim.

Arguments

  • dim::Integer := is the dimension to normalize across

  • ϵ::AbstractFloat := a small constant used to prevent division by zero when $σ^2$ is zero


Summary

mutable struct BatchNorm <: Layer

Fields

  • channels::Integer := is the number of nodes in the layer

  • inputS::Tuple{Integer, Integer} := input size of the layer, of the shape (channels of the previous layer, size of mini-batch)

  • outputS::Tuple{Integer, Integer} := output size of the layer, of the shape (channels of this layer, size of mini-batch)

  • dim::Integer := the dimension to normalize across

  • ϵ::AbstractFloat := a small constant to protect from division by zero when $σ^2 = 0$

  • W::Array{T,2} where {T} := the scaling parameters of this layer W * X, same shape of the mean μ

  • B::Array{T,2} where {T} := the bias of this layer W * X .+ B, same shape of the variance $σ^2$

  • dW::Array{T,2} where {T} := the derivative of the loss function to the W parameters $\frac{dJ}{dW}$

  • dB::Array{T,2} where {T} := the derivative of the loss function to the B parameters $\frac{dJ}{dB}$

  • forwCount::Integer := forward propagation counter

  • backCount::Integer := backward propagation counter

  • updateCount::Integer := update parameters counter

  • prevLayer::L where {L<:Union{Layer,Nothing}} := the previous layer, which is the input of this layer

  • nextLayers::Array{Layer,1} := An array of the next layer(s)

Supertype Hierarchy

BatchNorm <: Layer <: Any

Examples

X_train = rand(14,14,3,32) #input of shape `14×14` with channels of `3` and mini-batch size `32`

X_Input = Input(X_train)
X = Conv2D(10, (3,3))(X_Input)
X = BatchNorm(dim=3)(X) #to normalize across the channels dimension
X = Activation(:relu)(X)

X_train = rand(128,5,32) #input of shape `128` with channels of `5` and mini-batch size `32`

X_Input = Input(X_train)
X = Conv1D(10, 5)(X_Input)
X = BatchNorm(dim=2)(X) #to normalize across the channels dimension
X = Activation(:relu)(X)

X_train = rand(64*64,32) #input of shape `64*64` and mini-batch size `32`

X_Input = Input(X_train)
X = FCLayer(10, :noAct)(X_Input)
X = BatchNorm(dim=1)(X) #to normalize across the features dimension
X = Activation(:relu)(X)

source
NumNN.ConcatLayerType
ConcatLayer(; channels = 0)

Perform concatenation of group of previous Layers

Summary

mutable struct ConcatLayer <: MILayer

Fields

channels    :: Integer
inputS      :: Tuple
outputS     :: Tuple
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
nextLayers  :: Array{Layer,1}
prevLayer   :: Array{Layer,1}

Supertype Hierarchy

ConcatLayer <: MILayer <: Layer <: Any
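
Examples

A minimal usage sketch, assuming ConcatLayer, like AddLayer, is called on an array of previous layers:

XIn1 = Input(X_train)
X1 = FCLayer(10, :relu)(XIn1)
XIn2 = Input(X_train)
X2 = FCLayer(10, :tanh)(XIn2)

Xc = ConcatLayer()([X1, X2])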
source
NumNN.Conv1DType

Summary

mutable struct Conv1D <: ConvLayer

Fields

channels    :: Integer
f           :: Integer
s           :: Integer
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
W           :: Array{F,3} where F
dW          :: Array{F,3} where F
K           :: Array{F,2} where F
dK          :: Array{F,2} where F
B           :: Array{F,3} where F
dB          :: Array{F,3} where F
actFun      :: Symbol
keepProb    :: AbstractFloat
V           :: Dict{Symbol,Array{F,3} where F}
S           :: Dict{Symbol,Array{F,3} where F}
V̂dk         :: Array{F,2} where F
Ŝdk         :: Array{F,2} where F
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

Conv1D <: ConvLayer <: PaddableLayer <: Layer <: Any
source
NumNN.Conv2DType

Summary

mutable struct Conv2D <: ConvLayer

Fields

channels    :: Integer
f           :: Tuple{Integer,Integer}
s           :: Tuple{Integer,Integer}
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
W           :: Array{F,4} where F
dW          :: Array{F,4} where F
K           :: Array{F,2} where F
dK          :: Array{F,2} where F
B           :: Array{F,4} where F
dB          :: Array{F,4} where F
actFun      :: Symbol
keepProb    :: AbstractFloat
V           :: Dict{Symbol,Array{F,4} where F}
S           :: Dict{Symbol,Array{F,4} where F}
V̂dk         :: Array{F,2} where F
Ŝdk         :: Array{F,2} where F
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

Conv2D <: ConvLayer <: PaddableLayer <: Layer <: Any
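
Examples

A minimal usage sketch; the constructor arguments follow the BatchNorm examples above, and the input shape is an illustrative assumption:

X_Input = Input(X_train) #X_train e.g. of shape 28×28×1×m
X = Conv2D(10, (3,3))(X_Input) #10 filters of size 3×3
X = Activation(:relu)(X)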
source
NumNN.Conv3DType

Summary

mutable struct Conv3D <: ConvLayer

Fields

channels    :: Integer
f           :: Tuple{Integer,Integer,Integer}
s           :: Tuple{Integer,Integer,Integer}
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
W           :: Array{F,5} where F
dW          :: Array{F,5} where F
K           :: Array{F,2} where F
dK          :: Array{F,2} where F
B           :: Array{F,5} where F
dB          :: Array{F,5} where F
actFun      :: Symbol
keepProb    :: AbstractFloat
V           :: Dict{Symbol,Array{F,5} where F}
S           :: Dict{Symbol,Array{F,5} where F}
V̂dk        :: Array{F,2} where F
Ŝdk         :: Array{F,2} where F
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

Conv3D <: ConvLayer <: PaddableLayer <: Layer <: Any
source
NumNN.ConvLayerType

Summary

abstract type ConvLayer <: PaddableLayer

Abstract Type to hold all ConvLayer

Subtypes

Conv1D
Conv2D
Conv3D

Supertype Hierarchy

ConvLayer <: PaddableLayer <: Layer <: Any
source
NumNN.FCLayerType
FCLayer(channels=0, actFun=:noAct, [layerInput = nothing; keepProb = 1.0])

Fully-connected layer (equivalent to Dense in TensorFlow etc.)

Arguments

  • channels := (Integer) is the number of nodes in the layer

  • actFun := (Symbol) is the activation function of this layer

  • layerInput := (Layer or Array) the input of this layer (optional; no need to assign it)

  • keepProb := (AbstractFloat) the keep probability (1 - prob of the dropout rate)


Summary

mutable struct FCLayer <: Layer

Fields

  • channels::Integer := is the number of nodes in the layer

  • actFun::Symbol := the activation function of this layer

  • inputS::Tuple{Integer, Integer} := input size of the layer, of the shape (channels of the previous layer, size of mini-batch)

  • outputS::Tuple{Integer, Integer} := output size of the layer, of the shape (channels of this layer, size of mini-batch)

  • keepProb::AbstractFloat := the keep probability of the drop-out operation (values < 1.0 enable drop-out)

  • W::Array{T,2} where {T} := the scaling parameters of this layer W * X, of the shape (channels of this layer, channels of the previous layer)

  • B::Array{T,2} where {T} := the bias of this layer W * X .+ B, of the shape (channels of this layer, 1)

  • dW::Array{T,2} where {T} := the derivative of the loss function to the W parameters $\frac{dJ}{dW}$

  • dB::Array{T,2} where {T} := the derivative of the loss function to the B parameters $\frac{dJ}{dB}$

  • forwCount::Integer := forward propagation counter

  • backCount::Integer := backward propagation counter

  • updateCount::Integer := update parameters counter

  • prevLayer::L where {L<:Union{Layer,Nothing}} := the previous layer, which is the input of this layer

  • nextLayers::Array{Layer,1} := An array of the next layer(s)

Supertype Hierarchy

FCLayer <: Layer <: Any

Examples

X_Input = Input(X_train)
X = FCLayer(20, :relu)(X_Input)

In the previous example the variable X_Input is a pointer to the Input layer, and X is a pointer to the FCLayer(20, :relu) layer. Note that the layer instance can be used as a connecting function.

source
NumNN.FlattenType
Flatten()

Flatten the input into a 2D Array

Summary

mutable struct Flatten <: Layer

Fields

channels    :: Integer
inputS      :: Tuple
outputS     :: Tuple
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
nextLayers  :: Array{Layer,1}
prevLayer   :: Union{Nothing, Layer}

Supertype Hierarchy

Flatten <: Layer <: Any
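
Examples

A minimal usage sketch placing Flatten between a convolutional part and a fully-connected part (the shapes are illustrative assumptions):

X_Input = Input(X_train) #X_train e.g. of shape 28×28×1×m
X = Conv2D(10, (3,3))(X_Input)
X = MaxPool2D((2,2))(X)
X = Flatten()(X)
X_out = FCLayer(10, :softmax)(X)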
source
NumNN.InputType
Input(X_shape::Tuple)

Input Layer that is used as a pointer to the input array(s).

Arguments

  • X_shape::Tuple := shape of the input Array

Summary

mutable struct Input <: Layer

Fields

  • channels::Integer := is the number of nodes or channels in the layer

  • inputS::Tuple := input size of the layer

  • outputS::Tuple := output size of the layer

  • forwCount::Integer := forward propagation counter

  • backCount::Integer := backward propagation counter

  • updateCount::Integer := update parameters counter

  • nextLayers::Array{Layer,1} := An array of the next layer(s)

  • prevLayer::Array{Layer,1} := An array of the previous layer(s) to be added


Supertype Hierarchy

Input <: Layer <: Any

Examples

X_Input = Input(size(X_train))
X = FCLayer(10, :relu)(X_Input)

It is possible to use the Array itself instead of its size; NumNN will take care of the rest

X_Input = Input(X_train)
X = FCLayer(10, :relu)(X_Input)
source
NumNN.MaxPool1DType
MaxPool1D(
    f::Integer=2;
    prevLayer=nothing,
    strides::Integer=f,
    padding::Symbol=:valid,
)

Summary

mutable struct MaxPool1D <: MaxPoolLayer

Fields

channels    :: Integer
f           :: Integer
s           :: Integer
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

MaxPool1D <: MaxPoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.MaxPool2DType
MaxPool2D(
    f::Tuple{Integer,Integer}=(2,2);
    prevLayer=nothing,
    strides::Tuple{Integer,Integer}=f,
    padding::Symbol=:valid,
)

Summary

mutable struct MaxPool2D <: MaxPoolLayer

Fields

channels    :: Integer
f           :: Tuple{Integer,Integer}
s           :: Tuple{Integer,Integer}
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

MaxPool2D <: MaxPoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
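
Examples

A minimal usage sketch (the constructor signature is shown above; the input shape is an illustrative assumption):

X_Input = Input(X_train) #X_train e.g. of shape 28×28×1×m
X = Conv2D(10, (3,3))(X_Input)
X = MaxPool2D((2,2))(X)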
source
NumNN.MaxPool3DType
MaxPool3D(
    f::Tuple{Integer,Integer,Integer}=(2,2,2);
    prevLayer=nothing,
    strides::Tuple{Integer,Integer,Integer}=f,
    padding::Symbol=:valid,
)

Summary

mutable struct MaxPool3D <: MaxPoolLayer

Fields

channels    :: Integer
f           :: Tuple{Integer,Integer,Integer}
s           :: Tuple{Integer,Integer,Integer}
inputS      :: Tuple
outputS     :: Tuple
padding     :: Symbol
forwCount   :: Integer
backCount   :: Integer
updateCount :: Integer
prevLayer   :: Union{Nothing, Layer}
nextLayers  :: Array{Layer,1}

Supertype Hierarchy

MaxPool3D <: MaxPoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.MaxPoolLayerType

Summary

abstract type MaxPoolLayer <: PoolLayer

Abstract Type to hold all the MaxPoolLayers

Subtypes

MaxPool1D
MaxPool2D
MaxPool3D

Supertype Hierarchy

MaxPoolLayer <: PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.ModelType
function Model(
    X,
    Y,
    inLayer::Layer,
    outLayer::Layer,
    α;
    optimizer = :gds,
    β1 = 0.9,
    β2 = 0.999,
    ϵAdam = 1e-8,
    regulization = 0,
    λ = 1.0,
    lossFun = :categoricalCrossentropy,
    paramsDtype::DataType = Float64,
)

Summary

mutable struct Model <: Any

Fields

inLayer      :: Layer
outLayer     :: Layer
lossFun      :: Symbol
paramsDtype  :: DataType
regulization :: Integer
λ            :: AbstractFloat
α            :: AbstractFloat
optimizer    :: Symbol
ϵAdam        :: AbstractFloat
β1           :: AbstractFloat
β2           :: AbstractFloat
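
Examples

A minimal construction sketch based on the signature above; the use of :adam as the optimizer value is an assumption (the documented default is :gds):

X_Input = Input(X_train)
X = FCLayer(50, :relu)(X_Input)
X_out = FCLayer(10, :softmax)(X)

model = Model(X_train, Y_train, X_Input, X_out, 0.01; optimizer = :adam, lossFun = :categoricalCrossentropy)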
source
NumNN.PoolLayerType

Summary

abstract type PoolLayer <: PaddableLayer

Abstract Type to hold all the PoolLayers

Subtypes

AveragePoolLayer
MaxPoolLayer

Supertype Hierarchy

PoolLayer <: PaddableLayer <: Layer <: Any
source
NumNN.binaryCrossentropyType
Return the average cross entropy loss over a vector of labels and predictions

input:
    a := (c, m) matrix of predicted values, where c is the number of classes and m is the number of examples
    y := (c, m) matrix of true labels

    Note: in case the number of classes is one (1) it is okay to have
          scalar values for a and y

output:
    J := scalar value of the cross entropy loss
source
NumNN.σMethod
return the Sigmoid output
inputs must be matrices
source
NumNN.NNConvMethod
NNConv(cLayer::CL, Ai::AbstractArray{T,N}) where {T,N, CL <: ConvLayer}

Perform the forward propagation for cLayer::ConvLayer using the fast implementation from NNlib

Return

  • Dict(:Z => Z, :A => A)
source
NumNN.chainMethod
function chain(X, arr::Array{L,1}) where {L<:Layer}

Returns the input Layer and the output Layer, given an Array of layers arr and the input of the model as an Array X

source
NumNN.chainBackPropMethod
function chainBackProp(
    X::AbstractArray{T1,N1},
    Y::AbstractArray{T2,N2},
    model::Model,
    FCache::Dict{Layer,Dict{Symbol,AbstractArray}},
    cLayer::L = nothing,
    BCache::Dict{Layer,Dict{Symbol,AbstractArray}}=Dict{Layer,Dict{Symbol,AbstractArray}}(),
    cnt = -1;
    tMiniBatch::Integer = -1, #can be used to perform both back and update params
    kwargs...,
) where {L<:Union{Layer,Nothing},T1,T2,N1,N2}

Arguments

  • X := train data

  • Y := train labels

  • model := is the model to perform the back propagation on

  • FCache := the cached values of the forward propagation as Dict{Layer, Dict{Symbol, AbstractArray}}

  • cLayer := is an internal variable to hold the current layer

  • BCache := to hold the cache of the back propagation (internal variable)

  • cnt := an internal variable that counts the current back propagation step to avoid redoing it

Key-word Arguments

  • tMiniBatch := to perform both the back prop and the update of trainable parameters in the same recursive call (if less than 1, the update during back propagation is skipped)

  • kwargs := other key-word arguments to be passed to layerBackProp methods

Return

  • BCache := the cached values of the back propagation
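
Examples

A minimal usage sketch, assuming the forward cache FCache was produced by chainForProp (documented below):

FCache = chainForProp(X_train, X_Input)
BCache = chainBackProp(X_train, Y_train, model, FCache)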
source
NumNN.chainForPropMethod
function chainForProp(
    X::AbstractArray{T,N},
    cLayer::Layer,
    cnt::Integer = -1;
    FCache = Dict{Layer,Dict{Symbol,AbstractArray}}(),
    kwargs...,
) where {T,N}

perform the chained forward propagation using recursive calls

Arguments:

  • X::AbstractArray{T,N} := input of the input layer

  • cLayer::Layer := Input Layer

  • cnt::Integer := an internal counter used to track the layers already processed so they are not redone

Returns

  • Cache::Dict{Layer, Dict{Symbol, Array}} := the output of each layer (A, Z, or both) as a Dict mapping each Layer to a Dict of Symbols and Arrays, for internal use; it also stores the values of Z and A in each layer to be used later in back propagation and increments the layer's forwCount value when passing through it
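
Examples

A minimal usage sketch (the same call appears in the resetCount! example below):

FCache = chainForProp(X_train, X_Input)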
source
NumNN.chainUpdateParams!Method
chainUpdateParams!(model::Model,
                   cLayer::L=nothing,
                   cnt = -1;
                   tMiniBatch::Integer = 1) where {L<:Union{Layer,Nothing}}

Update trainable parameters using recursive call

Arguments

  • model := the model holds the training and update process

  • cLayer := an internal variable for the recursive call that holds the current layer

  • cnt := an internal variable holding the update count of each layer so it is not redone

Key-word Arguments

  • tMiniBatch := the index of the current mini-batch within the whole training set

Return

  • nothing
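
Examples

A minimal usage sketch, assuming it is called after the back propagation of the first mini-batch:

chainUpdateParams!(model; tMiniBatch = 1)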
source
NumNN.dNNConv!Method
function dNNConv!(
    cLayer::CL,
    Ai::AbstractArray{T1,N},
    dAi::AbstractArray{T2,N},
    dZ::AbstractArray{T3,N},
) where {T1, T2, T3, N, CL <: ConvLayer}

Performs the back propagation for cLayer::ConvLayer and saves the values to the pre-allocated Array dAi and the trainable parameters W & B

Arguments

  • cLayer::ConvLayer

  • Ai::AbstractArray{T1,N} := the input activation of cLayer

  • dAi::AbstractArray{T2,N} := pre-allocated to hold the derivative of the activation

  • dZ::AbstractArray{T3,N} := the derivative of the cost to the input of the activation function

Return

nothing

source
NumNN.deepInitWB!Function
initialize W's and B's using

inputs:
    X := is the input of the neural Network
    outLayer := is the output Layer or the current layer
                of initialization
    cnt := is a counter to determine the current step
            and it is an internal variable


    kwargs:
        He := is a true/false array, whether to use the He **et al.** initialization
                or not

        coef := when not using the He **et al.** initialization, this coef
                is multiplied with the random initialization values
        zro := true/false variable whether to initialize W with zeros or not
source
NumNN.getLayerSliceMethod
getLayerSlice(cLayer::Layer, nextLayer::Layer, BCache::Dict{Layer, Dict{Symbol, AbstractArray}})

Fall back method for Layers other than ConcatLayer

source
NumNN.initWB!Method
initialize W and B for a layer with input size (nl_1) and layer size
    of (nl)

returns:
    W := of size (nl, nl_1)
source
NumNN.layerBackPropFunction
layerBackProp(
    cLayer::BatchNorm,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    dAo::AbstractArray = Array{Any,1}(undef,0);
    labels::AbstractArray = Array{Any,1}(undef,0),
    kwargs...,
)

Perform the back propagation of BatchNorm type on the activations and trainable parameters W and B

Argument

  • cLayer := the layer to perform the backprop on

  • model := the Model

  • FCache := the cache values of the forprop

  • BCache := the cache values of the backprop from the front Layer(s)

  • dAo := (for test purpose) the derivative of the front Layer

  • labels := in case this is the output layer

Return

  • A Dict{Symbol, AbstractArray}(:dA => dAi)
source
NumNN.layerBackPropMethod
function layerBackProp(
    cLayer::ConvLayer,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    Ai::AbstractArray = Array{Any,1}(undef,0),
    Ao::AbstractArray = Array{Any,1}(undef,0),
    dAo::AbstractArray = Array{Any,1}(undef,0);
    labels::AbstractArray = Array{Any,1}(undef,0),
    kwargs...
)

Performs the layer back propagation for a ConvLayer

Arguments

  • cLayer::ConvLayer

  • model::Model

  • FCache := the cache of the forward propagation step

  • BCache := the cache of so far done back propagation

  • Ai, Ao, dAo := for test purposes

  • labels := when cLayer is an output Layer

Return

  • Dict(:dA => dAi)
source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::Activation,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    dAo::AbstractArray{T1,N} = Array{Any,1}(undef,0);
    labels::AbstractArray{T2,N} = Array{Any,1}(undef,0),
    kwargs...,
) where {T1,T2,N}

Perform the back propagation of Activation type

Argument

  • cLayer := the layer to perform the backprop on

  • model := the Model

  • FCache := the cache values of the forprop

  • BCache := the cache values of the backprop from the front Layer(s)

  • dAo := (for test purpose) the derivative of the front Layer

  • labels := in case this is the output layer

Return

  • A Dict{Symbol, AbstractArray}(:dA => dAi)
source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::AddLayer,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    dAo::AbstractArray{T1,N} = Array{Any,1}(undef,0);
    labels::AbstractArray{T2,N} = Array{Any,1}(undef,0),
    kwargs...,
) where {T1,T2,N}

Perform the back propagation of AddLayer type

Argument

  • cLayer := the layer to perform the backprop on

  • model := the Model

  • FCache := the cache values of the forprop

  • BCache := the cache values of the backprop from the front Layer(s)

  • dAo := (for test purpose) the derivative of the front Layer

  • labels := in case this is the output layer

Return

  • A Dict{Symbol, AbstractArray}(:dA => dAi)
source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::Flatten,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    dAo::AbstractArray{T1,N} = Array{Any,1}(undef,0);
    labels::AbstractArray{T2,N} = Array{Any,1}(undef,0),
    kwargs...,
) where {T1,T2,N}

Perform the back propagation of Flatten type

Argument

  • cLayer := the layer to perform the backprop on

  • model := the Model

  • FCache := the cache values of the forprop

  • BCache := the cache values of the backprop from the front Layer(s)

  • dAo := (for test purpose) the derivative of the front Layer

  • labels := in case this is the output layer

Return

  • A Dict{Symbol, AbstractArray}(:dA => dAi)
source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::Input,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    dAo::AbstractArray{T1,N} = Array{Any,1}(undef,0);
    labels::AbstractArray{T2,N} = Array{Any,1}(undef,0),
    kwargs...,
) where {T1,T2,N}

Perform the back propagation of Input type

Argument

  • cLayer := the layer to perform the backprop on

  • model := the Model

  • FCache := the cache values of the forprop

  • BCache := the cache values of the backprop from the front Layer(s)

  • dAo := (for test purpose) the derivative of the front Layer

  • labels := in case this is the output layer

Return

  • A Dict{Symbol, AbstractArray}(:dA => dAi)
source
NumNN.layerBackPropMethod

layerBackProp(
    cLayer::PoolLayer,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    Ai::AbstractArray = Array{Any,1}(undef,0),
    Ao::AbstractArray = Array{Any,1}(undef,0),
    dAo::AbstractArray = Array{Any,1}(undef,0);
    labels::AbstractArray = Array{Any,1}(undef,0),
    kwargs...
)

Performs the layer back propagation for a PoolLayer

Arguments

  • cLayer::PoolLayer

  • model::Model

  • FCache := the cache of the forward propagation step

  • BCache := the cache of so far done back propagation

  • Ai, Ao, dAo := for test purposes

  • labels := when cLayer is an output Layer

Return

  • Dict(:dA => dAi)
source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::Activation,
    model::Model,
    actFun::SoS,
    Ao::AbstractArray,
    labels::AbstractArray,
) where {SoS<:Union{Type{softmax},Type{σ}}}

For output Activation layers with softmax and sigmoid activation functions

source
NumNN.layerBackPropMethod
function layerBackProp(
    cLayer::ConvLayer,
    model::Model,
    actFun::SoS,
    Ao::AbstractArray,
    labels::AbstractArray,
) where {SoS<:Union{Type{softmax},Type{σ}}}

Derive the loss function to the input of the activation function when activation is either softmax or σ

Return

  • dZ::AbstractArray := the derivative of the loss function to the input of the activation function
source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::FCLayer,
    model::Model,
    actFun::SoS,
    Ao::AbstractArray,
    labels::AbstractArray,
) where {SoS<:Union{Type{softmax},Type{σ}}}

For output FCLayer layers with softmax and sigmoid activation functions

source
NumNN.layerBackPropMethod
layerBackProp(
    cLayer::FCLayer,
    model::Model,
    FCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    BCache::Dict{Layer, Dict{Symbol, AbstractArray}},
    dAo::AbstractArray{T1,2} = Array{Any,2}(undef,0,0);
    labels::AbstractArray{T2,2} = Array{Any,2}(undef,0,0),
    kwargs...,
) where {T1,T2}

Perform the back propagation of FCLayer type on the activations and trainable parameters W and B

Argument

  • cLayer := the layer to perform the backprop on

  • model := the Model

  • FCache := the cache values of the forprop

  • BCache := the cache values of the backprop from the front Layer(s)

  • dAo := (for test purpose) the derivative of the front Layer

  • labels := in case this is the output layer

Return

  • A Dict{Symbol, AbstractArray}(:dA => dAi)
source
NumNN.layerForPropFunction
layerForProp(
    cLayer::BatchNorm,
    Ai::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...,
)

Perform forward propagation for BatchNorm Layer and trainable parameters W and B

Arguments

  • cLayer := the layer to perform for prop on

  • Ai := is the input activation of the BatchNorm Layer

  • FCache := a cache holder of the for prop

Return

  • Dict( :μ => μ, :Ai_μ => Ai_μ, :Ai_μ_s => Ai_μ_s, :var => var, :Z => Z, :A => Ao, :Ap => Ap, )
source
NumNN.layerForPropFunction
layerForProp(
    cLayer::Input,
    X::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...,
)

Perform forward propagation for Input Layer

Arguments

  • cLayer := the layer to perform for prop on

  • X := is the input data of the Input Layer

  • FCache := a cache holder of the for prop

Return

  • A Dict{Symbol, AbstractArray}(:A => Ao)
source
NumNN.layerForPropFunction
layerForProp(
    cLayer::FCLayer,
    Ai::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...,
)

Perform forward propagation for FCLayer Layer

Arguments

  • cLayer := the layer to perform for prop on

  • Ai := is the input activation of the FCLayer Layer

  • FCache := a cache holder of the for prop

Return

  • A Dict{Symbol, AbstractArray}(:A => Ao, :Z => Z)
source
NumNN.layerForPropFunction
layerForProp(
    cLayer::Activation,
    Ai::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...,
)

Perform forward propagation for Activation Layer

Arguments

  • cLayer := the layer to perform for prop on

  • Ai := is the input activation of the Activation Layer

  • FCache := a cache holder of the for prop

Return

  • A Dict{Symbol, AbstractArray}(:A => Ao)
source
NumNN.layerForPropFunction
layerForProp(
    cLayer::Flatten,
    Ai::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...,
)

Perform forward propagation for Flatten Layer

Arguments

  • cLayer := the layer to perform for prop on

  • Ai := is the input activation of the Flatten Layer

  • FCache := a cache holder of the for prop

Return

  • A Dict{Symbol, AbstractArray}(:A => Ao)
source
NumNN.layerForPropMethod
layerForProp(
    cLayer::AddLayer;
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...,
)

Perform forward propagation for AddLayer Layer

Arguments

  • cLayer := the layer to perform for prop on

  • FCache := a cache holder of the for prop

Return

  • A Dict{Symbol, AbstractArray}(:A => Ao)
source
NumNN.layerForPropMethod
function layerForProp(
    cLayer::ConvLayer,
    Ai::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...
)

Perform the layer forward propagation for a ConvLayer

Arguments

  • cLayer::ConvLayer

  • Ai := optional activation of the previous layer

  • FCache := a Dict holds the outputs of layerForProp of the previous Layer(s)

Returns

  • Dict(:Z => Z, :A => Ao)
source
NumNN.layerForPropMethod
layerForProp(
    cLayer::PoolLayer,
    Ai::AbstractArray = Array{Any,1}(undef,0);
    FCache::Dict{Layer,Dict{Symbol, AbstractArray}},
    kwargs...
)

Perform the layer forward propagation for a PoolLayer

Arguments

  • cLayer::PoolLayer

  • Ai := optional activation of the previous layer

  • FCache := a Dict holds the outputs of layerForProp of the previous Layer(s)

Returns

  • Dict(:A => Ao)
source
NumNN.layerUpdateParams!Method
function layerUpdateParams!(
    model::Model,
    cLayer::FoB,
    cnt::Integer = -1;
    tMiniBatch::Integer = 1,
    kwargs...,
) where {FoB <: Union{FCLayer, BatchNorm}}

update trainable parameters for FCLayer and BatchNorm layers

source
NumNN.oneHotMethod
oneHot(Y; classes = [], numC = 0)

Convert an array of integer classes into one-hot encoding.

Arguments

  • Y := a vector of classes given as numbers

  • classes := the classes explicitly represented (in case not all the classes are present in the given labels)

  • numC := the number of classes, as an alternative to the classes variable

Examples

Y = rand(0:9, 100); # 100 items with classes in 0:9
Y_hot = oneHot(Y)
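
When some classes do not appear in Y they can be supplied through the key-word arguments; a sketch based on the signature above:

Y_hot = oneHot(Y; numC = 10)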

source
NumNN.paddingMethod
function padding(Ai::AbstractArray{T,4},
                 p_H::Integer,
                 p_W::Integer=-1) where {T}

pad zeros to the Array Ai with amount of p values

inputs:
    Ai := Array of type T and dimension N
    p := integer that determines the amount of zero padding, i.e. if Ai is a 3-dimensional array the padding is applied to the first dimension, if Ai is a 4-dimensional array the padding is applied to the first 2 dimensions, and if Ai is a 5-dimensional array the padding is applied to the first 3 dimensions

output:
    a PaddedView array that contains the padded values and the original data without copying it

source
NumNN.paddingSizeMethod
function paddingSize(cLayer::PL, Ai::AbstractArray) where {PL<:PaddableLayer}

Helper function that returns pHhi, pHlo, and (in the case of 2D Conv) pWhi, pWlo, and so on

source
NumNN.predictFunction
predict(model::Model, X_In::AbstractArray, Y_In = nothing; kwargs...)

Run the prediction based on the trained model

Arguments

  • model::Model := the trained Model to predict on

  • X_In := the input Array

  • Y_In := labels (optional) to evaluate the model

Key-word Arguments

  • batchSize := default 32

  • useProgBar := (Bool) whether or not to show the progress bar

Return

  • a Dict of:

    • :YhatValue := Array of the output integer prediction values
    • :YhatProb := Array of the output probabilities
    • :accuracy := the accuracy of prediction in case Y_In is given
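
Examples

A minimal usage sketch based on the return Dict described above (X_test and Y_test are assumed hold-out data):

Preds = predict(model, X_test, Y_test)
Ŷ = Preds[:YhatValue]
acc = Preds[:accuracy]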
source
NumNN.predictBatchFunction
predictBatch(model::Model, X::AbstractArray, Y = nothing; kwargs...)

Predict the output using the model and the input X, and optionally compare with the labels Y

Inputs

  • model::Model := the trained model

  • X::AbstractArray := the input Array

  • Y := the input labels to compare with (optional)

Output

  • a Tuple of

    • Ŷ := the predicted values
    • Ŷ_bool := the predicted labels
    • "accuracy" := the accuracy of the predicted labels
source
NumNN.probToValueMethod
function probToValue(
    actFun::Type{σ},
    probs::AbstractArray{T,N},
    labels::Aa = nothing;
    evalConst = 0.5,
) where {Aa<:Union{<:AbstractArray,Nothing},T,N}

Convert the probabilities returned from the sigmoid function to Bool values (i.e. 0, 1) by comparing against the threshold value evalConst

Return

  • Ŷ_bool := Boolean values of the probabilities

  • acc := Accuracy, when labels are provided

source
NumNN.probToValueMethod
function probToValue(
    actFun::Type{S},
    probs::AbstractArray{T,N};
    labels = nothing,
) where {T,N,S<:softmaxFamily}

Convert the probabilities from softmax or softmax-like functions into Bool values, where the max value gets 1 and the others get zeros

Return

  • Ŷ_bool := Boolean values of the probabilities

  • acc := Accuracy, when labels are provided

source
NumNN.resetCount!Method
resetCount!(outLayer::Layer, cnt::Symbol)

to reset a counter in all layers under outLayer.

Arguments

  • outLayer::Layer := the layer from which to start resetting the counter

  • cnt::Symbol := the counter to be reset

Examples

X_train = rand(128, 100);

X_Input = Input(X_train);
X = FCLayer(50, :relu)(X_Input);
X_out = FCLayer(10, :softmax)(X);

FCache = chainForProp(X_train, X_Input);

# Now to reset the forwCount in all layers

resetCount!(X_out, :forwCount)
source
NumNN.trainMethod
train(
      X_train,
      Y_train,
      model::Model,
      epochs;
      testData = nothing,
      testLabels = nothing,
      kwargs...,
      )

Repeat the training (forward/backward propagation and parameter updates)

Argument

  • X_train := the training data

  • Y_train := the training labels

  • model := the model to train

  • epochs := the number of repetitions of the training phase

Key-word Arguments

  • testData := to evaluate the training process over test data too

  • testLabels := to evaluate the training process over test data too

  • batchSize := the mini-batch size used during training

  • useProgBar := (Bool) whether to show the progress bar

  • kwargs := other key-word Arguments to pass for the lower functions in hierarchy

Return

  • A Dict{Symbol, Vector} of:

    • :trainAccuracies := an Array of the accuracies of training data at each epoch

    • :trainCosts := an Array of the costs of training data at each epoch

    • In case testData and testLabels are given:

      • :testAccuracies := an Array of the accuracies of test data at each epoch
      • :testCosts := an Array of the costs of test data at each epoch
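
Examples

A minimal usage sketch based on the signature above:

TrainResults = train(X_train, Y_train, model, 10; batchSize = 64)
costs = TrainResults[:trainCosts]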
source
NumNN.unrollFunction
unroll(cLayer::Conv3D, AiS::Tuple, param::Symbol=:W)

Unroll the param of Conv3D into a 2D matrix

Arguments

  • cLayer := the layer whose parameters are to be unrolled

  • AiS := the padded input, used to determine the size and shape of the output of unroll

  • param := Conv3D parameter to be unrolled

Return

  • K := 2D Matrix of the param
source
NumNN.unrollFunction
unroll(cLayer::Conv2D, AiS::Tuple, param::Symbol=:W)

Unroll the param of Conv2D into a 2D matrix

Arguments

  • cLayer := the layer whose parameters are to be unrolled

  • AiS := the padded input, used to determine the size and shape of the output of unroll

  • param := Conv2D parameter to be unrolled

Return

  • K := 2D Matrix of the param
source
NumNN.unrollFunction
unroll(cLayer::Conv1D, AiS::Tuple, param::Symbol=:W)

Unroll the param of Conv1D into a 2D matrix

Arguments

  • cLayer := the layer whose parameters are to be unrolled

  • AiS := the padded input, used to determine the size and shape of the output of unroll

  • param := Conv1D parameter to be unrolled

Return

  • K := 2D Matrix of the param
source
NumNN.PaddableLayerType

Summary

abstract type PaddableLayer <: Layer

Abstract Type to hold all Paddable Layers (i.e. ConvLayer & PoolLayer)

Subtypes

ConvLayer
PoolLayer

Supertype Hierarchy

PaddableLayer <: Layer <: Any
source
Base.getindexMethod
getindex(it, key; default) = haskey(it, key) ? it[key] : default

Examples

D = Dict(:A=>"A", :B=>"B")

A = getindex(D, :A)

## this will return an error
#C = getindex(D, :C)

#instead
C = getindex(D, :C; default="C")
#this will return the `String` "C"
source
NumNN.costMethod
function cost(
    loss::Type{binaryCrossentropy},
    A::AbstractArray{T1,N},
    Y::AbstractArray{T2,N},
) where {T1, T2, N}

Compute the cost for binaryCrossentropy loss function

source
NumNN.costMethod
function cost(
    loss::Type{categoricalCrossentropy},
    A::AbstractArray{T1,N},
    Y::AbstractArray{T2,N},
) where {T1, T2, N}

Compute the cost for categoricalCrossentropy loss function
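
A minimal usage sketch; here A would be the model's output probabilities and Y the one-hot labels:

J = cost(categoricalCrossentropy, A, Y)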

source