A multilayer perceptron (MLP) is a feed-forward neural network, typically trained by the backpropagation algorithm, that is used to predict the response of one or more variables from one or more explanatory variables. In this study, we selected the appropriate monthly period of climate variables based on the highest correlation coefficient between SOS/EOS and the climate variables. As a pre-processing step, all data were normalized to the range 0–1; this is required to avoid saturation of the activation function. The 110 observations were randomly separated into a training set (2/3 of the records) and an independent test set (1/3 of the records). We adopted an MLP architecture to predict SOS/EOS from the explanatory (climate) variables. A traditional MLP consists of three layers: the first is the input layer (i); the second is the hidden layer (h), which receives and processes information from the input layer; and the third is the output neurons (o), which form the response of the network. A trial-and-error approach was used to determine the optimal neural network. To generate the optimal models, we compared combinations of several parameters: the number of hidden nodes, the activation function, the output function, and the starting weights. We evaluated models with the number of nodes in the hidden layer ranging from 1 to 20, combined with two activation functions, ‘logistic’ and ‘hyperbolic tangent’, used to transfer information from the input nodes to the hidden layer and from the hidden layer to the output. In addition, we implemented a linear activation function to calculate the output layer. Two different sets of starting weights, -0.3 and 0.3, were implemented for each model. We also applied 5-fold cross-validation, which is recommended for small data sets.
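The pre-processing steps above (0–1 normalization and a random 2/3 vs. 1/3 split of the 110 records) can be sketched as follows. This is an illustrative Python/numpy sketch, not the study's own R code; the function names and the fixed random seed are assumptions made for the example.

```python
import numpy as np

def minmax_normalize(x):
    """Scale a 1-D array to the 0-1 range, as done in pre-processing
    to avoid saturating sigmoid-type activation functions."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def train_test_split(n, train_frac=2/3, seed=0):
    """Randomly separate n record indices into a training set
    (2/3 of the records) and an independent test set (1/3)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(round(n * train_frac))
    return idx[:n_train], idx[n_train:]
```

With 110 records this yields 73 training and 37 test observations.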
The test set was used to judge the quality of the trained model via the linear relationship, indicated by the coefficient of determination (R2), between the observed values and the values predicted by the neural network models.
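A minimal sketch of this evaluation step, assuming R2 is computed as the squared Pearson correlation between observed and predicted values (one common convention; the study does not state its exact formula):

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination (R2), computed here as the
    squared Pearson correlation between observed and predicted."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    r = np.corrcoef(obs, pred)[0, 1]
    return r ** 2
```

A perfect linear relationship between observations and predictions gives R2 = 1, regardless of slope or offset.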
Various studies have sought to “illuminate the black box” of ANNs, which are believed to offer little explanation of the relationships among the variables described by the model. Sensitivity analysis (SA) has been widely used to investigate the contribution of input variables in a neural network. We applied sensitivity analysis to interpret the importance of the input variables and to analyze the neural network's response to them using the following methods: Garson's algorithm, Olden's method, and Lek's profile method. We performed the sensitivity analyses using the R package NeuralNetTools developed by Beck.
The Garson algorithm was established by Garson and later modified. This method essentially partitions all of the connection weights in the network architecture from the input layer to the hidden neurons and from the hidden neurons to the output. The details and implementation of the Garson algorithm are available in the appendix of Goh. The method sums the connection weights for all input nodes and then scales them by absolute magnitude to describe the relative importance of the input variables.
Similar to the previous method, the Olden algorithm uses a connection-weight approach to obtain the relative importance of the variables in the input layer. This method quantifies the raw values of the input-hidden and hidden-output connection weights, whereas the Garson algorithm uses absolute values. The summed connection weights can therefore carry a positive or negative sign of variable importance.
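The contrast with Garson's algorithm is visible in a short sketch: Olden's method sums the raw, signed products of the connection weights, so importances can be negative. Again this is an illustrative Python version, not the NeuralNetTools implementation the study used:

```python
import numpy as np

def olden(w_ih, w_ho):
    """Olden's connection-weight method: sum the raw (signed) products
    of input-hidden and hidden-output weights for each input, so the
    importance keeps its sign (unlike Garson's absolute values)."""
    w_ih = np.asarray(w_ih, dtype=float)   # (n_inputs, n_hidden)
    w_ho = np.asarray(w_ho, dtype=float)   # (n_hidden,)
    return (w_ih * w_ho).sum(axis=1)
```

An input whose weight products cancel or are negative ends up with low or negative importance, indicating the direction of its influence on the output.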
Another method to determine the sensitivity of the input variables in a neural network is the Lek profile approach. The principle of the Lek profile technique is to evaluate the behavior of the output variable in response to an input variable using profile plots, wherein the other input variables are held at percentile classes (e.g., 0th, 25th, 50th, 75th, 100th). Each input variable is processed as described above, and the resulting response curves show how the output changes with the input variable.
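The profiling procedure described above can be sketched generically: vary one input across its observed range while the remaining inputs are fixed at a given percentile, and record the model's response for each percentile class. This is a hedged Python sketch assuming a generic `predict` callable (the study used NeuralNetTools' `lekprofile` in R):

```python
import numpy as np

def lek_profile(predict, X, var, percentiles=(0, 25, 50, 75, 100), n_steps=20):
    """Lek's profile: vary input column `var` over its observed range
    while holding the other inputs at fixed percentile values.
    predict: callable mapping an (n, p) array to (n,) predictions
    X:       (n, p) matrix of input data
    Returns dict {percentile: (grid, responses)}."""
    X = np.asarray(X, dtype=float)
    grid = np.linspace(X[:, var].min(), X[:, var].max(), n_steps)
    profiles = {}
    for p in percentiles:
        base = np.percentile(X, p, axis=0)   # other inputs held at pth percentile
        Xp = np.tile(base, (n_steps, 1))
        Xp[:, var] = grid                    # profiled input sweeps its range
        profiles[p] = (grid, predict(Xp))
    return profiles
```

Plotting `responses` against `grid` for each percentile class reproduces the family of response curves the method is known for.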