Iris Publishers - Global Journal of Engineering Sciences (GJES)
Artificial Neural Networks and Hopfield Type Modeling
Authored by Haydar Akca
From the mathematical point of view, an artificial neural network corresponds to a nonlinear transformation of some inputs into certain outputs. Many types of neural networks have been proposed and studied in the literature, and the Hopfield-type network has become an important one due to its potential for applications in various fields of daily life.
A neural network performs computational tasks such as associative memory, pattern recognition, optimization, model identification, and signal processing on a given pattern via interaction between a number of interconnected units characterized by simple functions. There are a number of terminologies commonly used for describing neural networks. Neural networks can be characterized by an architecture or topology, node characteristics, and a learning mechanism [1]. The interconnection topology consists of a set of processing elements arranged in a particular fashion. The processing elements are connected by links that have weights associated with them. Each processing element is associated with:
• A state of activation (state variable)
• An output function (transfer function)
• A propagation rule for the transfer of activation between processing elements
• An activation rule, which determines the new state of activation of a processing element from its inputs, the weights associated with those inputs, and its current activation.
Neural networks may also be classified based on the type of input, which is either binary or continuous valued, or by whether the networks are trained with or without supervision. There are many different types of network structures, but the main types are feed-forward networks and recurrent networks. Feed-forward networks have unidirectional links, usually from input layers to output layers, and there are no cycles or feedback connections. In recurrent networks, links can form arbitrary topologies and there may be arbitrary feedback connections. Recurrent neural networks have been very successful in time series prediction. Hopfield networks are a special case of recurrent networks: these networks have feedback connections, have no hidden layers, and the weight matrix is symmetric.
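To make these properties concrete, the following minimal sketch (Python with NumPy; the eight-unit pattern and all names are illustrative assumptions, not taken from the article) stores one bipolar pattern with a Hebbian rule and recalls it from a corrupted probe. Note the symmetric, zero-diagonal weight matrix and the feedback updates.

import numpy as np

# Minimal discrete Hopfield network: one layer, feedback updates,
# symmetric weights with zero diagonal. Units are bipolar (+1/-1).

def store(patterns):
    """Hebbian rule: W = (1/n) * sum_p x_p x_p^T, with zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)            # no self-connections
    return w                            # symmetric by construction

def recall(w, x, steps=10):
    """Asynchronous feedback updates: each unit moves toward sign(W x)."""
    x = x.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(x)):
            x[i] = 1.0 if w[i] @ x >= 0 else -1.0
    return x

# Hypothetical 8-unit pattern; recall it from a copy with two units flipped.
pattern = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
w = store(pattern)
probe = pattern[0].copy()
probe[:2] *= -1
print(recall(w, probe))                 # converges back to the stored pattern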
Neural networks are analytic techniques capable of predicting new observations from other observations after executing a process of so-called learning from existing data. Neural network techniques can also be used as a component of analyses designed to build explanatory models. There is now neural network software that uses sophisticated algorithms to contribute directly to the model-building process.
In 1943, the neurophysiologist Warren McCulloch and the mathematician Walter Pitts [2] wrote a paper on how neurons might work. In order to describe how neurons in the brain might work, they modeled a simple neural network using electrical circuits. As computers became more advanced in the 1950s, it became possible to simulate a hypothetical neural network. In 1982, John Hopfield presented a paper [3]; his approach was to create more useful machines by using bidirectional connections. The model proposed by Hopfield, also known as Hopfield's graded response neural network, is based on an analogue circuit consisting of capacitors, resistors, and amplifiers. Previously, the connections between neurons had been one-way only. Around the same time, scientists introduced a “hybrid network” with multiple layers, each layer using a different problem-solving strategy.
Today, neural networks are used in several applications. The fundamental idea behind neural networks is that if something works in nature, it must be able to work in computers. The future of neural networks, though, lies in the development of hardware. Research that concentrates on developing neural network hardware is progressing relatively slowly. Due to the limitations of processors, neural networks can take weeks to learn. Researchers are now trying to create a so-called “silicon compiler” or “organic compiler” to generate a specific type of integrated circuit that is optimized for the application of neural networks. Digital, analog, and optical chips are the different types of chips being developed.
The brain manages to perform extremely complex tasks. The brain is principally composed of about 10 billion neurons, each connected to about 10,000 other neurons. Each neuronal cell body (soma) is connected with input and output channels (dendrites and axons). Each neuron receives electrochemical inputs from other neurons at the dendrites. If the sum of these electrical inputs is sufficiently powerful to activate the neuron, it transmits an electrochemical signal along the axon and passes this signal to the other neurons whose dendrites are attached at any of the axon terminals. These attached neurons may then fire. It is important to note that a neuron fires only if the total signal received at the cell body exceeds a certain level. The neuron either fires or it does not; there are no different grades of firing. So, our entire brain is composed of these interconnected electrochemical transmitting neurons. This is the model on which artificial neural networks are based. Thus far, artificial neural networks have not come close to modeling the complexity of the brain, but they have proven to be good at problems that are easy for a human but difficult for a traditional computer, such as image recognition and predictions based on past knowledge.
The fundamental difference between traditional computers and artificial neural networks is the way in which they function. One of the major advantages of a neural network is its ability to do many things at once. With traditional computers, processing is sequential: one task, then the next, then the next, and so on. While computers function logically with a set of rules and calculations, artificial neural networks can function via equations, pictures, and concepts. Based upon the way they function, traditional computers have to learn by rules, while artificial neural networks learn by example, by doing something and then learning from it.
Hopfield neural networks have found applications in a broad range of disciplines [3-5] and have been studied in both the continuous- and discrete-time cases by many researchers. Most neural networks can be classified as either continuous or discrete. In spite of this broad classification, there are many real-world systems and natural processes that behave in a piecewise continuous style interlaced with instantaneous and abrupt changes (impulses). The periodic dynamics of Hopfield neural networks is one of the realistic and attractive modeling problems for researchers. Hopfield networks are most appropriate when the input can be represented in exact binary form. Signal transmission between the neurons causes time delays; therefore, the dynamics of Hopfield neural networks with discrete or distributed delays is of fundamental concern. Many neural networks today use fewer than 100 neurons and only need occasional training. In these situations, software simulation is usually found sufficient. Current neural network technologies are expected to improve in the very near future as researchers develop better methods and network architectures.
In the present paper, we briefly summarize the historical background and development of artificial neural networks, and present recent formulations of the continuous and discrete counterparts of a class of Hopfield-type neural network models using functional differential equations in the presence of delays, periodicity, impulses, and finite distributed delays. Combining some ideas of [4,6-10] and [11], we obtain a sufficient condition for the existence and global exponential stability of a unique periodic solution of the discrete system considered.
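For orientation, a representative member of this class of models, written here in LaTeX in a form consistent with the delayed, impulsive Hopfield-type systems studied in [4,6-10] (the notation is illustrative; the paper's exact formulation should be consulted), is:

% Hopfield-type network with discrete delays and impulses (representative form).
% x_i: state of neuron i; a_i > 0: self-decay rate; b_ij, c_ij: connection weights;
% f_j, g_j: activation functions; tau_j >= 0: transmission delays;
% I_i: external input; t_k: impulse instants; J_ik: impulsive jump operators.
\begin{aligned}
  \dot{x}_i(t) &= -a_i x_i(t)
    + \sum_{j=1}^{n} b_{ij} f_j\big(x_j(t)\big)
    + \sum_{j=1}^{n} c_{ij} g_j\big(x_j(t - \tau_j)\big)
    + I_i(t), \qquad t \neq t_k, \\
  \Delta x_i(t_k) &= x_i(t_k^{+}) - x_i(t_k^{-}) = J_{ik}\big(x_i(t_k)\big), \qquad k = 1, 2, \ldots
\end{aligned}

Setting J_ik ≡ 0 and tau_j = 0 recovers the classical continuous-time Hopfield model; the impulsive terms capture the instantaneous, abrupt state changes mentioned above.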
Artificial Neural Networks (ANN)
An artificial neural network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information; see [12] and the references given therein for more details. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts [2], but the technology available at that time did not allow them to do too much. Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. Much is still unknown about how the brain trains itself to process information, so theories abound.

An artificial neuron is a device with many inputs and one output (Figure 1). The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not.

An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward neural network (Figure 2) that has been trained accordingly. During training, the network learns to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to the taught input pattern that is least different from the given pattern.
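As a small illustration of this behavior (a sketch only: the patterns, targets, learning rate, and network size are invented for the example), the following Python code trains a single-layer feed-forward network with a Delta-rule update of the form (target - output) * input, then probes it with a corrupted version of a taught pattern:

import numpy as np

# Single-layer feed-forward network trained with a Delta-rule update.
# All patterns and dimensions here are hypothetical, chosen for illustration.

rng = np.random.default_rng(0)
X = np.array([[ 1,  1, -1, -1],      # taught input pattern A
              [-1, -1,  1,  1]],     # taught input pattern B
             dtype=float)
T = np.array([[1.0],                 # target output taught for A
              [0.0]])                # target output taught for B

w = rng.normal(scale=0.1, size=(4, 1))
b = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    y = sigmoid(X @ w + b)           # current outputs for both taught patterns
    err = T - y                      # Delta rule: the error drives the update
    w += 0.1 * X.T @ err
    b += 0.1 * err.sum(axis=0)

probe = np.array([1.0, 1.0, -1.0, 1.0])   # pattern A with one unit flipped
print(sigmoid(probe @ w + b))              # well above 0.5, i.e. closest to A's output

The corrupted probe still produces an output much closer to the one taught for pattern A than for pattern B, which is the "least different taught pattern" behavior described above.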
Hopfield-type neural networks are mainly applied either as associative memories or as optimization solvers. In both applications, the stability of the network is a prerequisite. The equilibrium points (stable states) of the network characterize all possible optimal solutions of the optimization problem, and the stability of the network guarantees convergence to the optimal solutions. Therefore, stability is fundamental for network design. As a result, the stability analysis of Hopfield-type networks has received extensive attention from many researchers; see [4,6-9,11,13] and the references given therein.

The neuron described above does not do anything that conventional computers do not already do. A more sophisticated neuron (Figure 3) is the McCulloch and Pitts model (MCP). The difference from the previous model is that the inputs are ‘weighted’: the effect that each input has on decision making depends on the weight of the particular input. The weight of an input is a number which, when multiplied by the input, gives the weighted input. These weighted inputs are then added together, and if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire. In mathematical terms, the neuron fires if and only if
X1W1 + X2W2 + X3W3 + ... > T,

where Wi, i = 1, 2, ..., are the weights, Xi, i = 1, 2, ..., the inputs, and T a threshold.
The addition of input weights and of the threshold makes this neuron very flexible and powerful. The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to ‘adapt’; the most used ones are the Delta rule and back-error propagation. The former is used in feed-forward networks and the latter in feedback networks.
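A direct translation of this firing rule into Python (a minimal sketch; the weights and threshold are arbitrary illustrative values, not taken from the article) might look like:

# McCulloch-Pitts (MCP) neuron: fires if and only if the weighted
# input sum exceeds the threshold T (all-or-nothing firing).

def mcp_fires(inputs, weights, threshold):
    """Return True iff sum_i X_i * W_i > T."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# Example: two of three inputs active; fires because 0.5 + 0.9 = 1.4 > 1.0.
print(mcp_fires(inputs=[1, 0, 1], weights=[0.5, 0.8, 0.9], threshold=1.0))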
Neural networks have wide applicability to real-world business problems. In fact, they have already been successfully applied in many industries. Since neural networks are best at identifying patterns or trends in data, they are well suited for prediction and forecasting needs, including sales forecasting, industrial process control, customer research, data validation, risk management, and target marketing.
ANNs are also used in the following specific paradigms: recognition of speakers in communications; diagnosis of hepatitis; recovery of telecommunications from faulty software; interpretation of multi-meaning Chinese words; undersea mine detection; texture analysis; three-dimensional object recognition; hand-written word recognition; and facial recognition.
To read more about this article: https://irispublishers.com/gjes/fulltext/artificial-neural-networks-and-hopfield.ID.000601.php