# Associative Memory Network


- An **associative memory network** is a content-addressable memory structure that stores associations between a set of input patterns and a set of output patterns. A content-addressable memory permits the recollection of stored knowledge based on the degree of similarity between an input pattern and the patterns stored in the memory. The figure given below illustrates a memory containing the names of various people.
- If the given memory is content addressable, an incorrect key, such as a misspelling of **"Albert Einstein"**, is sufficient to recover the correct name "Albert Einstein." This kind of memory model is therefore robust and fault-tolerant, providing a form of error-correction capability.

There are two types of associative memory: auto-associative memory and hetero-associative memory.

*Figure: Associative Memory Network*

## Auto-associative Memory

- An auto-associative memory recovers the previously stored pattern that most closely resembles the current input pattern. It is also referred to as an auto-associative correlator.
- Consider **x[1], x[2], x[3], ….., x[M]** to be the M stored pattern vectors, where each x[m] encodes features obtained from a pattern. When a noisy or incomplete version of x[m] is presented, the auto-associative memory returns the stored pattern vector x[m].

*Figure: Auto-associative Memory Network*
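A minimal sketch of auto-associative recall, assuming bipolar (+1/-1) pattern vectors and Hebbian outer-product encoding with no self-connections (the article does not fix these details; all pattern values are illustrative):

```python
import numpy as np

# Two bipolar (+1/-1) patterns to store (illustrative values).
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])

# Hebbian encoding: sum of outer products, with the diagonal zeroed
# so that no unit feeds back onto itself.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

# Recall: corrupt the first pattern in one position, then apply the
# weights and take the sign to recover the stored pattern.
noisy = patterns[0].copy()
noisy[2] = -noisy[2]                 # flip one component
recalled = np.sign(W @ noisy)

print(recalled)                      # matches patterns[0]
```

Even though one component of the input was flipped, the network settles on the original stored pattern, which is exactly the error-correction behavior described above.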

## Hetero-associative Memory

- In a hetero-associative memory, the recovered pattern generally differs from the input pattern not only in type and format but also in content. It is also known as a hetero-associative correlator.
- Consider a number of key-response pairs **{a(1), x(1)}, {a(2), x(2)}, ….., {a(M), x(M)}**. The hetero-associative memory returns the pattern vector x(m) when a noisy or incomplete version of the key a(m) is given.
- Neural networks are commonly used to implement these associative memory models and are then called neural associative memory (NAM). The linear associator is the simplest artificial neural associative memory.
- These models follow distinct neural network architectures to memorize data.
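A corresponding sketch of hetero-associative recall, again assuming bipolar patterns and outer-product encoding (the key and response values below are illustrative, not from the article):

```python
import numpy as np

# Hypothetical key/response pairs {a(k), x(k)} with bipolar components.
keys = np.array([
    [1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1],
])
responses = np.array([
    [1, -1, 1],
    [-1, 1, 1],
])

# Hetero-associative encoding: W = sum_k a(k) x(k)^T  (a 6x3 matrix here).
W = sum(np.outer(a, x) for a, x in zip(keys, responses))

# Present a noisy version of key a(1): one component flipped.
noisy_key = keys[0].copy()
noisy_key[2] = -noisy_key[2]

recalled = np.sign(noisy_key @ W)
print(recalled)                      # recovers responses[0]
```

Note that the recovered vector has a different length and content from the key, which is what distinguishes hetero-associative from auto-associative recall.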

## Working of Associative Memory

- Associative memory is a repository of associated pattern pairs. When the repository is triggered with a pattern, the associated pattern pair appears at the output. The input may be an exact or partial representation of a stored pattern.
- If the memory is presented with an input pattern, say α, the associated pattern ω is recovered automatically.
- The following terms relate to the associative memory network:

*Figure: Associative Memory Network*

## Encoding or Memorization

- Encoding or memorization refers to building the associative memory: constructing an association weight matrix W such that, when an input pattern is presented, the stored pattern associated with that input is recovered.

(w_{ij})_{k} = (p_{i})_{k} (q_{j})_{k}

Where,

(p_{i})_{k} represents the i-th component of pattern p_{k}, and

(q_{j})_{k} represents the j-th component of pattern q_{k},

for i = 1, 2, …, m and j = 1, 2, …, n.

Constructing the association weight matrix W is accomplished by summing the individual correlation matrices W_{k}, i.e.,

W = α ∑_{k} W_{k}

Where **α** is a proportionality or normalizing constant.
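As a concrete check of the encoding equation, the sketch below builds one correlation matrix W_k as the outer product of a pattern pair (p_k, q_k) and then sums the matrices over two hypothetical pairs with α = 1 (all values are illustrative):

```python
import numpy as np

# Hypothetical pattern pair (p_k, q_k); alpha is the proportionality constant.
p_k = np.array([1, -1, 1])
q_k = np.array([-1, 1])

# (w_ij)_k = (p_i)_k * (q_j)_k -- i.e. the outer product of p_k and q_k.
W_k = np.outer(p_k, q_k)

# Element-by-element check of the defining equation.
for i in range(len(p_k)):
    for j in range(len(q_k)):
        assert W_k[i, j] == p_k[i] * q_k[j]

# Full memory: W = alpha * sum of the individual correlation matrices W_k.
alpha = 1.0
pairs = [(p_k, q_k), (np.array([-1, 1, 1]), np.array([1, 1]))]
W = alpha * sum(np.outer(p, q) for p, q in pairs)
print(W)
```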

## Errors and Noise

- The input pattern may contain errors and noise, or may be an incomplete version of some previously encoded pattern. When a corrupted input pattern is presented, the network recovers the stored pattern that is closest to the actual input pattern. The presence of noise or errors causes only a gradual decrease in performance rather than total degradation. Associative memories are therefore robust and fault-tolerant, because many processing units perform highly parallel and distributed computations.

## Performance Measures

- The performance of an associative memory with respect to correct recovery is measured by memory capacity and content addressability. Memory capacity is defined as the maximum number of associated pattern pairs that can be stored and correctly recovered. **Content addressability** refers to the ability of the network to recover the correct stored pattern.
- If the input patterns are mutually orthogonal, perfect recovery is possible. If the stored input patterns are not mutually orthogonal, imperfect recovery can occur due to cross-talk among the patterns.
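The effect of orthogonality can be seen directly in the raw (pre-threshold) output of an outer-product memory. In the sketch below (pattern values are illustrative), orthogonal keys yield an exact multiple of the stored response, while correlated keys add a cross-talk term:

```python
import numpy as np

def raw_recall(keys, responses, probe):
    """Raw linear recall: probe times the sum of outer products a_k x_k^T."""
    W = sum(np.outer(a, x) for a, x in zip(keys, responses))
    return probe @ W

responses = np.array([[1, -1], [-1, 1]])

# Mutually orthogonal keys: output is a clean multiple of the stored response.
ortho = np.array([[1, 1, 1, 1], [1, -1, 1, -1]])
print(raw_recall(ortho, responses, ortho[0]))   # [ 4 -4] = 4*x(1), no cross-talk

# Correlated keys: an extra (a(1) . a(2)) * x(2) cross-talk term leaks in.
corr = np.array([[1, 1, 1, 1], [1, 1, 1, -1]])
print(raw_recall(corr, responses, corr[0]))     # [ 2 -2] = 4*x(1) + 2*x(2)
```

With only two patterns the sign of the output still comes out right here, but as more correlated patterns are stored, the accumulated cross-talk eventually corrupts recovery, which is what limits memory capacity.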

## Associative Memory Models

- The linear associator is the simplest and most widely used associative memory model.
- It is a collection of simple processing units that together exhibit a fairly complex collective computational capability and behavior. The Hopfield model computes its output recurrently over time until the system becomes stable. Hopfield networks are constructed from bipolar units and employ a learning procedure. The Hopfield model is an **auto-associative memory** proposed by John Hopfield in 1982. Bidirectional Associative Memory (BAM) and the Hopfield model are other popular artificial neural network models used as associative memories.
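A minimal sketch of the Hopfield-style recurrent recall just described, assuming bipolar units, outer-product weights with a zero diagonal, and synchronous updates repeated until the state stops changing (all pattern values are illustrative):

```python
import numpy as np

# Store two bipolar patterns with Hebbian weights and no self-connections.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

# Start from a corrupted version of the first pattern.
state = patterns[0].copy()
state[0] = -state[0]                      # corrupt one unit

# Update all units repeatedly until the state is stable.
for _ in range(10):
    new_state = np.sign(W @ state)
    new_state[new_state == 0] = 1         # break ties consistently
    if np.array_equal(new_state, state):
        break                             # reached a stable state
    state = new_state

print(state)                              # settles on patterns[0]
```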

## Network architectures of Associative Memory Models

- Neural associative memory models use various neural network architectures to memorize data. The network comprises either one layer or two layers. The linear associator model is a feed-forward network comprising two layers of processing units: the first layer serves as the input layer and the second as the output layer. The **Hopfield model** uses a single layer of processing elements in which each unit is connected to every other unit in the network. The **bidirectional associative memory** (BAM) model is the same as the linear associator, but the associations are bidirectional.
- The neural network architectures of these models, and the structure of the corresponding association weight matrix W of the associative memory, are depicted.

## Linear Associator model (two layers)

- The linear associator model is a **feed-forward** network in which the output is produced by a single feed-forward computation. The model comprises two layers of processing units, with the inputs connected directly to the outputs through a set of weights. Connections carrying weights link each input unit to each output unit. Each output node computes the sum of the products of the weights and the corresponding inputs.

All p input units are connected to all q output units via the association weight matrix

W = [w_{ij}]_{p×q}, where w_{ij} describes the strength of the unidirectional connection from the i^{th} input unit to the j^{th} output unit.

The connection weight matrix stores the z different associated pattern pairs **{(X_{k}, Y_{k}); k = 1, 2, 3, …, z}**. Constructing an associative memory means building the connection weight matrix W such that, when an input pattern is presented, the stored pattern associated with that input is recovered.
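Putting this together, a minimal linear-associator sketch with p = 4 input units, q = 2 output units, and z = 2 illustrative pattern pairs; the weight matrix is taken to be the sum of the correlation matrices, consistent with the encoding rule described earlier:

```python
import numpy as np

# Illustrative stored pattern pairs {(X_k, Y_k); k = 1, 2}.
X = np.array([[1, -1, 1, -1],
              [1, 1, -1, -1]])     # input patterns X_k (rows), p = 4
Y = np.array([[1, -1],
              [-1, 1]])            # associated output patterns Y_k, q = 2

# Connection weight matrix W (p x q): sum of correlation matrices X_k^T Y_k.
W = X.T @ Y

# Single feed-forward pass: each output unit sums its weighted inputs.
output = np.sign(X[0] @ W)
print(output)                      # recovers Y[0]
```

Because the two input patterns here are mutually orthogonal, a single feed-forward pass recovers the stored output pattern exactly.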
