A Presentation on Perceptron (Neural Network)
By:
Edutechlearners
www.edutechlearners.com
 The perceptron, first proposed by Rosenblatt (1958), is a simple
neuron that is used to classify its input into one of two categories.
 A perceptron is a single processing unit of a neural network. A
perceptron uses a step function that returns +1 if the weighted sum of its
inputs is ≥ 0, and -1 otherwise.
[Figure: a perceptron with inputs x1, x2, . . . , xn, weights w1, w2, . . . , wn, bias b, net input v, activation function φ(v), and output y.]
 While in actual neurons the dendrite receives electrical signals from the
axons of other neurons, in the perceptron these electrical signals are
represented as numerical values. At the synapses between the dendrite
and axons, electrical signals are modulated in various amounts. This is
also modeled in the perceptron by multiplying each input value by a
value called the weight.
 An actual neuron fires an output signal only when the total strength of
the input signals exceeds a certain threshold. We model this
phenomenon in a perceptron by calculating the weighted sum of the
inputs to represent the total strength of the input signals, and applying a
step function on the sum to determine its output. As in biological neural
networks, this output is fed to other perceptrons.
 A perceptron can be defined as a single artificial neuron that
computes its weighted input with the help of the threshold activation
function or step function.
 It is also called a TLU (Threshold Logic Unit).
[Figure: a TLU with inputs x1, x2, . . . , xn, weights w1, w2, . . . , wn, and a bias weight w0 applied to a fixed input x0 = 1.]
The unit computes the output
o = f(Ʃi=0..n wi xi), where f = 1 if Ʃi=0..n wi xi > 0, and -1 otherwise.
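As a minimal sketch of this computation (Python; the function name and the example weights are illustrative assumptions, not from the slides):

```python
def tlu_output(weights, inputs):
    """Threshold Logic Unit: weights[0] is the bias weight w0,
    applied to a fixed bias input x0 = 1."""
    total = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if total > 0 else -1

# Example: weights w1 = w2 = 0.5 and bias weight w0 = -0.7
print(tlu_output([-0.7, 0.5, 0.5], [1, 1]))  # 1  (0.5 + 0.5 - 0.7 > 0)
print(tlu_output([-0.7, 0.5, 0.5], [1, 0]))  # -1 (0.5 - 0.7 <= 0)
```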
Supervised learning is used when we have a set of training data. This
training data consists of input data paired with the correct output values.
The output values are often referred to as target values. This training
data is used by learning algorithms like backpropagation or genetic
algorithms.
 In machine learning, the perceptron is an algorithm for the supervised
classification of an input into one of two possible categories
(multiclass variants also exist).
 A perceptron can be defined as a single artificial neuron that computes its
weighted input with the help of the threshold activation function or step
function.
 The perceptron is used for binary classification.
 The perceptron can only model linearly separable classes.
 First, train a perceptron for a classification task.
- Find suitable weights such that the training examples are
correctly classified.
- Geometrically, try to find a hyper-plane that separates the examples of
the two classes.
 Linear separability is the concept wherein the separation of the input
space into regions is based on whether the network response is positive
or negative.
 When the two classes are not linearly separable, it may be desirable to
obtain a linear separator that minimizes the mean squared error.
 Definition : Sets of points in 2-D space are linearly separable if the sets
can be separated by a straight line.
 Generalizing, a set of points in n-dimensional space is linearly
separable if there is a hyper-plane of (n-1) dimensions that separates the
sets.
 Consider a network having a positive response in the first quadrant and
a negative response in all other quadrants (the AND function) with either
binary or bipolar data; the decision line is then drawn separating the
positive response region from the negative response region.
 The net input to the output neuron is:
Yin = w0 + Ʃi xi wi
where Yin is the net input to the output neuron,
i runs over the input neurons, and
w0 is the bias weight.
 The following relation gives the boundary region of the net
input:
w0 + Ʃi xi wi = 0
 This equation can be used to determine the decision
boundary between the regions where Yin > 0 and Yin < 0.
 Depending on the number of input neurons in the network,
this equation represents a line, a plane or a hyper-plane.
 If it is possible to find weights so that all of the training
input vectors for which the correct response is 1 lie on one
side of the boundary (and the rest on the other), then the problem is
called linearly separable.
 Otherwise, if the above criterion is not met, the problem is
called linearly non-separable.
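As a concrete illustration, a minimal sketch (Python; the weights are illustrative assumptions) that evaluates Yin = w0 + Ʃi xi wi for the AND function and reports which side of the boundary each input falls on:

```python
# Evaluating the boundary w0 + x1*w1 + x2*w2 = 0 for the AND function,
# with illustrative weights w1 = w2 = 1 and bias w0 = -1.5.
w0, w1, w2 = -1.5, 1.0, 1.0

for x1 in (0, 1):
    for x2 in (0, 1):
        y_in = w0 + x1 * w1 + x2 * w2      # net input Yin
        region = "positive" if y_in > 0 else "negative"
        print((x1, x2), "Yin =", y_in, "->", region)
# Only (1, 1) lands in the positive region, matching AND.
```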
 Even parity means an even number of 1 bits in the input.
 Odd parity means an odd number of 1 bits in the input.
 There is no way to draw a single straight line so that the circles are on
one side of the line and the dots on the other side.
 The perceptron is unable to find a line separating even-parity input
patterns from odd-parity input patterns.
 The perceptron can only model linearly separable functions,
− those functions which can be drawn in a 2-dim graph such that a single
straight line separates the values into two parts.
 The Boolean functions given below are linearly separable:
− AND
− OR
− COMPLEMENT
 It cannot model the XOR function, as XOR is not linearly separable:
the constraints w0 ≤ 0, w0 + w1 > 0, w0 + w2 > 0, and w0 + w1 + w2 ≤ 0
cannot hold simultaneously (summing the middle two gives 2w0 + w1 + w2 > 0,
which with the last forces w0 > 0, contradicting the first).
− When the two classes are not linearly separable, it may be desirable
to obtain a linear separator that minimizes the mean squared error.
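The contradiction can also be checked empirically. A small sketch (Python; the grid range is an arbitrary assumption, and a grid scan is an illustration, not a proof) that searches for weights realizing XOR with a single threshold unit:

```python
# Brute-force scan over a grid of (w0, w1, w2): does any threshold
# unit with output 1 iff w0 + w1*x1 + w2*x2 > 0 reproduce XOR?
import itertools

vals = [v / 10 for v in range(-20, 21)]          # weights in [-2.0, 2.0]
found = any(
    all((1 if w0 + w1 * x1 + w2 * x2 > 0 else 0) == (x1 ^ x2)
        for x1 in (0, 1) for x2 in (0, 1))
    for w0, w1, w2 in itertools.product(vals, repeat=3)
)
print(found)  # False: no weight setting on the grid separates XOR
```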
 A Single Layer Perceptron consists of an input and an output layer. The
activation function employed is a hard-limiting function.
 Definition : An arrangement of one input layer of neurons feeding forward
to one output layer of neurons is known as a Single Layer Perceptron.
 Step 1 : Create a perceptron with (n+1) input neurons x0, x1, . . . , xn,
where x0 = 1 is the bias input. Let O be the output neuron.
 Step 2 : Initialize the weights W = (w0, w1, . . . , wn) to random values.
 Step 3 : Iterate through the input patterns xj of the training set using the
weight set; i.e., compute the weighted sum of inputs
netj = Ʃi=0..n xi wi
for each input pattern j.
 Step 4 : Compute the output yj using the step function.
 Step 5 : Compare the computed output yj with the target output
for each input pattern j.
 If all the input patterns have been classified correctly, then output
(read) the weights and exit.
 Step 6 : Otherwise, update the weights as given below:
 If the computed output yj is 1 but should have been 0,
then wi = wi - α xi , i = 0, 1, 2, . . . , n
 If the computed output yj is 0 but should have been 1,
then wi = wi + α xi , i = 0, 1, 2, . . . , n
 where α is the learning rate, a constant.
 Step 7 : Go to Step 3.
 END
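A minimal sketch of Steps 1-7 in Python (the function and variable names are illustrative; targets are 0/1 as in Step 6):

```python
import random

def train_perceptron(patterns, targets, alpha=0.1, max_epochs=100):
    """patterns: list of input vectors; targets: 0/1 labels.
    A bias input x0 = 1 is prepended to every pattern (Step 1)."""
    n = len(patterns[0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n + 1)]   # Step 2
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(patterns, targets):            # Step 3
            xb = [1] + list(x)                               # bias input
            net = sum(wi * xi for wi, xi in zip(w, xb))
            y = 1 if net > 0 else 0                          # Step 4
            if y != target:                                  # Steps 5-6
                sign = -1 if y == 1 else 1   # -alpha*x if output too high
                w = [wi + sign * alpha * xi for wi, xi in zip(w, xb)]
                errors += 1
        if errors == 0:              # all patterns classified correctly
            break                                            # Step 7 exit
    return w
```

For a linearly separable task, this loop stops once an epoch passes with no misclassifications.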
 Multilayer perceptrons (MLPs) are the most popular type of neural
network in use today. They belong to a general class of structures
called feedforward neural networks, a basic type of neural network
capable of approximating generic classes of functions, including
continuous and integrable functions.
 A multilayer perceptron:
− has one or more hidden layers with any number of units;
− uses linear combination functions in the input layer;
− generally uses sigmoid activation functions in the hidden layers;
− has any number of outputs with any activation function;
− has connections between the input layer and the first hidden layer,
between the hidden layers, and between the last hidden layer and the
output layer (see the sketch after the figure below).
[Figure: a multilayer perceptron with inputs x1, x2, . . . , xn, hidden layers, and an output layer.]
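A minimal forward-pass sketch of such a network (Python; one hidden layer shown, and the layer sizes, seed, and names are illustrative assumptions):

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(inputs, weights):
    """One fully connected layer: weights[j][0] is the bias of unit j."""
    return [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], inputs)))
            for w in weights]

random.seed(0)
n_in, n_hidden, n_out = 3, 4, 1
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
            for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden + 1)]
         for _ in range(n_out)]

x = [0.5, -0.2, 0.8]            # input layer: values pass straight in
hidden = layer(x, w_hidden)     # hidden layer with sigmoid activations
output = layer(hidden, w_out)   # output layer
print(output)
```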
 The input layer:
• introduces input values into the network;
• applies no activation function or other processing.
 The hidden layer(s):
• perform classification of features;
• two hidden layers are sufficient to solve any problem;
• more complex features may call for more layers.
 The output layer:
• functions just like the hidden layers;
• its outputs are passed on to the world outside the neural network.
 In 1959, Bernard Widrow and Marcian Hoff of Stanford
developed models they called ADALINE (Adaptive Linear
Neuron) and MADALINE (Multilayer ADALINE). These
models were named for their use of Multiple ADAptive
LINear Elements. MADALINE was the first neural network to
be applied to a real-world problem: an adaptive filter
that eliminates echoes on phone lines.
 Initialize
• Assign random weights to all links.
 Training
• Feed in known inputs in random sequence.
• Simulate the network.
• Compute the error between the target output and the actual
output (error function).
• Adjust the weights (learning function).
• Repeat until the total error < ε.
 Thinking
• Simulate the network.
• The network will respond to any input.
• A correct solution is not guaranteed, even for trained
inputs.
 Training patterns are presented to the network's inputs and the
output is computed. Then the connection weights wj are
modified by an amount that is proportional to the product of the
difference between the actual output, y, and the desired
output, d, and the input pattern, x.
 The algorithm is as follows:
 Initialize the weights and threshold to small random numbers.
 Present a vector x to the neuron inputs and calculate the output.
 Update the weights according to:
w(t+1) = w(t) + η (d − y) x
 where
 d is the desired output,
 t is the iteration number, and
 η (eta) is the gain or step size, where 0.0 < η < 1.0.
 Repeat steps 2 and 3 until:
 the iteration error is less than a user-specified error threshold,
or
 a predetermined number of iterations have been completed.
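A minimal sketch of this training loop (Python; the learning rate, error threshold, and names are illustrative assumptions):

```python
import random

def delta_rule(patterns, targets, eta=0.2, err_threshold=0.01,
               max_iters=1000):
    """Widrow-Hoff update: w <- w + eta * (d - y) * x, with a bias input."""
    w = [random.uniform(-0.1, 0.1) for _ in range(len(patterns[0]) + 1)]
    for t in range(max_iters):                      # iteration number t
        total_error = 0.0
        for x, d in zip(patterns, targets):
            xb = [1] + list(x)                      # bias input x0 = 1
            y = sum(wi * xi for wi, xi in zip(w, xb))   # linear output
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, xb)]
            total_error += (d - y) ** 2
        if total_error < err_threshold:             # stopping criterion
            break
    return w
```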
 Training of the network : Given a set of inputs ‘x’ and output/target
values ‘y’, the network finds the best linear mapping from x to y.
 Given an unseen ‘x’ value, we want the trained network to predict
what the most likely ‘y’ value will be.
 Pattern classification is also a technique for training the
network, in which we assign a physical object, event or
phenomenon to one of a set of pre-specified classes (or categories).
 Let us consider an example to illustrate the concept, with 2
inputs (x1 and x2) and 1 output node, classifying input into 2
classes (class 0 and class 1), as sketched below.
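Reusing the train_perceptron sketch given after the single-layer algorithm, such a 2-input, 2-class problem (here AND, as an assumed example) could be set up as follows:

```python
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                  # class 1 only when both inputs fire

w = train_perceptron(patterns, targets)  # from the earlier sketch
for x in patterns:
    net = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    print(x, "-> class", 1 if net > 0 else 0)
```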