Let’s build a Single Layer Perceptron (SLP) with Python


This article aims to explore the world of perceptrons, focusing in particular on the Single Layer Perceptron (SLP), which, although it constitutes only a small fraction of the overall architecture of deep neural networks, provides a solid basis for understanding the fundamental mechanisms of Deep Learning. We will also introduce practical implementation examples in Python, illustrating how to build and visualize an SLP using libraries such as NumPy, NetworkX and Matplotlib.


Single Layer Perceptron (SLP)

The Single Layer Perceptron (SLP) is a type of artificial neural network that forms the foundation of more complex Deep Learning models. It should be noted, however, that the term “deep” in deep learning refers to neural networks with more than one hidden layer, so the single-layer perceptron is considered a shallow learning model.


The SLP is composed of a single layer of nodes, also called perceptrons or neurons, that receive weighted inputs, add them together, and apply an activation function to produce the output. The basic structure of a perceptron is as follows:

Weighted sum: the inputs are multiplied by their associated weights and summed, together with a bias term b.

 z = w_1x_1 + w_2x_2 + \ldots + w_nx_n + b

Activation function: The weighted sum is then passed through an activation function to produce the perceptron output.

 y = \varphi(z)

The activation function introduces nonlinearity into the network, allowing the perceptron to learn complex relationships between inputs and outputs.
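For example, with two inputs and the sigmoid as activation function, a single forward pass can be computed by hand. The weights, inputs and bias below are arbitrary values chosen purely for illustration:

import numpy as np

# Arbitrary illustration values (not taken from the perceptron we build below)
w = np.array([0.4, -0.2])   # weights
x = np.array([1.0, 0.5])    # inputs
b = 0.1                     # bias

z = np.dot(w, x) + b        # weighted sum: 0.4*1.0 - 0.2*0.5 + 0.1 = 0.4
y = 1 / (1 + np.exp(-z))    # sigmoid activation: approximately 0.599
print("z =", z, "y =", y)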

The SLP can only learn linear decision boundaries, which makes it suitable for binary classification problems, where the output is 0 or 1, as long as the classes are linearly separable. Its main limitation is that it cannot address problems that require modeling complex nonlinear relationships; the classic example is the XOR function, as verified in the sketch below. To overcome these limitations, more complex models have been developed, such as Multi-Layer Perceptrons (MLPs), which consist of multiple hidden layers of perceptrons, allowing the network to learn richer representations and tackle more difficult problems.
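The following sketch (not part of the class we build in this article) checks the XOR limit numerically: it searches over many random linear separators and shows that none of them classifies all four XOR points correctly:

import numpy as np

# XOR truth table: the two classes are not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Try many random lines w1*x1 + w2*x2 + b = 0 as separators;
# none of them will classify all four points correctly.
best = 0
rng = np.random.default_rng(0)
for _ in range(10000):
    w = rng.uniform(-1, 1, size=2)
    b = rng.uniform(-1, 1)
    pred = (X @ w + b > 0).astype(int)
    best = max(best, int((pred == y).sum()))
print("Best accuracy over random lines:", best, "out of 4")  # never reaches 4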

Let’s develop a Single Layer Perceptron (SLP) in Python

Now we will develop a Single Layer Perceptron (SLP) in Python, without using any dedicated machine learning libraries (we only rely on NumPy for the numerical operations). This is an excellent starting point for studying neural networks and for carrying out some experiments.

Let’s start by building the corresponding class which we will call SingleLayerPerceptron.

import numpy as np

class SingleLayerPerceptron:
    def __init__(self, input_size):
        # Initialization of weights and bias with random values
        self.weights = np.random.rand(input_size)
        self.bias = np.random.rand(1)

    def sigmoid(self, x):
        # Sigmoid activation function
        return 1 / (1 + np.exp(-x))

    def predict(self, inputs):
        # Calculation of the weighted sum of the inputs and application of the activation function
        z = np.dot(inputs, self.weights) + self.bias
        return self.sigmoid(z)

In this example, the SingleLayerPerceptron class has a predict method that takes an input vector and returns the perceptron output after applying the sigmoid activation function. The weights and bias are randomly initialized when the perceptron is created.

  • Creating the Perceptron: with __init__() we define the SingleLayerPerceptron class constructor, which randomly initializes the weights and bias of the perceptron.
  • Sigmoid Activation Function: sigmoid() computes the sigmoid of a given value, which is used as the activation function of the perceptron.
  • Prediction Function: predict() takes an input vector, computes the weighted sum of the inputs plus the bias, and applies the sigmoid activation function.

Now let’s write an example of its use in Python. We create a perceptron that receives 3 inputs and then pass it an array of 3 elements corresponding to the three input values, calling the predict() method to get the corresponding output value.

if __name__ == "__main__":
    # Creating a perceptron with 3 inputs
    input_size = 3
    perceptron = SingleLayerPerceptron(input_size)

    # Input example
    input_data = np.array([0.5, 0.3, 0.8])

    # Output prediction
    output = perceptron.predict(input_data)

    # Print the results
    print("Input:", input_data)
    print("Output:", output)

Running the code, you get output like the following (your values will differ, since the weights and bias are initialized randomly):

Input: [0.5 0.3 0.8]
Output: [0.67700351]

Let’s add a graphical representation of the neural network

To visualize the structure of a single-layer perceptron, we can use the networkx library to create a graph of the nodes (neurons) and connections (weights). In this example, we will use a perceptron with 3 inputs and one output node. Install the libraries first, if you haven’t already done so, by running:

pip install networkx matplotlib

After that, we can add the following method to the SingleLayerPerceptron class (the two imports go at the top of the file, next to the NumPy import):

import networkx as nx
import matplotlib.pyplot as plt

    def plot_network(self):
        # Creation of a directed graph
        G = nx.DiGraph()

        # Add nodes
        G.add_nodes_from(['Input {}'.format(i+1) for i in range(len(self.weights))])
        G.add_node('Summation Node')
        G.add_node('Output')

        # Add edges
        for i, weight in enumerate(self.weights):
            G.add_edge('Input {}'.format(i+1), 'Summation Node', weight=weight)
        G.add_edge('Summation Node', 'Output', weight=self.bias[0])

        # Place the nodes
        m = np.mean(range(len(self.weights)))
        pos = {'Summation Node': (1, m)}
        for i in range(len(self.weights)):
            pos['Input {}'.format(i+1)] = (0, i)
        pos['Output'] = (2, m)

        # Draw the graph
        nx.draw(G, pos, with_labels=True, font_weight='bold', node_size=1000, node_color='skyblue', font_size=8, arrowsize=20)

        # Edge labels with weights
        edge_labels = {(edge[0], edge[1]): str(round(edge[2]['weight'], 2)) for edge in G.edges(data=True)}
        nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)

        # Show the plot
        plt.show()

In this example, the SingleLayerPerceptron class gains a plot_network method that creates and displays the neural network graph using networkx. The nodes represent the inputs, the summation node and the output, while the edges represent the connections. The edges are labeled with their respective weights (the edge from the summation node to the output is labeled with the bias).

We then extend the code used in the previous example to call the new method:

# Usage example
if __name__ == "__main__":
    # Creating a perceptron with 3 inputs
    input_size = 3
    perceptron = SingleLayerPerceptron(input_size)

    # Input example
    input_data = np.array([0.5, 0.3, 0.8])
    # Output prediction
    output = perceptron.predict(input_data)
    # Print the results
    print("Input:", input_data)
    print("Output:", output)
    
    # Network visualization
    perceptron.plot_network()

By running the code we will obtain both the previous results and a visualization of the single layer perceptron, with the weight value of each connection.

Input: [0.5 0.3 0.8]
Output: [0.7432648]
(Figure: the single layer perceptron rendered with the networkx library)

You can explore this example by changing the number of inputs or changing the weights to see how they affect the structure of the neural network graph.
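For instance, a minimal sketch of such an experiment (the values below are arbitrary) overwrites the random weights and bias before redrawing the network:

# Overwrite the random weights and bias with hand-picked values
# (arbitrary illustration values) and redraw the network
perceptron.weights = np.array([0.9, -0.5, 0.2])
perceptron.bias = np.array([0.1])
perceptron.plot_network()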

Let’s visualize the Decision Boundary of the Single Layer Perceptron

We extend the usage example of our Single Layer Perceptron by feeding it a series of random two-dimensional input points, in order to visualize its decision boundary graphically. For this purpose we add another method to our class.

    def plot_decision_boundary(self, inputs, labels):
        # Find the minimum and maximum values of the inputs to create a grid
        x_min, x_max = np.min(inputs[:, 0]) - 0.1, np.max(inputs[:, 0]) + 0.1
        y_min, y_max = np.min(inputs[:, 1]) - 0.1, np.max(inputs[:, 1]) + 0.1

        # Create a grid of points
        xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100), np.linspace(y_min, y_max, 100))
        grid_points = np.c_[xx.ravel(), yy.ravel()]

        # Calculate expected values for each point on the grid
        predictions = self.predict(grid_points)

        # Reshape the predictions to the grid shape
        predictions = predictions.reshape(xx.shape)

        # Decision boundary plot (drawn first, so the input points stay visible)
        plt.contourf(xx, yy, predictions, cmap=plt.cm.Spectral, alpha=0.8)

        # Input points plot
        plt.scatter(inputs[:, 0], inputs[:, 1], c=labels, cmap=plt.cm.Spectral)

        # Axis labels
        plt.xlabel('Feature 1')
        plt.ylabel('Feature 2')

        # Show the plot
        plt.show()

To use this method, we write a new example in which we generate a random series of two-dimensional input points and feed them to our SLP.

if __name__ == "__main__":
    np.random.seed(42)
    inputs = np.random.rand(100, 2)
    labels = (inputs[:, 0] + inputs[:, 1] > 1).astype(int)  # Simple class rule

    # Creation of the perceptron (untrained: weights and bias remain random)
    input_size = 2
    perceptron = SingleLayerPerceptron(input_size)
    perceptron.plot_decision_boundary(inputs, labels)

By running the example code we obtain the following graph, which corresponds to the decision boundary of our SLP:

(Figure: decision boundary of the Single Layer Perceptron)

The plot_decision_boundary method creates a grid of points and computes the perceptron’s prediction for each point on the grid. It then draws the decision boundary using matplotlib’s contourf and colors the input points according to their classes.

  • Decision Boundary: plt.contourf(xx, yy, predictions, cmap=plt.cm.Spectral, alpha=0.8) draws the decision regions over the grid. The regions on either side of the boundary represent the two classes separated by the perceptron.
  • Input Points: plt.scatter(inputs[:, 0], inputs[:, 1], c=labels, cmap=plt.cm.Spectral) displays the input points on top of the regions, each colored according to its class.
  • Axis Labels: plt.xlabel('Feature 1') and plt.ylabel('Feature 2') indicate which features are represented on the x and y axes.
  • Show Plot: plt.show() displays the plot.

In this example, the input points are generated randomly, and the class labels are based on the sum of the two features. Since our perceptron is never trained, its decision line depends entirely on the random initial weights and will generally not match the true class boundary. You can experiment by changing the input data to see how the boundary shifts with different data distributions; to make the perceptron actually adapt to the data, a learning rule is needed, as sketched below.
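Below is a minimal training sketch, assuming gradient descent on the squared error of the sigmoid output; the train() method and its hyperparameters are our own additions, not part of the class built so far. It can be added to SingleLayerPerceptron alongside the other methods:

    def train(self, inputs, labels, learning_rate=0.1, epochs=1000):
        # Minimal gradient-descent sketch on the squared error of the
        # sigmoid output (hyperparameter values are arbitrary choices)
        for _ in range(epochs):
            outputs = self.predict(inputs)           # forward pass
            errors = outputs - labels                # prediction errors
            grad = errors * outputs * (1 - outputs)  # chain rule with sigmoid derivative
            self.weights -= learning_rate * np.dot(inputs.T, grad)
            self.bias -= learning_rate * np.sum(grad)

Calling perceptron.train(inputs, labels) before perceptron.plot_decision_boundary(inputs, labels) should move the boundary close to the line used to generate the labels.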
