
What is the process for creating a radial basis function (RBF) network using PyTorch?



1 Answer

The process for creating a radial basis function (RBF) network using PyTorch typically involves the following steps:

1. Import PyTorch and any other required libraries:

import torch
import torch.nn as nn
import torch.optim as optim
import math  # needed for the parameter initialization in the RBF layer below

2. Define the RBF layer:

class RBF(nn.Module):
    def __init__(self, in_features, out_features, bias=True):
        super(RBF, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Each RBF unit has a center in input space and a width (sigma)
        self.centers = nn.Parameter(torch.Tensor(out_features, in_features))
        self.sigmas = nn.Parameter(torch.Tensor(out_features))
        self.bias = nn.Parameter(torch.Tensor(out_features)) if bias else None
        self.reset_parameters()  # initialize the parameters defined above

    def reset_parameters(self):
        nn.init.kaiming_uniform_(self.centers, a=math.sqrt(5))
        nn.init.constant_(self.sigmas, 1)
        if self.bias is not None:
            fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.centers)
            bound = 1 / math.sqrt(fan_in)
            nn.init.uniform_(self.bias, -bound, bound)

    def forward(self, x):
        # x: (batch, in_features) -> (batch, 1, in_features)
        x = x.unsqueeze(1)
        # centers: (out_features, in_features) -> (1, out_features, in_features)
        c = self.centers.unsqueeze(0)
        # Euclidean distance from each input to each center: (batch, out_features)
        distances = (x - c).pow(2).sum(-1).sqrt()
        # Gaussian RBF activation
        res = torch.exp(-(distances / self.sigmas).pow(2))
        if self.bias is not None:
            res += self.bias
        return res

This layer takes the number of input features, the number of output features (i.e., the number of RBF units), and an optional bias flag. It registers the RBF centers and sigmas as trainable parameters and, in the forward pass, applies a Gaussian RBF to the Euclidean distance between each input and each center.
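
As a quick sanity check, you can pass a random batch through the layer and inspect the output shape (the sizes here are only illustrative):

rbf = RBF(in_features=4, out_features=10)
x = torch.randn(32, 4)   # a batch of 32 samples with 4 features each
out = rbf(x)
print(out.shape)         # torch.Size([32, 10]): one activation per RBF unit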

3. Define the neural network architecture:

class RBFNet(nn.Module):
    def __init__(self, in_features, out_features, hidden_size):
        super(RBFNet, self).__init__()
        self.rbf = RBF(in_features, hidden_size)
        self.linear = nn.Linear(hidden_size, out_features)

    def forward(self, x):
        x = self.rbf(x)
        x = self.linear(x)
        return x

This architecture includes an RBF layer with the specified input and hidden sizes and a linear layer to produce the final output.
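
As a quick check with placeholder sizes (4 inputs, 10 RBF units, 1 output), a forward pass should map a batch of shape (32, 4) to (32, 1):

model = RBFNet(in_features=4, out_features=1, hidden_size=10)
print(model(torch.randn(32, 4)).shape)   # torch.Size([32, 1])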

4. Instantiate the network, then define the loss function and optimizer:

net = RBFNet(in_features=4, out_features=1, hidden_size=10)  # placeholder sizes
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

For a regression problem, mean squared error (MSE) is a commonly used loss function. Stochastic gradient descent (SGD) is a common optimizer, but other optimizers can also be used.
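
For example, swapping in Adam, which often needs less learning-rate tuning than plain SGD, is a one-line change:

optimizer = optim.Adam(net.parameters(), lr=0.001)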

5. Train the network:

for epoch in range(num_epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()   # reset gradients accumulated from the previous batch
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()         # backpropagate the loss
        optimizer.step()        # update the network parameters
        running_loss += loss.item()

    print('Epoch %d loss: %.3f' % (epoch + 1, running_loss / len(trainloader)))

Here, trainloader is a PyTorch DataLoader that provides batches of training data. The network is trained by backpropagation to minimize the loss function.
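
If you do not already have a trainloader, a minimal sketch builds one from random tensors (the data, batch size, and num_epochs here are placeholders; in practice you would load your own dataset):

from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy data: 256 samples, 4 features, 1 regression target
X = torch.randn(256, 4)
y = torch.randn(256, 1)
trainloader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)
num_epochs = 20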

6. Evaluate the network:

with torch.no_grad():
    correct = 0
    total = 0
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy: %.2f %%' % (100 * correct / total))

Here, testloader is a DataLoader over held-out test data. Note that this accuracy calculation suits a classification task; for a regression network trained with MSE, you would instead accumulate the squared error over the test batches.
