LuongAttention

A PyTorch attention module, built on torch.nn, that is part of the indicLP library.

Getting Started

A Luong (multiplicative) attention layer has been implemented for PyTorch, as attention often plays a crucial role in NLP models.
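
In the multiplicative formulation, the relevance of each encoder state v_i to the decoder query q is scored as q · W · v_iᵀ, where W is a learned decoder_dim × encoder_dim matrix, and the scores are normalised with a softmax. The implementation below also scales the scores by sqrt(decoder_dim) before the softmax, a stabilising choice in the spirit of scaled dot-product attention.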

Example
import numpy as np
import torch

class LuongAttention(torch.nn.Module):
    def __init__(self, encoder_dim: int, decoder_dim: int):
        super().__init__()
        self.encoder_dim = encoder_dim
        self.decoder_dim = decoder_dim
        # Learnable weight matrix for the multiplicative (general) score
        self.W = torch.nn.Parameter(torch.FloatTensor(decoder_dim, encoder_dim).uniform_(-0.1, 0.1))

    def forward(self, query, values):
        # query: (decoder_dim,), values: (seq_len, encoder_dim)
        weights = self._get_weights(query, values)
        weights = torch.nn.functional.softmax(weights, dim=0)
        # Attention-weighted sum of the encoder states
        return weights @ values

    def _get_weights(self, query, values):
        # Multiplicative score q @ W @ values.T, scaled by sqrt(decoder_dim)
        weights = query @ self.W @ values.T
        return weights / np.sqrt(self.decoder_dim)
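
A minimal usage sketch, assuming a single decoder query of size decoder_dim attending over seq_len encoder states (the dimensions here are illustrative, not part of the library):

encoder_dim, decoder_dim, seq_len = 32, 16, 10
attention = LuongAttention(encoder_dim, decoder_dim)

query = torch.rand(decoder_dim)            # current decoder hidden state
values = torch.rand(seq_len, encoder_dim)  # encoder hidden states

context = attention(query, values)         # attention-weighted context vector
print(context.shape)                       # torch.Size([32]), i.e. (encoder_dim,)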

Reference Materials

Following are some reference materials for the attention module:

Luong, Pham, and Manning (2015), "Effective Approaches to Attention-based Neural Machine Translation": https://arxiv.org/abs/1508.04025