Fix problem related to param "axis" in Transformer class #32

Merged

2 commits merged into TorchSpatiotemporal:main from javiersgjavi:patch-1 on Nov 20, 2023
Conversation

@javiersgjavi (Contributor) commented Nov 19, 2023

Greetings,

I'm proposing a pull request to fix a tiny bug I found. I'll try to provide as much information as possible in the rest of this message:

1. What I want to do

I am trying to implement a neural network that uses transformer layers as a time encoder, via the `Transformer` class (`from tsl.nn.blocks.encoders import Transformer`), as sketched below.
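A minimal sketch of this kind of instantiation (the sizes here are placeholder values, not the ones from my actual model):

```python
from tsl.nn.blocks.encoders import Transformer

# Placeholder hyperparameters; axis='time' follows the documented API.
# With the bug described below, this constructor call already raises.
encoder = Transformer(input_size=32,
                      hidden_size=64,
                      n_layers=2,
                      n_heads=4,
                      axis='time')
```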

2. What is the problem?

If I set the param `axis='time'`, as established in the documentation, I get the following error:

File "/home/javier/anaconda3/envs/tsl/lib/python3.10/site-packages/tsl/nn/blocks/encoders/transformer.py", line 193, in __init__ transformer_layer(

File "/home/javier/anaconda3/envs/tsl/lib/python3.10/site-packages/tsl/nn/blocks/encoders/transformer.py", line 43, in __init__ self.att = MultiHeadAttention(embed_dim=hidden_size,

File "/home/javier/anaconda3/envs/tsl/lib/python3.10/site-packages/tsl/nn/layers/base/attention.py", line 135, in __init__ raise ValueError("Axis can either be 'steps' (0) or 'nodes' (1), "

ValueError: Axis can either be 'steps' (0) or 'nodes' (1), not 'time'.

3. What is the source of the problem?

Checking the code of the `Transformer` class (in tsl/nn/blocks/encoders/transformer.py), this can be found:

```python
class Transformer(nn.Module):
    """
    Args:
        input_size (int): Input size.
        hidden_size (int): Dimension of the learned representations.
        ff_size (int): Units in the MLP after self attention.
        output_size (int, optional): Size of an optional linear readout.
        n_layers (int, optional): Number of Transformer layers.
        n_heads (int, optional): Number of parallel attention heads.
        axis (str, optional): Dimension on which to apply attention to update
            the representations. Can be either 'time', 'nodes', or 'both'.
            (default: :obj:`'time'`)
        causal (bool, optional): If :obj:`True`, then causally mask attention
            scores in temporal attention (has an effect only if :attr:`axis` is
            :obj:`'time'` or :obj:`'both'`).
            (default: :obj:`True`)
        activation (str, optional): Activation function.
        dropout (float, optional): Dropout probability.
    """

    def __init__(self,
                 input_size,
                 hidden_size,
                 ff_size=None,
                 output_size=None,
                 n_layers=1,
                 n_heads=1,
                 axis='time',
                 causal=True,
                 activation='elu',
                 dropout=0.):
        super(Transformer, self).__init__()
        self.f = getattr(F, activation)

        if ff_size is None:
            ff_size = hidden_size

        # 'time' passes validation here and is forwarded to TransformerLayer.
        if axis in ['time', 'nodes']:
            transformer_layer = partial(TransformerLayer, axis=axis)
        elif axis == 'both':
            transformer_layer = SpatioTemporalTransformerLayer
        else:
            raise ValueError(f'"{axis}" is not a valid axis.')
```

However, this is the code of the `MultiHeadAttention` class (in tsl/nn/layers/base/attention.py), which receives the axis value forwarded by `TransformerLayer`:

```python
class MultiHeadAttention(nn.MultiheadAttention):

    def __init__(self,
                 embed_dim,
                 heads,
                 qdim: Optional[int] = None,
                 kdim: Optional[int] = None,
                 vdim: Optional[int] = None,
                 axis='steps',
                 dropout=0.,
                 bias=True,
                 add_bias_kv=False,
                 add_zero_attn=False,
                 device=None,
                 dtype=None,
                 causal=False) -> None:
        # Only 'steps' (0) and 'nodes' (1) are accepted here, so the
        # 'time' value forwarded by Transformer hits the final else.
        if axis in ['steps', 0]:
            shape = 's (b n) c'
        elif axis in ['nodes', 1]:
            if causal:
                raise ValueError(
                    f'Cannot use causal attention for axis "{axis}".')
            shape = 'n (b s) c'
        else:
            raise ValueError("Axis can either be 'steps' (0) or 'nodes' (1), "
                             f"not '{axis}'.")
```

4. Which solution do I propose?

With this pull request, I propose updating the references to 'steps' in the `MultiHeadAttention` class to 'time', as sketched below.
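In essence, the change amounts to something like the following sketch (the merged diff is the authoritative version; the default value of `axis` would change accordingly):

```python
# Sketch of the proposed change in MultiHeadAttention.__init__
# (the default would become axis='time' instead of axis='steps'):
if axis in ['time', 0]:
    shape = 's (b n) c'
elif axis in ['nodes', 1]:
    if causal:
        raise ValueError(
            f'Cannot use causal attention for axis "{axis}".')
    shape = 'n (b s) c'
else:
    raise ValueError("Axis can either be 'time' (0) or 'nodes' (1), "
                     f"not '{axis}'.")
```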

Final note

I tried to run the tests, but couldn't because of some problems with my Python installation. However, I expect this minor change to pass the test suite.

@javiersgjavi changed the title from Patch 1 to Fix problem related to param "axis" in Transformer class on Nov 19, 2023
@javiersgjavi changed the base branch from dev to main on November 19, 2023 17:30
@marshka merged commit d9e5f7b into TorchSpatiotemporal:main on Nov 20, 2023
@marshka (Member) commented Nov 20, 2023

Hi Javier, thank you so much for spotting this! It surely was a refactoring problem.

@javiersgjavi deleted the patch-1 branch on November 20, 2023 12:09