clip_from_scratch

Introduction

A CLIP model trained on the MNIST handwritten-digit dataset, built for learning about multimodal models; it can only predict the digits 0-9.
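
To make the "only predicts 0-9" part concrete, below is a minimal zero-shot prediction sketch. The random tensors stand in for the trained image/text encoders, and the embedding size is an assumption for illustration, not this repository's confirmed API.

    import torch

    d_e = 64                                        # joint embedding dimension (assumed)
    image_features = torch.randn(1, d_e)            # stand-in for image_encoder(mnist_image)
    text_features = torch.randn(10, d_e)            # stand-in for text_encoder of the 10 digit prompts

    # L2-normalize so the dot product below is a cosine similarity
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    logits = image_features @ text_features.t()     # [1, 10] similarity to each digit prompt
    predicted_digit = logits.argmax(dim=-1).item()  # the digit 0-9 whose prompt matches best
    print(predicted_digit)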

Preliminary

  • CLIP


  • pseudocode for the core of an implementation of CLIP
    def forward(image_encoder, text_encoder, I, T, W_i, W_t, t):
        r"""
        Perform the CLIP training forward pass.
        Args:
            image_encoder - ResNet or Vision Transformer
            text_encoder  - CBOW or Text Transformer
            I[n, h, w, c] - minibatch of aligned images
            T[n, l]       - minibatch of aligned texts
            W_i[d_i, d_e] - learned projection of image features into the joint embedding
            W_t[d_t, d_e] - learned projection of text features into the joint embedding
            t             - learned temperature parameter
        Return:
            loss - symmetric cross-entropy loss averaged over both directions
        """
        # extract feature representations of each modality
        I_f = image_encoder(I)  # [n, d_i]
        T_f = text_encoder(T)   # [n, d_t]
        # joint multimodal embedding [n, d_e]
        I_e = l2_normalize(np.dot(I_f, W_i), axis=1)
        T_e = l2_normalize(np.dot(T_f, W_t), axis=1)
        # scaled pairwise cosine similarities [n, n]
        logits = np.dot(I_e, T_e.T) * np.exp(t)
        # symmetric loss function: the i-th image is paired with the i-th text
        n = logits.shape[0]
        labels = np.arange(n)
        loss_i = cross_entropy_loss(logits, labels, axis=0)
        loss_t = cross_entropy_loss(logits, labels, axis=1)
        loss = (loss_i + loss_t) / 2
        return loss
  • Loss: the symmetric cross-entropy (InfoNCE) objective over the n×n similarity matrix; a runnable sketch is shown after this list.

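For reference, the pseudocode above maps onto the following runnable PyTorch. This is a minimal sketch, not necessarily how this repository implements it; the name clip_loss and the toy batch values are assumptions.

    import torch
    import torch.nn.functional as F

    def clip_loss(image_embeds, text_embeds, t):
        """Symmetric cross-entropy (InfoNCE) loss for a batch of paired embeddings."""
        # L2-normalize so the dot products below are cosine similarities
        image_embeds = F.normalize(image_embeds, dim=-1)
        text_embeds = F.normalize(text_embeds, dim=-1)
        # scaled pairwise cosine similarities [n, n]
        logits = image_embeds @ text_embeds.t() * t.exp()
        # the i-th image is paired with the i-th text, so the targets are the diagonal
        labels = torch.arange(logits.size(0), device=logits.device)
        loss_i = F.cross_entropy(logits, labels)       # image -> text direction (rows)
        loss_t = F.cross_entropy(logits.t(), labels)   # text -> image direction (columns)
        return (loss_i + loss_t) / 2

    # toy usage: random embeddings, temperature initialized as in the CLIP paper (t = ln(1/0.07))
    n, d_e = 8, 64
    t = torch.log(torch.tensor(1 / 0.07))
    print(clip_loss(torch.randn(n, d_e), torch.randn(n, d_e), t))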

Acknowledgements
