
SVAM: Saliency-guided Visual Attention Modeling


Pointers

SVAM-Net Model

  • Jointly accommodates bottom-up and top-down learning in two branches that share the same encoding layers
  • Incorporates four spatial attention modules (SAMs) along these learning pathways
  • Exploits coarse-level and fine-level semantic features for SOD at four stages of abstraction
  • The bottom-up pipeline (SVAM-Net_Light) performs abstract saliency prediction at fast rates
  • The top-down pipeline ensures fine-grained saliency estimation via a residual refinement module (RRM)
  • Pretrained weights can be downloaded from this Google Drive link
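The spatial attention modules mentioned above can be pictured as a learned gate that reweights feature maps by spatial location. The following is a minimal NumPy sketch of that idea, not the repository's implementation: the function name, the 1x1 mixing weights, and the avg/max pooling scheme are illustrative assumptions.

```python
import numpy as np

def spatial_attention(feature_map, kernel=None):
    """Simplified spatial attention module (SAM) sketch.

    feature_map: array of shape (C, H, W).
    Pools channel-wise average and max into two spatial maps,
    mixes them with placeholder 1x1 weights (a stand-in for the
    learned convolution), and gates the input features with a
    sigmoid mask in (0, 1).
    """
    avg_pool = feature_map.mean(axis=0)      # (H, W)
    max_pool = feature_map.max(axis=0)       # (H, W)
    if kernel is None:
        kernel = np.array([0.5, 0.5])        # placeholder mixing weights
    logits = kernel[0] * avg_pool + kernel[1] * max_pool
    mask = 1.0 / (1.0 + np.exp(-logits))     # sigmoid gate, shape (H, W)
    return feature_map * mask                # broadcast over channels
```

The output keeps the input shape, so several such modules can be chained along the encoder-decoder pathways, which is how SVAM-Net places its four SAMs at different stages of abstraction.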

SVAM-Net Features

  • Provides SOTA performance for SOD on underwater imagery
  • Exhibits significantly better generalization performance than existing solutions
  • Achieves fast end-to-end inference
    • The end-to-end SVAM-Net: 20.07 FPS on a GTX 1080, 4.5 FPS on a Jetson Xavier
    • Decoupled SVAM-Net_Light: 86.15 FPS on a GTX 1080, 21.77 FPS on a Jetson Xavier
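FPS figures like those above are typically obtained by averaging inference time over many repeated forward passes after a short warmup. Here is a generic timing sketch; the function names and the warmup/run counts are illustrative, not taken from this repository.

```python
import time

def measure_fps(infer_fn, frame, warmup=5, runs=50):
    """Rough end-to-end FPS measurement for a single-frame
    inference function `infer_fn` applied to `frame`."""
    for _ in range(warmup):              # let caches and lazy init settle
        infer_fn(frame)
    start = time.perf_counter()
    for _ in range(runs):                # timed inference loop
        infer_fn(frame)
    elapsed = time.perf_counter() - start
    return runs / elapsed                # frames per second
```

On a GPU framework, one would additionally synchronize the device before reading the clock so that queued kernels are included in the measurement.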

USOD Dataset

Bibliography entry:

@article{islam2020svam,
    title={{SVAM: Saliency-guided Visual Attention Modeling 
    	    by Autonomous Underwater Robots}},
    author={Islam, Md Jahidul and Wang, Ruobing and de Langis, Karin and Sattar, Junaed},
    journal={arXiv preprint arXiv:2011.06252},
    year={2020}
}

Acknowledgements
