This paper was accepted at the workshop "Self-Supervised Learning – Theory and Practice" at NeurIPS 2022.
Many state-of-the-art self-supervised learning approaches fundamentally rely on transformations applied to the input in order to selectively extract task-relevant information. Recently, the field of equivariant deep learning has developed to introduce structure into the feature space of deep neural networks, specifically with respect to such input transformations. In this work, we observe both theoretically and empirically that, through the lens of equivariant representations, many existing self-supervised learning algorithms can be both unified and generalized. Specifically, we introduce a general framework we call Structured Self-Supervised Learning (S-SSL), and theoretically show how it may subsume the use of input augmentations provided a sufficiently structured representation. We validate this idea experimentally for simple augmentations, demonstrate how the framework fails when representational structure is removed, and further empirically explore how the parameters of this framework relate to those of traditional augmentation-based self-supervised learning. We conclude with a discussion of the potential benefits of this framework as a new perspective on self-supervised learning.
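To make the central observation concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of why a sufficiently structured (equivariant) representation can subsume an input augmentation: for an encoder that is equivariant to a group of input transformations, augmenting the input and then encoding equals encoding once and transforming the representation directly. Here the group is assumed to be 1-D circular shifts and the encoder a circular convolution, which is exactly shift-equivariant; the function and variable names are hypothetical.

```python
import numpy as np

def encode(x, w):
    """Circular (wrap-around) convolution: a shift-equivariant feature map."""
    n = len(x)
    return np.array([np.dot(w, np.roll(x, -i)[: len(w)]) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=32)   # input signal
w = rng.normal(size=5)    # convolution filter
shift = 7                 # group element: circular shift by 7 positions

# Path 1: augment the input, then encode (augmentation-based SSL view).
z_aug_input = encode(np.roll(x, shift), w)

# Path 2: encode once, then apply the corresponding transformation in
# feature space (the structured-representation view).
z_aug_feature = np.roll(encode(x, w), shift)

# Equivariance makes the two paths agree, so the input augmentation can be
# replaced by an operation on the structured representation itself.
assert np.allclose(z_aug_input, z_aug_feature)
print("max deviation:", np.max(np.abs(z_aug_input - z_aug_feature)))
```

Under this assumption, the two code paths produce identical representations, which is the sense in which input augmentations can be "moved" into the feature space of a structured encoder.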