Renderers are Good Zero-Shot Representation Learners: Exploring Diffusion Latents for Metric Learning

06/19/2023
by Michael Tang, et al.

Can the latent spaces of modern generative neural rendering models serve as representations for 3D-aware discriminative visual understanding tasks? We use retrieval as a proxy for measuring the metric learning properties of the latent space of Shap-E, including its view-independence and its ability to aggregate a scene representation from the representations of individual image views. We find that Shap-E representations outperform classical EfficientNet baseline representations zero-shot, and remain competitive when both methods are trained with a contrastive loss. These findings give preliminary indication that 3D-based rendering and generative models can yield useful representations for discriminative tasks in our innately 3D-native world. Our code is available at <https://github.com/michaelwilliamtang/golden-retriever>.
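The sketch below illustrates the retrieval-as-proxy setup described in the abstract: per-view latents are aggregated into a scene embedding and scenes are ranked by similarity to a query. It is a minimal illustration, not the authors' implementation; the mean-pooling aggregation, cosine-similarity ranking, and the `encode_view` stand-in for the Shap-E (or EfficientNet) encoder are assumptions for the example.

```python
import torch
import torch.nn.functional as F


def scene_embedding(views, encode_view):
    """Aggregate per-view latents into one scene vector (mean pooling assumed)."""
    latents = torch.stack([encode_view(v) for v in views])  # (n_views, d)
    return latents.mean(dim=0)                              # (d,)


def retrieve(query, gallery, k=5):
    """Return top-k gallery indices ranked by cosine similarity to the query embedding."""
    sims = F.cosine_similarity(query.unsqueeze(0), gallery, dim=-1)  # (n_gallery,)
    return sims.topk(k).indices


if __name__ == "__main__":
    # Toy usage: a random projection stands in for the real latent extractor.
    fake_encoder = lambda view: torch.randn(64)
    query = scene_embedding([torch.zeros(3, 128, 128) for _ in range(4)], fake_encoder)
    gallery = torch.randn(100, 64)  # precomputed scene embeddings
    print(retrieve(query, gallery, k=5))
```

Zero-shot evaluation corresponds to using the frozen encoder latents directly in this pipeline; the contrastive-loss comparison instead trains the encoders before embedding.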
