Many perceptual tasks require fine discrimination between stimuli whose latent parameters vary across trials, most notably parameters associated with transformations such as translation, rotation, and scaling that leave shape features invariant. A key challenge is to identify neural mechanisms that support adequate perceptual performance despite such trial-to-trial variability. To address this challenge, we investigate neural network models of Vernier-type perceptual discrimination between two visual stimuli whose latent parameters vary from trial to trial. We compare network performance to an ideal-observer benchmark. While a linear readout model performs poorly on such tasks, we find that a quadratic nonlinearity is sufficient to perform well; under some conditions, its performance approaches the ideal-observer level. We explore possible neuronal implementations of this nonlinearity. We also discuss plans to extend this work to more realistic invariant recognition problems.
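The contrast between linear and quadratic readouts can be illustrated with a minimal toy simulation (a sketch, not the model from this work): two stimulus classes are presented at a random translation (phase) on each trial, so their class means coincide and a linear readout is at chance, while a readout that is linear in quadratic features (here, the translation-invariant power spectrum) separates the classes. The specific stimuli, feature choice, and classifier below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 64, 2000
t = np.arange(N)

def make_trial(label):
    # Hypothetical stand-in for two discriminanda: class +1 is a sinusoid
    # of frequency 3, class -1 of frequency 5; the random phase plays the
    # role of the trial-to-trial latent translation.
    k = 3 if label > 0 else 5
    phase = rng.uniform(0, 2 * np.pi)
    return np.sin(2 * np.pi * k * t / N + phase)

labels = rng.choice([-1, 1], size=trials)
X = np.stack([make_trial(y) for y in labels])

def readout_accuracy(F, y):
    # Ridge-regularized least-squares linear readout, trained on the
    # first half of the trials and tested on the second half.
    n = len(y) // 2
    Ftr, Fte = F[:n], F[n:]
    w = np.linalg.solve(Ftr.T @ Ftr + 1e-3 * np.eye(F.shape[1]),
                        Ftr.T @ y[:n])
    return np.mean(np.sign(Fte @ w) == y[n:])

acc_lin = readout_accuracy(X, labels)          # linear readout: ~chance
Q = np.abs(np.fft.rfft(X, axis=1)) ** 2        # quadratic (power) features
acc_quad = readout_accuracy(Q, labels)         # near-perfect
print(f"linear: {acc_lin:.2f}, quadratic: {acc_quad:.2f}")
```

Because the power spectrum is invariant to translation, the quadratic-feature readout removes the nuisance parameter entirely in this toy setting; the full problem studied in the abstract involves noisy population codes, where the quadratic readout only approaches the ideal observer.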