When looking for a particular object, how does the brain decide where to look next? Recent physiological evidence imposes strong constraints on the mechanism of attentional selection during free-gaze visual search. We describe a simple mechanistic model of visual search that both fits the existing evidence and predicts human behaviour in a search task among complex objects. The model posits that a target-specific modulation is applied at every point of a retinotopic area selective for visual features of intermediate complexity (identified with LIP), with local normalization through divisive inhibition. To validate the model against human behaviour, we collected gaze data from human subjects during an object search task and ran the model on the same task. The model predicts human fixations on single trials well above chance, including on error and target-absent trials. Finally, we introduce a statistical method for determining which features the brain uses to guide visual search, simultaneously estimating the contribution of individual features to both top-down (target-dependent) and bottom-up (target-independent) guidance of search.
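The core mechanism — target-specific gain modulation of a retinotopic feature map followed by local divisive normalization — can be sketched as follows. This is a minimal illustrative toy, not the published model: the feature channels, the weight vector `target_weights`, the normalization pool size, and the semisaturation constant `sigma` are all assumptions chosen for clarity.

```python
import numpy as np

def predict_fixation(feature_maps, target_weights, sigma=1.0, pool_size=3):
    """Toy priority map: top-down modulation plus divisive normalization.

    feature_maps   : (K, H, W) responses of K feature channels on a retinotopic grid
    target_weights : (K,) top-down gains derived from the search target
    Returns the predicted fixation location and the normalized priority map.
    """
    K, H, W = feature_maps.shape
    # Target-specific modulation: weight each channel by its relevance to the target
    modulated = np.tensordot(target_weights, feature_maps, axes=1)  # (H, W)
    # Local divisive inhibition: divide each location by pooled nearby activity
    pad = pool_size // 2
    padded = np.pad(modulated, pad, mode="edge")
    pool = np.zeros_like(modulated)
    for dy in range(pool_size):
        for dx in range(pool_size):
            pool += padded[dy:dy + H, dx:dx + W]
    priority = modulated / (sigma + pool)
    # Predicted next fixation: location of maximal normalized priority
    loc = np.unravel_index(np.argmax(priority), priority.shape)
    return loc, priority

# Example: one channel carries a single target-like peak; the model fixates it.
fm = np.zeros((2, 5, 5))
fm[0, 2, 3] = 10.0                       # target-diagnostic feature at (2, 3)
loc, pr = predict_fixation(fm, np.array([1.0, 0.0]))
```

Divisive normalization makes the priority of each location depend on its local context, so an item surrounded by strong distractors is suppressed relative to the same item on a quiet background — one reason normalization is a natural ingredient for modelling search among cluttered, complex objects.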