Bains pointed out that some of our nonwords were in fact real words and that an algorithm using only information about single letters and their positions achieves the same level of accuracy as baboons in discriminating words from nonwords. We clarify the operational definition of words and nonwords in our study and discuss possible limits of the proposed algorithm.
Reading is a highly complex task that relies on the integration of visual, orthographic, phonological, and semantic information. This complexity is clearly reflected in current computational models of reading (Coltheart et al., 2001; Harm & Seidenberg, 1999, 2004; Perry, Ziegler, & Zorzi, 2007, 2010; Plaut et al., 1996). These models specify the "ingredients" of the reading process in a precise and detailed fashion because they implement the units and computations needed to go from visual input to word recognition and word production. Such models make it possible to simulate real reading performance in terms of reading latencies (how long it takes to compute the pronunciation of a word or pseudoword) and reading accuracy (whether the output of the model is correct). Computational models are particularly well suited to helping us understand reading impairments, such as developmental or acquired dyslexia.
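For concreteness, the kind of algorithm at issue can be sketched as a linear classifier trained on position-specific single-letter features. The sketch below is a minimal illustration under our own assumptions (the toy lexicon, the one-hot coding, and the choice of a logistic-regression classifier are ours, not Bains' implementation); its defining property is that it never encodes letter combinations, only which letter occurred at which position.

```python
# Minimal sketch (not Bains' actual code) of a word/nonword classifier that
# uses only single letters and their positions: each 4-letter string is coded
# as a position-specific one-hot vector, and a linear classifier is trained
# to separate words from nonwords.
import numpy as np
from sklearn.linear_model import LogisticRegression

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(string):
    """One-hot code each letter at each position (4 positions x 26 letters)."""
    vec = np.zeros(len(string) * len(ALPHABET))
    for pos, letter in enumerate(string):
        vec[pos * len(ALPHABET) + ALPHABET.index(letter)] = 1.0
    return vec

# Toy training items (hypothetical; the actual studies used thousands).
words    = ["done", "land", "them", "vast", "kite", "wasp"]
nonwords = ["dran", "telk", "virt", "sowk", "nabe", "felp"]

X = np.array([encode(s) for s in words + nonwords])
y = np.array([1] * len(words) + [0] * len(nonwords))

clf = LogisticRegression().fit(X, y)

# The classifier has no access to letter combinations (e.g., bigrams),
# only to the frequency of each letter at each position.
for s in ["dome", "krat"]:
    print(s, "-> word" if clf.predict([encode(s)])[0] == 1 else "-> nonword")
```

Because the features carry no information about letter co-occurrence, any above-chance discrimination achieved by such a classifier can only reflect positional single-letter statistics, which is the crux of the argument under discussion.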