Computational modelling has served as a powerful tool for advancing our understanding of language processing because it requires theoretical ideas to be rigorously specified and rendered testable (a form of “open science” for theory building). In reading research, one of the most influential computational modelling frameworks is the triangle model of reading, which characterises the mappings between orthography, phonology and semantics. Most current instantiations of the triangle framework, however, begin processing at an orthographic level that abstracts away from visual processing. Consequently, these models offer no opportunity to investigate visually related reading disorders. To bridge this crucial gap, the present study extended existing triangle models by implementing an additional visual input. The model was trained to learn to read from visual input without pre-defined orthographic representations, and it was assessed with reading tasks both when intact and after damage (to mimic acquired alexias). The simulation results demonstrated that the model was able to name words and nonwords as well as make lexical decisions. Damage to the visual, phonological or semantic components of the model produced the expected reading impairments associated with pure alexia, phonological dyslexia and surface dyslexia, respectively. These results demonstrate for the first time that both typical and neurologically impaired reading, encompassing both central and peripheral dyslexias, can be addressed within this extended triangle model of reading. The findings are consistent with the primary systems account.
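To make the described architecture concrete, the sketch below shows one way a triangle-style network with a visual front end and lesionable components could be set up. This is a minimal illustrative sketch, not the authors' implementation: all layer names and sizes are hypothetical, the lesioning scheme (random unit removal) is an assumption, and the feedforward structure is a simplification of the recurrent networks typically used in triangle-model simulations.

```python
# Minimal, hypothetical sketch of a triangle-style reading network with a
# visual input layer. NOT the authors' model: sizes, names, the lesioning
# scheme, and the feedforward simplification are illustrative assumptions.
import torch
import torch.nn as nn


class VisualTriangleSketch(nn.Module):
    def __init__(self, n_visual=400, n_ortho=100, n_hidden=300,
                 n_phon=200, n_sem=500):
        super().__init__()
        # Visual input -> emergent orthographic layer (no pre-defined
        # orthographic units; the representation is learned from input).
        self.vis_to_ortho = nn.Sequential(nn.Linear(n_visual, n_ortho),
                                          nn.Sigmoid())
        # Direct pathway: orthography -> hidden -> phonology.
        self.ortho_to_hid = nn.Sequential(nn.Linear(n_ortho, n_hidden),
                                          nn.Sigmoid())
        self.hid_to_phon = nn.Linear(n_hidden, n_phon)
        # Semantic pathway: orthography -> semantics -> phonology.
        self.ortho_to_sem = nn.Sequential(nn.Linear(n_ortho, n_sem),
                                          nn.Sigmoid())
        self.sem_to_phon = nn.Linear(n_sem, n_phon)

    @staticmethod
    def _lesion(x, severity):
        # Crude stand-in for damage: silence a random share of units.
        return x * (torch.rand_like(x) > severity).float()

    def forward(self, visual, lesion=None, severity=0.5):
        ortho = self.vis_to_ortho(visual)
        if lesion == "visual":        # peripheral damage (cf. pure alexia)
            ortho = self._lesion(ortho, severity)
        sem = self.ortho_to_sem(ortho)
        if lesion == "semantic":      # central damage (cf. surface dyslexia)
            sem = self._lesion(sem, severity)
        # Phonology receives input from both pathways.
        phon = torch.sigmoid(self.hid_to_phon(self.ortho_to_hid(ortho))
                             + self.sem_to_phon(sem))
        if lesion == "phonological":  # central damage (cf. phonological dyslexia)
            phon = self._lesion(phon, severity)
        return phon, sem


if __name__ == "__main__":
    model = VisualTriangleSketch()
    image = torch.rand(1, 400)  # dummy flattened 20x20 letter-string image
    for condition in (None, "visual", "semantic", "phonological"):
        phon, sem = model(image, lesion=condition)
        print(condition, phon.shape, sem.shape)
```

Under these assumptions, lesioning different components selectively disrupts different task behaviours: visual damage degrades everything downstream, semantic damage spares the direct orthography-to-phonology route, and phonological damage impairs naming while leaving semantics available for lexical decision.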