Color vision models: Some simulations, a general n-dimensional model, and the colourvision R package
2018
The development of color vision models has allowed the appraisal of color vision independently of the human experience. These models are now widely used in ecology and evolution studies. However, in common scenarios of color measurement, color vision models may generate spurious results. Here I present a guide to color vision modeling (the Chittka (1992, Journal of Comparative Physiology A, 170, 545) color hexagon, the Endler & Mielke (2005, Biological Journal of the Linnean Society, 86, 405) model, and the linear and log-linear receptor noise limited models (Vorobyev & Osorio 1998, Proceedings of the Royal Society B, 265, 351; Vorobyev et al. 1998, Journal of Comparative Physiology A, 183, 621)) using a series of simulations, present a unified framework that extends and generalizes current models, and provide an R package to facilitate the use of color vision models. When the specific requirements of each model are met, between-model results are qualitatively and quantitatively similar.
However, under many common scenarios of color measurement, models may generate spurious values. For instance, models that log-transform data and use relative photoreceptor outputs are prone to generate spurious outputs when the stimulus photon catch is smaller than the background photon catch; and models may generate unrealistic predictions when the background is chromatic (e.g. leaf reflectance) and the stimulus is an achromatic low-reflectance spectrum. Nonetheless, despite these differences, all three models are founded on a similar set of assumptions. Based on these shared assumptions, I provide a new formulation that accommodates and extends models to any number of photoreceptor types, offers flexibility to build user-defined models, and allows users to easily adjust chromaticity diagram sizes to account for changes when using different numbers of photoreceptors.
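The failure mode described above can be illustrated with a minimal numeric sketch of the log-linear receptor noise limited model for a dichromat, following the general form in Vorobyev & Osorio (1998). This is not the colourvision package's implementation, and the photon-catch and noise values below are hypothetical, chosen only to show how the log-transform inflates the distance for a dark, near-achromatic stimulus seen against a chromatic background.

```python
import math

def rnl_log_distance(q_stim, q_bg, noise):
    """Chromatic distance under a log-linear receptor-noise-limited (RNL)
    model for a dichromat (two photoreceptor classes).

    q_stim, q_bg -- photon catches of stimulus and background per receptor
    noise        -- Weber fractions (e1, e2) for the two receptor channels
    """
    # von Kries adaptation: photoreceptor signal is the log of the
    # stimulus catch relative to the background catch
    f = [math.log(s / b) for s, b in zip(q_stim, q_bg)]
    e1, e2 = noise
    # dichromat chromatic distance (n = 2 case of the RNL model)
    return abs(f[0] - f[1]) / math.sqrt(e1 ** 2 + e2 ** 2)

# Dark, near-achromatic stimulus against a chromatic background: both
# catches fall far below the background, and the log-transform turns
# that shared darkness into a large, spurious "chromatic" distance.
dark = rnl_log_distance(q_stim=(0.02, 0.05), q_bg=(0.8, 0.4), noise=(0.1, 0.1))

# Moderately reflective stimulus against the same background, for contrast.
moderate = rnl_log_distance(q_stim=(0.6, 0.5), q_bg=(0.8, 0.4), noise=(0.1, 0.1))
```

With these made-up values the dark achromatic stimulus yields a far larger chromatic distance (about 11.4) than the moderately reflective one (about 3.6), even though the dark spectrum carries little chromatic information; this is the kind of spurious output the abstract warns about.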
References: 57
Citations: 17