Large Datasets: A Mixed Method to Adapt and Improve Their Learning by Neural Networks Used in Regression Contexts

2011 
The purpose of this work is to further study the relevance of accelerating Monte-Carlo calculations for gamma-ray external radiotherapy through feed-forward neural networks. We have previously presented a parallel incremental algorithm that builds neural networks of reduced size while providing high-quality approximations of the dose deposit [4]. Our parallel algorithm consists of an optimized decomposition of the initial learning dataset (also called the learning domain) into as many subsets as there are available processors. However, although that decomposition yields subsets of similar signal complexity, their sizes may differ considerably, which in turn implies potential differences in their learning times. This paper presents an efficient data-extraction procedure that allows a well-balanced training without any loss of signal information. As will be shown, the resulting irregular decomposition leads to a significant improvement in the learning time of the global network.
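The abstract does not detail the extraction procedure itself, but the underlying idea of equalizing subset sizes so that per-processor training times match can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' method: the `balance_subsets` helper and the choice of uniform subsampling down to the smallest subset's size are assumptions made for demonstration only; the paper's criterion for preserving signal information is not reproduced here.

```python
import numpy as np

def balance_subsets(subsets, rng=None):
    """Hypothetical sketch: trim each learning subset to a common size
    by uniform subsampling, so that the per-processor workloads (and
    hence learning times) are comparable.
    """
    rng = np.random.default_rng() if rng is None else rng
    target = min(len(s) for s in subsets)           # size of the smallest subset
    balanced = []
    for s in subsets:
        idx = rng.choice(len(s), size=target, replace=False)
        balanced.append(s[np.sort(idx)])            # keep the original sample order
    return balanced

# Usage: three subsets of unequal size, e.g. from a domain decomposition
subsets = [np.random.rand(n, 3) for n in (1200, 800, 950)]
balanced = balance_subsets(subsets)
print([len(b) for b in balanced])  # -> [800, 800, 800]
```

Uniform subsampling is only the simplest possible balancing rule; any extraction scheme that equalizes sizes while retaining the signal's local structure would serve the same purpose in a parallel training setting.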