Charities are constantly trying to decide what action to take next with each constituent (person) in their database to encourage a donation. One way to make this decision is to use machine learning to build a model of constituents' past actions and then use that model to choose the next action. Prior work did this using a fixed window of a constituent's previous actions together with the parameters of the emails involved in each action; with this representation, recurrent neural networks (RNNs) learned a model that predicted donations to within $25 of the ground truth for one charity. Building on that work, we experimented with RNNs and convolutional neural networks (CNNs), varied the size of the window of previous actions, added features describing the constituents themselves, and anonymously combined data sets. RNNs showed similar performance, while CNNs dropped the mean squared error for one charity to $16, giving charities an improved tool for choosing actions and email parameters to maximize donations.
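To make the modeling setup concrete, the following is a minimal sketch of the kind of model described: a 1D CNN that reads a fixed window of per-action features (e.g. email parameters) for one constituent and regresses the next donation amount. The window size, feature count, and architecture below are illustrative assumptions, not the configuration reported in the study.

    # Hypothetical sketch: 1D CNN over a window of previous constituent actions.
    import torch
    import torch.nn as nn

    WINDOW = 10        # number of previous actions per constituent (assumed)
    N_FEATURES = 12    # per-action features, e.g. email parameters (assumed)

    class DonationCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(N_FEATURES, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # pool over the action window
            )
            self.head = nn.Linear(32, 1)   # predicted donation amount

        def forward(self, x):
            # x: (batch, WINDOW, N_FEATURES); Conv1d expects channels first
            x = x.transpose(1, 2)
            return self.head(self.conv(x).squeeze(-1))

    model = DonationCNN()
    loss_fn = nn.MSELoss()                  # trained to minimize squared error
    batch = torch.randn(8, WINDOW, N_FEATURES)
    target = torch.rand(8, 1) * 100         # dummy donation amounts
    loss = loss_fn(model(batch), target)
    loss.backward()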
Automated image interpretation is an important task in numerous applications, ranging from security systems to natural-resource inventorization based on remote sensing. Recently, a second generation of adaptive, machine-learned image interpretation systems has shown expert-level performance in several challenging domains. While demonstrating an unprecedented improvement over hand-engineered or first-generation machine-learned systems in terms of cross-domain portability, design-cycle time, and robustness, such systems are still severely limited. In this paper we inspect the anatomy of a state-of-the-art adaptive image interpretation system and discuss the range of the corresponding machine learning problems. We then report on the novel machine learning approaches employed and the resulting improvements.
Recent adaptive image interpretation systems can reach optimal performance for a given domain via machine learning, without human intervention. The policies are learned over an extensive library of generic image-processing operators. One of the principal weaknesses of this method lies in the large size of such libraries, which can make the machine learning process intractable. We demonstrate how evolutionary algorithms can be used to reduce the size of the operator library, thereby speeding up learning of the policy while still keeping human experts out of the development loop. Experiments in the challenging domain of forestry image interpretation showed a 95% reduction in the average time required to interpret an image, while maintaining the interpretation accuracy of the full library.
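A rough sketch of the pruning idea follows: each genome is a bitstring marking which operators are retained, and fitness trades interpretation accuracy against library size. The fitness function here is a hypothetical placeholder for learning a policy with the selected subset and measuring its accuracy; all constants are assumptions for illustration, not values from the experiments.

    # Hypothetical sketch: genetic algorithm selecting a subset of operators.
    import random

    LIBRARY_SIZE = 40          # operators in the full library (assumed)
    POP_SIZE, GENERATIONS = 20, 50
    MUTATION_RATE = 1.0 / LIBRARY_SIZE

    def evaluate_subset(genome):
        # Placeholder fitness: reward accuracy of a policy learned with the
        # selected operators, penalize library size (which drives learning
        # and interpretation time). Replace with the real training loop.
        accuracy = random.random()                 # stand-in measurement
        size_penalty = sum(genome) / LIBRARY_SIZE
        return accuracy - 0.5 * size_penalty

    def crossover(a, b):
        point = random.randrange(1, LIBRARY_SIZE)  # one-point crossover
        return a[:point] + b[point:]

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(LIBRARY_SIZE)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(population, key=evaluate_subset, reverse=True)
        parents = ranked[:POP_SIZE // 2]           # truncation selection
        offspring = [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring

    best = max(population, key=evaluate_subset)
    print("operators kept:", sum(best), "of", LIBRARY_SIZE)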