Adaptive figural abstraction test with generated exercises

2016 
The figural abstraction test has proven to be one of the best ways of measuring intelligence. Over the last year we have been working on a smartphone version of such a test that includes adaptiveness: while the user is solving the test, the difficulty of the next exercise is determined by the previously answered questions, meaning that if the majority of the answers were correct, the difficulty level is raised, and otherwise it is lowered. In this application the exercises were randomly generated using predetermined patterns. This way the number of different exercises is significantly higher than in a test where all the exercises are created one by one. The next step is to make the generation of these patterns automatic, thus raising the number of differently generated exercises even further. We use a description language to describe the logic behind the exercises, calling the different logics rules. One of the major challenges is to determine which of these rules are solvable by humans and therefore usable in the test. We plan to do this by formulating conditions: if these conditions are met in a generated rule, the rule is considered solvable. The second problem is determining the difficulty of each rule. A naive difficulty level can be determined for each rule based on how many different entities it contains and other similar factors. This difficulty level can later be fine-tuned through a series of tests taken by a group of people whose intelligence level has been determined by a different test.
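The adaptive step and the naive difficulty estimate described above can be sketched as follows. This is a minimal illustrative sketch, not the application's actual code: all names (`next_level`, `naive_difficulty`, the window size, and the entity/transformation counts) are assumptions introduced here for illustration.

```python
MIN_LEVEL, MAX_LEVEL = 1, 10
WINDOW = 4  # hypothetical: how many recent answers the adaptive step considers


def naive_difficulty(num_entities: int, num_transformations: int) -> int:
    """Naive difficulty estimate for a rule: more entities (and, as an
    assumed 'other similar factor', more pattern transformations) make
    the generated exercise harder. The weighting is illustrative."""
    score = num_entities + 2 * num_transformations
    return max(MIN_LEVEL, min(MAX_LEVEL, score))


def next_level(current: int, recent_answers: list) -> int:
    """Adaptive step from the abstract: if the majority of the recent
    answers were correct, raise the difficulty level; otherwise lower it."""
    if len(recent_answers) < WINDOW:
        return current  # not enough data yet, keep the current level
    correct = sum(recent_answers[-WINDOW:])
    if correct > WINDOW // 2:
        return min(MAX_LEVEL, current + 1)
    return max(MIN_LEVEL, current - 1)
```

For example, a user at level 5 who answered three of the last four questions correctly would be moved to level 6, while one with only one correct answer would drop to level 4.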