Teaching and learning are difficult tasks not only when people are involved but
also with regard to computer programs and machines: When the teaching/learning
units are too small, we cannot express sufficient context to teach a differentiated
lesson; when they are too large, the complexity of the learning task can increase
dramatically, such that it takes forever to teach and learn a lesson. Thus the
question arises of how we can teach and learn complex concepts and strategies, or,
more specifically: How can a lesson be structured and scaled such that efficient
and effective learning can be achieved?
Reinforcement learning has developed as a successful learning approach for domains
that are not fully understood and that are too complex to be described in
closed form. However, reinforcement learning does not scale well to large and continuous
problems; furthermore, knowledge acquired in one environment cannot be
transferred to new environments. Although this latter phenomenon has also been observed,
to a certain extent, in human learning, it is desirable to generalize
suitable insights so that they can be applied in new situations as well.
In this book, Lutz Frommberger investigates whether deficiencies of reinforcement
learning can be overcome by suitable abstraction methods. He discusses various
forms of spatial abstraction, in particular qualitative abstraction, a form of representing
knowledge that has been thoroughly investigated and successfully applied
in spatial cognition research. With his approach, Lutz Frommberger exploits spatial
structures and structural similarity to support the learning process by abstracting
from less important features and stressing the essential ones. The author demonstrates
his learning approach and the transferability of knowledge by having his
system learn in a virtual robot simulation system and subsequently transferring the
acquired knowledge to a physical robot.
The approach is influenced by findings from cognitive science. The book is suitable for researchers working in artificial intelligence, in particular knowledge representation, learning, spatial cognition, and robotics.