
Universities, R&D Groups and Academic Networks

NCTA is a unique forum for universities, research groups and research projects to present their research and scientific results — whether by presenting a paper, hosting a tutorial or instructional course, demonstrating their research products in demo sessions, or contributing to panels and discussions in the event's field of interest — and to present their project, whether by setting up an exhibition booth, by being profiled in the event's web presence or printed materials, or by suggesting keynote speakers or specific thematic sessions.

Special conditions are also available for research projects that wish to hold meetings at INSTICC events.

Current Academic Partners:

Context-Dependent Associative Learning

Natural systems (e.g. the human brain) have a remarkable ability to adjust to changing situations without lengthy re-learning procedures. Such adaptivity presumes, among other things, that state-action associations are coupled to the context within which they were acquired. We hypothesised that the temporal statistics of the operating environment convey information that is vitally important for this kind of adaptation.

The project "Context-Dependent Associative Learning" was a subproject within the cluster project "Neurobiologically Inspired Multimodal Recognition for Technical Communication Systems", funded by the State of Saxony-Anhalt and the Federal Ministry of Education and Research (BMBF) of the Federal Republic of Germany. Its aim was to extend to human observers Miyashita's classical experiments with non-human primates on the learning of arbitrary visuomotor associations [Miyashita, Nature 335: 817-20, 1988; Yakovlev et al., Nat Neurosci 1: 310-7, 1998]. Historically, these observations led to the development of attractor neural network theories of associative learning [Amit, Behav. Brain Sci., 1995; Amit et al., Neural Comput., 1997]. We hoped to confirm and extend these observations in human observers, in order to generate additional constraints for such theories.

In addition to the experimental work, we developed a series of computational models of reinforcement learning and recurrent neural networks of increasing complexity [Hamid et al., BMC Neuroscience 2010; Gluege et al., Cog. Comp. 2010]. In the most sophisticated variant, the learning rate depends on predictiveness: the more predictive an event, the more slowly its weight is adjusted. Besides reproducing the corresponding behavioural data, these models proved enormously helpful in developing our thinking regarding the underlying processes, i.e. the plasticity and dynamics of attractor networks.
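The idea of a predictiveness-dependent learning rate can be illustrated with a minimal delta-rule sketch. This is an illustrative reading of the paragraph above, not the project's actual model: the function name, the per-cue learning rates (`alphas`), and the error-driven decay rule are all assumptions made here for the example.

```python
def update(weights, alphas, cues, outcome, base_lr=0.5, decay=0.9):
    """One trial of cue-outcome learning with predictiveness-dependent
    learning rates (hypothetical sketch, not the project's model).

    weights[c] - associative strength of cue c
    alphas[c]  - per-cue learning rate, starting at base_lr
    cues       - cues present on this trial
    outcome    - observed outcome (e.g. 0 or 1)
    """
    # summed prediction from all cues present on this trial
    prediction = sum(weights.get(c, 0.0) for c in cues)
    error = outcome - prediction
    for c in cues:
        a = alphas.get(c, base_lr)
        # delta-rule weight update, scaled by the cue's own learning rate
        weights[c] = weights.get(c, 0.0) + a * error
        # the smaller the prediction error a cue is associated with, the
        # more predictive it is taken to be, and the slower its weight
        # is adjusted on subsequent trials
        alphas[c] = decay * a + (1 - decay) * min(1.0, abs(error))
    return weights, alphas
```

Running this over repeated trials in which a cue reliably predicts an outcome drives that cue's weight toward the outcome value while its learning rate decays toward zero, so a later change in contingencies would be absorbed slowly for well-established, predictive cues.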