
CoTeSys

Introduction

CoTeSys stands for "COgnition for TEchnical SYstems".

The CoTeSys cluster of excellence investigates cognition for technical systems such as vehicles, robots, and factories. Cognitive technical systems are equipped with artificial sensors and actuators, integrated and embedded into physical systems, and act in a physical world.
The cluster combines the expertise of TUM, LMU, UniBW, DLR, and MPI in neuroscience, natural sciences, engineering, computer science, and the humanities. It was one of the few proposals accepted in an intense nation-wide competition during the past year.

Research Area A: Neurobiological and cognitive foundations

Research area A investigates the neuro-biological and neuro-cognitive foundations of cognition in technical systems by empirically studying cognitive capabilities of humans and animals at the behavioral and brain level. Researchers will investigate, in human subjects, the cognitive control of multi-sensory perception-action couplings in dynamic, rapidly changing environments. Research area A will follow an integrative approach by combining behavioral and cognitive-neuroscience methodologies.

Projects
Project #433: Task-related and contextual factors in human cognition (TaCoH)

PD Dr. Anna Schubö
In the second phase of this Junior Research Group we aim to further advance the knowledge we have acquired of human cognitive processes. Humans strive to optimize their behavior with respect to both their current task and the demands imposed by the environment. We study how human behavior is formed by internal, task-related factors and by the environmental context and how these two interact. We focus in particular on those properties of tasks, action goals, and the environment, which allow optimal performance. In the upcoming funding period, we seek to address specific issues such as how human agents deal with changing from one task to another, how the environment interacts with such change and how action plans modify task representations. Additionally, we will investigate how the representation of a partner agent and his/her performance is integrated into the task. The results will provide the demonstration scenarios “Cognitive Factory” and “Multi Joint Action” with empirically testable design optimizations that increase task performance of human and robotic agents in a wide range of episodes.

Project #435: Mental State Prediction for Perception, Task Control, and Action in Sequential Activities

Prof. Dr. Hermann Müller, Dr. Frank Wallhoff, Prof. Dr. Gerhard Rigoll, & Dr. Michael Zehetleitner

The goal of project #435 is to improve the effectiveness of human-robot interaction by providing a cost function for optimized sequence planning in pro-active robots, while also deepening our understanding of attention, multisensor fusion, and executive action control on the basis of fundamental neuro-cognitive research. Pro-active robots interacting with humans require knowledge about human performance capabilities and limitations, modeled in terms of humans’ internal states, in order to plan action sequences optimally. Such interactive activities pose dynamic, integrative demands on human perception, cognition, and action. In neuro-cognitive research, it is well established that human performance at a given moment depends not only on the current external context, but also on the state of an internal, implicit predictive memory system. Consequently, the internal state of a human can fit, to a variable degree, the upcoming sensory, cognitive, and actuator demands posed by the task at hand. In essence, performance speed and accuracy improve whenever the task-relevant sensory properties (features, dimensions, modalities), cognitive (control) settings, and effectors or tools repeat rather than change.
The goal of the proposed project is to extend and formalize the cognitive theories that describe and explain such effects (based on our demonstration that the underlying predictive memory system can be described by ‘weighting’ dynamics), and to make them available to cognitive robots by providing these with explicit representations of the human’s moment-to-moment internal states, as well as measures of how well these states fit the upcoming (inter-)action demands. Cognitive robots interacting with humans can exploit such a representation by using it as a cost function for planning (inter-)action sequences.
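To make the idea concrete, here is a minimal, hypothetical sketch of such a cost function in Python. The specific form (exponentially decaying weights over task-relevant dimension values, with repetitions cheaper than changes) and all names (`update_weights`, `switch_cost`, the constants) are illustrative assumptions, not the project's actual model; the text states only that the predictive memory system can be described by ‘weighting’ dynamics.

```python
# Hypothetical sketch of a repetition/change cost function for action-sequence
# planning, assuming 'weighting' dynamics over task-relevant dimensions.
# All names and constants are illustrative, not taken from the project.

DECAY = 0.7           # how quickly old weight settings fade (assumed)
CHANGE_PENALTY = 1.0  # extra cost when a dimension's value changes (assumed)

def update_weights(weights, step):
    """Decay old weights, then boost the dimension values used in this step."""
    weights = {k: v * DECAY for k, v in weights.items()}
    for dim, value in step.items():  # e.g. {'modality': 'visual', 'effector': 'right hand'}
        weights[(dim, value)] = weights.get((dim, value), 0.0) + 1.0
    return weights

def switch_cost(weights, next_step):
    """Predicted human cost of the next step: low if its task-relevant
    dimension values are still strongly weighted (repetition), high otherwise."""
    return sum(CHANGE_PENALTY - min(weights.get((dim, value), 0.0), 1.0)
               for dim, value in next_step.items())

# A planner would prefer sequences with low cumulative switch cost:
weights = {}
for step in [{'modality': 'visual', 'effector': 'right hand'},
             {'modality': 'visual', 'effector': 'right hand'},   # repetition: cheap
             {'modality': 'haptic', 'effector': 'left hand'}]:   # change: costly
    print(switch_cost(weights, step))
    weights = update_weights(weights, step)
```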

Project #436: Improving Human-Robot Interaction by Modeling Human Gaze Control in Social Situations

Prof. Dr. Hermann Müller, Dr. Michael Zehetleitner, Dr. Kolja Kühnlenz, & Dr. Jan Zwickel
One aim of the cluster is to contribute to the development of increasingly complex cognitive robots that can become part of everyday human life. Until now, the focus has mostly been on humans adapting to robots' communication requirements. However, for their increasing use in everyday-life settings, it will become necessary to replace this robot-centered mode of communication with a human-centered one. This will enhance robot acceptance, permit the use of robot assistance for elderly or (physically or cognitively) impaired people, reduce communication errors, and increase user satisfaction. To achieve this, robots have to meet humans' expectations about communication behavior. As gaze is of central importance for human-human communication, it will become critical for state-of-the-art robots to use and 'understand' this means of communication, i.e., to share gaze communication codes. This is a novel and relatively undeveloped research area, which will be addressed in the project. To this end, human gaze will be analyzed in social situations in order to develop a gaze model for robot camera control. The focus will be on how gaze changes when mind attribution occurs and on how intent can be communicated by gaze alone. Concomitantly, the project will use the knowledge gained by investigating human eye movements to evaluate how robots are perceived. Here, the central questions will be what kinds of robots and robot behaviors induce gaze patterns in humans that are typical of social situations - in other words: what changes to a robot or its behavior make humans perceive the robot no longer as a machine, but rather as a social agent?

Project #439: Optimization of task coordination in multitasking situations

Prof. Dr. Torsten Schubert
The aim of the project is to improve the multitasking capabilities of robots and of humans in joint robot–human interaction. Multitasking refers to the simultaneous performance of two or more independent actions, including the perception of information, the selection of appropriate actions, and their execution with appropriate effectors. Human multitasking ability is known to be subject to severe limitations in the cognitive and motor systems.
Successful human-robot interaction in multitasking situations, therefore, requires that these limitations are reflected in the control of the robot. If the robot can anticipate restrictions limiting human multitasking performance, then appropriate feed-forward action control will help optimize the robot–human interaction. It is one goal of the current project to deliver knowledge about the principles of human multitasking and to implement this knowledge in applicable form so as to permit the action control of the robot to be tuned to the limitations of the human user.
Further, while the difficulties in human multitasking are caused by processing limitations of the human ‘hardware’ (e.g., limitations of the human brain), robots need not be subject to such limitations. The superior processing power of robots would, in principle, allow for highly efficient parallel processing of multiple tasks. However, successful action planning by robots in multitasking situations also requires appropriate goal-directed sequencing, weighting, and coordination of actions. Thus, a further goal of the project is to provide knowledge about optimal action sequencing in correspondence with humans’ sequencing behaviour. This knowledge will be made available in the form of ‘scripts’ that are directly applicable by robots in multitasking situations.
The proposed work will follow two lines of research: A) developing a mathematical model of human behaviour and human processing limitations during multitasking, which will be implemented in the robot’s action-control device in order to optimize control settings in robot–human interaction; and B) optimizing the multitasking behaviour of robots per se by learning from human multitasking principles and implementing analogous solutions.
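As one illustration of the kind of mathematical model line A might build on, the sketch below implements the classic central-bottleneck account of dual-task costs from the multitasking literature (the psychological refractory period effect): each task has perceptual, central, and motor stages, the central stages cannot overlap, and so task 2's response time grows as the onset gap (SOA) between the tasks shrinks. This is a standard textbook model offered only as a plausible example; the stage durations are made up, and the project's actual model is not specified in the text.

```python
def dual_task_rts(soa, t1=(0.10, 0.15, 0.10), t2=(0.10, 0.15, 0.10)):
    """Central-bottleneck model of dual-task performance (illustrative).

    Each task consists of perceptual (P), central (C), and motor (M) stages,
    given here as durations in seconds. The central stages of the two tasks
    cannot run in parallel, so task 2's central stage waits until task 1's
    central stage has finished.
    """
    p1, c1, m1 = t1
    p2, c2, m2 = t2
    rt1 = p1 + c1 + m1                # task 1 runs unimpeded from t = 0
    c1_end = p1 + c1
    c2_start = max(soa + p2, c1_end)  # the bottleneck: wait for C1 to finish
    rt2 = c2_start + c2 + m2 - soa    # RT2 measured from task 2 onset
    return rt1, rt2

# At short SOAs, RT2 is inflated; at long SOAs, the tasks no longer interfere.
for soa in (0.0, 0.1, 0.2, 0.3):
    print(soa, dual_task_rts(soa))
```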

Project #301: CogMan - Cognitive Models of Everyday Manipulation Tasks: Cognitive Studies, Implementation, and Empirical Analysis

Prof. Dr. Michael Beetz, PD Dr. Anna Schubö, Prof. Dr. Michael Ulbrich, Prof. Dr. Heiner Deubel, Prof. Dr. Darius Burschka, Dr. Patrick van der Smagt, Prof. Dr. Werner Wolf
The project studies cognitive factors in the planning, execution, and control of skilled movements, with a specific focus on human and robot grasping. Grasping new objects in the context of changing tasks and environments is a fundamental element of a robot's interaction with its environment. While various computational models exist for planning reaching and grasping movements to predetermined spatial goals in simple situations, much less is known about how, and when during movement preparation and execution, higher cognitive factors (such as the intended manipulation, intervening and future action goals, pre-knowledge about the goal object, etc.) are integrated into the control of movement kinematics. Moreover, there are basic limitations that constrain the available set of actions, possibly already at a central motor level, due to couplings and synergies between effectors. These aspects will be investigated by analysing human skilled movement control in a variety of complex motor tasks. The research will allow us both to refine current models of human grasping and to develop grasp planning for robotic hands in complex environments involving obstacles and aperture constraints.
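As a concrete example of the computational models the text refers to for planning reaching movements to predetermined spatial goals, here is a minimal sketch of the well-known minimum-jerk trajectory (Flash & Hogan, 1985), often used as a baseline model of human point-to-point reaching. Its use here is illustrative; the text does not say which models the project builds on.

```python
import numpy as np

def minimum_jerk(x0, xf, duration, n_samples=101):
    """Minimum-jerk point-to-point trajectory (Flash & Hogan, 1985):
    x(t) = x0 + (xf - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5), tau = t / T.
    Produces the smooth, bell-shaped velocity profile typical of human reaches."""
    t = np.linspace(0.0, duration, n_samples)
    tau = t / duration
    x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return t, x

# Example: a 40 cm reach lasting 0.6 s; peak speed occurs at mid-movement.
t, x = minimum_jerk(0.0, 0.4, 0.6)
peak_speed = np.max(np.gradient(x, t))
```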

Finished Projects

Project #313: Movement Coordination in Human-Human and Human-Robot Interaction


PD Dr. Anna Schubö, B.Sc. Cordula Vesper

This project examines movement coordination between two people in order to extract general principles and transfer them to human-robot interaction situations. During any direct interaction, it is crucial to anticipate the timing and positions of the interaction partner’s future movements. In the ball-track paradigm, two participants perform a sequence of pick-and-place movements while their movement trajectories are recorded. The timing and duration of the movements, as well as their velocity and acceleration, are compared with those obtained when one person works alone. The findings will be transferred to human-robot interaction scenarios.
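Since the paradigm compares velocity and acceleration profiles extracted from recorded trajectories, a minimal sketch of how such profiles are typically computed from motion-capture data follows; the function name and sampling details are assumptions for illustration, not the project's pipeline.

```python
import numpy as np

def kinematic_profiles(positions, dt):
    """Velocity and acceleration profiles from a sampled 3-D trajectory.
    positions: array of shape (n_samples, 3); dt: sampling interval in seconds.
    (Illustrative only; real pipelines usually low-pass filter first.)"""
    velocity = np.gradient(positions, dt, axis=0)     # first time derivative
    acceleration = np.gradient(velocity, dt, axis=0)  # second time derivative
    speed = np.linalg.norm(velocity, axis=1)          # scalar speed profile
    return velocity, acceleration, speed
```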

Project #125: Foveated visual attention for cognitive mobile vehicles and cognitive humanoid robots


Prof. Martin Buss, Prof. Hermann Müller, Prof. Werner Schneider

Goal-directed guidance of gaze control based on coordinated task and stimulus parameters is essential for steering a mobile cognitive system efficiently and autonomously through the real world. This project focuses on the mechanisms coordinating top-down and bottom-up, overt and covert attentional allocation, with particular consideration of the familiarity of the current local environment, the local situation, and the global mission. Investigations are conducted with human subjects and cover typical scenarios relevant to the humanoid and vehicle demonstrators. Available computational resources are taken into account by utilizing competition models of visual attention guidance in order to meet the capacity limitations of the mobile demonstrators. Implementable models are derived and implemented, taking into account the particular nature of the humanoid and vehicle demonstrators. In collaboration with project #213, which investigates vision-walking coordination and demonstrator-specific realizations, and project #136, which develops a comprehensive framework of multi-modal attention as a multi-objective optimization problem, a triangle is spanned covering biological inspiration, specialized embodiment, and optimal technical modeling.
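For orientation, the sketch below shows the core computation behind competition models of visual attention guidance of the kind the text mentions (e.g., saliency-map models in the style of Itti & Koch): bottom-up feature-conspicuity maps are normalized, weighted by top-down task relevance, and summed, and the peak of the combined map wins the competition for the next gaze target. The weighting scheme and all names are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def next_gaze_target(feature_maps, top_down_weights):
    """Combine bottom-up conspicuity maps with top-down task weights and
    return the winning location (row, col). Illustrative sketch of a
    saliency/competition model, not the project's actual system."""
    combined = np.zeros_like(next(iter(feature_maps.values())), dtype=float)
    for name, fmap in feature_maps.items():
        fmap = fmap / (fmap.max() + 1e-9)              # normalize each map
        combined += top_down_weights.get(name, 1.0) * fmap
    return np.unravel_index(np.argmax(combined), combined.shape)

# Example: the color map dominates when the task weights color highly.
maps = {'color': np.random.rand(48, 64), 'motion': np.random.rand(48, 64)}
print(next_gaze_target(maps, {'color': 2.0, 'motion': 0.5}))
```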


Project #148: Memory-based mechanisms in the cognitive control of uni- and multimodal attentional selection


Dr. Anna Schubö


Flexible performance in changing environments (e.g., getting oriented in an unknown situation, or searching for a task-relevant object amongst a number of differing, potentially interesting objects) requires the acting person to remain focused on the ‘main’ task and the intended action goals while also partially inspecting the changing environment. Human actors are quite successful in doing so: cognitive control mechanisms allow them to ‘switch’ between different tasks and to do multiple things at the same time, such as memorizing task goals while selecting relevant objects, without strong deterioration in performance.

Cognitive control in selection tasks, for example, involves a number of explicit- and implicit-memory mechanisms operating on different time scales. Such memory-based cognitive control mechanisms guarantee the representation and maintenance of task-relevant information from changeable scenes. Investigating such processes in humans and transferring the insights to technical systems is important for implementing flexible behavior in cognitive technical systems, where (similar to humans) task-relevant information has to be represented and maintained across changeable scenes. The core research topic of the present proposal is the interrelation between such explicit and implicit memory-based mechanisms in selection tasks, their integration over time, and their linkage to higher-level cognitive control processes such as task sets (implemented via instructions), working-memory processes, and the intentions and action goals specified in current action plans.

Besides traditional behavioral methods, event-related brain potentials (ERPs) will be recorded in most experimental studies. Due to their high temporal resolution, ERPs elicited in well-designed task situations will provide important insights into the underlying neuronal mechanisms, their temporal sequence, interplay, and the level of processing at which they are located.

Project #159: ACIPE - Adaptive cognitive interaction in production environments


Prof. Heiner Bubb, Prof. Hermann Müller, Dr. Anna Schubö, Prof. Gerhard Rigoll, Prof. Michael F. Zäh

This proposal covers the integration of cognitive machines into currently human-dominated continuous manual assembly environments, especially for highly variant and complex goods. The goal of adaptive cognitive interaction in production environments (ACIPE) is the dynamic allocation of work content among human workers and automated machines, as well as user- and situation-adaptive assistance for the ergonomic integration of workers. Using innovative methods and devices for experimental data acquisition (emotion and gesture recognition, gaze tracking, EEG, motion tracking, etc.), information on human-human collaboration is collected. Based on this information, essential human mental models are identified and modeled with cognitive architectures. The resulting technical implementations of these models for essential aspects of the cognitive factory allow the realization of naturalistic collaboration strategies with cognitive machines. ACIPE serves as a complex and advanced testbed for evaluating the mental-model approach, as well as the results of other basic research areas, in a realistic assembly-line environment.


Project #165: Dynamic event monitoring in multiple sensory modalities and response production in multiple effector systems


Prof. Hermann Müller, Dr. Anna Schubö, Prof. Thomas Stoffer, Prof. Leo van Hemmen, Prof. Martin Buss

Many applied task situations are dynamic, in that the operator must continually monitor a changing task environment for significant, salient events and produce appropriate responses to them. Critical events may occur in multiple perceptual modalities (e.g., visual, auditory, haptic), and responses may require the use of various effector systems (e.g., manual, oculo-motor, vocal). However, relatively little is known about the control of perception-action couplings in dynamically unfolding situations in which both the relevant input modalities and the requisite output systems change over time. The present project is designed to close this gap in our understanding. The aim of the planned studies, which will combine behavioral analyses with analyses of brain activity, is to characterize the hierarchical organization of the control systems on both the input and the output side of the processing system, and to describe how control emerges in dynamically changing situations. Once the nature and interaction of the control systems have been clarified, they will be modeled using both rule-based and neuro-computational approaches. These (neuro-)cognitive models are intended to provide new insights into, especially, the human-factors-based design of HMIs in cognitive vehicles and factories, as well as efficient multi-modal event monitoring in dynamic environments by cognitive vehicles and robots.

Project #167: Updating of the current scene-representation-for-action: Control mechanisms and capacity limits


Prof. Werner Schneider, Prof. Martin Buss, Dr. Dirk Wollherr

Humans process information about the environment in two stages: a first, sensory preprocessing stage and a second stage that contains the information for goal-directed actions (the 'scene-representation-for-action' stage). Stage two holds only up to four 'object' representations of the current scene at a time. The transfer of information from the first to the second stage is handled by attentional processes. Selective two-stage processing of visual information occurs mostly while the eye is fixating; saccadic eye movements change the fixation location three to four times per second. However, information from previous samples (eye fixations) is not forgotten, but is maintained in a trans-saccadic stage-two representation.

Given the strong capacity limitation of stage two (four objects), and given a dynamic, rapidly changing environment, a fundamental control problem arises in updating the current scene-representation-for-action. Within every eye fixation (environmental sample), a decision has to be made as to which information from previous samples (the last few seconds) should be maintained and which new information from the current fixation should enter the limited stage two. The competition for stage two between previously sampled information and information from the current fixation thus has to be resolved constantly.

Control of this competition-for-updating is exerted by two types of attentional processes, namely visual attention and 'central attention'. For visual attention, stimulus-driven and top-down (task) factors of control have been identified for processing within single fixations. However, it is not known whether these factors and computational principles of selective processing also operate across multiple samples (eye fixations) in the control of updating. This is the main topic of research theme 1, whose guiding question is: are these stimulus-driven and top-down factors of visual attention also effective in updating the scene representation across multiple fixations? 'Central attention' components of updating, such as 'short-term memory consolidation' (STMC), have also been identified experimentally. STMC is necessary when information at stage two has to be actively maintained against competing new environmental input. However, current evidence is restricted to processing within single fixations, and research theme 2 extends this by investigating updating mechanisms of central attention across multiple eye fixations. Based on several series of experiments (themes 1 and 2), neural-network models of updating will be developed and implemented, in reduced form, on a cognitive mobile platform with rapidly shiftable cameras.
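To make the described control problem concrete, here is a minimal, hypothetical sketch of a four-slot stage-two store in which maintained objects and candidates from the current fixation compete for entry via a combined stimulus-driven and top-down priority. The data structure and the priority field are illustrative assumptions; as stated above, the project's actual models of updating are neural networks.

```python
from dataclasses import dataclass

@dataclass
class ObjectRep:
    label: str
    priority: float  # assumed: combined stimulus-driven + top-down (task) weight

class SceneRepresentationForAction:
    """Hypothetical four-slot stage-two store: on each fixation, maintained
    objects compete with newly sampled ones for the limited slots."""
    CAPACITY = 4

    def __init__(self):
        self.slots: list[ObjectRep] = []

    def update(self, new_objects: list[ObjectRep]) -> list[ObjectRep]:
        # Competition-for-updating: keep the CAPACITY highest-priority items
        # out of the union of maintained and currently fixated objects.
        pool = self.slots + new_objects
        pool.sort(key=lambda o: o.priority, reverse=True)
        self.slots = pool[:self.CAPACITY]
        return self.slots

# Example: a high-priority new object displaces the weakest maintained one.
store = SceneRepresentationForAction()
store.update([ObjectRep('cup', 0.9), ObjectRep('plate', 0.5),
              ObjectRep('knife', 0.4), ObjectRep('fork', 0.3)])
print([o.label for o in store.update([ObjectRep('kettle', 0.8)])])
# -> ['cup', 'kettle', 'plate', 'knife']
```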

Project #168: Attention as an economic and flexible mechanism for maintaining complex visual representations


Prof. Werner Schneider, Prof. Jürgen Schmidhuber

This is a proposal to implement a model of the global cortical network structures involved in the interaction between visual perception and attention. Many parts of these network structures already exist in isolation, but no network model yet incorporates them into a single structure. Within such an implementation, different models of attention can be tested, and integrated where they are complementary. Where they make different experimental predictions, as in the case of capacity limits in visual working memory, the implementation may suggest experimental approaches for resolving the contradictions between models. In general, models of visual attention have been hampered by the fact that form processing was tested only on very simple artificial stimuli, so the models were limited in what they could predict about the complex visual world around us in daily life. Recently, a biologically accurate model of form processing in visual cortex has been proposed in the literature. This model can perform object recognition on camera images and is claimed to be at least as good as purely artificial methods. It is proposed to integrate this model into the attentional network model. First, this would make it possible to investigate how attentional mechanisms perform on real images. Second, it would bring the model closer to the experimental psychological paradigms, because the visual stimuli used in the experiments could be applied directly to the model. Finally, the project aims to create a modelling infrastructure that is relevant to other CoTeSys proposals.

Project #181: Attentional selection of movement goals: Mechanisms and capacity limitations


Prof. Heiner Deubel

Goal-directed actions, of human actors as well as of robots, presuppose processes that continuously select the action goals and provide the various parameters relevant to the intended movement. The main goal of the project is to analyse, for a variety of increasingly complex sensorimotor tasks, the action-specific, selective spatial processing that occurs long before the onset of the movement. The planned research will focus on three closely related themes. A first, major part of the project will study, in human subjects, the spatial and temporal properties of selective visual and tactile sensory processing before manual grasping movements. The second research theme addresses selection and strategies in movements involving several effectors, such as eye-hand and hand-hand movements directed at spatially separate targets. A third subproject focuses on the effect of central processing limitations in various sensorimotor tasks, and on the strategies humans use to optimise behaviour in spite of these constraints.

The research will be done in close collaboration with:
  • Patrick van der Smagt, Institute of Robotics and Mechatronics, DLR
  • Werner Wolf, Institut für Mathematik und Datenverarbeitung, Bundeswehr University