Research
Key Research Areas:
- Collaborative mobile robot autonomy in 3D space
- State estimation aware control and motion planning
- On-board environment perception and decision making
- Long-term task execution in changing environments
- Networked mission planning for multiple agents
Running Projects
Automated Log Ordering through robotic Grasper
Project Leadership
Duration
01.04.2018 - 31.03.2021
Funding
Produktion der Zukunft (Production of the Future)
AutoLOG will focus on research towards automating challenging DDD (dull, dangerous, dirty) tasks in the handling of raw material on production lines that are currently executed manually. We will focus on AI-inspired, vision-based approaches to categorize and segment the raw material and its geometry, and subsequently to define (again through AI) optimal handling/grasping poses for the automation machinery. We seek to automate existing infrastructure in a versatile and cost-effective way. Thus, we will investigate retrofittable sensors and robust control strategies for the seamless and cost-efficient upgrading, retrofitting, and automation of existing infrastructure. As a specific application scenario with immediate benefit to Austria's industry, we will tackle the problem of autonomously grasping logs to move them from the truck to the processing machinery.
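The grasping step described above can be illustrated with a minimal sketch. Assuming the vision pipeline has already segmented one log as a 3D point cluster (the function name and the PCA-based approach are illustrative assumptions, not the project's actual method), a grasp pose can be derived from the cluster's centroid and principal axis:

```python
import numpy as np

def grasp_pose_from_cluster(points):
    """Estimate a grasp pose for one segmented log.

    points: (N, 3) array of 3D points belonging to a single log.
    Returns (center, axis): a grasp position and the log's long axis.
    """
    center = points.mean(axis=0)
    # Principal axis of the cluster via PCA: for an elongated log this
    # is the stem direction, along which the gripper jaws are aligned.
    cov = np.cov((points - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]  # eigenvector of largest eigenvalue
    return center, axis
```

A real system would additionally check gripper clearance and load limits; this sketch only shows how segmentation output maps to a candidate pose.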
Cooperation Partner
Forest Inventorying with Micro Aerial vehicles for autonomous Tree-parameter Estimation
Project Leadership
Project Staff
Duration
01.09.2016 - 31.03.2020
Funding
Österreichische Forschungsförderungsgesellschaft mbH (FFG)
Homepage
We will pursue a concentrated research effort to enable a research platform prototype consisting of one unmanned aerial vehicle (UAV) navigating autonomously through managed mature forest, providing sufficiently dense visual data for accurate 3D reconstruction and subsequent autonomous extraction of ecological data from objects of interest. The ecological data includes the estimation of the position of the trees, the diameter at breast height, the stem shape, and the coverage of the herb layer. The project will yield innovative new algorithms for GPS-independent, vision-based autonomous UAV navigation, including self-healing state estimation, vision-based obstacle avoidance, and adaptive path planning. In addition, novel 3D reconstruction algorithms will enable on-site extraction of ecological forest parameters with unprecedented precision and efficiency in both time and cost.
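One of the parameters named above, the diameter at breast height (DBH), can be sketched as a circle fit to a thin horizontal slice of the reconstructed stem point cloud at 1.3 m above ground. The function name, slice width, and the algebraic (Kasa) circle fit are illustrative assumptions, not the project's actual algorithm:

```python
import numpy as np

def dbh_from_slice(points, slice_height=1.3, slice_width=0.1):
    """Estimate diameter at breast height from a stem point cloud.

    points: (N, 3) array; z is height above ground in metres.
    Fits a circle (Kasa algebraic fit) to the points in a thin
    horizontal slice around slice_height and returns its diameter.
    """
    z = points[:, 2]
    mask = np.abs(z - slice_height) < slice_width / 2
    x, y = points[mask, 0], points[mask, 1]
    # Kasa fit: x^2 + y^2 = a*x + b*y + c with a=2cx, b=2cy,
    # c = r^2 - cx^2 - cy^2, solved in least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2, b / 2
    radius = np.sqrt(c + cx**2 + cy**2)
    return 2 * radius
```

On noisy field data a robust fit (e.g. RANSAC over the slice) would replace the plain least-squares step.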
Cooperation Partner
MOdular, Distributed and Uncertainty aware sensor-fusion for Long-term Exploration with autonomous Systems
MODULES tackles the challenges of high-precision state estimation, applied to the approach, landing, and subsequent take-off of a multi-copter on a landing pad for autonomous recharging. The goal is to develop a reliable algorithm for state estimation using multiple sensor modalities in a consistent, real-time multi-sensor fusion framework, which will serve as the backbone for the high-precision maneuvers. Beyond fusing the data of a given sensor suite, we will analyze how to increase the modularity of such a framework so that sensor signals and entire sensor modules can adaptively be added and removed in flight without disabling state estimation for reliable UAV navigation. System self-calibration, fast state convergence, and consistency are key aspects to be considered.
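The in-flight add/remove idea can be sketched in a few lines. This is a deliberately minimal 1-D Kalman-style estimator, not the MODULES framework itself: sensor modules are registered by name with their noise variance, and dropping one never stops the filter:

```python
class ModularFilter:
    """Minimal sketch of a modular state estimator: sensor update
    modules can be registered and removed while the filter runs.
    The state here is a single scalar position with variance P."""

    def __init__(self, x0=0.0, p0=1.0):
        self.x, self.P = x0, p0
        self.sensors = {}  # sensor name -> measurement noise variance R

    def add_sensor(self, name, noise_var):
        self.sensors[name] = noise_var   # hot-plug a sensor module

    def remove_sensor(self, name):
        self.sensors.pop(name, None)     # drop it without disabling estimation

    def update(self, name, z):
        if name not in self.sensors:
            return                       # unregistered module: ignore, keep running
        R = self.sensors[name]
        K = self.P / (self.P + R)        # Kalman gain for a direct measurement
        self.x += K * (z - self.x)
        self.P *= (1 - K)
```

The real problem is of course multi-dimensional and must handle inter-sensor calibration states, but the registry pattern conveys how modules stay decoupled from the core filter.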
Cooperation Partner
Tightly Coupled Visual-Inertial Multi-Sensor Fusion for Robust Navigation of Computationally Constrained UAVs
Project Leadership
Project Staff
Duration
01.08.2016 - 31.12.2021
Funding
U.S. Army International Technology Center – Atlantic
Visual information fused with inertial cues has proven able to provide pose information to robots in a variety of scenarios. However, current real-time-capable solutions still consume most of the resources on computationally constrained platforms, require well-textured and low-clutter areas, and do not make use of the dense information the camera image provides. VI-MuSe will investigate further reducing the computational complexity of visual-inertial state estimation while at the same time increasing the information used from the camera image, to mitigate these limitations. In addition, the project will investigate the use of other sensors to mitigate failure modes in visually homogeneous areas. In contrast to state-of-the-art multi-sensor fusion algorithms, the goal is to develop a method to seamlessly add sensors at run-time without the need for pre-calibration.
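One common way to avoid pre-calibration, sketched here under strong simplifying assumptions (a 1-D position state and a sensor with an unknown constant offset; the class and method names are illustrative, not VI-MuSe's design), is state augmentation: the new sensor's calibration parameter is appended to the filter state with a large initial variance and estimated online alongside the pose.

```python
import numpy as np

class SelfCalibratingFilter:
    """State x = [position, sensor_offset]. A newly added sensor's
    unknown offset starts with large variance and is estimated in flight."""

    def __init__(self):
        self.x = np.array([0.0, 0.0])      # position, offset
        self.P = np.diag([1.0, 100.0])     # offset is initially unknown

    def update(self, H, z, R):
        # Standard Kalman update for a linear measurement z = H @ x + noise.
        # The offset sensor uses H = [1, 1] (it sees position + its own bias);
        # an offset-free reference sensor uses H = [1, 0], which makes
        # position and offset jointly observable.
        H = np.atleast_2d(np.asarray(H, dtype=float))
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ (z - H @ self.x)).ravel()
        self.P = self.P - K @ H @ self.P
```

Feeding alternating measurements from both sensors drives the offset estimate to the true bias, i.e. the filter self-calibrates the late-added sensor during operation.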
Cooperation Partner
A list of all projects of the Institute of Smart System Technology can be found in the Forschungsdokumentation (FoDok).
Completed Projects
Address
Universitätsstraße 65-67
9020 Klagenfurt am Wörthersee
Austria
+43 463 2700
uni@aau.at
www.aau.at