The Academic Day is an afternoon during which various researchers of the Artificial Intelligence and Computing Science departments of the University of Groningen will present their current projects. The goal of the afternoon is to show what these departments are working on.
Short presentations will give you insight into the researchers' scientific explorations, and there is also room for questions and discussion.
This afternoon can be interesting for AI and CS students at the UG, from first-year to final-year, but also for students from other programmes and universities. Staff from other departments and universities are welcome too: everyone with an interest in science can join!
At the end of the afternoon we will conclude with drinks, during which staff will be present to answer more questions or discuss other topics.
The Academic Day will take place on the 26th of February, from 13:00 till 17:00, in room 151 of the Bernoulliborg.
| Time | Talk | Speaker | Group |
|------|------|---------|-------|
| 13:20 | Can logic save the world from software bugs? | Jorge Pérez | Fundamental Computing |
| 13:40 | Turning homes and buildings into intelligent spaces by coordinating the Internet of Things | Ilche Georgievski | Distributed Systems |
| 14:00 | Robust Classification of Wild Animal Species using Variants of Bag of Visual Words with Local Feature Descriptors | Emmanuel Okafor | Autonomous Perceptive Systems |
| 14:20 | Representation of Shape: From Engineering to Entertainment | Jiri Kosinka | Scientific Visualisation & Computer Graphics |
| 14:40 | On the difficulty of letting a computer get distracted | Marieke van Vugt | Cognitive Modeling |
| 15:00 | Break with tea and coffee | | |
| 15:30 | T.B.A. | Lambert Schomaker | Autonomous Perceptive Systems |
| 15:50 | Your source code is my data | Mircea Lungu | Software Engineering |
| 16:10 | Reasoning with legal evidence | Charlotte Vlek | Multi-agent Systems |
| 16:30 | Underwater imaging using an artificial fish lateral line organ | Sietse van Netten | Autonomous Perceptive Systems |
| 16:50 | Closing & drinks | | |
The once-futuristic vision of homes and buildings enriched with gadgets that anticipate our needs, improve our comfort, and save energy is slowly taking shape. Thermostats may learn our preferences and consider our schedule, and lights turn on and off as we come and go. Such applications make living spaces a bit smarter, but they are not really intelligent. Even though they are connected as the Internet of Things, these devices are loners in their operations, unaware of the overall situation and needs of our homes or buildings. Intelligence can be brought in by coordinating the Internet of Things collectively and dynamically, so that living spaces can make fast and precise decisions about our comfort and energy usage. We look at computational techniques to develop intelligent solutions for automated control in homes and buildings.
Most of us have been faced with software bugs, in the programs we use every day but also in the programs we write ourselves. Software bugs have a negative societal effect — how can we get rid of them? My short talk will give insights into how the design of correct programs can benefit from sound mathematical principles derived from logic. This is an active research area pursued at the Fundamental Computing group of the Johann Bernoulli Institute.
Engineering and entertainment, two major industries, use different surface representations to achieve the same goal: representing shapes (a car, a boat, a 3D animated film character) in the computer. In engineering (Computer-Aided Design), the standard representation tool is splines. On the other hand, subdivision dominates in entertainment (Pixar). Our research focuses on bridging the gap between these two shape representations, and thus the two industries. For example, if an automotive company wished to make an animated 3D film or advertisement featuring their car model, our framework would allow them to use the same, or an automatically converted, model in an animation rendering system.
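The link between the two representations can be illustrated with a classic result: Chaikin's corner-cutting subdivision scheme converges to a quadratic B-spline curve, so the "entertainment" representation (repeated refinement) and the "engineering" representation (a spline) describe the same limit shape. Below is a minimal NumPy sketch of one such scheme; it is a textbook illustration, not the group's actual conversion framework.

```python
import numpy as np

def chaikin(points, steps=3):
    """Chaikin corner cutting: each edge of the control polygon is replaced
    by two points at 1/4 and 3/4 along it. Repeating this refinement
    converges to the quadratic B-spline curve of the control points."""
    pts = np.asarray(points, dtype=float)
    for _ in range(steps):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point 1/4 along each edge
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point 3/4 along each edge
        pts = np.empty((2 * len(q), 2))
        pts[0::2], pts[1::2] = q, r            # interleave into refined polygon
    return pts

# Refining a square control polygon: the polyline smooths out toward the
# quadratic B-spline, while staying inside the convex hull of the input.
curve = chaikin([(0, 0), (1, 0), (1, 1), (0, 1)], steps=3)
```

Because every new point is a convex combination of old ones, the refined polygon never leaves the convex hull of the control points, a property shared by B-splines.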
Understanding the source code of software systems is challenging. However, if we treat the source code of a software system as data, then we can apply the tools and techniques of data science: information visualization and data mining. Surely, the ultimate perspective on source code as data belongs to the compiler.
Fish are able to detect the position of objects moving under water using their lateral line organ. We are designing, producing and testing sensor elements to be used in artificial hydrodynamic sensory arrays that mimic the function of the lateral line. In addition, we work on signal processing that combines the information from the individual sensor elements of such arrays and will facilitate image formation of the underwater environment.
In a criminal trial, a judge or jury is presented with a collection of evidence. Their first task, before reaching a verdict, is to draw a conclusion about what may have happened: Was the suspect present at the crime scene? Did she indeed fire the gun? How did the sequence of events unfold? Drawing conclusions from (legal) evidence requires logical reasoning. But with modern forensic techniques, it also often requires statistical reasoning. In our research project, we aim to develop a method for reasoning with legal evidence by combining three distinct approaches: reasoning with scenarios, reasoning with arguments and reasoning with probabilities.
Computers are very good at what they do, and they keep doing what they are told to do indefinitely. This creates a problem when we use computers to model the mind. How can we make a computer get distracted? I will demonstrate how we investigate distraction and how we simulate these results with cognitive models.
We proposed the combination of bag of visual words (BOW) with local descriptors, which is used to extract feature vectors from our novel wild-animal datasets (Wild-Anim). These descriptors include: Histogram of Oriented Gradients (HOG), local Color Histogram (Color-Hist), local Color and Edge Directivity Descriptor (CEDD) and Histogram of Oriented Gradient SIFT (HOG-SIFT).
The process of setting up BOW is as follows: patches are extracted from the unlabeled training set using each of the descriptors mentioned above, and the extracted outputs are used to construct codebooks via K-means clustering. A sum-pooling approach is then applied to the codebooks over 4 quadrants of each image in our datasets. This yields the feature vectors for the different BOW variants (HOG-BOW, Color-Hist-BOW, CEDD-BOW and HOG-SIFT-BOW). Finally, we apply an L2-SVM classifier to the outputs of the BOW variants to distinguish between the different animal species in our datasets.
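The pipeline above (codebook construction, nearest-codeword assignment, sum pooling over four quadrants) can be sketched in plain NumPy. This is a simplified stand-in: raw pixel patches replace the HOG/SIFT/CEDD descriptors, the SVM step is omitted, and `build_codebook` and `bow_features` are illustrative names, not the authors' code.

```python
import numpy as np

def build_codebook(patches, k=8, iters=10, seed=0):
    """K-means clustering of descriptor vectors into k visual codewords."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center, then recompute means
        d = np.linalg.norm(patches[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = patches[labels == j].mean(axis=0)
    return centers

def bow_features(image, codebook, patch=4):
    """Sum-pooled bag-of-visual-words histogram over the 4 image quadrants."""
    k = len(codebook)
    h, w = image.shape
    hist = np.zeros((2, 2, k))
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            desc = image[y:y+patch, x:x+patch].ravel()  # raw-pixel stand-in for HOG/SIFT
            word = np.linalg.norm(codebook - desc, axis=1).argmin()
            hist[2 * y // h, 2 * x // w, word] += 1     # sum pooling per quadrant
    return hist.ravel()                                 # feature vector of length 4*k
```

The resulting per-quadrant histograms, concatenated into one vector per image, are what a linear (L2) SVM would then be trained on.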
Results showed that HOG-BOW, on both the gray and the color datasets, outperforms all other state-of-the-art feature descriptors, with classification accuracies of 82.9% and 78.6%, respectively. We also improved the state-of-the-art HOG-BOW and BOW algorithms by incorporating a local color extractor that processes color patches containing the local color information of the images in our datasets, unlike existing methods, which are based on local feature extraction from gray-scale patches. Finally, we report experimental results for the two best feature descriptors combined with the L2-SVM on 2 animal benchmark datasets.