DeLTA 2023 Abstracts


Area 1 - Big Data Analytics

Short Papers
Paper Nr: 14
Title:

Dynamic Prediction of Survival Status in Patients Undergoing Cardiac Catheterization Using a Joint Modeling Approach

Authors:

Derun Xia, Yi-An Ko, Shivang Desai and Arshed A. Quyyumi

Abstract: Background: Traditional cardiovascular disease risk factors have a limited ability to precisely predict patient survival outcomes. To better stratify the risk of patients with established coronary artery disease (CAD), it is useful to develop dynamic prediction tools that update their predictions by incorporating time-varying data to enhance disease management. Objective: To dynamically predict myocardial infarction (MI) or cardiovascular death (CV-death) and all-cause death among patients undergoing cardiac catheterization using their electronic health record (EHR) data over time, and to evaluate the prediction accuracy of the model. Methods: Data from 6119 participants were obtained from the Emory Cardiovascular Biobank (EmCAB). We constructed a joint model with multiple longitudinal variables to dynamically predict MI/CV-death and all-cause death. The cumulative effect and slope of longitudinally measured variables were considered in the model. The time-dependent area under the receiver operating characteristic (ROC) curve (AUC) was used to assess discriminating capability, and the time-dependent Brier score was used to assess prediction error. Results: In addition to existing risk factors such as disease history, changes in several clinical variables that are routinely collected in the EHR contributed significantly to adverse events. For example, decreases in glomerular filtration rate (GFR), body mass index (BMI), high-density lipoprotein (HDL), and systolic blood pressure (SBP), and an increase in troponin-I, increased the hazard of MI/CV-death and all-cause death. More rapid decreases in GFR and BMI (corresponding to a decreasing slope) increased the hazard of MI/CV-death and all-cause death. A more rapid increase in diastolic blood pressure (DBP) and a more rapid decrease in SBP increased the hazard of all-cause death. The time-dependent AUCs of the traditional Cox proportional hazards model were higher than those of the joint model for MI/CV-death and all-cause death. The Brier scores of the joint model were also higher than those of the Cox proportional hazards model. Conclusion: Joint modeling that incorporates longitudinally measured variables to achieve dynamic risk prediction can in principle improve on conventional risk assessment models and be clinically useful. However, the joint model did not appear to perform better than a Cox regression model in our study. Possible reasons include data availability, selection bias, and quality uncertainty in the EHR. Future studies should address these issues when developing dynamic prediction models.
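
The evaluation side of such a study can be sketched with the scikit-survival library: the snippet below computes time-dependent AUC and Brier scores for a Cox model on one of that library's bundled datasets, standing in for the EmCAB data and the authors' joint model, which are not public.

    # Minimal sketch: time-dependent AUC and Brier score for a survival model,
    # using scikit-survival. Data and model are placeholders, not the EmCAB study.
    import numpy as np
    from sksurv.datasets import load_whas500
    from sksurv.linear_model import CoxPHSurvivalAnalysis
    from sksurv.metrics import brier_score, cumulative_dynamic_auc
    from sksurv.preprocessing import OneHotEncoder

    X, y = load_whas500()                     # y: structured array (event, time)
    Xt = OneHotEncoder().fit_transform(X)
    model = CoxPHSurvivalAnalysis().fit(Xt, y)

    times = np.percentile(y["lenfol"], [25, 50, 75])   # evaluation horizons
    risk = model.predict(Xt)                  # higher score = higher hazard

    auc, mean_auc = cumulative_dynamic_auc(y, y, risk, times)
    # the Brier score expects survival probabilities at each horizon
    surv_fns = model.predict_survival_function(Xt)
    preds = np.asarray([[fn(t) for t in times] for fn in surv_fns])
    _, brier = brier_score(y, y, preds, times)
    print("time-dependent AUC:", auc, "Brier scores:", brier)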

Area 2 - Computer Vision Applications

Full Papers
Paper Nr: 21
Title:

Vision Transformers for Galaxy Morphology Classification: Fine-Tuning Pre-Trained Networks vs. Training from Scratch

Authors:

Rahul Kumar, Md Kamruzzaman Sarker and Sheikh Rabiul Islam

Abstract: In recent years, the Transformer-based deep learning architecture has become extremely popular for downstream tasks, especially within the field of Computer Vision. However, transformer models are very data-hungry, making them challenging to adopt in many applications where data is scarce. Using transfer learning techniques, we explore the classic Vision Transformer (ViT) and its ability to transfer features from the natural image domain to classify images in the galactic image domain. Using the weights of models trained on ImageNet (a popular benchmark dataset for Computer Vision), we compare the results of two distinct ViTs: one base ViT (without pre-training) and another fine-tuned ViT pre-trained on ImageNet. Our experiments on the Galaxy10 dataset show that by using the pre-trained ViT model, we can get better accuracy compared to the ViT model built from scratch and do so with a faster training time. Experimental data further shows that the fine-tuned ViT model can achieve similar accuracy to the model built from scratch while using less training data.
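
The fine-tuning-vs-from-scratch comparison the paper runs can be sketched with the timm library; the model name, hyperparameters, and smoke-test tensors below are illustrative assumptions, not the authors' exact configuration (Galaxy10 has 10 classes).

    # Minimal sketch: an ImageNet-pretrained ViT vs. the same architecture from
    # scratch, via timm. Dataset loading and hyperparameters are illustrative.
    import timm
    import torch
    from torch import nn, optim

    def build(pretrained: bool):
        # Galaxy10 has 10 morphology classes
        return timm.create_model("vit_base_patch16_224",
                                 pretrained=pretrained, num_classes=10)

    model = build(pretrained=True)            # set False for the from-scratch run
    opt = optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(images, labels):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
        return loss.item()

    # smoke test with random tensors shaped like a batch of 224x224 RGB images
    print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,))))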

Paper Nr: 24
Title:

MSDeepNet: A Novel Multi-Stream Deep Neural Network for Real-World Anomaly Detection in Surveillance Videos

Authors:

Prabhu Prasad Dev, Pranesh Das and Raju Hazari

Abstract: Anomaly detection in real-time surveillance videos is a challenging task due to the scenario dependency, duration, and multiple occurrences of anomalous events. Typically, weakly supervised video anomaly detection that involves video-level labels is expressed as a multiple instance learning (MIL) problem. The objective is to detect the video clips containing abnormal events while representing each video as a collection of such clips. Existing MIL classifiers assume that the training videos only contain anomalous events of short duration. However, this may not hold true for all real-life anomalies, and it cannot be ruled out that there are multiple occurrences of anomalies in the training videos. This paper demonstrates that incorporating temporal information in feature extraction can enhance the performance of anomaly detection. To achieve this objective, a novel multi-stream deep neural network (MSDeepNet) is proposed, employing spatio-temporal deep feature extractors along with a weakly supervised temporal attention module (WS-TAM). The features extracted from the individual streams are fed to train the modified MIL classifier by employing a novel temporal loss function. Finally, a fuzzy fusion method is used to aggregate the anomaly detection scores. To validate the performance of the proposed method, comprehensive experiments have been performed on the large-scale benchmark UCF-Crime dataset. The suggested multi-stream architecture outperforms state-of-the-art video anomaly detection methods with a frame-level AUC score of 84.72% for detecting anomalous events and the lowest false alarm rate of 0.9% for detecting normal events.
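
The MIL formulation the paper builds on is commonly instantiated as the ranking objective of Sultani et al.; the sketch below shows that generic baseline loss, not the authors' novel temporal loss function, which is not reproduced here.

    # Minimal sketch of the standard MIL ranking loss for weakly supervised video
    # anomaly detection (after Sultani et al.): the highest-scoring clip of an
    # anomalous video should outrank the highest-scoring clip of a normal video.
    # Smoothness and sparsity terms regularize the anomalous bag's scores.
    import torch

    def mil_ranking_loss(scores_anom, scores_norm, l1=8e-5, l2=8e-5):
        # scores_*: (n_clips,) anomaly scores in [0, 1] for one video each
        hinge = torch.relu(1.0 - scores_anom.max() + scores_norm.max())
        smooth = ((scores_anom[1:] - scores_anom[:-1]) ** 2).sum()
        sparse = scores_anom.sum()
        return hinge + l1 * smooth + l2 * sparse

    loss = mil_ranking_loss(torch.rand(32), torch.rand(32))
    print(loss.item())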

Short Papers
Paper Nr: 28
Title:

Using Artificial Intelligence to Reduce the Risk of Transfusion Hemolytic Reactions

Authors:

Maya Trutschl, Urska Cvek and Marjan Trutschl

Abstract: The monocyte monolayer assay is a cellular assay, an in-vitro procedure that mimics extravascular hemolysis. It was developed to predict the clinical significance of red blood cell antibodies in transfusion candidates, with the intent of determining whether a patient needs to receive expensive, rare, antigen-negative blood to avoid an acute hemolytic transfusion reaction that could lead to death. The assay requires a highly trained technician to spend several hours evaluating a minimum of 3,200 monocytes on a glass slide under a microscope in a cumbersome process of repetitive counting. Using the YOLO neural network model, we automate the process of identifying and categorizing monocytes from slide images, a significant improvement over the manual counting method. With this technology, blood bank technicians can save time and effort while increasing accuracy in the evaluation of blood transfusion candidates, leading to faster and better medical diagnosis. The trained model was integrated into an application that can locate, identify, and categorize monocytes, separating them from the background and noise in the images acquired by an optical microscope camera. Experiments involving a real-world data set demonstrate that the F1-score, mAP, precision, and recall are all above 90%, indicating that this workflow can ease and accelerate the medical laboratory technician's repetitive, cumbersome, and error-prone counting process, and therefore contributes to the accuracy of medical diagnosis systems.
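
The detect-and-count workflow can be sketched with the ultralytics YOLO package; the weights file, image path, and class names below are hypothetical placeholders, not the authors' trained model.

    # Minimal sketch: detecting and counting cell classes on a slide image with
    # a YOLO model via the ultralytics package. Weights and paths are invented.
    from collections import Counter
    from ultralytics import YOLO

    model = YOLO("monocyte_yolo.pt")          # hypothetical fine-tuned weights
    results = model.predict("slide_image.jpg", conf=0.5)

    boxes = results[0].boxes
    counts = Counter(results[0].names[int(c)] for c in boxes.cls)
    print(counts)                             # per-class monocyte counts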

Paper Nr: 47
Title:

GAN-Powered Model&Landmark-Free Reconstruction: A Versatile Approach for High-Quality 3D Facial and Object Recovery from Single Images

Authors:

Michael Danner, Patrik Huber, Muhammad Awais, Matthias Rätsch and Josef Kittler

Abstract: In recent years, 3D facial reconstructions from single images have garnered significant interest. Most approaches are based on 3D Morphable Model (3DMM) fitting to reconstruct the 3D face shape. Concurrently, the adoption of Generative Adversarial Networks (GAN) has been gaining momentum to improve the texture of reconstructed faces. In this paper, we propose a fundamentally different approach to reconstructing the 3D head shape from a single image by harnessing the power of GANs. Our method predicts three maps of normal vectors for the head's frontal, left, and right poses. We thus present a model-free method that does not require any prior knowledge of the geometry of the object to be reconstructed. The key advantage of our proposed approach is the substantial improvement in reconstruction quality compared to existing methods, particularly for facial regions that are self-occluded in the input image. Our method is not limited to 3D face reconstruction; it is generic and applicable to multiple kinds of 3D objects. To illustrate its versatility, we demonstrate its efficacy in reconstructing the entire human body. By delivering a model-free method capable of generating high-quality 3D reconstructions, this paper not only advances the field of 3D facial reconstruction but also provides a foundation for future research and applications spanning multiple object types. The implications of this work have the potential to extend far beyond facial reconstruction, paving the way for innovative solutions and discoveries in various domains.

Paper Nr: 48
Title:

GAN-Based LiDAR Intensity Simulation

Authors:

Richard Marcus, Felix Gabel, Niklas Knoop and Marc Stamminger

Abstract: Realistic vehicle sensor simulation is an important element in developing autonomous driving. As physics-based implementations of visual sensors like LiDAR are complex in practice, data-based approaches promise solutions. Using pairs of camera images and LiDAR scans from real test drives, GANs can be trained to translate between them. For this process, we contribute two additions. First, we exploit the camera images, acquiring segmentation data and dense depth maps as additional input for training. Second, we evaluate the LiDAR simulation by testing how well an object detection network generalizes between real and synthetic point clouds, enabling evaluation without ground truth point clouds. Combining both, we simulate LiDAR point clouds and demonstrate their realism.

Paper Nr: 13
Title:

Towards Exploring Adversarial Learning for Anomaly Detection in Complex Driving Scenes

Authors:

Nour Habib, Yunsu Cho, Abhishek Buragohain and Andreas Rausch

Abstract: Autonomous Systems (ASs), such as self-driving cars, perform various safety-critical functions. Many of these systems take advantage of Artificial Intelligence (AI) techniques to perceive their environment. However, these perception components cannot be formally verified, since the accuracy of such AI-based components depends heavily on the quality of the training data. Machine learning (ML)-based anomaly detection, a technique to identify data that does not belong to the training data, could therefore be used as a safety indicator during the development and operation of such AI-based components. Adversarial learning, a subfield of machine learning, has proven its ability to detect anomalies in images and videos with impressive results on simple data sets. Therefore, in this work, we investigate and provide insight into the performance of such techniques on a highly complex driving scenes dataset called Berkeley DeepDrive.

Paper Nr: 44
Title:

Generative Adversarial Networks for Domain Translation in Unpaired Breast DCE-MRI Datasets

Authors:

Antonio Galli, Michela Gravina, Stefano Marrone and Carlo Sansone

Abstract: Generative Adversarial Networks (GAN) have gained a lot of attention in the computer vision community due to their capability for data generation, in particular for domain adaptation and image-to-image translation tasks. These properties have attracted the medical community too, as a means to solve some complex biomedical challenges, such as translation between different medical imaging acquisition protocols. Indeed, as the actual acquisition protocol depends strongly on factors such as the operator, the aim, the centre, etc., gathering cohorts of patients all sharing the same typology of imaging is an open challenge. In this paper, we propose to face this problem by using a GAN to realise a domain translation architecture for breast Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), considering two different acquisition protocols, in the context of automatic lesion classification. Although this work is intended as a first step toward artificial data generation in the medical domain, the obtained results have been analysed from both a quantitative and qualitative point of view, in order to evaluate the correctness and quality of the proposed architecture as well as its usability in a clinical scenario.

Paper Nr: 53
Title:

FRLL-Beautified: A Dataset of Fun Selfie Filters with Facial Attributes

Authors:

Shubham Tiwari, Yash Sethia, Ashwani Tanwar, Ritesh Kumar and Rudresh Dwivedi

Abstract: There is a need to assess the impact of filters on the performance of face recognition systems. For this, a standard dataset should be available with relevant filters applied. Currently, such datasets are not publicly available, and the few datasets with filters applied are very low in resolution and thus not suitable for use. To mitigate these limitations, we aim to create a high-quality face dataset with filters applied over the images. The proposed dataset provides high-quality images with ten different filters applied to them, ranging from beautification and AR-based filters to filters that modify facial landmarks. This wide range of filters includes occlusion and beautification applied to the selfies, allowing a more diverse set of faces to be experimented with and analyzed in biometric systems. The dataset will contribute further to the set of facial datasets available publicly and will allow researchers to study the impact of filters on facial features with a common public benchmark.

Area 3 - Models and Algorithms

Full Papers
Paper Nr: 11
Title:

Synthetic Network Traffic Data Generation and Classification of Advanced Persistent Threat Samples: A Case Study with GANs and XGBoost

Authors:

T. J. Anande and M. S. Leeson

Abstract: The need to develop more efficient network traffic data generation techniques that can reproduce the intricate features of traffic flows forms a central element in secured monitoring systems for networks and cybersecurity. This study investigates selected Generative Adversarial Network (GAN) architectures to generate realistic network traffic samples. It incorporates Extreme Gradient Boosting (XGBoost), an ensemble machine learning algorithm used effectively for the classification and detection of observed and unobserved Advanced Persistent Threat (APT) attack samples in the synthetic and new data distributions. Results show that the Wasserstein GAN architectures achieve optimal generation with a sustained Earth Mover distance estimation of 10^-3 between the Critic loss and the Generator loss, compared to the vanilla GAN architecture. Performance statistics using XGBoost and other evaluation metrics indicate successful generation and detection with an accuracy of 99.97%, a recall rate of 99.94%, and 100% precision. Further results show a 99.97% F1 score for detecting APT samples in the synthetic data, and a Receiver Operator Characteristic Area Under the Curve (ROC_AUC) value of 1.0, indicating optimum behavior, surpassing previous state-of-the-art methods. When evaluated on unseen data, the proposed approach maintains optimal detection performance with 100% recall, 100% Area Under the Curve (AUC) and precision above 90%.
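
The detection stage of such a pipeline can be sketched with the xgboost package; the features below are synthetic stand-ins generated with scikit-learn, not GAN-produced APT traffic, and the hyperparameters are illustrative.

    # Minimal sketch of the detection stage: an XGBoost classifier evaluated with
    # the metrics reported in the paper, on synthetic imbalanced data.
    from sklearn.datasets import make_classification
    from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                                 recall_score, roc_auc_score)
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=5000, n_features=30, weights=[0.9, 0.1])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")
    clf.fit(X_tr, y_tr)

    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    print("acc", accuracy_score(y_te, pred), "recall", recall_score(y_te, pred),
          "precision", precision_score(y_te, pred), "f1", f1_score(y_te, pred),
          "roc_auc", roc_auc_score(y_te, proba))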

Paper Nr: 22
Title:

A Study of Neural Collapse for Text Classification

Authors:

Jia Hui (Sherry) Feng, Edmund M-K Lai and Weihua Li

Abstract: The phenomenon of Neural Collapse (NC) has opened new areas for research in deep learning. In this paper, we verify that the neural collapse phenomenon also occurs in text classification. However, the NC model performed very poorly in the classification of test data. Our experiments led to the discovery of possible deficiencies in the labeling of the original dataset that contribute to poor classification accuracy. We were able to find a hidden cluster towards which some of the data points were converging; this hidden cluster turned out to be an additional topic. Specifically for the AG News dataset, the NC model was able to identify an additional topic that labels news as having a local or a global impact. This indicates that NC can be used for cluster discovery in semi-supervised learning situations.
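
Neural collapse is usually quantified on penultimate-layer features; the sketch below computes a simplified NC1-style statistic (the ratio of within-class to between-class scatter) on random placeholder embeddings, not the paper's trained model.

    # Minimal sketch: an NC1-style neural-collapse statistic, the trace ratio of
    # within-class to between-class scatter of penultimate-layer features.
    # Embeddings and labels are random placeholders here.
    import numpy as np

    def nc1_statistic(feats, labels):
        mu_g = feats.mean(axis=0)
        sw, sb, k = 0.0, 0.0, 0
        for c in np.unique(labels):
            fc = feats[labels == c]
            mu_c = fc.mean(axis=0)
            sw += ((fc - mu_c) ** 2).sum()
            sb += len(fc) * ((mu_c - mu_g) ** 2).sum()
            k += 1
        return (sw / len(feats)) / (sb / k)   # small value => strong collapse

    feats = np.random.randn(400, 64)
    labels = np.random.randint(0, 4, 400)     # e.g. the 4 AG News topics
    print(nc1_statistic(feats, labels))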

Paper Nr: 36
Title:

TaxoSBERT: Unsupervised Taxonomy Expansion Through Expressive Semantic Similarity

Authors:

Daniele Margiotta, Danilo Croce and Roberto Basili

Abstract: Knowledge graphs are crucial resources for a large set of document management tasks, such as text retrieval and classification as well as natural language inference. Standard examples are large-scale lexical semantic graphs, such as WordNet, useful for text tagging or sentence disambiguation purposes. The dynamics of lexical taxonomies is a critical problem, as they need to be maintained to follow language evolution over time. Taxonomy expansion, in this sense, becomes a critical semantic task, as it allows existing resources to be extended with new properties but also new entries, i.e. taxonomy concepts, to be created when necessary. Previous work on this topic suggests the use of neural learning methods able to make use of the underlying taxonomy graph as a source of training evidence. This can be done by graph-based learning, where nets are trained to encode the underlying knowledge graph and to predict appropriate inferences. This paper presents TaxoSBERT, a simple and effective way to model taxonomy expansion as a retrieval task. It combines a robust semantic similarity measure with taxonomy-driven re-rank strategies. This method is unsupervised; the adopted similarity measures are trained on (large-scale) resources outside the target taxonomy and are extremely efficient. The experimental evaluation on two taxonomies shows surprising results, improving on far more complex state-of-the-art methods.
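
The retrieval step of such an approach can be sketched with the sentence-transformers library; the model name and toy taxonomy below are assumptions for illustration, and the paper's taxonomy-driven re-ranking is not reproduced.

    # Minimal sketch: retrieving the most similar taxonomy nodes for a new
    # concept via SBERT embeddings and cosine similarity.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed generic SBERT model
    taxonomy_nodes = ["animal", "dog", "vehicle", "car", "fruit"]
    query = "golden retriever"

    node_emb = model.encode(taxonomy_nodes, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_emb, node_emb)[0]
    best = scores.argsort(descending=True)
    for i in best[:3]:
        print(taxonomy_nodes[int(i)], float(scores[i]))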

Short Papers
Paper Nr: 12
Title:

Improving Primate Sounds Classification Using Binary Presorting for Deep Learning

Authors:

Michael Kölle, Steffen Illium, Maximilian Zorn, Jonas Nüßlein, Patrick Suchostawski and Claudia Linnhoff-Popien

Abstract: In the field of wildlife observation and conservation, approaches involving machine learning on audio recordings are becoming increasingly popular. Unfortunately, available datasets from this field of research are often not optimal learning material; samples can be weakly labeled, of different lengths, or come with a poor signal-to-noise ratio. In this work, we introduce a generalized approach that first relabels subsegments of MEL spectrogram representations to achieve higher performance on the actual multi-class classification task. For both the binary pre-sorting and the classification, we make use of convolutional neural networks (CNN) and various data-augmentation techniques. We showcase the results of this approach on the challenging ComParE 2021 dataset, with the task of classifying between different primate species sounds, and report significantly higher Accuracy and UAR scores compared to similarly equipped baseline models.
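
The preprocessing the paper relies on (slicing recordings into subsegments and computing MEL spectrograms) can be sketched with librosa; the file name, sampling rate, and segment length are illustrative choices, not the paper's exact settings.

    # Minimal sketch: slicing an audio file into fixed-length subsegments and
    # computing a MEL spectrogram for each, as CNN input.
    import librosa
    import numpy as np

    y, sr = librosa.load("primate_call.wav", sr=16000)   # placeholder file
    seg_len = sr * 1                                     # 1-second subsegments

    segments = [y[i:i + seg_len]
                for i in range(0, len(y) - seg_len + 1, seg_len)]
    mels = [librosa.power_to_db(
                librosa.feature.melspectrogram(y=s, sr=sr, n_mels=64))
            for s in segments]
    print(len(mels), mels[0].shape)           # n_segments x (64, n_frames)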

Paper Nr: 15
Title:

A Machine Learning Framework for Shuttlecock Tracking and Player Service Fault Detection

Authors:

Akshay Menon, Abubakr Siddig, Cristina Hava Muntean, Pramod Pathak, Musfira Jilani and Paul Stynes

Abstract: Shuttlecock tracking is required for examining the trajectory of the shuttlecock in badminton matches. Player service fault detection identifies service faults during badminton matches. The match points scored by players are analyzed by the first referee based on the shuttlecock landing point and player service faults. If the first referee cannot decide, they use a technology such as a third umpire system to assist. The current challenge with the third umpire system is its high margin of error when predicting the match score. This research proposes a machine learning framework to improve the accuracy of shuttlecock tracking and player service fault detection. The proposed framework combines a shuttlecock trajectory model and a player service fault model. The shuttlecock trajectory model is implemented using a pre-trained Convolutional Neural Network (CNN), namely TrackNet. The player service fault detection model uses Google MediaPipe Pose, with a Random Forest classifier used to classify the player service fault. The framework is trained using the Badminton World Federation channel dataset, which consists of 100,000 images of badminton players and shuttlecock positions. The models are evaluated using a confusion matrix based on loss, accuracy, precision, recall, and F1 scores. Results demonstrate that the optimised TrackNet model has an accuracy of 90%, which is 5% higher, with 2.84% less positioning error, than the current state of the art (Fact). The player service fault model can classify player faults with 90% accuracy using Google MediaPipe Pose, 10% higher than the OpenPose model. The machine learning framework for shuttlecock tracking and player service fault detection is of use to referees and the Badminton World Federation (BWF) for improving referee decision making.
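
The pose-to-classifier part of the pipeline can be sketched with MediaPipe and scikit-learn; the image paths, labels, and feature layout below are hypothetical, and this is only a high-level analogue of the paper's service-fault model.

    # Minimal sketch: extracting body-pose landmarks with MediaPipe Pose and
    # feeding them to a Random Forest. Paths and labels are placeholders.
    import cv2
    import mediapipe as mp
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    pose = mp.solutions.pose.Pose(static_image_mode=True)

    def pose_features(image_path):
        img = cv2.imread(image_path)
        res = pose.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
        if res.pose_landmarks is None:
            return None                       # no person detected
        # 33 landmarks x (x, y, z, visibility) -> 132-dim feature vector
        return np.array([[p.x, p.y, p.z, p.visibility]
                         for p in res.pose_landmarks.landmark]).ravel()

    # hypothetical training data: one feature vector per serve image
    X = np.vstack([pose_features(f) for f in ["serve_ok.jpg", "serve_fault.jpg"]])
    y = np.array([0, 1])                      # 0 = legal serve, 1 = fault
    clf = RandomForestClassifier(n_estimators=200).fit(X, y)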

Paper Nr: 33
Title:

Exploring ASR Models in Low-Resource Languages: Use-Case the Macedonian Language

Authors:

Konstantin Bogdanoski, Kostadin Mishev, Monika Simjanoska and Dimitar Trajanov

Abstract: We explore the use of Wav2Vec 2.0, NeMo, and ESPnet models trained on a dataset in the Macedonian language for the development of Automatic Speech Recognition (ASR) models for low-resource languages. The study aims to evaluate the performance of recent state-of-the-art models for speech recognition in low-resource languages, such as Macedonian, where there are limited resources available for training or fine-tuning. The paper presents the methodology used for data collection and preprocessing, as well as the details of the three architectures used in the study. The study evaluates the performance of each model using WER and CER metrics and provides a comparative analysis of the results. The findings show that Wav2Vec 2.0 outperformed the other models for the Macedonian language with a WER of 0.21 and a CER of 0.09; however, the NeMo and ESPnet models are still good candidates for creating ASR tools for low-resource languages such as Macedonian. The research provides insights into the effectiveness of different models for ASR in low-resource languages and highlights the potential of using these models to develop ASR tools for other languages in the future. These findings have significant implications for the development of ASR tools for other low-resource languages and can potentially improve access to speech recognition technology for individuals and communities who speak these languages.
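
Inference and WER/CER scoring for the Wav2Vec 2.0 family can be sketched with the transformers and jiwer packages; the checkpoint below is a public English model standing in for the authors' Macedonian model, and the audio is a random placeholder.

    # Minimal sketch: CTC transcription with a Wav2Vec 2.0 model plus WER/CER
    # scoring via jiwer. Checkpoint and audio are placeholders.
    import torch
    import jiwer
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    name = "facebook/wav2vec2-base-960h"      # assumed stand-in checkpoint
    processor = Wav2Vec2Processor.from_pretrained(name)
    model = Wav2Vec2ForCTC.from_pretrained(name)

    def transcribe(waveform, sr=16000):
        inputs = processor(waveform, sampling_rate=sr, return_tensors="pt")
        with torch.no_grad():
            logits = model(inputs.input_values).logits
        ids = torch.argmax(logits, dim=-1)
        return processor.batch_decode(ids)[0]

    hyp = transcribe(torch.randn(16000).numpy())     # placeholder 1 s of audio
    print(jiwer.wer("reference text", hyp), jiwer.cer("reference text", hyp))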

Paper Nr: 35
Title:

Explainable Abnormal Time Series Subsequence Detection Using Random Convolutional Kernels

Authors:

Abdallah Amine Melakhsou and Mireille Batton-Hubert

Abstract: To identify anomalous subsequences in time series, it is a common practice to convert them into a set of features prior to the use of an anomaly detector. Feature extraction can be accomplished by manually designing the features or by automatically learning them using a neural network. However, the former requires significant domain expertise to design features that are effective in accurately detecting anomalies, while in the latter it can be difficult to learn useful features in unsupervised or one-class classification problems such as anomaly detection, where no labels are available to guide the feature extraction process. In this paper, we propose an alternative approach to feature extraction that overcomes the limitations of the two previously mentioned approaches. The proposed method involves calculating the similarities between subsequences and a set of randomly generated convolutional kernels, combined with the One-Class SVM algorithm. We tested our approach on voltage signals acquired during circular welding processes in hot water tank manufacturing; the results indicate that the approach achieves higher accuracy in detecting welding defects than commonly used methods. Furthermore, we introduce an approach for explaining the detected anomalies by making use of the random convolutional kernels, which addresses an important gap in time series anomaly detection.
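
The core idea can be sketched in the spirit of ROCKET-style random kernels feeding a One-Class SVM; kernel generation, pooling, and the synthetic signals below are simplifications, not the authors' exact method or the welding data.

    # Minimal sketch: random convolutional kernels -> pooled features ->
    # One-Class SVM. Signals are synthetic, not welding voltage data.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    kernels = [rng.standard_normal(rng.choice([7, 9, 11])) for _ in range(100)]

    def features(x):
        feats = []
        for k in kernels:
            c = np.convolve(x, k, mode="valid")
            feats += [c.max(), (c > 0).mean()]   # max and proportion-of-positives
        return np.array(feats)

    train = [rng.standard_normal(500) for _ in range(50)]    # "normal" signals
    X = np.vstack([features(s) for s in train])
    ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(X)

    test = rng.standard_normal(500) + 3.0                    # shifted anomaly
    print(ocsvm.predict(features(test).reshape(1, -1)))      # -1 => anomalous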

Paper Nr: 18
Title:

An Automated Dual-Module Pipeline for Stock Prediction: Integrating N-Perception Period Power Strategy and NLP-Driven Sentiment Analysis for Enhanced Forecasting Accuracy and Investor Insight

Authors:

Siddhant Singh and Archit Thanikella

Abstract: The financial sector has witnessed considerable interest in the fields of stock prediction and reliable stock information analysis. Traditional deterministic algorithms and AI models have been extensively explored, leveraging large historical datasets. Volatility and market sentiment play crucial roles in the development of accurate stock prediction models. We hypothesize that traditional approaches, such as n-moving averages, may not capture the dynamics of stock swings, while online information influences investor sentiment, making them essential factors for prediction. To address these challenges, we propose an automated pipeline consisting of two modules: an N-Perception period power strategy for identifying potential stocks and a sentiment analysis module using NLP techniques to capture market sentiment. By incorporating these methodologies, we aim to enhance stock prediction accuracy and provide valuable insights for investors.
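
The sentiment module of such a pipeline can be sketched with NLTK's VADER analyzer; the headlines are invented, and the paper's N-Perception period power strategy is not reproduced here.

    # Minimal sketch of the sentiment module only: scoring finance headlines
    # with VADER and averaging into a daily sentiment feature.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    headlines = [
        "Company X beats earnings expectations by a wide margin",
        "Regulators open probe into Company X accounting",
    ]
    scores = [sia.polarity_scores(h)["compound"] for h in headlines]
    daily_sentiment = sum(scores) / len(scores)   # feature for the predictor
    print(scores, daily_sentiment)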

Area 4 - Natural Language Understanding

Short Papers
Paper Nr: 23
Title:

Research Data Reusability with Content-Based Recommender System

Authors:

M. Amin Yazdi, Marius Politze and Benedikt Heinrichs

Abstract: The use of content-based recommender systems to enable the reusability of research data artifacts has gained significant attention in recent years. This study aims to evaluate the effectiveness of such systems in improving the accessibility and reusability of research data artifacts. It combines a literature review with an empirical study to identify the strengths and limitations of content-based recommender systems for recommending research data collections (repositories). The empirical study involves developing and evaluating a prototype content-based recommender system for research data artifacts. The literature review findings reveal that content-based recommender systems have several strengths, including providing personalized recommendations, reducing information overload, and enhancing the quality of retrieved artifacts, especially when dealing with cold start problems. The results of the empirical study indicate that the developed prototype effectively provides relevant recommendations for research data repositories. The evaluation of the system using standard metrics shows that it achieves an accuracy of 79% in recommending relevant items. Additionally, the user evaluation confirms the relevancy of the recommendations and that the system enhances the accessibility and reusability of research data artifacts. In conclusion, the study provides evidence that content-based recommender systems can effectively enable the reusability of research data artifacts.
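
A content-based recommender over repository descriptions can be sketched with TF-IDF and cosine similarity from scikit-learn; the descriptions and query are invented, and the prototype's actual feature pipeline may differ.

    # Minimal sketch: content-based recommendation of research-data repositories
    # via TF-IDF vectors and cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    repos = [
        "fMRI scans of healthy adults during resting state",
        "climate simulation outputs for regional precipitation",
        "EEG recordings from sleep study participants",
    ]
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(repos)

    query = vec.transform(["brain imaging dataset for neuroscience research"])
    sims = cosine_similarity(query, matrix)[0]
    ranked = sims.argsort()[::-1]
    print([(repos[i], round(float(sims[i]), 2)) for i in ranked])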

Paper Nr: 26
Title:

Explaining Relation Classification Models with Semantic Extents

Authors:

Lars Klöser, André Büsgen, Philipp Kohl, Bodo Kraft and Albert Zündorf

Abstract: In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions. We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development, increasing the reliability and security of natural language processing systems, an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research avenues for developing methods to explain deep learning models.

Paper Nr: 37
Title:

Towards Equitable AI in HR: Designing a Fair, Reliable, and Transparent Human Resource Management Application

Authors:

Michael Danner, Bakir Hadžić, Thomas Weber, Xinjuan Zhu and Matthias Rätsch

Abstract: The aim of this work is the development of an artificial intelligence (AI) application to support the recruiting process, elevating the domain of human resource management by advancing its capabilities and effectiveness. This affects recruiting processes and includes solutions for active sourcing, i.e. active recruitment, pre-sorting, evaluating structured video interviews, and discovering internal training potential. This work highlights four novel approaches to ethical machine learning. The first is precise machine learning for ethically relevant properties in image recognition, which focuses on accurately detecting and analysing these properties. The second is the detection of bias in training data, allowing for the identification and removal of distortions that could skew results. The third is minimising bias, which involves actively working to reduce bias in machine learning models. Finally, an unsupervised architecture is introduced that can learn fair results even without ground truth data. Together, these approaches represent important steps forward in creating ethical and unbiased machine learning systems.

Paper Nr: 42
Title:

UMLDesigner: An Automatic UML Diagram Design Tool

Authors:

Houndji Ratheil and Généreux Akotenou

Abstract: This work proposes an approach to automatically analyze software specifications and generate the corresponding UML class diagram. We use Natural Language Processing tools such as Stanza and NLTK, and a rule-based approach for data extraction. Our tool, UMLDesigner, includes a text-to-diagram editor that allows users to create UML diagrams from textual descriptions of management rules. We trained our model on French-language data for the UML class diagram case. To facilitate end-user adoption, we containerized our model behind an API and developed a web application that communicates with the API to process text and generate diagram images. The experiments performed show the effectiveness of UMLDesigner.

Paper Nr: 54
Title:

CSR & Sentiment Analysis: A New Customized Dictionary

Authors:

Emma Zavarrone and Alessia Forciniti

Abstract: In 2001, the EU defined Corporate Social Responsibility (CSR) as "a concept whereby companies integrate social and environmental concerns in their business operations and in their interaction with their stakeholders on a voluntary basis. Being socially responsible means not only fulfilling legal expectations, but also going beyond compliance and investing more into human capital, the environment, and the relations with stakeholders". Following this definition, the pillars of CSR are environmental, social, and economic sustainability, which must be communicated to society through appropriate reports. Sentiment analysis (SA) is a fundamental sub-area of natural language processing for studying communication and classifying negative or positive opinions and emotions. Measuring sentiment is a task characterized by pitfalls related to the context of analysis, the methods, and the language. Lexicon-based techniques are less time- and resource-intensive than others since they rely on pre-built, polarized dictionaries that are either domain- or general-knowledge-based. Two of the main obstacles are the lack of language resources (other than English) and polarity classifications that depend on the domain, since the meanings of words are related to their contexts. The strategic communication of CSR has no domain resources for investigating sentiment, neither in English nor in other languages. Our contribution is thus placed within the sustainability framework, which is constantly evolving, and in a methodological setting characterized by limits and challenges. The innovative features of our work lie in three aspects: 1) the investigation of an unmapped domain by means of a domain corpus-based approach and the building of a customized lexicon from a general pre-constructed dictionary; 2) the application to the Italian language; 3) the assessment of performance improvements through machine learning. More specifically, we use a corpus drawn from a baseline sample of the social reports of Italian listed companies that closed the financial year on December 31, 2021, to develop an algorithm for building a customized CSR lexicon that extends general Italian lexicons through a multi-stage model combining text analysis with social network analysis (SNA). From a machine learning perspective, we divided our data collection into five random samples: one was used as a training set for the implementation, and four were used as test sets. The process revealed a notable increase in performance metrics across all samples.

Area 5 - Machine Learning

Full Papers
Paper Nr: 27
Title:

Phoneme-Based Multi-Task Assessment of Affective Vocal Bursts

Authors:

Tobias Hallmen, Silvan Mertes, Dominik Schiller, Florian Lingenfelser and Elisabeth André

Abstract: Affective speech analysis is an ongoing topic of research. A relatively new problem in this field is the analysis of affective vocal bursts, which are non-verbal vocalisations such as laughs or sighs. The current state of the art in the analysis of affective vocal bursts is predominantly based on wav2vec2 or HuBERT features. In this paper, we investigate the application of the wav2vec2 successor data2vec and the extension wav2vec2phoneme in combination with a multi-task learning pipeline to tackle different analysis problems at once, e.g., type of burst, country of origin, and conveyed emotion. Finally, we present an ablation study to validate our approach. We found that data2vec appears to be the best option if speed and a lightweight model are critical factors, whereas wav2vec2phoneme is the most appropriate choice if overall performance is the primary criterion.

Paper Nr: 34
Title:

Facilitating Enterprise Model Classification via Embedding Symbolic Knowledge into Neural Network Models

Authors:

Alexander Smirnov, Nikolay Shilov and Andrew Ponomarev

Abstract: In many real-life applications, the volume of available data is insufficient for training deep neural networks. One approach to overcoming this obstacle is to introduce symbolic knowledge to assist machine-learning models based on neural networks. In this paper, the problem of enterprise model classification by neural networks is considered to study the potential of this approach. A number of experiments are conducted to analyze what level of accuracy can be achieved, how much training data is required, and how long the training process takes when the neural network-based model is trained without symbolic knowledge vs. when different architectures for embedding symbolic knowledge into neural networks are used.

Paper Nr: 46
Title:

A Survey on Reinforcement Learning and Deep Reinforcement Learning for Recommender Systems

Authors:

Mehrdad Rezaei and Nasseh Tabrizi

Abstract: Systems that provide recommendations are quickly taking over our daily lives. By suggesting and customizing recommended items, they play a significant part in addressing the information overload problem. Traditional recommender systems used for simple prediction problems include collaborative filtering, content-based filtering, and hybrid techniques. With new techniques used in recommender systems, such as reinforcement learning algorithms, more difficult problems can be resolved by framing them as Markov decision processes. Thanks to recent advancements in the field, it is now possible to employ reinforcement learning techniques for problems with huge environments and state spaces. This survey covers the development of traditional and reinforcement learning-based methods, their evaluation, and their challenges, followed by a discussion of reinforcement learning recommender systems and suggestions for future research.

Paper Nr: 49
Title:

Evaluating Prototypes and Criticisms for Explaining Clustered Contributions in Digital Public Participation Processes

Authors:

Lars Schütz, Korinna Bade and Andreas Nürnberger

Abstract: We examine the use of prototypes and criticisms for explaining clusterings in digital public participation processes of the e-participation domain. These processes enable people to participate in various life areas such as landscape planning by submitting contributions that express their opinions or ideas. Clustering groups similar contributions together. This supports citizens and public administrations, the main participants in digital public participation processes, in exploring the submitted contributions. However, explaining clusterings remains a challenge. For this purpose, we consider the use of prototypes and criticisms. Our work generalizes the idea of applying the k-medoids algorithm for computing prototypes on raw data sets. We introduce a centroid-based clusterings method that solely considers clusterings. It allows the retrieval of multiple prototypes and criticisms per cluster. We conducted a user study with 21 participants to evaluate our centroid-based clusterings method and the MMD-critic algorithm for finding prototypes and criticisms in clustered contributions. We examined whether these methods are suitable for text data. The related contributions originate from past, real-life digital public participation processes. The user study results indicate that both methods are appropriate for clustered contributions. The results also show that the centroid-based clusterings method outperforms the MMD-critic algorithm regarding accuracy, efficiency, and perceived difficulty.
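
One plausible reading of a centroid-based choice of prototypes and criticisms is sketched below: cluster members closest to the centroid serve as prototypes, the farthest as criticisms. This is an illustration under that assumption, not the authors' exact method or the MMD-critic algorithm.

    # Minimal sketch: centroid-based prototypes and criticisms per cluster,
    # on synthetic 2-D data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    km = KMeans(n_clusters=2, n_init=10).fit(X)

    for c in range(2):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        prototypes = members[np.argsort(dists)[:3]]    # most central points
        criticisms = members[np.argsort(dists)[-3:]]   # least typical points
        print(c, prototypes, criticisms)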

Short Papers
Paper Nr: 20
Title:

Machine Learning Applied to Speech Recordings for Parkinson’s Disease Recognition

Authors:

Lerina Aversano, Mario L. Bernardi, Marta Cimitile, Martina Iammarino, Antonella Madau and Chiara Verdone

Abstract: Parkinson's disease is a common neurological condition that occurs when dopamine production in the brain decreases significantly due to the degeneration of neurons in an area called the substantia nigra. One of its characteristics is the slow and gradual onset of symptoms, which are varied and include tremors at rest, rigidity, postural instability, and slow speech. Voice changes are very common among patients, so the analysis of voice recordings could be a valuable tool for early diagnosis of the disease. In this regard, this study proposes an approach that compares different machine learning models for the diagnosis of the disease through the use of vocal recordings of the vowel /a/ made by both healthy and sick subjects, and identifies the subset of the most significant features. The experiments were conducted on a dataset available in the UCI repository, which collects 756 different recordings and takes into consideration a large number of characteristics. The results obtained are very encouraging, reaching an F-score of 95%, which demonstrates the effectiveness of the proposed approach.
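
The feature-selection-plus-classifier pattern described here can be sketched with scikit-learn; the data below is synthetic (sized like the UCI set), and the selector, classifier, and k are illustrative choices, not the paper's exact configuration.

    # Minimal sketch: select the most significant features, then classify and
    # report an F1 score. Data is synthetic, not the UCI voice recordings.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=756, n_features=300, n_informative=30)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = make_pipeline(SelectKBest(f_classif, k=50),
                        RandomForestClassifier(n_estimators=300, random_state=0))
    clf.fit(X_tr, y_tr)
    print("F1:", f1_score(y_te, clf.predict(X_te)))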

Paper Nr: 25
Title:

A Novel Probabilistic Approach for Detecting Concept Drift in Streaming Data

Authors:

Sirvan Parasteh and Samira Sadaoui

Abstract: Concept drift, indicating data-distribution changes in streaming scenarios, can significantly reduce predictive performance. Existing concept drift detection methods often struggle with the trade-off between fast detection and low false alarm rates. This paper presents a novel concept drift detection algorithm, called SPNCD*, based on probabilistic methods, particularly Sum-Product Networks, that addresses this challenge by offering both high detection accuracy and low mean lag time. The proposed method is evaluated against state-of-the-art algorithms, such as DDM, ADWIN, KSWIN, and HDDM_A, using three benchmark datasets: Mixed, RT, and Sine. Our experiments demonstrate that SPNCD* outperforms the existing algorithms in terms of true positive rate, recall, precision, and mean lag time, while improving the performance of the base classifier. The SPNCD* algorithm provides a reliable solution for detecting concept drift in real-time streaming data, enabling practitioners to maintain their machine learning models' performance in dynamic environments.
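
SPNCD* itself is the paper's contribution and is not reproduced here; to illustrate the streaming evaluation setting, the sketch below runs one of the baselines, ADWIN, from the river library (API of river >= 0.19) on a synthetic stream with an abrupt mean shift.

    # Minimal sketch of streaming drift detection with the ADWIN baseline.
    # The detection lag past index 1000 corresponds to the "mean lag time"
    # the paper reports.
    import numpy as np
    from river import drift

    rng = np.random.default_rng(0)
    stream = np.concatenate([rng.normal(0, 1, 1000),    # concept 1
                             rng.normal(3, 1, 1000)])   # concept 2 (drifted)

    detector = drift.ADWIN()
    for i, x in enumerate(stream):
        detector.update(x)
        if detector.drift_detected:
            print(f"drift flagged at index {i}")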

Paper Nr: 29
Title:

ALE: A Simulation-Based Active Learning Evaluation Framework for the Parameter-Driven Comparison of Query Strategies for NLP

Authors:

Philipp Kohl, Nils Freyer, Yoka Krämer, Henri Werth, Steffen Wolf, Bodo Kraft, Matthias Meinecke and Albert Zündorf

Abstract: Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain through a manual, time-consuming, and expensive annotation process. To mitigate this challenge, active learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance. However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, while presentations of novel AL strategies compare their performance only to a small subset of strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP. The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to configure the framework and extend it to other use cases, and present results on two classification datasets.
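
A simulation-based AL evaluation loop driven by the parameters the framework tracks (initial dataset size, query step size, budget) can be sketched as below; the least-confidence strategy, model, and data are illustrative, and ALE's own API is not reproduced.

    # Minimal sketch: a simulated active-learning run producing one
    # learning-curve point per query step.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = list(range(50))                 # initial dataset size = 50
    pool = [i for i in range(len(X)) if i not in labeled]
    step, budget = 25, 300                    # points per query step, budget

    while len(labeled) < budget:
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        conf = clf.predict_proba(X[pool]).max(axis=1)
        queried = [pool[i] for i in np.argsort(conf)[:step]]  # least confident
        labeled += queried
        pool = [i for i in pool if i not in queried]
        print(len(labeled), clf.score(X, y))  # labeled-set size vs. accuracy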

Paper Nr: 43
Title:

Graph Neural Networks for Circuit Diagram Pattern Generation

Authors:

Jaikrishna Manojkumar Patil, Johannes Bayer and Andreas Dengel

Abstract: Graph neural networks (GNNs) have found numerous applications across multiple domains, including the physical sciences and molecular biology. However, there is still scope for research on the application of GNNs in the electrical domain. This work investigates whether GNNs can be used to analyze circuit graphs. The end goal is to create a mechanism that iteratively predicts the graph structure and thus completes a broken circuit diagram. The work first examines how well a GNN can predict the missing node label or the missing node geometric features in a subset of the graph. Then, the application of GNNs to the anomaly detection problem is investigated. Next, a GNN architecture is used to predict the node label and approximate geometric features of the missing node in the circuit graph. Furthermore, a Graph Autoencoder (GAE) model is created and used for pruning wrong edges in the circuit graph. The GNN model created for the anomaly detection problem achieved around 90% accuracy. The GNN model for missing node feature estimation achieved around 89% accuracy in predicting the correct label for the missing node and also performed effectively in approximating the geometric features of the missing node. The link prediction model classifies the correct edges 92% of the time. Finally, a mechanism is provided that iteratively predicts the graph structure using the anomaly detection model, the node feature prediction model, and the link prediction model in a cycle.
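
The GAE-based edge pruning step can be sketched with PyTorch Geometric; the node features and edges below are toy placeholders, not the paper's circuit dataset, and the encoder is a generic two-layer GCN.

    # Minimal sketch: a Graph Autoencoder scoring candidate edges of a
    # circuit-like graph; low reconstruction probability suggests pruning.
    import torch
    from torch_geometric.nn import GAE, GCNConv

    class Encoder(torch.nn.Module):
        def __init__(self, in_dim, hid, out_dim):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hid)
            self.conv2 = GCNConv(hid, out_dim)

        def forward(self, x, edge_index):
            return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

    x = torch.randn(6, 8)                            # 6 components, 8 features
    edge_index = torch.tensor([[0, 1, 2, 3, 4],      # wires as directed edges
                               [1, 2, 3, 4, 5]])

    model = GAE(Encoder(8, 16, 8))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(100):
        opt.zero_grad()
        z = model.encode(x, edge_index)
        loss = model.recon_loss(z, edge_index)       # reconstruct known wires
        loss.backward()
        opt.step()

    z = model.encode(x, edge_index)
    candidate = torch.tensor([[0], [5]])             # hypothetical edge 0 -> 5
    print(model.decoder(z, candidate, sigmoid=True).item())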

Paper Nr: 41
Title:

An Explainable Approach for Early Parkinson Disease Detection Using Deep Learning

Authors:

Lerina Aversano, Mario Luca Bernardi, Marta Cimitile, Martina Iammarino, Antonella Madau and Chiara Verdone

Abstract: Parkinson's disease (PD) is a progressive disorder that affects the nervous system and all the parts of the body controlled by it. It is the second most common neurodegenerative disorder, with an increasing trend in recent years, requiring new tools and procedures for diagnosis and assessment. To be used in medical clinics, PD detection approaches must be highly effective at detecting the disease and capable of guiding experts in understanding and verifying the reasons behind a prediction. Accordingly, this paper proposes an explainable deep learning approach for the detection of PD from single-photon emission computed tomography (SPECT) images. The approach combines a CNN prediction model with the Gradient-weighted Class Activation Mapping (Grad-CAM) interpretability technique. The validation is performed on a known dataset from the Parkinson's Progression Markers Initiative (PPMI). For this dataset, SPECT images of 974 patients are used, showing good accuracy in the classification of healthy and PD patients and a good capability to explain the obtained predictions.
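
Grad-CAM itself can be sketched with forward/backward hooks on a CNN's last convolutional layer; the network (a ResNet-18) and the random input below are placeholders, not the paper's SPECT model.

    # Minimal sketch of Grad-CAM: global-average-pooled gradients weight the
    # activation maps of the last conv layer; ReLU gives the class heat map.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(num_classes=2).eval()    # e.g. healthy vs. PD
    acts, grads = {}, {}

    layer = model.layer4[-1].conv2            # last conv layer of resnet18
    layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    score = model(x)[0, 1]                    # logit of the "PD" class
    score.backward()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1))        # weighted activations
    cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
    print(cam.shape)                          # heat map to overlay on the image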