WEBIST 2024 Abstracts


Area 1 - HCI in Mobile Systems and Web Interfaces

Short Papers
Paper Nr: 50
Title:

Construction of a Questionnaire to Measure the Learner Experience in Online Tutorials

Authors:

Martin Schrepp and Harry Budi Santoso

Abstract: Online tutorials are efficient tools to support learning. They can be easily delivered over company web pages or common video platforms. In commercial contexts, they have the potential to reduce service load, replace product documentation, allow customers to explore more complex products over free trials, and ultimately simplify the learning process for customers. This can lead to increased customer satisfaction and loyalty. However, if tutorials are not well-designed, these goals cannot be achieved. Therefore, it is important to be able to measure the satisfaction of learners with a tutorial. We describe the construction of a questionnaire that measures the learner experience with tutorials. The questionnaire was developed by creating a set of candidate items, which were then used by participants in a study to rate several tutorials. The results of a principal component analysis suggest that two components are relevant. The items in the first component (named Structural Clarity) describe whether a tutorial is well-structured, with a logical sequence of steps that are easy to follow and understand. The second component (named Transparency) refers to the way the tutorial communicates the underlying learning goals, prerequisites, and concepts and how they can be applied in practice.

Paper Nr: 53
Title:

Application of Machine Learning Models to Predict e-Learning Engagement Using EEG Data

Authors:

Elias Dritsas, Maria Trigka and Phivos Mylonas

Abstract: The rapid evolution of e-learning platforms necessitates the development of innovative methods to enhance learner engagement. This study leverages machine learning (ML) techniques and models to predict e-learning engagement with the aid of Electroencephalography (EEG). Various ML models, including Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Gradient Boosting Machine (GBM), and Neural Networks (NN), were applied to a dataset comprising EEG signals collected during e-learning sessions. Among these models, NN demonstrated the highest accuracy (90%), with precision and F1-score of 88%, a recall of 89%, and an Area Under the Curve (AUC) of 0.92 for predicting engagement levels. The results underscore the potential of EEG-based analysis combined with advanced ML techniques to optimize e-learning environments by accurately monitoring and responding to learner engagement.
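As a minimal, hedged illustration of the kind of model comparison named in this abstract (not the authors' actual pipeline; the synthetic feature matrix stands in for real EEG features), the five classifier families can be cross-validated side by side with scikit-learn:

```python
# Hypothetical sketch: comparing the classifier families named in the abstract
# on a synthetic stand-in for an EEG feature matrix (not the authors' data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic "EEG features": 500 sessions x 64 features, binary engagement label.
X, y = make_classification(n_samples=500, n_features=64, n_informative=20, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: AUC={auc:.2f}  F1={f1:.2f}")
```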

Paper Nr: 17
Title:

The Web Unpacked: A Quantitative Analysis of Global Web Usage

Authors:

Henrique S. Xavier

Abstract: This paper presents an analysis of global web usage patterns based on data from 250,000 websites monitored by SimilarWeb. We estimate the total web traffic and investigate its distribution among domains and industry sectors. We detail the characteristics of the top 116 domains, which comprise an estimated one-third of all web traffic, scrutinizing their content sources, access requirements, offline presence, and ownership features, among others. Our analysis reveals that a small number of top websites captures the majority of visits. Search engines, news and media, social networks, streaming, and adult content emerge as primary attractors of web traffic, which is also highly concentrated on platforms and US-owned websites. Much of the traffic goes to for-profit but mostly free-of-charge websites, highlighting the dominance of business models not based on paywalls.

Paper Nr: 41
Title:

Is Generative AI Mature for Alternative Image Descriptions of STEM Content?

Authors:

Marina Buzzi, Giulio Galesi, Barbara Leporini and Annalisa Nicotera

Abstract: Alternative descriptions of digital images have always been an accessibility issue for screen reader users. Over time, numerous guidelines have been proposed in the literature, but the problem still exists. Recently, artificial intelligence (AI) has been introduced in digital applications to support visually impaired people in getting information about the world around them; in this way, such applications become a digital assistant for people with visual impairments. Increasingly, generative AI is being exploited to create accessible content for visually impaired people. In the education field, image descriptions can play a crucial role in understanding even scientific content. For this reason, alternative descriptions should be accurate and education-oriented. In this work, we investigate whether existing AI-based tools on the market are mature enough to describe images related to scientific content. Five AI-based tools were used to test the generated descriptions of four STEM images chosen for this preliminary study. Results indicate that the answers are prompt- and context-dependent and that this technology can certainly support blind people in everyday tasks; however, for STEM educational content, more effort is required to deliver accessible and effective descriptions that support students in satisfying and accurate image exploration.

Area 2 - Internet Technology

Full Papers
Paper Nr: 22
Title:

Technological Model for Cryptocurrency Payments in E-Commerce

Authors:

Luis Navarro, Juan Mansilla-Lopez and Christian Cipriano

Abstract: The number of cryptocurrency users worldwide increased by 190% between 2018 and 2020, with Bitcoin being the most widely used. Credit card gateways charge usage fees and sales taxes that generate cost overruns for the businesses that use them. Likewise, virtual stores are exposed to cybersecurity threats, such as SQL injection and man-in-the-middle attacks, which could affect the integrity and confidentiality of their information. This research proposes a technological model for cryptocurrency payments based on a set of guidelines to develop a virtual store that accepts Bitcoin as a payment method and offers measures that safeguard the integrity and confidentiality of its information. The structure of the model is based on a three-tier architecture pattern that includes a private blockchain storing the information about sales made in the virtual store and about logistics (purchase orders, suppliers, and products). The model was validated in an online business, evidencing a reduction in the percentage of transaction costs.

Paper Nr: 38
Title:

Platform-Agnostic MLOps on Edge, Fog and Cloud Platforms in Industrial IoT

Authors:

Alexander Keusch, Thomas Blumauer-Hiessl, Alireza Furutanpey, Daniel Schall and Schahram Dustdar

Abstract: The proliferation of edge computing systems drives the need for comprehensive frameworks that can seamlessly deploy machine learning models across edge, fog, and cloud layers. This work presents a platform-agnostic Machine Learning Operations (MLOps) framework tailored for industrial applications. The framework enables data scientists in an industrial setting to develop and deploy AI solutions across diverse deployment modes while providing a consistent experience. We evaluate our framework on real-world industrial data by collecting performance metrics and energy measurements on training and prediction runs of two ML workflows. Then, we compare edge, fog, and cloud deployments and highlight the advantages and limitations of each deployment mode. Our results emphasize the relevance of the introduced platform-agnostic MLOps framework in enabling flexible and efficient AI deployments.

Paper Nr: 40
Title:

Investigating the Use of Accessibility Standards in Radio Frequency-Based Indoor Navigation: Challenges and Opportunities in the Development of Solutions for Visually Impaired Individuals

Authors:

Elvis Maranhão, André Araújo, Alenilton Silva and Fabio Coutinho

Abstract: The rapid advancement of digital technologies has revolutionized various aspects of modern life, significantly impacting accessibility and inclusion for people with disabilities. Among these advancements, radio frequency-based technologies have emerged as promising tools for enhancing indoor navigation and accessibility. This study aims to understand how these technologies have improved accessibility, focusing on using Beacon devices to facilitate navigation for visually impaired individuals in indoor environments. A literature review explores various applications and approaches, examining whether and how accessibility requirements, as defined by relevant norms and standards, have been integrated into the development of indoor location systems using radio frequency technologies. To address gaps identified in the current research, we propose good practices to improve the development life cycle of computing solutions for indoor environments that utilize Beacon technology. These practices ensure that solutions are effective, reliable, and inclusive, ultimately enhancing visually impaired individuals’ autonomy and quality of life.

Paper Nr: 42
Title:

Implementing AI for Enhanced Public Services Gov.br: A Methodology for the Brazilian Federal Government

Authors:

Maísa Kely de Melo, Silvia Araújo dos Reis, Vinícius Di Oliveira, Allan Victor Almeida Faria, Ricardo de Lima, Li Weigang, Jose Francisco Salm Junior, Joao Gabriel de Moraes Souza, Vérica Freitas, Pedro Carvalho Brom, Herbert Kimura, Daniel Oliveira Cajueiro, Gladston Luiz da Silva and Victor Rafael R. Celestino

Abstract: The website portal of the Brazilian federal government (Gov.br) consists of pages from almost 40 ministries, 180 public agencies, and up to 5000 public services for all citizens, posing a significant challenge in improving service quality. This article presents an innovative methodology for implementing artificial intelligence (AI) to address these challenges and to enhance the efficiency, accessibility, and quality of services provided to the population. The methodology combines elements of Lean Office, Design Sprint, the Analytic Hierarchy Process (AHP), and advanced AI techniques, particularly Large Language Models (LLMs), making it flexible and adaptable to the needs of government entities. Developed in collaboration with project managers, public servants, and stakeholders, the methodology includes a survey of demands, selection, and prototyping of AI projects in a complex government context. The practical application selected the Gov.br portal for prototyping, involving the development of an advanced generative agent to interact with citizens, clarify doubts, direct them to the requested services, and provide human interaction when necessary. The recommended practices offer a valuable contribution to other developing countries seeking to integrate AI solutions into their public services.
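As a small, hedged illustration of the AHP step mentioned in this methodology (the pairwise judgments and criteria below are invented, not the project's real prioritization data), the priority vector and consistency ratio can be derived from a pairwise comparison matrix:

```python
# Hypothetical AHP step: derive priorities from a pairwise comparison matrix.
# The matrix values are illustrative only, not the Gov.br project's judgments.
import numpy as np

# Criteria (example): impact, feasibility, cost. A[i, j] = importance of i over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
priorities = np.abs(eigvecs[:, k].real)
priorities /= priorities.sum()                   # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)                  # consistency index
ri = 0.58                                        # Saaty's random index for n = 3
cr = ci / ri                                     # consistency ratio (< 0.1 is acceptable)

print("priorities:", np.round(priorities, 3), "CR:", round(cr, 3))
```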

Paper Nr: 47
Title:

OMNIMOD: Automating Ontology Modularization for Digital Library Data Using CIDOC-CRM as Use Case

Authors:

Giulia Biagioni

Abstract: This paper introduces OMNIMOD, a new method designed to modularize ontologies, the RDF-based structures that organize knowledge within specialized domains. By simplifying complex information into manageable components, OMNIMOD enhances the analysis, understandability, and navigation of large ontological frameworks while also extending its functionality to include the modularization of associated data records, known as instance data. The method has been developed based on theoretical insights gathered from Cognitive Load Theory (CLT) and has been successfully tested and applied to CIDOC-CRM (the Conceptual Reference Model of the International Committee for Documentation), the ISO standard for describing data related to cultural heritage materials. The accompanying Python functions, developed for OMNIMOD and provided in the body of the text, empower readers to adapt and utilize OMNIMOD according to their specific needs.
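The paper's own Python functions are what implement OMNIMOD; the fragment below is only a generic sketch of what modularizing an RDF ontology around a seed class can look like with rdflib. The file name, seed IRI, and one-hop heuristic are placeholder assumptions, not the OMNIMOD algorithm:

```python
# Generic sketch (not the OMNIMOD implementation): extract a small module
# around a seed class by collecting triples within one hop of it.
from rdflib import Graph, URIRef

g = Graph()
g.parse("cidoc_crm.rdf")                      # placeholder file name

seed = URIRef("http://www.cidoc-crm.org/cidoc-crm/E22_Human-Made_Object")  # placeholder IRI
module = Graph()

for s, p, o in g.triples((seed, None, None)):  # triples where the seed is subject
    module.add((s, p, o))
for s, p, o in g.triples((None, None, seed)):  # triples where the seed is object
    module.add((s, p, o))

module.serialize("module.ttl", format="turtle")
print(f"module size: {len(module)} triples")
```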

Paper Nr: 52
Title:

Microfront-End: Systematic Mapping

Authors:

Luiz Felipe Cirqueira dos Santos, Marcus Vinicius Santana Silva, Shexmo Richarlison Ribeiro dos Santos, Fábio Gomes Rocha and Elisrenan Barbosa da Silva

Abstract: In the context of backend development, adopting microservices has brought new challenges to frontend integration, leading to the development of micro frontends. Monolithic frontends go against the principles of microservices specialization, requiring a more modular approach. This study aims to characterize the adoption of micro frontends within microservices. To achieve this, a systematic mapping was conducted across six databases, selecting 32 articles that identified 11 tools for managing applications with micro frontends. The study discusses the increase in complexity and the contexts and conditions for applying micro frontend implementation models. As a contribution, this study outlines the main paths for adopting micro frontends within a microservices architecture, providing valuable insights for software engineering and promoting better integration between the backend and the frontend. Additionally, it describes the characteristics of micro frontends and the advantages and disadvantages of their adoption, highlighting security concerns, their impact on team collaboration, and how they may affect the organizational structure and team development processes.

Short Papers
Paper Nr: 21
Title:

Speaking the Same Language or Automated Translation? Designing Semantic Interoperability Tools for Data Spaces

Authors:

Maximilian Stäbler, Tobias Guggenberger, DanDan Wang, Richard Mrasek, Frank Köster and Chris Langdon

Abstract: This paper tackles the challenge of semantic interoperability in the ever-evolving data management and sharing landscape, crucial for integrating diverse data sources in cross-domain use cases. Our comprehensive approach, informed by an extensive literature review, focus-group discussions and expert insights from seven professionals, led to the formulation of six innovative design principles for interoperability tools in Data Spaces. These principles, derived from key meta-requirements identified through semi-structured interviews in a focus group, address the complexities of data heterogeneity and diversity. They offer a blend of automated, scalable, and resilient strategies, bridging theoretical and practical aspects to provide actionable guidelines for semantic interoperability in contemporary data ecosystems. This research marks a significant contribution to the domain, setting a new design approach for Data Space integration and management.

Paper Nr: 43
Title:

Towards Interoperability of Systems of Systems Using GraphQL

Authors:

Eduardo Dantas Luna, Vitor Pinheiro de Almeida and Eduardo Thadeu Corseuil

Abstract: The growing interconnectedness of devices and systems presents a significant opportunity to develop solutions that leverage data from diverse sources. However, integrating data from these heterogeneous systems, which may use different protocols and paradigms, poses a considerable challenge. This paper proposes an innovative solution to address this challenge by introducing an algorithm that generates a GraphQL API management layer. This layer acts as a bridge between disparate systems, enabling seamless data integration and exchange. By leveraging GraphQL’s efficient data retrieval capabilities and a knowledge graph to define relationships between data elements, the algorithm automates the creation of a processing layer that simplifies the integration process. The proposed solution offers a promising approach to overcome the complexities of data integration, paving the way for more robust and adaptable data-driven applications.

Paper Nr: 45
Title:

Enable Business Users to Embed Dynamic Database Content in Existing Web-Based Systems Using Web Components and Generic Web Services

Authors:

Andreas Schmidt and Tobias Münch

Abstract: In our digitalized world and under the economic pressure of competition, every company must react flexibly to opportunities and problems that arise. One way to cope with these challenges is to use web-based Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM) systems, which provide significant functionality within their system scope. Third-party systems often have to be integrated with ERP or CRM systems but cannot be connected, for instance, because of limited Application Programming Interfaces (APIs) or data structures. Such integration tasks are therefore complex and time-consuming and must be done by software engineers, who are a scarce resource in today's enterprise context. However, HTML documents can be integrated with web-based systems such as ERP or CRM, and HTML creation is not limited to the software engineering workforce. Our low-code environment, which is based on the W3C web components standards and RESTful web services with state-of-the-art authentication approaches, can address this shortage because it empowers business developers to embed dynamic database content declaratively in static HTML pages or in web-based systems such as WordPress or the SoftEngine ERP-Suite. Our system also allows the declarative integration of forms for creating, modifying, and deleting data records (CRUD functionality). The low-code web components access the database via the RESTful service, whose API abstracts database-manufacturer-specific characteristics such as the storage format of the metadata.

Paper Nr: 46
Title:

Moving into Co-Creative Robotics

Authors:

Sanaz Nikghadam-Hojjati, Eda Marchetti, José Barata and Antonello Calabrò

Abstract: In a volatile, uncertain, complex, and ambiguous (VUCA) world, robots should be able to adapt to different aspects of human life and create a positive impact. To achieve this, it is important to create and develop robots that are not only technically advanced but also socially adaptable, autonomous, creative, collaborative, and ethical. This paper introduces the concept of Co-Creative Robotics and analyses the role of collaborative networks in advancing it. Alongside introducing the technological dimensions of Co-Creative Robotics, the paper compares the characteristics of Co-Creative Robotics with those of computational creativity and traditional robotics. Finally, to support the advancement of this field, the authors investigate the role of different categories of collaborative networks in advancing Co-Creative Robotics.

Paper Nr: 62
Title:

Decentralizing Democracy with Semantic Information Technology: The D-CENT Retrospective

Authors:

Harry Halpin

Abstract: One of the central questions facing democracy is the lack of engagement from ordinary citizens. D-CENT (Decentralized Citizens ENgagement Technologies) used cross-platform and decentralized technologies, ranging from Semantic Web ontologies to W3C federated social web standards, to help communities autonomously share data, collaborate, and organize their operations as a decentralized network. With the benefit of hindsight, we can analyze why this decentralized and standardized approach, while successful in the short term, did not succeed in sustaining engagement in the long term, and why blockchain systems may be the next step forward.

Paper Nr: 65
Title:

Impact of a Split into Single Items on the Response Rate of the User Experience Questionnaire Short (UEQ-S)

Authors:

Marco Schaa, Jessica Kollmorgen, Martin Schrepp and Jörg Thomaschewski

Abstract: Standardized questionnaires are an efficient and reliable method to measure the user experience of a product, service, or system. However, response rates for such surveys are often quite low, and the length of a survey has an impact on the willingness to respond. In this paper, we investigate whether it is useful to split a questionnaire into single items to reduce the time needed for participation. In a real web shop, customers were asked on the confirmation page of their order to answer either all 8 items of the UEQ-S or just one randomly selected item. Results showed that presenting single items increased the response rate. The increase was statistically significant, but from a practical point of view not big enough to justify this method. The measured scores were statistically different for 2 of the 8 items; thus, some context effects of neighboring items seem to impact the scores in the full UEQ-S version.

Paper Nr: 70
Title:

SECURER: User-Centric Cybersecurity Testing Framework for IoT System

Authors:

Tauheed Waheed, Eda Marchetti and Antonello Calabrò

Abstract: The rapid advancement of IoT systems and their interconnected nature highlight the critical need for strong cybersecurity measures. The SECURER framework is designed to offer a user-centric, adaptable, and comprehensive approach to cybersecurity testing. It aims to strengthen the security of IoT systems by prioritizing user behavior, aligning with evolving cyber threats, utilizing existing test suites, and ensuring regulatory compliance. We seek to showcase the functionality and applications of our user-centric cybersecurity testing framework (SECURER) by outlining a practical testing methodology to counteract cyber threats targeting the rolling code technology used in the automotive industry.

Paper Nr: 72
Title:

Automated Hybrid Ransomware Family Classification

Authors:

George Raul Michael Dunca and Ioan Bădărînză

Abstract: Ransomware is one of the most destructive forms of malware that exists today, posing a continuous and evolving threat to everyone from a regular user to a large corporation. Ransomware can be analyzed in three main ways: statically, which involves extracting information without execution; dynamically, which implies running the program in a controlled environment and observing its behavior; and in a hybrid fashion, which addresses the limitations of the two previous approaches by combining them. The aim of this study is to maximize the number of features extracted from Windows portable executables (PE) using a hybrid approach and to find the most useful attributes for differentiating between various ransomware families. A total of 707 samples across 99 families were successfully examined, from which 783 features were identified as the most informative. This data was then used to train a Random Forest model, which conducts the classification. We also developed RansoGuard, a Windows application with a graphical user interface that extracts hybrid attributes from a specified portable executable file, uses the Random Forest model to output a prediction of the ransomware family to which the file belongs, and finally generates a detailed report. The results obtained are promising, with the model achieving an accuracy of 71.83%, along with a precision of 0.79 and a recall of 0.72.
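As a hedged sketch of the classification stage only (the CSV path and column names are placeholders, not the paper's dataset of 707 samples and 783 features), a Random Forest over pre-extracted hybrid features with a ranking of the most informative attributes can look like this:

```python
# Illustrative sketch of the classification stage: a Random Forest over
# pre-extracted hybrid features. File and column names are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

df = pd.read_csv("hybrid_features.csv")          # one row per PE sample
X = df.drop(columns=["family"])                  # static + dynamic features
y = df["family"]                                 # ransomware family labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="weighted", zero_division=0))
print("recall   :", recall_score(y_test, pred, average="weighted", zero_division=0))

# Most informative attributes, as ranked by the forest.
top = pd.Series(clf.feature_importances_, index=X.columns).nlargest(10)
print(top)
```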

Paper Nr: 74
Title:

Using Chat GPT for Malicious Web Links Detection

Authors:

Thomas Kaisser and Claudia-Ioana Coste

Abstract: Over the last years, the Internet has come to dominate most businesses and industries. These advancements have also led to the development of specialized threats designed to outsmart everyday users and to collect personal data and financial gains. One of the most relevant attacks involves malicious web links, which can be inserted into private messages, emails, social media posts, and other channels to deceive consumers and trick them into clicking. The present approach classifies links based on multiple manually extracted features, followed by a feature importance analysis. Moreover, on a smaller dataset, we employ OpenAI's models for classification and then add a new feature representing the ChatGPT classification. In this way, we manage to improve the overall performance of multiple machine learning methods. The first experiment considers only a Random Forest classifier, while the second adds thirteen other intelligent algorithms as well as ensembles constructed from the best-performing ones. The best accuracy obtained (95%) is reached by the RF model on the whole dataset.
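A minimal sketch of the manual feature-extraction idea described here (the individual features and the gpt_label column are illustrative assumptions, not the paper's exact feature set):

```python
# Illustrative manual URL features plus a column for an LLM-provided label.
# Feature choices and the gpt_label field are assumptions, not the paper's list.
import re
from urllib.parse import urlparse

def url_features(url, gpt_label=None):
    parsed = urlparse(url)
    host = parsed.netloc
    features = {
        "length": len(url),
        "num_dots": url.count("."),
        "num_digits": sum(c.isdigit() for c in url),
        "has_at": int("@" in url),
        "has_ip_host": int(bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host))),
        "uses_https": int(parsed.scheme == "https"),
        "num_subdirs": parsed.path.count("/"),
    }
    if gpt_label is not None:          # optional feature from an LLM classification
        features["gpt_label"] = gpt_label
    return features

print(url_features("http://192.168.0.1/login@secure-update.php", gpt_label=1))
```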

Paper Nr: 16
Title:

Business Intelligence Solutions Adoption Model for Peruvians SMEs Based on UTAUT2

Authors:

Luis Javier Ortiz Leigh, Axel Gutierrez Huaman and Jymmy Stuwart Dextre Alarcon

Abstract: Small and medium-sized enterprises (SMEs) face significant challenges to grow and make strategic decisions due to their small size and limited resources. This study proposes a model to identify the factors that influence the adoption of Business Intelligence (BI) solutions in Peruvian SMEs, based on the Unified Theory of Acceptance and Use of Technology (UTAUT2). The study methodology is divided into four phases. The first phase consists of analyzing existing technology adoption models to identify critical components that affect BI adoption. In the second phase, the proposed model is developed and the survey questions that measure the relevant factors for statistical validation are designed. The third phase involves data collection through surveys targeting SMEs in Peru, followed by analysis to identify significant patterns. The fourth and final phase validates the model using the Partial Least Squares Structural Equation Modeling (PLS-SEM) technique, evaluating the robustness and accuracy of the proposed model. The validation results show that the proposed factors Performance Expectation (PE), Price/Value Ratio (PV), and Competitive Pressure (CP) are the most influential on the intention of Peruvian SMEs to use BI solutions.

Paper Nr: 26
Title:

Enterprise Architecture to Optimize the Sales Process Using the TOGAF ADM Cycle in Companies in the Retail Sector

Authors:

Deysi Campos Ruiz, Gianmarco Vargas Araujo and Jymmy Dextre Alarcon

Abstract: This study explored the application of enterprise architecture, specifically utilizing the TOGAF ADM cycle, to optimize the sales process in retail enterprises. In the context of digital transformation, it was crucial for retail operations to strategically adapt. The objective was to demonstrate how a structured approach based on TOGAF could enhance operational efficiency, decision-making, and competitiveness in the dynamic retail market. The methodology included several phases of the TOGAF ADM cycle: the Preliminary Phase involved designing the research framework, organizational analysis, gathering business information, establishing architectural principles, and identifying stakeholders and scope; the Architecture Vision phase defined the baseline and target architecture, identified gaps, and key resources; the Business Architecture phase provided a detailed analysis of the business model, strategies, operations, innovation, enterprise capabilities, and a SWOT analysis; the Information Systems Architecture phase assessed the technology to be used, required human resources, business evolution, and technological adaptability; the Technology Architecture phase focused on technological infrastructure, security, customer loyalty, market knowledge, and customer service. The main conclusion is that the enterprise architecture based on the TOGAF ADM cycle allowed us to optimize the sales processes in the retail sector by 63%, improving operational efficiency and adaptation to market demands, resulting in more satisfying shopping experiences.

Paper Nr: 31
Title:

Model for Detecting Illegal Tree Felling in the Protected Area of Bagua in Amazonas Using Convolutional Neural Networks

Authors:

Wilmer Calle Carbajal, Julio Raúl Huamán Llantoy and Jymmy Dextre Alarcon

Abstract: Illegal logging is a problem that occurs in different regions of Peru, causing deforestation and biodiversity loss and contributing to climate change. Despite the efforts of organizations and governments to combat this problem, constant detection and monitoring are challenging due to the vast extension of forests and the lack of human resources to effectively monitor all areas. Therefore, the use of a detection model is proposed as a solution to detect illegal logging in real time through chainsaw sound. This model consists of four phases: Input, Analysis, Execution, and Output. Phase 1 focuses on the collection of sounds from recording devices. Phase 2 analyzes and processes the characteristic chainsaw sounds. Phase 3 focuses on the execution of the model. Phase 4 displays the detection result as a binary value (1 or 0). The results of the experimental validation were obtained by using mobile devices to record and send audio to the detection model. These results were positive and acceptable in terms of accuracy in detecting illegal logging activities, achieving a 10% reduction in such activities.
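A hedged sketch of what the Analysis and Output phases can look like when built on MFCC audio features; the file paths, sample rate, and Random Forest classifier are placeholder choices, not the model described in the paper:

```python
# Illustrative sketch: MFCC features from an audio clip and a binary
# decision (1 = chainsaw detected, 0 = not). Paths and model are placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)            # (13, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # 26-dim summary

# Assume labelled training clips are available (placeholder lists).
train_paths = ["chainsaw_01.wav", "forest_ambient_01.wav"]
train_labels = [1, 0]

X = np.stack([clip_features(p) for p in train_paths])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_labels)

# Output phase: 1 or 0 for a newly recorded clip.
print(clf.predict([clip_features("new_recording.wav")])[0])
```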

Paper Nr: 44
Title:

Human-Centric Dev-X-Ops Process for Trustworthiness in AI-Based Systems

Authors:

Antonello Calabrò, Said Daoudagh, Eda Marchetti, Oum-El-kheir Aktouf and Annabelle Mercier

Abstract: AI's potential for economic growth necessitates ethical and socially responsible AI systems. Increasing human awareness and adopting human-centric solutions that incorporate, combine, and assure by design the most critical properties (such as security, safety, trust, transparency, and privacy) during development will be key to mitigating and effectively preventing issues in the era of AI. With this in view, this paper proposes a human-centric Dev-X-Ops process (DXO4AI) for trustworthiness in AI-based systems. DXO4AI leverages existing solutions, focusing on the AI development lifecycle with a by-design solution for multiple desired properties. It integrates multidisciplinary knowledge and maintains a stakeholder focus.

Paper Nr: 59
Title:

A Systematic Literature Review on Continuous Integration and Deployment (CI/CD) for Secure Cloud Computing

Authors:

Sabbir M. Saleh, Nazim Madhavji and John Steinbacher

Abstract: As cloud environments become widespread, cybersecurity has emerged as a top priority across areas such as networks, communication, data privacy, response times, and availability. Various sectors, including industries, healthcare, and government, have recently faced cyberattacks targeting their computing systems. Ensuring secure app deployment in cloud environments requires substantial effort. With the growing interest in cloud security, conducting a systematic literature review (SLR) is critical to identifying research gaps. Continuous Software Engineering, which includes continuous integration (CI), delivery (CDE), and deployment (CD), is essential for software development and deployment. In our SLR, we reviewed 66 papers, summarising tools, approaches, and challenges related to the security of CI/CD in the cloud. We addressed key aspects of cloud security and CI/CD and reported on tools such as Harbor, SonarQube, and GitHub Actions. Challenges such as image manipulation, unauthorised access, and weak authentication were highlighted. The review also uncovered research gaps in how tools and practices address these security issues in CI/CD pipelines, revealing a need for further study to improve cloud-based security solutions.

Paper Nr: 64
Title:

Analysis: Accessibility of VR Games Could Be Better

Authors:

Laura Meiser, Kristina Nagel and Maria Rauschenberger

Abstract: The rapid growth of the video game industry, including the niche of virtual reality (VR) gaming, highlights the significant market potential and demand for accessible gaming options. Despite many people with disabilities engaging in video games, a substantial part of this group finds current offerings inaccessible, thereby restricting their gaming experience. This study assesses the accessibility of popular VR games by applying a comprehensive set of criteria adapted from existing guidelines. We analyzed the five top-rated VR-compatible games: Beat Saber, Tetris Effect: Connected, Half-Life: Alyx, Microsoft Flight Simulator (MFS), and Assetto Corsa. Our findings indicate that the overall accessibility of these games is poor, with only 42.36% of the accessibility tests passed on average. MFS is a positive outlier with 74.71% of the tests passed, which may be attributed to Microsoft's development of accessibility guidelines and controllers. Our study underscores the necessity for improved awareness and implementation of unified accessibility guidelines within the gaming industry. We also recommend some low-cost but high-impact improvements for the tested games.

Paper Nr: 68
Title:

Access Control Integration in Sparkplug-Based Industrial Internet of Things Systems: Requirements and Open Challenges

Authors:

Pietro Colombo and Elena Ferrari

Abstract: Sparkplug (ISO, 2023) is an emerging open-source software specification that supports integrating applications and devices in Industrial Internet of Things (IIoT) systems and eases data integration. Although recently introduced, Sparkplug has rapidly gained popularity, and its specifications have already been defined as an ISO standard. However, the basic data protection features it supports could hinder the adoption of Sparkplug on a large scale. Efficient access control solutions for Sparkplug-based IIoT systems open new research challenges. In this position paper, we take the first step to fill this void by presenting key requirements for integrating access control into Sparkplug systems and discussing significant issues for developing a suitable access control framework.

Paper Nr: 69
Title:

FASTER-AI: A Comprehensive Framework for Enhancing the Trustworthiness of Artificial Intelligence in Web Information Systems

Authors:

Christos Troussas, Christos Papakostas, Akrivi Krouska, Phivos Mylonas and Cleo Sgouropoulou

Abstract: With the increasing embedding of artificial intelligence (AI) in web information systems (WIS), strong assurance of the reliability of such AI systems is required. Although this aspect is gaining importance, no comprehensive framework has yet been developed to ensure AI reliability. This paper aims to bridge that gap by proposing the FASTER-AI framework to enhance the reliability of AI in WIS. The key dimensions of concern within FASTER-AI are fairness/bias mitigation, explainability/transparency, security/privacy, robustness, and ethical considerations/accountability. Each dimension targets precisely the area where trust must be established: reduced bias, model interpretability, data protection, model resilience, and ethical governance. The implementation methodology for these dimensions involves preliminary assessment, planning, integration, testing, and continuous improvement. FASTER-AI was validated through in-depth case studies across different verticals: e-commerce, finance, health, and fraud detection. This work demonstrates how FASTER-AI is applied through illustrative case studies showing promising performance. The initial results, with strong improvements in terms of fairness, transparency, security, and robustness, indicate that FASTER-AI can be successfully applied.

Paper Nr: 82
Title:

Cybersecurity Testing for Cobots

Authors:

Tauheed Waheed, Eda Marchetti and Antonello Calabrò

Abstract: The rapid evolution and interconnected nature of the IoT (Internet of Things) emphasize the urgent need for robust cybersecurity measures. Cybersecurity presents considerable risks and threats for the cobot (collaborative robot) industry: cyber attackers can exploit weaknesses, potentially allowing unauthorized entry and compromising critical assets. The proposed Cybersecurity Testing Framework (CTF) emerges as a promising answer to these issues, providing an adaptable, robust, and thorough method for cobot cybersecurity assurance. Understanding why cybersecurity testing is needed for the cobot industry and how cobot users interact with the system is vital, considering the changing landscape of cyber threats. CTF seeks to enhance cobot cybersecurity by leveraging available testing suites and adhering to regulatory standards. We aim to showcase our testing framework's effectiveness and potential uses by depicting a specific testing strategy to address vulnerabilities and cyber threats in cobots. The paper details the theoretical foundation and critical features of CTF and presents its initial prototype to prove its suitability.

Area 3 - Social Network Analytics

Full Papers
Paper Nr: 27
Title:

Utilization of Clustering Techniques and Markov Chains for Long-Tail Item Recommendation Systems

Authors:

Diogo Vinícius de Sousa Silva, Davi Silva da Cruz, Diego Corrêa da Silva, João Paulo Dias de Almeida and Frederico Araújo Durão

Abstract: The primary goal of this paper is to develop recommendation models that guide users to niche but highly relevant items in the long tail. To this end, two major techniques are explored: clustering and the representation of matrices through graphs. The first technique adopts Markov chains to calculate similarities between the nodes of a user-item graph. The second technique applies clustering to the set of items in a dataset. The results show that it is possible to improve the accuracy of the recommendations even while focusing on less popular items, in this case, the niche products that form the long tail. The recall in some cases improved by about 27.9%, while the popularity of recommended items declined. In addition, the recommendations contain more diversified items, indicating better exploitation of the long tail. Finally, an online experiment was conducted using an evaluation questionnaire with the employees of the HomeCenter store, which provided the dataset, in order to analyze the performance of the proposed algorithms directly with users. The results showed that the evaluators preferred the proposed algorithms, demonstrating the effectiveness of the proposed approaches.
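As a minimal, hedged illustration of the first technique (a Markov-chain random walk over a user-item graph; the tiny ratings matrix and walk length are invented, not the paper's data or algorithm):

```python
# Minimal illustration of node similarity via a random walk on a user-item
# bipartite graph. The ratings matrix is a toy example, not the paper's data.
import numpy as np

# Rows = users, columns = items; 1 means the user interacted with the item.
R = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
])
n_users, n_items = R.shape

# Adjacency of the bipartite graph: nodes 0..2 are users, 3..6 are items.
A = np.zeros((n_users + n_items, n_users + n_items))
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T

# Row-normalize to obtain the Markov transition matrix.
P = A / A.sum(axis=1, keepdims=True)

# Distribution after k steps starting from user 0; item entries act as scores.
v = np.zeros(n_users + n_items)
v[0] = 1.0
for _ in range(3):                   # odd number of steps ends on item nodes
    v = v @ P

item_scores = v[n_users:]
seen = R[0].astype(bool)
item_scores[seen] = -np.inf          # do not recommend already-seen items
print("recommended item index:", int(np.argmax(item_scores)))
```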

Paper Nr: 77
Title:

Hate Speech Detection Using Cross-Platform Social Media Data in English and German Language

Authors:

Gautam Kishore Shahi and Tim A. Majchrzak

Abstract: Hate speech has grown into a pervasive phenomenon, intensifying during times of crisis, elections, and social unrest. Multiple approaches have been developed to detect hate speech using artificial intelligence; however, a generalized model has yet to be achieved. The challenge for hate speech detection as text classification is the cost of obtaining high-quality training data. This study focuses on detecting bilingual hate speech in YouTube comments and measuring the impact of using additional data from other platforms on the performance of the classification model. We examine the value of additional cross-platform training datasets for improving the performance of classification models, and we include factors such as content similarity, definition similarity, and common hate words to measure the impact of the datasets on performance. Our findings show that adding more similar datasets based on content similarity, hate words, and definitions improves the performance of classification models. The best performance was obtained by combining datasets from YouTube comments, Twitter, and Gab, with F1-scores of 0.74 and 0.68 for English and German YouTube comments, respectively.

Paper Nr: 78
Title:

SPACED: A Novel Deep Learning Method for Community Detection in Social Networks

Authors:

Mohammed Tirichine, Nassim Ameur, Younes Boukacem, Hatem M. Abdelmoumen, Hodhaifa Benouaklil, Samy Ghebache, Boualem Hamroune, Malika Bessedik, Fatima Benbouzid-Si Tayeb and Riyadh Baghdadi

Abstract: Community detection is a landmark problem in social network analysis. To address this challenge, we propose SPACED: Spaced Positional Autoencoder for Community Embedding Detection, a deep learning-based approach designed to effectively tackle the complexities of community detection in social networks. SPACED generates neighborhood-aware embeddings of network nodes using an autoencoder architecture. These embeddings are then refined through a mixed learning strategy with generated community centers, making them more community-aware. This approach helps unravel network communities through an appropriate clustering strategy. Experimental evaluations across synthetic and real-world networks, as well as comparisons with state-of-the-art methods, demonstrate the high competitiveness and often superiority of SPACED for community detection while maintaining reasonable time complexities.

Short Papers
Paper Nr: 32
Title:

Exploiting Data Spatial Dependencies for Employee Turnover Prediction

Authors:

Sandra Maria Pereira, Jéssica da Assunção Almeida de Lima, Alessandro Garcia Vieira and Wladmir Cardoso Brandão

Abstract: Machine learning techniques have been increasingly employed to address problems within the field of human resources. A significant issue in this domain is predicting employee turnover, i.e., the probability of an employee leaving the company. Employee turnover is directly related to the availability of knowledge and resources that affect the continuity of the company's supply of goods and services. Managing employee turnover involves multiple areas of expertise, rendering it a complex problem. This article proposes a methodology to determine whether a prediction problem exhibits spatial dependence, thereby demanding the use of spatial models over non-spatial models for optimal resolution. Experimental results show that significant differences arise when analyzing correlations that consider the geographical positioning of the data. In particular, prediction models that use geographic features to predict employee turnover outperform prediction models that do not use them, with gains ranging from 9.6% to 19.6% in the standard deviation of MAPE, from 5.5% to 10.4% in MAE, and from 0.99% to 2.9% in RMSE.

Paper Nr: 67
Title:

Improving Recommendation Quality in Collaborative Filtering by Including Prediction Confidence Factors

Authors:

Kiriakos Sgardelis, Dionisis Margaris, Dimitris Spiliotopoulos, Costas Vassilakis and Stefanos Ougiaroglou

Abstract: Collaborative filtering is a prevalent recommender system technique that generates rating predictions based on the rating values given by a user's near neighbours. Consequently, for each user, the items scoring the highest prediction values are recommended. Unfortunately, predictions inherently entail errors, which, in the case of recommender systems, manifest as unsuccessful recommendations. However, along with each rating prediction value, prediction confidence factors can be computed. As a result, items with low prediction confidence factor values can either be declined for recommendation or have their recommendation priority demoted. In the former case, some users may receive fewer recommended items or even none, especially when a sparse dataset is used. In this paper, we present an algorithm that determines the items to be recommended by considering both the rating prediction values and the confidence factors of the predictions, allowing predictions with higher confidence factors to outrank predictions with higher values but lower confidence. The presented algorithm enhances recommendation quality while retaining the number of recommendations for each user.
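A small sketch of the demotion-based re-ranking idea described here; the confidence threshold and the penalty used to combine the two signals are illustrative assumptions, not the paper's exact algorithm:

```python
# Illustrative re-ranking: demote low-confidence predictions instead of
# recommending purely by predicted rating. Threshold/penalty are assumptions.
def rank_items(predictions, confidence_threshold=0.5, penalty=1.0):
    """predictions: list of (item_id, predicted_rating, confidence) tuples."""
    def key(entry):
        _, rating, confidence = entry
        # Low-confidence items keep their prediction but are pushed down.
        demotion = penalty if confidence < confidence_threshold else 0.0
        return rating - demotion
    return [item for item, _, _ in sorted(predictions, key=key, reverse=True)]

preds = [("A", 4.8, 0.2), ("B", 4.5, 0.9), ("C", 4.4, 0.7), ("D", 3.9, 0.95)]
print(rank_items(preds))   # B and C outrank the higher-rated but uncertain A
```

Because every item is still ranked rather than discarded, each user keeps the same number of recommendations, which matches the behaviour the abstract describes.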

Paper Nr: 71
Title:

Viewpoint Analysis of Autism-Related Comments in Reddit During COVID-19

Authors:

Narges Azizifard, Lidia Pivovarova and Eetu Mäkelä

Abstract: Major events, such as the COVID-19 health crisis, have different effects on various groups, with vulnerable populations like individuals with Autism Spectrum Disorder (ASD) being especially affected. Social media platforms capture the unique experiences of these individuals and their families, offering a wide range of perspectives and voices. This study utilizes text analysis methods to examine Reddit discussions concerning both autism and COVID-19. Through the analysis of these comments, we identify key themes, including challenges in education, employment, and family life, as well as conspiracy theories and propaganda surrounding vaccination. Our findings shed light on the struggles faced by individuals with autism during the lockdown and highlight their coping strategies. Additionally, the study reveals significant variations in sentiment and emotions across different themes within the comments, providing deeper insights into how these experiences are expressed and understood in online communities.

Paper Nr: 13
Title:

Temporal Analysis of Brazilian Presidential Election on Twitter Based on Formal Concept Analysis

Authors:

Daniel Pereira, Julio Neves, Wladmir Brandão and Mark Song

Abstract: Social networks have become an environment where users express their feelings and share news in real time. However, analyzing the content produced by users is not a simple task, given the volume of posts. It is important to comprehend the expressions made by users to gain insights into politicians, public figures, and news. The state of the art lacks studies that analyze how the topics discussed by social network users change over time. In this context, this work measures how topics discussed on Twitter vary over time. Formal Concept Analysis was used to measure how these topics vary, considering the support and confidence metrics. Our solution was tested on tweets related to the Brazilian presidential election. The results confirm that it is possible to comprehend what Twitter users were discussing and how these topics changed over time. Our work is beneficial for politicians seeking to analyze the discussions about them among users. Our analysis of 3,634 tweets revealed several significant patterns, such as the association between political figures and topics like fake news and election fraud. These findings demonstrate how social media discussions evolve during key political events, providing insights that can assist political campaigns in real time.
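As a tiny, hedged illustration of the support and confidence metrics used here (computed over an invented tweet-term incidence table, not the study's 3,634 tweets or its Formal Concept Analysis lattice):

```python
# Toy illustration of support and confidence over a tweet-term incidence table.
# The tweets and terms are invented; the study applies Formal Concept Analysis
# to 3,634 real tweets.
tweets = [
    {"candidate_a", "fake_news"},
    {"candidate_a", "election_fraud"},
    {"candidate_b"},
    {"candidate_a", "fake_news", "election_fraud"},
]

def support(terms):
    return sum(terms <= t for t in tweets) / len(tweets)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

print("support({candidate_a, fake_news}) =", support({"candidate_a", "fake_news"}))
print("confidence(candidate_a -> fake_news) =",
      confidence({"candidate_a"}, {"fake_news"}))
```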

Paper Nr: 63
Title:

Analyzing Tweets Using Topic Modeling and ChatGPT: What We Can Learn About Teachers and Topics During COVID-19 Pandemic-Related School Closures

Authors:

Anna C. Weigand, Maj F. Jacob, Maria Rauschenberger and Maria José Escalona Cuaresma

Abstract: This study examines the shifting discussions of teachers within the #twlz community on Twitter across three phases of the COVID-19 pandemic – before school closures and during the first and second school closures. We analyzed tweets from January 2020 to May 2021 to identify topics related to education, digital transformation, and the challenges of remote teaching. Using machine learning and ChatGPT, we categorized discussions that transitioned from general educational content to focused dialogues on online education tools during school closures. Before the pandemic, discussions were generally focused on education and digital transformation. During the first school closures, conversations shifted to more specific topics, such as online education and tools to adapt to distance learning. Discussions during the second school closures reflected more precise needs related to fluctuating pandemic conditions and schooling requirements. Our findings reveal a consistent increase in the specificity and urgency of the topics over time, particularly regarding digital education.

Area 4 - Web Intelligence and Semantic Web

Full Papers
Paper Nr: 14
Title:

DOM-Based Online Store Comments Extraction

Authors:

Julián Alarte, Carlos Galindo, Carlos Martín and Josep Silva

Abstract: Online stores often include a customer comments section on their product pages. This section is valuable for other customers, as they can read reviews from users who have previously purchased or tried the products. This feedback is also important for the owners and managers of online stores, as they can obtain valuable information about the products they sell, such as buyer opinions and ratings. Additionally, the comments section holds significant value for the manufacturers of the products, as they can analyze comments posted on various online stores to receive valuable feedback about their products. This work presents a novel technique to automatically extract customer comments from a web page without a priori knowledge of the page structure. The technique not only extracts text but also other types of relevant content, such as images, animations, and videos. It is based on the DOM tree and only needs to load a single web page to extract its product comments; therefore, it can be used in real time during browsing without the need for page preprocessing. To train and evaluate the technique, we built a benchmark suite from real and heterogeneous web pages. The empirical evaluation shows that the technique achieves an average F1 score of 90.4% and reaches 100% on most web pages.
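The paper's technique works on the DOM tree of a single loaded page without knowing its structure; the fragment below is only a rough, generic sketch of DOM-based extraction, where the class-name heuristic and sample HTML are placeholders rather than the paper's trained method:

```python
# Generic sketch of DOM-based extraction with BeautifulSoup. The class-name
# heuristic is a placeholder; the paper's technique learns the structure instead.
import re
from bs4 import BeautifulSoup

html = """
<div class="product-review"><p>Great blender, very quiet.</p>
  <img src="photo1.jpg"></div>
<div class="product-review"><p>Broke after a week.</p></div>
<div class="footer">Contact us</div>
"""

soup = BeautifulSoup(html, "html.parser")
comment_nodes = soup.find_all(attrs={"class": re.compile(r"review|comment", re.I)})

for node in comment_nodes:
    text = node.get_text(" ", strip=True)
    images = [img["src"] for img in node.find_all("img")]
    print({"text": text, "images": images})
```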

Paper Nr: 24
Title:

Evaluating Diversification in Group Recommendation of Points of Interest

Authors:

Jadna Almeida da Cruz, Frederico Araújo Durão and Rosaldo J. F. Rossetti

Abstract: With the massive availability and use of the Internet, the search for Points of Interest (POI) is becoming an arduous task. POI recommendation systems have therefore emerged to help users search for and discover relevant POIs based on their preferences and behaviors. These systems combine different information sources and present numerous research challenges and questions. POI recommender systems traditionally focused on providing recommendations to individual users based on their preferences and behaviors. However, there is an increasing need to recommend POIs to groups of users rather than just individuals, since people often visit POIs together in groups rather than alone. Moreover, some studies indicate that the further users travel, the less relevant the POIs are to them, and recommendations often belong to the same category, lacking diversity. This work proposes a POI recommendation system for groups that uses a diversity algorithm based on members' preferences and their locations. The evaluation of the proposal involved both online and offline experiments using accuracy metrics, and it was observed that the level at which the results were analyzed was relevant: for the top-3 recommendations, recommendations without diversity performed better, but diversification positively impacted the results at the top-5 and top-10 levels.

Paper Nr: 30
Title:

Learning to Predict Email Open Rates Using Subject and Sender

Authors:

Daniel Vitor de Oliveira Santos and Wladmir Cardoso Brandão

Abstract: The burgeoning daily volume of emails has metamorphosed user inboxes into a battleground where marketers vie for attention. This paper investigates the pivotal role of email subject lines in influencing open rates, a critical metric in email marketing effectiveness. We employ text mining and advanced machine learning methodologies to predict email open rates, utilizing subject lines and sender information. Our comparative analysis spans eight regression models, leveraging diverse strategies such as morphological text attributes, operational business factors, and semantic embeddings derived from TF-IDF, Word2Vec, and OpenAI’s language models. The dataset comprises historical email campaign data, enabling the development and validation of our predictive models. Notably, the CatBoost model, augmented with operational features and dimensionally reduced embeddings, demonstrates superior performance, achieving a Root Mean Squared Error (RMSE) of 5.16, Mean Absolute Error (MAE) of 3.60, a Coefficient of Determination (R2) of 77.53%, and Mean Absolute Percentage Error (MAPE) of 14.73%. These results provide actionable insights for improving subject lines and email marketing strategies, offering practical tools for practitioners and researchers.
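A minimal sketch of one of the compared strategies (TF-IDF subject-line features, dimensionality reduction, and a boosted regressor as a stand-in for CatBoost; the subjects, open rates, and column choices are invented placeholders, not the campaign data):

```python
# Hedged sketch of one modelling strategy from the comparison: TF-IDF subject
# embeddings, reduced with SVD, fed to a boosted regressor (a stand-in for
# CatBoost). Subjects and open rates below are invented placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

subjects = [
    "Last chance: 20% off ends tonight",
    "Your monthly account statement is ready",
    "New arrivals picked for you",
    "Webinar invitation: scaling your store",
] * 25
open_rates = np.random.default_rng(0).uniform(5, 40, size=len(subjects))

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    TruncatedSVD(n_components=20, random_state=0),
    GradientBoostingRegressor(random_state=0),
)
model.fit(subjects[:80], open_rates[:80])
pred = model.predict(subjects[80:])
print("MAE:", mean_absolute_error(open_rates[80:], pred))
```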

Paper Nr: 48
Title:

LLMs Based Approach for Quranic Question Answering

Authors:

Zakia Saadaoui, Ghassen Tlig and Fethi Jarray

Abstract: This paper addresses a prominent research gap in Quranic question answering, where current methodologies face challenges in capturing the nuanced aspects of inquiries related to the Quran. By presenting an innovative approach that utilizes Large Language Models (LLMs), including GPT-4, BART, and LLaMA, we aim to overcome these limitations, improving the clarity and precision of responses to Quranic queries. The evaluation of the proposed Quranic question answering system, using the F1-score metric, demonstrates encouraging results in comprehending and addressing various Quranic queries. Notably, the application of a chain of thought with BART achieves an impressive F1-score of 95%. This research offers a distinctive perspective on Quranic question answering through the integration of LLMs.

Paper Nr: 79
Title:

Soft Querying JSON Datasets with Personalized Preferences and Aggregations

Authors:

Paolo Fosci and Giuseppe Psaila

Abstract: Soft conditions are a powerful and established formal tool to select data on the basis of linguistic predicates. In previous work, the J-CO Framework (and its query language) was used to perform Soft Web Intelligence, i.e., a practical interpretation of the concept of Web Intelligence that exploits soft conditions to search for desired items in JSON datasets acquired from Web sources. However, the effectiveness of soft conditions depends on how elementary conditions are combined; in this sense, a plethora of proposals are available, such as the vector p-norm. This paper shows how a generic concept named “user-defined fuzzy evaluator”, recently introduced in the query language, allows users to define their own operators, so as to express advanced operators such as “and possibly”. The paper also shows how the AND operator defined as a vector p-norm actually behaves, depending on different configurations of parameters, so as to let the reader understand how to use it in practice.
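One common reading of a p-norm-style combination of membership degrees is the generalized power mean, shown in the hedged sketch below; the paper's exact definition of the p-norm AND and the J-CO query syntax for user-defined fuzzy evaluators may differ, so this only illustrates how the parameter p moves the combined score between min-like (strict AND) and max-like behaviour:

```python
# Hedged illustration: a generalized power mean over membership degrees.
# The actual p-norm AND defined in the J-CO query language may differ; this
# only shows how the parameter p moves the result between min and max.
def power_mean(memberships, p):
    n = len(memberships)
    if p == 0:                      # limit case: geometric mean
        prod = 1.0
        for m in memberships:
            prod *= m
        return prod ** (1.0 / n)
    return (sum(m ** p for m in memberships) / n) ** (1.0 / p)

degrees = [0.9, 0.6, 0.3]           # membership degrees of elementary conditions
for p in (-10, -1, 0, 1, 2, 10):
    print(f"p={p:>3}: {power_mean(degrees, p):.3f}")
# Strongly negative p approaches min() (a strict AND); large p approaches max().
```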

Short Papers
Paper Nr: 18
Title:

Prediction Web Application Based on a Machine Learning Model to Reduce Robberies and Thefts Rate in Los Olivos, San Martín De Porres and Comas

Authors:

Luis Estefano Mederos Sanchez, Carlos Antonio Zelada Padilla and Pedro S. Castañeda

Abstract: Robberies and thefts in the districts of Los Olivos, San Martín de Porres, and Comas in Lima, Peru are a constant problem, and the scarce police presence on the streets makes these areas ripe for crime. This project proposes to analyze crime rates so that public authorities can take measures that might reduce the crime rate, through the development of a Machine Learning model using Random Forest (RF) and a dataset with information from districts in situations similar to those addressed in the project. The proposed solution includes a web application interface for data input and analysis that can be used by municipal entities and the general public. Performance metrics such as Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) were included, with results showing MAEs of 29.194, 45.219, and 75.572 and RMSEs of 39.651, 58.199, and 93.110 for other districts in the same condition. The study concludes with a refinement of machine learning methodologies for crime prediction and emphasizes the potential for citizen engagement in crime prevention.

Paper Nr: 20
Title:

Trust the Data You Use: Scalability Assurance Forms (SAF) for a Holistic Quality Assessment of Data Assets in Data Ecosystems

Authors:

Maximilian Stäbler, Tobias Müller, Frank Köster and Chris Langdon

Abstract: Companies generate terabytes of raw, unstructured data daily, which requires processing and organization to become valuable data assets. In the era of data-driven decision-making, evaluating the quality of these data assets is crucial for various data services, users, and ecosystems. This paper introduces “Scalability Assurance Forms” (SAF), a novel framework to assess the quality of data assets, including raw data and semantic descriptions, with essential contextual information for cross-domain AI systems. The methodology includes a comprehensive literature review on quality models for linked data and knowledge graphs, as well as previous research findings on data quality. The SAF framework standardizes data asset quality assessments through 31 dimensions and 10 overarching groups derived from the literature. These dimensions enable a holistic assessment of dataset quality by grouping them according to individual user requirements. The modular approach of the SAF framework ensures the maintenance of data asset quality across interconnected data sources, supporting reliable data-driven services and robust AI application development. The SAF framework addresses the need for trust in systems where participants may not know or historically trust each other, promoting the quality and reliability of data assets in diverse ecosystems.
Download

Paper Nr: 28
Title:

Leveraging Ontologies for Handicraft Business Process Modeling: Application for the Pastry-Making Domain

Authors:

Fatma Zohra Rennane and Abdelkrim Meziane

Abstract: The global business environment changes very fast, forcing organizations to seek ways to improve their operational efficiency, cut costs, and enhance their decision-making. Optimizing performance is possible only with a clear understanding of how work gets done; it is here that well-defined business processes become instrumental. However, before an organization can understand, evaluate, and eventually improve its processes, it must first model them. Traditional modeling techniques, such as BPMN and UML, provide a standard framework of visual and graphical representation for these processes. Most of these methods, however, fall short of capturing domain knowledge. The evolution of Semantic Web technologies has given rise to ontology-based business process modeling, which provides meaningful representations for business processes through the integration of ontologies. In this paper, an ontology-based business process model (OBPM) of the handicraft domain is presented, focusing on the pastry-making field.
Download

Paper Nr: 29
Title:

SLIM-RAFT: A Novel Fine-Tuning Approach to Improve Cross-Linguistic Performance for Mercosur Common Nomenclature

Authors:

Vinícius Di Oliveira, Yuri Façanha Bezerra, Li Weigang, Pedro Carvalho Brom and Victor Rafael R. Celestino

Abstract: Natural language processing (NLP) has seen significant advancements with the advent of large language models (LLMs). However, substantial improvements are still needed for languages other than English, especially in specific domains such as applications of the Mercosur Common Nomenclature (NCM), a Harmonized System (HS)-based nomenclature used in Brazil. To address this gap, this study uses TeenyTineLLaMA, a foundational Portuguese LLM, as the base model for NCM-related processing. Additionally, a simplified Retrieval-Augmented Fine-Tuning (RAFT) technique, termed SLIM-RAFT, is proposed for task-specific fine-tuning of LLMs. This approach retains the chain-of-thought (CoT) methodology for prompt development in a more concise and streamlined manner, utilizing brief and focused documents for training. The proposed model demonstrates an efficient and cost-effective alternative for fine-tuning smaller LLMs, significantly outperforming TeenyTineLLaMA and ChatGPT-4 on the same task. Although the research focuses on NCM applications, the methodology can be easily adapted for HS applications worldwide.
Download
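
As a rough sketch of how a RAFT-style training example might be assembled from a question, a short retrieved document, and a chain-of-thought answer, consider the following Python snippet. The prompt template, JSONL layout, and example content are assumptions, not the SLIM-RAFT format used by the authors.

    # Hedged sketch: assembling a RAFT-style fine-tuning record that pairs a
    # question about an NCM heading with a brief retrieved document and a
    # chain-of-thought answer. The template is an assumption for illustration.
    import json

    def build_raft_record(question: str, document: str, cot_answer: str) -> str:
        prompt = (
            f"Document:\n{document}\n\n"
            f"Question:\n{question}\n\n"
            "Answer step by step, citing the document."
        )
        return json.dumps({"prompt": prompt, "completion": cot_answer}, ensure_ascii=False)

    record = build_raft_record(
        question="Which NCM chapter covers roasted coffee beans?",
        document="Chapter 09: Coffee, tea, mate and spices ...",
        cot_answer="The document lists coffee under Chapter 09, so roasted "
                   "coffee beans fall within that chapter.",
    )
    print(record)  # one line of a JSONL training file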

Paper Nr: 73
Title:

An Empirical Study to Use Large Language Models to Extract Named Entities from Repetitive Texts

Authors:

Angelica Lo Duca

Abstract: Large language models (LLMs) are a very recent technology that helps researchers, developers, and people in general complete their tasks quickly. The main difficulty in using this technology lies in defining effective instructions for the models, understanding the models’ behavior, and evaluating the correctness of the produced results. This paper describes a possible approach based on LLMs to extract named entities from repetitive texts, such as population registries. The paper focuses on two LLMs (GPT 3.5 Turbo and GPT 4) and runs empirical experiments based on different levels of detail in the instructions. Results show that the best performance is achieved with GPT 4 and a high level of detail in the instructions, but also at the highest cost. The best trade-off between cost and performance is obtained with GPT 3.5 Turbo and a medium level of detail.
Download
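
The following minimal sketch shows one way such an extraction instruction could be issued programmatically, assuming the openai Python client (version 1.x) and an API key in the environment. The prompt wording, the JSON keys, and the example record are illustrative and do not reproduce the instruction levels evaluated in the paper.

    # Minimal sketch of prompting an LLM to extract named entities from a
    # repetitive record (e.g., a population-registry line).
    from openai import OpenAI

    client = OpenAI()

    record = "Rossi Maria, daughter of Giovanni, born 12 March 1881 in Pisa, seamstress."

    prompt = (
        "Extract the named entities from the record below and return JSON with "
        "the keys: name, father, birth_date, birth_place, occupation.\n\n"
        f"Record: {record}"
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",          # the paper also compares against GPT 4
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                   # deterministic output helps evaluation
    )
    print(response.choices[0].message.content)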

Paper Nr: 75
Title:

Assessing Unfairness in GNN-Based Recommender Systems: A Focus on Metrics for Demographic Sub-Groups

Authors:

Nikzad Chizari, Keywan Tajfar and María N. Moreno-García

Abstract: Recommender Systems (RS) have become a central tool for providing personalized suggestions, yet the growing complexity of modern methods, such as Graph Neural Networks (GNNs), has introduced new challenges related to bias and fairness. While these methods excel at capturing intricate relationships between users and items, they often amplify biases present in the data, leading to discriminatory outcomes, especially against groups defined by protected demographic attributes such as gender and age. This study evaluates and measures fairness in GNN-based RS by investigating the extent of unfairness towards various groups and subgroups within these systems. By employing performance metrics like NDCG, this research highlights disparities in recommendation quality across different demographic groups, emphasizing the importance of accurate, group-level measurement. This analysis not only sheds light on how these biases manifest but also lays the groundwork for developing more equitable recommendation systems that ensure fair treatment across all user groups.
Download
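
A hedged sketch of the group-level evaluation idea: compute NDCG@k per user, then average it separately for each demographic group and compare. The synthetic data and attribute values below are placeholders, not the datasets or GNN recommenders studied in the paper.

    # Illustrative group-level NDCG comparison on synthetic data.
    import numpy as np

    def ndcg_at_k(relevances, k=10):
        """NDCG@k for one user's ranked list of binary/graded relevance scores."""
        rel = np.asarray(relevances, dtype=float)[:k]
        if rel.sum() == 0:
            return 0.0
        discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
        dcg = float((rel * discounts).sum())
        idcg = float((np.sort(rel)[::-1] * discounts).sum())
        return dcg / idcg

    # Ranked relevance lists per user (1 = relevant item) plus a sensitive attribute.
    users = [
        {"group": "female", "rels": [1, 0, 1, 0, 0]},
        {"group": "male",   "rels": [1, 1, 1, 0, 0]},
        {"group": "female", "rels": [0, 0, 1, 0, 1]},
        {"group": "male",   "rels": [1, 0, 1, 1, 0]},
    ]

    for group in ("female", "male"):
        scores = [ndcg_at_k(u["rels"]) for u in users if u["group"] == group]
        print(group, round(float(np.mean(scores)), 3))  # gap indicates unfairness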

Paper Nr: 81
Title:

IntelliFrame: A Framework for AI-Driven, Adaptive, and Process-Oriented Student Assessments

Authors:

Asma Hadyaoui and Lilia Cheniti-Belcadhi

Abstract: The rapid integration of generative Artificial Intelligence (AI) into educational environments necessitates the development of innovative assessment methods that can effectively measure student performance in an era of dynamic content creation and problem-solving. This paper introduces "IntelliFrame," a novel AI-driven framework designed to enhance the accuracy and adaptability of student assessments. Leveraging semantic web technologies and a well-defined ontology, IntelliFrame facilitates the creation of adaptive assessment scenarios and real-time formative feedback systems. These systems are capable of evaluating the originality, process, and critical thinking involved in AI-assisted tasks with unprecedented precision. IntelliFrame's architecture integrates a personalized AI chatbot that interacts directly with students, providing tailored assistance and generating content that aligns with course objectives. The framework's ontology-driven design ensures that assessments are not only personalized but also dynamically adapted to reflect the evolving capabilities of generative AI and the student’s cognitive processes. IntelliFrame was tested in a Python programming course with 250 first-year students. The study demonstrated that IntelliFrame improved assessment accuracy by 30%, enhanced critical thinking and problem-solving skills by 25%, and increased student engagement by 35%. These results highlight IntelliFrame’s effectiveness in providing precise, personalized assessments and fostering creativity, setting a new standard for AI-integrated educational assessments.
Download

Paper Nr: 35
Title:

An Approach for Automatic Bidirectional Mapping Between Data Models and RDF-S

Authors:

Aissam Belghiat

Abstract: RDF and RDF-S are the normative languages for describing web resource information in the context of the Semantic Web. Constructing RDF-S descriptions from scratch is a painful task, and deriving them from existing data sources has become an important research problem. Furthermore, updating and evolving established RDF-S documents is another problem that must be taken into account. UML is widely applied to data modeling in many application domains. Building RDF-S from existing UML models is a promising technique that will facilitate the elaboration of RDF-S models; moreover, mapping RDF-S to UML will allow such models to be updated intuitively. Thus, this work proposes an approach for mapping UML to RDF-S and RDF-S to UML. The translation makes data modeled in UML class diagrams available for the Semantic Web, and vice versa. The aim is to facilitate building and evolving RDF-S documents using UML, and vice versa.
Download
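
To make the UML-to-RDF-S direction concrete, the following sketch uses rdflib to map a toy class hierarchy to rdfs:Class, rdfs:subClassOf, and rdfs:domain statements. The namespace and the tiny model are hypothetical; the paper's mapping rules cover considerably more of UML.

    # Hedged sketch of one mapping direction: UML classes -> rdfs:Class,
    # generalizations -> rdfs:subClassOf, attributes -> properties with rdfs:domain.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/model#")   # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # A tiny UML-like model: (class, superclass or None, attributes)
    uml_classes = [
        ("Person", None, ["name"]),
        ("Customer", "Person", ["customerId"]),
    ]

    for name, parent, attributes in uml_classes:
        cls = EX[name]
        g.add((cls, RDF.type, RDFS.Class))
        g.add((cls, RDFS.label, Literal(name)))
        if parent:
            g.add((cls, RDFS.subClassOf, EX[parent]))
        for attr in attributes:
            prop = EX[attr]
            g.add((prop, RDF.type, RDF.Property))
            g.add((prop, RDFS.domain, cls))

    print(g.serialize(format="turtle"))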

Paper Nr: 36
Title:

A Model Driven-Based Approach for Converting Feature Models of Software Product Lines to OWL Ontologies

Authors:

Aissam Belghiat, Mohamed Boubakir, Ghada Chouikh and Djamila Kemmache

Abstract: Software product line engineering has gained recognition as a promising approach to developing families of software systems. A Software Product Line (SPL) is a set of software products that share and support a set of features. The variabilities and commonalities of the features of a software product line are modeled by feature models (FM). The lack of formal semantics for these models has hindered their analysis and verification, and consequently their correction and evolution. The use of Web Ontology Language (OWL) ontologies can solve this problem: they allow the interrelationships between features in an FM to be captured accurately, and these models to be subsequently analyzed and verified using the formal semantics of OWL, which is based on description logic. In this paper, we propose to convert feature models into OWL ontologies using Model Driven Engineering (MDE). We first propose a set of semantic rules to enable the transformation; meta-modeling and model transformation are then used to implement and automate these rules, relying on specialized MDE tools (e.g., Acceleo, the Eclipse Modeling Framework). The Protégé tool is used for reasoning on the generated OWL ontology. A case study is given to show the effectiveness of our approach.
Download
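
The authors implement their rules with MDE tooling (Acceleo, EMF). Purely as an illustration of what such rules can produce, the sketch below uses the Python owlready2 library to encode a mandatory child feature as an existential restriction on its parent class. The feature names and the specific axiom are assumptions, not the paper's rule set.

    # Hedged sketch: encoding a mandatory feature of a feature model as an
    # OWL existential restriction, using owlready2.
    import types
    from owlready2 import Thing, get_ontology

    onto = get_ontology("http://example.org/fm.owl")  # hypothetical IRI

    with onto:
        class Feature(Thing):
            pass

        class hasFeature(Feature >> Feature):  # object property between features
            pass

        # Feature model fragment: MobilePhone has a mandatory Screen feature
        MobilePhone = types.new_class("MobilePhone", (Feature,))
        Screen = types.new_class("Screen", (Feature,))
        MobilePhone.is_a.append(hasFeature.some(Screen))  # mandatory => existential

    onto.save(file="fm.owl", format="rdfxml")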

Paper Nr: 83
Title:

ODKAR: Ontology-Based Dynamic Knowledge Acquisition and Automated Reasoning Using NLP, OWL, and SWRL

Authors:

Claire Ponciano, Markus Schaffert and Jean-Jacques Ponciano

Abstract: This paper introduces a novel approach to dynamic ontology creation, leveraging Natural Language Processing (NLP) to automatically generate ontologies from textual descriptions and transform them into OWL (Web Ontology Language) and SWRL (Semantic Web Rule Language) formats. Unlike traditional manual ontology engineering, our system automates the extraction of structured knowledge from text, facilitating the development of complex ontological models in domains such as fitness and nutrition. The system supports automated reasoning, ensuring logical consistency and the inference of new facts based on rules. We evaluate the performance of our approach by comparing the ontologies generated from text with those created by a Semantic Web technologies expert and by ChatGPT. In a case study focused on personalized fitness planning, the system effectively models intricate relationships between exercise routines, nutritional requirements, and progression principles such as overload and time under tension. Results demonstrate that the proposed approach generates competitive, logically sound ontologies that capture complex constraints.
Download
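
As a hedged illustration of the OWL-plus-SWRL target representation (not the ODKAR pipeline itself), the sketch below uses owlready2 to declare a small fitness ontology and attach a SWRL rule that a reasoner could use to infer new facts. All class, property, and threshold choices are invented for the example.

    # Hedged sketch of an OWL ontology with a SWRL rule in the fitness domain.
    from owlready2 import Imp, Thing, get_ontology

    onto = get_ontology("http://example.org/fitness.owl")  # hypothetical IRI

    with onto:
        class Exercise(Thing):
            pass

        class HighIntensityExercise(Exercise):
            pass

        class hasHeartRate(Exercise >> int):  # data property: exercise -> heart rate
            pass

        # SWRL rule: any exercise above 160 bpm is classified as high intensity
        rule = Imp()
        rule.set_as_rule(
            "Exercise(?e), hasHeartRate(?e, ?hr), greaterThan(?hr, 160) "
            "-> HighIntensityExercise(?e)"
        )

        sprint = Exercise("sprint_interval")
        sprint.hasHeartRate = [172]

    # A reasoner would now infer HighIntensityExercise(sprint_interval);
    # with a local Java/Pellet install this is:
    #   from owlready2 import sync_reasoner_pellet
    #   sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
    onto.save(file="fitness.owl", format="rdfxml")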