Research Outputs

  • Publication
    Improving the affective analysis in texts. Automatic method to detect affective intensity in lexicons based on Plutchik’s wheel of emotions
    (Emerald Publishing Limited, 2019)
    Molina-Beltrán, Carlos; Segura-Navarrete, Alejandra; Vidal-Castro, Christian; Rubio-Manzano, Clemente
    Purpose: This paper proposes a method for automatically labelling an affective lexicon with intensity values by using the WordNet Similarity (WS) software package, with the purpose of improving the results of an affective analysis process, which is relevant to interpreting the textual information available in social networks. The hypothesis states that it is possible to improve affective analysis by using a lexicon enriched with intensity values obtained from similarity metrics. Encouraging results were obtained when an affective analysis based on the labelled lexicon was compared with one based on a lexicon without intensity values.
    Design/methodology/approach: The authors propose a method for the automatic extraction of the affective intensity values of words using the similarity metrics implemented in WS. First, intensity values were calculated for words having an affective root in WordNet. Then, to evaluate the effectiveness of the proposal, the results of the affective analysis with and without affective intensity values were compared.
    Findings: The main contribution of this research is a method for the automatic extraction of the intensity values of affective words used to enrich a lexicon, in contrast to a manual labelling process. The results obtained from the affective analysis with the new lexicon are encouraging, as they provide better performance than those achieved using a lexicon without affective intensity values.
    Research limitations/implications: Given the restrictions for calculating the similarity between two words, the lexicon labelled with intensity values is a subset of the original lexicon, which means that a large proportion of the words in the corpus are not labelled in the new lexicon.
    Practical implications: The practical implications of this work include providing tools to improve the analysis of the feelings of the users of social networks. In particular, it is of interest to provide an affective lexicon that improves attempts to solve the problems of a digital society, such as the detection of cyberbullying. By achieving greater precision in the detection of emotions, it is possible to detect the roles of participants in a situation of cyberbullying, for example, the bully and the victim. Other problems in which the application of affective lexicons is important are the detection of aggressiveness against women or gender violence and the detection of depressive states in young people and children.
    Social implications: This work aims to provide an affective lexicon that helps address the problems of a digital society described above, such as cyberbullying, gender violence and the detection of depressive states in young people and children.
    Originality/value: The originality of the research lies in the proposed method for automatically labelling the words of an affective lexicon with intensity values by using WS. To date, lexicons labelled with intensity values have been constructed using the opinions of experts, but that method is more expensive and requires more time than other existing methods. The new method developed herein is applicable to larger lexicons, requires less time and facilitates automatic updating.
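    The paper's intensity labelling relies on the WordNet::Similarity package; as a rough illustration of the underlying idea only, the Python sketch below scores a word's affective intensity as its WordNet similarity to an emotion's root word. The root words, the Wu-Palmer measure and the use of NLTK are stand-in assumptions, not the paper's exact setup.

      # Illustrative sketch only: approximates the idea of scoring affective
      # intensity as WordNet similarity to an emotion's root word. Root words
      # and the Wu-Palmer measure are assumptions, not the paper's configuration.
      # Requires nltk with the WordNet corpus downloaded (nltk.download("wordnet")).
      from nltk.corpus import wordnet as wn

      EMOTION_ROOTS = {"joy": "joy", "fear": "fear", "anger": "anger", "sadness": "sadness"}

      def intensity(word: str, emotion: str) -> float:
          """Best Wu-Palmer similarity between `word` and the emotion's root word."""
          root_synsets = wn.synsets(EMOTION_ROOTS[emotion], pos=wn.NOUN)
          word_synsets = wn.synsets(word, pos=wn.NOUN)
          scores = [
              s1.wup_similarity(s2) or 0.0
              for s1 in word_synsets
              for s2 in root_synsets
          ]
          return max(scores, default=0.0)

      if __name__ == "__main__":
          for w in ["terror", "worry", "calm"]:
              print(w, round(intensity(w, "fear"), 3))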
  • Publication
    What do our Children read about? Affect analysis of Chilean school texts
    (Bahri Publications, 2015)
    Fernández, Jorge; Segura, Alejandra; Vidal-Castro, Christian; Rubio-Manzano, Clemente
    We present a study of the affective character of 1st to 8th year Chilean school texts, to which we applied lexicon-based affect analysis techniques to identify 6 basic emotions (anger, sadness, fear, disgust, surprise and happiness). First, we generated a corpus of 525 documents, 18,176 paragraphs and 137,516 words. Then, using the frequency of affective words, we built a classifier based on Emotion Word Density to detect emotions in the texts. Our results show that the predominant affective states are happiness (58%), sadness (16%) and fear (12%). The 6 basic emotions are present in most literary forms with uniform relative density, except for songs, where anger is absent. Classifier performance was validated by comparing its results against the opinions of experts in the field, showing above-average conformity (accuracy = 63%), above-average predictive capacity (precision = 69%) and good sensitivity (recall = 80%, f-measure = 93%).
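    As an illustration of the kind of lexicon-based Emotion Word Density classifier described above, the sketch below computes, for each emotion, the density of its lexicon words in a text and picks the densest emotion. The toy lexicon is a placeholder, not the lexicon used in the study.

      # Minimal sketch of an Emotion Word Density classifier: the density of an
      # emotion is the count of lexicon words for that emotion divided by the
      # total number of tokens. The toy lexicon below is an illustrative
      # placeholder, not the Spanish lexicon used in the paper.
      from collections import Counter

      LEXICON = {
          "happiness": {"happy", "joy", "smile"},
          "sadness": {"sad", "cry", "tears"},
          "fear": {"afraid", "dark", "scream"},
      }

      def emotion_densities(text: str) -> dict:
          tokens = text.lower().split()
          counts = Counter()
          for tok in tokens:
              for emotion, words in LEXICON.items():
                  if tok in words:
                      counts[emotion] += 1
          total = len(tokens) or 1
          return {e: counts[e] / total for e in LEXICON}

      def classify(text: str) -> str:
          densities = emotion_densities(text)
          return max(densities, key=densities.get)

      if __name__ == "__main__":
          print(classify("the children smile and feel joy in the sun"))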
  • Publication
    Towards a holistic model for quality of learning object repositories: A practical application to the indicator of metadata compliance
    (Emerald Publishing, 2017)
    Vidal-Castro, Christian; Segura-Navarrete, Alejandra; Menendez-Dominguez, Victor
    Purpose: This paper addresses the need to ensure the quality of metadata records describing learning resources. We propose improvements to a metadata-quality model, specifically for the compliance sub-feature of the functionality feature. Compliance is defined as the level of adherence of the learning object metadata content to the metadata standard used for its specification. The paper proposes metrics to assess compliance, which are applied to a set of learning objects, showing their applicability and usefulness in activities related to resource management.
    Design/methodology/approach: The methodology considers a first stage of metrics refinement to obtain the indicator for the compliance sub-feature. The next stage is the evaluation of the proposal, where it is determined whether the metrics can be used as a conformity indicator of learning object metadata with respect to a standard (metadata compliance). The usefulness of this indicator in the information retrieval area is approached through an assessment of learning objects in which the quality level of their metadata is correlated with the ranking in which they are retrieved by a repository.
    Findings: This study confirmed that the best results for the metrics of standardization, completeness, congruence, coherence, correctness and understandability, which determine the compliance indicator, were obtained for learning objects whose metadata were better labelled. Moreover, it was found that the learning objects with the highest compliance indicator reach better positions in the ranking when a repository retrieves them through an exact search based on metadata.
    Research limitations/implications: In this study, only one sub-feature of the quality model is detailed, namely compliance with the learning object standard. Another limitation was the size of the set of learning objects used in the experiment.
    Practical implications: This proposal is independent of any metadata standard and can be applied to improve processes associated with the management of learning objects in a repository, such as retrieval and recommendation.
    Originality/value: The originality and value of this proposal lie in considering the quality of learning object metadata from a holistic point of view through six metrics. These metrics quantify both technical and pedagogical aspects through automatic evaluation supported by experts. In addition, the applicability of the indicator in retrieval systems is shown, for example by incorporating it as an additional criterion in learning object ranking.
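    As a rough illustration of one of the six metrics, the sketch below computes a simple completeness score as the fraction of expected metadata elements that are filled in a record. The element list is an assumed subset of IEEE LOM fields and does not reproduce the paper's metric definitions.

      # Rough sketch of a completeness metric: the fraction of expected metadata
      # elements that are actually filled in a record. The element list is an
      # illustrative subset of IEEE LOM fields; the paper combines six metrics
      # (standardization, completeness, congruence, coherence, correctness and
      # understandability) into the compliance indicator.
      EXPECTED_ELEMENTS = [
          "general.title", "general.language", "general.description",
          "technical.format", "educational.learningResourceType", "rights.copyright",
      ]

      def completeness(metadata: dict) -> float:
          filled = sum(1 for e in EXPECTED_ELEMENTS if metadata.get(e, "").strip())
          return filled / len(EXPECTED_ELEMENTS)

      if __name__ == "__main__":
          record = {
              "general.title": "Sorting algorithms",
              "general.language": "es",
              "technical.format": "text/html",
          }
          print(f"completeness = {completeness(record):.2f}")  # 0.50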
  • Publication
    The role of WordNet similarity in the affective analysis pipeline
    (Instituto Politécnico Nacional, 2019)
    Segura-Navarrete, Alejandra; Vidal-Castro, Christian; Rubio-Manzano, Clemente
    Sentiment Analysis (SA) is a useful and important discipline in Computer Science, as it provides a knowledge base about people's opinions on a topic. This knowledge is used to improve decision-making processes. One approach to achieve this is based on the use of lexical knowledge structures. In particular, our aim is to enrich an affective lexicon through the analysis of the similarity relationship between words. The hypothesis of this work states that the similarities of the words belonging to an affective category, with respect to any other word, behave in a homogeneous way within each affective category. The experimental results show that words of the same affective category have a homogeneous similarity with an antonym, and that the similarities of these words with any of their antonyms have low variability. The novelty of this paper is that it lays the foundations of a mechanism for automatically incorporating intensity into an affective lexicon.
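    The homogeneity hypothesis can be illustrated with a small sketch: compute the similarity of every word in an affective category against a reference antonym and check how spread out the values are. The word lists and the Wu-Palmer measure below are illustrative assumptions, not the paper's experimental setup.

      # Sketch of the homogeneity check suggested by the hypothesis: similarities
      # between words of one affective category and a reference antonym should
      # show low variability. Word lists and the Wu-Palmer measure are
      # illustrative assumptions. Requires nltk with the WordNet corpus.
      from statistics import mean, stdev
      from nltk.corpus import wordnet as wn

      def best_similarity(word: str, other: str) -> float:
          pairs = [
              (s1, s2)
              for s1 in wn.synsets(word, pos=wn.NOUN)
              for s2 in wn.synsets(other, pos=wn.NOUN)
          ]
          return max((s1.wup_similarity(s2) or 0.0 for s1, s2 in pairs), default=0.0)

      if __name__ == "__main__":
          joy_words = ["happiness", "delight", "cheerfulness"]
          antonym = "sadness"
          sims = [best_similarity(w, antonym) for w in joy_words]
          print("similarities:", [round(s, 3) for s in sims])
          print("mean =", round(mean(sims), 3), "stdev =", round(stdev(sims), 3))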
  • Publication
    How useful TutorBot+ is for teaching and learning in programming courses: A preliminary study
    (IEEE, 2023)
    Gómez-Meneses, Pedro; Maldonado-Montiel, Diego; Segura-Navarrete, Alejandra; Vidal-Castro, Christian
    Objective: The objective of this paper is to present preliminary work on the development of an EduChatBot tool and the measurement of the effects of its use, aimed at providing effective feedback to programming course students. This bot, hereinafter referred to as tutorBot+, was built on ChatGPT 3.5 and is tasked with assisting and providing timely positive feedback to students in computer science programming courses at UCSC.
    Methods/Analysis: The proposed method consists of four stages: (1) immersion in the topics of feedback and Large Language Models (LLMs); (2) development of tutorBot+ prototypes in both non-conversational and conversational versions; (3) experiment design; and (4) intervention and evaluation. The first stage involves a literature review on feedback and learning, the use of intelligent tutors in the educational context, and the topics of LLMs and ChatGPT. The second and third stages detail the development of tutorBot+ in its two versions, and the final stage lays the foundation for a quasi-experimental study involving students in the curricular activities of Programming Workshop and Database Workshop, focusing on learning outcomes related to the development of computational thinking skills, and facilitating the use and measurement of the tool's effects.
    Findings: The preliminary results of this work are promising, as two functional prototypes of tutorBot+ have been developed, covering the non-conversational and conversational versions. In addition, there is ongoing exploration into the possibility of creating a domain-specific model based on pretrained models for programming, integrating tutorBot+ with other platforms, and designing an experiment to measure student performance, motivation, and the tool's effectiveness.
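    The paper does not publish tutorBot+'s implementation; the sketch below only illustrates the general pattern of a feedback bot built on a GPT-3.5 chat model, where a fixed system prompt steers the model toward positive, formative feedback on student code. It uses the OpenAI Python SDK, and the prompt wording and model name are assumptions.

      # Minimal sketch of the general pattern behind a feedback bot built on a
      # GPT-3.5 chat model: a fixed system prompt steers the model toward
      # positive, formative feedback on a student's code. Prompt wording and
      # model name are assumptions; this is not the tutorBot+ implementation.
      from openai import OpenAI

      client = OpenAI()  # expects OPENAI_API_KEY in the environment

      SYSTEM_PROMPT = (
          "You are a programming tutor. Give brief, positive, formative feedback "
          "on the student's code: point out what works, then suggest one concrete "
          "improvement. Do not write the solution for the student."
      )

      def feedback(student_code: str, task_description: str) -> str:
          response = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": f"Task: {task_description}\n\nCode:\n{student_code}"},
              ],
          )
          return response.choices[0].message.content

      if __name__ == "__main__":
          print(feedback("def suma(a, b): return a - b",
                         "Write a function that adds two numbers."))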
  • Publication
    Fuzzy linguistic descriptions for execution trace comprehension and their application in an introductory course in artificial intelligence
    (IOS Press, 2019)
    Rubio-Manzano, Clemente; Lermanda-Senoceaín, Tomás; Vidal-Castro, Christian; Segura-Navarrete, Alejandra
    Execution trace comprehension is an important topic in computer science, since it allows software engineers to get a better understanding of system behavior. However, traces are usually very large and hence difficult to interpret. In parallel, execution trace comprehension is also an important topic in algorithm-learning courses, since it allows students to get a better understanding of algorithm behavior. Therefore, there is a need to investigate ways to help students (and teachers) find and understand the important information conveyed in a trace despite the trace being massive. In this paper, we propose a new approach to execution trace comprehension based on fuzzy linguistic descriptions. A new methodology and a data-driven architecture based on the linguistic modelling of complex phenomena are presented and explained. In particular, they are applied to automatically generate linguistic reports from execution traces produced by algorithms implemented by the students of an introductory course on artificial intelligence. To the best of our knowledge, this is the first time that linguistic modelling of complex phenomena has been applied to execution trace comprehension. Throughout the article, it is shown how this kind of technology can be employed as a useful computer-assisted assessment tool that provides students and teachers with technical, immediate and personalised feedback about the algorithms that are being studied and implemented. At the same time, it provides two useful applications: it is an indispensable pedagogical resource for improving the comprehension of execution traces, and it plays an important role in the process of measuring and evaluating the “believability” of the agents implemented. To show and explore the possibilities of this new technology, a web platform has been designed and implemented by one of the authors and incorporated into the assessment process of an introductory artificial intelligence course. Finally, an empirical evaluation was performed to confirm our hypothesis, and a survey directed to the students was carried out to measure the quality of the learning-teaching process when using this methodology enriched with fuzzy linguistic descriptions.
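    As a toy illustration of a fuzzy linguistic description, the sketch below maps a numeric trace metric (an assumed step count) through trapezoidal membership functions to linguistic labels and fills a sentence template with the best-fitting label. The membership function shapes, labels and template are illustrative assumptions, not the system described in the paper.

      # Toy sketch of a fuzzy linguistic description of an execution trace: a
      # numeric trace metric (here, step count) is mapped through trapezoidal
      # membership functions to labels such as "few" or "many", and the label
      # with the highest membership degree fills a sentence template. Shapes,
      # labels and the template are illustrative assumptions.
      def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
          """Trapezoidal membership function with support [a, d] and core [b, c]."""
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          return (x - a) / (b - a) if x < b else (d - x) / (d - c)

      STEP_LABELS = {
          "very few": lambda n: trapezoid(n, -1, 0, 10, 30),
          "few":      lambda n: trapezoid(n, 10, 30, 80, 150),
          "many":     lambda n: trapezoid(n, 80, 150, 500, 1000),
          "too many": lambda n: trapezoid(n, 500, 1000, 10**6, 10**6 + 1),
      }

      def describe_trace(num_steps: int) -> str:
          label = max(STEP_LABELS, key=lambda lbl: STEP_LABELS[lbl](num_steps))
          return f"The algorithm needed {label} steps ({num_steps}) to finish."

      if __name__ == "__main__":
          for n in (5, 80, 2000):
              print(describe_trace(n))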