Accepted Papers
ParaFusion: A Large-Scale LLM-Driven English Paraphrase Dataset Infused with High-Quality Lexical and Syntactic Diversity

Lasal Jayawardena and Prasan Yapa, School of Computing, Informatics Institute of Technology, Colombo 00600, Sri Lanka

ABSTRACT

Paraphrase generation, a pivotal task in natural language processing (NLP), has traditionally relied on human-annotated paraphrase pairs, a method that is both cost-inefficient and difficult to scale. Automatically annotated paraphrase pairs, while more efficient, often lack syntactic and lexical diversity, resulting in paraphrases that closely resemble the source sentences. Moreover, existing datasets often contain hate speech and noise, including the unintentional inclusion of non-English languages. This research introduces ParaFusion, a large-scale, high-quality English paraphrase dataset developed using Large Language Models (LLMs) to address these challenges. ParaFusion augments existing datasets with high-quality data, significantly enhancing both lexical and syntactic diversity while maintaining semantic similarity. It also mitigates the presence of hate speech and reduces noise, ensuring a cleaner and more focused English dataset. The paper presents one of the most comprehensive evaluations to date, employing a range of evaluation metrics to assess different aspects of dataset quality. The results underscore the potential of ParaFusion as a valuable resource for improving NLP applications.

KEYWORDS

Paraphrase Generation, Natural Language Generation, Deep Learning, Large Language Models, Data-Centric AI.
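
As a rough illustration of how lexical divergence between a source sentence and its paraphrase can be quantified, the sketch below computes a simple token-level Jaccard-based score in Python. This is only an assumed proxy metric, not the evaluation suite used for ParaFusion.

```python
# Minimal sketch: quantifying lexical diversity of a paraphrase pair.
# Illustrative proxy metric only, not the ParaFusion evaluation suite.

def lexical_diversity(source: str, paraphrase: str) -> float:
    """Return 1 - Jaccard overlap of lowercased token sets (0 = identical vocabulary)."""
    src_tokens = set(source.lower().split())
    par_tokens = set(paraphrase.lower().split())
    if not src_tokens and not par_tokens:
        return 0.0
    overlap = len(src_tokens & par_tokens) / len(src_tokens | par_tokens)
    return 1.0 - overlap

if __name__ == "__main__":
    pair = ("The meeting was postponed because of the storm.",
            "Due to the storm, the meeting had to be delayed.")
    print(f"lexical diversity: {lexical_diversity(*pair):.2f}")
```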


Analysis of the Impact of Dataset Quality on Task-oriented Dialogue Management

Miguel Ángel Medina-Ramírez, Cayetano Guerra-Artal, and Mario Hernández-Tejera, University Institute of Intelligent Systems and Numeric Applications in Engineering, University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain

ABSTRACT

Task-oriented dialogue systems (TODS) have become crucial for users to interact with machines and computers using natural language. One of their key components is the dialogue manager, which guides the conversation towards the user's goal by providing the best possible response. Previous works have proposed rule-based systems (RBS), reinforcement learning (RL), and supervised learning (SL) as solutions for correct dialogue management, in other words, selecting the best response given the user's input. This work explores the impact of dataset quality on the performance of dialogue managers. We delve into potential errors in popular datasets, such as MultiWOZ 2.1 and SGD. For our investigation, we developed a synthetic dialogue generator to regulate the type and magnitude of errors introduced. Our findings suggest that dataset inaccuracies, like mislabeling, might play a significant role in the challenges faced in dialogue management. The code for our experiments is available in this repository: https://github.com/miguel-kjh/Improving-Dialogue-Management.

KEYWORDS

Dialog Systems, dialogue management, dataset quality, supervised learning.
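
The abstract describes a synthetic dialogue generator that controls the type and magnitude of injected errors. The Python sketch below illustrates one such error type, mislabeling, on a hypothetical (state, action) dataset; it is not the authors' generator.

```python
import random

# Illustrative sketch: corrupt a fraction of action labels in a dialogue-management
# dataset to study how mislabeling affects a supervised dialogue manager.
# The (state, action) representation and action names here are hypothetical.

ACTIONS = ["request_area", "inform_price", "offer_restaurant", "goodbye"]

def inject_mislabeling(dataset, error_rate, seed=0):
    rng = random.Random(seed)
    corrupted = []
    for state, action in dataset:
        if rng.random() < error_rate:
            # replace the correct action with a randomly chosen wrong one
            wrong = rng.choice([a for a in ACTIONS if a != action])
            corrupted.append((state, wrong))
        else:
            corrupted.append((state, action))
    return corrupted

clean = [({"intent": "find_restaurant"}, "request_area"),
         ({"intent": "ask_price"}, "inform_price")]
print(inject_mislabeling(clean, error_rate=0.5))
```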


An Efficient Domain-independent Approach for Supervised Keyphrase Extraction and Ranking

Sriraghavendra Ramaswamy, Amazon Development Center (India) Private Limited, Chennai, India

ABSTRACT

We present a supervised learning approach for the automatic extraction of keyphrases from single documents. Our solution uses simple-to-compute statistical and positional features of candidate phrases and does not rely on any external knowledge base or on pre-trained language models or word embeddings. The ranking component of our proposed solution is a fairly lightweight ensemble model. Evaluation on benchmark datasets shows that our approach achieves significantly higher accuracy than several state-of-the-art baseline models, including all of the deep learning-based unsupervised models it was compared with, and is competitive with some supervised deep learning-based models as well. Despite the supervised nature of our solution, the fact that it does not rely on any corpus of “golden” keywords or any external knowledge corpus means that our solution bears the advantages of unsupervised solutions to a fair extent.

KEYWORDS

Keyphrase extraction, Supervised learning, Partial Ranking, Domain-agnostic solution, Non-DNN-based model.
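
A minimal sketch of the kind of simple statistical and positional features the abstract mentions is given below, assuming Python; the paper's actual feature set and ensemble ranker may differ.

```python
# Illustrative sketch of simple statistical and positional features for a candidate
# phrase; the paper's exact features and lightweight ensemble ranker may differ.

def phrase_features(document: str, candidate: str) -> dict:
    doc = document.lower()
    cand = candidate.lower()
    first = doc.find(cand)  # -1 if the candidate never occurs
    return {
        "frequency": doc.count(cand),                 # raw occurrence count
        "first_position": first / max(len(doc), 1),   # relative offset of first mention
        "num_words": len(cand.split()),               # phrase length in tokens
    }

doc = "Keyphrase extraction selects salient phrases. Keyphrase extraction is useful."
print(phrase_features(doc, "keyphrase extraction"))
```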


SAS-BERT: BERT for Sales and Support Conversation Classification Using a Novel Multi-Objective Pre-Training Framework

Aanchal Varma and Chetan Bhat, Freshworks, India

ABSTRACT

The recent emergence of large language models (LLMs), particularly GPT variants, has created a lot of buzz due to their state-of-the-art performance results. However, for highly domain-specific datasets such as sales and support conversations, most LLMs do not exhibit high performance out of the box. Thus, fine-tuning is needed, which many budget-constrained businesses cannot afford. Also, these models have very slow inference times, making them unsuitable for many real-time applications. Lack of interpretability and access to probabilistic inferences is another problem. For such reasons, BERT-based models are preferred. In this paper, we present SAS-BERT, a BERT-based architecture for sales and support conversations. Through novel pre-training enhancements and GPT-3.5-led data augmentation, we demonstrate an improvement in BERT performance for highly domain-specific datasets that is comparable with fine-tuned LLMs. Our architecture has 98.5% fewer parameters compared to the largest LLM considered, trains in under 72 hours, and can be hosted on a single large CPU for inference.

KEYWORDS

BERT, LLM, Text Classification, Domain pre-training, NLP applications.


Domain Adaptation Regularized LayoutLM with Automatic Domain Discovery from Topic Modelling

Chen Lin, Piush Kumar Singh, Yourong Xu, Eitan Lees, Rachna Saxena, Sasidhar Donaparthi, and Hui Su, Fidelity Investments, 245 Summer Street, Boston, MA 02210, USA

ABSTRACT

In this paper, we propose using domain adaptation to improve the generalizability and performance of LayoutLM, a pre-trained language model that incorporates the layout information of a document image. Our approach uses topic modelling to automatically discover the underlying domains in a document image dataset where domain information is unknown. We evaluate our approach on the challenging RVL-CDIP dataset and demonstrate that it significantly improves the performance of LayoutLM on this dataset. Our approach can be applied to other NLP models to improve their generalization capabilities, making them more applicable in real-world scenarios, where data is often collected from a variety of domains.

KEYWORDS

LayoutLM, Domain Adaptation, Automatic Domain Discovery, Topic Modelling, RVL-CDIP.
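
A minimal sketch of automatic domain discovery via topic modelling is shown below: LDA topics fitted on document text are used as pseudo-domain labels. It assumes scikit-learn and toy OCR strings, and is not the authors' pipeline or its integration with LayoutLM.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative sketch: assign each document a pseudo-domain by taking the argmax
# topic from an LDA model fitted on (toy) OCR text. The authors' actual pipeline,
# and how the discovered domains regularize LayoutLM, may differ.

docs = [
    "invoice total amount due payment terms",
    "resume experience education skills references",
    "invoice billing address purchase order number",
    "curriculum vitae employment history degree",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
pseudo_domains = lda.transform(counts).argmax(axis=1)
print(pseudo_domains)  # discovered pseudo-domain id per document, e.g. [0 1 0 1]
```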


A Synergistic Approach to Wildfire Prevention and Management Using AI, Machine Learning, and 5G Technology in the United States

Okoro, C. Stanley, Lopez Alexander, Unuriode, O. Austine, Department of Computer Science, Austin Peay State University, Clarksville, USA

ABSTRACT

In recent years, wildfires have emerged as a global environmental crisis, causing significant damage to ecosystems and contributing to climate change. Wildfire management methods involve prevention, response, and recovery efforts. Despite advancements in detection methods, the increasing frequency of wildfires necessitates innovative solutions for early detection and efficient management. This study explores proactive approaches to detecting and managing wildfires in the United States by leveraging Artificial Intelligence (AI), Machine Learning (ML), and 5G technology. The specific objectives of this research cover proactive detection and prevention of wildfires using advanced technology; active monitoring and mapping with remote sensing and signaling leveraging 5G technology; and advanced response mechanisms to wildfires using drones and IoT devices. This study was based on secondary data collected from government databases and analyzed using descriptive statistics. In addition, past publications were reviewed through content analysis, and narrative synthesis was used to present the observations from various studies. The results showed that developing new technology presents an opportunity to detect and manage wildfires proactively. This would save many lives and prevent the huge economic losses attributed to wildfire outbreaks and spread. Advanced technology can be used in several ways to help in the proactive detection and management of wildfires. This includes the use of AI-enabled remote sensing and signaling devices and leveraging 5G technology for active monitoring and mapping of wildfires. In addition, super-intelligent drones and IoT devices can be used for safer responses to wildfires. These form the core of the recommendations to fire management agencies and the government.

KEYWORDS

Wildfires, Artificial Intelligence (AI), Machine Learning (ML), 5G technology, remote sensing, drones, and IoT devices.


AI’s Impact on Labor Markets: Analyzing Job Displacement, Creation, and Skill Changes in the United States of America

Unuriode, O. Austine, Okoro, C. Stanley, Afolabi, T. Osariemen, Durojaiye, M. Olalekan, Lopez Alexander, Yusuf, Y. Babatunde, Akinwande, J. Mayowa, Department of Computer Science, Austin Peay State University, Clarksville, USA

ABSTRACT

This study delves into the implications of AI adoption for the labor market. As artificial intelligence (AI) continues to transform industries, it presents a dual impact: job displacement and job creation. AI-driven automation is taking over routine and repetitive tasks, which can lead to the displacement of certain roles. However, AI also creates new job opportunities, particularly in AI development and related fields. In this study, we show AI's influence on human-performed tasks. The negative relationship between AI influence and tasks performed by humans shows that AI indeed has a notable and statistically significant adverse impact on human-performed tasks. We found that as AI technology advances and becomes more prevalent, certain tasks and roles traditionally carried out by humans are being automated or replaced by machines. We also examine the relationship between AI models and human-performed tasks. It was found that AI models exhibit a substantial and statistically significant positive relationship with tasks performed by humans. Our findings suggest a more optimistic outlook for the labor market, in which, rather than displacing jobs and workers, AI technologies have the potential to enhance their capabilities and create new opportunities.

KEYWORDS

Artificial Intelligence, Automation, labor market, machine learning.


Data Analysis on Credit Card Debt: Rate of Consumption and Impact on Individuals and the US Economy

Mayowa Akinwande, Alexander Lopez, Tobi Yusuf, Austine Unuriode, Babatunde Yusuf, Toyyibat Yussuph and Stanley Okoro, Department of Computer Science, Austin Peay State University, USA

ABSTRACT

This paper provides a comprehensive examination of the evolution of credit cards in the United States, tracing their historical development, causes, consequences, and impact on both individuals and the economy. It delves into the transformation of credit cards from specialized merchant cards to ubiquitous financial tools, driven by legal changes like the Marquette decision. Credit card debt has emerged as a significant financial challenge for many Americans due to economic factors, consumerism, high healthcare costs, and financial illiteracy. The consequences of this debt on individuals are extensive, affecting their financial well-being, credit scores, savings, and even their physical and mental health. On a larger scale, credit cards stimulate consumer spending, drive e-commerce growth, and generate revenue for financial institutions, but they can also contribute to economic instability if not managed responsibly. The paper emphasizes various strategies to prevent and manage credit card debt, including financial education, budgeting, responsible credit card use, and professional counselling. Empirical studies support the relationship between credit card debt and factors such as financial literacy and consumer behavior. Regression analysis reveals that personal consumption and GDP positively impact credit card debt, indicating that responsible management is essential. The paper offers comprehensive recommendations for addressing credit card debt challenges and maximizing the benefits of credit card usage, encompassing financial education, policy reforms, and public awareness campaigns. These recommendations aim to transform credit cards into tools that empower individuals financially and contribute to economic stability, rather than sources of financial stress.

KEYWORDS

Debt, Financial literacy, financial well-being, Economic stability, Credit cards.
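
As a hedged illustration of the regression described in the abstract (credit card debt on personal consumption and GDP), the sketch below fits an ordinary least squares model with NumPy on purely hypothetical numbers; it does not reproduce the paper's data or results.

```python
import numpy as np

# Illustrative OLS sketch of the regression the abstract refers to:
# credit card debt regressed on personal consumption and GDP.
# All numbers below are hypothetical placeholders, not the paper's data.

consumption = np.array([11.0, 11.5, 12.1, 12.8, 13.4])   # e.g. trillions USD
gdp         = np.array([19.0, 19.6, 20.5, 21.4, 22.0])
debt        = np.array([0.83, 0.87, 0.93, 0.99, 1.03])

X = np.column_stack([np.ones_like(gdp), consumption, gdp])  # intercept + regressors
coef, *_ = np.linalg.lstsq(X, debt, rcond=None)
print("intercept, beta_consumption, beta_gdp =", coef)
```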


Handling Nominals, Inverse Roles and Number Restrictions Using Algebraic Reasoning

Humaira Farid and Volker Haarslev, Concordia University, Montreal, Canada

ABSTRACT

This paper presents a novel SHOIQ tableau calculus which incorporates algebraic reasoning for deciding ontology consistency. Numerical restrictions imposed by nominals and qualified number restrictions are encoded into a set of linear inequalities. Column generation and branch-and-price algorithms are used to solve these inequalities. Our preliminary experiments indicate that this calculus is more stable and often performs better on SHOIQ ontologies than standard tableau methods.

KEYWORDS

Description logic, knowledge representation, algebraic reasoning.
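
A simplified, illustrative example of how qualified number restrictions can be encoded as linear inequalities over non-negative counting variables (the full SHOIQ calculus in the paper is considerably more involved):

```latex
% Simplified illustration (not the full calculus): encoding
% (\geq 2\, R.C) and (\leq 1\, R.D) for one individual as linear inequalities,
% where x_P counts the R-fillers lying in partition P of {C, D}.
\begin{align*}
  x_{C \cap D} + x_{C \cap \lnot D} &\ge 2 && \text{at least two } R\text{-fillers in } C\\
  x_{C \cap D} + x_{\lnot C \cap D} &\le 1 && \text{at most one } R\text{-filler in } D\\
  x_P &\ge 0,\ x_P \in \mathbb{Z} && \text{for every partition } P
\end{align*}
```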


Beyond the Hype: A Critical Evaluation of ChatGPT's Capabilities for Mathematical Calculations

Ewuradjoa Mansa Quansah, Saint Petersburg University, Russia

ABSTRACT

As generative AI systems like ChatGPT gain popularity, empirical analysis is essential to evaluate their capabilities. This study investigates ChatGPT's skills in mathematical calculations through controlled experiments. Tests involving counting numbers, finding averages, and demonstrating Excel methods reveal inconsistencies and errors, indicating a lack of true contextual understanding. While ChatGPT can provide solutions, its reasoning shows gaps compared with human cognition. The results provide concrete evidence of deficiencies, complementing conceptual critiques. The findings caution against over-reliance on generative models for critical tasks and highlight the need to advance reasoning and human-AI collaboration. This analysis contributes valuable grounding amidst the hype, urging continued progress so that technologies like ChatGPT can be deployed safely and responsibly. Overall, the empirical results underscore risks and limitations, providing insights to maximize benefits while mitigating the harms of rapidly advancing generative AI.

KEYWORDS

ChatGPT, Artificial intelligence, AI, Generative AI, large language-based models, experiment.
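
A minimal sketch of the kind of controlled arithmetic check described, comparing a model-reported answer against the exact value; `model_answer` below is a placeholder rather than an actual ChatGPT response, and no API call is made.

```python
from statistics import mean

# Illustrative sketch of a controlled averaging test: compare a model-reported
# answer against the exact value. `model_answer` is a hypothetical placeholder
# for whatever the chat model returned; no API is queried here.

numbers = [17, 4, 23, 8, 42, 15, 16, 9, 31, 5]
exact = mean(numbers)

model_answer = 17.2  # hypothetical model response
error = abs(model_answer - exact)
print(f"exact mean = {exact}, model = {model_answer}, abs error = {error}")
```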


Samurai: A Transformation System for Animation Characters Based on GLCIC to Alleviate the Workload of Animation Directors

Takuto Tsukiyama, Sho Ooi and Mutsuo Sano, Graduate School of Osaka Institute of Technology, Osaka, Japan

ABSTRACT

People from various professions are involved in the production of anime, including directors, animation directors, character designers, and voice actors/actresses. The role of the animation director is particularly important: the animation director serves as the unifying force in shaping the animation's style by meticulously reviewing and redrawing the key animations provided by the key animators. The aim of this study is to develop a redrawing system using GLCIC to reduce the workload on animation directors when redrawing original key animation. Specifically, this study devised a system that employs GLCIC to analyze and learn the distinctive drawing styles of individual animation directors from images of their work and subsequently apply those styles to the conversion process. In the experiment, we asked people whose hobby is drawing to try the developed system, and conducted a qualitative evaluation using a questionnaire and a quantitative evaluation using KLM analysis. As a result, we found that there were issues with ease of modification and the UI. Additionally, the KLM analysis revealed that improving the system could reduce work time by a quarter. In the future, we plan to improve the system with the aim of increasing work efficiency.

KEYWORDS

GLCIC, Image Conversion System, Animator Support.


Study of Voice Generation Method Suitable for Characters Based on Human Cognitive Characteristics

Shogo Saito, Sho Ooi, and Mutsuo Sano, Graduate School of Osaka Institute of Technology, Osaka, Japan

ABSTRACT

Previous studies have attempted to estimate existing voices from images of animated characters as a way to generate voices suited to those characters, but without good results. Therefore, in this study, to link voice characteristics to an animated character's image, we devised a method that analyzes which voices are not perceived as incongruous and then sets the ratio of voice training data based on the analyzed tendencies. Specifically, this study prepares multiple voices for one illustration of an anime character, asks subjects to evaluate the voices, and computes ratings based on the evaluation values. In the experiments, we conducted an evaluation using the paired comparison method, calculated the distribution of training data based on the evaluation values obtained, and prepared for the subsequent learning process.

KEYWORDS

Synthesized speech, Voice generation, Character.
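
As an assumed illustration of how paired-comparison judgments could be turned into per-voice preference scores that guide the mix of voice training data, the Python sketch below computes simple win rates; the paper's actual scoring may differ.

```python
from collections import Counter

# Illustrative sketch: convert paired-comparison judgments (which of two candidate
# voices fits a character illustration better) into win rates that could weight the
# proportions of voice training data. Voice names and judgments are hypothetical.

voices = ["voice_A", "voice_B", "voice_C"]
judgments = [("voice_A", "voice_B"), ("voice_A", "voice_C"),
             ("voice_B", "voice_C"), ("voice_A", "voice_B")]  # (winner, loser) pairs

wins = Counter(w for w, _ in judgments)
appearances = Counter()
for w, l in judgments:
    appearances[w] += 1
    appearances[l] += 1

for v in voices:
    rate = wins[v] / appearances[v] if appearances[v] else 0.0
    print(f"{v}: win rate {rate:.2f}")
```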


Generalized Model-tree for Travel Mode Choice Analysis

Shangbo Wang, Department of Civil Engineering, The University of Hong Kong, Hong Kong

ABSTRACT

In recent years, the fields of statistics and machine learning have produced numerous methods, such as neural networks, decision trees, and support vector machines, which have shown their superiority in classification and choice making. However, machine learning models are seen as having a black-box character and lacking economic interpretation, which has limited their popularity among econometricians. In this paper, we propose a generalized model tree which links economic theories of human decision-making, such as underlying discrete choice models, to an ensemble soft-decision tree to improve travel mode forecasting performance, overcoming local and global preference heterogeneity without much sacrifice of interpretability and monotonicity. The generalized model tree is a two-stage model, which first applies soft-splitting and disjunctions-of-conjunctions rules to a Linear Combination of Compensable Attributes (LCCA) or non-compensable attributes to obtain the probability of each alternative being considered by each decision-maker, and then uses compensatory models at each output node to obtain the final prediction. We apply the Markov Chain Monte Carlo (MCMC) algorithm to search for the optimum tree via the derived log-likelihood function and improve the AdaBoost algorithm to overcome global preference heterogeneity. We validate the proposed method using the 2012 California Household Travel Survey dataset (CHTS) and the 2017 National Household Travel Survey dataset (NHTS). We find that by taking into account global preference heterogeneity, the model tree delivers improved prediction results compared to the standard multinomial logit model (MNL) and the multinomial mixed logit model (MML).

KEYWORDS

MNL, MML, Discrete Choice Modeling, MCMC, AdaBoost, Travel Mode Forecast.
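
For reference, the multinomial logit baseline mentioned in the abstract uses the standard choice probability (with compensatory, linear-in-parameters utilities):

```latex
% Standard multinomial logit (MNL) choice probability used as a baseline:
% probability that decision-maker n chooses mode i from choice set C_n,
% with utility V_{ni} = \beta^{\top} x_{ni}.
P_n(i) = \frac{\exp(V_{ni})}{\sum_{j \in C_n} \exp(V_{nj})}
```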


Development of a Monitoring System for Bird's Nests in the Swiftlet House Using LiDAR

Trinh Vu Duc Anh1 and Nguyen Truong Thinh2, 1Department of Electrical Engineering, University of South Florida, Tampa, FL 33620, USA, 2Institute of Intelligent and Interactive Technologies, University of Economics HCMC - UEH, Vietnam

ABSTRACT

In this paper, an algorithm is developed for a robot that combines odometry with LiDAR (Light Detection and Ranging) input to perform localization and 3D mapping inside a swiftlet house model. The positions of the walls in the swiftlet house used for calibrating the LiDAR data are obtained beforehand, and the robot system superimposes the LiDAR map and the swiftlet nests onto the provided global swiftlet house map. The LiDAR generates a 2D map from point clouds with its 360-degree scan angle. Additionally, it is mounted on a 1-DOF arm driven by a stepper motor for height variation, producing a 3D map from 2D layers. Swiftlet nests are detected by differentiating their distinctive shape from the planar concrete wall, recorded by the robot, and monitored until they are harvested. When the robot is powered up, it can localize itself in the global map as long as the calibrating wall is in view in one scan. We evaluate the robot's functionality in a swiftlet cell model with swiftlet nests scanned. We propose a bird-nest-oriented SLAM system that builds a map of birds' nests on the wood frames of swiftlet houses. The robot system takes 3D point clouds reconstructed by a feature-based SLAM system and creates a map of the nests on the house frame. Nests are detected through segmentation and shape estimation. Experiments show that the system reproduces the shape and size of the nests with high accuracy.

KEYWORDS

Intelligent Systems, Recognition, LiDAR, Bird's nest, Monitoring system, SLAM, identified system.
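
A minimal sketch of the 2D-to-3D mapping step described (stacking 360-degree scans taken at different arm heights into a point cloud), assuming Python; the robot's actual SLAM and calibration pipeline is more elaborate.

```python
import math

# Illustrative sketch: stack 2D polar LiDAR scans captured at different arm heights
# into a single 3D point cloud. A simplification of the mapping step described.

def scan_to_points(scan, height):
    """scan: list of (angle_deg, range_m) pairs from one 360-degree sweep."""
    points = []
    for angle_deg, rng in scan:
        theta = math.radians(angle_deg)
        points.append((rng * math.cos(theta), rng * math.sin(theta), height))
    return points

cloud = []
for z, scan in [(0.0, [(0, 1.2), (90, 1.1)]), (0.1, [(0, 1.2), (90, 0.9)])]:
    cloud.extend(scan_to_points(scan, z))  # hypothetical scans at two arm heights
print(cloud)
```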


Enabling Robust Sensor Network Design With Data Processing and Optimization Making Use of Local Beehive Image and Video Files

Ephrance Eunice Namugenyi, David Tugume, Augustine Kigwana and Benjamin Rukundo, Department of Computer Networks, CoCIS, Makerere University, Uganda

ABSTRACT

The dynamic landscape of modern agriculture, characterized by a growing reliance on data-driven methodologies, has created an urgent need for innovative solutions to enhance resource utilization. One key challenge faced by local beehive farmers is efficiently managing large data files collected from sensor networks for optimal beehive management. To address this challenge, we propose a novel paradigm that leverages advanced edge computing techniques to optimize data transmission and storage. Our approach encompasses data compression for images and videos, coupled with data augmentation techniques for numerical data. Specifically, we propose a novel compression algorithm that outperforms traditional methods, such as Bzip2, in terms of compression ratio. We also develop data augmentation techniques to improve the accuracy of machine learning models trained on the collected data. A key aspect of our approach is its ability to operate in resource-constrained environments, such as those typically found in local beehive farms. To achieve this, we carefully explore key parameters such as throughput, delay tolerance, compression rate, and data retransmission. This ensures that our approach can meet the unique requirements of beehive management while minimizing the impact on resources. Overall, our study presents a holistic solution for optimizing data transmission and storage across robust sensor networks for local beehive management. Our approach has the potential to significantly improve the efficiency and effectiveness of beehive management, thereby supporting sustainable agriculture practices.
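
As a hedged illustration of the Bzip2 baseline mentioned in the abstract, the sketch below measures a compression ratio with Python's bz2 module on hypothetical telemetry; the authors' own compression algorithm is not reproduced here.

```python
import bz2

# Illustrative sketch: measure the Bzip2 compression ratio for a payload, as a
# baseline against which a custom compressor could be compared. The sample
# telemetry string is hypothetical.

def bzip2_ratio(payload: bytes) -> float:
    compressed = bz2.compress(payload, compresslevel=9)
    return len(payload) / len(compressed)

sample = b"sensor_reading,temperature,34.2,humidity,61\n" * 1000
print(f"Bzip2 compression ratio: {bzip2_ratio(sample):.1f}x")
```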


Reflection of Federal Data Protection Standards on Cloud Governance

Olga Dye, Justin Heo, and Ebru Celikel Cankaya, Department of Computer Science, University of Texas at Dallas, USA

ABSTRACT

As demand for more storage and processing power increases rapidly, cloud services in general are becoming more ubiquitous and popular. This, in turn, is increasing the need for developing highly sophisticated mechanisms and governance to reduce data breach risks in cloud-based infrastructures. Our research focuses on cloud governance by harmoniously combining multiple data security measures with legislative authority. We present legal aspects aimed at the prevention of data breaches, as well as the technical requirements regarding the implementation of data protection mechanisms. Specifically, we discuss primary authority and technical frameworks addressing least privilege in correlation with its application in Amazon Web Services (AWS), one of the major Cloud Service Providers (CSPs) on the market at present.

KEYWORDS

Least privilege, attribute-based access control, FedRAMP, zero-trust architecture, condition keys.
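
A minimal, hypothetical illustration of least privilege with an AWS IAM condition key, written here as a Python dict; the bucket, prefix, and IP range are placeholders, and the policy is not taken from the paper.

```python
import json

# Minimal illustration of least privilege in AWS IAM: allow only s3:GetObject on a
# single (placeholder) bucket prefix, and only from a (placeholder) corporate IP
# range, enforced through the aws:SourceIp condition key. Not drawn from the paper.

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-reports-bucket/quarterly/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }]
}
print(json.dumps(policy, indent=2))
```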

