ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://ph01.tci-thaijo.org/index.php/ecticit <p style="text-align: justify;">ECTI Transactions on Computer and Information Technology (ECTI-CIT) is published by the Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI) Association, a professional society that aims to promote communication among electrical engineers, computer scientists, and IT professionals. Contributed papers must be original works that advance the state of the art in applications of Computer and Information Technology. Both theoretical contributions (including new techniques, concepts, and analyses) and practical contributions (including system experiments and prototypes, and new applications) are encouraged. The submitted manuscript must not have been copyrighted, published, submitted, or accepted for publication elsewhere. This journal employs <em><strong>a double-blind review</strong></em>, meaning that the identities of the reviewers and the authors are concealed from each other throughout the review process. The manuscript text should not contain any commercial references, such as company names, university names, trademarks, commercial acronyms, or part numbers. 
The manuscript must be between 8 and 12 pages long, formatted in two (2) columns.</p> <p style="text-align: justify;"><strong>Journal Abbreviation</strong>: ECTI-CIT</p> <p style="text-align: justify;"><strong>Since</strong>: 2005</p> <p style="text-align: justify;"><strong>ISSN</strong>: 2286-9131 (Online)</p> <p style="text-align: justify;"><strong>DOI prefix for the ECTI Transactions</strong> is: 10.37936/ (https://doi.org/)</p> <p style="text-align: justify;"><strong>Language</strong>: English</p> <p style="text-align: justify;"><strong>Issues Per Year</strong>: 2 Issues (from 2005-2020), 3 Issues (in 2021), and 4 Issues (from 2022).</p> <p style="text-align: justify;"><strong>Publication Fee</strong>: Free of charge.</p> <p style="text-align: justify;"><strong>Published Articles</strong>: Review Article / Research Article / Invited Article (by editorial invitation only)</p> <p style="text-align: justify;"><strong>Review Method</strong>: Double Blind</p> <p style="text-align: justify;"> </p> en-US chief.editor.cit@gmail.com (Prof.Dr.Prabhas Chongstitvattana and Prof.Dr.Chidchanok Lursinsap) chief.editor.cit@gmail.com (Managing Editor) Wed, 22 May 2024 00:00:00 +0700 OJS 3.3.0.8 http://blogs.law.harvard.edu/tech/rss 60 Stacking Ensemble Learning with Regression Models for Predicting Damage from Terrorist Attacks https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255276 <p>Terrorist attacks can cause unexpectedly enormous damage to lives and property. To prevent and mitigate damage from terrorist activities, governments and related organizations must have suitable measures and efficient tools to cope with terrorist attacks. This work proposes a new method based on stacking ensemble learning and regression for predicting damage from terrorist attacks. First, two-layer stacking classifiers were developed and used to indicate whether a terrorist attack causes deaths, injuries, and property damage. 
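The average area under the precision-recall curve (AUPR) used to evaluate these classifiers can be computed with a short step-wise routine. The following is a generic pure-Python sketch of the metric, not code from the paper:

```python
def average_precision(labels, scores):
    """Step-wise area under the precision-recall curve (AUPR).

    labels: iterable of 0/1 ground-truth values.
    scores: iterable of classifier scores (higher = more positive).
    """
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    total_pos = sum(label for _, label in ranked)
    if total_pos == 0:
        return 0.0
    tp, area, prev_recall = 0, 0.0, 0.0
    # Walk down the ranking; each positive hit adds a rectangle of
    # width (recall gain) and height (precision at that rank).
    for i, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            tp += 1
            precision = tp / i
            recall = tp / total_pos
            area += (recall - prev_recall) * precision
            prev_recall = recall
    return area
```

A perfectly ranked output yields an AUPR of 1.0, so averages such as 0.958 indicate near-perfect ranking of positive cases.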
For fatal and injury attacks, regression models were utilized to forecast the number of deaths and injuries, respectively. In the experiments, the proposed method efficiently classified casualty terrorist attacks with an average area under the precision-recall curve (AUPR) of 0.958. Furthermore, the stacking model predicted property damage attacks with an average AUPR of 0.910. In comparison with existing methods, the proposed method estimates the number of fatalities and injuries most precisely, with the lowest mean absolute errors of 1.22 and 2.32 for fatal and injury attacks, respectively. Given this superior performance, stacking ensemble models with regression can serve as an efficient tool to support emergency prevention and management of terrorist attacks.</p> Thitipong Kawichai Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255276 Sat, 18 May 2024 00:00:00 +0700 BERTopic Analysis of Indonesian Biodiversity Policy on Social Media https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255058 <p>Indonesia, known for its rich biodiversity, faces critical challenges such as habitat degradation and species loss. This study delves into public opinion regarding Indonesian government biodiversity policies by analyzing text data from the X social media platform. Leveraging BERTopic, an advanced topic modeling technique, we uncover nuanced topics related to biodiversity within tweets. Our research uniquely contributes by exploring diverse combinations of BERTopic parameters on Indonesian text, assessing their efficacy through coherence values and manual content evaluation. Notably, our findings highlight the optimal combination of sentence embedding, cluster model, and dimension reduction parameters, with Model 5 demonstrating the highest coherence score of 0.7733. 
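Topic coherence scores such as the 0.7733 above are typically derived from word co-occurrence statistics over the corpus. As an illustration only (not the exact coherence measure used in the study), a document-level NPMI coherence for a topic's top words can be sketched as:

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, documents):
    """Average pairwise NPMI over a topic's top words.

    topic_words: list of words representing one topic.
    documents: list of sets of tokens (one set per document).
    Returns a value in [-1, 1]; higher means more coherent.
    """
    n = len(documents)

    def doc_freq(*words):
        # number of documents containing all the given words
        return sum(1 for d in documents if all(w in d for w in words))

    scores = []
    for w1, w2 in combinations(topic_words, 2):
        joint = doc_freq(w1, w2)
        if joint == 0:
            scores.append(-1.0)  # the pair never co-occurs
            continue
        p12 = joint / n
        p1, p2 = doc_freq(w1) / n, doc_freq(w2) / n
        pmi = math.log(p12 / (p1 * p2))
        scores.append(pmi / -math.log(p12))  # normalize PMI to [-1, 1]
    return sum(scores) / len(scores)
```

Words that always appear together score 1.0; words that never co-occur score -1.0, so the average reflects how tightly a topic's words cluster in the corpus.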
Moreover, we elucidate the impact of outlier reduction techniques when applying BERTopic in an Indonesian context. Our study serves as a foundational model for categorizing Indonesian-language topics using BERTopic, showcasing the significance of tailored text processing techniques. We also reveal that while standard preprocessing methods enhance clustering outcomes, certain dataset characteristics, such as the inclusion of hashtags and mentions, can influence coherence differently across models. This work not only provides insights into public perceptions of biodiversity policies but also offers methodological guidance for text analysis in similar contexts.</p> Nuraisa Novia Hidayati, Siti Shaleha Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255058 Sat, 18 May 2024 00:00:00 +0700 A Simple and Effective Edge Detection Algorithm Based on Boolean Logic https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255695 <p>Detecting the edges of objects in images is an essential issue in the field of computer vision and image processing, especially in light of the increasing need for immediate and online interaction in determining the content of these images, which requires adopting an appropriate algorithm. This paper introduces a new Simple and Effective Edge Detection (SEED) algorithm. The algorithm relies on Boolean operations to detect edges in binary digital images. SEED analyses every pair of adjacent pixels, horizontally and vertically, in a simple and smooth manner. The algorithm shows high performance in identifying edges, with a strong ability to suppress false edges. 
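One plausible reading of such a Boolean pairwise-comparison scheme marks a set pixel as an edge whenever it differs from a horizontal or vertical neighbour. The following is an illustrative sketch of that idea, not the published SEED algorithm (border pixels with no differing in-bounds neighbour are not marked here):

```python
def boolean_edges(img):
    """Edge map for a binary image given as a list of equal-length 0/1 rows.

    A set pixel becomes an edge pixel when at least one of its four
    horizontal/vertical neighbours differs from it (a Boolean XOR test).
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue  # only set pixels can be object edges
            neighbours = [
                img[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w
            ]
            if any(n != img[y][x] for n in neighbours):
                edges[y][x] = 1
    return edges
```

For a solid filled square, only the boundary pixels are kept: interior pixels agree with all four neighbours, so no false interior edges are produced.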
To evaluate the SEED algorithm, it was compared with both the Sobel and Canny algorithms using quantitative evaluation metrics such as the peak signal-to-noise ratio (PSNR) and the mean square error (MSE), as well as the intersection-over-union (IoU) index, also known as the Jaccard index. The values of these metrics reflected the higher performance of the proposed algorithm. The detection rate of false edges was also found to decrease significantly, making it an effective tool for applications in this field.</p> Mohammed Ali Tawfeeq, Mahmood Zeki Abdallah Abdallah Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255695 Sat, 25 May 2024 00:00:00 +0700 TIDCB: Text Image Dangerous-Scene Convolutional Baseline https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255314 <p class="Body-text">The automatic management of public area safety is one of the most challenging issues in our society. For instance, the timely evacuation of the public during incidents such as fires or large-scale shootings is paramount. However, detecting pedestrian behavior indicative of danger promptly from extensive video surveillance data may not always be feasible. This may result in untimely warnings, leading to significant loss of life. Although existing research has proposed text-based person search, it has primarily focused on pedestrian search by matching images of pedestrian body parts to text, lacking the search for pedestrians in dangerous scenarios. To address this gap, this paper proposes an innovative warning framework that further searches for individuals in hazardous situations based on textual descriptions, aiming to prevent or mitigate crisis events. We have constructed a new public safety dataset named CHUK-PEDES-DANGER, one of the first pedestrian datasets that includes dangerous scenes. 
Additionally, we introduce a novel framework for public automatic evacuation. This framework leverages a multimodal deep learning architecture that combines the image model ResNet-50 with the text model RoBERTa to produce our Text-Image Dangerous-Scene Convolutional Baseline (TIDCB) model, which addresses the classification problem from text to image and image to text by matching images of pedestrian body parts and environments to text. We propose a novel loss function, cross-modal projection matching-triplet (CMPM-Triplet). Extensive experiments validate that our method significantly improves accuracy. Our model achieves a matching rate of 76.93%, a 4.78% improvement over TIPCB, and demonstrates significant advantages in handling complex scenarios.</p> Fangfang Zheng, Jian Qu Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255314 Sat, 25 May 2024 00:00:00 +0700 Observation-Based Hybrid Classification Algorithms for Customer Segmentation https://ph01.tci-thaijo.org/index.php/ecticit/article/view/256082 <p>The customer segmentation model aims to cluster customers based on their specific characteristics. Relying solely on a simple classification algorithm may not yield optimal results. Our research proposes an Observation-Based Hybrid Classification (OBHC) algorithm to enhance the customer segmentation model by utilizing customer segmentation data from a public source. Observation-based clustering methods differ from simple classification or clustering models by being a hybrid system specifically engineered to boost the performance of predictive models. Furthermore, the focus is on evaluating metric values after clustering to demonstrate performance improvement. The experiments demonstrate significant performance improvements across various classification algorithms. 
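The general cluster-then-classify idea behind such hybrid systems can be sketched generically: assign each observation to a cluster, then fit a separate predictor inside each cluster. This toy 1-D reconstruction (fixed centroids, majority-label predictors) is illustrative only and is not the authors' OBHC algorithm:

```python
def nearest_centroid(x, centroids):
    """Index of the centroid closest to a 1-D feature value."""
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

def fit_hybrid(xs, ys, centroids):
    """Cluster observations around fixed centroids, then learn the
    majority label inside each cluster."""
    votes = {i: {} for i in range(len(centroids))}
    for x, y in zip(xs, ys):
        c = nearest_centroid(x, centroids)
        votes[c][y] = votes[c].get(y, 0) + 1
    # cluster id -> most frequent label seen in that cluster
    return {c: max(v, key=v.get) for c, v in votes.items() if v}

def predict_hybrid(x, centroids, cluster_labels):
    return cluster_labels[nearest_centroid(x, centroids)]
```

In a real pipeline the per-cluster predictor would be a full classifier rather than a majority vote, and the clusters would come from a learned clustering model; the point of the sketch is only the two-stage structure.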
The most notable enhancement observed with the proposed algorithm is up to 43.86% on average accuracy score, 24.25% on average precision score, 20.25% on average recall score, and 32% on average F1-score, as shown in the experiment section. This research contributes by introducing a novel process for data scientists to tackle customer segmentation challenges, identifying higher-performing segments that meet business needs, and providing executives with the flexibility to adopt them. The research underscores the significance of employing hybrid models to classify customers better, providing valuable insights for advancing business development and improving customer service.</p> Kulkatechol Kanokngamwitroj, Chetneti Srisaan Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/256082 Sat, 08 Jun 2024 00:00:00 +0700 A Relational Database Model with Interval Probability Valued Attributes for Uncertain and Imprecise Information https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255697 <p>Although the conventional relational database model (CRDB) is beneficial for modeling, designing, and implementing large-scale systems, it is limited in expressing and dealing with uncertain and imprecise information. In this paper, we introduce a new relational database model, named IPRDB, as an extension of CRDB in which relational attributes may take values associated with probability intervals, for representing and handling uncertain and imprecise information in practice. 
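For illustration, interval probabilities attached to attribute values can be combined endpoint-wise. This sketch assumes independence between events, which is one common choice but need not match the combination strategies defined in the paper:

```python
def and_interval(p, q):
    """Conjunction of two interval probabilities [l, u] under independence:
    the product of the bounds, endpoint-wise."""
    return (p[0] * q[0], p[1] * q[1])

def or_interval(p, q):
    """Disjunction under independence: 1 - (1 - p)(1 - q), endpoint-wise."""
    return (1 - (1 - p[0]) * (1 - q[0]), 1 - (1 - p[1]) * (1 - q[1]))
```

For example, if one attribute holds with probability in [0.2, 0.4] and another independently in [0.5, 1.0], their conjunction holds with probability in [0.1, 0.4]; a query engine over such a model would propagate intervals like these through its algebraic operations.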
To build IPRDB, we employ three key methods: (1) probabilistic values of data types are proposed for expressing uncertain and imprecise valued attributes; (2) probabilistic interpretations of binary relations on sets and operators on probability intervals are used for computing the uncertainty degree of functional dependencies, keys, and relations on value domains of attributes; and (3) combination strategies of probabilistic values are defined for developing new relational algebraic operations. Fundamental concepts of the model, such as schemas, probabilistic relations, and probabilistic relational databases, are then extended coherently and consistently with those of the conventional relational database model. A set of properties of the basic probabilistic relational algebraic operations is also formulated and proven. The resulting IPRDB model can effectively represent and manipulate uncertain and imprecise information in real-world applications.</p> Hoa Nguyen, Duy Nhat Le Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255697 Sat, 15 Jun 2024 00:00:00 +0700 SETA- Extractive to Abstractive Summarization with a Similarity-Based Attentional Encoder-Decoder Model https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255552 <p>Summarizing the information provided within tables of scientific documents has always been a problem. A system that can summarize this vital information, which a table encapsulates, can provide readers with a quick and straightforward way to comprehend the contents of the document. To train such systems, we need data, and finding a high-quality dataset is difficult. To mitigate this challenge, we developed a high-quality corpus that contains both extractive and abstractive summaries derived from tables, using a rule-based approach. 
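A rule-based conversion of table rows into extractive sentences might look like the following toy sketch; the template and names here are hypothetical illustrations, not the rules used to build the corpus:

```python
def table_to_sentences(header, rows):
    """Turn each table row into one template sentence, treating the first
    column as the entity and the remaining columns as its attributes."""
    sentences = []
    for row in rows:
        entity = row[0]
        pairs = ", ".join(f"{h} of {v}" for h, v in zip(header[1:], row[1:]))
        sentences.append(f"{entity} has {pairs}.")
    return sentences
```

Applied to a results table with header `["Model", "Accuracy", "F1"]`, each row yields a sentence like "SVM has Accuracy of 0.91, F1 of 0.89.", which can then serve as the extractive side of an extractive/abstractive pair.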
This dataset was validated using a combination of automated and manual metrics. Subsequently, we developed a novel attentional encoder-decoder framework to generate abstractive summaries from extractive ones. The model works on a mix of extractive summaries and inter-sentential similarity embeddings and learns to map them to corresponding abstractive summaries. Through experimentation, we discovered that our model addresses the saliency factor of summarization, an aspect overlooked by previous works. Further experiments show that our model produces coherent abstractive summaries, validated by high BLEU and ROUGE scores.</p> Monalisa Dey, Sainik Kumar Mahata, Anupam Mondal, Dipankar Das Copyright (c) 2024 ECTI Transactions on Computer and Information Technology (ECTI-CIT) https://creativecommons.org/licenses/by-nc-nd/4.0 https://ph01.tci-thaijo.org/index.php/ecticit/article/view/255552 Fri, 21 Jun 2024 00:00:00 +0700
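The ROUGE scores used for this kind of validation reduce, in the ROUGE-1 case, to unigram overlap between the generated and reference summaries; a minimal sketch:

```python
from collections import Counter

def rouge1(candidate, reference):
    """Unigram precision, recall, and F1 between two token lists."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0, 0.0, 0.0
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    return p, r, 2 * p * r / (p + r)
```

For instance, the candidate "the cat sat" against the reference "the cat sat down" scores precision 1.0 and recall 0.75; higher-order ROUGE-N and ROUGE-L variants extend this idea to n-grams and longest common subsequences.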