https://ph01.tci-thaijo.org/index.php/ecticit/issue/feedECTI Transactions on Computer and Information Technology (ECTI-CIT)2026-03-28T13:20:46+07:00Prof.Dr.Prabhas Chongstitvattana and Prof.Dr.Chidchanok Lursinsapchief.editor.cit@gmail.comOpen Journal Systems<p style="text-align: justify;">ECTI Transactions on Computer and Information Technology (ECTI-CIT) is published by the Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI) Association, a professional society that aims to promote communication between electrical engineers, computer scientists, and IT professionals. Contributed papers must be original and must advance the state of the art in applications of Computer and Information Technology. Both theoretical contributions (including new techniques, concepts, and analyses) and practical contributions (including system experiments and prototypes, and new applications) are encouraged. The submitted manuscript must not have been copyrighted, published, submitted, or accepted for publication elsewhere. This journal employs <em><strong>a double-blind review</strong></em>, which means that throughout the review process, the identities of the reviewer and the author are concealed from each other. The manuscript text should not contain any commercial references, such as company names, university names, trademarks, commercial acronyms, or part numbers.
The manuscript length must be at least 8 pages and no longer than 10 pages with two (2) columns.</p> <p style="text-align: justify;"><strong>Journal Abbreviation</strong>: ECTI-CIT</p> <p style="text-align: justify;"><strong>Since</strong>: 2005</p> <p style="text-align: justify;"><strong>ISSN</strong>: 2286-9131 (Online)</p> <p style="text-align: justify;"><strong>Language</strong>: English</p> <p style="text-align: justify;"><strong>Review Method</strong>: Double Blind</p> <p style="text-align: justify;"><strong>Issues Per Year</strong>: 2 Issues (from 2005-2020), 3 Issues (in 2021), and 4 Issues (from 2022).</p> <p style="text-align: justify;"><strong>Publication Fee</strong>: Free of charge.</p> <p style="text-align: justify;"><strong>Published Articles</strong>: Review Article / Research Article / Invited Article (only for an invitation provided by editors)</p> <p style="text-align: justify;"><strong>Scopus preview:</strong> https://www.scopus.com/sourceid/21100899864</p> <p style="text-align: justify;"><strong>DOI prefix for the ECTI Transactions</strong> is: 10.37936/ (https://doi.org/)</p>https://ph01.tci-thaijo.org/index.php/ecticit/article/view/262099Intelligent Honeypot for Web Applications: Leveraging Seq2Seq and Reinforcement Learning2026-02-05T15:45:34+07:00Ananya Varadarajanpes2202100749@pesu.pes.eduAshwin Chandrasekaranpes2202100832@pesu.pes.eduRachana Binumohanpes2202100742@pesu.pes.eduRahul Huliyar Ravishankarpes2202100378@pesu.pes.eduGokul Kannan Sadasivamgokul@pes.edu<p><span style="font-weight: 400;">This paper presents an intelligent honeypot system designed to mimic legitimate websites using Sequence-to-Sequence (Seq2Seq) learning and Deep Q-Learning. The system generates realistic, contextually appropriate responses to attacker queries, prolonging interactions and providing insights into malicious behaviors while safeguarding actual systems.
The Seq2Seq model, trained on HTTP request-response pairs, enables the honeypot to produce responses that closely resemble those of real servers, enhancing its ability to deceive attackers. Deep Q-Learning optimizes engagement by selecting the most effective responses through a custom reward function, balancing realism and interactivity to maximize session length. Performance was evaluated using metrics such as Response Realism Rate (RRR), Semantic Consistency Accuracy (SCA), and Average Session Length (ASL). The honeypot achieved an RRR of 92.3%, an SCA of 89.7%, and a 94.5% Optimal Response Selection Rate (ORSR). These advancements increased ASL by 143.5%, from 3.2 to 7.8 exchanges, reflecting prolonged attacker engagement. By integrating Seq2Seq and Deep Q-Learning, this honeypot demonstrates significant improvements in generating convincing responses and sustaining interactions. These results contribute to modern cybersecurity by providing a practical and theoretical framework for developing next-generation honeypots capable of deceiving attackers and gathering actionable intelligence.</span></p>2026-02-28T00:00:00+07:00Copyright (c) 2026 ECTI Transactions on Computer and Information Technology (ECTI-CIT)https://ph01.tci-thaijo.org/index.php/ecticit/article/view/263972YUV-based Deep Learning Super-Resolution for Bitrate Reduction and ROI Preservation in Modern Video Codecs2026-02-12T13:34:51+07:00Lertluck Leela-amornsinl.lertluck@gmail.comNuttapon Vanakittistiennuttapon.vana@gmail.comNattee Niparnannattee@gmail.comPitchaya Sitthi-amornpitchaya@cp.eng.chula.ac.thAttawith Sudsangattawith@cp.eng.chula.ac.th<p><span style="font-weight: 400;">High Efficiency Video Coding (HEVC) and its successors, such as Versatile Video Coding (VVC), offer substantial bitrate reductions, yet challenges remain in preserving visual fidelity under bandwidth and computational constraints. 
This paper proposes a deep learning-based super-resolution (SR) framework that operates natively in the YUV color space, eliminating costly RGB-YUV conversions and integrating seamlessly with modern video compression pipelines. We develop two convolutional network architectures trained on YUV-formatted video data: a full 3-channel model and a lightweight two-stream variant that separately processes luminance (Y) and chrominance (UV) channels using compact subnetworks. The proposed method enhances both full-frame and region-of-interest (ROI) quality, outperforming conventional HEVC baselines in terms of rate-distortion efficiency. Evaluations on diverse video sequences demonstrate significant bitrate savings and effective ROI preservation, with the lightweight model offering a practical solution for AI-driven applications in resource-constrained environments.</span></p>2026-03-07T00:00:00+07:00Copyright (c) 2026 ECTI Transactions on Computer and Information Technology (ECTI-CIT)https://ph01.tci-thaijo.org/index.php/ecticit/article/view/264201Hybrid WangchanBERTa Architectures for Multi-Class Thai Sentiment Analysis2026-03-12T10:15:14+07:00Panida Songrampanida.s@msu.ac.thSuchart Khummaneesuchart.k@msu.ac.thKhanabhorn Kawattikulkhanabhorn_ka@rmutto.ac.thNittaya Muangnaknittaya.mu@ku.th<p class="Bodytext"><span style="font-weight: 400;">The rapid growth of the restaurant industry in Thailand has intensified the importance of online reviews, which significantly shape customer perceptions and influence business performance. Sentiment analysis has emerged as an effective computational approach for extracting customer opinions from such reviews; however, multi-class sentiment classification in Thai remains challenging due to the language's non-segmented structure and the issue of class imbalance. 
This study investigates three hybrid deep learning models (WangchanBERTa-MLP, WangchanBERTa-CNN, and WangchanBERTa-BiLSTM) by integrating WangchanBERTa, a Thai-specific pre-trained language model, with different neural architectures. Using a balanced dataset of restaurant reviews obtained through SMOTE, the models were evaluated based on accuracy, precision, recall, and F1-score. The experimental results show that WangchanBERTa-BiLSTM performed the best overall, achieving an accuracy of 85.22% and significantly improving the classification of neutral and positive sentiments compared to the BERT-based models and other hybrid methods.</span></p>2026-03-21T00:00:00+07:00Copyright (c) 2026 ECTI Transactions on Computer and Information Technology (ECTI-CIT)https://ph01.tci-thaijo.org/index.php/ecticit/article/view/264238Comparison of CNN Architectures for Thai Medicinal Plant Classification2026-03-12T11:07:51+07:00Sompong Valuvanathornsompong.v@ubu.ac.thChanchai Supaartagornchanchai.s@ubu.ac.th<p><span style="font-weight: 400;">Thai medicinal plants are essential to traditional healthcare and local livelihoods. However, many Thai medicinal plants have similar morphological characteristics such as shape, colour, and texture. This problem leads to misidentification and misclassification. Image classifiers utilizing convolutional neural networks (CNNs), which are a class of deep learning models, provide a scalable substitute for manual classification. This study aims to evaluate and compare the performance of three CNN architectures (DenseNet-121, EfficientNet-B3, and MobileNetV2) for classifying 10 species of Thai medicinal plants. The dataset comprises 5,000 leaf images representing 10 species (500 images per species). This study partitioned the dataset into an 80% training set and a 20% test set. To enhance model generalization, we applied data augmentation techniques, specifically rotation, flipping, and colour manipulation.
Furthermore, we utilized TensorFlow and Keras on Google Colab with GPU acceleration to train the models. Evaluation metrics include accuracy, precision, recall, F1 score, model size, inference time, and CPU utilization. The results highlight a trade-off between accuracy and efficiency: DenseNet-121 achieved the highest accuracy at 96.0% and a Matthews Correlation Coefficient (MCC) of 0.9558. Statistical analysis confirmed that DenseNet-121 significantly outperformed the other architectures (p < 0.05), albeit with a higher inference time (579.22 s). Notably, EfficientNet-B3 and MobileNetV2 both achieved an accuracy of 93.4%, with MobileNetV2 performing the best in terms of model size (11.07 MB) and inference time (3.86 s). In conclusion, DenseNet-121 is the most accurate model, while MobileNetV2 is best suited for real-time applications due to its lightweight architecture and rapid inference time. EfficientNet-B3 offers an optimal balance between accuracy and computational efficiency.</span></p>2026-03-28T00:00:00+07:00Copyright (c) 2026 ECTI Transactions on Computer and Information Technology (ECTI-CIT)https://ph01.tci-thaijo.org/index.php/ecticit/article/view/265498Enhancing LTE Handover Decision using Optimised Extreme Gradient Boosting and Rule-Based Decision-Support2026-02-12T13:42:03+07:00Noormadinah Alliasnoormadinah@yahoo.comMegat Norulazmi Megat Mohamed Noormegatnorulazmi@unikl.edu.myMohd Nazri Ismailm.nazri@upnm.edu.myMohd Taha Ismailmtaha@unikl.edu.my<p>Long-Term Evolution (LTE) provides low-latency, high-data-rate services, which are essential for delay-sensitive applications such as video streaming and online gaming. Despite this, user mobility among cells can degrade network performance, so efficient handover management is crucial to maintain Quality of Service (QoS).
Traditional handover mechanisms use static control parameters, such as hysteresis margin and time-to-trigger, that cannot adapt to users' dynamic mobility or to a range of user trajectories. In this paper, we present a learning-based, optimised, data-driven approach for LTE handover decision support. An XGBoost model is trained, with hyperparameters optimised via Hyperopt, to learn the relationship between user movement angle and handover performance parameters. Interpretable if-then rules are developed to modify the handover control parameters adaptively. Experimental results further show that the performance of the fixed-parameter solutions varies with the maximum handover delay, the mean time to handover, and the minimum handover rate, indicating that a single configuration is unlikely to provide the best performance across all mobility scenarios. The solution offers an efficient, scalable, and interpretable decision-support system to improve LTE handover efficiency in dynamic wireless networks.</p>2026-03-28T00:00:00+07:00Copyright (c) 2026 ECTI Transactions on Computer and Information Technology (ECTI-CIT)
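<p>To illustrate the rule-based decision-support stage described in the last abstract above, the following is a minimal, hypothetical sketch of interpretable if-then rules that adapt LTE handover control parameters (hysteresis margin in dB, time-to-trigger in ms) to a user's movement angle and speed. The function name, thresholds, and parameter values are illustrative assumptions for exposition, not the rules learned in the paper.</p>

```python
# Hypothetical sketch: rule-based adaptation of LTE handover control
# parameters. Thresholds and returned values are assumptions for
# illustration only, not the authors' learned rules.

def adapt_handover_params(movement_angle_deg: float, speed_kmh: float) -> dict:
    """Return a hysteresis margin (dB) and time-to-trigger (ms) for a mobility profile."""
    # Rule 1: fast, near-perpendicular cell crossings benefit from a quick trigger.
    if movement_angle_deg >= 60 and speed_kmh > 30:
        return {"hysteresis_db": 1.0, "ttt_ms": 40}
    # Rule 2: shallow trajectories along the cell edge risk ping-pong handovers,
    # so widen the margin and wait longer before triggering.
    if movement_angle_deg < 20:
        return {"hysteresis_db": 3.0, "ttt_ms": 256}
    # Default: a standard static configuration.
    return {"hysteresis_db": 2.0, "ttt_ms": 128}

# Example: a fast user crossing the cell boundary at a steep angle.
profile = adapt_handover_params(movement_angle_deg=75.0, speed_kmh=50.0)
print(profile)
```

<p>In the paper's pipeline such rules would be derived from the tuned XGBoost model rather than hand-written; the sketch only shows the if-then form the decision support takes.</p>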