📝 Selected Publications
For a complete list of publications, please visit my Google Scholar profile.
🔍 NLP & (M)LLM Applications
Feijiang Han, Jiaming Zhang, Chuyi Deng, Jianheng Tang, Yunhuai Liu
Key Points:
- First comprehensive study of LLMs’ capabilities in WebShell detection
- Novel BFAD framework improves LLM detection by 13.82% through function-aware analysis
- Enables both large and small LLMs to outperform traditional SOTA methods
[LaTeX2Layout: High-Fidelity, Scalable Document Layout Annotation Pipeline for Layout Detection] (Coming Soon)
Feijiang Han, Zelong Wang, Bowen Wang, Xinxin Liu, Skyler Cheung, Delip Rao, Chris Callison-Burch, Lyle Ungar
[Paper] | [Code & Dataset] (Coming Soon)
Key Points:
- Novel pipeline that extracts layout information directly from LaTeX compilation
- Custom LaTeX packages for precise element tracking and reading order preservation
- 200% improvement over zero-shot baselines through curriculum learning and data augmentation
[Beyond Detection: A Comprehensive Benchmark and Study on Representation Learning for Fine-Grained Webshell Family Classification] (Coming Soon)
Feijiang Han
[Paper] | [Code & Dataset] (Coming Soon)
Key Points:
- First systematic study on automating WebShell family classification
- Novel dynamic function call trace extraction for behavior analysis
- Comprehensive evaluation of representation methods across multiple datasets
🔮 Unlocking and Understanding LLMs
ZeroTuning: Unlocking the Initial Token’s Power to Enhance Large Language Models Without Training
Feijiang Han, Xiaodong Yu, Jianheng Tang, Delip Rao, Lyle Ungar
Key Points:
- Novel training-free optimization through initial token manipulation
- Improves LLM performance by up to 11.71% without any training
- Theoretical insights into attention mechanisms and layer/head-specific impacts
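As a rough illustration of the idea behind training-free initial-token manipulation, here is a minimal NumPy sketch that damps the attention mass on the initial token (the "attention sink") and renormalizes. The function name, the uniform scaling factor, and the toy scores are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def scale_initial_token_attention(attn_weights, gamma):
    """Rescale the attention weight on the initial token (index 0) by a
    factor gamma, then renormalize each row to sum to 1.
    A simplified, training-free recalibration in the spirit of ZeroTuning;
    the real method tunes this per layer/head."""
    w = attn_weights.copy()
    w[..., 0] *= gamma
    return w / w.sum(axis=-1, keepdims=True)

# Toy attention scores for one query over 4 key positions
scores = np.array([[2.0, 0.5, 0.3, 0.1]])
attn = softmax(scores)
attn_down = scale_initial_token_attention(attn, gamma=0.5)  # damp the sink token
```

With gamma < 1, attention shifts from the initial token to the remaining context tokens; gamma > 1 concentrates it back on the sink.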
arXiv 2025
Question Tokens Deserve More Attention: Enhancing Large Language Models without Training through Step-by-Step Reading and Question Attention Recalibration
Feijiang Han, Lingfeng Guo, Haotian Cui, Zixuan Lyu
🌟 Foundation Research (RL, Unlearning, Crowdsourcing, Federated Learning)
Credit and quality intelligent learning based multi-armed bandit scheme for unknown worker selection in multimedia MCS
Jianheng Tang, Feijiang Han, Kejia Fan, et al.
Key Points:
- Novel Credit and Quality Learning based Multi-Armed Bandit (CQL-MAB) scheme for solving the Post-Unknown Worker Recruitment problem in mobile crowdsensing (MCS)
- Integrates credit identification and quality calculation for worker selection
- Theoretically proven truthfulness and efficiency in reverse auction settings
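To give a flavor of bandit-based worker selection, the sketch below uses the generic UCB1 rule (empirical quality plus an exploration bonus); it is not the CQL-MAB algorithm itself, which additionally integrates credit identification and reverse-auction pricing:

```python
import math

def ucb_select(counts, quality_sums, t):
    """Pick the worker index maximizing empirical mean quality plus a
    UCB1 exploration bonus. `counts[i]` is how often worker i was chosen,
    `quality_sums[i]` the total observed quality, `t` the current round.
    Generic UCB1, shown only to illustrate the selection principle."""
    best, best_score = None, float("-inf")
    for i in range(len(counts)):
        if counts[i] == 0:
            return i  # try each worker once before exploiting
        score = quality_sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
        if score > best_score:
            best, best_score = i, score
    return best

# Round 4: all three workers tried once; the highest-quality one wins
chosen = ucb_select([1, 1, 1], [0.9, 0.5, 0.2], t=4)  # -> 0
```

The exploration bonus shrinks as a worker is sampled more often, so under-sampled workers are periodically retried even when their observed quality is lower.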
UBICOMP 2025
CALM: A Ubiquitous Crowdsourced Analytic Learning Mechanism for Continual Service Construction with Data Privacy Preservation
Kejia Fan, Yuwei Huang, Jiayi He, Feijiang Han, Jianheng Tang, et al.
arXiv 2025
APFL: Analytic Personalized Federated Learning via Dual-Stream Least Squares
Kejia Fan, Jianheng Tang, Zixuan Yang, Feijiang Han, Jiayi Li, et al.
arXiv 2025
ACU: Analytic Continual Unlearning for Efficient and Exact Forgetting with Privacy Preservation
Jianheng Tang, Haotian Zhuang, Dongxiao Fang, Jiayi Li, Feijiang Han, et al.
Information Sciences 2024
MAB-RP: A Multi-Armed Bandit based workers selection scheme for accurate data collection in crowdsensing
Yuwei Lou, Jianheng Tang, Feijiang Han, Anfeng Liu, et al.
Information and Software Technology 2024
Fctree: Visualization of function calls in execution
Fei Zhou, Yifan Fan, Shengchao Lv, Lingxiao Jiang, Zhuo Chen, Jingui Yuan, Feijiang Han, et al.
IEEE IoT Journal 2023
CRL-MABA: a completion rate learning-based accurate data collection scheme in large-scale energy internet
Kejia Fan, Jianheng Tang, Wenbin Xie, Feijiang Han, Yuwei Huang, et al.
IEEE IoT Journal 2023
BTV-CMAB: A bi-directional trust verification-based combinatorial multiarmed bandit scheme for mobile crowdsourcing
Jianheng Tang, Kejia Fan, Wenbin Xie, Feijiang Han, et al.
Computer Communications 2023
A Semi-supervised Sensing Rate Learning based CMAB scheme to combat COVID-19 by trustful data collection in the crowd
Jianheng Tang, Kejia Fan, Wenbin Xie, Lingxiao Zeng, Feijiang Han, et al.