Han, Qiwei
Markwardt, Elias
2025-03-27
2025-01-22
2025-01-22
http://hdl.handle.net/10362/181475

Abstract: This research explored techniques to improve the performance of Large Language Models (LLMs) on Hierarchical Product Classification (HPC), including optimized fine-tuning, optimal prompting techniques, taxonomy-specific Knowledge Graphs, leveraging Retrieval-Augmented Generation, and implementing LLM-based Entity Matching. Tested on the benchmark datasets Icecat and WDC-222, these methods significantly enhanced LLMs' ability to solve HPC tasks across various scenarios. The results achieved a hierarchical F1-score (hF) of 0.921, surpassing traditional deep-learning benchmarks (0.85 hF). While not outperforming proprietary models like GPT, the proposed approaches offer a cost-efficient and effective alternative for businesses, demonstrating strong performance without reliance on expensive LLM solutions.

Language: eng
Keywords: Large Language Models; Hierarchical classification; E-Commerce; In-Context Learning; Fine-tuning; Prompt Engineering; Knowledge graphs; Retrieval-Augmented Generation; Entity matching
Title: Applying LLM-based entity matching for hierarchical product categorization in e-commerce
Type: master thesis
203927737
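The hierarchical F1-score (hF) reported in the abstract is commonly computed over the ancestor sets of the predicted and true category paths (hierarchical precision/recall in the style of Kiritchenko et al.). Below is a minimal sketch of that metric; the `" > "`-delimited path format and the function names are illustrative assumptions, not necessarily the representation used in the thesis:

```python
def path_ancestors(path, sep=" > "):
    """Return the set of ancestor nodes (including the node itself)
    for a category path like 'Electronics > Phones > Smartphones'.
    The separator is an assumed, illustrative path format."""
    parts = path.split(sep)
    return {sep.join(parts[: i + 1]) for i in range(len(parts))}

def hierarchical_f1(true_paths, pred_paths):
    """Micro-averaged hierarchical F1 (hF) over ancestor sets:
    hP = |overlap| / |predicted ancestors|,
    hR = |overlap| / |true ancestors|,
    hF = harmonic mean of hP and hR."""
    overlap = pred_total = true_total = 0
    for t, p in zip(true_paths, pred_paths):
        t_anc, p_anc = path_ancestors(t), path_ancestors(p)
        overlap += len(t_anc & p_anc)
        pred_total += len(p_anc)
        true_total += len(t_anc)
    hp = overlap / pred_total
    hr = overlap / true_total
    return 2 * hp * hr / (hp + hr)
```

For example, predicting "A > B > D" when the true path is "A > B > C" shares two of three ancestors in each direction, giving hP = hR = hF = 2/3; a deeper shared prefix is rewarded even when the leaf category is wrong, which is why hF is preferred over flat accuracy for HPC.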