Effective Methods for Adapting LLM to the Agricultural Business Domain
https://doi.org/10.32634/0869-8155-2025-395-06-162-166
Abstract
The article discusses current challenges in applying large language models (LLMs) to the agricultural business sector and proposes modern approaches to addressing them. Despite the high effectiveness of LLMs in natural language processing, adapting them to agricultural-industry tasks presents several difficulties. Key problems include building specialized training corpora, balancing response quality against computational cost, evaluating model quality objectively, and integrating models into existing agricultural information systems. Practical approaches to these problems are discussed, including fine-tuning models on specialized data, computational optimization methods, and hybrid architectures (in particular, retrieval-augmented generation, RAG). The main areas of LLM application are also analyzed: text generation, search engine improvement, analysis of user reviews, and customer support automation. The research aims to improve the accuracy, relevance, and personalization of model responses in tasks related to forecasting, analysis, and process automation in agriculture. The proposed solutions support the effective integration of LLMs into agricultural-sector infrastructure, enhancing decision-making quality, forecasting, and business process automation.
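To make the hybrid (RAG) approach mentioned in the abstract concrete, the sketch below shows the two core steps of a retrieval-augmented pipeline: retrieving domain documents relevant to a query, then augmenting the prompt with that context before it is passed to an LLM. This is a minimal illustration, not the article's implementation: the document snippets are invented placeholders, and the toy bag-of-words cosine similarity stands in for the dense-embedding retrievers and generator models a production system would use.

```python
from collections import Counter
import math

# Toy knowledge base of agricultural snippets (illustrative placeholders).
DOCS = [
    "Winter wheat is typically sown in autumn and harvested the following summer.",
    "Nitrogen fertilizer rates depend on soil tests and the target yield.",
    "Crop rotation reduces disease pressure and improves soil structure.",
]

def bow(text: str) -> Counter:
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

hits = retrieve("When is winter wheat sown?", DOCS)
prompt = build_prompt("When is winter wheat sown?", hits)
```

Grounding generation in retrieved domain documents is what lets a general-purpose LLM answer specialized agricultural questions without full retraining, which is the trade-off between response quality and computational cost that the article highlights.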
About the Author
A. I. Kapitanov, Russian Federation
Andrey Ivanovich Kapitanov, Candidate of Technical Sciences, Associate Professor
1 Shokina Square, Moscow, 124498
For citations:
Kapitanov A.I. Effective Methods for Adapting LLM to the Agricultural Business Domain. Agrarian science. 2025;1(6):162-166. (In Russ.) https://doi.org/10.32634/0869-8155-2025-395-06-162-166