Automatic Next Word Generation for Text-based Apps Using a Generative Pretrained Transformer Model Compared with an N-gram Model
Keywords: Novel Generative Pretrained Transformer, N-Gram, Tokenization, Next Word Generation, Language Model, Natural Language Processing.
Aim: The aim of this paper is to implement automatic next word generation for text-based apps using a novel generative pretrained transformer and to improve accuracy in comparison with the n-gram approach.
Materials and Methods: The N-gram and generative pretrained transformer models were applied to a dataset consisting of a text file containing a sequence of words. The next-word prediction accuracy of the N-gram model was compared with that of the generative pretrained transformer model; GPT-2 was proposed and implemented. The sample size was 9876 per group, with a G-power value of 0.8.
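To illustrate the N-gram baseline described above, the following is a minimal sketch of count-based next-word prediction. It is not the study's actual implementation; the toy corpus, the whitespace tokenization, and the function names (`train_ngram`, `predict_next`) are assumptions introduced here for illustration.

```python
from collections import Counter, defaultdict

def train_ngram(tokens, n=2):
    """Count (context, next_word) occurrences for an n-gram model.
    The context is the preceding n-1 tokens."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        counts[context][tokens[i + n - 1]] += 1
    return counts

def predict_next(counts, context):
    """Return the most frequent next word for the given context,
    or None if the context was never observed in training."""
    context = tuple(context)
    if context not in counts:
        return None
    return counts[context].most_common(1)[0][0]

# Toy corpus standing in for the text file of words mentioned above.
corpus = "the cat sat on the mat the cat ran on the mat".split()
model = train_ngram(corpus, n=2)
print(predict_next(model, ["on"]))  # most frequent follower of "on"
```

A GPT-2 model, by contrast, scores candidate next tokens with a learned transformer rather than raw counts, which is what the comparison in this study evaluates.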
Results: The novel generative pretrained transformer achieved the maximum accuracy of 89.23% in next word generation for text-based apps, with minimum mean error, when compared with the N-gram model on the same dataset; the difference was statistically significant, with p = 0.02 (p < 0.05).
Conclusion: The study shows that the novel generative pretrained transformer exhibits better accuracy than the N-gram model in suggesting the next word for text-based apps.