In this paper, I present {\bf \inlp}, a new control-flow wrapper abstraction that enables existing context-needing Natural Language Processing (NLP) models to function without any given context by utilizing data from the internet (or a knowledge database when offline). Internet-NLP can be used and finetuned alongside existing NLP models via its config settings, and its Long Short-Term Memory (LSTM) neural network can also be trained. Additionally, Masked Language Models (MLMs) such as BERT or LinkBERT \cite{devlin-etal-2019-bert,yasunaga-etal-2022-linkbert} can be incorporated to improve search queries and therefore retrieve more accurate and reliable data. Furthermore, {\bf \inlp} utilizes an LSTM, Reinforcement Learning, and caches to enable multi-turn NLP tasks and improvement via Reinforcement Learning from the user.
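To make the control flow concrete, the following is a minimal sketch of the wrapper; all names here (\texttt{InternetNLP}, \texttt{web\_search}, \texttt{refine}, \texttt{lookup}) are illustrative assumptions, not the package's actual API:
\begin{verbatim}
# Minimal sketch of the Internet-NLP control flow; class and
# function names are illustrative, not the real package API.
def web_search(query):
    # Stub: the real system would query a search engine and
    # clean the top results into a context string.
    raise NotImplementedError

class InternetNLP:
    def __init__(self, qa_model, query_model, knowledge_db=None):
        self.qa_model = qa_model        # context-needing NLP model
        self.query_model = query_model  # MLM that refines search queries
        self.db = knowledge_db          # offline knowledge-database fallback
        self.cache = {}                 # cache reused across multi-turn tasks

    def answer(self, question, context=None):
        if context is None:             # no context given: retrieve it first
            query = self.query_model.refine(question)
            if query in self.cache:
                context = self.cache[query]
            elif self.db is not None:   # offline mode
                context = self.db.lookup(query)
            else:                       # online mode
                context = web_search(query)
            self.cache[query] = context
        return self.qa_model(question=question, context=context)
\end{verbatim}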
In this paper, I also present new NLP and Natural Language Inference (NLI) models to assist {\bf \inlp} (see the loading sketch after this list):
\begin{itemize}
\item Open-book question answering with long answers (QA) via GPT-NeoX-20B \cite{gpt-neox-library, gpt-neox-20b}
\item CrossEncoder NLI via LinkBERT \cite{reimers-2019-sentence-bert,thakur-2020-AugSBERT, yasunaga-etal-2022-linkbert}
\item Answer-to-context NLP via T5 \cite{https://doi.org/10.48550/arxiv.1910.10683}
\end{itemize}
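As an illustration, these assistant models could be loaded with the HuggingFace \texttt{transformers} library; the identifiers below are the public base checkpoints, assumed here as stand-ins for the finetuned variants released with this paper:
\begin{verbatim}
# Sketch: loading the public base checkpoints behind the three
# assistant models; the finetuned variants released with this
# paper may live under different identifiers.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          AutoModelForSeq2SeqLM,
                          AutoModelForSequenceClassification)

# Open-book long-answer QA (GPT-NeoX-20B)
qa_tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
qa_model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

# CrossEncoder NLI (LinkBERT backbone, 3 NLI labels)
nli_tok = AutoTokenizer.from_pretrained("michiyasunaga/LinkBERT-large")
nli_model = AutoModelForSequenceClassification.from_pretrained(
    "michiyasunaga/LinkBERT-large", num_labels=3)

# Answer-to-context generation (T5)
t5_tok = AutoTokenizer.from_pretrained("t5-base")
t5_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
\end{verbatim}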
Along with these models, I also present new general-purpose QA and NLI datasets:
\begin{itemize}
\item ALotNLI, made from ANLI, MultiNLI, and SNLI \cite{nie-etal-2020-adversarial,N18-1101,DBLP:journals/corr/BowmanAPM15}
\item ALotOpenBookQA, made from CoQA, Natural Questions, and SQuAD \cite{DBLP:journals/corr/abs-1808-07042,kwiatkowski-etal-2019-natural,DBLP:journals/corr/abs-1806-03822}
\end{itemize}
As a result of these models, datasets, and Internet-NLP, the accuracy and reliability of most context-needing NLP models increased on most NLP tasks, especially tasks that require factual responses with no given context.
Internet-NLP and the new NLP and NLI models were trained on the general-purpose datasets (ALotNLI and ALotOpenBookQA). By default, Internet-NLP utilizes the text-generative model GPT-NeoX \cite{gpt-neox-library, gpt-neox-20b} for long responses and LinkBERT \cite{yasunaga-etal-2022-linkbert} for short responses. For two choices (for example, true and false), a Bi-Encoder NLI model is used; for multiple choices, a CrossEncoder is used \cite{thakur-2020-AugSBERT}; a sketch of this selection follows.
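A minimal sketch of this selection logic, using the \texttt{sentence-transformers} Bi-Encoder and CrossEncoder interfaces (the checkpoint names are stock illustrative defaults, not the LinkBERT-based models trained for this paper):
\begin{verbatim}
# Sketch: Bi-Encoder for two-choice tasks, CrossEncoder otherwise.
# Checkpoint names are stock sentence-transformers models, used as
# stand-ins for the paper's LinkBERT-based checkpoints.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

def score_choices(premise, choices):
    if len(choices) == 2:
        # Bi-Encoder: embed premise and choices independently, then
        # compare by cosine similarity (fast; adequate for 2 labels).
        bi = SentenceTransformer("all-MiniLM-L6-v2")
        emb = bi.encode([premise] + choices, convert_to_tensor=True)
        return util.cos_sim(emb[0], emb[1:])[0].tolist()
    # CrossEncoder: jointly encode every (premise, choice) pair
    # (slower, but more accurate with many choices).
    ce = CrossEncoder("cross-encoder/nli-deberta-v3-base")
    scores = ce.predict([(premise, c) for c in choices])
    # Keep the entailment logit per pair; the label order is assumed
    # to be (contradiction, entailment, neutral) for this checkpoint.
    return scores[:, 1].tolist()
\end{verbatim}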
Internet-NLP, in layperson's terms, provides the context that context-needing NLP models require in order to function. Internet-NLP can be improved via finetuning and via training of its LSTM and Reinforcement Learning model (which can be trained alongside the NLP model), which enables better search queries and, subsequently, better results. It obtains state-of-the-art (SOTA) results in QA and NLI without context.
Internet-NLP is a subset of a larger package, Internet-ML, and is open-source.\footnote{Internet-NLP, a subset of Internet-ML, is public and open-source: \url{https://github.com/thamognya/internet_ml}\label{footnote:code}}
Old versions of Internet-NLP are also publicly available.\footnote{Old versions of Internet-NLP are public: \url{https://pypi.org/project/internet-nlp/}\label{footnote:code-old}}