Abstract
With the exponential growth of data, the demand for skilled AI practitioners has risen sharply. Large Language Models (LLMs) help address this gap, but they still struggle with domain-specific data, with extracting named entities in unconstrained environments, and with limited context windows. This work develops wrapper models on top of LLMs that overcome these barriers; the proposed multi-field models undergo rigorous evaluation to ensure they deliver state-of-the-art solutions. The proposed information retrieval model, SOLARIS, demonstrated outstanding performance, achieving ROUGE-L F1 scores of 0.93 and 0.98 on two distinct evaluation datasets, establishing itself as a state-of-the-art (SOTA) solution on those datasets. Similarly, MedSafeX, our proposed named-entity extractor, attained a ROUGE-L F1 score of 0.98 on one of the evaluation datasets. Additionally, DBCopilot, the large-database querying framework, showcased its efficiency in SQL querying with an execution accuracy (EX) of 0.82 and an exact-set-match (EM) accuracy of 0.60 on the Spider dataset, underlining its potential for improving and automating database querying tasks. This work thus outlines pioneering efforts in leveraging Generative AI and In-Context Learning to transform human-text interaction, with the goal of redefining the landscape of Generative AI utilization and making it more accessible to all through democratization.
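The ROUGE-L F1 metric reported above is based on the longest common subsequence (LCS) between a candidate text and a reference: precision is the LCS length over the candidate length, recall is the LCS length over the reference length, and F1 is their harmonic mean. The following sketch (an illustration, not the evaluation code used in this work) computes it from scratch on whitespace-tokenized text:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    # Standard dynamic-programming table: dp[i][j] = LCS of a[:i] and b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference, candidate):
    """ROUGE-L F1 between two token lists."""
    lcs = lcs_length(reference, candidate)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return 2 * precision * recall / (precision + recall)

ref = "the cat sat on the mat".split()
cand = "the cat on the mat".split()
print(round(rouge_l_f1(ref, cand), 3))  # LCS = 5, P = 1.0, R = 5/6, F1 = 10/11 ≈ 0.909
```

In practice, published ROUGE scores are typically computed with an established implementation (e.g. a ROUGE scoring library) that also handles stemming and sentence-level aggregation; this sketch only shows the core LCS-based formula behind the reported F1 values.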





