Dr. Yangfeng Ji | University of Virginia

Abstract: Large language models (LLMs) have drawn significant attention from the AI research community, both for building new models and for improving their performance. In addition, the popularity of LLM-based applications (e.g., ChatGPT and Bard) has motivated the exploration of new applications in various domains, such as education and medicine. However, recent work shows that the limitations of traditional data-driven modeling still exist in LLMs, such as vulnerability to adversarial attacks and inconsistency under linguistic variations. To show these two sides of the same coin, this talk consists of two parts. The first part provides a high-level overview of large language models and the progress of recent research built upon LLMs; the second part demonstrates the potential risks caused by the limitations of LLMs. The talk concludes with a brief summary of future research challenges.