Lun-Wei Ku received her Ph.D. degree in Computer Science and Information Engineering from National Taiwan University, Taipei, Taiwan, in 2009. She joined the Institute of Information Science, Academia Sinica, as an assistant research fellow in August 2012 and was promoted to associate research fellow in August 2018. Her research expertise lies in natural language processing and information retrieval, especially sentiment analysis and opinion mining. She regularly publishes in top conferences, including ACL, AAAI, SIGIR, WWW, NAACL, and EMNLP. She is very active in the research community; her international professional activities include serving as general chair and program chair of StarSem 2021, StarSem 2019, and AIRS 2019, and as area chair of ACL and NAACL 2021, ACL, EMNLP, and COLING 2020, EMNLP 2019, ACL 2017, CCL 2016, NLPCC 2016, and ACL-IJCNLP and EMNLP 2015. Her research is internationally recognized, and she has served as the AFNLP Member-at-Large and the ACL SIGHAN Asia Information Officer. She is very experienced in academic and industrial collaborations: her research collaborators come from the US, Singapore, Sweden, and Israel, and she is currently working with data companies and banks. Her current research topics focus on recommendation, visual storytelling, sensational text generation, fake news intervention, knowledge-based question answering, lie detection, and social media analysis.
In recent years, we have witnessed a monumental shift in the landscape of artificial intelligence, particularly in the realm of Natural Language Processing (NLP). This transformation has been largely driven by the development of large language models, a progression that has extended from academic research to commercial applications. In this talk, we will go through the diverse landscape of these models, delving into their origins, capabilities, and future prospects. We will start with a glimpse into well-known models such as ChatGPT and GPT-4, which have revolutionized the way we interact with AI systems, from answering questions to engaging in natural, context-rich conversations. Then, we will continue along the evolutionary tree of language models, exploring newer additions such as BLOOM, LLaMA, Alpaca, Vicuna, Falcon, and Llama 2, each with unique capabilities and datasets tailored for specific tasks. We will dissect the datasets that underpin these models, shedding light on their vital role in training and fine-tuning, as well as the significant impact of reinforcement learning from human feedback (RLHF) in enhancing their performance. As we traverse this landscape, we will compare these models, evaluating their strengths and limitations using metrics such as accuracy, fluency, and coherence, and offering insights into their relative advantages and trade-offs. This comparative analysis will help us make informed choices when selecting models for specific applications. Finally, we will showcase our immersive art system as an application built on large language models. By the end of this talk, the audience should have a comprehensive understanding of the state of the art in language modeling and a vision for the exciting future these models herald.
Organizers: Artificial Intelligence Research Center, College of Intelligent Computing
※ No registration is required for this event.