In his podcast episode A Skeptical Take on the A.I. Revolution, Ezra Klein talks with Gary Marcus, emeritus professor of psychology and neural science at New York University. Marcus argues that although ChatGPT appears to produce impressive results, it in fact generates a pastiche of true and false information that it cannot tell apart. Does that mean it is not to be trusted?
Some commentators, such as New York Times tech columnist Kevin Roose, suggest that ChatGPT could have value as a "teaching aid … [which] could unlock student creativity, offer personalized tutoring, and better prepare students to work alongside A.I. systems as adults". James Pethokoukis takes it further, seeing an upside in the ability of such language models to aid academic research as a sort of "super research assistant".
That aspect of ChatGPT is the subject of new research from finance professors Michael Dowling (Dublin City University) and Brian Lucey (Trinity College Dublin). In their paper ChatGPT for (Finance) Research: The Bananarama Conjecture, Dowling and Lucey report that ChatGPT's output can be made impressively good when domain experts guide what it does. That opens up the possibility of using ChatGPT as an e-ResearchAssistant and of it becoming an everyday tool for researchers. It also, of course, opens up debate about the authorship and copyright of papers co-authored with ChatGPT.