Center for Legal & Court Technology Hosts Conference on Problematic Generative AI

[Photos by David F. Morrill: Professor Fredric Lederer, Director of the Center for Legal & Court Technology, opened this year's conference; William & Mary Law Dean A. Benjamin Spencer welcomed attendees; the three conference panels in session.]

Since its introduction on November 30, 2022, OpenAI’s ChatGPT has captured the imagination, fears, and awe of the public. During the same period, Generative AI tools from Stability AI, Midjourney, and others have demonstrated striking abilities to create complex images, videos, and audio from user-provided text instructions. Although Generative AI has shown unprecedented utility across industries, the technology has also given rise to new legal, ethical, societal, and technical issues.

To engage these issues, the Center for Legal & Court Technology at William & Mary Law School hosted the 2024 Problematic Generative AI conference on Friday, February 9, 2024, where panelists shared their expertise on Generative AI from the diverse perspectives of international law, research, policy, and industry.

The first panel, led by Dr. Iria Giuffrida (William & Mary Law School’s Assistant Dean for Academic and Faculty Affairs and Professor of the Practice of Law) and joined by Dr. Trenton Ford (Assistant Professor of Data Science, William & Mary), Steven Truitt (Principal Program Manager, Microsoft), Dr. Janice Zhang (Assistant Professor of Computer Science, William & Mary), and Dr. Yanfu Zhang (Assistant Professor of Computer Science, William & Mary), provided an overview of Generative AI and discussed the strengths and weaknesses of the technology.

The second panel, led by Daniel Shin (William & Mary Law School’s Cybersecurity Researcher and Adjunct Professor of Law) and joined by Dr. Abby Gilbert (Co-Director, Institute for the Future of Work), Dr. Osman Gazi Güçlütürk (Senior Policy Associate, Holistic AI), Omar Santos (Distinguished Engineer of Cybersecurity and AI Security Research, Cisco), and Steven Truitt, discussed how companies and policymakers are using and controlling Generative AI models, with an emphasis on managing risks.

The third panel, led by Margaret Hu (William & Mary Law School’s Taylor Reveley Research Professor and Professor of Law) and joined by Laura Heymann (James G. Cutler Professor of Law, William & Mary Law School), Dr. Scott Shackelford (Professor of Business Law & Ethics, Kelley School of Business at Indiana University), and Dr. Nicolas Vermeys (Professor, Université de Montréal’s Faculté de droit), presented insights about the legal implications of Generative AI.

After the panel discussions, conference participants considered what can be done to address the challenges Generative AI presents. Panelists and attendees emphasized the need for forthcoming regulations to account for the different uses and users of Generative AI technologies. Some participants also cautioned against the tendency to over-regulate new technologies, especially where existing laws already apply.

Finally, the discussion recognized the importance of educating users about the capabilities and limitations of Generative AI, which may mitigate harms arising from improper application of the technology.

This was the second in a series of Problematic AI conferences hosted at William & Mary. The Coastal Node of the Commonwealth Cyber Initiative, Université de Montréal’s Cyberjustice Laboratory, Holistic AI, and William & Mary Law School’s Center for International Law & Policy co-sponsored the event. At the end of the conference, those attending voted unanimously to hold a follow-on conference in February 2025.