Andrew McLaughlin Presents Annual Mervis Lecture in Intellectual Property

Noted law and technology expert Andrew McLaughlin delivered the 2024–2025 Stanley H. Mervis Lecture in Intellectual Property at William & Mary Law School on March 17.

In his talk, titled “There Is No Fate But What We Make: AI’s Next Frontiers and the Policy Gaps Ahead,” McLaughlin highlighted that AI “is not an inevitable force shaping humanity.” As AI extends far beyond the large language models with which many are now familiar into the new realm of quantitative AI, its trajectory will be determined by the choices of those who design, build, and use it.

McLaughlin began with a technical overview, describing how AI systems can perform tasks that mimic, and indeed go beyond, the cognitive capabilities of humans by training on millions of inputs to “learn” what results to generate rather than following a long series of rules. For example, rather than teaching a system to recognize likely spam by giving it a list of keywords, programmers can train it on millions of e-mails labeled either “spam” or “not spam” so that it learns to assess future e-mails on its own. Generative large language models take this concept a step further: Rather than relying on human labeling of the inputs up front, such models train on millions of inputs from the Internet, learning patterns that allow them to generate text in response to a query, with human feedback provided afterward.
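The spam example maps onto a standard supervised-learning workflow. The following minimal sketch, using the scikit-learn library (the library choice, toy e-mails, and labels are our own illustration, not examples from the lecture), shows a classifier learning word patterns from labeled examples rather than from a hand-written keyword list:

```python
# A minimal sketch of the labeled-training idea described above,
# using scikit-learn. The toy data and library choice are illustrative
# assumptions, not anything presented in the lecture.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training set: e-mails hand-labeled as spam (1) or not spam (0).
emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap loans, limited time offer",
    "Can you review the draft contract today?",
]
labels = [1, 0, 1, 0]

# The model learns word patterns from the labeled examples rather
# than following a programmer-written list of keyword rules.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
classifier = MultinomialNB()
classifier.fit(features, labels)

# Assess a new, unseen e-mail.
new_email = vectorizer.transform(["Claim your free offer today"])
print(classifier.predict(new_email))  # -> [1], i.e., likely spam
```

At production scale the same pattern holds; the training set simply grows from four toy messages to the millions of labeled e-mails McLaughlin described.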

The challenge, McLaughlin noted, is that at some point, “we may be hitting a data wall.” Once large language models have trained on all available content, how can they improve or differentiate themselves from one another? How can AI models overcome their epistemological limits to generate new and reliable insights rather than reflecting the scope of their training data, including any biases or misinformation that data contains?

Quantitative AI may provide the answer. As McLaughlin described, quantitative AI models integrate high-level mathematical representations; the fundamental equations of quantum mechanics, physics, and related fields; and training on numerical data rather than written texts to generate novel and scientifically reliable results. Thus, unlike LLMs “that predict likely word sequences,” he noted, “quantitative AI models simulate, predict, and discover based on mathematical and scientific principles” at a scale and speed that human effort cannot achieve.
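A toy sketch can illustrate the distinction McLaughlin drew between training on text and training on numbers. The example below (the equation, data, and model choice are our own illustrative assumptions, not material from the lecture) fits a model to numerical observations generated by a known physical law, recovering the underlying coefficient from data alone:

```python
# A toy illustration of "training on numerical data rather than written
# texts": fit a model to noisy samples from a known physical law.
# The equation, data, and model choice are illustrative assumptions.
import numpy as np

# Noisy observations of free fall: d = 0.5 * g * t^2.
g = 9.81
t = np.linspace(0, 5, 100)
d = 0.5 * g * t**2 + np.random.normal(0, 1.0, t.shape)

# Fit a quadratic model to the numerical data; the learned leading
# coefficient should approximate 0.5 * g (about 4.9).
coeffs = np.polyfit(t, d, deg=2)
print(f"learned: {coeffs[0]:.2f}, expected: {0.5 * g:.2f}")
```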

This development can enable researchers in fields such as healthcare, materials science, and complex systems analysis to accomplish scientific discovery that would otherwise have remained beyond their grasp. For example, quantitative AI can allow scientists developing a therapeutic drug to train a model that screens more than 100,000 possible solutions in mere days, allowing the process to move much more quickly to patient trials and regulatory approvals. Even more remarkably, such models can become “self-learning,” dramatically accelerating scientific discovery by forming and testing hypotheses without human guidance.

With these new possibilities, McLaughlin cautioned, come important questions about risks and regulation. Quantitative AI is much more likely than large language models to be used for critical functions such as medical decisions, financial market actions, and national security strategy. Where should regulatory oversight lie, and to what extent can model developers create guardrails to prevent undesirable or harmful uses of their technology? Who should own the rights to scientific advances developed with such models? And how should countries prepare for the “tech sovereignty wars,” in which nations compete for AI dominance, and for the related cybersecurity issues?

McLaughlin noted that he was leaving his audience with more questions than answers. But, he concluded, whatever the path for AI, “the future is up to us.”

First-year law student Elisabeth Freedman, who attended the talk, remarked that McLaughlin “distilled highly complex and dense information” into a “comprehensible and fascinating lecture.” Freedman noted that she was “intrigued, a little scared, but mostly hopeful hearing the capabilities of quantitative AI in sectors like medical diagnostics, economic analysis, and pharmaceutical innovation. . . . More importantly, I realized the necessity of lawmakers, government agencies, and international organizations to cooperate as soon as possible to create regulations, ensuring that this powerful tool is used responsibly.”

McLaughlin is Chief Operating Officer at SandboxAQ. His prior roles include serving as the founding VP and Chief Policy Officer of the Internet Corporation for Assigned Names and Numbers (ICANN), the first Head of Global Public Policy at Google, and Deputy Chief Technology Officer of the United States. After leaving the federal government, McLaughlin became a partner at betaworks, a venture fund and startup studio, and served as CEO of two betaworks companies, Instapaper and Digg, and as EVP at Tumblr and Medium. He also spent four years as the founding president and COO of the urban architecture and construction startup Assembly OSM and has been a co-founder and partner at Higher Ground Labs since 2017.

About the Mervis Lectureship

The Stanley H. Mervis Lectureship in Intellectual Property was created in memory of Stanley Mervis in 2003 by his family and friends. Mr. Mervis, a member of the William & Mary Law School Class of 1950, was patent counsel for Polaroid Corporation for most of his career and was actively involved in important patent and intellectual property issues.

Other notable individuals in intellectual property law who have given Mervis Lectures in recent years include Professor Mark A. Lemley (2016); the Honorable Pierre N. Leval, U.S. Court of Appeals for the Second Circuit (2018); Paul Grewal (2019); the Honorable Kathleen M. O’Malley (ret.), U.S. Court of Appeals for the Federal Circuit (2020); Professor Colleen Chien (2022); and Professor Jessica Silbey (2023).