Professor Margaret Hu testifies for the Senate Homeland Security and Governmental Affairs Committee

On November 8, the Senate Homeland Security and Governmental Affairs Committee held a hearing titled “The Philosophy of AI: Learning From History, Shaping Our Future” at the Dirksen Senate Office Building in Washington, D.C. At the hearing, Professor Hu testified alongside Professor Shannon Vallor of the University of Edinburgh and Professor Daron Acemoglu of MIT.

The hearing explored the philosophical, ethical, and historical dimensions of the future of AI regulation, focusing specifically on the possibilities and challenges of legal solutions and AI technologies. The Committee Chairman, Sen. Gary Peters (D-Mich.), explained in his opening remarks that regulatory oversight of AI and emerging technologies is necessary because technological disruptions reflect “moments in history” that have “affected our politics, influenced our culture, and changed the fabric of our society.” “The reason we must consider the philosophy of AI is because we are at a critical juncture in history,” stated Professor Hu, Taylor Reveley Research Professor and Professor of Law and Director of the Digital Democracy Lab at William & Mary Law School. “We are faced with a decision: either the law governs AI, or AI governs the law.”

Professor Acemoglu discussed the need to shape a future in which AI technologies work to support the worker, the citizen, and democratic governments. He explained that the asymmetric privatization of AI research and development has skewed the technology toward serving corporate profit rather than benefiting the user. Professor Vallor pointed out that “AI is a mirror,” reflecting back historical biases and discrimination that have persisted over generations. As a result, she stated, the technology also reflects a point of view and should not be treated as objective or neutral.

In her testimony, Professor Hu argued that regulating AI systems effectively means policymakers and lawmakers should not view AI oversight in purely literal terms. AI systems, she explained, are not simply literal technical components of technologies. Because AI is infused with philosophical commitments, such as epistemology and ontology, it should be seen as reflecting philosophical aims and ambitions. Generative AI, machine learning, algorithmic, and other AI systems “digest collective narratives” and can reflect “existing hierarchies,” she shared. These systems can produce results “based on preexisting historical, philosophical, political and socioeconomic structures.” Acknowledging this at the outset allows us to see what is at stake and how AI may perpetuate inequities and antidemocratic values.

In the past few years, billions of dollars have been invested in private AI companies, fueling the urgency of a dialogue on rights-based AI governance. Professor Hu’s opening statement at the hearing encouraged the Senators to place the philosophy of AI side by side with the philosophy of the Constitution and ask whether the two are consistent with one another. Drawing on her expertise as a constitutional law scholar and teacher, she described the need to investigate the philosophical underpinnings of constitutional law as a method of enshrining rights and constraining power. AI must be viewed in much the same way, explained Professor Hu. Both the Constitution and AI are deeply philosophical, and placing them side by side allows us to understand how they might be in tension with each other on a philosophical level. If we look at AI as only a technology, we will miss how it can transform into a governing philosophy that attempts to rival the governing philosophy of a constitutional democracy.

Will AI be applied in a way that is consistent with our constitutional philosophy, or will it alter it, erode it, or mediate it?

AI is not only a knowledge structure; it is also a power and market structure, Professor Hu pointed out during her Senate testimony. AI is already being deployed for governance purposes. We are at a critical juncture where we must grapple with whether, and to what extent, constitutional rights and core governance functions were ever meant to be mediated through commercial enterprises and AI technologies in this way. As the capacities of AI evolve, several risks will grow exponentially, and more rapidly than we can anticipate.

The humanities and the philosophies that have underscored our “analogue democracy” must serve as our guide in a “digital democracy.” If we read AI too literally, as only a technology, we risk failing to grasp its full impact as a challenge facing our society. Only when the philosophy of a constitutional democracy can speak to the philosophy of AI does it become clear how the two may be inconsistent with one another; otherwise, we may miss how AI as a governing philosophy might rival or compete with the governing philosophy of a democracy. From history, we know that the law can be bent and contorted, especially when structures of power evolve into an ideology.

At the end of her testimony, Professor Hu made the case that the foundational principles of a constitutional democracy provide a touchstone for analysis. She concluded with the following statement: “I return to my opening question of whether the law will govern AI or AI will govern the law. To preserve and reinforce a constitutional democracy, there is only one answer to this question. The law must govern AI.”



Margaret Hu is Taylor Reveley Research Professor and Professor of Law and Director of the Digital Democracy Lab at William & Mary Law School. Professor Hu testified with Professor Shannon Vallor, University of Edinburgh, and Professor Daron Acemoglu, MIT, on Nov. 8 in a hearing for the Senate Committee on Homeland Security and Governmental Affairs on the philosophy of AI and governance.