According to a recent Monmouth University poll, 55 percent of Americans are worried about AI’s threat to humanity’s future. In an era where technological advancements are accelerating at breakneck speed, ensuring that artificial intelligence (AI) development remains in check is crucial. As AI-powered chatbots like ChatGPT become increasingly integrated into our daily lives, it is high time we address their potential legal and ethical implications.
And some have done so. A recent letter signed by Elon Musk, who co-founded OpenAI, Steve Wozniak, the co-founder of Apple, and over 1,000 other AI experts and funders calls for a six-month pause in training new models. In turn, Time published an article by Eliezer Yudkowsky, the founder of the field of AI alignment, calling for a much more hard-line solution of a permanent global ban and international sanctions on any country pursuing AI research.
However, the problem with these proposals is that they require coordination among numerous stakeholders across a wide variety of companies and governments. Let me share a more modest proposal that’s much more in line with our existing methods of reining in potentially threatening developments: legal liability.
By leveraging legal liability, we can effectively slow AI development and make certain that these innovations align with our values and ethics. We can ensure that AI companies themselves promote safety and innovate in ways that minimize the threat they pose to society. We can ensure that AI tools are developed and used ethically and effectively, as I discuss in depth in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.
Legal liability: A vital tool for regulating AI development
Section 230 of the Communications Decency Act has long shielded internet platforms from liability for user-generated content. However, as AI technology becomes more sophisticated, the line between content creators and content hosts blurs, raising questions about whether AI-powered platforms like ChatGPT should be held liable for the content they produce.
Introducing legal liability for AI developers will compel companies to prioritize ethical considerations, ensuring that their AI products operate within the bounds of social norms and legal regulations. They will be forced to internalize what economists call negative externalities, meaning negative side effects of products or business activities that affect other parties. A negative externality might be loud music from a nightclub bothering neighbors. The threat of legal liability for negative externalities will effectively slow down AI development, providing ample time for reflection and establishing robust governance frameworks.
To curb AI’s rapid, unchecked development, it is essential to hold developers and companies accountable for the consequences of their creations. Legal liability encourages transparency and responsibility, pushing developers to prioritize refining AI algorithms, reducing the risks of harmful outputs, and ensuring compliance with regulatory standards.
For example, an AI chatbot perpetuating hate speech or misinformation could lead to significant social harm. A more advanced AI, given the task of improving a company’s stock price, might, if not bound by ethical concerns, sabotage its competitors. By imposing legal liability on developers and companies, we create a potent incentive for them to invest in refining the technology to avoid such outcomes.
Legal liability, moreover, is much more doable than a six-month pause, not to speak of a permanent ban. It’s aligned with how we do things in America: instead of having the government regulate business, we instead permit innovation but punish the negative consequences of harmful business activity.
The benefits of slowing down AI development
- Ensuring ethical AI. We can take a deliberate approach to integrating ethical principles in designing and deploying AI systems by slowing down AI development. This will reduce the risk of bias, discrimination, and other ethical pitfalls that could have severe societal implications.
- Avoiding technological unemployment. The rapid development of AI has the potential to disrupt labor markets, leading to widespread unemployment. By slowing down the pace of AI advancement, we provide time for labor markets to adapt and mitigate the risk of technological unemployment.
- Strengthening regulations. Regulating AI is a complex task that requires a comprehensive understanding of the technology and its implications. Slowing down AI development allows for establishing robust regulatory frameworks that effectively address the challenges posed by AI.
- Fostering public trust. Legal liability in AI development can help build public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can foster a positive relationship with the public, paving the way for a responsible and sustainable AI-driven future.
Concrete steps to implement legal liability in AI development
- Clarify Section 230. Section 230 does not appear to cover AI-generated content. The law defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” The definition of “development” of content “in part” remains somewhat ambiguous, but judicial rulings have determined that a platform cannot rely on Section 230 for protection if it supplies “pre-populated answers” so that it is “much more than a passive transmitter of information provided by others.” Thus, it’s highly likely that courts would find that AI-generated content is not covered by Section 230. It would be helpful for those who want a slowdown of AI development to launch legal cases that enable courts to clarify this matter. By clarifying that AI-generated content is not exempt from liability, we create a strong incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.
- Establish AI governance bodies. In the meantime, governments and private entities should collaborate to establish AI governance bodies that develop guidelines, regulations, and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards. Doing so would help manage legal liability and facilitate innovation within ethical bounds.
- Encourage collaboration. Fostering collaboration between AI developers, regulators, and ethicists is vital for creating comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that strike a balance between innovation and responsible AI development.
- Educate the public. Public awareness and understanding of AI technology are essential for effective regulation. By educating the public on the benefits and risks of AI, we can foster informed debates and discussions that drive the development of balanced and effective regulatory frameworks.
- Develop liability insurance for AI developers. Insurance companies should offer liability insurance for AI developers, incentivizing them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.
The increasing prominence of AI technologies like ChatGPT highlights the urgent need to address the ethical and legal implications of AI development. By harnessing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritizes ethical considerations, and minimizes the risks associated with these emerging technologies. It is essential that developers, companies, regulators, and the public come together to chart a responsible course for AI development that safeguards humanity’s best interests and promotes a sustainable, equitable future.