Saturday, November 23, 2024

ChatGPT: Stop it, Pause it, or Fix it?



ChatGPT was an instant sensation, garnering more than 100 million users within two months of its launch. It inspired awe as the first general-purpose AI application able to perform tasks historically limited to humans: writing papers, performing research, conducting legal analysis, giving parenting advice, and providing digital companionship. It has already demonstrated that it can perform such tasks at elite levels. Advanced artificial intelligence technologies that transform the ways we work, learn, and create are no longer science fiction. They exist here and now, so we must act fast, not pause, to ensure their responsible development and implementation. Without American leadership in AI's development, hostile or lawless actors are bound to capture and shape the technology at the expense of our own interests and ethics.

ChatGPT is powered by a technology commonly referred to as a Large Language Model (LLM), which uses deep learning algorithms and brain-like neural networks to process language. As powerful as LLMs are, they aren't perfect. It is hard to imagine that a technology capable of scoring a 1410 on the SAT or in the 90th percentile on the Uniform Bar Exam is immature, but even this advanced version of ChatGPT is: it is today's equivalent of Windows 1.0 in the 1980s. LLMs can deliver completely wrong or biased responses with convincing certainty. They sometimes "hallucinate," providing confident responses that are not justified by their training data. For instance, ChatGPT was recently "tricked" into giving bomb-making instructions.
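To make that mechanic concrete, here is a minimal sketch of the core idea behind an LLM: a neural network trained to predict the next token of text. Everything in it (the toy corpus, the name TinyLM, the small recurrent architecture) is an illustrative assumption, not how ChatGPT is actually built; real systems use transformer architectures with billions of parameters. The sketch does show why confident-but-wrong answers are possible: the model emits whatever continuation its training makes most probable, with no built-in notion of truth.

```python
# Minimal sketch (assumptions noted above): a character-level
# next-token predictor. Requires only PyTorch.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat. the dog sat on the log. "
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
itos = {i: ch for ch, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class TinyLM(nn.Module):
    """Toy language model: embed characters, run a GRU, predict the next one."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits over possible next characters

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
xs, ys = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)

# Training = adjusting weights so the predicted next character
# matches the actual next character in the corpus.
for step in range(300):
    logits = model(xs)
    loss = nn.functional.cross_entropy(logits.view(-1, len(vocab)), ys.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: the model always produces *something* plausible-looking,
# whether or not it is true -- the root of "hallucination."
ctx = torch.tensor([[stoi[c] for c in "the "]])
for _ in range(20):
    nxt = model(ctx)[0, -1].argmax()
    ctx = torch.cat([ctx, nxt.view(1, 1)], dim=1)
print("".join(itos[int(i)] for i in ctx[0]))
```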

Large tech companies like Google, Microsoft (through its partnership with OpenAI), and Meta are racing to stay competitive, as are their Chinese counterparts Tencent, Alibaba, and Baidu. The United States and China are locked in a great-power competition for global AI leadership, yet our institutions have been unable to keep up with the rate of change. These factors have prompted some technologists, academics, government officials, and organizations to question whether generative AI technology is being implemented too quickly.

LLM technologies are already available worldwide through open source and are being built into hundreds of new applications beyond chatbots every day. Instead of attempting to slow or pause U.S. technological development and deployment, government, academia, and private industry should put their energy into building frameworks that demand safer and more trusted AI systems.

While no system can guarantee perfectly safe use of any technology, we can develop clear strategies and guardrails for continuous improvement without losing the incredible utility generative AI brings. For instance, private industry and academia can focus on improving the quality of training data, algorithms, queries, and response validation methods. Technologists can use more mature techniques to screen out inappropriate queries that are misaligned with a system's intended function, as sketched below. While many have suggested increased regulation, the reality is that without proper consideration and understanding, regulation can be a blunt instrument that creates more problems than it solves.
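As one illustration of such a guardrail, the sketch below screens incoming queries before they ever reach the model. The category names, blocked phrases, and the call_model stub are all illustrative assumptions, not any vendor's actual policy or API; production systems typically layer trained moderation classifiers, human review, and post-response validation on top of simple filters like this.

```python
# Hedged sketch of a pre-query screening guardrail. All names and
# phrases here are hypothetical examples, not a real moderation policy.
from dataclasses import dataclass

@dataclass
class ScreenResult:
    allowed: bool
    reason: str

# Illustrative blocklist for a system whose intended function is
# general Q&A; a real system would use a learned moderation model.
DISALLOWED_TOPICS = {
    "weapons synthesis": ["build a bomb", "make explosives"],
    "malware": ["write ransomware", "keylogger that evades"],
}

def screen_query(query: str) -> ScreenResult:
    """Reject queries misaligned with the system's intended function."""
    q = query.lower()
    for topic, phrases in DISALLOWED_TOPICS.items():
        if any(p in q for p in phrases):
            return ScreenResult(False, f"matched disallowed topic: {topic}")
    return ScreenResult(True, "no disallowed topic matched")

def call_model(query: str) -> str:
    # Stub standing in for the underlying LLM call (hypothetical).
    return f"[model response to: {query!r}]"

def answer(query: str) -> str:
    check = screen_query(query)
    if not check.allowed:
        return f"Request declined ({check.reason})."
    return call_model(query)

if __name__ == "__main__":
    print(answer("How do I build a bomb?"))       # blocked by the screen
    print(answer("How do neural networks learn?")) # passed through
```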

Recently, an open letter signed by prominent technologists called for a pause of at least six months in the implementation of new advanced AI technology, to gain a better understanding of its implications and to institute appropriate guardrails that ensure safe and responsible use.

The letter raises an interesting question: should the government intervene to protect the public from the potential dangers of AI at the expense of all its benefits and positive uses? Can we trust the industry that built the Internet to regulate itself? While these are critical questions that need answers, the focus of the debate shouldn't be whether to slow advanced AI implementation, but how fast we can replace existing generative AI applications like the GPT-4-powered ChatGPT with safer and more responsible versions. It's time for government, industry, and academia to collaborate on a better AI ecosystem that supports and accelerates the development and implementation of safer AI.

The federal government is uniquely positioned to convene boards of the brightest minds in AI technology to solidify the best ways to ensure safe deployment. It can employ the National Academies to recommend strategies and informed policies. Trusted government agencies such as the National Institute of Standards and Technology (NIST) can develop appropriate standards, trusted models, and validated datasets. Agencies such as the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF) can increase funding for AI safety research and development.

Academics can help illuminate the safest AI practices by building new research centers that focus on trusted AI and by developing K-12 AI curricula that inform students and serve as a springboard for careers in the field. Universities should also increase the number of advanced degree programs in artificial intelligence.

Industry leaders can institute safer AI practices on a day-to-day basis. Industry stakeholders have the opportunity to create AI consortia focused on safety, best practices, open-source libraries, common datasets, validated models, and testing and validation certification. Companies leveraging AI can appoint Chief AI Officers responsible for safe implementation. Industry can also develop a trusted third-party organization – a ratings agency for AI risk – that provides testing and certification services.

The debate shouldn't be about whether we should place all of our trust in the tech industry or all of our trust in government. Instead, government, industry, and academia will have to work together to build trust in AI. Everyone loses if a foreign government, especially a hostile one, has direct control over the technology of tomorrow. Everyone also loses if AI powers a lawless Wild West. It's time for America to lead in the creation of a comprehensive, safe, ethical, and responsible AI framework and show the rest of the world how advanced AI can be responsibly implemented without stifling innovation. Let's not wait. Let's get safer systems to the public faster. Let's work together and get started now.

Gilman Louie is the CEO of America's Frontier Fund, Chairman of the Federation of American Scientists, and a former Commissioner of the National Security Commission on Artificial Intelligence.

This article was originally published by RealClearPolicy and made available via RealClearWire.