Trust Me, I’m Smart: HPC and Government Regulation in the Coming AI Age

January 17, 2024 by Doug Eadline

Stephen Hawking famously said that "success in creating effective AI could be the biggest event in the history of our civilization, but unless we learn how to prepare for and avoid the potential risks, AI could be the worst event in the history of our civilization."

AI in the form of LLMs (Large Language Models) has exploded. Based on the human-like conversations possible with tools like ChatGPT, many believe the era of widespread AI adoption is upon us. Vast amounts of money are being spent on hardware and software in the hope of automating many tasks performed by "mere humans." Recently, Google announced a reorganization and job cuts of "hundreds of employees" from its ad sales team as it automates the process with AI. Handing key operations over to AI agents or tools is new, and the effectiveness of these higher-level human-AI conversations is still unknown.

One notable aspect of generative AI, including LLMs, is how fast the market moves. Researchers have been developing these ideas and models for many years; however, in the last 18 months, commercial interest in AI, particularly LLMs, has accelerated and shows no sign of slowing down. Hawking's advice could not be more relevant.

Given the rapid uptake and untested effectiveness of AI technologies, HPCwire asked Steve Conway, senior analyst at Intersect360 Research, to comment on government attempts to regulate AI and what effect these might have on HPC. Conway has closely tracked AI progress for more than a decade. He has spoken and published widely on this topic, including an AI primer for senior US military leaders co-authored with Johns Hopkins University Applied Physics Laboratory.

Steve Conway, senior analyst at Intersect360 Research

HPCwire: What's happening with government regulation of AI?

Conway: 2024 and 2025 will be a seminal period for AI regulation by governments worldwide. Recognizing that recent advances, especially in generative AI, could powerfully transform their societies and economies, governments are pushing forward with regulations aimed at making their countries AI leaders while avoiding AI's harmful potential. Where AI is concerned, the world today is suspended between excitement and concern, and the language of the regulations reflects both. When you read through these documents, you pick up tremendous enthusiasm alongside a sense of urgency about taming the risks, as if a Pandora's box may have been opened.

HPCwire: Why are regulatory initiatives ramping up now?

Conway: The dominant methodologies, machine learning models, are very useful, but they aren't yet transparent and trustworthy. This situation creates serious potential for deliberate or unintentional misuse. A few years ago, I heard a great presentation by Rebecca Willett of the University of Chicago on the many ways in which AI can introduce errors and bias. Generative AI has caused concerns to spike, especially because it can't tell fact from fiction when it aggregates and organizes data.

For example, hyper-realistic bots such as Google Duplex are deliberately designed to conceal their artificial nature from humans, even inserting "ums" and "ers" into conversations. Stanford's Adaptive Agents Group has proposed the Shibboleth Rule for Artificial Agents. It says, "All autonomous AIs must identify themselves as such if asked to by any agent, human or otherwise."
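The Shibboleth Rule is a policy statement, not an API, but a minimal sketch can suggest how an agent framework might enforce it. Everything below, from the `ShibbolethAgent` wrapper to the naive keyword check, is a hypothetical illustration, not part of any real system:

```python
# Hypothetical sketch of the Shibboleth Rule: a wrapper that makes any
# agent disclose its artificial nature when directly asked.
# Class names and the naive keyword check are illustrative only.

DISCLOSURE = "I am an autonomous AI agent."

# Naive detector for identity questions; a real system would need far
# more robust intent classification than substring matching.
IDENTITY_QUERIES = ("are you an ai", "are you a bot", "are you human",
                    "are you a robot", "am i talking to a machine")

class ShibbolethAgent:
    """Wraps an agent so identity questions are always answered truthfully."""

    def __init__(self, inner_agent):
        self.inner_agent = inner_agent  # the underlying conversational agent

    def respond(self, message: str) -> str:
        if any(q in message.lower() for q in IDENTITY_QUERIES):
            return DISCLOSURE           # mandatory self-identification
        return self.inner_agent.respond(message)

# Example with a trivial stand-in for a real conversational model:
class EchoAgent:
    def respond(self, message: str) -> str:
        return f"You said: {message}"

agent = ShibbolethAgent(EchoAgent())
print(agent.respond("What's the weather?"))   # normal behavior
print(agent.respond("Wait, are you an AI?"))  # -> "I am an autonomous AI agent."
```

Any real enforcement would need genuine intent detection rather than keyword matching, but the wrapper pattern captures the rule's spirit: disclosure takes precedence over whatever the underlying agent would otherwise have said.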

HPCwire: What does this mean for the HPC community?

Conway: HPC is nearly indispensable at the forefront of AI R&D, and nearly all HPC sites are exploiting or exploring AI today, so it's no accident that some national AI regulations in the making are aimed heavily at scientific researchers and the so-called frontier AI capabilities that are advancing the state of the art.

HPCwire: Any particular implications for HPC vendors?

Conway: HPC vendors, including cloud services providers, design and deliver the technology behind the frontier AI capabilities that worry many government regulators, so I think they should closely track efforts to mitigate AI risks and consider getting directly involved in those efforts. I don't think Japan will be the only country whose AI regulations focus heavily on the legal side of this, including protecting IP rights and imposing serious financial penalties for violations. Vendors should be paying close attention.

HPCwire: What is the message for researchers?

Conway: The AI transparency problem implies that researchers should continue to be mindful about when to apply AI and when to keep humans in the loop. For important applications, maybe AI use should generally be an intermediate rather than a final step, with humans verifying the AI results whenever feasible.
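In code terms, that intermediate-step pattern might look like the minimal sketch below, in which AI output is typed as an unverified draft and only an explicit human sign-off turns it into a final result. The function and type names (`ai_generate`, `human_review`, and so on) are hypothetical illustrations, not drawn from any specific toolkit:

```python
# Hypothetical human-in-the-loop pattern: AI output is a draft that a
# human must verify before it is treated as a final result.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    source: str = "ai"   # provenance is recorded, per transparency concerns

@dataclass
class FinalResult:
    content: str
    verified_by: str     # the human who signed off

def ai_generate(prompt: str) -> Draft:
    """Stand-in for a real model call; returns an unverified draft."""
    return Draft(content=f"[model output for: {prompt}]")

def human_review(draft: Draft, reviewer: str, approved: bool) -> Optional[FinalResult]:
    """Only an explicit human approval turns a Draft into a FinalResult."""
    if approved:
        return FinalResult(content=draft.content, verified_by=reviewer)
    return None  # rejected drafts never become final results

draft = ai_generate("summarize the simulation results")
result = human_review(draft, reviewer="j.smith", approved=True)
if result:
    print(f"Published: {result.content} (verified by {result.verified_by})")
```

The design choice here is that the type system, not convention, separates unverified AI output from publishable results, so skipping the human step is impossible rather than merely discouraged.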

HPCwire: Is AI similarly defined in regulations around the world?

Conway: The definitions typically are very broad, covering a wide range of domains: scientific research, social media, consumer electronics, automated and semi-automated driving, precision medicine, telecommunications, and many more. Governments have had to think hard about whether a single agency can competently manage AI regulations and policy across such diverse domains. Some governments, including the US, China, and the UK, seem headed toward a single-agency solution, while the EU is pursuing different regulatory approaches for different domains.

HPCwire: Can you briefly summarize where leading countries and regions stand on AI regulation?

Conway: Sure. I'll start with China, which moved ahead of others to regulate generative AI with its Guidelines for Responsible Research Conduct. Among other things, the guidelines state that AI use must be clearly labeled in applications for research funding projects affecting the public. In addition, the Cyberspace Administration of China (CAC) adopted a licensing regime for frontier AI models that could threaten society, namely China's social order. Importantly, CAC licensing deliberately encourages innovation by SMEs (small and medium-sized enterprises), startups, and larger firms alike. There's concern worldwide that big companies might try to block this competition and become monopolies in the nascent AI market, where important breakthroughs could come from small companies.

HPCwire: How about Europe?

Conway: In the Bletchley Declaration of November 2023, 28 countries, including the UK, US, China, Japan, and many countries in EMEA (Europe, the Middle East, and Africa) and the Asia-Pacific region, agreed to collaborate on managing the higher risks associated with frontier AI capabilities. The EU AI Act, expected to go into effect this year or next, is seen as the world's most comprehensive and strictest attempt at AI regulation, and some businesses have accused it of stifling innovation. The act sets penalties for serious violations of up to seven percent of a company's prior-year revenue, which could amount to millions of euros.

HPCwire: What's happening in the US?

Conway: President Biden's October 2023 executive order is important for laying out the principles for a potential AI bill of rights aimed at protecting privacy and punishing discrimination. Thirty US states, however, have already passed laws addressing AI's potential and risks. As you know, it's not unusual for consensus on federal regulations to bubble up from the state level in the US.

HPCwire: And Japan?

Conway: Japan is taking an approach closer to the US model than to Europe's stricter regulations. In 2019, Japan's government issued the "Social Principles of Human-Centric AI," followed by the "AI Strategy 2022." Both were aimed at citizen protection and a sustainable society amid AI advances. The "AI, Machine Learning & Big Data Laws and Regulations 2023" centered on data and intellectual property protection, including legal liability.

HPCwire: Quantum computing is another transformational technology with potential risks for society. Are government regulations also underway for quantum computing?

Conway: Regulatory activities are at an earlier stage because broad commercialization of quantum computing is farther in the future. FINRA, the Financial Industry Regulatory Authority, has published quantum regulatory considerations related to cybersecurity and data governance, and some others are pondering risks. Still, the urgency today is mostly about AI. In the PRC, for example, both Baidu and Alibaba have turned their quantum research facilities over to the government in order to focus harder on competing today in AI. Concerns about quantum computing risks will likely become more serious soon enough.

HPCwire: What can the HPC community do to mitigate AI risks so regulations don't stifle innovation?

Conway: I think ethics coursework should be required in all AI-related university curricula, without exception, and the training should be specific to the risks posed by AI. Many prominent universities worldwide already have AI-specific ethics courses, including the University of Cambridge, Australian National University, Georgia Tech, Stanford University, the New Jersey Institute of Technology, the University of Tokyo, the University of Stuttgart, Peking University, and many others. This aspect is important.

Steve Conway, senior analyst at Intersect360 Research, has closely tracked AI progress for more than a decade. He has spoken and published widely on this topic, including an AI primer for senior US military leaders co-authored with the Johns Hopkins University Applied Physics Laboratory.
