Biden’s recent AI executive order has organizations, experts, and political leaders both hopeful and wary about how AI will be implemented in the world. Tech industry leaders such as Microsoft and Google have praised the regulations for taking consumers’ privacy concerns seriously in an organizational setting. In contrast, other leaders are wary of the order’s impact on AI innovation and new developments in the field. Experts and journalists have been quick to point out how the regulations will affect governmental structure and future military advancements. Below are multiple viewpoints on what Biden’s AI executive order means for the technology industry – and greater society – moving forward.
“President Joe Biden will deploy numerous federal agencies to monitor the risks of artificial intelligence and develop new uses for the technology while attempting to protect workers…
The order…would streamline high-skilled immigration, create a raft of new government offices and task forces and pave the way for the use of more AI in nearly every facet of life touched by the federal government, from health care to education, trade to housing, and more.
At the same time, the Oct. 23 draft order calls for extensive new checks on the technology, directing agencies to set standards to ensure data privacy and cybersecurity, prevent discrimination, enforce fairness and also closely monitor the competitive landscape of a fast-growing industry…
The order…represents the most significant single effort to impose national order on a technology that has shocked many people with its rapid growth, most notably the human-like capabilities of the latest and most powerful generative AI models.
The executive order will create privacy protections around the data that fuels most artificial intelligence systems…The order encourages federal agencies to adopt high-end privacy-enhancing technology to protect the data they collect and the National Science Foundation to fund a new research network focused on developing, advancing and deploying privacy technology for federal agency use.”
“President Biden signed an order Monday invoking the Korean War-era Defense Production Act to compel major AI companies to notify the government when developing any system that poses a ‘serious risk to national security, national economic security or national public health and safety,’
Administration officials described the measures as the strongest yet taken globally to ensure the safety of AI systems, stepping into what they see as a regulatory vacuum over a technology that could lead to job losses, privacy invasions or other harms…
Some tech industry representatives said the new regulatory steps could discourage developers from training new AI systems…
NetChoice, a tech industry trade group with a history of suing government officials, criticized the order as an overreach, saying it was “dangerous for our global standing as the leading technological innovators.” The group’s vice president and general counsel, Carl Szabo, added that it was “ripe for legal action” because of the undue burdens it places on the industry.
The new requirement would apply only to big tech companies’ next-generation AI systems, and not current versions, officials suggested.”
“The order, invoking the Defense Production Act, requires that companies developing the most advanced AI platforms notify the government and share the results of safety tests. The tests are conducted through a risk assessment process called “red-teaming.”
Under the order, the National Institute of Standards and Technology will set standards for the red-team testing that are aimed at ensuring safety prior to release to the public.
IBM, which operates an AI platform called Watsonx, applauded the executive order. “This Executive Order sends a critical message: that AI used by the United States government will be responsible AI,” said IBM chairman and CEO Arvind Krishna.
Hodan Omaar, senior policy analyst for the Center for Data Innovation, said the order provides the AI industry with long-awaited guidance. “Amid a sea of chaotic chatter about how to implement appropriate guardrails for AI, today’s executive order sets a clear course for the United States,” she said…
Hundreds of scientists, tech industry executives and public figures – including leaders of Google, Microsoft and OpenAI, the maker of ChatGPT – sounded the alarm about artificial intelligence in a public statement in May, arguing that fast-evolving AI technology could create as great a risk of killing off humankind as nuclear war and COVID-19-like pandemics.”
“The order is an effort by the president to demonstrate that the United States, considered the leading power in fast-moving artificial intelligence technology, will also take the lead in its regulation.
Companies say they are worried about corporate liability if the more powerful systems they use are abused. And they are hoping that putting a government imprimatur on some of their A.I.-based products may alleviate concerns among consumers.
“We like the focus on innovation, the steps the U.S. government is taking to build an A.I. work force and the capability for smaller businesses to get the compute power they need to develop their own models,” Robert L. Strayer, an executive vice president at the Information Technology Industry Council, a trade group that represents large technology companies, said on Monday.
At the same time, several companies have warned against mandates for federal agencies to step up policing anticompetitive conduct and consumer harms. The U.S. Chamber of Commerce raised concerns on Monday about new directives on consumer protection, saying that the Federal Trade Commission and the Consumer Financial Protection Bureau “should not view this as a license to do as they please.”
Many of the directives in the order will be difficult to carry out, said Sarah Kreps, a professor at the Tech Policy Institute at Cornell University. It calls for the rapid hiring of A.I. experts in government, but federal agencies will be challenged to match salaries offered in the private sector.”
“The new approach is being closely coordinated with the European Union and was initially announced by the administration in July.
The executive order will push a high degree of public-private cooperation and is timed to come out just days before Silicon Valley leaders gather with international government officials in the United Kingdom to look at both the dangers and benefits of AI.
Most of the key actors in the AI space seem to be on board with the thrust of the new regulations, and companies as varied as the chipmaker Nvidia and OpenAI have already made voluntary agreements to regulate the technology along the lines of the executive order. Google is also fully involved, as is Adobe, which makes Photoshop, a key area of concern because of the potential for AI manipulation.
…The idea of having a conversation about reducing the risk of disastrous military miscalculation makes sense. Starting a conversation within NATO could be a good beginning, setting a sensible course for the 32 allied nations in terms of military developments in AI… But it is also high time to get the Pentagon cracking on the military version of such regulations, and not just including our friends in the West.”
***Guest post from FischTank PR interns: Veronica Riga and Bianca Roque ***
Want to learn more about Technology PR?
FischTank PR has extensive knowledge of AI and the greater technology industry from working alongside industry leaders building platforms and solutions that will spur innovation. Please contact us at [email protected] to learn more about our services.