Lawmakers in Washington have been actively discussing artificial intelligence, holding hearings, and announcing A.I. safety commitments from technology companies. But a closer look at those actions raises doubts about how much they will actually shape policy for the rapidly evolving technology.
Experts and lawmakers agree that the United States is only at the start of a long and difficult path toward comprehensive A.I. rules. Despite the meetings, speeches, and discussions with tech executives, it is too early to predict the contours of regulations meant to protect consumers and contain the risks A.I. poses to jobs, the spread of disinformation, and security.
Chris Lewis, the president of the consumer group Public Knowledge, advocates creating an independent agency to regulate A.I. and other tech companies, but even he acknowledges that crafting such laws is a work in progress.
The United States lags behind Europe on A.I. regulation. European lawmakers are poised to enact A.I. laws this year that would restrict the riskiest uses of the technology. American lawmakers, by contrast, disagree sharply on how to handle A.I., and many are still working to understand it.
That lack of stringent rules suits some tech companies, which say they welcome regulation of A.I. even as they push back against the stricter measures being developed in Europe.
Here’s a summary of the current state of A.I. regulations in the United States:
At the White House: The Biden administration has been actively engaging with A.I. companies, academics, and civil society groups. Vice President Kamala Harris kicked off the effort in May with a White House meeting urging tech industry leaders to put safety first. More recently, seven tech companies announced at the White House a set of principles for making their A.I. technologies safer, though many of those practices were already in place or planned and carry no new regulatory force. Critics argue that voluntary commitments are insufficient and call for enforceable guardrails to protect individual rights and privacy. The White House has also released a Blueprint for an A.I. Bill of Rights, which lays out guidelines on consumer protections related to A.I. but is not an enforceable regulation. Officials are working on an executive order on A.I. as well, but its specifics and timing remain undisclosed.
In Congress: Lawmakers have been the loudest proponents of A.I. regulation, and some have already introduced bills on the technology. The proposals include creating an agency to oversee A.I., imposing liability on A.I. technologies that spread disinformation, and requiring licenses for new A.I. tools. At congressional hearings, lawmakers have floated ideas such as nutrition-label-style disclosures to inform consumers of A.I. risks. But the bills are in their early stages and so far lack the support needed to advance. Senate Majority Leader Chuck Schumer has announced a monthslong process to develop A.I. legislation, including educational sessions for members.
At federal agencies: Regulatory agencies have begun to address certain harms arising from A.I. The Federal Trade Commission recently opened an investigation into OpenAI’s ChatGPT, requesting information on how the company secures its systems and how the chatbot could harm consumers by spreading false information. Lina Khan, the F.T.C. chair, believes the agency can police problematic behavior by A.I. companies under existing consumer protection and competition laws. Given how slowly Congress typically acts, some experts argue that agencies should not wait for lawmakers and should act where they already have the authority.
In short, the United States is still in the early stages of regulating A.I. Discussions and initiatives abound, but the shape of comprehensive rules remains uncertain, and meaningful, enforceable regulation will require further effort and deliberation.