Second, he could order any federal agency procuring an AI system that could "significantly impact [our] rights, opportunities, or access to critical resources or services" to require that the system comply with these practices and that vendors provide evidence of that compliance. This recognizes the federal government's power as a customer to shape business practices. It is, after all, the largest employer in the country and could use its purchasing power to dictate best practices for the algorithms that are used, for instance, to screen and select candidates for jobs.
Third, the executive order could require anyone taking federal dollars (including state and local entities) to ensure that the AI systems they use comply with these practices. This recognizes the important role of federal investment in states and localities. For example, AI has been implicated in many components of the criminal justice system, including predictive policing, surveillance, pretrial incarceration, sentencing, and parole. Although most law enforcement practices are local, the Justice Department offers federal grants to state and local law enforcement and could attach conditions to those funds stipulating how to use the technology.
Finally, this executive order could direct agencies with regulatory authority to update and expand their rulemaking to cover processes within their jurisdiction that incorporate AI. Some initial efforts to regulate entities using AI in medical devices, hiring algorithms, and credit scoring are already underway, and these initiatives could be further expanded. Worker surveillance and property valuation systems are just two examples of areas that would benefit from this kind of regulatory action.
Of course, the testing and monitoring regime for AI systems that I have described here is likely to raise a range of concerns. Some will argue, for example, that other countries will overtake us if we slow down to implement such safeguards. But other countries are busy passing their own laws that place significant restrictions on AI systems, and any U.S. business seeking to operate in those countries will have to comply with their rules. The EU is about to adopt an expansive AI Act that includes many of the provisions I have described, and even China is imposing limits on commercially deployed AI systems that go far beyond what we are currently willing to consider.
Others may worry that this expansive set of requirements would be hard for a small business to meet. This could be addressed by tying the requirements to the degree of impact: a piece of software that can affect the livelihoods of millions should be thoroughly vetted, regardless of the size of its developer. An AI system that individuals use for recreational purposes should not be subject to the same strictures and restrictions.
Some are also likely to question whether these requirements are practical. Here again, the power of the federal government as a market maker should not be underestimated. An executive order calling for testing and evaluation frameworks would provide incentives for companies that want to translate best practices into viable commercial testing regimes. The responsible AI sector is already filling with firms that provide algorithmic auditing and evaluation services, industry consortia that issue detailed guidelines vendors are expected to follow, and large consulting firms that offer guidance to their clients. And independent nonprofit entities like Data & Society (disclaimer: I sit on its board) have set up entire labs to develop tools that assess how AI systems will affect different populations.
We have done the research, built the systems, and identified the harms. There are established ways to ensure that the technology we build and deploy can benefit all of us while reducing harms for those who are already buffeted by a deeply unequal society. The time for studying is over; now the White House needs to issue an executive order and take action.