Microsoft has banned US police departments from using generative artificial intelligence (AI) for facial recognition through its Azure OpenAI Service, a major cloud computing platform. The decision came shortly after Axon, a major taser manufacturer, introduced a new tool that uses AI to transcribe audio from police body cameras.

US media reports suggest the new ban applies specifically to Microsoft’s Azure OpenAI Service in the United States and does not extend to law enforcement agencies in other countries. It does, however, prohibit police worldwide from using AI on mobile cameras, such as body cameras or dashcams, to identify individuals in public spaces in real time.

In New Zealand, debate continues over police use of body-worn cameras (BWCs), even as the force begins rolling out new tasers without integrated cameras, a $30 million investment. Axon, leveraging its dominance in taser sales, supplies not only body-worn cameras but also the data-handling systems required to store the captured footage.

The police’s annual review indicates a strategic interest in Axon’s offerings: a senior official attended a conference in Australia focused on Axon’s “public safety tech” to gain insights into its taser and BWC programmes.

Axon’s latest product, Draft One, promises substantial efficiency gains for law enforcement by using generative AI to automatically draft police report narratives, significantly reducing time spent on paperwork. The company asserts that its own research shows the technology is free of bias, presenting it as a boon for resource-constrained police forces.

Microsoft’s integration of the OpenAI service into its premier cloud offerings for government use underscores how widely public agencies have adopted AI technologies, including in New Zealand, where Microsoft’s cloud services are commonly used.

As law enforcement technology evolves, the ethical and practical implications of deploying AI remain subjects of scrutiny and debate. Microsoft’s move to restrict AI facial recognition by US police departments reflects a growing awareness of the need for responsible AI governance, and underscores the importance of weighing the societal impacts of technological innovation.