A new policy directive from Maine Information Technology (MaineIT) has put a six-month moratorium on the adoption and use of Generative Artificial Intelligence (AI) technology within all State of Maine agencies due to “significant” cybersecurity risks.
The prohibition will cover large language models that generate text, such as ChatGPT, as well as software that generates images, music, computer code, voice simulations, and art.
It’s unclear whether and to what extent state employees have been relying on emerging AI tools as part of their jobs. Maine may be the first state in the U.S. to impose such a moratorium.
According to an email sent on Wednesday to all Executive Branch agencies and employees from Maine’s Acting Chief Information Officer Nick Marquis, MaineIT issued a “cybersecurity directive” prohibiting the use of AI for all state business and on all devices connected to the state’s network for six months, effective immediately.
“Generative AI systems have the capacity to automatically process data and information in a way that resembles intelligent human behavior,” Marquis said in his email. “The risks that Generative AI poses are both known and unknowable and many of these risks transcend security.”
Marquis said MaineIT has determined that generative AI poses a cybersecurity risk warranting a six-month prohibition on all use of the technology.
“During this time, MaineIT will comprehensively evaluate Generative AI technologies within the current regulatory landscape to ensure our approach and adoption cultivates public trust and the responsible use of all AI systems while minimizing its potential negative impact,” he said.
MaineIT is the agency responsible for safeguarding the confidentiality, integrity and availability of all the state’s information systems and assets.
“As U.S. policy on AI continues to develop, caution must be taken to assess the risks involved with the use of generative AI technologies,” MaineIT wrote in its directive released Wednesday.
“Although these systems have many benefits, the expansive nature of this technology introduces a wide array of security, privacy, algorithmic bias, and trustworthiness risks into an already complex IT landscape,” the agency wrote.
According to MaineIT, AI systems “lack transparency in their design,” and their use “often involves the intentional or inadvertent collection and/or dissemination of business or personal data.”
The agency also raised concerns about AI’s “concerning and exploitable” security weaknesses, its ability to generate “credible-seeming” misinformation, and its potential to disseminate malware and execute sophisticated phishing techniques.
These factors, the agency writes, make it challenging to assess whether AI-based decisions are appropriate, fair, and in “alignment with organizational values” and “risk appetite.”
MaineIT’s directive states that the “complete risk associated with the use of [AI] remains unknown,” but that they are looking to federal guidance and “best practices” from the National Institute of Standards and Technology (NIST).
The agency will use the six-month moratorium to conduct a “holistic risk assessment” of AI and to develop policies and “responsible frameworks” governing the potential use of AI technology.
Last Wednesday, the European Parliament voted to begin negotiations on the “AI Act,” a law which would ban the use of AI systems for social scoring, biometric categorization, “predictive policing” and emotion recognition in European Union member states.
If adopted, the AI Act would be among the world’s first and most comprehensive laws aimed at mitigating the potentially harmful effects of artificial intelligence, and it could lead the global effort to regulate AI.
Below is a full copy of the June 21 cybersecurity directive obtained by the Maine Wire: