New York Steps Up with First-of-Its-Kind AI Security Legislation
New York becomes the first state to pass a comprehensive AI safety bill targeting frontier AI models. The RAISE Act sets mandatory transparency and reporting standards for major AI developers. Lawmakers believe the bill strikes a balance between security and innovation, shielding smaller startups from excessive regulation. Despite swift industry backlash, officials argue that New York’s economic influence makes compliance inevitable.
New York State lawmakers have approved the RAISE Act, a groundbreaking bill focused on regulating the security of advanced AI systems. This legislation targets frontier AI models created by major companies like OpenAI, Google, and Anthropic. These models, trained on massive computing power and capable of significant impacts, have raised serious security concerns among policymakers and researchers alike.
The RAISE Act, which stands for the Responsible AI Safety and Education Act, aims to prevent catastrophic risks by requiring developers of powerful AI systems to submit detailed safety and security assessments. These assessments include reports on potential misuse, technical vulnerabilities, and incidents involving harmful behavior or data breaches. Noncompliant companies could face civil penalties of up to $30 million.
Guardrails Without Crushing Innovation
New York State Senator Andrew Gounardes, a co-sponsor of the bill, emphasized the critical need to legislate AI security swiftly due to the rapid evolution of this technology. Notable AI experts like Geoffrey Hinton and Yoshua Bengio have supported the measure.
What sets the RAISE Act apart from previous endeavors, such as California’s vetoed SB 1047, is its targeted nature. The bill specifically applies to companies whose AI models were trained using over $100 million in computing resources and are accessible to New York residents. Gounardes clarified that the focus is not on smaller startups and academic institutions, countering criticisms that the bill stifles innovation.
Backlash from Silicon Valley
Despite its limited scope, the RAISE Act has faced strong opposition from tech investors and corporations. Anjney Midha, a general partner at Andreessen Horowitz, dismissed the legislation as just another state-level AI bill. Critics argue that imposing state-by-state compliance requirements could lead major companies to withhold their most advanced AI products from New York residents entirely.
However, Assemblymember Alex Bores, another co-sponsor, rejected this concern, pointing to the strong economic incentive for companies to remain operational in New York. He expressed confidence that there is no economic rationale for AI companies to restrict New Yorkers’ access to their models.
Setting a National Precedent?
If signed into law by Governor Kathy Hochul, the RAISE Act would establish the first legally binding security and transparency standards tailored specifically for frontier AI systems in the United States. This would position New York at the forefront of developing ethical guidelines for AI deployment, surpassing federal initiatives and other state efforts.
Simultaneously, New York is exploring additional legislation addressing algorithmic discrimination and consumer protection. The proposed NY AI Act and the Protection Act seek to regulate AI usage in significant decisions like employment and credit, requiring audits, opt-out mechanisms, and human oversight.