BLUF
Australia risks mishandling a major artificial intelligence incident unless it creates a dedicated crisis plan. Current frameworks were designed for other hazards and could delay clear decision-making, coordination, and fast, effective action. AI developers need to be involved early.

Learning Outcomes
• Understand emerging risk environments — The article explains why artificial intelligence incidents create novel risks that do not fit existing hazard categories.
• Evaluate governance and coordination frameworks — It assesses Australia’s current crisis arrangements and shows how unclear lines of authority, misaligned stakeholders, and legal gaps could undermine an effective response.