International AI Safety Report Is Out. Is Your Security In?  

You can access the entire report here [4].

This year, we are once again projected to see an increase in the use of Artificial Intelligence (AI) large language models: according to UnderstandingAI, OpenAI says weekly active accounts continue to grow, reaching about 800 million [8]. As the user base grows, it is essential to understand the downsides lurking behind cute cat images, dramatic music pieces, short videos, and summarized emails. On February 3rd, we got a handy primer on AI safety: the second global, scholarly report overseen by Yoshua Bengio, the world’s most cited AI safety scholar (1,047,356 citations on Google Scholar as of February 3rd), together with his team of contributors from industry, academia, and civil society around the world. The report is an outcome of the 2023 AI Safety Summit at Bletchley Park (United Kingdom), which delivered the Bletchley Declaration, mandating, among other things, an independent international safety report [4, 7].

AI safety has already been on the general public’s radar as deepfakes and instances of reliance on generative tools with mediocre outcomes have flooded academia, the justice system, and the media, among other fields. AI safety is also on the minds of diverse stakeholders, including policymakers: in 2025 alone, 1,208 AI-related bills were introduced across all 50 states, as reported by multistate.ai [1]. Among them is the landmark California SB 53, the “Transparency in Frontier Artificial Intelligence Act,” which requires frontier AI developers to publicly disclose their safety practices and report critical AI safety incidents, and which provides whistleblower protections [2]. As we move forward, various fears continue to materialize, ranging from cyberviolence directed especially against women and children, as reported by UN Women and the European Parliament Research Service [3, 6], to the accelerating impact on the labor market, as researched by Harvard University scholars Seyed Mahdi Hosseini Maasoum and Guy Lichtinger [5].

The International AI Safety Report, with Bengio at the helm, does two major things: it summarizes the key tenets of AI safety, from pressing day-to-day issues to longer-term perspectives, and it then turns to concrete examples of what has been happening and how we, as individuals and as a society, can mitigate these risks. For example, it offers interesting takes on risks to human skills and creativity, noting that clinicians’ ability to detect tumors “dropped by 6%” after several months of using AI tools [4, pp. 20–21].


Works Cited:

[1] Artificial Intelligence (AI) Legislation. (n.d.). MultiState.ai. Retrieved February 9, 2026, from https://www.multistate.ai/artificial-intelligence-ai-legislation

[2] California Legislature. (2025). Senate Bill No. 53: Artificial intelligence models: large developers (2025–2026 Reg. Sess.) (Cal. Stat. 2025, ch. 138). California Legislative Information. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53

[3] Cerise, S., Lentz, S., Bates, L., Mingeirou, K., & Osman, Y. (2025). How AI is exacerbating technology-facilitated violence against women and girls. UN Women – Headquarters. https://www.unwomen.org/en/digital-library/publications/2025/12/how-ai-is-exacerbating-technology-facilitated-violence-against-women-and-girls 

[4] International AI Safety Report. (2026, February 3). International AI Safety Report 2026. Retrieved February 7, 2026, from https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026

[5] Lichtinger, G., & Hosseini Maasoum, S. M. (2025, August 31). Generative AI as seniority-biased technological change: Evidence from U.S. résumé and job posting data. SSRN. https://ssrn.com/abstract=5425555

[6] Negreiro, M. (2025). Children and deepfakes. European Parliament Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775855/EPRS_BRI%282025%29775855_EN.pdf

[7] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. (n.d.). GOV.UK. Retrieved February 9, 2026, from https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 

[8] Williams, K. (2026, October 27). 16 charts that explain the AI boom. Understanding AI. https://www.understandingai.org/p/16-charts-that-explain-the-ai-boom