[Image: a calendar page showing Tuesday, July 22nd, alongside a representation of AI]

Mark Your Calendars: The AI Action Plan is coming soon!

July has arrived, which means we now have full-fledged summer outside our windows, a long weekend ahead, and something else worth watching: on July 23rd, the U.S. Executive Branch will announce its artificial intelligence (AI) Action Plan. Under Executive Order 14179, issued on January 23, 2025, the administration was tasked with delivering the AI Action Plan by July 22, 2025, outlining strategic goals for maintaining U.S. leadership in AI advancement and innovation. These include approaches to regulation, energy infrastructure, national security, and supply chain resilience.

The plan is also powered by 10,067 public comments submitted to the Office of Science and Technology Policy by a wide range of contributors: 10 non-federal government entities, 82 academic institutions, 178 professional associations, 194 nonprofit organizations, 291 private companies (including Anthropic and OpenAI), and 9,312 individuals, which certainly uplifts my civil society–oriented heart. Among the responses submitted to the Request for Information were those from major big tech players, something I discussed earlier this year.

The involvement of civil society in this process is especially commendable, as the future of our digital (r)evolution should never be left solely to CEOs—or, as I put it in my dissertation, “CEO-Kings.” (Back when I defended that dissertation in 2024, not everyone was ready for that term—but today, it barely needs explanation).

You can read all 600 megabytes of comments submitted to OSTP on the NITRD website (linked in the Works Cited below).

Among the plethora of responses, you will find a diverse array of voices hoping to shape the future of AI policy—including Indigenous peoples, established advocates for checks and balances in technology development such as the Electronic Frontier Foundation and the Future of Privacy Forum, as well as perspectives from both AI optimists and pessimists, think tanks, and insurance companies.

For example, the National Congress of American Indians called on the current administration to “address limited tribal data representation to improve AI systems” and “recognize tribal digital sovereignty to protect tribal rights in AI policy.”

There were also voices from those already affected by generative AI, such as Adam Szymczak, a visual designer, who wrote: “Do not create new copyright exemptions that allow Big Tech companies to exploit and steal from creators and everyday Americans without permission, compensation, or transparency.”

The Future of Life Institute urged policymakers to “foster human flourishing from AI by promoting the development of AI systems free from ideological agendas,” among other recommendations.

The above are just a few examples of the diversity of voices. Now, buckle up and brace yourselves for what is to come in three weeks. After all, the problems are pressing:

1). AI is an entire ecosystem of data, hardware, and software fueled by energy, something we are increasingly short on. Are we ready not only to equip data centers with the necessary GPUs but also to meet the estimated several-gigawatt energy requirement of each major AI campus? Forecasts project that U.S. data center demand will rise from approximately 35 GW in 2024 to 78 GW by 2035. And with green-energy policies facing growing challenges in recent months, are we truly prepared to invest in scalable solutions like green hydrogen to support this growth?
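To put that forecast in perspective, here is a quick back-of-the-envelope calculation of the average annual growth rate it implies. This is only a sketch: the 35 GW and 78 GW endpoints come from the cited forecast, and the calculation assumes smooth compound growth between them.

```python
# Implied average annual growth of U.S. data center power demand,
# from the cited forecast: ~35 GW in 2024 rising to ~78 GW by 2035.
demand_2024_gw = 35.0
demand_2035_gw = 78.0
years = 2035 - 2024  # 11-year horizon

# Compound annual growth rate needed to get from 35 GW to 78 GW.
cagr = (demand_2035_gw / demand_2024_gw) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # roughly 7.6% per year
```

In other words, the forecast implies data center demand growing at roughly 7 to 8 percent every year for over a decade, which is the scale of the infrastructure question raised above.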

2). How do we define AI safety—and how are we going to address it? Should it be approached through local (as recently contested), national, or international regulation? For example, should we build upon the Bletchley Declaration on AI governance by using it as a foundation for a legally binding framework?

3). There is still much more to consider and discuss, including questions about intellectual property rights as they relate to AI-generated content, as well as the future of employment and potential job displacement.

4). Should the U.S. government activate various national security provisions and approach Artificial General Intelligence (AGI) as a new Apollo Project—and if so, how can we ensure that safety and human rights are not compromised in the process?

5). Is the U.S. ready to compete with China in the race for AGI, something we have been grappling with even more intensely since the public launch of DeepSeek's V3-powered chatbot app on January 10, 2025? Do we need to implement additional export controls on U.S. AI systems and/or their critical components? Initial restrictions on the chips required to power large-scale AI models were expanded in October 2023 and were slated for further revision under new rules announced by the Biden administration in January 2025. Those rules, including a three-tier list of countries eligible for chip exports designed to limit China's AI development, were initially intended to take effect by May but were subsequently revised by the current administration.

6). What does the deployment of AI for security and military purposes mean for all of us, and how can it be assessed and regulated to ensure compliance with international humanitarian law, especially concerning autonomous weapons systems?

If any or all of these questions resonate with your daily reflections, we will soon have a document that attempts to answer some of them and will surely raise many more. What are your big AI questions? Please feel free to share them in the comments section. In the meantime, let us enjoy something AI cannot do (yet?): cherishing a beautiful summer.


Works Cited:

Chessen, Matt and Craig Marthell. (2025, April 22). Beyond a Manhattan Project for Artificial General Intelligence. Lawfare. Retrieved July 2, 2025, from https://www.lawfaremedia.org/article/beyond-a-manhattan-project-for-artificial-general-intelligence

Comments Received in Response To: Request for Information on the Development of an Artificial Intelligence (AI) Action Plan (“Plan”). (n.d.). The Networking and Information Technology Research and Development (NITRD) Program. Retrieved July 2, 2025, from https://www.nitrd.gov/coordination-areas/ai/90-fr-9088-responses/

Data centers to account for 8.6% of total U.S. electricity demand by 2035. (2025, April 23). Review Energy. https://www.review-energy.com/otras-fuentes/data-centers-to-account-for-86-of-total-us-electricity-demand-by-2035

National Congress of American Indians. Request for Information: Development of an Artificial Intelligence Action Plan. Retrieved July 2, 2025, from https://files.nitrd.gov/90-fr-9088/AI-RFI-2025-1772.pdf

Szymczak, Adam. Request for Information: Development of an Artificial Intelligence Action Plan. Retrieved July 2, 2025, from https://files.nitrd.gov/90-fr-9088/AI-RFI-2025-7591.pdf

The Bletchley Declaration on AI safety. (2023, November). Digital Watch Observatory. https://dig.watch/resource/the-bletchley-declaration

The Future of Life Institute. Request for Information: Development of an Artificial Intelligence Action Plan. Retrieved July 2, 2025, from https://files.nitrd.gov/90-fr-9088/FLI-AI-RFI-2025.pdf

The White House. (2025, January 23). Removing Barriers to American Leadership in Artificial Intelligence. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/