Billionaire Foundations' Invisible Grip on AI
$500 Million and Counting: How Philanthropy Is Steering Tomorrow's Defining Technology Without Public Oversight
Last October, ten of America's wealthiest foundations announced they would spend $500 million over five years to influence how artificial intelligence develops. The same month, OpenAI transformed itself into what may become the world's richest philanthropy, holding equity worth $130 billion. Together, these moves represent an unprecedented injection of philanthropic capital into technology governance.
Yet basic questions about how this money will be spent, who will receive it, and what outcomes it's meant to achieve are not publicly disclosed. The foundations promoting AI transparency operate with less disclosure than the tech companies they seek to influence.
This investigation, based on public records, regulatory filings, and academic research, reveals a widening gap between philanthropic rhetoric about AI accountability and the opacity surrounding foundation decision-making itself.
The $500 Million Question
On October 14, 2025, the Humanity AI coalition announced its formation with considerable fanfare. The ten founding members (Doris Duke Foundation, Ford Foundation, Lumina Foundation, Kapor Foundation, MacArthur Foundation, Mellon Foundation, Mozilla Foundation, Omidyar Network, Packard Foundation, and Siegel Family Endowment) collectively control assets exceeding $50 billion.
"The stakes are too high to defer decisions to a handful of companies and leaders within them," MacArthur Foundation President John Palfrey said in the announcement.1
The coalition identified five priority areas: democracy, education, arts and culture, labor, and security. Rockefeller Philanthropy Advisors will manage a pooled fund, with grants beginning in 2026. Individual foundations started making "aligned grants" in fall 2025.
More than two months after the announcement, critical details remain undisclosed. The coalition's website offers no grant database, no timeline for public reporting, and no clarity on how success will be measured.
This stands in contrast to the transparency foundations demand from others. "Public trust in these technologies would significantly benefit from access to information regarding, and increased awareness of, frontier AI capabilities," reads California's SB 53, legislation that advocacy groups supported.2 The law, which took effect January 1, requires AI developers to publish detailed safety frameworks and report incidents to state authorities.
Foundations face no equivalent disclosure requirements for their AI grantmaking.
Following the Money (Or Trying To)
U.S. foundations must file annual Form 990 reports with the Internal Revenue Service, disclosing basic financial data and listing their largest grants. These documents provide some visibility, but with significant limitations.
Form 990s are filed months after a tax year ends and published even later. The most recent complete filings available for most foundations cover 2023 (two years before Humanity AI launched). When 2025 data becomes available in late 2026, it will show where money went but not why, how recipients were selected, or what impact foundations sought.
Foundations can also aggregate smaller grants into broad categories, redact "commercially sensitive" information, and decline to specify program strategies. A foundation might report "$50 million in grants for AI policy" without identifying recipients or explaining objectives.
This matters because foundations increasingly fund policy advocacy alongside research. The line between supporting academic inquiry and shaping regulatory outcomes is often unclear (and foundations don't have to clarify it).
Consider the Packard Foundation, which appears as both a Humanity AI member and a Frontier Model Forum funder.3 Does this dual role create coordination between philanthropic and industry approaches to AI safety?
The OpenAI Restructuring: Conflicts Built In
Two weeks after Humanity AI launched, OpenAI announced it had completed a controversial restructuring. The company's original nonprofit, now renamed OpenAI Foundation, holds 26% of the for-profit OpenAI Group PBC. At the company's current valuation, that stake is worth approximately $130 billion.4
This instantly made the foundation one of the world's wealthiest philanthropies. It announced plans to commit $25 billion to health research and "AI resilience," though specifics remain undefined.
The foundation's first significant action came December 3, 2025, when it announced $40.5 million in unrestricted grants to 208 nonprofits through its People-First AI Fund. Recipients ranged from STEM From Dance, which integrates dance and AI education for young girls of color, to Digital NEST in California's agricultural communities and Tribal Education Departments National Assembly. The grants came from nearly 3,000 applications received during a one-month window.5
The announcement provided recipient names and general descriptions but limited financial detail. Grant amounts to individual organizations were not disclosed. Selection criteria emphasized mid-sized organizations with annual budgets between $500,000 and $10 million, but the specific evaluation process, decision-makers, and reasons particular applications succeeded or failed remain undisclosed. A second wave of $9.5 million in "Board-directed grants" focusing on health initiatives is expected in early 2026, with even less transparency about selection.
The structure raises obvious questions. The foundation board appoints all members of the for-profit board through special voting rights. But seven of the eight foundation directors also serve on the for-profit board, including CEO Sam Altman.6 The foundation's wealth depends entirely on the for-profit company's success.
"There's a bazillion conflicts of interest here," said Judith Bell, chief impact officer at San Francisco Foundation and a member of Eyes on OpenAI, a coalition of California nonprofits that opposed the restructuring.7
California Attorney General Rob Bonta spent months investigating whether the conversion properly protected charitable assets. After negotiations, his office approved the deal but noted ongoing monitoring authority. "We secured concessions that ensure charitable assets are used for their intended purpose," Bonta stated.8
What those concessions entail is not public. Neither is the valuation methodology used to determine whether the foundation's stake was appropriate compensation for the nonprofit's original assets.
Industry Coordination, Philanthropic Support
Two years before Humanity AI formed, four AI companies (Anthropic, Google, Microsoft, and OpenAI) created the Frontier Model Forum. Announced July 26, 2023, the industry body focuses on developing "best practices" for frontier AI safety.9 Amazon and Meta joined in May 2024.
The forum established a $10 million AI Safety Fund in October 2023, supported by the founding companies and philanthropic partners: McGovern Foundation, Packard Foundation, Eric Schmidt, and Jaan Tallinn.10 The fund supports research on model evaluation, red-teaming techniques, and risk assessment.
The forum's grant-making provides a window into how philanthropic and industry funding intersect (though only a partial window). On December 11, 2025, it announced more than $5 million in grants to 11 organizations selected from over 100 competitive proposals. Recipients included Apollo Research for "building black box scheming monitors for frontier AI agents," California Institute of Technology for "AI-driven detection of protein mimetic biothreats with BioSentinel," and Faculty AI for "automated red-teaming for biosecurity risks."11
The announcements describe what projects received funding and their general focus areas. What they don't disclose: How much each organization received, who made selection decisions, what criteria were used, whether any applications were rejected due to policy positions, or how foundation funders (McGovern, Packard) influenced priorities. The forum states projects were selected through a "rigorous review process" but provides no details on that process or its participants.
On its face, this represents collaboration between industry and philanthropy to address AI safety challenges. But it also raises questions about whether philanthropic funding reinforces industry-preferred approaches over independent alternatives.
The forum's executive director, Chris Meserole, came from the Brookings Institution's AI and Emerging Technology Initiative. Brookings receives funding from tech companies and foundations, though specific amounts are not disclosed.
The forum's recent work includes guidance on "frontier capability assessments" (procedures for evaluating whether models pose catastrophic risks). These assessments are voluntary and conducted by companies on their own systems. The forum recommends practices but sets no enforceable standards.12
The Transparency Paradox
As foundations mobilize to promote AI accountability, transparency in the AI industry is declining sharply.
Stanford University's Center for Research on Foundation Models released its third annual Foundation Model Transparency Index in December 2025, scoring 13 major AI developers across 100 indicators. The results were stark: the average score fell from 58 out of 100 in 2024 to 40 in 2025 (an 18-point drop that reversed prior gains).13
Only 30% of companies submitted their own transparency reports in 2025, down from 74% the prior year. IBM's Granite models scored highest at 95, while xAI and Midjourney tied for lowest at 14.
Companies were particularly opaque about training data, computational resources, and environmental impacts. Ten companies (including Google, OpenAI, and Anthropic) disclosed no information about energy usage, carbon emissions, or water consumption.14
Meta's score plummeted from 60 to 31, driven partly by its decision not to release a technical report for Llama 4, its flagship model. Google significantly delayed documentation for Gemini 2.5, prompting British lawmakers to demand explanations.15
The Index does not evaluate foundations, which operate under different rules. But the parallel is notable: organizations advocating for AI transparency maintain limited disclosure about their own activities.
The report, led by Stanford researcher Rishi Bommasani, is blunt about where disclosure is weakest: "Companies are most opaque about their training data and training compute as well as the post-deployment usage and impact of their flagship models."16
California Leads, Washington Pushes Back
Against this backdrop, California enacted the nation's first comprehensive AI safety law for frontier models.
Senate Bill 53, signed by Governor Newsom on September 29 and effective January 1, 2026, requires developers of the most advanced AI systems to publish safety frameworks, report critical incidents, and protect whistleblowers.17
The law defines "frontier models" as systems trained using more than 10^26 floating-point operations (a threshold few companies have publicly confirmed meeting). It mandates transparency reports detailing model capabilities, intended uses, and risk assessments. Companies must establish internal safety protocols and allow employees to report concerns without retaliation.
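To give that threshold a sense of scale, here is a minimal illustrative sketch (not part of the statute) that checks a training run against it using the widely cited rule of thumb that training compute is roughly 6 × parameters × training tokens. The function names and the example model are hypothetical; real compliance determinations would rest on measured compute, not this approximation.

```python
# SB 53's "frontier model" line: more than 10^26 floating-point operations
# of training compute.
SB53_THRESHOLD_FLOPS = 1e26


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6 * N * D heuristic."""
    return 6 * params * tokens


def exceeds_sb53_threshold(params: float, tokens: float) -> bool:
    """True if the estimated compute crosses the statutory threshold."""
    return estimated_training_flops(params, tokens) > SB53_THRESHOLD_FLOPS


# A hypothetical 400-billion-parameter model trained on 15 trillion tokens
# comes out around 3.6e25 FLOPs, below the 10^26 line:
print(exceeds_sb53_threshold(4e11, 1.5e13))   # False
# A hypothetical 2-trillion-parameter model on 20 trillion tokens (~2.4e26)
# would cross it:
print(exceeds_sb53_threshold(2e12, 2e13))     # True
```

The heuristic illustrates why the threshold captures only a handful of labs: even very large present-day training runs sit near, not far above, 10^26 operations.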
SB 53 takes a notably different approach than earlier proposals. It emphasizes transparency and reporting over pre-deployment approval or mandatory testing. "Greater transparency can also advance accountability, competition, and public trust," the legislation states.18
New York followed less than three months later. On December 19, Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, making New York the second state with comprehensive frontier AI regulation. The law's journey reveals the intensity of tech industry lobbying.
Legislators originally passed RAISE in June 2025 with stricter requirements than California's law, including prohibitions on deploying models posing "unreasonable risk of critical harm" and mandates for detailed safety standards. But following months of industry pressure and the Trump administration's December 11 executive order threatening to override state AI laws, Hochul proposed chapter amendments that essentially rewrote RAISE to closely mirror California's SB 53.19
Legislators negotiated back some provisions. The final version requires AI companies to report critical safety incidents within 72 hours of determining one occurred (stricter than California's 15-day window). It creates a new oversight office within New York's Department of Financial Services with broader authority than California's approach. And it maintains stronger language requiring developers to explain in "detail" how they will "handle" rather than merely "approach" various risks.20
But significant elements were removed: the prohibition on deploying unsafe models, requirements for foreign AI developers, and steeper penalties ($10 million initial violations in the original bill, reduced to $1 million in the final version). The law takes effect January 1, 2027 (a year later than California's).
"Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country," state Senator Andrew Gounardes posted after the signing.21 Yet the final product reflects substantial industry influence.
Both OpenAI and Anthropic expressed support for the New York law while calling for federal legislation. "The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them," Anthropic's head of external affairs Sarah Heck told The New York Times.22
The alignment between California and New York creates what may become a de facto national standard for frontier AI regulation (shaped significantly by industry preferences about what "reasonable" transparency looks like).
Washington is pushing the other way. On December 11, President Donald Trump signed an executive order directing federal agencies to challenge state AI laws. The order frames state regulations as obstacles to American competitiveness, stating: "To win, United States AI companies must be free to innovate without cumbersome regulation."23
The order directs the Justice Department to establish an "AI Litigation Task Force" to sue states over their AI laws. It instructs the Commerce Department to identify "onerous" state regulations and threatens to withhold broadband funding from states that don't comply with federal policy. It directs the Federal Trade Commission and Federal Communications Commission to issue guidance asserting federal preemption of state rules.24
Legal experts question whether the administration has authority to override state laws through executive action. "An executive order doesn't/can't preempt state legislative action," Florida Governor Ron DeSantis, a Republican, posted on X. "Congress could, theoretically, preempt states through legislation."25
"They're trying to find a way to bypass Congress with these various theories in the executive order," John Bergmayer, legal director of Public Knowledge, told NPR. "Legally, I don't think they work very well."26
The order does not directly invalidate existing state laws (only Congress or courts can do that). But it signals aggressive federal opposition to state-level AI governance, creating uncertainty for companies, researchers, and state regulators.
What Can't Be Known
This investigation sought to answer basic questions about foundation influence on AI policy. Most information is not publicly available due to limited disclosure requirements.
Public disclosure varies significantly among foundations active in AI. Open Philanthropy maintains a comprehensive, searchable grants database listing recipients, amounts, dates, purposes, and selection criteria for each grant. The organization publishes annual progress reports and describes its evaluation methods, including a forecasting program where staff track predictions about grant outcomes. The OpenAI Foundation published its complete list of 208 People-First AI Fund grantees along with funding amounts and application process details. Schmidt Sciences disclosed its 27 AI safety program awardees and program funding levels.
Yet significant questions remain about foundation operations and influence:
Coordination and strategy: Do Humanity AI foundations coordinate their grantmaking to avoid duplication or gaps? Do they share information about grantees or coordinate with Frontier Model Forum companies when developing AI safety strategies? What role do policymakers play in shaping foundation priorities?
Accountability mechanisms: While some foundations describe evaluation approaches, comprehensive frameworks for measuring success remain unclear. What specific benchmarks do foundations use? How are grantees held accountable for deliverables? Will evaluation results be publicly reported? What happens when grants fail to achieve stated objectives?
Conflicts of interest: Some board relationships are publicly known. Patrick Collison served on Meta's AI advisory board before joining the company's board of directors; Open Philanthropy provided early funding to OpenAI; Eric Schmidt's extensive AI industry connections are well-documented. However, comprehensive conflict of interest policies and systematic disclosure of all financial relationships between foundation leadership and AI companies are not uniformly published. Do foundations have formal processes for managing such conflicts? Are these policies publicly available?
Industry influence: Do foundations consult with AI companies when developing grant strategies? When foundation board members have financial stakes in AI companies, how do foundations ensure independence? When foundations fund both research organizations and advocacy groups, how do they maintain the distinction between supporting independent inquiry and advancing specific policy positions?
The Accountability Gap
The situation creates a striking asymmetry. Foundations advocate for transparency from tech companies and governments but operate largely outside equivalent scrutiny themselves.
Tech companies, despite declining transparency scores, still face pressure from investors, regulators, researchers, and civil society. Their products receive public testing. Their business models undergo analysis. Their leadership answers to boards and shareholders.
Foundations answer primarily to their boards (often family members or hand-picked allies) and to state attorneys general with limited resources. They're not required to justify grant decisions, demonstrate impact, or respond to public criticism. They can and do ignore media inquiries.
This matters because foundations increasingly occupy influential positions in technology governance. They fund the research that shapes policy debates. They convene the experts who advise lawmakers. They support the advocacy organizations that lobby for regulations.
When that influence operates behind closed doors, the public has no way to assess whether foundations are genuinely serving public interest or advancing narrower agendas.
Historical Echoes
Foundation influence on public policy is not new. In the 1950s, the Reece Committee investigated whether tax-exempt foundations were improperly shaping education and social policy through strategic grantmaking. The investigation was contentious and its conclusions disputed, but it raised enduring questions about foundation accountability.27
The committee's findings were striking. Foundation archives revealed coordinated efforts among the Carnegie, Rockefeller, and Ford foundations to influence American education and social policy. The foundations had effectively divided territories: Carnegie focused on international education, Rockefeller on domestic education, Ford provided financial muscle. Committee investigators documented foundation funding of textbook development, university departments, and scholarly networks; all designed to shape how Americans understood their history and their role in the world.
The investigation found that foundations used their tax-exempt billions to promote what the committee called "empiricism, collectivism, and moral relativism" in American institutions. Foundation trustees, the committee noted, couldn't possibly oversee what their organizations actually did. A professional class of foundation administrators had emerged. This became an informal guild controlling vast resources with minimal accountability, placing their people in government agencies and universities while shaping policy from behind the scenes.
The investigation faced immediate opposition. Major foundations mobilized. The media attacked the committee as engaging in witch hunts. Congressional opponents worked to undermine the hearings from within. Within weeks, the investigation was effectively shut down. The committee managed to issue a final report, but it received almost no attention. The findings were buried.
René Wormser, the committee's general counsel, later published the findings in Foundations: Their Power and Influence (1958). He documented what the investigation had found but couldn't make the public hear: that tax exemption without accountability created power without responsibility. That foundations could operate as a parallel government, accountable only to their own trustees. That the most dangerous form of influence was the kind exercised quietly, through funding decisions and personnel placements, wrapped in the language of charity.
Those questions resurface with AI. Who decides what "safe" AI means? Whose values get embedded in systems that will shape society for decades? Can the public trust that decisions are being made in its interest when the decision-making process is opaque?
Defenders of philanthropic engagement note that foundations provide necessary counterweights to corporate power. They fund independent research that might otherwise not happen. They support civil society voices that corporations would ignore. They operate under legal obligations to serve charitable purposes and face state attorney general oversight.
What Happens Next
Several developments in 2026 will test these dynamics:
The European Union's AI Act takes full effect August 2, establishing comprehensive rules for high-risk AI systems. How U.S. companies and foundations respond will signal whether global regulatory standards are emerging (and whether American approaches align with or resist them).28
Humanity AI foundations will begin distributing pooled funds, providing the first concrete evidence of how $500 million shapes AI governance. Which organizations receive funding and what they produce will clarify whether foundations are genuinely supporting diverse perspectives or reinforcing particular approaches.
The OpenAI Foundation will face its first tests of independence as the for-profit entity pursues growth and profitability. How the foundation balances oversight with commercial imperatives will reveal whether the structure can actually work.
Federal challenges to state AI laws will move through courts, determining whether states retain authority to regulate AI or whether federal policy preempts state action. These cases could reshape the entire AI governance landscape.
Most importantly, foundation tax filings for 2025 will become public in late 2026, offering the first detailed look at where Humanity AI money went. Comparing foundation statements to actual grant patterns may reveal whether rhetoric matches reality.
The Core Issue
This investigation cannot prove that foundations are coordinating to capture AI policy, because the evidence to prove or disprove such coordination is not publicly available. That's precisely the problem.
When billions of dollars flow into AI governance through philanthropic channels, the public deserves to know where the money goes, why specific organizations receive it, and what outcomes foundations seek. The tax-exempt status foundations enjoy represents a public subsidy (in effect, taxpayers are funding foundation activities through foregone revenue).
In return, the public gets minimal disclosure and even less accountability.
Foundations argue they need flexibility to be effective. Fair enough. But flexibility without transparency breeds suspicion. When foundations advocate for AI systems that serve public interest while operating behind closed doors themselves, the contradiction undermines their credibility.
The question is not whether foundations should engage in AI governance. They clearly will, and arguably should. The question is whether they'll do so transparently enough that the public can assess whether that engagement serves broad social benefit or narrower interests.
Right now, the public can't make that assessment. The architecture of AI governance is being constructed largely out of view, by actors whose motivations and methods remain opaque.
That should concern everyone (regardless of one's views on AI regulation). Democracy requires visibility into power. Technology governance affecting billions of people requires visibility into who's shaping that governance and why.
Foundations demanding transparency from others should lead by example. Until they do, questions about their role in AI policy will persist (and intensify).
Notes
1. MacArthur Foundation, "Humanity AI Commits $500 Million to Build a People-Centered Future for AI," press release, October 14, 2025, https://www.macfound.org/press/press-releases/humanity-ai-commits-500-million-to-build-a-people-centered-future-for-ai.
2. California Legislature, Senate Bill 53: Transparency in Frontier Artificial Intelligence Act (2025), § 1, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB53.
3. Frontier Model Forum, "Anthropic, Google, Microsoft and OpenAI Announce Executive Director of the Frontier Model Forum and Over $10 Million for a New AI Safety Fund," October 25, 2023, https://www.frontiermodelforum.org/.
4. OpenAI, "Built to Benefit Everyone," October 28, 2025, https://openai.com/index/built-to-benefit-everyone/.
5. OpenAI Foundation, "Announcing the Initial People-First AI Fund Grantees," December 3, 2025, https://openai.com/index/people-first-ai-fund-grantees/.
6. OpenAI, "Built to Benefit Everyone."
7. Quoted in CalMatters, "OpenAI's Restructuring Deal with California Is Full of Holes, Critics Say," October 30, 2025.
8. California Attorney General, "Statement on OpenAI Restructuring," California Department of Justice, October 28, 2025.
9. Microsoft, "Microsoft, Anthropic, Google, and OpenAI Launch Frontier Model Forum," July 26, 2023, https://blogs.microsoft.com/on-the-issues/2023/07/26/anthropic-google-microsoft-openai-launch-frontier-model-forum/.
10. Frontier Model Forum, "Anthropic, Google, Microsoft and OpenAI Announce Executive Director."
11. Frontier Model Forum, "Announcement of New AI Safety Fund Grantees," December 11, 2025, https://www.frontiermodelforum.org/updates/announcement-of-new-ai-safety-fund-grantees/.
12. Frontier Model Forum, "Frontier Capability Assessments: Emerging Industry Practices," 2024, https://www.frontiermodelforum.org/.
13. Stanford HAI, "Transparency in AI Is on the Decline," December 9, 2025, https://hai.stanford.edu/news/transparency-in-ai-is-on-the-decline.
14. Rishi Bommasani et al., "The 2025 Foundation Model Transparency Index," Stanford Center for Research on Foundation Models, 2025, https://crfm.stanford.edu/fmti/December-2025/paper.pdf.
15. Stanford HAI, "Transparency in AI Is on the Decline."
16. Bommasani et al., "The 2025 Foundation Model Transparency Index," 3.
17. California Legislature, Senate Bill 53.
18. Ibid., § 1.
19. City & State NY, "Hochul Signs Watered Down AI Regs, but Lawmakers Still Got Some Wins," December 20, 2025, https://www.cityandstateny.com/policy/2025/12/hochul-signs-watered-down-ai-regs-lawmakers-still-got-some-wins/410328/.
20. New York Legislature, Responsible AI Safety and Education (RAISE) Act (2025), https://www.nysenate.gov/legislation/bills/2025/A6453.
21. Quoted in TechCrunch, "New York Governor Kathy Hochul Signs RAISE Act to Regulate AI Safety," December 20, 2025, https://techcrunch.com/2025/12/20/new-york-governor-kathy-hochul-signs-raise-act-to-regulate-ai-safety/.
22. Quoted in ibid.
23. White House, "Executive Order on Ensuring a National Policy Framework for Artificial Intelligence," December 11, 2025, https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.
24. Ibid.
25. Ron DeSantis (@GovRonDeSantis), "An executive order doesn't/can't preempt state legislative action," X (formerly Twitter), December 9, 2025, https://x.com/GovRonDeSantis.
26. Quoted in NPR, "Trump Is Trying to Preempt State AI Laws via an Executive Order. It May Not Be Legal," December 11, 2025, https://www.npr.org/2025/12/11/nx-s1-5638562/trump-ai-david-sacks-executive-order.
27. U.S. House of Representatives, Special Committee to Investigate Tax-Exempt Foundations and Comparable Organizations (Reece Committee), 1954.
28. European Parliament, Regulation (EU) 2024/1689 on Artificial Intelligence, Official Journal of the European Union, 2024.
Bibliography
Bommasani, Rishi, Kevin Klyman, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, and Percy Liang. "The 2025 Foundation Model Transparency Index." Stanford Center for Research on Foundation Models, 2025. https://crfm.stanford.edu/fmti/December-2025/paper.pdf.
California Attorney General. "Statement on OpenAI Restructuring." California Department of Justice, October 28, 2025.
California Legislature. Senate Bill 53: Transparency in Frontier Artificial Intelligence Act. 2025. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB53.
City & State NY. "Hochul Signs Watered Down AI Regs, but Lawmakers Still Got Some Wins." December 20, 2025. https://www.cityandstateny.com/policy/2025/12/hochul-signs-watered-down-ai-regs-lawmakers-still-got-some-wins/410328/.
DeSantis, Ron (@GovRonDeSantis). "An executive order doesn't/can't preempt state legislative action." X (formerly Twitter), December 9, 2025. https://x.com/GovRonDeSantis.
European Parliament. Regulation (EU) 2024/1689 on Artificial Intelligence. Official Journal of the European Union, 2024.
Frontier Model Forum. "Announcement of New AI Safety Fund Grantees." December 11, 2025. https://www.frontiermodelforum.org/updates/announcement-of-new-ai-safety-fund-grantees/.
Frontier Model Forum. "Anthropic, Google, Microsoft and OpenAI Announce Executive Director of the Frontier Model Forum and Over $10 Million for a New AI Safety Fund." October 25, 2023. https://www.frontiermodelforum.org/.
Frontier Model Forum. "Frontier Capability Assessments: Emerging Industry Practices." 2024. https://www.frontiermodelforum.org/.
MacArthur Foundation. "Humanity AI Commits $500 Million to Build a People-Centered Future for AI." Press release, October 14, 2025. https://www.macfound.org/press/press-releases/humanity-ai-commits-500-million-to-build-a-people-centered-future-for-ai.
Microsoft. "Microsoft, Anthropic, Google, and OpenAI Launch Frontier Model Forum." July 26, 2023. https://blogs.microsoft.com/on-the-issues/2023/07/26/anthropic-google-microsoft-openai-launch-frontier-model-forum/.
New York Legislature. Responsible AI Safety and Education (RAISE) Act. 2025. https://www.nysenate.gov/legislation/bills/2025/A6453.
NPR. "Trump Is Trying to Preempt State AI Laws via an Executive Order. It May Not Be Legal." December 11, 2025. https://www.npr.org/2025/12/11/nx-s1-5638562/trump-ai-david-sacks-executive-order.
OpenAI. "Built to Benefit Everyone." October 28, 2025. https://openai.com/index/built-to-benefit-everyone/.
OpenAI Foundation. "Announcing the Initial People-First AI Fund Grantees." December 3, 2025. https://openai.com/index/people-first-ai-fund-grantees/.
Stanford HAI. "Transparency in AI Is on the Decline." December 9, 2025. https://hai.stanford.edu/news/transparency-in-ai-is-on-the-decline.
TechCrunch. "New York Governor Kathy Hochul Signs RAISE Act to Regulate AI Safety." December 20, 2025. https://techcrunch.com/2025/12/20/new-york-governor-kathy-hochul-signs-raise-act-to-regulate-ai-safety/.
U.S. House of Representatives. Special Committee to Investigate Tax-Exempt Foundations and Comparable Organizations (Reece Committee). 1954.
White House. "Executive Order on Ensuring a National Policy Framework for Artificial Intelligence." December 11, 2025. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.