Future of Life: How to Mitigate AI-Driven Power Concentration

Sponsor: Future of Life Institute
Solicitation Title: Future of Life: How to Mitigate AI-Driven Power Concentration
Event Type: Rolling Deadline (multiple review rounds)
Funding Amount: Up to $4,000,000
Sponsor Deadline: Monday, July 15, 2024
Solicitation Link: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/

Overview

Future of Life Institute is launching new grants to oppose and mitigate AI-driven power concentration.

AI development is on course to concentrate power within a small number of groups, organizations, corporations, and individuals. Whether this entails the hoarding of resources, media control, or political authority, such concentration would be disastrous for everyone. We risk governments tyrannising with Orwellian surveillance, corporate monopolies crushing economic freedom, and rampant automation subverting meaningful individual agency. To combat these threats, FLI is launching a new grants program of up to $4M to support projects that work to mitigate the dangers of AI-driven power concentration and move towards a better world of meaningful human agency.

FLI’s position on power concentration
The ungoverned acceleration of AI development is on course to further concentrate the bulk of power among a very small number of organizations, corporations, and individuals. This would be disastrous for everyone.

Power here could mean several things. It could mean the ownership of a decisive proportion of the world’s financial, labor or material resources, or at least the ability to exploit them. It could be control of public attention, media narratives, or the algorithms that decide what information we receive. It could simply be a firm grip on political authority. Historically, power has entailed some combination of all three. A world where the transformative capabilities of AI are rolled out unfairly or unwisely will likely see most if not all power centres seized, clustered and kept in ever fewer hands.

Such concentration poses numerous risks. Governments could weaponize Orwellian levels of surveillance and societal control, using advanced AI to supercharge social media discourse manipulation. Truth decay would be locked in and democracy, or any other meaningful public participation in government, would collapse.

Alternatively, giant AI corporations could become stifling monopolies with powers surpassing elected governments. Entire industries and large populations would increasingly depend on a tiny group of companies, with no satisfactory guarantees that benefits will be shared by all. In both scenarios, AI would secure cross-domain power within a specific group and render most people economically irrelevant and politically impotent. There would be no going back.

Another scenario would leave no human in charge at all. AI powerful enough to command large parts of the political, social, and financial economy is also powerful enough to do so on its own. Uncontrolled artificial superintelligences could rapidly take over existing systems, and then continue amassing power and resources to achieve their objectives at the expense of human wellbeing and control, quickly bringing about our near-total disempowerment or even our extinction.

What world would we prefer to see?
We must reimagine our institutions, incentive structures, and technology development trajectory to ensure that AI is developed safely, to empower humanity, and to solve the most pressing problems of our time. AI has the potential to unlock an era of unprecedented human agency, innovation, and novel methods of cooperation. Combating the concentration of power requires us to envision alternatives and viable pathways to get there.

Open release of AI models has been proposed as a potential solution to AI-driven power concentration. We are skeptical: today’s leading technology companies have grown and aggregated massive levels of power, even before generative AI, despite most core technology products having open source alternatives. Further, the benefits of “open” efforts often still favor entities with the most resources. Hence, it is reasonable to assume that open release may be a useful tool, especially for reducing some companies' dependence on others, but it is likely insufficient on its own to mitigate the continued concentration of power or to meaningfully put power into the hands of the general populace.

Topical focus:
Projects will fit this call if they address power concentration and are broadly consistent with the vision put forth above. Possible topics include but are not limited to:

  • “Public AI”, in which AI is developed and deployed outside of the standard corporate mode, with greater public control and accountability – how it could work, an evaluation of different approaches, specifications for a particular public AI system;
  • AI assistants loyal to individuals as a counterweight to corporate power – design specifications for such systems, and how to make them available;
  • Safe decentralization: how to decentralize governance of AI systems while still preventing the proliferation of high-risk systems;
  • Effectiveness of open-source: when has open-source mitigated vs. increased power concentration and how could it do so (or not) with AI systems;
  • Responsible and safe open-release: technical and social schemes for open release that take safety concerns very seriously;
  • Income redistribution: exploring agency in a world of unvalued labour, and redistribution beyond taxation;
  • Incentive design: how to set up structures that incentivise benefit distribution rather than profit maximisation, learning from (the failure to constrain) previous large industries with negative social effects, such as the fossil fuel industry;
  • How to equip our societies with the infrastructure, resources and knowledge to convert AI insights into products that meet true human needs;
  • How to align economic, sociocultural and governance forces to ensure powerful AI is used to innovate, solve problems, increase prosperity broadly;
  • Preference aggregation: New mechanisms for discerning public preferences on social issues, beyond traditional democratic models;
  • Legal remedies: how to enable effective legal action against possible abuses of power in the AI sector;
  • Meta: Projects that address the issue of scaling small pilot projects to break through and achieve impact;
  • Meta: Mechanisms to incentivize adoption of decentralized tools to achieve a societally significant critical mass.

Project proposals will undergo a competitive process of external and confidential expert peer review, evaluated according to the criteria described in this solicitation. A review panel will be convened to produce a final rank ordering of the proposals, and to make budgetary adjustments if necessary. Awards will be granted and announced after each review period.

 

Solicitation Limitations:

Grant awards are sent to the applicant’s institution, and the institution’s administration is responsible for disbursing the awards. For university applicants in particular, please make sure to list the appropriate grant administrator we should contact at your institution when submitting your application.

Other Information:

Proposals will be evaluated according to their relevance and expected impact.

The recipients could choose to allocate the funding in myriad ways, including:

  • Creating a specific tool to be scaled up at a later date;
  • Coordinating a group of actors to tackle a set problem;
  • Technical research reports on new systems;
  • Policy research;
  • General operating support for existing organizations doing work in this space;
  • Funding for specific new initiatives or even new organizations.

Evaluation Criteria & Project Eligibility

Grants totaling between $1M and $4M will be available to recipients at non-profit institutions, civil society organizations, and academic institutions for projects of up to three years' duration.

Anticipated Number of Awards: We anticipate awarding between $1M and $4M in grants; however, the actual total and number of grants will depend on the quality of the applications.

Applications will be accepted on a rolling basis and reviewed in one of two rounds. The first round of review for projects will begin on July 15, 2024 and the second round of review will be on September 15, 2024. 


RODA ID: 2479