Bearing down on the United States is a volatile combination of political extremism, polarization, and alienation. Against this backdrop, alarm has been growing over the potentially disruptive effects of AI-generated content on elections. The causes of political dysfunction are deep and complex, and hardly limited to emerging technologies, but AI-generated content still warrants serious concern. To the extent it fuels the spread of inflammatory and radicalizing content, it threatens to spray gasoline on a fire.
This threat has prompted calls for regulation, and, in response, some jurisdictions have acted. Yet the history of campaign finance regulation in the United States calls into question the viability of this approach. This precedent suggests that limited pro-democratic resources might be better spent not on trying to suppress this technology through regulation, but rather on harnessing AI for positive ends—in other words, on fighting fire with fire.
An examination of campaign finance reform provides context. Over generations, federal, state, and local governments in the United States have enacted sweeping regulations of money in politics in an effort to reduce corruption, increase transparency, reduce disparities in speakers’ influence, and more. These reforms have come at a cost: they have raised genuine concerns over free speech and political entrenchment, and they have required enormous political capital to push forward.
The results have not been positive.
At the outset, the courts, and in particular the U.S. Supreme Court, have profoundly chipped away at these reforms, using the First Amendment as a cudgel. The result has been a dramatic distortion of already complicated regulatory regimes.
Enforcement of what remains has been far from adequate. At the federal level, the Federal Election Commission is dysfunctional to the point of satire (literally), and this outcome is no accident. Instead, it reflects a strategy of obstruction by those in power.
Meanwhile, technological advances have further undermined regulatory frameworks. Rules designed for television and print, for example, are poorly suited for the online platforms that now are central to political communication.
What’s more, the unintended consequences of campaign finance law have been profound and perverse: rather than strengthening parties or other potentially stabilizing institutions, regulation has empowered outside actors. This shift, in turn, appears to have produced a host of negative effects: it seems to have undermined transparency, helped to amplify the voices of the most privileged speakers in elections, and fueled polarization.
In short, it is fair to characterize campaign finance reform in the United States as a failure. This precedent sounds warning bells for efforts to regulate AI.
The parallels are uncanny. At the outset, significant regulation of AI in elections would also require Herculean efforts. Even if pursued in good faith, moreover, it again would trigger genuine concerns over free speech and political entrenchment.
Were regulation actually enacted, the courts likely would strike much of it down as unconstitutional. The distorted remains would depend on competent enforcement by institutions that, again, would be vulnerable to obstruction. It is easy to imagine, moreover, how technology in this space also would quickly outpace efforts to regulate it.
Finally, the unintended consequences seem likely to again be profound and perverse. By primarily affecting the actors who are most readily constrained, regulations of AI-generated content likely would, as a comparative matter, empower actors who are harder to regulate: outside interest groups, foreign agents, those willing to engage in criminal activity, and others who are likely to be disruptive to the democratic process.
So how to move forward? The law should and will still play a role. Preexisting legal restrictions will still apply, for example, to actors who seek to exploit new technologies to engage in voter intimidation, voter suppression, impersonation of a candidate, fraud, and so on. There may, moreover, be a role on the margins for newly enacted regulation, particularly with respect to disclosure.
Ultimately, however, the solution to the problem of AI in elections is not likely to be a legal solution. Instead, the path forward might need to be one of acceptance, coupled with a commitment to harnessing AI’s potential to advance pro-democratic ends. On this front, AI-related technology can be an excellent teacher, potentially educating voters about complicated concepts in an accessible way. It can reach people on an individual level, potentially extending outreach where traditional methods have failed. It can generate content cheaply and efficiently, potentially contributing to elections that cannot support more expensive methods. And it can evolve quickly, potentially counteracting threats that are themselves always changing.
Leaning into AI for pro-democratic purposes may not be easy or attractive. But like a backburn in a wildfire, it may be what is needed.
About the Author
Lisa Marshall Manheim
Manheim is the Charles I. Stone Professor of Law at the University of Washington School of Law. She writes and teaches in the areas of constitutional law, election law, and administrative law. Her work has appeared in leading academic journals as well as a range of national and international news outlets. She is a Co-Reporter on the Restatement of the Law, Election Litigation, a project of the American Law Institute.