Oct 21, 2025

Democracy in the Age of AI

Nate Persily

Congress AI


A majority of Americans consistently express concern about the effect that artificial intelligence will have on democracy. This widely shared anxiety is not terribly surprising: Americans have worried about the effect of technology on politics since the 2016 election, and the mainstream media continues to fan the flames with AI-specific headlines such as “AI deepfakes threaten to upend global elections. No one can stop them” and “AI is starting to wear down democracy.” This panic is itself a democracy problem, however, one that threatens to further erode confidence in election infrastructure and the information ecosystem.

To be clear, there are legitimate reasons to be concerned about the democracy-related effects of AI. As I have written elsewhere, AI amplifies the abilities of all actors in the democratic system, good and bad, to achieve the same goals they have always had. That includes purveyors of disinformation and hostile foreign actors, who can use AI tools to mount influence operations or even “kinetic” operations, such as a cyberattack on critical infrastructure. At the same time, those tools can help election officials convey information to voters more easily and allocate resources more efficiently, and they can lower the cost of campaigning for candidates who might replace a large staff with AI agents or produce advertisements and other campaign communications (in whatever language they want) at a fraction of what consultants now charge.

For the most part, those who worry about the effects of AI on democracy point to the impact of synthetic imagery, or “deepfakes,” on voters’ perceptions of the candidates or the electoral system. On this view, a well-timed fake video or audio recording might shift a critical mass of voters and determine the outcome of an election. It might also dissuade people from voting if, for example, a fake video suggested violence at a polling place or presented evidence of official malfeasance. Indeed, there are thousands of examples of AI tools being used to create political imagery or audio in the series of elections held in 2024. Most notably, the Romanian Constitutional Court annulled the results of the first round of its presidential election due to a Russian influence campaign that it said employed AI tools. And two days before the 2023 Slovak parliamentary elections, a faked audio clip suggesting vote buying allegedly swung the result. Yet even in these two most notorious cases, it is very difficult to disentangle the role of AI from other, more classic forms of propaganda and election meddling.

In the 2024 U.S. election, synthetic imagery was ubiquitous, but no evidence suggests it had any effect on votes. In fact, its use in the American context serves as an object lesson in how such imagery interacts with memetic politics as part of an influence campaign, wholly apart from the “fake” nature of the content. Perhaps the most glaring example of AI-generated imagery surrounded the false story about Haitian immigrants in Ohio eating pets. Thousands of images flooded the internet: Donald Trump hugging cats, dogs, and ducks; a guinea pig in a MAGA hat; a cat holding up a sign that said “Kamala hates me.” None of these images was believable, but they helped reinforce a false narrative about what had happened in Ohio, which itself was less about the facts of that specific situation and more about the larger anti-immigration themes of the campaign.

The more consequential strategy with respect to AI and elections comes from elites dismissing true images as AI-generated. In the U.S. election, for example, Donald Trump claimed that Kamala Harris had “A.I.’d” her crowd sizes at a rally. We have seen the same tactic in other countries, as politicians seek to mute the impact of a scandal by saying it is the product of AI. This “liar’s dividend” may be the more serious democracy-endangering consequence of AI. However ubiquitous it becomes, synthetic imagery will still make up only a small share of the political content most people consume, but the distrust generated by that small share will undermine confidence in the true information that remains far more prevalent. In this way, our panic over AI may exacerbate the ongoing erosion of authority that endangers democratic deliberation and leads people to believe that “nothing is true and everything is possible.”

We may later look back on the AI revolution and realize that “AI’s democracy problems” were inseparable from the other challenges this new technology poses. If AI leads to massive labor force disruptions, or if it enables governments to better surveil and censor their populations, those are democracy problems as well. In the end, the best way to realize the upsides and mitigate the downsides of AI for democracy would be for democracies to ally to ensure that they continue to lead in AI model development, deployment, and diffusion, and to steer this technology toward a pro-democratic future.


About the Author

Nate Persily

Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of Political Science and Communication and the Freeman Spogli Institute for International Studies. He is the Founding Co-Director of the Stanford Cyber Policy Center and its Program on Democracy and the Internet, as well as the Stanford-MIT Healthy Elections Project. Professor Persily’s scholarship and legal practice address voting rights, political parties, campaign finance, redistricting, and election administration, all topics covered in his coauthored election law casebook, The Law of Democracy (Foundation Press, 6th ed., 2020), with Samuel Issacharoff, Pamela Karlan, Richard Pildes, and Franita Tolson.
