Tax Notes Talk
With Tax Notes Talk, you’ll never miss the latest in tax news and analysis. Tune in to hear experts from around the world weigh in on the issues shaping the world of tax.
Taxing Generative AI: The Future of Tax Policy and Tech
Professors Jeremy Bearer-Friend and Sarah Polcz discuss their recent paper, “Sharing the Algorithm: The Tax Solution to Generative AI,” which outlines their proposal for taxing generative AI companies.
For more, read Bearer-Friend and Polcz's article.
***
Credits
Host: David D. Stewart
Executive Producers: Jeanne Rauch-Zender, Paige Jones
Producers: Jordan Parrish, Peyton Rhodes
Audio Engineers: Jordan Parrish, Peyton Rhodes
***
This episode is sponsored by Portugal Pathways. For more information, visit portugalpathways.io.
This episode is sponsored by the University of California Irvine School of Law Graduate Tax Program. For more information, visit law.uci.edu/gradtax.
This transcript has been edited for clarity.
David D. Stewart: Welcome to the podcast. I'm David Stewart, editor in chief of Tax Notes Today International. This week: equitable taxation of AI.
The rapid evolution of generative artificial intelligence has introduced new public policy concerns, concerns that governments are struggling to address as innovation speeds ahead. While there are many regulatory proposals for mitigating the worst effects of AI, what role might tax policy have in bringing balance between the harms and the benefits?
In their recently published paper, "Sharing the Algorithm: The Tax Solution to Generative AI," professors Jeremy Bearer-Friend and Sarah Polcz argue for a tax approach which would create partial public ownership of generative AI companies.
They join me now to delve into their proposal. Jeremy, Sarah, welcome to the podcast.
Jeremy Bearer-Friend: Thanks for having us.
Sarah Polcz: Great to be here.
David D. Stewart: So why don't we start off with laying out what the public policy concerns of this new era of generative AI are?
Sarah Polcz: Sure, happy to walk through these. We focus on some core public policy concerns raised by generative AI. The first is that tax is one of the government's principal instruments for achieving distributive justice. It's a society-wide policy tool that delivers society-wide benefits, unlike, say, private-law remedies that focus on specific victims and specific perpetrators. And that really matches the harms of AI, which are on a societal scale. So we need a societal-scale tool like tax.
Copyright litigation — which is one way people have tried to address some of the earlier harms — just connects one plaintiff to one defendant, whereas tax law can redistribute across the entire population. The policy concerns involve a couple of main areas. One is stolen data. It's very widely accepted that the works on which the LLMs [large language models] have been trained are typically unauthorized copies of protected works. There are hundreds of millions of unauthorized copies that are really driving AI models.
Another factor is distributive justice. We are concerned about the inequality resulting from AI's transformation of our economy, the concentration of wealth among those who are funding and investing in these technologies, and the shutdown of more traditional, particularly creative, economies.
On top of that, we've got algorithmic discrimination. AI learns from historical data. Its outputs, as a consequence, can amplify prejudices that are already in the data. And this is something that's not just hypothetical, it's already been documented in the way that AI systems have produced biases in hiring, lending, law enforcement.
And beyond these legally actionable contexts, we were concerned about AI and this broader normalization problem. Citizens are going to be interacting with and consuming chatbot output, and AI's learned biases can erode civil rights gains across many domains of civic life. This needs to be managed, and our proposal allows for increasing transparency requirements and bias auditing that other forms of redress don't really offer.
So those are the core policy concerns that are created. There are others, of course, but we think that these cover a variety of types of harms, not just to particular creators whose works are used to drive these technologies, but to the broader populace.
David D. Stewart: So do you see these as harms that are coming or harms that have already arrived?
Sarah Polcz: It's a mix of both. In the case of authors whose works have been used in training datasets, some harms have already arrived: existing works that might have had broader markets are being displaced. News websites are one particular source, as are shorter works that inform the public on specific topics. Typically, people would have gone to a website for that information, and their visits drove ad-based revenue for the authors who produced it. But now, people often just go to chatbots to get answers. So there's some market displacement already.
We've seen that there have been big settlements, the $1.5 billion Anthropic author settlement that focused on about half a million authors, but there's already market impact on creators who post information and writings online and who rely on ad-based revenue generation for their livelihoods. So that's one type of harm, particularly on the data input side of AI.
The wealth concentration is occurring already. We saw OpenAI rejecting big purchase bids early on. We know that AI lets one person or a small team do the work of dozens of people. We can already see the ways in which, even with Claude Code, companies are scaling back on how many computer science grads they need to hire. Unless there's some intervention or a pause, we are heading toward a winner-take-all scenario: a handful of companies and investors accumulating vast wealth, already in place, while displacing workers and creators.
David D. Stewart: So I want to return back to a point you did mention at the beginning, but get a little bit more into it. This is a tax policy approach. Why is tax the right way to do this? Why not maybe an alternative?
Jeremy Bearer-Friend: Great question, and particularly for this audience, a whole group of tax people listening in and wondering, is tax the right tool to use here? Does tax make sense for AI?
And I'll say upfront that we don't believe that our proposal to tax generative AI firms should be the only intervention. It's not as if we shouldn't also have some regulations in this space or that we couldn't have other types of tax tools simultaneously with our proposal.
So it doesn't fully substitute, but there are reasons why our proposal is uniquely well-suited to the challenges that professor Polcz already described. When you're concerned with concentrations of wealth, tax is generally the first tool available. That's what we're discussing here: a rapid concentration of wealth in one specific industry and one subset of investors, and tax is already the tool we use to share what we have as a society. So there, it doesn't seem like much of a stretch.
The other question is the appropriate use of tax for something like algorithmic bias, and the answer is to remember that tax law has a regulatory function and in fact has consistently been used for that function. The corporate income tax was adopted in the U.S. in part to regulate corporate power and corporate managers.
And here, a tax paid with equity rather than cash, creating public ownership of a noncontrolling share of these firms, would mean that there's also a public voice in some of the decision-making, including in deciding whether or not to address the bias in the algorithms that are being sold.
David D. Stewart: All right. So let's get further into this proposal that you've laid out here. So in broad terms, what is this tax proposal you have?
Jeremy Bearer-Friend: So we argue in our article for the Columbia Journal of Tax Law that generative AI firms should be taxed, and that this tax should be paid with equity rather than cash. Under this tax, a proportion of ownership would be remitted, and that ownership interest would, of course, have value, not only as a commodity that could be traded, but also because it entails a proportionate control right. We do not propose that a controlling interest be remitted under this tax, but we do see some governance powers associated with it.
David D. Stewart: What I'm curious about here is, how do you define the tax's area of application? What counts as generative AI subject to this proposal?
Jeremy Bearer-Friend: Right. And that's such a core question, and it will be with really any tax. When you tax something, people will generally try to get out of paying it, and they'll try to get out of the defined category.
Everyone listening to this show has likely taken a federal income tax class and remembers the weeks and weeks of course material on trying to define income: what even is income, and what are the different games that people play? So here, if we're taxing AI firms, it's true, I would expect to see some firms try to get out of our definition.
In the article, we point to a number of examples. Quite helpfully, since we first drafted the work, Congress has already given its own definition of AI in some of its appropriations bills. The National Defense Authorization Act defined AI, and the California Legislature, in its attempts to regulate AI for safety purposes, has also provided a definition. So there are a number of legislatures, both federal and state, that have made an effort to define it.
David D. Stewart: So tell me about this approach of tax being collected as equity. What all does that entail, and what will that mean for companies?
Jeremy Bearer-Friend: Right. So one thing that's so appealing about this type of tax, though it might not be immediately obvious, is that this type of transaction is actually very routine. Paying tax in equity may sound surprising or revolutionary, but companies are very used to divvying up ownership interests in the firm and regularly sell shares of their company. There have been multiple centuries of progress in developing securities interests and the types of contracts that allow for a tradable interest in a company.
And so, we propose essentially piggybacking on what these firms are already doing. Instead of creating a whole new apparatus that they're not familiar with, let's piggyback on the existing ownership interests that are already out there and say, "Okay, 10 percent of all the outstanding stock should be remitted into this fund." We don't say there should be a specific rate; 10 percent is just one hypothetical. And the company could try to buy back shares and contribute the ones that are bought back, or the company could issue new shares.
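To make that arithmetic concrete, here is a small illustrative sketch (the 10 percent rate and the share counts are hypothetical, not figures from the paper) of the two remittance routes just described: buying back existing shares versus issuing new ones, where issuance has to account for its own dilution:

```python
# Illustrative math for remitting a 10 percent ownership stake to a
# public fund. All numbers are hypothetical.

def shares_via_buyback(outstanding: int, rate: float) -> int:
    """Shares the firm must buy back and contribute so the fund holds
    `rate` of the unchanged outstanding total."""
    return round(outstanding * rate)

def shares_via_issuance(outstanding: int, rate: float) -> int:
    """New shares to issue to the fund so it holds `rate` of the
    post-issuance total: solve n / (outstanding + n) = rate."""
    return round(outstanding * rate / (1 - rate))

outstanding = 1_000_000
rate = 0.10

print(shares_via_buyback(outstanding, rate))   # 100000
print(shares_via_issuance(outstanding, rate))  # 111111
```

Either route leaves the fund with a noncontrolling 10 percent stake; the difference is that issuance dilutes existing holders rather than costing the firm cash for a buyback.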
The article proposes a few options as well, because once you get into the details, it can get more complicated, so we walk through different possibilities. One is to mirror the ownership interests that already exist. Some firms are dual-class firms; some have preferred stock as well as common stock, etc. Under that approach, you just mirror the exact proportions of every type of ownership interest that exists.
Another possibility, though, is to require the generative AI being targeted to be dropped into a sub (the law would specify what type of subsidiary), and then the ownership interests of that subsidiary would be remitted to the public fund. In that case, you get some uniformity about what the sub is and what kind of ownership interest is shared. Or lastly, you could require a specific type of security interest to satisfy the tax liability and then require every type of entity to remit that same type of interest.
David D. Stewart: Would this ownership interest involve a voting right? And if so, how would decisions be made on how to exercise that voting right?
Jeremy Bearer-Friend: The whole menu of what already exists for issuing stock is available here, right? So it would be up to the legislature to decide whether to include voting rights. There have been past proposals to replace the corporate income tax with this type of tax (one published in The Washington Post, others in [Tax Law Review]), and generally those proposals call for nonvoting shares to be remitted. So that is also a design option.
In our paper, we do think that some of the governance powers are an appealing aspect of the tax and are part of the intervention that's required in this sector.
David D. Stewart: Are there historical examples of this type of public ownership through a mechanism like this?
Jeremy Bearer-Friend: Yes. In the United States, we have a number of publicly owned and managed institutional investors who hold interests in a whole range of private firms: the public pension funds. They're worth billions of dollars, and they have some transparency and some accountability to the public and to their beneficiaries.
And those funds generally also do not have controlling interest, but they exist within a thriving capitalist economy. I consider the United States to be capitalist. And here, we do have these public funds.
Globally, sovereign wealth funds also operate this way: a sovereign power owns a partial interest in private enterprise, again without some type of command-and-control economy where all businesses are run by government.
David D. Stewart: What other ideas are out there in this current environment for addressing concerns similar to yours?
Sarah Polcz: So in terms of stolen data in the copyright space, there are a few other ideas that have seen some work. One is collective compulsory licensing, essentially an adaptation of the licensing used in music and other domains. It would allow, for instance, an AI developer to get a blanket license and pay into a pool for creators. There are existing compulsory license schemes, such as mechanical licenses for music.
We have that model, but there's reason to be skeptical that it can be administered in this particular case. The way we do it in music is relatively narrow and sector-specific, and those constraints are what make it feasible.
AI, on the other hand, cuts across essentially every genre, every format, every medium. We're not just talking about, say, audio or musical compositions. That connects to what we can call a fatal identification problem: who actually would be the beneficiaries in this licensing scheme, and how would we clear their rights?
So in the case of authors whose works become part of a dataset, the affected population isn't just professional creators, like the authors who are members of the Anthropic settlement. It's everyday people who write on Reddit, YouTube, WordPress, who have their own blog. These are people who are not filing copyright registrations where we could say, "Oh, here's a webpage that was scraped and became training data; here's the author," and then find ways to bring them into the compulsory licensing scheme. We've got a massive missing-author problem at a scale that is simply unheard of with the existing models.
Just to bring this home, we don't know how many webpages even exist. I believe only Google has any sense of how many webpages actually exist. So the information's not even available to determine who should be a part of this compulsory licensing. That also would only address copyright, by the way. That wouldn't do anything for labor displacement or bias or wealth concentration. That's sort of the leading copyright response.
There's also models proposed around copying levies, which have worked in the case of home recording where there was a levy paid on, say, blank media, but it's just not really a framework that translates to a technology that scrapes the whole internet.
Jeremy Bearer-Friend: And from a tax standpoint, obviously we already have a corporate income tax. And so you could argue, well, these firms are liable for corporate income tax already; isn't that enough?
But what we've seen in this tech sector in the past, and tax folks are already familiar with this, is that if you are a loss firm — that is, if you're spending more money than you're taking in — you're really not going to have income tax liability until down the road.
And so this was the case with Facebook, now called Meta, at the beginning. This was the case with Amazon at the beginning. The tax didn't end up becoming a factor for these firms until they already became these monopolies, these huge corporate behemoths. So we could take a wait-and-see approach and decide to just rely on a tax solution down the road, or we could try to address the problem a bit more preemptively.
David D. Stewart: What do you see as the biggest roadblocks to a proposal like this being enacted?
Jeremy Bearer-Friend: Well, something that's been really encouraging to see, from when the idea first started a few years ago, through drafting and getting feedback, to now that it's out in the world, is how positive the reception has been.
I think one area where people immediately seem to appreciate the idea is from within the tech sector, because many people in that industry are already compensated with stock rather than cash. And certainly, the titans of that industry, the ones who are now billionaires, became billionaires also not because they were being paid cash, but because they were being paid stock.
So it's an industry that already appreciates there are settings where it makes sense for both parties to have a transaction in a form of property other than cash, but we've really left that off the table for the public sector, and I think that's unnecessary. So that has been encouraging.
Another area where we keep seeing our idea come up has been the current administration's decision to take equity positions in a number of firms. I'm concerned by that because I haven't seen it done in a form that is accountable to the public: it didn't move through the legislature, and it doesn't require the levels of transparency that our proposal does. But it still demonstrates the viability of a noncontrolling interest held by the public in a private firm.
David D. Stewart: Are there any constitutional concerns about going forward with something like this? Within the constitutional framework as it stands now, would this hold up to scrutiny?
Jeremy Bearer-Friend: Right. And I'm always skeptical of trying to predict the future and in particular, predict our current Supreme Court. So I won't say what a future court will do, but I am quite confident in saying what the Supreme Court has done in the past.
And I can tell you that when the Supreme Court looked at the specific question of a tax assessed and remitted in property rather than cash, the court upheld it as a tax. That case is [Leonard & Leonard v. Earle].
It's from 1929, and the underlying facts are that Maryland wanted to tax oyster packers, not in cash, but in oyster shells. The state specified the quantity of oyster shells that had to be remitted, and that the tax had to be paid in shells. One of those taxpayers litigated; they said they wanted to pay cash, not shells.
And as the case rises all the way to the Supreme Court, the Court holds that this is, in fact, a tax. It does not need to be paid in cash; it can be paid in other forms of property. The requirement is simply that, as a tax, those shells go toward the general welfare. Maryland was using the shells to reseed the Potomac so that it would stay fertile for future farmers, and so the tax satisfied the general welfare requirement.
So again, here, I would expect litigation because it's a substantial tax and these firms have resources to litigate, but I think it would satisfy that Leonard v. Earle [standard].
David D. Stewart: Would this be considered an income tax? Within the constitutional system, how would you define this?
Jeremy Bearer-Friend: That's a great question. I see it really as an excise tax, and the Constitution is quite clear Congress has broad powers to impose excise taxes, and here it's an excise tax on generative AI, and it's on the firms that own generative AI.
We also do not propose it as a recurring tax. It's not annual; it would be one-time. But if firms issue new equity interests, as they recapitalize or further divvy up ownership, then the excise would apply again.
David D. Stewart: So now that this proposal is out there, have you gotten any feedback on the fully formed idea out in the public?
Jeremy Bearer-Friend: So I mentioned briefly the tech sector response, which has been very encouraging. In that industry, this seems rather routine. And there have also been these promising developments in the news, with the new strategy of our federal government holding partial ownership interests in firms. So it increasingly seems quite viable.
I'd also say, the anxiety over AI and the frustration with wealth inequality continue to grow, and the momentum behind the billionaire tax in California, for example, points toward a solution like this as well.
And some of the appeal of our tax paid in equity rather than cash is that issuing stock is not only a routine transaction for firms, but you also avoid the valuation problems of things like the wealth taxes cropping up at the state level. There's no need to value the entire firm to decide how much tax is assessed and needs to be paid. It's just a proportion of ownership interest, and that is tracked very carefully by the board, which already has a duty to track exactly how much has been sold. So the taxable base doesn't require a valuation.
Sarah Polcz: I would add that there is a general feeling among tech workers working on foundation models that, given the uncertainty we're going to see visited upon us as generative AI permeates our lives even more, the mitigation strategy for their own lives is to accumulate capital. They'll say, "It's going to be wild. Everything good is going to happen and everything bad is going to happen," and the best way to prepare is to accumulate capital, to look out for their families, to ensure they have a cushion to ride out some of the disruption that they themselves expect.

That feeds into the rationale here, to an extent. We already know that, uninterrupted, the benefits will accrue to a small fraction of a society that is going to see a lot of labor readjustment. So I think there's an expectation that some countermeasures, including this type of tax, are going to be adopted in order to provide a buffer against that.

And I think there is acceptance that more radical rethinkings are necessary, and even an enthusiasm for them, among those who are working on the technology and who have some vision into what's coming down the road, which the rest of us, who aren't working daily on these technologies, can be a little slower to anticipate.
David D. Stewart: Well, it's a very thought-provoking proposal. And for listeners that are interested in checking it out for themselves, we'll drop a link in the show notes.
Jeremy, Sarah, thank you so much for being here.
Sarah Polcz: Thanks so much.
Jeremy Bearer-Friend: Thank you.