Our members at The New York Times Guild are currently bargaining their next collective agreement. What happens in these negotiations will impact the rest of the news industry.
On Wednesday, hundreds of workers at the Times, represented by The NewsGuild of New York, took part in an action flooding management’s inbox with emails outlining five major priorities in their contract fight. A big priority on that list: artificial intelligence, which is a critical issue in newsrooms right now, from ProPublica to the 50 unionized newsrooms at Gannett, now known as USA Today Co.
Guild members scheduled a new email to be sent at the top of each minute, beginning before 8 a.m. Eastern Time. Hundreds of emails crowded the inboxes of the Times’ top brass.
The first email was fired off by Isaac Aronow, an associate editor on the games team who serves on the Guild bargaining committee and the contract action team and is a shop steward. He’s also the co-chair of the Times’ AI subcommittee. He’s been at the Times for five years.
I spoke to Isaac by phone while the emails were piling up in executive inboxes to talk about management’s stance on artificial intelligence. You’ll see management’s response below my conversation with Isaac.
What AI tools currently exist in the Times newsroom?
I’d break it into two categories: bespoke tools we’ve built internally and external tools. The internal tools include semantic search, which we’re using to search through the Epstein files. There’s also Cheatsheet, made by my colleagues, which goes through vast swaths of data. And there’s the Manosphere Report, which was recently covered by Nieman Lab.
Management’s pushing hard for us to use a tool called Glean, which is an AI assistant. And we use Gemini for a lot of stuff.
We don’t use that much AI on games, though.
How does AI help journalists?
In some places it makes a lot of sense. And there are a lot of places where it doesn’t. Machine learning has been a huge part of our reporting for over a decade at the Times. There was a great article we did in 2023 about ships evading sanctions. My colleagues used satellite imagery and machine learning to map where ships were saying they were versus where they actually were.
We also used AI to do an investigation into the size of bombs used in Gaza based on the satellite imagery of bomb craters.
There are use cases for AI, but there are also plenty that concern me pretty heavily.
We’re not generating text and putting it on the website yet. Management is saying they won’t do that. But they could change their minds at any moment.
But there are guardrails around AI at the Times, right?
It’s something we’re fighting for in our contract. Lots of ethical protections don’t exist in our contract yet. A lot of my colleagues are uncomfortable using AI. There are plenty of people who have been doing prize-winning work for many years and haven’t touched it.
There’s also a team of reporters who are using AI to find stories to report. I think it’s here to stay. But if we don’t have protections for it, it’s very scary.
So what are y’all pushing for at the bargaining table?
Right now the company seems to want complete control of how AI is used in the newsroom. That’s what’s concerning to me.
We are asking for two things in our AI proposal. First, we want a share of the licensing income that the company earns from licensing the work that we’re doing every day for AI training.
In our current contract, if I write an article that gets licensed in Brazil, I get a percentage of that income. But now if they license the entire corpus of work, we get nothing. That’s completely unfair.
The other thing we’re pushing for is ethical protections around AI. We don’t want them to make digital simulacra, essentially digital versions of us.
I don’t want management making videos of us or an AI-generated robot Isaac talking about sudoku tips or AI generating my voice. So, we’re pushing for protections in that regard.
We also want disclosure of how AI is used in the reporting process. If text was generated, we should disclose that so readers continue to trust us. That’s the main thing. It’s a two-way street: ethical protections help us do more rigorous, high-quality journalism, and they make sure readers know a real person is talking to other real people and getting real scoops.
One thing I say a lot is that AI is not going to ask hard questions of people who need to be asked them.
That work is always going to exist.
So, the company has completely agreed to everything?
God no.
We passed our AI proposal in our first bargaining session. The company returned it in the second session, fully struck out, and replaced it with the language in the Times Tech Guild contract. That language would create a discussion committee. We’ve heard from co-workers covered by that agreement that without language mandating our agreement, and stringent enforcement, a committee doesn’t address our collective concerns.
Spoiler alert: that already exists and I’m the co-chair of it. We have a committee already and we meet somewhat regularly.
But in a later bargaining session we returned a counter after the company struck out everything. It was essentially identical to our original proposal, but we included a version of their committee language. They returned us a struck-out counter in which they included a waiver. At the table they would not admit it was a waiver, but it was.
They’re still looking for complete control. In particular, they struck out our licensing language but left in the part allowing them to sell our data for AI training. They just don’t want to give us any money for it.
That’s where we’re at now. And we’re doing actions to move the company in this area.

Are ethics important at The New York Times?
Yes. I am fortunate to work with some of the best and smartest journalists in the United States, who bring strong ethics to their work. Ethics matter a lot, particularly in games. I remind people that we have to do things like actually call people and work to the highest standard to give people the most accurate crossword puzzle, for instance. When I first started, I was told that there are people out there who think, “if they can’t get it right in the crossword puzzle, how can I trust them to get it right about the war?”
I need to do a good job and maintain fairness for our readers and solvers. If I’m thinking about it on the 7th floor, I hope for damn sure they’re thinking about it on 2, 3, and 4 in the big newsroom. It’s a constant area of discussion for my colleagues and me.
You haven’t mentioned pushing for language, like what workers have won in other contracts, to prevent AI from destroying jobs or lowering salaries. Why not?
We already have similar language, like strong automation and subcontracting protections, in our contract. That’s the reason a lot of people on the sports desk still have jobs after the company argued it was subcontracting their work out to The Athletic.
Under our contract, it’s very hard for the company to lay people off due to subcontracting. There are protections and guardrails against subcontracting-related layoffs.
And we’re protected if they try to automate us out of a job with AI.
Why is it important for New York Times workers to get good AI protections?
The New York Times is the leader in many areas of journalism, and management gives the impression that it hopes to be a leader on AI. By winning these protections, we set a precedent that newsrooms across the continent can use to create better working conditions for journalists everywhere. That’s what’s motivating me through these negotiations and why we’re pushing hard.
Everyone should get paid for their work if it’s getting syndicated to another publication or an AI company.
We need to get protections in our contract now before these AI companies get even bigger. AI companies need high-quality training data and that’s becoming more and more important. It’s easier now while these companies are still small.
We’ve talked a lot about AI, but what else are you pushing for in these negotiations?
Jurisdiction is also big. AI and jurisdiction are interlinked. Keeping union work in the union is a key priority that we’re working on. It’s widely known we’re working on bringing our colleagues from The Athletic into the union. It’s the subcontracting issue we talked about. Keeping people’s jobs, and having a voice in our workplace about issues that affect our work, matters to us.
I reached out on Friday afternoon to interview the top managing editors at the Times, but neither Marc Lacey nor Carolyn Ryan was available to talk. Instead, Lacey answered these two questions over email and pointed out that in the last round of negotiations, management had agreed not to make digital replicas of workers without permission.
Why is New York Times management refusing to agree to ethical safeguards on artificial intelligence in negotiations when Times journalists are asking for those safeguards?
We’re in favor of safeguards and have provisions in place that provide industry-leading protections to ensure A.I. tools are used ethically and with transparency, while leaving flexibility to iterate as the technology evolves. Where we have a difference in approach is that we believe it’s imprudent to focus on adding restrictions into a contract given the rapid and constantly changing nature of this technology and its use cases.
Our A.I. principles are publicly available here.
If The New York Times is entering deals with major AI companies (like Amazon in 2025), why is it refusing to compensate the journalists and workers responsible for producing the news now being used to train large language models?
Our journalists are among the highest paid in the industry and we structure salaries and compensation to reward them for their contributions to the company’s success.
Our business has long relied on licensing deals for revenue, which allows us to invest in our newsroom and the journalism. We’ve added hundreds of roles in recent years while continuing to cover the world and report from 150 countries and all 50 states each year.
